SAN DIEGO – Agreement between the Children's Interview for Psychiatric Syndromes and the Schedule for Affective Disorders and Schizophrenia for School-Age Children ranges from 66% to 90%. But the ChIPS instrument is more sensitive than the K-SADS in detecting psychopathology, results of a comparative study show.
The finding marks the first independent evaluation of the DSM-IV version of the ChIPS, Dr. Jeffrey I. Hunt said in an interview during a poster session at the annual meeting of the American Academy of Child and Adolescent Psychiatry.
“We've been using the ChIPS for the last 4 years, but we thought we needed to make sure that it was valid compared to what we think the gold standard is: the K-SADS,” said Dr. Hunt, of the department of psychiatry and human behavior at Brown University, Providence, R.I. “We had hoped that the ChIPS was as valid as the K-SADS. We found that the ChIPS is a bit more sensitive. It picks up more diagnoses than the K-SADS, and it may be overdiagnosing somewhat.”
He and his associates administered the ChIPS and the K-SADS to 100 psychiatric inpatients aged 12–18 years who were enrolled in a study exploring the cognitive risk factors for suicidality. The mean age of the patients was 15 years, and 73% were female. Most (83%) were white.
The researchers reported that the percentage of agreement between the two diagnostic tools ranged from 66% to 90%, but they described the kappa agreement as “small to moderate.” They also noted that the mean number of diagnoses endorsed on the ChIPS was 4.5, compared with a mean of 3.15 on the K-SADS, a statistically significant difference.
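High raw agreement alongside only “small to moderate” kappa is not contradictory: Cohen's kappa discounts the agreement expected by chance, which is substantial when a diagnosis is either very common or very rare in the sample. The sketch below illustrates the arithmetic with made-up numbers chosen only for illustration; they are not the study's data.

```python
def cohens_kappa(a, b, c, d):
    """Cohen's kappa from a 2x2 agreement table:
    a = both raters positive, b = rater 1 only,
    c = rater 2 only, d = both raters negative."""
    n = a + b + c + d
    p_o = (a + d) / n                        # observed agreement
    p_yes = ((a + b) / n) * ((a + c) / n)    # chance both endorse
    p_no = ((c + d) / n) * ((b + d) / n)     # chance both reject
    p_e = p_yes + p_no                       # total chance agreement
    return (p_o - p_e) / (1 - p_e)

# Hypothetical: 100 patients, raters agree on 90 of them,
# but the diagnosis has a ~90% base rate in this sample.
kappa = cohens_kappa(a=85, b=5, c=5, d=5)
print(round(kappa, 2))  # 0.44 despite 90% raw agreement
```

Because chance agreement is already 0.82 at this base rate, 90% observed agreement translates to a kappa of only about 0.44, conventionally read as moderate.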
“Because the ChIPS appears to be more sensitive and not necessarily highly specific in its diagnostic categories, it seems that the ChIPS may be better suited as a screening measure, for use in ruling out diagnoses, rather than as a diagnostic instrument,” the researchers wrote in their poster.
They wrote that further studies with larger samples should be conducted to determine whether the ChIPS is reliable against other diagnostic measures. In addition, the investigators said, comparisons of ChIPS-derived diagnoses with scores obtained from self-report instruments or checklists are needed to investigate the divergent validity of the interview.
“In the meantime, clinicians should be aware of the sensitivity of the ChIPS in diagnosis, and use it cautiously,” they wrote.