A computer program for assessing interexaminer agreement when multiple ratings are made on a single subject

Cited by: 10
Authors
Cicchetti, DV
Showalter, D
Affiliations
[1] Yale Univ, Sch Med, Ctr Child Study, New Haven, CT 06510
[2] NE Program Evaluat Ctr, W Haven, CT 06516
Keywords
statistics; psychometrics; reliability
DOI
10.1016/S0165-1781(97)00093-0
Chinese Library Classification
R749 [Psychiatry]
Discipline code
100205
Abstract
This report describes a computer program for applying a new statistical method for determining levels of agreement, or reliability, when multiple examiners evaluate a single subject. The statistics that are performed include the following: an overall level of agreement, expressed as a percentage, that takes into account all possible levels of partial agreement; the same statistical approach for deriving a separate level of agreement of every examiner with every other examiner; and tests of the extent to which a given examiner's rating (say a symptom score of three on a five-category ordinal rating scale) deviates from the group or overall average rating. These deviation scores are interpreted as standard Z statistics. Finally, both statistical and clinical criteria are provided to evaluate levels of interexaminer agreement. (C) 1997 Elsevier Science Ireland Ltd.
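The abstract does not specify the exact partial-agreement weights or the software used in the original program, so the following Python sketch only illustrates the kind of computation described: a partial-credit overall agreement percentage, pairwise examiner-to-examiner agreement, and each examiner's deviation from the group mean expressed as a Z score. The linear partial-credit weight and the sample standard deviation are assumptions made here for illustration, not the authors' published procedure.

```python
# Illustrative sketch (assumed weighting, not the original program's method):
# several examiners each give one ordinal rating (e.g. 1-5) to a single subject.

from itertools import combinations
import statistics


def pairwise_agreement(r1, r2, n_categories):
    """Partial-credit agreement between two ordinal ratings:
    1.0 for exact agreement, falling linearly as the discrepancy grows."""
    return 1.0 - abs(r1 - r2) / (n_categories - 1)


def overall_agreement(ratings, n_categories):
    """Mean partial-credit agreement over all examiner pairs, as a percentage."""
    pairs = list(combinations(ratings, 2))
    return 100.0 * sum(pairwise_agreement(a, b, n_categories) for a, b in pairs) / len(pairs)


def examiner_z_scores(ratings):
    """Each examiner's deviation from the group mean rating,
    expressed as a Z statistic using the sample standard deviation."""
    mean = statistics.mean(ratings)
    sd = statistics.stdev(ratings)
    return [(r - mean) / sd for r in ratings]


if __name__ == "__main__":
    # Five examiners rate one subject on a five-category ordinal symptom scale.
    ratings = [3, 3, 4, 2, 3]
    print(f"Overall agreement: {overall_agreement(ratings, 5):.1f}%")
    for i, j in combinations(range(len(ratings)), 2):
        a = pairwise_agreement(ratings[i], ratings[j], 5)
        print(f"Examiner {i + 1} vs {j + 1}: {100 * a:.0f}%")
    for i, z in enumerate(examiner_z_scores(ratings), start=1):
        print(f"Examiner {i}: z = {z:+.2f}")
```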
Pages: 65-68
Page count: 4