Improving participation and interrater agreement in scoring Ambulatory Pediatric Association abstracts: How well have we succeeded?

Cited by: 12
Authors
Kemper, KJ
McCarthy, PL
Cicchetti, DV
Affiliations
[1] UNIV WASHINGTON,SEATTLE,WA 98195
[2] YALE UNIV,SCH MED,W HAVEN,CT 06516
[3] VET AFFAIRS MED CTR,SEATTLE,WA 98108
[4] SWEDISH MED CTR,SEATTLE,WA
Source
ARCHIVES OF PEDIATRICS & ADOLESCENT MEDICINE | 1996, Vol. 150, No. 04
DOI
10.1001/archpedi.1996.02170290046007
Chinese Library Classification
R72 [Pediatrics];
Discipline code
100202
Abstract
Objective: To determine whether increasing the number and types of raters affected interrater agreement in scoring abstracts submitted to the Ambulatory Pediatric Association. Methods: In 1990, all abstracts were rated by each of the 11 members of the board of directors of the Ambulatory Pediatric Association. In 1995, each abstract was reviewed by four to five raters drawn from a pool of 20 potential reviewers: eight members of the board of directors, two chairpersons of special interest groups, and 10 regional chairpersons. Submissions were divided into three categories for review: emergency medicine, behavioral pediatrics, and general pediatrics. Weighted percentage agreement and weighted kappa scores were computed for the 1990 and 1995 abstract scores. Results: Between 1990 and 1995, the number of abstracts submitted to the Ambulatory Pediatric Association increased from 246 to 407 and the number of reviewers increased from 11 to 20; the weighted percentage agreement between raters remained approximately 79%, and weighted kappa scores remained less than 0.25. Agreement was not significantly better for the emergency medicine and behavioral pediatrics abstracts than for the general pediatrics abstracts, nor was it better for raters who reviewed fewer abstracts than for those who reviewed many. Conclusions: The number and expertise of those rating abstracts increased from 1990 to 1995. However, interrater agreement did not change and remained low. Further efforts are needed to improve interrater agreement.
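The agreement statistics reported above rest on the weighted kappa coefficient, which discounts chance agreement and penalizes disagreements by their distance on the rating scale. As an illustration only (the paper does not publish its computation, and the exact weighting scheme used is not stated here), a minimal sketch of Cohen's linearly weighted kappa for two raters might look like:

```python
def weighted_kappa(ratings1, ratings2, categories):
    """Cohen's weighted kappa for two raters, linear disagreement weights.

    ratings1, ratings2 -- parallel lists of scores from the two raters
    categories         -- ordered list of possible score values
    """
    k = len(categories)
    idx = {c: i for i, c in enumerate(categories)}
    n = len(ratings1)

    # Observed joint proportions: obs[i][j] = share of items that
    # rater 1 scored as category i and rater 2 scored as category j.
    obs = [[0.0] * k for _ in range(k)]
    for a, b in zip(ratings1, ratings2):
        obs[idx[a]][idx[b]] += 1 / n

    # Marginal proportions for each rater.
    p1 = [sum(obs[i][j] for j in range(k)) for i in range(k)]
    p2 = [sum(obs[i][j] for i in range(k)) for j in range(k)]

    # Linear disagreement weights: 0 on the diagonal, growing with
    # the distance between the two assigned categories.
    w = [[abs(i - j) / (k - 1) for j in range(k)] for i in range(k)]

    observed = sum(w[i][j] * obs[i][j] for i in range(k) for j in range(k))
    expected = sum(w[i][j] * p1[i] * p2[j] for i in range(k) for j in range(k))
    return 1 - observed / expected
```

For example, two raters scoring five abstracts on a 1-3 scale and disagreeing (by one point) on a single abstract yield a kappa well above the ~0.25 ceiling the study reports, which conveys how low the observed agreement was once chance is discounted.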
Pages: 380-383 (4 pages)