Reviewer agreement trends from four years of electronic submissions of conference abstract

Cited by: 15
Authors
Rowe B.H. [1 ,2 ]
Strome T.L. [3 ]
Spooner C. [1 ]
Blitz S. [1 ]
Grafstein E. [4 ]
Worster A. [5 ]
Affiliations
[1] Department of Emergency Medicine, University of Alberta, Edmonton, Alta.
[2] Department of Public Health Sciences, University of Alberta, Edmonton, Alta.
[3] E-health Services, Winnipeg Regional Health Authority, Winnipeg, Man.
[4] Department of Emergency Medicine, St. Paul's Hospital, Providence Health Group, Vancouver, BC
[5] Department of Emergency Medicine, Hamilton Health Sciences, McMaster University, Hamilton, Ont.
Keywords
Intraclass Correlation Coefficient; Individual Criterion; Abstract Submission; Intraclass Correlation Coefficient Score; Review Agreement;
DOI
10.1186/1471-2288-6-14
Abstract
Background: The purpose of this study was to determine the inter-rater agreement between reviewers on the quality of abstract submissions to an annual national scientific meeting (Canadian Association of Emergency Physicians; CAEP) and to identify factors associated with low agreement. Methods: All abstracts were submitted using an on-line system and assessed by three volunteer CAEP reviewers blinded to the abstracts' source. Reviewers used an on-line form specific to each type of study design to score abstracts on nine criteria, each contributing from two to six points toward the total (maximum 24). The final score was the mean of the three reviewers' scores; inter-rater agreement was assessed using the intraclass correlation coefficient (ICC). Results: 495 abstracts were received electronically during the four-year period 2001-2004, increasing from 94 abstracts in 2001 to 165 in 2004. The mean score for submitted abstracts over the four years was 14.4 (95% CI: 14.1-14.6). While there was no significant difference between mean total scores over the four years (p = 0.23), the ICC increased from fair (0.36; 95% CI: 0.24-0.49) to moderate (0.59; 95% CI: 0.50-0.68). Reviewers agreed less on individual criteria than on the total score in general, and less on subjective than on objective criteria. Conclusion: The correlation between reviewers' total scores suggests general recognition of "high quality" and "low quality" abstracts. Criteria based on the presence or absence of objective methodological parameters (e.g., blinding in a controlled clinical trial) yielded higher inter-rater agreement than the more subjective, opinion-based criteria. In future abstract competitions, defining criteria more objectively so that reviewers can base their responses on empirical evidence may lead to increased consistency of scoring and, presumably, increased fairness to submitters. © 2006 Rowe et al; licensee BioMed Central Ltd.
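The agreement statistic reported in the abstract can be reproduced from a subjects × raters matrix of scores. As a minimal sketch in Python with NumPy, assuming a two-way random-effects, absolute-agreement single-rater model (ICC(2,1)) — the record itself does not specify which ICC variant the authors used:

```python
import numpy as np

def icc2_1(scores):
    """ICC(2,1): two-way random-effects, absolute-agreement,
    single-rater ICC for an (n_subjects x k_raters) matrix."""
    scores = np.asarray(scores, dtype=float)
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)   # one mean per abstract
    col_means = scores.mean(axis=0)   # one mean per reviewer
    # Mean squares from a two-way ANOVA without replication.
    msr = k * ((row_means - grand) ** 2).sum() / (n - 1)   # between subjects
    msc = n * ((col_means - grand) ** 2).sum() / (k - 1)   # between raters
    resid = scores - row_means[:, None] - col_means[None, :] + grand
    mse = (resid ** 2).sum() / ((n - 1) * (k - 1))         # residual error
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical example: four abstracts, each scored by three reviewers
# on the 24-point scale described in the abstract.
ratings = [[14, 15, 13],
           [20, 19, 21],
           [8, 9, 10],
           [16, 14, 15]]
print(round(icc2_1(ratings), 3))
```

When the three reviewers agree perfectly, the function returns 1.0; increasing disagreement pulls the value down toward the "fair" and "moderate" ranges cited in the Results.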