Looking for Landmarks: The Role of Expert Review and Bibliometric Analysis in Evaluating Scientific Publication Outputs

Cited by: 111
Authors
Allen, Liz
Jones, Ceri
Dolby, Kevin
Lynn, David
Walport, Mark
Affiliation
[1] Wellcome Trust, London
Source
PLOS ONE | 2009 / Vol. 4 / Issue 6
Funding
Wellcome Trust (UK); National Institutes of Health (US);
DOI
10.1371/journal.pone.0005910
CLC (Chinese Library Classification)
O [Mathematical sciences and chemistry]; P [Astronomy and earth sciences]; Q [Biological sciences]; N [General natural sciences];
Subject Classification Codes
07; 0710; 09;
Abstract
Objective: To compare expert assessment with bibliometric indicators as tools for assessing the quality and importance of scientific research papers. Methods and Materials: Shortly after their publication in 2005, the quality and importance of a cohort of nearly 700 Wellcome Trust (WT)-associated research papers were assessed by expert reviewers; each paper was reviewed by two WT expert reviewers. After 3 years, we compared this initial assessment with other measures of paper impact. Results: Shortly after publication, 62 (9%) of the 687 research papers were determined to describe at least a 'major addition to knowledge'; 6 were thought to be 'landmark' papers. At an aggregate level, after 3 years, there was a strong positive association between expert assessment and impact as measured by number of citations and F1000 rating. However, there were some important exceptions, indicating that bibliometric measures may not be sufficient in isolation as measures of research quality and importance, especially for assessing single papers or small groups of research publications. Conclusion: When attempting to assess the quality and importance of research papers, we found that sole reliance on bibliometric indicators would have led us to miss papers containing important results as judged by expert review. In particular, some papers that were highly rated by experts were not highly cited during the first three years after publication. Tools that link expert peer reviews of research paper quality and importance to more quantitative indicators, such as citation analysis, would be valuable additions to the field of research assessment and evaluation.
Pages: 8