The validation of (advanced) bibliometric indicators through peer assessments: A comparative study using data from InCites and F1000

Cited by: 88
Authors
Bornmann, Lutz [1 ]
Leydesdorff, Loet [2 ]
Affiliations
[1] Adm Headquarters Max Planck Soc, Div Sci & Innovat Studies, D-80539 Munich, Germany
[2] Univ Amsterdam, Amsterdam Sch Commun Res ASCoR, NL-1012 CX Amsterdam, Netherlands
Keywords
Advanced bibliometric indicators; Peer review; F1000; InCites; IMPACT; QUALITY; FACULTY;
DOI
10.1016/j.joi.2012.12.003
Chinese Library Classification (CLC)
TP39 [Computer Applications];
Discipline classification code
081203; 0835;
Abstract
The data from F1000 and InCites provide us with a unique opportunity to investigate the relationship between peers' ratings and bibliometric indicators on a broad and comprehensive data set with high-quality ratings. F1000 is a post-publication peer review system for the biomedical literature. The comparison of metrics with peer evaluation has been widely acknowledged as a way of validating metrics. Based on the seven indicators offered by InCites, we analyzed the validity of raw citation counts (Times Cited, 2nd Generation Citations, and 2nd Generation Citations per Citing Document), normalized indicators (Journal Actual/Expected Citations, Category Actual/Expected Citations, and Percentile in Subject Area), and a journal-based indicator (Journal Impact Factor). The data set consists of 125 papers published in 2008 in the subject categories cell biology and immunology. As the results show, Percentile in Subject Area achieves the highest correlation with the F1000 ratings; for three further indicators (Times Cited, 2nd Generation Citations, and Category Actual/Expected Citations) we can assert that the "true" correlation with the ratings reaches at least a medium effect size. (c) 2012 Elsevier Ltd. All rights reserved.
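The type of analysis described in the abstract can be illustrated with a short, hypothetical Python sketch: it computes Spearman's rank correlation between F1000-style ratings and a citation indicator and checks, via a one-sided lower confidence bound, whether the "true" correlation reaches at least a medium effect size (r = .30 in Cohen's convention). The variable names and synthetic data are placeholders for illustration only; this is not the authors' analysis code or data.

    import numpy as np
    from scipy.stats import spearmanr

    def correlation_with_lower_bound(ratings, indicator):
        """Spearman correlation plus a one-sided 95% lower confidence bound.

        The bound uses the Fisher z transform with the 1.06/sqrt(n-3)
        standard error often used as an approximation for Spearman's rho.
        """
        rho, p = spearmanr(ratings, indicator)
        n = len(ratings)
        z_lo = np.arctanh(rho) - 1.6449 * 1.06 / np.sqrt(n - 3)
        return rho, p, np.tanh(z_lo)

    # Synthetic stand-in for the 125 papers: ratings on a 1-3 scale and a
    # loosely related citation count (placeholder values, not real data).
    rng = np.random.default_rng(0)
    ratings = rng.integers(1, 4, size=125)
    times_cited = ratings * 10 + rng.poisson(20, 125)

    rho, p, lo = correlation_with_lower_bound(ratings, times_cited)
    # Cohen's conventions: r = .30 is a medium, r = .50 a large effect size.
    print(f"rho={rho:.2f}, p={p:.3g}, lower bound={lo:.2f}, "
          f"at least medium effect: {lo >= 0.3}")

In this framing, an indicator is judged valid to the extent that even the lower bound of its correlation with the peer ratings clears the medium-effect threshold, which mirrors the "at least a medium effect size" claim in the abstract.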
Pages: 286-291
Number of pages: 6