Breast imaging reporting and data system: Inter- and intraobserver variability in feature analysis and final assessment

Cited by: 338
Authors
Berg, WA
Campassi, C
Langenberg, P
Sexton, MJ
Affiliations
[1] Univ Maryland, Sch Med, Dept Radiol, Baltimore, MD 21201 USA
[2] Univ Maryland, Sch Med, Greenebaum Canc Ctr, Baltimore, MD 21201 USA
[3] Univ Maryland, Sch Med, Dept Epidemiol & Prevent Med, Baltimore, MD 21201 USA
DOI
10.2214/ajr.174.6.1741769
CLC Number
R8 [Special Medicine]; R445 [Diagnostic Imaging];
Subject Classification Code
1002; 100207; 1009;
Abstract
OBJECTIVE. We sought to evaluate the use of the Breast Imaging Reporting and Data System (BI-RADS) standardized mammography lexicon among and within observers and to distinguish variability in feature analysis from variability in lesion management. MATERIALS AND METHODS. Five experienced mammographers, not specifically trained in BI-RADS, used the lexicon to describe and assess 103 screening mammograms, including 30 (29%) showing cancer, and a subset of 86 mammograms with diagnostic evaluation, including 23 (27%) showing cancer. A subset of 13 screening mammograms (two with malignant findings, 11 with diagnostic evaluation) was rereviewed by each observer 2 months later. Kappa statistics were calculated as measures of agreement beyond chance. RESULTS. After diagnostic evaluation, the interobserver kappa values for describing features were as follows: breast density, 0.43; lesion type, 0.75; mass borders, 0.40; special cases, 0.56; mass density, 0.40; mass shape, 0.28; microcalcification morphology, 0.36; and microcalcification distribution, 0.47. Lesion management was highly variable, with a kappa value for final assessment of 0.37. When we grouped assessments recommending immediate additional evaluation and biopsy (BI-RADS categories 0, 4, and 5 combined) versus follow-up (categories 1, 2, and 3 combined), five observers agreed on management for only 47 (55%) of 86 lesions. Intraobserver agreement on management (additional evaluation or biopsy versus follow-up) was seen in 47 (85%) of 55 interpretations, with a kappa value of 0.35-1.0 (mean, 0.60) for final assessment. CONCLUSION. Inter- and intraobserver variability in mammographic interpretation is substantial for both feature analysis and management. Continued development of methods to improve standardization in mammographic interpretation is needed.
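The abstract's agreement statistic, Cohen's kappa, corrects observed agreement for the agreement expected by chance from each rater's category frequencies. As an illustration only (the ratings below are hypothetical, not data from this study), a minimal from-scratch computation for two raters:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement and
    p_e is the chance agreement implied by each rater's marginal frequencies.
    """
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items both raters labeled identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of marginal category probabilities, summed.
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical BI-RADS final assessments from two readers on ten lesions.
reader_1 = [1, 2, 3, 4, 5, 1, 2, 3, 4, 5]
reader_2 = [1, 2, 3, 4, 4, 1, 2, 2, 4, 5]
print(round(cohens_kappa(reader_1, reader_2), 3))  # → 0.75
```

Here the readers agree on 8 of 10 lesions (p_o = 0.8) while chance predicts 0.2, giving kappa = 0.75; a kappa of 0.37, as reported for final assessment, indicates far weaker agreement beyond chance.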
Pages: 1769-1777
Page count: 9
Related Papers
30 records
[1]  
American College of Radiology, 1995, BREAST IM REP DAT SY
[2]  
American College of Radiology, 1998, ILL BREAST IM REP DA
[3]   Breast imaging reporting and data system standardized mammography lexicon: Observer variability in lesion description [J].
Baker, JA ;
Kornguth, PJ ;
Floyd, CE .
AMERICAN JOURNAL OF ROENTGENOLOGY, 1996, 166 (04) :773-778
[4]  
BAKER JA, 1995, RADIOLOGY, V196, P818
[5]   Variability in the interpretation of screening mammograms by US radiologists - Findings from a national sample [J].
Beam, CA ;
Layde, PM ;
Sullivan, DC .
ARCHIVES OF INTERNAL MEDICINE, 1996, 156 (02) :209-213
[6]   OBSERVER VARIATION IN THE CLASSIFICATION OF MAMMOGRAPHIC PARENCHYMAL PATTERNS [J].
BOYD, NF ;
WOLFSON, C ;
MOSKOWITZ, M ;
CARLILE, T ;
PETITCLERC, C ;
FERRI, HA ;
FISHELL, E ;
GREGOIRE, A ;
KIERNAN, M ;
LONGLEY, JD ;
SIMOR, IS ;
MILLER, AB .
JOURNAL OF CHRONIC DISEASES, 1986, 39 (06) :465-472
[7]  
BOYD NF, 1982, J NATL CANCER I, V68, P357
[8]   A COEFFICIENT OF AGREEMENT FOR NOMINAL SCALES [J].
COHEN, J .
EDUCATIONAL AND PSYCHOLOGICAL MEASUREMENT, 1960, 20 (01) :37-46
[9]  
D'ORSI CJ, 1995, NEW ENGL J MED, V332, P1172
[10]   MAMMOGRAPHIC FEATURE ANALYSIS [J].
D'ORSI, CJ ;
KOPANS, DB .
SEMINARS IN ROENTGENOLOGY, 1993, 28 (03) :204-230