Benchmarking classification models for software defect prediction: A proposed framework and novel findings

Cited by: 767
Authors
Lessmann, Stefan [1 ]
Baesens, Bart [2 ]
Mues, Christophe [3 ]
Pietsch, Swantje [1 ]
Affiliations
[1] Univ Hamburg, Inst Informat Syst, D-20146 Hamburg, Germany
[2] Katholieke Univ Leuven, Dept Appl Econ Sci, B-3000 Louvain, Belgium
[3] Univ Southampton, Sch Management, Southampton SO17 1BJ, Hants, England
Keywords
complexity measures; data mining; formal methods; statistical methods; software defect prediction;
DOI
10.1109/TSE.2008.35
Chinese Library Classification (CLC)
TP31 [Computer Software];
Discipline Classification Codes
081202; 0835;
Abstract
Software defect prediction strives to improve software quality and testing efficiency by constructing predictive classification models from code attributes to enable a timely identification of fault-prone modules. Several classification models have been evaluated for this task. However, due to inconsistent findings regarding the superiority of one classifier over another and the usefulness of metric-based classification in general, more research is needed to improve convergence across studies and further advance confidence in experimental results. We consider three potential sources for bias: comparing classifiers over one or a small number of proprietary data sets, relying on accuracy indicators that are conceptually inappropriate for software defect prediction and cross-study comparisons, and, finally, limited use of statistical testing procedures to secure empirical findings. To remedy these problems, a framework for comparative software defect prediction experiments is proposed and applied in a large-scale empirical comparison of 22 classifiers over 10 public domain data sets from the NASA Metrics Data repository. Overall, an appealing degree of predictive accuracy is observed, which supports the view that metric-based classification is useful. However, our results indicate that the importance of the particular classification algorithm may be less than previously assumed since no significant performance differences could be detected among the top 17 classifiers.
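The benchmarking methodology the abstract summarizes — scoring each classifier by the area under the ROC curve (AUC) on every data set, then comparing classifiers by their ranks across data sets before applying statistical tests — can be sketched as follows. This is an illustrative sketch only, not the paper's code: the data, classifier names, and the simple rank aggregation below are stand-ins, and tied AUC values are left to the sort order rather than given mean ranks.

```python
# Illustrative sketch of rank-based classifier benchmarking over AUC scores.

def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic.
    `scores` are predicted probabilities; `labels` are 0/1 ground truth."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    # Fraction of positive/negative pairs ranked correctly (ties count 0.5).
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def average_ranks(auc_table):
    """auc_table[classifier] = list of AUC values, one per data set.
    Returns each classifier's rank averaged over data sets (1 = best),
    the quantity a Friedman-style comparison is computed from."""
    names = list(auc_table)
    n_sets = len(next(iter(auc_table.values())))
    totals = {name: 0.0 for name in names}
    for d in range(n_sets):
        # Rank classifiers on data set d by descending AUC.
        for rank, name in enumerate(
                sorted(names, key=lambda n: -auc_table[n][d]), start=1):
            totals[name] += rank
    return {name: total / n_sets for name, total in totals.items()}

# Hypothetical AUC results for three classifiers on three data sets.
table = {"RndFor": [0.90, 0.85, 0.88],
         "LogReg": [0.88, 0.86, 0.84],
         "NB":     [0.80, 0.79, 0.81]}
print(average_ranks(table))  # NB is worst on every set: average rank 3.0
```

In the paper's setting the table would hold 22 classifiers by 10 NASA data sets, and closely spaced average ranks (as found for the top 17 classifiers) are exactly what a post-hoc test then fails to separate.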
Pages: 485-496
Page count: 12