Benchmarking classification models for software defect prediction: A proposed framework and novel findings

Cited by: 767
Authors
Lessmann, Stefan [1 ]
Baesens, Bart [2 ]
Mues, Christophe [3 ]
Pietsch, Swantje [1 ]
Affiliations
[1] Univ Hamburg, Inst Informat Syst, D-20146 Hamburg, Germany
[2] Katholieke Univ Leuven, Dept Appl Econ Sci, B-3000 Louvain, Belgium
[3] Univ Southampton, Sch Management, Southampton SO17 1BJ, Hants, England
Keywords
complexity measures; data mining; formal methods; statistical methods; software defect prediction
DOI
10.1109/TSE.2008.35
Chinese Library Classification (CLC)
TP31 [Computer software]
Discipline classification codes
081202; 0835
Abstract
Software defect prediction strives to improve software quality and testing efficiency by constructing predictive classification models from code attributes to enable a timely identification of fault-prone modules. Several classification models have been evaluated for this task. However, due to inconsistent findings regarding the superiority of one classifier over another and the usefulness of metric-based classification in general, more research is needed to improve convergence across studies and further advance confidence in experimental results. We consider three potential sources for bias: comparing classifiers over one or a small number of proprietary data sets, relying on accuracy indicators that are conceptually inappropriate for software defect prediction and cross-study comparisons, and, finally, limited use of statistical testing procedures to secure empirical findings. To remedy these problems, a framework for comparative software defect prediction experiments is proposed and applied in a large-scale empirical comparison of 22 classifiers over 10 public domain data sets from the NASA Metrics Data repository. Overall, an appealing degree of predictive accuracy is observed, which supports the view that metric-based classification is useful. However, our results indicate that the importance of the particular classification algorithm may be less than previously assumed since no significant performance differences could be detected among the top 17 classifiers.
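The benchmarking procedure the abstract outlines — compare many classifiers on public data sets with a threshold-independent accuracy indicator (the paper uses the AUC) and check the resulting ranks with a statistical test — can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' original experimental setup: it assumes scikit-learn and SciPy are available, substitutes a synthetic data set for the NASA Metrics Data repository, compares only four of the 22 classifiers, and uses fold-wise AUC values as the blocks for a Friedman test.

```python
# Minimal sketch of a classifier benchmark in the spirit of the paper:
# compare several classifiers by cross-validated AUC and test whether
# the observed rank differences are statistically significant.
# Assumptions: scikit-learn and SciPy installed; a synthetic data set
# stands in for the NASA Metrics Data repository modules.
import numpy as np
from scipy.stats import friedmanchisquare
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

# Stand-in for one defect data set: rows are modules described by code
# attributes, the label marks fault-prone modules (imbalanced classes).
X, y = make_classification(n_samples=1000, n_features=20, weights=[0.85],
                           random_state=0)

classifiers = {
    "LogReg": LogisticRegression(max_iter=1000),
    "NaiveBayes": GaussianNB(),
    "k-NN": KNeighborsClassifier(n_neighbors=5),
    "RandomForest": RandomForestClassifier(n_estimators=200, random_state=0),
}

# AUC is threshold independent, which is why the paper prefers it over
# accuracy-style indicators for defect prediction.
auc_per_clf = {}
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=10, scoring="roc_auc")
    auc_per_clf[name] = scores
    print(f"{name:>12}: mean AUC = {scores.mean():.3f}")

# Friedman test over the paired AUC values (here per fold; in the paper,
# per data set) checks whether any classifier ranks significantly better.
stat, p_value = friedmanchisquare(*auc_per_clf.values())
print(f"Friedman chi-square = {stat:.2f}, p = {p_value:.3f}")
```

In the paper, a significant Friedman test is followed by a Nemenyi post-hoc comparison to identify which classifiers actually differ; that step is omitted here for brevity.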
Pages: 485-496
Page count: 12