Resource oriented selection of rule-based classification models: An empirical case study

Cited by: 3
Authors
Khoshgoftaar, Taghi M.
Herzberg, Angela
Seliya, Naeem
Affiliations
[1] Florida Atlantic Univ, Boca Raton, FL 33431 USA
[2] Univ Michigan, Dearborn, MI 48128 USA
Keywords
software metrics; rule-based classification model; resource-based software development; software quality; modified expected cost of misclassification
DOI
10.1007/s11219-006-0038-1
CLC number
TP31 [Computer software]
Discipline codes
081202; 0835
Abstract
The amount of resources allocated for software quality improvements is often not enough to achieve the desired software quality. Software quality classification models that yield a risk-based quality estimation of program modules, such as fault-prone (fp) and not fault-prone (nfp), are useful software quality assurance techniques. Their usefulness depends largely on whether enough resources are available for inspecting the fp modules. Since a given development project has its own budget and time limitations, a resource-based approach to software quality improvement seems more appropriate for achieving its quality goals. A classification model should provide quality improvement guidance that maximizes resource utilization. We present a procedure for building software quality classification models from the limited-resources perspective. The essence of the procedure is the use of our recently proposed Modified Expected Cost of Misclassification (MECM) measure for developing resource-oriented software quality classification models. The measure penalizes a model, in terms of costs of misclassifications, if the model predicts more fp modules than can be inspected with the allotted resources. Our analysis is presented in the context of our Rule-Based Classification Modeling (RBCM) technique. An empirical case study of a large-scale software system demonstrates the promising results of using the MECM measure to select an appropriate resource-based rule-based classification model.
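The abstract's core idea, a misclassification-cost measure that also penalizes fp predictions exceeding the inspection budget, can be illustrated with a minimal sketch. The function name, the per-class costs, and the exact penalty form (charging each over-budget fp prediction at the Type II cost, since it cannot be acted on) are illustrative assumptions, not the paper's exact MECM definition.

```python
def modified_expected_cost(y_true, y_pred, c1, c2, budget):
    """Resource-penalized average misclassification cost (illustrative).

    y_true, y_pred: sequences of 'fp' / 'nfp' labels
    c1: cost of a Type I error (nfp module predicted fp)
    c2: cost of a Type II error (fp module predicted nfp)
    budget: number of modules that can actually be inspected
    """
    n = len(y_true)
    type1 = sum(1 for t, p in zip(y_true, y_pred) if t == 'nfp' and p == 'fp')
    type2 = sum(1 for t, p in zip(y_true, y_pred) if t == 'fp' and p == 'nfp')
    predicted_fp = sum(1 for p in y_pred if p == 'fp')
    # Assumed penalty: fp predictions beyond the inspection budget cannot
    # be inspected, so each excess prediction is charged like a Type II miss.
    excess = max(0, predicted_fp - budget)
    return (c1 * type1 + c2 * type2 + c2 * excess) / n
```

Under this sketch, two models with identical confusion matrices can score differently once the budget is taken into account, which is the selection behavior the abstract describes.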
Pages: 309-338
Page count: 30