Test input reduction for result inspection to facilitate fault localization

Cited: 56
Authors
Hao, Dan [1 ]
Xie, Tao [2 ]
Zhang, Lu [1 ]
Wang, Xiaoyin [1 ]
Sun, Jiasu [1 ]
Mei, Hong [1 ]
Affiliations
[1] Peking Univ, Key Lab High Confidence Software Technol, Sch Elect Engn & Comp Sci, Minist Educ,Inst Software, Beijing 100871, Peoples R China
[2] N Carolina State Univ, Dept Comp Sci, Raleigh, NC 27695 USA
Funding
US National Science Foundation; National Natural Science Foundation of China;
Keywords
Test suite reduction; Testing; Debugging; Fault localization; TEST-SUITE REDUCTION; PRIORITIZATION;
DOI
10.1007/s10515-009-0056-x
Chinese Library Classification (CLC)
TP31 [Computer Software];
Discipline code
081202; 0835;
Abstract
Testing-based fault-localization (TBFL) approaches often require the availability of high-statement-coverage test suites that sufficiently exercise the areas around the faults. However, in practice, fault localization often starts with a test suite whose quality may not be sufficient to apply TBFL approaches. Recent capture/replay or traditional test-generation tools can be used to acquire a high-statement-coverage test collection (i.e., test inputs only) without expected outputs. But it is expensive or even infeasible for developers to manually inspect the results of so many test inputs. To enable practical application of TBFL approaches, we propose three strategies to reduce the test inputs in an existing test collection for result inspection. These three strategies are based on the execution traces of test runs using the test inputs. With the three strategies, developers can select only a representative subset of the test inputs for result inspection and fault localization. We implemented and applied the three test-input-reduction strategies to a series of benchmarks: the Siemens programs, DC, and TCC. The experimental results show that our approach can help developers inspect the results of a smaller subset (less than 10%) of test inputs, whose fault-localization effectiveness is close to that of the whole test collection.
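The record does not detail the paper's three reduction strategies, only that they select a representative subset of test inputs from the execution traces of test runs. As an illustration of that general idea (not the authors' actual method), the sketch below performs a generic greedy, coverage-based reduction: given each test input's executed-statement trace, it repeatedly picks the input whose trace covers the most not-yet-covered statements, stopping once the subset matches the whole collection's coverage. All names and trace data are hypothetical.

```python
def reduce_test_inputs(traces):
    """Greedily select test inputs whose combined statement coverage
    equals that of the whole collection.

    traces: dict mapping a test-input id -> set of covered statement ids
            (one entry per execution trace).
    Returns a list of selected test-input ids for result inspection.
    """
    remaining = dict(traces)
    covered = set()
    full_coverage = set().union(*traces.values())
    selected = []
    while covered != full_coverage:
        # Pick the test input whose trace adds the most new statements.
        best = max(remaining, key=lambda t: len(remaining[t] - covered))
        if not remaining[best] - covered:
            break  # no remaining input adds coverage
        selected.append(best)
        covered |= remaining.pop(best)
    return selected

# Hypothetical execution traces for four test inputs.
traces = {
    "t1": {1, 2, 3},
    "t2": {3, 4},
    "t3": {1, 2, 3, 4},
    "t4": {5},
}
print(reduce_test_inputs(traces))  # prints ['t3', 't4']
```

A developer would then inspect only the outputs of the selected inputs (here 2 of 4) rather than the whole collection, which is the cost the paper's strategies aim to minimize while preserving fault-localization effectiveness.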
Pages: 5-31
Page count: 27
References
44 records
[1]  
Agrawal H, 1995, SIXTH INTERNATIONAL SYMPOSIUM ON SOFTWARE RELIABILITY ENGINEERING, PROCEEDINGS, P143, DOI 10.1109/ISSRE.1995.497652
[2]  
[Anonymous], 2007, The Research Methods Knowledge Base
[3]   Toward the determination of sufficient mutant operators for C [J].
Barbosa, EF ;
Maldonado, JC ;
Vincenzi, AMR .
SOFTWARE TESTING VERIFICATION & RELIABILITY, 2001, 11 (02) :113-136
[4]  
Baresi L., 2001, Test Oracles
[5]  
Baudry B., 2006, P 28 INT C SOFTW ENG, P82, DOI 10.1145/1134285.1134299
[6]  
Cleve H, 2005, PROC INT CONF SOFTW, P342
[7]   On similarity-awareness in testing-based fault localization [J].
Dan Hao ;
Lu Zhang ;
Ying Pan ;
Hong Mei ;
Jiasu Sun .
AUTOMATED SOFTWARE ENGINEERING, 2008, 15 (02) :207-249
[8]   Supporting controlled experimentation with testing techniques: An infrastructure and its potential impact [J].
Do, HS ;
Elbaum, S ;
Rothermel, G .
EMPIRICAL SOFTWARE ENGINEERING, 2005, 10 (04) :405-435
[9]   Test case prioritization: A family of empirical studies [J].
Elbaum, S ;
Malishevsky, AG ;
Rothermel, G .
IEEE TRANSACTIONS ON SOFTWARE ENGINEERING, 2002, 28 (02) :159-182
[10]  
Enderton HB., 1977, Elements of set theory