The ManyBugs and IntroClass Benchmarks for Automated Repair of C Programs

Cited by: 213
Authors
Le Goues, Claire [1 ]
Holtschulte, Neal [2 ]
Smith, Edward K. [3 ]
Brun, Yuriy [3 ]
Devanbu, Premkumar [4 ]
Forrest, Stephanie [2 ]
Weimer, Westley [5 ]
Affiliations
[1] Carnegie Mellon Univ, Sch Comp Sci, Pittsburgh, PA 15213 USA
[2] Univ New Mexico, Dept Comp Sci, Albuquerque, NM 87131 USA
[3] Univ Massachusetts, Coll Informat & Comp Sci, Amherst, MA 01003 USA
[4] Univ Calif Davis, Dept Comp Sci, Davis, CA 95616 USA
[5] Univ Virginia, Dept Comp Sci, Charlottesville, VA 22904 USA
Funding
U.S. National Science Foundation
Keywords
Automated program repair; benchmark; subject defect; reproducibility; ManyBugs; IntroClass
DOI
10.1109/TSE.2015.2454513
Chinese Library Classification
TP31 [Computer software]
Subject Classification Codes
081202; 0835
Abstract
The field of automated software repair lacks a set of common benchmark problems. Although benchmark sets are used widely throughout computer science, existing benchmarks are not easily adapted to the problem of automatic defect repair, which has several special requirements. Most important of these is the need for benchmark programs with reproducible, important defects and a deterministic method for assessing whether those defects have been repaired. This article details the need for a new set of benchmarks, outlines requirements, and then presents two datasets, ManyBugs and IntroClass, consisting between them of 1,183 defects in 15 C programs. Each dataset is designed to support the comparative evaluation of automatic repair algorithms across a variety of experimental questions. The datasets have empirically defined guarantees of reproducibility and benchmark quality, and each study object is categorized to facilitate qualitative evaluation and comparisons by category of bug or program. The article presents baseline experimental results on both datasets for three existing repair methods, GenProg, AE, and TrpAutoRepair, to reduce the burden on researchers who adopt these datasets for their own comparative evaluations.
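The "deterministic method for assessing whether those defects have been repaired" is test-suite based: in both datasets, each defect is demonstrated by at least one initially failing test case, and a candidate patch is accepted only if it makes those tests pass without breaking the initially passing ones. The Python sketch below illustrates that acceptance criterion only; it is not the datasets' actual evaluation harness, and the names run_test, passing_tests, and failing_tests are hypothetical.

import subprocess

# Illustrative sketch only; assumed names, not the ManyBugs/IntroClass harness.
def run_test(binary: str, test_script: str) -> bool:
    """Run one test case against a candidate binary; exit code 0 means pass."""
    try:
        result = subprocess.run([test_script, binary],
                                capture_output=True, timeout=60)
    except subprocess.TimeoutExpired:
        return False  # treat hangs as failures so evaluation always terminates
    return result.returncode == 0

def is_repaired(binary: str,
                passing_tests: list[str],
                failing_tests: list[str]) -> bool:
    """Accept a patch only if every originally failing (bug-demonstrating)
    test now passes AND every originally passing test still passes."""
    return (all(run_test(binary, t) for t in failing_tests) and
            all(run_test(binary, t) for t in passing_tests))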
Pages: 1236-1256
Number of pages: 21