A comprehensive comparison of random forests and support vector machines for microarray-based cancer classification

Cited by: 472
Authors
Statnikov, Alexander [1 ]
Wang, Lily [2 ]
Aliferis, Constantin F. [1 ,2 ,3 ,4 ]
Affiliations
[1] Vanderbilt Univ, Dept Biomed Informat, Nashville, TN 37203 USA
[2] Vanderbilt Univ, Dept Biostat, Nashville, TN USA
[3] Vanderbilt Univ, Dept Canc Biol, Nashville, TN USA
[4] Vanderbilt Univ, Dept Comp Sci, Nashville, TN 37235 USA
Keywords
DOI
10.1186/1471-2105-9-319
Chinese Library Classification (CLC)
Q5 [Biochemistry];
Discipline codes
071010; 081704;
Abstract
Background: Cancer diagnosis and clinical outcome prediction are among the most important emerging applications of gene expression microarray technology, with several molecular signatures on their way toward clinical deployment. Using the most accurate classification algorithms available for microarray gene expression data is a critical ingredient in developing the best possible molecular signatures for patient care. As suggested by a large body of literature to date, support vector machines can be considered "best of class" algorithms for classification of such data. Recent work, however, suggests that random forest classifiers may outperform support vector machines in this domain.
Results: In the present paper we identify methodological biases of prior work comparing random forests and support vector machines, and we conduct a new, rigorous evaluation of the two algorithms that corrects these limitations. Our experiments use 22 diagnostic and prognostic datasets and show that support vector machines outperform random forests, often by a large margin. Our data also underline the importance of sound research design in benchmarking and comparing bioinformatics algorithms.
Conclusion: We found that, both on average and in the majority of microarray datasets, random forests are outperformed by support vector machines, both when no gene selection is performed and when several popular gene selection methods are used.
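To make the kind of comparison described above concrete, the sketch below shows a generic random-forest-versus-SVM benchmark in Python with scikit-learn, using a synthetic stand-in for a microarray matrix and a simple univariate feature-selection step. This is an illustrative assumption, not the paper's actual protocol: the dataset, the number of selected genes, and the parameter settings are placeholders. Note that the selection step is placed inside the pipeline so it is refit on each training fold, which avoids the gene-selection bias that can distort such comparisons.

```python
# Illustrative sketch only (NOT the paper's protocol): compare an SVM and a
# random forest on a high-dimensional, small-sample classification task with
# univariate feature selection performed inside each cross-validation fold.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for a microarray dataset: many features, few samples.
X, y = make_classification(n_samples=100, n_features=2000, n_informative=50,
                           random_state=0)

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

models = {
    "SVM (RBF)": Pipeline([
        ("select", SelectKBest(f_classif, k=200)),  # keep top-ranked genes
        ("scale", StandardScaler()),
        ("clf", SVC(kernel="rbf", C=1.0, gamma="scale")),
    ]),
    "Random forest": Pipeline([
        ("select", SelectKBest(f_classif, k=200)),
        ("clf", RandomForestClassifier(n_estimators=500, random_state=0)),
    ]),
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=cv, scoring="accuracy")
    print(f"{name}: mean accuracy {scores.mean():.3f} (+/- {scores.std():.3f})")
```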
Pages: 10