Prediction Errors in Learning Drug Response from Gene Expression Data - Influence of Labeling, Sample Size, and Machine Learning Algorithm

Cited by: 15
Authors
Bayer, Immanuel [1 ]
Groth, Philip [2 ]
Schneckener, Sebastian [3 ]
Affiliations
[1] Rhein Westfal TH Aachen, Aachen Inst Adv Study Computat Engn Sci AICES, Aachen, Germany
[2] Bayer Pharma AG, Therapeut Res Grp, Berlin, Germany
[3] Bayer Technol Serv GmbH, Syst Biol, Leverkusen, Germany
Source
PLOS ONE | 2013, Vol. 8, Issue 7
Keywords
MICROARRAY; SELECTION; REGULARIZATION; CLASSIFICATION; UPDATE;
DOI
10.1371/journal.pone.0070294
Chinese Library Classification (CLC)
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences];
Subject Classification Codes
07; 0710; 09;
Abstract
Model-based prediction depends on many choices, ranging from the sample collection and prediction endpoint to the choice of algorithm and its parameters. Here we studied the effects of such choices, exemplified by predicting the sensitivity (as IC50) of cancer cell lines towards a variety of compounds. For this, we used three independent sample collections and applied several machine learning algorithms to predict a variety of endpoints for drug response. We compared all possible models for combinations of sample collection, algorithm, drug, and labeling to an identically generated null model. The predictability of treatment effects varies among compounds, i.e., response could be predicted for some compounds but not for others. The choice of sample collection plays a major role in lowering the prediction error, as does sample size. However, we found that no algorithm was able to consistently outperform the others, and there was no significant difference between regression and two- or three-class predictors in this experimental setting. These results indicate that response-modeling projects should direct efforts mainly towards sample collection and data quality, rather than towards method adjustment.
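To illustrate the comparison against an "identically generated null model" described above, the following minimal sketch shows one common way to do this: score a drug-response regression model by cross-validated prediction error, then rebuild the same model on permuted response labels to obtain a null distribution. This is not the authors' code; the data, model choice (random forest), and parameters are illustrative assumptions only.

```python
# Minimal sketch (assumed workflow, not the authors' implementation):
# compare the cross-validated error of an IC50 regression model against
# null models built identically on permuted labels.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical inputs: gene expression matrix (cell lines x genes) and
# log-IC50 values for a single compound (toy data for illustration).
X = rng.normal(size=(60, 500))                               # 60 cell lines, 500 genes
y = X[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=60)    # toy response

model = RandomForestRegressor(n_estimators=200, random_state=0)

# Cross-validated error of the real model.
real_mse = -cross_val_score(model, X, y, cv=5,
                            scoring="neg_mean_squared_error").mean()

# Identically generated null models: permuting the labels breaks any
# genuine association between expression and response.
null_mses = []
for _ in range(100):
    y_perm = rng.permutation(y)
    mse = -cross_val_score(model, X, y_perm, cv=5,
                           scoring="neg_mean_squared_error").mean()
    null_mses.append(mse)

# Empirical p-value: fraction of null models at least as good as the real one.
p_value = (np.sum(np.array(null_mses) <= real_mse) + 1) / (len(null_mses) + 1)
print(f"real MSE = {real_mse:.3f}, null p-value = {p_value:.3f}")
```

A compound would be called "predictable" in this sketch only if the real model's error falls well below the bulk of the permutation null distribution; the same comparison can be repeated per sample collection, algorithm, and labeling scheme.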
Pages: 13