Adaptive regression by mixing

Cited: 203
Authors
Yang, YH [1]
Affiliation
[1] Iowa State Univ Sci & Technol, Dept Stat, Ames, IA 50011 USA
Keywords
adaptive estimation; combining procedures; nonparametric regression
DOI
10.1198/016214501753168262
Chinese Library Classification
O21 [Probability Theory and Mathematical Statistics]; C8 [Statistics]
Discipline Codes
020208; 070103; 0714
Abstract
Adaptation over different procedures is of practical importance. Different procedures perform well under different conditions. In many practical situations, it is rather hard to assess which conditions are (approximately) satisfied so as to identify the best procedure for the data at hand. Thus automatic adaptation over various scenarios is desirable. A practically feasible method, named adaptive regression by mixing (ARM), is proposed to convexly combine general candidate regression procedures. Under mild conditions, the resulting estimator is theoretically shown to perform optimally in rates of convergence without knowing which of the original procedures works the best. Simulations are conducted in several settings, including comparing a parametric model with nonparametric alternatives, comparing a neural network with projection pursuit in multidimensional regression, and combining bandwidths in kernel regression. The results clearly support the theoretical properties of ARM. The ARM algorithm assigns weights to the candidate models/procedures via proper assessment of the performance of the estimators. The data are split into two parts, one for estimation and the other for measuring behavior in prediction. Although there are many plausible ways to assign the weights, ARM has a connection with information theory, which ensures the desired adaptation capability. Indeed, under mild conditions, we show that the squared L-2 risk of the estimator based on ARM is basically bounded above by the risk of each candidate procedure plus a small penalty term of order 1/n. Minimizing over the procedures gives the automatically optimal rate of convergence for ARM. Model selection often induces unnecessarily large variability in estimation. Alternatively, a proper weighting of the candidate models can be more stable, resulting in a smaller risk. Simulations suggest that ARM works better than model selection using the Akaike or Bayesian information criteria when the error variance is not very small.
Pages: 574-588
Page count: 15