Least squares after model selection in high-dimensional sparse models

Cited by: 355
Authors
Belloni, Alexandre
Chernozhukov, Victor
Institutions
[1] Fuqua School of Business, Duke University, 100 Fuqua Drive, Durham
[2] Massachusetts Institute of Technology, 250 Memorial Drive, Cambridge
Funding
U.S. National Science Foundation
Keywords
Lasso; OLS post-Lasso; post-model selection estimators
DOI
10.3150/11-BEJ410
Chinese Library Classification
O21 [Probability theory and mathematical statistics]; C8 [Statistics]
Discipline classification codes
020208; 070103; 0714
Abstract
In this article we study post-model selection estimators that apply ordinary least squares (OLS) to the model selected by first-step penalized estimators, typically Lasso. It is well known that Lasso can estimate the nonparametric regression function at nearly the oracle rate, and is thus hard to improve upon. We show that the OLS post-Lasso estimator performs at least as well as Lasso in terms of the rate of convergence, and has the advantage of a smaller bias. Remarkably, this performance occurs even if the Lasso-based model selection "fails" in the sense of missing some components of the "true" regression model. By the "true" model, we mean the best s-dimensional approximation to the nonparametric regression function chosen by the oracle. Furthermore, the OLS post-Lasso estimator can perform strictly better than Lasso, in the sense of a strictly faster rate of convergence, if the Lasso-based model selection correctly includes all components of the "true" model as a subset and also achieves sufficient sparsity. In the extreme case, when Lasso perfectly selects the "true" model, the OLS post-Lasso estimator becomes the oracle estimator. An important ingredient in our analysis is a new sparsity bound on the dimension of the model selected by Lasso, which guarantees that this dimension is at most of the same order as the dimension of the "true" model. Our rate results are nonasymptotic and hold in both parametric and nonparametric models. Moreover, our analysis is not limited to the Lasso estimator acting as a selector in the first step, but also applies to any other estimator, for example, various forms of thresholded Lasso, with good rates and good sparsity properties. Our analysis covers both traditional thresholding and a new practical, data-driven thresholding scheme that induces additional sparsity subject to maintaining a certain goodness of fit. The latter scheme has theoretical guarantees similar to those of Lasso or OLS post-Lasso, but it dominates those procedures as well as traditional thresholding in a wide variety of experiments.
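
The two-step procedure described in the abstract admits a compact illustration. The following is a minimal sketch of OLS post-Lasso using scikit-learn on simulated data; the design, the penalty level alpha, and all variable names are illustrative assumptions rather than the paper's data-driven choices (the paper's analysis also allows first-step selectors other than Lasso, such as thresholded Lasso).

# Minimal sketch of OLS post-Lasso (illustrative assumptions, not the paper's tuning).
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(0)
n, p, s = 100, 500, 5                       # n observations, p regressors, s "true" ones
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:s] = 1.0                              # sparse "true" coefficient vector
y = X @ beta + rng.standard_normal(n)

# Step 1: Lasso as a model selector (alpha chosen ad hoc here).
lasso = Lasso(alpha=0.1).fit(X, y)
selected = np.flatnonzero(lasso.coef_)

# Step 2: refit by OLS using only the selected regressors (OLS post-Lasso).
post_lasso_coef = np.zeros(p)
if selected.size > 0:
    ols = LinearRegression().fit(X[:, selected], y)
    post_lasso_coef[selected] = ols.coef_

print("selected support:", selected)
print("max |Lasso coef - truth|     :", np.max(np.abs(lasso.coef_ - beta)))
print("max |post-Lasso coef - truth|:", np.max(np.abs(post_lasso_coef - beta)))

The refit in Step 2 removes the shrinkage bias that the Lasso penalty imposes on the selected coefficients, which is the source of the improvement the abstract describes when the selected model contains the "true" one.
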
Pages: 521-547
Page count: 27