Maximum trimmed likelihood estimators: a unified approach, examples, and algorithms

Cited: 60
Authors
Hadi, AS
Luceno, A
Affiliations
[1] CORNELL UNIV, DEPT STAT, ITHACA, NY 14853 USA
[2] UNIV CANTABRIA, ETS INGN CAMINOS, E-39005 SANTANDER, SPAIN
Funding
US National Science Foundation;
Keywords
maximum likelihood; least median of squares; least squares; least trimmed squares; minimum volume ellipsoid; robust estimation;
DOI: 10.1016/S0167-9473(97)00011-X
Chinese Library Classification (CLC): TP39 [Computer Applications];
Subject classification codes: 081203; 0835
Abstract
The likelihood principle is one of the most important concepts in statistics. Among other things, it is used to obtain point estimators for the parameters of probability distributions of random variables by maximizing the likelihood function. The resulting maximum likelihood estimators usually have desirable properties such as consistency and efficiency. However, these estimators are often not robust, since there is usually a trade-off between robustness and efficiency: the more robust an estimator is, the less efficient it may be when the data come from a Gaussian distribution. In this paper we investigate how the estimators change when the likelihood function is replaced by a trimmed version of it. The idea is to trim the likelihood function rather than directly trim the data. Because the likelihood is scalar-valued, it is always possible to order and trim univariate as well as multivariate observations according to their contributions to the likelihood function. The degree of trimming depends on parameters specified by the analyst. We show how this trimmed likelihood principle produces many existing estimators (e.g., maximum likelihood, least squares, least trimmed squares, least median of squares, and minimum volume ellipsoid estimators) as special cases. Since the resulting estimators can be very robust, they can be used, for example, for outlier detection. In some cases the estimators can be obtained in closed form; in other cases they require numerical solutions, and for these we provide several algorithms for computing the estimates. The method and the algorithms are illustrated by several examples of both discrete and continuous distributions. (C) 1997 Elsevier Science B.V.
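The trimming idea in the abstract — order observations by their log-likelihood contributions, keep the largest contributions, and maximize the likelihood over the retained subset — can be illustrated with a minimal Python sketch. This is not one of the paper's algorithms: the normal location-scale model, the fixed retention count `h`, and the concentration-style iteration (refit, re-trim, repeat until the retained subset stabilizes) are illustrative assumptions.

```python
import math

def mtl_normal(data, h, max_iter=100):
    """Maximum trimmed likelihood sketch for a normal sample: keep the h
    observations with the largest log-likelihood contributions under the
    current fit, refit (mu, sigma) by MLE on that subset, and repeat
    until the retained subset no longer changes."""
    # Start from the full-sample MLE.
    mu = sum(data) / len(data)
    sigma = math.sqrt(sum((x - mu) ** 2 for x in data) / len(data)) or 1.0
    kept = None
    for _ in range(max_iter):
        # Per-observation log-likelihood contributions (constant dropped).
        ll = [-math.log(sigma) - 0.5 * ((x - mu) / sigma) ** 2 for x in data]
        # Trim the likelihood: retain the h largest contributions, i.e.
        # the h observations closest to the current fit.
        new_kept = sorted(range(len(data)),
                          key=lambda i: ll[i], reverse=True)[:h]
        if kept is not None and set(new_kept) == set(kept):
            break  # retained subset stabilized
        kept = new_kept
        sub = [data[i] for i in kept]
        mu = sum(sub) / len(sub)
        sigma = math.sqrt(sum((x - mu) ** 2 for x in sub) / len(sub)) or 1.0
    return mu, sigma
```

For the normal model, trimming by likelihood contribution reduces to trimming the largest squared standardized residuals, which is how the principle recovers least trimmed squares as a special case; with `h` equal to the sample size it reduces to the ordinary MLE.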
Pages: 251-272 (22 pages)