Sensitivity of Ensemble Forecast Verification to Model Bias

Cited by: 34
Authors
Wang, Jingzhuo [1 ,2 ]
Chen, Jing [2 ]
Du, Jun [3 ]
Zhang, Yutao [2 ]
Xia, Yu [4 ]
Deng, Guo [2 ]
Affiliations
[1] China Meteorol Adm, Chinese Acad Meteorol Sci, Beijing, Peoples R China
[2] China Meteorol Adm, Numer Weather Predict Ctr, Beijing, Peoples R China
[3] NOAA, Environm Modeling Ctr, NWS, NCEP, College Pk, MD USA
[4] Nanjing Univ Informat Sci & Technol, Nanjing, Jiangsu, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Ensembles; Forecast verification; Skill; Ranked probability score; Transform Kalman filter; Temperature forecasts; Initial perturbations; Mesoscale; Spread; Reliability; System; Error
DOI
10.1175/MWR-D-17-0223.1
Chinese Library Classification (CLC)
P4 [Atmospheric sciences (meteorology)]
Discipline codes
0706; 070601
Abstract
This study demonstrates how model bias can adversely affect the quality assessment of an ensemble prediction system (EPS) by verification metrics. A regional EPS [the Global and Regional Assimilation and Prediction Enhanced System-Regional Ensemble Prediction System (GRAPES-REPS)] was verified over China for a one-month period. Three variables (500-hPa temperature, 2-m temperature, and 250-hPa wind) are selected to represent strong- and weak-bias situations. Ensemble spread and probabilistic forecasts are compared before and after a bias correction. The results show that the conclusions drawn from ensemble verification about the EPS differ dramatically depending on whether model bias is present, for both ensemble spread and probabilistic forecasts. The GRAPES-REPS is severely underdispersive before the bias correction but becomes well calibrated afterward, although the improvement in the spread's spatial structure is much smaller; the spread-skill relation also improves. The probabilities become much sharper and almost perfectly reliable after the bias is removed. It is therefore necessary to remove forecast biases before an EPS can be accurately evaluated, since an EPS addresses only random error, not systematic error. Only when an EPS has little or no forecast bias can ensemble verification metrics reliably reveal its true quality without a prior bias correction. An implication is that EPS developers should not try to dramatically inflate ensemble spread (whether by perturbation methods or statistical calibration) merely to achieve statistical reliability. Instead, the preferred solution is to reduce model bias through prediction system development and to focus on the quality, not the quantity, of spread. Forecast products should also be produced from the debiased ensemble rather than the raw one.
Pages: 781-796
Page count: 16
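
As a rough illustration of the abstract's central point (systematic error inflates the apparent error of the ensemble mean, so a well-spread ensemble looks underdispersive until the bias is removed), here is a minimal synthetic sketch in Python. Everything in it is hypothetical: the constant bias of 1.5, the 15-member ensemble, and the simple training-sample mean-error correction stand in for, and do not reproduce, the GRAPES-REPS experiment or the paper's actual bias-correction scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
n_members, n_points = 15, 100_000

# Hypothetical toy setup: truth and every member share a predictable signal;
# members additionally carry a constant systematic bias (model error) plus
# member-specific random error of the same magnitude as the truth's noise.
signal = rng.normal(0.0, 2.0, n_points)
truth = signal + rng.normal(0.0, 1.0, n_points)
bias = 1.5
members = signal + bias + rng.normal(0.0, 1.0, (n_members, n_points))

def spread_and_skill(ens, obs):
    """Spread = mean ensemble standard deviation;
    skill = RMSE of the ensemble mean (spread ~= RMSE for a reliable EPS)."""
    mean = ens.mean(axis=0)
    spread = ens.std(axis=0, ddof=1).mean()
    rmse = np.sqrt(((mean - obs) ** 2).mean())
    return spread, rmse

# Raw ensemble: the bias inflates the RMSE, so spread << RMSE and the
# ensemble looks severely underdispersive (spread ~1.0 vs RMSE ~1.8 here).
print("raw:      spread=%.2f rmse=%.2f" % spread_and_skill(members, truth))

# Remove the mean error estimated from a training sample (here, the same
# data, for brevity). The spread is untouched, but now spread ~= RMSE.
est_bias = (members.mean(axis=0) - truth).mean()
print("debiased: spread=%.2f rmse=%.2f"
      % spread_and_skill(members - est_bias, truth))
```

Under these assumptions the numbers mirror the paper's qualitative finding: the same ensemble that verification labels underdispersive is judged nearly calibrated once the systematic error is subtracted, because spread never measured the systematic component in the first place.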