Predicting conditional probability densities of stationary stochastic time series

Cited by: 15
Authors
Husmeier, D
Taylor, JG
Affiliation
[1] Department of Mathematics, King's College London, The Strand, London WC2R 2LS, UK
Keywords
stationary Markov process; conditional probability density; moment generating function; universal approximation theorem; maximum likelihood; Bayes rule;
DOI
10.1016/S0893-6080(96)00062-7
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Feedforward neural networks applied to time series prediction are usually trained to predict the next time step x(t+1) as a function of m previous values, x(t) := (x(t), x(t-1), ..., x(t-m+1)), which, if a sum-of-squares error function is chosen, results in predicting the conditional mean ⟨x(t+1) | x(t)⟩. However, further information about the distribution is lost, which is a serious drawback, especially in the case of multimodality, where the conditional mean alone turns out to be an insufficient or even misleading quantity. The only satisfactory approach in the general case is therefore to predict the whole conditional probability density of the time series, P(x(t+1) | x(t), x(t-1), ..., x(t-m+1)). We deduce here a two-hidden-layer universal-approximator network for modelling this function, and develop a training algorithm from maximum likelihood. The method is tested on three time series of different nature, which demonstrate how state-space-dependent variances and multimodal transitions can be learned. We finally note comparisons with other recent neural network approaches to this problem, and state results on a benchmark problem. (C) 1997 Elsevier Science Ltd.
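The core idea of the abstract — fitting a conditional density P(x(t+1) | x(t)) by maximizing likelihood rather than minimizing squared error — can be illustrated with a toy sketch. The model below is a deliberately simplified assumption, not the paper's two-hidden-layer network: a single conditional Gaussian with linear mean a·x(t)+b and learned log-variance s, trained by gradient descent on the negative log-likelihood of a synthetic AR(1) series. The learning rate, data-generating process, and parameterization are all illustrative choices.

```python
# Hedged sketch: maximum-likelihood fit of a conditional Gaussian
# P(x(t+1) | x(t)) = N(a*x(t) + b, exp(s)) to a synthetic AR(1) series.
# This is a minimal single-Gaussian stand-in for the paper's network,
# chosen only to show the maximum-likelihood training principle.
import math
import random

random.seed(0)

# Synthetic stationary series: x(t+1) = 0.5*x(t) + Gaussian noise
xs = [0.0]
for _ in range(500):
    xs.append(0.5 * xs[-1] + random.gauss(0.0, 0.3))
pairs = list(zip(xs[:-1], xs[1:]))   # (x(t), x(t+1)) training pairs

a, b, s = 0.0, 0.0, 0.0              # mean slope, mean offset, log-variance

def nll(a, b, s):
    """Average negative log-likelihood of the pairs under the model."""
    var = math.exp(s)
    return sum(0.5 * (math.log(2 * math.pi) + s + (y - a * x - b) ** 2 / var)
               for x, y in pairs) / len(pairs)

initial = nll(a, b, s)
lr = 0.1
for _ in range(200):
    var = math.exp(s)
    ga = gb = gs = 0.0
    for x, y in pairs:
        r = y - a * x - b                 # residual of the conditional mean
        ga += -r * x / var                # d NLL / d a
        gb += -r / var                    # d NLL / d b
        gs += 0.5 * (1.0 - r * r / var)   # d NLL / d s (log-variance)
    n = len(pairs)
    a -= lr * ga / n
    b -= lr * gb / n
    s -= lr * gs / n

print(a)                     # slope estimate, should approach the true 0.5
print(nll(a, b, s) < initial)
```

Because the likelihood is the training criterion, the state-dependent spread of the data is captured by the variance parameter rather than discarded, which is the advantage the abstract claims over sum-of-squares training; a multimodal extension would replace the single Gaussian with a mixture.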
Pages: 479-497
Page count: 19
References
21 items in total
[1] Allen DW, 1994, Proc. ICANN 94, p. 529.
[2] Anonymous, Taschenbuch Math.
[3] Bishop CM, 1994, Tech. Rep. NCRG/4288.
[4] Bishop CM, 1995, Neural Networks for Pattern Recognition.
[5] Devaney R, 1987, An Introduction to Chaotic Dynamical Systems, DOI 10.2307/3619398.
[6] Hornik K. Approximation capabilities of multilayer feedforward networks. Neural Networks, 1991, 4(2):251-257.
[7] Hornik K, Stinchcombe M, White H. Multilayer feedforward networks are universal approximators. Neural Networks, 1989, 2(5):359-366.
[8] Husmeier D, 1995, in press, J Math Arti.
[9] Jacobs RA. Increased rates of convergence through learning rate adaptation. Neural Networks, 1988, 1(4):295-307.
[10] MacKay D, 1993, Bayesian Nonlinear M.