An artificial neural network with partitionable outputs

Cited by: 4
Authors
Guan, BT
Gertner, GZ
Affiliations
[1] UNIV ILLINOIS,DEPT NAT RESOURCES & ENVIRONM SCI,URBANA,IL 61801
[2] NATL TAIWAN UNIV,DEPT FORESTRY,TAIPEI 10764,TAIWAN
Keywords
neural network; ecological model; partitionable output; uncertainty assessment; mechanistic forest growth model; error budget;
DOI
10.1016/S0168-1699(96)00025-7
Chinese Library Classification
S [Agricultural Sciences];
Subject Classification Code
09;
Abstract
In this paper we present a neural network with partitionable outputs. The network is a feedforward network with a special weight connection pattern: hidden nodes are grouped into subsets, each subset connecting to only one input, with no connections between subsets. The activation function for the hidden nodes is the arc tangent function (tan⁻¹), and the activation function for the outputs is linear (i.e., no squashing). A network of this type can be trained by any available training algorithm. As an example, the proposed network was used to estimate the prediction variances of an ecological model. Two data sets were generated by a Monte Carlo method, one for training and the other for validation. A random optimization procedure was used as the training algorithm. Validation results showed that the network can indeed approximate the unknown variance-propagating function. One of the network outputs was then partitioned according to the contribution of each input, and the relative importance of each input was determined. We believe the proposed network is a good alternative to certain statistical methods, and that it will be a valuable tool for approximation problems that require partitioning of outputs as part of the results.
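The architecture described in the abstract can be sketched as follows: because each hidden subset sees only one input, and the output layer is linear, the output decomposes exactly into a bias plus one additive term per input, which is what makes the output partitionable. This is a minimal illustrative sketch, not the authors' implementation; the weights here are random stand-ins for values that would be produced by a training algorithm, and all names and sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

n_inputs = 3          # number of input variables (hypothetical)
nodes_per_subset = 4  # hidden nodes devoted to each input (hypothetical)

# One weight/bias set per input subset; random stand-ins for trained values.
W = rng.normal(size=(n_inputs, nodes_per_subset))  # input -> hidden weights
b = rng.normal(size=(n_inputs, nodes_per_subset))  # hidden biases
V = rng.normal(size=(n_inputs, nodes_per_subset))  # hidden -> output weights
c = 0.1                                            # output bias

def contributions(x):
    """Per-input contributions to the single linear output.

    Hidden subset i receives only x[i]; its activation is arctan, and the
    output is linear, so the output equals c + sum_i contributions(x)[i].
    """
    h = np.arctan(W * x[:, None] + b)   # shape (n_inputs, nodes_per_subset)
    return (V * h).sum(axis=1)          # contribution of each input

def predict(x):
    return c + contributions(x).sum()

x = np.array([0.5, -1.0, 2.0])
parts = contributions(x)                # partition of the output by input
print(predict(x), parts)
```

Ranking the entries of `parts` by magnitude gives the relative importance of each input to this output, which mirrors the partitioning step described in the abstract.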
Pages: 39-46
Page count: 8
Related papers
13 items
[2]  
BAFFES PT, 1989, NETS USERS GUIDE VER
[3]  
Cybenko G., 1989, Mathematics of Control, Signals, and Systems, V2, P303, DOI 10.1007/BF02551274
[4]  
GERTNER G, 1987, FOREST SCI, V33, P230
[5]  
GERTNER G, 1996, FOREST SCI, V42
[6]  
GUAN BT, 1993, P C APPL ART NEUR NE, V4, P682
[7]  
Hornik K., Stinchcombe M., White H., 1989, Multilayer feedforward networks are universal approximators, NEURAL NETWORKS, V2, P359
[8]  
Kira, 1964, JAPANESE J ECOL, V14, P97, DOI 10.18960/seitai.14.3_97
[9]  
Shinozaki K., 1964, JAPANESE J ECOL, V14, P113, DOI 10.18960/seitai.14.4_133
[10]  
STINCHCOMBE M, 1989, P INT JOINT C NEURAL, P613