Learning in the multiple class random neural network

Cited by: 90
Authors
Gelenbe, E [1 ]
Hussain, KF [1 ]
Affiliation
[1] Univ Cent Florida, Sch Elect Engn & Comp Sci, Orlando, FL 32816 USA
Source
IEEE TRANSACTIONS ON NEURAL NETWORKS | 2002, Vol. 13, No. 6
Keywords
color image textures; learning; multiple class random neural network (MCRNN); neural networks; random neural network (RNN); recurrent networks;
DOI
10.1109/TNN.2002.804228
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Learning is one of the most useful features of artificial neural networks. In engineering applications, neural-network learning is widely used to capture relationships between sets of data when input-output examples are available and a mathematical representation of the relationship is not available in advance. Networks which have "learned" are then capable of "generalization." Spiked recurrent neural networks with "multiple classes" of signals have recently been introduced by Gelenbe and Fourneau, as an extension of the recurrent spiked random neural network introduced by Gelenbe. These new networks can represent interconnected neurons which simultaneously process multiple streams of data, such as the color information of images, or networks which simultaneously process streams of data from multiple sensors. This paper introduces a learning algorithm which applies both to recurrent and feedforward multiple signal class random neural networks (MCRNNs). It is based on gradient descent optimization of a cost function. The algorithm exploits the analytical properties of the MCRNN and requires the solution of a system of nC linear and nC nonlinear equations (where C is the number of signal classes and n is the number of neurons) each time the network learns a new input-output pair. Thus, the algorithm is of O([nC]^3) complexity for the recurrent case, and O([nC]^2) for a feedforward MCRNN. Finally, we apply this learning algorithm to color texture modeling (learning), based on learning the weights of a recurrent network directly from the color texture image. The same trained recurrent network is then used to generate a synthetic texture that imitates the original. This approach is illustrated with various synthetic and natural textures.
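The complexity claim above follows from the fact that, for the recurrent case, each new input-output pair requires solving a dense linear system in nC unknowns. The following is a minimal structural sketch of that cost, not the authors' actual MCRNN fixed-point equations: the toy matrix `A` merely stands in for whatever nC x nC system the algorithm assembles, and the O(m^3) Gaussian elimination is where the O([nC]^3) per-step cost comes from.

```python
# Hypothetical sketch (NOT the MCRNN equations from the paper): each learning
# step for a recurrent MCRNN solves an (nC) x (nC) linear system, which with
# direct methods costs O((nC)^3) operations.

def solve_linear(A, b):
    """Gaussian elimination with partial pivoting: O(m^3) for an m x m system."""
    m = len(A)
    # Augment a working copy of A with the right-hand side b.
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(m):
        # Partial pivoting: swap in the row with the largest entry in this column.
        piv = max(range(col, m), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, m):
            f = M[r][col] / M[col][col]
            for c in range(col, m + 1):
                M[r][c] -= f * M[col][c]
    # Back substitution.
    x = [0.0] * m
    for r in range(m - 1, -1, -1):
        x[r] = (M[r][m] - sum(M[r][c] * x[c] for c in range(r + 1, m))) / M[r][r]
    return x

n, C = 4, 3          # toy sizes: n neurons, C signal classes
m = n * C            # each learning step solves an m x m system, m = nC

# Toy diagonally dominant system standing in for the algorithm's linear equations.
A = [[3.0 if i == j else 0.1 for j in range(m)] for i in range(m)]
b = [1.0] * m
q = solve_linear(A, b)   # solution vector of length nC
print(len(q))            # 12
```

For a feedforward MCRNN the system becomes triangular, so it can be solved by substitution alone in O((nC)^2), which matches the lower complexity stated in the abstract.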
Pages: 1257-1267
Page count: 11