ROBUST TRAINABILITY OF SINGLE NEURONS

Cited by: 80
Authors
Höffgen, K.-U. [1]
Simon, H.-U. [1]
Van Horn, K. S. [1]
Affiliations
[1] Brigham Young Univ, Dept Comp Sci, Provo, UT 84602 USA
DOI
10.1006/jcss.1995.1011
Chinese Library Classification
TP3 (computing technology; computer technology)
Discipline Code
0812
Abstract
It is well known that (McCulloch-Pitts) neurons are efficiently trainable to learn an unknown halfspace from examples, using linear-programming methods. We want to analyze how the learning performance degrades when the representational power of the neuron is overstrained, i.e., if more complex concepts than just halfspaces are allowed. We show that the problem of learning a probably almost optimal weight vector for a neuron is so difficult that the minimum error cannot even be approximated to within a constant factor in polynomial time (unless RP = NP); we obtain the same hardness result for several variants of this problem. We considerably strengthen these negative results for neurons with binary weights 0 or 1. We also show that neither heuristic learning nor learning by sigmoidal neurons with a constant reject rate is efficiently possible (unless RP = NP). (C) 1995 Academic Press, Inc.
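As context for the abstract's opening claim, learning a consistent halfspace from labeled examples can be posed as a linear feasibility problem: find (w, b) with y_i (w · x_i + b) ≥ 1 for every sample. A minimal sketch on a hypothetical, linearly separable toy data set, using `scipy.optimize.linprog` (the data and variable layout here are illustrative assumptions, not taken from the paper):

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical 2-D sample: two positive and two negative examples,
# chosen to be linearly separable.
X = np.array([[2.0, 2.0], [3.0, 1.0], [0.0, 0.0], [-1.0, 1.0]])
y = np.array([1, 1, -1, -1])

# Feasibility LP over z = (w_1, w_2, b): the constraint
# y_i * (w . x_i + b) >= 1 becomes -y_i * (x_i, 1) . z <= -1.
# The objective is zero, since we only need any feasible point.
A_ub = -y[:, None] * np.hstack([X, np.ones((len(X), 1))])
b_ub = -np.ones(len(X))
res = linprog(c=np.zeros(3), A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * 3, method="highs")

w, b = res.x[:2], res.x[2]
predictions = np.sign(X @ w + b)  # all samples classified correctly
```

This runs in polynomial time; the paper's point is that once the sample is *not* separable, even approximating the minimum number of misclassifications within a constant factor becomes intractable unless RP = NP.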
Pages: 114-125
Page count: 12