REPRESENTING AND LEARNING BOOLEAN FUNCTIONS OF MULTIVALUED FEATURES

Cited by: 8
Authors
HAMPSON, SE
VOLPER, DJ
Affiliation
[1] University of California, Dept. of Information and Computer Science, Irvine
Source
IEEE TRANSACTIONS ON SYSTEMS MAN AND CYBERNETICS | 1990 / Vol. 20 / No. 1
DOI
10.1109/21.47810
Chinese Library Classification
TP3 [Computing technology, computer technology];
Discipline Classification Code
0812;
Abstract
Analysis and empirical measurement of thresholded linear functions of binary features are extended to determine the corresponding properties for multivalued features, where each feature can take on n equally spaced values. The number of thresholded linear functions, the maximum weight size, the training speed, and the number of nodes necessary to represent arbitrary Boolean functions are all shown to increase polynomially with the number of distinct values the input features can assume. As was the case with binary input features, all of these characteristics are shown to increase exponentially with the number of features. If input patterns are viewed as grid points in a multivalued, multidimensional space, origin placement is shown to affect training time. Adaptive origin placement can significantly accelerate training and is consistent with most behavioral models of habituation. Two network training algorithms are described: focusing and back propagation. Empirically, both are capable of learning arbitrary Boolean functions of multivalued features in a two-level net. Focusing is proved to converge to a correct classification and permits some time-space complexity analysis. Training time for this algorithm is polynomial in the number of values a feature can assume and exponential in the number of features. Back propagation is not necessarily convergent, but for randomly generated Boolean functions the empirical behavior of our implementation is similar to that of the focusing algorithm. © 1990 IEEE
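The abstract's central object, a thresholded linear function of multivalued features, can be illustrated with a short sketch. The Python code below is not the paper's focusing or back-propagation algorithm; it is a minimal perceptron-style error-correction rule applied to a single thresholded linear unit whose inputs take integer values 0..n-1. The names threshold_unit and perceptron_train, and the toy target concept, are assumptions made purely for illustration.

def threshold_unit(weights, theta, x):
    """Thresholded linear function: output 1 if w . x >= theta, else 0."""
    return 1 if sum(w * xi for w, xi in zip(weights, x)) >= theta else 0

def perceptron_train(examples, n_features, epochs=100):
    """Error-correction (perceptron-style) training on integer-valued features.

    examples: list of (x, label) pairs, where x is a tuple of ints in
    {0, ..., n-1} and label is 0 or 1. Returns integer weights and threshold.
    """
    weights = [0] * n_features
    theta = 0
    for _ in range(epochs):
        errors = 0
        for x, label in examples:
            y = threshold_unit(weights, theta, x)
            if y != label:
                errors += 1
                delta = label - y  # +1 on a miss, -1 on a false alarm
                weights = [w + delta * xi for w, xi in zip(weights, x)]
                theta -= delta     # lower threshold on misses, raise it on false alarms
        if errors == 0:            # every training example classified correctly
            break
    return weights, theta

# Toy usage: the linearly separable concept "x0 + x1 >= 3" on two features
# taking n = 4 values (0..3); a single unit suffices, so training converges.
if __name__ == "__main__":
    data = [((a, b), 1 if a + b >= 3 else 0) for a in range(4) for b in range(4)]
    w, t = perceptron_train(data, n_features=2)
    assert all(threshold_unit(w, t, x) == y for x, y in data)
    print("weights:", w, "threshold:", t)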
Pages: 67-80
Number of pages: 14