General and efficient multisplitting of numerical attributes

Cited by: 114
Authors
Elomaa, T
Rousu, J
Affiliations
[1] Univ Helsinki, Dept Comp Sci, FIN-00014 Helsinki, Finland
[2] VTT Biotechnol & Food Res, FIN-02044 VTT, Finland
Funding
Academy of Finland
Keywords
supervised learning; numerical attributes; optimal partitions; evaluation functions
DOI
10.1023/A:1007674919412
CLC Classification
TP18 (Artificial Intelligence Theory)
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
In supervised learning, numerical attributes often require special treatment and do not fit the learning scheme as well as one could hope. Nevertheless, they are common in practical tasks and therefore need to be taken into account. We characterize the well-behavedness of an evaluation function, a property that guarantees that the optimal multiway partition of an arbitrary numerical domain is defined on boundary points. Well-behavedness reduces the number of candidate cut points that need to be examined when multisplitting numerical attributes. Many commonly used attribute evaluation functions possess this property; we demonstrate that the cumulative functions Information Gain and Training Set Error as well as the non-cumulative functions Gain Ratio and Normalized Distance Measure are all well-behaved. We also devise a method for finding optimal multisplits efficiently by examining the minimum number of boundary-point combinations required to produce partitions that are optimal with respect to a cumulative and well-behaved evaluation function. Our empirical experiments validate the utility of optimal multisplitting: it consistently produces better partitions than alternative approaches and requires only comparable time. In top-down induction of decision trees, the choice of evaluation function has a more decisive effect on the result than the choice of partitioning strategy; optimizing the value of the most common attribute evaluation functions does not raise the accuracy of the produced decision trees. In our tests, construction time using optimal multisplitting was, on average, twice that required by greedy multisplitting, which in turn required on average twice the time of binary splitting.
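To make the abstract's two key ideas concrete, here is a minimal Python sketch; it is our own illustration, not the authors' implementation, and all function names are hypothetical. It restricts candidate cuts to boundary points and then finds an error-optimal partition into at most k intervals by dynamic programming over those cuts, using Training Set Error, one of the cumulative, well-behaved functions named above.

```python
from collections import Counter
from functools import lru_cache

def boundary_points(values, labels):
    """Bin examples (assumed sorted by value) by distinct value and
    return the bins plus boundary cut indices: a cut between adjacent
    bins is a boundary point unless both bins contain only one and the
    same class (Fayyad & Irani's definition)."""
    bins = []  # (value, Counter of class frequencies) per distinct value
    for v, y in zip(values, labels):
        if bins and bins[-1][0] == v:
            bins[-1][1][y] += 1
        else:
            bins.append((v, Counter({y: 1})))
    cuts = []
    for g in range(len(bins) - 1):
        left, right = set(bins[g][1]), set(bins[g + 1][1])
        if not (len(left) == 1 and left == right):
            cuts.append(g)  # cut falls between bins g and g+1
    return bins, cuts

def optimal_multisplit(values, labels, k):
    """Error-optimal partition into at most k intervals under Training
    Set Error, searching only boundary points (illustrative sketch)."""
    bins, cuts = boundary_points(values, labels)

    # Prefix class counts over bins give O(1) interval statistics.
    prefix = [Counter()]
    for _, dist in bins:
        nxt = prefix[-1].copy()
        nxt.update(dist)
        prefix.append(nxt)

    def err(i, j):
        # Training Set Error of bins i..j: size minus majority-class count.
        span = prefix[j + 1] - prefix[i]
        return sum(span.values()) - max(span.values())

    last = len(bins) - 1

    @lru_cache(maxsize=None)
    def best(start, parts):
        # Minimum total error over bins start..last using <= parts intervals.
        if parts == 1:
            return err(start, last), ()
        result = (err(start, last), ())  # fall back: leave the tail unsplit
        for c in cuts:
            if c < start:
                continue
            head = err(start, c)
            tail_err, tail_cuts = best(c + 1, parts - 1)
            if head + tail_err < result[0]:
                result = (head + tail_err, (c,) + tail_cuts)
        return result

    total_err, cut_bins = best(0, k)
    thresholds = [(bins[c][0] + bins[c + 1][0]) / 2 for c in cut_bins]
    return total_err, thresholds

# Example: classes a,a,b,b,b,a,a,a over values 1..8 -> thresholds 2.5, 5.5
print(optimal_multisplit([1, 2, 3, 4, 5, 6, 7, 8], list("aabbbaaa"), 3))
# -> (0, [2.5, 5.5])
```

The boundary-point restriction is exactly what well-behavedness licenses: cuts inside or between same-class runs can never improve a well-behaved score, so the dynamic program only has to search the typically much smaller set of boundary points rather than all cut-point combinations.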
Pages: 201-244
Page count: 44