Tolerating Concept and Sampling Shift in Lazy Learning Using Prediction Error Context Switching

Cited by: 10
Author
Marcos Salganicoff
Affiliation
[1] University of Delaware / Alfred I. duPont Institute, Applied Science and Engineering Laboratories (ASEL)
Source
Artificial Intelligence Review | 1997 / Volume 11
Keywords
lazy learning; non-stationary function; concept drift; nearest neighbor learning; time-varying functions
Abstract
In their unmodified form, lazy-learning algorithms may have difficulty learning and tracking time-varying input/output function maps such as those that occur in concept shift. Extensions of these algorithms, such as Time-Windowed Forgetting (TWF), can permit learning of time-varying mappings by deleting older exemplars, but suffer decreased classification accuracy when the input-space sampling distribution of the learning set is time-varying. Additionally, TWF has lower asymptotic classification accuracy than equivalent non-forgetting algorithms when the input sampling distributions are stationary. Other shift-sensitive algorithms, such as Locally-Weighted Forgetting (LWF), avoid the negative effects of time-varying sampling distributions but still have lower asymptotic classification accuracy in non-varying cases. We introduce Prediction Error Context Switching (PECS), which allows lazy-learning algorithms to achieve good classification accuracy under time-varying function mappings and input sampling distributions, while still maintaining their asymptotic classification accuracy in static tasks. PECS works by selecting and re-activating previously stored instances based on their most recent consistency record. The classification accuracy and active learning-set sizes of the above algorithms are compared on a set of learning tasks that illustrate the differing time-varying conditions described above. The results show that PECS has the best overall classification accuracy across these time-varying conditions, while remaining competitive in asymptotic accuracy with unmodified lazy learners intended for static environments.
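The context-switching mechanism lends itself to a short sketch. The Python example below is a minimal, hypothetical rendering of the PECS idea around a 1-nearest-neighbor learner, not the paper's exact procedure: every stored exemplar keeps a sliding window of agreement outcomes with newly arriving labeled points in its neighborhood and is switched between the active prediction set and an inactive cache according to its recent error rate. The class name and the `window`, `radius`, and `err_threshold` parameters are illustrative assumptions standing in for the paper's statistical consistency test.

```python
import math
import random
from collections import deque

class PECSNearestNeighbor:
    """Hypothetical PECS-style 1-NN learner (illustrative sketch only).
    Exemplars are never deleted: each keeps a sliding window of agreement
    outcomes with new labeled points in its neighborhood, and is switched
    between the active prediction set and an inactive cache based on its
    recent empirical error rate."""

    def __init__(self, window=10, radius=0.5, err_threshold=0.5):
        self.window = window                 # length of each consistency record
        self.radius = radius                 # neighborhood used for record updates
        self.err_threshold = err_threshold   # deactivate above, re-activate at or below
        self.exemplars = []                  # each: {x, y, record, active}

    @staticmethod
    def _dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

    def learn(self, x, y):
        # Update the consistency record of every stored exemplar (active
        # or inactive) near the new point, then switch its context based
        # on its recent error rate; poor exemplars are cached, not deleted.
        for ex in self.exemplars:
            if self._dist(ex["x"], x) <= self.radius:
                ex["record"].append(1 if ex["y"] == y else 0)
                err = 1.0 - sum(ex["record"]) / len(ex["record"])
                ex["active"] = err <= self.err_threshold
        # Store the new point itself as an active exemplar.
        self.exemplars.append(
            {"x": x, "y": y, "record": deque(maxlen=self.window), "active": True}
        )

    def predict(self, x):
        # Plain 1-NN lookup restricted to the currently active exemplars.
        active = [ex for ex in self.exemplars if ex["active"]]
        if not active:
            return None
        return min(active, key=lambda ex: self._dist(ex["x"], x))["y"]

# Toy run: the target concept flips at t = 200 (concept shift), so
# exemplars from the old concept accumulate errors and are deactivated.
random.seed(0)
learner = PECSNearestNeighbor()
for t in range(400):
    x = (random.random(), random.random())
    label = int(x[0] > 0.5) if t < 200 else int(x[0] <= 0.5)
    learner.learn(x, label)
print(learner.predict((0.9, 0.5)))  # -> 0, reflecting the post-shift concept
```

In this toy run, the label flip at t = 200 drives pre-shift exemplars inactive; because they are cached rather than deleted, they could be re-activated if the earlier concept recurred, which is the property that lets a PECS-style learner retain asymptotic accuracy in static tasks.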
Pages: 133-155
Page count: 22