Learning in the presence of concept drift and hidden contexts

Cited by: 430
Authors
Widmer, G
Kubat, M
Affiliations
[1] AUSTRIAN RES INST ARTIFICIAL INTELLIGENCE, A-1010 VIENNA, AUSTRIA
[2] UNIV OTTAWA, DEPT COMP SCI, OTTAWA, ON K1N 6N5, CANADA
Keywords
incremental concept learning; on-line learning; context dependence; concept drift; forgetting;
DOI
10.1007/BF00116900
CLC classification
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405
Abstract
On-line learning in domains where the target concept depends on some hidden context poses serious problems. A changing context can induce changes in the target concepts, producing what is known as concept drift. We describe a family of learning algorithms that flexibly react to concept drift and can take advantage of situations where contexts reappear. The general approach underlying all these algorithms consists of (1) keeping only a window of currently trusted examples and hypotheses; (2) storing concept descriptions and reusing them when a previous context re-appears; and (3) controlling both of these functions by a heuristic that constantly monitors the system's behavior. The paper reports on experiments that test the systems' performance under various conditions such as different levels of noise and different extent and rate of concept drift.
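The three mechanisms named in the abstract can be illustrated with a minimal sketch. This is not the authors' FLORA algorithm: it is a drastically simplified assumption-laden toy in which the "concept description" is just the majority label inside the trusted window, the monitoring heuristic is a recent-accuracy check, and drift is handled by shrinking the window and archiving the old description. The class name, thresholds, and window sizes are all hypothetical choices for illustration.

```python
from collections import deque


class WindowedDriftLearner:
    """Toy sketch of a window-based drift learner (NOT the FLORA system).

    Simplifying assumption: the "concept description" is only the majority
    label over the trusted window; real systems maintain structured rule
    descriptions. Parameter values are arbitrary illustrative defaults.
    """

    def __init__(self, max_window=50, drop_threshold=0.6):
        self.window = deque()                    # (1) currently trusted examples
        self.max_window = max_window
        self.drop_threshold = drop_threshold
        self.stored_concepts = []                # (2) descriptions kept for reuse
        self.recent_correct = deque(maxlen=10)   # (3) monitor of recent behavior

    def predict(self, x=None):
        # Majority label over the trusted window; default 0 when empty.
        if not self.window:
            return 0
        labels = [y for _, y in self.window]
        return max(set(labels), key=labels.count)

    def learn(self, x, y):
        # Record whether the current hypothesis predicted this example.
        self.recent_correct.append(self.predict(x) == y)
        self.window.append((x, y))
        if len(self.window) > self.max_window:
            self.window.popleft()
        # Heuristic trigger: a sharp drop in recent accuracy suggests drift,
        # so archive the current description and shrink the trusted window.
        if len(self.recent_correct) == self.recent_correct.maxlen:
            acc = sum(self.recent_correct) / len(self.recent_correct)
            if acc < self.drop_threshold:
                self.stored_concepts.append(self.predict())
                for _ in range(len(self.window) // 2):
                    self.window.popleft()
                self.recent_correct.clear()
```

Feeding this learner a stream whose labeling flips midway shows the intended behavior: the accuracy monitor fires shortly after the flip, the stale half of the window is discarded, and the old concept description is archived for potential reuse if that context reappears.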
Pages: 69-101 (33 pages)