A general framework for mining massive data streams

Cited by: 90
Authors
Domingos, P [1]
Hulten, G [1]
Affiliations
[1] Univ Washington, Dept Comp Sci & Engn, Seattle, WA 98195 USA
Keywords
data mining; Hoeffding bounds; machine learning; scalability; subsampling
DOI
10.1198/1061860032544
Chinese Library Classification
O21 [Probability Theory and Mathematical Statistics]; C8 [Statistics]
Discipline Classification Codes
020208; 070103; 0714
Abstract
In many domains, data now arrive faster than we are able to mine them. To avoid wasting these data, we must switch from the traditional "one-shot" data mining approach to systems that are able to mine continuous, high-volume, open-ended data streams as they arrive. In this article we identify some desiderata for such systems, and outline our framework for realizing them. A key property of our approach is that it minimizes the time required to build a model on a stream while guaranteeing (as long as the data are iid) that the model learned is effectively indistinguishable from the one that would be obtained using infinite data. Using this framework, we have successfully adapted several learning algorithms to massive data streams, including decision tree induction, Bayesian network learning, k-means clustering, and the EM algorithm for mixtures of Gaussians. These algorithms are able to process on the order of billions of examples per day using off-the-shelf hardware. Building on this, we are currently developing software primitives for scaling arbitrary learning algorithms to massive data streams with minimal effort.
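The "effectively indistinguishable from infinite data" guarantee in the abstract rests on Hoeffding bounds: after n iid observations of a statistic with range R, the true mean lies within epsilon = sqrt(R^2 ln(1/delta) / 2n) of the observed mean with probability at least 1 - delta. A minimal sketch of that stopping rule follows; the function names and the numbers in the usage note are illustrative assumptions, not code from the paper.

```python
import math

def hoeffding_bound(value_range, delta, n):
    """Hoeffding epsilon: with probability at least 1 - delta, the true
    mean of a statistic with range `value_range` lies within epsilon of
    the average of n iid observations."""
    return math.sqrt((value_range ** 2) * math.log(1.0 / delta) / (2.0 * n))

def enough_examples(score_best, score_second, value_range, delta, n):
    """Stopping rule in the style of Hoeffding-tree learners: once the
    observed advantage of the best candidate over the runner-up exceeds
    epsilon, the same candidate would (with probability 1 - delta) also
    win on infinite data, so the learner can commit and move on."""
    return (score_best - score_second) > hoeffding_bound(value_range, delta, n)
```

For example, with a score in [0, 1] (range 1), delta = 1e-7, and an observed gap of 0.10 between the top two candidates, `enough_examples(0.30, 0.20, 1.0, 1e-7, 10000)` returns True (epsilon is about 0.028), while the same gap after only 100 examples does not yet justify committing. This is how such systems bound the number of examples scanned per decision rather than revisiting the whole stream.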
Pages: 945-949
Number of pages: 5