An Efficient Global K-means Clustering Algorithm

Cited by: 74
Authors
Xie, Juanying [1 ,2 ]
Jiang, Shuai [2 ]
Xie, Weixin [1 ,3 ,4 ]
Gao, Xinbo [5 ,6 ]
Affiliations
[1] Xidian Univ, Sch Elect Engn, Xian 710071, Shaanxi, Peoples R China
[2] Shaanxi Normal Univ, Sch Comp Sci, Xian 710062, Shaanxi, Peoples R China
[3] Shenzhen Univ, Natl Lab Automat Target Recognit ATR, Shenzhen 518001, Peoples R China
[4] Shenzhen Univ, Coll Informat Engn, Shenzhen 518001, Peoples R China
[5] Xidian Univ, Sch Elect Engn, VIPS Lab, Xian 710071, Peoples R China
[6] Xidian Univ, Minist Educ China, Key Lab Intelligent Percept & Image Understanding, Xian 710071, Peoples R China
Keywords
clustering; K-means clustering; global K-means clustering; machine learning; pattern recognition; data mining; non-smooth optimization;
DOI
10.4304/jcp.6.2.271-279
CLC Number
TP39 [Computer Applications]
Subject Classification Code
081203; 0835
Abstract
K-means clustering is a popular partition-based clustering algorithm. However, it suffers from several shortcomings: the number of clusters must be specified by the user in advance, the result is sensitive to the initial conditions, and the algorithm is easily trapped in a local solution. The global K-means algorithm proposed by Likas et al. is an incremental clustering approach that dynamically adds one cluster center at a time through a deterministic global search procedure consisting of N runs of the K-means algorithm from suitable initial positions, where N is the size of the data set. It does not depend on any initial conditions or parameters and considerably outperforms the K-means algorithm, but it carries a heavy computational load. In this paper, we propose a new version of the global K-means algorithm, an efficient global K-means clustering algorithm, whose outstanding feature is its superiority in execution time: it requires less run time than the available global K-means algorithms. We modify the way the optimal initial center of the next new cluster is found by defining a new function as the criterion for selecting the optimal candidate center for the next cluster, an idea inspired by Park and Jun's K-medoids clustering algorithm. The best candidate initial center for the next cluster is chosen by evaluating this new function, which uses information about the natural distribution of the data, so that the selected initial center is a point that not only lies in a high-density region but is also far from the existing cluster centers. Experiments on fourteen well-known data sets from the UCI machine learning repository show that the new algorithm significantly reduces the computational time without degrading the performance of the global K-means algorithm. Further experiments demonstrate that the improved algorithm greatly outperforms the global K-means algorithm and is suitable for clustering large data sets. Experiments on a colon cancer tissue data set reveal that the new algorithm can efficiently handle high-dimensional gene expression data, and results on synthetic data sets with different proportions of noisy data points show that it effectively avoids the influence of noise on the clustering results.
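The abstract does not give the exact form of the selection criterion, so the following is only a minimal sketch of the general idea under stated assumptions: the density measure is approximated in the spirit of Park and Jun's K-medoids initialization (the reciprocal of a point's total distance to all other points), it is combined with the point's distance to its nearest existing center by a simple product, and the point maximizing that product seeds the next cluster. The names kmeans, next_center_candidate, and efficient_global_kmeans are hypothetical and not taken from the paper.

import numpy as np

def kmeans(X, centers, n_iter=100, tol=1e-6):
    # Plain Lloyd iterations started from the given initial centers.
    centers = centers.copy()
    for _ in range(n_iter):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        new_centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
                                for j in range(len(centers))])
        if np.linalg.norm(new_centers - centers) < tol:
            centers = new_centers
            break
        centers = new_centers
    # Final assignment and sum of squared errors for the returned centers.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    labels = d2.argmin(axis=1)
    error = d2.min(axis=1).sum()
    return centers, labels, error

def next_center_candidate(X, centers):
    # Assumed criterion (not from the paper): prefer points that lie in a
    # dense region AND are far from every existing cluster center.
    pairwise = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    density = 1.0 / (pairwise.sum(axis=1) + 1e-12)   # Park-and-Jun-style density proxy
    separation = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2).min(axis=1)
    return X[np.argmax(density * separation)]

def efficient_global_kmeans(X, K):
    # Add one center at a time; each new center is seeded with a single best
    # candidate instead of trying all N data points, which is where the
    # speed-up over the original global K-means search would come from.
    centers, labels, error = kmeans(X, X.mean(axis=0, keepdims=True))
    for _ in range(2, K + 1):
        candidate = next_center_candidate(X, centers)
        centers, labels, error = kmeans(X, np.vstack([centers, candidate]))
    return centers, labels, error

# Example usage: centers, labels, sse = efficient_global_kmeans(np.random.rand(200, 2), K=5)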
Pages: 271-279
Number of pages: 9
Related References
18 items in total
[1]  
Jain A.K., Murty M.N., Flynn P.J., Data clustering: A review, ACM Computing Surveys, 31, pp. 264-323, (1999)
[2]  
Everitt B., Landau S., Leese M., Cluster Analysis, (2001)
[3]  
Theodoridis S., Koutroumbas K., Pattern Recognition, (2003)
[4]  
Kanungo T., Mount D.M., Netanyahu N.S., Piatko C.D., Silverman R., Wu A.Y., An efficient k-means clustering algorithm: Analysis and implementation, IEEE Trans. PAMI, 24, pp. 881-892, (2002)
[5]  
Pena J.M., Lozano J.A., Larranaga P., An empirical comparison of four initialization methods for the k-means algorithm, Pattern Recognition Letters, 20, pp. 1027-1040, (1999)
[6]  
Bradley P.S., Fayyad U.M., Refining initial points for k-means clustering, Proceedings of the Fifteenth International Conference On Machine Learning, pp. 91-99, (1998)
[7]  
Huang Z., Clustering large data sets with mixed numerical and categorical values, Proceedings of the First Pacific-Asia Knowledge Discovery and Data Mining Conference, pp. 21-34, (1997)
[8]  
Sun Y., Zhu Q.M., Chen Z.X., An iterative initial-points refinement algorithm for categorical data clustering, Pattern Recognition Letters, 23, pp. 875-884, (2002)
[9]  
Strehl A., Ghosh J., Cluster ensembles - a knowledge reuse framework for combining multiple partitions, Journal of Machine Learning Research, 3, pp. 583-617, (2002)
[10]  
Likas A., Vlassis N., Verbeek J.J., The global k-means clustering algorithm, Pattern Recognition, 36, pp. 451-461, (2003)