K-nearest neighbor finding using MaxNearestDist

Cited: 99
Authors
Samet, Hanan [1 ]
Affiliation
[1] Univ Maryland, Inst Adv Comp Studies, Ctr Automat Res, Dept Comp Sci, College Pk, MD 20742 USA
Funding
U.S. National Science Foundation;
Keywords
k-nearest neighbors; similarity searching; metric spaces; depth-first nearest neighbor finding; best-first nearest neighbor finding;
DOI
10.1109/TPAMI.2007.1182
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Similarity searching often reduces to finding the k nearest neighbors of a query object. The k nearest neighbors are found by applying either a depth-first or a best-first algorithm to the search hierarchy containing the data. These algorithms are generally applicable to any index based on hierarchical clustering: the data is partitioned into clusters that are aggregated to form other clusters, with the total aggregation represented as a tree. These algorithms have traditionally used a lower bound, the minimum distance at which a nearest neighbor can be found (termed MINDIST), to prune the search by avoiding the processing of clusters, as well as individual objects, that can be shown to be farther from the query object q than all of the current k nearest neighbors of q. An alternative pruning technique is described that uses an upper bound, the maximum possible distance at which a nearest neighbor is guaranteed to be found (termed MAXNEARESTDIST). The MAXNEARESTDIST upper bound is adapted to enable its use for finding the k nearest neighbors instead of just the nearest neighbor (that is, k = 1), as in its previous uses. Both the depth-first and best-first k-nearest neighbor algorithms are modified to use MAXNEARESTDIST, which is shown to enhance both algorithms by overcoming their shortcomings. In particular, for the depth-first algorithm, the number of clusters in the search hierarchy that must be examined is not increased, thereby potentially lowering its execution time. For the best-first algorithm, the number of clusters that must be retained in the priority queue controlling the order in which clusters are processed is likewise not increased, thereby potentially lowering its storage requirements.
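To make the two bounds concrete, the following is a minimal, hedged sketch of a best-first k-nearest-neighbor search over a hypothetical "ball" cluster hierarchy in the plane. It is not the paper's exact algorithm: the `Cluster`, `mindist`, and `maxnearestdist` names and the center-plus-radius upper bound are illustrative assumptions, and the sketch applies the MAXNEARESTDIST tightening only for k = 1 (the paper's k > 1 adaptation, which folds such bounds into the candidate set, is omitted for brevity).

```python
import heapq
import math

# Hypothetical ball-shaped cluster: a center/radius pair; leaves also
# hold the actual data points.
class Cluster:
    def __init__(self, center, radius, children=(), points=()):
        self.center = center
        self.radius = radius
        self.children = list(children)
        self.points = list(points)

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def mindist(q, c):
    # Lower bound (MINDIST): no object in the ball can be closer to q.
    return max(0.0, dist(q, c.center) - c.radius)

def maxnearestdist(q, c):
    # Upper bound: a nonempty ball is guaranteed to contain at least one
    # object no farther from q than this (a loose form of MAXNEARESTDIST).
    return dist(q, c.center) + c.radius

def knn_best_first(root, q, k):
    best = []                    # max-heap of (-d, point): k closest so far
    # For k = 1, the root's MAXNEARESTDIST gives an immediate pruning bound
    # (assumes the hierarchy is nonempty).
    d_k = maxnearestdist(q, root) if k == 1 else math.inf
    queue = [(mindist(q, root), 0, root)]   # min-heap ordered by MINDIST
    tie = 1                      # tiebreaker so clusters never compare
    while queue:
        lo, _, node = heapq.heappop(queue)
        if lo > d_k:             # nothing left can beat the k-th best
            break
        for p in node.points:
            d = dist(q, p)
            if len(best) < k:
                heapq.heappush(best, (-d, p))
            elif d < -best[0][0]:
                heapq.heapreplace(best, (-d, p))
            if len(best) == k:
                d_k = min(d_k, -best[0][0])
        for child in node.children:
            if k == 1:           # tighten the bound before descending
                d_k = min(d_k, maxnearestdist(q, child))
            lo_c = mindist(q, child)
            if lo_c <= d_k:      # MINDIST pruning against the bound
                heapq.heappush(queue, (lo_c, tie, child))
                tie += 1
    return sorted((-nd, p) for nd, p in best)
```

With a small two-leaf hierarchy, a query near one leaf only ever processes that leaf: the other leaf's MINDIST exceeds the running bound and the loop terminates, illustrating how the upper bound lets pruning begin before any object has been examined.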
Pages: 243-252
Page count: 10