The Google similarity distance

Cited by: 874
Authors
Cilibrasi, Rudi L. [1 ]
Vitanyi, Paul M. B. [1 ]
Affiliation
[1] Ctr Wiskunde & Informat, NL-1098 SJ Amsterdam, Netherlands
Keywords
accuracy comparison with WordNet categories; automatic classification and clustering; automatic meaning discovery using Google; automatic relative semantics; automatic translation; dissimilarity semantic distance; Google search; Google distribution via page hit counts; Google code; Kolmogorov complexity; normalized compression distance (NCD); normalized information distance (NID); normalized Google distance (NGD); meaning of words and phrases extracted from the Web; parameter-free data mining; universal similarity metric;
DOI
10.1109/TKDE.2007.48
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Words and phrases acquire meaning from the way they are used in society, from their relative semantics to other words and phrases. For computers, the equivalent of "society" is "database," and the equivalent of "use" is "a way to search the database." We present a new theory of similarity between words and phrases based on information distance and Kolmogorov complexity. To fix thoughts, we use the World Wide Web (WWW) as the database and Google as the search engine. The method is also applicable to other search engines and databases. This theory is then applied to construct a method to automatically extract the similarity, the Google similarity distance, of words and phrases from the WWW using Google page counts. The WWW is the largest database on earth, and the context information entered by millions of independent users averages out to provide automatic semantics of useful quality. We give applications in hierarchical clustering, classification, and language translation. We give examples that distinguish between colors and numbers, cluster names of paintings by 17th-century Dutch masters and names of books by English novelists, demonstrate the ability to understand emergencies and primes, and perform a simple automatic English-Spanish translation. Finally, we use the WordNet database as an objective baseline against which to judge the performance of our method. We conduct a massive randomized trial in binary classification using support vector machines to learn categories based on our Google distance, resulting in a mean agreement of 87 percent with the expert-crafted WordNet categories.
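The page-count method the abstract describes reduces to the normalized Google distance (NGD) formula defined in the paper, computed from three hit counts and the index size. A minimal sketch in Python; the hit counts in the usage example are hypothetical, chosen only to illustrate the computation:

```python
import math

def ngd(fx: float, fy: float, fxy: float, n: float) -> float:
    """Normalized Google distance from search-engine page counts.

    fx, fy -- number of pages containing term x (resp. y) alone
    fxy    -- number of pages containing both x and y
    n      -- total number of pages indexed (or a proxy for it)
    """
    lx, ly, lxy, ln = (math.log(v) for v in (fx, fy, fxy, n))
    return (max(lx, ly) - lxy) / (ln - min(lx, ly))

# Hypothetical counts: terms co-occurring often are close (small NGD).
print(ngd(100_000, 50_000, 20_000, 10**10))  # ≈ 0.13
```

Identical terms give distance 0, since then max(lx, ly) equals lxy; terms that never co-occur yield large (even unbounded) values, matching the paper's characterization of NGD as a dissimilarity score rather than a strict metric.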
Pages: 370-383
Page count: 14