Towards a theoretical foundation for Laplacian-based manifold methods

Cited by: 124
Authors
Belkin, M [1]
Niyogi, P [1]
Affiliation
[1] Univ Chicago, Dept Comp Sci, Chicago, IL 60637 USA
Source
LEARNING THEORY, PROCEEDINGS | 2005 / Vol. 3559
DOI
10.1007/11503415_33
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In recent years, manifold methods have attracted considerable attention in machine learning. However, most algorithms in this class may be termed "manifold-motivated," as they lack explicit theoretical guarantees. In this paper we take a step towards closing the gap between theory and practice for a class of Laplacian-based manifold methods. We show that under certain conditions the graph Laplacian of a point cloud converges to the Laplace-Beltrami operator on the underlying manifold. Theorem 1 contains the first result showing convergence of a random graph Laplacian to the manifold Laplacian in the machine learning context.
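The object the abstract refers to can be made concrete: from a point cloud sampled off a manifold, one builds a weighted graph with Gaussian (heat-kernel) edge weights and forms the unnormalized graph Laplacian L = D - W. A minimal sketch follows; the bandwidth choice `t` and the circle example are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def graph_laplacian(points, t=0.1):
    """Unnormalized graph Laplacian L = D - W with heat-kernel weights.

    This is the standard construction studied in Laplacian-based manifold
    methods; the bandwidth t is a hypothetical choice for illustration.
    """
    diffs = points[:, None, :] - points[None, :, :]
    sq_dists = (diffs ** 2).sum(axis=-1)        # pairwise squared distances
    W = np.exp(-sq_dists / (4.0 * t))           # Gaussian edge weights
    np.fill_diagonal(W, 0.0)                    # no self-loops
    D = np.diag(W.sum(axis=1))                  # degree matrix
    return D - W

# Example point cloud: samples from a circle, a 1-D manifold embedded in R^2.
rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, size=200)
X = np.column_stack([np.cos(theta), np.sin(theta)])
L = graph_laplacian(X)
```

By construction L is symmetric and annihilates constant functions (each row sums to zero), mirroring the fact that the Laplace-Beltrami operator kills constants; the paper's result concerns the convergence of (a rescaled) L applied to smooth functions as the sample size grows and t shrinks.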
Pages: 486-500
Page count: 15