Generalized Transfer Subspace Learning Through Low-Rank Constraint

Cited: 16
Authors
Ming Shao
Dmitry Kit
Yun Fu
Source
International Journal of Computer Vision | 2014, Vol. 109
Keywords
Transfer learning; Domain adaptation; Low-rank constraint; Subspace learning
DOI
Not available
Abstract
It is expensive to obtain labeled real-world visual data for training supervised algorithms. It is therefore valuable to leverage existing databases of labeled data. However, the data in these source databases are often obtained under conditions that differ from those of the new task. Transfer learning provides techniques for transferring learned knowledge from a source domain to a target domain by finding a mapping between them. In this paper, we discuss a method that projects both source and target data into a generalized subspace in which each target sample can be represented by some combination of source samples. By imposing a low-rank constraint during this transfer, the structures of the source and target domains are preserved. This approach has three benefits. First, good alignment between the domains is ensured, because only the relevant data in some subspace of the source domain are used to reconstruct the data in the target domain. Second, the discriminative power of the source domain is naturally passed on to the target domain. Third, noisy information is filtered out during knowledge transfer. Extensive experiments on synthetic data and on important computer vision problems, such as face recognition and visual domain adaptation for object recognition, demonstrate the superiority of the proposed approach over existing, well-established methods.
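The core reconstruction idea in the abstract, representing each target sample as a combination of source samples under a low-rank constraint, can be sketched as the convex relaxation min_Z ½‖Xt − Xs·Z‖²_F + τ‖Z‖_* solved by proximal gradient with singular value thresholding. This is a minimal illustrative sketch, not the paper's full algorithm (it omits the learned subspace projection and noise term); the function names and hyperparameters below are assumptions for demonstration only.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the proximal operator of the
    nuclear norm, which promotes a low-rank solution."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s = np.maximum(s - tau, 0.0)          # shrink singular values
    return (U * s) @ Vt

def low_rank_alignment(Xs, Xt, n_iter=200, lr=1e-2, tau=1e-2):
    """Toy sketch: find coefficients Z so that target samples (columns
    of Xt) are approximately low-rank combinations of source samples
    (columns of Xs), via proximal gradient descent on
        0.5 * ||Xt - Xs @ Z||_F^2 + tau * ||Z||_*.
    Hyperparameters are illustrative; lr must be below 1/||Xs||_2^2."""
    Z = np.zeros((Xs.shape[1], Xt.shape[1]))
    for _ in range(n_iter):
        grad = Xs.T @ (Xs @ Z - Xt)       # gradient of the fit term
        Z = svt(Z - lr * grad, lr * tau)  # gradient step, then SVT prox
    return Z
```

On synthetic data where the target is an exact low-rank combination of the source, the recovered Z reconstructs the target closely, which mirrors the abstract's claim that only relevant source structure is used in the reconstruction.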
Pages: 74–93 (19 pages)