DeepCD: Learning Deep Complementary Descriptors for Patch Representations

Cited by: 31
Authors
Yang, Tsun-Yi [1 ,2 ]
Hsu, Jo-Han [1 ,2 ]
Lin, Yen-Yu [1 ]
Chuang, Yung-Yu [2 ]
Affiliations
[1] Acad Sinica, Taipei, Taiwan
[2] Natl Taiwan Univ, Taipei, Taiwan
Source
2017 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV) | 2017年
Keywords
DOI
10.1109/ICCV.2017.359
CLC Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
This paper presents the DeepCD framework, which employs deep learning to jointly learn a pair of complementary descriptors for image patch representation. This is achieved by taking any descriptor learning architecture for learning a leading descriptor and augmenting it with an additional network stream for learning a complementary descriptor. To enforce the complementary property, a new network layer, called the data-dependent modulation (DDM) layer, is introduced; it adaptively trains the augmented stream with emphasis on the training data that are not well handled by the leading stream. By optimizing the proposed joint loss function with late fusion, the obtained descriptors are complementary to each other, and their fusion improves performance. Experiments on several problems and datasets show that the proposed method(1) is simple yet effective, outperforming state-of-the-art methods.
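The abstract describes fusing the two descriptors at matching time (late fusion) rather than concatenating them. As a minimal, hypothetical sketch of one such late-fusion matching score (the product fusion rule, the choice of a real-valued leading descriptor with a binary complementary descriptor, and the function name `fused_distance` are assumptions for illustration, not taken verbatim from the paper):

```python
import numpy as np

def fused_distance(lead_a, lead_b, comp_a, comp_b):
    """Late-fusion matching score for a patch pair.

    lead_a, lead_b: real-valued leading descriptors (e.g. float vectors).
    comp_a, comp_b: binary complementary descriptors (0/1 vectors).

    The score is the product of the leading descriptor's L2 distance and
    the complementary descriptor's Hamming distance, so a pair is judged
    similar only when BOTH descriptors agree that it is close.
    """
    d_lead = np.linalg.norm(np.asarray(lead_a, float) - np.asarray(lead_b, float))
    d_comp = np.count_nonzero(np.asarray(comp_a) != np.asarray(comp_b))
    return d_lead * d_comp

# Example: a pair that disagrees in both descriptors gets a large score,
# while an identical pair scores exactly zero.
far = fused_distance(np.array([1.0, 0.0]), np.array([0.0, 1.0]),
                     np.array([1, 0, 1]), np.array([1, 1, 0]))
near = fused_distance(np.array([0.5, 0.5]), np.array([0.5, 0.5]),
                      np.array([1, 0, 1]), np.array([1, 0, 1]))
```

One property of product fusion worth noting: either descriptor can veto a match on its own (a zero distance from one stream zeroes the fused score), which is consistent with the two streams being trained to carry complementary rather than redundant information.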
Pages
3334-3342 (9 pages)