Image-to-Image Translation via Group-wise Deep Whitening-and-Coloring Transformation

Cited by: 104
Authors
Cho, Wonwoong [1 ]
Choi, Sungha [1 ,2 ]
Park, David Keetae [1 ]
Shin, Inkyu [3 ]
Choo, Jaegul [1 ]
Affiliations
[1] Korea Univ, Seoul, South Korea
[2] LG Elect, Seoul, South Korea
[3] Hanyang Univ, Seoul, South Korea
Source
2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019) | 2019
Funding
National Research Foundation, Singapore
DOI
10.1109/CVPR.2019.01089
CLC Number
TP18 [Artificial Intelligence Theory]
Subject Classification
140502 [Artificial Intelligence]
Abstract
Recently, unsupervised exemplar-based image-to-image translation, conditioned on a given exemplar without paired data, has achieved substantial advances. To transfer information from an exemplar to an input image, existing methods often use a normalization technique, e.g., adaptive instance normalization, that controls the channel-wise statistics of an input activation map at a particular layer, such as the mean and the variance. Meanwhile, style transfer approaches, which address a task similar in nature to image translation, have demonstrated superior performance by using higher-order statistics, such as the covariance among channels, to represent a style. In detail, this works via whitening (given a zero-mean input feature, transforming its covariance matrix into the identity), followed by coloring (changing the covariance matrix of the whitened feature to that of the style feature). However, applying this approach to image translation is computationally intensive and error-prone due to its expensive time complexity and non-trivial backpropagation. In response, this paper proposes an end-to-end approach tailored for image translation that efficiently approximates this transformation with our novel regularization methods. We further extend our approach to a group-wise form for memory and time efficiency as well as image quality. Extensive qualitative and quantitative experiments demonstrate that our proposed method is fast, both in training and inference, and highly effective in reflecting the style of an exemplar.
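The whitening-and-coloring transformation described above can be illustrated with a minimal NumPy sketch. This is a plain eigendecomposition-based WCT on a (channels, pixels) feature matrix, not the paper's learned, group-wise approximation; the function name and eigendecomposition route are illustrative assumptions.

```python
import numpy as np

def whitening_coloring(content_feat, style_feat, eps=1e-5):
    """Illustrative whitening-and-coloring transform on feature
    matrices of shape (C, N), where N = H * W spatial positions.
    Not the paper's end-to-end group-wise approximation."""
    # Center both features channel-wise (zero mean per channel).
    c = content_feat - content_feat.mean(axis=1, keepdims=True)
    s = style_feat - style_feat.mean(axis=1, keepdims=True)

    # Whitening: map the content covariance to the identity.
    cov_c = c @ c.T / (c.shape[1] - 1) + eps * np.eye(c.shape[0])
    vals_c, vecs_c = np.linalg.eigh(cov_c)
    whiten = vecs_c @ np.diag(vals_c ** -0.5) @ vecs_c.T
    whitened = whiten @ c

    # Coloring: give the whitened feature the style covariance.
    cov_s = s @ s.T / (s.shape[1] - 1) + eps * np.eye(s.shape[0])
    vals_s, vecs_s = np.linalg.eigh(cov_s)
    color = vecs_s @ np.diag(vals_s ** 0.5) @ vecs_s.T

    # Re-add the style mean so first- and second-order statistics match.
    return color @ whitened + style_feat.mean(axis=1, keepdims=True)
```

The eigendecomposition (and its backpropagation) is exactly the expensive, numerically delicate step the abstract refers to, which motivates the paper's efficient learned approximation.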
Pages: 10631-10639 (9 pages)