StackGAN: Text to Photo-realistic Image Synthesis with Stacked Generative Adversarial Networks

Cited by: 1788
Authors
Zhang, Han [1 ]
Xu, Tao [2 ]
Li, Hongsheng [3 ]
Zhang, Shaoting [4 ]
Wang, Xiaogang [3 ]
Huang, Xiaolei [2 ]
Metaxas, Dimitris [1 ]
Affiliations
[1] Rutgers State Univ, New Brunswick, NJ 08901 USA
[2] Lehigh Univ, Bethlehem, PA USA
[3] Chinese Univ Hong Kong, Hong Kong, Hong Kong, Peoples R China
[4] Baidu Res, Sunnyvale, CA USA
Source
2017 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV) | 2017
DOI
10.1109/ICCV.2017.629
CLC Number
TP18 [Artificial Intelligence Theory]
Subject Classification Code
140502 [Artificial Intelligence]
Abstract
Synthesizing high-quality images from text descriptions is a challenging problem in computer vision and has many practical applications. Samples generated by existing text-to-image approaches can roughly reflect the meaning of the given descriptions, but they fail to contain necessary details and vivid object parts. In this paper, we propose Stacked Generative Adversarial Networks (StackGAN) to generate 256x256 photo-realistic images conditioned on text descriptions. We decompose the hard problem into more manageable sub-problems through a sketch-refinement process. The Stage-I GAN sketches the primitive shape and colors of the object based on the given text description, yielding Stage-I low-resolution images. The Stage-II GAN takes Stage-I results and text descriptions as inputs, and generates high-resolution images with photo-realistic details. It is able to rectify defects in Stage-I results and add compelling details with the refinement process. To improve the diversity of the synthesized images and stabilize the training of the conditional GAN, we introduce a novel Conditioning Augmentation technique that encourages smoothness in the latent conditioning manifold. Extensive experiments and comparisons with state-of-the-art methods on benchmark datasets demonstrate that the proposed method achieves significant improvements in generating photo-realistic images conditioned on text descriptions.
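The Conditioning Augmentation technique described in the abstract can be sketched as follows: instead of feeding the text embedding to the generator directly, the model samples a conditioning vector from a Gaussian whose mean and (diagonal) variance are learned functions of the embedding, with a KL-divergence penalty against the standard normal to keep the conditioning manifold smooth. The numpy sketch below uses hypothetical linear projections `W_mu` and `W_logvar` to stand in for the paper's learned fully connected layers; the dimensions (1024-d embedding, 128-d conditioning vector) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def conditioning_augmentation(text_emb, W_mu, W_logvar):
    """Sample a conditioning vector c ~ N(mu(e), diag(sigma(e)^2)).

    W_mu and W_logvar are hypothetical linear projections standing in
    for the learned fully connected layers described in the paper.
    """
    mu = text_emb @ W_mu
    logvar = text_emb @ W_logvar
    eps = rng.standard_normal(mu.shape)
    # Reparameterization trick: sampling stays differentiable w.r.t. mu, logvar.
    c = mu + np.exp(0.5 * logvar) * eps
    # KL(N(mu, diag(sigma^2)) || N(0, I)) regularizer that encourages
    # smoothness in the latent conditioning manifold.
    kl = 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)
    return c, kl

emb_dim, cond_dim = 1024, 128          # illustrative sizes
e = rng.standard_normal(emb_dim)       # stand-in text embedding
W_mu = rng.standard_normal((emb_dim, cond_dim)) * 0.01
W_logvar = rng.standard_normal((emb_dim, cond_dim)) * 0.01
c, kl = conditioning_augmentation(e, W_mu, W_logvar)
print(c.shape, kl >= 0.0)
```

Because different noise draws `eps` yield different conditioning vectors for the same sentence, the generator sees many small perturbations of each text embedding, which is what improves sample diversity and stabilizes conditional GAN training.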
Pages: 5908-5916
Page count: 9