Naturalness-Aware Deep No-Reference Image Quality Assessment

Cited by: 96
Authors
Yan, Bo [1]
Bare, Bahetiyaer [1]
Tan, Weimin [1]
Affiliations
[1] Fudan Univ, Sch Comp Sci, Shanghai Key Lab Intelligent Informat Proc, Shanghai 200433, Peoples R China
Keywords
No-reference image quality assessment; natural scene statistics; multi-task learning; naturalness-aware deep image quality assessment; free-energy principle; structural similarity; database
DOI
10.1109/TMM.2019.2904879
CLC number
TP [automation technology, computer technology]
Discipline code
0812
Abstract
No-reference image quality assessment (NR-IQA) is a non-trivial task because a pristine counterpart is rarely available for an image in real applications such as image selection and high-quality image recommendation. In recent years, deep learning-based NR-IQA methods have emerged and achieved better performance than previous methods. In this paper, we present a novel deep neural network-based multi-task learning approach for NR-IQA. Our network is designed in a multi-task manner and consists of two tasks: a natural scene statistics (NSS) feature prediction task and a quality score prediction task. NSS feature prediction is an auxiliary task that helps the quality score prediction task learn a better mapping between the input image and its quality score. The main contribution of this work is to integrate the NSS feature prediction task into the deep learning-based quality prediction task to improve representation and generalization ability; to the best of our knowledge, this is the first such attempt. We conduct same-database and cross-database validation experiments on the LIVE, TID2013, CSIQ, LIVE multiply distorted image quality (LIVE MD), CID2013, and LIVE in the wild image quality challenge (LIVE Challenge) databases to verify the superiority and generalization ability of the proposed method. Experimental results confirm the superior performance of our method in same-database validation; in particular, it achieves 0.984 and 0.986 on the LIVE image quality assessment database in terms of the Pearson linear correlation coefficient (PLCC) and the Spearman rank-order correlation coefficient (SROCC), respectively. Cross-database validation results further verify the strong generalization ability of our method; specifically, it gains improvements of up to 21.8% on unseen distortion types.
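The two-task design described in the abstract can be sketched as a shared backbone feeding two regression heads whose losses are combined into one joint objective. This is a minimal numpy illustration of that idea, not the paper's actual network: the layer sizes, the 36-dimensional NSS target (a BRISQUE-style feature vector is assumed), and the auxiliary loss weight `lam` are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Shared backbone: a single dense layer standing in for the conv stack.
W_shared = rng.standard_normal((128, 64)) * 0.1
# Two task heads on top of the shared representation.
W_nss = rng.standard_normal((64, 36)) * 0.1   # auxiliary: NSS feature head
W_q = rng.standard_normal((64, 1)) * 0.1      # main: quality score head

def forward(x):
    h = relu(x @ W_shared)           # shared representation
    return h @ W_nss, h @ W_q        # (predicted NSS features, quality score)

def multitask_loss(x, nss_target, q_target, lam=0.5):
    nss_pred, q_pred = forward(x)
    loss_nss = np.mean((nss_pred - nss_target) ** 2)  # auxiliary task loss
    loss_q = np.mean((q_pred - q_target) ** 2)        # main task loss
    return loss_q + lam * loss_nss                    # joint objective

x = rng.standard_normal((8, 128))        # batch of image features
nss_t = rng.standard_normal((8, 36))     # precomputed NSS targets
q_t = rng.uniform(0, 100, (8, 1))        # subjective quality scores
loss = multitask_loss(x, nss_t, q_t)
```

Minimizing the joint objective forces the shared representation to encode naturalness statistics as well as quality, which is the mechanism the abstract credits for the improved generalization.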
Pages: 2603-2615
Page count: 13