Saliency-enhanced two-stream convolutional network for no-reference image quality assessment
Huanhuan Ma, Ziguan Cui, Zongliang Gan, Guijin Tang, Feng Liu
Abstract

We propose a saliency-enhanced two-stream convolutional network (SETNet) for no-reference image quality assessment. SETNet contains two subnetworks: an image stream and a saliency stream. The image stream focuses on the whole image content, while the saliency stream explicitly guides the network to learn spatially salient features that are more attractive to human observers. In addition, a spatial attention module and a dilated convolution-based channel attention module are employed to refine multi-level features in the spatial and channel dimensions. Finally, a fusion strategy is proposed to integrate the image-stream and saliency-stream features at each corresponding layer, and the final quality score is predicted from the multi-level fused features with a weighting strategy. Experimental results of the proposed method and several representative methods on four synthetic distortion datasets and two real distortion datasets show that SETNet achieves higher prediction accuracy and better generalization ability.
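The two-stream design summarized above can be illustrated with a minimal PyTorch sketch. The class names, layer widths, dilation rate, and the exact placement of the attention, fusion, and weighting operations are illustrative assumptions made for this sketch; they are not the authors' implementation, whose details are given in the full paper.

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    # Spatial attention: channel-pooled maps -> conv -> sigmoid gate (assumed form).
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)
        mx, _ = x.max(dim=1, keepdim=True)
        gate = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * gate

class DilatedChannelAttention(nn.Module):
    # Channel attention whose descriptor is gathered with a dilated depthwise conv.
    def __init__(self, channels, dilation=2, reduction=4):
        super().__init__()
        self.context = nn.Conv2d(channels, channels, kernel_size=3,
                                 padding=dilation, dilation=dilation, groups=channels)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        ctx = self.context(x).mean(dim=(2, 3))          # global channel descriptor
        w = self.fc(ctx).unsqueeze(-1).unsqueeze(-1)    # per-channel weights
        return x * w

class TwoStreamIQA(nn.Module):
    # Hypothetical two-stream model: one backbone sees the image, one the saliency
    # map. Corresponding stages are fused, refined by the attention modules, and
    # each level predicts a score; the final score is a learned weighted sum.
    def __init__(self, widths=(32, 64, 128)):
        super().__init__()
        def stage(cin, cout):
            return nn.Sequential(nn.Conv2d(cin, cout, 3, stride=2, padding=1),
                                 nn.ReLU(inplace=True))
        self.img_stages = nn.ModuleList([stage(3, widths[0]),
                                         stage(widths[0], widths[1]),
                                         stage(widths[1], widths[2])])
        self.sal_stages = nn.ModuleList([stage(1, widths[0]),
                                         stage(widths[0], widths[1]),
                                         stage(widths[1], widths[2])])
        self.sa = nn.ModuleList([SpatialAttention() for _ in widths])
        self.ca = nn.ModuleList([DilatedChannelAttention(w) for w in widths])
        self.heads = nn.ModuleList([nn.Linear(w, 1) for w in widths])
        self.level_weights = nn.Parameter(torch.ones(len(widths)) / len(widths))

    def forward(self, image, saliency):
        scores = []
        xi, xs = image, saliency
        for i, (fi, fs) in enumerate(zip(self.img_stages, self.sal_stages)):
            xi, xs = fi(xi), fs(xs)
            fused = self.ca[i](self.sa[i](xi + xs))      # per-level fusion + refinement
            scores.append(self.heads[i](fused.mean(dim=(2, 3))))
        w = torch.softmax(self.level_weights, dim=0)     # weighting over levels
        return (torch.cat(scores, dim=1) * w).sum(dim=1)

# Usage example: a batch of 256x256 images with single-channel saliency maps.
model = TwoStreamIQA()
img = torch.rand(2, 3, 256, 256)
sal = torch.rand(2, 1, 256, 256)
print(model(img, sal).shape)  # torch.Size([2]), one quality score per image
```

In this sketch the saliency maps are assumed to be precomputed inputs; the paper's choice of saliency model and the precise fusion and weighting rules should be taken from the article itself.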

© 2022 SPIE and IS&T. 1017-9909/2022/$28.00
Huanhuan Ma, Ziguan Cui, Zongliang Gan, Guijin Tang, and Feng Liu "Saliency-enhanced two-stream convolutional network for no-reference image quality assessment," Journal of Electronic Imaging 31(2), 023010 (17 March 2022). https://doi.org/10.1117/1.JEI.31.2.023010
Received: 30 July 2021; Accepted: 25 February 2022; Published: 17 March 2022
KEYWORDS
Image quality, Distortion, Image fusion, Image enhancement, Feature extraction, Convolution, Networks