Lightweight image super-resolution with multiscale residual attention network
Cunjun Xiao, Hui Dong, Haibin Li, Yaqian Li, Wenming Zhang
Abstract

In recent years, various convolutional neural networks have been successfully applied to the single-image super-resolution task. However, most existing models rely on deeper or wider networks, whose heavy computation and memory consumption restrict their use in practice. To address these problems, we propose a lightweight multiscale residual attention network that not only extracts more detail to improve image quality but also reduces the number of parameters. More specifically, a multiscale residual attention block (MRAB), the basic unit of the network, fully exploits image features using convolutional kernels of different sizes. Meanwhile, its attention mechanism adaptively recalibrates the channel and spatial information of feature mappings. Furthermore, a local feature integration module (LFIM) is designed as the network architecture to maximize the use of local information; the LFIM consists of several MRABs and a local skip connection that compensates for information loss. Our experimental results show that our method outperforms representative algorithms while using fewer parameters and less computational overhead. Code is available at https://github.com/xiaotian3/EMRAB.
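The channel recalibration described in the abstract follows the common squeeze-and-excitation pattern (global pooling, a bottleneck gating network, then per-channel rescaling). The sketch below is a minimal, hypothetical NumPy illustration of that pattern, with random untrained weights and assumed shapes; it is not the authors' implementation.

```python
import numpy as np

def channel_attention(fmap, reduction=4, seed=0):
    """Recalibrate a (C, H, W) feature map channel-wise (SE-style sketch).

    All weights are random placeholders standing in for learned parameters.
    """
    rng = np.random.default_rng(seed)
    C = fmap.shape[0]
    # Squeeze: global average pooling gives one descriptor per channel.
    z = fmap.mean(axis=(1, 2))                       # shape (C,)
    # Excitation: bottleneck MLP (ReLU then sigmoid) with random weights.
    w1 = rng.standard_normal((C // reduction, C)) * 0.1
    w2 = rng.standard_normal((C, C // reduction)) * 0.1
    s = w2 @ np.maximum(w1 @ z, 0.0)                 # shape (C,)
    gate = 1.0 / (1.0 + np.exp(-s))                  # sigmoid, in (0, 1)
    # Rescale: each channel is multiplied by its learned (here random) gate.
    return fmap * gate[:, None, None]

x = np.ones((8, 4, 4))
y = channel_attention(x)
print(y.shape)  # (8, 4, 4)
```

Each output channel is the input channel scaled by a value in (0, 1), so informative channels can be emphasized and others suppressed once the gating weights are trained.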

© 2022 SPIE and IS&T
Cunjun Xiao, Hui Dong, Haibin Li, Yaqian Li, and Wenming Zhang "Lightweight image super-resolution with multiscale residual attention network," Journal of Electronic Imaging 31(4), 043028 (4 August 2022). https://doi.org/10.1117/1.JEI.31.4.043028
Received: 3 December 2021; Accepted: 13 July 2022; Published: 4 August 2022
KEYWORDS
Convolution

Super resolution

Image quality

Network architectures

Information fusion

Data modeling
