Self-attention has proven to be a powerful yet computation-intensive method for scene semantic segmentation. Although many efforts have explored more effective and resource-efficient ways to apply self-attention, there is still room to reduce its computational cost. Moreover, because self-attention excels at fusing information, it is a natural fit for multi-scale feature fusion, which remains barely researched: the information-exchange paths between features at different resolutions are still mostly addition and concatenation. This work investigates a partition method that decreases the computational complexity of self-attention and, at the same time, presents a multi-scale-feature-attention (MFA) module that fuses low-resolution features, which carry semantic information, with high-resolution features, which carry fine detail. Specifically, the proposed multi-scale-partition-attention (MPA) module and the MFA module are inserted into the backbone in sequence: the former fuses information among all pixels within one highly extracted feature, and the latter among pixels drawn from features at different resolutions. Extensive experiments on semantic segmentation benchmarks, including PASCAL-Context and Cityscapes, demonstrate that the two modules improve the backbone's performance on scene semantic segmentation tasks that contain multiple classes and objects of both large and small sizes.
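To make the two ideas in the abstract concrete, here is a minimal NumPy sketch of (a) self-attention restricted to non-overlapping spatial partitions, which cuts the per-pixel attention cost from O(HW) to O(P²) for window size P, and (b) cross-scale attention in which high-resolution pixels attend to all low-resolution pixels. This is an illustration of the general partition-attention and cross-scale-fusion techniques only, not the paper's actual MPA/MFA modules, whose partition scheme and projections are not specified here; the identity Q/K/V projections stand in for learned weights.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def window_attention(feat, window=4):
    """Self-attention within non-overlapping P x P windows.

    feat: (H, W, C) feature map; H and W must be divisible by `window`.
    Each pixel attends only to the P*P pixels of its own window, the
    kind of saving a partition scheme aims for.
    """
    H, W, C = feat.shape
    P = window
    # group pixels by window: (H//P, W//P, P*P, C)
    x = feat.reshape(H // P, P, W // P, P, C).transpose(0, 2, 1, 3, 4)
    x = x.reshape(H // P, W // P, P * P, C)
    q, k, v = x, x, x  # identity projections for illustration
    attn = softmax(q @ k.transpose(0, 1, 3, 2) / np.sqrt(C), axis=-1)
    y = attn @ v
    # undo the partition back to (H, W, C)
    y = y.reshape(H // P, W // P, P, P, C).transpose(0, 2, 1, 3, 4)
    return y.reshape(H, W, C)

def cross_scale_fusion(hi, lo):
    """High-res pixels (queries) attend to all low-res pixels (keys/values),
    injecting coarse semantic context into the detailed feature map."""
    H, W, C = hi.shape
    h, w, _ = lo.shape
    q = hi.reshape(H * W, C)
    kv = lo.reshape(h * w, C)
    attn = softmax(q @ kv.T / np.sqrt(C), axis=-1)
    return hi + (attn @ kv).reshape(H, W, C)  # residual fusion

hi = np.random.rand(8, 8, 16)   # high-resolution feature
lo = np.random.rand(4, 4, 16)   # low-resolution feature
out = cross_scale_fusion(window_attention(hi, window=4), lo)
```

Because the low-resolution map has far fewer pixels, the cross-scale attention matrix is only (HW × hw), which is much cheaper than full-resolution global self-attention.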
Keywords: Semantics, Image segmentation, Feature extraction, Convolution, Education and training, Feature fusion, Ablation