Feature boosting with efficient attention for scene parsing
ORCID
- Vivek Singh: 0000-0003-1728-1198
Abstract
The complexity of scene parsing grows with the number of object and scene classes, which is higher in unrestricted open scenes. The biggest challenge is to model the spatial relations between scene elements while still identifying objects at smaller scales. This paper presents a novel feature-boosting network that gathers spatial context from multiple levels of feature extraction and computes the attention weights for each level of representation to generate the final class labels. A novel ‘channel attention module’ is designed to compute the attention weights, ensuring that features from the relevant extraction stages are boosted while the others are attenuated. The model also learns spatial context information at low resolution to preserve the abstract spatial relationships among scene elements and reduce computational cost. Spatial attention is subsequently concatenated into a final feature set before applying feature boosting. Low-resolution spatial attention features are trained using an auxiliary task that helps to learn a coarse global scene structure. The proposed model outperforms all state-of-the-art models on both the ADE20K and the Cityscapes datasets.
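The paper's own module details are not reproduced in this record, but the squeeze-excite pattern the abstract alludes to (per-channel attention weights that boost relevant feature stages and attenuate others) can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation; the function name `channel_attention`, the reduction ratio, and the weight matrices `w1`/`w2` are all assumptions for the sketch.

```python
import numpy as np

def channel_attention(features, w1, w2):
    """Gate a (C, H, W) feature map with learned per-channel weights.

    Hypothetical sketch of a channel attention module: squeeze the
    spatial dimensions by global average pooling, pass through a small
    bottleneck (w1: (C//r, C), w2: (C, C//r)), and rescale each channel
    by a sigmoid weight in (0, 1) so relevant channels are kept and
    others attenuated.
    """
    squeezed = features.mean(axis=(1, 2))            # (C,) global average pool
    hidden = np.maximum(w1 @ squeezed, 0.0)          # ReLU bottleneck, (C//r,)
    weights = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))   # sigmoid gates, (C,)
    return features * weights[:, None, None]         # boost/attenuate channels
```

In a feature-boosting setting, one such gate per extraction stage would weight that stage's contribution before the gated maps are fused into the final feature set.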
DOI
10.1016/j.neucom.2024.128222
Publication Date
2024-10-07
Publication Title
Neurocomputing
Volume
601
ISSN
0925-2312
Embargo Period
2026-07-18
Keywords
Channel attention, Feature attention, Scene parsing, Semantic segmentation, Spatial attention
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-No Derivative Works 4.0 International License.
Recommended Citation
Sharma, S., Singh, V., & Cuzzolin, F. (2024) 'Feature boosting with efficient attention for scene parsing', Neurocomputing, 601. Available at: https://doi.org/10.1016/j.neucom.2024.128222