Context-Guided Spatial Feature Reconstruction for Efficient Semantic Segmentation

Zhenliang Ni, Xinghao Chen*, Yingjie Zhai, Yehui Tang, Yunhe Wang*

Abstract


"Semantic segmentation is an important task for numerous applications but it is still quite challenging to achieve advanced performance with limited computational costs. In this paper, we present CGRSeg, an efficient yet competitive segmentation framework based on context-guided spatial feature reconstruction. A Rectangular Self-Calibration Module is carefully designed for spatial feature reconstruction and pyramid context extraction. It captures the axial global context in both horizontal and vertical directions to explicitly model rectangular key areas. A shape self-calibration function is designed to make the key areas closer to foreground objects. Besides, a lightweight Dynamic Prototype Guided head is proposed to improve the classification of foreground objects by explicit class embedding. Our CGRSeg is extensively evaluated on ADE20K, COCO-Stuff, and Pascal Context benchmarks, and achieves state-of-the-art semantic performance. Specifically, it achieves 43.6% mIoU on ADE20K with only 4.0 GFLOPs, which is 0.9% and 2.5% mIoU better than SeaFormer and SegNeXt but with about 38.0% fewer GFLOPs. Code is available at https://github.com/nizhenliang/ CGRSeg."
