CBAM: Convolutional Block Attention Module
Sanghyun Woo, Jongchan Park, Joon-Young Lee, In So Kweon; The European Conference on Computer Vision (ECCV), 2018, pp. 3-19
Abstract
We propose Convolutional Block Attention Module (CBAM), a simple yet effective attention module that can be integrated into any feed-forward convolutional neural network. Given an intermediate feature map, our module sequentially infers attention maps along two separate dimensions, channel and spatial; the attention maps are then multiplied with the input feature map for adaptive feature refinement. Because CBAM is a lightweight and general module, it can be integrated into any CNN architecture seamlessly with negligible overhead, and it is end-to-end trainable along with the base CNN. We validate our CBAM through extensive experiments on the ImageNet-1K, MS COCO detection, and VOC 2007 detection datasets. Our experiments show consistent improvements in classification and detection performance with various models, demonstrating the wide applicability of CBAM. The code and models will be publicly available upon acceptance of the paper.
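
As a concrete illustration of the mechanism the abstract describes (channel attention followed by spatial attention, each multiplied into the feature map), the following is a minimal PyTorch sketch. It is a reading of the module under stated assumptions, not the authors' released implementation: the class and variable names are illustrative, and the reduction ratio of 16 and the 7x7 spatial kernel are the settings commonly cited for the paper.

import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        # Shared MLP (as 1x1 convs) applied to both the average- and
        # max-pooled channel descriptors.
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )

    def forward(self, x):
        avg = self.mlp(x.mean(dim=(2, 3), keepdim=True))
        mx = self.mlp(x.amax(dim=(2, 3), keepdim=True))
        return torch.sigmoid(avg + mx)  # attention map of shape (N, C, 1, 1)

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size,
                              padding=kernel_size // 2, bias=False)

    def forward(self, x):
        # Channel-wise average and max, concatenated, then one conv layer.
        avg = x.mean(dim=1, keepdim=True)
        mx = x.amax(dim=1, keepdim=True)
        att = self.conv(torch.cat([avg, mx], dim=1))
        return torch.sigmoid(att)  # attention map of shape (N, 1, H, W)

class CBAM(nn.Module):
    # Sequentially refines a feature map along the channel dimension,
    # then along the spatial dimension.
    def __init__(self, channels, reduction=16, kernel_size=7):
        super().__init__()
        self.ca = ChannelAttention(channels, reduction)
        self.sa = SpatialAttention(kernel_size)

    def forward(self, x):
        x = x * self.ca(x)  # channel-refined features
        x = x * self.sa(x)  # spatially refined features
        return x

# Usage: refine a ResNet-style intermediate feature map.
feats = torch.randn(2, 64, 32, 32)
refined = CBAM(64)(feats)  # same shape as the input: (2, 64, 32, 32)

Because the module preserves the input shape, it can be dropped after any convolutional block of an existing backbone, which is the "plug into any CNN architecture" property the abstract claims.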
Related Material
[pdf] [bibtex]
@InProceedings{Woo_2018_ECCV,
author = {Woo, Sanghyun and Park, Jongchan and Lee, Joon-Young and Kweon, In So},
title = {CBAM: Convolutional Block Attention Module},
booktitle = {The European Conference on Computer Vision (ECCV)},
month = {September},
year = {2018}
}