SAN: Learning Relationship between Convolutional Features for Multi-Scale Object Detection
Yonghyun Kim, Bong-Nam Kang, Daijin Kim; The European Conference on Computer Vision (ECCV), 2018, pp. 316-331
Abstract
Most recent successful methods for accurate object detection build on convolutional neural networks (CNNs). However, because CNN-based detection methods lack scale normalization, the channels activated in the feature space can differ completely with object scale, and this difference makes it hard for the classifier to learn from samples. We propose a Scale Aware Network (SAN) that maps convolutional features from different scales onto a scale-invariant subspace, making CNN-based detection methods more robust to scale variation, and we construct a unique learning method that considers purely the relationship between channels, without spatial information, for efficient training of SAN. To show the validity of our method, we visualize how convolutional features change with scale through a channel activation matrix and experimentally show that SAN reduces the feature differences across the scale space. We evaluate our method on the PASCAL VOC and MS COCO datasets, and analyze SAN through several experiments on its structure and parameters. The proposed SAN can be generally applied to many CNN-based detection methods to improve detection accuracy with only a slight increase in computing time.
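To make the idea described above concrete, below is a minimal sketch in PyTorch of a scale-aware adaptation module. It is not the authors' released implementation; the number of scale bins, the per-bin 1x1 convolution, the residual merge, and the `channel_only_view` helper are assumptions made purely to illustrate the two ingredients named in the abstract: scale-specific mapping of features onto a shared subspace, and a channel-wise (spatially pooled) view of the features.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ScaleAwareAdapter(nn.Module):
    """Illustrative sketch: map RoI features from different scale bins onto a shared subspace.

    Each scale bin gets its own 1x1 convolution; the adapted features are added
    back to the original features so the backbone representation is preserved.
    """

    def __init__(self, channels: int, num_scale_bins: int = 3):
        super().__init__()
        self.adapters = nn.ModuleList(
            nn.Conv2d(channels, channels, kernel_size=1) for _ in range(num_scale_bins)
        )

    def forward(self, roi_feats: torch.Tensor, scale_bins: torch.Tensor) -> torch.Tensor:
        # roi_feats: (N, C, H, W) RoI-pooled features; scale_bins: (N,) scale-bin index per RoI.
        out = roi_feats.clone()
        for b, adapter in enumerate(self.adapters):
            mask = scale_bins == b
            if mask.any():
                # Scale-specific 1x1 mapping, merged residually with the original features.
                out[mask] = roi_feats[mask] + F.relu(adapter(roi_feats[mask]))
        return out


def channel_only_view(roi_feats: torch.Tensor) -> torch.Tensor:
    # Global average pooling removes spatial information, leaving only a
    # per-channel activation vector -- the kind of channel-wise signal the
    # abstract describes for learning without spatial information.
    return roi_feats.mean(dim=(2, 3))  # (N, C)


if __name__ == "__main__":
    # Toy usage: 8 RoIs, 256 channels, 7x7 pooled features, 3 hypothetical scale bins.
    feats = torch.randn(8, 256, 7, 7)
    bins = torch.randint(0, 3, (8,))
    adapted = ScaleAwareAdapter(channels=256, num_scale_bins=3)(feats, bins)
    print(adapted.shape, channel_only_view(adapted).shape)
```

How the scales are binned and how the adapted features feed the detection head are design choices of the actual SAN; the sketch only shows the general shape of such a module.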
Related Material
[pdf] [bibtex]
@InProceedings{Kim_2018_ECCV,
  author    = {Kim, Yonghyun and Kang, Bong-Nam and Kim, Daijin},
  title     = {SAN: Learning Relationship between Convolutional Features for Multi-Scale Object Detection},
  booktitle = {The European Conference on Computer Vision (ECCV)},
  month     = {September},
  year      = {2018}
}