Saliency Benchmarking Made Easy: Separating Models, Maps and Metrics
Matthias Kummerer, Thomas S. A. Wallis, Matthias Bethge; The European Conference on Computer Vision (ECCV), 2018, pp. 770-787
Abstract
Dozens of new fixation prediction models are published every year and compared on open benchmarks such as MIT300 and LSUN. However, progress in the field can be difficult to judge because models are compared using a variety of inconsistent metrics. Here we show that no single saliency map can perform well under all metrics. Instead, we propose a principled approach to solve the benchmarking problem by separating the notions of saliency models, maps and metrics. Inspired by Bayesian decision theory, we define a saliency model to be a probabilistic model of fixation density prediction and a saliency map to be a metric-specific prediction derived from the model density which maximizes the expected performance on that metric given the model density. We derive these optimal saliency maps for the most commonly used saliency metrics (AUC, sAUC, NSS, CC, SIM, KL-Div) and show that they can be computed analytically or approximated with high precision. We show that this leads to consistent rankings in all metrics and avoids the penalties of using one saliency map for all metrics. Our method allows researchers to have their model compete on many different metrics with the state of the art in those metrics: "good" models will perform well in all metrics.
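The recipe in the abstract (fit one probabilistic fixation density, then derive a separate saliency map for each metric) can be sketched in a few lines of code. The snippet below is a minimal illustration under assumptions drawn from the paper, not the authors' reference implementation (their pysaliency package provides that): the AUC, NSS, sAUC and CC maps have simple closed or near-closed forms, while the SIM and KL-Div maps require numerical approximation and are omitted here. The names metric_specific_maps, density, centerbias and blur_sigma are placeholders chosen for this sketch.

from scipy.ndimage import gaussian_filter

def metric_specific_maps(density, centerbias, blur_sigma):
    """Derive metric-specific saliency maps from one predicted fixation density.

    density:    2D array with the predicted fixation density p(x, y), summing to 1.
    centerbias: 2D array with the baseline (center-bias) density used by shuffled AUC.
    blur_sigma: std in pixels of the Gaussian used to build the empirical saliency
                map that CC is evaluated against (dataset dependent; placeholder here).
    """
    maps = {}
    # AUC depends only on the ordering of saliency values, so the density
    # itself (or any monotone transform of it) serves as an optimal map.
    maps['AUC'] = density
    # NSS is maximized in expectation by predicting the density itself.
    maps['NSS'] = density
    # Shuffled AUC scores fixations against fixations from other images,
    # so the optimal ordering is the density relative to the center bias.
    maps['sAUC'] = density / (centerbias + 1e-20)
    # CC is computed against a Gaussian-blurred empirical fixation map, so
    # (to a good approximation) the optimal prediction is the density
    # blurred with the same kernel.
    maps['CC'] = gaussian_filter(density, blur_sigma)
    return maps

As the abstract notes, any single one of these maps would be penalized when scored under the other metrics; deriving all of them from the same underlying density is what makes the rankings consistent across metrics.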
Related Material
[pdf] [bibtex]
@InProceedings{Kummerer_2018_ECCV,
    author    = {Kummerer, Matthias and Wallis, Thomas S. A. and Bethge, Matthias},
    title     = {Saliency Benchmarking Made Easy: Separating Models, Maps and Metrics},
    booktitle = {The European Conference on Computer Vision (ECCV)},
    month     = {September},
    year      = {2018}
}