DeepGUM: Learning Deep Robust Regression with a Gaussian-Uniform Mixture Model
Stephane Lathuiliere, Pablo Mesejo, Xavier Alameda-Pineda, Radu Horaud; The European Conference on Computer Vision (ECCV), 2018, pp. 202-217
Abstract
In this paper we address the problem of how to robustly train a ConvNet for regression, or deep robust regression. Traditionally, deep regression employs the L2 loss function, known to be sensitive to outliers, i.e., samples that either lie at an abnormal distance from the majority of the training samples or that correspond to wrongly annotated targets. This means that, during back-propagation, outliers may bias the training process due to the high magnitude of their gradients. We propose DeepGUM: a deep regression model that is robust to outliers thanks to the use of a Gaussian-uniform mixture model. We derive an optimization algorithm that alternates between the unsupervised detection of outliers using expectation-maximization and the supervised training with cleaned samples using stochastic gradient descent. DeepGUM is able to adapt to a continuously evolving outlier distribution, avoiding the need to manually impose a threshold on the proportion of outliers in the training set. Extensive experimental evaluations on four different tasks (facial and fashion landmark detection, age estimation, and head pose estimation) lead us to conclude that our novel robust technique provides reliability in the presence of various types of noise and protection against a high percentage of outliers.
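To make the alternating scheme concrete, below is a minimal NumPy sketch (not the authors' implementation) of the EM step on one-dimensional residuals e_i = y_i - f(x_i), where a zero-mean Gaussian models inliers and a uniform density over the observed residual range models outliers. The function name gum_em, the initialization constants, and the 1-D simplification are assumptions for illustration; the paper itself handles multivariate regression targets.

import numpy as np

def gum_em(residuals, n_iters=50, tol=1e-6):
    # Hypothetical sketch: EM for a Gaussian-uniform mixture on residuals.
    e = np.asarray(residuals, dtype=float)
    span = e.max() - e.min() + 1e-12   # support of the uniform (outlier) component
    u = 1.0 / span                     # constant uniform density
    pi = 0.9                           # initial inlier proportion (assumed)
    var = e.var() + 1e-12              # initial Gaussian variance
    for _ in range(n_iters):
        # E-step: posterior probability that each sample is an inlier
        g = np.exp(-0.5 * e**2 / var) / np.sqrt(2.0 * np.pi * var)
        r = pi * g / (pi * g + (1.0 - pi) * u + 1e-300)
        # M-step: re-estimate the inlier proportion and Gaussian variance
        pi_new = r.mean()
        var_new = (r * e**2).sum() / (r.sum() + 1e-12)
        if abs(pi_new - pi) < tol and abs(var_new - var) < tol:
            pi, var = pi_new, var_new
            break
        pi, var = pi_new, var_new
    return r, pi, var   # r: per-sample inlier responsibilities in [0, 1]

In the alternation the abstract describes, the responsibilities r would down-weight (or exclude) likely outliers in the per-sample L2 loss during the next round of stochastic gradient descent, after which EM is re-run on the new residuals, so no fixed outlier threshold is ever imposed.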
Related Material
[pdf] [bibtex]
@InProceedings{Lathuiliere_2018_ECCV,
author = {Lathuiliere, Stephane and Mesejo, Pablo and Alameda-Pineda, Xavier and Horaud, Radu},
title = {DeepGUM: Learning Deep Robust Regression with a Gaussian-Uniform Mixture Model},
booktitle = {The European Conference on Computer Vision (ECCV)},
month = {September},
year = {2018}
}