ContextVP: Fully Context-Aware Video Prediction
Wonmin Byeon, Qin Wang, Rupesh Kumar Srivastava, Petros Koumoutsakos; The European Conference on Computer Vision (ECCV), 2018, pp. 753-769
Abstract
Video prediction models based on convolutional networks, recurrent networks, and their combinations often result in blurry predictions. We identify an important contributing factor for imprecise predictions that has not been studied adequately in the literature: blind spots, i.e., lack of access to all relevant past information for accurately predicting the future. To address this issue, we introduce a fully context-aware architecture that captures the entire available past context for each pixel using Parallel Multi-Dimensional LSTM units and aggregates it using blending units. Our model outperforms a strong baseline network of 20 recurrent convolutional layers and yields state-of-the-art performance for next step prediction on three challenging real-world video datasets: Human 3.6M, Caltech Pedestrian, and UCF-101. Moreover, it does so with fewer parameters than several recently proposed models, and does not rely on deep convolutional networks, multi-scale architectures, separation of background and foreground modeling, motion flow learning, or adversarial training. These results highlight that full awareness of past context is of crucial importance for video prediction.
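The abstract describes the architecture only in words: directional recurrent sweeps that give every pixel access to its full context, aggregated by blending units. Below is a minimal PyTorch sketch of that idea, not the authors' implementation: the class names (SimpleMDLSTM, BlendUnit), the per-frame spatial-only sweeps, and all sizes are illustrative assumptions, whereas the paper's PMD units additionally recurse along the temporal axis so each pixel summarizes the entire available past volume.

import torch
import torch.nn as nn

class SimpleMDLSTM(nn.Module):
    # Sweeps a 1D LSTM across one spatial axis of a frame so that each
    # pixel's hidden state summarizes everything already seen in that
    # direction. (Illustrative stand-in for the paper's PMD units, which
    # also recurse over time.)
    def __init__(self, channels, hidden):
        super().__init__()
        self.lstm = nn.LSTM(channels, hidden, batch_first=True)

    def forward(self, x, dim, reverse):
        # x: (B, C, H, W); dim 2 = vertical sweep, dim 3 = horizontal sweep.
        if reverse:
            x = x.flip([dim])
        if dim == 2:
            seq = x.permute(0, 3, 2, 1)   # (B, W, H, C): H is the sequence axis
        else:
            seq = x.permute(0, 2, 3, 1)   # (B, H, W, C): W is the sequence axis
        b, s, t, c = seq.shape
        out, _ = self.lstm(seq.reshape(b * s, t, c))
        out = out.reshape(b, s, t, -1)
        out = out.permute(0, 3, 2, 1) if dim == 2 else out.permute(0, 3, 1, 2)
        return out.flip([dim]) if reverse else out

class BlendUnit(nn.Module):
    # Learned pixel-wise aggregation of the four directional context maps,
    # a simplified analogue of the paper's blending units.
    def __init__(self, hidden, out_channels):
        super().__init__()
        self.mix = nn.Conv2d(4 * hidden, out_channels, kernel_size=1)

    def forward(self, directional_maps):
        return self.mix(torch.cat(directional_maps, dim=1))

# Toy usage: four sweeps (down, up, right, left) give every pixel access
# to the whole frame before blending into a prediction map.
frame = torch.randn(2, 3, 32, 32)
pmd = SimpleMDLSTM(channels=3, hidden=16)
blend = BlendUnit(hidden=16, out_channels=3)
maps = [pmd(frame, dim=d, reverse=r) for d in (2, 3) for r in (False, True)]
prediction = blend(maps)   # (2, 3, 32, 32)

Sharing one LSTM across all four directions keeps the sketch short; the paper presumably learns separate recurrent weights per direction and stacks several such context-aware layers before the output.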
Related Material
[pdf] [bibtex]
@InProceedings{Byeon_2018_ECCV,
author = {Byeon, Wonmin and Wang, Qin and Srivastava, Rupesh Kumar and Koumoutsakos, Petros},
title = {ContextVP: Fully Context-Aware Video Prediction},
booktitle = {The European Conference on Computer Vision (ECCV)},
month = {September},
year = {2018}
}