Contextual-based Image Inpainting: Infer, Match, and Translate
Yuhang Song, Chao Yang, Zhe Lin, Xiaofeng Liu, Qin Huang, Hao Li, C.-C. Jay Kuo; The European Conference on Computer Vision (ECCV), 2018, pp. 3-19
Abstract
We study the task of image inpainting, which is to fill in the missing region of an incomplete image with plausible content. To this end, we propose a learning-based approach to generate a visually coherent completion given a high-resolution image with missing components. To overcome the difficulty of directly learning the distribution of high-dimensional image data, we divide the task into two separate steps, inference and translation, and model each step with a deep neural network. We also use simple heuristics to guide the propagation of local textures from the boundary to the hole. We show that, with these techniques, inpainting reduces to learning two image-feature translation functions in a much smaller space, which is consequently easier to train. We evaluate our method on several public datasets and show that it produces results of better visual quality than previous state-of-the-art methods.
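A minimal conceptual sketch of the two-step "infer, then translate" pipeline described above, assuming a PyTorch setting. The network names, layer shapes, and the mask-concatenation convention are illustrative assumptions, not the authors' released architecture.

```python
# Sketch of a two-stage inpainting pipeline: an inference network fills the
# missing region coarsely, and a translation network maps the coarse result
# to the final image. All module names and shapes are hypothetical.
import torch
import torch.nn as nn

class InferenceNet(nn.Module):
    """Step 1: infer plausible content for the missing region."""
    def __init__(self, channels=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(4, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3, 3, padding=1),
        )

    def forward(self, masked_image, mask):
        # Concatenate the binary mask as an extra channel so the network
        # knows where the hole is.
        return self.body(torch.cat([masked_image, mask], dim=1))

class TranslationNet(nn.Module):
    """Step 2: translate the coarse completion to the final output image."""
    def __init__(self, channels=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, coarse):
        return self.body(coarse)

if __name__ == "__main__":
    image = torch.rand(1, 3, 256, 256)        # incomplete input image
    mask = torch.zeros(1, 1, 256, 256)
    mask[:, :, 96:160, 96:160] = 1.0          # 1 inside the missing region
    masked = image * (1 - mask)

    coarse = InferenceNet()(masked, mask)     # infer missing content
    output = TranslationNet()(coarse)         # translate to final image
    print(output.shape)                       # torch.Size([1, 3, 256, 256])
```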
Related Material
[pdf] [bibtex]
@InProceedings{Song_2018_ECCV,
author = {Song, Yuhang and Yang, Chao and Lin, Zhe and Liu, Xiaofeng and Huang, Qin and Li, Hao and Jay Kuo, C.-C.},
title = {Contextual-based Image Inpainting: Infer, Match, and Translate},
booktitle = {The European Conference on Computer Vision (ECCV)},
month = {September},
year = {2018}
}