3D Recurrent Neural Networks with Context Fusion for Point Cloud Semantic Segmentation
Xiaoqing Ye, Jiamao Li, Hexiao Huang, Liang Du, Xiaolin Zhang; The European Conference on Computer Vision (ECCV), 2018, pp. 403-417
Abstract
Semantic segmentation of 3D unstructured point clouds remains an open research problem. Recent works predict semantic labels of 3D points by virtue of neural networks but take limited context knowledge into consideration. In this paper, a novel end-to-end approach for unstructured point cloud semantic segmentation is proposed to exploit the inherent contextual features. First, an efficient pointwise pyramid pooling module is investigated to capture local structures at various densities by taking multi-scale neighborhoods into account. Then, two-dimensional hierarchical recurrent neural networks (RNNs) are utilized to explore long-range spatial dependencies. Each recurrent layer takes as input the local features derived from unrolled cells and sweeps the 3D space along two horizontal directions successively to integrate structural knowledge. On challenging indoor and outdoor 3D datasets, the proposed framework demonstrates robust performance superior to state-of-the-art methods.
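The pointwise pyramid pooling idea described above can be sketched in a few lines: for each point, pool neighbor features at several nested radii and concatenate the results into a multi-scale descriptor. The following is a minimal NumPy illustration under assumed details (the radii, the max-pooling operator, and the function name `pointwise_pyramid_pooling` are illustrative choices, not the paper's exact implementation):

```python
import numpy as np

def pointwise_pyramid_pooling(points, features, radii=(0.1, 0.2, 0.4)):
    """Sketch of multi-scale context pooling: for each point, max-pool the
    features of neighbors within each radius and concatenate per-scale results.
    The radii and pooling choice here are illustrative assumptions."""
    n, c = features.shape
    # Pairwise Euclidean distances between all points (fine for small n).
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    pooled_scales = []
    for r in radii:
        mask = dists <= r  # neighborhood at this scale; always includes the point itself
        scale_feats = np.empty((n, c))
        for i in range(n):
            # Max pooling over the local neighborhood at radius r.
            scale_feats[i] = features[mask[i]].max(axis=0)
        pooled_scales.append(scale_feats)
    # Concatenated multi-scale descriptor of shape (n, c * len(radii)).
    return np.concatenate(pooled_scales, axis=1)
```

Because the neighborhoods are nested (a larger radius contains every smaller one), the max-pooled response at a coarse scale dominates the fine-scale response pointwise, which is what lets the concatenated descriptor encode local structure at several densities at once.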
Related Material
[pdf] [bibtex]
@InProceedings{Ye_2018_ECCV,
author = {Ye, Xiaoqing and Li, Jiamao and Huang, Hexiao and Du, Liang and Zhang, Xiaolin},
title = {3D Recurrent Neural Networks with Context Fusion for Point Cloud Semantic Segmentation},
booktitle = {The European Conference on Computer Vision (ECCV)},
month = {September},
year = {2018}
}