PGCNet: Patch Graph Convolutional Network for point cloud segmentation of indoor scenes


Sun, Yuliang; Miao, Yongwei; Chen, Jiazhou; Pajarola, R (2020). PGCNet: Patch Graph Convolutional Network for point cloud segmentation of indoor scenes. The Visual Computer, 36:2407-2418.

Abstract

Semantic segmentation of 3D point clouds is a crucial task in scene understanding and is fundamental to indoor applications such as indoor navigation, mobile robotics, and augmented reality. Recently, deep learning frameworks have been successfully applied to point clouds but are limited by data size. While most existing works operate on individual sampled points, we use surface patches as a more efficient representation and propose a novel indoor scene segmentation framework called the patch graph convolution network (PGCNet). This framework treats patches as input graph nodes and aggregates neighboring node features with a dynamic graph U-Net (DGU) module, which applies dynamic edge convolution inside a U-shaped encoder–decoder architecture. The DGU module dynamically updates the graph structure at each level to encode hierarchical edge features. Using PGCNet, we first segment the input scene into two types, i.e., room layout and indoor objects, and this partition is then used to carry out the final rich semantic labeling of various indoor scenes. With considerably faster training, the proposed framework achieves performance on par with the state of the art on standard indoor scene datasets.
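To make the abstract's core idea concrete, here is a minimal, dependency-free sketch of an EdgeConv-style aggregation over patch nodes: each patch is a graph node with a feature vector, and for every node the edge features [x_i, x_j − x_i] over its k nearest neighbors are transformed and max-pooled. The function names and the linear map `weights` are illustrative assumptions, not the paper's actual implementation (which stacks such layers inside a U-shaped encoder–decoder and rebuilds the graph at each level).

```python
def knn(features, i, k):
    """Indices of the k nearest neighbours of node i (squared Euclidean)."""
    dists = []
    for j, x in enumerate(features):
        if j == i:
            continue
        d = sum((a - b) ** 2 for a, b in zip(features[i], x))
        dists.append((d, j))
    dists.sort()
    return [j for _, j in dists[:k]]


def edge_conv(features, k, weights):
    """One edge-convolution layer over all patch nodes.

    features: list of per-patch feature vectors (length d each).
    weights:  rows of a linear map from the concatenated edge feature
              [x_i, x_j - x_i] (length 2*d) to an output feature.
    Returns one max-pooled output feature per node.
    """
    out = []
    for i, x_i in enumerate(features):
        pooled = None
        for j in knn(features, i, k):
            # Edge feature: node feature concatenated with neighbour offset.
            edge = list(x_i) + [a - b for a, b in zip(features[j], x_i)]
            h = [sum(w * e for w, e in zip(row, edge)) for row in weights]
            # Channel-wise max over the neighbourhood.
            pooled = h if pooled is None else [max(p, v) for p, v in zip(pooled, h)]
        out.append(pooled)
    return out
```

"Dynamic" in the paper's sense means the kNN graph is recomputed from the updated features after each layer, so calling `knn` inside `edge_conv` (rather than on a fixed input graph) is the essential design choice this sketch mirrors.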



Additional indexing

Item Type: Journal Article, refereed, original work
Communities & Collections: 03 Faculty of Economics > Department of Informatics
Dewey Decimal Classification: 000 Computer science, knowledge & systems
Scopus Subject Areas: Physical Sciences > Software
Physical Sciences > Computer Vision and Pattern Recognition
Physical Sciences > Computer Graphics and Computer-Aided Design
Uncontrolled Keywords: graphics, indoor scene reconstruction, point cloud, segmentation
Language: English
Date: July 2020
Deposited On: 16 Dec 2020 15:43
Last Modified: 31 Jan 2021 11:23
Publisher: Springer
ISSN: 0178-2789
OA Status: Closed
Publisher DOI: https://doi.org/10.1007/s00371-020-01892-8
Other Identification Number: merlin-id:20184
