Point-GCC: Universal Self-supervised 3D Scene Pre-training via Geometry-Color Contrast

¹Xi'an Jiaotong University  ²IIISCT  ³IIIS, Tsinghua University
*Corresponding Author.
arXiv (coming soon) | Code
Figure: Overview of the Point-GCC framework.

Point-GCC uses a Siamese network to extract geometry and color features, each with positional embedding. We then apply hierarchical supervision to the extracted features, consisting of point-level contrast and reconstruction together with object-level contrast based on the deep clustering module.
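To make the point-level geometry-color contrast concrete, below is a minimal PyTorch sketch of an InfoNCE-style loss between the two branches. The function name, tensor shapes, and temperature are illustrative assumptions, not the paper's exact implementation.

    import torch
    import torch.nn.functional as F

    def point_level_contrast(geo_feats, color_feats, temperature=0.07):
        # geo_feats, color_feats: (N, C) per-point features from the two
        # Siamese branches; row i of each tensor describes the same point.
        geo = F.normalize(geo_feats, dim=-1)
        col = F.normalize(color_feats, dim=-1)
        logits = geo @ col.t() / temperature            # (N, N) cosine similarities
        targets = torch.arange(geo.size(0), device=geo.device)
        # Symmetric InfoNCE: each geometry feature should match its own
        # color feature, and vice versa.
        return 0.5 * (F.cross_entropy(logits, targets)
                      + F.cross_entropy(logits.t(), targets))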

Abstract

Geometry and color information provided by point clouds are both crucial for 3D scene understanding. The two kinds of information characterize different aspects of point clouds, but existing methods lack an elaborate design for their discrimination and relevance. Hence, we explore a 3D self-supervised paradigm that better utilizes the relations between point cloud information. Specifically, we propose a universal 3D scene pre-training framework via Geometry-Color Contrast (Point-GCC), which aligns geometry and color information using a Siamese network. To better adapt to downstream application tasks, we design (i) hierarchical supervision with point-level contrast and reconstruction and object-level contrast based on a novel deep clustering module, closing the gap between pre-training and downstream tasks; and (ii) an architecture-agnostic backbone that adapts to various downstream models. Benefiting from the object-level representation associated with downstream tasks, Point-GCC can directly evaluate model performance, and the results demonstrate the effectiveness of our method. Transfer learning results on a wide range of tasks also show consistent improvements across all datasets, e.g., new state-of-the-art object detection results on the SUN RGB-D and S3DIS datasets.

Overview

Figure: The Point-GCC pre-training pipeline.

(a) The deep clustering module obtains pseudo predictions for the features of each branch and enforces consistency with the swapped partition distributions produced by the Sinkhorn-Knopp algorithm (a sketch follows below).
(b) Point-GCC generates pseudo-labels from the cluster predictions of both branches and projects them onto ground-truth labels via Hungarian matching for unsupervised semantic segmentation (see the second sketch below).
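For reference, here is a minimal sketch of the Sinkhorn-Knopp step used for balanced cluster assignment, following the SwAV-style swapped-prediction recipe; the function name, iteration count, and epsilon are illustrative assumptions.

    import torch

    @torch.no_grad()
    def sinkhorn_knopp(scores, n_iters=3, eps=0.05):
        # scores: (N, K) similarities between N features and K cluster
        # prototypes. Returns an (N, K) soft assignment in which all
        # clusters are used roughly equally.
        Q = torch.exp(scores / eps).t()               # (K, N)
        Q /= Q.sum()                                  # joint distribution
        K, N = Q.shape
        for _ in range(n_iters):
            Q /= Q.sum(dim=1, keepdim=True); Q /= K   # rows: uniform cluster usage
            Q /= Q.sum(dim=0, keepdim=True); Q /= N   # cols: one unit per sample
        return (Q * N).t()                            # each row sums to 1

The swapped consistency then amounts to a cross-entropy between one branch's cluster prediction and the other branch's Sinkhorn assignment, and vice versa.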
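Similarly, a minimal sketch of the Hungarian matching alignment between cluster predictions and ground-truth labels, assuming the number of clusters equals the number of semantic classes; the function and variable names are illustrative.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def align_clusters_to_labels(pred_clusters, gt_labels, n_classes):
        # pred_clusters, gt_labels: (N,) integer arrays over the same points.
        # Build the cluster-vs-class overlap (confusion) matrix.
        overlap = np.zeros((n_classes, n_classes), dtype=np.int64)
        for c, g in zip(pred_clusters, gt_labels):
            overlap[c, g] += 1
        # Hungarian matching picks the one-to-one cluster-to-class mapping
        # that maximizes the total number of matched points.
        rows, cols = linear_sum_assignment(overlap, maximize=True)
        return dict(zip(rows.tolist(), cols.tolist()))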

3D Object detection results

Table: 3D object detection results.

+ means fine-tuning after pre-training on the corresponding dataset. * means that we evaluate VoteNet with the stronger MMDetection3D implementation for a fair comparison. † means trained with the extra ScanNetV2 dataset.

3D semantic segmentation results by different levels of supervision

Table: 3D semantic segmentation results.

+ means fine-tuning with pre-training on the corresponding dataset.

BibTeX


    @article{point-gcc,
        title={{Point-GCC:} Universal Self-supervised 3D Scene Pre-training via Geometry-Color Contrast},
        author={Fan, Guofan and Qi, Zekun and Shi, Wenkai and Ma, Kaisheng},
        journal={arXiv preprint arXiv:2305.19623},
        year={2023}
    }