3DMatch: Learning Local Geometric Descriptors from RGB-D Reconstructions
A. Zeng, S. Song, M. Nießner, M. Fisher, J. Xiao, and T. Funkhouser. arXiv:1603.08182 (2016). To appear at the Conference on Computer Vision and Pattern Recognition (CVPR) 2017. Project webpage: http://3dmatch.cs.princeton.edu.
Abstract
Matching local geometric features on real-world depth images is a challenging task due to the noisy, low-resolution, and incomplete nature of 3D scan data. These difficulties limit the performance of current state-of-the-art methods, which are typically based on histograms over geometric properties. In this paper, we present 3DMatch, a data-driven model that learns a local volumetric patch descriptor for establishing correspondences between partial 3D data. To amass training data for our model, we propose a self-supervised feature learning method that leverages the millions of correspondence labels found in existing RGB-D reconstructions. Experiments show that our descriptor not only matches local geometry in new scenes for reconstruction, but also generalizes to different tasks and spatial scales (e.g., instance-level object model alignment for the Amazon Picking Challenge, and mesh surface correspondence). Results show that 3DMatch consistently outperforms other state-of-the-art approaches by a significant margin. Code, data, benchmarks, and pre-trained models are available online at http://3dmatch.cs.princeton.edu.
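For a concrete picture of the approach, the sketch below shows a 3DMatch-style pipeline in PyTorch: a small 3D ConvNet maps a 30x30x30 TSDF voxel patch around an interest point to a unit-length descriptor, and a contrastive loss pulls descriptors of corresponding patches together while pushing non-matching ones apart. This is a minimal sketch, not the authors' released model; the layer sizes and the contrastive_loss helper are illustrative assumptions (the paper's network consumes 30x30x30 TSDF patches and outputs 512-dimensional descriptors; see the project webpage for the official code and pre-trained models).

import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchDescriptor(nn.Module):
    """Maps a local TSDF voxel patch to a unit-length descriptor.
    Layer sizes are illustrative, not the paper's exact architecture."""
    def __init__(self, desc_dim=512):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 64, 3), nn.ReLU(),     # 30^3 -> 28^3
            nn.MaxPool3d(2),                    # -> 14^3
            nn.Conv3d(64, 128, 3), nn.ReLU(),   # -> 12^3
            nn.Conv3d(128, 256, 3), nn.ReLU(),  # -> 10^3
            nn.AdaptiveAvgPool3d(1),            # -> (B, 256, 1, 1, 1)
        )
        self.fc = nn.Linear(256, desc_dim)

    def forward(self, x):                       # x: (B, 1, 30, 30, 30) TSDF values
        f = self.features(x).flatten(1)
        return F.normalize(self.fc(f), dim=1)   # L2-normalized descriptor

def contrastive_loss(d1, d2, is_match, margin=1.0):
    """L2 contrastive loss (hypothetical helper): pull matching patch
    descriptors together, push non-matching pairs apart by >= margin."""
    dist = (d1 - d2).norm(dim=1)
    return (is_match * dist.pow(2) +
            (1 - is_match) * F.relu(margin - dist).pow(2)).mean()

In the self-supervised setup described in the abstract, the training pairs come for free from existing RGB-D reconstructions: two depth frames whose estimated camera poses place their local patches at the same reconstructed surface point yield a matching pair, while patches sampled at distant surface points yield non-matching pairs, so no manual annotation is required.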
@misc{zeng20163dmatch,
  author   = {Zeng, Andy and Song, Shuran and Nießner, Matthias and Fisher, Matthew and Xiao, Jianxiong and Funkhouser, Thomas},
  title    = {3DMatch: Learning Local Geometric Descriptors from {RGB-D} Reconstructions},
  year     = {2016},
  url      = {http://arxiv.org/abs/1603.08182},
  note     = {arXiv:1603.08182. To appear at CVPR 2017. Project webpage: http://3dmatch.cs.princeton.edu},
  keywords = {2016 3D arxiv deep-learning princeton reconstruction}
}