Kenshiro TAMATA
Osaka University
Tomohiro MASHITA
Osaka University
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Kenshiro TAMATA, Tomohiro MASHITA, "Feature Description with Feature Point Registration Error Using Local and Global Point Cloud Encoders" in IEICE TRANSACTIONS on Information and Systems,
vol. E105-D, no. 1, pp. 134-140, January 2022, doi: 10.1587/transinf.2021EDP7082.
Abstract: A typical approach to reconstructing a 3D environment model is scanning the environment with a depth sensor and fitting the accumulated point cloud to 3D models. In this kind of scenario, a general 3D environment reconstruction application assumes temporally continuous scanning. However, in some practical uses, this assumption does not hold. Thus, a point cloud matching method for stitching several non-continuous 3D scans is required. Point cloud matching often includes errors in the feature point detection because a point cloud is basically a sparse sampling of the real environment, and it may include quantization errors that cannot be ignored. Moreover, depth sensors tend to have errors due to the reflective properties of the observed surface. We therefore assume that feature point pairs between two point clouds will include errors. In this work, we propose a feature description method robust to the feature point registration error described above. To achieve this goal, we designed a deep-learning-based feature description model that consists of a local feature description around the feature points and a global feature description of the entire point cloud. To obtain a feature description robust to feature point registration error, we input feature point pairs with errors and train the models with metric learning. Experimental results show that our feature description model can correctly estimate whether a feature point pair is close enough to be considered a match, even when the feature point registration errors are large, and that our model estimates with higher accuracy than methods such as FPFH or 3DMatch. In addition, we conducted experiments on combinations of input point clouds, including local point clouds, global point clouds, both types together, and different encoders.
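The two ideas the abstract combines can be illustrated compactly: a PointNet-style encoder (a shared per-point transform followed by a symmetric max-pooling, which makes the descriptor independent of point ordering) and a contrastive metric-learning objective that pulls matching descriptors together and pushes non-matching ones apart. The sketch below is illustrative only and is not the authors' model; `encode`, `contrastive_loss`, and the single-layer weights `W` are hypothetical stand-ins for the paper's local/global encoders.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(points, W):
    # Shared per-point transform (one linear layer + ReLU), then max pooling.
    # Max pooling is a symmetric function, so the descriptor does not depend
    # on the order of the points -- the key property of PointNet-style encoders.
    h = np.maximum(points @ W, 0.0)   # (N, d) per-point features
    return h.max(axis=0)              # (d,) global descriptor of the cloud

def contrastive_loss(a, b, match, margin=1.0):
    # Metric-learning objective: matching pairs are penalized by their
    # squared distance; non-matching pairs are pushed at least `margin` apart.
    d = np.linalg.norm(a - b)
    return d ** 2 if match else max(0.0, margin - d) ** 2

W = rng.normal(size=(3, 8))           # untrained weights, for illustration
cloud = rng.normal(size=(100, 3))     # a toy point cloud
perm = rng.permutation(100)

f1 = encode(cloud, W)
f2 = encode(cloud[perm], W)           # same cloud, points shuffled
```

In training, pairs of local patches around (deliberately perturbed) feature points would be fed through such encoders and the loss minimized, which is how the paper obtains robustness to registration error; here the descriptors `f1` and `f2` simply demonstrate order invariance.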
URL: https://global.ieice.org/en_transactions/information/10.1587/transinf.2021EDP7082/_p
@ARTICLE{e105-d_1_134,
author={Kenshiro TAMATA and Tomohiro MASHITA},
journal={IEICE TRANSACTIONS on Information and Systems},
title={Feature Description with Feature Point Registration Error Using Local and Global Point Cloud Encoders},
year={2022},
volume={E105-D},
number={1},
pages={134-140},
doi={10.1587/transinf.2021EDP7082},
ISSN={1745-1361},
month={January},}
TY - JOUR
TI - Feature Description with Feature Point Registration Error Using Local and Global Point Cloud Encoders
T2 - IEICE TRANSACTIONS on Information and Systems
SP - 134
EP - 140
AU - Kenshiro TAMATA
AU - Tomohiro MASHITA
PY - 2022
DO - 10.1587/transinf.2021EDP7082
JO - IEICE TRANSACTIONS on Information and Systems
SN - 1745-1361
VL - E105-D
IS - 1
JA - IEICE TRANSACTIONS on Information and Systems
Y1 - January 2022
ER -