Kazuki HAYASHI
National Institute of Technology (KOSEN), Niihama College
Daisuke TANAKA
National Institute of Technology (KOSEN), Niihama College
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Kazuki HAYASHI, Daisuke TANAKA, "Effectiveness of Feature Extraction System for Multimodal Sensor Information Based on VRAE and Its Application to Object Recognition" in IEICE TRANSACTIONS on Information,
vol. E106-D, no. 5, pp. 833-835, May 2023, doi: 10.1587/transinf.2022DLL0008.
Abstract: To achieve object recognition, it is necessary to find the unique features of the objects to be recognized. Prior research suggests that methods using information from multiple modalities are effective in finding such unique features. This paper presents an overview of a system that extracts the features of objects to be recognized by integrating visual, tactile, and auditory information as multimodal sensor information with VRAE. A discussion of changing the combination of modality information is also presented.
URL: https://global.ieice.org/en_transactions/information/10.1587/transinf.2022DLL0008/_p
@ARTICLE{e106-d_5_833,
author={Kazuki HAYASHI and Daisuke TANAKA},
journal={IEICE TRANSACTIONS on Information},
title={Effectiveness of Feature Extraction System for Multimodal Sensor Information Based on VRAE and Its Application to Object Recognition},
year={2023},
volume={E106-D},
number={5},
pages={833-835},
abstract={To achieve object recognition, it is necessary to find the unique features of the objects to be recognized. Results in prior research suggest that methods that use multiple modalities information are effective to find the unique features. In this paper, the overview of the system that can extract the features of the objects to be recognized by integrating visual, tactile, and auditory information as multimodal sensor information with VRAE is shown. Furthermore, a discussion about changing the combination of modalities information is also shown.},
keywords={},
doi={10.1587/transinf.2022DLL0008},
ISSN={1745-1361},
month={May},}
TY - JOUR
TI - Effectiveness of Feature Extraction System for Multimodal Sensor Information Based on VRAE and Its Application to Object Recognition
T2 - IEICE TRANSACTIONS on Information
SP - 833
EP - 835
AU - Kazuki HAYASHI
AU - Daisuke TANAKA
PY - 2023
DO - 10.1587/transinf.2022DLL0008
JO - IEICE TRANSACTIONS on Information
SN - 1745-1361
VL - E106-D
IS - 5
JA - IEICE TRANSACTIONS on Information
Y1 - May 2023
AB - To achieve object recognition, it is necessary to find the unique features of the objects to be recognized. Results in prior research suggest that methods that use multiple modalities information are effective to find the unique features. In this paper, the overview of the system that can extract the features of the objects to be recognized by integrating visual, tactile, and auditory information as multimodal sensor information with VRAE is shown. Furthermore, a discussion about changing the combination of modalities information is also shown.
ER -