Copyright notice
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Masahiro TSUKADA, Yuya UTSUMI, Hirokazu MADOKORO, Kazuhito SATO, "Unsupervised Feature Selection and Category Classification for a Vision-Based Mobile Robot," in IEICE TRANSACTIONS on Information and Systems,
vol. E94-D, no. 1, pp. 127-136, January 2011, doi: 10.1587/transinf.E94.D.127.
Abstract: This paper presents an unsupervised learning-based method for selection of feature points and object category classification without previous setting of the number of categories. Our method consists of the following procedures: 1) detection of feature points and description of features using a Scale-Invariant Feature Transform (SIFT), 2) selection of target feature points using One Class-Support Vector Machines (OC-SVMs), 3) generation of visual words of all SIFT descriptors and histograms in each image of selected feature points using Self-Organizing Maps (SOMs), 4) formation of labels using Adaptive Resonance Theory-2 (ART-2), and 5) creation and classification of categories on a category map of Counter Propagation Networks (CPNs) for visualizing spatial relations between categories. Classification results of static images using a Caltech-256 object category dataset and dynamic images using time-series images obtained using a robot according to movements respectively demonstrate that our method can visualize spatial relations of categories while maintaining time-series characteristics. Moreover, we emphasize the effectiveness of our method for category classification of appearance changes of objects.
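As a rough illustration of the front half of the pipeline (steps 1-3), the sketch below detects SIFT keypoints, filters their descriptors with a one-class SVM, and bins the surviving descriptors into SOM visual-word histograms. This is a minimal sketch assuming OpenCV, scikit-learn, and MiniSom as stand-ins; the grid size, nu, and training schedule are illustrative guesses, not the parameters used in the paper.

# Steps 1)-3): SIFT description, OC-SVM selection, SOM visual-word histograms.
# Library choices and parameter values are assumptions for illustration only.
import cv2
import numpy as np
from sklearn.svm import OneClassSVM
from minisom import MiniSom

def sift_descriptors(image_paths):
    """Step 1: detect keypoints and compute 128-D SIFT descriptors per image."""
    sift = cv2.SIFT_create()
    per_image = []
    for path in image_paths:
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        _, desc = sift.detectAndCompute(gray, None)
        per_image.append(desc if desc is not None else np.empty((0, 128)))
    return per_image

def select_target_descriptors(per_image, nu=0.1):
    """Step 2: keep only descriptors that an OC-SVM regards as inliers."""
    ocsvm = OneClassSVM(kernel="rbf", gamma="scale", nu=nu).fit(np.vstack(per_image))
    return [d[ocsvm.predict(d) == 1] if len(d) else d for d in per_image]

def visual_word_histograms(per_image, grid=(8, 8), iterations=10000):
    """Step 3: train a SOM whose units act as visual words, then build one
    normalized visual-word histogram per image."""
    all_desc = np.vstack([d for d in per_image if len(d)])
    som = MiniSom(grid[0], grid[1], 128, sigma=1.0, learning_rate=0.5)
    som.random_weights_init(all_desc)
    som.train_random(all_desc, iterations)
    histograms = []
    for desc in per_image:
        h = np.zeros(grid[0] * grid[1])
        for d in desc:
            r, c = som.winner(d)          # best-matching unit = visual word
            h[r * grid[1] + c] += 1
        histograms.append(h / max(h.sum(), 1.0))
    return np.array(histograms)

The histograms would then be labeled by ART-2 (step 4) and mapped onto a category map by a Counter Propagation Network (step 5). A generic CPN, again only a sketch with assumed grid size, learning rates, and neighborhood schedule rather than the paper's settings, can be written as a Kohonen layer trained on the histograms plus a Grossberg layer that learns the label distribution of each unit:

# Step 5): a generic Counter Propagation Network producing a 2-D category map.
# Labels are assumed to come from the ART-2 stage (step 4), which is omitted;
# grid size, learning rates, and the neighborhood schedule are assumptions.
import numpy as np

class CounterPropagationNetwork:
    def __init__(self, grid=(10, 10), in_dim=64, n_labels=10,
                 alpha=0.5, beta=0.5, seed=0):
        rng = np.random.default_rng(seed)
        self.grid = grid
        self.kohonen = rng.random((grid[0], grid[1], in_dim))      # input weights
        self.grossberg = rng.random((grid[0], grid[1], n_labels))  # label weights
        self.alpha, self.beta = alpha, beta

    def winner(self, x):
        """Return the grid position of the best-matching Kohonen unit."""
        dist = np.linalg.norm(self.kohonen - x, axis=2)
        return np.unravel_index(np.argmin(dist), self.grid)

    def train(self, histograms, labels_onehot, epochs=100, radius=3.0):
        ii, jj = np.meshgrid(np.arange(self.grid[0]),
                             np.arange(self.grid[1]), indexing="ij")
        for t in range(epochs):
            r = max(radius * (1.0 - t / epochs), 1.0)   # shrinking neighborhood
            for x, y in zip(histograms, labels_onehot):
                wi, wj = self.winner(x)
                nb = np.exp(-((ii - wi) ** 2 + (jj - wj) ** 2) / (2 * r ** 2))[..., None]
                self.kohonen += self.alpha * nb * (x - self.kohonen)
                self.grossberg += self.beta * nb * (y - self.grossberg)

    def category_map(self):
        """Label each unit with its strongest Grossberg weight to visualize
        spatial relations between categories."""
        return np.argmax(self.grossberg, axis=2)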
URL: https://global.ieice.org/en_transactions/information/10.1587/transinf.E94.D.127/_p
@ARTICLE{e94-d_1_127,
author={Masahiro TSUKADA and Yuya UTSUMI and Hirokazu MADOKORO and Kazuhito SATO},
journal={IEICE TRANSACTIONS on Information and Systems},
title={Unsupervised Feature Selection and Category Classification for a Vision-Based Mobile Robot},
year={2011},
volume={E94-D},
number={1},
pages={127-136},
abstract={This paper presents an unsupervised learning-based method for selection of feature points and object category classification without previous setting of the number of categories. Our method consists of the following procedures: 1) detection of feature points and description of features using a Scale-Invariant Feature Transform (SIFT), 2) selection of target feature points using One Class-Support Vector Machines (OC-SVMs), 3) generation of visual words of all SIFT descriptors and histograms in each image of selected feature points using Self-Organizing Maps (SOMs), 4) formation of labels using Adaptive Resonance Theory-2 (ART-2), and 5) creation and classification of categories on a category map of Counter Propagation Networks (CPNs) for visualizing spatial relations between categories. Classification results of static images using a Caltech-256 object category dataset and dynamic images using time-series images obtained using a robot according to movements respectively demonstrate that our method can visualize spatial relations of categories while maintaining time-series characteristics. Moreover, we emphasize the effectiveness of our method for category classification of appearance changes of objects.},
keywords={},
doi={10.1587/transinf.E94.D.127},
ISSN={1745-1361},
month={January},
}
TY - JOUR
TI - Unsupervised Feature Selection and Category Classification for a Vision-Based Mobile Robot
T2 - IEICE TRANSACTIONS on Information and Systems
SP - 127
EP - 136
AU - Masahiro TSUKADA
AU - Yuya UTSUMI
AU - Hirokazu MADOKORO
AU - Kazuhito SATO
PY - 2011
DO - 10.1587/transinf.E94.D.127
JO - IEICE TRANSACTIONS on Information and Systems
SN - 1745-1361
VL - E94-D
IS - 1
JA - IEICE TRANSACTIONS on Information and Systems
Y1 - January 2011
AB - This paper presents an unsupervised learning-based method for selection of feature points and object category classification without previous setting of the number of categories. Our method consists of the following procedures: 1) detection of feature points and description of features using a Scale-Invariant Feature Transform (SIFT), 2) selection of target feature points using One Class-Support Vector Machines (OC-SVMs), 3) generation of visual words of all SIFT descriptors and histograms in each image of selected feature points using Self-Organizing Maps (SOMs), 4) formation of labels using Adaptive Resonance Theory-2 (ART-2), and 5) creation and classification of categories on a category map of Counter Propagation Networks (CPNs) for visualizing spatial relations between categories. Classification results of static images using a Caltech-256 object category dataset and dynamic images using time-series images obtained using a robot according to movements respectively demonstrate that our method can visualize spatial relations of categories while maintaining time-series characteristics. Moreover, we emphasize the effectiveness of our method for category classification of appearance changes of objects.
ER -