This paper proposes a framework for automatically annotating the keypoints of a human body in images for learning 2D pose estimation models. Ground-truth annotations for supervised learning are difficult and cumbersome in most machine vision tasks. While considerable contributions in the community provide us a huge number of pose-annotated images, all of them mainly focus on people wearing common clothes, which are relatively easy to annotate the body keypoints. This paper, on the other hand, focuses on annotating people wearing loose-fitting clothes (e.g., Japanese Kimono) that occlude many body keypoints. In order to automatically and correctly annotate these people, we divert the 3D coordinates of the keypoints observed without loose-fitting clothes, which can be captured by a motion capture system (MoCap). These 3D keypoints are projected to an image where the body pose under loose-fitting clothes is similar to the one captured by the MoCap. Pose similarity between bodies with and without loose-fitting clothes is evaluated with 3D geometric configurations of MoCap markers that are visible even with loose-fitting clothes (e.g., markers on the head, wrists, and ankles). Experimental results validate the effectiveness of our proposed framework for human pose estimation.
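The abstract describes two operations: projecting 3D MoCap keypoints into an image, and scoring pose similarity from the few markers that remain visible under loose-fitting clothes (head, wrists, ankles). The Python/NumPy sketch below is not from the paper; it illustrates one plausible reading under simplifying assumptions: a single 3x4 pinhole projection matrix P, and a centroid-centered mean marker distance as the similarity score. All function names, variable names, and the similarity metric are illustrative assumptions, not the authors' implementation.

# Minimal sketch (assumed, not the authors' code) of keypoint projection and
# visible-marker pose similarity for MoCap-based annotation transfer.
import numpy as np

def project_keypoints(points_3d: np.ndarray, P: np.ndarray) -> np.ndarray:
    """Project Nx3 world-space keypoints to Nx2 pixel coordinates.

    P is a 3x4 camera projection matrix (intrinsics @ [R | t]); a simple
    pinhole model without lens distortion is assumed here.
    """
    homo = np.hstack([points_3d, np.ones((points_3d.shape[0], 1))])  # Nx4 homogeneous
    uvw = (P @ homo.T).T                                             # Nx3 image-space
    return uvw[:, :2] / uvw[:, 2:3]                                  # divide by depth

def pose_similarity(markers_a: np.ndarray, markers_b: np.ndarray) -> float:
    """Score similarity of two Kx3 marker configurations (e.g., head, wrists, ankles).

    Each configuration is centered on its own centroid so relative geometry is
    compared rather than absolute position; rotation alignment is ignored for
    simplicity. Higher (less negative) values mean more similar poses.
    """
    a = markers_a - markers_a.mean(axis=0)
    b = markers_b - markers_b.mean(axis=0)
    return -float(np.linalg.norm(a - b, axis=1).mean())

# Hypothetical usage: pick the MoCap frame (captured without loose clothes)
# whose visible-marker configuration best matches the clothed subject, then
# project that frame's full 3D keypoints into the image as pseudo annotations.
# best = max(mocap_frames, key=lambda f: pose_similarity(f["visible"], target_visible))
# annotations_2d = project_keypoints(best["all_keypoints"], P)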
Takuya MATSUMOTO
Toyota Technological Institute
Kodai SHIMOSATO
Toyota Technological Institute
Takahiro MAEDA
Toyota Technological Institute
Tatsuya MURAKAMI
Toyota Technological Institute
Koji MURAKOSO
Toei Digital Center
Kazuhiko MINO
Toei Digital Center
Norimichi UKITA
Toyota Technological Institute
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Takuya MATSUMOTO, Kodai SHIMOSATO, Takahiro MAEDA, Tatsuya MURAKAMI, Koji MURAKOSO, Kazuhiko MINO, Norimichi UKITA, "Human Pose Annotation Using a Motion Capture System for Loose-Fitting Clothes" in IEICE TRANSACTIONS on Information,
vol. E103-D, no. 6, pp. 1257-1264, June 2020, doi: 10.1587/transinf.2019MVP0007.
Abstract: This paper proposes a framework for automatically annotating the keypoints of a human body in images for learning 2D pose estimation models. Ground-truth annotations for supervised learning are difficult and cumbersome in most machine vision tasks. While considerable contributions in the community provide us a huge number of pose-annotated images, all of them mainly focus on people wearing common clothes, which are relatively easy to annotate the body keypoints. This paper, on the other hand, focuses on annotating people wearing loose-fitting clothes (e.g., Japanese Kimono) that occlude many body keypoints. In order to automatically and correctly annotate these people, we divert the 3D coordinates of the keypoints observed without loose-fitting clothes, which can be captured by a motion capture system (MoCap). These 3D keypoints are projected to an image where the body pose under loose-fitting clothes is similar to the one captured by the MoCap. Pose similarity between bodies with and without loose-fitting clothes is evaluated with 3D geometric configurations of MoCap markers that are visible even with loose-fitting clothes (e.g., markers on the head, wrists, and ankles). Experimental results validate the effectiveness of our proposed framework for human pose estimation.
URL: https://global.ieice.org/en_transactions/information/10.1587/transinf.2019MVP0007/_p
@ARTICLE{e103-d_6_1257,
author={Takuya MATSUMOTO and Kodai SHIMOSATO and Takahiro MAEDA and Tatsuya MURAKAMI and Koji MURAKOSO and Kazuhiko MINO and Norimichi UKITA},
journal={IEICE TRANSACTIONS on Information},
title={Human Pose Annotation Using a Motion Capture System for Loose-Fitting Clothes},
year={2020},
volume={E103-D},
number={6},
pages={1257-1264},
abstract={This paper proposes a framework for automatically annotating the keypoints of a human body in images for learning 2D pose estimation models. Ground-truth annotations for supervised learning are difficult and cumbersome in most machine vision tasks. While considerable contributions in the community provide us a huge number of pose-annotated images, all of them mainly focus on people wearing common clothes, which are relatively easy to annotate the body keypoints. This paper, on the other hand, focuses on annotating people wearing loose-fitting clothes (e.g., Japanese Kimono) that occlude many body keypoints. In order to automatically and correctly annotate these people, we divert the 3D coordinates of the keypoints observed without loose-fitting clothes, which can be captured by a motion capture system (MoCap). These 3D keypoints are projected to an image where the body pose under loose-fitting clothes is similar to the one captured by the MoCap. Pose similarity between bodies with and without loose-fitting clothes is evaluated with 3D geometric configurations of MoCap markers that are visible even with loose-fitting clothes (e.g., markers on the head, wrists, and ankles). Experimental results validate the effectiveness of our proposed framework for human pose estimation.},
keywords={},
doi={10.1587/transinf.2019MVP0007},
ISSN={1745-1361},
month={June},
}
TY - JOUR
TI - Human Pose Annotation Using a Motion Capture System for Loose-Fitting Clothes
T2 - IEICE TRANSACTIONS on Information
SP - 1257
EP - 1264
AU - Takuya MATSUMOTO
AU - Kodai SHIMOSATO
AU - Takahiro MAEDA
AU - Tatsuya MURAKAMI
AU - Koji MURAKOSO
AU - Kazuhiko MINO
AU - Norimichi UKITA
PY - 2020
DO - 10.1587/transinf.2019MVP0007
JO - IEICE TRANSACTIONS on Information
SN - 1745-1361
VL - E103-D
IS - 6
JA - IEICE TRANSACTIONS on Information
Y1 - June 2020
AB - This paper proposes a framework for automatically annotating the keypoints of a human body in images for learning 2D pose estimation models. Ground-truth annotations for supervised learning are difficult and cumbersome in most machine vision tasks. While considerable contributions in the community provide us a huge number of pose-annotated images, all of them mainly focus on people wearing common clothes, which are relatively easy to annotate the body keypoints. This paper, on the other hand, focuses on annotating people wearing loose-fitting clothes (e.g., Japanese Kimono) that occlude many body keypoints. In order to automatically and correctly annotate these people, we divert the 3D coordinates of the keypoints observed without loose-fitting clothes, which can be captured by a motion capture system (MoCap). These 3D keypoints are projected to an image where the body pose under loose-fitting clothes is similar to the one captured by the MoCap. Pose similarity between bodies with and without loose-fitting clothes is evaluated with 3D geometric configurations of MoCap markers that are visible even with loose-fitting clothes (e.g., markers on the head, wrists, and ankles). Experimental results validate the effectiveness of our proposed framework for human pose estimation.
ER -