Masashi NISHIYAMA
Tottori University
Michiko INOUE
Tottori University
Yoshio IWAI
Tottori University
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Masashi NISHIYAMA, Michiko INOUE, Yoshio IWAI, "Gender Recognition Using a Gaze-Guided Self-Attention Mechanism Robust Against Background Bias in Training Samples" in IEICE TRANSACTIONS on Information and Systems,
vol. E105-D, no. 2, pp. 415-426, February 2022, doi: 10.1587/transinf.2021EDP7117.
Abstract: We propose an attention mechanism in deep learning networks for gender recognition using the gaze distribution of human observers when they judge the gender of people in pedestrian images. Prevalent attention mechanisms spatially compute the correlation among values of all cells in an input feature map to calculate attention weights. If a large bias in the background of pedestrian images (e.g., test samples and training samples containing different backgrounds) is present, the attention weights learned using the prevalent attention mechanisms are affected by the bias, which in turn reduces the accuracy of gender recognition. To avoid this problem, we incorporate an attention mechanism called gaze-guided self-attention (GSA) that is inspired by human visual attention. Our method assigns spatially suitable attention weights to each input feature map using the gaze distribution of human observers. In particular, GSA yields promising results even when using training samples with the background bias. The results of experiments on publicly available datasets confirm that our GSA, using the gaze distribution, is more accurate in gender recognition than currently available attention-based methods in the case of background bias between training and test samples.
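The abstract describes replacing learned cell-to-cell correlations with spatial weights derived from a human gaze distribution. As a rough illustration only (the function name, array shapes, and min-max normalization below are assumptions for the sketch, not the paper's actual GSA module), the core weighting step might look like:

```python
import numpy as np

def gaze_guided_attention(feature_map, gaze_map):
    """Weight a feature map by a human gaze distribution.

    A minimal sketch of the idea only; the paper's actual GSA layers
    and their normalization are not reproduced here.
    feature_map: (C, H, W) array of activations.
    gaze_map:    (H, W) array of gaze fixation density.
    """
    # Normalize the gaze density to [0, 1] so it acts as a spatial mask.
    g = gaze_map - gaze_map.min()
    rng = g.max()
    if rng > 0:
        g = g / rng
    # Broadcast the spatial weights over every channel.
    return feature_map * g[np.newaxis, :, :]
```

Because the weights come from the observers' gaze map rather than from correlations among all cells, background cells that observers never fixate on contribute little, which is the intuition behind the robustness to background bias claimed above.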
URL: https://global.ieice.org/en_transactions/information/10.1587/transinf.2021EDP7117/_p
@ARTICLE{e105-d_2_415,
author={Masashi NISHIYAMA and Michiko INOUE and Yoshio IWAI},
journal={IEICE TRANSACTIONS on Information and Systems},
title={Gender Recognition Using a Gaze-Guided Self-Attention Mechanism Robust Against Background Bias in Training Samples},
year={2022},
volume={E105-D},
number={2},
pages={415-426},
abstract={We propose an attention mechanism in deep learning networks for gender recognition using the gaze distribution of human observers when they judge the gender of people in pedestrian images. Prevalent attention mechanisms spatially compute the correlation among values of all cells in an input feature map to calculate attention weights. If a large bias in the background of pedestrian images (e.g., test samples and training samples containing different backgrounds) is present, the attention weights learned using the prevalent attention mechanisms are affected by the bias, which in turn reduces the accuracy of gender recognition. To avoid this problem, we incorporate an attention mechanism called gaze-guided self-attention (GSA) that is inspired by human visual attention. Our method assigns spatially suitable attention weights to each input feature map using the gaze distribution of human observers. In particular, GSA yields promising results even when using training samples with the background bias. The results of experiments on publicly available datasets confirm that our GSA, using the gaze distribution, is more accurate in gender recognition than currently available attention-based methods in the case of background bias between training and test samples.},
keywords={},
doi={10.1587/transinf.2021EDP7117},
ISSN={1745-1361},
month={February},}
TY - JOUR
TI - Gender Recognition Using a Gaze-Guided Self-Attention Mechanism Robust Against Background Bias in Training Samples
T2 - IEICE TRANSACTIONS on Information and Systems
SP - 415
EP - 426
AU - Masashi NISHIYAMA
AU - Michiko INOUE
AU - Yoshio IWAI
PY - 2022
DO - 10.1587/transinf.2021EDP7117
JO - IEICE TRANSACTIONS on Information and Systems
SN - 1745-1361
VL - E105-D
IS - 2
JA - IEICE TRANSACTIONS on Information and Systems
Y1 - 2022/02//
AB - We propose an attention mechanism in deep learning networks for gender recognition using the gaze distribution of human observers when they judge the gender of people in pedestrian images. Prevalent attention mechanisms spatially compute the correlation among values of all cells in an input feature map to calculate attention weights. If a large bias in the background of pedestrian images (e.g., test samples and training samples containing different backgrounds) is present, the attention weights learned using the prevalent attention mechanisms are affected by the bias, which in turn reduces the accuracy of gender recognition. To avoid this problem, we incorporate an attention mechanism called gaze-guided self-attention (GSA) that is inspired by human visual attention. Our method assigns spatially suitable attention weights to each input feature map using the gaze distribution of human observers. In particular, GSA yields promising results even when using training samples with the background bias. The results of experiments on publicly available datasets confirm that our GSA, using the gaze distribution, is more accurate in gender recognition than currently available attention-based methods in the case of background bias between training and test samples.
ER -