The original paper is in English. Non-English content on this page has been machine-translated and may contain typographical errors or mistranslations; for example, some numerals are rendered as "XNUMX".
Copyright notice
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Naiwala Pathirannehelage CHANDRASIRI, Takeshi NAEMURA, Hiroshi HARASHIMA, "Real Time Facial Expression Recognition System with Applications to Facial Animation in MPEG-4" in IEICE TRANSACTIONS on Information,
vol. E84-D, no. 8, pp. 1007-1017, August 2001, doi: 10.1587/e84-d_8_1007.
Abstract: This paper discusses recognition up to intensities of mix of primary facial expressions in real time. The proposed recognition method is compatible with the MPEG-4 high level expression Facial Animation Parameter (FAP). In our method, the whole facial image is considered as a single pattern without any block segmentation. As model features, an expression vector, viz. low global frequency coefficient (DCT) changes relative to neutral facial image of a person is used. These features are robust and good enough to deal with real time processing. To construct a person specific model, apex images of primary facial expression categories are utilized as references. Personal facial expression space (PFES) is constructed by using multidimensional scaling. PFES with its generalization capability maps an unknown input image relative to known reference images. As PFES possesses linear mapping characteristics, MPEG-4 high level expression FAP can be easily calculated by the location of the input face on PFES. Also, temporal variations of facial expressions can be seen on PFES as trajectories. Experimental results are shown to demonstrate the effectiveness of the proposed method.
URL: https://global.ieice.org/en_transactions/information/10.1587/e84-d_8_1007/_p
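The feature extraction described in the abstract (changes in the low global-frequency 2-D DCT coefficients of the whole face image, taken relative to the same person's neutral face) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the image size, the 8x8 low-frequency cutoff, and all function names are assumptions.

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal (DCT-II) basis matrix; rows are cosine basis vectors
    # ordered from lowest to highest spatial frequency.
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    c[0, :] /= np.sqrt(2.0)
    return c

def expression_vector(face, neutral, k=8):
    # Whole-image 2-D DCT of the difference from the neutral face
    # (no block segmentation); keep only the k x k lowest-frequency
    # (top-left) coefficients. k=8 and the square-image assumption
    # are illustrative choices, not values taken from the paper.
    c = dct_matrix(face.shape[0])
    coeffs = c @ (face - neutral).astype(float) @ c.T
    return coeffs[:k, :k].ravel()

# Toy usage: a synthetic 64x64 "neutral" face and a brightened variant.
rng = np.random.default_rng(0)
neutral = rng.random((64, 64))
face = neutral + 0.1
print(expression_vector(face, neutral).shape)  # (64,)
```

Because only a handful of low-frequency coefficients are kept, the vector is compact and cheap to compute, which is consistent with the abstract's claim of real-time operation; the reference vectors for the apex images of the primary expression categories would be computed the same way.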
@ARTICLE{e84-d_8_1007,
author={Naiwala Pathirannehelage CHANDRASIRI and Takeshi NAEMURA and Hiroshi HARASHIMA},
journal={IEICE TRANSACTIONS on Information},
title={Real Time Facial Expression Recognition System with Applications to Facial Animation in MPEG-4},
year={2001},
volume={E84-D},
number={8},
pages={1007-1017},
abstract={This paper discusses recognition up to intensities of mix of primary facial expressions in real time. The proposed recognition method is compatible with the MPEG-4 high level expression Facial Animation Parameter (FAP). In our method, the whole facial image is considered as a single pattern without any block segmentation. As model features, an expression vector, viz. low global frequency coefficient (DCT) changes relative to neutral facial image of a person is used. These features are robust and good enough to deal with real time processing. To construct a person specific model, apex images of primary facial expression categories are utilized as references. Personal facial expression space (PFES) is constructed by using multidimensional scaling. PFES with its generalization capability maps an unknown input image relative to known reference images. As PFES possesses linear mapping characteristics, MPEG-4 high level expression FAP can be easily calculated by the location of the input face on PFES. Also, temporal variations of facial expressions can be seen on PFES as trajectories. Experimental results are shown to demonstrate the effectiveness of the proposed method.},
keywords={},
doi={10.1587/e84-d_8_1007},
ISSN={},
month={August},}
TY - JOUR
TI - Real Time Facial Expression Recognition System with Applications to Facial Animation in MPEG-4
T2 - IEICE TRANSACTIONS on Information
SP - 1007
EP - 1017
AU - Naiwala Pathirannehelage CHANDRASIRI
AU - Takeshi NAEMURA
AU - Hiroshi HARASHIMA
PY - 2001
DO - 10.1587/e84-d_8_1007
JO - IEICE TRANSACTIONS on Information
SN -
VL - E84-D
IS - 8
JA - IEICE TRANSACTIONS on Information
Y1 - August 2001
AB - This paper discusses recognition up to intensities of mix of primary facial expressions in real time. The proposed recognition method is compatible with the MPEG-4 high level expression Facial Animation Parameter (FAP). In our method, the whole facial image is considered as a single pattern without any block segmentation. As model features, an expression vector, viz. low global frequency coefficient (DCT) changes relative to neutral facial image of a person is used. These features are robust and good enough to deal with real time processing. To construct a person specific model, apex images of primary facial expression categories are utilized as references. Personal facial expression space (PFES) is constructed by using multidimensional scaling. PFES with its generalization capability maps an unknown input image relative to known reference images. As PFES possesses linear mapping characteristics, MPEG-4 high level expression FAP can be easily calculated by the location of the input face on PFES. Also, temporal variations of facial expressions can be seen on PFES as trajectories. Experimental results are shown to demonstrate the effectiveness of the proposed method.
ER -