The original paper is in English. Non-English content has been machine-translated and may contain typographical errors or mistranslations (e.g., some numerals may appear as "XNUMX").
Copyright notice
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Makoto SAKAI, Norihide KITAOKA, Kazuya TAKEDA, "Acoustic Feature Transformation Combining Average and Maximum Classification Error Minimization Criteria" in IEICE TRANSACTIONS on Information,
vol. E93-D, no. 7, pp. 2005-2008, July 2010, doi: 10.1587/transinf.E93.D.2005.
Abstract: Acoustic feature transformation is widely used to reduce dimensionality and improve speech recognition performance. In this letter we focus on dimensionality reduction methods that minimize the average classification error. Unfortunately, minimization of the average classification error may cause considerable overlaps between distributions of some classes. To mitigate risks of considerable overlaps, we propose a dimensionality reduction method that minimizes the maximum classification error. We also propose two interpolated methods that can describe the average and maximum classification errors. Experimental results show that these proposed methods improve speech recognition performance.
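The abstract describes criteria that interpolate between the average and the maximum classification error. As a minimal sketch of what such an interpolated criterion could look like, the function below takes per-class error rates and a mixing weight `alpha`; this is a hypothetical illustration only, and does not reproduce the paper's actual formulation.

```python
import numpy as np

def interpolated_error(class_errors, alpha):
    """Convex combination of average and maximum per-class error.

    Hypothetical illustration of an 'interpolated' criterion between
    the average-error and maximum-error objectives; the paper's own
    two interpolated methods are not reproduced here.
    """
    class_errors = np.asarray(class_errors, dtype=float)
    avg = class_errors.mean()  # average classification error
    mx = class_errors.max()    # maximum (worst-class) classification error
    return (1.0 - alpha) * avg + alpha * mx

# alpha = 0 recovers the average-error criterion,
# alpha = 1 the maximum-error criterion.
errs = [0.10, 0.05, 0.30]
print(interpolated_error(errs, 0.0))  # average = 0.15
print(interpolated_error(errs, 1.0))  # maximum = 0.30
```

Minimizing only the average (`alpha = 0`) can leave one class with a large error, which is the overlap risk the abstract mentions; increasing `alpha` penalizes the worst class more heavily.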
URL: https://global.ieice.org/en_transactions/information/10.1587/transinf.E93.D.2005/_p
@ARTICLE{e93-d_7_2005,
author={Makoto SAKAI and Norihide KITAOKA and Kazuya TAKEDA},
journal={IEICE TRANSACTIONS on Information},
title={Acoustic Feature Transformation Combining Average and Maximum Classification Error Minimization Criteria},
year={2010},
volume={E93-D},
number={7},
pages={2005-2008},
abstract={Acoustic feature transformation is widely used to reduce dimensionality and improve speech recognition performance. In this letter we focus on dimensionality reduction methods that minimize the average classification error. Unfortunately, minimization of the average classification error may cause considerable overlaps between distributions of some classes. To mitigate risks of considerable overlaps, we propose a dimensionality reduction method that minimizes the maximum classification error. We also propose two interpolated methods that can describe the average and maximum classification errors. Experimental results show that these proposed methods improve speech recognition performance.},
keywords={},
doi={10.1587/transinf.E93.D.2005},
ISSN={1745-1361},
month={July},}
TY - JOUR
TI - Acoustic Feature Transformation Combining Average and Maximum Classification Error Minimization Criteria
T2 - IEICE TRANSACTIONS on Information
SP - 2005
EP - 2008
AU - Makoto SAKAI
AU - Norihide KITAOKA
AU - Kazuya TAKEDA
PY - 2010
DO - 10.1587/transinf.E93.D.2005
JO - IEICE TRANSACTIONS on Information
SN - 1745-1361
VL - E93-D
IS - 7
JA - IEICE TRANSACTIONS on Information
Y1 - 2010/07//
AB - Acoustic feature transformation is widely used to reduce dimensionality and improve speech recognition performance. In this letter we focus on dimensionality reduction methods that minimize the average classification error. Unfortunately, minimization of the average classification error may cause considerable overlaps between distributions of some classes. To mitigate risks of considerable overlaps, we propose a dimensionality reduction method that minimizes the maximum classification error. We also propose two interpolated methods that can describe the average and maximum classification errors. Experimental results show that these proposed methods improve speech recognition performance.
ER -