Manaya TOMIOKA, Doshisha University
Tsuneo KATO, Doshisha University
Akihiro TAMURA, Doshisha University
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Manaya TOMIOKA, Tsuneo KATO, Akihiro TAMURA, "Analysis on Norms of Word Embedding and Hidden Vectors in Neural Conversational Model Based on Encoder-Decoder RNN" in IEICE TRANSACTIONS on Information, vol. E105-D, no. 10, pp. 1780-1789, October 2022, doi: 10.1587/transinf.2021EDP7227.
Abstract: A neural conversational model (NCM) based on an encoder-decoder recurrent neural network (RNN) with an attention mechanism learns different sequence-to-sequence mappings from what neural machine translation (NMT) learns even when based on the same technique. In the NCM, we confirmed that target-word-to-source-word mappings captured by the attention mechanism are not as clear and stationary as those for NMT. Considering that vector norms indicate a magnitude of information in the processing, we analyzed the inner workings of an encoder-decoder GRU-based NCM focusing on the norms of word embedding vectors and hidden vectors. First, we conducted correlation analyses on the norms of word embedding vectors with frequencies in the training set and with conditional entropies of a bi-gram language model to understand what is correlated with the norms in the encoder and decoder. Second, we conducted correlation analyses on norms of change in the hidden vector of the recurrent layer with their input vectors for the encoder and decoder, respectively. These analyses were done to understand how the magnitude of information propagates through the network. The analytical results suggested that the norms of the word embedding vectors are associated with their semantic information in the encoder, while those are associated with the predictability as a language model in the decoder. The analytical results further revealed how the norms propagate through the recurrent layer in the encoder and decoder.
URL: https://global.ieice.org/en_transactions/information/10.1587/transinf.2021EDP7227/_p
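To make the abstract's first analysis concrete, the sketch below computes the L2 norms of word embedding vectors and their Spearman correlations with log training-set frequencies and with conditional entropies of a maximum-likelihood bi-gram model. This is a minimal illustration, not the paper's code: the toy vocabulary, corpus, and random embedding matrix are hypothetical stand-ins for the trained NCM's embeddings and training data, and taking the per-word entropy of the next-word distribution is one plausible reading of "conditional entropies of a bi-gram language model".

import math
from collections import Counter

import numpy as np
from scipy.stats import spearmanr

def embedding_norms(E):
    # L2 norm of each row of a (vocab_size, dim) embedding matrix.
    return np.linalg.norm(E, axis=1)

def log_frequencies(corpus, vocab):
    # Add-one log frequency of each vocabulary word in the training set.
    counts = Counter(w for sent in corpus for w in sent)
    return np.array([math.log(counts[w] + 1) for w in vocab])

def bigram_conditional_entropies(corpus, vocab):
    # Entropy (bits) of the ML bi-gram next-word distribution P(. | w):
    # low entropy means the continuation of w is highly predictable.
    follow = {w: Counter() for w in vocab}
    for sent in corpus:
        for w, nxt in zip(sent, sent[1:]):
            if w in follow:
                follow[w][nxt] += 1
    entropies = []
    for w in vocab:
        total = sum(follow[w].values())
        entropies.append(0.0 if total == 0 else
                         -sum(c / total * math.log2(c / total)
                              for c in follow[w].values()))
    return np.array(entropies)

# Toy stand-ins; the paper uses the trained NCM's embeddings and training set.
vocab = ["i", "you", "like", "drink", "tea", "coffee", "<eos>"]
corpus = [["i", "like", "tea", "<eos>"],
          ["you", "like", "coffee", "<eos>"],
          ["i", "drink", "tea", "<eos>"]]
E = np.random.default_rng(0).normal(size=(len(vocab), 8))

norms = embedding_norms(E)
print(spearmanr(norms, log_frequencies(corpus, vocab)))
print(spearmanr(norms, bigram_conditional_entropies(corpus, vocab)))

In the paper these correlations are computed separately for the encoder and the decoder embeddings, which is what supports the contrast the abstract draws between semantic information in the encoder and language-model predictability in the decoder.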
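The second analysis concerns how the magnitude of information propagates through the recurrent layer. The sketch below shows one way to collect the norm of the change in a GRU's hidden state, ||h_t - h_{t-1}||, alongside the norm of its input vector ||x_t||, and to correlate the two. The GRUCell here is untrained and the inputs are random stand-ins; the paper instruments its trained encoder and decoder on real dialogue data.

import torch
from scipy.stats import spearmanr

torch.manual_seed(0)
dim_in, dim_h, steps = 64, 128, 30
# Untrained stand-in for the NCM's recurrent layer.
gru = torch.nn.GRUCell(dim_in, dim_h)

xs = torch.randn(steps, dim_in)  # stand-in for a sequence of embedded input words
h = torch.zeros(dim_h)
input_norms, change_norms = [], []
with torch.no_grad():
    for x in xs:
        h_next = gru(x.unsqueeze(0), h.unsqueeze(0)).squeeze(0)
        input_norms.append(torch.norm(x).item())            # ||x_t||
        change_norms.append(torch.norm(h_next - h).item())  # ||h_t - h_{t-1}||
        h = h_next

rho, _ = spearmanr(input_norms, change_norms)
print(f"Spearman correlation of ||x_t|| with ||h_t - h_(t-1)||: {rho:.3f}")

For the decoder, the same loop would be run with the decoder's input vectors (e.g., the previous output word embedding concatenated with the attention context), so the propagation of norms can be compared between encoder and decoder as in the abstract.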
@ARTICLE{e105-d_10_1780,
author={Manaya TOMIOKA and Tsuneo KATO and Akihiro TAMURA},
journal={IEICE TRANSACTIONS on Information},
title={Analysis on Norms of Word Embedding and Hidden Vectors in Neural Conversational Model Based on Encoder-Decoder RNN},
year={2022},
volume={E105-D},
number={10},
pages={1780-1789},
abstract={A neural conversational model (NCM) based on an encoder-decoder recurrent neural network (RNN) with an attention mechanism learns different sequence-to-sequence mappings from what neural machine translation (NMT) learns even when based on the same technique. In the NCM, we confirmed that target-word-to-source-word mappings captured by the attention mechanism are not as clear and stationary as those for NMT. Considering that vector norms indicate a magnitude of information in the processing, we analyzed the inner workings of an encoder-decoder GRU-based NCM focusing on the norms of word embedding vectors and hidden vectors. First, we conducted correlation analyses on the norms of word embedding vectors with frequencies in the training set and with conditional entropies of a bi-gram language model to understand what is correlated with the norms in the encoder and decoder. Second, we conducted correlation analyses on norms of change in the hidden vector of the recurrent layer with their input vectors for the encoder and decoder, respectively. These analyses were done to understand how the magnitude of information propagates through the network. The analytical results suggested that the norms of the word embedding vectors are associated with their semantic information in the encoder, while those are associated with the predictability as a language model in the decoder. The analytical results further revealed how the norms propagate through the recurrent layer in the encoder and decoder.},
keywords={},
doi={10.1587/transinf.2021EDP7227},
ISSN={1745-1361},
month={October}
}
TY - JOUR
TI - Analysis on Norms of Word Embedding and Hidden Vectors in Neural Conversational Model Based on Encoder-Decoder RNN
T2 - IEICE TRANSACTIONS on Information
SP - 1780
EP - 1789
AU - Manaya TOMIOKA
AU - Tsuneo KATO
AU - Akihiro TAMURA
PY - 2022
DO - 10.1587/transinf.2021EDP7227
JO - IEICE TRANSACTIONS on Information
SN - 1745-1361
VL - E105-D
IS - 10
JA - IEICE TRANSACTIONS on Information
Y1 - October 2022
AB - A neural conversational model (NCM) based on an encoder-decoder recurrent neural network (RNN) with an attention mechanism learns different sequence-to-sequence mappings from what neural machine translation (NMT) learns even when based on the same technique. In the NCM, we confirmed that target-word-to-source-word mappings captured by the attention mechanism are not as clear and stationary as those for NMT. Considering that vector norms indicate a magnitude of information in the processing, we analyzed the inner workings of an encoder-decoder GRU-based NCM focusing on the norms of word embedding vectors and hidden vectors. First, we conducted correlation analyses on the norms of word embedding vectors with frequencies in the training set and with conditional entropies of a bi-gram language model to understand what is correlated with the norms in the encoder and decoder. Second, we conducted correlation analyses on norms of change in the hidden vector of the recurrent layer with their input vectors for the encoder and decoder, respectively. These analyses were done to understand how the magnitude of information propagates through the network. The analytical results suggested that the norms of the word embedding vectors are associated with their semantic information in the encoder, while those are associated with the predictability as a language model in the decoder. The analytical results further revealed how the norms propagate through the recurrent layer in the encoder and decoder.
ER -