The original paper is in English. Non-English content has been machine-translated and may contain typographical errors or mistranslations; e.g., some numerals may be rendered as "XNUMX".
Chihiro WATANABE
NTT Communication Science Laboratories
Kaoru HIRAMATSU
NTT Geospace Corporation, NEXTSITE Asakusa Building
Kunio KASHINO
NTT Communication Science Laboratories
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Chihiro WATANABE, Kaoru HIRAMATSU, Kunio KASHINO, "Knowledge Discovery from Layered Neural Networks Based on Non-negative Task Matrix Decomposition" in IEICE TRANSACTIONS on Information,
vol. E103-D, no. 2, pp. 390-397, February 2020, doi: 10.1587/transinf.2019EDP7136.
Abstract: Interpretability has become an important issue in the machine learning field, along with the success of layered neural networks in various practical tasks. Since a trained layered neural network consists of a complex nonlinear relationship between a large number of parameters, we cannot understand how it achieves input-output mappings with a given data set. In this paper, we propose the non-negative task matrix decomposition method, which applies non-negative matrix factorization to a trained layered neural network. This enables us to decompose the inference mechanism of a trained layered neural network into multiple principal tasks of input-output mapping, and reveal the roles of hidden units in terms of their contribution to each principal task.
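The core operation the abstract describes is non-negative matrix factorization applied to a matrix derived from a trained network. As a minimal sketch of that idea, the following uses plain multiplicative-update NMF on a hypothetical non-negative "task matrix" whose rows stand for hidden units; the matrix construction here is an illustrative stand-in, not the paper's actual method.

```python
import numpy as np

def nmf(V, k, n_iter=200, eps=1e-9, seed=0):
    """Basic multiplicative-update NMF (Lee & Seung): V ~ W @ H."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, k))
    H = rng.random((k, m))
    for _ in range(n_iter):
        # Updates keep W and H non-negative and reduce reconstruction error.
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Hypothetical non-negative task matrix: rows = hidden units,
# columns = strength of input-output mapping components
# (a stand-in for the construction used in the paper).
rng = np.random.default_rng(1)
V = rng.random((20, 8))

W, H = nmf(V, k=3)
# W[i, t]: contribution of hidden unit i to principal task t
# H[t, j]: weight of principal task t on mapping component j
```

Reading off the rows of H gives the k "principal tasks", while the columns of W summarize each hidden unit's role in terms of its contribution to those tasks, which mirrors the interpretation described in the abstract.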
URL: https://global.ieice.org/en_transactions/information/10.1587/transinf.2019EDP7136/_p
@ARTICLE{e103-d_2_390,
author={Chihiro WATANABE and Kaoru HIRAMATSU and Kunio KASHINO},
journal={IEICE TRANSACTIONS on Information},
title={Knowledge Discovery from Layered Neural Networks Based on Non-negative Task Matrix Decomposition},
year={2020},
volume={E103-D},
number={2},
pages={390-397},
abstract={Interpretability has become an important issue in the machine learning field, along with the success of layered neural networks in various practical tasks. Since a trained layered neural network consists of a complex nonlinear relationship between a large number of parameters, we cannot understand how it achieves input-output mappings with a given data set. In this paper, we propose the non-negative task matrix decomposition method, which applies non-negative matrix factorization to a trained layered neural network. This enables us to decompose the inference mechanism of a trained layered neural network into multiple principal tasks of input-output mapping, and reveal the roles of hidden units in terms of their contribution to each principal task.},
keywords={},
doi={10.1587/transinf.2019EDP7136},
ISSN={1745-1361},
month={February},
}
TY - JOUR
TI - Knowledge Discovery from Layered Neural Networks Based on Non-negative Task Matrix Decomposition
T2 - IEICE TRANSACTIONS on Information
SP - 390
EP - 397
AU - Chihiro WATANABE
AU - Kaoru HIRAMATSU
AU - Kunio KASHINO
PY - 2020
DO - 10.1587/transinf.2019EDP7136
JO - IEICE TRANSACTIONS on Information
SN - 1745-1361
VL - E103-D
IS - 2
JA - IEICE TRANSACTIONS on Information
Y1 - February 2020
AB - Interpretability has become an important issue in the machine learning field, along with the success of layered neural networks in various practical tasks. Since a trained layered neural network consists of a complex nonlinear relationship between a large number of parameters, we cannot understand how it achieves input-output mappings with a given data set. In this paper, we propose the non-negative task matrix decomposition method, which applies non-negative matrix factorization to a trained layered neural network. This enables us to decompose the inference mechanism of a trained layered neural network into multiple principal tasks of input-output mapping, and reveal the roles of hidden units in terms of their contribution to each principal task.
ER -