Yasuhiro NAKAHARA
Kumamoto University
Masato KIYAMA
Kumamoto University
Motoki AMAGASAKI
Kumamoto University
Qian ZHAO
Kyushu Institute of Technology
Masahiro IIDA
Kumamoto University
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Yasuhiro NAKAHARA, Masato KIYAMA, Motoki AMAGASAKI, Qian ZHAO, Masahiro IIDA, "Reconfigurable Neural Network Accelerator and Simulator for Model Implementation" in IEICE TRANSACTIONS on Fundamentals,
vol. E105-A, no. 3, pp. 448-458, March 2022, doi: 10.1587/transfun.2021VLP0012.
Abstract: Low power consumption is important in edge artificial intelligence (AI) chips, where power supply is limited. Therefore, we propose reconfigurable neural network accelerator (ReNA), an AI chip that can process both a convolutional layer and fully connected layer with the same structure by reconfiguring the circuit. In addition, we developed tools for pre-evaluation of the performance when a deep neural network (DNN) model is implemented on ReNA. With this approach, we established the flow for the implementation of DNN models on ReNA and evaluated its power consumption. ReNA achieved 1.51TOPS/W in the convolutional layer and 1.38TOPS/W overall in a VGG16 model with a 70% pruning rate.
URL: https://global.ieice.org/en_transactions/fundamentals/10.1587/transfun.2021VLP0012/_p
@ARTICLE{e105-a_3_448,
author={Yasuhiro NAKAHARA and Masato KIYAMA and Motoki AMAGASAKI and Qian ZHAO and Masahiro IIDA},
journal={IEICE TRANSACTIONS on Fundamentals},
title={Reconfigurable Neural Network Accelerator and Simulator for Model Implementation},
year={2022},
volume={E105-A},
number={3},
pages={448-458},
abstract={Low power consumption is important in edge artificial intelligence (AI) chips, where power supply is limited. Therefore, we propose reconfigurable neural network accelerator (ReNA), an AI chip that can process both a convolutional layer and fully connected layer with the same structure by reconfiguring the circuit. In addition, we developed tools for pre-evaluation of the performance when a deep neural network (DNN) model is implemented on ReNA. With this approach, we established the flow for the implementation of DNN models on ReNA and evaluated its power consumption. ReNA achieved 1.51TOPS/W in the convolutional layer and 1.38TOPS/W overall in a VGG16 model with a 70% pruning rate.},
keywords={},
doi={10.1587/transfun.2021VLP0012},
ISSN={1745-1337},
month={March},}
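The BibTeX record above can also be consumed programmatically. Below is a minimal sketch using only Python's standard library; the regex-based extraction is a simplification that assumes single-line `field={value}` pairs with no nested braces (which holds for this entry), not a general-purpose BibTeX parser.

```python
import re

# The entry as exported above (abstract omitted for brevity).
BIBTEX = r"""@ARTICLE{e105-a_3_448,
  author={Yasuhiro NAKAHARA and Masato KIYAMA and Motoki AMAGASAKI and Qian ZHAO and Masahiro IIDA},
  journal={IEICE TRANSACTIONS on Fundamentals},
  title={Reconfigurable Neural Network Accelerator and Simulator for Model Implementation},
  year={2022},
  volume={E105-A},
  number={3},
  pages={448-458},
  doi={10.1587/transfun.2021VLP0012},
  ISSN={1745-1337},
  month={March},
}"""

def parse_bibtex_fields(entry: str) -> dict:
    """Extract simple field={value} pairs from a single BibTeX entry.

    Assumes each value is brace-delimited with no nested braces,
    as in the entry above; not a general BibTeX parser.
    """
    return dict(re.findall(r"(\w+)\s*=\s*\{([^{}]*)\}", entry))

fields = parse_bibtex_fields(BIBTEX)
print(fields["doi"])     # 10.1587/transfun.2021VLP0012
print(fields["volume"])  # E105-A
```

For anything beyond a single well-formed entry (nested braces, quoted values, string macros), a dedicated BibTeX parsing library would be the appropriate tool instead of this regex sketch.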
TY - JOUR
TI - Reconfigurable Neural Network Accelerator and Simulator for Model Implementation
T2 - IEICE TRANSACTIONS on Fundamentals
SP - 448
EP - 458
AU - Yasuhiro NAKAHARA
AU - Masato KIYAMA
AU - Motoki AMAGASAKI
AU - Qian ZHAO
AU - Masahiro IIDA
PY - 2022
DO - 10.1587/transfun.2021VLP0012
JO - IEICE TRANSACTIONS on Fundamentals
SN - 1745-1337
VL - E105-A
IS - 3
JA - IEICE TRANSACTIONS on Fundamentals
Y1 - 2022/03//
AB - Low power consumption is important in edge artificial intelligence (AI) chips, where power supply is limited. Therefore, we propose reconfigurable neural network accelerator (ReNA), an AI chip that can process both a convolutional layer and fully connected layer with the same structure by reconfiguring the circuit. In addition, we developed tools for pre-evaluation of the performance when a deep neural network (DNN) model is implemented on ReNA. With this approach, we established the flow for the implementation of DNN models on ReNA and evaluated its power consumption. ReNA achieved 1.51TOPS/W in the convolutional layer and 1.38TOPS/W overall in a VGG16 model with a 70% pruning rate.
ER -