The reliability of deep neural networks (DNN) against hardware errors is essential as DNNs are increasingly employed in safety-critical applications such as automatic driving. Transient errors in memory, such as radiation-induced soft error, may propagate through the inference computation, resulting in unexpected output, which can adversely trigger catastrophic system failures. As a first step to tackle this problem, this paper proposes constructing a vulnerability model (VM) with a small number of fault injections to identify vulnerable model parameters in DNN. We reduce the number of bit locations for fault injection significantly and develop a flow to incrementally collect the training data, i.e., the fault injection results, for VM accuracy improvement. We enumerate key features (KF) that characterize the vulnerability of the parameters and use KF and the collected training data to construct VM. Experimental results show that VM can estimate vulnerabilities of all DNN model parameters only with 1/3490 computations compared with traditional fault injection-based vulnerability estimation.
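The abstract refers to fault injection-based vulnerability estimation: a stored parameter bit is flipped, inference is rerun, and the output is checked for corruption. The Python sketch below illustrates that baseline procedure only; it is not the authors' implementation, and the restriction to a few high-order bits and the user-supplied evaluate callback (which reruns inference and reports whether the prediction changed) are assumptions made here for brevity.

import numpy as np

def flip_bit(value: np.float32, bit: int) -> np.float32:
    # Flip one bit (0 = LSB, 31 = sign) of an IEEE-754 float32 value.
    raw = np.frombuffer(np.float32(value).tobytes(), dtype=np.uint32)[0]
    flipped = raw ^ np.uint32(1 << bit)
    return np.frombuffer(np.uint32(flipped).tobytes(), dtype=np.float32)[0]

def estimate_vulnerability(weights, evaluate, bits=(30, 29, 28)):
    # For each parameter, inject single-bit flips and record the fraction
    # of injections that change the model output (a proxy for vulnerability).
    # `evaluate` is a user-supplied callable (hypothetical here) that runs
    # inference with the perturbed weights and returns True if the top-1
    # prediction changed.
    vulnerability = np.zeros(len(weights))
    for i, w in enumerate(weights):
        corrupted = 0
        for bit in bits:                  # only a few high-order bits (assumption)
            faulty = weights.copy()
            faulty[i] = flip_bit(w, bit)  # transient single-bit error in memory
            if evaluate(faulty):          # fault propagated to the output
                corrupted += 1
        vulnerability[i] = corrupted / len(bits)
    return vulnerability

Running such injections for every parameter and bit position is what makes exhaustive estimation costly; the proposed vulnerability model (VM) instead predicts vulnerability from key features (KF) of each parameter and collects fault injection results only incrementally, which yields the reported 1/3490 reduction in computation.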
Yangchao ZHANG
Osaka University
Hiroaki ITSUJI
Research & Development Group, Hitachi, Ltd.
Takumi UEZONO
Research & Development Group, Hitachi, Ltd.
Tadanobu TOBA
Research & Development Group, Hitachi, Ltd.
Masanori HASHIMOTO
Kyoto University
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Yangchao ZHANG, Hiroaki ITSUJI, Takumi UEZONO, Tadanobu TOBA, Masanori HASHIMOTO, "Vulnerability Estimation of DNN Model Parameters with Few Fault Injections" in IEICE TRANSACTIONS on Fundamentals,
vol. E106-A, no. 3, pp. 523-531, March 2023, doi: 10.1587/transfun.2022VLP0004.
Abstract: The reliability of deep neural networks (DNN) against hardware errors is essential as DNNs are increasingly employed in safety-critical applications such as automatic driving. Transient errors in memory, such as radiation-induced soft error, may propagate through the inference computation, resulting in unexpected output, which can adversely trigger catastrophic system failures. As a first step to tackle this problem, this paper proposes constructing a vulnerability model (VM) with a small number of fault injections to identify vulnerable model parameters in DNN. We reduce the number of bit locations for fault injection significantly and develop a flow to incrementally collect the training data, i.e., the fault injection results, for VM accuracy improvement. We enumerate key features (KF) that characterize the vulnerability of the parameters and use KF and the collected training data to construct VM. Experimental results show that VM can estimate vulnerabilities of all DNN model parameters only with 1/3490 computations compared with traditional fault injection-based vulnerability estimation.
URL: https://global.ieice.org/en_transactions/fundamentals/10.1587/transfun.2022VLP0004/_p
@ARTICLE{e106-a_3_523,
author={Yangchao ZHANG and Hiroaki ITSUJI and Takumi UEZONO and Tadanobu TOBA and Masanori HASHIMOTO},
journal={IEICE TRANSACTIONS on Fundamentals},
title={Vulnerability Estimation of DNN Model Parameters with Few Fault Injections},
year={2023},
volume={E106-A},
number={3},
pages={523-531},
abstract={The reliability of deep neural networks (DNN) against hardware errors is essential as DNNs are increasingly employed in safety-critical applications such as automatic driving. Transient errors in memory, such as radiation-induced soft error, may propagate through the inference computation, resulting in unexpected output, which can adversely trigger catastrophic system failures. As a first step to tackle this problem, this paper proposes constructing a vulnerability model (VM) with a small number of fault injections to identify vulnerable model parameters in DNN. We reduce the number of bit locations for fault injection significantly and develop a flow to incrementally collect the training data, i.e., the fault injection results, for VM accuracy improvement. We enumerate key features (KF) that characterize the vulnerability of the parameters and use KF and the collected training data to construct VM. Experimental results show that VM can estimate vulnerabilities of all DNN model parameters only with 1/3490 computations compared with traditional fault injection-based vulnerability estimation.},
keywords={},
doi={10.1587/transfun.2022VLP0004},
ISSN={1745-1337},
month={March},
}
TY - JOUR
TI - Vulnerability Estimation of DNN Model Parameters with Few Fault Injections
T2 - IEICE TRANSACTIONS on Fundamentals
SP - 523
EP - 531
AU - Yangchao ZHANG
AU - Hiroaki ITSUJI
AU - Takumi UEZONO
AU - Tadanobu TOBA
AU - Masanori HASHIMOTO
PY - 2023
DO - 10.1587/transfun.2022VLP0004
JO - IEICE TRANSACTIONS on Fundamentals
SN - 1745-1337
VL - E106-A
IS - 3
JA - IEICE TRANSACTIONS on Fundamentals
Y1 - March 2023
AB - The reliability of deep neural networks (DNN) against hardware errors is essential as DNNs are increasingly employed in safety-critical applications such as automatic driving. Transient errors in memory, such as radiation-induced soft error, may propagate through the inference computation, resulting in unexpected output, which can adversely trigger catastrophic system failures. As a first step to tackle this problem, this paper proposes constructing a vulnerability model (VM) with a small number of fault injections to identify vulnerable model parameters in DNN. We reduce the number of bit locations for fault injection significantly and develop a flow to incrementally collect the training data, i.e., the fault injection results, for VM accuracy improvement. We enumerate key features (KF) that characterize the vulnerability of the parameters and use KF and the collected training data to construct VM. Experimental results show that VM can estimate vulnerabilities of all DNN model parameters only with 1/3490 computations compared with traditional fault injection-based vulnerability estimation.
ER -