Artificial intelligence (AI), especially deep learning (DL), has made remarkable progress and is being applied across many industries. However, adversarial examples (AEs), which add small perturbations to the input data of deep neural networks (DNNs) to cause misclassification, are attracting attention. In this paper, we propose a novel black-box attack that crafts AEs using only processing time, a form of side-channel information from DNNs, without using training data, the model architecture or parameters, substitute models, or output probabilities. While several existing black-box attacks rely on output probabilities, our attack exploits the relationship between the number of activated nodes and the processing time of a DNN. In our attack, the perturbations for an AE are decided by the differential processing time observed for different input data. We present experimental results in which the AEs crafted by our attack increase the number of activated nodes and effectively cause misclassification into one of the incorrect labels. In addition, the experimental results highlight that our attack can evade gradient-masking countermeasures, which mask output probabilities to prevent AEs from being crafted by several black-box attacks.
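The abstract's core idea is that processing time leaks how many nodes (e.g., ReLU units) a DNN activates, so timing differences alone can guide the perturbation. The following minimal sketch illustrates one possible reading of that idea; it is not the paper's algorithm. It assumes a query-only interface predict(x) on a normalized input array, and the helper names (inference_time, timing_guided_perturbation) and parameters (repeats, eps, step, iters) are hypothetical.

import time
import numpy as np

def inference_time(predict, x, repeats=10):
    # Median wall-clock time of repeated queries, used as a noisy proxy
    # for the number of activated nodes in the victim DNN.
    times = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        predict(x)
        times.append(time.perf_counter() - t0)
    return float(np.median(times))

def timing_guided_perturbation(predict, x, eps=0.03, step=0.01, iters=50):
    # Greedy sketch: perturb one randomly chosen element per iteration and
    # keep the change only if the measured processing time increases, i.e.
    # only if the input appears to activate more nodes.
    x_adv = x.copy()
    base = inference_time(predict, x_adv)
    for _ in range(iters):
        idx = tuple(np.random.randint(s) for s in x_adv.shape)
        cand = x_adv.copy()
        cand[idx] = np.clip(cand[idx] + np.random.choice([-step, step]), 0.0, 1.0)
        cand = np.clip(cand, x - eps, x + eps)  # keep the perturbation small
        t = inference_time(predict, cand)
        if t > base:  # the differential processing time decides the update
            x_adv, base = cand, t
    return x_adv

In practice the timing signal is noisy, so repeated queries and a robust statistic such as the median (as in inference_time above) would be needed before the differential comparison is meaningful.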
Tsunato NAKAI
Mitsubishi Electric Corporation
Daisuke SUZUKI
Mitsubishi Electric Corporation
Fumio OMATSU
Mitsubishi Electric Corporation
Takeshi FUJINO
Ritsumeikan University
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Tsunato NAKAI, Daisuke SUZUKI, Fumio OMATSU, Takeshi FUJINO, "Adversarial Black-Box Attacks with Timing Side-Channel Leakage" in IEICE TRANSACTIONS on Fundamentals,
vol. E104-A, no. 1, pp. 143-151, January 2021, doi: 10.1587/transfun.2020CIP0022.
Abstract: Artificial intelligence (AI), especially deep learning (DL), has made remarkable progress and is being applied across many industries. However, adversarial examples (AEs), which add small perturbations to the input data of deep neural networks (DNNs) to cause misclassification, are attracting attention. In this paper, we propose a novel black-box attack that crafts AEs using only processing time, a form of side-channel information from DNNs, without using training data, the model architecture or parameters, substitute models, or output probabilities. While several existing black-box attacks rely on output probabilities, our attack exploits the relationship between the number of activated nodes and the processing time of a DNN. In our attack, the perturbations for an AE are decided by the differential processing time observed for different input data. We present experimental results in which the AEs crafted by our attack increase the number of activated nodes and effectively cause misclassification into one of the incorrect labels. In addition, the experimental results highlight that our attack can evade gradient-masking countermeasures, which mask output probabilities to prevent AEs from being crafted by several black-box attacks.
URL: https://global.ieice.org/en_transactions/fundamentals/10.1587/transfun.2020CIP0022/_p
@ARTICLE{e104-a_1_143,
author={Tsunato NAKAI and Daisuke SUZUKI and Fumio OMATSU and Takeshi FUJINO},
journal={IEICE TRANSACTIONS on Fundamentals},
title={Adversarial Black-Box Attacks with Timing Side-Channel Leakage},
year={2021},
volume={E104-A},
number={1},
pages={143-151},
abstract={Artificial intelligence (AI), especially deep learning (DL), has made remarkable progress and is being applied across many industries. However, adversarial examples (AEs), which add small perturbations to the input data of deep neural networks (DNNs) to cause misclassification, are attracting attention. In this paper, we propose a novel black-box attack that crafts AEs using only processing time, a form of side-channel information from DNNs, without using training data, the model architecture or parameters, substitute models, or output probabilities. While several existing black-box attacks rely on output probabilities, our attack exploits the relationship between the number of activated nodes and the processing time of a DNN. In our attack, the perturbations for an AE are decided by the differential processing time observed for different input data. We present experimental results in which the AEs crafted by our attack increase the number of activated nodes and effectively cause misclassification into one of the incorrect labels. In addition, the experimental results highlight that our attack can evade gradient-masking countermeasures, which mask output probabilities to prevent AEs from being crafted by several black-box attacks.},
keywords={},
doi={10.1587/transfun.2020CIP0022},
ISSN={1745-1337},
month={January},}
TY - JOUR
TI - Adversarial Black-Box Attacks with Timing Side-Channel Leakage
T2 - IEICE TRANSACTIONS on Fundamentals
SP - 143
EP - 151
AU - Tsunato NAKAI
AU - Daisuke SUZUKI
AU - Fumio OMATSU
AU - Takeshi FUJINO
PY - 2021
DO - 10.1587/transfun.2020CIP0022
JO - IEICE TRANSACTIONS on Fundamentals
SN - 1745-1337
VL - E104-A
IS - 1
JA - IEICE TRANSACTIONS on Fundamentals
Y1 - January 2021
AB - Artificial intelligence (AI), especially deep learning (DL), has made remarkable progress and is being applied across many industries. However, adversarial examples (AEs), which add small perturbations to the input data of deep neural networks (DNNs) to cause misclassification, are attracting attention. In this paper, we propose a novel black-box attack that crafts AEs using only processing time, a form of side-channel information from DNNs, without using training data, the model architecture or parameters, substitute models, or output probabilities. While several existing black-box attacks rely on output probabilities, our attack exploits the relationship between the number of activated nodes and the processing time of a DNN. In our attack, the perturbations for an AE are decided by the differential processing time observed for different input data. We present experimental results in which the AEs crafted by our attack increase the number of activated nodes and effectively cause misclassification into one of the incorrect labels. In addition, the experimental results highlight that our attack can evade gradient-masking countermeasures, which mask output probabilities to prevent AEs from being crafted by several black-box attacks.
ER -