Convolutional neural networks (CNNs) have a strong ability to understand and judge images. However, the enormous parameters and computation of CNNs have limited their application in resource-limited devices. In this letter, we used the idea of parameter sharing and dense connection to compress the parameters in the convolution kernel channel direction, thus greatly reducing the number of model parameters. On this basis, we designed Shared and Dense Channel-wise Convolutional Networks (SDChannelNets), mainly composed of depth-wise separable SD-Channel-wise convolution layers. The advantage of SDChannelNets is that the number of model parameters is greatly reduced without or with little loss of accuracy. We also introduced a hyperparameter that can effectively balance the number of parameters and the accuracy of a model. We evaluated the proposed model through two popular image recognition tasks (CIFAR-10 and CIFAR-100). The results showed that SDChannelNets had similar accuracy to other CNNs, but the number of parameters was greatly reduced.
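The abstract only outlines the mechanism, so the following is a minimal PyTorch sketch of the general idea it describes: a depth-wise separable block whose point-wise (channel-direction) weights are shared across groups of channels, combined with a dense (concatenative) connection, and a hyperparameter that trades parameter count against accuracy. The class name, the share_groups parameter, and the group-folding trick are illustrative assumptions; the paper's actual SD-Channel-wise layer may differ in detail.

```python
# Minimal sketch (not the authors' code): a depth-wise separable block whose
# point-wise weights are shared across channel groups, plus a dense connection.
import torch
import torch.nn as nn


class SharedChannelwiseBlock(nn.Module):
    """Hypothetical stand-in for the paper's SD-Channel-wise layer."""

    def __init__(self, in_channels: int, growth: int, share_groups: int = 4):
        super().__init__()
        assert in_channels % share_groups == 0, "channels must split evenly into groups"
        self.share_groups = share_groups
        # Depth-wise 3x3 convolution: one spatial filter per input channel.
        self.depthwise = nn.Conv2d(in_channels, in_channels, kernel_size=3,
                                   padding=1, groups=in_channels, bias=False)
        # Point-wise 1x1 convolution over a reduced channel dimension: the same
        # channel-direction weights are reused for every group of
        # in_channels // share_groups channels, cutting this parameter count
        # by a factor of share_groups.
        self.pointwise = nn.Conv2d(in_channels // share_groups, growth,
                                   kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(growth)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.depthwise(x)
        n, c, h, w = out.shape
        g = self.share_groups
        # Fold channel groups into the batch axis so the single shared
        # point-wise kernel is applied to each group, then average the groups.
        out = out.view(n * g, c // g, h, w)
        out = self.pointwise(out)
        out = out.view(n, g, -1, h, w).mean(dim=1)
        out = self.act(self.bn(out))
        # Dense connection: concatenate the input with the new feature maps.
        return torch.cat([x, out], dim=1)


if __name__ == "__main__":
    block = SharedChannelwiseBlock(in_channels=32, growth=12, share_groups=4)
    y = block(torch.randn(2, 32, 32, 32))
    print(y.shape)  # torch.Size([2, 44, 32, 32])
```

Stacking such blocks DenseNet-style grows the channel count by `growth` per block, while raising `share_groups` shrinks the channel-direction parameter count of the point-wise convolution by the same factor, analogous to the balancing hyperparameter mentioned in the abstract.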
JianNan ZHANG
Hangzhou Dianzi University
JiJun ZHOU
Hangzhou Dianzi University
JianFeng WU
Hangzhou Dianzi University
ShengYing YANG
Hangzhou Dianzi University
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
JianNan ZHANG, JiJun ZHOU, JianFeng WU, ShengYing YANG, "SDChannelNets: Extremely Small and Efficient Convolutional Neural Networks" in IEICE TRANSACTIONS on Information,
vol. E102-D, no. 12, pp. 2646-2650, December 2019, doi: 10.1587/transinf.2019EDL8120.
Abstract: Convolutional neural networks (CNNs) have a strong ability to understand and judge images. However, the enormous parameters and computation of CNNs have limited their application in resource-limited devices. In this letter, we used the idea of parameter sharing and dense connection to compress the parameters in the convolution kernel channel direction, thus greatly reducing the number of model parameters. On this basis, we designed Shared and Dense Channel-wise Convolutional Networks (SDChannelNets), mainly composed of depth-wise separable SD-Channel-wise convolution layers. The advantage of SDChannelNets is that the number of model parameters is greatly reduced without or with little loss of accuracy. We also introduced a hyperparameter that can effectively balance the number of parameters and the accuracy of a model. We evaluated the proposed model through two popular image recognition tasks (CIFAR-10 and CIFAR-100). The results showed that SDChannelNets had similar accuracy to other CNNs, but the number of parameters was greatly reduced.
URL: https://global.ieice.org/en_transactions/information/10.1587/transinf.2019EDL8120/_p
@ARTICLE{e102-d_12_2646,
author={JianNan ZHANG and JiJun ZHOU and JianFeng WU and ShengYing YANG},
journal={IEICE TRANSACTIONS on Information},
title={SDChannelNets: Extremely Small and Efficient Convolutional Neural Networks},
year={2019},
volume={E102-D},
number={12},
pages={2646-2650},
abstract={Convolutional neural networks (CNNs) have a strong ability to understand and judge images. However, the enormous parameters and computation of CNNs have limited their application in resource-limited devices. In this letter, we used the idea of parameter sharing and dense connection to compress the parameters in the convolution kernel channel direction, thus greatly reducing the number of model parameters. On this basis, we designed Shared and Dense Channel-wise Convolutional Networks (SDChannelNets), mainly composed of depth-wise separable SD-Channel-wise convolution layers. The advantage of SDChannelNets is that the number of model parameters is greatly reduced without or with little loss of accuracy. We also introduced a hyperparameter that can effectively balance the number of parameters and the accuracy of a model. We evaluated the proposed model through two popular image recognition tasks (CIFAR-10 and CIFAR-100). The results showed that SDChannelNets had similar accuracy to other CNNs, but the number of parameters was greatly reduced.},
keywords={},
doi={10.1587/transinf.2019EDL8120},
ISSN={1745-1361},
month={December},}
TY - JOUR
TI - SDChannelNets: Extremely Small and Efficient Convolutional Neural Networks
T2 - IEICE TRANSACTIONS on Information
SP - 2646
EP - 2650
AU - JianNan ZHANG
AU - JiJun ZHOU
AU - JianFeng WU
AU - ShengYing YANG
PY - 2019
DO - 10.1587/transinf.2019EDL8120
JO - IEICE TRANSACTIONS on Information
SN - 1745-1361
VL - E102-D
IS - 12
JA - IEICE TRANSACTIONS on Information
Y1 - December 2019
AB - Convolutional neural networks (CNNs) have a strong ability to understand and judge images. However, the enormous parameters and computation of CNNs have limited their application in resource-limited devices. In this letter, we used the idea of parameter sharing and dense connection to compress the parameters in the convolution kernel channel direction, thus greatly reducing the number of model parameters. On this basis, we designed Shared and Dense Channel-wise Convolutional Networks (SDChannelNets), mainly composed of depth-wise separable SD-Channel-wise convolution layers. The advantage of SDChannelNets is that the number of model parameters is greatly reduced without or with little loss of accuracy. We also introduced a hyperparameter that can effectively balance the number of parameters and the accuracy of a model. We evaluated the proposed model through two popular image recognition tasks (CIFAR-10 and CIFAR-100). The results showed that SDChannelNets had similar accuracy to other CNNs, but the number of parameters was greatly reduced.
ER -