The original paper is in English. Non-English content has been machine-translated and may contain typographical or translation errors; for example, some numerals may appear as "XNUMX".
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Han-Wook LEE, Chan-Ik PARK, "An Efficient Parallel Block Backpropagation Learning Algorithm in Transputer-Based Mesh-Connected Parallel Computers," IEICE TRANSACTIONS on Information,
vol. E83-D, no. 8, pp. 1622-1630, August 2000.
Abstract: The learning process is essential for good performance when a neural network is applied to a practical problem. The backpropagation algorithm is a well-known learning method widely used in most neural networks. However, since backpropagation is time-consuming, much research has been done to speed up the process. The block backpropagation algorithm, which appears to be more efficient than standard backpropagation, was recently proposed by Coetzee in [2]. In this paper, we propose an efficient parallel algorithm for the block backpropagation method and its performance model on mesh-connected parallel computer systems. The proposed algorithm adopts a master-slave model for weight broadcasting and data parallelism for the computation of weights. To validate our performance model, a neural network for a printed-character-recognition application is implemented on TiME, a prototype parallel machine consisting of 32 transputers connected in a mesh topology. The speedup predicted by our performance model is shown to be very close to that measured in experiments.
URL: https://global.ieice.org/en_transactions/information/10.1587/e83-d_8_1622/_p
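The abstract describes a master-slave, data-parallel scheme: each slave processor computes a partial weight update over its own block of training patterns, and a master sums the partial updates and broadcasts the new weights. A minimal sketch of that idea, not the authors' code (all function names, the toy one-weight model, and the learning rate are illustrative assumptions):

```python
# Hypothetical sketch of data-parallel block-style learning with a
# master-slave weight update, for a one-weight linear model y = w*x.

def local_gradient(weights, block):
    """Slave role: gradient of the squared error over this slave's
    block of (input, target) pairs."""
    w = weights[0]
    g = 0.0
    for x, t in block:
        g += 2.0 * (w * x - t) * x
    return [g]

def master_step(weights, blocks, lr=0.01):
    """Master role: gather partial gradients from all slaves, sum them,
    apply one weight update, and 'broadcast' the result (return value)."""
    partials = [local_gradient(weights, b) for b in blocks]  # slaves, conceptually in parallel
    total = [sum(gs) for gs in zip(*partials)]
    return [w - lr * g for w, g in zip(weights, total)]

# Toy data following t = 3*x, split round-robin across 4 "slaves".
data = [(x, 3.0 * x) for x in [0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0]]
blocks = [data[i::4] for i in range(4)]

weights = [0.0]
for _ in range(200):
    weights = master_step(weights, blocks)
# weights[0] converges toward 3.0
```

The paper's actual algorithm operates on multilayer networks over a transputer mesh; this sketch only illustrates the gather-sum-broadcast structure of the update.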
@ARTICLE{e83-d_8_1622,
author={Han-Wook LEE and Chan-Ik PARK},
journal={IEICE TRANSACTIONS on Information},
title={An Efficient Parallel Block Backpropagation Learning Algorithm in Transputer-Based Mesh-Connected Parallel Computers},
year={2000},
volume={E83-D},
number={8},
pages={1622-1630},
abstract={The learning process is essential for good performance when a neural network is applied to a practical problem. The backpropagation algorithm is a well-known learning method widely used in most neural networks. However, since backpropagation is time-consuming, much research has been done to speed up the process. The block backpropagation algorithm, which appears to be more efficient than standard backpropagation, was recently proposed by Coetzee in [2]. In this paper, we propose an efficient parallel algorithm for the block backpropagation method and its performance model on mesh-connected parallel computer systems. The proposed algorithm adopts a master-slave model for weight broadcasting and data parallelism for the computation of weights. To validate our performance model, a neural network for a printed-character-recognition application is implemented on TiME, a prototype parallel machine consisting of 32 transputers connected in a mesh topology. The speedup predicted by our performance model is shown to be very close to that measured in experiments.},
keywords={},
doi={},
ISSN={},
month={August},
}
TY - JOUR
TI - An Efficient Parallel Block Backpropagation Learning Algorithm in Transputer-Based Mesh-Connected Parallel Computers
T2 - IEICE TRANSACTIONS on Information
SP - 1622
EP - 1630
AU - Han-Wook LEE
AU - Chan-Ik PARK
PY - 2000
DO -
JO - IEICE TRANSACTIONS on Information
SN -
VL - E83-D
IS - 8
JA - IEICE TRANSACTIONS on Information
Y1 - August 2000
AB - The learning process is essential for good performance when a neural network is applied to a practical problem. The backpropagation algorithm is a well-known learning method widely used in most neural networks. However, since backpropagation is time-consuming, much research has been done to speed up the process. The block backpropagation algorithm, which appears to be more efficient than standard backpropagation, was recently proposed by Coetzee in [2]. In this paper, we propose an efficient parallel algorithm for the block backpropagation method and its performance model on mesh-connected parallel computer systems. The proposed algorithm adopts a master-slave model for weight broadcasting and data parallelism for the computation of weights. To validate our performance model, a neural network for a printed-character-recognition application is implemented on TiME, a prototype parallel machine consisting of 32 transputers connected in a mesh topology. The speedup predicted by our performance model is shown to be very close to that measured in experiments.
ER -