Copyright notice
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Hideki SATOH, "A Nonlinear Approach to Robust Routing Based on Reinforcement Learning with State Space Compression and Adaptive Basis Construction" in IEICE TRANSACTIONS on Fundamentals,
vol. E91-A, no. 7, pp. 1733-1740, July 2008, doi: 10.1093/ietfec/e91-a.7.1733.
Abstract: A robust routing algorithm was developed based on reinforcement learning that uses (1) reward-weighted principal component analysis, which compresses the state space of a network with a large number of nodes and eliminates the adverse effects of various types of attacks or disturbance noises, (2) activity-oriented index allocation, which adaptively constructs a basis that is used for approximating routing probabilities, and (3) newly developed space compression based on a potential model that reduces the space for routing probabilities. This algorithm takes all the network states into account and reduces the adverse effects of disturbance noises. The algorithm thus works well, and the frequencies of causing routing loops and falling to a local optimum are reduced even if the routing information is disturbed.
URL: https://global.ieice.org/en_transactions/fundamentals/10.1093/ietfec/e91-a.7.1733/_p
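As a rough illustration of the reward-weighted principal component analysis mentioned in the abstract, the sketch below weights observed network-state samples by their associated rewards before extracting principal components, so that directions of the state space tied to high routing reward dominate the compressed basis. This is a minimal sketch under stated assumptions, not the paper's actual algorithm; all function names, dimensions, and data are illustrative.

```python
# Illustrative sketch of reward-weighted PCA for compressing network states.
# Assumption: each observed state vector comes with a scalar reward, and
# states are weighted by (normalized) reward when forming the covariance.
import numpy as np

def reward_weighted_pca(states, rewards, n_components):
    """Return a projection basis from the reward-weighted covariance of states."""
    w = np.asarray(rewards, dtype=float)
    w = w / w.sum()                               # normalize rewards into weights
    mean = w @ states                             # reward-weighted mean state
    centered = states - mean
    cov = centered.T @ (centered * w[:, None])    # reward-weighted covariance
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1][:n_components]
    return eigvecs[:, order], mean                # top components and mean

# Usage: compress noisy, high-dimensional network states onto a small basis.
rng = np.random.default_rng(0)
states = rng.normal(size=(200, 50))               # 200 observed states, 50-node network (toy data)
rewards = rng.uniform(0.0, 1.0, size=200)         # reward obtained with each state (toy data)
basis, mean = reward_weighted_pca(states, rewards, n_components=5)
compressed = (states - mean) @ basis              # 5-dimensional compressed states
```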
@ARTICLE{e91-a_7_1733,
author={Hideki SATOH},
journal={IEICE TRANSACTIONS on Fundamentals},
title={A Nonlinear Approach to Robust Routing Based on Reinforcement Learning with State Space Compression and Adaptive Basis Construction},
year={2008},
volume={E91-A},
number={7},
pages={1733-1740},
abstract={A robust routing algorithm was developed based on reinforcement learning that uses (1) reward-weighted principal component analysis, which compresses the state space of a network with a large number of nodes and eliminates the adverse effects of various types of attacks or disturbance noises, (2) activity-oriented index allocation, which adaptively constructs a basis that is used for approximating routing probabilities, and (3) newly developed space compression based on a potential model that reduces the space for routing probabilities. This algorithm takes all the network states into account and reduces the adverse effects of disturbance noises. The algorithm thus works well, and the frequencies of causing routing loops and falling to a local optimum are reduced even if the routing information is disturbed.},
keywords={},
doi={10.1093/ietfec/e91-a.7.1733},
ISSN={1745-1337},
month={July},
}
TY - JOUR
TI - A Nonlinear Approach to Robust Routing Based on Reinforcement Learning with State Space Compression and Adaptive Basis Construction
T2 - IEICE TRANSACTIONS on Fundamentals
SP - 1733
EP - 1740
AU - Hideki SATOH
PY - 2008
DO - 10.1093/ietfec/e91-a.7.1733
JO - IEICE TRANSACTIONS on Fundamentals
SN - 1745-1337
VL - E91-A
IS - 7
JA - IEICE TRANSACTIONS on Fundamentals
Y1 - July 2008
AB - A robust routing algorithm was developed based on reinforcement learning that uses (1) reward-weighted principal component analysis, which compresses the state space of a network with a large number of nodes and eliminates the adverse effects of various types of attacks or disturbance noises, (2) activity-oriented index allocation, which adaptively constructs a basis that is used for approximating routing probabilities, and (3) newly developed space compression based on a potential model that reduces the space for routing probabilities. This algorithm takes all the network states into account and reduces the adverse effects of disturbance noises. The algorithm thus works well, and the frequencies of causing routing loops and falling to a local optimum are reduced even if the routing information is disturbed.
ER -