Zixiao ZHANG
Kyoto University
Fujun HE
Kyoto University
Eiji OKI
Kyoto University
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Zixiao ZHANG, Fujun HE, Eiji OKI, "Dynamic VNF Scheduling: A Deep Reinforcement Learning Approach" in IEICE TRANSACTIONS on Communications,
vol. E106-B, no. 7, pp. 557-570, July 2023, doi: 10.1587/transcom.2022EBP3160.
Abstract: This paper introduces a deep reinforcement learning approach to solve the virtual network function scheduling problem in dynamic scenarios. We formulate an integer linear programming model for the problem in static scenarios. In dynamic scenarios, we define the state, action, and reward to form the learning approach. The learning agents apply the asynchronous advantage actor-critic algorithm. We assign a master agent and several worker agents to each network function virtualization node in the problem. The worker agents work in parallel to help the master agent make decisions. We compare the introduced approach with existing approaches by applying them in simulated environments. The existing approaches include three greedy approaches, a simulated annealing approach, and an integer linear programming approach. The numerical results show that the introduced deep reinforcement learning approach improves the performance by 6-27% in our examined cases.
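The abstract describes a master agent whose policy is improved by several worker agents running in parallel, following the asynchronous advantage actor-critic (A3C) pattern. The paper's actual model and environment are not reproduced here; the following is a minimal, self-contained sketch of that pattern only, using a hypothetical toy environment (three candidate "nodes" with different expected rewards standing in for scheduling decisions), a softmax policy, and a scalar value baseline. All names (`MasterAgent`, `worker`, `TRUE_REWARD`) are illustrative assumptions, not the authors' implementation.

```python
import math
import random
import threading

# Toy stand-in environment: picking one of three "nodes"; node 1 is best.
# (Hypothetical rewards -- the paper's VNF scheduling environment is far richer.)
TRUE_REWARD = [0.2, 1.0, 0.5]

def sample_reward(action):
    """Noisy reward for the chosen action."""
    return TRUE_REWARD[action] + random.gauss(0.0, 0.1)

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

class MasterAgent:
    """Holds the shared policy parameters (actor) and value baseline (critic)."""
    def __init__(self, n_actions):
        self.theta = [0.0] * n_actions  # policy logits
        self.value = 0.0                # state-value baseline
        self.lock = threading.Lock()

    def apply(self, grad_theta, grad_value, lr=0.1):
        # Workers push their gradients into the shared parameters.
        with self.lock:
            for i, g in enumerate(grad_theta):
                self.theta[i] += lr * g
            self.value += lr * grad_value

def worker(master, steps=2000):
    """One worker: sample actions, estimate advantage, update the master."""
    n = len(master.theta)
    for _ in range(steps):
        with master.lock:               # snapshot current shared parameters
            theta = list(master.theta)
            value = master.value
        probs = softmax(theta)
        a = random.choices(range(n), weights=probs)[0]
        r = sample_reward(a)
        advantage = r - value           # actor-critic advantage estimate
        # Policy gradient of log pi(a) * advantage for a softmax policy.
        grad_theta = [((1.0 if i == a else 0.0) - probs[i]) * advantage
                      for i in range(n)]
        grad_value = advantage          # moves the baseline toward observed reward
        master.apply(grad_theta, grad_value)

master = MasterAgent(n_actions=3)
workers = [threading.Thread(target=worker, args=(master,)) for _ in range(4)]
for t in workers:
    t.start()
for t in workers:
    t.join()

best = max(range(3), key=lambda i: master.theta[i])
print("preferred action:", best)
```

In the full approach the actor and critic are neural networks and the state encodes the scheduling situation at each NFV node; this sketch only shows the parallel worker-to-master update flow the abstract refers to.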
URL: https://global.ieice.org/en_transactions/communications/10.1587/transcom.2022EBP3160/_p
@ARTICLE{e106-b_7_557,
author={Zixiao ZHANG and Fujun HE and Eiji OKI},
journal={IEICE TRANSACTIONS on Communications},
title={Dynamic VNF Scheduling: A Deep Reinforcement Learning Approach},
year={2023},
volume={E106-B},
number={7},
pages={557-570},
abstract={This paper introduces a deep reinforcement learning approach to solve the virtual network function scheduling problem in dynamic scenarios. We formulate an integer linear programming model for the problem in static scenarios. In dynamic scenarios, we define the state, action, and reward to form the learning approach. The learning agents apply the asynchronous advantage actor-critic algorithm. We assign a master agent and several worker agents to each network function virtualization node in the problem. The worker agents work in parallel to help the master agent make decisions. We compare the introduced approach with existing approaches by applying them in simulated environments. The existing approaches include three greedy approaches, a simulated annealing approach, and an integer linear programming approach. The numerical results show that the introduced deep reinforcement learning approach improves the performance by 6-27% in our examined cases.},
keywords={},
doi={10.1587/transcom.2022EBP3160},
ISSN={1745-1345},
month={July},}
TY - JOUR
TI - Dynamic VNF Scheduling: A Deep Reinforcement Learning Approach
T2 - IEICE TRANSACTIONS on Communications
SP - 557
EP - 570
AU - Zixiao ZHANG
AU - Fujun HE
AU - Eiji OKI
PY - 2023
DO - 10.1587/transcom.2022EBP3160
JO - IEICE TRANSACTIONS on Communications
SN - 1745-1345
VL - E106-B
IS - 7
JA - IEICE TRANSACTIONS on Communications
Y1 - July 2023
AB - This paper introduces a deep reinforcement learning approach to solve the virtual network function scheduling problem in dynamic scenarios. We formulate an integer linear programming model for the problem in static scenarios. In dynamic scenarios, we define the state, action, and reward to form the learning approach. The learning agents apply the asynchronous advantage actor-critic algorithm. We assign a master agent and several worker agents to each network function virtualization node in the problem. The worker agents work in parallel to help the master agent make decisions. We compare the introduced approach with existing approaches by applying them in simulated environments. The existing approaches include three greedy approaches, a simulated annealing approach, and an integer linear programming approach. The numerical results show that the introduced deep reinforcement learning approach improves the performance by 6-27% in our examined cases.
ER -