Mobile edge computing (MEC) is a key technology for providing services that require low latency by migrating cloud functions to the network edge. The potential low quality of the wireless channel should be noted when mobile users with limited computing resources offload tasks to an MEC server. To improve the transmission reliability, it is necessary to perform resource allocation in an MEC server, taking into account the current channel quality and the resource contention. There are several works that take a deep reinforcement learning (DRL) approach to address such resource allocation. However, these approaches consider a fixed number of users offloading their tasks, and do not assume a situation where the number of users varies due to user mobility. This paper proposes Deep reinforcement learning model for MEC Resource Allocation with Dummy (DMRA-D), an online learning model that addresses the resource allocation in an MEC server under the situation where the number of users varies. By adopting dummy state/action, DMRA-D keeps the state/action representation. Therefore, DMRA-D can continue to learn one model regardless of variation in the number of users during the operation. Numerical results show that DMRA-D improves the success rate of task submission while continuing learning under the situation where the number of users varies.
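The abstract's core idea, keeping a fixed state/action representation by padding with dummy entries, can be illustrated with a short sketch. The snippet below is not the authors' implementation; MAX_USERS, the per-user feature layout, DUMMY_VALUE, and the action mask are illustrative assumptions about how such padding could feed a DRL agent whose input size must not change.

# Illustrative sketch only (not the paper's code): pad per-user features with
# dummy entries so the DRL model's state size stays fixed while the number of
# offloading users varies. MAX_USERS, FEATURES_PER_USER, and DUMMY_VALUE are
# assumed values for this example.
from __future__ import annotations

import numpy as np

MAX_USERS = 8          # assumed upper bound the model is dimensioned for
FEATURES_PER_USER = 3  # e.g., channel quality, task size, requested CPU
DUMMY_VALUE = 0.0      # filler written into slots of absent users


def build_state(active_users: list[list[float]]) -> tuple[np.ndarray, np.ndarray]:
    """Return a fixed-size state vector plus a mask of real (non-dummy) slots."""
    assert len(active_users) <= MAX_USERS, "more users than the model supports"
    state = np.full((MAX_USERS, FEATURES_PER_USER), DUMMY_VALUE, dtype=np.float32)
    mask = np.zeros(MAX_USERS, dtype=bool)
    for slot, features in enumerate(active_users):
        state[slot, :] = features  # real user occupies this slot
        mask[slot] = True          # actions on dummy slots can be masked out
    return state.flatten(), mask


if __name__ == "__main__":
    # Two users are currently offloading; the remaining slots hold dummy
    # entries, so the model input keeps shape (MAX_USERS * FEATURES_PER_USER,).
    state, mask = build_state([[0.9, 5.0, 2.0], [0.4, 1.5, 1.0]])
    print(state.shape, mask)

Under this assumption, the agent's input and output dimensions never change as users join and leave, which is what allows a single model to keep learning online.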
Kairi TOKUDA
Kyoto University
Takehiro SATO
Kyoto University
Eiji OKI
Kyoto University
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Kairi TOKUDA, Takehiro SATO, Eiji OKI, "Resource Allocation for Mobile Edge Computing System Considering User Mobility with Deep Reinforcement Learning" in IEICE TRANSACTIONS on Communications,
vol. E107-B, no. 1, pp. 173-184, January 2024, doi: 10.1587/transcom.2023EBP3043.
Abstract: Mobile edge computing (MEC) is a key technology for providing services that require low latency by migrating cloud functions to the network edge. The potential low quality of the wireless channel should be noted when mobile users with limited computing resources offload tasks to an MEC server. To improve the transmission reliability, it is necessary to perform resource allocation in an MEC server, taking into account the current channel quality and the resource contention. There are several works that take a deep reinforcement learning (DRL) approach to address such resource allocation. However, these approaches consider a fixed number of users offloading their tasks, and do not assume a situation where the number of users varies due to user mobility. This paper proposes Deep reinforcement learning model for MEC Resource Allocation with Dummy (DMRA-D), an online learning model that addresses the resource allocation in an MEC server under the situation where the number of users varies. By adopting dummy state/action, DMRA-D keeps the state/action representation. Therefore, DMRA-D can continue to learn one model regardless of variation in the number of users during the operation. Numerical results show that DMRA-D improves the success rate of task submission while continuing learning under the situation where the number of users varies.
URL: https://global.ieice.org/en_transactions/communications/10.1587/transcom.2023EBP3043/_p
@ARTICLE{e107-b_1_173,
author={Kairi TOKUDA and Takehiro SATO and Eiji OKI},
journal={IEICE TRANSACTIONS on Communications},
title={Resource Allocation for Mobile Edge Computing System Considering User Mobility with Deep Reinforcement Learning},
year={2024},
volume={E107-B},
number={1},
pages={173-184},
abstract={Mobile edge computing (MEC) is a key technology for providing services that require low latency by migrating cloud functions to the network edge. The potential low quality of the wireless channel should be noted when mobile users with limited computing resources offload tasks to an MEC server. To improve the transmission reliability, it is necessary to perform resource allocation in an MEC server, taking into account the current channel quality and the resource contention. There are several works that take a deep reinforcement learning (DRL) approach to address such resource allocation. However, these approaches consider a fixed number of users offloading their tasks, and do not assume a situation where the number of users varies due to user mobility. This paper proposes Deep reinforcement learning model for MEC Resource Allocation with Dummy (DMRA-D), an online learning model that addresses the resource allocation in an MEC server under the situation where the number of users varies. By adopting dummy state/action, DMRA-D keeps the state/action representation. Therefore, DMRA-D can continue to learn one model regardless of variation in the number of users during the operation. Numerical results show that DMRA-D improves the success rate of task submission while continuing learning under the situation where the number of users varies.},
keywords={},
doi={10.1587/transcom.2023EBP3043},
ISSN={1745-1345},
month={January},}
TY - JOUR
TI - Resource Allocation for Mobile Edge Computing System Considering User Mobility with Deep Reinforcement Learning
T2 - IEICE TRANSACTIONS on Communications
SP - 173
EP - 184
AU - Kairi TOKUDA
AU - Takehiro SATO
AU - Eiji OKI
PY - 2024
DO - 10.1587/transcom.2023EBP3043
JO - IEICE TRANSACTIONS on Communications
SN - 1745-1345
VL - E107-B
IS - 1
JA - IEICE TRANSACTIONS on Communications
Y1 - 2024/01
AB - Mobile edge computing (MEC) is a key technology for providing services that require low latency by migrating cloud functions to the network edge. The potential low quality of the wireless channel should be noted when mobile users with limited computing resources offload tasks to an MEC server. To improve the transmission reliability, it is necessary to perform resource allocation in an MEC server, taking into account the current channel quality and the resource contention. There are several works that take a deep reinforcement learning (DRL) approach to address such resource allocation. However, these approaches consider a fixed number of users offloading their tasks, and do not assume a situation where the number of users varies due to user mobility. This paper proposes Deep reinforcement learning model for MEC Resource Allocation with Dummy (DMRA-D), an online learning model that addresses the resource allocation in an MEC server under the situation where the number of users varies. By adopting dummy state/action, DMRA-D keeps the state/action representation. Therefore, DMRA-D can continue to learn one model regardless of variation in the number of users during the operation. Numerical results show that DMRA-D improves the success rate of task submission while continuing learning under the situation where the number of users varies.
ER -