The original paper is in English. Non-English content has been machine-translated and may contain typographical errors or mistranslations (e.g., some numerals may be rendered as "XNUMX").
Copyrights notice
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Ali AKRAMIZADEH, Ahmad AFSHAR, Mohammad Bagher MENHAJ, Samira JAFARI, "Model-Based Reinforcement Learning in Multiagent Systems with Sequential Action Selection" in IEICE TRANSACTIONS on Information and Systems,
vol. E94-D, no. 2, pp. 255-263, February 2011, doi: 10.1587/transinf.E94.D.255.
Abstract: Model-based reinforcement learning uses the information gathered during each experience more efficiently than model-free reinforcement learning. This is especially interesting in multiagent systems, since a large number of experiences is necessary to achieve good performance. In this paper, model-based reinforcement learning is developed for a group of self-interested agents with sequential action selection, based on traditional prioritized sweeping. Every decision-making situation in this learning process, called an extensive Markov game, is modeled as an n-person general-sum extensive-form game with perfect information. A modified version of backward induction is proposed for action selection, which adjusts the tradeoff between selecting subgame perfect equilibrium points as the optimal joint actions and learning new joint actions. The algorithm is proven convergent, and its behavior is discussed in light of new results on the convergence of traditional prioritized sweeping.
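The abstract builds on classical single-agent prioritized sweeping, which the paper extends to the multiagent, sequential-action setting. As a point of reference only, below is a minimal Python sketch of the traditional algorithm the paper takes as its starting point; the deterministic `env_model` interface and all function and parameter names are illustrative assumptions, not the authors' code.

```python
import heapq
from collections import defaultdict

def prioritized_sweeping(env_model, actions, gamma=0.95, theta=1e-4,
                         max_backups=1000):
    """Sketch of classical single-agent prioritized sweeping.

    `env_model` maps (state, action) -> (reward, next_state); a learned
    model, assumed deterministic here only to keep the sketch short.
    """
    Q = defaultdict(float)           # state-action value estimates
    predecessors = defaultdict(set)  # state -> {(s, a) pairs leading into it}
    for (s, a), (_, s_next) in env_model.items():
        predecessors[s_next].add((s, a))

    def bellman_error(s, a):
        r, s_next = env_model[(s, a)]
        return abs(r + gamma * max(Q[(s_next, b)] for b in actions) - Q[(s, a)])

    # Seed the queue with every pair whose estimate looks stale.
    pq = []  # max-heap via negated priorities
    for (s, a) in env_model:
        p = bellman_error(s, a)
        if p > theta:
            pq.append((-p, s, a))
    heapq.heapify(pq)

    for _ in range(max_backups):
        if not pq:
            break
        _, s, a = heapq.heappop(pq)
        r, s_next = env_model[(s, a)]
        # Full backup of the highest-priority state-action pair.
        Q[(s, a)] = r + gamma * max(Q[(s_next, b)] for b in actions)
        # The backup may invalidate the estimates of s's predecessors.
        for (sp, ap) in predecessors[s]:
            p = bellman_error(sp, ap)
            if p > theta:
                heapq.heappush(pq, (-p, sp, ap))
    return Q
```

A toy call could look like `prioritized_sweeping({(0, 'a'): (1.0, 1), (1, 'a'): (0.0, 0)}, actions=['a'])`. The paper's contribution replaces this single-agent greedy backup with a modified backward induction over an extensive-form stage game, trading off subgame perfect equilibrium play against exploration of new joint actions.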
URL: https://global.ieice.org/en_transactions/information/10.1587/transinf.E94.D.255/_p
@ARTICLE{e94-d_2_255,
author={Ali AKRAMIZADEH and Ahmad AFSHAR and Mohammad Bagher MENHAJ and Samira JAFARI},
journal={IEICE TRANSACTIONS on Information and Systems},
title={Model-Based Reinforcement Learning in Multiagent Systems with Sequential Action Selection},
year={2011},
volume={E94-D},
number={2},
pages={255-263},
abstract={Model-based reinforcement learning uses the information gathered during each experience more efficiently than model-free reinforcement learning. This is especially interesting in multiagent systems, since a large number of experiences is necessary to achieve good performance. In this paper, model-based reinforcement learning is developed for a group of self-interested agents with sequential action selection, based on traditional prioritized sweeping. Every decision-making situation in this learning process, called an extensive Markov game, is modeled as an n-person general-sum extensive-form game with perfect information. A modified version of backward induction is proposed for action selection, which adjusts the tradeoff between selecting subgame perfect equilibrium points as the optimal joint actions and learning new joint actions. The algorithm is proven convergent, and its behavior is discussed in light of new results on the convergence of traditional prioritized sweeping.},
keywords={},
doi={10.1587/transinf.E94.D.255},
ISSN={1745-1361},
month={February},
}
TY - JOUR
TI - Model-Based Reinforcement Learning in Multiagent Systems with Sequential Action Selection
T2 - IEICE TRANSACTIONS on Information and Systems
SP - 255
EP - 263
AU - Ali AKRAMIZADEH
AU - Ahmad AFSHAR
AU - Mohammad Bagher MENHAJ
AU - Samira JAFARI
PY - 2011
DO - 10.1587/transinf.E94.D.255
JO - IEICE TRANSACTIONS on Information and Systems
SN - 1745-1361
VL - E94-D
IS - 2
JA - IEICE TRANSACTIONS on Information and Systems
Y1 - 2011/02
AB - Model-based reinforcement learning uses the information gathered during each experience more efficiently than model-free reinforcement learning. This is especially interesting in multiagent systems, since a large number of experiences is necessary to achieve good performance. In this paper, model-based reinforcement learning is developed for a group of self-interested agents with sequential action selection, based on traditional prioritized sweeping. Every decision-making situation in this learning process, called an extensive Markov game, is modeled as an n-person general-sum extensive-form game with perfect information. A modified version of backward induction is proposed for action selection, which adjusts the tradeoff between selecting subgame perfect equilibrium points as the optimal joint actions and learning new joint actions. The algorithm is proven convergent, and its behavior is discussed in light of new results on the convergence of traditional prioritized sweeping.
ER -