
Go with the Flow: Reinforcement Learning in Turn-based Battle Video Games / Pagalyte, E.; Mancini, M.; Climent, L. - (2020), pp. 1-8. (Paper presented at the 20th ACM International Conference on Intelligent Virtual Agents, IVA 2020, held online) [10.1145/3383652.3423868].

Go with the Flow: Reinforcement Learning in Turn-based Battle Video Games

Mancini M.;
2020

Abstract

Game flow represents a state in which the player is neither frustrated nor bored. In turn-based battle video games it can be achieved through Dynamic Difficulty Adjustment (DDA), a research area that has grown over the last decade. This paper introduces an approach for incorporating DDA into the agents of turn-based battle video games by means of Reinforcement Learning (RL). We design and implement an RL agent that demonstrates, in a simple environment, how a game can achieve balance by choosing actions suited to the player's skill level. To this end, we incorporated the State-Action-Reward-State-Action (SARSA) algorithm into the agent of our implemented game. In addition, we track ongoing games and, depending on how frequently the player repeatedly wins or loses, modify the RL agent's rewards. This reward modification affects the agent's choice of actions, increasing or decreasing the difficulty of the battle. The evaluation shows that players face personalized challenges that we believe lie within the range of game flow.
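The abstract describes a SARSA agent whose rewards are scaled according to the player's win/loss streaks. A minimal sketch of that idea might look as follows; all names (`SarsaDDAAgent`, `on_game_end`, the thresholds and scaling constants) are hypothetical illustrations, not the paper's actual implementation:

```python
import random

class SarsaDDAAgent:
    """Sketch of a SARSA agent with streak-based reward scaling for DDA."""

    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.q = {}                 # Q-table: (state, action) -> value
        self.reward_scale = 1.0     # adjusted by the win/loss tracker

    def choose(self, state):
        # Epsilon-greedy action selection.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q.get((state, a), 0.0))

    def update(self, s, a, reward, s_next, a_next):
        # SARSA update: Q(s,a) += alpha * (r + gamma * Q(s',a') - Q(s,a)),
        # with the reward scaled by the current difficulty factor.
        r = reward * self.reward_scale
        target = r + self.gamma * self.q.get((s_next, a_next), 0.0)
        old = self.q.get((s, a), 0.0)
        self.q[(s, a)] = old + self.alpha * (target - old)

    def on_game_end(self, win_streak, loss_streak, threshold=3):
        # DDA hook: repeated player wins boost the agent's rewards (harder
        # game); repeated losses dampen them (easier game).
        if win_streak >= threshold:
            self.reward_scale = min(2.0, self.reward_scale + 0.25)
        elif loss_streak >= threshold:
            self.reward_scale = max(0.5, self.reward_scale - 0.25)
```

The key design point mirrored from the abstract is that difficulty is not tuned directly: only the reward signal is modified, and the changed rewards then shift which actions the learned policy prefers.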
20th ACM International Conference on Intelligent Virtual Agents, IVA 2020
DDA; Dynamic Difficulty Adjustment; Game Flow; Reinforcement Learning; RL; SARSA; Turn-based battle video game
04 Publication in conference proceedings::04b Conference paper in volume
Files attached to this item
There are no files associated with this item.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1530533

Warning: the data displayed have not been validated by the university.

Citations
  • PMC: ND
  • Scopus: 5
  • Web of Science (ISI): 2