Optimizing memory management for optimistic simulation with reinforcement learning / Pellegrini, Alessandro. - (2016), pp. 26-33. (Paper presented at the 14th International Conference on High Performance Computing and Simulation, held in Innsbruck, Austria) [10.1109/HPCSim.2016.7568312].

Optimizing memory management for optimistic simulation with reinforcement learning

PELLEGRINI, ALESSANDRO
Primo
2016

Abstract

Simulation is a powerful technique to explore complex scenarios and analyze systems related to a wide range of disciplines. To allow for an efficient exploitation of the available computing power, speculative Time Warp-based Parallel Discrete Event Simulation is universally recognized as a viable solution. In this context, the rollback operation is a fundamental building block to support a correct execution even when causality inconsistencies are materialized a posteriori. If this operation is supported via checkpoint/restore strategies, memory management plays a fundamental role to ensure high performance of the simulation run. With few exceptions, adaptive protocols targeting memory management for Time Warp-based simulations have mostly been based on pre-defined analytic models of the system, expressed as closed-form functions that map the system's state to control parameters. The underlying assumption is that the model itself is optimal. In this paper, we present an approach that exploits reinforcement learning techniques. Rather than assuming an optimal control strategy, we seek to find the optimal strategy through parameter exploration. A value function that captures the history of system feedback is used, and no a priori knowledge of the system is required. An experimental assessment of the viability of our proposal is also provided for a mobile cellular system simulation.
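The abstract describes learning a memory-management policy (e.g., how often to take a state checkpoint) from runtime feedback instead of a closed-form model. A minimal sketch of that idea, assuming tabular value estimates over candidate checkpoint intervals and a made-up cost model standing in for real runtime feedback (the interval set, cost terms, and constants below are hypothetical, not taken from the paper):

```python
import random

# Hedged sketch: value-based exploration over candidate checkpoint intervals.
# The reward model is a stand-in: real feedback would come from measured
# checkpointing and rollback/coasting-forward costs in the simulator.

ACTIONS = [1, 2, 4, 8, 16]   # candidate checkpoint intervals (events between snapshots)
ALPHA, EPSILON = 0.1, 0.2    # learning rate and exploration probability

q = {a: 0.0 for a in ACTIONS}  # value estimate of each interval (higher = better)

def observed_reward(interval):
    # Hypothetical trade-off: frequent checkpoints cost CPU/memory,
    # sparse ones inflate coasting-forward time after a rollback.
    checkpoint_cost = 1.0 / interval
    restore_cost = 0.05 * interval
    return -(checkpoint_cost + restore_cost + random.gauss(0, 0.01))

def choose_interval():
    if random.random() < EPSILON:           # explore
        return random.choice(ACTIONS)
    return max(ACTIONS, key=q.__getitem__)  # exploit best-known interval

random.seed(42)
for _ in range(5000):
    a = choose_interval()
    q[a] += ALPHA * (observed_reward(a) - q[a])  # incremental value update

best = max(ACTIONS, key=q.__getitem__)
print("learned checkpoint interval:", best)
```

The point of the sketch is the exploration/exploitation loop: no analytic model of the checkpointing cost is assumed optimal; the value table accumulates the history of observed feedback, as the abstract describes.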
2016
14th International Conference on High Performance Computing and Simulation
Parallel Discrete Event Simulation; Autonomic Computing; Reinforcement Learning; Automatic State Saving
04 Publication in conference proceedings::04b Conference paper in volume
Files attached to this product

Pellegrini_Postprint_Optimizing-memory-management_2016.pdf
  Access: open access
  Note: https://ieeexplore.ieee.org/document/7568312
  Type: Post-print document (version following peer review and accepted for publication)
  License: All rights reserved
  Size: 324.75 kB
  Format: Adobe PDF

Pellegrini_Optimizing-memory-management_2016.pdf
  Access: archive managers only
  Type: Publisher's version (published version with the publisher's layout)
  License: All rights reserved
  Size: 245.58 kB
  Format: Adobe PDF (contact the author)

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/931830
Citations
  • Scopus: 0
  • ISI: 0