
Foundations for Restraining Bolts: Reinforcement Learning with LTLf/LDLf Restraining Specifications / DE GIACOMO, Giuseppe; Iocchi, Luca; Favorito, Marco; Patrizi, Fabio. - 29:(2019), pp. 128-136. (Paper presented at the Twenty-Ninth International Conference on Automated Planning and Scheduling, held in Berkeley, CA, USA.)

Foundations for Restraining Bolts: Reinforcement Learning with LTLf/LDLf Restraining Specifications

Giuseppe De Giacomo; Luca Iocchi; Marco Favorito; Fabio Patrizi
2019

Abstract

In this work, we investigate the concept of the "restraining bolt", envisioned in science fiction. Specifically, we introduce a novel problem in AI. We have two distinct sets of features extracted from the world: one by the agent and one by the authority imposing restraining specifications (the "restraining bolt"). The two sets are apparently unrelated, since they are of interest to independent parties; however, they both account for (aspects of) the same world. We consider the case in which the agent is a reinforcement learning agent over the first set of features, while the restraining bolt is specified logically, using linear temporal logic on finite traces (LTLf/LDLf), over the second set of features. We show formally, and illustrate with examples, that, under general circumstances, the agent can learn while shaping its goals to conform as much as possible to the restraining bolt specifications.
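The core idea described in the abstract can be illustrated with a small, hedged sketch (not the paper's actual construction): tabular Q-learning over the product of the agent's own state space and the states of a deterministic automaton tracking a simple temporal specification, with an extra reward when the specification becomes satisfied. The world, the "eventually perform action 'stay'" specification, the reward values, and all function names (`step`, `dfa_step`, `train`, `greedy_run`) are illustrative assumptions.

```python
import random

# Toy illustration (assumed setup, not the paper's code): the agent lives on
# positions 0..4 of a line and gets reward 1.0 at position 4; a restraining
# specification "eventually execute action 'stay'" is tracked by a 2-state DFA.
# Q-learning runs over the product state (position, DFA state).
ACTIONS = ["left", "right", "stay"]

def step(pos, action):
    """Agent-side world dynamics on the line 0..4."""
    if action == "left":
        return max(0, pos - 1)
    if action == "right":
        return min(4, pos + 1)
    return pos  # 'stay'

def dfa_step(q, action):
    """DFA for 'eventually stay': q=0 not yet satisfied, q=1 accepting (absorbing)."""
    return 1 if (q == 1 or action == "stay") else 0

def train(episodes=5000, alpha=0.1, gamma=0.95, eps=0.2, bolt_reward=0.5, seed=0):
    """Q-learning on the product space; bolt_reward is granted once, on acceptance."""
    rng = random.Random(seed)
    Q = {}  # (pos, dfa_state, action) -> estimated value
    for _ in range(episodes):
        pos, q = 0, 0
        for _ in range(20):
            if rng.random() < eps:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda a_: Q.get((pos, q, a_), 0.0))
            npos, nq = step(pos, a), dfa_step(q, a)
            r = (1.0 if npos == 4 else 0.0) + (bolt_reward if q == 0 and nq == 1 else 0.0)
            best_next = max(Q.get((npos, nq, a_), 0.0) for a_ in ACTIONS)
            key = (pos, q, a)
            Q[key] = Q.get(key, 0.0) + alpha * (r + gamma * best_next - Q.get(key, 0.0))
            pos, q = npos, nq
            if pos == 4:
                break
    return Q

def greedy_run(Q, max_steps=20):
    """Follow the learned greedy policy from the initial product state."""
    pos, q = 0, 0
    for _ in range(max_steps):
        a = max(ACTIONS, key=lambda a_: Q.get((pos, q, a_), 0.0))
        pos, q = step(pos, a), dfa_step(q, a)
        if pos == 4 and q == 1:
            break
    return pos, q
```

Under this setup the learned greedy policy both reaches the agent's goal and satisfies the specification: the extra reward makes inserting a 'stay' action worthwhile, which is the sense in which the agent "shapes its goals" to conform to the restraining bolt.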
2019
Proceedings of the Twenty-Ninth International Conference on Automated Planning and Scheduling
Probabilistic planning; MDPs and POMDPs; Reasoning about action and change; Temporal planning
04 Conference proceedings publication::04b Conference paper in volume
Files attached to this item
File: DeGiacomo_Foundations_2019.pdf (restricted: repository administrators only)
Type: Publisher's version (published version with the publisher's layout)
License: All rights reserved
Size: 4.9 MB
Format: Adobe PDF
Access: contact the author

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1401140
Citations
  • Scopus: 66