Restraining bolts for reinforcement learning agents / De Giacomo, Giuseppe; Favorito, Marco; Iocchi, Luca; Patrizi, Fabio. - 34th AAAI Conference on Artificial Intelligence:9 (2020), pp. 13659-13662. (Paper presented at the National Conference of the American Association for Artificial Intelligence, held in New York, NY, USA) [10.1609/aaai.v34i09.7114].

Restraining bolts for reinforcement learning agents

Giuseppe De Giacomo; Marco Favorito; Luca Iocchi; Fabio Patrizi
2020

Abstract

In this work, we investigate the concept of a “restraining bolt”, inspired by science fiction. We consider two distinct sets of features extracted from the world: one by the agent, and one by an authority imposing restraining specifications on the agent's behaviour (the “restraining bolt”). The two sets of features, and hence the models of the world attainable from them, are apparently unrelated, since they are of interest to independent parties; however, they both account for (aspects of) the same world. We consider the case in which the agent is a reinforcement learning agent over a set of low-level (subsymbolic) features, while the restraining bolt is specified logically, using linear temporal logic on finite traces (LTLf/LDLf), over a set of high-level symbolic features. We show formally, and illustrate with examples, that under general circumstances the agent can learn while shaping its goals to conform, as much as possible, to the restraining bolt specifications.
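The abstract describes combining a reinforcement learning agent with a restraining bolt whose LTLf/LDLf specification is evaluated over separate, high-level symbolic features. A minimal, hypothetical sketch of this idea in Python follows: the logical specification is assumed to have already been compiled into a DFA (the toy spec, symbols, and all function names here are illustrative, not taken from the paper), the agent's learning state is paired with the DFA state, and extra reward is granted when the specification becomes satisfied.

```python
from collections import defaultdict

# Hypothetical DFA for a toy spec over high-level symbols {"goal", "hazard"}:
# "eventually reach goal, without ever observing hazard".
DFA_TRANSITIONS = {
    (0, "none"): 0, (0, "goal"): 1, (0, "hazard"): 2,
    (1, "none"): 1, (1, "goal"): 1, (1, "hazard"): 1,  # accepting sink
    (2, "none"): 2, (2, "goal"): 2, (2, "hazard"): 2,  # rejecting sink
}
ACCEPTING = {1}

def bolt_step(q, symbol):
    """Advance the restraining bolt's DFA on one high-level symbol."""
    return DFA_TRANSITIONS[(q, symbol)]

def shaped_reward(env_reward, q, q_next, bolt_reward=10.0):
    """Agent's own reward plus a bonus when the bolt's spec becomes satisfied."""
    bonus = bolt_reward if q_next in ACCEPTING and q not in ACCEPTING else 0.0
    return env_reward + bonus

# Q-table over the *product* state (agent state, DFA state, action): the agent
# still learns over its own low-level features, augmented only with the DFA state.
Q = defaultdict(float)

def q_update(s, q, a, r, s_next, q_next, actions, alpha=0.1, gamma=0.9):
    """Standard Q-learning update on the product state space."""
    best_next = max(Q[(s_next, q_next, a2)] for a2 in actions)
    Q[(s, q, a)] += alpha * (r + gamma * best_next - Q[(s, q, a)])
```

The key design point reflected here is the separation of concerns claimed in the abstract: the bolt only reads high-level symbols and moves its DFA, while the agent's learner never interprets the logic itself, seeing only the extra state component and the shaped reward.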
2020
National Conference of the American Association for Artificial Intelligence
Restraining bolts; non-markovian rewards; reinforcement learning
04 Conference proceedings publication::04b Conference paper in volume
Files attached to this product
File: DeGiacomo_Restraining-Bolts_2020.pdf
Access: open access
Type: Publisher's version (published version with the publisher's layout)
License: All rights reserved
Size: 870.69 kB
Format: Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1435479
Citations
  • PMC: ND
  • Scopus: 11
  • Web of Science: 5