Stochastic Fairness and Language-Theoretic Fairness in Planning in Nondeterministic Domains

Giuseppe De Giacomo (second author); 2020

Abstract

We address two central notions of fairness in the literature of nondeterministic fully observable domains. The first, which we call stochastic fairness, is classical, and assumes an environment which operates probabilistically using possibly unknown probabilities. The second, which is language-theoretic, assumes that if an action is taken from a given state infinitely often then all its possible outcomes should appear infinitely often; we call this state-action fairness. While the two notions coincide for standard reachability goals, they differ for temporally extended goals. This important difference has been overlooked in the planning literature and has led to the use of a product-based reduction in a number of published algorithms which were stated for state-action fairness, for which they are incorrect, while being correct for stochastic fairness. We remedy this and provide a correct optimal algorithm for solving state-action fair planning for LTL/LTLf goals, as well as a correct proof of the lower bound of the goal-complexity. Our proof is general enough that it also provides, for the no-fairness and stochastic-fairness cases, multiple missing lower bounds and new proofs of known lower bounds. Overall, we show that stochastic fairness is better behaved than state-action fairness.
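As an informal sketch (not part of the repository record), the state-action fairness condition described in the abstract can be written as an LTL-style constraint over infinite runs of the planning domain; the state, action, and transition symbols below ($s$, $a$, $s'$, $\delta$) are illustrative notation rather than the paper's own.

% Illustrative sketch of state-action fairness (assumed notation, not taken from the paper):
% for every state s, action a, and possible outcome s' of executing a in s,
% if (s, a) occurs infinitely often along a run, then s' follows it infinitely often.
\[
  \bigwedge_{s,\,a}\ \bigwedge_{s' \in \delta(s,a)}
    \Bigl( \mathsf{G}\mathsf{F}\,(s \wedge a)
      \;\rightarrow\;
      \mathsf{G}\mathsf{F}\,\bigl(s \wedge a \wedge \mathsf{X}\,s'\bigr) \Bigr)
\]

Under stochastic fairness, outcomes are instead drawn with fixed but possibly unknown positive probabilities; runs then satisfy the above constraint with probability 1, yet, as the abstract notes, the two notions still differ for temporally extended goals.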
Year: 2020
Conference: International Conference on Automated Planning and Scheduling
Keywords: Artificial Intelligence; Knowledge Representation; Reasoning about Actions
Type: 04 Publication in conference proceedings::04b Conference paper in a volume
Stochastic Fairness and Language-Theoretic Fairness in Planning in Nondeterministic Domains / Aminof, Benjamin; De Giacomo, Giuseppe; Rubin, Sasha. - 30:(2020), pp. 20-28. (Paper presented at the International Conference on Automated Planning and Scheduling, held in Nancy, France).
Files attached to this record

Aminof_preprint_Stochastic_2020.pdf
Access: open access
Note: https://ojs.aaai.org/index.php/ICAPS/article/view/6641
Type: Pre-print (manuscript submitted to the publisher, prior to peer review)
License: All rights reserved
Size: 221.7 kB
Format: Adobe PDF

Aminof_Stochastic_2020.pdf
Access: open access
Note: https://ojs.aaai.org/index.php/ICAPS/article/view/6641
Type: Publisher's version (published version with the publisher's layout)
License: All rights reserved
Size: 544.87 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this record: https://hdl.handle.net/11573/1506307
Citations
  • Scopus 13