
Optimally Deceiving a Learning Leader in Stackelberg Games / Birmpas, Georgios; Gan, Jiarui; Hollender, Alexandros; Marmolejo, Francisco; Rajgopal, Ninad; Voudouris, Alexandros. - 33:(2020), pp. 20624-20635. (Paper presented at Advances in Neural Information Processing Systems 33 (NeurIPS 2020), held virtually).

Optimally Deceiving a Learning Leader in Stackelberg Games

Birmpas, Georgios; Gan, Jiarui; Hollender, Alexandros; Marmolejo, Francisco; Rajgopal, Ninad; Voudouris, Alexandros

Abstract

Recent results in the ML community have revealed that learning algorithms used to compute the optimal strategy for the leader to commit to in a Stackelberg game are susceptible to manipulation by the follower. Such a learning algorithm operates by querying the follower's best responses or payoffs, so the follower can deceive the algorithm by responding as if their payoffs were quite different from what they actually are. For this strategic behavior to succeed, the main challenge faced by the follower is to pinpoint the payoffs that would make the learning algorithm compute a commitment such that best responding to it maximizes the follower's utility according to their true payoffs. While this problem has been considered before, the related literature has focused only on the simplified scenario in which the payoff space is finite, leaving the general version of the problem unanswered. In this paper, we fill this gap by showing that it is always possible for the follower to efficiently compute (near-)optimal payoffs to report, under various scenarios of learning interaction between the leader and the follower.
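To make the deception concrete, here is a minimal Python sketch using a hypothetical 2x2 game and a simple grid-based commitment learner, neither of which is taken from the paper. The leader computes its optimal commitment against the follower's reported payoff matrix (ties broken in the leader's favor, as in a strong Stackelberg equilibrium); the follower then best-responds according to the report while actually collecting utility under its true payoffs.

```python
# Illustrative sketch (hypothetical game, not from the paper): a follower
# gains by misreporting payoffs to a leader that learns the optimal
# commitment from the reported payoff matrix.

import numpy as np

def commitment(A, B, grid=1001):
    """Leader's optimal commitment against reported follower payoffs B.

    Enumerates mixed strategies p = Pr[leader plays row 0] on a grid;
    for each, the follower best-responds to B, with ties broken in the
    leader's favor (strong Stackelberg tie-breaking).
    Returns (p, follower_action) maximizing the leader's payoff under A.
    """
    best = (-np.inf, None, None)
    for p in np.linspace(0.0, 1.0, grid):
        mix = np.array([p, 1.0 - p])
        fol = mix @ B   # follower's expected payoff per action
        lead = mix @ A  # leader's expected payoff per action
        brs = np.flatnonzero(np.isclose(fol, fol.max()))  # reported BRs
        j = max(brs, key=lambda a: lead[a])  # tie-break for the leader
        if lead[j] > best[0]:
            best = (lead[j], p, j)
    return best[1], best[2]

# True payoffs: rows = leader actions {U, D}, cols = follower actions {L, R}.
A = np.array([[1.0, 3.0], [0.0, 2.0]])  # leader
B = np.array([[1.0, 0.0], [0.0, 1.0]])  # follower (true)

# Honest play: the leader learns the true B and commits accordingly.
p, j = commitment(A, B)
honest = np.array([p, 1 - p]) @ B[:, j]

# Deceptive play: the follower reports fake payoffs that always favor L,
# then best-responds to the induced commitment according to the fake B,
# while its realized utility is measured under the TRUE payoffs.
B_fake = np.array([[1.0, 0.0], [1.0, 0.0]])
p2, j2 = commitment(A, B_fake)
deceptive = np.array([p2, 1 - p2]) @ B[:, j2]

print(f"follower's true utility when honest:    {honest:.2f}")     # 0.50
print(f"follower's true utility when deceiving: {deceptive:.2f}")  # 1.00
```

In this instance, honest reporting induces the leader to commit to the 50/50 mix (yielding the follower a true expected utility of 0.5), while the fake report induces a pure commitment to the first row, under which the follower's best response earns true utility 1. The paper's question is how to compute such (near-)optimal fake payoffs in general, beyond finite payoff spaces.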
2020
Advances in Neural Information Processing Systems 33 (NeurIPS 2020)
Stackelberg games; strong Stackelberg equilibrium
04 Publication in conference proceedings::04b Conference paper in volume
Files attached to this record
File: Birmpas_Optimally_2020.pdf
Type: Publisher's version (published with the publisher's layout)
License: All rights reserved
Size: 534.02 kB
Format: Adobe PDF
Access: archive administrators only (contact the author)

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1504557
Citations
  • Scopus: 2