
Stochastic bandits for multi-platform budget optimization in online advertising

Leonardi S.
2021

Abstract

We study the problem of an online advertising system that wants to optimally spend an advertiser's given budget for a campaign across multiple platforms, without knowing the value of showing an ad to the users on those platforms. We model this challenging practical application as a Stochastic Bandits with Knapsacks problem over T rounds of bidding, with the set of arms given by the set of distinct bidding m-tuples, where m is the number of platforms. We modify the algorithm proposed in Badanidiyuru et al. [11] to extend it to the case of multiple platforms, obtaining an algorithm for both discrete and continuous bid spaces. Namely, for discrete bid spaces we give an algorithm with regret , where OPT is the performance of the optimal algorithm that knows the distributions. For continuous bid spaces the regret of our algorithm is . When restricted to this special case, this bound improves over Sankararaman and Slivkins [34] in the regime OPT ≪ T, as is the case in the particular application at hand. Second, we show a lower bound for the discrete case and an Ω(m^{1/3} B^{2/3}) lower bound for the continuous setting, almost matching the upper bounds. Finally, we use a real-world data set from a large internet online advertising company with multiple ad platforms and show that our algorithms outperform common benchmarks and satisfy the properties required in the real-world application.
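The setup described in the abstract — arms are m-tuples of bids, each pull yields a stochastic value and consumes budget, and play stops when the budget is exhausted — can be illustrated with a minimal UCB-style bandits-with-knapsacks loop. This is a hypothetical sketch for intuition only, not the paper's algorithm; the function names (`run_bwk_bidding`, `value_fn`, `cost_fn`) and the optimistic value-to-cost index are illustrative assumptions.

```python
import itertools
import math
import random

def run_bwk_bidding(value_fn, cost_fn, bids, m, budget, horizon, seed=0):
    """Illustrative bandits-with-knapsacks loop over discrete bid m-tuples.

    Each round, pull the arm with the best optimistic value-to-cost ratio
    (UCB on value, LCB on cost) and stop once the budget is spent.
    """
    rng = random.Random(seed)
    # One arm per distinct m-tuple of bids, one bid per platform.
    arms = list(itertools.product(bids, repeat=m))
    counts = {a: 0 for a in arms}
    val_sum = {a: 0.0 for a in arms}
    cost_sum = {a: 0.0 for a in arms}
    spent, reward = 0.0, 0.0

    for t in range(1, horizon + 1):
        if spent >= budget:  # knapsack constraint: stop when budget is gone
            break

        def ucb_ratio(a):
            if counts[a] == 0:
                return float("inf")  # force one exploratory pull per arm
            rad = math.sqrt(2.0 * math.log(t) / counts[a])
            v_ucb = val_sum[a] / counts[a] + rad          # optimistic value
            c_lcb = max(cost_sum[a] / counts[a] - rad, 1e-6)  # optimistic cost
            return v_ucb / c_lcb

        a = max(arms, key=ucb_ratio)
        v, c = value_fn(a, rng), cost_fn(a, rng)  # observe stochastic outcome
        counts[a] += 1
        val_sum[a] += v
        cost_sum[a] += c
        spent += c
        reward += v

    return reward, spent
```

With m platforms and k discrete bid levels the arm set has size k^m, which is why the paper's treatment of structure across platforms matters; this naive enumeration is only viable for small m.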
International World Wide Web Conference
Multi-armed bandits; online advertising; multiple platforms
04 Conference proceedings publication::04b Conference paper in volume
Stochastic bandits for multi-platform budget optimization in online advertising / Avadhanula, V.; Colini Baldeschi, R.; Leonardi, S.; Sankararaman, K. A.; Schrijvers, O. - (2021), pp. 2805-2817. (Paper presented at the International World Wide Web Conference, held in Ljubljana, Slovenia/online) [10.1145/3442381.3450074].
Files attached to this product

Avadhanula_preprint_Stochastic-bandits_2021.pdf
Access: open access
Type: Pre-print (manuscript submitted to the publisher, prior to peer review)
License: Creative Commons
Size: 828.9 kB
Format: Adobe PDF

Avadhanula_Stochastic-bandits_2021.pdf
Access: archive administrators only (contact the author)
Type: Publisher's version (published version with the publisher's layout)
License: Creative Commons
Size: 1.07 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1557213
Citations
  • Scopus: 25
  • Web of Science (ISI): 13