

Fast adaptive non-monotone submodular maximization subject to a knapsack constraint

Georgios Amanatidis; Federico Fusco; Filippos Lazos; Stefano Leonardi; Rebecca Reiffenhauser
2020

Abstract

Constrained submodular maximization problems encompass a wide variety of applications, including personalized recommendation, team formation, and revenue maximization via viral marketing. The massive instances occurring in modern-day applications can render existing algorithms prohibitively slow. Moreover, frequently those instances are also inherently stochastic. Focusing on these challenges, we revisit the classic problem of maximizing a (possibly non-monotone) submodular function subject to a knapsack constraint. We present a simple randomized greedy algorithm that achieves a 5.83 approximation and runs in O(n log n) time, i.e., at least a factor n faster than other state-of-the-art algorithms. The robustness of our approach allows us to further transfer it to a stochastic version of the problem. There, we obtain a 9-approximation to the best adaptive policy, which is the first constant approximation for non-monotone objectives. Experimental evaluation of our algorithms showcases their improved performance on real and synthetic data.
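As a rough illustration of the greedy-by-density idea underlying such knapsack-constrained submodular maximization (this is only a generic sketch, not the paper's randomized O(n log n) algorithm; the function name, the toy coverage objective, and all parameters are assumptions made for the example):

```python
def greedy_knapsack(elements, cost, budget, f):
    """Density greedy: repeatedly add the feasible element with the
    highest marginal-gain-to-cost ratio until nothing fits or helps."""
    chosen = []
    remaining = set(elements)
    spent = 0.0
    while remaining:
        base = f(chosen)
        best, best_ratio = None, 0.0
        for e in remaining:
            if spent + cost[e] > budget:
                continue  # element does not fit in the remaining budget
            ratio = (f(chosen + [e]) - base) / cost[e]
            if ratio > best_ratio:
                best, best_ratio = e, ratio
        if best is None:
            break  # no feasible element with positive marginal gain
        chosen.append(best)
        spent += cost[best]
        remaining.discard(best)
    return chosen

# Toy instance: a monotone submodular coverage objective.
sets = {"a": {1, 2, 3}, "b": {3, 4}, "c": {5}}
cost = {"a": 1.0, "b": 1.0, "c": 2.0}
coverage = lambda S: len(set().union(*(sets[e] for e in S))) if S else 0
picked = greedy_knapsack(sets.keys(), cost, budget=2.0, f=coverage)
```

On this instance the density greedy first picks "a" (ratio 3), then "b" (ratio 1), while "c" no longer fits the budget. Plain density greedy alone gives no constant guarantee for non-monotone objectives; the paper's contribution is a randomized variant that does.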
2020
Advances in Neural Information Processing Systems (formerly NIPS)
Submodular function optimization; non-monotone submodularity; knapsack constraints
04 Publication in conference proceedings::04b Conference paper in volume
Fast adaptive non-monotone submodular maximization subject to a knapsack constraint / Amanatidis, Georgios; Fusco, Federico; Lazos, Filippos; Leonardi, Stefano; Reiffenhauser, Rebecca. - (2020). (Paper presented at the conference Advances in Neural Information Processing Systems (formerly NIPS), held virtually).
Files attached to this item
Amanatidis_Fast-adaptive_2020.pdf
Access: open access
Note: https://dl.acm.org/doi/abs/10.5555/3495724.3497142
Type: Publisher's version (published version with the publisher's layout)
License: All rights reserved
Size: 338.75 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1470525
Citations
  • Scopus: 33