Tight Bounds for Learning RUMs from Small Slates

Chierichetti F.; Giacchini M.; Panconesi A.
2024

Abstract

A Random Utility Model (RUM) is a classical model of user behavior defined by a distribution over R^n. A user presented with a subset of {1, ..., n} selects the item of the subset with the highest utility, according to a utility vector drawn from the specified distribution. In practical settings the subset is often small, as in the "ten blue links" of web search. In this paper, we consider a learning setting with complete information on user choices from subsets of size at most k. We show that k = Θ(√n) is both necessary and sufficient to predict the distribution of all user choices with an arbitrarily small constant error. Based on the upper bound, we obtain new algorithms for approximate RUM learning and variations thereof. Furthermore, we employ our lower bound for approximate RUM learning to derive lower bounds for fractional extensions of the well-studied k-deck and trace reconstruction problems.
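For concreteness, here is a minimal Python sketch of the choice process the abstract describes. The Gaussian utility distribution, the parameter values, and the helper name rum_choice are illustrative assumptions, not the paper's construction; the paper's results hold for arbitrary distributions over R^n.

```python
import numpy as np

# Minimal RUM sketch (assumptions: Gaussian utilities, n = 9, k = sqrt(n) = 3).
rng = np.random.default_rng(0)
n = 9                  # universe of items {0, ..., n-1}
k = int(np.sqrt(n))    # slate size; the paper shows k = Theta(sqrt(n)) suffices

mean = rng.normal(size=n)  # utility distribution N(mean, I) -- an assumption
cov = np.eye(n)

def rum_choice(slate):
    """Draw one utility vector and return the slate item with highest utility."""
    u = rng.multivariate_normal(mean, cov)
    return max(slate, key=lambda i: u[i])

# Empirical choice distribution on one slate of size k ("complete information"
# in the abstract means these choice frequencies are known exactly).
slate = sorted(rng.choice(n, size=k, replace=False))
samples = [rum_choice(slate) for _ in range(10_000)]
print({int(i): samples.count(i) / len(samples) for i in slate})
```

The learning question studied in the paper is the converse: given such choice frequencies for all slates of size at most k, recover a RUM that reproduces the choice distribution on every subset up to a small constant error.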
2024
Proceedings of the 38th International Conference on Neural Information Processing Systems
random utility model, discrete choice, machine learning
04 Publication in conference proceedings::04b Conference paper in volume
Tight Bounds for Learning RUMs from Small Slates / Chierichetti, F.; Giacchini, M.; Kumar, R.; Panconesi, A.; Tomkins, A. - Vol. 37 (2024). (Proceedings of the 38th International Conference on Neural Information Processing Systems, Vancouver, Canada).

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1755455
