
Max-cover in map-reduce / Chierichetti, Flavio; Kumar, Ravi; Tomkins, Andrew. - (2010), pp. 231-240. (Paper presented at the 19th International World Wide Web Conference, WWW2010, held in Raleigh, NC, 26-30 April 2010) [10.1145/1772690.1772715].

Max-cover in map-reduce

Chierichetti, Flavio;
2010

Abstract

The NP-hard Max-k-cover problem requires selecting k sets from a collection so as to maximize the size of the union. This classic problem occurs commonly in many settings in web search and advertising. For moderately-sized instances, a greedy algorithm gives an approximation of (1-1/e). However, the greedy algorithm requires updating scores of arbitrary elements after each step, and hence becomes intractable for large datasets. We give the first max cover algorithm designed for today's large-scale commodity clusters. Our algorithm has provably almost the same approximation as greedy, but runs much faster. Furthermore, it can be easily expressed in the MapReduce programming paradigm, and requires only polylogarithmically many passes over the data. Our experiments on five large problem instances show that our algorithm is practical and can achieve good speedups compared to the sequential greedy algorithm. © 2010 International World Wide Web Conference Committee (IW3C2).
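The sequential greedy algorithm the abstract refers to can be sketched as follows. This is a minimal illustrative sketch of standard greedy max-k-cover, not the paper's MapReduce algorithm; the function and variable names are hypothetical.

```python
def greedy_max_k_cover(sets, k):
    """Greedy (1 - 1/e)-approximation for Max-k-cover.

    sets: dict mapping a set id to a frozenset of elements.
    At each step, picks the set covering the most still-uncovered
    elements -- the score updates that make it costly at scale.
    """
    covered = set()
    chosen = []
    remaining = dict(sets)
    for _ in range(min(k, len(remaining))):
        # pick the set with the largest marginal coverage
        best = max(remaining, key=lambda s: len(remaining[s] - covered))
        if not remaining[best] - covered:
            break  # no remaining set adds new elements
        chosen.append(best)
        covered |= remaining.pop(best)
    return chosen, covered

# tiny example: k = 2 picks "A" then "C", covering all six elements
sets = {"A": frozenset({1, 2, 3}),
        "B": frozenset({3, 4}),
        "C": frozenset({4, 5, 6})}
chosen, covered = greedy_max_k_cover(sets, 2)
```

After each selection, the marginal value of every other set can change, which is why the sequential version requires updating scores of arbitrary elements at each step.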
2010
19th International World Wide Web Conference, WWW2010
greedy algorithm; map-reduce; maximum cover
04 Conference proceedings publication::04b Conference paper in volume
Files attached to this record
There are no files associated with this record.

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/497794
Warning: the data displayed here have not been validated by the university.

Citations
  • PubMed Central: n/a
  • Scopus: 84
  • Web of Science (ISI): n/a