Guest editorial of the IPM special issue on algorithmic bias and fairness in search and recommendation / Boratto, L.; Faralli, S.; Marras, M.; Stilo, G.. - In: INFORMATION PROCESSING & MANAGEMENT. - ISSN 0306-4573. - 59:1(2022), p. 102791. [10.1016/j.ipm.2021.102791]

Guest editorial of the IPM special issue on algorithmic bias and fairness in search and recommendation

Faralli S. (co-first author; Member of the Collaboration Group); Stilo G. (co-first author; Member of the Collaboration Group)
2022

Abstract

Search and recommendation algorithms play a primary role in helping individuals filter the overwhelming number of alternatives offered in daily life. This automated intelligence is used on a myriad of platforms covering different domains, from e-commerce to education, from healthcare to social media. Ongoing research in these fields is bringing search and recommendation algorithms closer together, with search algorithms being personalized to users' characteristics and recommendation algorithms being optimized for ranking quality. This convergence makes it possible to identify common challenges and priorities, which is essential to tailor these systems to the needs of our society. Among the aspects receiving special attention in search and recommendation, the ability to uncover, characterize, and counteract data and algorithmic biases, while preserving the original level of accuracy, is proving prominent and timely. Both classes of algorithms are trained on historical data, which often conveys imbalances and inequalities. These patterns in the training data may be captured and amplified in the results these algorithms provide to users, leading to biased or even unfair decisions. The latter can happen when an algorithm systematically discriminates against a legally protected class of users, identified by a common sensitive attribute, or does not treat users equally at the individual level. Given the increasing adoption of systems empowered with search and recommendation capabilities, it is crucial to ensure that their decisions do not lead to biased or even discriminatory outcomes for particular groups or individuals. Controlling the effects generated by popularity bias to improve the perceived quality of results, supporting minority providers when creating recommendations for consumers, and being able to interpret why an algorithm produces a given biased result are examples of challenges that require attention. This special issue brings together original research methods and applications on algorithmic bias and fairness in search and recommendation. The rest of this article is structured as follows: Section 2 summarizes the contributions included in this special issue, and Section 3 provides concluding remarks.
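The abstract contrasts group fairness, where an algorithm discriminates against a class of users sharing a sensitive attribute, with fairness at the individual level, and highlights provider-side concerns such as supporting minority providers. As a purely illustrative sketch, not taken from the editorial or the papers in the issue, the hypothetical Python snippet below shows one common way such group disparities are quantified in rankings: summing position-discounted exposure per provider group. The log-based discount, the function name, and all data are assumptions made for illustration.

import math

def exposure_by_group(ranking, group_of):
    """Sum position-discounted exposure (1 / log2(rank + 1)) per group."""
    totals = {}
    for rank, item in enumerate(ranking, start=1):
        group = group_of[item]
        totals[group] = totals.get(group, 0.0) + 1.0 / math.log2(rank + 1)
    return totals

# Hypothetical ranked items and the provider group of each item.
ranking = ["a", "b", "c", "d", "e"]
group_of = {"a": "majority", "b": "majority", "c": "minority",
            "d": "majority", "e": "minority"}

print(exposure_by_group(ranking, group_of))
# Top positions dominate the discounted exposure, so minority providers
# ranked low receive a disproportionately small share; this is the kind
# of disparity that bias-aware methods aim to measure and reduce.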
Bias; Fairness; Search and Recommendation
01 Journal publication::01a Journal article
Files attached to this product
There are no files associated with this product.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1617419
Warning! The displayed data have not been validated by the university.

Citations
  • PMC: not available
  • Scopus: 0
  • Web of Science: 0