Bhattacharjee, Ananya; Song, Haochen; Wu, Xuening; Tomlinson, Justice; Reza, Mohi; Chowdhury, Akmar Ehsan; Deliu, Nina; Price, Thomas W.; Williams, Joseph Jay (2023). Informing Users about Data Imputation: Exploring the Design Space for Dealing With Non-Responses. Proceedings of the 11th AAAI Conference on Human Computation and Crowdsourcing (HCOMP 2023), held in Delft, Netherlands, 11(1), pp. 14-26. [10.1609/hcomp.v11i1.27544]
Informing Users about Data Imputation: Exploring the Design Space for Dealing With Non-Responses
Deliu, Nina (Methodology)
2023
Abstract
Machine learning algorithms often require quantitative ratings from users to effectively predict helpful content. When these ratings are unavailable, systems make implicit assumptions or imputations to fill in the missing information; however, users are generally kept unaware of these processes. In our work, we explore ways of informing users about system imputations, and experiment with imputed ratings and various explanations required by users to correct imputations. We investigate these approaches through the deployment of a text messaging probe to 26 participants to help them manage psychological wellbeing. We provide quantitative results to report users' reactions to correct vs. incorrect imputations and the potential risks of biasing their ratings. Using semi-structured interviews with participants, we characterize the potential trade-offs regarding user autonomy, and draw insights about alternative ways of involving users in the imputation process. Our findings provide useful directions for future research on communicating system imputation and interpreting user non-responses.

File | Size | Format
---|---|---
Deliu_Informing-Users_2023.pdf (open access; Note: full article; Type: Publisher's version, published with the publisher's layout; License: All rights reserved) | 742.82 kB | Adobe PDF
Proceedings of the Eleventh AAAI Conference on Human Computation and Crowdsourcing.pdf (open access; Note: title page and table of contents; Type: Other attached material; License: All rights reserved) | 905.44 kB | Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.