Maggi, Piero; Mastrangelo, Simon; Scelsi, Marco; Manara, Luca; Tempestini, Giorgia; Di Nocera, Francesco. Usability Evaluations Employing Online Panels Are Not Bias-Free. Applied Sciences, 12(17), 2022, 8621. ISSN 2076-3417. DOI: 10.3390/app12178621
Usability Evaluations Employing Online Panels Are Not Bias-Free
Piero Maggi; Simon Mastrangelo; Giorgia Tempestini; Francesco Di Nocera
2022
Abstract
A growing trend in UX research is the use of Online Panels (OPs), i.e., pools of people enrolled in a web platform who have agreed to participate regularly in online studies and/or in the execution of simple and repetitive tasks. The effect of the participation of such “professional respondents” on data quality has been questioned in a variety of fields (e.g., Psychology and Marketing). Notwithstanding the increasing use of OPs in UX research, there is a lack of studies investigating the biases affecting usability assessments provided by this type of respondent. In this paper, we address this issue by comparing the usability evaluations provided by professional respondents commonly involved in debugging activities, non-professional respondents, and naive people not belonging to any OP. In a set of three studies, we examined the effects of both expertise and type of task (debugging vs. browsing) on usability assessments. A total of 138 individuals participated in these studies. Results showed that individuals who performed the debugging task provided more positive usability ratings regardless of their skill level; conversely, professional respondents provided more severe and critical ratings of perceived usability than non-professionals. Finally, the comparison between the online panelists and naive users allowed us to better understand whether professional respondents can be involved in usability evaluations without jeopardizing them.
File: Maggi_Usability Evaluations_2022.pdf
Access: open access
Type: publisher's version (published with the publisher's layout)
License: Creative Commons
Size: 4.12 MB
Format: Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.