Spot the difference: your bucket is leaking. A novel methodology to expose A/B testing effortlessly / Conti, Mauro; Gangwal, Ankit; Gochhayat, Sarada Prasad; Tolomei, Gabriele. - (2018), pp. 1-7. (Paper presented at the 6th IEEE Conference on Communications and Network Security, CNS 2018, held in Beijing, China) [10.1109/cns.2018.8433122].
Spot the difference: your bucket is leaking. A novel methodology to expose A/B testing effortlessly
Mauro Conti; Gabriele Tolomei
2018
Abstract
A/B testing, also known as bucket testing, allows an organization to evaluate the performance and impact of new features implemented on its website by exposing a small fraction of visitors to them. In this paper, we propose a novel methodology that can reveal an ongoing bucket test and the various features being tested. To evaluate the effectiveness of our proposed methodology, we begin by testing the homepages of seven popular websites. We discover that four of them were actively performing bucket testing during our experiments, and we successfully spot the different features being tested. Moreover, to investigate the factors that might affect bucket testing, we set up another experiment, in which we request web pages from different browsers and record several features of the server response, e.g., cookies set by the server, the IP address and port of the responding server, and the response time. We observe variations in response time across browsers, which suggests that the type of user agent plays an important role. Finally, we showcase the captured bucket-elements and release our dataset, which can serve as ground truth for future investigations in this direction of research. © 2018 IEEE.
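The abstract describes the probing methodology only at a high level. As a rough illustration of the kind of measurement it sketches, the Python snippet below is a minimal sketch, not the authors' tool: the target URL and User-Agent strings are placeholders, and the responding IP is approximated by a DNS lookup rather than read from the open socket. It fetches a homepage with different User-Agent headers, records the response features the abstract lists (cookies set by the server, responding IP and port, response time), and diffs the returned HTML so that lines varying across requests surface as candidate bucket-elements.

```python
# Minimal sketch (not the authors' implementation): probe a homepage for
# signs of A/B (bucket) testing by repeating requests with different
# User-Agent strings and diffing what comes back.
import difflib
import socket
import time
from urllib.parse import urlparse

import requests  # third-party: pip install requests

# Placeholder target and user agents; substitute your own.
URL = "https://example.com/"
USER_AGENTS = {
    "chrome": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
              "(KHTML, like Gecko) Chrome/68.0.3440.106 Safari/537.36",
    "firefox": "Mozilla/5.0 (X11; Linux x86_64; rv:61.0) "
               "Gecko/20100101 Firefox/61.0",
}

def probe(url, user_agent):
    """Fetch `url` once and record the response features the abstract
    mentions: cookies set by the server, responding IP/port, response time."""
    start = time.time()
    resp = requests.get(url, headers={"User-Agent": user_agent}, timeout=30)
    elapsed = time.time() - start
    host = urlparse(url).hostname
    # Simplification: DNS-resolved IP, not the actual peer of this request.
    ip = socket.gethostbyname(host)
    port = 443 if url.startswith("https") else 80
    return {
        "cookies": dict(resp.cookies),
        "server": (ip, port),
        "time_s": round(elapsed, 3),
        "body": resp.text,
    }

def diff_bodies(a, b, context=1):
    """Unified-diff hunks between two HTML payloads; lines that differ
    across otherwise-identical requests are candidate bucket-elements."""
    return list(difflib.unified_diff(
        a.splitlines(), b.splitlines(), n=context, lineterm=""))

if __name__ == "__main__":
    results = {name: probe(URL, ua) for name, ua in USER_AGENTS.items()}
    for name, r in results.items():
        print(name, r["server"], r["time_s"], "s", sorted(r["cookies"]))
    hunks = diff_bodies(results["chrome"]["body"], results["firefox"]["body"])
    print(f"{len(hunks)} differing lines between the two variants")
```

In practice one would repeat such probes over time and filter out legitimately dynamic content (timestamps, CSRF tokens, ads) before attributing a difference to bucket testing; the sketch above only surfaces raw candidates.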
| File | Size | Format | Access |
|---|---|---|---|
| Conti_Spot_2018.pdf (Type: publisher's version, published with the publisher's layout; License: all rights reserved; available to archive managers only) | 6.24 MB | Adobe PDF | Contact the author |
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.