Mancino, Alberto Carlo Maria; Di Noia, Tommaso (2022). Towards Differentially Private Machine Learning Models and Their Robustness to Adversaries. Vol. 13362, pp. 455-461. Paper presented at the 22nd International Conference on Web Engineering (ICWE 2022), Bari, Italy. DOI: 10.1007/978-3-031-09917-5_35.
Towards Differentially Private Machine Learning Models and Their Robustness to Adversaries
Alberto Carlo Maria Mancino; Tommaso Di Noia
2022
Abstract
The pervasiveness of modern machine learning algorithms exposes users to new vulnerabilities: violation of sensitive information stored in the training data and incorrect model behavior caused by adversaries. State-of-the-art approaches to prevent such behaviors are usually based on Differential Privacy (DP) and Adversarial Training (AT). DP is a rigorous formulation of privacy in probabilistic terms that prevents information leakages which could reveal private information about users, while AT algorithms empirically increase the system's robustness by injecting adversarial examples during training. Both techniques achieve their goal by introducing modeled noise into the system. We propose analyzing the relationship between these two techniques, studying how one affects the other. Our objective is to design a mechanism that guarantees both DP and robustness against adversarial attacks by injecting modeled noise into the system. We propose Recommender Systems as an application scenario because of the severe risks to user privacy and the system's sensitivity to adversaries.
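For reference, the two notions the abstract relies on are sketched below in their conventional formulations; these are standard textbook definitions and are assumed here, since the abstract does not spell out the paper's specific mechanism. A randomized mechanism \(\mathcal{M}\) satisfies \((\varepsilon, \delta)\)-differential privacy if, for any two neighboring datasets \(D\) and \(D'\) and any set of outcomes \(S\),
\[
\Pr[\mathcal{M}(D) \in S] \;\le\; e^{\varepsilon}\,\Pr[\mathcal{M}(D') \in S] + \delta ,
\]
while Adversarial Training is commonly framed as the min-max problem of fitting model parameters \(\theta\) against worst-case input perturbations \(\eta\) within an attack budget \(\epsilon_{\mathrm{adv}}\) (distinct from the privacy parameter \(\varepsilon\)),
\[
\min_{\theta}\; \mathbb{E}_{(x,y)\sim\mathcal{D}}\Big[\max_{\|\eta\|\le \epsilon_{\mathrm{adv}}} \mathcal{L}\big(f_{\theta}(x+\eta),\, y\big)\Big].
\]
Both formulations make explicit the observation in the abstract that DP and AT each inject modeled noise or perturbations into the training process.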
| File | Access | Type | License | Size | Format | Note |
|---|---|---|---|---|---|---|
| Mancino_preprint_Towards_2022.pdf | Open access | Pre-print (manuscript submitted to the publisher, prior to peer review) | All rights reserved | 195.61 kB | Adobe PDF | https://link.springer.com/chapter/10.1007/978-3-031-09917-5_35 |
| Mancino_Towards_2022.pdf | Archive administrators only | Publisher's version (published version with the publisher's layout) | All rights reserved | 1.82 MB | Adobe PDF | Contact the author |
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.