Towards Differentially Private Machine Learning Models and Their Robustness to Adversaries

Alberto Carlo Maria Mancino; Tommaso Di Noia
2022

Abstract

The pervasiveness of modern machine learning algorithms exposes users to new vulnerabilities: the violation of sensitive information stored in the training data and erroneous model behavior induced by adversaries. State-of-the-art approaches to prevent such behaviors are usually based on Differential Privacy (DP) and Adversarial Training (AT). DP is a rigorous formulation of privacy in probabilistic terms that prevents information leakage which could reveal private information about users, while AT algorithms empirically increase a system's robustness by injecting adversarial examples during the training process. Both techniques achieve their goals by injecting modeled noise into the system. We propose to analyze the relationship between these two techniques, studying how one affects the other. Our objective is to design a mechanism that guarantees both DP and robustness against adversarial attacks by injecting modeled noise into the system. We adopt Recommender Systems as the application scenario because of the severe risks they pose to user privacy and their sensitivity to adversaries.
22nd International Conference on Web Engineering, ICWE 2022
Differential privacy, Adversarial training, Recommender systems, Privacy preservation, System robustness
04 Conference proceedings publication::04b Conference paper in volume
Towards Differentially Private Machine Learning Models and Their Robustness to Adversaries / Mancino, Alberto Carlo Maria; Di Noia, Tommaso. - (2022). (Paper presented at the 22nd International Conference on Web Engineering, ICWE 2022, held in Bari, Italy) [10.1007/978-3-031-09917-5_35].

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1671647
Citations
  • Scopus: 1
  • Web of Science: 0