
Robust CLIP-Based Detector for Exposing Diffusion Model-Generated Images / Santosh; Lin, L.; Amerini, I.; Wang, X.; Hu, S. - 2024 (2024), pp. 1-7. (Paper presented at the 20th IEEE International Conference on Advanced Video and Signal-Based Surveillance, AVSS 2024, held in Canada) [10.1109/AVSS61716.2024.10672612].

Robust CLIP-Based Detector for Exposing Diffusion Model-Generated Images

Amerini, I.
2024

Abstract

Diffusion models (DMs) have revolutionized image generation, producing high-quality images with applications spanning various fields. However, their ability to create hyper-realistic images poses significant challenges in distinguishing between real and synthetic content, raising concerns about digital authenticity and potential misuse in creating deepfakes. This work introduces a robust detection framework that integrates image and text features extracted by the CLIP model with a Multilayer Perceptron (MLP) classifier. We propose a novel loss that improves the detector's robustness and handles imbalanced datasets. Additionally, we flatten the loss landscape during training to improve the detector's generalization capabilities. Extensive experiments demonstrate the effectiveness of our method, which outperforms traditional detection techniques, underscoring its potential to set a new state of the art in DM-generated image detection. The code is available at https://github.com/Purdue-M2/RobustDM_Generated_Image_Detection.
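The abstract describes the architecture only at a high level. The sketch below is an illustrative assumption of how CLIP image and text features could feed an MLP classifier for real-vs-fake prediction; it is not the authors' released implementation (the GitHub link above is the authoritative reference). The checkpoint name, the concatenation fusion, the frozen backbone, and the hidden size are all hypothetical choices.

```python
# Minimal sketch (assumption, not the authors' code): CLIP image/text features
# concatenated and passed to an MLP head for binary real-vs-fake classification.
import torch
import torch.nn as nn
from transformers import CLIPModel, CLIPProcessor


class CLIPMLPDetector(nn.Module):
    def __init__(self, clip_name="openai/clip-vit-base-patch32", hidden=256):
        super().__init__()
        self.clip = CLIPModel.from_pretrained(clip_name)
        self.clip.requires_grad_(False)           # frozen feature extractor (assumption)
        dim = self.clip.config.projection_dim     # 512 for ViT-B/32
        self.mlp = nn.Sequential(                 # lightweight classification head
            nn.Linear(2 * dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 2),                 # logits: real vs. DM-generated
        )

    def forward(self, pixel_values, input_ids, attention_mask):
        img = self.clip.get_image_features(pixel_values=pixel_values)
        txt = self.clip.get_text_features(input_ids=input_ids,
                                          attention_mask=attention_mask)
        feats = torch.cat([img, txt], dim=-1)     # simple concatenation fusion (assumption)
        return self.mlp(feats)


# Illustrative usage: preprocess an image/caption pair with CLIPProcessor, e.g.
#   processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
#   inputs = processor(text=["a photo"], images=pil_image,
#                      return_tensors="pt", padding=True)
#   logits = CLIPMLPDetector()(inputs["pixel_values"], inputs["input_ids"],
#                              inputs["attention_mask"])
```

The loss-landscape flattening mentioned in the abstract could, for instance, be realized with a sharpness-aware optimization step wrapped around the trainable MLP parameters, but the specific loss and training procedure are defined only in the linked repository and paper.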
2024
20th IEEE International Conference on Advanced Video and Signal-Based Surveillance, AVSS 2024
AI images; CLIP; Diffusion models; Robust
04 Publication in conference proceedings::04b Conference paper in a volume


Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1727943
