
AIRD: Adversarial Learning Framework for Image Repurposing Detection

Masi I;
2019

Abstract

Image repurposing is a commonly used method for spreading misinformation on social media and online forums, which involves publishing untampered images with modified metadata to create rumors and further propaganda. While manual verification is possible, given vast amounts of verified knowledge available on the internet, the increasing prevalence and ease of this form of semantic manipulation call for the development of robust automatic ways of assessing the semantic integrity of multimedia data. In this paper, we present a novel method for image repurposing detection that is based on the real-world adversarial interplay between a bad actor who repurposes images with counterfeit metadata and a watchdog who verifies the semantic consistency between images and their accompanying metadata, where both players have access to a reference dataset of verified content, which they can use to achieve their goals. The proposed method exhibits state-of-the-art performance on location-identity, subject-identity and painting-artist verification, showing its efficacy across a diverse set of scenarios.
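The adversarial interplay described in the abstract — a counterfeiter that searches a reference dataset of verified content for the most plausible counterfeit metadata, and a detector (the "watchdog") that scores image–metadata consistency — can be illustrated with a toy training loop on synthetic embeddings. This is only a minimal sketch of the general idea, not the authors' AIRD implementation: the feature design, model, and all names below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (assumption, not from the paper): each "package" pairs an image
# embedding with a metadata embedding. Consistent pairs share a latent vector;
# repurposed pairs do not.
DIM = 8
N_REF = 200
latent = rng.normal(size=(N_REF, DIM))
ref_images = latent + 0.1 * rng.normal(size=(N_REF, DIM))
ref_metadata = latent + 0.1 * rng.normal(size=(N_REF, DIM))

def features(img, meta):
    # Simple consistency features: elementwise product and absolute difference.
    return np.concatenate([img * meta, np.abs(img - meta)], axis=-1)

w = np.zeros(2 * DIM)  # detector: logistic regression on consistency features

def detector_score(img, meta):
    # Probability that the (image, metadata) pair is consistent.
    return 1.0 / (1.0 + np.exp(-(features(img, meta) @ w)))

for step in range(300):
    i = rng.integers(N_REF)
    img, meta = ref_images[i], ref_metadata[i]
    # Counterfeiter: query the reference set for the metadata (other than the
    # true one) that the current detector finds *most* consistent with the image.
    scores = detector_score(img[None, :], ref_metadata)
    scores[i] = -np.inf
    fake_meta = ref_metadata[int(np.argmax(scores))]
    # Detector: one SGD step on the real pair (label 1) and the counterfeit (label 0).
    for x, y in ((features(img, meta), 1.0), (features(img, fake_meta), 0.0)):
        p = 1.0 / (1.0 + np.exp(-(x @ w)))
        w += 0.1 * (y - p) * x

# After training, real pairs should score higher on average than repurposed ones.
real = float(detector_score(ref_images, ref_metadata).mean())
```

Training the detector against the hardest counterfeits the adversary can retrieve, rather than against random mismatches, is the key design choice this sketch tries to convey.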
2019
IEEE/CVF Computer Vision and Pattern Recognition (CVPR)
media forensics, adversarial training, deep learning
04 Conference proceedings publication::04b Conference paper in a volume
AIRD: Adversarial Learning Framework for Image Repurposing Detection / Jaiswal, A; Wu, Y; Abdalmageed, W; Masi, I; Natarajan, P. - (2019). (Paper presented at the IEEE/CVF Computer Vision and Pattern Recognition (CVPR) conference, held in Long Beach).
Files attached to this item
There are no files associated with this item.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/11573/1458934
Warning! The data shown have not been validated by the university.

Citations
  • PMC: n/a
  • Scopus: 16
  • Web of Science: 12