Recurrent Convolutional Strategies for Face Manipulation Detection in Videos / Sabir, E; Cheng, J; Jaiswal, A; Abdalmageed, W; Masi, I; Natarajan, P. - (2019). (Paper presented at the Workshop on Applications of Computer Vision and Pattern Recognition to Media Forensics (CVPR Workshops), held in Long Beach).

Recurrent Convolutional Strategies for Face Manipulation Detection in Videos

Masi I;
2019

Abstract

The spread of misinformation through synthetically generated yet realistic images and videos has become a significant problem, calling for robust manipulation detection methods. While the predominant effort has been directed at detecting face manipulation in still images, less attention has been paid to identifying tampered faces in videos by taking advantage of the temporal information present in the stream. Recurrent convolutional models are a class of deep learning models that have proven effective at exploiting temporal information from image streams across domains. Through extensive experimentation, we distill the best strategy for combining variations of these models with domain-specific face preprocessing techniques, obtaining state-of-the-art performance on publicly available video-based facial manipulation benchmarks. Specifically, we detect Deepfake, Face2Face, and FaceSwap tampered faces in video streams. Evaluation is performed on the recently introduced FaceForensics++ dataset, improving on the previous state of the art by up to 4.55% in accuracy.
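The abstract describes a recurrent-convolutional pipeline: a CNN extracts features from a short sequence of preprocessed (aligned) face crops, and a recurrent layer aggregates those features over time before a real-versus-manipulated decision. The following is a minimal PyTorch sketch of that general approach, not the authors' exact architecture; the ResNet-18 backbone, GRU hidden size, and clip length are illustrative assumptions.

```python
# Minimal sketch of a recurrent-convolutional face-manipulation classifier.
# NOT the authors' exact model: backbone, hidden size, and clip length are assumptions.
import torch
import torch.nn as nn
from torchvision import models

class RecurrentConvDetector(nn.Module):
    def __init__(self, hidden_size=256, num_classes=2):
        super().__init__()
        backbone = models.resnet18(weights=None)   # per-frame CNN feature extractor
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()                # keep the pooled 512-d features
        self.backbone = backbone
        self.rnn = nn.GRU(feat_dim, hidden_size, batch_first=True)  # temporal aggregation
        self.classifier = nn.Linear(hidden_size, num_classes)       # real vs. manipulated

    def forward(self, clip):
        # clip: (batch, time, 3, H, W) -- a short sequence of aligned face crops
        b, t, c, h, w = clip.shape
        feats = self.backbone(clip.view(b * t, c, h, w)).view(b, t, -1)
        _, h_n = self.rnn(feats)                   # final hidden state summarizes the clip
        return self.classifier(h_n[-1])

if __name__ == "__main__":
    model = RecurrentConvDetector()
    dummy = torch.randn(2, 5, 3, 224, 224)         # 2 clips of 5 face crops each
    print(model(dummy).shape)                      # torch.Size([2, 2])
```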
2019
Workshop on Applications of Computer Vision and Pattern Recognition to Media Forensics (CVPR Workshops)
deepfake detection, media forensics, deep learning
04 Publication in conference proceedings::04b Conference paper in a volume
Files attached to this item
No files are associated with this item.


Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1458930
