Two-Branch Recurrent Network for Isolating Deepfakes in Videos

Masi, I.; Killekar, A.; Mascarenhas, R. M.; Gurudatt, S. P.; Abdalmageed, W.
2020

Abstract

The current spike of hyper-realistic faces artificially generated using deepfakes calls for media forensics solutions that are tailored to video streams and work reliably with a low false alarm rate at the video level. We present a method for deepfake detection based on a two-branch network structure that isolates digitally manipulated faces by learning to amplify artifacts while suppressing the high-level face content. Unlike current methods that extract spatial frequencies as a preprocessing step, we propose a two-branch structure: one branch propagates the original information, while the other branch suppresses the face content yet amplifies multi-band frequencies using a Laplacian of Gaussian (LoG) as a bottleneck layer. To better isolate manipulated faces, we derive a novel cost function that, unlike regular classification, compresses the variability of natural faces and pushes away the unrealistic facial samples in the feature space. Our two novel components show promising results on the FaceForensics++, Celeb-DF, and Facebook’s DFDC preview benchmarks, when compared to prior work. We then offer a full, detailed ablation study of our network architecture and cost function. Finally, although the bar is still high to reach very strong figures at a very low false alarm rate, our study shows that we can achieve good video-level performance when cross-testing in terms of video-level AUC.
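The abstract describes two components: a frequency branch built around a Laplacian-of-Gaussian (LoG) bottleneck that suppresses face content while amplifying multi-band detail, and a loss that compresses real-face features and pushes manipulated samples away. The sketch below is a minimal illustration of those two ideas in PyTorch, not the authors' implementation; the sigma values, kernel size, fusion by channel concatenation, the stand-in encoder, and the exact loss form are all assumptions made for illustration.

# Minimal illustrative sketch (assumption-laden, not the authors' code): a fixed
# multi-band Laplacian-of-Gaussian (LoG) bottleneck, a toy two-branch front end,
# and a center-margin style loss. Sigmas, kernel size, concat fusion, the
# stand-in encoder, and the loss form are hypothetical choices.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F


def log_kernel(sigma: float, ksize: int) -> torch.Tensor:
    # 2-D Laplacian-of-Gaussian kernel; zero mean -> no response to flat
    # (low-frequency) face content, band-pass response to fine detail.
    ax = torch.arange(ksize, dtype=torch.float32) - (ksize - 1) / 2.0
    yy, xx = torch.meshgrid(ax, ax, indexing="ij")
    r2 = xx ** 2 + yy ** 2
    k = -(1.0 / (math.pi * sigma ** 4)) * (1.0 - r2 / (2 * sigma ** 2)) \
        * torch.exp(-r2 / (2 * sigma ** 2))
    return k - k.mean()


class LoGBottleneck(nn.Module):
    # Fixed (non-learned) depthwise filter bank: one LoG response per
    # (input channel, sigma) pair, i.e. several frequency bands per channel.
    def __init__(self, channels: int = 3, sigmas=(1.0, 2.0, 4.0), ksize: int = 15):
        super().__init__()
        bank = torch.stack([log_kernel(s, ksize) for s in sigmas])    # (B, k, k)
        weight = bank.repeat(channels, 1, 1).unsqueeze(1)             # (C*B, 1, k, k)
        self.register_buffer("weight", weight)
        self.channels, self.pad = channels, ksize // 2

    def forward(self, x):                                             # x: (N, C, H, W)
        return F.conv2d(x, self.weight, padding=self.pad, groups=self.channels)


class TwoBranchSketch(nn.Module):
    # Toy two-branch front end: the raw RGB branch keeps the original signal,
    # the LoG branch keeps only multi-band frequency content; fused by concat.
    def __init__(self, channels: int = 3, n_bands: int = 3, emb_dim: int = 64):
        super().__init__()
        self.log = LoGBottleneck(channels)
        self.encoder = nn.Sequential(                                 # stand-in backbone
            nn.Conv2d(channels + channels * n_bands, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, emb_dim),
        )

    def forward(self, x):
        return self.encoder(torch.cat([x, self.log(x)], dim=1))       # (N, emb_dim)


def isolation_loss(emb, labels, center, margin=1.0):
    # Illustrative center-margin objective (assumed form): compress real faces
    # (label 0) around a center, push fakes (label 1) at least `margin` away.
    # Assumes the batch contains both classes.
    d = (emb - center).norm(dim=1)
    real = labels == 0
    return (d[real] ** 2).mean() + F.relu(margin - d[~real]).pow(2).mean()

A forward pass on a batch of frames of shape (N, 3, H, W) would then be emb = TwoBranchSketch()(frames), followed by isolation_loss(emb, labels, center), where center could be a learned parameter or a running mean of real-face embeddings; these usage details are likewise illustrative assumptions.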
2020
16th European Conference on Computer Vision, ECCV 2020
Deepfake detection; Loss function; Two-branch recurrent net
04 Publication in conference proceedings::04b Conference paper in a volume
Two-Branch Recurrent Network for Isolating Deepfakes in Videos / Masi, I.; Killekar, A.; Mascarenhas, R. M.; Gurudatt, S. P.; Abdalmageed, W. - 12352 (2020), pp. 667-684. (Paper presented at the 16th European Conference on Computer Vision, ECCV 2020, held in Scotland (Virtual)) [10.1007/978-3-030-58571-6_39].

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1555531
Warning: the displayed data have not been validated by the university.

Citations
  • Scopus: 228