
Why Don't You Speak?: A Smartphone Application to Engage Museum Visitors Through Deepfakes Creation / Zaramella, Matteo; Amerini, Irene; Russo, Paolo. - (2023), pp. 29-37. (Intervento presentato al convegno 5th Workshop on the analySis, Understanding and proMotion of heritAge Contents, SUMAC 2023 tenutosi a Ottawa ON; Canada) [10.1145/3607542.3617359].

Why Don't You Speak?: A Smartphone Application to Engage Museum Visitors Through Deepfakes Creation

Zaramella, Matteo; Amerini, Irene; Russo, Paolo
2023

Abstract

In this paper, we present a gamification-based application for the cultural heritage sector that aims to enhance the learning and enjoyment of museum artworks. The application encourages users to experience history and culture first-hand, based on the idea that the artworks in a museum can tell their own story, thereby increasing visitor engagement with museums while also providing information about the artworks themselves. Specifically, we propose an application that allows museum visitors to create a deepfake video of a sculpture directly on their smartphone. Starting from a few live frames of a statue, the application quickly generates a deepfake video in which the statue talks, moving its lips in synchronization with a text or audio file. The application exploits an underlying generative adversarial network and has been specialized on a custom statue dataset collected for this purpose. Experiments show that the generated videos exhibit high realism in the vast majority of cases, and demonstrate the importance of a reliable statue face detection algorithm. The final aim of our application is to make the museum experience more immersive and engaging, potentially attracting more people to explore classical history and culture in depth.
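The abstract describes a two-stage pipeline: first detect the statue's face in a few live frames, then drive the mouth region in sync with a text or audio track using a generative model. The paper's actual models and interfaces are not reproduced here; the following is a minimal pure-Python sketch of that control flow only, where `detect_face` and `lipsync_frames` are hypothetical stand-ins for the real statue face detector and the GAN-based lip-sync generator.

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Box:
    """Face bounding box in pixel coordinates."""
    x: int
    y: int
    w: int
    h: int


def detect_face(frame: List[List[int]]) -> Optional[Box]:
    """Hypothetical stand-in for the statue face detector.

    For illustration it simply assumes any non-empty frame contains a
    centred face covering the middle half of the image; a real detector
    would localize the statue's facial features.
    """
    if not frame or not frame[0]:
        return None
    h, w = len(frame), len(frame[0])
    return Box(w // 4, h // 4, w // 2, h // 2)


def lipsync_frames(face: Box, audio_amplitudes: List[float]) -> List[float]:
    """Hypothetical stand-in for the generative lip-sync model.

    Maps each audio amplitude sample to a mouth-opening ratio in [0, 1]
    for the detected face region; the real system would synthesize full
    video frames instead.
    """
    return [min(1.0, max(0.0, a)) for a in audio_amplitudes]


def make_deepfake(frames, audio_amplitudes):
    """Pipeline sketched from the abstract: detect a face, then lip-sync.

    Raises ValueError when no face is found, mirroring the abstract's
    point that reliable statue face detection is a prerequisite.
    """
    face = next((b for b in map(detect_face, frames) if b is not None), None)
    if face is None:
        raise ValueError("no statue face found in the input frames")
    return face, lipsync_frames(face, audio_amplitudes)
```

As a usage sketch, feeding a single 64x64 frame and three amplitude samples yields the detected box and one mouth-opening value per sample; amplitudes above 1.0 are clamped.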
5th Workshop on the analySis, Understanding and proMotion of heritAge Contents, SUMAC 2023
museum user experience; deepfake; face detection; generative adversarial network
04 Conference proceedings publication::04b Conference paper in a volume
Files attached to this product
File: Zaramella_Why_2023.pdf (Adobe PDF, 2.39 MB), access restricted to archive administrators; contact the author for a copy
Type: Publisher's version (published with the publisher's layout)
License: All rights reserved

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1693647
Citations
  • PMC: not available
  • Scopus: 0
  • Web of Science: 0