Detecting and Understanding Harmful Memes: A Survey / Sharma, Shivam; Alam, Firoj; Akhtar, Md. Shad; Dimitrov, Dimitar; Da San Martino, Giovanni; Firooz, Hamed; Halevy, Alon; Silvestri, Fabrizio; Nakov, Preslav; Chakraborty, Tanmoy. - (2022), pp. 5597-5606. (Paper presented at the International Joint Conference on Artificial Intelligence, held in Vienna) [10.24963/ijcai.2022/781].

Detecting and Understanding Harmful Memes: A Survey

Silvestri, Fabrizio
2022

Abstract

The automatic identification of harmful content online is of major concern for social media platforms, policymakers, and society. Researchers have studied textual, visual, and audio content, but typically in isolation. Yet, harmful content often combines multiple modalities, as in the case of memes. With this in mind, here we offer a comprehensive survey with a focus on harmful memes. Based on a systematic analysis of recent literature, we first propose a new typology of harmful memes, and then we highlight and summarize the relevant state of the art. One interesting finding is that many types of harmful memes remain largely unstudied, e.g., those featuring self-harm and extremism, partly due to the lack of suitable datasets. We further find that existing datasets mostly capture multi-class scenarios, which do not cover the full affective spectrum that memes can represent. Another observation is that memes can propagate globally through repackaging in different languages and that they can also be multilingual, blending different cultures. We conclude by highlighting several challenges related to multimodal semiotics, technological constraints, and non-trivial social engagement, and we present several open-ended aspects such as delineating online harm and empirically examining related frameworks and assistive interventions, which we believe will motivate and drive future research.
2022
International Joint Conference on Artificial Intelligence
harmful meme; deep learning
04 Publication in conference proceedings::04b Conference paper in a volume
Files attached to this item
Sharma_Detecting_2022.pdf

Open access

Note: https://www.ijcai.org/proceedings/2022/0781.pdf
Type: Publisher's version (published with the publisher's layout)
License: Creative Commons
Size: 4.85 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1678447
Citations
  • PMC: not available
  • Scopus: 40
  • Web of Science (ISI): 16