Modeling, Detecting, and Mitigating Harmful Phenomena in Social Platforms / Cinus, Federico. - (2025 May 21).
Modeling, Detecting, and Mitigating Harmful Phenomena in Social Platforms
Cinus, Federico
21/05/2025
Abstract
The rise of social media has democratized access to information and fostered global connectivity, reshaping public discourse at an unprecedented scale. However, social media platforms have also become environments where polarization, echo chambers, and coordinated disinformation campaigns emerge, particularly around controversial topics. These phenomena, which can have significant societal and political implications, raise fundamental questions about how opinions form, spread, and evolve in digital spaces. This thesis investigates these complex dynamics across three primary dimensions: modeling, detecting, and mitigating harmful phenomena in social media.

First, we develop computational models of opinion dynamics to analyze the interplay between recommender systems and social polarization. We explore how algorithmic content curation influences ideological communities, potentially reinforcing existing opinions and shaping discourse. By leveraging Monte Carlo simulations and network-based opinion models, we assess how different recommendation strategies interact with underlying social structures, revealing that their effects depend strongly on initial homophily within the network.

Next, we introduce novel detection methodologies to identify echo chambers and characterize ideological communities. Our approach combines network analysis with socio-demographic profiling, providing deeper insights into the formation and dynamics of fringe communities. By adopting a multidimensional perspective on user opinions, we reconcile seemingly contradictory findings in polarization research, demonstrating how homophilic and heterophilic interactions coexist in online discourse. Additionally, we develop techniques to detect coordinated inauthentic activity, unveiling how cross-platform manipulation tactics shape political conversations and public perception.

Finally, we explore intervention strategies aimed at mitigating polarization and fostering exposure to diverse viewpoints.
We propose optimization-based approaches to recalibrate recommendation systems, balancing relevance and ideological diversity in social feeds while preserving user engagement. Our work extends traditional opinion dynamics models by incorporating algorithmic interventions that promote cross-cutting exposure. Additionally, we address the challenge of reducing polarization when innate opinions remain partially unknown, presenting methodologies for optimizing interventions under uncertainty.

Looking ahead, this thesis anticipates the increasing role of AI-driven agents in shaping digital interactions. The emergence of AI-powered bots introduces new challenges for content moderation, automated influence campaigns, and the governance of digital discourse. We discuss the implications of these advancements in the context of AI-mediated human behavior, the evolution of machine behavior as a research field, and the broader societal impact of AI-driven content generation and moderation. As AI agents become more sophisticated, the need for robust detection, mitigation, and regulatory frameworks will become even more pressing.

By providing a systematic framework for modeling, detecting, and mitigating harmful phenomena in social media, this thesis contributes to a deeper understanding of the evolving information ecosystem and offers actionable insights for researchers, policymakers, and platform designers seeking to foster healthier digital environments.
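To give a flavor of the network-based opinion models mentioned above, the sketch below implements the classical Friedkin-Johnsen dynamics, a standard baseline in this literature. It is an illustrative toy only, not the thesis's actual models: the four-node network, the innate opinions, and the polarization measure (variance of expressed opinions) are hypothetical choices made for this example.

```python
import numpy as np

def friedkin_johnsen(adj, innate, n_steps=200):
    """Friedkin-Johnsen dynamics: each agent repeatedly averages its
    innate opinion with the expressed opinions of its neighbors,
    z_i <- (s_i + sum_j w_ij * z_j) / (1 + sum_j w_ij)."""
    z = innate.astype(float).copy()
    deg = adj.sum(axis=1)  # total neighbor weight per node
    for _ in range(n_steps):
        z = (innate + adj @ z) / (1.0 + deg)
    return z

def polarization(z):
    """A simple polarization measure: variance of expressed opinions."""
    return float(np.var(z))

# Toy network: two homophilic camps joined by one cross-cutting edge
# (a path 0-1-2-3), with innate opinions at the two poles.
adj = np.array([
    [0, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [0, 0, 1, 0],
], dtype=float)
innate = np.array([-1.0, -1.0, 1.0, 1.0])

expressed = friedkin_johnsen(adj, innate)
print(expressed)                # equilibrium opinions, pulled toward the center
print(polarization(expressed))  # lower than np.var(innate) = 1.0
```

Because every update averages an agent's innate opinion with its neighbors', the equilibrium opinions are moderated relative to the innate ones; interventions of the kind studied in the mitigation chapters can be thought of as editing `adj` (e.g., adding cross-cutting edges) to reduce the resulting polarization.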


