
Deep learning for assisted and interpretable recognition of neurodevelopmental disorders / Colonnese, Federica. - (2026 Jan 20).

Deep learning for assisted and interpretable recognition of neurodevelopmental disorders

COLONNESE, FEDERICA
20/01/2026

Abstract

Neurodevelopmental disorders (NDDs) are complex, heterogeneous conditions whose diagnosis still relies largely on subjective behavioral evaluation, leading to misdiagnosis and delays. This PhD thesis explores how Artificial Intelligence (AI), and in particular Deep Learning (DL) combined with explainability techniques, can contribute to a more objective, transparent, and early identification of these disorders through the analysis of behavioral and neurophysiological data. The research focuses on Autism Spectrum Disorder (ASD) and Attention Deficit Hyperactivity Disorder (ADHD) and adopts a multimodal perspective, integrating eye gaze, gait analysis, and electroencephalographic (EEG) signals: three modalities that reflect clinically validated biomarkers of neurodevelopmental functioning. Each domain is investigated through a dedicated methodological framework that couples high-performing architectures with model interpretability, targeting both predictive accuracy and clinical trustworthiness. For gaze-based ASD detection, bimodal DL architectures and Graph Attention Networks fuse stimulus features with scanpaths, achieving state-of-the-art accuracy while providing interpretable visual attributions aligned with established clinical behavioral evidence. For gait, convolutional and graph architectures over 3D joint trajectories capture spatio-temporal dependencies and reveal distinctive inter-joint coordination patterns consistent with clinical reports. For EEG, a hyperdimensional-computing pipeline enables data-efficient ADHD classification on small cohorts, with prototype-level interpretability and robustness to noise. Finally, an EEG-conditioned diffusion framework reconstructs visual representations coherent with the eliciting stimuli, outperforming adversarial baselines in perceptual quality and semantic agreement.
Across all studies, explainability techniques were systematically employed to probe model reasoning, confirm the relevance of the extracted features, and enhance clinical interpretability. The results suggest that objective, multimodal, and interpretable AI is feasible for NDD assessment, while external validation, model calibration, and prospective evaluation remain prerequisites for clinical integration. The work thus contributes, both methodologically and ethically, to a more transparent, explainable, and equitable integration of AI into neurodevelopmental research and healthcare.
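The hyperdimensional-computing pipeline mentioned in the abstract can be illustrated with a minimal sketch; this is not the thesis implementation, and the dimensionality, feature count, and synthetic data below are all illustrative assumptions. The idea is standard HDC classification: a random bipolar projection encodes each feature vector as a high-dimensional hypervector, the hypervectors of each class are bundled (summed and binarized) into one prototype per class, and a sample is assigned to the class whose prototype it is most similar to. The per-class prototype is what supports prototype-level interpretability.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000          # hypervector dimensionality (illustrative)
n_features = 19     # e.g. one feature per EEG channel (illustrative)

# Fixed random bipolar projection: maps a feature vector to a D-dim hypervector.
projection = rng.choice([-1.0, 1.0], size=(n_features, D))

def encode(x):
    """Project features into hyperspace and binarize to a bipolar hypervector."""
    return np.sign(x @ projection)

def train_prototypes(X, y):
    """Bundle (sum, then binarize) each class's hypervectors into a prototype."""
    return {c: np.sign(encode(X[y == c]).sum(axis=0)) for c in np.unique(y)}

def classify(x, prototypes):
    """Assign the class whose prototype has the highest dot-product similarity."""
    h = encode(x)
    return max(prototypes, key=lambda c: np.dot(h, prototypes[c]))

# Synthetic two-class data standing in for, e.g., EEG band-power features.
X = np.vstack([rng.normal(0.0, 1.0, size=(40, n_features)),
               rng.normal(1.5, 1.0, size=(40, n_features))])
y = np.array([0] * 40 + [1] * 40)

prototypes = train_prototypes(X, y)
accuracy = np.mean([classify(x, prototypes) == c for x, c in zip(X, y)])
```

Because training reduces to summing hypervectors, this scheme needs no gradient descent and degrades gracefully on small, noisy cohorts, which is consistent with the data-efficiency and noise-robustness claims in the abstract.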
Files attached to this record

File: Tesi_dottorato_Colonnese.pdf
Access: open access
Note: full thesis
Type: doctoral thesis
License: Creative Commons
Size: 38.54 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1761820