Self-Supervised Transformer-Based Foundation Model for Functional Magnetic Resonance Imaging / Ferrante, M.; Iervese, S.; Astolfi, L.; Toschi, N. - 2025:(2025). (2025 47th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Copenhagen, Denmark) [10.1109/EMBC58623.2025.11254125].

Self-Supervised Transformer-Based Foundation Model for Functional Magnetic Resonance Imaging

Ferrante M.; Iervese S.; Astolfi L.; Toschi N.
2025

Abstract

Functional Magnetic Resonance Imaging is a powerful tool for studying brain function but presents challenges due to high dimensionality and variability. We propose a self-supervised transformer-based foundation model using a masked autoencoder to learn generalizable representations of fMRI time series. Trained on the Human Connectome Project (HCP) S1200 dataset, the model is evaluated on cognitive task classification and neuroticism prediction using linear, MLP, and ConvLSTM probes under zero-shot and fine-tuning settings. Our model outperforms training from scratch, exceeding 90% accuracy in cognitive task classification and improving correlations in neuroticism prediction. Architectural enhancements, including contrastive loss and spatiotemporal attention, further refine representations. These results highlight the potential of self-supervised transformers for fMRI analysis, enabling scalable, generalizable models for neuroscience and clinical applications.
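The abstract describes masked-autoencoder pretraining on fMRI time series: a fraction of the signal is hidden and the model is trained to reconstruct it. The sketch below illustrates that general pretext task on a single 1-D time series, using pure Python; the patch length, mask ratio, and function names are illustrative assumptions, not the authors' implementation.

```python
import random

def mask_patches(series, patch_len=10, mask_ratio=0.75, seed=0):
    """Split a 1-D time series into non-overlapping patches and
    randomly select a fraction of them to mask (MAE-style pretext task).
    Returns (patches, indices_of_masked_patches)."""
    patches = [series[i:i + patch_len]
               for i in range(0, len(series) - patch_len + 1, patch_len)]
    rng = random.Random(seed)
    n_masked = int(round(mask_ratio * len(patches)))
    masked = sorted(rng.sample(range(len(patches)), n_masked))
    return patches, masked

def masked_mse(pred, target, masked):
    """Reconstruction loss computed only on the masked patches,
    as in masked-autoencoder training."""
    errs = [(p - t) ** 2
            for i in masked
            for p, t in zip(pred[i], target[i])]
    return sum(errs) / len(errs)

# Toy example: 100 timepoints -> 10 patches of length 10, 75% masked.
ts = [float(t % 7) for t in range(100)]
patches, masked = mask_patches(ts)
# A trivial stand-in "decoder" that predicts each patch's mean value:
pred = [[sum(p) / len(p)] * len(p) for p in patches]
loss = masked_mse(pred, patches, masked)
```

In the actual model the decoder is a transformer and the input is a multi-region (parcellated) time series, but the training signal is the same: reconstruction error restricted to the masked portion of the input.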
2025 47th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)
Self-Supervised Learning; Transformer-based Foundation Models; Masked Autoencoders (MAE); Functional MRI (fMRI); Cognitive State Decoding
04 Conference proceedings publication::04b Conference paper in volume
Self-Supervised Transformer-Based Foundation Model for functional Magnetic resonance Imaging / Ferrante, M.; Iervese, S.; Astolfi, L.; Toschi, N.. - 2025:(2025). ( 2025 47th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC) Copenhagen; Denmark ) [10.1109/EMBC58623.2025.11254125].
Files attached to this item
Ferrante_Self-supervised_2025.pdf
Access: archive administrators only
Type: Publisher's version (published with the publisher's layout)
License: All rights reserved
Size: 1.36 MB
Format: Adobe PDF
Contact the author

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/11573/1760417
Citations
  • PMC: not available
  • Scopus: 0
  • Web of Science: not available