MOTUM: Motion Online Tracking Under MRI / Tani, Michelangelo; Bencivenga, Federica; Vyas, Krishnendu; Giove, Federico; Gazzitano, Steve; Pitzalis, Sabrina; Galati, Gaspare. - (2025). (Presentation at the MNESYS III ANNUAL MEETING conference, held in Genoa, Italy).
MOTUM: Motion Online Tracking Under MRI
Tani, Michelangelo (co-first); Bencivenga, Federica (co-first); Vyas, Krishnendu; Giove, Federico; Pitzalis, Sabrina; Galati, Gaspare (last)
2025
Abstract
INTRODUCTION
Neuroimaging research on human sensorimotor interactions is limited to oversimplified, impoverished experimental setups, with static environmental stimuli and absent or constrained visual feedback of the limbs. Here, we developed and tested MOTUM (Motion Online Tracking Under MRI), a hardware and software setup for fMRI that combines virtual reality (VR) and motion capture to track body movements and reproduce them in real time in a VR environment. We used the system during a reach-to-grasp task, analysed potential movement-related artefacts and examined the parametric modulation of task-evoked activations by kinematic indices.

METHODS
MOTUM is composed of an MR-compatible glove and a motion capture system of three cameras mounted on the front wall of the MR room. Data are delivered to a control PC to reconstruct a 3D skeleton representation, which is then fed into Unity and used to animate a humanoid avatar through an ad-hoc software package. Visual output is presented through a binocular headset. We tested the system in a 3T Siemens Prisma scanner on 7 right-handed healthy volunteers, who were asked to perform a precision or a power grip on a custom 3D object. We collected frame-by-frame kinematic data and extracted kinematic indices including movement duration, velocity and grip aperture. fMRI data were analysed using fMRIPrep and SPM12. We quantified head movements for each volume by computing the framewise displacement (FD) and estimated its correlation with the volume-wise amount of arm motion. Movement duration was used as a modulatory parameter in the GLM to adjust for its impact on task-evoked activity. We also assessed how movement-evoked activity was parametrically modulated by movement velocity and by the grip aperture between the thumb and index fingers.

RESULTS
The system successfully tracked and streamed the reach-to-grasp kinematics of all participants in real time, with occasional sub-second losses of tracking, which were less frequent when the hands were closer to the end of the scanner bore (e.g., for taller volunteers). Head movements remained within commonly accepted limits (i.e., FD < 0.9 mm). FD did not correlate with framewise movement of the arm (mean r = 0.05, sd = 0.03). Average velocity parametrically modulated activity in left primary motor and premotor (PMd, PMv) areas, with variable impact on activity in the bilateral intraparietal sulcus and superior parietal lobule. In most subjects, average grip aperture parametrically modulated movement-evoked activity in the left anterior intraparietal sulcus.

CONCLUSIONS
The MOTUM system provides a high-quality immersive experience without introducing evident movement-related artefacts and is a promising tool for studying human visuomotor functions. It provides a shared framework for the study of kinematic invariants across different motor tasks, which strengthens the generalizability of their neural underpinnings. Although we tested it during hand movements, it can be easily extended to other body parts (e.g., the lower limbs) by adding more cameras, which would also improve the quality of hand tracking. This paves the way for a wide range of real-life actions to be performed during fMRI scans, with an extensive repertoire of possible virtual (realistic or not) scenarios, potentially determining a breakthrough in the research field of sensorimotor integration and beyond.
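To make the kinematic and motion-artefact analyses concrete, the following is a minimal Python sketch of how indices such as movement duration, mean velocity and grip aperture could be derived from frame-by-frame hand keypoints, and how a Power-style per-volume framewise displacement (computed from the six realignment parameters) could be correlated with per-volume arm motion. The marker names, the 20 mm/s velocity threshold, the 50 mm rotation radius and the function names are illustrative assumptions, not the authors' exact pipeline.

# Illustrative sketch (assumed, not the authors' pipeline): simple kinematic
# indices from frame-by-frame hand keypoints and an FD vs. arm-motion check.
import numpy as np

def kinematic_indices(wrist, thumb_tip, index_tip, fs, vel_thresh=0.02):
    """wrist, thumb_tip, index_tip: (n_frames, 3) positions in metres; fs: capture rate (Hz)."""
    speed = np.linalg.norm(np.gradient(wrist, 1.0 / fs, axis=0), axis=1)   # wrist speed (m/s)
    moving = speed > vel_thresh                                            # frames above 20 mm/s
    duration = moving.sum() / fs                                           # movement duration (s)
    mean_velocity = speed[moving].mean() if moving.any() else 0.0          # mean speed while moving
    grip_aperture = np.linalg.norm(thumb_tip - index_tip, axis=1)          # thumb-index distance (m)
    return duration, mean_velocity, grip_aperture.mean()                   # average grip aperture

def framewise_displacement(realignment, radius=50.0):
    """Power-style FD from realignment parameters:
    3 translations (mm) + 3 rotations (rad), shape (n_volumes, 6)."""
    p = realignment.astype(float).copy()
    p[:, 3:] *= radius                        # rotations -> arc length on a 50 mm sphere
    return np.r_[0.0, np.abs(np.diff(p, axis=0)).sum(axis=1)]

def fd_vs_arm_motion(realignment, wrist, fs, tr):
    """Pearson correlation between per-volume FD and per-volume arm (wrist) path length."""
    fd = framewise_displacement(realignment)
    step = np.linalg.norm(np.diff(wrist, axis=0), axis=1)    # per-frame wrist displacement
    frames_per_vol = int(round(fs * tr))
    n_vol = min(len(fd), len(step) // frames_per_vol)
    arm = step[:n_vol * frames_per_vol].reshape(n_vol, frames_per_vol).sum(axis=1)
    return np.corrcoef(fd[:n_vol], arm)[0, 1]

Mean-centred values of indices of this kind can then be entered as parametric modulators of the task regressor in the SPM12 first-level GLM, as described in the METHODS.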