MOTUM: Motion Online Tracking Under MRI system / Tani, Michelangelo; Bencivenga, Federica; Vyas, Krishnendu; Giove, Federico; Gazzitano, Steve; Pitzalis, Sabrina; Galati, Gaspare. - (2025), pp. 3318-3319. (Organization for Human Brain Mapping (OHBM), Brisbane, Australia) [10.5281/zenodo.15641971].
MOTUM: Motion Online Tracking Under MRI system
Tani, Michelangelo (co-first author); Bencivenga, Federica (co-first author); Vyas, Krishnendu; Giove, Federico; Gazzitano, Steve; Pitzalis, Sabrina; Galati, Gaspare (last author)
2025
Abstract
Neuroimaging research on sensorimotor interactions faces theoretical and methodological challenges. fMRI studies on the topic are usually restricted to oversimplified, impoverished experimental setups, with static environmental stimuli and absent or constrained visual feedback of the limbs. Here, we developed and tested MOTUM (Motion Online Tracking Under MRI), a hardware and software setup for fMRI that combines virtual reality (VR) and motion capture to track body movements and reproduce them in real time in a VR environment. MOTUM allows the generation of dynamically responsive experimental scenarios that capture the complexity of body-environment interactions with fine precision. As a proof of concept, we present an investigation of visually guided reach-to-grasp movements, disentangling the effects of visual feedback of one's own movements.

MOTUM comprises two acquisition devices (Fig. 1): an MR-compatible glove (Data Glove Ultra, 5DT, Pretoria, South Africa) with 14 sensors tracking flexion and extension of the right-hand fingers, and a motion capture system of three MR-compatible cameras (Oqus, Qualisys, Göteborg, Sweden) mounted on the front wall of the MR room, used to track limb movements (in this study, rotations and translations of the right forearm and wrist). Data are fed into a control PC running Qualisys Track Manager, which reconstructs the camera images into a 3D skeleton representation; this is then streamed into Unity (Unity Technologies, San Francisco, US) and used to animate a first-person humanoid avatar via an ad hoc, publicly available software package (a minimal sketch of such a streaming client is given below). Visual output is presented through a binocular MR-compatible headset (Visual System HD, NordicNeuroLab, Bergen, Norway), providing an immersive experience.

We tested the system in a Siemens Prisma scanner on seven right-handed healthy volunteers, who were asked to grasp narrow or wide parts of a custom 3D object with a precision or power grip. In separate blocks, participants (a) grasped without visual feedback, (b) grasped with online visual feedback of their hand movements through MOTUM, (c) observed replays of movements recorded in previous trials, or (d) observed the static scene (baseline). Data were analyzed using fMRIPrep and SPM12. We quantified head movements for each volume by computing the framewise displacement (FD) and estimated the trial-by-trial amount of arm motion; both estimates were entered as modulatory regressors in the GLM to control for motion artifacts (generic sketches of both computations are given below).

The system successfully tracked and streamed the reach-to-grasp kinematics of all participants in real time, with occasional sub-second losses of tracking that were less frequent when the hands were closer to the end of the scanner bore (e.g., for taller volunteers). Head movements remained within commonly accepted limits (FD < 0.9 mm). The main effect of movement showed the expected clusters in contralateral M1, dPMC, and vPMC, and in bilateral aIPS, SMA, S1, SPL, mCC, and cerebellum. The main effect of movement observation showed activity in visual areas such as hMT+ and SOG, as well as in aIPS, lateral SPL, and cerebellum bilaterally. The interaction yielded activity in dorsal secondary visual areas in most subjects and in the M1-S1 hand area in four of them.
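The real-time chain described above (Qualisys cameras → Qualisys Track Manager → Unity avatar) can be illustrated with a minimal client sketch. The authors' ad hoc Unity package is publicly available and is not reproduced here; the sketch below instead uses Qualisys's open-source Python SDK (the qtm package) to subscribe to streamed 3D marker data and forward it to a renderer over UDP. The QTM host address, UDP endpoint, and JSON payload format are illustrative assumptions, not part of MOTUM.

```python
# Minimal sketch: subscribe to real-time 3D marker data from Qualisys
# Track Manager (QTM) and forward it to a renderer (e.g., Unity) over UDP.
# Assumes the open-source Qualisys Python SDK (pip install qtm); the QTM
# host, UDP endpoint, and JSON payload format are illustrative, not MOTUM's.
import asyncio
import json
import socket

import qtm

QTM_HOST = "192.168.1.10"          # hypothetical address of the QTM control PC
RENDER_ADDR = ("127.0.0.1", 9001)  # hypothetical UDP endpoint of the renderer

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def on_packet(packet):
    """Called for every streamed frame; extracts 3D markers and forwards them."""
    header, markers = packet.get_3d_markers()
    payload = {
        "frame": packet.framenumber,
        "markers": [[m.x, m.y, m.z] for m in markers],  # millimetres in QTM
    }
    sock.sendto(json.dumps(payload).encode(), RENDER_ADDR)

async def main():
    connection = await qtm.connect(QTM_HOST)
    if connection is None:
        raise RuntimeError("Could not reach QTM at " + QTM_HOST)
    # Stream only the 3D marker component for every captured frame.
    await connection.stream_frames(components=["3d"], on_packet=on_packet)
    await asyncio.Event().wait()  # keep streaming until interrupted

if __name__ == "__main__":
    asyncio.run(main())
```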
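The framewise displacement measure mentioned in the analysis is standard and straightforward to reproduce. The sketch below assumes six rigid-body realignment parameters (three translations in mm, three rotations in radians) and the Power et al. (2012) formulation used by fMRIPrep, in which rotational displacements are projected onto a 50 mm sphere approximating the head; it is a generic illustration, not the authors' exact code.

```python
# Minimal sketch: framewise displacement (FD) from six rigid-body
# realignment parameters (3 translations in mm, 3 rotations in rad),
# following the Power et al. (2012) formulation used by fMRIPrep.
# Generic illustration, not the authors' analysis code.
import numpy as np

def framewise_displacement(params, head_radius_mm=50.0):
    """params: (n_volumes, 6) array -> FD per volume (first volume = 0)."""
    motion = np.asarray(params, dtype=float).copy()
    # Convert rotations (radians) to arc length on the head sphere,
    # so all six columns are expressed in millimetres.
    motion[:, 3:] *= head_radius_mm
    deltas = np.abs(np.diff(motion, axis=0))       # volume-to-volume change
    return np.concatenate([[0.0], deltas.sum(axis=1)])

# Usage: flag volumes above the threshold cited in the abstract.
rng = np.random.default_rng(0)
fake_params = rng.normal(scale=0.05, size=(200, 6))  # toy data, not real motion
fd = framewise_displacement(fake_params)
print("max FD = %.3f mm; volumes over 0.9 mm: %d" % (fd.max(), (fd > 0.9).sum()))
```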
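The trial-by-trial arm-motion estimate can only be sketched generically, since the abstract does not specify the summary statistic; one simple, assumed choice is the 3D path length of a wrist marker within each trial window, mean-centred before entering the GLM as a parametric modulator.

```python
# Minimal sketch: per-trial arm motion as a parametric modulator.
# Assumes one plausible summary (3D path length of a wrist marker per
# trial window); MOTUM's actual estimate may differ.
import numpy as np

def trial_path_lengths(wrist_xyz, trial_slices):
    """wrist_xyz: (n_samples, 3) marker trajectory in mm;
    trial_slices: list of slice objects delimiting each trial window."""
    lengths = []
    for s in trial_slices:
        steps = np.diff(wrist_xyz[s], axis=0)            # sample-to-sample motion
        lengths.append(np.linalg.norm(steps, axis=1).sum())
    lengths = np.asarray(lengths)
    return lengths - lengths.mean()  # mean-centre before use as a GLM modulator
```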
The MOTUM system provides a high-quality immersive experience without introducing evident movement-related artifacts, and is a promising tool for studying human visuomotor functions. Although we tested it during hand movements, it can easily be extended to other body parts (e.g., the lower limbs) by adding cameras, which would also improve the quality of hand tracking. This paves the way for a wide range of real-life actions to be performed during fMRI scans, with an extensive repertoire of possible virtual (realistic or non-realistic) scenarios, potentially enabling a breakthrough in the field of sensorimotor integration and beyond.


