Multiplexed and flexible neural coding in sensory, parietal, and frontal cortices during goal-directed virtual navigation

Noel, Jean-Paul; Balzani, Edoardo; Avila, Eric; Lakshminarasimhan, Kaushik; Bruni, Stefania; Alefantis, Panos; Savin, Cristina; Angelaki, Dora (2022). DOI: 10.21203/rs.3.rs-1025042/v1
Abstract
We do not understand how neural nodes operate and coordinate within the recurrent action-perception loops that characterize naturalistic self-environment interactions. Here, we record single-unit spiking activity and local field potentials (LFPs) simultaneously from the dorsomedial superior temporal area (MSTd), parietal area 7a, and dorsolateral prefrontal cortex (dlPFC) as monkeys navigate in virtual reality to “catch fireflies”. This task requires animals to actively sample from a closed-loop visual environment while concurrently computing continuous latent variables: (i) the distance and angle they have travelled (i.e., path integration) and (ii) the remaining distance and angle to the memorized firefly location (i.e., a hidden spatial goal). We observed patterned mixed selectivity, with prefrontal cortex most prominently coding latent variables, parietal cortex coding sensorimotor variables, and MSTd most often coding eye movements. Interestingly, however, even the area traditionally considered sensory (i.e., MSTd) tracked latent variables, demonstrating vector-coding of hidden spatial goals and path integration. Further, global encoding profiles and unit-to-unit coupling (i.e., noise correlations) suggested a functional subnetwork composed of MSTd and dlPFC, rather than linking either of these areas with 7a, as anatomy would suggest. We show that the greater the unit-to-unit coupling between MSTd and dlPFC, the better the animals’ gaze position tracked the ongoing location of the hidden spatial goal. We suggest this MSTd-dlPFC subnetwork reflects the natural task strategy, wherein monkeys continuously gaze toward the location of the (invisible) target. Together, these results highlight the distributed nature of neural coding during closed action-perception loops and suggest that fine-grained subnetworks may be dynamically established to reflect (embodied) task strategies.
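The abstract equates unit-to-unit coupling with noise correlations but does not describe how these were computed; the authors' actual analysis pipeline is not specified here. As a minimal illustrative sketch only (not the authors' method), noise correlations between two simultaneously recorded units can be estimated as the Pearson correlation of trial-by-trial spike counts after subtracting each task condition's mean response, so that shared stimulus-driven modulation does not inflate the estimate. All function and variable names below, and the synthetic data, are assumptions for illustration.

```python
import numpy as np

def noise_correlation(counts_a, counts_b, condition_ids):
    """Pearson correlation of trial-by-trial spike counts for two units,
    computed on residuals after removing each condition's mean response
    (i.e., discounting signal correlations driven by the task condition)."""
    counts_a = np.asarray(counts_a, dtype=float)
    counts_b = np.asarray(counts_b, dtype=float)
    condition_ids = np.asarray(condition_ids)

    resid_a = np.empty_like(counts_a)
    resid_b = np.empty_like(counts_b)
    for c in np.unique(condition_ids):
        mask = condition_ids == c
        resid_a[mask] = counts_a[mask] - counts_a[mask].mean()
        resid_b[mask] = counts_b[mask] - counts_b[mask].mean()

    return np.corrcoef(resid_a, resid_b)[0, 1]

# Synthetic example: 200 trials across 4 task conditions, with a shared
# trial-to-trial fluctuation that produces correlated variability.
rng = np.random.default_rng(0)
conditions = rng.integers(0, 4, size=200)
shared = rng.normal(size=200).clip(min=-1)
unit_a = rng.poisson(5 + conditions + shared)
unit_b = rng.poisson(4 + 2 * conditions + shared)
print(noise_correlation(unit_a, unit_b, conditions))
```

In practice, coupling between areas such as MSTd and dlPFC would be summarized over many simultaneously recorded unit pairs; this sketch shows only the pairwise building block under the stated assumptions.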