PhD Dissertation Defense: Michelle Heusser
Bioengineering Graduate Program
Advisor: Professor Neeraj Gandhi
Title: “Neural Population Dynamics of Sensorimotor Signals for Eye Movements”
Date: Friday, June 3, 2022, at 11:00 AM
Benedum Hall, Room 102
Committee Chair: Neeraj Gandhi, PhD, Professor, Department of Bioengineering
Committee Members: Aaron Batista, PhD, Professor, Department of Bioengineering
Matthew Smith, PhD, Associate Professor, Department of Biomedical Engineering, Carnegie Mellon University
Byron Yu, PhD, Professor, Departments of Electrical & Computer Engineering and Biomedical Engineering, Carnegie Mellon University
ABSTRACT: During active vision, we convert information about visual objects in our periphery into goal-directed eye movements known as saccades. This process of sensorimotor integration is complex; we must incorporate knowledge about our environment, including the spatial location of the target object and the urgency of saccade initiation. The superior colliculus (SC) is a deep brain structure that is critical for active vision, with most neurons in this area responding to the presence of a visual stimulus and increasing their activity to signal for saccade initiation. In the studies presented in this dissertation, we characterized the combined activity patterns of small populations of neurons in the non-human primate SC across multiple contexts to probe various parameters of active vision. We used simple machine learning techniques (i.e., dimensionality reduction and/or classification) that quantitatively capture the activity pattern across many simultaneously recorded channels. First, we examined the dynamics of population activity during the time between sensation and action and found that activity slowly evolves from a visual-like to a motor-like pattern when a delay is imposed. This sensorimotor transformation signature is robust to perturbations induced by small fixational saccades and is correlated with saccade latency, indicative of a potential mechanism for movement generation. Next, we investigated the impact of behavioral context on the population-level representation during the sensation and action periods of active vision and observed unique encoding of both content (sensation/action epochs) and context (two comparable behavioral tasks). Last, we determined the time course and spatial extent of intended saccade target direction encoding by SC neural populations in an eight-target delayed saccade task. 
We compared these profiles with a second signal modality, the local field potential (LFP), which represents collective activity in a broader region of the SC. Neural spiking activity better encoded target direction throughout the time course of sensorimotor integration than did LFP signals. Population activity during the motor epoch exhibited broader spatial tuning than in the visual epoch, indicative of dynamic encoding of spatial parameters. Taken together, these studies provide foundational knowledge of the SC's role in the process of active vision.
* Covid-19 contingency plan: IWoLP 2022 is planned as an in-person conference. If, however, the pandemic imposes serious travel restrictions, we will switch to a virtual conference. Registration fees will depend on the final format but will be kept below $300 for faculty and lower for junior scholars.
Motor speech: Carrie Niziolek, University of Wisconsin-Madison; Lisa Goffman, University of Texas at Dallas.
Handwriting and typing: Mike McCloskey, Johns Hopkins University; Gordon Logan, Vanderbilt University.
Sign language and gesturing: Naomi Caselli, Boston University; Cristina Baus, University of Barcelona.
Origins of language and linguistic diversity: Hannah Sarvasy, Western Sydney University; Adrien Meguerditchian, Aix-Marseille University.
Abstract Submission opens on March 21, 2022.
Abstract Submission deadline is April 8, 2022, at 5:00 pm Eastern Time.
Abstract acceptance notifications will be sent by April 18, 2022.
Registration opens on April 18, 2022.
Early-bird registration ends on May 2, 2022, at 5:00 pm Eastern Time.