PNC Milestone: Joel Ye @ Mellon Institute 355
Jul 6 @ 12:00 PM – 1:00 PM

Presenter: Joel Ye

Time: 12PM ET, Wednesday 7/6

Title: Modeling neural population responses to intracortical microstimulation

Location: Mellon Institute 355

Advisors: Rob Gaunt, Leila Wehbe

Committee: Byron Yu, Pulkit Grover, Chethan Pandarinath


Electrical stimulation of intracortical microelectrode arrays has the potential to drive local neural population responses in accordance with input stimulation patterns. The full design space of input patterns is, however, exponentially large, which hinders systematic experimental characterization of stimulation effects. We approach this problem, in the context of the human somatosensory cortex, by building deep network models of the neural response to stimulation and analyzing model generalization in unseen conditions.

Thesis Proposal: Arish Alreja
Jul 7 @ 1:00 PM – 2:00 PM

Speaker: Arish Alreja, Joint Ph.D. Program in Neural Computation and Machine Learning

Time: 1pm, July 7th 2022 (Thursday)

Location: GHC 4405 and Zoom

Title: The neurodynamic basis of real-world face perception


Abstract: Visual neuroscience has been shaped by tightly controlled laboratory experiments whose findings have been crucial in advancing our knowledge of the visual brain, and, in broad strokes, many of the principles gleaned from these studies generalize. However, ecological validity is essential for understanding how we really see. Many aspects of real-world visual experience, such as social interactions with loved ones or strangers, cannot be fully captured in laboratory settings, leaving gaps in our understanding of the neural correlates of real-world social vision. To address these gaps, this thesis introduces a novel experimental paradigm that combines eye-tracking, audio, egocentric video, and intracranial EEG recordings from human subjects during natural behavior, including social interactions with friends, family, and caregivers.


Chapter 2 of this thesis investigates the representational dynamics of face viewpoint and identity using intracranial EEG recordings from face processing areas in ventral temporal cortex in a traditional paradigm. A novel mixture model approach for representational analysis is developed, revealing new characteristics in the neural representation of face viewpoint and capturing qualitative observations from the existing literature. Representational Similarity Analysis (RSA) against a biologically plausible deep learning model of face processing concurs with the representational analysis from the new method. The relationship between identity and face viewpoint representations is examined and reveals the mirror-symmetric face viewpoint representation as a correlate of the identity code. Notably, we find that face viewpoint and identity representations are not bound by a hierarchy of cortical areas, but instead may rise and dissipate over time in the same cortical location.
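For readers unfamiliar with RSA, the core computation is short: build a representational dissimilarity matrix (RDM) over stimulus conditions for each system (brain and model), then rank-correlate the two RDMs. The sketch below uses synthetic data and hypothetical array shapes; it illustrates the standard technique, not the specific mixture model or deep network used in the thesis.

```python
# Minimal RSA sketch with synthetic data (shapes are illustrative only).
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_conditions, n_channels, n_units = 24, 64, 512

# One response pattern per condition, e.g. per face image:
neural = rng.standard_normal((n_conditions, n_channels))  # iEEG features (hypothetical)
model = rng.standard_normal((n_conditions, n_units))      # deep-net layer activations (hypothetical)

# RDMs as vectorized upper triangles: pairwise correlation distance between conditions.
rdm_neural = pdist(neural, metric="correlation")
rdm_model = pdist(model, metric="correlation")

# RSA score: rank correlation between the two representational geometries.
rho, p = spearmanr(rdm_neural, rdm_model)
print(f"RSA (Spearman rho) = {rho:.3f}, p = {p:.3f}")
```

Because RSA compares geometries rather than raw activations, it needs no mapping between iEEG channels and network units, which is what makes brain-to-model comparisons like the one above feasible.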


Chapter 3 of this thesis introduces a novel paradigm for collecting behavioral data (video, eye-tracking, and audio recordings) synchronized with neural activity recorded from intracranial EEG electrodes implanted in patients undergoing treatment for medically refractory epilepsy, during real-world social interactions. Best practices are established to carefully address the distinct privacy, ethical, and logistical considerations that arise in this paradigm. Data preprocessing and data fusion pipelines are introduced to enable construction of a high-quality multimodal data set that combines real-world social behavior and neural activity, enabling exploration of the neural correlates of social and affective perception in the human brain.


Chapter 4 investigates the neural correlates and dynamics of face processing during real-world social vision. Statistically significant decoding of faces vs. other object categories is observed, alongside a graded face response to the presence of faces in the periphery. Statistically significant decoding of face identity across individuals, and of facial expressions within an individual, is also observed. Compared with data from a traditional localizer experiment, a much broader swath of cortex contributes to face-related decoding during real-world vision. Notably, pre-saccadic neural activity can predict the category of the following fixation's target with significant accuracy. Together these preliminary results suggest that real-world social vision is a contextually modulated, distributed, and active sensing process.


Chapter 5 investigates neural representations underlying real-world social vision. Bidirectional models are used to predict fixation-locked face stimuli from neural activity, and neural activity from fixation-locked face stimuli, for different individuals and for different expressions within a single individual. Frequency decomposition comparisons between neural activity and its reconstruction reveal spectral-temporal features that are preserved, demonstrating that neural representations of facial features can be investigated in the real-world vision paradigm.


Finally, we propose expanding upon the preliminary findings of Chapter 5 by using deep learning models of vision for representational analysis of neural activity in different brain areas during real-world social vision. Taken together, this thesis proposal encompasses the development and application of statistical methods and deep learning models to assess the neural correlates, dynamics, and representations underlying unscripted real-world vision.


Thesis Committee:

Avniel Singh Ghuman (co-chair) – Department of Neurosurgery, University of Pittsburgh

Robert E. Kass (co-chair) – Department of Statistics & Data Science and MLD, Carnegie Mellon University

Leila Wehbe – MLD, Carnegie Mellon University

Charles E. Schroeder – Departments of Neurosurgery and Psychiatry, Columbia University and Nathan Kline Institute

Aug 29 @ 4:00 PM – 7:00 PM