Active Perception Laboratory


  • Active Perception
  • Active perception is the use of action to gather information in order to construct, update, and improve a system's knowledge and understanding of its environment. One example of active perception is the constant movement of our eyes to search for and gather visual information. At each point in time, we see only a very small portion of the world clearly with our fovea. Yet by moving our eyes rapidly, we manage to construct a seemingly stable and coherent internal representation of the world in our mind. How are these bits and pieces of retinal information assembled and integrated over time to form a coherent mental image of the scene? How does the brain decide where to look next? How and where are these scene elements represented in the brain? To address these questions, we are conducting human psychophysical experiments to examine information transfer across saccadic eye movements. We are also recording from neurons while monkeys actively scan and search the visual environment, to examine the neural basis of such information transfer. We have developed a theoretical framework that describes saccadic behavior as an information-maximization process (sketched below), and we are now formulating and testing computational models of dynamic scene integration.
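    A minimal sketch of the information-maximization idea, in Python: the observer keeps a belief map over scene locations and saccades to the fixation point expected to remove the most uncertainty. The grid, the Bernoulli belief per location, and the Gaussian acuity falloff are illustrative assumptions, not the laboratory's actual model.

      # Minimal sketch of saccade selection as information maximization.
      # All names and the Gaussian-falloff acuity model are illustrative assumptions.
      import numpy as np

      def entropy(p):
          """Bernoulli entropy in bits, elementwise."""
          p = np.clip(p, 1e-9, 1 - 1e-9)
          return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

      def expected_info_gain(belief, fixation, acuity_sigma=1.5):
          """Expected entropy reduction if the eye fixates `fixation` (row, col).

          Assumes a foveated observer: locations near the fixation are resolved
          almost perfectly, distant locations barely at all.
          """
          rows, cols = np.indices(belief.shape)
          dist2 = (rows - fixation[0]) ** 2 + (cols - fixation[1]) ** 2
          resolvability = np.exp(-dist2 / (2 * acuity_sigma ** 2))  # 1 at fovea, ~0 in periphery
          # A fully resolved location loses all its uncertainty; partial resolution loses a fraction.
          return np.sum(resolvability * entropy(belief))

      def next_saccade(belief):
          """Pick the fixation point that maximizes expected information gain."""
          gains = np.array([[expected_info_gain(belief, (r, c))
                             for c in range(belief.shape[1])]
                            for r in range(belief.shape[0])])
          return np.unravel_index(np.argmax(gains), gains.shape)

      # Usage: uncertain beliefs (p = 0.5) everywhere except an already-inspected corner.
      belief = np.full((8, 8), 0.5)
      belief[:3, :3] = 0.05          # already looked there; little uncertainty left
      print(next_saccade(belief))    # the chosen fixation lands in the still-uncertain region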

  • Structure of Neural Code
  • To understand the representations and computational processes underlying active perception, we need to understand the codes used by neurons, i.e., the language with which neurons communicate with each other. For the last thirty years, the average firing rate of a neuron has been considered the most reliable measure of the information it carries. Using pattern analysis, system identification, machine learning, and statistical techniques, we are examining whether precise temporal spike patterns hidden in neural spike trains encode higher-order structure. Several projects in the laboratory focus on this issue. We are developing a new method that allows us to visualize and examine the information content of a spike train across time and time scale. We are also using system identification techniques to decode the 'speech' of a neuron or the 'symphony' of a neuronal ensemble, reconstructing the visual scene from the spiking activity of a single neuron or a neuronal ensemble (see the sketch below).
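    A hedged sketch of what such decoding can look like: simulate a population whose Poisson spiking depends on the recent stimulus, then fit a linear (ridge-regression) map from windows of spike counts back to the stimulus. The simulated encoding model, window length, and regularization are assumptions for illustration, not the laboratory's actual analysis pipeline.

      # Linear stimulus reconstruction ("decoding") from spike trains, in the spirit of
      # system identification. The Poisson simulation and ridge decoder are illustrative.
      import numpy as np

      rng = np.random.default_rng(0)
      T, n_neurons, lag = 2000, 20, 5          # time bins, neurons, decoding window

      # Simulated stimulus and encoding: each neuron's rate depends linearly on the
      # recent stimulus (plus a baseline); spike counts are Poisson.
      stimulus = rng.normal(size=T)
      encoding_filters = rng.normal(size=(n_neurons, lag)) * 0.5
      rates = np.empty((T, n_neurons))
      for t in range(T):
          window = stimulus[max(0, t - lag + 1):t + 1]
          window = np.pad(window, (lag - len(window), 0))
          rates[t] = np.exp(encoding_filters @ window - 1.0)
      spikes = rng.poisson(rates)               # (T, n_neurons) spike counts

      # Decoder: stack the spike counts from the `lag` bins at and after each stimulus
      # sample into one feature vector, then solve a ridge regression.
      X = np.zeros((T - lag, n_neurons * lag))
      for t in range(T - lag):
          X[t] = spikes[t:t + lag].ravel()
      y = stimulus[:T - lag]
      ridge = 10.0
      W = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ y)

      reconstruction = X @ W
      print("decoding correlation:", np.corrcoef(reconstruction, y)[0, 1])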

  • Feedback and Hierarchical Computation
  • The classical framework for vision, as delineated by Marr, is that vision is accomplished by a series of feedforward computations in the visual hierarchy. Experimental findings from our laboratory show that global contextual information can modify the computation in early visual areas. This evidence suggests that low-level visual processes may be tightly coupled to high-level visual processes. The high-level influence is most likely mediated by the massive recurrent feedback connections from one area of the brain to another. What is the functional role of this feedback? What are the advantages of concurrent and interactive computation across the visual hierarchy? Our working theory is that feedback generates predictions and hypotheses that guide visual processing throughout the visual hierarchy; these predictions and hypotheses are central to the theory of active perception. We are now testing this theory by recording from neurons across the visual hierarchy (areas V1, V2, V4) while monkeys play video games. We are also constructing and testing computational models and realistic neural circuit models in conjunction with the neurophysiological experiments (a minimal sketch of the prediction-error idea follows below).
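    One standard way to formalize the idea that feedback carries predictions is predictive coding, in the spirit of Rao and Ballard: a higher area sends a prediction of lower-level activity down the hierarchy, and the lower area returns only the prediction error. The two-level sketch below, with its dimensions, learning rate, and random top-down weights, is an illustrative assumption rather than the model used in the laboratory.

      # Minimal two-level predictive-coding sketch of feedback as prediction.
      # Dimensions, learning rate, and random generative weights are assumptions.
      import numpy as np

      rng = np.random.default_rng(1)
      n_low, n_high = 16, 4
      W = rng.normal(size=(n_low, n_high)) / np.sqrt(n_high)   # top-down (feedback) weights

      def infer(x, n_steps=200, lr=0.1):
          """Infer the higher-level representation r whose feedback best predicts input x."""
          r = np.zeros(n_high)
          for _ in range(n_steps):
              prediction = W @ r       # feedback: top-down prediction of lower-level activity
              error = x - prediction   # feedforward: residual the lower area passes up
              r += lr * (W.T @ error)  # higher area adjusts to explain away the error
          return r, x - W @ r          # final representation and remaining prediction error

      # Usage: an input generated by known causes plus noise is explained almost fully.
      x = W @ np.array([1.0, -0.5, 0.0, 2.0]) + 0.01 * rng.normal(size=n_low)
      r, residual = infer(x)
      print("inferred causes:", np.round(r, 2))
      print("remaining error norm:", np.linalg.norm(residual))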

  • Neural Plasticity and Learning
  • The brain is an adaptive system. Even after development, neural circuits remain plastic and change with learning. Currently, we are implanting arrays of 100 electrodes into the neocortex of monkeys to record from the same neurons in the visual system over long periods of time (months), during which the monkeys are trained to perform new perceptual grouping and perceptual discrimination tasks. This allows us to study the evolution of neural representations as a function of the organism's behavior and experience over time. The experiments are inspired by unsupervised and reinforcement learning models (see the sketch below), and we hope to infer from the neurophysiological data the neural algorithms underlying the development, formation, and maintenance of neural circuits in the visual hierarchy.
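    As one concrete example of the kind of learning rule such models suggest, the sketch below implements reward-modulated Hebbian plasticity on a toy discrimination task: a synapse changes when presynaptic activity, postsynaptic activity, and a global reward signal coincide. The task, the three-factor rule, and all parameters are illustrative assumptions, not the algorithms inferred from our data.

      # Reward-modulated (three-factor) Hebbian plasticity on a toy discrimination task.
      # The task, prototype, and parameters are hypothetical.
      import numpy as np

      rng = np.random.default_rng(2)
      n_inputs = 10
      w = np.zeros(n_inputs)
      target = rng.normal(size=n_inputs)        # "category A" prototype (hypothetical)
      lr = 0.05

      for trial in range(2000):
          is_a = rng.random() < 0.5
          x = (target if is_a else -target) + 0.5 * rng.normal(size=n_inputs)
          y = 1.0 if w @ x > 0 else -1.0        # postsynaptic decision: A (+1) or B (-1)
          reward = 1.0 if (y > 0) == is_a else -1.0
          # Three-factor rule: presynaptic activity x, postsynaptic activity y,
          # and a global reward signal jointly gate the weight change.
          w += lr * reward * y * x

      # After learning, the decision tracks the category on most trials.
      correct = 0
      for _ in range(500):
          is_a = rng.random() < 0.5
          x = (target if is_a else -target) + 0.5 * rng.normal(size=n_inputs)
          correct += ((w @ x > 0) == is_a)
      print("accuracy:", correct / 500)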


    Webpage maintained by Stella X. Yu.