86-375/675 Computational Perception

Carnegie Mellon University

Fall 2013. MWF 3:30-4:20 p.m. Mellon Institute 130.

Course Description

The perceptual capabilities of even the simplest biological organisms are far beyond what we can achieve with machines. Whether you look at sensitivity, robustness, or sheer perceptual power, perception in biology just works: it works in complex, ever-changing environments and can pick up the most subtle sensory patterns. Is it the neural hardware? Does biology solve fundamentally different problems? What can we learn from biological systems and human perception? In this course, we will first study biological and psychological data on perceptual systems in depth, and then apply computational thinking to investigate the principles and mechanisms underlying natural perception. The course will explore four major themes in computational perception this year: 1) scene statistics, sensory and cortical representation; 2) probabilistic models and mechanisms of perception; 3) neural decoding, mental representation and perceptual synthesis; 4) perceptual science, computation and artistic expression. You will learn how to reason scientifically and computationally about problems and issues in perception, how to extract the essential computational properties of those abstract ideas, and finally how to convert these into explicit mathematical models and computational algorithms. The course welcomes students from neuroscience, psychology, art, architecture, computer science and engineering who are interested in learning how computations in the brain allow us to interpret and perceive the world, and how the science and engineering of vision, biology and art can interact and inform one another to foster artistic expression, engineering innovation and scientific understanding.

The undergraduate (9 units) option of the course will require 6-8 homework exercises, a term project or paper, and a final exam. Students will learn to use Matlab over the course of the semester to explore some basic and important computational models of biological and perceptual computation in vision, and to experiment with computation and art. The graduate option of the course (12 units), open to undergraduate students, will require additional reading and presentations on current research in the different areas.

Materials will be drawn from two textbooks: (1) Frisby and Stone's "Seeing: The Computational Approach to Biological Vision", MIT Press, 2010, and (2) Simon Prince's "Computer Vision: Models, Learning, and Inference", Cambridge University Press, 2012. The course will thus cover the relevant material in computer vision, machine learning, image and pattern analysis, perceptual psychology, as well as the basic neuroscience of the visual system. Because of the diverse nature of students' backgrounds and preparation, problem sets will be tailored to the needs, interests and abilities of the students, collectively as well as individually.

Prerequisites: First-year college calculus, some linear algebra, probability theory and programming experience are desirable. Discuss with the instructor if you have any questions.

Course Information

Instructor                 Office (Office hours)    Email (Phone)
Tai Sing Lee (Professor)   Mellon Inst. Rm 115      tai@cnbc.cmu.edu (412-268-1060)

Recommended Textbook

Classroom Etiquette

Grading Scheme

Evaluation          % of Grade
Assignments         65
Final Exam          20
Term Project        15
675 Journal Club    25
  • Grading scheme: A: > 88%, B: > 75%, C: > 65%.
  • Syllabus

  • The syllabus of an earlier incarnation of the course is available at http://www.cnbc.cmu.edu/~tai/cp12.html. This year, however, we plan to include additional material from Simon Prince's book and to reorganize and expand the course to explore the following four topical areas.
  • Topic 1: Scene statistics, sensory and cortical representation

    To understand perception, we must understand the natural environments that shape our brain and our perceptual computational machinery. Central to understanding the neural basis of perceptual inference from a Bayesian perspective is understanding how the statistical regularities of natural scenes are encoded in cortical representations to serve as priors in the inference process. Natural images, however, are enormously complex and may be best expressed in hierarchical forms. Thus, a major challenge in computational vision is to understand the basic vocabulary of images, and the computational rules by which elementary components can be composed into successive compositional structures that encode the hierarchical priors of natural scenes. We will explore statistical models of images, as well as compositional models such as deep belief networks (DBNs) and recursive compositional models (RCMs), for learning the hierarchical language of vision. We will explore how these hierarchical scene priors are encoded in neural tunings and neural connectivities to facilitate perceptual inference.
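    To make the idea of scene statistics concrete, here is a minimal Python sketch (an illustration only; course assignments use Matlab) of one classic regularity: derivative-filter responses to edge-dominated, natural-scene-like images are sparse and heavy-tailed (high kurtosis), unlike responses to Gaussian noise. The "cartoon" image below is a synthetic stand-in for a natural scene, not real image data.

```python
import numpy as np

rng = np.random.default_rng(0)

def excess_kurtosis(x):
    """Fourth moment / squared second moment, minus 3 (Gaussian -> 0)."""
    x = x - x.mean()
    return (x**4).mean() / (x**2).mean()**2 - 3.0

# "Cartoon" image: piecewise-constant patches plus mild noise,
# a crude stand-in for the edge structure of natural scenes.
blocks = rng.integers(0, 2, size=(16, 16)).astype(float)
cartoon = np.kron(blocks, np.ones((16, 16))) + 0.05 * rng.standard_normal((256, 256))

# Pure Gaussian noise image for comparison.
noise = rng.standard_normal((256, 256))

# Horizontal derivative-filter responses (pixel differences along rows).
d_cartoon = np.diff(cartoon, axis=1).ravel()
d_noise = np.diff(noise, axis=1).ravel()

# The cartoon's responses are mostly near zero with rare large jumps at
# edges (large positive excess kurtosis); the noise image's are Gaussian
# (excess kurtosis near zero).
print(excess_kurtosis(d_cartoon))
print(excess_kurtosis(d_noise))
```

    This sparseness of filter responses is one of the statistical regularities that efficient-coding accounts of V1 receptive fields appeal to.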

    Topic 2: Probabilistic models and algorithms of perception

    While perception has been popularly formulated in terms of Bayesian inference at the theoretical level, little is known about the computational algorithms and implementation of perceptual inference. We will explore mechanistic and normative models of motion, binocular stereo, texture, surface and contour perception, perceptual organization, and hierarchical models of object recognition, drawing on work in computer vision and computational neural modeling. We will study a number of algorithms that have been effective in computer vision for learning and inference, including gradient descent, particle and Kalman filtering, MCMC sampling and mean-field approximation, and explore the links between observed neural dynamics and these inference algorithms. We will explore various theoretical frameworks for how perceptual representations are encoded in neuronal ensembles, including the issues of population codes, synchrony and binding.
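    As a toy illustration of Bayesian inference in perception (a Python sketch with made-up numbers; course assignments use Matlab), consider the textbook cue-combination case: two independent Gaussian cues about the same quantity combine into a posterior whose mean is a precision-weighted average of the cues and whose variance is smaller than either cue's alone.

```python
import numpy as np

# Two noisy cues (say, visual and haptic estimates of the same depth).
visual_mean, visual_sigma = 10.0, 1.0
haptic_mean, haptic_sigma = 12.0, 2.0

# Precision (inverse variance) of each cue.
w_v = 1.0 / visual_sigma**2
w_h = 1.0 / haptic_sigma**2

# The posterior is Gaussian: its mean is the precision-weighted average,
# and its precision is the sum of the cue precisions.
posterior_mean = (w_v * visual_mean + w_h * haptic_mean) / (w_v + w_h)
posterior_sigma = np.sqrt(1.0 / (w_v + w_h))

print(posterior_mean)   # 10.4: pulled toward the more reliable (visual) cue
print(posterior_sigma)  # ~0.894: more certain than either cue alone
```

    The same precision-weighting logic reappears, in more elaborate form, in the Kalman filter and the other inference algorithms listed above.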

    Topic 3: Neural decoding, mental representation and perceptual synthesis

    With an understanding of cortical representation and the neural mechanisms of perceptual inference, we can begin to explore how neural decoding and stimulation technology, coupled with large-scale multi-electrode arrays, can be used to decode mental images in our brain as well as to generate perceptual representations in the brain by electrical stimulation. There are over 40 million blind individuals in the world. A variety of invasive and noninvasive procedures have emerged over the years that use electrical stimulation to "restore" or create vision, ranging from retinal implants to electrical stimulation of the LGN and of the visual cortex. We will investigate how V1 and the extrastriate cortex can represent mental images and percepts, individually and together, in terms of theory, models and neural evidence. We will study the literature on artificial vision in humans and animal models and explore paradigms for the development of visual prostheses that integrate computer vision with electrical recording and stimulation technology.
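    As a small illustration of neural decoding (a Python sketch with a hypothetical cosine-tuned population, not an account of any specific experiment), the classic population-vector decoder reads out a stimulus direction by summing each neuron's preferred-direction vector, weighted by its firing rate:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical population of direction-tuned neurons with cosine tuning.
n_neurons = 64
preferred = np.linspace(0, 2 * np.pi, n_neurons, endpoint=False)

def responses(stimulus_angle):
    # Firing rate: baseline + cosine tuning, with Poisson spiking noise.
    rate = 10.0 + 8.0 * np.cos(stimulus_angle - preferred)
    return rng.poisson(rate).astype(float)

def population_vector_decode(r):
    # Each neuron "votes" with a unit vector toward its preferred
    # direction, weighted by its firing rate; the angle of the summed
    # vector is the decoded stimulus.
    x = np.sum(r * np.cos(preferred))
    y = np.sum(r * np.sin(preferred))
    return np.arctan2(y, x) % (2 * np.pi)

true_angle = 1.2
decoded = population_vector_decode(responses(true_angle))
print(decoded)  # a noisy estimate near the true angle
```

    Richer decoders (e.g., maximum-likelihood or Bayesian readouts) improve on this simple vector sum, but the principle of reading a percept out of a population code is the same.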

    Topic 4: Perception, computation and art

    Visual perception and artistic expression are deeply connected at many levels. In fact, visual perception in the brain might involve both analysis and synthesis. That is, our perception is not simply an analysis of what is out there, but an active synthesis of an internal mental representation of what is out there, sometimes leading to illusion and hallucination. We will explore this synthesis process and how it might be tied to aesthetics and art making. The integration of visual art and the experimental study of vision has its roots in the formal analysis of paintings. Advances in our understanding of how our brain and perception work have led to a resurgence of interest in linking art with vision science. Here, we will explore some of the new links between neuroscience, computational vision and art, with a view to enriching our understanding and making of art -- how artistic expression is rooted in perceptual computation and how the scientific understanding of vision has transformed art over the centuries.


    Questions or comments: contact Tai Sing Lee
    Last modified: April 3, 2013, Tai Sing Lee