Representation and inference of 3D surfaces -- mid-level vision

How does the brain compute and represent 3D visual surfaces? Specifically, how can arbitrary, finely detailed surface shapes (e.g. the facial structure of a person, or the varied curvature of a cup) be represented? The neural basis of mid-level vision, or the computation of what David Marr called the 2.5D sketch, is at present poorly understood. We are developing a principled approach to this problem by studying the statistical regularities of natural 3D scenes to obtain candidate codes for the optimal representation of 3D surfaces, and by experimenting with probabilistic algorithms that infer 3D surfaces from image cues using a generative model.
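
To make the second idea concrete, the following is a minimal, illustrative sketch (not the lab's actual algorithm) of generative-model inference of a surface: MAP estimation of a 1D depth profile from a noisy shading cue, assuming a Lambertian imaging model and a smoothness prior. All specifics here (the light direction, the noise level sigma, the prior weight lam) are illustrative assumptions.

    # Toy sketch: MAP inference of a 1D depth profile from shading,
    # under an assumed Lambertian generative model and smoothness prior.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    N = 64
    x = np.linspace(0.0, 1.0, N)
    z_true = 0.1 * np.sin(2 * np.pi * x)     # hidden surface (a gentle bump)
    light = np.array([0.4, 0.9])             # assumed light direction
    light = light / np.linalg.norm(light)

    def render(z):
        """Generative model: Lambertian shading of a 1D height field."""
        dz = np.gradient(z, x)                             # surface slope
        normals = np.stack([-dz, np.ones_like(dz)], axis=1)
        normals /= np.linalg.norm(normals, axis=1, keepdims=True)
        return np.clip(normals @ light, 0.0, None)         # clamp shadows

    sigma = 0.02                              # assumed image-noise level
    image = render(z_true) + sigma * rng.normal(size=N)    # observed cue

    def neg_log_posterior(z, lam=50.0):
        # Gaussian likelihood of the rendered image given the surface
        likelihood = np.sum((image - render(z)) ** 2) / (2 * sigma ** 2)
        # Smoothness prior penalizing second differences (curvature)
        prior = lam * np.sum(np.diff(z, n=2) ** 2)
        return likelihood + prior

    z_map = minimize(neg_log_posterior, np.zeros(N), method="L-BFGS-B").x
    print("RMS depth error:", np.sqrt(np.mean((z_map - z_true) ** 2)))

The point of the sketch is the structure of the computation: a forward (generative) model renders an image from a hypothesized surface, and inference inverts it by combining the image likelihood with prior expectations about surface shape.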

  • Project 1: Computational model of probabilistic inference of 3D surfaces
    • Project Leader: Brian Potetz

  • Project 2: Neural basis of 3D surface inference

  • Project 3: Computational study of optimal codes for 3D surfaces
    • Project Leader: Brian Potetz

  • Project 4: Neurophysiology of 3D surface coding