Shape from Specular Flow


A central problem in computer vision is the estimation of 3D shape from images. Much work has been devoted to this problem in the context of Lambertian or mostly diffuse objects. But what happens when the object is specular? An image of a purely specular (mirror-like) object is just a distortion of its surrounding illumination environment, where the distortion is a function of the object's shape. It is therefore natural to examine when and how shape information can be recovered from observations of these distortions.

When the environment is known, it is conceivable to measure the reflected distortion and use it to estimate the shape of the specular object. But when the illumination environment surrounding the object is unknown, this shape recovery problem becomes severely ill-posed. Indeed, it is well known that almost any image can be produced by any given specular surface simply by manipulating the environment. (This property is exploited in art in what has been called anamorphosis, where a painting makes sense to a human observer only when it is reflected from a specific mirrored object and watched from a very specific viewpoint, as illustrated in the right image.) As a result of this difficulty, much of the existing work on recovering shape from specular reflections has been restricted to settings in which the environment surrounding the object is completely known or otherwise extremely simple (usually just a single light source). The goal of this project is to break this barrier by devising and exploring a theory that permits specular shape reconstruction with little or no information about the surrounding environment. Dealing with specular objects is not only challenging, it is also fun. Just watch our Liana being a bit silly at Anish Kapoor’s Cloud Gate in Chicago.

Our Liana getting a bit silly in Chicago


The SFSF equation

The key to our solution is to incorporate motion into the scene. In fact, we have shown that the problem becomes more tractable once the camera observes relative motion between the object and the environment, of the sort shown in the video on the left. The motion induces a type of optical flow field that has become known as specular flow. This flow is invariant to the environment content and hence can be exploited for the reconstruction task when the illumination environment is unknown. We have developed the foundations of the shape-from-specular-flow (SFSF) problem and shown that the specular flow is directly related to surface shape through a non-linear partial differential equation (ICCV 2007, CVPR 2008, PAMI 2010). Furthermore, we have shown that a suitable re-parameterization leads to a linear formulation of the SFSF equation (ICCV 2009). Theoretically, one can solve these equations and reconstruct the surface. Some of our work has shown how this can also be done numerically once the specular flow is measured from images.
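The geometry underlying specular flow can be sketched numerically. Under the mirror-reflection law, a view direction v reflecting off a surface point with unit normal n maps to the environment direction r = 2(v·n)n − v; as the environment rotates relative to the object, the reflected direction at each pixel changes, and the image motion of the reflected environment features is the specular flow. The snippet below is a minimal illustration of this mirror-reflection geometry only (not the SFSF equation or any algorithm from the papers), using an orthographically viewed unit sphere and a small environment rotation as assumed toy inputs.

```python
import numpy as np

def reflect(v, n):
    """Mirror-reflect view direction v about unit normal n: r = 2(v.n)n - v."""
    return 2.0 * np.dot(v, n) * n - v

def rot_z(theta):
    """Rotation matrix about the z-axis by angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

# Orthographic camera looking down -z; direction toward the camera is +z.
v = np.array([0.0, 0.0, 1.0])

# A point on a unit sphere (for a sphere, the normal equals the position).
n = np.array([0.3, 0.2, np.sqrt(1.0 - 0.3**2 - 0.2**2)])

r = reflect(v, n)            # environment direction seen at this pixel
r_rot = rot_z(0.01) @ r      # same direction after a small environment rotation
flow = r_rot - r             # infinitesimal change of the reflected direction
```

Since both v and n are unit vectors, the reflected direction r is also a unit vector, and the displacement `flow` is small for a small rotation, as expected.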

Reconstruction Algorithms

Reconstruction of specular shape from specular flow requires solving the SFSF equation, in which the specular flow is assumed known after being measured from the images. This in itself assumes that the measurement of the flow is robust enough, and indeed, when the flow is given without major distortions, the reconstruction provides good results. A major flaw in this pipeline of computational steps (i.e., first measure the flow, then solve for shape using the SFSF equation) is that specular flows are very difficult to measure robustly; in particular, existing optical flow algorithms fail to do so because specular flows typically violate the main assumptions on which those algorithms rely. To cope with this, we have proposed a different formulation of the problem in which the flow and the specular shape are estimated simultaneously (in one shot). The rationale is simple: each structure incorporates constraints that can assist in the estimation of its counterpart. As it turns out, this strategy offers improved flow estimation and better robustness to noise, and perhaps its main advantage is that it requires no initial conditions to recover the shape up to a basic equivalence class of shapes.

SFSF Benchmark

We have established a benchmark dataset of real image sequences with corresponding specular flow ground truth and corresponding specular shapes (BMVC 2010). To do so, we created specular objects with ground-truth shape using a state-of-the-art, high-precision 3D printer, and acquired specular image sequences using a custom-made device that mimics the imaging setup in our theoretical model. This benchmark covers several aspects missing from the Middlebury optical flow dataset and is crucial for SFSF research. Additionally, it supplies a unique case study for the estimation of complex motion fields, in which the motion structure is complicated enough to raise new challenges for existing optical flow algorithms.

Download SFSF benchmark dataset

We invite you to download our SFSF benchmark database in ZIP format or 7zip format. Each directory contains:

  • 2–5 consecutive frames of the specular sequence.
  • A mask of the object.
  • A mask map of the parabolic lines.
  • The corresponding ground-truth flow in “.flo” format (see the Middlebury optical flow benchmark).
  • The corresponding ground-truth surface, represented as a height map.
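The “.flo” files follow the Middlebury optical flow convention: a 4-byte magic number (the float 202021.25), the width and height as 32-bit integers, and then the flow as interleaved (u, v) 32-bit floats in row-major order. A minimal reader along these lines (our own sketch, not code shipped with the benchmark) might look like:

```python
import struct
import numpy as np

FLO_MAGIC = 202021.25  # sanity-check value defined by the Middlebury .flo format

def read_flo(path):
    """Read a Middlebury-style .flo file into an (H, W, 2) float32 array of (u, v)."""
    with open(path, "rb") as f:
        magic, = struct.unpack("<f", f.read(4))
        if magic != FLO_MAGIC:
            raise ValueError("not a .flo file (bad magic number)")
        width, height = struct.unpack("<ii", f.read(8))
        data = np.frombuffer(f.read(width * height * 2 * 4), dtype="<f4")
        return data.reshape(height, width, 2)
```

Note that very large flow values in Middlebury-style files conventionally mark unknown flow, so downstream code may wish to mask those out.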



This work was funded in part by the Israel Science Foundation (grant No. 1245/08), the US National Science Foundation (grant IIS-0712956) and the US Air Force European Office of Aerospace Research and Development (grant number FA8655-09-1-3016). Additional funding for Yair Adato was provided by a Google Europe Fellowship in computer vision.

This project is joint work by Yair Adato and Ohad Ben-Shahar of the Computer Science Department at Ben-Gurion University of the Negev. Most of the work is the result of a very fruitful collaboration with Todd Zickler of the Harvard School of Engineering and Applied Sciences, his student Yuriy Vasilyev, and their colleagues Steve Gortler and Guillermo Canas. Different aspects of the work were presented in various conferences and workshops, including ICCV 2007, the Foundations of Computer Vision Workshop 2007, CVPR 2008, the Indian-Israeli Workshop on Computer Vision 2008, ICCV 2009, BMVC 2010, CVPR 2011, BMVC 2011, and the Israel Machine Vision Conference (IMVC) 2011.