Classical and contemporary optical flow algorithms aim to estimate image motion fields (a.k.a. optical flows) that emerge from scenes with rigid or articulated objects, i.e., motion fields with roughly piecewise smooth motions and no turbulence or complicated singularities (and, most often, just piecewise constant velocity). However, the variety of motions exhibited by natural or man-made phenomena clearly goes beyond this class (see a few examples in the images below, one of which is a specular flow, which we exploit for specular shape inference in another line of research in the lab).

To attempt better motion estimation even under these extreme conditions, we propose to study complex optical flows in a polar representation, where each element of the brightness motion field is represented by its magnitude and orientation instead of its Cartesian projections. This seemingly small change of representation provides more direct access to the intrinsic structure of the field and makes readily explicit certain statistical properties that are not observed in the Cartesian representation. When used with the popular variational inference paradigm, it provides a framework in which regularizers can be intuitively tailored for very different classes of motion, including the complex motions that motivated this research in the first place.
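Concretely, the change of representation is just a per-pixel change of coordinates between the Cartesian components (u, v) of the flow and its magnitude and orientation. A minimal sketch in Python/NumPy (the helper names are ours for illustration; the actual implementation referenced below builds on Sun et al.'s MATLAB code):

```python
import numpy as np

def flow_to_polar(u, v):
    """Convert Cartesian flow components (u, v) into the polar
    representation: per-pixel magnitude and orientation.

    Illustrative helper, not part of the released code.
    """
    mag = np.hypot(u, v)        # per-pixel speed
    ori = np.arctan2(v, u)      # per-pixel direction, in (-pi, pi]
    return mag, ori

def polar_to_flow(mag, ori):
    """Inverse mapping: recover the Cartesian components from (mag, ori)."""
    return mag * np.cos(ori), mag * np.sin(ori)
```

In a variational framework one can then place regularizers directly on the magnitude and orientation fields (e.g., smoothing each separately), rather than on u and v, which is what makes the representation convenient for non-piecewise-constant motions such as fluid or specular flows.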
Our evaluations reveal that a flow estimation algorithm based on a polar representation can perform as well as, or better than, the state of the art when applied to traditional (approximately piecewise constant) optical flow problems. At the same time, it facilitates both qualitative and quantitative improvements for non-traditional cases such as fluid flows and specular flows, whose structure is very different.
You are welcome to download the polar-based optical flow estimation code. Please note that this implementation is based on Deqing Sun's optical flow implementation from Sun et al., CVPR 2010.
Y. Adato, T. Zickler, and O. Ben-Shahar, A polar representation of motion and implications for optical flow, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Colorado Springs, USA, June 2011, pp. 1145–1152.
Who and Where…
This research is joint work by Yair Adato and Ohad Ben-Shahar of the Computer Science Department, Ben-Gurion University of the Negev, and Todd Zickler of the Harvard School of Engineering and Applied Sciences. It was published in the Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2011, and also presented at the Israel Machine Vision Conference (IMVC) 2012.
This work was funded in part by the US Air Force European Office of Aerospace Research and Development (grant number FA8655-09-1-3016), the Israel Science Foundation (grant No. 1245/08), and the US National Science Foundation (grant IIS-0712956). Additional funding for Yair Adato was provided by a Google Europe Fellowship in computer vision.