Symmetry and Pose Estimation in 3D Data

Knowing the poses of objects before detection or classification has been shown to improve the results of object detectors. However, robust and fine estimation of object poses remains challenging. To address this, we propose to exploit the mirror symmetry of objects, which provides part of the pose information. In our paper, we show how the symmetry of objects in 3D can be robustly detected (providing fine but partial pose information) and used to construct a partially pose-invariant representation of object shape, enabling state-of-the-art object detection.
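To give a rough feel for the idea (this is a minimal sketch, not the detection method from the paper), mirror symmetry in a 3D point cloud can be scored by reflecting the points across a candidate plane and measuring how well the reflected cloud aligns with the original; the plane names and candidate set below are illustrative assumptions:

```python
import math

def reflect(p, n):
    # Reflect point p across the plane through the origin with unit normal n.
    d = sum(pi * ni for pi, ni in zip(p, n))
    return tuple(pi - 2.0 * d * ni for pi, ni in zip(p, n))

def symmetry_score(points, n):
    # Mean distance from each reflected point to its nearest original point
    # (lower = more mirror-symmetric about the plane with normal n).
    total = 0.0
    for p in points:
        q = reflect(p, n)
        total += min(math.dist(q, r) for r in points)
    return total / len(points)

def detect_mirror_plane(points, candidate_normals):
    # Center the cloud so candidate planes pass through the centroid,
    # then keep the candidate normal with the lowest score.
    c = tuple(sum(coords) / len(points) for coords in zip(*points))
    centered = [tuple(pi - ci for pi, ci in zip(p, c)) for p in points]
    return min(candidate_normals, key=lambda n: symmetry_score(centered, n))

# Toy cloud that is mirror-symmetric about the x = 0 plane.
cloud = [(1, 0, 0), (-1, 0, 0), (1, 2, 1), (-1, 2, 1), (0, 1, 3)]
candidates = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
best = detect_mirror_plane(cloud, candidates)  # normal of the best mirror plane
```

A real detector would search over many candidate planes (e.g. from point-pair voting) and handle noise and occlusion; the recovered plane normal then fixes part of the object's pose, which is the information the representation in the paper builds on.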

More information will be added following the ECCV 2014 conference. In the meantime, you are invited to have a look at our paper, described below.

Download Code and annotations

  • Our annotations of the 3D center points of the objects in the Berkeley 3D object dataset can be found here.
  • The symmetry detection code can be found here (last updated 20.09.2014). Please note that this does not include the calculation of the feature vectors.

Paper

Acknowledgments

This research was funded in part by the European Commission in the 7th Framework Programme (CROPS GA no. 246252). We also thank the generous support of the Frankel fund and the ABC Robotics Initiative at Ben-Gurion University.