Contextual Emergence of Object Saliency

Welcome

Visual context is used in different forms for saliency computation. While its use in saliency models for fixation prediction is often well reasoned, this is less so the case for approaches that aim to compute saliency at the object level. We argue that the types of context employed by these methods lack clear justification and may in fact interfere with the goal of capturing the saliency of whole visual objects. In this paper we discuss the constraints that different types of context impose and suggest a new interpretation of visual context that allows the emergence of saliency for more complex, abstract, or multiple visual objects. Despite shying away from an explicit attempt to capture "objectness" (e.g., via segmentation), our results are qualitatively superior and quantitatively better than the state of the art.

Download

Matlab Code (and additional required software)

  • Source: Matlab source code of our method for contextual emergence of object saliency. Additional software required to run the code is available for download via the link below. Please see the included README.txt file for details.
  • Additional required software

Paper

Video
  • The video lecture of the paper presentation at ECCV '14 is available on videolectures.net.
  • The video lecture page also includes the presentation slides in PDF format.
Credits

Who and Where…

This research is joint work by Rotem Mairon and Ohad Ben-Shahar of the Computer Science Department, Ben-Gurion University of the Negev, Beer-Sheva, Israel. It was presented in the proceedings of the European Conference on Computer Vision (ECCV), 2014.

Acknowledgments

This work was supported in part by the National Institute for Psychobiology in Israel (grant no. 9-2012/2013) funded by the Charles E. Smith Family, by the Israel Science Foundation (ISF grants no. 259/12 and 1274/11), and by the European Commission in the 7th Framework Programme (CROPS GA no. 246252). We also thank the Frankel Fund, the ABC Robotics Initiative, and the Zlotowski Center for Neuroscience at Ben-Gurion University for their generous support.