Meetings

iCVL Meetings

The iCVL maintains its own weekly seminar series and reading group, where we either host guests to discuss their research or have one of our own members lead the study of a topic or a specific work of interest from the literature. Our meetings run as two-hour round-table discussions (currently on Wednesdays, 10:00-12:00) and enjoy a significantly higher level of interaction than most seminars. We would be delighted to host you to learn about your work too; please contact the lab’s director or our seminar coordinator.

While group meetings started in 2007, organized listings for the web site began in February 2012, shortly before its launch in July 2012. The forthcoming meeting is automatically highlighted and centered below; please scroll up and down for future and past meetings.

2020
  • 24.06.2020
  • Ben Vardi - BGU
  • Puzzle Solving With Relaxation Labeling
Abstract: The topic of the meeting is our ongoing work on jigsaw puzzle solving, done in collaboration with colleagues from Ca' Foscari University of Venice. We will survey approaches to assembling jigsaw puzzles, focusing on our own approach, which formulates the problem as a relaxation labeling problem. We will also discuss general challenges in puzzle solving and specific challenges that apply to our method.
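For readers unfamiliar with relaxation labeling, the sketch below shows a generic Rosenfeld-Hummel-Zucker style update in Python/NumPy. It is only an illustration of the general technique, not the specific formulation used in our work; in a puzzle setting the "objects" would be pieces, the "labels" candidate board positions, and the compatibility coefficients would come from pairwise piece affinities (all placeholders here).

import numpy as np

def relaxation_labeling(compat, p, n_iters=100):
    # Generic relaxation labeling update (illustrative sketch only).
    # compat: (n, L, n, L) compatibility coefficients r_ij(lam, mu)
    # p:      (n, L) label-probability vectors, one row per object, rows sum to 1
    for _ in range(n_iters):
        # support q_i(lam) = sum_j sum_mu r_ij(lam, mu) * p_j(mu)
        q = np.einsum('iljm,jm->il', compat, p)
        p = p * q
        p = p / p.sum(axis=1, keepdims=True)  # renormalize each row
    return p

# Toy run: 3 objects, 2 labels, random non-negative compatibilities.
rng = np.random.default_rng(0)
print(relaxation_labeling(rng.random((3, 2, 3, 2)), np.full((3, 2), 0.5), n_iters=50))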
  • 17.06.2020
  • Visual Computing Seminar: Irit Chelly - BGU
  • JA-POLS: a Moving-camera Background Model via Joint Alignment and Partially-overlapping Local Subspaces
Abstract: Background models are widely used in computer vision. While successful Static-camera Background (SCB) models exist, Moving-camera Background (MCB) models are limited. Seemingly, there is a straightforward solution: 1) align the video frames; 2) learn an SCB model; 3) warp either original or previously-unseen frames toward the model. This approach, however, has drawbacks, especially when the accumulative camera motion is large and/or the video is long. Here we propose a purely-2D unsupervised modular method that systematically eliminates those issues. First, to estimate warps in the original video, we solve a joint-alignment problem while leveraging a certifiably-correct initialization. Next, we learn both multiple partially-overlapping local subspaces and how to predict alignments. Lastly, at test time, we warp a previously-unseen frame, based on the prediction, and project it onto a subset of those subspaces to obtain a background/foreground separation. We show that the method handles even large scenes with relatively free camera motion (provided the camera-to-scene distance does not change much) and that it not only yields state-of-the-art results on the original video but also generalizes gracefully to previously-unseen videos of the same scene. The talk is based on [Chelly et al., CVPR '20].
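To make the subspace-projection step above concrete, here is a minimal NumPy sketch of background/foreground separation for a single already-aligned frame using one global PCA subspace. JA-POLS itself uses joint alignment and multiple partially-overlapping local subspaces, so this is only a simplified illustration with hypothetical inputs and threshold.

import numpy as np

def fit_background_subspace(aligned_frames, k=10):
    # aligned_frames: (n_frames, H*W) vectorized, already-aligned grayscale frames.
    mean = aligned_frames.mean(axis=0)
    _, _, vt = np.linalg.svd(aligned_frames - mean, full_matrices=False)
    return mean, vt[:k]  # mean image and top-k principal directions

def separate(frame, mean, basis, thresh=0.1):
    # Project an aligned, vectorized frame onto the subspace; large residuals -> foreground.
    background = mean + basis.T @ (basis @ (frame - mean))
    foreground_mask = np.abs(frame - background) > thresh
    return background, foreground_mask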

Speaker's short bio:

Irit Chelly is a Computer Science PhD student at Ben-Gurion University under the supervision of Dr. Oren Freifeld at the Vision, Inference, and Learning group. Her current research focuses on unsupervised learning and video analysis. She is interested in probabilistic graphical models, spatial transformations, dimensionality reduction, and deep learning. Irit won the national-level Aloni PhD scholarship from Israel's Ministry of Technology and Science as well as the BGU Hi-tech scholarship for excellent PhD students.

  • 10.06.2020
  • iCVL Group
  • Monthly Reading Group
Abstract: In this meeting we will be discussing the following paper:
Simonyan, K., & Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.
  • 03.06.2020
  • iCVL Group
  • Monthly Research Status Meeting
Abstract: Each iCVL team member will give an update on the status of their actions from the past month, raise issues for discussion and brief consultation, and present their action items for the coming month.
  • 27.05.2020
  • iCVL Group
  • Monthly Reading Group
Abstract: AlexNet - Part 2/2: In this meeting we will continue last week's discussion: an introduction to the field of neural networks, with a focus on CNNs in particular, and a discussion of AlexNet, as presented in the following paper:
Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems (pp. 1097-1105).
  • 20.05.2020
  • iCVL Group
  • Monthly Reading Group
Abstract: AlexNet - Part 1/2: In this meeting we will give an introduction to the field of neural networks, with a focus on CNNs in particular. We will also discuss AlexNet, as presented in the following paper:
Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems (pp. 1097-1105).
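As a warm-up for the discussion, the snippet below sketches a toy convolutional network in PyTorch with the same conv/pool/fully-connected structure discussed in the paper; it is far smaller than the actual AlexNet, and the layer sizes here are arbitrary placeholders.

import torch
import torch.nn as nn

# Toy CNN in the spirit of AlexNet: stacked conv+pool blocks followed by
# fully-connected layers with dropout. Not the actual AlexNet architecture.
class TinyConvNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 8 * 8, 128), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(128, num_classes),
        )

    def forward(self, x):  # x: (batch, 3, 32, 32)
        return self.classifier(self.features(x))

logits = TinyConvNet()(torch.randn(4, 3, 32, 32))  # -> shape (4, 10)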
  • 13.05.2020
  • Ilan Git - BGU
  • Underwater Object Localization
Abstract: Ilan will present an overview of his thesis research on underwater object localization. He will explain the motivation behind the project, describe the current limitations in this research field, and propose some directions for solutions. Ilan will also give an introduction to the Simultaneous Localization and Mapping (SLAM) algorithm and present its connection to his research plan.
  • 22.04.2020
  • iCVL Group
  • Monthly Research Status Meeting
Abstract: Each iCVL team member will give an update on the status of their actions from the past month, raise issues for discussion and brief consultation, and present their action items for the coming month.
  • 01.04.2020
  • iCVL Group
  • Monthly Reading Group
Abstract: In this meeting we will be discussing the following paper:
Dalal, N., & Triggs, B. (2005, June). Histograms of oriented gradients for human detection. In 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05) (Vol. 1, pp. 886-893). IEEE.
  • 25.03.2020
  • iCVL Group
  • Monthly Research Status Meeting
Abstract: Each iCVL team member will give an update on the status of their actions from the past month, raise issues for discussion and brief consultation, and present their action items for the coming month.
  • 18.03.2020
  • Roy Toren - BGU
  • 3D Puzzle Solving with Aspects of Archaeology
Abstract: Roy will present an overview of his thesis on 3D puzzle solving with aspects of archaeology. He will explain the motivation behind the problem, existing solutions, aspects that could be improved, and the direction he will continue to explore in his thesis work.
  • 04.03.2020
  • Guy Amit - BGU
  • Neural Network Representation Control: Gaussian Isolation Machines and CVC Regularization
Abstract: In many cases, neural network classifiers are likely to be exposed to input data that lies outside their training distribution. Samples from outside the distribution may be classified as an existing class with high probability by softmax-based classifiers; such incorrect classifications affect the performance of the classifiers and the applications/systems that depend on them. Previous research aimed at distinguishing training-distribution data from out-of-distribution (OOD) data has proposed detectors that are external to the classification method. We present the Gaussian isolation machine (GIM), a novel hybrid (generative-discriminative) classifier aimed at solving the problem arising when OOD data is encountered. The GIM is based on a neural network and utilizes a new loss function that imposes a distribution on each of the trained classes in the neural network's output space, which can be approximated by a Gaussian. The proposed GIM's novelty lies in its discriminative performance and generative capabilities, a combination of characteristics not usually seen in a single classifier. The GIM achieves state-of-the-art classification results on image recognition and sentiment analysis benchmarking datasets and can also deal with OOD inputs. We also demonstrate the benefits of incorporating part of the GIM's loss function into standard neural networks as a regularization method.


The paper can be found on arXiv:
   https://arxiv.org/pdf/2002.02176.pdf  
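The GIM loss itself is described in the paper; as a rough illustration of the underlying idea of per-class Gaussians, here is a small sketch that fits one Gaussian per class to feature or output vectors and rejects inputs that are unlikely under every class as OOD. The feature vectors and the rejection threshold are hypothetical placeholders, not code or values from the paper.

import numpy as np
from scipy.stats import multivariate_normal

def fit_class_gaussians(features, labels):
    # features: (n, d) vectors (e.g., a network's output space); labels: (n,) class ids.
    gaussians = {}
    for c in np.unique(labels):
        fc = features[labels == c]
        gaussians[c] = multivariate_normal(fc.mean(axis=0),
                                           np.cov(fc, rowvar=False),
                                           allow_singular=True)
    return gaussians

def classify_or_reject(x, gaussians, log_density_floor=-50.0):
    # Pick the most likely class; flag the input as OOD if no class explains it well.
    log_probs = {c: g.logpdf(x) for c, g in gaussians.items()}
    best = max(log_probs, key=log_probs.get)
    return ('OOD', None) if log_probs[best] < log_density_floor else ('in-distribution', best)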
  • 26.02.2020
  • Assaf Arbelle - BGU
  • QANet - A Quality Assurance Network for Image Segmentation
Abstract: We introduce a novel Deep Learning framework, which quantitatively estimates image segmentation quality without the need for human inspection or labeling. We refer to this method as the Quality Assurance Network (QANet). Specifically, given an image, and a proposed corresponding segmentation obtained by any method including manual annotation, QANet solves a regression problem in order to estimate a predefined quality measure with respect to the unknown ground-truth. QANet is by no means yet another segmentation method. Instead, it performs a multi-level, multi-feature comparison of an image-segmentation pair based on a unique network architecture, called RibCage. To demonstrate the strength of QANet, we addressed the evaluation of instance segmentation using two different datasets from different domains, namely, high-throughput live-cell microscopy images from the Cell Segmentation Benchmark and natural images of plants from the Leaf Segmentation Challenge. While synthesized segmentations were used to train the QANet, it was tested on segmentations obtained by publicly available methods that participated in the different challenges. We show that the QANet accurately estimates the scores of the evaluated segmentations with respect to the hidden ground-truth, as published by the challenges’ organizers.
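As a small point of reference for what such a quality measure looks like, here is a sketch of the Jaccard/IoU score between a proposed mask and a ground-truth mask; a quality-assurance regressor in the spirit of QANet is trained on pairs where this target is known (e.g., synthesized segmentations) and then predicts it for new pairs without access to the ground truth. The function below is a generic illustration, not code from the paper.

import numpy as np

def jaccard(seg, gt):
    # Jaccard/IoU between two binary masks; the kind of quality measure a
    # segmentation-quality regressor can be trained to predict.
    seg, gt = seg.astype(bool), gt.astype(bool)
    union = np.logical_or(seg, gt).sum()
    return float(np.logical_and(seg, gt).sum() / union) if union else 1.0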
  • 19.02.2020
  • iCVL Group
  • Monthly Reading Group
Abstract: In this meeting we will be discussing the following paper:
Viola, P., & Jones, M. (2001). Rapid object detection using a boosted cascade of simple features. In Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2001) (Vol. 1, pp. 511-518).
  • 12.02.2020
  • iCVL Group
  • Monthly Research Status Meeting
Abstract: Each iCVL team member will give an update on the status of their actions from the past month, raise issues for discussion and brief consultation, and present their action items for the coming month.
  • 22.01.2020
  • Visual Computing Seminar: Oran Shayer - BMW
  • Enhancing Generic Segmentation with Learned Region Representations
Abstract: Current successful approaches for generic (non-semantic) segmentation rely mostly on edge detection and have leveraged the strengths of deep learning mainly by improving the edge detection stage in the algorithmic pipeline. This is in contrast to semantic and instance segmentation, where deep learning has had a dramatic impact and DNNs are applied directly to generate pixel-wise segment representations. We propose a new method for learning a pixel-wise region representation that reflects segment relatedness. This representation is combined with an edge map to yield a new segmentation algorithm. We show that the representations themselves achieve state-of-the-art segment similarity scores. Moreover, the proposed, combined segmentation algorithm provides results that either match or improve the state of the art for most quality measures.
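As a toy illustration of what "segment relatedness" from a pixel-wise representation can mean, the sketch below scores two pixels by the cosine similarity of their learned embedding vectors; the (H, W, D) embedding tensor is a hypothetical stand-in for the representation learned by the method.

import numpy as np

def pixel_relatedness(embeddings, p, q):
    # embeddings: (H, W, D) per-pixel representation vectors (hypothetical input);
    # p, q: (row, col) pixel coordinates. Returns cosine similarity in [-1, 1].
    a, b = embeddings[p], embeddings[q]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))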


Bio:
Oran holds a BSc and an MSc in EE from the Technion, majoring in machine learning, computer vision, and deep learning. Over the last 10 years, Oran has worked at companies such as Apple, Intel, and GM, and has also gained experience in the startup scene working for Clair Labs. He currently holds a position as a machine learning researcher at BMW.
  • 15.01.2020
  • Roy Uziel & Meitar Ronen - BGU
  • Bayesian Adaptive Superpixel Segmentation
Abstract: Roy and Meitar will present their ICCV 2019 paper, Bayesian Adaptive Superpixel Segmentation: https://www.cs.bgu.ac.il/~orenfr/BASS/Uziel_ICCV_2019.pdf

Superpixels provide a useful intermediate image representation. Existing superpixel methods, however, suffer from at least some of the following drawbacks: 1) topology is handled heuristically; 2) the number of superpixels is either predefined or estimated at a prohibitive cost; 3) lack of adaptiveness. As a remedy, we propose a novel probabilistic model, self-coined Bayesian Adaptive Superpixel Segmentation (BASS), together with an efficient inference. BASS is a Bayesian nonparametric mixture model that also respects topology and favors spatial coherence. The optimization-based, topology-aware inference is parallelizable and implemented on GPU. Quantitatively, BASS achieves results that are either better than the state of the art or close to it, depending on the performance index and/or dataset. Qualitatively, we argue it achieves the best results; we demonstrate this not only by subjective visual inspection but also by an objective quantitative performance evaluation of the downstream application of face detection.
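BASS itself couples a Bayesian nonparametric mixture with topology constraints and an efficient GPU inference; as a much simpler illustration of the "number of clusters inferred from the data" idea, here is a sketch that clusters pixels in a joint position+color space with scikit-learn's truncated Dirichlet-process Gaussian mixture. It ignores topology entirely and is only a rough analogy, not the BASS algorithm.

import numpy as np
from sklearn.mixture import BayesianGaussianMixture

def dp_pixel_clusters(image, max_components=50):
    # image: (H, W, 3) float array. Cluster pixels in normalized (x, y, r, g, b) space
    # with a truncated Dirichlet-process GMM; the effective number of clusters is
    # inferred from the data rather than fixed in advance.
    h, w, _ = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    feats = np.column_stack([xs.ravel() / w, ys.ravel() / h, image.reshape(-1, 3)])
    dpgmm = BayesianGaussianMixture(
        n_components=max_components,
        weight_concentration_prior_type='dirichlet_process',
    ).fit(feats)
    return dpgmm.predict(feats).reshape(h, w)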
  • 08.01.2020
  • iCVL Group
  • Monthly Reading Group
Abstract: In this meeting we will be discussing the following paper: Lowe, D. G. (1999, September). Object recognition from local scale-invariant features. In Proceedings of the Seventh IEEE International Conference on Computer Vision (ICCV) (Vol. 2, pp. 1150-1157).
  • 01.01.2020
  • iCVL Group
  • Monthly Research Status Meeting
Abstract: Each iCVL team member will give an update on the status of their actions from the past month, raise issues for discussion and brief consultation, and present their action items for the coming month.
2019
2017
2016
2015
2014
2013
2012
2011