Meetings

iCVL Meetings

The iCVL maintains its own weekly seminar series and reading group, where we either host guests to discuss their research or have one of our own members lead the study of a topic or a specific paper of interest from the literature. Our meetings run as two-hour round-table discussions (currently on Wednesdays, 10:00-12:00) and enjoy a significantly higher level of interaction than most seminars. We would be delighted to host you to learn about your work too; please contact the lab’s director or our seminar coordinator.

While group meetings started in 2007, organized listings for the website began in February 2012, shortly before its launch in July 2012. The forthcoming meeting is automatically highlighted and centered below; please scroll up and down for future and past meetings.

2020
  • 22.01.2020
  • VisualComputing Seminar
  • TBA
Abstract: TBA
  • 15.01.2020
  • Seminar Slot
  • TBA
Abstract: TBA
  • 08.01.2020
  • iCVL Group
  • Monthly Reading Group
Abstract: In this meeting we will be discussing the following paper: Lowe, D. G. (1999). Object recognition from local scale-invariant features. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Vol. 2, pp. 1150-1157.
  • 01.01.2020
  • iCVL Group
  • Monthly Research Status Meeting
Abstract: Each iCVL team member will give an update on the status of their actions from the past month, raise issues for discussion and brief consultation, and present their action items for the coming month.
2019
  • 25.12.2019
  • VisualComputing Seminar
  • TBA
Abstract: TBA
  • 18.12.2019
  • VisualComputing Seminar
  • TBA
Abstract: TBA
  • 11.12.2019
  • Seminar Slot
  • TBA
Abstract: TBA
  • 04.12.2019
  • iCVL Group
  • Monthly Research Status Meeting
Abstract: Each iCVL team member will give an update on the status of their actions from the past month, raise issues for discussion and brief consultation, and present their action items for the coming month.
  • 27.11.2019
  • Ehud Barnea - Trax
  • TBA
Abstract: TBA
  • 20.11.2019
  • iCVL Group
  • Monthly Reading Group
Abstract: In this meeting we will be discussing the following paper: Shi, J., & Tomasi, C. (1994). Good features to track. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 593-600.
  • 06.11.2019
  • Matan Shaked - BGU
  • Natural Image Statistics and Reconstruction
Abstract: TBA
  • 23.10.2019
  • iCVL Group
  • Monthly Research Status Meeting
Abstract: Each iCVL team member will give an update on the status of their actions from the past month, raise issues for discussion and brief consultation, and present their action items for the coming month.
  • 21.08.2019
  • Keren Berger - BGU
  • Introduction to Color Spaces
Abstract: TBA
  • 14.08.2019
  • iCVL Group
  • Monthly Research Status Meeting
Abstract: Each iCVL team member will give an update on the status of their actions from the past month, raise issues for discussion and brief consultation, and present their action items for the coming month.
  • 07.08.2019
  • Ben Vardi - BGU
  • Puzzle Solving with Relaxation Labeling
Abstract: We will review the fundamental principles of the Relaxation Labeling problem and algorithm, and see how the puzzle problem may be formulated in this framework. (A minimal sketch of the update rule appears below.)
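For intuition, here is a minimal, hypothetical sketch of one classic relaxation labeling update (Rosenfeld-Hummel-Zucker style); the array shapes and the compatibility tensor are illustrative assumptions, not necessarily the formulation Ben will present:

    import numpy as np

    def relaxation_labeling(p, r, iters=50):
        # p[i, l]: confidence that object i (e.g., a puzzle slot) has label l
        # (e.g., a specific piece); each row of p is a distribution over labels.
        # r[i, l, j, m]: compatibility in [-1, 1] of assignment (i=l) with (j=m).
        for _ in range(iters):
            q = np.einsum('iljm,jm->il', r, p)   # support each label receives
            p = p * (1.0 + q)                    # reward well-supported labels
            p /= p.sum(axis=1, keepdims=True)    # renormalize per object
        return p

In the puzzle setting, compatibilities would typically come from how well the boundaries of two pieces match when placed in adjacent slots.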
  • 10.07.2019
  • iCVL Group
  • Monthly Research Status Meeting
Abstract: Each iCVL team member will give an update on the status of their actions from the past month, raise issues for discussion and brief consultation, and present their action items for the coming month.
  • 03.07.2019
  • Rotem Mairon - BGU
  • TBA
Abstract: TBA
  • 19.06.2019
  • VisualComputing Seminar: Rana Hanocka - TAU
  • MeshCNN: A Network with an Edge
Abstract: Polygonal meshes provide an efficient representation for 3D shapes. They explicitly capture both shape surface and topology, and leverage non-uniformity to represent large flat regions as well as sharp, intricate features. This non-uniformity and irregularity, however, inhibits mesh analysis efforts using neural networks that combine convolution and pooling operations. In this talk, I discuss how we utilize the unique properties of the mesh for a direct analysis of 3D shapes using MeshCNN, a convolutional neural network designed specifically for triangular meshes. Analogous to classic CNNs, MeshCNN combines specialized convolution and pooling layers that operate on the mesh edges by leveraging their intrinsic geodesic connections. Convolutions are applied on edges and the four edges of their incident triangles, and pooling is applied via an edge collapse operation that retains surface topology, thereby generating new mesh connectivity for the subsequent convolutions. MeshCNN learns which edges to collapse, thus forming a task-driven process where the network exposes and expands the important features while discarding the redundant ones. We demonstrate the effectiveness of MeshCNN on various learning tasks applied to 3D meshes. (A toy sketch of the edge convolution appears after the bio below.)

Project page: https://ranahanocka.github.io/MeshCNN/

The speaker's short bio: Rana Hanocka is a Ph.D. candidate under the supervision of Daniel Cohen-Or and Raja Giryes at Tel Aviv University. Her research is focused on using deep learning for understanding 3D shapes.
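As a rough illustration of the edge convolution described in the abstract, here is a toy sketch (our own simplification, not the authors' code); the symmetric combinations of the four neighboring edges are what make the operation invariant to their ordering:

    import torch

    def mesh_edge_conv(x, nbrs, weight):
        # x: (E, C) per-edge features; nbrs: (E, 4) indices of the four edges
        # (a, b, c, d) of the two triangles incident to each edge;
        # weight: (5*C, C_out) linear weights.
        a, b, c, d = (x[nbrs[:, k]] for k in range(4))
        # Symmetric features: invariant to swapping (a, c) or (b, d).
        feats = torch.cat([x, a + c, (a - c).abs(), b + d, (b - d).abs()], dim=1)
        return feats @ weight

In the full method, pooling then collapses edges chosen by the network and rebuilds the neighbor lists for the subsequent layers.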
  • 12.06.2019
  • Rotem Mairon - BGU
  • TBA
Abstract: TBA
  • 05.06.2019
  • VisualComputing Seminar: Prof. Ayellet Tal - Technion
  • Past Forward: When Computer Graphics and Archaeology Meet
Abstract: Technology is the symbol of our age. Nevertheless, some fields have been left out of the revolution. One of these is archaeology, where many tasks are still performed manually - from the initial excavations, through documentation, to restoration. It turns out that some of these activities are classical computer graphics (and/or computer vision) tasks, such as puzzle solving, shape completion, matching, and edge detection. The objects, however, are much harder to deal with than usual, since they are broken and eroded after lying underground for thousands of years. Therefore, being able to handle these objects benefits not only archaeology, but also computer graphics. In this talk I will describe some of the algorithms we have developed to replace manual restoration and show some results.
  • 22.05.2019
  • Beba Cibralic - Georgetown University
  • Autonomous Weapons - Ethical Challenges and Opportunities
Abstract: Advancements in sensor recognition, processing speeds, and machine learning have helped create machines that are increasingly capable of performing complex tasks without human involvement. As machines develop the capacity to operate without human control, new societal questions arise. One salient concern is that this technology will be used to develop fully autonomous weapons, "killer robots", which have the ability to select targets without human engagement. In this discussion, we will examine (i) whether existing paradigms for regulating warfare can accommodate the introduction of fully autonomous weapons systems. We will also explore (ii) the relationship between autonomy and responsibility, as well as (iii) the ethical benefits of using semi-autonomous weapons. We will conclude by considering (iv) whether a preemptive ban on developing this technology is needed.
  • 15.05.2019
  • Peleg Harel - BGU
  • Solving Archaeological Puzzles
Abstract: The paper Peleg will present in this meeting proposes a fully automatic, general algorithm for solving archaeological puzzles.
  • 01.05.2019
  • iCVL Group
  • Monthly Research Status Meeting
Abstract: Each iCVL team member will give an update on the status of their actions from the past month, raise issues for discussion and brief consultation, and present their action items for the coming month.
  • 10.04.2019
  • VisualComputing Seminar: Dr. Derya Akkaynak - Princeton University
  • Sea-thru: Towards A Robust Method to Remove Water From Underwater Images
Abstract: Very large underwater image datasets are generated every day that capture important information regarding the state of our oceans (e.g., coral reef coverage, fish abundance, condition of vulnerable seafloor habitats, etc.). While large image datasets taken on land can be efficiently analyzed with a plethora of computer vision and machine learning algorithms, underwater datasets do not benefit from the full power of these methods because water degrades images too severely for automated analysis. In contrast to images taken in air, path radiance (backscatter) in underwater images cannot be neglected, and object radiance diminishes quickly even across short distances from the camera. Researchers aiming to restore lost colors and contrast in underwater images are frequently faced with unstable results: available methods are either not robust, are too sensitive, or only work for short object ranges. Consequently, the analysis of most underwater imagery requires costly manual effort; on average, a human expert spends over 2 hours identifying and counting fish in a video that is one hour long.

In this talk, bridging optical oceanography and underwater computer vision, I will show that a fundamental reason for the lack of a robust color reconstruction method is a fairly simple one: the underwater image formation equation used by the computer vision community for the past 30 years is actually a simplification of the radiative transfer equation for horizontal imaging in the atmosphere. Then, based on the physically accurate equation I recently proposed and validated, I will introduce the Sea-thru algorithm, which successfully removes water from underwater images, revealing the underwater world in a way we have never seen before. Finally, I will discuss the potential of leveraging high-resolution (and free) ocean color data from Sentinel 3A/B satellites to boost underwater computer vision algorithms. (A schematic sketch of the two formation models appears after the bio below.)

The speaker's short bio: Dr. Derya Akkaynak is a mechanical engineer and oceanographer (PhD MIT & Woods Hole Oceanographic Institution ‘14) whose research focuses on problems in underwater imaging and computer vision. In addition to using off-the-shelf RGB cameras for scientific data acquisition underwater, she also uses hyperspectral sensors to investigate how the world appears to non-human animals. Derya has professional, technical, and scientific diving certifications and has conducted fieldwork in the Bering Sea, Red Sea, Antarctica, Caribbean, Northern and Southern Pacific and Atlantic, and her native Aegean.
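To make the distinction in the abstract concrete, here is a minimal sketch contrasting the simplified formation model with the revised one; the symbols and the inversion shown are illustrative placeholders, not the Sea-thru algorithm itself:

    import numpy as np

    # Simplified model (common in the literature): one coefficient per channel c.
    #   I_c = J_c * exp(-beta_c * z) + B_c * (1 - exp(-beta_c * z))
    # Revised model (as described in the talk): the direct-signal and
    # backscatter terms take *different* coefficients:
    #   I_c = J_c * exp(-beta_D_c * z) + B_inf_c * (1 - exp(-beta_B_c * z))

    def restore_simplified(I, B, beta, z):
        # Invert the *simplified* model for the scene radiance J.
        # I, B: (H, W, 3) image and backscatter estimate; beta: (3,);
        # z: (H, W) range map. All inputs here are hypothetical.
        t = np.exp(-beta[None, None, :] * z[..., None])  # per-channel transmission
        return (I - B * (1.0 - t)) / np.clip(t, 1e-6, None)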
  • 03.04.2019
  • iCVL Group
  • Monthly Research Status Meeting
Abstract: Each iCVL team member will give an update on the status of their actions from the past month, raise issues for discussion and brief consultation, and present their action items for the coming month.
  • 27.03.2019
  • VisualComputing Seminar: Majeed Kassis - BGU
  • Alignment and Line Detection of Manuscripts: One Learning-Based and One Learning-Free Algorithm
Abstract: Handling manuscripts, in contrast to modern text, is a much more challenging task. Due to the nature and history of these documents, they exhibit several unique characteristics, such as multi-skewed text lines, varying inter-line distances, touching text lines, and multi-size text, all in addition to the deteriorating condition of the manuscript caused by ageing, handling, and storage over the centuries. In this talk I wish to present two of my latest works. The first, based on a deep learning model, tackles the text alignment problem for historical documents. This is a widely known problem in manuscript analysis: to this day, finding the differences between manuscript editions is done by hand, and most computational tools that assist historians are based on word recognition systems. I will present a Siamese neural network based system that automatically identifies whether a pair of images contains the same text, without recognizing the text itself. The user annotates several pages of the two manuscripts they wish to align, and with the model's assistance we are able to align the two manuscripts. The algorithm is robust to differences in writing style between the manuscripts, as well as to the text's condition and quality. The second work, submitted recently, tackles the line detection problem in a learning-free manner. The vast majority of manuscript line detection algorithms are learning-based, forcing the user to annotate training data before the line detection algorithm can be applied. I will present a learning-free system for line detection in manuscripts, based on a Document Graph generated automatically for the document: we apply a distance transform to the image, extract the image skeleton, and build a graph from the skeleton's vertices and edges. After several iterative, automatic steps, the graph edges are merged to form the document lines. Applied to the recently released DIVA-HisDB dataset, the system achieves a line detection (Line IU) accuracy of 85.92%. (A schematic sketch of this pipeline appears below.)
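As a rough sketch of the learning-free pipeline described above (distance transform, skeleton, document graph), assuming a binarized page image; the function names are our illustrative choices, not the speaker's implementation:

    import numpy as np
    from scipy import ndimage
    from skimage.morphology import skeletonize

    def document_graph_nodes(binary_page):
        # binary_page: 2D boolean array, True on ink pixels (an assumption).
        dist = ndimage.distance_transform_edt(binary_page)   # distance transform
        skel = skeletonize(binary_page)                      # 1-pixel-wide skeleton
        # Degree of each skeleton pixel = number of 8-connected skeleton neighbors.
        deg = ndimage.convolve(skel.astype(int), np.ones((3, 3), int),
                               mode='constant') - skel
        # Graph nodes: endpoints (degree 1) and junctions (degree >= 3).
        nodes = np.argwhere(skel & ((deg == 1) | (deg >= 3)))
        return dist, skel, nodes

Merging the skeleton edges between such nodes into text lines is where the iterative, automatic steps of the talk's system would come in.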
  • 13.03.2019
  • iCVL Group
  • Monthly Research Status Meeting
Abstract: Each iCVL team member will give an update on the status of their actions from the past month, raise issues for discussion and brief consultation, and present their action items for the coming month.
  • 06.03.2019
  • Keren Berger - BGU
  • Retinal Cone Mosaic and Individual Authentication - 3
Abstract: TBA
  • 27.02.2019
  • VisualComputing Seminar: Dr. Jonathan Laserson - Zebra Medical Vision
  • Embrace The Noise: Mining Clinical Reports to Gain a Broad Understanding of Chest X-rays
Abstract: The chest X-ray scan is by far the most commonly performed radiological examination for screening and diagnosis of many cardiac and pulmonary diseases. It is also one of the hardest to interpret, with a disagreement rate of around 30% even among experienced radiologists. At Zebra, we have access to millions of X-ray scans, along with their accompanying anonymized textual reports written by hospital radiologists. Can this data be used to teach an algorithm to identify significant clinical findings in these scans? By manually tagging a relatively small set of sentences, we were able to construct a training set of almost 1M studies covering the 40 most prevalent chest X-ray pathologies. A deep learning model was trained to predict the findings given the patient's frontal and lateral scans. We compared the model's predictions to those made by a team of radiologists. Would the average radiologist agree more with his/her colleagues or with the model?

The speaker's short bio: Dr. Jonathan Laserson is the lead AI researcher at Zebra Medical Vision. He did his master's and undergraduate studies at the Technion and holds a PhD from the Computer Science AI lab at Stanford University. After a few years doing machine learning at Google and IBM, he now focuses on deep learning algorithms and their application to medical image understanding.
  • 30.01.2019
  • Peleg Harel - BGU
  • The Jigsaw Puzzle Problem
Abstract: In this meeting Peleg will present an overview of his thesis topic, "The Jigsaw Puzzle Problem". He will present the motivation behind the jigsaw puzzle problem, some past solutions, and the directions his thesis work will explore for solving it.
  • 23.01.2019
  • Rotem Mairon - BGU
  • Quantifying the Center Bias in Eye-Movements during Scene Viewing
Abstract: TBA
  • 16.01.2019
  • Keren Berger - BGU
  • Retinal Cone Mosaic and Individual Authentication - 2
Abstract: In this meeting we will continue the discussion on the retinal cone mosaic and its possible use as a biometric method.
  • 10.01.2019
  • iCVL Group
  • Hands-On Experience in Writing Paper Reviews
Abstract: This meeting will be dedicated to discussing papers received for peer review and to jointly writing the reviews. Having all lab members participate in the process lets them gain experience in this professional skill.
  • 02.01.2019
  • Keren Berger - BGU
  • Retinal Cone Mosaic and Individual Authentication - 1
Abstract: The meeting will begin with Keren reviewing the literature on the retinal cone mosaic and presenting its possible use as a biometric method, which we will then open for group discussion.