SIPL Annual Event 2018 (July 2nd, 2018)

Gathering & Poster Session




Prof. David Malah, Head of SIPL

Improving Training Efficiency in Deep Learning


Prof. Daniel Soudry


I will present several empirical and theoretical results on improving training efficiency in deep networks, based on several of our recent works.

Wilk Family Awards and Outstanding Supervisor Awards Ceremony


Unsynchronized Acoustic Indoor Positioning


Finalist in the Kasher undergraduate project contest in the Faculty of Electrical Engineering

Guy Feferman, Michal Blatt

Supervisor: Alon Eilam

In cooperation with: Sonarax

ICASSP 2018 Demo

Break & Poster Session


Review of Teaching Activity in SIPL


Nimrod Peleg

Detection and Localization of Cumulonimbus Clouds in Satellite Images


Wilk Family Award winner

Etai Wagner, Ron Dorfman

Supervisor: Almog Lahav

In cooperation with: Rafael

Submitted to ICSEE 2018

Local-to-Global Point Cloud Registration using a Viewpoint Dictionary


David Avidar, M.Sc. student

Advisors: Prof. David Malah, Dr. Meir Bar-Zohar

Partly funded by the OMEK consortium

Presented at ICCV 2017


Local-to-global point cloud registration is a challenging task due to the substantial differences between these two types of data and the different techniques used to acquire them. Global clouds cover large-scale environments and are usually acquired aerially (e.g., using Airborne Laser Scanning – ALS), while local clouds are often acquired from ground level at a much smaller range (e.g., using Terrestrial Laser Scanning – TLS). As a result of these differences, existing point cloud registration approaches, such as keypoint-based registration, tend to fail.

We propose a novel registration method based on converting the global cloud into a viewpoint-based dictionary. We associate each viewpoint with a panoramic range-image capturing the geometry of the visible environment. Plausible local-to-global transformations can then be found via a dictionary search. We show that an efficient dictionary search can be performed using phase correlation between panoramic range-images.
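The phase-correlation step can be sketched as follows. This is a minimal illustration, not the authors' code: it assumes equally sized panoramic range-images, where a circular column shift corresponds to a rotation of the sensor's azimuth.

```python
import numpy as np

def phase_correlation(ref, query):
    """Estimate the circular (row, col) shift mapping `ref` onto `query`.

    For panoramic range-images, the correlation peak both scores the match
    against a dictionary entry and recovers the relative heading.
    """
    # Cross-power spectrum, normalized so only phase information remains.
    cross = np.fft.fft2(query) * np.conj(np.fft.fft2(ref))
    cross /= np.abs(cross) + 1e-12
    corr = np.real(np.fft.ifft2(cross))
    shift = np.unravel_index(np.argmax(corr), corr.shape)
    return shift, corr.max()  # peak location and match score

# Toy check: a synthetic "range-image" rolled by a known circular shift.
rng = np.random.default_rng(0)
ref = rng.random((32, 64))
query = np.roll(ref, shift=(5, 12), axis=(0, 1))
shift, score = phase_correlation(ref, query)
```

In a dictionary search, the peak height would rank candidate viewpoints, while the peak location gives the azimuth alignment of the best match.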

We demonstrate that the proposed viewpoint-dictionary-based registration method achieves better performance than state-of-the-art keypoint-based methods (e.g., FPFH, RoPS), even without any GPS measurements. For the evaluation, we used a challenging dataset of 108 TLS local clouds and an ALS large-scale global cloud, covering a 1 km² urban environment.

The Perception-Distortion Tradeoff


Yochai Blau, Ph.D. student

Advisor: Prof. Tomer Michaeli

Presented at CVPR 2018


Image restoration algorithms are typically evaluated either by some distortion measure (e.g., PSNR, SSIM, IFC, VIF) or by human opinion scores that quantify perceived quality. In this work, we prove mathematically that distortion and perceptual quality are at odds with each other. Specifically, we study the optimal probability of correctly discriminating the outputs of an image restoration algorithm from real images. We show that as the mean distortion decreases, this probability must increase (indicating worse perceptual quality). Contrary to common belief, this result holds for any distortion measure and is not only a problem of the PSNR or SSIM criteria. However, as we show experimentally, the tradeoff is less severe for some measures (e.g., the distance between VGG features).

We also show that generative adversarial networks (GANs) provide a principled way to approach the perception-distortion bound. This constitutes theoretical support for their observed success in low-level vision tasks. Based on our analysis, we propose a new methodology for evaluating image restoration methods, and use it to perform an extensive comparison between recent super-resolution algorithms. Our study reveals which methods are currently closest to the theoretical perception-distortion bound.
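The tradeoff can be stated compactly. In the paper's formulation, with $X$ the source image, $Y$ its degraded observation, and $\hat{X}$ the restored estimate, the perception-distortion function is

$$
P(D) \;=\; \min_{p_{\hat{X}\mid Y}} \; d\big(p_X,\, p_{\hat{X}}\big)
\quad \text{s.t.} \quad \mathbb{E}\big[\Delta(X, \hat{X})\big] \le D,
$$

where $\Delta(\cdot,\cdot)$ is the distortion measure and $d(\cdot,\cdot)$ a divergence between distributions (perceptual quality). The tradeoff is expressed by $P(D)$ being non-increasing: tightening the distortion budget $D$ can only push the output distribution farther from the distribution of natural images.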