Gathering & Poster Session
13:30-14:00
Welcome
14:00-14:10
Prof. David Malah, Head of SIPL
Review of 40 Years of Research at SIPL
14:10-14:25
Prof. David Malah, Head of SIPL
Wilk Family Award Ceremony
14:25-14:40
SOS Boosting of Image Denoising Algorithms
14:40-15:05
Prof. Michael Elad
Abstract:
We present a generic recursive algorithm for improving general image denoising methods. Given the initial denoised image, we suggest repeating the following “SOS” procedure:
(i) [S]trengthen by adding the previous denoised image to the degraded input image,
(ii) [O]perate the denoising method on the strengthened image, and
(iii) [S]ubtract the previous denoised image from the restored one.
On the applicative side, we demonstrate the effectiveness of this algorithm for several leading denoising methods (K-SVD, NLM, BM3D, and EPLL), showing a tendency to further improve their performance. On the theoretical front, we provide several key results that expose the nature of this process and the reasons for its success:
(1) We provide a study of the convergence of this process for the K-SVD denoising and related algorithms;
(2) Still in the context of the K-SVD, we present an intriguing relation to the effort to close the gap between patch-based modeling and global restoration;
(3) We show that the SOS step emerges as an optimal denoiser when considering a graph-Laplacian regularization; and
(4) We derive conditions for a guaranteed improvement by the SOS algorithm.
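The three-step SOS procedure above can be sketched around any black-box denoiser. This is a minimal illustration, not the authors' implementation; the function names and the iteration count are ours:

```python
import numpy as np

def sos_boost(denoise, y, iterations=5):
    """SOS boosting of a black-box image denoiser.

    denoise    : callable mapping a noisy image to a denoised one
    y          : the degraded (noisy) input image
    iterations : number of SOS repetitions (our choice; not specified here)
    """
    x = denoise(y)                     # initial denoised image
    for _ in range(iterations):
        strengthened = y + x           # (i)   [S]trengthen
        restored = denoise(strengthened)  # (ii)  [O]perate
        x = restored - x               # (iii) [S]ubtract
    return x
```

Note that the identity map is a fixed point of this recursion: if `denoise` returns its input unchanged, each iteration reproduces the initial estimate.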
Overview of SIPL Activity on HEVC Video Encoding
15:05-15:20
Yair Moshe
History of SIPL
15:20-15:40
Yoram Or-Chen
Break & Poster Session
15:40-16:10
Review of 40 Years of Teaching at SIPL
16:10-16:25
Nimrod Peleg
Distance Estimation of Marine Vehicles
16:25-16:45
Wilk Family Award winners
Ran Gladstone, Avihai Barel
Supervisor: Yair Moshe
Abstract:
The goal of this project is to estimate the distance to floating objects, such as boats and personal watercraft (water scooters), from video of a maritime environment, for Rafael’s Protector USV. We propose a novel and efficient algorithm for this task. The algorithm receives as input a video of a marine environment and, for every video frame, the location of a pixel on or near the object of interest whose distance we want to estimate. For every frame, the algorithm identifies the horizon line, which serves as a reference whose distance from the camera can be calculated from the environmental conditions. The algorithm then identifies the contour of the object or of its wake and chooses the point farthest from the horizon line. We show that this distance, measured in pixels, can be translated into meters using the environmental conditions, the height of the CCD camera, and its specifications. The algorithm has been tested on a number of videos of marine environments taken under various environmental conditions and with different floating objects.
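The pixel-to-meters step rests on standard camera/horizon geometry rather than on any formula given in the abstract, so the following is only a textbook-style sketch: it assumes a pinhole camera at known height above a locally flat sea, with the focal length expressed in pixels; the function name and parameters are ours:

```python
import math

EARTH_RADIUS_M = 6.371e6  # mean Earth radius, meters

def distance_from_horizon_offset(pixel_offset, cam_height_m, focal_px):
    """Rough distance to a sea-surface point seen below the horizon.

    pixel_offset : vertical pixel distance below the horizon line (>= 0)
    cam_height_m : camera height above the water surface, in meters
    focal_px     : camera focal length expressed in pixels
    """
    # Depression angle of the geometric horizon below the horizontal
    # (small-angle approximation for a camera near the surface).
    horizon_dip = math.sqrt(2.0 * cam_height_m / EARTH_RADIUS_M)
    # Additional depression angle subtended by the pixel offset.
    extra = math.atan2(pixel_offset, focal_px)
    # Flat-surface triangle: tan(total depression) = height / distance.
    # (Underestimates very close to the horizon, where curvature matters.)
    return cam_height_m / math.tan(horizon_dip + extra)
```

As expected, points appearing farther below the horizon line map to shorter distances from the camera.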
Geometry Learning for Multimodal Signal Processing
16:45-17:10
Prof. Ronen Talmon
Abstract:
In this work, we consider the case of multiple, multimodal sensors measuring the same physical phenomenon, such that the properties of the physical phenomenon are manifested as a hidden common source (which we would like to extract), while each sensor has its own sensor-specific effects. We address the problem from a manifold learning standpoint and present a method for extracting the common source from multimodal recordings. The generality of the addressed problem sets the stage for applying the developed method to many real signal processing problems, where different types of devices are typically used to measure the same activity. In particular, we show an application to sleep stage assessment and demonstrate that, using our method, the sleep information hidden in multimodal respiratory signals can be captured. Joint work with Roy Lederman.
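The abstract does not spell out the construction, but one manifold-learning approach in this spirit (alternating diffusion, studied by Talmon and Lederman) composes a diffusion operator per sensor, so that diffusion averages out sensor-specific directions while coordinates tied to the common source survive. A minimal sketch, with helper names and parameters of our own choosing:

```python
import numpy as np

def diffusion_kernel(x, eps):
    """Row-stochastic Gaussian diffusion kernel for samples x (n x d)."""
    sq_dists = np.sum((x[:, None, :] - x[None, :, :]) ** 2, axis=-1)
    k = np.exp(-sq_dists / eps)
    return k / k.sum(axis=1, keepdims=True)

def common_embedding(x1, x2, eps1, eps2, dim=2):
    """Embed the variable common to two synchronized sensor recordings.

    x1, x2 : (n x d1), (n x d2) samples from the two modalities,
             row i of each taken at the same time instant.
    """
    # Composing the two single-sensor operators diffuses along
    # sensor-specific directions; the common variable remains.
    k = diffusion_kernel(x1, eps1) @ diffusion_kernel(x2, eps2)
    vals, vecs = np.linalg.eig(k)          # composed operator spectrum
    order = np.argsort(-np.abs(vals))
    # Skip the trivial constant eigenvector; keep the next `dim` ones.
    return np.real(vecs[:, order[1:dim + 1]])
```

In the sleep-staging setting described above, `x1` and `x2` would be feature vectors from two respiratory sensors, and the embedding coordinates would track the shared physiological state.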
Audio-Visual Voice Activity Detection Using Manifold Learning
17:10-17:30
David Dov, Ph.D. student
Advisor: Prof. Israel Cohen
Abstract:
The performance of traditional methods for separating speech from non-speech segments deteriorates significantly in the presence of transient interferences. We tackle this problem by incorporating a video signal, i.e., a video of the mouth region of the speaker, which is invariant to the acoustic environment. We propose a representation of the audio-visual signals that is based on manifold learning and is particularly suitable for merging data captured by different types of sensors. We exploit the proposed representation for voice activity detection, demonstrating improved performance compared to competing detectors.