Interactive Demo System for 3D Depth Data Processing

Depth cameras enable capabilities beyond those of 2D cameras and are useful in areas such as navigation and mapping for robotics and autonomous vehicles, virtual reality, and more. However, the 3D data they produce, called a “point cloud”, which describes the location of each scanned point in space, poses new algorithmic challenges for processing it accurately and quickly.
In this project, we present an interactive system that demonstrates depth cameras’ capabilities in the form of a game, using four static short-range Intel RealSense SR300 cameras to achieve complete spatial coverage of relatively small objects.
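As a minimal sketch only (not the project's actual code), capturing one depth frame per connected RealSense camera with the librealsense Python bindings could look as follows; the stream resolution, frame rate, and per-serial pipeline handling are assumptions.

    # Sketch: grab one depth frame from each connected RealSense camera.
    # pyrealsense2 usage and stream settings are illustrative assumptions.
    import numpy as np
    import pyrealsense2 as rs

    ctx = rs.context()
    serials = [dev.get_info(rs.camera_info.serial_number)
               for dev in ctx.query_devices()]

    clouds = []
    for serial in serials:
        pipeline = rs.pipeline(ctx)
        config = rs.config()
        config.enable_device(serial)                      # bind pipeline to one camera
        config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
        pipeline.start(config)
        try:
            frames = pipeline.wait_for_frames()
            depth = frames.get_depth_frame()
            pc = rs.pointcloud()
            points = pc.calculate(depth)                  # deproject depth to 3D vertices
            xyz = np.asanyarray(points.get_vertices()).view(np.float32).reshape(-1, 3)
            clouds.append(xyz[np.any(xyz != 0, axis=1)])  # drop invalid (zero) points
        finally:
            pipeline.stop()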
In the game, a player builds a given object, aiming to reproduce its ideal model as faithfully as possible. Our task is to remove the background and additional noise from each of the four partial scans of the built object, merge them into the complete object, compare it to the ideal model, and assign the player a score for accuracy, presented on a dedicated graphical user interface.
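A hedged sketch of the per-scan cleaning and of a simple distance-based score, written here with Open3D (whether the project used this library is an assumption); the workspace bounds, outlier-removal parameters, and scoring tolerance are illustrative values only.

    # Sketch of background removal, noise filtering, and a simple accuracy score.
    # Open3D usage, workspace bounds, and thresholds are assumptions for illustration.
    import numpy as np
    import open3d as o3d

    def clean_scan(pcd: o3d.geometry.PointCloud) -> o3d.geometry.PointCloud:
        # Keep only points inside the known build area (crude background segmentation).
        workspace = o3d.geometry.AxisAlignedBoundingBox(
            min_bound=(-0.15, -0.15, 0.0), max_bound=(0.15, 0.15, 0.30))
        pcd = pcd.crop(workspace)
        # Statistical outlier removal to suppress depth-sensor noise.
        pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
        return pcd

    def score_against_model(built: o3d.geometry.PointCloud,
                            ideal: o3d.geometry.PointCloud,
                            tolerance: float = 0.005) -> float:
        # Fraction of built points lying within `tolerance` metres of the ideal model.
        distances = np.asarray(built.compute_point_cloud_distance(ideal))
        return float((distances < tolerance).mean()) * 100.0  # score in percent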
The main project objectives are handling multiple cameras and point clouds, and understanding and running existing algorithms alongside developing solutions and algorithms of our own for several problems: segmentation, i.e. isolating the object from its surroundings and filtering noise, and registration, i.e. fitting and merging the partial point clouds into the complete object (a registration sketch follows below). Because the project serves as a real-time demo system, it must be as accurate, efficient, general, and robust as possible.
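For the registration step specifically, a minimal pairwise alignment and merge using point-to-plane ICP could look like the sketch below. Open3D is again an assumed library choice, the voxel size and ICP threshold are placeholders, and the identity initialization assumes the scans were already roughly pre-aligned (e.g., from camera extrinsics).

    # Sketch of merging cleaned partial scans via pairwise ICP registration.
    # Library choice (Open3D), voxel size, and ICP threshold are illustrative assumptions.
    import numpy as np
    import open3d as o3d

    def merge_scans(scans, voxel_size=0.002, icp_threshold=0.01):
        merged = scans[0]
        for scan in scans[1:]:
            src = scan.voxel_down_sample(voxel_size)
            tgt = merged.voxel_down_sample(voxel_size)
            for pcd in (src, tgt):
                # Point-to-plane ICP needs normals on the target; estimate on both for safety.
                pcd.estimate_normals(
                    o3d.geometry.KDTreeSearchParamHybrid(radius=5 * voxel_size, max_nn=30))
            result = o3d.pipelines.registration.registration_icp(
                src, tgt, icp_threshold,
                np.eye(4),  # placeholder init; real system would use camera extrinsics
                o3d.pipelines.registration.TransformationEstimationPointToPlane())
            merged += scan.transform(result.transformation)  # bring scan into the common frame
        return merged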
