Kinect-based Room Recognition Using 3D Point Clouds

The goal of this project is to create an affordable room-recognition system that quickly scans a partial room scene in 3D (using a Kinect or lidar sensor) and searches for a match in a database of rooms stored in the system. The main motivation is to develop a framework that can serve as a platform for indoor navigation and orientation of robots or other autonomous platforms. Additionally, the framework can assist human navigation in an unfamiliar indoor environment, without the use of GPS or inertial sensors. The system performs registration between a local scan and the rooms stored in the database, and makes the recognition decision by comparing the registration RMSE to a given threshold. The registration process consists of keypoint detection, descriptor extraction, filtering of the best keypoint matches to compute an initial transformation, and registration refinement using the Iterative Closest Point (ICP) algorithm. This report presents two approaches for registering different but overlapping point clouds. The first approach, which uses curvature-based keypoint detection and FPFH descriptors, could not produce good keypoints, causing the system to fail. The second approach, which detects keypoints at the intersections of plane triplets, did produce quality keypoints, increasing the chances of computing a reliable initial transformation. In our experience, a good initial transformation all but guarantees a successful registration.
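The decision step described above, ICP refinement followed by an RMSE threshold test, can be sketched as follows. This is a minimal illustrative sketch in NumPy, not the report's actual implementation (which operates on full Kinect point clouds and includes the keypoint-based initial alignment); the function names, the brute-force nearest-neighbour search, and the threshold value are assumptions made for the example.

```python
import numpy as np

def best_fit_transform(A, B):
    """Least-squares rigid transform (Kabsch/SVD) mapping point set A onto B."""
    cA, cB = A.mean(axis=0), B.mean(axis=0)
    H = (A - cA).T @ (B - cB)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cB - R @ cA
    return R, t

def icp_rmse(scan, model, iters=30):
    """Refine the alignment of `scan` onto `model` with ICP; return final RMSE.

    Assumes the clouds are already roughly aligned (here: the role of the
    keypoint-based initial transformation). Brute-force nearest neighbours
    are fine for toy clouds; a real system would use a k-d tree.
    """
    src = scan.copy()
    for _ in range(iters):
        d = np.linalg.norm(src[:, None, :] - model[None, :, :], axis=2)
        nn = model[d.argmin(axis=1)]          # closest model point per scan point
        R, t = best_fit_transform(src, nn)
        src = src @ R.T + t
    d = np.linalg.norm(src[:, None, :] - model[None, :, :], axis=2)
    return np.sqrt((d.min(axis=1) ** 2).mean())

def recognize(scan, room_db, threshold=0.05):
    """Return the name of the best-matching room, or None if no RMSE beats the threshold."""
    best_name, best_rmse = None, np.inf
    for name, model in room_db.items():
        rmse = icp_rmse(scan, model)
        if rmse < best_rmse:
            best_name, best_rmse = name, rmse
    return best_name if best_rmse < threshold else None
```

A cloud that is a slightly rotated and translated copy of a stored room converges to a near-zero RMSE and is recognized; a scan of an unknown room leaves the RMSE above the threshold and `recognize` returns `None`.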
