In this project, an automatic algorithm was developed to classify the points of a 3D point cloud into three classes: “Ground”, “Buildings”, and “Others”. The input to the algorithm is a 3D point cloud captured by a LiDAR scanner mounted on a car or an airplane. The point clouds are assumed to be registered, i.e., given in a common coordinate system. The goal is to assign a class to each (x, y, z) point. The algorithm has three main steps: ground detection, partial “building part” detection, and finally, “complete building” detection.
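The three-stage flow above can be sketched as a minimal pipeline skeleton. This is only an illustration: the label codes, function name, and the crude height thresholds standing in for the real detectors are all hypothetical, not part of the project's actual method.

```python
import numpy as np

# Hypothetical numeric label codes; the class names come from the project.
GROUND, BUILDING, OTHER = 0, 1, 2

def classify(points, ground_z=0.5, min_building_z=3.0):
    """Skeleton of the three-stage pipeline over an N x 3 array of
    registered (x, y, z) points. The simple height thresholds here are
    placeholders for the real ground and building detectors."""
    z = points[:, 2]
    labels = np.full(len(points), OTHER)
    labels[z <= ground_z] = GROUND          # stage 1: ground detection
    labels[z >= min_building_z] = BUILDING  # stages 2-3: building detection
    return labels
```

In the real algorithm, each stage replaces a threshold test with the corresponding detector (flood-like ground detection, region growing, graph-based completion) while keeping the same per-point labeling interface.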
The ground detection step is based on a flood-like process, developed as part of the project supervisor’s research, applied to a height-map representation of the 3D point cloud. After ground detection, a partial “building part” detection step follows. This step uses a region-growing algorithm to detect building “patches”; nearby patches are merged based on normal vectors and inter-cluster distances, and a merged cluster is labeled as part of a building depending on its area. The final step adds “building points” missed in the previous step, using graph-based methods, under the assumption that nearby, strongly connected points belong to the same building.
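The details of the supervisor's flood-like process are not given here, but a generic flood fill over a height map can illustrate the idea: starting from a seed cell assumed to lie on the ground, the ground region grows to neighboring cells whose height differs by at most a tolerance. The function name, the 4-connectivity, and the `dz_max` tolerance are assumptions for this sketch.

```python
from collections import deque
import numpy as np

def flood_fill_ground(height_map, seed, dz_max=0.3):
    """Grow a ground mask over a 2D height map by BFS: a 4-connected
    neighbour joins the region if its height differs from the current
    cell's by at most dz_max (a smoothness tolerance)."""
    h, w = height_map.shape
    ground = np.zeros((h, w), dtype=bool)
    ground[seed] = True
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and not ground[nr, nc]:
                if abs(height_map[nr, nc] - height_map[r, c]) <= dz_max:
                    ground[nr, nc] = True
                    queue.append((nr, nc))
    return ground
```

On a flat map with a raised block, the flood started from a corner covers the flat cells and stops at the block's height discontinuity, which is the behavior a ground detector needs before the building stages run on the remaining points.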
The algorithm was tested on three point cloud datasets, achieving an average overall accuracy of 91% on GeoSim, 94% on Semantic3d, and 94% on Paris iQmulus/TerraMobilita, for an overall average of 94% across the datasets.
We compared our results to SnapNet, a deep-learning algorithm, which achieved an average overall accuracy of 93.8% on the Semantic3d dataset.