Vehicle localization using 3D building models and Point Cloud matching

Abstract— Detecting buildings in the surroundings of an urban vehicle and matching them to models available on map services is an emerging trend in robot localization for urban vehicles. In this paper we present a novel technique that improves on a previous work in this area by taking advantage of detected building façade positions and their correspondence with the 3D models available in OpenStreetMap (OSM). The proposed technique uses segmented point clouds produced from stereo images processed by a Convolutional Neural Network. The point clouds of the façades are then matched against a reference point cloud, produced by extruding the buildings' outlines available on OSM. In order to produce a lane-level localization of the vehicle, the resulting information is fed into our probabilistic framework, called Road Layout Estimation (RLE). We prove the effectiveness of this proposal by testing it on sequences from the well-known KITTI dataset and comparing the results with respect to a basic RLE version that does not include the proposed pipeline.
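
As a rough illustration of the geometric step described in the abstract, the following Python sketch (not the paper's implementation; the footprint coordinates, wall height, sampling step, and the use of NumPy/SciPy are assumptions made here for illustration) extrudes a 2D building outline into a wall point cloud and aligns a detected façade cloud to it with a basic point-to-point ICP. The resulting rigid transform is the kind of pose correction that could then be fed to a localization filter such as RLE.

# Minimal sketch, assuming footprint coordinates are already in a local metric frame
# (a real pipeline would first convert OSM lat/lon outlines to such a frame).
import numpy as np
from scipy.spatial import cKDTree

def extrude_footprint(footprint_xy, height=10.0, step=0.5):
    """Sample points on the vertical walls spanned by a closed 2D footprint polygon."""
    pts = []
    n = len(footprint_xy)
    for i in range(n):
        a, b = np.asarray(footprint_xy[i], float), np.asarray(footprint_xy[(i + 1) % n], float)
        length = np.linalg.norm(b - a)
        for t in np.arange(0.0, length, step) / max(length, 1e-9):
            xy = a + t * (b - a)
            for z in np.arange(0.0, height, step):
                pts.append([xy[0], xy[1], z])
    return np.asarray(pts)

def icp_point_to_point(source, target, iterations=20):
    """Very small point-to-point ICP: returns a 4x4 transform aligning source to target."""
    tree = cKDTree(target)
    src = source.copy()
    T_total = np.eye(4)
    for _ in range(iterations):
        _, idx = tree.query(src)                  # nearest reference point for each source point
        tgt = target[idx]
        mu_s, mu_t = src.mean(axis=0), tgt.mean(axis=0)
        H = (src - mu_s).T @ (tgt - mu_t)         # cross-covariance of the matched pairs
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                  # guard against reflections
            Vt[-1, :] *= -1
            R = Vt.T @ U.T
        t = mu_t - R @ mu_s
        src = src @ R.T + t                       # apply the incremental transform
        T_step = np.eye(4)
        T_step[:3, :3], T_step[:3, 3] = R, t
        T_total = T_step @ T_total
    return T_total

# Hypothetical usage: a rectangular footprint stands in for an OSM building outline,
# and a shifted slice of one wall stands in for the facade segmented by the CNN/stereo step.
footprint = [(0, 0), (20, 0), (20, 8), (0, 8)]
reference = extrude_footprint(footprint)
detected = reference[reference[:, 1] < 0.1] + np.array([0.7, -0.4, 0.0])
correction = icp_point_to_point(detected, reference)
print(correction)  # rigid pose correction that could feed the localization framework

Note that a single planar façade constrains the alignment only weakly along the wall direction; in practice, observing several façades (or fusing the correction with odometry inside the probabilistic framework) resolves this ambiguity.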