Return to main Robotic Perception Research page
With respect to extra-urban autonomous driving, the specific aspect of the robotic research involved is the need to localize the vehicle despite the absence of a GPS signal, or despite its insufficient accuracy. Theoretically, GPS might be accurate enough, in particular in its differential RTK variant, but it does need a given number of satellites simultaneously in view of both the vehicle antenna and the antenna of a fixed base station. This is not easy to obtain, not only because of the temporary lack of enough satellites in view, but also because dense clouds, vegetation, and buildings reduce satellite visibility, thereby degrading the accuracy or fully disrupting the service.
The challenging robotics tasks we are tackling deal with building very large maps of the city areas that the vehicle will traverse, a computationally intensive activity that can be executed offline. This map is then used to localize the vehicle online. We also need to leverage existing commercial maps (e.g., OpenStreetMap, Here, etc.) by matching the map data against the data from the vehicle sensors. In general, the whole scene around the vehicle has to be interpreted in terms of human-understandable objects. The vehicle also requires, as is standard in autonomous driving, a dynamically built local world model, perhaps purely in terms of obstacles, in order to control the vehicle: control steering, throttle, and brakes, and detect and track obstacles, people, animals, etc.
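As an illustration of the map-matching idea, here is a minimal sketch of snapping a pose estimate to the nearest segment of a road polyline. It assumes the road geometry has already been extracted from the map into local metric coordinates; the coordinates and helper names are hypothetical, not part of our actual pipeline.

```python
import math

def project_on_segment(p, a, b):
    """Project point p onto segment a-b; return (projected point, distance)."""
    ax, ay = a
    bx, by = b
    px, py = p
    dx, dy = bx - ax, by - ay
    seg_len2 = dx * dx + dy * dy
    if seg_len2 == 0.0:
        q = a  # degenerate segment: both endpoints coincide
    else:
        # clamp the projection parameter to stay within the segment
        t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len2))
        q = (ax + t * dx, ay + t * dy)
    return q, math.hypot(px - q[0], py - q[1])

def match_to_road(pose, polyline):
    """Snap a 2D position estimate to the closest point on a road polyline."""
    best = None
    for a, b in zip(polyline, polyline[1:]):
        q, d = project_on_segment(pose, a, b)
        if best is None or d < best[1]:
            best = (q, d)
    return best

# hypothetical road (local metric coordinates) and a noisy position estimate
road = [(0.0, 0.0), (10.0, 0.0), (10.0, 10.0)]
snapped, dist = match_to_road((4.0, 1.5), road)
```

A real system would, of course, match whole trajectories against many candidate roads rather than a single point against one polyline, but the point-to-segment projection above is the basic geometric building block.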
This set of tasks, i.e., offline global world modeling, online localization, online local world modeling, and control, is the same set of tasks on which the well-known team at Google, a.k.a. the Google Car team, is working.
Our vehicle is an ordinary golf cart, which has been transformed so that it can be controlled directly by a computer (i.e., it has been robotized).
The list of our demos and public events is available on the dedicated Public demonstrations of our research achievements page.
For further information on upcoming events, please also have a look at our Facebook page. Images from our vehicle are also available on our media server.
- Fusion of data coming from dead-reckoning and GPS sensors
- Robust odometry for the USAD vehicle
- Evaluation / Experimentation of mapping and localization systems in the university campus
- Implementation of a global planner operating with OSM for the USAD vehicle
- Implementation of a motion planning algorithm for the Ackermann kinematics USAD vehicle
- A pedestrian detection module for the USAD vehicle
- Dynamic Obstacles in the ROS Navigation Stack
- Dynamic Obstacle Detection and Avoidance for Autonomous Driving Vehicles
- Curb detection in urban environments using LiDAR pointclouds
- Integration in the navigation stack of USAD vehicle of a vision-based tool for lane detection
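As an illustration of the first topic in the list above, here is a minimal sketch of fusing a dead-reckoning prediction with a GPS fix through a one-dimensional Kalman update. All values and variances are hypothetical; a real vehicle would run a full EKF over the 2D pose, not a scalar filter.

```python
def fuse(est, est_var, meas, meas_var):
    """One Kalman update step: fuse a 1D prediction with a 1D measurement."""
    k = est_var / (est_var + meas_var)        # Kalman gain
    return est + k * (meas - est), (1.0 - k) * est_var

# dead-reckoning prediction of distance along the road (metres);
# its variance grows as wheel-odometry errors accumulate
x, var = 100.0, 4.0

# GPS fix, assumed here to be equally noisy (e.g. partial satellite visibility)
gps, gps_var = 102.0, 4.0

x, var = fuse(x, var, gps, gps_var)
# with equal variances the estimate lands halfway between the two inputs,
# and the variance is halved: x == 101.0, var == 2.0
```

The gain automatically weights whichever source is currently more reliable: as satellite visibility degrades and `gps_var` grows, the update trusts dead reckoning more, which is exactly the behaviour needed in the extra-urban scenario described above.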