An important property of any intelligent robotic system is the ability to build a model (or map) of its environment and to reason with this data. At the same time, the robot needs to be aware of its own state (position) within this model. In robotics, this is known as the simultaneous localization and mapping (SLAM) problem, and it is traditionally tackled using (expensive) laser scanning equipment.
We investigate a SLAM approach based on vision (camera images). What differentiates our work from similar studies is that our algorithm is designed to handle large outdoor scenes, and that we incorporate GPS data to produce geo-referenced results.
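To illustrate the idea of combining vision-based pose estimates with GPS, the sketch below shows a minimal 2D fusion loop: dead-reckoned visual-odometry increments accumulate drift, and sparse GPS fixes pull the estimate back toward geo-referenced coordinates via a simple Kalman-style update. This is only an illustrative sketch under simplifying assumptions (scalar position uncertainty, no orientation, no full SLAM back-end); the function name `fuse_pose` and its noise parameters are hypothetical, not part of the actual system described here.

```python
import numpy as np

def fuse_pose(odom_poses, gps_fixes, gps_var=0.5, odom_var=0.05):
    """Illustrative sketch: refine dead-reckoned visual-odometry poses
    with sparse GPS fixes using a per-step Kalman-style update.

    odom_poses: list of (x, y) positions from visual odometry (drifting)
    gps_fixes:  dict mapping time step -> (x, y) GPS position fix
    """
    est = np.array(odom_poses[0], dtype=float)  # current fused estimate
    var = 0.0                                   # scalar position variance
    out = [est.copy()]
    for k in range(1, len(odom_poses)):
        # Predict: apply the visual-odometry increment; uncertainty grows.
        delta = np.array(odom_poses[k]) - np.array(odom_poses[k - 1])
        est = est + delta
        var = var + odom_var
        # Correct: when a GPS fix is available, blend it in.
        fix = gps_fixes.get(k)
        if fix is not None:
            gain = var / (var + gps_var)        # Kalman gain (scalar)
            est = est + gain * (np.array(fix) - est)
            var = (1.0 - gain) * var            # uncertainty shrinks
        out.append(est.copy())
    return out
```

With drifting odometry and occasional GPS fixes, the fused trajectory ends closer to the true (geo-referenced) position than raw odometry alone; a real system would instead fuse these measurements inside the full SLAM estimator.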
Video Results (check also our YouTube channel):
Geo-referenced localization of the Robudem robot:
Visual Simultaneous Localization and Mapping:
To be updated