The research topics below describe our work in more detail:

3D Reconstruction from monocular image sequences

The goal of this research is to retrieve high-quality 3D models using nothing more than a series of ordinary camera images, thus turning an everyday camera into a 3D imaging device. The resulting 3D models can be used for terrain traversability estimation, 3D scene interpretation, and more. Our research focuses on an approach that estimates the scene geometry and the camera motion automatically by formulating the dense 3D reconstruction problem in a variational context. The differentiating factor of our work with respect to similar studies is that our algorithm is designed to be very robust, working even with natural image sequences shot with a hand-held or robot-mounted camera.
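
For illustration only, the sketch below shows the kind of photometric data term such a variational formulation minimises for a single depth hypothesis; a full method optimises a dense depth map by combining this term with a smoothness regulariser. The image pair, intrinsics K and relative pose (R, t) are hypothetical placeholders, not our actual pipeline.

    import numpy as np

    def photometric_cost(ref_img, src_img, depth, K, R, t):
        """Per-pixel photometric error for one depth hypothesis (data term).
        ref_img, src_img: HxW grayscale arrays; K: intrinsics; (R, t): pose
        of the source camera relative to the reference camera."""
        H, W = ref_img.shape
        u, v = np.meshgrid(np.arange(W), np.arange(H))
        pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T

        pts = np.linalg.inv(K) @ pix * depth          # back-project at the hypothesised depth
        proj = K @ (R @ pts + t.reshape(3, 1))        # re-project into the source view
        x = (proj[0] / proj[2]).reshape(H, W)
        y = (proj[1] / proj[2]).reshape(H, W)

        # Nearest-neighbour sampling (a real system would use bilinear interpolation)
        xi = np.clip(np.round(x).astype(int), 0, W - 1)
        yi = np.clip(np.round(y).astype(int), 0, H - 1)
        return np.abs(ref_img - src_img[yi, xi])

    # A variational method minimises, over a dense depth map d:
    #   E(d) = sum over pixels of photometric_cost(..., d) + lambda * |gradient of d|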

3D Reconstruction from stereo image sequences

Whereas the use of monocular (single-camera) image sequences is preferable for general consumer applications, these approaches (ours included) currently still lack the speed and accuracy required for robotic applications. Therefore, we also conduct research into extending the monocular reconstruction framework towards binocular (stereo) sequences. The result is 3D reconstructions of higher quality, estimated within a time frame that allows the reconstructed 3D model to be exploited by a mobile robot.
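
As a minimal sketch (not our actual framework), the snippet below shows how a dense depth map can be obtained from a rectified stereo pair with off-the-shelf semi-global matching in OpenCV; the file names, matcher parameters, and calibration values (focal length f, baseline B) are placeholders.

    import cv2
    import numpy as np

    # left.png / right.png are placeholder names for a rectified stereo pair
    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    # Semi-global block matching; parameter values are indicative only
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
    disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # SGBM output is fixed-point

    # With focal length f (pixels) and baseline B (metres), depth follows from triangulation
    f, B = 700.0, 0.12          # hypothetical calibration values
    valid = disparity > 0
    depth = np.zeros_like(disparity)
    depth[valid] = f * B / disparity[valid]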

Behavior-based Robot Control

The control architecture of a robotic system defines how all the robot's capabilities (perception, modeling, planning, acting) are integrated into a coherent structure. Our research focuses on the use of behavior-based control strategies. In the behavior-based context, a complex control problem is split into a number of simpler sub-problems, called behaviors, whose outputs are then fused. We investigate novel ways to tackle the behavior fusion problem and aim to define a generic control architecture suitable for both single-agent and multi-agent systems. Our research aims to evolve the concept of "human robot control" towards a social interaction between humans and robots.
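
The toy sketch below illustrates the general idea of behavior fusion rather than our specific fusion scheme: each behavior proposes a steering command, and a weighted combination yields the command sent to the actuators. The behaviors, weights, and signal conventions are invented for illustration.

    def goal_seeking(robot_heading, goal_heading):
        """Steer towards the goal; returns (turn_rate, speed)."""
        return goal_heading - robot_heading, 1.0

    def obstacle_avoidance(obstacle_distance, obstacle_bearing):
        """Steer away from the nearest obstacle, slowing down when it is close."""
        turn = -obstacle_bearing / max(obstacle_distance, 0.1)
        return turn, min(obstacle_distance, 1.0)

    def fuse(behaviors_and_weights):
        """Weighted-average fusion of behavior outputs into a single command."""
        total = sum(w for _, w in behaviors_and_weights)
        turn = sum(w * b[0] for b, w in behaviors_and_weights) / total
        speed = sum(w * b[1] for b, w in behaviors_and_weights) / total
        return turn, speed

    # Example: obstacle avoidance is weighted more heavily than goal seeking
    command = fuse([(goal_seeking(0.0, 0.8), 0.4),
                    (obstacle_avoidance(0.5, -0.3), 0.6)])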

Centralized Multi-Robot Collaboration

Making multiple robots collaborate on the same task requires careful consideration of all aspects of robot control. In a centralized context, all robots are controlled by a central command station. In this field of research, we investigate innovative and efficient area search algorithms, optimal robot synchronisation strategies, and more.
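
As a toy illustration of what a centralized strategy can look like (not one of our actual search algorithms), the sketch below lets a central command station split a rectangular search area into one strip per robot and generate a simple lawnmower sweep for each; the area bounds, robot names, and sweep spacing are hypothetical.

    def partition_and_sweep(x_min, x_max, y_min, y_max, robots, spacing=5.0):
        """Split a rectangular area into one vertical strip per robot and
        generate a boustrophedon (lawnmower) waypoint list for each strip."""
        strip = (x_max - x_min) / len(robots)
        plans = {}
        for i, robot in enumerate(robots):
            x0 = x_min + i * strip
            x, waypoints, going_up = x0, [], True
            while x <= x0 + strip:
                ys = (y_min, y_max) if going_up else (y_max, y_min)
                waypoints += [(x, ys[0]), (x, ys[1])]
                x += spacing
                going_up = not going_up
            plans[robot] = waypoints
        return plans

    # The central command station computes all plans and dispatches one per robot
    plans = partition_and_sweep(0, 90, 0, 60, ["robot_a", "robot_b", "robot_c"])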

Decentralized Multi-Robot Collaboration

Making multiple robots collaborate on the same task requires careful consideration of all aspects of robot control. In a decentralized context, there is no central command station, so all agents act as individual autonomous entities, much as animals and humans do in nature. In this field of research, we investigate innovative and efficient area search algorithms, optimal robot synchronisation strategies, emergent behavior, and more. An additional challenge compared to centralized multi-robot collaboration is, of course, the absence of a single command centre, which makes it more difficult to control and coordinate the robots' actions.

Geo-referenced Visual Simultaneous Localization and Mapping

An important property of any intelligent robotic system is that it should be able to build some kind of model (or map) of the environment in order to reason with this data. At the same time, the robot needs to be well aware of its own state (position) within this model. This is referred to in robotics as the simultaneous localization and mapping (SLAM) problem and is commonly tackled using (expensive) laser scanning equipment. We investigate a SLAM approach based on vision (camera images). The differentiating factor of our work with respect to similar studies is that our algorithm is designed to work even with large outdoor scenes and that we also incorporate GPS data to provide geo-referenced results.
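
The sketch below is a deliberately simplified illustration (not our SLAM algorithm) of how GPS fixes can anchor a visually estimated trajectory in a geo-referenced frame; real systems pose this as a joint graph optimisation rather than a per-pose weighted average, and all values here are hypothetical.

    import numpy as np

    def geo_reference(slam_positions, gps_fixes, slam_weight=0.8):
        """Blend visual-SLAM positions (already expressed in a world-aligned
        frame) with synchronised GPS fixes by a per-pose weighted average.
        The weighting only illustrates how GPS pins the visual trajectory
        to geographic coordinates."""
        slam_positions = np.asarray(slam_positions, dtype=float)
        gps_fixes = np.asarray(gps_fixes, dtype=float)
        return slam_weight * slam_positions + (1.0 - slam_weight) * gps_fixes

    # Hypothetical 2D trajectories (easting/northing in metres)
    visual = [(0.0, 0.0), (1.1, 0.2), (2.3, 0.3)]
    gps    = [(0.2, 0.1), (1.0, 0.0), (2.0, 0.4)]
    fused  = geo_reference(visual, gps)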

Robot Path Planning and Navigation

Autonomous navigation is a requirement for any intelligent robot. At the Unmanned Vehicle Centre, we mainly investigate fuzzy-logic based approaches to path planning and navigation for autonomous ground vehicles.
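
To give a flavour of fuzzy-logic control (this is a toy speed rule base, not our planner), the sketch below maps the distance to the nearest obstacle to a speed command through two fuzzy rules; the membership functions and output levels are invented for illustration.

    def triangular(x, a, b, c):
        """Triangular membership function with peak at b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def fuzzy_speed(obstacle_distance):
        """Two rules: IF distance is NEAR THEN speed is SLOW,
                      IF distance is FAR  THEN speed is FAST.
        Defuzzification by weighted average of the rule outputs."""
        near = triangular(obstacle_distance, 0.0, 0.5, 3.0)
        far  = triangular(obstacle_distance, 1.0, 4.0, 6.0)
        slow, fast = 0.2, 1.0                    # crisp output levels (m/s)
        if near + far == 0.0:
            return fast
        return (near * slow + far * fast) / (near + far)

    print(fuzzy_speed(0.5), fuzzy_speed(4.0))    # near obstacle -> slow, clear -> fast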

Modular Software Architectures for Robotic Systems

Intelligent robotic systems rely on embedded software algorithms that incorporate the robot's intelligence. Making all these algorithms work together in a coherent way is the task of the software architecture. At the Unmanned Vehicle Centre, the CoRoBa architecture (Controlling Robots with CORBA) is being developed. The differentiating factor of our work with respect to similar approaches is that CoRoBa is a completely distributed architecture, built on open-source software (CORBA / ACE / TAO), designed to work cross-platform (Windows / Linux), and able to accept multiple communication protocols.

Human Detection

An important task for unmanned robotic systems acting as search and rescue robots is the automated detection of human beings in camera images. The Unmanned Vehicle Centre has therefore been working on human detection algorithms, mainly using visual camera data.
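
Not the Centre's own detector, but a minimal sketch of a classical visual human detector using the stock HOG pedestrian model that ships with OpenCV; the input file name is a placeholder.

    import cv2

    # Stock HOG + linear-SVM pedestrian detector shipped with OpenCV
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

    frame = cv2.imread("frame.png")               # placeholder input image
    boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8), scale=1.05)

    for (x, y, w, h) in boxes:                    # one rectangle per detection
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)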

Terrain Traversability Estimation

An autonomous outdoor ground robot needs to assess the traversability of the terrain in order to decide on its trajectory. Automatic terrain traversability estimation is no easy task, as it depends on many parameters: vegetation, slope, robot mobility, and more. The Unmanned Vehicle Centre investigates two main approaches to terrain traversability estimation. A first (real-time) approach uses high-quality stereo depth maps and classifies the terrain as traversable or not based on the analysis of the so-called v-disparity image. A second approach employs the full 3D model of the reconstructed terrain (monocular or binocular) to decide on terrain traversability.
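
As a hedged sketch of the v-disparity idea mentioned above (not our implementation), the snippet below builds the v-disparity image from a disparity map; the ground plane then appears as a dominant slanted line against which obstacle pixels can be separated. The input disparity map here is random placeholder data.

    import numpy as np

    def v_disparity(disparity_map, num_bins=64):
        """Build the v-disparity image: one row per image row, one column per
        disparity bin, counting how often each disparity occurs on that row."""
        H, _ = disparity_map.shape
        d_max = disparity_map.max() + 1e-6
        vdisp = np.zeros((H, num_bins), dtype=np.int32)
        bins = np.clip((disparity_map / d_max * num_bins).astype(int), 0, num_bins - 1)
        for v in range(H):
            vdisp[v] = np.bincount(bins[v], minlength=num_bins)
        return vdisp

    # The disparity map would normally come from the stereo matcher; this input is synthetic
    vdisp = v_disparity(np.random.rand(240, 320).astype(np.float32) * 64)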

3D Scene Recognition and Interpretation

Whenever we humans recognize an environment we have seen before, we do so based on the 3D model of that environment that we previously stored in our heads. This stands in contrast to most recognition approaches used in the computer vision community, where recognition is performed on 2D data (a camera image). In this context, we investigate novel approaches to the recognition of scenes and places using 3D data (from stereo or from monocular or binocular reconstruction). In doing so, we hope to make a robot understand the environment it is operating in.

UGV-UAV collaboration

Where multi-robot collaboration is concerned, mostly homogeneous teams of robots are considered, meaning that all robots are alike. At the Unmanned Vehicle Centre, we also investigate heterogeneous collaboration, meaning different types of robots working together. An example is an Unmanned Air Vehicle (UAV) taking off from an Unmanned Ground Vehicle (UGV), monitoring an area in front of the UGV and then landing on the UGV, all autonomously.

Robotic aids for humanitarian demining

The Belgian army has long-standing experience with humanitarian demining. The Unmanned Vehicle Centre is also active in this field, investigating ways to automate the tedious process of searching for landmines. To this end, we develop robotic tools which, when equipped with the right sensing equipment, can scan a suspected minefield semi-autonomously.

Design of walking robots

Common designs for unmanned ground vehicles use wheels for locomotion. This stands in contrast to the biological example set by humans and many animals, which have a legged design. In this context, we investigate the design of walking robots. Walking robots have interesting properties compared to their wheeled cousins, notably in the area of demining, where their reduced footprint and leg redundancy work to their advantage. On the other hand, multi-legged walking robots are notoriously hard to control, which is why we also investigate neuro-fuzzy control methodologies for these kinds of robots.