Mobiniss

One of the goals of this study is to evaluate and develop telecontrol and autonomous navigation capabilities for a single vehicle. The work is demonstrated on different mobile platforms (indoor and outdoor). To avoid restricting the developed tools to specific machines, particular attention is paid to the modularity of the structure, and requirements for multi-user and multi-platform systems are considered throughout the project. The different capabilities are integrated in a single framework based on CORBA middleware. Another goal is to follow the evolution of Advanced Electrical Combat Vehicles (AECV) and Multi Robot Systems for military applications by participating in a NATO Working Group. Finally, other research on specialised mobility in unstructured terrain includes the study and development of alternative propulsion systems such as legged robots.

The developments focused on the following topics:

  • Implementation of exteroceptive sensors (ultrasound and laser)
  • Implementation of a stereo-vision system
  • Implementation of SLAM techniques
  • Implementation of multi-robot coordination techniques

Project Publications

2013

  • G. De Cubber and H. Sahli, “Augmented Lagrangian-based approach for dense three-dimensional structure and motion estimation from binocular image sequences," IET Computer Vision, 2013.
    [BibTeX] [Abstract] [Download PDF] [DOI]

    In this study, the authors propose a framework for stereo–motion integration for dense depth estimation. They formulate the stereo–motion depth reconstruction problem into a constrained minimisation one. A sequential unconstrained minimisation technique, namely, the augmented Lagrange multiplier (ALM) method has been implemented to address the resulting constrained optimisation problem. ALM has been chosen because of its relative insensitivity to whether the initial design points for a pseudo-objective function are feasible or not. The development of the method and results from solving the stereo–motion integration problem are presented. Although the authors' work is not the only one adopting the ALM framework in the computer vision context, to their knowledge the presented algorithm is the first to use this mathematical framework in the context of stereo–motion integration. This study describes how the stereo–motion integration problem was cast in a mathematical context and solved using the presented ALM method. Results on benchmark and real visual input data show the validity of the approach.
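As a toy illustration of the augmented Lagrange multiplier idea discussed above (a generic equality-constrained sketch, not the paper's actual stereo–motion formulation; all names, step sizes and the toy problem are hypothetical):

```python
def alm_minimize(f_grad, c, c_grad, x, rho=10.0, lam=0.0,
                 outer=20, inner=200, step=0.01):
    """ALM loop: minimize f(x) subject to c(x) = 0 by alternating an
    unconstrained inner minimisation with a multiplier update."""
    for _ in range(outer):
        for _ in range(inner):  # gradient descent on the augmented Lagrangian
            cv = c(x)
            grad = [fg + (lam + rho * cv) * cg
                    for fg, cg in zip(f_grad(x), c_grad(x))]
            x = [xi - step * gi for xi, gi in zip(x, grad)]
        lam += rho * c(x)  # first-order multiplier update
    return x

# toy problem: min x1^2 + x2^2  subject to  x1 + x2 = 1  (solution: (0.5, 0.5))
f_grad = lambda x: [2.0 * x[0], 2.0 * x[1]]
c = lambda x: x[0] + x[1] - 1.0
c_grad = lambda x: [1.0, 1.0]
sol = alm_minimize(f_grad, c, c_grad, [0.0, 0.0])
```

With each outer iteration the constraint violation shrinks and the multiplier approaches its optimal value (here λ → −1), which is why ALM tolerates infeasible starting points.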

    @Article{de2013augmented,
    author = {De Cubber, Geert and Sahli, Hichem},
    journal = {IET Computer Vision},
    title = {Augmented Lagrangian-based approach for dense three-dimensional structure and motion estimation from binocular image sequences},
    year = {2013},
    abstract = {In this study, the authors propose a framework for stereo–motion integration for dense depth estimation. They formulate the stereo–motion depth reconstruction problem into a constrained minimisation one. A sequential unconstrained minimisation technique, namely, the augmented Lagrange multiplier (ALM) method has been implemented to address the resulting constrained optimisation problem. ALM has been chosen because of its relative insensitivity to whether the initial design points for a pseudo-objective function are feasible or not. The development of the method and results from solving the stereo–motion integration problem are presented. Although the authors' work is not the only one adopting the ALM framework in the computer vision context, to their knowledge the presented algorithm is the first to use this mathematical framework in the context of stereo–motion integration. This study describes how the stereo–motion integration problem was cast in a mathematical context and solved using the presented ALM method. Results on benchmark and real visual input data show the validity of the approach.},
    doi = {10.1049/iet-cvi.2013.0017},
    publisher = {IET Digital Library},
    project = {ICARUS,ViewFinder,Mobiniss},
    url = {https://digital-library.theiet.org/content/journals/10.1049/iet-cvi.2013.0017},
    unit= {meca-ras}
    }

2012

  • G. De Cubber and H. Sahli, “Partial differential equation-based dense 3D structure and motion estimation from monocular image sequences," IET computer vision, vol. 6, iss. 3, p. 174–185, 2012.
    [BibTeX] [DOI]
    @Article{de2012partial,
    author = {De Cubber, Geert and Sahli, Hichem},
    journal = {IET computer vision},
    title = {Partial differential equation-based dense {3D} structure and motion estimation from monocular image sequences},
    year = {2012},
    number = {3},
    pages = {174--185},
    volume = {6},
    doi = {10.1049/iet-cvi.2011.0174},
    project = {ViewFinder, Mobiniss},
    publisher = {IET Digital Library},
    unit= {meca-ras,vub-etro}
    }

2011

  • G. De Cubber, D. Doroftei, H. Sahli, and Y. Baudoin, “Outdoor Terrain Traversability Analysis for Robot Navigation using a Time-Of-Flight Camera," in RGB-D Workshop on 3D Perception in Robotics, Vasteras, Sweden, 2011.
    [BibTeX] [Abstract] [Download PDF]

    Autonomous robotic systems operating in unstructured outdoor environments need to estimate the traversability of the terrain in order to navigate safely. Traversability estimation is a challenging problem, as the traversability is a complex function of both the terrain characteristics, such as slopes, vegetation, rocks, etc., and the robot mobility characteristics, i.e. locomotion method, wheels, etc. It is thus required to analyze in real-time the 3D characteristics of the terrain and pair this data to the robot capabilities. In this paper, a method is introduced to estimate the traversability using data from a time-of-flight camera.
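A minimal sketch of the slope-thresholding idea behind such traversability analysis (the cell size and slope limit below are hypothetical placeholders; the paper's actual time-of-flight pipeline is more elaborate):

```python
import math

def traversability(height_grid, cell_size=0.1, max_slope_deg=20.0):
    """Label each cell of a height map (e.g. built from time-of-flight
    depth data) as TRAVERSABLE or OBSTACLE from the local slope."""
    rows, cols = len(height_grid), len(height_grid[0])
    labels = [["TRAVERSABLE"] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            for dr, dc in ((0, 1), (1, 0)):  # right and down neighbours
                r2, c2 = r + dr, c + dc
                if r2 < rows and c2 < cols:
                    rise = abs(height_grid[r][c] - height_grid[r2][c2])
                    slope = math.degrees(math.atan2(rise, cell_size))
                    if slope > max_slope_deg:  # too steep for this robot
                        labels[r][c] = labels[r2][c2] = "OBSTACLE"
    return labels
```

Pairing the slope threshold to the robot's mobility characteristics (wheels, tracks, legs) is exactly the coupling between terrain and platform that the abstract describes.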

    @InProceedings{de2011outdoor,
    author = {De Cubber, Geert and Doroftei, Daniela and Sahli, Hichem and Baudoin, Yvan},
    booktitle = {RGB-D Workshop on 3D Perception in Robotics},
    title = {Outdoor Terrain Traversability Analysis for Robot Navigation using a Time-Of-Flight Camera},
    year = {2011},
    abstract = {Autonomous robotic systems operating in unstructured outdoor environments need to estimate the traversability of the terrain in order to navigate safely. Traversability estimation is a challenging problem, as the traversability is a complex function of both the terrain characteristics, such as slopes, vegetation, rocks, etc., and the robot mobility characteristics, i.e. locomotion method, wheels, etc. It is thus required to analyze in real-time the 3D characteristics of the terrain and pair this data to the robot capabilities. In this paper, a method is introduced to estimate the traversability using data from a time-of-flight camera.},
    project = {ViewFinder, Mobiniss},
    address = {Vasteras, Sweden},
    url = {http://mecatron.rma.ac.be/pub/2011/TTA_TOF.pdf},
    unit= {meca-ras}
    }

  • G. De Cubber and D. Doroftei, “Multimodal terrain analysis for an all-terrain crisis Management Robot," in IARP HUDEM 2011, Sibenik, Croatia, 2011.
    [BibTeX] [Abstract] [Download PDF]

    In this paper, a novel stereo-based terrain-traversability estimation methodology is proposed. The novelty is that – contrary to classic depth-based terrain classification algorithms – all the information of the stereo camera system is used, including the color information. Using this approach, depth and color information are fused in order to obtain a higher classification accuracy than is possible with uni-modal techniques.
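The depth/colour fusion idea can be caricatured as a late fusion of two per-cell obstacle scores (the weights and threshold below are purely illustrative, not the paper's actual fusion scheme):

```python
def fuse_scores(depth_score, color_score, w_depth=0.6, w_color=0.4):
    """Late fusion: weighted combination of two obstacle likelihoods in [0, 1]."""
    return w_depth * depth_score + w_color * color_score

def classify_cell(depth_score, color_score, threshold=0.5):
    """Label a terrain cell from the fused depth + colour evidence."""
    fused = fuse_scores(depth_score, color_score)
    return "OBSTACLE" if fused > threshold else "TRAVERSABLE"
```

When the two cues agree the fused score is more confident than either alone, which is the intuition behind the claimed accuracy gain over uni-modal techniques.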

    @InProceedings{de2011multimodal,
    author = {De Cubber, Geert and Doroftei, Daniela},
    booktitle = {IARP HUDEM 2011},
    title = {Multimodal terrain analysis for an all-terrain crisis Management Robot},
    year = {2011},
    abstract = {In this paper, a novel stereo-based terrain-traversability estimation methodology is proposed. The novelty is that – contrary to classic depth-based terrain classification algorithms – all the information of the stereo camera system is used, including the color information. Using this approach, depth and color information are fused in order to obtain a higher classification accuracy than is possible with uni-modal techniques.},
    project = {Mobiniss},
    address = {Sibenik, Croatia},
    url = {http://mecatron.rma.ac.be/pub/2011/Multimodal terrain analysis for an all-terrain crisis management robot .pdf},
    unit= {meca-ras}
    }

  • G. De Cubber, D. Doroftei, K. Verbiest, and S. A. Berrabah, “Autonomous camp surveillance with the ROBUDEM robot: challenges and results," in IARP Workshop RISE’2011, Belgium, 2011.
    [BibTeX] [Abstract] [Download PDF]

    Autonomous robotic systems can help for risky interventions to reduce the risk to human lives. An example of such a risky intervention is a camp surveillance scenario, where an environment needs to be patrolled and intruders need to be detected and intercepted. This paper describes the development of a mobile outdoor robot which is capable of performing such a camp surveillance task. The key research issues tackled are the robot design, geo-referenced localization and path planning, traversability estimation, the optimization of the terrain coverage strategy and the development of an intuitive human-robot interface.

    @InProceedings{de2011autonomous,
    author = {De Cubber, Geert and Doroftei, Daniela and Verbiest, Kristel and Berrabah, Sid Ahmed},
    booktitle = {IARP Workshop RISE’2011},
    title = {Autonomous camp surveillance with the {ROBUDEM} robot: challenges and results},
    year = {2011},
    abstract = {Autonomous robotic systems can help for risky interventions to reduce the risk to human lives. An example of such a risky intervention is a camp surveillance scenario, where an environment needs to be patrolled and intruders need to be detected and intercepted. This paper describes the development of a mobile outdoor robot which is capable of performing such a camp surveillance task. The key research issues tackled are the robot design, geo-referenced localization and path planning, traversability estimation, the optimization of the terrain coverage strategy and the development of an intuitive human-robot interface.},
    project = {Mobiniss},
    address = {Belgium},
    url = {http://mecatron.rma.ac.be/pub/2011/ELROB-RISE.pdf},
    unit= {meca-ras}
    }

  • G. De Cubber and D. Doroftei, “Using Robots in Hazardous Environments: Landmine Detection, de-Mining and Other Applications," in Using Robots in Hazardous Environments: Landmine Detection, De-Mining and Other Applications, Y. Baudoin and M. Habib, Eds., Woodhead Publishing, 2011, vol. 1, p. 476–498.
    [BibTeX] [Abstract] [Download PDF]

    This chapter presents three main aspects of the development of a crisis management robot. First, we present an approach for robust victim detection in difficult outdoor conditions. Second, we present an approach where a classification of the terrain in the classes traversable and obstacle is performed using only stereo vision as input data. Lastly, we present a behavior-based control architecture, enabling a robot to search for human victims on an incident site, while navigating semi-autonomously, using stereo vision as the main source of sensor information.

    @InBook{de2010human,
    author = {De Cubber, Geert and Doroftei, Daniela},
    editor = {Baudoin, Yvan and Habib, Maki},
    chapter = {Chapter 20},
    pages = {476--498},
    publisher = {Woodhead Publishing},
    title = {Using Robots in Hazardous Environments: Landmine Detection, de-Mining and Other Applications},
    year = {2011},
    isbn = {1845697863},
    volume = {1},
    abstract = {This chapter presents three main aspects of the development of a crisis management robot. First, we present an approach for robust victim detection in difficult outdoor conditions. Second, we present an approach where a classification of the terrain in the classes traversable and obstacle is performed using only stereo vision as input data. Lastly, we present a behavior-based control architecture, enabling a robot to search for human victims on an incident site, while navigating semi-autonomously, using stereo vision as the main source of sensor information.},
    booktitle = {Using Robots in Hazardous Environments: Landmine Detection, De-Mining and Other Applications},
    date = {2011-01-11},
    ean = {9781845697860},
    pagetotal = {665},
    project = {Mobiniss},
    url = {http://mecatron.rma.ac.be/pub/2009/Handbook Chapter 4 - Human Victim Detection and Stereo-based Terrain Traversability Analysis for Behavior-Based Robot Navigation.pdf},
    unit= {meca-ras}
    }

2010

  • G. De Cubber, S. A. Berrabah, D. Doroftei, Y. Baudoin, and H. Sahli, “Combining Dense Structure from Motion and Visual SLAM in a Behavior-Based Robot Control Architecture," International Journal of Advanced Robotic Systems, vol. 7, iss. 1, 2010.
    [BibTeX] [Abstract] [Download PDF] [DOI]

    In this paper, we present a control architecture for an intelligent outdoor mobile robot. This enables the robot to navigate in a complex, natural outdoor environment, relying on only a single on-board camera as sensory input. This is achieved through a twofold analysis of the visual data stream: a dense structure from motion algorithm calculates a depth map of the environment and a visual simultaneous localization and mapping algorithm builds a map of the surroundings using image features. This information enables a behavior-based robot motion and path planner to navigate the robot through the environment. In this paper, we show the theoretical aspects of setting up this architecture.

    @Article{de2010combining,
    author = {De Cubber, Geert and Sid Ahmed Berrabah and Daniela Doroftei and Yvan Baudoin and Hichem Sahli},
    journal = {International Journal of Advanced Robotic Systems},
    title = {Combining Dense Structure from Motion and Visual {SLAM} in a Behavior-Based Robot Control Architecture},
    year = {2010},
    month = mar,
    number = {1},
    volume = {7},
    abstract = {In this paper, we present a control architecture for an intelligent outdoor mobile robot. This enables the robot to navigate in a complex, natural outdoor environment, relying on only a single on-board camera as sensory input. This is achieved through a twofold analysis of the visual data stream: a dense structure from motion algorithm calculates a depth map of the environment and a visual simultaneous localization and mapping algorithm builds a map of the surroundings using image features. This information enables a behavior-based robot motion and path planner to navigate the robot through the environment. In this paper, we show the theoretical aspects of setting up this architecture.},
    doi = {10.5772/7240},
    publisher = {{SAGE} Publications},
    project = {ViewFinder, Mobiniss},
    url = {http://mecatron.rma.ac.be/pub/2010/e_from_motion_and_visual_slam_in_a_behavior-based_robot_control_architecture.pdf},
    unit= {meca-ras,vub-etro}
    }

  • G. De Cubber, “On-line and Off-line 3D Reconstruction for Crisis Management Applications," in Fourth International Workshop on Robotics for risky interventions and Environmental Surveillance-Maintenance, RISE’2010, Sheffield, UK, 2010.
    [BibTeX] [Abstract] [Download PDF]

    We present in this paper a 3D reconstruction methodology. This approach fuses dense stereo and sparse motion data to estimate high quality instantaneous depth maps. This methodology achieves near real-time processing frame rates, such that it can be directly used on-line by the crisis management teams.

    @InProceedings{de2010line,
    author = {De Cubber, Geert},
    booktitle = {Fourth International Workshop on Robotics for risky interventions and Environmental Surveillance-Maintenance, RISE’2010},
    title = {On-line and Off-line {3D} Reconstruction for Crisis Management Applications},
    year = {2010},
    abstract = {We present in this paper a 3D reconstruction methodology. This approach fuses dense stereo and sparse motion data to estimate high quality instantaneous depth maps. This methodology achieves near real-time processing frame rates, such that it can be directly used on-line by the crisis management teams.},
    project = {ViewFinder, Mobiniss},
    address = {Sheffield, UK},
    url = {http://mecatron.rma.ac.be/pub/RISE/RISE - 2010/On-line and Off-line 3D Reconstruction_Geert_De_Cubber.pdf},
    unit= {meca-ras}
    }

  • Y. Baudoin, G. De Cubber, E. Colon, D. Doroftei, and S. A. Berrabah, “Robotics Assistance by Risky Interventions: Needs and Realistic Solutions," in Workshop on Robotics for Extreme conditions, Saint-Petersburg, Russia, 2010.
    [BibTeX] [Abstract] [Download PDF]

    This paper discusses the requirements towards robotics systems in the domains of firefighting, CBRN-E and humanitarian demining.

    @InProceedings{baudoin2010robotics,
    author = {Baudoin, Yvan and De Cubber, Geert and Colon, Eric and Doroftei, Daniela and Berrabah, Sid Ahmed},
    booktitle = {Workshop on Robotics for Extreme conditions},
    title = {Robotics Assistance by Risky Interventions: Needs and Realistic Solutions},
    year = {2010},
    abstract = {This paper discusses the requirements towards robotics systems in the domains of firefighting, CBRN-E and humanitarian demining.},
    project = {ViewFinder, Mobiniss},
    address = {Saint-Petersburg, Russia},
    url = {http://mecatron.rma.ac.be/pub/2010/Robotics Assistance by risky interventions.pdf},
    unit= {meca-ras}
    }

  • G. De Cubber, “Variational methods for dense depth reconstruction from monocular and binocular video sequences," PhD Thesis, 2010.
    [BibTeX] [Abstract] [Download PDF]

    This research work tackles the problem of dense three-dimensional reconstruction from monocular and binocular image sequences. Recovering 3D-information has been in the focus of attention of the computer vision community for a few decades now, yet no all-satisfying method has been found so far. The main problem with vision is that the perceived computer image is a two-dimensional projection of the 3D world. Three-dimensional reconstruction can thus be regarded as the process of re-projecting the 2D image(s) back to a 3D model, as such recovering the depth dimension which was lost during projection. In this work, we focus on dense reconstruction, meaning that a depth estimate is sought for each pixel of the input image. Most attention in the 3D reconstruction area has been on stereo-vision based methods, which use the displacement of objects in two (or more) images. Where stereo vision must be seen as a spatial integration of multiple viewpoints to recover depth, it is also possible to perform a temporal integration. The problem arising in this situation is known as the Structure from Motion problem and deals with extracting 3-dimensional information about the environment from the motion of its projection onto a two-dimensional surface. Based upon the observation that the human visual system uses both stereo and structure from motion for 3D reconstruction, this research work also targets the combination of stereo information in a structure from motion-based 3D-reconstruction scheme. The data fusion problem arising in this case is solved by casting it as an energy minimization problem in a variational framework.

    @PhdThesis{de2010variational,
    author = {De Cubber, Geert},
    school = {Vrije Universiteit Brussel-Royal Military Academy},
    title = {Variational methods for dense depth reconstruction from monocular and binocular video sequences},
    year = {2010},
    abstract = {This research work tackles the problem of dense three-dimensional reconstruction from monocular and binocular image sequences. Recovering 3D-information has been in the focus of attention of the computer vision community for a few decades now, yet no all-satisfying method has been found so far. The main problem with vision is that the perceived computer image is a two-dimensional projection of the 3D world. Three-dimensional reconstruction can thus be regarded as the process of re-projecting the 2D image(s) back to a 3D model, as such recovering the depth dimension which was lost during projection.
    In this work, we focus on dense reconstruction, meaning that a depth estimate is sought for each pixel of the input image. Most attention in the 3D reconstruction area has been on stereo-vision based methods, which use the displacement of objects in two (or more) images. Where stereo vision must be seen as a spatial integration of multiple viewpoints to recover depth, it is also possible to perform a temporal integration. The problem arising in this situation is known as the Structure from Motion problem and deals with extracting 3-dimensional information about the environment from the motion of its projection onto a two-dimensional surface. Based upon the observation that the human visual system uses both stereo and structure from motion for 3D reconstruction, this research work also targets the combination of stereo information in a structure from motion-based 3D-reconstruction scheme. The data fusion problem arising in this case is solved by casting it as an energy minimization problem in a variational framework.},
    project = {ViewFinder, Mobiniss},
    url = {http://mecatron.rma.ac.be/pub/2010/PhD_Thesis_Geert_.pdf},
    unit= {meca-ras,vub-etro}
    }

  • G. De Cubber, D. Doroftei, and S. A. Berrabah, “Using visual perception for controlling an outdoor robot in a crisis management scenario," in ROBOTICS 2010, Clermont-Ferrand, France, 2010.
    [BibTeX] [Abstract] [Download PDF]

    Crisis management teams (e.g. fire and rescue services, anti-terrorist units …) are often confronted with dramatic situations where critical decisions have to be made within hard time constraints. Therefore, they need correct information about what is happening on the crisis site. In this context, the View-Finder project aims at developing robots which can assist the human crisis managers, by gathering data. This paper gives an overview of the development of such an outdoor robot. The presented robotic system is able to detect human victims at the incident site, by using vision-based human body shape detection. To increase the perceptual awareness of the human crisis managers, the robotic system is capable of reconstructing a 3D model of the environment, based on vision data. Also for navigation, the robot depends mostly on visual perception, as it combines a model-based navigation approach using geo-referenced positioning with stereo-based terrain traversability analysis for obstacle avoidance. The robot control scheme is embedded in a behavior-based robot control architecture, which integrates all the robot capabilities. This paper discusses all the above-mentioned technologies.

    @InProceedings{de2010using,
    author = {De Cubber, Geert and Doroftei, Daniela and Berrabah, Sid Ahmed},
    booktitle = {ROBOTICS 2010},
    title = {Using visual perception for controlling an outdoor robot in a crisis management scenario},
    year = {2010},
    abstract = {Crisis management teams (e.g. fire and rescue services, anti-terrorist units ...) are often confronted with dramatic situations where critical decisions have to be made within hard time constraints. Therefore, they need correct information about what is happening on the crisis site. In this context, the View-Finder project aims at developing robots which can assist the human crisis managers, by gathering data. This paper gives an overview of the development of such an outdoor robot. The presented robotic system is able to detect human victims at the incident site, by using vision-based human body shape detection. To increase the perceptual awareness of the human crisis managers, the robotic system is capable of reconstructing a 3D model of the environment, based on vision data. Also for navigation, the robot depends mostly on visual perception, as it combines a model-based navigation approach using geo-referenced positioning with stereo-based terrain traversability analysis for obstacle avoidance. The robot control scheme is embedded in a behavior-based robot control architecture, which integrates all the robot capabilities. This paper discusses all the above-mentioned technologies.},
    project = {ViewFinder, Mobiniss},
    address = {Clermont-Ferrand, France},
    unit= {meca-ras},
    url = {http://mecatron.rma.ac.be/pub/2010/Usingvisualperceptionforcontrollinganoutdoorrobotinacrisismanagementscenario (1).pdf},
    }

2009

  • G. De Cubber, D. Doroftei, L. Nalpantidis, G. C. Sirakoulis, and A. Gasteratos, “Stereo-based terrain traversability analysis for robot navigation," in IARP/EURON Workshop on Robotics for Risky Interventions and Environmental Surveillance, Brussels, Belgium, 2009.
    [BibTeX] [Abstract] [Download PDF]

    In this paper, we present an approach where a classification of the terrain in the classes traversable and obstacle is performed using only stereo vision as input data.

    @InProceedings{de2009stereo,
    author = {De Cubber, Geert and Doroftei, Daniela and Nalpantidis, Lazaros and Sirakoulis, Georgios Ch and Gasteratos, Antonios},
    booktitle = {IARP/EURON Workshop on Robotics for Risky Interventions and Environmental Surveillance},
    title = {Stereo-based terrain traversability analysis for robot navigation},
    year = {2009},
    abstract = {In this paper, we present an approach where a classification of the terrain in the classes traversable and obstacle is performed using only stereo vision as input data.},
    project = {ViewFinder, Mobiniss},
    address = {Brussels, Belgium},
    url = {http://mecatron.rma.ac.be/pub/2009/RISE-DECUBBER-DUTH.pdf},
    unit= {meca-ras}
    }

  • G. De Cubber and G. Marton, “Human Victim Detection," in Third International Workshop on Robotics for risky interventions and Environmental Surveillance-Maintenance, RISE, Brussels, Belgium, 2009.
    [BibTeX] [Abstract] [Download PDF]

    This paper presents an approach to achieve robust victim detection from color video images. The applied approach builds on the Viola-Jones algorithm for Haar-feature-based template recognition. This algorithm was adapted to recognize persons lying on the ground in difficult outdoor illumination conditions.
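The Haar-feature machinery underlying the Viola-Jones detector rests on the integral image, which turns any rectangle sum into an O(1) lookup; a bare-bones sketch of that building block (not the adapted detector itself):

```python
def integral_image(img):
    """Summed-area table: ii[r][c] holds the sum of all pixels above and
    to the left of (r, c), so rectangle sums cost four lookups."""
    rows, cols = len(img), len(img[0])
    ii = [[0] * (cols + 1) for _ in range(rows + 1)]
    for r in range(rows):
        for c in range(cols):
            ii[r + 1][c + 1] = (img[r][c] + ii[r][c + 1]
                                + ii[r + 1][c] - ii[r][c])
    return ii

def rect_sum(ii, x, y, w, h):
    """Pixel sum of the w-by-h rectangle with top-left corner (x, y)."""
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]

def haar_two_rect(ii, x, y, w, h):
    """Two-rectangle Haar feature: left half minus right half,
    responding to vertical intensity edges."""
    half = w // 2
    return rect_sum(ii, x, y, half, h) - rect_sum(ii, x + half, y, half, h)
```

A cascade of boosted classifiers over thousands of such features, scanned at multiple scales, is what the adapted detector evaluates per window.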

    @InProceedings{de2009human,
    author = {De Cubber, Geert and Marton, Gabor},
    booktitle = {Third International Workshop on Robotics for risky interventions and Environmental Surveillance-Maintenance, RISE},
    title = {Human Victim Detection},
    year = {2009},
    abstract = {This paper presents an approach to achieve robust victim detection from color video images. The applied approach builds on the Viola-Jones algorithm for Haar-feature-based template recognition. This algorithm was adapted to recognize persons lying on the ground in difficult outdoor illumination conditions.},
    project = {ViewFinder, Mobiniss},
    address = {Brussels, Belgium},
    url = {http://mecatron.rma.ac.be/pub/2009/RISE-DECUBBER_BUTE.pdf},
    unit= {meca-ras}
    }

  • D. Doroftei, G. De Cubber, E. Colon, and Y. Baudoin, “Behavior based control for an outdoor crisis management robot," in Proceedings of the IARP International Workshop on Robotics for Risky Interventions and Environmental Surveillance, Brussels, Belgium, 2009, p. 12–14.
    [BibTeX] [Abstract] [Download PDF]

    The design and development of a control architecture for a robotic crisis management agent raises 3 main questions: 1. How can we design the individual behaviors, such that the robot is capable of avoiding obstacles and of navigating semi-autonomously? 2. How can these individual behaviors be combined in an optimal way, leading to a rational and coherent global robot behavior? 3. How can all these capabilities be combined in a comprehensive and modular framework, such that the robot can handle a high-level task (searching for human victims) with minimal input from human operators, by navigating in a complex and dynamic environment, while avoiding potentially hazardous obstacles? In this paper, we present each of these three main aspects of the general robot control architecture in more detail.
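One classical answer to the second question is a weighted fusion of the motion commands proposed by each behavior; a hypothetical sketch (the behaviors, weights and numbers below are illustrative, not the paper's):

```python
def fuse_behaviors(commands, weights):
    """Blend per-behavior (v, omega) commands into one weighted-average command."""
    total = sum(weights)
    v = sum(w * cmd[0] for cmd, w in zip(commands, weights)) / total
    omega = sum(w * cmd[1] for cmd, w in zip(commands, weights)) / total
    return v, omega

# e.g. goal seeking wants to drive straight, obstacle avoidance wants to turn
goal_seek = (0.8, 0.0)  # (forward speed in m/s, turn rate in rad/s)
avoid = (0.2, 0.6)
v, omega = fuse_behaviors([goal_seek, avoid], [0.5, 1.0])
```

Raising the avoidance weight near obstacles and the goal weight in open terrain is one simple way to obtain the "rational and coherent global behavior" the abstract asks for.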

    @InProceedings{doroftei2009behavior,
    author = {Doroftei, Daniela and De Cubber, Geert and Colon, Eric and Baudoin, Yvan},
    booktitle = {Proceedings of the IARP International Workshop on Robotics for Risky Interventions and Environmental Surveillance},
    title = {Behavior based control for an outdoor crisis management robot},
    year = {2009},
    pages = {12--14},
    abstract = {The design and development of a control architecture for a robotic crisis management agent raises 3 main questions:
    1. How can we design the individual behaviors, such that the robot is capable of avoiding obstacles and of navigating semi-autonomously?
    2. How can these individual behaviors be combined in an optimal way, leading to a rational and coherent global robot behavior?
    3. How can all these capabilities be combined in a comprehensive and modular framework, such that the robot can handle a high-level task (searching for human victims) with minimal input from human operators, by navigating in a complex and dynamic environment, while avoiding potentially hazardous obstacles?
    In this paper, we present each of these three main aspects of the general robot control architecture in more detail.},
    project = {ViewFinder, Mobiniss},
    address = {Brussels, Belgium},
    url = {http://mecatron.rma.ac.be/pub/2009/RISE-DOROFTEI.pdf},
    unit= {meca-ras}
    }

  • Y. Baudoin, D. Doroftei, G. De Cubber, S. A. Berrabah, C. Pinzon, J. Penders, A. Maslowski, and J. Bedkowski, “VIEW-FINDER : Outdoor Robotics Assistance to Fire-Fighting services," in International Symposium Clawar, Istanbul, Turkey, 2009.
    [BibTeX] [Abstract] [Download PDF]

    In the event of an emergency due to a fire or other crisis, a necessary but time-consuming pre-requisite, that could delay the real rescue operation, is to establish whether the ground or area can be entered safely by human emergency workers. The objective of the VIEW-FINDER project is to develop robots which have the primary task of gathering data. The robots are equipped with sensors that detect the presence of chemicals and, in parallel, image data is collected and forwarded to an advanced Control station (COC). The robots will be equipped with a wide array of chemical sensors, on-board cameras, Laser and other sensors to enhance scene understanding and reconstruction. At the control station the data is processed and combined with geographical information originating from a web of sources; thus providing the personnel leading the operation with in-situ processed data that can improve decision making. The information may also be forwarded to other forces involved in the operation (e.g. fire fighters, rescue workers, police, etc.). The robots will be designed to navigate individually or cooperatively and to follow high-level instructions from the base station. The robots are off-the-shelf units, consisting of wheeled robots. The robots connect wirelessly to the control station. The control station collects in-situ data and combines it with information retrieved from the large-scale GMES-information bases. It will be equipped with a sophisticated human interface to display the processed information to the human operators and operation command.

    @InProceedings{baudoin2009view02,
    author = {Baudoin, Yvan and Doroftei, Daniela and De Cubber, Geert and Berrabah, Sid Ahmed and Pinzon, Carlos and Penders, Jacques and Maslowski, Andrzej and Bedkowski, Janusz},
    booktitle = {International Symposium Clawar},
    title = {{VIEW-FINDER} : Outdoor Robotics Assistance to Fire-Fighting services},
    year = {2009},
    abstract = {In the event of an emergency due to a fire or other crisis, a necessary but time-consuming pre-requisite, that could delay the real rescue operation, is to establish whether the ground or area can be entered safely by human emergency workers. The objective of the VIEW-FINDER project is to develop robots which have the primary task of gathering data. The robots are equipped with sensors that detect the presence of chemicals and, in parallel, image data is collected and forwarded to an advanced Control station (COC). The robots will be equipped with a wide array of chemical sensors, on-board cameras, Laser and other sensors to enhance scene understanding and reconstruction. At the control station the data is processed and combined with geographical information originating from a web of sources; thus providing the personnel leading the operation with in-situ processed data that can improve decision making. The information may also be forwarded to other forces involved in the operation (e.g. fire fighters, rescue workers, police, etc.). The robots will be designed to navigate individually or cooperatively and to follow high-level instructions from the base station. The robots are off-the-shelf units, consisting of wheeled robots. The robots connect wirelessly to the control station. The control station collects in-situ data and combines it with information retrieved from the large-scale GMES-information bases. It will be equipped with a sophisticated human interface to display the processed information to the human operators and operation command.},
    project = {ViewFinder, Mobiniss},
    address = {Istanbul, Turkey},
    url = {http://mecatron.rma.ac.be/pub/2009/CLAWAR2009.pdf},
    unit= {meca-ras}
    }

  • D. Doroftei, E. Colon, Y. Baudoin, and H. Sahli, “Development of a behaviour-based control and software architecture for a visually guided mine detection robot," European Journal of Automated Systems (JESA), vol. 43, iss. 3, p. 295–314, 2009.
    [BibTeX] [Abstract] [Download PDF]

    Humanitarian demining is a labor-intensive and high-risk operation which could benefit from the development of a humanitarian mine detection robot, capable of scanning a minefield semi-automatically. The design of such an outdoor autonomous robot requires the consideration and integration of multiple aspects: sensing, data fusion, path and motion planning and robot control embedded in a control and software architecture. This paper focuses on three main aspects of the design process: visual sensing using stereo and image motion analysis, design of a behaviour-based control architecture and implementation of a modular software architecture.

    @Article{doro2009development,
    author = {Doroftei, Daniela and Colon, Eric and Baudoin, Yvan and Sahli, Hichem},
    journal = {European Journal of Automated Systems ({JESA})},
    title = {Development of a behaviour-based control and software architecture for a visually guided mine detection robot},
    year = {2009},
    volume = {43},
    number = {3},
    abstract = {Humanitarian demining is a labor-intensive and high-risk operation which could benefit from the development of a humanitarian mine detection robot, capable of scanning a minefield semi-automatically. The design of such an outdoor autonomous robot requires the consideration and integration of multiple aspects: sensing, data fusion, path and motion planning and robot control embedded in a control and software architecture. This paper focuses on three main aspects of the design process: visual sensing using stereo and image motion analysis, design of a behaviour-based control architecture and implementation of a modular software architecture.},
    pages = {295--314},
    project = {Mobiniss, ViewFinder},
    url = {http://mecatron.rma.ac.be/pub/2009/doc-article-hermes.pdf},
    unit= {meca-ras}
    }
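    Several of the papers above rely on a behaviour-based control architecture, in which independent behaviours each propose a motor command and an arbitration step fuses them. As a minimal illustrative sketch (the behaviour names, activation rules and weighted-sum fusion below are assumptions for illustration, not the implementation described in the papers):

    ```python
    # Hypothetical behaviour-based arbitration sketch: each behaviour returns a
    # candidate (speed, turn) command plus an activation level in [0, 1]; the
    # arbiter fuses the candidates by activation-weighted averaging.

    def avoid_obstacles(obstacle_dist):
        """Becomes more active as an obstacle gets closer (within 2 m)."""
        activation = max(0.0, 1.0 - obstacle_dist / 2.0)
        return (0.0, 1.0), activation  # propose: stop and turn away

    def goto_goal(goal_bearing):
        """Drive forward, steering toward the goal bearing (radians)."""
        return (1.0, goal_bearing), 0.5  # constant moderate activation

    def arbitrate(behaviours):
        """Fuse (command, activation) pairs into a single motor command."""
        total = sum(a for _, a in behaviours)
        if total == 0:
            return (0.0, 0.0)  # no behaviour active: stand still
        speed = sum(cmd[0] * a for cmd, a in behaviours) / total
        turn = sum(cmd[1] * a for cmd, a in behaviours) / total
        return (speed, turn)

    # Far from obstacles, the goal-seeking behaviour dominates; close to an
    # obstacle, the avoidance behaviour pulls speed down and turn rate up.
    cmd = arbitrate([avoid_obstacles(5.0), goto_goal(0.2)])
    ```

    The appeal of this scheme, as the papers argue, is modularity: new behaviours can be added without touching the existing ones, only the arbitration weights.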

2008

  • D. Doroftei, E. Colon, and G. De Cubber, “A Behaviour-Based Control and Software Architecture for the Visually Guided Robudem Outdoor Mobile Robot," Journal of Automation Mobile Robotics and Intelligent Systems, vol. 2, iss. 4, p. 19–24, 2008.
    [BibTeX] [Abstract] [Download PDF]

    The design of outdoor autonomous robots requires the careful consideration and integration of multiple aspects: sensors and sensor data fusion, design of a control and software architecture, design of a path planning algorithm and robot control. This paper describes partial aspects of this research work, which is aimed at developing a semi-autonomous outdoor robot for risky interventions. This paper focuses on three main aspects of the design process: visual sensing using stereo vision and image motion analysis, design of a behaviour-based control architecture and implementation of a modular software architecture.

    @Article{doroftei2008behaviour,
    author = {Doroftei, Daniela and Colon, Eric and De Cubber, Geert},
    journal = {Journal of Automation Mobile Robotics and Intelligent Systems},
    title = {A Behaviour-Based Control and Software Architecture for the Visually Guided Robudem Outdoor Mobile Robot},
    year = {2008},
    issn = {1897-8649},
    month = oct,
    number = {4},
    pages = {19--24},
    volume = {2},
    abstract = {The design of outdoor autonomous robots requires the careful consideration and integration of multiple aspects: sensors and sensor data fusion, design of a control and software architecture, design of a path planning algorithm and robot control. This paper describes partial aspects of this research work, which is aimed at developing a semi-autonomous outdoor robot for risky interventions. This paper focuses on three main aspects of the design process: visual sensing using stereo vision and image motion analysis, design of a behaviour-based control architecture and implementation of a modular software architecture.},
    project = {ViewFinder, Mobiniss},
    url = {http://mecatron.rma.ac.be/pub/2008/XXX JAMRIS No8 - Doroftei.pdf},
    unit= {meca-ras}
    }

  • G. De Cubber, L. Nalpantidis, G. C. Sirakoulis, and A. Gasteratos, “Intelligent robots need intelligent vision: visual 3D perception," in RISE’08: Proceedings of the EURON/IARP International Workshop on Robotics for Risky Interventions and Surveillance of the Environment, Benicassim, Spain, 2008.
    [BibTeX] [Abstract] [Download PDF]

    In this paper, we investigate the possibilities of stereo and structure from motion approaches. It is not the aim to compare both theories of depth reconstruction with the goal of designating a winner and a loser. Both methods are capable of providing sparse as well as dense 3D reconstructions and both approaches have their merits and defects. The thorough, year-long research in the field indicates that accurate depth perception requires a combination of methods rather than a sole one. In fact, cognitive research has shown that the human brain uses no less than 12 different cues to estimate depth. Therefore, we also finally introduce in a following section a methodology to integrate stereo and structure from motion.

    @InProceedings{de2008intelligent,
    author = {De Cubber, Geert and Nalpantidis, Lazaros and Sirakoulis, Georgios Ch and Gasteratos, Antonios},
    booktitle = {RISE’08: Proceedings of the EURON/IARP International Workshop on Robotics for Risky Interventions and Surveillance of the Environment},
    title = {Intelligent robots need intelligent vision: visual {3D} perception},
    year = {2008},
    abstract = {In this paper, we investigate the possibilities of stereo and structure from motion approaches. It is not the aim to compare both theories of depth reconstruction with the goal of designating a winner and a loser. Both methods are capable of providing sparse as well as dense 3D reconstructions and both approaches have their merits and defects. The thorough, year-long research in the field indicates that accurate depth perception requires a combination of methods rather than a sole one. In fact, cognitive research has shown that the human brain uses no less than 12 different cues to estimate depth. Therefore, we also finally introduce in a following section a methodology to integrate stereo and structure from motion.},
    project = {ViewFinder, Mobiniss},
    address = {Benicassim, Spain},
    url = {http://mecatron.rma.ac.be/pub/2008/DeCubber.pdf},
    unit= {meca-ras}
    }

  • G. De Cubber, D. Doroftei, and G. Marton, “Development of a visually guided mobile robot for environmental observation as an aid for outdoor crisis management operations," in Proceedings of the IARP Workshop on Environmental Maintenance and Protection, Baden Baden, Germany, 2008.
    [BibTeX] [Abstract] [Download PDF]

    To solve these issues, an outdoor mobile robotic platform was equipped with a differential GPS system for accurate geo-registered positioning, and a stereo vision system. This stereo vision system serves two purposes: 1) victim detection and 2) obstacle detection and avoidance. For semi-autonomous robot control and navigation, we rely on a behavior-based robot motion and path planner. In this paper, we present each of the three main aspects (victim detection, stereo-based obstacle detection and behavior-based navigation) of the general robot control architecture in more detail.

    @InProceedings{de2008development,
    author = {De Cubber, Geert and Doroftei, Daniela and Marton, Gabor},
    booktitle = {Proceedings of the IARP Workshop on Environmental Maintenance and Protection},
    title = {Development of a visually guided mobile robot for environmental observation as an aid for outdoor crisis management operations},
    year = {2008},
    abstract = {To solve these issues, an outdoor mobile robotic platform was equipped with a differential GPS system for accurate geo-registered positioning, and a stereo vision system. This stereo vision system serves two purposes: 1) victim detection and 2) obstacle detection and avoidance. For semi-autonomous robot control and navigation, we rely on a behavior-based robot motion and path planner. In this paper, we present each of the three main aspects (victim detection, stereo-based obstacle detection and behavior-based navigation) of the general robot control architecture in more detail.},
    project = {ViewFinder, Mobiniss},
    address = {Baden Baden, Germany},
    url = {http://mecatron.rma.ac.be/pub/2008/environmental observation as an aid for outdoor crisis management operations.pdf},
    unit= {meca-ras}
    }
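    The stereo-based obstacle detection mentioned above ultimately rests on the standard pinhole triangulation relation depth = focal length × baseline / disparity. A minimal sketch, with camera parameters that are purely illustrative assumptions (not taken from the paper):

    ```python
    # Illustrative stereo triangulation: convert a pixel disparity into metric
    # depth and flag close points as obstacles. Both constants are assumed.

    FOCAL_PX = 700.0    # focal length in pixels (assumed value)
    BASELINE_M = 0.12   # distance between the two cameras in metres (assumed)

    def depth_from_disparity(disparity_px):
        """Triangulated depth in metres for one pixel's disparity."""
        if disparity_px <= 0:
            return float("inf")  # no match, or point at infinity
        return FOCAL_PX * BASELINE_M / disparity_px

    def is_obstacle(disparity_px, threshold_m=2.0):
        """Flag pixels whose triangulated depth is closer than the threshold."""
        return depth_from_disparity(disparity_px) < threshold_m
    ```

    Because depth is inversely proportional to disparity, range resolution degrades quadratically with distance, which is why such platforms typically only trust stereo obstacle detection within a few metres.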

  • G. De Cubber, “Dense 3D structure and motion estimation as an aid for robot navigation," Journal of Automation Mobile Robotics and Intelligent Systems, vol. 2, iss. 4, p. 14–18, 2008.
    [BibTeX] [Abstract] [Download PDF]

    Three-dimensional scene reconstruction is an important tool in many applications varying from computer graphics to mobile robot navigation. In this paper, we focus on the robotics application, where the goal is to estimate the 3D rigid motion of a mobile robot and to reconstruct a dense three-dimensional scene representation. The reconstruction problem can be subdivided into a number of subproblems. First, the egomotion has to be estimated. For this, the camera (or robot) motion parameters are iteratively estimated by reconstruction of the epipolar geometry. Secondly, a dense depth map is calculated by fusing sparse depth information from point features and dense motion information from the optical flow in a variational framework. This depth map corresponds to a point cloud in 3D space, which can then be converted into a model to extract information for the robot navigation algorithm. Here, we present an integrated approach for the structure and egomotion estimation problem.

    @Article{DeCubber2008,
    author = {De Cubber, Geert},
    journal = {Journal of Automation Mobile Robotics and Intelligent Systems},
    title = {Dense {3D} structure and motion estimation as an aid for robot navigation},
    year = {2008},
    issn = {1897-8649},
    month = oct,
    number = {4},
    pages = {14--18},
    volume = {2},
    abstract = {Three-dimensional scene reconstruction is an important tool in many applications varying from computer graphics to mobile robot navigation. In this paper, we focus on the robotics application, where the goal is to estimate the 3D rigid motion of a mobile robot and to reconstruct a dense three-dimensional scene representation. The reconstruction problem can be subdivided into a number of subproblems. First, the egomotion has to be estimated. For this, the camera (or robot) motion parameters are iteratively estimated by reconstruction of the epipolar geometry. Secondly, a dense depth map is calculated by fusing sparse depth information from point features and dense motion information from the optical flow in a variational framework. This depth map corresponds to a point cloud in 3D space, which can then be converted into a model to extract information for the robot navigation algorithm. Here, we present an integrated approach for the structure and egomotion estimation problem.},
    project = {ViewFinder,Mobiniss},
    url = {http://www.jamris.org/images/ISSUES/ISSUE-2008-04/002 JAMRIS No8 - De Cubber.pdf},
    unit= {meca-ras}
    }
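    The abstract above describes fusing sparse depth from point features with dense motion information in a variational framework. As a crude discrete analogue (an illustrative sketch only, not the paper's variational solver), one can run Jacobi-style smoothing iterations that propagate a dense prior while pinning the pixels where sparse measurements exist:

    ```python
    # Toy sparse/dense depth fusion: iteratively replace each unmeasured pixel
    # by the average of its neighbours, keeping sparse measurements fixed.
    # This mimics the smoothness term of a variational fusion in spirit only.

    def fuse_depth(dense, sparse, iters=200):
        """dense: 2D list of prior depths; sparse: {(row, col): depth}."""
        rows, cols = len(dense), len(dense[0])
        d = [row[:] for row in dense]
        for (r, c), z in sparse.items():
            d[r][c] = z  # pin the sparse measurements
        for _ in range(iters):
            nxt = [row[:] for row in d]
            for r in range(rows):
                for c in range(cols):
                    if (r, c) in sparse:
                        continue  # measured depths stay fixed
                    nbrs = [d[rr][cc] for rr, cc in
                            ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                            if 0 <= rr < rows and 0 <= cc < cols]
                    nxt[r][c] = sum(nbrs) / len(nbrs)
            d = nxt
        return d

    # A reliable feature depth of 3 m at the centre gradually pulls up a
    # flat 1 m prior around it.
    fused = fuse_depth([[1.0] * 3 for _ in range(3)], {(1, 1): 3.0})
    ```

    The actual papers minimise a continuous energy functional rather than iterating a discrete stencil, but the qualitative behaviour (sparse anchors regularised into a dense map) is the same.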

  • D. Doroftei and Y. Baudoin, “Development of a semi-autonomous De-mining vehicle," in 7th IARP Workshop HUDEM2008, Cairo, Egypt, 2008.
    [BibTeX] [Abstract]

    The paper describes the development of a semi-autonomous de-mining vehicle.

    @InProceedings{doro2008development,
    author = {Doroftei, Daniela and Baudoin, Yvan},
    booktitle = {7th {IARP} Workshop {HUDEM}2008},
    title = {Development of a semi-autonomous De-mining vehicle},
    year = {2008},
    abstract = {The paper describes the development of a semi-autonomous de-mining vehicle.},
    address = {Cairo, Egypt},
    project = {Mobiniss},
    unit= {meca-ras}
    }

  • D. Doroftei and J. Bedkowski, “Towards the autonomous navigation of robots for risky interventions," in Third International Workshop on Robotics for risky interventions and Environmental Surveillance-Maintenance RISE, Benicassim, Spain, 2008.
    [BibTeX] [Abstract] [Download PDF]

    In the course of the ViewFinder project, two robotics teams (RMS and PIAP) are working on the development of an intelligent autonomous mobile robot. This paper reports on the progress of both teams.

    @InProceedings{doro2008towards,
    author = {Doroftei, Daniela and Bedkowski, Janusz},
    booktitle = {Third International Workshop on Robotics for risky interventions and Environmental Surveillance-Maintenance {RISE}},
    title = {Towards the autonomous navigation of robots for risky interventions},
    year = {2008},
    abstract = {In the course of the ViewFinder project, two robotics teams (RMS and PIAP) are working on the development of an intelligent autonomous mobile robot. This paper reports on the progress of both teams.},
    project = {ViewFinder, Mobiniss},
    address = {Benicassim, Spain},
    url = {http://mecatron.rma.ac.be/pub/2008/Doroftei.pdf},
    unit= {meca-ras}
    }

2007

  • E. Colon, G. De Cubber, H. Ping, J. Habumuremyi, H. Sahli, and Y. Baudoin, “Integrated Robotic systems for Humanitarian Demining," International Journal of Advanced Robotic Systems, vol. 4, iss. 2, p. 24, 2007.
    [BibTeX] [Abstract] [Download PDF] [DOI]

    This paper summarises the main results of 10 years of research and development in Humanitarian Demining. The Hudem project focuses on mine detection systems and aims at providing different solutions to support the mine detection operations. Robots using different kinds of locomotion systems have been designed and tested on dummy minefields. In order to control these robots, software interfaces, control algorithms, visual positioning and terrain following systems have also been developed. Typical data acquisition results obtained during trial campaigns with robots and data acquisition systems are reported. Lessons learned during the project and future work conclude this paper.

    @Article{colon2007integrated,
    author = {Colon, Eric and De Cubber, Geert and Ping, Hong and Habumuremyi, Jean-Claude and Sahli, Hichem and Baudoin, Yvan},
    journal = {International Journal of Advanced Robotic Systems},
    title = {Integrated Robotic systems for Humanitarian Demining},
    year = {2007},
    month = jun,
    number = {2},
    pages = {24},
    volume = {4},
    abstract = {This paper summarises the main results of 10 years of research and development in Humanitarian Demining. The Hudem project focuses on mine detection systems and aims at providing different solutions to support the mine detection operations. Robots using different kinds of locomotion systems have been designed and tested on dummy minefields. In order to control these robots, software interfaces, control algorithms, visual positioning and terrain following systems have also been developed. Typical data acquisition results obtained during trial campaigns with robots and data acquisition systems are reported. Lessons learned during the project and future work conclude this paper.},
    doi = {10.5772/5694},
    publisher = {{SAGE} Publications},
    project = {Mobiniss},
    url = {http://mecatron.rma.ac.be/pub/2007/10.1.1.691.7544.pdf},
    unit= {meca-ras}
    }

  • G. De Cubber, “Dense 3D structure and motion estimation as an aid for robot navigation," in ISMCR 2007, Warsaw, Poland, 2007.
    [BibTeX] [Abstract] [Download PDF]

    Three-dimensional scene reconstruction is an important tool in many applications varying from computer graphics to mobile robot navigation. In this paper, we focus on the robotics application, where the goal is to estimate the 3D rigid motion of a mobile robot and to reconstruct a dense three-dimensional scene representation. The reconstruction problem can be subdivided into a number of subproblems. First, the egomotion has to be estimated. For this, the camera (or robot) motion parameters are iteratively estimated by reconstruction of the epipolar geometry. Secondly, a dense depth map is calculated by fusing sparse depth information from point features and dense motion information from the optical flow in a variational framework. This depth map corresponds to a point cloud in 3D space, which can then be converted into a model to extract information for the robot navigation algorithm. Here, we present an integrated approach for the structure and egomotion estimation problem.

    @InProceedings{de2007dense,
    author = {De Cubber, Geert},
    booktitle = {ISMCR 2007},
    title = {Dense {3D} structure and motion estimation as an aid for robot navigation},
    year = {2007},
    abstract = {Three-dimensional scene reconstruction is an important tool in many applications varying from computer graphics to mobile robot navigation. In this paper, we focus on the robotics application, where the goal is to estimate the 3D rigid motion of a mobile robot and to reconstruct a dense three-dimensional scene representation. The reconstruction problem can be subdivided into a number of subproblems. First, the egomotion has to be estimated. For this, the camera (or robot) motion parameters are iteratively estimated by reconstruction of the epipolar geometry. Secondly, a dense depth map is calculated by fusing sparse depth information from point features and dense motion information from the optical flow in a variational framework. This depth map corresponds to a point cloud in 3D space, which can then be converted into a model to extract information for the robot navigation algorithm. Here, we present an integrated approach for the structure and egomotion estimation problem.},
    project = {ViewFinder,Mobiniss},
    address = {Warsaw, Poland},
    url = {http://mecatron.rma.ac.be/pub/2007/Dense 3D Structure and Motion Estimation as an aid for Robot Navigation.pdf},
    unit= {meca-ras,vub-etro}
    }

  • D. Doroftei, E. Colon, and G. De Cubber, “A behaviour-based control and software architecture for the visually guided Robudem outdoor mobile robot," in ISMCR 2007, Warsaw, Poland, 2007.
    [BibTeX] [Abstract] [Download PDF]

    The design of outdoor autonomous robots requires the careful consideration and integration of multiple aspects: sensors and sensor data fusion, design of a control and software architecture, design of a path planning algorithm and robot control. This paper describes partial aspects of this research work, which is aimed at developing a semi-autonomous outdoor robot for risky interventions. This paper focuses on three main aspects of the design process: visual sensing using stereo and image motion analysis, design of a behaviour-based control architecture and implementation of a modular software architecture.

    @InProceedings{doroftei2007behaviour,
    author = {Doroftei, Daniela and Colon, Eric and De Cubber, Geert},
    booktitle = {ISMCR 2007},
    title = {A behaviour-based control and software architecture for the visually guided {Robudem} outdoor mobile robot},
    year = {2007},
    address = {Warsaw, Poland},
    abstract = {The design of outdoor autonomous robots requires the careful consideration and integration of multiple aspects: sensors and sensor data fusion, design of a control and software architecture, design of a path planning algorithm and robot control. This paper describes partial aspects of this research work, which is aimed at developing a semi-autonomous outdoor robot for risky interventions. This paper focuses on three main aspects of the design process: visual sensing using stereo and image motion analysis, design of a behaviour-based control architecture and implementation of a modular software architecture.},
    project = {ViewFinder,Mobiniss},
    url = {http://mecatron.rma.ac.be/pub/2007/Doroftei_ISMCR07.pdf},
    unit= {meca-ras}
    }

  • D. Doroftei, E. Colon, Y. Baudoin, and H. Sahli, “Development of a semi-autonomous off-road vehicle," in IEEE HuMan’07, Timimoun, Algeria, 2007, p. 340–343.
    [BibTeX] [Abstract] [Download PDF]

    Humanitarian demining is still a highly labor-intensive and high-risk operation. Advanced sensors and mechanical aids can significantly reduce the demining time. In this context, the aim is to develop a humanitarian demining mobile robot which is able to scan a minefield semi-automatically. This paper discusses the development of a control scheme for such a semi-autonomous mobile robot for humanitarian demining. This process requires the careful consideration and integration of multiple aspects: sensors and sensor data fusion, design of a control and software architecture, design of a path planning algorithm and robot control.

    @InProceedings{doro2007development,
    author = {Doroftei, Daniela and Colon, Eric and Baudoin, Yvan and Sahli, Hichem},
    booktitle = {{IEEE} {HuMan}'07},
    title = {Development of a semi-autonomous off-road vehicle.},
    year = {2007},
    address = {Timimoun, Algeria},
    pages = {340--343},
    abstract = {Humanitarian demining is still a highly labor-intensive and high-risk operation. Advanced sensors and mechanical aids can significantly reduce the demining time. In this context, the aim is to develop a humanitarian demining mobile robot which is able to scan a minefield semi-automatically. This paper discusses the development of a control scheme for such a semi-autonomous mobile robot for humanitarian demining. This process requires the careful consideration and integration of multiple aspects: sensors and sensor data fusion, design of a control and software architecture, design of a path planning algorithm and robot control.},
    project = {Mobiniss, ViewFinder},
    url = {http://mecatron.rma.ac.be/pub/2007/Development_of_a_semi-autonomous_off-road_vehicle.pdf},
    unit= {meca-ras}
    }

2006

  • G. De Cubber, V. Enescu, H. Sahli, E. Demeester, M. Nuttin, and D. Vanhooydonck, “Active stereo vision-based mobile robot navigation for person tracking," Integrated Computer-Aided Engineering, vol. 13, p. 203–222, 2006.
    [BibTeX] [Abstract] [Download PDF]

    In this paper, we propose a mobile robot architecture for person tracking, consisting of an active stereo vision module (ASVM) and a navigation module (NM). The first uses a stereo head equipped with a pan-tilt mechanism to track a moving target (selected by an operator) and keep it centered in the visual field. Its output, i.e. the 3D position of the person, is fed to the NM, which drives the robot towards the target while avoiding obstacles. For this, a hybrid navigation algorithm is adopted with a reactive part that efficiently reacts to the most recent sensor data, and a deliberative part that generates a globally optimal path to a target destination, such as the person’s location. As a peculiarity of the system, there is no feedback from the NM or the robot motion controller (RMC) to the ASVM. While this imparts flexibility in combining the ASVM with a wide range of robot platforms, it puts considerable strain on the ASVM. Indeed, besides the changes in the target dynamics, it has to cope with the robot motion during obstacle avoidance. These disturbances are accommodated via a suitable stochastic dynamic model for the stereo head-target system. Robust tracking is achieved by combining a color-based particle filter with a method to update the color model of the target under changing illumination conditions. The main contributions of this paper lie in (1) devising a robust color-based 3D target tracking method, (2) proposing a hybrid deliberative/reactive navigation scheme, and (3) integrating them on a wheelchair platform for the final goal of person following. Experimental results are presented for ASVM separately and in combination with a wheelchair platform-based implementation of the NM.

    @Article{2c2cd28d2aea4009ae0135448c005050,
    author = {De Cubber, Geert and Valentin Enescu and Hichem Sahli and Eric Demeester and Marnix Nuttin and Dirk Vanhooydonck},
    journal = {Integrated Computer-Aided Engineering},
    title = {Active stereo vision-based mobile robot navigation for person tracking},
    year = {2006},
    issn = {1069-2509},
    month = jul,
    pages = {203--222},
    volume = {13},
    abstract = {In this paper, we propose a mobile robot architecture for person tracking, consisting of an active stereo vision module (ASVM) and a navigation module (NM). The first uses a stereo head equipped with a pan-tilt mechanism to track a moving target (selected by an operator) and keep it centered in the visual field. Its output, i.e. the 3D position of the person, is fed to the NM, which drives the robot towards the target while avoiding obstacles. For this, a hybrid navigation algorithm is adopted with a reactive part that efficiently reacts to the most recent sensor data, and a deliberative part that generates a globally optimal path to a target destination, such as the person's location. As a peculiarity of the system, there is no feedback from the NM or the robot motion controller (RMC) to the ASVM. While this imparts flexibility in combining the ASVM with a wide range of robot platforms, it puts considerable strain on the ASVM. Indeed, besides the changes in the target dynamics, it has to cope with the robot motion during obstacle avoidance. These disturbances are accommodated via a suitable stochastic dynamic model for the stereo head-target system. Robust tracking is achieved by combining a color-based particle filter with a method to update the color model of the target under changing illumination conditions. The main contributions of this paper lie in (1) devising a robust color-based 3D target tracking method, (2) proposing a hybrid deliberative/reactive navigation scheme, and (3) integrating them on a wheelchair platform for the final goal of person following. Experimental results are presented for ASVM separately and in combination with a wheelchair platform-based implementation of the NM.},
    day = {24},
    keywords = {mobile robot, active vision, stereo, navigation},
    language = {English},
    project = {Mobiniss, ViewFinder},
    publisher = {IOS Press},
    unit= {meca-ras,vub-etro},
    url = {https://cris.vub.be/en/publications/active-stereo-visionbased-mobile-robot-navigation-for-person-tracking(2c2cd28d-2aea-4009-ae01-35448c005050)/export.html},
    }
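    The tracker described above combines a colour-based particle filter with adaptive model updates. As a toy illustration of the particle-filter part only (a 1-D position tracker with a Gaussian likelihood standing in for the colour model; everything here is an assumption for illustration, not the paper's implementation):

    ```python
    import math
    import random

    # Toy particle filter: particles hypothesise the target position, weights
    # come from how well each hypothesis matches the observation, and
    # resampling concentrates particles on likely positions.

    def resample(particles, weights):
        """Draw a new particle set proportionally to the weights."""
        total = sum(weights)
        probs = [w / total for w in weights]
        return random.choices(particles, weights=probs, k=len(particles))

    def step(particles, observation, motion_noise=0.5, obs_noise=1.0):
        # Predict: diffuse particles to account for unknown target dynamics
        # (and, in the papers' setting, unmodelled robot motion).
        particles = [p + random.gauss(0.0, motion_noise) for p in particles]
        # Update: Gaussian observation likelihood as a colour-model stand-in.
        weights = [math.exp(-((p - observation) ** 2) / (2 * obs_noise ** 2))
                   for p in particles]
        return resample(particles, weights)

    random.seed(0)
    particles = [random.uniform(-10.0, 10.0) for _ in range(500)]
    for obs in [2.0, 2.1, 2.2, 2.0]:
        particles = step(particles, obs)
    estimate = sum(particles) / len(particles)  # posterior mean near 2
    ```

    Maintaining many hypotheses is what gives the method its robustness to outliers and short occlusions: a single wrong observation reweights the cloud rather than resetting a point estimate.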

  • D. Doroftei, E. Colon, and Y. Baudoin, “A modular control architecture for semi-autonomous navigation," in CLAWAR 2006, Brussels, Belgium, 2006, p. 712–715.
    [BibTeX] [Abstract] [Download PDF]

    Humanitarian demining is still a highly labor-intensive and high-risk operation. Advanced sensors and mechanical aids can significantly reduce the demining time. In this context, the aim is to develop a humanitarian demining mobile robot which is able to scan a minefield semi-automatically. This paper discusses the development of a control scheme for such a semi-autonomous mobile robot for humanitarian demining. This process requires the careful consideration and integration of multiple aspects: sensors and sensor data fusion, design of a control and software architecture, design of a path planning algorithm and robot control.

    @InProceedings{doro2006modular,
    author = {Doroftei, Daniela and Colon, Eric and Baudoin, Yvan},
    booktitle = {{CLAWAR} 2006},
    title = {A modular control architecture for semi-autonomous navigation},
    year = {2006},
    pages = {712--715},
    abstract = {Humanitarian demining is still a highly labor-intensive and high-risk operation. Advanced sensors and mechanical aids can significantly reduce the demining time. In this context, the aim is to develop a humanitarian demining mobile robot which is able to scan a minefield semi-automatically. This paper discusses the development of a control scheme for such a semi-autonomous mobile robot for humanitarian demining. This process requires the careful consideration and integration of multiple aspects: sensors and sensor data fusion, design of a control and software architecture, design of a path planning algorithm and robot control.},
    project = {Mobiniss, ViewFinder},
    address = {Brussels, Belgium},
    url = {http://mecatron.rma.ac.be/pub/2006/Clawar2006_Doroftei_colon.pdf},
    unit= {meca-ras}
    }

  • D. Doroftei, E. Colon, and Y. Baudoin, “Development of a control architecture for the ROBUDEM outdoor mobile robot platform," in IARP Workshop RISE 2006, Brussels, Belgium, 2006.
    [BibTeX] [Abstract] [Download PDF]

    Humanitarian demining still is a highly labor-intensive and high-risk operation. Advanced sensors and mechanical aids can significantly reduce the demining time. In this context, it is the aim to develop a humanitarian demining mobile robot which is able to scan a minefield semi-automatically. This paper discusses the development of a control scheme for such a semi-autonomous mobile robot for humanitarian demining. This process requires the careful consideration and integration of multiple aspects: sensors and sensor data fusion, design of a control and software architecture, design of a path planning algorithm and robot control.

    @InProceedings{doro2006development,
    author = {Doroftei, Daniela and Colon, Eric and Baudoin, Yvan},
    booktitle = {{IARP} Workshop {RISE} 2006},
    title = {Development of a control architecture for the ROBUDEM outdoor mobile robot platform},
    year = {2006},
    abstract = {Humanitarian demining still is a highly labor-intensive and high-risk operation. Advanced sensors and mechanical aids can significantly reduce the demining time. In this context, it is the aim to develop a humanitarian demining mobile robot which is able to scan a minefield semi-automatically. This paper discusses the development of a control scheme for such a semi-autonomous mobile robot for humanitarian demining. This process requires the careful consideration and integration of multiple aspects: sensors and sensor data fusion, design of a control and software architecture, design of a path planning algorithm and robot control. },
    project = {Mobiniss, ViewFinder},
    address = {Brussels, Belgium},
    url = {http://mecatron.rma.ac.be/pub/2006/IARPWS2006_Doroftei_Colon.pdf},
    unit= {meca-ras}
    }

2005

  • V. Enescu, G. De Cubber, H. Sahli, E. Demeester, D. Vanhooydonck, and M. Nuttin, “Active stereo vision-based mobile robot navigation for person tracking," in International Conference on Informatics in Control, Automation and Robotics, Barcelona, Spain, 2005, p. 32–39.
    [BibTeX] [Abstract] [Download PDF] [DOI]

    In this paper, we propose a mobile robot architecture for person tracking, consisting of an active stereo vision module (ASVM) and a navigation module (NM). The first tracks the person in stereo images and controls the pan/tilt unit to keep the target in the visual field. Its output, i.e. the 3D position of the person, is fed to the NM, which drives the robot towards the target while avoiding obstacles. As a peculiarity of the system, there is no feedback from the NM or the robot motion controller (RMC) to the ASVM. While this imparts flexibility in combining the ASVM with a wide range of robot platforms, it puts considerable strain on the ASVM. Indeed, besides the changes in the target dynamics, it has to cope with the robot motion during obstacle avoidance. These disturbances are accommodated by generating target location hypotheses in an efficient manner. Robustness against outliers and occlusions is achieved by employing a multi-hypothesis tracking method, the particle filter, based on a color model of the target. Moreover, to deal with illumination changes, the system adaptively updates the color model of the target. The main contributions of this paper lie in (1) devising a stereo, color-based target tracking method using the stereo geometry constraint and (2) integrating it with a robotic agent in a loosely coupled manner.

    @InProceedings{enescu2005active,
    author = {Enescu, Valentin and De Cubber, Geert and Sahli, Hichem and Demeester, Eric and Vanhooydonck, Dirk and Nuttin, Marnix},
    booktitle = {International Conference on Informatics in Control, Automation and Robotics},
    title = {Active stereo vision-based mobile robot navigation for person tracking},
    year = {2005},
    address = {Barcelona, Spain},
    month = sep,
    pages = {32--39},
    abstract = {In this paper, we propose a mobile robot architecture for person tracking, consisting of an active stereo vision module (ASVM) and a navigation module (NM). The first tracks the person in stereo images and controls the pan/tilt unit to keep the target in the visual field. Its output, i.e. the 3D position of the person, is fed to the NM, which drives the robot towards the target while avoiding obstacles. As a peculiarity of the system, there is no feedback from the NM or the robot motion controller (RMC) to the ASVM. While this imparts flexibility in combining the ASVM with a wide range of robot platforms, it puts considerable strain on the ASVM. Indeed, besides the changes in the target dynamics, it has to cope with the robot motion during obstacle avoidance. These disturbances are accommodated by generating target location hypotheses in an efficient manner. Robustness against outliers and occlusions is achieved by employing a multi-hypothesis tracking method - the particle filter - based on a color model of the target. Moreover, to deal with illumination changes, the system adaptively updates the color model of the target. The main contributions of this paper lie in (1) devising a stereo, color-based target tracking method using the stereo geometry constraint and (2) integrating it with a robotic agent in a loosely coupled manner.},
    project = {Mobiniss, ViewFinder},
    doi = {10.3233/ica-2006-13302},
    url = {http://mecatron.rma.ac.be/pub/2005/f969ee9e1169623340aa409f539fddb9c413.pdf},
    unit= {meca-ras,vub-etro}
    }
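
The color-based particle filter described in the abstract above can be illustrated with a minimal sketch. Everything here is a hypothetical stand-in rather than the authors' implementation: the `measure_likelihood` helper plays the role of the color-model similarity score, and the image size, particle count, and noise levels are assumed values. Particles hypothesize the target position, are diffused to absorb unmodelled robot motion, weighted by the likelihood, and resampled.

```python
import numpy as np

def particle_filter_track(measure_likelihood, n_particles=500,
                          n_steps=20, motion_std=5.0, seed=0):
    """Minimal color-based particle filter sketch (illustrative only).

    measure_likelihood(x, y) -> float: similarity between the target's
    color model and the image patch at pixel (x, y)."""
    rng = np.random.default_rng(seed)
    # Initialise particles uniformly over a hypothetical 100x100 image.
    particles = rng.uniform(0, 100, size=(n_particles, 2))
    for _ in range(n_steps):
        # Predict: diffuse particles to absorb target dynamics and robot motion.
        particles += rng.normal(0.0, motion_std, particles.shape)
        particles = np.clip(particles, 0, 100)
        # Update: weight each location hypothesis by the color-model likelihood.
        weights = np.array([measure_likelihood(x, y) for x, y in particles])
        weights /= weights.sum()
        # Systematic resampling: concentrate particles on strong hypotheses.
        positions = (rng.random() + np.arange(n_particles)) / n_particles
        idx = np.searchsorted(np.cumsum(weights), positions)
        particles = particles[np.minimum(idx, n_particles - 1)]
    return particles.mean(axis=0)  # posterior mean = target position estimate

# Synthetic likelihood peaking at a target placed at pixel (60, 40).
target = np.array([60.0, 40.0])
est = particle_filter_track(
    lambda x, y: np.exp(-((x - target[0])**2 + (y - target[1])**2) / (2 * 8.0**2)))
```

The diffusion-only motion model is what lets such a tracker tolerate the missing feedback from the navigation module: disturbances from obstacle-avoidance manoeuvres are simply absorbed as extra process noise.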

2004

  • G. De Cubber, S. A. Berrabah, and H. Sahli, “Color-based visual servoing under varying illumination conditions," Robotics and Autonomous Systems, vol. 47, iss. 4, p. 225–249, 2004.
    [BibTeX] [Abstract] [Download PDF] [DOI]

    Visual servoing, or the control of motion on the basis of image analysis in a closed loop, is more and more recognized as an important tool in modern robotics. Here, we present a new model driven approach to derive a description of the motion of a target object. This method can be subdivided into an illumination invariant target detection stage and a servoing process which uses an adaptive Kalman filter to update the model of the non-linear system. This technique can be applied to any pan tilt zoom camera mounted on a mobile vehicle as well as to a static camera tracking moving environmental features.

    @Article{de2004color,
    author = {De Cubber, Geert and Berrabah, Sid Ahmed and Sahli, Hichem},
    journal = {Robotics and Autonomous Systems},
    title = {Color-based visual servoing under varying illumination conditions},
    year = {2004},
    month = jul,
    number = {4},
    pages = {225--249},
    volume = {47},
    abstract = {Visual servoing, or the control of motion on the basis of image analysis in a closed loop, is more and more recognized as an important tool in modern robotics. Here, we present a new model driven approach to derive a description of the motion of a target object. This method can be subdivided into an illumination invariant target detection stage and a servoing process which uses an adaptive Kalman filter to update the model of the non-linear system. This technique can be applied to any pan tilt zoom camera mounted on a mobile vehicle as well as to a static camera tracking moving environmental features.},
    doi = {10.1016/j.robot.2004.03.015},
    publisher = {Elsevier {BV}},
    project = {Mobiniss},
    url = {https://www.sciencedirect.com/science/article/abs/pii/S0921889004000570},
    unit= {meca-ras,vub-etro}
    }
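
The adaptive Kalman filter mentioned in the abstract can be sketched in simplified, non-adaptive form for a one-dimensional constant-velocity target. The state model, noise covariances, and data below are illustrative assumptions, not the paper's actual system (which additionally adapts the filter online).

```python
import numpy as np

def kalman_track(measurements, dt=1.0, q=1e-2, r=1.0):
    """Constant-velocity Kalman filter over state [position, velocity]."""
    F = np.array([[1.0, dt], [0.0, 1.0]])  # state transition
    H = np.array([[1.0, 0.0]])             # we only measure position
    Q = q * np.eye(2)                      # process noise covariance
    R = np.array([[r]])                    # measurement noise covariance
    x = np.zeros(2)                        # initial state estimate
    P = 10.0 * np.eye(2)                   # initial state covariance
    estimates = []
    for z in measurements:
        # Predict step
        x = F @ x
        P = F @ P @ F.T + Q
        # Update step with measurement z
        S = H @ P @ H.T + R                # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
        x = x + K @ (np.array([z]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        estimates.append(x[0])
    return estimates

# Target moving at 1 unit/step, observed with unit-variance position noise.
rng = np.random.default_rng(1)
truth = np.arange(30.0)
noisy = truth + rng.normal(0.0, 1.0, truth.size)
est = kalman_track(noisy)
```

In the visual-servoing setting, the measurement would be the detected target position in the image, and the adaptive variant would retune Q and R as illumination and target dynamics change.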

2003

  • G. De Cubber, H. Sahli, E. Colon, and Y. Baudoin, “Visual Servoing under Changing Illumination Conditions," in Proc. International Workshop on Attention and Performance in Computer Vision (ICVS03), Graz, Austria, 2003, p. 47–54.
    [BibTeX] [Abstract] [Download PDF]

    Visual servoing, or the control of motion on the basis of image analysis in a closed loop, is more and more recognized as an important tool in modern robotics. In this paper, we present a new model-driven approach to derive a description of the motion of a target object. This method can be subdivided into an illumination invariant target detection stage and a servoing process which uses an adaptive Kalman filter to update the model of the nonlinear system. This technique can be applied to any pan-tilt-zoom camera mounted on a mobile vehicle as well as to a static camera tracking moving environmental features.

    @InProceedings{de2003visual,
    author = {De Cubber, Geert and Sahli, Hichem and Colon, Eric and Baudoin, Yvan},
    booktitle = {Proc. International Workshop on Attention and Performance in Computer Vision (ICVS03)},
    title = {Visual Servoing under Changing Illumination Conditions},
    year = {2003},
    pages = {47--54},
    address = {Graz, Austria},
    abstract = {Visual servoing, or the control of motion on the basis of image analysis in a closed loop, is more and more recognized as an important tool in modern robotics. In this paper, we present a new model-driven approach to derive a description of the motion of a target object. This method can be subdivided into an illumination invariant target detection stage and a servoing process which uses an adaptive Kalman filter to update the model of the nonlinear system. This technique can be applied to any pan-tilt-zoom camera mounted on a mobile vehicle as well as to a static camera tracking moving environmental features.},
    url = {http://mecatron.rma.ac.be/pub/2003/ICVS03_Geert.pdf},
    project = {Mobiniss},
    unit= {meca-ras,vub-etro}
    }

  • G. De Cubber, S. A. Berrabah, and H. Sahli, “A Bayesian Approach for Color Constancy based Visual Servoing," in 11th International Conference on Advanced Robotics, Coimbra, Portugal, 2003.
    [BibTeX] [Download PDF]
    @InProceedings{de2003bayesian,
    author = {De Cubber, Geert and Berrabah, Sid Ahmed and Sahli, Hichem},
    booktitle = {11th International Conference on Advanced Robotics},
    title = {A Bayesian Approach for Color Constancy based Visual Servoing},
    year = {2003},
    address = {Coimbra, Portugal},
    unit= {meca-ras,vub-etro},
    project = {Mobiniss},
    url = {https://www.semanticscholar.org/paper/A-Bayesian-Approach-for-Color-Constancy-based-Cubber-Berrabah/ed5636626e307f2b8d0c5f4fcc79d5d54a9cc639},
    }

2002

  • G. De Cubber, H. Sahli, and F. Decroos, “Sensor Integration on a Mobile Robot," in ISMCR 2002: Proc. 12th Int’l Symp. Measurement and Control in Robotics, Bourges, France, 2002.
    [BibTeX] [Abstract] [Download PDF]

    The purpose of this paper is to show an application of path planning for a mobile pneumatic robot. The robot is capable of searching for a specific target in the scene and navigating towards it, in an a priori unknown environment. To accomplish this task, the robot uses a colour pan-tilt camera and two ultrasonic sensors. As the camera is only used for target tracking, the robot is left with very incomplete sensor data with a high degree of uncertainty. To counter this, a fuzzy logic-based sensor fusion procedure is set up to aid the map building process in constructing a reliable environmental model. The significance of this work is that it shows that the use of fuzzy logic-based fusion and potential field navigation can achieve good results for path planning.

    @InProceedings{de2002sensor,
    author = {De Cubber, Geert and Sahli, Hichem and Decroos, Francis},
    booktitle = {ISMCR 2002: Proc. 12th Int'l Symp. Measurement and Control in Robotics},
    title = {Sensor Integration on a Mobile Robot},
    year = {2002},
    address = {Bourges, France},
    abstract = {The purpose of this paper is to show an application of path planning for a mobile pneumatic robot. The robot is capable of searching for a specific target in the scene and navigating towards it, in an a priori unknown environment. To accomplish this task, the robot uses a colour pan-tilt camera and two ultrasonic sensors. As the camera is only used for target tracking, the robot is left with very incomplete sensor data with a high degree of uncertainty. To counter this, a fuzzy logic-based sensor fusion procedure is set up to aid the map building process in constructing a reliable environmental model. The significance of this work is that it shows that the use of fuzzy logic-based fusion and potential field navigation can achieve good results for path planning.},
    url = {http://mecatron.rma.ac.be/pub/2002/Paper ISMCR'02 - Sensor Integration on a Mobile Robot.pdf},
    project = {Mobiniss},
    unit= {meca-ras,vub-etro}
    }
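
The potential-field navigation mentioned in the abstract above can be sketched with the classic attractive/repulsive formulation. This is a generic textbook scheme, not the paper's planner; the gains, influence radius, and the synthetic obstacle are assumed values for illustration.

```python
import numpy as np

def potential_field_step(pos, goal, obstacles, k_att=1.0, k_rep=50.0, d0=5.0):
    """One descent step on an attractive/repulsive potential field.

    The goal exerts a linear attractive force; each obstacle repels the
    robot only within its influence radius d0."""
    pos = np.asarray(pos, dtype=float)
    goal = np.asarray(goal, dtype=float)
    force = k_att * (goal - pos)                  # attractive pull toward goal
    for obs in obstacles:
        diff = pos - np.asarray(obs, dtype=float)
        d = np.linalg.norm(diff)
        if 1e-9 < d < d0:                         # repulsion inside radius d0
            force += k_rep * (1.0 / d - 1.0 / d0) / d**3 * diff
    n = np.linalg.norm(force)
    # Move one unit along the force, or less when the force is already small.
    return pos + (force if n < 1.0 else force / n)

# Robot at the origin heads for (20, 0) past an obstacle at (10, 3).
pos = np.array([0.0, 0.0])
for _ in range(30):
    pos = potential_field_step(pos, goal=(20.0, 0.0), obstacles=[(10.0, 3.0)])
```

In the paper's setting, the obstacle positions would come from the fuzzy-logic-fused ultrasonic map rather than being given a priori.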

  • G. De Cubber, H. Sahli, H. Ping, and E. Colon, “A Colour Constancy Approach for Illumination Invariant Colour Target Tracking," in IARP Workshop on Robots for Humanitarian Demining, Vienna, Austria, 2002.
    [BibTeX] [Abstract] [Download PDF]

    Many robotic agents use color vision to retrieve quality information about the environment. In this work, we present a visual servoing technique, where vision is the primary sensing modality and sensing is based upon the analysis of the perceived visual information. We describe how colored targets can be identified and how their position and motion can be estimated quickly and reliably. The visual servoing procedure is essentially a four-stage process, with color target identification, motion parameter estimation, target tracking and target position estimation. These individual parts add up to a global vision system enabling precise positioning for a demining robot.

    @InProceedings{de2002colour,
    author = {De Cubber, Geert and Sahli, Hichem and Ping, Hong and Colon, Eric},
    booktitle = {IARP Workshop on Robots for Humanitarian Demining},
    title = {A Colour Constancy Approach for Illumination Invariant Colour Target Tracking},
    year = {2002},
    address = {Vienna, Austria},
    abstract = {Many robotic agents use color vision to retrieve quality information about the environment. In this work, we present a visual servoing technique, where vision is the primary sensing modality and sensing is based upon the analysis of the perceived visual information. We describe how colored targets can be identified and how their position and motion can be estimated quickly and reliably. The visual servoing procedure is essentially a four-stage process, with color target identification, motion parameter estimation, target tracking and target position estimation. These individual parts add up to a global vision system enabling precise positioning for a demining robot.},
    url = {http://mecatron.rma.ac.be/pub/2002/Paper IARP - Geert De Cubber.pdf},
    project = {Mobiniss},
    unit= {meca-ras,vub-etro}
    }