In the event of an emergency due to a fire or other crisis, establishing whether the ground can be entered safely by human beings is a time-consuming pre-requisite that can delay the actual rescue operation. The VIEW-FINDER project seeks to develop and deploy robots and an advanced base station for inspection and in-situ data gathering.
The robots will be fitted with onboard TV/IR cameras, LADAR and other sensors to enhance scene reconstruction, as well as a wide array of chemical sensors. The data will be sent to the base station for processing and presented to the operation command, combined with information originating from a web of sources. The information can also be forwarded to the relevant forces dealing with the crisis (e.g. fire fighters, rescue workers and police).
Besides the task-specific sensors, 'conventional' sensors will support navigation. The robots will navigate individually or cooperatively by following high-level instructions from the base station. The robots are built from off-the-shelf units, including wheeled robots for the common fire ground and tracked (caterpillar) robots for more exceptional circumstances. The robots will connect to the base station and to each other using a wireless self-organising network of mobile communication nodes (consisting of other robots) that adapts to the terrain. The robots are intended to be used as the first explorers of the danger area, as well as to act as in-situ supporters and safeguards to human personnel.
The base station collects the in-situ data and combines it with information retrieved from the large-scale GMES information bases. It is equipped with a sophisticated human interface to display the information in a convenient and usable form to the human operators and the operation command. The project will provide proof-of-concept solutions, to be evaluated by a board of End-Users, thus ensuring that all operational needs are addressed and accounted for. Project workshops aimed at further dissemination and exploitation of the results will be organised.
European Commission
6th Framework Programme
2006 – 2009
3.5 M€
Project Video Gallery
Final Demonstration
Project Publications
2013
- G. De Cubber and H. Sahli, “Augmented Lagrangian-based approach for dense three-dimensional structure and motion estimation from binocular image sequences," IET Computer Vision, 2013.
[BibTeX] [Abstract] [Download PDF] [DOI]
In this study, the authors propose a framework for stereo–motion integration for dense depth estimation. They formulate the stereo–motion depth reconstruction problem into a constrained minimisation one. A sequential unconstrained minimisation technique, namely, the augmented Lagrange multiplier (ALM) method has been implemented to address the resulting constrained optimisation problem. ALM has been chosen because of its relative insensitivity to whether the initial design points for a pseudo-objective function are feasible or not. The development of the method and results from solving the stereo–motion integration problem are presented. Although the authors' work is not the only one adopting the ALM framework in the computer vision context, to their knowledge the presented algorithm is the first to use this mathematical framework in a context of stereo–motion integration. This study describes how the stereo–motion integration problem was cast in a mathematical context and solved using the presented ALM method. Results on benchmark and real visual input data show the validity of the approach.
@Article{de2013augmented, author = {De Cubber, Geert and Sahli, Hichem}, journal = {IET Computer Vision}, title = {Augmented Lagrangian-based approach for dense three-dimensional structure and motion estimation from binocular image sequences}, year = {2013}, abstract = {In this study, the authors propose a framework for stereo–motion integration for dense depth estimation. They formulate the stereo–motion depth reconstruction problem into a constrained minimisation one. A sequential unconstrained minimisation technique, namely, the augmented Lagrange multiplier (ALM) method has been implemented to address the resulting constrained optimisation problem. ALM has been chosen because of its relative insensitivity to whether the initial design points for a pseudo-objective function are feasible or not. The development of the method and results from solving the stereo–motion integration problem are presented. Although the authors' work is not the only one adopting the ALM framework in the computer vision context, to their knowledge the presented algorithm is the first to use this mathematical framework in a context of stereo–motion integration. This study describes how the stereo–motion integration problem was cast in a mathematical context and solved using the presented ALM method. Results on benchmark and real visual input data show the validity of the approach.}, doi = {10.1049/iet-cvi.2013.0017}, publisher = {IET Digital Library}, project = {ICARUS,ViewFinder,Mobiniss}, url = {https://digital-library.theiet.org/content/journals/10.1049/iet-cvi.2013.0017}, unit= {meca-ras} }
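The augmented Lagrange multiplier machinery named in this abstract is generic and compact enough to illustrate. The sketch below (a toy objective and constraint chosen purely for illustration, assuming NumPy and SciPy; it is not the paper's solver) shows the basic ALM loop: repeatedly minimise the augmented Lagrangian of a constrained problem, then update the multiplier estimate with the constraint residual.

    # Minimal augmented-Lagrangian sketch: minimise f(x) = x0^2 + x1^2
    # subject to h(x) = x0 + x1 - 1 = 0 (toy problem, for illustration only).
    import numpy as np
    from scipy.optimize import minimize

    def f(x):
        return x[0] ** 2 + x[1] ** 2

    def h(x):
        return x[0] + x[1] - 1.0

    lam, mu = 0.0, 10.0                # multiplier estimate, penalty weight
    x = np.zeros(2)
    for _ in range(20):
        # Unconstrained subproblem: minimise the augmented Lagrangian.
        La = lambda x: f(x) + lam * h(x) + 0.5 * mu * h(x) ** 2
        x = minimize(La, x).x
        lam += mu * h(x)               # first-order multiplier update
        if abs(h(x)) < 1e-8:
            break

    print(x)                           # converges to [0.5, 0.5]

As the abstract notes, a practical attraction of ALM is that the iterates need not start from, or stay in, the feasible set.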
2012
- G. De Cubber and H. Sahli, “Partial differential equation-based dense 3D structure and motion estimation from monocular image sequences," IET computer vision, vol. 6, iss. 3, p. 174–185, 2012.
[BibTeX] [DOI]
@Article{de2012partial, author = {De Cubber, Geert and Sahli, Hichem}, journal = {IET computer vision}, title = {Partial differential equation-based dense {3D} structure and motion estimation from monocular image sequences}, year = {2012}, number = {3}, pages = {174--185}, volume = {6}, doi = {10.1049/iet-cvi.2011.0174}, project = {ViewFinder, Mobiniss}, publisher = {IET Digital Library}, unit= {meca-ras,vu-etro} }
2011
- G. De Cubber, D. Doroftei, H. Sahli, and Y. Baudoin, “Outdoor Terrain Traversability Analysis for Robot Navigation using a Time-Of-Flight Camera," in RGB-D Workshop on 3D Perception in Robotics, Vasteras, Sweden, 2011.
[BibTeX] [Abstract] [Download PDF]
Autonomous robotic systems operating in unstructured outdoor environments need to estimate the traversability of the terrain in order to navigate safely. Traversability estimation is a challenging problem, as the traversability is a complex function of both the terrain characteristics, such as slopes, vegetation, rocks, etc., and the robot mobility characteristics, i.e. locomotion method, wheels, etc. It is thus required to analyze in real-time the 3D characteristics of the terrain and pair this data to the robot capabilities. In this paper, a method is introduced to estimate the traversability using data from a time-of-flight camera.
@InProceedings{de2011outdoor, author = {De Cubber, Geert and Doroftei, Daniela and Sahli, Hichem and Baudoin, Yvan}, booktitle = {RGB-D Workshop on 3D Perception in Robotics}, title = {Outdoor Terrain Traversability Analysis for Robot Navigation using a Time-Of-Flight Camera}, year = {2011}, abstract = {Autonomous robotic systems operating in unstructured outdoor environments need to estimate the traversability of the terrain in order to navigate safely. Traversability estimation is a challenging problem, as the traversability is a complex function of both the terrain characteristics, such as slopes, vegetation, rocks, etc., and the robot mobility characteristics, i.e. locomotion method, wheels, etc. It is thus required to analyze in real-time the 3D characteristics of the terrain and pair this data to the robot capabilities. In this paper, a method is introduced to estimate the traversability using data from a time-of-flight camera.}, project = {ViewFinder, Mobiniss}, address = {Vasteras, Sweden}, url = {http://mecatron.rma.ac.be/pub/2011/TTA_TOF.pdf}, unit= {meca-ras} }
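As a concrete illustration of pairing terrain geometry with a robot capability limit, the sketch below (hypothetical NumPy code; the grid pitch, slope threshold and synthetic elevation data are assumptions for illustration, not the paper's parameters) classifies elevation-grid cells by local slope.

    # Slope-based traversability test on an elevation grid (illustrative).
    import numpy as np

    def traversability_mask(height, cell_size=0.05, max_slope_deg=15.0):
        # height: 2D array of terrain elevations [m]; cell_size: grid pitch [m].
        gy, gx = np.gradient(height, cell_size)          # local surface gradients
        slope = np.degrees(np.arctan(np.hypot(gx, gy)))  # steepness per cell
        return slope <= max_slope_deg                    # True where traversable

    terrain = np.random.rand(120, 160) * 0.02            # gently rough ground
    terrain[40:60, 50:70] += 0.5                         # synthetic obstacle block
    print(traversability_mask(terrain).mean())           # fraction of safe cells

A real system would additionally reason about step height, roughness and the robot's locomotion type, as the abstract stresses.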
2010
- G. De Cubber, S. A. Berrabah, D. Doroftei, Y. Baudoin, and H. Sahli, “Combining Dense Structure from Motion and Visual SLAM in a Behavior-Based Robot Control Architecture," International Journal of Advanced Robotic Systems, vol. 7, iss. 1, 2010.
[BibTeX] [Abstract] [Download PDF] [DOI]
In this paper, we present a control architecture for an intelligent outdoor mobile robot. This enables the robot to navigate in a complex, natural outdoor environment, relying on only a single on-board camera as sensory input. This is achieved through a twofold analysis of the visual data stream: a dense structure from motion algorithm calculates a depth map of the environment and a visual simultaneous localization and mapping algorithm builds a map of the surroundings using image features. This information enables a behavior-based robot motion and path planner to navigate the robot through the environment. In this paper, we show the theoretical aspects of setting up this architecture.
@Article{de2010combining, author = {De Cubber, Geert and Sid Ahmed Berrabah and Daniela Doroftei and Yvan Baudoin and Hichem Sahli}, journal = {International Journal of Advanced Robotic Systems}, title = {Combining Dense Structure from Motion and Visual {SLAM} in a Behavior-Based Robot Control Architecture}, year = {2010}, month = mar, number = {1}, volume = {7}, abstract = {In this paper, we present a control architecture for an intelligent outdoor mobile robot. This enables the robot to navigate in a complex, natural outdoor environment, relying on only a single on-board camera as sensory input. This is achieved through a twofold analysis of the visual data stream: a dense structure from motion algorithm calculates a depth map of the environment and a visual simultaneous localization and mapping algorithm builds a map of the surroundings using image features. This information enables a behavior-based robot motion and path planner to navigate the robot through the environment. In this paper, we show the theoretical aspects of setting up this architecture.}, doi = {10.5772/7240}, publisher = {{SAGE} Publications}, project = {ViewFinder, Mobiniss}, url = {http://mecatron.rma.ac.be/pub/2010/e_from_motion_and_visual_slam_in_a_behavior-based_robot_control_architecture.pdf}, unit= {meca-ras,vub-etro} }
- Y. Baudoin, D. Doroftei, G. De Cubber, S. A. Berrabah, E. Colon, C. Pinzon, A. Maslowski, J. Bedkowski, and J. Penders, “VIEW-FINDER: Robotics Assistance to fire-Fighting services," in Mobile Robotics: Solutions and Challenges, 2010, p. 397–406.
[BibTeX] [Abstract] [Download PDF]
This paper presents an overview of the View-Finder project.
@InCollection{baudoin2010view, author = {Baudoin, Yvan and Doroftei, Daniela and De Cubber, Geert and Berrabah, Sid Ahmed and Colon, Eric and Pinzon, Carlos and Maslowski, Andrzej and Bedkowski, Janusz and Penders, Jacques}, booktitle = {Mobile Robotics: Solutions and Challenges}, title = {{VIEW-FINDER}: Robotics Assistance to fire-Fighting services}, year = {2010}, pages = {397--406}, abstract = {This paper presents an overview of the View-Finder project}, project = {ViewFinder}, unit= {meca-ras}, url = {https://books.google.be/books?id=zcfFCgAAQBAJ&pg=PA397&lpg=PA397&dq=VIEW-FINDER: Robotics Assistance to fire-Fighting services mobile robots&source=bl&ots=Jh6P63OKCr&sig=O1GPy_c42NPSEdO8Hb_pa9V6K7g&hl=en&sa=X&ved=2ahUKEwiLr76B-5zfAhUMCewKHQS_Af0Q6AEwDXoECAEQAQ#v=onepage&q=VIEW-FINDER: Robotics Assistance to fire-Fighting services mobile robots&f=false}, }
- G. De Cubber, “On-line and Off-line 3D Reconstruction for Crisis Management Applications," in Fourth International Workshop on Robotics for risky interventions and Environmental Surveillance-Maintenance, RISE’2010, Sheffield, UK, 2010.
[BibTeX] [Abstract] [Download PDF]
We present in this paper a 3D reconstruction methodology. This approach fuses dense stereo and sparse motion data to estimate high quality instantaneous depth maps. This methodology achieves near realtime processing frame rates, such that it can be directly used on-line by the crisis management teams.
@InProceedings{de2010line, author = {De Cubber, Geert}, booktitle = {Fourth International Workshop on Robotics for risky interventions and Environmental Surveillance-Maintenance, RISE’2010}, title = {On-line and Off-line {3D} Reconstruction for Crisis Management Applications}, year = {2010}, abstract = {We present in this paper a 3D reconstruction methodology. This approach fuses dense stereo and sparse motion data to estimate high quality instantaneous depth maps. This methodology achieves near realtime processing frame rates, such that it can be directly used on-line by the crisis management teams.}, project = {ViewFinder, Mobiniss}, address = {Sheffield, UK}, url = {http://mecatron.rma.ac.be/pub/RISE/RISE - 2010/On-line and Off-line 3D Reconstruction_Geert_De_Cubber.pdf}, unit= {meca-ras} }
- Y. Baudoin, G. De Cubber, E. Colon, D. Doroftei, and S. A. Berrabah, “Robotics Assistance by Risky Interventions: Needs and Realistic Solutions," in Workshop on Robotics for Extreme conditions, Saint-Petersburg, Russia, 2010.
[BibTeX] [Abstract] [Download PDF]
This paper discusses the requirements towards robotics systems in the domains of firefighting, CBRN-E and humanitarian demining.
@InProceedings{baudoin2010robotics, author = {Baudoin, Yvan and De Cubber, Geert and Colon, Eric and Doroftei, Daniela and Berrabah, Sid Ahmed}, booktitle = {Workshop on Robotics for Extreme conditions}, title = {Robotics Assistance by Risky Interventions: Needs and Realistic Solutions}, year = {2010}, abstract = {This paper discusses the requirements towards robotics systems in the domains of firefighting, CBRN-E and humanitarian demining.}, project = {ViewFinder, Mobiniss}, address = {Saint-Petersburg, Russia}, url = {http://mecatron.rma.ac.be/pub/2010/Robotics Assistance by risky interventions.pdf}, unit= {meca-ras} }
- G. De Cubber, “Variational methods for dense depth reconstruction from monocular and binocular video sequences," PhD Thesis, 2010.
[BibTeX] [Abstract] [Download PDF]
This research work tackles the problem of dense three-dimensional reconstruction from monocular and binocular image sequences. Recovering 3D-information has been in the focus of attention of the computer vision community for a few decades now, yet no all-satisfying method has been found so far. The main problem with vision is that the perceived computer image is a two-dimensional projection of the 3D world. Three-dimensional reconstruction can thus be regarded as the process of re-projecting the 2D image(s) back to a 3D model, as such recovering the depth dimension which was lost during projection. In this work, we focus on dense reconstruction, meaning that a depth estimate is sought for each pixel of the input image. Most attention in the 3D reconstruction area has been on stereo-vision based methods, which use the displacement of objects in two (or more) images. Where stereo vision must be seen as a spatial integration of multiple viewpoints to recover depth, it is also possible to perform a temporal integration. The problem arising in this situation is known as the Structure from Motion problem and deals with extracting 3-dimensional information about the environment from the motion of its projection onto a two-dimensional surface. Based upon the observation that the human visual system uses both stereo and structure from motion for 3D reconstruction, this research work also targets the combination of stereo information in a structure from motion-based 3D-reconstruction scheme. The data fusion problem arising in this case is solved by casting it as an energy minimization problem in a variational framework.
@PhdThesis{de2010variational, author = {De Cubber, Geert}, school = {Vrije Universiteit Brussel-Royal Military Academy}, title = {Variational methods for dense depth reconstruction from monocular and binocular video sequences}, year = {2010}, abstract = {This research work tackles the problem of dense three-dimensional reconstruction from monocular and binocular image sequences. Recovering 3D-information has been in the focus of attention of the computer vision community for a few decades now, yet no all-satisfying method has been found so far. The main problem with vision is that the perceived computer image is a two-dimensional projection of the 3D world. Three-dimensional reconstruction can thus be regarded as the process of re-projecting the 2D image(s) back to a 3D model, as such recovering the depth dimension which was lost during projection. In this work, we focus on dense reconstruction, meaning that a depth estimate is sought for each pixel of the input image. Most attention in the 3D reconstruction area has been on stereo-vision based methods, which use the displacement of objects in two (or more) images. Where stereo vision must be seen as a spatial integration of multiple viewpoints to recover depth, it is also possible to perform a temporal integration. The problem arising in this situation is known as the Structure from Motion problem and deals with extracting 3-dimensional information about the environment from the motion of its projection onto a two-dimensional surface. Based upon the observation that the human visual system uses both stereo and structure from motion for 3D reconstruction, this research work also targets the combination of stereo information in a structure from motion-based 3D-reconstruction scheme. The data fusion problem arising in this case is solved by casting it as an energy minimization problem in a variational framework.}, project = {ViewFinder, Mobiniss}, url = {http://mecatron.rma.ac.be/pub/2010/PhD_Thesis_Geert_.pdf}, unit= {meca-ras,vub-etro} }
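The stereo-plus-motion fusion described in this abstract is cast as an energy minimisation problem; for readers who want the shape of such a formulation, the equation below gives a generic variational depth-fusion energy (illustrative notation and weighting, not copied from the thesis): a stereo data term, a motion data term and a total-variation smoothness prior, minimised over the dense depth map Z.

    \begin{equation}
    E(Z) = \int_{\Omega} \Big( \lambda_s\,\rho\big(I_L(\mathbf{x}) - I_R(\mathbf{x} - d(Z,\mathbf{x}))\big)
         + \lambda_m\,\rho\big(I_t(\mathbf{x}) - I_{t+1}(\mathbf{w}(\mathbf{x},Z))\big)
         + \lVert \nabla Z(\mathbf{x}) \rVert \Big)\,d\mathbf{x}
    \end{equation}

Here d(Z, x) is the disparity induced by depth Z, w(x, Z) the motion-induced warp, and rho a robust penalty; the Euler-Lagrange equations of such functionals lead to the partial differential equations referred to in the 2012 journal paper above.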
- G. De Cubber, D. Doroftei, and S. A. Berrabah, “Using visual perception for controlling an outdoor robot in a crisis management scenario," in ROBOTICS 2010, Clermont-Ferrand, France, 2010.
[BibTeX] [Abstract] [Download PDF]
Crisis management teams (e.g. fire and rescue services, anti-terrorist units …) are often confronted with dramatic situations where critical decisions have to be made within hard time constraints. Therefore, they need correct information about what is happening on the crisis site. In this context, the View-Finder project aims at developing robots which can assist the human crisis managers, by gathering data. This paper gives an overview of the development of such an outdoor robot. The presented robotic system is able to detect human victims at the incident site, by using vision-based human body shape detection. To increase the perceptual awareness of the human crisis managers, the robotic system is capable of reconstructing a 3D model of the environment, based on vision data. Also for navigation, the robot depends mostly on visual perception, as it combines a model-based navigation approach using geo-referenced positioning with stereo-based terrain traversability analysis for obstacle avoidance. The robot control scheme is embedded in a behavior-based robot control architecture, which integrates all the robot capabilities. This paper discusses all the above mentioned technologies.
@InProceedings{de2010using, author = {De Cubber, Geert and Doroftei, Daniela and Berrabah, Sid Ahmed}, booktitle = {ROBOTICS 2010}, title = {Using visual perception for controlling an outdoor robot in a crisis management scenario}, year = {2010}, abstract = {Crisis management teams (e.g. fire and rescue services, anti-terrorist units ...) are often confronted with dramatic situations where critical decisions have to be made within hard time constraints. Therefore, they need correct information about what is happening on the crisis site. In this context, the View-Finder project aims at developing robots which can assist the human crisis managers, by gathering data. This paper gives an overview of the development of such an outdoor robot. The presented robotic system is able to detect human victims at the incident site, by using vision-based human body shape detection. To increase the perceptual awareness of the human crisis managers, the robotic system is capable of reconstructing a 3D model of the environment, based on vision data. Also for navigation, the robot depends mostly on visual perception, as it combines a model-based navigation approach using geo-referenced positioning with stereo-based terrain traversability analysis for obstacle avoidance. The robot control scheme is embedded in a behavior-based robot control architecture, which integrates all the robot capabilities. This paper discusses all the above mentioned technologies.}, project = {ViewFinder, Mobiniss}, address = {Clermont-Ferrand, France}, unit= {meca-ras}, url = {http://mecatron.rma.ac.be/pub/2010/Usingvisualperceptionforcontrollinganoutdoorrobotinacrisismanagementscenario (1).pdf}, }
- D. Doroftei and E. Colon, “Decentralized Multi-Robot Coordination for Risky Interventions," in Fourth International Workshop on Robotics for risky interventions and Environmental Surveillance-Maintenance RISE, Sheffield, UK, 2010.
[BibTeX] [Abstract] [Download PDF]
The paper describes an approach to design a behavior-based architecture, how each behavior was designed and how the behavior fusion problem was solved.
@InProceedings{doro2010multibis, author = {Doroftei, Daniela and Colon, Eric}, booktitle = {Fourth International Workshop on Robotics for risky interventions and Environmental Surveillance-Maintenance {RISE}}, title = {Decentralized Multi-Robot Coordination for Risky Interventions}, year = {2010}, abstract = {The paper describes an approach to design a behavior-based architecture, how each behavior was designed and how the behavior fusion problem was solved.}, project = {NMRS, ViewFinder}, address = {Sheffield, UK}, url = {http://mecatron.rma.ac.be/pub/RISE/RISE%20-%202010/Decentralized%20Multi-Robot%20Coordination%20for%20Risky%20Interventio.pdf}, unit= {meca-ras} }
2009
- G. De Cubber, D. Doroftei, L. Nalpantidis, G. C. Sirakoulis, and A. Gasteratos, “Stereo-based terrain traversability analysis for robot navigation," in IARP/EURON Workshop on Robotics for Risky Interventions and Environmental Surveillance, Brussels, Belgium, 2009.
[BibTeX] [Abstract] [Download PDF]
In this paper, we present an approach where a classification of the terrain in the classes traversable and obstacle is performed using only stereo vision as input data.
@InProceedings{de2009stereo, author = {De Cubber, Geert and Doroftei, Daniela and Nalpantidis, Lazaros and Sirakoulis, Georgios Ch and Gasteratos, Antonios}, booktitle = {IARP/EURON Workshop on Robotics for Risky Interventions and Environmental Surveillance, Brussels, Belgium}, title = {Stereo-based terrain traversability analysis for robot navigation}, year = {2009}, abstract = {In this paper, we present an approach where a classification of the terrain in the classes traversable and obstacle is performed using only stereo vision as input data.}, project = {ViewFinder, Mobiniss}, address = {Brussels, Belgium}, url = {http://mecatron.rma.ac.be/pub/2009/RISE-DECUBBER-DUTH.pdf}, unit= {meca-ras} }
- G. De Cubber and G. Marton, “Human Victim Detection," in Third International Workshop on Robotics for risky interventions and Environmental Surveillance-Maintenance, RISE, Brussels, Belgium, 2009.
[BibTeX] [Abstract] [Download PDF]
This paper presents an approach to achieve robust victim detection from color video images. The applied approach starts from the Viola-Jones algorithm for Haar-feature-based template recognition. This algorithm was adapted to recognize persons lying on the ground in difficult outdoor illumination conditions.
@InProceedings{de2009human, author = {De Cubber, Geert and Marton, Gabor}, booktitle = {Third International Workshop on Robotics for risky interventions and Environmental Surveillance-Maintenance, RISE}, title = {Human Victim Detection}, year = {2009}, abstract = {This paper presents an approach to achieve robust victim detection from color video images. The applied approach starts from the Viola-Jones algorithm for Haar-feature-based template recognition. This algorithm was adapted to recognize persons lying on the ground in difficult outdoor illumination conditions.}, project = {ViewFinder, Mobiniss}, address = {Brussels, Belgium}, url = {http://mecatron.rma.ac.be/pub/2009/RISE-DECUBBER_BUTE.pdf}, unit= {meca-ras} }
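The Viola-Jones detector that this abstract starts from ships with OpenCV; the sketch below is a generic illustration of it (the stock full-body cascade plus a rotation scan is one simple assumption for finding persons lying on the ground, and is not the authors' adapted detector).

    # Generic Viola-Jones body detection with OpenCV's stock cascade.
    import cv2
    import numpy as np

    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_fullbody.xml")

    def detect_bodies(bgr_frame):
        gray = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2GRAY)
        gray = cv2.equalizeHist(gray)      # partial remedy for harsh outdoor light
        hits = []
        for k in range(4):                 # scan 0/90/180/270 degree rotations
            rot = np.ascontiguousarray(np.rot90(gray, k))
            for (x, y, w, h) in cascade.detectMultiScale(
                    rot, scaleFactor=1.1, minNeighbors=3):
                hits.append((k, x, y, w, h))
        return hits                        # rotation index plus bounding box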
- D. Doroftei, G. De Cubber, E. Colon, and Y. Baudoin, “Behavior based control for an outdoor crisis management robot," in Proceedings of the IARP International Workshop on Robotics for Risky Interventions and Environmental Surveillance, Brussels, Belgium, 2009, p. 12–14.
[BibTeX] [Abstract] [Download PDF]
The design and development of a control architecture for a robotic crisis management agent raises 3 main questions: 1. How can we design the individual behaviors, such that the robot is capable of avoiding obstacles and of navigating semi-autonomously? 2. How can these individual behaviors be combined in an optimal way, leading to a rational and coherent global robot behavior? 3. How can all these capabilities be combined in a comprehensive and modular framework, such that the robot can handle a high-level task (searching for human victims) with minimal input from human operators, by navigating in a complex, dynamic environment, while avoiding potentially hazardous obstacles? In this paper, we present each of these three main aspects of the general robot control architecture in more detail.
@InProceedings{doroftei2009behavior, author = {Doroftei, Daniela and De Cubber, Geert and Colon, Eric and Baudoin, Yvan}, booktitle = {Proceedings of the IARP International Workshop on Robotics for Risky Interventions and Environmental Surveillance}, title = {Behavior based control for an outdoor crisis management robot}, year = {2009}, pages = {12--14}, abstract = {The design and development of a control architecture for a robotic crisis management agent raises 3 main questions: 1. How can we design the individual behaviors, such that the robot is capable of avoiding obstacles and of navigating semi-autonomously? 2. How can these individual behaviors be combined in an optimal way, leading to a rational and coherent global robot behavior? 3. How can all these capabilities be combined in a comprehensive and modular framework, such that the robot can handle a high-level task (searching for human victims) with minimal input from human operators, by navigating in a complex, dynamic environment, while avoiding potentially hazardous obstacles? In this paper, we present each of these three main aspects of the general robot control architecture in more detail.}, project = {ViewFinder, Mobiniss}, address = {Brussels, Belgium}, url = {http://mecatron.rma.ac.be/pub/2009/RISE-DOROFTEI.pdf}, unit= {meca-ras} }
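Question 2 above, combining individual behaviors, is commonly resolved by activation-weighted command fusion; the sketch below (hypothetical NumPy code with made-up behaviours, gains and thresholds, not the architecture of the paper) shows that scheme in miniature.

    # Activation-weighted fusion of (speed, heading) behaviour commands.
    import numpy as np

    def goal_seek(robot_xy, goal_xy):
        dx, dy = np.subtract(goal_xy, robot_xy)
        return np.array([0.5, np.arctan2(dy, dx)]), 1.0   # command, activation

    def avoid_obstacle(dist):
        activation = 1.0 if dist < 1.0 else 0.0           # fires only when close
        return np.array([0.1, np.pi / 2]), activation     # slow down, turn away

    def fuse(votes):
        cmds, acts = zip(*votes)
        return np.average(np.asarray(cmds), axis=0,
                          weights=np.asarray(acts) + 1e-9)

    cmd = fuse([goal_seek((0.0, 0.0), (5.0, 0.0)), avoid_obstacle(0.6)])
    print(cmd)                                            # fused (speed, heading)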
- Y. Baudoin, D. Doroftei, G. De Cubber, S. A. Berrabah, C. Pinzon, F. Warlet, J. Gancet, E. Motard, M. Ilzkovitz, L. Nalpantidis, and A. Gasteratos, “VIEW-FINDER : Robotics assistance to fire-fighting services and Crisis Management," in 2009 IEEE International Workshop on Safety, Security & Rescue Robotics (SSRR 2009), Denver, USA, 2009, p. 1–6.
[BibTeX] [Abstract] [Download PDF] [DOI]
In the event of an emergency due to a fire or other crisis, a necessary but time consuming pre-requisite, that could delay the real rescue operation, is to establish whether the ground or area can be entered safely by human emergency workers. The objective of the VIEW-FINDER project is to develop robots which have the primary task of gathering data. The robots are equipped with sensors that detect the presence of chemicals and, in parallel, image data is collected and forwarded to an advanced Control station (COC). The robots will be equipped with a wide array of chemical sensors, on-board cameras, Laser and other sensors to enhance scene understanding and reconstruction. At the Base Station (BS) the data is processed and combined with geographical information originating from a Web of sources; thus providing the personnel leading the operation with in-situ processed data that can improve decision making. This paper will focus on the Crisis Management Information System that has been developed for improving a Disaster Management Action Plan and for linking the Control Station with an off-site Crisis Management Centre, and on the software tools implemented on the mobile robot gathering data in the outdoor area of the crisis.
@InProceedings{Baudoin2009view01, author = {Y. Baudoin and D. Doroftei and G. De Cubber and S. A. Berrabah and C. Pinzon and F. Warlet and J. Gancet and E. Motard and M. Ilzkovitz and L. Nalpantidis and A. Gasteratos}, booktitle = {2009 {IEEE} International Workshop on Safety, Security {\&} Rescue Robotics ({SSRR} 2009)}, title = {{VIEW}-{FINDER} : Robotics assistance to fire-fighting services and Crisis Management}, year = {2009}, month = nov, organization = {IEEE}, pages = {1--6}, publisher = {{IEEE}}, abstract = {In the event of an emergency due to a fire or other crisis, a necessary but time consuming pre-requisite, that could delay the real rescue operation, is to establish whether the ground or area can be entered safely by human emergency workers. The objective of the VIEW-FINDER project is to develop robots which have the primary task of gathering data. The robots are equipped with sensors that detect the presence of chemicals and, in parallel, image data is collected and forwarded to an advanced Control station (COC). The robots will be equipped with a wide array of chemical sensors, on-board cameras, Laser and other sensors to enhance scene understanding and reconstruction. At the Base Station (BS) the data is processed and combined with geographical information originating from a Web of sources; thus providing the personnel leading the operation with in-situ processed data that can improve decision making. This paper will focus on the Crisis Management Information System that has been developed for improving a Disaster Management Action Plan and for linking the Control Station with an off-site Crisis Management Centre, and on the software tools implemented on the mobile robot gathering data in the outdoor area of the crisis.}, doi = {10.1109/ssrr.2009.5424172}, project = {ViewFinder}, address = {Denver, USA}, url = {https://ieeexplore.ieee.org/document/5424172}, unit= {meca-ras} }
- Y. Baudoin, D. Doroftei, G. De Cubber, S. A. Berrabah, C. Pinzon, J. Penders, A. Maslowski, and J. Bedkowski, “VIEW-FINDER : Outdoor Robotics Assistance to Fire-Fighting services," in International Symposium Clawar, Istanbul, Turkey, 2009.
[BibTeX] [Abstract] [Download PDF]
In the event of an emergency due to a fire or other crisis, a necessary but time consuming pre-requisite, that could delay the real rescue operation, is to establish whether the ground or area can be entered safely by human emergency workers. The objective of the VIEW-FINDER project is to develop robots which have the primary task of gathering data. The robots are equipped with sensors that detect the presence of chemicals and, in parallel, image data is collected and forwarded to an advanced Control station (COC). The robots will be equipped with a wide array of chemical sensors, on-board cameras, Laser and other sensors to enhance scene understanding and reconstruction. At the control station the data is processed and combined with geographical information originating from a web of sources; thus providing the personnel leading the operation with in-situ processed data that can improve decision making. The information may also be forwarded to other forces involved in the operation (e.g. fire fighters, rescue workers, police, etc.). The robots will be designed to navigate individually or cooperatively and to follow high-level instructions from the base station. The robots are off-the-shelf units, consisting of wheeled robots. The robots connect wirelessly to the control station. The control station collects in-situ data and combines it with information retrieved from the large-scale GMES-information bases. It will be equipped with a sophisticated human interface to display the processed information to the human operators and operation command.
@InProceedings{baudoin2009view02, author = {Baudoin, Yvan and Doroftei, Daniela and De Cubber, Geert and Berrabah, Sid Ahmed and Pinzon, Carlos and Penders, Jacques and Maslowski, Andrzej and Bedkowski, Janusz}, booktitle = {International Symposium Clawar}, title = {{VIEW-FINDER} : Outdoor Robotics Assistance to Fire-Fighting services}, year = {2009}, abstract = {In the event of an emergency due to a fire or other crisis, a necessary but time consuming pre-requisite, that could delay the real rescue operation, is to establish whether the ground or area can be entered safely by human emergency workers. The objective of the VIEW-FINDER project is to develop robots which have the primary task of gathering data. The robots are equipped with sensors that detect the presence of chemicals and, in parallel, image data is collected and forwarded to an advanced Control station (COC). The robots will be equipped with a wide array of chemical sensors, on-board cameras, Laser and other sensors to enhance scene understanding and reconstruction. At the control station the data is processed and combined with geographical information originating from a web of sources; thus providing the personnel leading the operation with in-situ processed data that can improve decision making. The information may also be forwarded to other forces involved in the operation (e.g. fire fighters, rescue workers, police, etc.). The robots will be designed to navigate individually or cooperatively and to follow high-level instructions from the base station. The robots are off-the-shelf units, consisting of wheeled robots. The robots connect wirelessly to the control station. The control station collects in-situ data and combines it with information retrieved from the large-scale GMES-information bases. It will be equipped with a sophisticated human interface to display the processed information to the human operators and operation command.}, project = {ViewFinder, Mobiniss}, address = {Istanbul, Turkey}, url = {http://mecatron.rma.ac.be/pub/2009/CLAWAR2009.pdf}, unit= {meca-ras} }
- Y. Baudoin, D. Doroftei, G. De Cubber, S. A. Berrabah, E. Colon, C. Pinzon, A. Maslowski, and J. Bedkowski, “View-Finder: a European project aiming the Robotics assistance to Fire-fighting services and Crisis Management," in IARP workshop on Service Robotics and Nanorobotics, Bejing, China, 2009.
[BibTeX] [Abstract] [Download PDF]
In the event of an emergency due to a fire or other crisis, a necessary but time consuming pre-requisite, that could delay the real rescue operation, is to establish whether the ground or area can be entered safely by human emergency workers. The objective of the VIEW-FINDER project is to develop robots which have the primary task of gathering data. The robots are equipped with sensors that detect the presence of chemicals and, in parallel, image data is collected and forwarded to an advanced Control station (COC). The robots will be equipped with a wide array of chemical sensors, on-board cameras, Laser and other sensors to enhance scene understanding and reconstruction. At the control station the data is processed and combined with geographical information originating from a web of sources; thus providing the personnel leading the operation with in-situ processed data that can improve decision making. The information may also be forwarded to other forces involved in the operation (e.g. fire fighters, rescue workers, police, etc.). The robots connect wirelessly to the control station. The control station collects in-situ data and combines it with information retrieved from the large-scale GMES-information bases. It will be equipped with a sophisticated human interface to display the processed information to the human operators and operation command. In this paper we will essentially focus on the steps entrusted to the RMA and PIAP through the work-packages of the project.
@InProceedings{baudoin2009view03, author = {Baudoin, Yvan and Doroftei, Daniela and De Cubber, Geert and Berrabah, Sid Ahmed and Colon, Eric and Pinzon, Carlos and Maslowski, Andrzej and Bedkowski, Janusz}, booktitle = {IARP workshop on Service Robotics and Nanorobotics}, title = {{View-Finder}: a European project aiming the Robotics assistance to Fire-fighting services and Crisis Management}, year = {2009}, abstract = {In the event of an emergency due to a fire or other crisis, a necessary but time consuming pre-requisite, that could delay the real rescue operation, is to establish whether the ground or area can be entered safely by human emergency workers. The objective of the VIEW-FINDER project is to develop robots which have the primary task of gathering data. The robots are equipped with sensors that detect the presence of chemicals and, in parallel, image data is collected and forwarded to an advanced Control station (COC). The robots will be equipped with a wide array of chemical sensors, on-board cameras, Laser and other sensors to enhance scene understanding and reconstruction. At the control station the data is processed and combined with geographical information originating from a web of sources; thus providing the personnel leading the operation with in-situ processed data that can improve decision making. The information may also be forwarded to other forces involved in the operation (e.g. fire fighters, rescue workers, police, etc.). The robots connect wirelessly to the control station. The control station collects in-situ data and combines it with information retrieved from the large-scale GMES-information bases. It will be equipped with a sophisticated human interface to display the processed information to the human operators and operation command. In this paper we will essentially focus on the steps entrusted to the RMA and PIAP through the work-packages of the project.}, project = {ViewFinder}, address = {Beijing, China}, url = {http://mecatron.rma.ac.be/pub/2009/IARP-paper2009.pdf}, unit= {meca-ras} }
- Y. Baudoin, G. De Cubber, S. A. Berrabah, D. Doroftei, E. Colon, C. Pinzon, A. Maslowski, and J. Bedkowski, “VIEW-FINDER: European Project Aiming CRISIS MANAGEMENT TOOLS and the Robotics Assistance to Fire-Fighting Services," in IARP WS on service Robotics, Beijing, China, 2009.
[BibTeX] [Abstract] [Download PDF]
Overview of the View-Finder project
@InProceedings{baudoin2009view04, author = {Baudoin, Yvan and De Cubber, Geert and Berrabah, Sid Ahmed and Doroftei, Daniela and Colon, E and Pinzon, C and Maslowski, A and Bedkowski, J}, booktitle = {IARP WS on service Robotics, Beijing}, title = {{VIEW-FINDER}: European Project Aiming CRISIS MANAGEMENT TOOLS and the Robotics Assistance to Fire-Fighting Services}, year = {2009}, abstract = {Overview of the View-Finder project}, project = {ViewFinder}, address = {Beijing, China}, unit= {meca-ras}, url = {https://www.academia.edu/2879650/VIEW-FINDER_European_Project_Aiming_CRISIS_MANAGEMENT_TOOLS_and_the_Robotics_Assistance_to_Fire-Fighting_Services}, }
- D. Doroftei, E. Colon, Y. Baudoin, and H. Sahli, “Development of a behaviour-based control and software architecture for a visually guided mine detection robot," European Journal of Automated Systems (JESA), vol. 43, iss. 3, p. 295–314, 2009.
[BibTeX] [Abstract] [Download PDF]
Humanitarian demining is a labor-intensive and high-risk task which could benefit from the development of a humanitarian mine detection robot, capable of scanning a minefield semi-automatically. The design of such an outdoor autonomous robot requires the consideration and integration of multiple aspects: sensing, data fusion, path and motion planning and robot control embedded in a control and software architecture. This paper focuses on three main aspects of the design process: visual sensing using stereo and image motion analysis, design of a behaviour-based control architecture and implementation of a modular software architecture.
@Article{doro2009development, author = {Doroftei, Daniela and Colon, Eric and Baudoin, Yvan and Sahli, Hichem}, journal = {European Journal of Automated Systems ({JESA})}, title = {Development of a behaviour-based control and software architecture for a visually guided mine detection robot}, year = {2009}, volume = {43}, number = {3}, abstract = {Humanitarian demining is a labor-intensive and high-risk task which could benefit from the development of a humanitarian mine detection robot, capable of scanning a minefield semi-automatically. The design of such an outdoor autonomous robot requires the consideration and integration of multiple aspects: sensing, data fusion, path and motion planning and robot control embedded in a control and software architecture. This paper focuses on three main aspects of the design process: visual sensing using stereo and image motion analysis, design of a behaviour-based control architecture and implementation of a modular software architecture.}, pages = {295--314}, project = {Mobiniss, ViewFinder}, url = {http://mecatron.rma.ac.be/pub/2009/doc-article-hermes.pdf}, unit= {meca-ras} }
2008
- D. Doroftei, E. Colon, and G. De Cubber, “A Behaviour-Based Control and Software Architecture for the Visually Guided Robudem Outdoor Mobile Robot," Journal of Automation Mobile Robotics and Intelligent Systems, vol. 2, iss. 4, p. 19–24, 2008.
[BibTeX] [Abstract] [Download PDF]
The design of outdoor autonomous robots requires the careful consideration and integration of multiple aspects: sensors and sensor data fusion, design of a control and software architecture, design of a path planning algorithm and robot control. This paper describes partial aspects of this research work, which is aimed at developing a semi-autonomous outdoor robot for risky interventions. This paper focuses on three main aspects of the design process: visual sensing using stereo vision and image motion analysis, design of a behaviour-based control architecture and implementation of a modular software architecture.
@Article{doroftei2008behaviour, author = {Doroftei, Daniela and Colon, Eric and De Cubber, Geert}, journal = {Journal of Automation Mobile Robotics and Intelligent Systems}, title = {A Behaviour-Based Control and Software Architecture for the Visually Guided Robudem Outdoor Mobile Robot}, year = {2008}, issn = {1897-8649}, month = oct, number = {4}, pages = {19--24}, volume = {2}, abstract = {The design of outdoor autonomous robots requires the careful consideration and integration of multiple aspects: sensors and sensor data fusion, design of a control and software architecture, design of a path planning algorithm and robot control. This paper describes partial aspects of this research work, which is aimed at developing a semi-autonomous outdoor robot for risky interventions. This paper focuses on three main aspects of the design process: visual sensing using stereo vision and image motion analysis, design of a behaviour-based control architecture and implementation of a modular software architecture.}, project = {ViewFinder, Mobiniss}, url = {http://mecatron.rma.ac.be/pub/2008/XXX JAMRIS No8 - Doroftei.pdf}, unit= {meca-ras} }
- G. De Cubber, L. Nalpantidis, G. C. Sirakoulis, and A. Gasteratos, “Intelligent robots need intelligent vision: visual 3D perception," in RISE’08: Proceedings of the EURON/IARP International Workshop on Robotics for Risky Interventions and Surveillance of the Environment, Benicassim, Spain, 2008.
[BibTeX] [Abstract] [Download PDF]
In this paper, we investigate the possibilities of stereo and structure from motion approaches. It is not the aim to compare both theories of depth reconstruction with the goal of designating a winner and a loser. Both methods are capable of providing sparse as well as dense 3D reconstructions and both approaches have their merits and defects. The thorough, year-long research in the field indicates that accurate depth perception requires a combination of methods rather than a sole one. In fact, cognitive research has shown that the human brain uses no less than 12 different cues to estimate depth. Therefore, we also finally introduce in a following section a methodology to integrate stereo and structure from motion.
@InProceedings{de2008intelligent, author = {De Cubber, Geert and Nalpantidis, Lazaros and Sirakoulis, Georgios Ch and Gasteratos, Antonios}, booktitle = {RISE’08: Proceedings of the EURON/IARP International Workshop on Robotics for Risky Interventions and Surveillance of the Environment}, title = {Intelligent robots need intelligent vision: visual {3D} perception}, year = {2008}, abstract = {In this paper, we investigate the possibilities of stereo and structure from motion approaches. It is not the aim to compare both theories of depth reconstruction with the goal of designating a winner and a loser. Both methods are capable of providing sparse as well as dense 3D reconstructions and both approaches have their merits and defects. The thorough, year-long research in the field indicates that accurate depth perception requires a combination of methods rather than a sole one. In fact, cognitive research has shown that the human brain uses no less than 12 different cues to estimate depth. Therefore, we also finally introduce in a following section a methodology to integrate stereo and structure from motion.}, project = {ViewFinder, Mobiniss}, address = {Benicassim, Spain}, url = {http://mecatron.rma.ac.be/pub/2008/DeCubber.pdf}, unit= {meca-ras} }
- G. De Cubber, D. Doroftei, and G. Marton, “Development of a visually guided mobile robot for environmental observation as an aid for outdoor crisis management operations," in Proceedings of the IARP Workshop on Environmental Maintenance and Protection, Baden Baden, Germany, 2008.
[BibTeX] [Abstract] [Download PDF]
To solve these issues, an outdoor mobile robotic platform was equipped with a differential GPS system for accurate geo-registered positioning, and a stereo vision system. This stereo vision system serves two purposes: 1) victim detection and 2) obstacle detection and avoidance. For semi-autonomous robot control and navigation, we rely on a behavior-based robot motion and path planner. In this paper, we present each of the three main aspects (victim detection, stereo-based obstacle detection and behavior-based navigation) of the general robot control architecture in more detail.
@InProceedings{de2008development, author = {De Cubber, Geert and Doroftei, Daniela and Marton, Gabor}, booktitle = {Proceedings of the IARP Workshop on Environmental Maintenance and Protection}, title = {Development of a visually guided mobile robot for environmental observation as an aid for outdoor crisis management operations}, year = {2008}, abstract = {To solve these issues, an outdoor mobile robotic platform was equipped with a differential GPS system for accurate geo-registered positioning, and a stereo vision system. This stereo vision system serves two purposes: 1) victim detection and 2) obstacle detection and avoidance. For semi-autonomous robot control and navigation, we rely on a behavior-based robot motion and path planner. In this paper, we present each of the three main aspects (victim detection, stereo-based obstacle detection and behavior-based navigation) of the general robot control architecture in more detail.}, project = {ViewFinder, Mobiniss}, address = {Baden Baden, Germany}, url = {http://mecatron.rma.ac.be/pub/2008/environmental observation as an aid for outdoor crisis management operations.pdf}, unit= {meca-ras} }
- G. De Cubber, “Dense 3D structure and motion estimation as an aid for robot navigation," Journal of Automation Mobile Robotics and Intelligent Systems, vol. 2, iss. 4, p. 14–18, 2008.
[BibTeX] [Abstract] [Download PDF]
Three-dimensional scene reconstruction is an important tool in many applications varying from computer graphics to mobile robot navigation. In this paper, we focus on the robotics application, where the goal is to estimate the 3D rigid motion of a mobile robot and to reconstruct a dense three-dimensional scene representation. The reconstruction problem can be subdivided into a number of subproblems. First, the egomotion has to be estimated. For this, the camera (or robot) motion parameters are iteratively estimated by reconstruction of the epipolar geometry. Secondly, a dense depth map is calculated by fusing sparse depth information from point features and dense motion information from the optical flow in a variational framework. This depth map corresponds to a point cloud in 3D space, which can then be converted into a model to extract information for the robot navigation algorithm. Here, we present an integrated approach for the structure and egomotion estimation problem.
@Article{DeCubber2008, author = {De Cubber, Geert}, journal = {Journal of Automation Mobile Robotics and Intelligent Systems}, title = {Dense {3D} structure and motion estimation as an aid for robot navigation}, year = {2008}, issn = {1897-8649}, month = oct, number = {4}, pages = {14--18}, volume = {2}, abstract = {Three-dimensional scene reconstruction is an important tool in many applications varying from computer graphics to mobile robot navigation. In this paper, we focus on the robotics application, where the goal is to estimate the 3D rigid motion of a mobile robot and to reconstruct a dense three-dimensional scene representation. The reconstruction problem can be subdivided into a number of subproblems. First, the egomotion has to be estimated. For this, the camera (or robot) motion parameters are iteratively estimated by reconstruction of the epipolar geometry. Secondly, a dense depth map is calculated by fusing sparse depth information from point features and dense motion information from the optical flow in a variational framework. This depth map corresponds to a point cloud in 3D space, which can then be converted into a model to extract information for the robot navigation algorithm. Here, we present an integrated approach for the structure and egomotion estimation problem.}, project = {ViewFinder,Mobiniss}, url = {http://www.jamris.org/images/ISSUES/ISSUE-2008-04/002 JAMRIS No8 - De Cubber.pdf}, unit= {meca-ras} }
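The first subproblem named in this abstract, egomotion from the epipolar geometry, maps onto a standard feature-based pipeline; the sketch below (assuming OpenCV, with ORB matching standing in for whatever feature tracker the paper used, and K the 3x3 camera intrinsic matrix) estimates the essential matrix between two frames and recovers the camera rotation and unit-scale translation.

    # Feature-based egomotion between two grayscale frames (illustrative).
    import cv2
    import numpy as np

    def estimate_egomotion(img0, img1, K):
        orb = cv2.ORB_create(2000)
        k0, d0 = orb.detectAndCompute(img0, None)
        k1, d1 = orb.detectAndCompute(img1, None)
        matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d0, d1)
        p0 = np.float32([k0[m.queryIdx].pt for m in matches])
        p1 = np.float32([k1[m.trainIdx].pt for m in matches])
        E, inliers = cv2.findEssentialMat(p0, p1, K, method=cv2.RANSAC,
                                          prob=0.999, threshold=1.0)
        _, R, t, _ = cv2.recoverPose(E, p0, p1, K, mask=inliers)
        return R, t        # translation is recovered only up to scale

The dense depth-fusion step that follows in the paper then uses this (R, t) to relate the optical flow to depth.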
- D. Doroftei and J. Bedkowski, “Towards the autonomous navigation of robots for risky interventions," in Third International Workshop on Robotics for risky interventions and Environmental Surveillance-Maintenance RISE, Benicassim, Spain, 2008.
[BibTeX] [Abstract] [Download PDF]
In the course of the ViewFinder project, two robotics teams (RMA and PIAP) are working on the development of an intelligent autonomous mobile robot. This paper reports on the progress of both teams.
@InProceedings{doro2008towards, author = {Doroftei, Daniela and Bedkowski, Janusz}, booktitle = {Third International Workshop on Robotics for risky interventions and Environmental Surveillance-Maintenance {RISE}}, title = {Towards the autonomous navigation of robots for risky interventions}, year = {2008}, abstract = {In the course of the ViewFinder project, two robotics teams (RMA and PIAP) are working on the development of an intelligent autonomous mobile robot. This paper reports on the progress of both teams.}, project = {ViewFinder, Mobiniss}, address = {Benicassim, Spain}, url = {http://mecatron.rma.ac.be/pub/2008/Doroftei.pdf}, unit= {meca-ras} }
2007
- G. De Cubber, “Dense 3D structure and motion estimation as an aid for robot navigation," in ISMCR 2007, Warsaw, Poland, 2007.
[BibTeX] [Abstract] [Download PDF]
Three-dimensional scene reconstruction is an important tool in many applications varying from computer graphics to mobile robot navigation. In this paper, we focus on the robotics application, where the goal is to estimate the 3D rigid motion of a mobile robot and to reconstruct a dense three-dimensional scene representation. The reconstruction problem can be subdivided into a number of subproblems. First, the egomotion has to be estimated. For this, the camera (or robot) motion parameters are iteratively estimated by reconstruction of the epipolar geometry. Secondly, a dense depth map is calculated by fusing sparse depth information from point features and dense motion information from the optical flow in a variational framework. This depth map corresponds to a point cloud in 3D space, which can then be converted into a model to extract information for the robot navigation algorithm. Here, we present an integrated approach for the structure and egomotion estimation problem.
@InProceedings{de2007dense, author = {De Cubber, Geert}, booktitle = {ISMCR 2007}, title = {Dense {3D} structure and motion estimation as an aid for robot navigation}, year = {2007}, abstract = {Three-dimensional scene reconstruction is an important tool in many applications varying from computer graphics to mobile robot navigation. In this paper, we focus on the robotics application, where the goal is to estimate the 3D rigid motion of a mobile robot and to reconstruct a dense three-dimensional scene representation. The reconstruction problem can be subdivided into a number of subproblems. First, the egomotion has to be estimated. For this, the camera (or robot) motion parameters are iteratively estimated by reconstruction of the epipolar geometry. Secondly, a dense depth map is calculated by fusing sparse depth information from point features and dense motion information from the optical flow in a variational framework. This depth map corresponds to a point cloud in 3D space, which can then be converted into a model to extract information for the robot navigation algorithm. Here, we present an integrated approach for the structure and egomotion estimation problem.}, project = {ViewFinder,Mobiniss}, address = {Warsaw, Poland}, url = {http://mecatron.rma.ac.be/pub/2007/Dense 3D Structure and Motion Estimation as an aid for Robot Navigation.pdf}, unit= {meca-ras,vub-etro} }
- D. Doroftei, E. Colon, and G. De Cubber, “A behaviour-based control and software architecture for the visually guided Robudem outdoor mobile robot," in ISMCR 2007, Warsaw, Poland, 2007.
[BibTeX] [Abstract] [Download PDF]
The design of outdoor autonomous robots requires the careful consideration and integration of multiple aspects: sensors and sensor data fusion, design of a control and software architecture, design of a path planning algorithm and robot control. This paper describes partial aspects of this research work, which is aimed at developing a semi-autonomous outdoor robot for risky interventions. This paper focuses mainly on three main aspects of the design process: visual sensing using stereo and image motion analysis, design of a behaviour-based control architecture and implementation of a modular software architecture.
@InProceedings{doroftei2007behaviour, author = {Doroftei, Daniela and Colon, Eric and De Cubber, Geert}, booktitle = {ISMCR 2007}, title = {A behaviour-based control and software architecture for the visually guided {Robudem} outdoor mobile robot}, year = {2007}, address = {Warsaw, Poland}, abstract = {The design of outdoor autonomous robots requires the careful consideration and integration of multiple aspects: sensors and sensor data fusion, design of a control and software architecture, design of a path planning algorithm and robot control. This paper describes partial aspects of this research work, which is aimed at developing a semi-autonomous outdoor robot for risky interventions. This paper focuses mainly on three main aspects of the design process: visual sensing using stereo and image motion analysis, design of a behaviour-based control architecture and implementation of a modular software architecture.}, project = {ViewFinder,Mobiniss}, url = {http://mecatron.rma.ac.be/pub/2007/Doroftei_ISMCR07.pdf}, unit= {meca-ras} }
- D. Doroftei, E. Colon, Y. Baudoin, and H. Sahli, “Development of a semi-autonomous off-road vehicle," in IEEE HuMan’07, Timimoun, Algeria, 2007, p. 340–343.
[BibTeX] [Abstract] [Download PDF]
Humanitarian demining is still a highly labor-intensive and high-risk operation. Advanced sensors and mechanical aids can significantly reduce the demining time. In this context, it is the aim to develop a humanitarian demining mobile robot which is able to scan semi-automatically a minefield. This paper discusses the development of a control scheme for such a semi-autonomous mobile robot for humanitarian demining. This process requires the careful consideration and integration of multiple aspects: sensors and sensor data fusion, design of a control and software architecture, design of a path planning algorithm and robot control.
@InProceedings{doro2007development, author = {Doroftei, Daniela and Colon, Eric and Baudoin, Yvan and Sahli, Hichem}, booktitle = {{IEEE} {HuMan}'07}, title = {Development of a semi-autonomous off-road vehicle}, year = {2007}, address = {Timimoun, Algeria}, pages = {340--343}, abstract = {Humanitarian demining is still a highly labor-intensive and high-risk operation. Advanced sensors and mechanical aids can significantly reduce the demining time. In this context, it is the aim to develop a humanitarian demining mobile robot which is able to scan semi-automatically a minefield. This paper discusses the development of a control scheme for such a semi-autonomous mobile robot for humanitarian demining. This process requires the careful consideration and integration of multiple aspects: sensors and sensor data fusion, design of a control and software architecture, design of a path planning algorithm and robot control.}, project = {Mobiniss, ViewFinder}, url = {http://mecatron.rma.ac.be/pub/2007/Development_of_a_semi-autonomous_off-road_vehicle.pdf}, unit= {meca-ras} }
2006
- S. A. Berrabah, G. De Cubber, V. Enescu, and H. Sahli, “MRF-Based Foreground Detection in Image Sequences from a Moving Camera," in 2006 International Conference on Image Processing, Atlanta, USA, 2006, p. 1125–1128.
[BibTeX] [Abstract] [Download PDF] [DOI]
This paper presents a Bayesian approach for simultaneously detecting the moving objects (foregrounds) and estimating their motion in image sequences taken with a moving camera mounted on the top of a mobile robot. To model the background, the algorithm uses the GMM approach for its simplicity and capability to adapt to illumination changes and small motions in the scene. To overcome the limitations of the GMM approach with its pixel-wise processing, the background model is combined with the motion cue in a maximum a posteriori probability (MAP)-MRF framework. This enables us to exploit the advantages of spatio-temporal dependencies that moving objects impose on pixels and the interdependence of motion and segmentation fields. As a result, the detected moving objects have visually attractive silhouettes and they are more accurate and less affected by noise than those obtained with simple pixel-wise methods. To enhance the segmentation accuracy, the background model is re-updated using the MAP-MRF results. Experimental results and a qualitative study of the proposed approach are presented on image sequences with a static camera as well as with a moving camera.
@InProceedings{berrabah2006mrf, author = {Berrabah, Sid Ahmed and De Cubber, Geert and Enescu, Valentin and Sahli, Hichem}, booktitle = {2006 International Conference on Image Processing}, title = {{MRF}-Based Foreground Detection in Image Sequences from a Moving Camera}, year = {2006}, month = oct, organization = {IEEE}, pages = {1125--1128}, publisher = {{IEEE}}, abstract = {This paper presents a Bayesian approach for simultaneously detecting the moving objects (foregrounds) and estimating their motion in image sequences taken with a moving camera mounted on the top of a mobile robot. To model the background, the algorithm uses the GMM approach for its simplicity and capability to adapt to illumination changes and small motions in the scene. To overcome the limitations of the GMM approach with its pixel-wise processing, the background model is combined with the motion cue in a maximum a posteriori probability (MAP)-MRF framework. This enables us to exploit the advantages of spatio-temporal dependencies that moving objects impose on pixels and the interdependence of motion and segmentation fields. As a result, the detected moving objects have visually attractive silhouettes and they are more accurate and less affected by noise than those obtained with simple pixel-wise methods. To enhance the segmentation accuracy, the background model is re-updated using the MAP-MRF results. Experimental results and a qualitative study of the proposed approach are presented on image sequences with a static camera as well as with a moving camera.}, doi = {10.1109/icip.2006.312754}, project = {MOBINISS,ViewFinder}, address = {Atlanta, USA}, url = {http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4106732}, unit= {meca-ras,vub-etro} }
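The pixel-wise GMM background model that this paper starts from is available off the shelf, for instance in OpenCV. The sketch below shows only that baseline; the paper's actual contribution, the MAP-MRF coupling of segmentation and motion, is crudely stood in for here by a morphological clean-up, and the input file name and parameter values are illustrative:

```python
import cv2

# Pixel-wise GMM background model (the Stauffer-Grimson family the paper builds on).
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                                detectShadows=False)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

cap = cv2.VideoCapture("sequence.mp4")  # hypothetical input sequence
while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = subtractor.apply(frame)   # per-pixel foreground/background decision
    # Stand-in for the paper's MRF spatial regularisation: remove isolated
    # foreground pixels so the detected silhouettes become coherent blobs.
    fg_mask = cv2.morphologyEx(fg_mask, cv2.MORPH_OPEN, kernel)
    cv2.imshow("foreground", fg_mask)
    if cv2.waitKey(30) & 0xFF == 27:    # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```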
- G. De Cubber, V. Enescu, H. Sahli, E. Demeester, M. Nuttin, and D. Vanhooydonck, “Active stereo vision-based mobile robot navigation for person tracking," Integrated Computer-Aided Engineering, vol. 13, p. 203–222, 2006.
[BibTeX] [Abstract] [Download PDF]
In this paper, we propose a mobile robot architecture for person tracking, consisting of an active stereo vision module (ASVM) and a navigation module (NM). The first uses a stereo head equipped with a pan-tilt mechanism to track a moving target (selected by an operator) and keep it centered in the visual field. Its output, i.e. the 3D position of the person, is fed to the NM, which drives the robot towards the target while avoiding obstacles. For this, a hybrid navigation algorithm is adopted with a reactive part that efficiently reacts to the most recent sensor data, and a deliberative part that generates a globally optimal path to a target destination, such as the person’s location. As a peculiarity of the system, there is no feedback from the NM or the robot motion controller (RMC) to the ASVM. While this imparts flexibility in combining the ASVM with a wide range of robot platforms, it puts considerable strain on the ASVM. Indeed, besides the changes in the target dynamics, it has to cope with the robot motion during obstacle avoidance. These disturbances are accommodated via a suitable stochastic dynamic model for the stereo head-target system. Robust tracking is achieved by combining a color-based particle filter with a method to update the color model of the target under changing illumination conditions. The main contributions of this paper lie in (1) devising a robust color-based 3D target tracking method, (2) proposing a hybrid deliberative/reactive navigation scheme, and (3) integrating them on a wheelchair platform for the final goal of person following. Experimental results are presented for ASVM separately and in combination with a wheelchair platform-based implementation of the NM.
@Article{2c2cd28d2aea4009ae0135448c005050, author = {De Cubber, Geert and Enescu, Valentin and Sahli, Hichem and Demeester, Eric and Nuttin, Marnix and Vanhooydonck, Dirk}, journal = {Integrated Computer-Aided Engineering}, title = {Active stereo vision-based mobile robot navigation for person tracking}, year = {2006}, issn = {1069-2509}, month = jul, pages = {203--222}, volume = {13}, abstract = {In this paper, we propose a mobile robot architecture for person tracking, consisting of an active stereo vision module (ASVM) and a navigation module (NM). The first uses a stereo head equipped with a pan-tilt mechanism to track a moving target (selected by an operator) and keep it centered in the visual field. Its output, i.e. the 3D position of the person, is fed to the NM, which drives the robot towards the target while avoiding obstacles. For this, a hybrid navigation algorithm is adopted with a reactive part that efficiently reacts to the most recent sensor data, and a deliberative part that generates a globally optimal path to a target destination, such as the person's location. As a peculiarity of the system, there is no feedback from the NM or the robot motion controller (RMC) to the ASVM. While this imparts flexibility in combining the ASVM with a wide range of robot platforms, it puts considerable strain on the ASVM. Indeed, besides the changes in the target dynamics, it has to cope with the robot motion during obstacle avoidance. These disturbances are accommodated via a suitable stochastic dynamic model for the stereo head-target system. Robust tracking is achieved by combining a color-based particle filter with a method to update the color model of the target under changing illumination conditions. The main contributions of this paper lie in (1) devising a robust color-based 3D target tracking method, (2) proposing a hybrid deliberative/reactive navigation scheme, and (3) integrating them on a wheelchair platform for the final goal of person following. Experimental results are presented for ASVM separately and in combination with a wheelchair platform-based implementation of the NM.}, day = {24}, keywords = {mobile robot, active vision, stereo, navigation}, language = {English}, project = {Mobiniss, ViewFinder}, publisher = {IOS Press}, unit= {meca-ras,vub-etro}, url = {https://cris.vub.be/en/publications/active-stereo-visionbased-mobile-robot-navigation-for-person-tracking(2c2cd28d-2aea-4009-ae01-35448c005050)/export.html}, }
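The colour-based particle filter at the core of the ASVM can be pictured as a predict-update-resample loop over image positions. The sketch below is a generic single-camera, hue-histogram variant under simplifying assumptions (random-walk dynamics, fixed patch size); it is not the paper's stereo implementation and it omits the adaptive colour-model update:

```python
import numpy as np

def hue_histogram(hsv_image, cx, cy, half=15, bins=8):
    """Normalised hue histogram of a square patch centred on (cx, cy)."""
    h, w = hsv_image.shape[:2]
    x0, x1 = max(0, cx - half), min(w, cx + half)
    y0, y1 = max(0, cy - half), min(h, cy + half)
    patch = hsv_image[y0:y1, x0:x1, 0]                 # hue channel (0..180)
    hist, _ = np.histogram(patch, bins=bins, range=(0, 180))
    return hist / max(hist.sum(), 1)

def particle_filter_step(particles, hsv_image, ref_hist, motion_std=8.0):
    """One predict-update-resample cycle of a colour-based particle filter."""
    n = len(particles)
    # Predict: a random walk absorbs both target motion and ego-motion of
    # the robot/stereo head, since no odometry feedback reaches the tracker.
    particles = particles + np.random.normal(0.0, motion_std, particles.shape)
    # Update: weight each particle by the Bhattacharyya similarity between
    # its local colour histogram and the reference target model.
    weights = np.empty(n)
    for i, (x, y) in enumerate(particles.astype(int)):
        weights[i] = np.sum(np.sqrt(hue_histogram(hsv_image, x, y) * ref_hist))
    weights = (weights + 1e-12) / (weights + 1e-12).sum()
    # Resample: duplicate likely particles, drop unlikely ones.
    particles = particles[np.random.choice(n, size=n, p=weights)]
    return particles, particles.mean(axis=0)           # new set and state estimate
```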
- D. Doroftei, E. Colon, and Y. Baudoin, “A modular control architecture for semi-autonomous navigation," in CLAWAR 2006, Brussels, Belgium, 2006, p. 712–715.
[BibTeX] [Abstract] [Download PDF]
Humanitarian demining is still a highly labor-intensive and high-risk operation. Advanced sensors and mechanical aids can significantly reduce the demining time. In this context, the aim is to develop a humanitarian demining mobile robot which is able to semi-automatically scan a minefield. This paper discusses the development of a control scheme for such a semi-autonomous mobile robot for humanitarian demining. This process requires the careful consideration and integration of multiple aspects: sensors and sensor data fusion, design of a control and software architecture, design of a path planning algorithm and robot control.
@InProceedings{doro2006modular, author = {Doroftei, Daniela and Colon, Eric and Baudoin, Yvan}, booktitle = {{CLAWAR} 2006}, title = {A modular control architecture for semi-autonomous navigation}, year = {2006}, pages = {712--715}, abstract = {Humanitarian demining is still a highly labor-intensive and high-risk operation. Advanced sensors and mechanical aids can significantly reduce the demining time. In this context, the aim is to develop a humanitarian demining mobile robot which is able to semi-automatically scan a minefield. This paper discusses the development of a control scheme for such a semi-autonomous mobile robot for humanitarian demining. This process requires the careful consideration and integration of multiple aspects: sensors and sensor data fusion, design of a control and software architecture, design of a path planning algorithm and robot control.}, project = {Mobiniss, ViewFinder}, address = {Brussels, Belgium}, url = {http://mecatron.rma.ac.be/pub/2006/Clawar2006_Doroftei_colon.pdf}, unit= {meca-ras} }
- D. Doroftei, E. Colon, and Y. Baudoin, “Development of a control architecture for the ROBUDEM outdoor mobile robot platform," in IARP Workshop RISE 2006, Brussels, Belgium, 2006.
[BibTeX] [Abstract] [Download PDF]
Humanitarian demining is still a highly labor-intensive and high-risk operation. Advanced sensors and mechanical aids can significantly reduce the demining time. In this context, the aim is to develop a humanitarian demining mobile robot which is able to scan a minefield semi-automatically. This paper discusses the development of a control scheme for such a semi-autonomous mobile robot for humanitarian demining. This process requires the careful consideration and integration of multiple aspects: sensors and sensor data fusion, design of a control and software architecture, design of a path planning algorithm and robot control.
@InProceedings{doro2006development, author = {Doroftei, Daniela and Colon, Eric and Baudoin, Yvan}, booktitle = {{IARP} Workshop {RISE} 2006}, title = {Development of a control architecture for the ROBUDEM outdoor mobile robot platform}, year = {2006}, abstract = {Humanitarian demining is still a highly labor-intensive and high-risk operation. Advanced sensors and mechanical aids can significantly reduce the demining time. In this context, the aim is to develop a humanitarian demining mobile robot which is able to scan a minefield semi-automatically. This paper discusses the development of a control scheme for such a semi-autonomous mobile robot for humanitarian demining. This process requires the careful consideration and integration of multiple aspects: sensors and sensor data fusion, design of a control and software architecture, design of a path planning algorithm and robot control.}, project = {Mobiniss, ViewFinder}, address = {Brussels, Belgium}, url = {http://mecatron.rma.ac.be/pub/2006/IARPWS2006_Doroftei_Colon.pdf}, unit= {meca-ras} }
2005
- V. Enescu, G. De Cubber, H. Sahli, E. Demeester, D. Vanhooydonck, and M. Nuttin, “Active stereo vision-based mobile robot navigation for person tracking," in International Conference on Informatics in Control, Automation and Robotics, Barcelona, Spain, 2005, p. 32–39.
[BibTeX] [Abstract] [Download PDF] [DOI]
In this paper, we propose a mobile robot architecture for person tracking, consisting of an active stereo vision module (ASVM) and a navigation module (NM). The first tracks the person in stereo images and controls the pan/tilt unit to keep the target in the visual field. Its output, i.e. the 3D position of the person, is fed to the NM, which drives the robot towards the target while avoiding obstacles. As a peculiarity of the system, there is no feedback from the NM or the robot motion controller (RMC) to the ASVM. While this imparts flexibility in combining the ASVM with a wide range of robot platforms, it puts considerable strain on the ASVM. Indeed, besides the changes in the target dynamics, it has to cope with the robot motion during obstacle avoidance. These disturbances are accommodated by generating target location hypotheses in an efficient manner. Robustness against outliers and occlusions is achieved by employing a multi-hypothesis tracking method – the particle filter – based on a color model of the target. Moreover, to deal with illumination changes, the system adaptively updates the color model of the target. The main contributions of this paper lie in (1) devising a stereo, color-based target tracking method using the stereo geometry constraint and (2) integrating it with a robotic agent in a loosely coupled manner.
@InProceedings{enescu2005active, author = {Enescu, Valentin and De Cubber, Geert and Sahli, Hichem and Demeester, Eric and Vanhooydonck, Dirk and Nuttin, Marnix}, booktitle = {International Conference on Informatics in Control, Automation and Robotics}, title = {Active stereo vision-based mobile robot navigation for person tracking}, year = {2005}, address = {Barcelona, Spain}, month = sep, pages = {32--39}, abstract = {In this paper, we propose a mobile robot architecture for person tracking, consisting of an active stereo vision module (ASVM) and a navigation module (NM). The first tracks the person in stereo images and controls the pan/tilt unit to keep the target in the visual field. Its output, i.e. the 3D position of the person, is fed to the NM, which drives the robot towards the target while avoiding obstacles. As a peculiarity of the system, there is no feedback from the NM or the robot motion controller (RMC) to the ASVM. While this imparts flexibility in combining the ASVM with a wide range of robot platforms, it puts considerable strain on the ASVM. Indeed, besides the changes in the target dynamics, it has to cope with the robot motion during obstacle avoidance. These disturbances are accommodated by generating target location hypotheses in an efficient manner. Robustness against outliers and occlusions is achieved by employing a multi-hypothesis tracking method - the particle filter - based on a color model of the target. Moreover, to deal with illumination changes, the system adaptively updates the color model of the target. The main contributions of this paper lie in (1) devising a stereo, color-based target tracking method using the stereo geometry constraint and (2) integrating it with a robotic agent in a loosely coupled manner.}, project = {Mobiniss, ViewFinder}, doi = {10.3233/ica-2006-13302}, url = {http://mecatron.rma.ac.be/pub/2005/f969ee9e1169623340aa409f539fddb9c413.pdf}, unit= {meca-ras,vub-etro} }
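The stereo geometry constraint that both versions of this work exploit rests on the textbook depth-from-disparity relation Z = f·B/d for a rectified camera pair. A worked illustration follows; all numbers are invented, and this is the standard relation rather than code from the paper:

```python
def depth_from_disparity(x_left, x_right, focal_px, baseline_m):
    """Range of a point from its horizontal positions in a rectified stereo pair.

    x_left, x_right : image column of the target in the left/right view (px)
    focal_px        : focal length expressed in pixels
    baseline_m      : distance between the two camera centres (m)
    """
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("a visible point must have positive disparity")
    return focal_px * baseline_m / disparity

# Target seen at column 412 in the left image and 380 in the right one,
# with a 700 px focal length and a 12 cm baseline:
print(depth_from_disparity(412.0, 380.0, 700.0, 0.12))  # 700 * 0.12 / 32 = 2.625 m
```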