Geert De Cubber

Senior Researcher / Team Leader

Robotics & Autonomous Systems,
Royal Military Academy

Address

Avenue de la Renaissance 30, 1000 Brussels, Belgium

Contact Information

Phone: +32(0)2-44-14108

Email: geert.de.cubber@rma.ac.be

Geert De Cubber is the team leader of the Robotics & Autonomous Systems unit of the Department of Mechanics of the Belgian Royal Military Academy. He is also a senior researcher at this institute, with a research focus on developing robotic solutions to security challenges such as crisis management, the fight against crime and terrorism, and border security.

He received his Diploma in Mechanical Engineering in 2001 from the Vrije Universiteit Brussel (VUB) and his Doctoral Degree in Engineering in 2010 from the Vrije Universiteit Brussel and the Belgian Royal Military Academy (RMA).

He coordinates, and has coordinated, multiple European and national research projects, such as FP7-ICARUS (on the development of search and rescue robots), H2020-SafeShore (on the development of a threat detection system) and COURAGEOUS (on the development of a standard test methodology for counter-UAS (C-UAS) tools). In addition, he is the principal investigator at RMA for multiple international research projects, such as STARS*EU and ASSETs+.

His research interests include methodologies for the perception and control of multi-agent robotic systems across the air, land and maritime domains. His major goal is to find new ways to make multiple robotic systems capable of understanding their environment and deciding on optimal collaborative strategies. Prominent application examples are crisis management robots, humanitarian demining robots, robots for surveillance applications and, more generally, robots for harsh environments.

Geert is active as a reviewer for the European Commission and other funding agencies, and is a member of the organizing committees of several conferences and workshops in the field of robotics and computer vision. He has published around 100 scientific papers, including books and book chapters.

Publications

2024

  • T. Nguyen, C. Hamesse, T. Dutrannois, T. Halleux, G. De Cubber, R. Haelterman, and B. Janssens, “Visual-based Localization Methods for Unmanned Aerial Vehicles in Landing Operation on Maritime Vessel," Acta IMEKO, vol. 13, iss. 4, pp. 1–13, 2024.
    [BibTeX] [Download PDF] [DOI]
    @article{nguyen_visual_2024,
    title = {Visual-based {Localization} {Methods} for {Unmanned} {Aerial} {Vehicles} in {Landing} {Operation} on {Maritime} {Vessel}},
    volume = {13},
    issn = {2221-870X},
    url = {https://acta.imeko.org/index.php/acta-imeko/article/view/1575},
    doi = {10.21014/actaimeko.v13i4.1575},
    number = {4},
    journal = {Acta IMEKO},
    author = {Nguyen, Tien-Thanh and Hamesse, Charles and Dutrannois, Thomas and Halleux, Timothy and De Cubber, Geert and Haelterman, Rob and Janssens, Bart},
    month = nov,
    year = {2024},
    pages = {1--13},
    unit= {meca-ras},
    project= {MarLand}
    }

  • Z. Chekakta, N. Aouf, S. Govindaraj, F. Polisano, and G. De Cubber, “Towards Learning-Based Distributed Task Allocation Approach for Multi-Robot System," in 2024 10th International Conference on Automation, Robotics and Applications (ICARA), 2024, pp. 34-39.
    [BibTeX] [DOI]
    @INPROCEEDINGS{10553196,
    author={Chekakta, Zakaria and Aouf, Nabil and Govindaraj, Shashank and Polisano, Fabio and De Cubber, Geert},
    booktitle={2024 10th International Conference on Automation, Robotics and Applications (ICARA)},
    title={Towards Learning-Based Distributed Task Allocation Approach for Multi-Robot System},
    year={2024},
    volume={},
    number={},
    pages={34-39},
    keywords={Sequential analysis;Automation;Accuracy;Robot kinematics;Prediction algorithms;Approximation algorithms;Resource management;Task Allocation;Multirobot System;Distributed Algorithms;Graph Convolutional Neural Networks},
    doi={10.1109/ICARA60736.2024.10553196},
    unit= {meca-ras},
    project= {AIDED}
    }

  • P. Petsioti, M. Życzkowski, K. Brewczyński, K. Cichulski, K. Kamiński, R. Razvan, A. Mohamoud, C. Church, A. Koniaris, G. De Cubber, and D. Doroftei, “Methodological Approach for the Development of Standard C-UAS Scenarios," Open Research Europe, vol. 4, iss. 240, 2024.
    [BibTeX] [Download PDF] [DOI]
@Article{10.12688/openreseurope.18339.1,
AUTHOR = {Petsioti, P. and Życzkowski, M. and Brewczyński, K. and Cichulski, K. and Kamiński, K. and Razvan, R. and Mohamoud, A. and Church, C. and Koniaris, A. and De Cubber, G. and Doroftei, D.},
    TITLE = {Methodological Approach for the Development of Standard C-UAS Scenarios},
    JOURNAL = {Open Research Europe},
    VOLUME = {4},
    YEAR = {2024},
    NUMBER = {240},
    DOI = {10.12688/openreseurope.18339.1},
    URL = {https://open-research-europe.ec.europa.eu/articles/4-240/v1},
    unit= {meca-ras},
    project= {COURAGEOUS}
    }

  • K. D. Brewczyński, M. Życzkowski, K. Cichulski, K. A. Kamiński, P. Petsioti, and G. De Cubber, “Methods for Assessing the Effectiveness of Modern Counter Unmanned Aircraft Systems," Remote Sensing, vol. 16, iss. 19, 2024.
    [BibTeX] [Abstract] [Download PDF] [DOI]

    Given the growing threat posed by the widespread availability of unmanned aircraft systems (UASs), which can be utilised for various unlawful activities, the need for a standardised method to evaluate the effectiveness of systems capable of detecting, tracking, and identifying (DTI) these devices has become increasingly urgent. This article draws upon research conducted under the European project COURAGEOUS, where 260 existing drone detection systems were analysed, and a methodology was developed for assessing the suitability of C-UASs in relation to specific threat scenarios. The article provides an overview of the most commonly employed technologies in C-UASs, such as radars, visible light cameras, thermal imaging cameras, laser range finders (lidars), and acoustic sensors. It explores the advantages and limitations of each technology, highlighting their reliance on different physical principles, and also briefly touches upon the legal implications associated with their deployment. The article presents the research framework and provides a structural description, alongside the functional and performance requirements, as well as the defined metrics. Furthermore, the methodology for testing the usability and effectiveness of individual C-UAS technologies in addressing specific threat scenarios is elaborated. Lastly, the article offers a concise list of prospective research directions concerning the analysis and evaluation of these technologies.

    @Article{rs16193714,
    AUTHOR = {Brewczyński, Konrad D. and Życzkowski, Marek and Cichulski, Krzysztof and Kamiński, Kamil A. and Petsioti, Paraskevi and De Cubber, Geert},
    TITLE = {Methods for Assessing the Effectiveness of Modern Counter Unmanned Aircraft Systems},
    JOURNAL = {Remote Sensing},
    VOLUME = {16},
    YEAR = {2024},
    NUMBER = {19},
    ARTICLE-NUMBER = {3714},
    URL = {https://www.mdpi.com/2072-4292/16/19/3714},
    ISSN = {2072-4292},
    ABSTRACT = {Given the growing threat posed by the widespread availability of unmanned aircraft systems (UASs), which can be utilised for various unlawful activities, the need for a standardised method to evaluate the effectiveness of systems capable of detecting, tracking, and identifying (DTI) these devices has become increasingly urgent. This article draws upon research conducted under the European project COURAGEOUS, where 260 existing drone detection systems were analysed, and a methodology was developed for assessing the suitability of C-UASs in relation to specific threat scenarios. The article provides an overview of the most commonly employed technologies in C-UASs, such as radars, visible light cameras, thermal imaging cameras, laser range finders (lidars), and acoustic sensors. It explores the advantages and limitations of each technology, highlighting their reliance on different physical principles, and also briefly touches upon the legal implications associated with their deployment. The article presents the research framework and provides a structural description, alongside the functional and performance requirements, as well as the defined metrics. Furthermore, the methodology for testing the usability and effectiveness of individual C-UAS technologies in addressing specific threat scenarios is elaborated. Lastly, the article offers a concise list of prospective research directions concerning the analysis and evaluation of these technologies.},
    DOI = {10.3390/rs16193714},
    unit= {meca-ras},
project= {COURAGEOUS}
    }

  • A. Borghgraef, M. Vandewal, and G. De Cubber, “COURAGEOUS: test methods for counter-UAS systems," in Proceedings SPIE Sensors + Imaging, Target and Background Signatures X: Traditional Methods and Artificial Intelligence, 2024, p. 131990D.
    [BibTeX] [Download PDF] [DOI]
    @inproceedings{spie_alex,
    title={COURAGEOUS: test methods for counter-UAS systems},
    author={Borghgraef, Alexander and Vandewal, Marijke and De Cubber, Geert},
    year={2024},
booktitle={Proceedings SPIE Sensors + Imaging, Target and Background Signatures X: Traditional Methods and Artificial Intelligence},
    publisher = {SPIE},
    location = {Edinburgh, United Kingdom},
    unit= {meca-ras, ciss},
    project= {COURAGEOUS},
    volume = {13199},
    editor = {Karin Stein and Ric Schleijpen},
    organization = {International Society for Optics and Photonics},
    pages = {131990D},
    keywords = {counter-UAS, drone, border protection, standardization, measurement campaign, law enforcement, DTI, evaluation methods},
    doi = {10.1117/12.3030928},
    url = {https://doi.org/10.1117/12.3030928}
    }

  • A. M. Casado Fauli, M. Malizia, K. Hasselmann, E. Le Flécher, G. De Cubber, and B. Lauwens, “HADRON: Human-friendly Control and Artificial Intelligence for Military Drone Operations," in Proceedings of the 33rd IEEE International Conference on Robot and Human Interactive Communication (IEEE RO-MAN 2024), 2024.
    [BibTeX] [Download PDF]
    @inproceedings{fauli2024hadronhumanfriendlycontrolartificial,
    title={HADRON: Human-friendly Control and Artificial Intelligence for Military Drone Operations},
    author={Casado Fauli, Ana Maria and Malizia, Mario and Hasselmann, Ken and Le Flécher, Emile and De Cubber, Geert and Lauwens, Ben},
    year={2024},
booktitle={Proceedings of the 33rd IEEE International Conference on Robot and Human Interactive Communication (IEEE RO-MAN 2024)},
    publisher = {IEEE},
    location = {Pasadena, USA},
    unit= {meca-ras},
    project= {HADRON},
    eprint={2408.07063},
    archivePrefix={arXiv},
    primaryClass={cs.RO},
    url={https://arxiv.org/abs/2408.07063},
    }

  • D. Doroftei, G. De Cubber, S. Lo Bue, and H. De Smet, “Quantitative Assessment of Drone Pilot Performance," Drones, vol. 8, iss. 9, 2024.
    [BibTeX] [Abstract] [Download PDF] [DOI]

    This paper introduces a quantitative methodology for assessing drone pilot performance, aiming to reduce drone-related incidents by understanding the human factors influencing performance. The challenge lies in balancing evaluations in operationally relevant environments with those in a standardized test environment for statistical relevance. The proposed methodology employs a novel virtual test environment that records not only basic flight metrics but also complex mission performance metrics, such as the video quality from a target. A group of Belgian Defence drone pilots were trained using this simulator system, yielding several practical results. These include a human-performance model linking human factors to pilot performance, an AI co-pilot providing real-time flight performance guidance, a tool for generating optimal flight trajectories, a mission planning tool for ideal pilot assignment, and a method for iterative training improvement based on quantitative input. The training results with real pilots demonstrate the methodology’s effectiveness in evaluating pilot performance for complex military missions, suggesting its potential as a valuable addition to new pilot training programs.

    @Article{drones8090482,
    AUTHOR = {Doroftei, Daniela and De Cubber, Geert and Lo Bue, Salvatore and De Smet, Hans},
    TITLE = {Quantitative Assessment of Drone Pilot Performance},
    JOURNAL = {Drones},
    VOLUME = {8},
    YEAR = {2024},
    unit= {meca-ras},
    NUMBER = {9},
    ARTICLE-NUMBER = {482},
    URL = {https://www.mdpi.com/2504-446X/8/9/482},
    ISSN = {2504-446X},
    project= {ALPHONSE},
    ABSTRACT = {This paper introduces a quantitative methodology for assessing drone pilot performance, aiming to reduce drone-related incidents by understanding the human factors influencing performance. The challenge lies in balancing evaluations in operationally relevant environments with those in a standardized test environment for statistical relevance. The proposed methodology employs a novel virtual test environment that records not only basic flight metrics but also complex mission performance metrics, such as the video quality from a target. A group of Belgian Defence drone pilots were trained using this simulator system, yielding several practical results. These include a human-performance model linking human factors to pilot performance, an AI co-pilot providing real-time flight performance guidance, a tool for generating optimal flight trajectories, a mission planning tool for ideal pilot assignment, and a method for iterative training improvement based on quantitative input. The training results with real pilots demonstrate the methodology’s effectiveness in evaluating pilot performance for complex military missions, suggesting its potential as a valuable addition to new pilot training programs.},
    DOI = {10.3390/drones8090482}
    }

  • M. Malizia, A. M. Casado Fauli, K. Hasselmann, E. Le Flécher, G. De Cubber, and R. Haelterman, “Assisted Explosive Ordnance Disposal: Teleoperated Robotic Systems with AI, Virtual Reality, and Semi-Autonomous Manipulation for Safer Demining Operations," in 20th International Symposium Mine Action, 2024, pp. 52-55.
    [BibTeX] [Download PDF]
    @inproceedings{maliziamineact2024,
    title={Assisted Explosive Ordnance Disposal: Teleoperated Robotic Systems with AI, Virtual Reality, and Semi-Autonomous Manipulation for Safer Demining Operations},
    author={Malizia, Mario and Casado Fauli, Ana Maria and Hasselmann, Ken and Le Flécher, Emile and De Cubber, Geert and Haelterman, Rob},
    booktitle={20th International Symposium Mine Action},
    publisher = {CTRO-HR},
    year = {2024},
    location = {Cavtat, Croatia},
    unit= {meca-ras},
    url={https://www.ctro.hr/userfiles/files/MINE%20ACTION_2024_ONLIINE.pdf},
    pages={52-55},
    project= {BELGIAN}
    }

  • K. Hasselmann, M. Malizia, R. Caballero, F. Polisano, S. Govindaraj, J. Stigler, O. Ilchenko, M. Bajic, and G. De Cubber, “A multi-robot system for the detection of explosive devices," in IEEE ICRA Workshop on Field Robotics, 2024.
    [BibTeX] [Download PDF] [DOI]
    @inproceedings{Hasselmannetal2024ICRAWSFRO,
    doi = {10.48550/ARXIV.2404.14167},
    url={https://arxiv.org/abs/2404.14167},
booktitle = {IEEE ICRA Workshop on Field Robotics},
    author = {Hasselmann, Ken and Malizia, Mario and Caballero, Rafael and Polisano, Fabio and Govindaraj, Shashank and Stigler, Jakob and Ilchenko, Oleksii and Bajic, Milan and De Cubber, Geert},
    title = {A multi-robot system for the detection of explosive devices},
    year = {2024},
    unit= {meca-ras},
    project= {AIDED, AIDEDEX}
    }

  • T-T. Nguyen, A. Crismer, G. De Cubber, B. Janssens, and H. Bruyninckx, “Landing UAV on Moving Surface Vehicle: Visual Tracking and Motion Prediction of Landing Deck," in 2024 IEEE/SICE International Symposium on System Integration (SII), 2024.
    [BibTeX] [Download PDF] [DOI]
    @inproceedings{sii2024,
    title={Landing UAV on Moving Surface Vehicle: Visual Tracking and Motion Prediction of Landing Deck},
    author={Nguyen, T-T. and Crismer, A and De Cubber, G. and Janssens, B. and Bruyninckx, H.},
booktitle={2024 IEEE/SICE International Symposium on System Integration (SII)},
publisher = {IEEE},
year = {2024},
    location = {Ha Long, Vietnam},
    unit= {meca-ras},
doi = {10.1109/SII58957.2024.10417303},
    url={https://drive.google.com/file/d/1UiF4uPF9VkxgMxMX_JCBOFiZJPmoDSRv/view?usp=drive_link},
    project= {MarLand}
    }

2023

  • G. De Cubber, P. Petsioti, R. Roman, A. Mohamoud, I. Maza, and C. Church, “The COURAGEOUS project efforts towards standardized test methods for assessing the performance of counter-drone solutions," in Proceedings of the 11th Biennial Symposium on Non-Lethal Weapons, 2023, p. 44.
    [BibTeX] [Download PDF]
    @inproceedings{decubbercuas2023,
    title={The COURAGEOUS project efforts towards standardized test methods for assessing the performance of counter-drone solutions},
author={De Cubber, Geert and Petsioti, Paraskevi and Roman, Razvan and Mohamoud, Ali and Maza, Ivan and Church, Christopher},
booktitle={Proceedings of the 11th Biennial Symposium on Non-Lethal Weapons},
    publisher = {European Working Group on Non-Lethal Weapons},
    year = {2023},
    location = {Brussels, Belgium},
    unit= {meca-ras},
    url={https://mecatron.rma.ac.be/pub/2024/Towards%20standardized%20test%20methods%20for%20assessing%20the%20performance%20of%20counter-drone%20solutions.pdf},
    pages={44},
    project= {COURAGEOUS}
    }

  • G. De Cubber, E. Le Flécher, A. La Grappe, E. Ghisoni, E. Maroulis, P. Ouendo, D. Hawari, and D. Doroftei, “Dual Use Security Robotics: A Demining, Resupply and Reconnaissance Use Case," in IEEE International Conference on Safety, Security, and Rescue Robotics, 2023.
    [BibTeX] [Download PDF]
    @inproceedings{ssrr2023decubber,
    title={Dual Use Security Robotics: A Demining, Resupply and Reconnaissance Use Case},
    author={De Cubber, Geert and Le Flécher, Emile and La Grappe, Alexandre and Ghisoni, Enzo and Maroulis, Emmanouil and Ouendo, Pierre-Edouard and Hawari, Danial and Doroftei, Daniela},
    booktitle={IEEE International Conference on Safety, Security, and Rescue Robotics},
editor = {Kimura, Tetsuya},
publisher = {IEEE},
year = {2023},
volume = {1},
project = {AIDED, iMUGs, CUGS},
location = {Fukushima, Japan},
unit= {meca-ras},
    url={https://mecatron.rma.ac.be/pub/2023/SSRR2023-DeCubber.pdf}
    }

  • T-T. Nguyen, L. Somers, J. Van den Bosch, G. De Cubber, B. Janssens, and H. Bruyninckx, “Affordable and Customizable Research and Educational Aerial and Surface Vehicles Robot Platforms – first implementation," in 17th Mechatronics Forum International Conference, 2023.
    [BibTeX] [Download PDF]
    @inproceedings{mechatronics20203usv,
    title={Affordable and Customizable Research and Educational Aerial and Surface Vehicles Robot Platforms – first implementation},
    author={Nguyen, T-T. and Somers, L. and Van den Bosch, J. and De Cubber, G. and Janssens, B. and Bruyninckx, H.},
booktitle={17th Mechatronics Forum International Conference},
year = {2023},
location = {Leuven, Belgium},
unit= {meca-ras},
    url={https://mechatronics2023.eu/wp-content/uploads/2023/09/MX_2023_session_3_paper_3_nguyen.pdf},
    project= {MarLand}
    }

  • T-T. Nguyen, J. Duverger, G. De Cubber, B. Janssens, and H. Bruyninckx, “Development of Dual-function Adaptive Landing Gear and Gripper for Unmanned Aerial Vehicles," in 17th Mechatronics Forum International Conference, 2023.
    [BibTeX] [Download PDF]
    @inproceedings{mechatronics20203gripper,
    title={Development of Dual-function Adaptive Landing Gear and Gripper for Unmanned Aerial Vehicles},
    author={Nguyen, T-T. and Duverger, J. and De Cubber, G. and Janssens, B. and Bruyninckx, H.},
booktitle={17th Mechatronics Forum International Conference},
year = {2023},
location = {Leuven, Belgium},
unit= {meca-ras},
    url={https://mechatronics2023.eu/wp-content/uploads/2023/09/MX_2023_session_3_paper_1_nguyen.pdf},
    project= {MarLand}
    }

  • G. De Cubber, E. Le Flécher, A. Dominicus, and D. Doroftei, “Human-agent teaming between soldiers and unmanned ground systems in a resupply scenario," in Human Factors in Robots, Drones and Unmanned Systems, AHFE (2023) International Conference, 2023.
    [BibTeX] [Abstract] [Download PDF] [DOI]

    Thanks to advances in embedded computing and robotics, intelligent Unmanned Ground Systems (UGS) are used more and more in our daily lives. Also in the military domain, the use of UGS is highly investigated for applications like force protection of military installations, surveillance, target acquisition, reconnaissance, handling of chemical, biological, radiological, nuclear (CBRN) threats, explosive ordnance disposal, etc. A pivotal research aspect for the integration of these military UGS in the standard operating procedures is the question of how to achieve a seamless collaboration between human and robotic agents in such high-stress and non-structured environments. Indeed, in these kinds of operations, it is critical that the human-agent mutual understanding is flawless; hence, the focus on human factors and ergonomic design of the control interfaces. The objective of this paper is to focus on one key military application of UGS, more specifically logistics, and elaborate how efficient human-machine teaming can be achieved in such a scenario. While getting much less attention than other application areas, the domain of logistics is in fact one of the most important for any military operation, as it is an application area that is very well suited for robotic systems. Indeed, military troops are very often burdened by having to haul heavy gear across large distances, which is a problem UGS can solve. The significance of this paper is that it is based on more than two years of field research work on human + multi-agent UGS collaboration in realistic military operating conditions, performed within the scope of the European project iMUGS. In the framework of this project, not less than six large-scale field trial campaigns were organized across Europe. In each field trial campaign, soldiers and UGS had to work together to achieve a set of high-level mission goals that were distributed among them via a planning & scheduling mechanism. This paper will focus on the outcomes of the Belgian field trial, which concentrated on a resupply logistics mission. Within this paper, a description of the iMUGS test setup and operational scenarios is provided. The ergonomic design of the tactical planning system is elaborated, together with the high-level swarming and task scheduling methods that divide the work between robotic and human agents in the field. The resupply mission, as described in this paper, was executed in summer 2022 in Belgium by a mixed team of soldiers and UGS for an audience of around 200 people from defence actors from European member states. The results of this field trial were evaluated as highly positive, as all high-level requirements were obtained by the robotic fleet.

    @inproceedings{ahfe20203decubber,
    title={Human-agent teaming between soldiers and unmanned ground systems in a resupply scenario},
    author={De Cubber, G. and Le Flécher, E. and Dominicus, A. and Doroftei, D.},
    booktitle={Human Factors in Robots, Drones and Unmanned Systems. AHFE (2023) International Conference.},
editor = {Tareq Ahram and Waldemar Karwowski},
    publisher = {AHFE Open Access, AHFE International, USA},
    year = {2023},
volume = {93},
    project = {iMUGs},
    location = {San Francisco, USA},
    unit= {meca-ras},
doi = {10.54941/ahfe1003746},
    url={https://openaccess.cms-conferences.org/publications/book/978-1-958651-69-8/article/978-1-958651-69-8_5},
    abstract = {Thanks to advances in embedded computing and robotics, intelligent Unmanned Ground Systems (UGS) are used more and more in our daily lives. Also in the military domain, the use of UGS is highly investigated for applications like force protection of military installations, surveillance, target acquisition, reconnaissance, handling of chemical, biological, radiological, nuclear (CBRN) threats, explosive ordnance disposal, etc. A pivotal research aspect for the integration of these military UGS in the standard operating procedures is the question of how to achieve a seamless collaboration between human and robotic agents in such high-stress and non-structured environments. Indeed, in these kind of operations, it is critical that the human-agent mutual understanding is flawless; hence, the focus on human factors and ergonomic design of the control interfaces.The objective of this paper is to focus on one key military application of UGS, more specifically logistics, and elaborate how efficient human-machine teaming can be achieved in such a scenario. While getting much less attention than other application areas, the domain of logistics is in fact one of the most important for any military operation, as it is an application area that is very well suited for robotic systems. Indeed, military troops are very often burdened by having to haul heavy gear across large distances, which is a problem UGS can solve.The significance of this paper is that it is based on more than two years of field research work on human + multi-agent UGS collaboration in realistic military operating conditions, performed within the scope of the European project iMUGS. In the framework of this project, not less than six large-scale field trial campaigns were organized across Europe. In each field trial campaign, soldiers and UGS had to work together to achieve a set of high-level mission goals that were distributed among them via a planning & scheduling mechanism. 
This paper will focus on the outcomes of the Belgian field trial, which concentrated on a resupply logistics mission.Within this paper, a description of the iMUGS test setup and operational scenarios is provided. The ergonomic design of the tactical planning system is elaborated, together with the high-level swarming and task scheduling methods that divide the work between robotic and human agents in the fieldThe resupply mission, as described in this paper, was executed in summer 2022 in Belgium by a mixed team of soldiers and UGS for an audience of around 200 people from defence actors from European member states. The results of this field trial were evaluated as highly positive, as all high-level requirements were obtained by the robotic fleet.}
    }

  • D. Doroftei, G. De Cubber, and H. De Smet, “Human factors assessment for drone operations: towards a virtual drone co-pilot," in Human Factors in Robots, Drones and Unmanned Systems, AHFE (2023) International Conference, 2023.
    [BibTeX] [Abstract] [Download PDF] [DOI]

    As the number of drone operations increases, so does the risk of incidents with these novel, yet sometimes dangerous unmanned systems. Research has shown that over 70% of drone incidents are caused by human error, so in order to reduce the risk of incidents, the human factors related to the operation of the drone should be studied. However, this is not a trivial exercise, because on the one hand, a realistic operational environment is required (in order to study the human behaviour in realistic conditions), while on the other hand a standardised environment is required, such that repeatable experiments can be set up in order to ensure statistical relevance. In order to remedy this, within the scope of the ALPHONSE project, a realistic simulation environment was developed that is specifically geared towards the evaluation of human factors for military drone operations. Within the ALPHONSE simulator, military (and other) drone pilots can perform missions in realistic operational conditions. At the same time, they are subjected to a range of factors that can influence operator performance. These constitute both person-induced factors, like pressure to achieve the set goals in time or people talking to the pilot, and environment-induced stress factors, like changing weather conditions. During the flight operation, the ALPHONSE simulator continuously monitors over 65 flight parameters. After the flight, an overall performance score is calculated, based upon the achievement of the mission objectives. Throughout the ALPHONSE trials, a wide range of pilots has flown in the simulator, ranging from beginner to expert pilots. Using all the data recorded during these flights, three actions are performed: (1) an Artificial Intelligence (AI) based classifier was trained to automatically recognize in real time good and bad flight behaviour, which allows for the development of a virtual co-pilot that can warn the pilot at any given moment when the pilot is starting to exhibit behaviour that is recognized by the classifier to correspond mostly to the behaviour of inexperienced pilots and not to the behaviour of good pilots; (2) an identification and ranking of the human factors and their impact on the flight performance, by linking the induced stress factors to the performance scores; and (3) an update of the training procedures to take into consideration the human factors that impact flight performance, such that newly trained pilots are better aware of these influences. The objective of this paper is to present the complete ALPHONSE simulator system for the evaluation of human factors for drone operations and present the results of the experiments with real military flight operators. The focus of the paper will be on the elaboration of the design choices for the development of the AI-based classifier for real-time flight performance evaluation. The proposed development is highly significant, as it presents a concrete and cost-effective methodology for developing a virtual co-pilot for drone pilots that can render drone operations safer. Indeed, while the initial training of the AI model requires considerable computing resources, the implementation of the classifier can be readily integrated in commodity flight controllers to provide real-time alerts when pilots are manifesting undesired flight behaviours. The paper will present results of tests with drone pilots from Belgian Defence and civilian Belgian Defence researchers that have flown within the ALPHONSE simulator. These pilots have first acted as data subjects to provide flight data to train the model and have later been used to validate the model. The validation shows that the virtual co-pilot achieves a very high accuracy and can in over 80% of the cases correctly identify bad flight profiles in real-time.

    @inproceedings{ahfe20203doroftei,
    title={Human factors assessment for drone operations: towards a virtual drone co-pilot},
    author={Doroftei, D. and De Cubber, G. and De Smet, H.},
    booktitle={Human Factors in Robots, Drones and Unmanned Systems. AHFE (2023) International Conference.},
editor = {Tareq Ahram and Waldemar Karwowski},
    publisher = {AHFE Open Access, AHFE International, USA},
    year = {2023},
volume = {93},
    project = {Alphonse},
    location = {San Francisco, USA},
    unit= {meca-ras},
doi = {10.54941/ahfe1003747},
    url={https://openaccess.cms-conferences.org/publications/book/978-1-958651-69-8/article/978-1-958651-69-8_6},
    abstract = {As the number of drone operations increases, so does the risk of incidents with these novel, yet sometimes dangerous unmanned systems. Research has shown that over 70% of drone incidents are caused by human error, so in order to reduce the risk of incidents, the human factors related to the operation of the drone should be studied. However, this is not a trivial exercise, because on the one hand, a realistic operational environment is required (in order to study the human behaviour in realistic conditions), while on the other hand a standardised environment is required, such that repeatable experiments can be set up in order to ensure statistical relevance. In order to remedy this, within the scope of the ALPHONSE project, a realistic simulation environment was developed that is specifically geared towards the evaluation of human factors for military drone operations. Within the ALPHONSE simulator, military (and other) drone pilots can perform missions in realistic operational conditions. At the same time, they are subjected to a range of factors that can influence operator performance. These constitute both person-induced factors, like pressure to achieve the set goals in time or people talking to the pilot, and environment-induced stress factors, like changing weather conditions. During the flight operation, the ALPHONSE simulator continuously monitors over 65 flight parameters. After the flight, an overall performance score is calculated, based upon the achievement of the mission objectives. Throughout the ALPHONSE trials, a wide range of pilots has flown in the simulator, ranging from beginner to expert pilots. Using all the data recorded during these flights, three actions are performed: (1) An Artificial Intelligence (AI) based classifier was trained to automatically recognize in real time good and bad flight behaviour. This allows for the development of a virtual co-pilot that can warn the pilot at any given moment when the pilot is starting to exhibit behaviour that is recognized by the classifier to correspond mostly to the behaviour of inexperienced pilots and not to the behaviour of good pilots. (2) An identification and ranking of the human factors and their impact on the flight performance, by linking the induced stress factors to the performance scores. (3) An update of the training procedures to take into consideration the human factors that impact flight performance, such that newly trained pilots are better aware of these influences. The objective of this paper is to present the complete ALPHONSE simulator system for the evaluation of human factors for drone operations and present the results of the experiments with real military flight operators. The focus of the paper will be on the elaboration of the design choices for the development of the AI-based classifier for real-time flight performance evaluation. The proposed development is highly significant, as it presents a concrete and cost-effective methodology for developing a virtual co-pilot for drone pilots that can render drone operations safer. Indeed, while the initial training of the AI model requires considerable computing resources, the implementation of the classifier can be readily integrated in commodity flight controllers to provide real-time alerts when pilots are manifesting undesired flight behaviours. The paper will present results of tests with drone pilots from Belgian Defence and civilian Belgian Defence researchers that have flown within the ALPHONSE simulator. These pilots have first acted as data subjects to provide flight data to train the model and have later been used to validate the model. The validation shows that the virtual co-pilot achieves a very high accuracy and can in over 80% of the cases correctly identify bad flight profiles in real-time.}
    }
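The abstract above describes a classifier that separates good from bad flight behaviour using recorded flight parameters. As a purely illustrative sketch (not the ALPHONSE implementation), a minimal nearest-centroid classifier over two invented features can convey the idea; all feature names and numbers here are hypothetical:

```python
# Minimal sketch of a good/bad flight-behaviour classifier.
# Features and training values are invented for illustration only.
from math import dist

# Toy training data: [altitude variance, control-input jerkiness]
GOOD = [[0.2, 0.1], [0.3, 0.2], [0.1, 0.15]]
BAD = [[0.9, 0.8], [1.1, 0.7], [0.8, 0.95]]

def centroid(rows):
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

GOOD_C, BAD_C = centroid(GOOD), centroid(BAD)

def classify(features):
    """Label a flight-parameter vector by its nearest class centroid."""
    return "good" if dist(features, GOOD_C) <= dist(features, BAD_C) else "bad"

print(classify([0.25, 0.12]))  # a smooth, stable flight profile
print(classify([1.0, 0.9]))    # an erratic flight profile
```

A real-time virtual co-pilot would evaluate such a decision rule continuously over the monitored flight parameters and raise an alert whenever the "bad" class is matched.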

  • E. Ghisoni, S. Govindaraj, A. M. C. Faulí, G. De Cubber, F. Polisano, N. Aouf, D. Rondao, Z. Chekakta, and B. de Waard, “Multi-agent system and AI for Explosive Ordnance Disposal," in 19th International Symposium Mine Action, 2023, p. 26.
    [BibTeX] [Download PDF]
    @inproceedings{ghisonimulti,
    title={Multi-agent system and AI for Explosive Ordnance Disposal},
    author={Ghisoni, Enzo and Govindaraj, Shashank and Faul{\'\i}, Ana Mar{\'\i}a Casado and De Cubber, Geert and Polisano, Fabio and Aouf, Nabil and Rondao, Duarte and Chekakta, Zakaria and de Waard, Bob},
    booktitle={19th International Symposium Mine Action},
    publisher = {CEIA},
    year = {2023},
    project = {AIDED},
    location = {Croatia},
    unit= {meca-ras},
    url={https://www.ctro.hr/userfiles/files/MINE-ACTION-2023_.pdf},
    pages={26}
    }

2022

  • R. Lahouli, G. De Cubber, B. Pairet, C. Hamesse, T. Fréville, and R. Haelterman, “Deep Learning based Object Detection and Tracking for Maritime Situational Awareness," in Proceedings of the 17th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications – Volume 4: VISAPP, 2022, pp. 643-650.
    [BibTeX] [Download PDF] [DOI]
    @conference{visapp22,
    author={Lahouli, Rihab and De Cubber, Geert and Pairet, Benoît and Hamesse, Charles and Fréville, Timothée and Haelterman, Rob},
    title={Deep Learning based Object Detection and Tracking for Maritime Situational Awareness},
    booktitle={Proceedings of the 17th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 4: VISAPP},
    year={2022},
    pages={643-650},
    publisher={SciTePress},
    organization={INSTICC},
    doi={10.5220/0010901000003124},
    isbn={978-989-758-555-5},
    project={SSAVE},
    url={https://www.scitepress.org/PublicationsDetail.aspx?ID=mJ5eF6o+SbM=&t=1},
    unit= {meca-ras}
    }

  • D. Doroftei, G. De Cubber, and H. De Smet, “A quantitative measure for the evaluation of drone-based video quality on a target," in Eighteenth International Conference on Autonomic and Autonomous Systems (ICAS), Venice, Italy, 2022.
    [BibTeX] [Abstract] [Download PDF]

    This paper presents a methodology to assess video quality and, based on that, to automatically calculate drone trajectories that optimize the video quality.

    @InProceedings{doroftei2022alphonse2,
    author = {Doroftei, Daniela and De Cubber, Geert and De Smet, Hans},
    booktitle = {Eighteenth International Conference on Autonomic and Autonomous Systems (ICAS)},
    title = {A quantitative measure for the evaluation of drone-based video quality on a target},
    year = {2022},
    month = jun,
    organization = {IARIA},
    publisher = {ThinkMind},
    address = {Venice, Italy},
    url = {https://www.thinkmind.org/articles/icas_2022_1_40_20018.pdf},
    isbn={978-1-61208-966-9},
    abstract = {This paper presents a methodology to assess video quality and, based on that, to automatically calculate drone trajectories that optimize the video quality.},
    project = {Alphonse},
    unit= {meca-ras}
    }

  • E. Ghisoni and G. De Cubber, “AIDED: Robotics & Artificial Intelligence for Explosive Ordnance Disposal," in International Workshop on Robotics for risky interventions and Environmental Surveillance-Maintenance (VRISE), Les Bons Villers, Belgium, 2022.
    [BibTeX] [Abstract] [Download PDF]

    This paper presents an overview of the AIDED project on AI for IED detection.

    @InProceedings{ghisoni2022a,
    author = {Ghisoni, Enzo and De Cubber, Geert},
    booktitle = {International Workshop on Robotics for risky interventions and Environmental Surveillance-Maintenance (VRISE)},
    title = {AIDED: Robotics \& Artificial Intelligence for Explosive Ordnance Disposal},
    year = {2022},
    month = jun,
    organization = {IMEKO},
    publisher = {IMEKO},
    address = {Les Bons Villers, Belgium},
    url = {https://www.ici-belgium.be/registration-and-program-vrise2022-june-7/},
    abstract = {This paper presents an overview of the AIDED project on AI for IED detection.},
    project = {AIDED},
    unit= {meca-ras}
    }

  • G. De Cubber and F. E. Schneider, “Military Robotics," in Encyclopedia of Robotics, M. H. Ang, O. Khatib, and B. Siciliano, Eds., Springer, 2022.
    [BibTeX] [Download PDF]
    @InCollection{encyclopedia2022,
    author = {De Cubber, Geert and Schneider, Frank E.},
    title = {Military Robotics},
    editor = {Ang, Marcelo H. and Khatib, Oussama and Siciliano, Bruno},
    booktitle = {Encyclopedia of Robotics},
    publisher = {Springer},
    year = {2022},
    url = {https://meteor.springer.com/project/dashboard.jsf?id=347},
    project = {iMUGs},
    unit= {meca-ras}
    }

  • D. Doroftei, G. De Cubber, and H. De Smet, “Assessing Human Factors for Drone Operations in a Simulation Environment," in Human Factors in Robots, Drones and Unmanned Systems – AHFE (2022) International Conference, New York, USA, 2022.
    [BibTeX] [Abstract] [Download PDF] [DOI]

    This paper presents an overview of the Alphonse methodology for Assessing Human Factors for Drone Operations in a Simulation Environment.

    @InProceedings{doroftei2022a,
    author = {Doroftei, Daniela and De Cubber, Geert and De Smet, Hans},
    booktitle = {Human Factors in Robots, Drones and Unmanned Systems - AHFE (2022) International Conference},
    title = {Assessing Human Factors for Drone Operations in a Simulation Environment},
    year = {2022},
    month = jul,
    volume = {57},
    editor = {Tareq Ahram and Waldemar Karwowski},
    publisher = {AHFE International},
    address = {New York, USA},
    url = {https://openaccess-api.cms-conferences.org/articles/download/978-1-958651-33-9_16},
    abstract = {This paper presents an overview of the Alphonse methodology for Assessing Human Factors for Drone Operations in a Simulation Environment.},
    doi = {10.54941/ahfe1002319},
    project = {Alphonse},
    unit= {meca-ras}
    }

  • T. Halleux, T. Nguyen, C. Hamesse, G. De Cubber, and B. Janssens, “Visual Drone Detection and Tracking for Autonomous Operation from Maritime Vessel," in Proceedings of TC17-ISMCR2022 – A Topical Event of Technical Committee on Measurement and Control of Robotics (TC17), International Measurement Confederation (IMEKO), Theme: “Robotics and Virtual Tools for a New Era", 2022.
    [BibTeX] [Download PDF] [DOI]
    @INPROCEEDINGS{ismcr2022_1,
    author={Halleux, Timothy and Nguyen, Tien-Thanh and Hamesse, Charles and De Cubber, Geert and Janssens, Bart},
    booktitle={Proceedings of TC17-ISMCR2022 - A Topical Event of Technical Committee on Measurement and Control of Robotics (TC17), International Measurement Confederation (IMEKO), Theme: "Robotics and Virtual Tools for a New Era"},
    title={Visual Drone Detection and Tracking for Autonomous Operation from Maritime Vessel},
    year={2022},
    url={https://mecatron.rma.ac.be/pub/2022/ISMCR-Drone_detection_tracking_FullPaper.pdf},
    project={MarLand, COURAGEOUS},
    publisher={IMEKO},
    doi={10.5281/zenodo.7074445},
    month={September},
    unit= {meca-ras}
    }

  • T. Dutrannois, T. Nguyen, C. Hamesse, G. De Cubber, and B. Janssens, “Visual SLAM for Autonomous Drone Landing on a Maritime Platform," in Proceedings of TC17-ISMCR2022 – A Topical Event of Technical Committee on Measurement and Control of Robotics (TC17), International Measurement Confederation (IMEKO), Theme: “Robotics and Virtual Tools for a New Era", 2022.
    [BibTeX] [Download PDF] [DOI]
    @INPROCEEDINGS{ismcr2022_2,
    author={Dutrannois, Thomas and Nguyen, Tien-Thanh and Hamesse, Charles and De Cubber, Geert and Janssens, Bart},
    booktitle={Proceedings of TC17-ISMCR2022 - A Topical Event of Technical Committee on Measurement and Control of Robotics (TC17), International Measurement Confederation (IMEKO), Theme: "Robotics and Virtual Tools for a New Era"},
    title={Visual SLAM for Autonomous Drone Landing on a Maritime Platform},
    year={2022},
    url={https://mecatron.rma.ac.be/pub/2022/ISMCR-Visual_SLAM_FullPaper.pdf},
    project={MarLand},
    publisher={IMEKO},
    doi={10.5281/zenodo.7074451},
    month={September},
    unit= {meca-ras}
    }

  • E. Le Flécher, A. La Grappe, and G. De Cubber, “iMUGS – A ground multi-robot architecture for military Manned-Unmanned Teaming," in 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, 2022.
    [BibTeX]
    @InProceedings{imugs_le_flecher_la_grappe_de_cubber,
    address={Kyoto, Japan},
    title={iMUGS - A ground multi-robot architecture for military Manned-Unmanned Teaming},
    booktitle={2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
    publisher={IEEE},
    year={2022},
    author={Le Flécher, Emile and La Grappe, Alexandre and De Cubber, Geert},
    project = {iMUGs},
    unit= {meca-ras}
    }

2021

  • K. Mathiassen, F. E. Schneider, P. Bounker, A. Tiderko, G. De Cubber, M. Baksaas, J. Główka, R. Kozik, T. Nussbaumer, J. Röning, J. Pellenz, and A. Volk, “Demonstrating interoperability between unmanned ground systems and command and control systems," International Journal of Intelligent Defence Support Systems, vol. 6, iss. 2, pp. 100-129, 2021.
    [BibTeX] [Download PDF] [DOI]
    @article{doi:10.1504/IJIDSS.2021.115236,
    author = {Mathiassen, Kim and Schneider, Frank E. and Bounker, Paul and Tiderko, Alexander and De Cubber, Geert and Baksaas, Magnus and Główka, Jakub and Kozik, Rafał and Nussbaumer, Thomas and Röning, Juha and Pellenz, Johannes and Volk, André},
    title = {Demonstrating interoperability between unmanned ground systems and command and control systems},
    journal = {International Journal of Intelligent Defence Support Systems},
    volume = {6},
    number = {2},
    pages = {100-129},
    year = {2021},
    doi = {10.1504/IJIDSS.2021.115236},
    url = {https://www.inderscienceonline.com/doi/abs/10.1504/IJIDSS.2021.115236},
    eprint = {https://www.inderscienceonline.com/doi/pdf/10.1504/IJIDSS.2021.115236},
    project = {ICARUS, iMUGs},
    unit= {meca-ras}
    }

  • D. Doroftei, T. De Vleeschauwer, S. Lo Bue, M. Dewyn, F. Vanderstraeten, and G. De Cubber, “Human-Agent Trust Evaluation in a Digital Twin Context," in 2021 30th IEEE International Conference on Robot Human Interactive Communication (RO-MAN), Vancouver, BC, Canada, 2021, pp. 203-207.
    [BibTeX] [Download PDF] [DOI]
    @INPROCEEDINGS{9515445,
    author={Doroftei, Daniela and De Vleeschauwer, Tom and {Lo Bue}, Salvatore and Dewyn, Michaël and Vanderstraeten, Frik and De Cubber, Geert},
    booktitle={2021 30th IEEE International Conference on Robot Human Interactive Communication (RO-MAN)},
    title={Human-Agent Trust Evaluation in a Digital Twin Context},
    year={2021},
    pages={203-207},
    url={https://www.researchgate.net/profile/Geert-De-Cubber/publication/354078858_Human-Agent_Trust_Evaluation_in_a_Digital_Twin_Context/links/61430bd22bfbd83a46cf2b8c/Human-Agent-Trust-Evaluation-in-a-Digital-Twin-Context.pdf},
    project={Alphonse},
    publisher={IEEE},
    address={Vancouver, BC, Canada},
    month=aug,
    doi={10.1109/RO-MAN50785.2021.9515445},
    unit= {meca-ras}}

  • G. De Cubber, R. Lahouli, D. Doroftei, and R. Haelterman, “Distributed coverage optimisation for a fleet of unmanned maritime systems," ACTA IMEKO, vol. 10, iss. 3, pp. 36-43, 2021.
    [BibTeX] [Abstract] [Download PDF] [DOI]

    Unmanned maritime systems (UMS) can provide important benefits for maritime law enforcement agencies for tasks such as area surveillance and patrolling, especially when they are able to work together as one coordinated system. In this context, this paper proposes a methodology that optimises the coverage of a fleet of UMS, thereby maximising the opportunities for identifying threats. Unlike traditional approaches to maritime coverage optimisation, which are also used, for example, in search and rescue operations when searching for victims at sea, this approach takes into consideration the limited seaworthiness of small UMS, compared with traditional large ships, by incorporating the danger level into the design of the optimiser.

    @ARTICLE{cubberimeko2021,
    author={De Cubber, Geert and Lahouli, Rihab and Doroftei, Daniela and Haelterman, Rob},
    journal={ACTA IMEKO},
    title={Distributed coverage optimisation for a fleet of unmanned maritime systems},
    year={2021},
    volume={10},
    number={3},
    pages={36-43},
    issn={2221-870X},
    url={https://acta.imeko.org/index.php/acta-imeko/article/view/IMEKO-ACTA-10%20%282021%29-03-07/pdf},
    project={MarSur, SSAVE},
    publisher={IMEKO},
    month=oct,
    abstract = {Unmanned maritime systems (UMS) can provide important benefits for maritime law enforcement agencies for tasks such as area surveillance and patrolling, especially when they are able to work together as one coordinated system. In this context, this paper proposes a methodology that optimises the coverage of a fleet of UMS, thereby maximising the opportunities for identifying threats. Unlike traditional approaches to maritime coverage optimisation, which are also used, for example, in search and rescue operations when searching for victims at sea, this approach takes into consideration the limited seaworthiness of small UMS, compared with traditional large ships, by incorporating the danger level into the design of the optimiser. },
    doi={10.21014/acta_imeko.v10i3.1031},
    unit= {meca-ras}}
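The coverage idea described in the abstract above, weighting the surveillance value of an area by the danger it poses to small, less seaworthy platforms, can be sketched with a simple greedy assignment. This is a hedged illustration under invented data, not the authors' optimiser:

```python
# Sketch of danger-aware coverage assignment for a fleet of unmanned
# maritime systems. Cell values and danger levels are invented.

def assign_fleet(cells, n_vessels):
    """cells: {cell_id: (surveillance_value, danger in [0, 1])}.
    Greedily pick the n_vessels cells with the highest danger-discounted
    value, one vessel per cell."""
    scored = {c: value * (1.0 - danger) for c, (value, danger) in cells.items()}
    chosen = []
    for _ in range(n_vessels):
        if not scored:
            break
        best = max(scored, key=scored.get)
        chosen.append(best)
        del scored[best]
    return chosen

cells = {"A": (10, 0.1), "B": (12, 0.8), "C": (8, 0.0), "D": (9, 0.5)}
print(assign_fleet(cells, 2))  # -> ['A', 'C']
```

Note how cell B, despite having the highest raw surveillance value, is skipped because its danger level makes it unsuitable for a small unmanned platform; that trade-off is the core of the danger-aware design.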

  • Y. Baudoin, G. De Cubber, and E. Cepolina, “Mobile Robots Supporting Risky Interventions, Humanitarian actions and Demining, in particular the promising DISARMADILLO Tool," in Proceedings of TC17-VRISE2021 – A VIRTUAL Topical Event of Technical Committee on Measurement and Control of Robotics (TC17), International Measurement Confederation (IMEKO), Theme: “Robotics for Risky Interventions and Environmental Surveillance", Houston, TX, USA, 2021, pp. 5-6.
    [BibTeX] [Download PDF]
    @INPROCEEDINGS{knvrise,
    author={Baudoin, Yvan and De Cubber, Geert and Cepolina, Emanuela},
    booktitle={Proceedings of TC17-VRISE2021 - A VIRTUAL Topical Event of Technical Committee on Measurement and Control of Robotics (TC17), International Measurement Confederation (IMEKO), Theme: "Robotics for Risky Interventions and Environmental Surveillance"},
    title={Mobile Robots Supporting Risky Interventions, Humanitarian actions and Demining, in particular the promising DISARMADILLO Tool},
    year={2021},
    volume={},
    number={},
    pages={5-6},
    url={https://mecatron.rma.ac.be/pub/2021/TC17-VRISE2021-Abstract%20Proceedings.pdf},
    project={AIDED, Alphonse, MarSur, SSAVE, MarLand, iMUGs, ICARUS, TIRAMISU},
    publisher={IMEKO},
    address={Houston, TX, USA},
    month=oct,
    unit= {meca-ras}
    }

2020

  • H. Balta, J. Velagic, H. Beglerovic, G. De Cubber, and B. Siciliano, “3D Registration and Integrated Segmentation Framework for Heterogeneous Unmanned Robotic Systems," Remote Sensing, vol. 12, iss. 10, p. 1608, 2020.
    [BibTeX] [Abstract] [Download PDF] [DOI]

    The paper proposes a novel framework for registering and segmenting 3D point clouds of large-scale natural terrain and complex environments coming from a multisensor heterogeneous robotics system, consisting of unmanned aerial and ground vehicles. This framework involves data acquisition and pre-processing, 3D heterogeneous registration and integrated multi-sensor based segmentation modules. The first module provides robust and accurate homogeneous registrations of 3D environmental models based on sensors’ measurements acquired from the ground (UGV) and aerial (UAV) robots. For 3D UGV registration, we proposed a novel local minima escape ICP (LME-ICP) method, which is based on the well known iterative closest point (ICP) algorithm extending it by the introduction of our local minima estimation and local minima escape mechanisms. It did not require any prior known pose estimation information acquired from sensing systems like odometry, global positioning system (GPS), or inertial measurement units (IMU). The 3D UAV registration has been performed using the Structure from Motion (SfM) approach. In order to improve and speed up the process of outliers removal for large-scale outdoor environments, we introduced the Fast Cluster Statistical Outlier Removal (FCSOR) method. This method was used to filter out the noise and to downsample the input data, which will spare computational and memory resources for further processing steps. Then, we co-registered a point cloud acquired from a laser ranger (UGV) and a point cloud generated from images (UAV) generated by the SfM method. The 3D heterogeneous module consists of a semi-automated 3D scan registration system, developed with the aim to overcome the shortcomings of the existing fully automated 3D registration approaches. This semi-automated registration system is based on the novel Scale Invariant Registration Method (SIRM). 
The SIRM provides the initial scaling between two heterogeneous point clouds and provides an adaptive mechanism for tuning the mean scale, based on the difference between two consecutive estimated point clouds’ alignment error values. Once aligned, the resulting homogeneous ground-aerial point cloud is further processed by a segmentation module. For this purpose, we have proposed a system for integrated multi-sensor based segmentation of 3D point clouds. This system followed a two-step sequence: ground-object segmentation and color-based region-growing segmentation. The experimental validation of the proposed 3D heterogeneous registration and integrated segmentation framework was performed on large-scale datasets representing unstructured outdoor environments, demonstrating the potential and benefits of the proposed semi-automated 3D registration system in real-world environments.

    @Article{balta20203Dregistration,
    author = {Balta, Haris and Velagic, Jasmin and Beglerovic, Halil and De Cubber, Geert and Siciliano, Bruno},
    journal = {Remote Sensing},
    title = {3D Registration and Integrated Segmentation Framework for Heterogeneous Unmanned Robotic Systems},
    year = {2020},
    month = may,
    number = {10},
    pages = {1608},
    volume = {12},
    abstract = {The paper proposes a novel framework for registering and segmenting 3D point clouds of large-scale natural terrain and complex environments coming from a multisensor heterogeneous robotics system, consisting of unmanned aerial and ground vehicles. This framework involves data acquisition and pre-processing, 3D heterogeneous registration and integrated multi-sensor based segmentation modules. The first module provides robust and accurate homogeneous registrations of 3D environmental models based on sensors’ measurements acquired from the ground (UGV) and aerial (UAV) robots. For 3D UGV registration, we proposed a novel local minima escape ICP (LME-ICP) method, which is based on the well known iterative closest point (ICP) algorithm extending it by the introduction of our local minima estimation and local minima escape mechanisms. It did not require any prior known pose estimation information acquired from sensing systems like odometry, global positioning system (GPS), or inertial measurement units (IMU). The 3D UAV registration has been performed using the Structure from Motion (SfM) approach. In order to improve and speed up the process of outliers removal for large-scale outdoor environments, we introduced the Fast Cluster Statistical Outlier Removal (FCSOR) method. This method was used to filter out the noise and to downsample the input data, which will spare computational and memory resources for further processing steps. Then, we co-registered a point cloud acquired from a laser ranger (UGV) and a point cloud generated from images (UAV) generated by the SfM method. The 3D heterogeneous module consists of a semi-automated 3D scan registration system, developed with the aim to overcome the shortcomings of the existing fully automated 3D registration approaches. This semi-automated registration system is based on the novel Scale Invariant Registration Method (SIRM). 
The SIRM provides the initial scaling between two heterogeneous point clouds and provides an adaptive mechanism for tuning the mean scale, based on the difference between two consecutive estimated point clouds’ alignment error values. Once aligned, the resulting homogeneous ground-aerial point cloud is further processed by a segmentation module. For this purpose, we have proposed a system for integrated multi-sensor based segmentation of 3D point clouds. This system followed a two-step sequence: ground-object segmentation and color-based region-growing segmentation. The experimental validation of the proposed 3D heterogeneous registration and integrated segmentation framework was performed on large-scale datasets representing unstructured outdoor environments, demonstrating the potential and benefits of the proposed semi-automated 3D registration system in real-world environments.},
    doi = {10.3390/rs12101608},
    project = {NRTP,ICARUS,TIRAMISU,MarSur},
    publisher = {MDPI},
    url = {https://www.mdpi.com/2072-4292/12/10/1608/pdf},
    unit= {meca-ras}
    }
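The registration pipeline described above includes a statistical outlier removal step (FCSOR) for filtering noise from large point clouds. The classic idea behind such filters can be sketched as follows; this is an illustrative, naive O(n²) version on toy 2D points, not the authors' fast clustered implementation:

```python
# Illustrative statistical outlier removal: drop points whose mean
# distance to their k nearest neighbours exceeds mu + alpha * sigma,
# where mu and sigma are computed over the whole cloud.
from math import dist
from statistics import mean, stdev

def remove_outliers(points, k=2, alpha=1.0):
    mean_knn = []
    for p in points:
        d = sorted(dist(p, q) for q in points if q is not p)
        mean_knn.append(mean(d[:k]))
    mu, sigma = mean(mean_knn), stdev(mean_knn)
    return [p for p, m in zip(points, mean_knn) if m <= mu + alpha * sigma]

cloud = [(0, 0), (0, 1), (1, 0), (1, 1), (50, 50)]  # one far outlier
print(remove_outliers(cloud))  # the (50, 50) point is filtered out
```

A production variant would use a spatial index (e.g. a k-d tree) for the neighbour search; the clustering in FCSOR serves a similar purpose of making this step tractable for large-scale outdoor clouds.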

  • D. Doroftei, G. De Cubber, and H. De Smet, “Reducing drone incidents by incorporating human factors in the drone and drone pilot accreditation process," in Advances in Human Factors in Robots, Drones and Unmanned Systems, San Diego, USA, 2020, pp. 71-77.
    [BibTeX] [Abstract] [Download PDF] [DOI]

    Considering the ever-increasing use of drones in a plentitude of application areas, the risk is that also an ever-increasing number of drone incidents would be observed. Research has shown that a large majority of all incidents with drones is due not to technological, but to human error. An advanced risk-reduction methodology, focusing on the human element, is thus required in order to allow for the safe use of drones. In this paper, we therefore introduce a novel concept to provide a qualitative and quantitative assessment of the performance of the drone operator. The proposed methodology is based on one hand upon the development of standardized test methodologies and on the other hand on human performance modeling of the drone operators in a highly realistic simulation environment.

    @InProceedings{doroftei2020alphonse,
    author = {Doroftei, Daniela and De Cubber, Geert and De Smet, Hans},
    booktitle = {Advances in Human Factors in Robots, Drones and Unmanned Systems},
    title = {Reducing drone incidents by incorporating human factors in the drone and drone pilot accreditation process},
    year = {2020},
    month = jul,
    editor = {Zallio, Matteo},
    publisher = {Springer International Publishing},
    pages = {71--77},
    isbn = {978-3-030-51758-8},
    organization = {AHFE},
    address = {San Diego, USA},
    abstract = {Considering the ever-increasing use of drones in a plentitude of application areas, the risk is that also an ever-increasing number of drone incidents would be observed. Research has shown that a large majority of all incidents with drones is due not to technological, but to human error. An advanced risk-reduction methodology, focusing on the human element, is thus required in order to allow for the safe use of drones. In this paper, we therefore introduce a novel concept to provide a qualitative and quantitative assessment of the performance of the drone operator. The proposed methodology is based on one hand upon the development of standardized test methodologies and on the other hand on human performance modeling of the drone operators in a highly realistic simulation environment.},
    doi = {10.1007/978-3-030-51758-8_10},
    unit= {meca-ras},
    project = {Alphonse},
    url = {http://mecatron.rma.ac.be/pub/2020/Reducing%20drone%20incidents%20by%20incorporating%20human%20factors%20in%20the%20drone%20and%20drone%20pilot%20accreditation%20process.pdf},
    }

  • A. Kakogawa, S. Ma, B. Ristic, C. Gilliam, A. K. Kamath, V. K. Tripathi, L. Behera, A. Ferrein, I. Scholl, T. Neumann, K. Krückel, S. Schiffer, A. Joukhadar, M. Alchehabi, and A. Jejeh, Unmanned Robotic Systems and Applications, M. Reyhanoglu and G. De Cubber, Eds., InTech, 2020.
    [BibTeX] [Abstract] [Download PDF] [DOI]

    This book presents recent studies of unmanned robotic systems and their applications. With its five chapters, the book brings together important contributions from renowned international researchers. Unmanned autonomous robots are ideal candidates for applications such as rescue missions, especially in areas that are difficult to access. Swarm robotics (multiple robots working together) is another exciting application of the unmanned robotics systems, for example, coordinated search by an interconnected group of moving robots for the purpose of finding a source of hazardous emissions. These robots can behave like individuals working in a group without a centralized control.

    @Book{de2020unmanned,
    author = {Atsushi Kakogawa and Shugen Ma and Branko Ristic and Christopher Gilliam and Archit Krishna Kamath and Vibhu Kumar Tripathi and Laxmidhar Behera and Alexander Ferrein and Ingrid Scholl and Tobias Neumann and Kai Krückel and Stefan Schiffer and Abdulkader Joukhadar and Mohammad Alchehabi and Adnan Jejeh},
    editor = {Reyhanoglu, Mahmut and De Cubber, Geert},
    publisher = {{InTech}},
    title = {Unmanned Robotic Systems and Applications},
    year = {2020},
    month = apr,
    abstract = {This book presents recent studies of unmanned robotic systems and their applications. With its five chapters, the book brings together important contributions from renowned international researchers. Unmanned autonomous robots are ideal candidates for applications such as rescue missions, especially in areas that are difficult to access. Swarm robotics (multiple robots working together) is another exciting application of the unmanned robotics systems, for example, coordinated search by an interconnected group of moving robots for the purpose of finding a source of hazardous emissions. These robots can behave like individuals working in a group without a centralized control.},
    doi = {10.5772/intechopen.88414},
    project = {NRTP,ICARUS,MarSur},
    url = {https://www.intechopen.com/books/unmanned-robotic-systems-and-applications},
    unit= {meca-ras}
    }

  • G. De Cubber, R. Lahouli, D. Doroftei, and R. Haelterman, “Distributed coverage optimization for a fleet of unmanned maritime systems for a maritime patrol and surveillance application," in ISMCR 2020: 23rd International Symposium on Measurement and Control in Robotics, Budapest, Hungary, 2020.
    [BibTeX] [Abstract] [Download PDF] [DOI]

    In order for unmanned maritime systems to provide added value for maritime law enforcement agencies, they have to be able to work together as a coordinated team for tasks such as area surveillance and patrolling. Therefore, this paper proposes a methodology that optimizes the coverage of a fleet of unmanned maritime systems, and thereby maximizes the chances of noticing threats. Unlike traditional approaches for maritime coverage optimization, which are also used for example in search and rescue operations when searching for victims at sea, this approach takes into consideration the limited seaworthiness of small unmanned systems, as compared to traditional large ships, by incorporating the danger level in the design of the optimizer.

    @InProceedings{decubber2020dco,
    author = {De Cubber, Geert and Lahouli, Rihab and Doroftei, Daniela and Haelterman, Rob},
    booktitle = {ISMCR 2020: 23rd International Symposium on Measurement and Control in Robotics},
    title = {Distributed coverage optimization for a fleet of unmanned maritime systems for a maritime patrol and surveillance application},
    year = {2020},
    month = oct,
    organization = {ISMCR},
    publisher = {{IEEE}},
    abstract = {In order for unmanned maritime systems to provide added value for maritime law enforcement agencies, they have to be able to work together as a coordinated team for tasks such as area surveillance and patrolling. Therefore, this paper proposes a methodology that optimizes the coverage of a fleet of unmanned maritime systems, and thereby maximizes the chances of noticing threats. Unlike traditional approaches for maritime coverage optimization, which are also used for example in search and rescue operations when searching for victims at sea, this approach takes into consideration the limited seaworthiness of small unmanned systems, as compared to traditional large ships, by incorporating the danger level in the design of the optimizer.},
    project = {SSAVE,MarSur},
    address = {Budapest, Hungary},
    doi = {10.1109/ISMCR51255.2020.9263740},
    url = {http://mecatron.rma.ac.be/pub/2020/conference_101719.pdf},
    unit= {meca-ras}
    }
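The danger-aware coverage idea in the abstract above can be illustrated with a minimal greedy assignment sketch. This is not the authors' optimizer; the cell and vessel fields, including the `max_danger` seaworthiness limit, are assumptions for illustration only:

```python
import math

def assign_cells(cells, vessels):
    """Greedy sketch: assign each grid cell (dict with x, y, value, danger) to
    the nearest vessel whose seaworthiness allows operating there."""
    assignment = {v["id"]: [] for v in vessels}
    for cell in sorted(cells, key=lambda c: -c["value"]):  # high-value cells first
        best, best_cost = None, math.inf
        for v in vessels:
            if cell["danger"] > v["max_danger"]:
                continue  # too dangerous for this (small) unmanned vessel
            cost = math.hypot(cell["x"] - v["x"], cell["y"] - v["y"])
            if cost < best_cost:
                best, best_cost = v, cost
        if best is not None:
            assignment[best["id"]].append((cell["x"], cell["y"]))
    return assignment
```

Cells whose danger level exceeds a small unmanned vessel's limit are left to more seaworthy platforms, which captures the core intuition of incorporating the danger level into the coverage design.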

2019

  • G. De Cubber, “Opportunities and threats posed by new technologies," in SciFi-IT, Ghent, Belgium, 2019.
    [BibTeX] [Abstract] [Download PDF] [DOI]

    The technological evolution is introducing new technologies into our everyday lives at a fast pace. As always, these new technologies can be applied for good causes and thereby give us the opportunity to do many interesting new things. Think for example about drones transporting blood samples between hospitals. However, as always, new technologies can also be applied for bad causes. Think for example about the same drones, but this time transporting bomb parcels instead of blood. In this paper, we focus on a number of novel technologies and discuss how security actors are currently doing their best to maximize the good use of these tools while minimizing the bad use. We will focus on research actions taken by the Belgian Royal Military Academy in the domains of: – Augmented reality, showcasing how this technology can be used to improve surveillance operations. – Unmanned Aerial Systems (drones), showcasing how the potential security threats posed by these systems can be mitigated by novel drone detection systems. – Unmanned Maritime Systems, showcasing how this technology can be used to increase safety at sea. – Unmanned Ground Systems, and more specifically autonomous cars, showcasing how to prevent potential cyber-attacks on these future transportation tools.

    @InProceedings{de2019opportunities,
    author = {De Cubber, Geert},
    booktitle = {SciFi-IT},
    title = {Opportunities and threats posed by new technologies},
    year = {2019},
    abstract = {The technological evolution is introducing new technologies into our everyday lives at a fast pace. As always, these new technologies can be applied for good causes and thereby give us the opportunity to do many interesting new things. Think for example about drones transporting blood samples between hospitals. However, as always, new technologies can also be applied for bad causes. Think for example about the same drones, but this time transporting bomb parcels instead of blood.
    In this paper, we focus on a number of novel technologies and discuss how security actors are currently doing their best to maximize the good use of these tools while minimizing the bad use. We will focus on research actions taken by the Belgian Royal Military Academy in the domains of:
    - Augmented reality, showcasing how this technology can be used to improve surveillance operations.
    - Unmanned Aerial Systems (drones), showcasing how the potential security threats posed by these systems can be mitigated by novel drone detection systems.
    - Unmanned Maritime Systems, showcasing how this technology can be used to increase safety at sea.
    - Unmanned Ground Systems, and more specifically autonomous cars, showcasing how to prevent potential cyber-attacks on these future transportation tools.},
    doi = {10.5281/zenodo.2628758},
    address = {Ghent, Belgium},
    project = {MarSur,SafeShore},
    url = {http://mecatron.rma.ac.be/pub/2019/Sci-Fi-It-2019-DeCubber%20(2).pdf},
    unit= {meca-ras}
    }

  • G. De Cubber, “Explosive drones: How to deal with this new threat?," in International workshop on Measurement, Prevention, Protection and Management of CBRN Risks (RISE), Les Bons Villers, Belgium, 2019.
    [BibTeX] [Abstract] [Download PDF] [DOI]

    As the commercial and recreational use of small unmanned aerial vehicles or drones is booming, militaries and criminals are also starting to use these systems more and more. Due to improvements in flight stability, autonomy and payload capacity, it becomes possible to equip these drones with explosive charges, making them threat agents against which traditional response mechanisms have few answers. In this paper, we discuss this new type of threat in detail, distinguishing between loitering munitions, as used by regular armies, and traditional drones equipped with explosive charges, as used in guerrilla warfare and by criminals. We then discuss which research actions are currently being undertaken to provide answers to each of these threats, which countermeasures are already available today and which ones will become available in the near future.

    @InProceedings{de2019explosive,
    author = {De Cubber, Geert},
    booktitle = {International workshop on Measurement, Prevention, Protection and Management of CBRN Risks (RISE)},
    title = {Explosive drones: How to deal with this new threat?},
    year = {2019},
    number = {9},
    address = {Les Bons Villers, Belgium},
    abstract = {As the commercial and recreational use of small unmanned aerial vehicles or drones is booming, militaries and criminals are also starting to use these systems more and more. Due to improvements in flight stability, autonomy and payload capacity, it becomes possible to equip these drones with explosive charges, making them threat agents against which traditional response mechanisms have few answers. In this paper, we discuss this new type of threat in detail, distinguishing between loitering munitions, as used by regular armies, and traditional drones equipped with explosive charges, as used in guerrilla warfare and by criminals. We then discuss which research actions are currently being undertaken to provide answers to each of these threats, which countermeasures are already available today and which ones will become available in the near future.},
    doi = {10.5281/ZENODO.2628752},
    project = {SafeShore},
    url = {http://mecatron.rma.ac.be/pub/2019/Explosive%20drones%20-%20How%20to%20deal%20with%20this%20new%20threat.pdf},
    unit= {meca-ras}
    }

  • I. Lahouli, Z. Chtourou, M. A. Ben Ayed, R. Haelterman, G. De Cubber, and R. Attia, “Pedestrian Detection and Trajectory Estimation in the Compressed Domain Using Thermal Images," in Computer Vision, Imaging and Computer Graphics Theory and Applications, Springer, 2019, p. 212–227.
    [BibTeX] [Abstract] [Download PDF] [DOI]

    For a few decades now, Unmanned Aerial Vehicles (UAVs) have been considered precious tools for various military applications such as automatic surveillance in outdoor environments. Nevertheless, the onboard implementation of image and video processing techniques poses many challenges like the high computational cost and the high bandwidth requirements, especially on low-performance processing platforms like small or medium UAVs. A fast and efficient framework for pedestrian detection and trajectory estimation for outdoor surveillance using thermal images is presented in this paper. First, the detection process is based on a conjunction between contrast enhancement techniques and saliency maps as a hotspot detector, on Discrete Chebychev Moments (DCM) as a global image content descriptor and on a linear Support Vector Machine (SVM) as a classifier. Second, raw H.264/AVC compressed video streams with limited computational overhead are exploited to estimate the trajectories of the detected pedestrians. In order to simulate suspicious events, six different scenarios were carried out and filmed using a thermal camera. The obtained results show the effectiveness and the low computational requirements of the proposed framework which make it suitable for real-time applications and onboard implementation.

    @InCollection{lahouli2019pedestrian,
    author = {Lahouli, Ichraf and Chtourou, Zied and Ben Ayed, Mohamed Ali and Haelterman, Rob and De Cubber, Geert and Attia, Rabah},
    booktitle = {Computer Vision, Imaging and Computer Graphics Theory and Applications},
    publisher = {Springer},
    title = {Pedestrian Detection and Trajectory Estimation in the Compressed Domain Using Thermal Images},
    year = {2019},
    pages = {212--227},
    abstract = {For a few decades now, Unmanned Aerial Vehicles (UAVs) have been considered precious tools for various military applications such as automatic surveillance in outdoor environments. Nevertheless, the onboard implementation of image and video processing techniques poses many challenges like the high computational cost and the high bandwidth requirements, especially on low-performance processing platforms like small or medium UAVs. A fast and efficient framework for pedestrian detection and trajectory estimation for outdoor surveillance using thermal images is presented in this paper. First, the detection process is based on a conjunction between contrast enhancement techniques and saliency maps as a hotspot detector, on Discrete Chebychev Moments (DCM) as a global image content descriptor and on a linear Support Vector Machine (SVM) as a classifier. Second, raw H.264/AVC compressed video streams with limited computational overhead are exploited to estimate the trajectories of the detected pedestrians. In order to simulate suspicious events, six different scenarios were carried out and filmed using a thermal camera. The obtained results show the effectiveness and the low computational requirements of the proposed framework which make it suitable for real-time applications and onboard implementation.},
    doi = {10.1007/978-3-030-26756-8_10},
    project = {SafeShore},
    url = {https://www.springerprofessional.de/en/pedestrian-detection-and-trajectory-estimation-in-the-compressed/16976092},
    unit= {meca-ras}
    }

  • I. Lahouli, R. Haelterman, Z. Chtourou, G. De Cubber, and R. Attia, “Pedestrian Tracking in the Compressed Domain Using Thermal Images," in Representations, Analysis and Recognition of Shape and Motion from Imaging Data, Communications in Computer and Information Science, Springer International Publishing, 2019, vol. 842, p. 35–44.
    [BibTeX] [Abstract] [Download PDF] [DOI]

    The video surveillance of sensitive facilities or borders poses many challenges like the high bandwidth requirements and the high computational cost. In this paper, we propose a framework for detecting and tracking pedestrians in the compressed domain using thermal images. Firstly, the detection process uses a conjunction between saliency maps and contrast enhancement techniques followed by a global image content descriptor based on Discrete Chebychev Moments (DCM) and a linear Support Vector Machine (SVM) as a classifier. Secondly, the tracking process exploits raw H.264 compressed video streams with limited computational overhead. In addition to two well-known public datasets, we have generated our own dataset by carrying out six different scenarios of suspicious events using a thermal camera. The obtained results show the effectiveness and the low computational requirements of the proposed framework which make it suitable for real-time applications and onboard implementation.

    @InCollection{lahouli2019pedestriantracking,
    author = {Lahouli, Ichraf and Haelterman, Rob and Chtourou, Zied and De Cubber, Geert and Attia, Rabah},
    booktitle = {Representations, Analysis and Recognition of Shape and Motion from Imaging Data, Communications in Computer and Information Science},
    publisher = {Springer International Publishing},
    title = {Pedestrian Tracking in the Compressed Domain Using Thermal Images},
    year = {2019},
    pages = {35--44},
    volume = {842},
    abstract = {The video surveillance of sensitive facilities or borders poses many challenges like the high bandwidth requirements and the high computational cost. In this paper, we propose a framework for detecting and tracking pedestrians in the compressed domain using thermal images. Firstly, the detection process uses a conjunction between saliency maps and contrast enhancement techniques followed by a global image content descriptor based on Discrete Chebychev Moments (DCM) and a linear Support Vector Machine (SVM) as a classifier. Secondly, the tracking process exploits raw H.264 compressed video streams with limited computational overhead. In addition to two well-known public datasets, we have generated our own dataset by carrying out six different scenarios of suspicious events using a thermal camera. The obtained results show the effectiveness and the low computational requirements of the proposed framework which make it suitable for real-time applications and onboard implementation.},
    doi = {10.1007/978-3-030-19816-9_3},
    project = {SafeShore},
    url = {https://app.dimensions.ai/details/publication/pub.1113953804},
    unit= {meca-ras}
    }

  • G. De Cubber and R. Haelterman, “Optimized distributed scheduling for a fleet of heterogeneous unmanned maritime systems," in 2019 IEEE International Symposium on Measurement and Control in Robotics (ISMCR), Houston, USA, 2019.
    [BibTeX] [Abstract] [Download PDF] [DOI]

    Due to the increase in embedded computing power, modern robotic systems are capable of running a wide range of perception and control algorithms simultaneously. This raises the question of where to optimally allocate each robotic cognition process. In this paper, we present a concept for a novel load distribution approach. The proposed methodology adopts a decentralised approach towards the allocation of perception and control processes to different agents (unmanned vessels, fog or cloud services) based on an estimation of the communication parameters (bandwidth, latency, cost), the agent capabilities in terms of processing hardware (not only focusing on the CPU, but also taking into consideration the GPU, disk & memory speed and size) and the requirements in terms of timely delivery of quality output data. The presented approach is extensively validated in a simulation environment and shows promising properties.

    @InProceedings{de2019optimized,
    author = {De Cubber, Geert and Haelterman, Rob},
    booktitle = {2019 {IEEE} International Symposium on Measurement and Control in Robotics ({ISMCR})},
    title = {Optimized distributed scheduling for a fleet of heterogeneous unmanned maritime systems},
    year = {2019},
    month = sep,
    number = {23},
    publisher = {{IEEE}},
    address = {Houston, USA},
    abstract = {Due to the increase in embedded computing power, modern robotic systems are capable of running a wide range of perception and control algorithms simultaneously. This raises the question of where to optimally allocate each robotic cognition process. In this paper, we present a concept for a novel load distribution approach. The proposed methodology adopts a decentralised approach towards the allocation of perception and control processes to different agents (unmanned vessels, fog or cloud services) based on an estimation of the communication parameters (bandwidth, latency, cost), the agent capabilities in terms of processing hardware (not only focusing on the CPU, but also taking into consideration the GPU, disk & memory speed and size) and the requirements in terms of timely delivery of quality output data. The presented approach is extensively validated in a simulation environment and shows promising properties.},
    doi = {10.1109/ismcr47492.2019.8955727},
    project = {MarSur},
    url = {http://mecatron.rma.ac.be/pub/2019/ICMCR-DeCubber.pdf},
    unit= {meca-ras}
    }
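The allocation idea in the abstract above (matching each perception or control process to an agent based on hardware capabilities and communication parameters) can be illustrated with a minimal greedy sketch. The field names and the selection rule are assumptions for illustration, not the paper's optimizer:

```python
def comm_cost(process, link):
    """Relative communication cost: transfer load versus available bandwidth,
    plus latency weighed against the process deadline (illustrative formula)."""
    return process["data_rate"] / link["bandwidth"] + link["latency"] / process["deadline"]

def best_agent(process, agents, links):
    """Among agents whose CPU/GPU/memory meet the process requirements, pick
    the one reachable at the lowest communication cost."""
    feasible = [a for a in agents
                if a["cpu"] >= process["cpu"]
                and a["gpu"] >= process["gpu"]
                and a["mem"] >= process["mem"]]
    return min(feasible, key=lambda a: comm_cost(process, links[a["id"]]))["id"]
```

Under this rule a lightweight process stays on the vessel (cheap local link), while a GPU-heavy process that the vessel cannot host migrates to the cloud despite the more expensive link.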

  • H. Balta, J. Velagic, G. De Cubber, and B. Siciliano, “Semi-Automated 3D Registration for Heterogeneous Unmanned Robots Based on Scale Invariant Method," in 2019 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR), Wurzburg, Germany, 2019.
    [BibTeX] [Abstract] [Download PDF] [DOI]

    This paper addresses the problem of 3D registration of outdoor environments combining heterogeneous datasets acquired from unmanned aerial (UAV) and ground (UGV) vehicles. In order to solve this problem, we introduced a novel Scale Invariant Registration Method (SIRM) for semi-automated registration of 3D point clouds. The method is capable of coping with an arbitrary scale difference between the point clouds, without any information about their initial position and orientation. Furthermore, the SIRM does not require having a good initial overlap between two heterogeneous datasets. Our method strikes an elegant balance between the existing fully automated 3D registration systems (which often fail in the case of heterogeneous datasets and harsh outdoor environments) and fully manual registration approaches (which are labour-intensive). The experimental validation of the proposed 3D heterogeneous registration system was performed on large-scale datasets representing unstructured and harsh outdoor environments, demonstrating the potential and benefits of the proposed 3D registration system in real-world environments.

    @InProceedings{balta2019semi,
    author = {Balta, Haris and Velagic, Jasmin and De Cubber, Geert and Siciliano, Bruno},
    booktitle = {2019 {IEEE} International Symposium on Safety, Security, and Rescue Robotics ({SSRR})},
    title = {Semi-Automated {3D} Registration for Heterogeneous Unmanned Robots Based on Scale Invariant Method},
    year = {2019},
    month = sep,
    publisher = {{IEEE}},
    volume = {1},
    address = {Wurzburg, Germany},
    abstract = {This paper addresses the problem of 3D registration of outdoor environments combining heterogeneous datasets acquired from unmanned aerial (UAV) and ground (UGV) vehicles. In order to solve this problem, we introduced a novel Scale Invariant Registration Method (SIRM) for semi-automated registration of 3D point clouds. The method is capable of coping with an arbitrary scale difference between the point clouds, without any information about their initial position and orientation. Furthermore, the SIRM does not require having a good initial overlap between two heterogeneous datasets. Our method strikes an elegant balance between the existing fully automated 3D registration systems (which often fail in the case of heterogeneous datasets and harsh outdoor environments) and fully manual registration approaches (which are labour-intensive). The experimental validation of the proposed 3D heterogeneous registration system was performed on large-scale datasets representing unstructured and harsh outdoor environments, demonstrating the potential and benefits of the proposed 3D registration system in real-world environments.},
    doi = {10.1109/ssrr.2019.8848951},
    project = {NRTP},
    url = {https://ieeexplore.ieee.org/document/8848951},
    unit= {meca-ras}
    }

  • D. Doroftei and G. De Cubber, “Using a qualitative and quantitative validation methodology to evaluate a drone detection system," ACTA IMEKO, vol. 8, iss. 4, p. 20–27, 2019.
    [BibTeX] [Abstract] [Download PDF] [DOI]

    Now that the use of drones is becoming more common, the need to regulate the access to airspace for these systems is becoming more pressing. A necessary tool in order to do this is a means of detecting drones. Numerous parties have started the development of such drone detection systems. A major problem is that evaluating the performance of drone detection systems is a difficult operation that requires the careful consideration of all technical and non-technical aspects of the system under test. Indeed, weather conditions and small variations in the appearance of the targets can make a huge difference in the performance of the systems. In order to provide a fair evaluation, it is therefore paramount that a validation procedure that finds a compromise between the requirements of end users (who want tests to be performed in operational conditions) and platform developers (who want statistically relevant tests) is followed. Therefore, we propose in this article a qualitative and quantitative validation methodology for drone detection systems. The proposed validation methodology seeks to find this compromise between operationally relevant benchmarking (by providing qualitative benchmarking under varying environmental conditions) and statistically relevant evaluation (by providing quantitative score sheets under strictly described conditions).

    @Article{doroftei2019using,
    author = {Doroftei, Daniela and De Cubber, Geert},
    journal = {{ACTA} {IMEKO}},
    title = {Using a qualitative and quantitative validation methodology to evaluate a drone detection system},
    year = {2019},
    month = dec,
    number = {4},
    pages = {20--27},
    volume = {8},
    abstract = {Now that the use of drones is becoming more common, the need to regulate the access to airspace for these systems is becoming more pressing. A necessary tool in order to do this is a means of detecting drones. Numerous parties have started the development of such drone detection systems. A major problem is that evaluating the performance of drone detection systems is a difficult operation that requires the careful consideration of all technical and non-technical aspects of the system under test. Indeed, weather conditions and small variations in the appearance of the targets can make a huge difference in the performance of the systems. In order to provide a fair evaluation, it is therefore paramount that a validation procedure that finds a compromise between the requirements of end users (who want tests to be performed in operational conditions) and platform developers (who want statistically relevant tests) is followed. Therefore, we propose in this article a qualitative and quantitative validation methodology for drone detection systems. The proposed validation methodology seeks to find this compromise between operationally relevant benchmarking (by providing qualitative benchmarking under varying environmental conditions) and statistically relevant evaluation (by providing quantitative score sheets under strictly described conditions).},
    doi = {10.21014/acta_imeko.v8i4.682},
    pdf = {https://acta.imeko.org/index.php/acta-imeko/article/view/IMEKO-ACTA-08%20%282019%29-04-05/pdf},
    project = {SafeShore},
    publisher = {{IMEKO} International Measurement Confederation},
    url = {https://acta.imeko.org/index.php/acta-imeko/article/view/IMEKO-ACTA-08%20%282019%29-04-05/pdf},
    unit= {meca-ras}
    }
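The quantitative score sheets mentioned in this abstract can be illustrated with a small aggregator that computes per-condition detection probability and false-alarm counts. The event layout (condition, ground truth, alarm) is an assumption for illustration, not the article's actual score-sheet format:

```python
def score_sheet(events):
    """Build a per-condition score sheet from logged trials, where each event
    is a (condition, drone_present, alarm_raised) tuple."""
    sheet = {}
    for condition, drone_present, alarm_raised in events:
        row = sheet.setdefault(condition,
                               {"detections": 0, "targets": 0, "false_alarms": 0})
        if drone_present:
            row["targets"] += 1
            row["detections"] += int(alarm_raised)
        elif alarm_raised:
            row["false_alarms"] += 1  # alarm with no drone present
    for row in sheet.values():
        # detection probability per environmental condition (None if untested)
        row["pd"] = row["detections"] / row["targets"] if row["targets"] else None
    return sheet
```

Grouping trials by environmental condition is what lets the same numbers serve both the qualitative (per-condition) and quantitative (statistical) sides of the evaluation.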

  • N. Nauwynck, H. Balta, G. De Cubber, and H. Sahli, “A proof of concept of the in-flight launch of unmanned aerial vehicles in a search and rescue scenario," ACTA IMEKO, vol. 8, iss. 4, p. 13–19, 2019.
    [BibTeX] [Abstract] [Download PDF] [DOI]

    This article considers the development of a system to enable the in-flight launch of one aerial system by another. The article discusses how an optimal release mechanism was developed taking into account the aerodynamics of one specific mothership and child Unmanned Aerial Vehicle (UAV). Furthermore, it discusses the PID-based control concept that was introduced in order to autonomously stabilise the child UAV after being released from the mothership UAV. Finally, the article demonstrates how the concept of a mothership and child UAV combination could be taken advantage of in the context of a search and rescue operation.

    @Article{nauwynck2019proof,
    author = {Nauwynck, Niels and Balta, Haris and De Cubber, Geert and Sahli, Hichem},
    journal = {{ACTA} {IMEKO}},
    title = {A proof of concept of the in-flight launch of unmanned aerial vehicles in a search and rescue scenario},
    year = {2019},
    month = dec,
    number = {4},
    pages = {13--19},
    volume = {8},
    abstract = {This article considers the development of a system to enable the in-flight launch of one aerial system by another. The article discusses how an optimal release mechanism was developed taking into account the aerodynamics of one specific mothership and child Unmanned Aerial Vehicle (UAV). Furthermore, it discusses the PID-based control concept that was introduced in order to autonomously stabilise the child UAV after being released from the mothership UAV. Finally, the article demonstrates how the concept of a mothership and child UAV combination could be taken advantage of in the context of a search and rescue operation.},
    doi = {10.21014/acta_imeko.v8i4.681},
    publisher = {{IMEKO} International Measurement Confederation},
    project = {ICARUS, NRTP},
    url = {https://acta.imeko.org/index.php/acta-imeko/article/view/IMEKO-ACTA-08%20(2019)-04-04},
    unit= {meca-ras}
    }
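The PID-based control concept mentioned in this abstract can be illustrated with a textbook PID loop. The gains and the error signal are illustrative, not the values or axes used on the actual child UAV:

```python
class PID:
    """Minimal discrete PID controller: output = kp*e + ki*integral(e) + kd*de/dt."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measurement, dt):
        error = setpoint - measurement
        self.integral += error * dt
        # no derivative on the very first sample (no previous error yet)
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv
```

After release, a loop like this would drive attitude and vertical-speed errors back to zero; in practice one such controller runs per controlled axis.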

  • A. Coluccia, A. Fascista, A. Schumann, L. Sommer, M. Ghenescu, T. Piatrik, G. De Cubber, M. Nalamati, A. Kapoor, M. Saqib, N. Sharma, M. Blumenstein, V. Magoulianitis, D. Ataloglou, A. Dimou, D. Zarpalas, P. Daras, C. Craye, S. Ardjoune, D. De la Iglesia, M. Mández, R. Dosil, and I. González, “Drone-vs-Bird Detection Challenge at IEEE AVSS2019," in 2019 16th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), 2019, pp. 1-7.
    [BibTeX] [Abstract] [Download PDF] [DOI]

    This paper presents the second edition of the “drone-vs-bird” detection challenge, launched within the activities of the 16th IEEE International Conference on Advanced Video and Signal-based Surveillance (AVSS). The challenge’s goal is to detect one or more drones appearing at some point in video sequences where birds may also be present, together with motion in background or foreground. Submitted algorithms should raise an alarm and provide a position estimate only when a drone is present, while not issuing alarms on birds, nor being confused by the rest of the scene. This paper reports on the challenge results on the 2019 dataset, which extends the first edition dataset provided by the SafeShore project with additional footage under different conditions.

    @INPROCEEDINGS{8909876,
    author={A. {Coluccia} and A. {Fascista} and A. {Schumann} and L. {Sommer} and M. {Ghenescu} and T. {Piatrik} and G. {De Cubber} and M. {Nalamati} and A. {Kapoor} and M. {Saqib} and N. {Sharma} and M. {Blumenstein} and V. {Magoulianitis} and D. {Ataloglou} and A. {Dimou} and D. {Zarpalas} and P. {Daras} and C. {Craye} and S. {Ardjoune} and D. {De la Iglesia} and M. {Mández} and R. {Dosil} and I. {González}},
    booktitle={2019 16th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS)},
    title={Drone-vs-Bird Detection Challenge at IEEE AVSS2019},
    year={2019},
    volume={},
    number={},
    pages={1-7},
    project = {SafeShore,MarSur},
    doi = {10.1109/AVSS.2019.8909876},
    abstract = {This paper presents the second edition of the “drone-vs-bird” detection challenge, launched within the activities of the 16th IEEE International Conference on Advanced Video and Signal-based Surveillance (AVSS). The challenge's goal is to detect one or more drones appearing at some point in video sequences where birds may also be present, together with motion in background or foreground. Submitted algorithms should raise an alarm and provide a position estimate only when a drone is present, while not issuing alarms on birds, nor being confused by the rest of the scene. This paper reports on the challenge results on the 2019 dataset, which extends the first edition dataset provided by the SafeShore project with additional footage under different conditions.},
    url = {https://ieeexplore.ieee.org/abstract/document/8909876},
    unit= {meca-ras}
    }

2018

  • Y. Baudoin, D. Doroftei, G. de Cubber, J. Habumuremyi, H. Balta, and I. Doroftei, “Unmanned Ground and Aerial Robots Supporting Mine Action Activities," Journal of Physics: Conference Series, vol. 1065, iss. 17, p. 172009, 2018.
    [BibTeX] [Abstract] [Download PDF] [DOI]

    During humanitarian demining actions, teleoperation of sensors or multi-sensor heads can enhance the detection process by allowing more precise scanning, which is useful for the optimization of the signal processing algorithms. This chapter summarizes the technologies and experiences developed over 16 years through national and/or European-funded projects, illustrated by some contributions of our own laboratory, located at the Royal Military Academy of Brussels, focusing on the detection of unexploded devices and the implementation of mobile robotic systems on minefields.

    @Article{baudoin2018unmanned,
    author = {Baudoin, Yvan and Doroftei, Daniela and de Cubber, Geert and Habumuremyi, Jean-Claude and Balta, Haris and Doroftei, Ioan},
    title = {Unmanned Ground and Aerial Robots Supporting Mine Action Activities},
    year = {2018},
    month = aug,
    number = {17},
    organization = {IOP Publishing},
    pages = {172009},
    publisher = {{IOP} Publishing},
    volume = {1065},
    abstract = {During humanitarian demining actions, teleoperation of sensors or multi-sensor heads can enhance the detection process by allowing more precise scanning, which is useful for the optimization of the signal processing algorithms. This chapter summarizes the technologies and experiences developed over 16 years through national and/or European-funded projects, illustrated by some contributions of our own laboratory, located at the Royal Military Academy of Brussels, focusing on the detection of unexploded devices and the implementation of mobile robotic systems on minefields.},
    doi = {10.1088/1742-6596/1065/17/172009},
    journal = {Journal of Physics: Conference Series},
    project = {TIRAMISU},
    url = {https://iopscience.iop.org/article/10.1088/1742-6596/1065/17/172009/pdf},
    unit= {meca-ras}
    }

  • I. Lahouli, R. Haelterman, Z. Chtourou, G. De Cubber, and R. Attia, “Pedestrian Detection and Tracking in Thermal Images from Aerial MPEG videos," in International Conference on Computer Vision Theory and Applications, Funchal, Portugal, 2018, p. 487–495.
    [BibTeX] [Abstract] [Download PDF] [DOI]

    Video surveillance for security and intelligence purposes has been a precious tool as long as the technology has been available, but is computationally heavy. In this paper, we present a fast and efficient framework for pedestrian detection and tracking using thermal images. It is designed for automatic surveillance applications in an outdoor environment, like preventing border intrusions or attacks on sensitive facilities, using image and video processing techniques implemented on board Unmanned Aerial Vehicles (UAVs). The proposed framework exploits raw H.264 compressed video streams with limited computational overhead. Our work is driven by the fact that Motion Vectors (MV) are an integral part of any video compression technique, by the day and night capabilities of thermal sensors and by the distinctive thermal signature of humans. Six different scenarios were carried out and filmed using a thermal camera in order to simulate suspicious events. The obtained results show the effectiveness of the proposed framework and its low computational requirements, which make it adequate for on-board processing and real-time applications.

    @InProceedings{lahouli2018pedestrian,
    author = {Lahouli, Ichraf and Haelterman, Robby and Chtourou, Zied and De Cubber, Geert and Attia, Rabah},
    booktitle = {International Conference on Computer Vision Theory and Applications},
    title = {Pedestrian Detection and Tracking in Thermal Images from Aerial {MPEG} videos},
    year = {2018},
    pages = {487--495},
    publisher = {{SCITEPRESS} - Science and Technology Publications},
    volume = {1},
    abstract = {Video surveillance for security and intelligence purposes has been a precious tool as long as the technology has been available, but is computationally heavy. In this paper, we present a fast and efficient framework for pedestrian detection and tracking using thermal images. It is designed for automatic surveillance applications in an outdoor environment, like preventing border intrusions or attacks on sensitive facilities, using image and video processing techniques implemented on-board Unmanned Aerial Vehicles (UAVs). The proposed framework exploits raw H.264 compressed video streams with limited computational overhead. Our work is driven by the fact that Motion Vectors (MV) are an integral part of any video compression technique, by the day and night capabilities of thermal sensors and by the distinctive thermal signature of humans. Six different scenarios were carried out and filmed using a thermal camera in order to simulate suspicious events. The obtained results show the effectiveness of the proposed framework and its low computational requirements, which make it adequate for on-board processing and real-time applications.},
    doi = {10.5220/0006723704870495},
    project = {SafeShore},
    address = {Funchal, Portugal},
    url = {https://www.scitepress.org/Papers/2018/67237/67237.pdf},
    unit= {meca-ras}
    }
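
    As a rough illustration of the compressed-domain idea above (not the authors' implementation): once a decoder has exposed the per-macroblock motion vector field of an H.264 stream, candidate moving regions can be found by thresholding motion magnitudes and grouping connected macroblocks. A minimal sketch; the function name, thresholds and use of `scipy.ndimage` are illustrative assumptions.

    ```python
    import numpy as np
    from scipy import ndimage

    def moving_regions(mv_x, mv_y, mag_thresh=2.0, min_blocks=4):
        """Group macroblocks with significant motion into candidate regions.

        mv_x, mv_y : 2-D arrays of per-macroblock motion vector components
        (assumed already extracted from the H.264 stream by a decoder).
        Returns a list of (row_slice, col_slice) bounding boxes.
        """
        mag = np.hypot(mv_x, mv_y)             # motion magnitude per macroblock
        mask = mag > mag_thresh                # keep blocks with significant motion
        labels, _ = ndimage.label(mask)        # 4-connectivity grouping (scipy default)
        boxes = []
        for sl in ndimage.find_objects(labels):
            if np.count_nonzero(labels[sl]) >= min_blocks:  # drop tiny noise clusters
                boxes.append(sl)
        return boxes
    ```

    Working on the macroblock grid rather than on decoded pixels is what keeps the computational overhead low enough for on-board processing.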

  • I. Lahouli, R. Haelterman, J. Degroote, M. Shimoni, G. De Cubber, and R. Attia, “Accelerating existing non-blind image deblurring techniques through a strap-on limited-memory switched Broyden method," IEICE TRANSACTIONS on Information and Systems, vol. 1, iss. 1, p. 8, 2018.
    [BibTeX] [Abstract] [Download PDF] [DOI]

    Video surveillance from airborne platforms can suffer from many sources of blur, like vibration, low-end optics, uneven lighting conditions, etc. Many different algorithms have been developed in the past that aim to recover the deblurred image but often incur substantial CPU-time, which is not always available on-board. This paper shows how a strap-on quasi-Newton method can accelerate the convergence of existing iterative methods with little extra overhead while keeping the performance of the original algorithm, thus paving the way for (near) real-time applications using on-board processing.

    @Article{lahouli2018accelerating,
    author = {Lahouli, Ichraf and Haelterman, Robby and Degroote, Joris and Shimoni, Michal and De Cubber, Geert and Attia, Rabah},
    journal = {IEICE TRANSACTIONS on Information and Systems},
    title = {Accelerating existing non-blind image deblurring techniques through a strap-on limited-memory switched {Broyden} method},
    year = {2018},
    number = {1},
    pages = {8},
    volume = {1},
    abstract = {Video surveillance from airborne platforms can suffer from many sources of blur, like vibration, low-end optics, uneven lighting conditions, etc. Many different algorithms have been developed in the past that aim to recover the deblurred image but often incur substantial CPU-time, which is not always available on-board. This paper shows how a strap-on quasi-Newton method can accelerate the convergence of existing iterative methods with little extra overhead while keeping the performance of the original algorithm, thus paving the way for (near) real-time applications using on-board processing.},
    doi = {10.1587/transinf.2017mvp0022},
    file = {:lahouli2018accelerating - Accelerating Existing Non Blind Image Deblurring Techniques through a Strap on Limited Memory Switched Broyden Method.PDF:PDF},
    publisher = {The Institute of Electronics, Information and Communication Engineers},
    project = {SafeShore},
    url = {https://www.jstage.jst.go.jp/article/transinf/E101.D/5/E101.D_2017MVP0022/_pdf/-char/en},
    unit= {meca-ras}
    }
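
    To illustrate the quasi-Newton "strap-on" idea from the abstract above: a fixed-point iteration x ← g(x) (here standing in for an iterative deblurring scheme) can be accelerated by applying Broyden's method to the residual f(x) = g(x) − x. This is a plain full-matrix Broyden sketch, not the paper's limited-memory switched variant; all names are illustrative.

    ```python
    import numpy as np

    def broyden_accelerate(g, x0, tol=1e-10, max_iter=50):
        """Accelerate the fixed-point iteration x <- g(x) by solving
        f(x) = g(x) - x = 0 with Broyden's (good) method."""
        x = np.asarray(x0, dtype=float)
        f = g(x) - x
        J = -np.eye(x.size)              # initial Jacobian guess (g' ~ 0 => f' ~ -I)
        for _ in range(max_iter):
            dx = np.linalg.solve(J, -f)  # quasi-Newton step
            x_new = x + dx
            f_new = g(x_new) - x_new
            if np.linalg.norm(f_new) < tol:
                return x_new
            df = f_new - f
            # Broyden rank-1 update: J <- J + (df - J dx) dx^T / (dx^T dx)
            J += np.outer(df - J @ dx, dx) / (dx @ dx)
            x, f = x_new, f_new
        return x
    ```

    The wrapped iteration `g` is treated as a black box, which is what makes the approach a "strap-on" for existing deblurring algorithms: only the residuals of successive iterates are needed.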

  • I. Lahouli, E. Karakasis, R. Haelterman, Z. Chtourou, G. De Cubber, A. Gasteratos, and R. Attia, “Hot spot method for pedestrian detection using saliency maps, discrete Chebyshev moments and support vector machine," IET Image Processing, vol. 12, iss. 7, p. 1284–1291, 2018.
    [BibTeX] [Abstract] [Download PDF] [DOI]

    The increasing risks of border intrusions or attacks on sensitive facilities and the growing availability of surveillance cameras lead to extensive research efforts for robust detection of pedestrians using images. However, the surveillance of borders or sensitive facilities poses many challenges including the need to set up many cameras to cover the whole area of interest, the high bandwidth requirements for data streaming and the high-processing requirements. Driven by day and night capabilities of the thermal sensors and the distinguished thermal signature of humans, the authors propose a novel and robust method for the detection of pedestrians using thermal images. The method is composed of three steps: a detection which is based on a saliency map in conjunction with a contrast-enhancement technique, a shape description based on discrete Chebyshev moments and a classification step using a support vector machine classifier. The performance of the method is tested using two different thermal datasets and is compared with the conventional maximally stable extremal regions detector. The obtained results prove the robustness and the superiority of the proposed framework in terms of true and false positives rates and computational costs which make it suitable for low-performance processing platforms and real-time applications.

    @Article{lahouli2018hot,
    author = {Lahouli, Ichraf and Karakasis, Evangelos and Haelterman, Robby and Chtourou, Zied and De Cubber, Geert and Gasteratos, Antonios and Attia, Rabah},
    journal = {IET Image Processing},
    title = {Hot spot method for pedestrian detection using saliency maps, discrete {Chebyshev} moments and support vector machine},
    year = {2018},
    number = {7},
    pages = {1284--1291},
    volume = {12},
    abstract = {The increasing risks of border intrusions or attacks on sensitive facilities and the growing availability of surveillance cameras lead to extensive research efforts for robust detection of pedestrians using images. However, the surveillance of borders or sensitive facilities poses many challenges including the need to set up many cameras to cover the whole area of interest, the high bandwidth requirements for data streaming and the high-processing requirements. Driven by day and night capabilities of the thermal sensors and the distinguished thermal signature of humans, the authors propose a novel and robust method for the detection of pedestrians using thermal images. The method is composed of three steps: a detection which is based on a saliency map in conjunction with a contrast-enhancement technique, a shape description based on discrete Chebyshev moments and a classification step using a support vector machine classifier. The performance of the method is tested using two different thermal datasets and is compared with the conventional maximally stable extremal regions detector. The obtained results prove the robustness and the superiority of the proposed framework in terms of true and false positives rates and computational costs which make it suitable for low-performance processing platforms and real-time applications.},
    doi = {10.1049/iet-ipr.2017.0221},
    publisher = {IET Digital Library},
    project = {SafeShore},
    url = {https://ieeexplore.ieee.org/document/8387035},
    unit= {meca-ras}
    }
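
    The first stage of the three-step pipeline described above (hot-spot extraction before the Chebyshev-moment description and SVM classification) can be sketched as follows. This is a minimal illustration assuming a simple contrast stretch plus percentile thresholding as the saliency step; parameter values and function names are hypothetical.

    ```python
    import numpy as np
    from scipy import ndimage

    def hot_spot_candidates(thermal, percentile=99.0, min_area=20):
        """Extract bright (hot) blobs from a thermal frame as candidate
        detections. The paper's later stages (discrete Chebyshev moment
        descriptors and an SVM classifier) would consume these regions."""
        img = thermal.astype(float)
        lo, hi = img.min(), img.max()
        img = (img - lo) / (hi - lo + 1e-12)          # simple contrast enhancement
        mask = img > np.percentile(img, percentile)   # keep only the hottest pixels
        labels, _ = ndimage.label(mask)
        return [sl for sl in ndimage.find_objects(labels)
                if np.count_nonzero(labels[sl]) >= min_area]
    ```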

  • I. Lahouli, R. Haelterman, G. De Cubber, Z. Chtourou, and R. Attia, “A fast and robust approach for human detection in thermal imagery for surveillance using UAVs," in 15th Multi-Conference on Systems, Signals and Devices, Hammamet, Tunisia, 2018.
    [BibTeX] [Abstract] [Download PDF] [DOI]

    The use of Unmanned Aerial Vehicles (UAVs) has spread in various fields such as surveillance and search and rescue. This leads to many research efforts focusing on the detection of people using aerial images. However, these platforms have limited power and bandwidth resources, which causes many restrictions and challenges. Thermal sensors offer the possibility to work day and night and to detect human bodies thanks to their distinctive thermal signature. In this paper, we propose a fast and efficient method for the detection of humans in outdoor scenes using thermal images taken from aerial platforms. We start by extracting the bright blobs based on a conjunction between a saliency map and a contrast enhancement technique. Then, we use the Discrete Chebyshev Moments as a shape descriptor and finally, we classify the blobs into humans and non-humans. The proposed framework is first tested using a well-known thermal database that covers a wide range of lighting and weather conditions, and then compared to the well-known Maximally Stable Extremal Regions (MSER) blob detector. The results highlight the effectiveness and even the superiority of the proposed method in terms of true positives, false alarms and processing time.

    @InProceedings{lahouli2018fast,
    author = {Lahouli, Ichraf and Haelterman, Robby and De Cubber, Geert and Chtourou, Zied and Attia, Rabah},
    booktitle = {15th Multi-Conference on Systems, Signals and Devices},
    title = {A fast and robust approach for human detection in thermal imagery for surveillance using {UAVs}},
    year = {2018},
    volume = {1},
    abstract = {The use of Unmanned Aerial Vehicles (UAVs) has spread in various fields such as surveillance and search and rescue. This leads to many research efforts focusing on the detection of people using aerial images. However, these platforms have limited power and bandwidth resources, which causes many restrictions and challenges. Thermal sensors offer the possibility to work day and night and to detect human bodies thanks to their distinctive thermal signature. In this paper, we propose a fast and efficient method for the detection of humans in outdoor scenes using thermal images taken from aerial platforms. We start by extracting the bright blobs based on a conjunction between a saliency map and a contrast enhancement technique. Then, we use the Discrete Chebyshev Moments as a shape descriptor and finally, we classify the blobs into humans and non-humans. The proposed framework is first tested using a well-known thermal database that covers a wide range of lighting and weather conditions, and then compared to the well-known Maximally Stable Extremal Regions (MSER) blob detector. The results highlight the effectiveness and even the superiority of the proposed method in terms of true positives, false alarms and processing time.},
    doi = {10.1109/ssd.2018.8570637},
    file = {:lahouli2018fast - A Fast and Robust Approach for Human Detection in Thermal Imagery for Surveillance Using UAVs.PDF:PDF},
    project = {SafeShore},
    address = {Hammamet, Tunisia},
    url = {https://ieeexplore.ieee.org/document/8570637},
    unit= {meca-ras}
    }

  • N. Nauwynck, H. Balta, G. De Cubber, and H. Sahli, “In-flight launch of unmanned aerial vehicles," in International Symposium on Measurement and Control in Robotics ISMCR2018, Mons, Belgium, 2018.
    [BibTeX] [Abstract] [Download PDF] [DOI]

    This paper considers the development of a system to enable the in-flight launch of one aerial system by another. The paper will discuss how an optimal release mechanism was developed, taking into account the aerodynamics of one specific mother and child UAV. Furthermore, it will discuss the PID-based control concept that was introduced in order to autonomously stabilize the child UAV after being released from the mothership UAV. Finally, the paper will show how the concept of a mothership UAV + child UAV combination could be put to good use in the context of a search and rescue operation.

    @InProceedings{nauwynck2018flight,
    author = {Nauwynck, Niels and Balta, Haris and De Cubber, Geert and Sahli, Hichem},
    booktitle = {International Symposium on Measurement and Control in Robotics ISMCR2018},
    title = {In-flight launch of unmanned aerial vehicles},
    year = {2018},
    volume = {1},
    abstract = {This paper considers the development of a system to enable the in-flight launch of one aerial system by another. The paper will discuss how an optimal release mechanism was developed, taking into account the aerodynamics of one specific mother and child UAV. Furthermore, it will discuss the PID-based control concept that was introduced in order to autonomously stabilize the child UAV after being released from the mothership UAV. Finally, the paper will show how the concept of a mothership UAV + child UAV combination could be put to good use in the context of a search and rescue operation.},
    doi = {10.5281/zenodo.1462605},
    file = {:nauwynck2018flight - In Flight Launch of Unmanned Aerial Vehicles.PDF:PDF},
    keywords = {Unmanned Aerial Vehicles, Control, Autonomous stabilization, Search and Rescue drones, Heterogeneous systems},
    project = {NRTP},
    address = {Mons, Belgium},
    url = {http://mecatron.rma.ac.be/pub/2018/Paper_Niels.pdf},
    unit= {meca-ras}
    }
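
    The PID-based stabilization concept mentioned above can be sketched generically. This is not the paper's controller: the single-axis double-integrator plant, the gains and the function names are all illustrative assumptions, showing only how a discrete PID loop would drive the released child UAV's attitude back to level.

    ```python
    class PID:
        """Discrete PID controller (purely illustrative gains, single axis)."""
        def __init__(self, kp, ki, kd, dt):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral = 0.0
            self.prev_error = None

        def update(self, setpoint, measurement):
            error = setpoint - measurement
            self.integral += error * self.dt
            deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * deriv

    def stabilize(angle0=0.3, rate0=0.0, dt=0.02, steps=1500):
        """Toy model: drive the pitch angle of the released child UAV back
        to level (setpoint 0 rad) after an initial disturbance."""
        pid = PID(kp=4.0, ki=1.0, kd=3.0, dt=dt)
        angle, rate = angle0, rate0
        for _ in range(steps):
            torque = pid.update(0.0, angle)   # control command
            rate += torque * dt               # double-integrator plant
            angle += rate * dt
        return angle
    ```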

  • D. Doroftei and G. De Cubber, “Qualitative and quantitative validation of drone detection systems," in International Symposium on Measurement and Control in Robotics ISMCR2018, Mons, Belgium, 2018.
    [BibTeX] [Abstract] [Download PDF] [DOI]

    As drones are more and more entering our world, so comes the need to regulate the access to airspace for these systems. A necessary tool in order to do this is a means of detecting these drones. Numerous commercial and non-commercial parties have started the development of such drone detection systems. A big problem with these systems is that the evaluation of the performance of drone detection systems is a difficult operation, which requires the careful consideration of all technical and non-technical aspects of the system under test. Indeed, weather conditions and small variations in the appearance of the targets can have a huge difference on the performance of the systems. In order to provide a fair evaluation and an honest comparison between systems, it is therefore paramount that a stringent validation procedure is followed. Moreover, the validation methodology needs to find a compromise between the often contrasting requirements of end users (who want tests to be performed in operational conditions) and platform developers (who want tests to be performed that are statistically relevant). Therefore, we propose in this paper a qualitative and quantitative validation methodology for drone detection systems. The proposed validation methodology seeks to find this compromise between operationally relevant benchmarking (by providing qualitative benchmarking under varying environmental conditions) and statistically relevant evaluation (by providing quantitative score sheets under strictly described conditions).

    @InProceedings{doroftei2018qualitative,
    author = {Doroftei, Daniela and De Cubber, Geert},
    booktitle = {International Symposium on Measurement and Control in Robotics ISMCR2018},
    title = {Qualitative and quantitative validation of drone detection systems},
    year = {2018},
    volume = {1},
    abstract = {As drones are more and more entering our world, so comes the need to regulate the access to airspace for these systems. A necessary tool in order to do this is a means of detecting these drones. Numerous commercial and non-commercial parties have started the development of such drone detection systems. A big problem with these systems is that the evaluation of the performance of drone detection systems is a difficult operation, which requires the careful consideration of all technical and non-technical aspects of the system under test. Indeed, weather conditions and small variations in the appearance of the targets can have a huge difference on the performance of the systems. In order to provide a fair evaluation and an honest comparison between systems, it is therefore paramount that a stringent validation procedure is followed. Moreover, the validation methodology needs to find a compromise between the often contrasting requirements of end users (who want tests to be performed in operational conditions) and platform developers (who want tests to be performed that are statistically relevant). Therefore, we propose in this paper a qualitative and quantitative validation methodology for drone detection systems. The proposed validation methodology seeks to find this compromise between operationally relevant benchmarking (by providing qualitative benchmarking under varying environmental conditions) and statistically relevant evaluation (by providing quantitative score sheets under strictly described conditions).},
    doi = {10.5281/ZENODO.1462586},
    file = {:doroftei2018qualitative - Qualitative and Quantitative Validation of Drone Detection Systems.PDF:PDF},
    keywords = {Unmanned Aerial Vehicles, Drones, Detection systems, Drone detection, Test and evaluation methods},
    project = {SafeShore},
    address = {Mons, Belgium},
    url = {http://mecatron.rma.ac.be/pub/2018/Paper_Daniela.pdf},
    unit= {meca-ras}
    }


  • H. Balta, J. Velagic, G. De Cubber, W. Bosschaerts, and B. Siciliano, “Fast Statistical Outlier Removal Based Method for Large 3D Point Clouds of Outdoor Environments," in 12th IFAC SYMPOSIUM ON ROBOT CONTROL – SYROCO 2018, Budapest, Hungary, 2018, p. 348–353.
    [BibTeX] [Abstract] [Download PDF] [DOI]

    This paper proposes a very effective method for data handling and preparation of the input 3D scans acquired from a laser scanner mounted on an Unmanned Ground Vehicle (UGV). The main objectives are to improve and speed up the process of outlier removal for large-scale outdoor environments. This process is necessary in order to filter out the noise and to downsample the input data, which will spare computational and memory resources for further processing steps, such as 3D mapping of rough terrain and unstructured environments. It includes the voxel-subsampling and Fast Cluster Statistical Outlier Removal (FCSOR) subprocesses. The introduced FCSOR represents an extension of the Statistical Outlier Removal (SOR) method which is effective for both homogeneous and heterogeneous point clouds. This method is evaluated on real data obtained in an outdoor environment.

    @InProceedings{balta2018fast01,
    author = {Balta, Haris and Velagic, Jasmin and De Cubber, Geert and Bosschaerts, Walter and Siciliano, Bruno},
    booktitle = {12th IFAC SYMPOSIUM ON ROBOT CONTROL - SYROCO 2018},
    title = {Fast Statistical Outlier Removal Based Method for Large {3D} Point Clouds of Outdoor Environments},
    year = {2018},
    number = {22},
    pages = {348--353},
    publisher = {Elsevier {BV}},
    volume = {51},
    abstract = {This paper proposes a very effective method for data handling and preparation of the input 3D scans acquired from a laser scanner mounted on an Unmanned Ground Vehicle (UGV). The main objectives are to improve and speed up the process of outlier removal for large-scale outdoor environments. This process is necessary in order to filter out the noise and to downsample the input data, which will spare computational and memory resources for further processing steps, such as 3D mapping of rough terrain and unstructured environments. It includes the voxel-subsampling and Fast Cluster Statistical Outlier Removal (FCSOR) subprocesses. The introduced FCSOR represents an extension of the Statistical Outlier Removal (SOR) method which is effective for both homogeneous and heterogeneous point clouds. This method is evaluated on real data obtained in an outdoor environment.},
    doi = {10.1016/j.ifacol.2018.11.566},
    file = {:balta2018fast - Fast Statistical Outlier Removal Based Method for Large 3D Point Clouds of Outdoor Environments.PDF:PDF},
    journal = {{IFAC}-{PapersOnLine}},
    project = {NRTP},
    address = {Budapest, Hungary},
    url = {https://www.sciencedirect.com/science/article/pii/S2405896318332725},
    unit= {meca-ras}
    }
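
    For reference, the classical SOR baseline that the FCSOR method above extends can be sketched in a few lines: compute each point's mean distance to its k nearest neighbours and discard points whose mean distance exceeds the global mean plus a multiple of the standard deviation. A minimal illustration (not the paper's accelerated clustered variant); parameter names are assumptions.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def statistical_outlier_removal(points, k=8, std_ratio=1.0):
        """Classical SOR filter for an (n, 3) point cloud."""
        tree = cKDTree(points)
        # query k+1 neighbours: the first hit is the point itself (distance 0)
        dists, _ = tree.query(points, k=k + 1)
        mean_d = dists[:, 1:].mean(axis=1)           # mean distance to k neighbours
        thresh = mean_d.mean() + std_ratio * mean_d.std()
        return points[mean_d <= thresh]
    ```

    The quadratic-looking neighbour search is what makes plain SOR slow on large outdoor scans; the paper's FCSOR tackles exactly that bottleneck by clustering before filtering.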

  • H. Balta, J. Velagic, G. De Cubber, W. Bosschaerts, and B. Siciliano, “Fast Iterative 3D Mapping for Large-Scale Outdoor Environments with Local Minima Escape Mechanism," in 12th IFAC SYMPOSIUM ON ROBOT CONTROL – SYROCO 2018, Budapest, Hungary, 2018, p. 298–305.
    [BibTeX] [Abstract] [Download PDF] [DOI]

    This paper introduces a novel iterative 3D mapping framework for large-scale natural terrain and complex environments. The framework is based on an Iterative-Closest-Point (ICP) algorithm and an iterative error minimization mechanism, allowing robust 3D map registration. This was accomplished by performing pairwise scan registrations without any prior pose estimation information and taking into account the measurement uncertainties due to deviations in the 6D coordinates (translation and rotation) of the acquired scans. Since the ICP algorithm is not guaranteed to escape from local minima during the mapping, new algorithms for the local minima estimation and local minima escape process were proposed. The proposed framework is validated using large-scale field test data sets. The experimental results were compared with those of standard, generalized and non-linear ICP registration methods and the performance evaluation is presented, showing improved performance of the proposed 3D mapping framework.

    @InProceedings{balta2018fast02,
    author = {Balta, Haris and Velagic, Jasmin and De Cubber, Geert and Bosschaerts, Walter and Siciliano, Bruno},
    booktitle = {12th IFAC SYMPOSIUM ON ROBOT CONTROL - SYROCO 2018},
    title = {Fast Iterative {3D} Mapping for Large-Scale Outdoor Environments with Local Minima Escape Mechanism},
    year = {2018},
    number = {22},
    pages = {298--305},
    publisher = {Elsevier {BV}},
    volume = {51},
    abstract = {This paper introduces a novel iterative 3D mapping framework for large-scale natural terrain and complex environments. The framework is based on an Iterative-Closest-Point (ICP) algorithm and an iterative error minimization mechanism, allowing robust 3D map registration. This was accomplished by performing pairwise scan registrations without any prior pose estimation information and taking into account the measurement uncertainties due to deviations in the 6D coordinates (translation and rotation) of the acquired scans. Since the ICP algorithm is not guaranteed to escape from local minima during the mapping, new algorithms for the local minima estimation and local minima escape process were proposed. The proposed framework is validated using large-scale field test data sets. The experimental results were compared with those of standard, generalized and non-linear ICP registration methods and the performance evaluation is presented, showing improved performance of the proposed 3D mapping framework.},
    doi = {10.1016/j.ifacol.2018.11.558},
    journal = {{IFAC}-{PapersOnLine}},
    address = {Budapest, Hungary},
    project = {NRTP},
    url = {https://www.sciencedirect.com/science/article/pii/S2405896318332646},
    unit= {meca-ras}
    }
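
    The ICP core that the mapping framework above builds on can be sketched as a plain point-to-point variant with an SVD (Kabsch) alignment at each step. This minimal sketch omits everything the paper contributes (registration without pose priors, uncertainty handling, the local-minima escape mechanism); names and iteration counts are illustrative.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def icp(source, target, iters=30):
        """Minimal point-to-point ICP; returns (R, t) with
        aligned = source @ R.T + t."""
        src = source.copy()
        R_total, t_total = np.eye(3), np.zeros(3)
        tree = cKDTree(target)
        for _ in range(iters):
            _, idx = tree.query(src)               # closest-point correspondences
            tgt = target[idx]
            mu_s, mu_t = src.mean(0), tgt.mean(0)
            H = (src - mu_s).T @ (tgt - mu_t)      # cross-covariance
            U, _, Vt = np.linalg.svd(H)
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:               # guard against reflections
                Vt[-1] *= -1
                R = Vt.T @ U.T
            t = mu_t - R @ mu_s
            src = src @ R.T + t
            # compose with the accumulated transform
            R_total, t_total = R @ R_total, R @ t_total + t
        return R_total, t_total
    ```

    Because the closest-point objective is non-convex, this basic loop can stall in a local minimum for poorly initialized scan pairs, which is precisely the failure mode the paper's escape mechanism addresses.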

  • G. De Cubber, “Legal Issues in Search and Rescue UAV operations," in IROS2018 forum on Legal Issues, Cybersecurity and Policymakers Implication in AI Robotics, Madrid, Spain, 2018.
    [BibTeX]
    @InProceedings{de2018legal,
    author = {De Cubber, Geert},
    booktitle = {IROS2018 forum on Legal Issues, Cybersecurity and Policymakers Implication in AI Robotics},
    title = {Legal Issues in Search and Rescue {UAV} operations},
    year = {2018},
    address = {Madrid, Spain},
    project = {ICARUS},
    unit= {meca-ras}
    }

2017

  • D. S. López, G. Moreno, J. Cordero, J. Sanchez, S. Govindaraj, M. M. Marques, V. Lobo, S. Fioravanti, A. Grati, K. Rudin, M. Tosa, A. Matos, A. Dias, A. Martins, J. Bedkowski, H. Balta, and G. De Cubber, “Interoperability in a Heterogeneous Team of Search and Rescue Robots," in Search and Rescue Robotics – From Theory to Practice, G. De Cubber and D. Doroftei, Eds., InTech, 2017.
    [BibTeX] [Abstract] [Download PDF] [DOI]

    Search and rescue missions are complex operations. A disaster scenario is generally unstructured, time-varying and unpredictable. This poses several challenges for the successful deployment of unmanned technology. The variety of operational scenarios and tasks leads to the need for multiple robots of different types, domains and sizes. A priori planning of the optimal set of assets to be deployed and the definition of their mission objectives are generally not feasible, as information only becomes available during the mission. The ICARUS project responds to this challenge by developing a heterogeneous team composed of different and complementary robots, dynamically cooperating as an interoperable team. This chapter describes our approach to multi-robot interoperability, understood as the ability of multiple robots to operate together, in synergy, enabling multiple teams to share data, intelligence and resources, which is the ultimate objective of the ICARUS project. It also includes the analysis of the relevant standardization initiatives in multi-robot multi-domain systems, our implementation of an interoperability framework and several examples of multi-robot cooperation of the ICARUS robots in realistic search and rescue missions.

    @InBook{lopez2017interoperability,
    author = {Daniel Serrano L{\'{o}}pez and German Moreno and Jose Cordero and Jose Sanchez and Shashank Govindaraj and Mario Monteiro Marques and Victor Lobo and Stefano Fioravanti and Alberto Grati and Konrad Rudin and Massimo Tosa and Anibal Matos and Andre Dias and Alfredo Martins and Janusz Bedkowski and Haris Balta and De Cubber, Geert},
    editor = {De Cubber, Geert and Doroftei, Daniela},
    chapter = {Chapter 6},
    publisher = {{InTech}},
    title = {Interoperability in a Heterogeneous Team of Search and Rescue Robots},
    year = {2017},
    month = aug,
    abstract = {Search and rescue missions are complex operations. A disaster scenario is generally unstructured, time-varying and unpredictable. This poses several challenges for the successful deployment of unmanned technology. The variety of operational scenarios and tasks leads to the need for multiple robots of different types, domains and sizes. A priori planning of the optimal set of assets to be deployed and the definition of their mission objectives are generally not feasible, as information only becomes available during the mission. The ICARUS project responds to this challenge by developing a heterogeneous team composed of different and complementary robots, dynamically cooperating as an interoperable team. This chapter describes our approach to multi-robot interoperability, understood as the ability of multiple robots to operate together, in synergy, enabling multiple teams to share data, intelligence and resources, which is the ultimate objective of the ICARUS project. It also includes the analysis of the relevant standardization initiatives in multi-robot multi-domain systems, our implementation of an interoperability framework and several examples of multi-robot cooperation of the ICARUS robots in realistic search and rescue missions.},
    booktitle = {Search and Rescue Robotics - From Theory to Practice},
    doi = {10.5772/intechopen.69493},
    project = {ICARUS},
    unit= {meca-ras},
    url = {https://www.intechopen.com/books/search-and-rescue-robotics-from-theory-to-practice/interoperability-in-a-heterogeneous-team-of-search-and-rescue-robots},
    }

  • G. De Cubber, D. Doroftei, H. Balta, A. Matos, E. Silva, D. Serrano, S. Govindaraj, R. Roda, V. Lobo, M. Marques, and R. Wagemans, “Operational Validation of Search and Rescue Robots," in Search and Rescue Robotics – From Theory to Practice, G. De Cubber and D. Doroftei, Eds., InTech, 2017.
    [BibTeX] [Abstract] [Download PDF] [DOI]

    This chapter describes how the different ICARUS unmanned search and rescue tools have been evaluated and validated using operational benchmarking techniques. Two large-scale simulated disaster scenarios were organized: a simulated shipwreck and an earthquake response scenario. Next to these simulated response scenarios, where ICARUS tools were deployed in tight interaction with real end users, ICARUS tools also participated in a real relief operation, embedded in a team of end users for a flood response mission. These validation trials allow us to conclude that the ICARUS tools fulfil the user requirements and goals set up at the beginning of the project.

    @InBook{de2017operational,
    author = {De Cubber, Geert and Daniela Doroftei and Haris Balta and Anibal Matos and Eduardo Silva and Daniel Serrano and Shashank Govindaraj and Rui Roda and Victor Lobo and M{\'{a}}rio Marques and Rene Wagemans},
    editor = {De Cubber, Geert and Doroftei, Daniela},
    chapter = {Chapter 10},
    publisher = {{InTech}},
    title = {Operational Validation of Search and Rescue Robots},
    year = {2017},
    month = aug,
    abstract = {This chapter describes how the different ICARUS unmanned search and rescue tools have been evaluated and validated using operational benchmarking techniques. Two large-scale simulated disaster scenarios were organized: a simulated shipwreck and an earthquake response scenario. Next to these simulated response scenarios, where ICARUS tools were deployed in tight interaction with real end users, ICARUS tools also participated in a real relief operation, embedded in a team of end users for a flood response mission. These validation trials allow us to conclude that the ICARUS tools fulfil the user requirements and goals set up at the beginning of the project.},
    booktitle = {Search and Rescue Robotics - From Theory to Practice},
    doi = {10.5772/intechopen.69497},
    journal = {Search and Rescue Robotics: From Theory to Practice},
    project = {ICARUS},
    url = {https://www.intechopen.com/books/search-and-rescue-robotics-from-theory-to-practice/operational-validation-of-search-and-rescue-robots},
    unit= {meca-ras}
    }

  • K. Berns, A. Nezhadfard, M. Tosa, H. Balta, and G. De Cubber, “Unmanned Ground Robots for Rescue Tasks," in Search and Rescue Robotics – From Theory to Practice, G. De Cubber and D. Doroftei, Eds., InTech, 2017.
    [BibTeX] [Abstract] [Download PDF] [DOI]

    This chapter describes two unmanned ground vehicles that can help search and rescue teams in their difficult, but life-saving tasks. These robotic assets have been developed within the framework of the European project ICARUS. The large unmanned ground vehicle is intended to be a mobile base station. It is equipped with a powerful manipulator arm and can be used for debris removal, shoring operations, and remote structural operations (cutting, welding, hammering, etc.) on very rough terrain. The smaller unmanned ground vehicle is also equipped with an array of sensors, enabling it to search for victims inside semi-destroyed buildings. Working together with each other and the human search and rescue workers, these robotic assets form a powerful team, increasing the effectiveness of search and rescue operations, as proven by operational validation tests in collaboration with end users.

    @InBook{berns2017unmanned,
    author = {Karsten Berns and Atabak Nezhadfard and Massimo Tosa and Haris Balta and De Cubber, Geert},
    editor = {De Cubber, Geert and Doroftei, Daniela},
    chapter = {Chapter 4},
    publisher = {{InTech}},
    title = {Unmanned Ground Robots for Rescue Tasks},
    year = {2017},
    month = aug,
    abstract = {This chapter describes two unmanned ground vehicles that can help search and rescue teams in their difficult, but life-saving tasks. These robotic assets have been developed within the framework of the European project ICARUS. The large unmanned ground vehicle is intended to be a mobile base station. It is equipped with a powerful manipulator arm and can be used for debris removal, shoring operations, and remote structural operations (cutting, welding, hammering, etc.) on very rough terrain. The smaller unmanned ground vehicle is also equipped with an array of sensors, enabling it to search for victims inside semi-destroyed buildings. Working together with each other and the human search and rescue workers, these robotic assets form a powerful team, increasing the effectiveness of search and rescue operations, as proven by operational validation tests in collaboration with end users.},
    booktitle = {Search and Rescue Robotics - From Theory to Practice},
    doi = {10.5772/intechopen.69491},
    project = {ICARUS},
    url = {https://www.intechopen.com/books/search-and-rescue-robotics-from-theory-to-practice/unmanned-ground-robots-for-rescue-tasks},
    unit= {meca-ras}
    }

  • D. Doroftei, G. De Cubber, R. Wagemans, A. Matos, E. Silva, V. Lobo, G. Cardoso, K. Chintamani, S. Govindaraj, J. Gancet, and D. Serrano, “User-centered design," in Search and Rescue Robotics – From Theory to Practice, G. De Cubber and D. Doroftei, Eds., InTech, 2017, p. 19–36.
    [BibTeX] [Abstract] [Download PDF] [DOI]

    The successful introduction and acceptance of novel technological tools are only possible if end users are completely integrated in the design process. However, obtaining such integration of end users is not obvious, as end‐user organizations often do not consider research toward new technological aids as their core business and are therefore reluctant to engage in these kinds of activities. This chapter explains how this problem was tackled in the ICARUS project, by carefully identifying and approaching the targeted user communities and by compiling user requirements. Resulting from these user requirements, system requirements and a system architecture for the ICARUS system were deduced. An important aspect of the user‐centered design approach is that it is an iterative methodology, based on multiple intermediate operational validations by end users of the developed tools, leading to a final validation according to user‐scripted validation scenarios.

    @InBook{doroftei2017user,
    author = {Doroftei, Daniela and De Cubber, Geert and Wagemans, Rene and Matos, Anibal and Silva, Eduardo and Lobo, Victor and Guerreiro Cardoso and Chintamani, Keshav and Govindaraj, Shashank and Gancet, Jeremi and Serrano, Daniel},
    editor = {De Cubber, Geert and Doroftei, Daniela},
    chapter = {Chapter 2},
    pages = {19--36},
    publisher = {{InTech}},
    title = {User-centered design},
    year = {2017},
    abstract = {The successful introduction and acceptance of novel technological tools are only possible if end users are completely integrated in the design process. However, obtaining such integration of end users is not obvious, as end‐user organizations often do not consider research toward new technological aids as their core business and are therefore reluctant to engage in these kinds of activities. This chapter explains how this problem was tackled in the ICARUS project, by carefully identifying and approaching the targeted user communities and by compiling user requirements. Resulting from these user requirements, system requirements and a system architecture for the ICARUS system were deduced. An important aspect of the user‐centered design approach is that it is an iterative methodology, based on multiple intermediate operational validations by end users of the developed tools, leading to a final validation according to user‐scripted validation scenarios.},
    doi = {10.5772/intechopen.69483},
    booktitle = {Search and Rescue Robotics - From Theory to Practice},
    project = {ICARUS},
    unit= {meca-ras},
    url = {https://www.intechopen.com/books/search-and-rescue-robotics-from-theory-to-practice/user-centered-design},
    }

  • G. De Cubber, D. Doroftei, K. Rudin, K. Berns, A. Matos, D. Serrano, J. Sanchez, S. Govindaraj, J. Bedkowski, R. Roda, E. Silva, and S. Ourevitch, “Introduction to the use of robotic tools for search and rescue," in Search and Rescue Robotics – From Theory to Practice, G. De Cubber and D. Doroftei, Eds., InTech, 2017.
    [BibTeX] [Abstract] [Download PDF] [DOI]

    Modern search and rescue workers are equipped with a powerful toolkit to address natural and man-made disasters. This introductory chapter explains how a new tool can be added to this toolkit: robots. The use of robotic assets in search and rescue operations is explained and an overview is given of the worldwide efforts to incorporate robotic tools in search and rescue operations. Furthermore, the European Union ICARUS project on this subject is introduced. The ICARUS project proposes to equip first responders with a comprehensive and integrated set of unmanned search and rescue tools, to increase the situational awareness of human crisis managers, such that more work can be done in a shorter amount of time. The ICARUS tools consist of assistive unmanned air, ground, and sea vehicles, equipped with victim-detection sensors. The unmanned vehicles collaborate as a coordinated team, communicating via ad hoc cognitive radio networking. To ensure optimal human-robot collaboration, these tools are seamlessly integrated into the command and control equipment of the human crisis managers and a set of training and support tools is provided to them to learn to use the ICARUS system.

    @InBook{cubber2017introduction,
    author = {Geert De Cubber and Daniela Doroftei and Konrad Rudin and Karsten Berns and Anibal Matos and Daniel Serrano and Jose Sanchez and Shashank Govindaraj and Janusz Bedkowski and Rui Roda and Eduardo Silva and Stephane Ourevitch},
    editor = {De Cubber, Geert and Doroftei, Daniela},
    chapter = {Chapter 1},
    publisher = {{InTech}},
    title = {Introduction to the use of robotic tools for search and rescue},
    year = {2017},
    month = aug,
    abstract = {Modern search and rescue workers are equipped with a powerful toolkit to address natural and man-made disasters. This introductory chapter explains how a new tool can be added to this toolkit: robots. The use of robotic assets in search and rescue operations is explained and an overview is given of the worldwide efforts to incorporate robotic tools in search and rescue operations. Furthermore, the European Union ICARUS project on this subject is introduced. The ICARUS project proposes to equip first responders with a comprehensive and integrated set of unmanned search and rescue tools, to increase the situational awareness of human crisis managers, such that more work can be done in a shorter amount of time. The ICARUS tools consist of assistive unmanned air, ground, and sea vehicles, equipped with victim-detection sensors. The unmanned vehicles collaborate as a coordinated team, communicating via ad hoc cognitive radio networking. To ensure optimal human-robot collaboration, these tools are seamlessly integrated into the command and control equipment of the human crisis managers and a set of training and support tools is provided to them to learn to use the ICARUS system.},
    booktitle = {Search and Rescue Robotics - From Theory to Practice},
    doi = {10.5772/intechopen.69489},
    project = {ICARUS},
    url = {https://www.intechopen.com/books/search-and-rescue-robotics-from-theory-to-practice/introduction-to-the-use-of-robotic-tools-for-search-and-rescue},
    unit= {meca-ras}
    }

  • G. De Cubber, R. Shalom, A. Coluccia, O. Borcan, R. Chamrád, T. Radulescu, E. Izquierdo, and Z. Gagov, “The SafeShore system for the detection of threat agents in a maritime border environment," in IARP Workshop on Risky Interventions and Environmental Surveillance, Les Bons Villers, Belgium, 2017.
    [BibTeX] [Abstract] [Download PDF] [DOI]

    This paper discusses the goals of the H2020-SafeShore project, which has as a main goal to cover existing gaps in coastal border surveillance, increasing internal security by preventing cross-border crime such as trafficking in human beings and the smuggling of drugs. It is designed to be integrated with existing systems and create a continuous detection line along the border.

    @InProceedings{de2017safeshore,
    author = {De Cubber, Geert and Shalom, Ron and Coluccia, Angelo and Borcan, Octavia and Chamr{\'a}d, Richard and Radulescu, Tudor and Izquierdo, Ebroul and Gagov, Zhelyazko},
    booktitle = {IARP Workshop on Risky Interventions and Environmental Surveillance},
    title = {The {SafeShore} system for the detection of threat agents in a maritime border environment},
    year = {2017},
    organization = {IARP},
    abstract = {This paper discusses the goals of the H2020-SafeShore project, which has as a main goal to cover existing gaps in coastal border surveillance, increasing internal security by preventing cross-border crime such as trafficking in human beings and the smuggling of drugs. It is designed to be integrated with existing systems and create a continuous detection line along the border.},
    doi = {10.5281/zenodo.1115552},
    keywords = {SafeShore, Counter UAV, Counter RPAS},
    language = {en},
    project = {Safeshore},
    address = {Les Bons Villers, Belgium},
    url = {http://mecatron.rma.ac.be/pub/2017/SafeShore Abstract RISE-2017_.pdf},
    unit= {meca-ras}
    }

  • M. Buric and G. De Cubber, “Counter Remotely Piloted Aircraft Systems," MTA Review, vol. 27, iss. 1, 2017.
    [BibTeX] [Abstract] [Download PDF] [DOI]

    An effective Counter Remotely Piloted Aircraft System is a major objective of many research and industry entities. Their activity is strongly driven by the operational requirements of the Law Enforcement Authorities and naturally follows both the course of the latest terrorist events and technological developments. The design process of an effective Counter Remotely Piloted Aircraft System needs to benefit from a systemic approach, starting from the legal aspects and ending with the technical ones. From a technical point of view, the system has to work according to the five-phase “kill chain” model, starting with the detection phase, going on with the classification, prioritization, tracking and neutralization of the targets, and ending with the forensic phase.

    @Article{buric2017counter,
    author = {Buric, Marian and De Cubber, Geert},
    journal = {MTA Review},
    title = {Counter Remotely Piloted Aircraft Systems},
    year = {2017},
    number = {1},
    volume = {27},
    abstract = {An effective Counter Remotely Piloted Aircraft System is a major objective of many research and industry entities. Their activity is strongly driven by the operational requirements of the Law Enforcement Authorities and naturally follows both the course of the latest terrorist events and technological developments. The design process of an effective Counter Remotely Piloted Aircraft System needs to benefit from a systemic approach, starting from the legal aspects and ending with the technical ones. From a technical point of view, the system has to work according to the five-phase “kill chain” model, starting with the detection phase, going on with the classification, prioritization, tracking and neutralization of the targets, and ending with the forensic phase.},
    doi = {10.5281/zenodo.1115502},
    keywords = {Counter Remotely Piloted Aircraft Systems, drone, drone detection tracking and neutralization, RPAS, SafeShore},
    language = {en},
    publisher = {Military Technical Academy Publishing House},
    project = {SafeShore},
    url = {http://mecatron.rma.ac.be/pub/2017/Counter Remotely Piloted Aircraft Systems.pdf},
    unit= {meca-ras}
    }

  • A. Coluccia, M. Ghenescu, T. Piatrik, G. De Cubber, A. Schumann, L. Sommer, J. Klatte, T. Schuchert, J. Beyerer, M. Farhadi, R. Amandi, C. Aker, S. Kalkan, M. Saqib, N. Sharma, S. Daud, K. Makkah, and M. Blumenstein, “Drone-vs-Bird detection challenge at IEEE AVSS2017," in 2017 14th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), Lecce, Italy, 2017, p. 1–6.
    [BibTeX] [Abstract] [Download PDF] [DOI]

    Small drones are a rising threat due to their possible misuse for illegal activities, in particular smuggling and terrorism. The project SafeShore, funded by the European Commission under the Horizon 2020 program, has launched the drone-vs-bird detection challenge to address one of the many technical issues arising in this context. The goal is to detect a drone appearing at some point in a video where birds may also be present: the algorithm should raise an alarm and provide a position estimate only when a drone is present, while not issuing alarms on birds. This paper reports on the challenge proposal, evaluation, and results.

    @InProceedings{coluccia2017drone,
    author = {Angelo Coluccia and Marian Ghenescu and Tomas Piatrik and Geert De Cubber and Arne Schumann and Lars Sommer and Johannes Klatte and Tobias Schuchert and Juergen Beyerer and Mohammad Farhadi and Ruhallah Amandi and Cemal Aker and Sinan Kalkan and Muhammad Saqib and Nabin Sharma and Sultan Daud and Khan Makkah and Michael Blumenstein},
    booktitle = {2017 14th {IEEE} International Conference on Advanced Video and Signal Based Surveillance ({AVSS})},
    title = {Drone-vs-Bird detection challenge at {IEEE} {AVSS}2017},
    year = {2017},
    month = aug,
    organization = {IEEE},
    pages = {1--6},
    publisher = {{IEEE}},
    abstract = {Small drones are a rising threat due to their possible misuse for illegal activities, in particular smuggling and terrorism. The project SafeShore, funded by the European Commission under the Horizon 2020 program, has launched the drone-vs-bird detection challenge to address one of the many technical issues arising in this context. The goal is to detect a drone appearing at some point in a video where birds may also be present: the algorithm should raise an alarm and provide a position estimate only when a drone is present, while not issuing alarms on birds. This paper reports on the challenge proposal, evaluation, and results.},
    doi = {10.1109/avss.2017.8078464},
    project = {SafeShore},
    address = {Lecce, Italy},
    url = {http://mecatron.rma.ac.be/pub/2017/WOSDETCpaper (1).pdf},
    unit= {meca-ras}
    }

  • I. Lahouli, R. Haelterman, Z. Chtourou, G. De Cubber, and R. Attia, “Pedestrian Tracking in the Compressed Domain Using Thermal Images," in VIIth International Workshop on Representation, analysis and recognition of shape and motion from Image data, Savoie, France, 2017.
    [BibTeX] [Abstract] [Download PDF] [DOI]

    The video surveillance of sensitive facilities or borders poses many challenges like the high bandwidth requirements and the high computational cost. In this paper, we propose a framework for detecting and tracking pedestrians in the compressed domain using thermal images. Firstly, the detection process uses a conjunction between saliency maps and contrast enhancement techniques followed by a global image content descriptor based on Discrete Chebychev Moments (DCM) and a linear Support Vector Machine (SVM) as a classifier. Secondly, the tracking process exploits raw H.264 compressed video streams with limited computational overhead. In addition to two well-known public datasets, we have generated our own dataset by carrying out six different scenarios of suspicious events using a thermal camera. The obtained results show the effectiveness and the low computational requirements of the proposed framework, which make it suitable for real-time applications and on-board implementation.

    @InProceedings{lahouli2017pedestrian,
    author = {Lahouli, Ichraf and Haelterman, Robby and Chtourou, Zied and De Cubber, Geert and Attia, Rabah},
    booktitle = {VIIth International Workshop on Representation, analysis and recognition of shape and motion from Image data},
    title = {Pedestrian Tracking in the Compressed Domain Using Thermal Images},
    year = {2017},
    number = {1},
    volume = {1},
    abstract = {The video surveillance of sensitive facilities or borders poses many challenges like the high bandwidth requirements and the high computational cost. In this paper, we propose a framework for detecting and tracking pedestrians in the compressed domain using thermal images. Firstly, the detection process uses a conjunction between saliency maps and contrast enhancement techniques followed by a global image content descriptor based on Discrete Chebychev Moments (DCM) and a linear Support Vector Machine (SVM) as a classifier. Secondly, the tracking process exploits raw H.264 compressed video streams with limited computational overhead. In addition to two well-known public datasets, we have generated our own dataset by carrying out six different scenarios of suspicious events using a thermal camera. The obtained results show the effectiveness and the low computational requirements of the proposed framework, which make it suitable for real-time applications and on-board implementation.},
    doi = {10.1007/978-3-030-19816-9_3},
    project = {SafeShore},
    address = {Savoie, France},
    url = {http://mecatron.rma.ac.be/pub/2017/RFMI2017_LAHOULI.pdf},
    unit= {meca-ras}
    }

  • G. De Cubber, D. Doroftei, K. Rudin, K. Berns, A. Matos, D. Serrano, J. M. Sanchez, S. Govindaraj, J. Bedkowski, R. Roda, E. Silva, S. Ourevitch, R. Wagemans, V. Lobo, G. Cardoso, K. Chintamani, J. Gancet, P. Stupler, A. Nezhadfard, M. Tosa, H. Balta, J. Almeida, A. Martins, H. Ferreira, B. Ferreira, J. Alves, A. Dias, S. Fioravanti, D. Bertin, G. Moreno, J. Cordero, M. M. Marques, A. Grati, H. M. Chaudhary, B. Sheers, Y. Riobo, P. Letier, M. N. Jimenez, M. A. Esbri, P. Musialik, I. Badiola, R. Goncalves, A. Coelho, T. Pfister, K. Majek, M. Pelka, A. Maslowski, and R. Baptista, Search and Rescue Robotics – From Theory to Practice, G. De Cubber and D. Doroftei, Eds., InTech, 2017.
    [BibTeX] [Abstract] [Download PDF] [DOI]

    In the event of large crises (earthquakes, typhoons, floods, …), a primordial task of the fire and rescue services is the search for human survivors on the incident site. This is a complex and dangerous task, which – too often – leads to loss of lives among the human crisis managers themselves. This book explains how unmanned search can be added to the toolkit of the search and rescue workers, offering a valuable tool to save human lives and to speed up the search and rescue process. The introduction of robotic tools in the world of search and rescue is not straightforward, due to the fact that the search and rescue context is extremely technology-unfriendly, meaning that very robust solutions, which can be deployed extremely quickly, are required. Multiple research projects across the world are tackling this problem and in this book, a special focus is placed on showcasing the results of the European Union ICARUS project on this subject. The ICARUS project proposes to equip first responders with a comprehensive and integrated set of unmanned search and rescue tools, to increase the situational awareness of human crisis managers, so that more work can be done in a shorter amount of time. The ICARUS tools consist of assistive unmanned air, ground, and sea vehicles, equipped with victim-detection sensors. The unmanned vehicles collaborate as a coordinated team, communicating via ad hoc cognitive radio networking. To ensure optimal human-robot collaboration, these tools are seamlessly integrated into the command and control equipment of the human crisis managers and a set of training and support tools is provided to them in order to learn to use the ICARUS system.

    @Book{de2017search,
    author = {Geert De Cubber and Daniela Doroftei and Konrad Rudin and Karsten Berns and Anibal Matos and Daniel Serrano and Jose Manuel Sanchez and Shashank Govindaraj and Janusz Bedkowski and Rui Roda and Eduardo Silva and Stephane Ourevitch and Rene Wagemans and Victor Lobo and Guerreiro Cardoso and Keshav Chintamani and Jeremi Gancet and Pascal Stupler and Atabak Nezhadfard and Massimo Tosa and Haris Balta and Jose Almeida and Alfredo Martins and Hugo Ferreira and Bruno Ferreira and Jose Alves and Andre Dias and Stefano Fioravanti and Daniele Bertin and German Moreno and Jose Cordero and Mario Monteiro Marques and Alberto Grati and Hafeez M. Chaudhary and Bart Sheers and Yudani Riobo and Pierre Letier and Mario Nunez Jimenez and Miguel Angel Esbri and Pawel Musialik and Irune Badiola and Ricardo Goncalves and Antonio Coelho and Thomas Pfister and Karol Majek and Michal Pelka and Andrzej Maslowski and Ricardo Baptista},
    editor = {De Cubber, Geert and Doroftei, Daniela},
    publisher = {{InTech}},
    title = {Search and Rescue Robotics - From Theory to Practice},
    year = {2017},
    month = aug,
    abstract = {In the event of large crises (earthquakes, typhoons, floods, ...), a primordial task of the fire and rescue services is the search for human survivors on the incident site. This is a complex and dangerous task, which - too often - leads to loss of lives among the human crisis managers themselves. This book explains how unmanned search can be added to the toolkit of the search and rescue workers, offering a valuable tool to save human lives and to speed up the search and rescue process. The introduction of robotic tools in the world of search and rescue is not straightforward, due to the fact that the search and rescue context is extremely technology-unfriendly, meaning that very robust solutions, which can be deployed extremely quickly, are required. Multiple research projects across the world are tackling this problem and in this book, a special focus is placed on showcasing the results of the European Union ICARUS project on this subject. The ICARUS project proposes to equip first responders with a comprehensive and integrated set of unmanned search and rescue tools, to increase the situational awareness of human crisis managers, so that more work can be done in a shorter amount of time. The ICARUS tools consist of assistive unmanned air, ground, and sea vehicles, equipped with victim-detection sensors. The unmanned vehicles collaborate as a coordinated team, communicating via ad hoc cognitive radio networking. To ensure optimal human-robot collaboration, these tools are seamlessly integrated into the command and control equipment of the human crisis managers and a set of training and support tools is provided to them in order to learn to use the ICARUS system.},
    doi = {10.5772/intechopen.68449},
    project = {ICARUS},
    url = {https://www.intechopen.com/books/search-and-rescue-robotics-from-theory-to-practice},
    unit= {meca-ras}
    }

  • Y. Baudoin, D. Doroftei, G. De Cubber, J. Habumuremyi, H. Balta, and I. Doroftei, “Unmanned Ground and Aerial Robots Supporting Mine Action Activities," in Mine Action – The Research Experience of the Royal Military Academy of Belgium, C. Beumier, D. Closson, V. Lacroix, N. Milisavljevic, and Y. Yvinec, Eds., InTech, 2017, vol. 1.
    [BibTeX] [Abstract] [Download PDF] [DOI]

    During humanitarian demining actions, teleoperation of sensors or multi‐sensor heads can enhance the detection process by allowing more precise scanning, which is useful for the optimization of the signal processing algorithms. This chapter summarizes the technologies and experiences developed during 16 years through national and/or European‐funded projects, illustrated by some contributions of our own laboratory, located at the Royal Military Academy of Brussels, focusing on the detection of unexploded devices and the implementation of mobile robotics systems on minefields.

    @InBook{baudoin2017unmanned,
    author = {Baudoin, Yvan and Doroftei, Daniela and De Cubber, Geert and Habumuremyi, Jean-Claude and Balta, Haris and Doroftei, Ioan},
    editor = {Beumier, Charles and Closson, Damien and Lacroix, Vinciane and Milisavljevic, Nada and Yvinec, Yann},
    chapter = {Chapter 9},
    publisher = {{InTech}},
    title = {Unmanned Ground and Aerial Robots Supporting Mine Action Activities},
    year = {2017},
    month = aug,
    volume = {1},
    abstract = {During humanitarian demining actions, teleoperation of sensors or multi‐sensor heads can enhance the detection process by allowing more precise scanning, which is useful for the optimization of the signal processing algorithms. This chapter summarizes the technologies and experiences developed during 16 years through national and/or European‐funded projects, illustrated by some contributions of our own laboratory, located at the Royal Military Academy of Brussels, focusing on the detection of unexploded devices and the implementation of mobile robotics systems on minefields.},
    booktitle = {Mine Action - The Research Experience of the Royal Military Academy of Belgium},
    doi = {10.5772/65783},
    project = {TIRAMISU},
    url = {https://www.intechopen.com/books/mine-action-the-research-experience-of-the-royal-military-academy-of-belgium/unmanned-ground-and-aerial-robots-supporting-mine-action-activities},
    unit= {meca-ras}
    }

2016

  • M. M. Marques, R. Parreira, V. Lobo, A. Martins, A. Matos, N. Cruz, J. M. Almeida, J. C. Alves, E. Silva, J. Bedkowski, K. Majek, M. Pelka, P. Musialik, H. Ferreira, A. Dias, B. Ferreira, G. Amaral, A. Figueiredo, R. Almeida, F. Silva, D. Serrano, G. Moreno, G. De Cubber, H. Balta, and H. Beglerovic, “Use of multi-domain robots in search and rescue operations — Contributions of the ICARUS team to the euRathlon 2015 challenge," in OCEANS 2016, Shanghai, China, 2016, p. 1–7.
    [BibTeX] [Download PDF] [DOI]
    @InProceedings{marques2016use,
    author = {Mario Monteiro Marques and Rui Parreira and Victor Lobo and Alfredo Martins and Anibal Matos and Nuno Cruz and Jose Miguel Almeida and Jose Carlos Alves and Eduardo Silva and Janusz Bedkowski and Karol Majek and Michal Pelka and Pawel Musialik and Hugo Ferreira and Andre Dias and Bruno Ferreira and Guilherme Amaral and Andre Figueiredo and Rui Almeida and Filipe Silva and Daniel Serrano and German Moreno and De Cubber, Geert and Haris Balta and Halil Beglerovic},
    booktitle = {{OCEANS} 2016},
    title = {Use of multi-domain robots in search and rescue operations {\textemdash} Contributions of the {ICARUS} team to the {euRathlon} 2015 challenge},
    year = {2016},
    month = apr,
    organization = {IEEE},
    pages = {1--7},
    publisher = {{IEEE}},
    doi = {10.1109/oceansap.2016.7485354},
    project = {ICARUS},
    unit= {meca-ras},
    address = {Shanghai, China},
    url = {http://mecatron.rma.ac.be/pub/2016/euRathlon2015_paper_final.pdf},
    }

  • H. Balta, J. Bedkowski, S. Govindaraj, K. Majek, P. Musialik, D. Serrano, K. Alexis, R. Siegwart, and G. De Cubber, “Integrated Data Management for a Fleet of Search-and-rescue Robots," Journal of Field Robotics, vol. 34, iss. 3, p. 539–582, 2016.
    [BibTeX] [Abstract] [Download PDF] [DOI]

    Search‐and‐rescue operations have recently been confronted with the introduction of robotic tools that assist the human search‐and‐rescue workers in their dangerous but life‐saving job of searching for human survivors after major catastrophes. However, the world of search and rescue is highly reliant on strict procedures for the transfer of messages, alarms, data, and command and control over the deployed assets. The introduction of robotic tools into this world causes an important structural change in this procedural toolchain. Moreover, the introduction of search‐and‐rescue robots acting as data gatherers could potentially lead to an information overload toward the human search‐and‐rescue workers, if the data acquired by these robotic tools are not managed in an intelligent way. With that in mind, we present in this paper an integrated data combination and data management architecture that is able to accommodate real‐time data gathered by a fleet of robotic vehicles on a crisis site, and we present and publish these data in a way that is easy to understand by end‐users. In the scope of this paper, a fleet of unmanned ground and aerial search‐and‐rescue vehicles is considered, developed within the scope of the European ICARUS project. As a first step toward the integrated data‐management methodology, the different robotic systems require an interoperable framework in order to pass data from one to another and toward the unified command and control station. As a second step, a data fusion methodology will be presented, combining the data acquired by the different heterogenic robotic systems. The computation needed for this process is done in a novel mobile data center and then (as a third step) published in a software as a service (SaaS) model. The SaaS model helps in providing access to robotic data over ubiquitous Ethernet connections. 
As a final step, we show how the presented data‐management architecture allows for reusing recorded exercises with real robots and rescue teams for training purposes and teaching search‐and‐rescue personnel how to handle the different robotic tools. The system was validated in two experiments. First, in the controlled environment of a military testing base, a fleet of unmanned ground and aerial vehicles was deployed in an earthquake‐response scenario. The data gathered by the different interoperable robotic systems were combined by a novel mobile data center and presented to the end‐user public. Second, an unmanned aerial system was deployed on an actual mission with an international relief team to help with the relief operations after major flooding in Bosnia in the spring of 2014. Due to the nature of the event (floods), no ground vehicles were deployed here, but all data acquired by the aerial system (mainly three‐dimensional maps) were stored in the ICARUS data center, where they were securely published for authorized personnel all over the world. This mission (which is, to our knowledge, the first recorded deployment of an unmanned aerial system by an official governmental international search‐and‐rescue team in another country) proved also the concept of the procedural integration of the ICARUS data management system into the existing procedural toolchain of the search and rescue workers, and this in an international context (deployment from Belgium to Bosnia). The feedback received from the search‐and‐rescue personnel on both validation exercises was highly positive, proving that the ICARUS data management system can efficiently increase the situational awareness of the search‐and‐rescue personnel.

    @Article{balta2017integrated,
    author = {Haris Balta and Janusz Bedkowski and Shashank Govindaraj and Karol Majek and Pawel Musialik and Daniel Serrano and Kostas Alexis and Roland Siegwart and De Cubber, Geert},
    journal = {Journal of Field Robotics},
    title = {Integrated Data Management for a Fleet of Search-and-rescue Robots},
    year = {2016},
    month = jul,
    number = {3},
    pages = {539--582},
    volume = {34},
    abstract = {Search‐and‐rescue operations have recently been confronted with the introduction of robotic tools that assist the human search‐and‐rescue workers in their dangerous but life‐saving job of searching for human survivors after major catastrophes. However, the world of search and rescue is highly reliant on strict procedures for the transfer of messages, alarms, data, and command and control over the deployed assets. The introduction of robotic tools into this world causes an important structural change in this procedural toolchain. Moreover, the introduction of search‐and‐rescue robots acting as data gatherers could potentially lead to an information overload toward the human search‐and‐rescue workers, if the data acquired by these robotic tools are not managed in an intelligent way. With that in mind, we present in this paper an integrated data combination and data management architecture that is able to accommodate real‐time data gathered by a fleet of robotic vehicles on a crisis site, and we present and publish these data in a way that is easy to understand by end‐users. In the scope of this paper, a fleet of unmanned ground and aerial search‐and‐rescue vehicles is considered, developed within the scope of the European ICARUS project. As a first step toward the integrated data‐management methodology, the different robotic systems require an interoperable framework in order to pass data from one to another and toward the unified command and control station. As a second step, a data fusion methodology will be presented, combining the data acquired by the different heterogenic robotic systems. The computation needed for this process is done in a novel mobile data center and then (as a third step) published in a software as a service (SaaS) model. The SaaS model helps in providing access to robotic data over ubiquitous Ethernet connections. 
As a final step, we show how the presented data‐management architecture allows for reusing recorded exercises with real robots and rescue teams for training purposes and teaching search‐and‐rescue personnel how to handle the different robotic tools. The system was validated in two experiments. First, in the controlled environment of a military testing base, a fleet of unmanned ground and aerial vehicles was deployed in an earthquake‐response scenario. The data gathered by the different interoperable robotic systems were combined by a novel mobile data center and presented to the end‐user public. Second, an unmanned aerial system was deployed on an actual mission with an international relief team to help with the relief operations after major flooding in Bosnia in the spring of 2014. Due to the nature of the event (floods), no ground vehicles were deployed here, but all data acquired by the aerial system (mainly three‐dimensional maps) were stored in the ICARUS data center, where they were securely published for authorized personnel all over the world. This mission (which is, to our knowledge, the first recorded deployment of an unmanned aerial system by an official governmental international search‐and‐rescue team in another country) proved also the concept of the procedural integration of the ICARUS data management system into the existing procedural toolchain of the search and rescue workers, and this in an international context (deployment from Belgium to Bosnia). The feedback received from the search‐and‐rescue personnel on both validation exercises was highly positive, proving that the ICARUS data management system can efficiently increase the situational awareness of the search‐and‐rescue personnel.},
    doi = {10.1002/rob.21651},
    publisher = {Wiley},
    project = {ICARUS},
    unit= {meca-ras},
    url = {https://onlinelibrary.wiley.com/doi/abs/10.1002/rob.21651},
    }

2015

  • D. Doroftei, A. Matos, E. Silva, V. Lobo, R. Wagemans, and G. De Cubber, “Operational validation of robots for risky environments," in 8th IARP Workshop on Robotics for Risky Environments, Lisbon, Portugal, 2015.
    [BibTeX] [Abstract] [Download PDF]

    This paper presents an operational test and validation approach for the evaluation of the performance of a range of marine, aerial and ground search and rescue robots. The proposed approach seeks to find a compromise between the traditional rigorous standardized approaches and the open-ended robot competitions. Operational scenarios are defined, including a performance assessment of individual robots but also collective operations where heterogeneous robots cooperate together and with manned teams in search and rescue activities. That way, it is possible to perform a more complete validation of the use of robotic tools in challenging real world scenarios.

    @InProceedings{doroftei2015operational,
    author = {Doroftei, Daniela and Matos, Anibal and Silva, Eduardo and Lobo, Victor and Wagemans, Rene and De Cubber, Geert},
    booktitle = {8th IARP Workshop on Robotics for Risky Environments},
    title = {Operational validation of robots for risky environments},
    year = {2015},
    abstract = {This paper presents an operational test and validation approach for the evaluation of the performance of a range of marine, aerial and ground search and rescue robots. The proposed approach seeks to find a compromise between the traditional rigorous standardized approaches and the open-ended robot competitions. Operational scenarios are defined, including a performance assessment of individual robots but also collective operations where heterogeneous robots cooperate together and with manned teams in search and rescue activities. That way, it is possible to perform a more complete validation of the use of robotic tools in challenging real world scenarios.},
    project = {ICARUS},
    address = {Lisbon, Portugal},
    url = {http://mecatron.rma.ac.be/pub/2015/Operational validation of robots for risky environments.pdf},
    unit= {meca-ras}
    }

  • D. Serrano, P. Chrobocinski, G. De Cubber, D. Moore, G. Leventakis, and S. Govindaraj, “ICARUS and DARIUS approaches towards interoperability," in 8th IARP Workshop on Robotics for Risky Environments, Lisbon, Portugal, 2015.
    [BibTeX] [Abstract] [Download PDF]

    The two FP7 projects ICARUS and DARIUS share a common objective which is to integrate the unmanned platforms in Search and Rescue operations and assess their added value through the development of an integrated system that will be tested in realistic conditions on the field. This paper describes the concept of both projects towards an optimized interoperability level in the three dimensions: organizational, procedural and technical interoperability, describing the system components and illustrating the results of the trials already performed.

    @InProceedings{serrano2015icarus,
    author = {Serrano, Daniel and Chrobocinski, Philippe and De Cubber, Geert and Moore, Dave and Leventakis, Georgios and Govindaraj, Shashank},
    booktitle = {8th IARP Workshop on Robotics for Risky Environments},
    title = {{ICARUS} and {DARIUS} approaches towards interoperability},
    year = {2015},
    abstract = {The two FP7 projects ICARUS and DARIUS share a common objective which is to integrate the unmanned platforms in Search and Rescue operations and assess their added value through the development of an integrated system that will be tested in realistic conditions on the field. This paper describes the concept of both projects towards an optimized interoperability level in the three dimensions: organizational, procedural and technical interoperability, describing the system components and illustrating the results of the trials already performed.},
    project = {ICARUS},
    address = {Lisbon, Portugal},
    url = {http://mecatron.rma.ac.be/pub/2015/RISE - 2015 - ICARUS and DARIUS approach towards interoperability - rev1.3.pdf},
    unit= {meca-ras}
    }

  • H. Balta, G. De Cubber, Y. Baudoin, and D. Doroftei, “UAS deployment and data processing during the Balkans flooding with the support to Mine Action," in 8th IARP Workshop on Robotics for Risky Environments, Lisbon, Portugal, 2015.
    [BibTeX] [Abstract] [Download PDF]

    In this paper, we provide a report on a real relief operation mission, jointly conducted by two European research projects, in response to the massive flooding in the Balkans in spring 2014. An Unmanned Aerial System was deployed on-site in collaboration with traditional relief workers, to support them with damage assessment, area mapping, visual inspection and the re-localization of the many explosive remnants of war which had been moved by the flooding and landslides. The destructive impact of landslides, sediment torrents and floods on the minefields and the changed mine action situation resulted in significant negative environmental and security consequences. Novel robotic technologies and data processing methodologies were brought from the research labs and applied directly in the field in order to support the relief workers and minimize human suffering.

    @InProceedings{balta2015uas,
    author = {Balta, Haris and De Cubber, Geert and Baudoin, Yvan and Doroftei, Daniela},
    booktitle = {8th IARP Workshop on Robotics for Risky Environments},
    title = {{UAS} deployment and data processing during the {Balkans} flooding with the support to Mine Action},
    year = {2015},
    abstract = {In this paper, we provide a report on a real relief operation mission, jointly conducted by two European research projects, in response to the massive flooding in the Balkans in spring 2014. An Unmanned Aerial System was deployed on-site in collaboration with traditional relief workers, to support them with damage assessment, area mapping, visual inspection and the re-localization of the many explosive remnants of war which had been moved by the flooding and landslides. The destructive impact of landslides, sediment torrents and floods on the minefields and the changed mine action situation resulted in significant negative environmental and security consequences. Novel robotic technologies and data processing methodologies were brought from the research labs and applied directly in the field in order to support the relief workers and minimize human suffering.},
    project = {ICARUS},
    address = {Lisbon, Portugal},
    url = {http://mecatron.rma.ac.be/pub/2015/RISE_2015_Haris_Balta_RMA.PDF},
    unit= {meca-ras}
    }

  • G. De Cubber and H. Balta, “Terrain Traversability Analysis using full-scale 3D Processing," in 8th IARP Workshop on Robotics for Risky Environments, Lisbon, Portugal, 2015.
    [BibTeX] [Abstract] [Download PDF]

    Autonomous robotic systems which aspire to navigate through rough unstructured terrain require the capability to reason about the characteristics of their environment. As a first priority, the robotic systems need to assess the degree of traversability of their immediate surroundings to ensure their mobility while navigating through these rough environments. This paper presents a novel terrain-traversability analysis methodology which is based on processing the full 3D model of the terrain, not a projected or downscaled version of this model. The approach is validated through field tests with a time-of-flight camera.

    @InProceedings{de2015terrain,
    author = {De Cubber, Geert and Balta, Haris},
    booktitle = {8th IARP Workshop on Robotics for Risky Environments},
    title = {Terrain Traversability Analysis using full-scale {3D} Processing},
    year = {2015},
    abstract = {Autonomous robotic systems which aspire to navigate through rough unstructured terrain require the capability to reason about the characteristics of their environment. As a first priority, the robotic systems need to assess the degree of traversability of their immediate surroundings to ensure their mobility while navigating through these rough environments. This paper presents a novel terrain-traversability analysis methodology which is based on processing the full 3D model of the terrain, not a projected or downscaled version of this model. The approach is validated through field tests with a time-of-flight camera.},
    project = {ICARUS},
    address = {Lisbon, Portugal},
    url = {http://mecatron.rma.ac.be/pub/2015/Terrain Traversability Analysis.pdf},
    unit= {meca-ras}
    }
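The full-3D traversability idea in the entry above can be illustrated with a small sketch: estimate a local surface normal for each point of the cloud by PCA over its nearest neighbours, then threshold the slope angle against vertical. The function name, neighbourhood size, and slope threshold below are illustrative assumptions, not the paper's calibrated pipeline.

```python
import numpy as np

def traversability(points, k=12, max_slope_deg=25.0):
    """Label each point of a 3D point cloud as traversable when the local
    surface normal (PCA over the k nearest neighbours) stays within
    max_slope_deg of vertical. Illustrative parameters, not the paper's."""
    points = np.asarray(points, dtype=float)
    labels = np.zeros(len(points), dtype=bool)
    for i, p in enumerate(points):
        # k nearest neighbours of p (brute force, fine for a sketch)
        dist = np.linalg.norm(points - p, axis=1)
        nbrs = points[np.argsort(dist)[:k]]
        # covariance of the neighbourhood; eigh returns ascending eigenvalues,
        # so the eigenvector of the smallest eigenvalue is the surface normal
        cov = np.cov((nbrs - nbrs.mean(axis=0)).T)
        _, eigvecs = np.linalg.eigh(cov)
        normal = eigvecs[:, 0]
        slope = np.degrees(np.arccos(min(1.0, abs(normal[2]))))
        labels[i] = slope <= max_slope_deg
    return labels
```

For a flat patch of ground the estimated normals point straight up and every point is labeled traversable; for a near-vertical face the normals are horizontal and the points are rejected.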

  • O. De Meyst, T. Goethals, H. Balta, G. De Cubber, and R. Haelterman, “Autonomous guidance for a UAS along a staircase," in International Symposium on Visual Computing, Las Vegas, USA, 2015, p. 466–475.
    [BibTeX] [Abstract] [Download PDF] [DOI]

    In the quest for fully autonomous unmanned aerial systems (UAS), multiple challenges are faced. For enabling autonomous UAS navigation in indoor environments, one of the major bottlenecks is the capability to autonomously traverse narrow 3D passages, like staircases. This paper presents a novel integrated system that implements a semi-autonomous navigation system for a quadcopter. The navigation system permits the UAS to detect a staircase using only the images provided by an on-board monocular camera. A 3D model of this staircase is then automatically reconstructed, and this model is used to guide the UAS to the top of the detected staircase. For validating the methodology, a proof of concept was created, based on the Parrot AR.Drone 2.0, a low-cost commercial off-the-shelf quadcopter.

    @InProceedings{de2015autonomous,
    author = {De Meyst, Olivier and Goethals, Thijs and Balta, Haris and De Cubber, Geert and Haelterman, Rob},
    booktitle = {International Symposium on Visual Computing},
    title = {Autonomous guidance for a {UAS} along a staircase},
    year = {2015},
    organization = {Springer, Cham},
    pages = {466--475},
    abstract = {In the quest for fully autonomous unmanned aerial systems (UAS), multiple challenges are faced. For enabling autonomous UAS navigation in indoor environments, one of the major bottlenecks is the capability to autonomously traverse narrow 3D passages, like staircases. This paper presents a novel integrated system that implements a semi-autonomous navigation system for a quadcopter. The navigation system permits the UAS to detect a staircase using only the images provided by an on-board monocular camera. A 3D model of this staircase is then automatically reconstructed, and this model is used to guide the UAS to the top of the detected staircase. For validating the methodology, a proof of concept was created, based on the Parrot AR.Drone 2.0, a low-cost commercial off-the-shelf quadcopter.},
    doi = {10.1007/978-3-319-27857-5_42},
    project = {ICARUS},
    address = {Las Vegas, USA},
    unit= {meca-ras},
    url = {https://link.springer.com/chapter/10.1007/978-3-319-27857-5_42},
    }
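A toy illustration of the staircase-detection cue described in the entry above: a staircase hypothesis can be accepted when several horizontal edge segments detected in the image are spaced near-uniformly. The function name, tolerance, and minimum step count are assumptions for illustration only; the paper's actual monocular pipeline is considerably richer.

```python
import numpy as np

def staircase_hypothesis(edge_rows, min_steps=3, rel_tol=0.3):
    """Accept a staircase hypothesis when at least min_steps horizontal
    edge segments (given by their image-row y-coordinates) are spaced
    near-uniformly, i.e. every gap lies within rel_tol of the mean gap.
    Hypothetical acceptance criterion, for illustration only."""
    rows = np.sort(np.asarray(edge_rows, dtype=float))
    if len(rows) < min_steps:
        return False
    gaps = np.diff(rows)        # vertical spacing between consecutive edges
    mean_gap = gaps.mean()
    if mean_gap <= 0:
        return False
    return bool(np.all(np.abs(gaps - mean_gap) <= rel_tol * mean_gap))
```

Evenly spaced edges (e.g. rows 100, 140, 180, 220) pass the uniformity check, while a few scattered edges do not; such a cue could precede the 3D reconstruction step the paper describes.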

  • G. De Cubber, “Search and Rescue Robots," Belgisch Militair Tijdschrift, vol. 10, p. 50–60, 2015.
    [BibTeX] [Abstract] [Download PDF]

    This article provides an overview of the work on search and rescue robotics and more specifically the research performed within the ICARUS research project.

    @Article{de2015search,
    author = {De Cubber, Geert},
    journal = {Belgisch Militair Tijdschrift},
    title = {Search and Rescue Robots},
    year = {2015},
    pages = {50--60},
    volume = {10},
    abstract = {This article provides an overview of the work on search and rescue robotics and more specifically the research performed within the ICARUS research project.},
    publisher = {Defensie},
    project = {ICARUS},
    unit= {meca-ras},
    url = {http://mecatron.rma.ac.be/pub/2015/rmb102.pdf},
    }

2014

  • D. Doroftei, A. Matos, and G. De Cubber, “Designing Search and Rescue Robots towards Realistic User Requirements," in Advanced Concepts on Mechanical Engineering (ACME), Iasi, Romania, 2014.
    [BibTeX] [Abstract] [Download PDF] [DOI]

    In the event of a large crisis (think of typhoon Haiyan or the Tohoku earthquake and tsunami in Japan), a primordial task of the rescue services is the search for human survivors on the incident site. This is a complex and dangerous task, which often leads to loss of lives among the human crisis managers themselves. The introduction of unmanned search and rescue devices can offer a valuable tool to save human lives and to speed up the search and rescue process. In this context, the EU-FP7-ICARUS project [1] concentrates on the development of unmanned search and rescue technologies for detecting, locating and rescuing humans. The complex nature and difficult operating conditions of search and rescue operations pose heavy constraints on the mechanical design of the unmanned platforms. In this paper, we discuss the different user requirements which have an impact on the design of the mechanical systems (air, ground and marine robots). We show how these user requirements are obtained, how they are validated, how they lead to design specifications for operational prototypes which are tested in realistic operational conditions, and we show how the final mechanical design specifications are derived from these different steps. An important aspect of all these design steps which is emphasized in this paper is to always keep the end-users in the loop in order to come to realistic requirements and specifications, ensuring the practical deployability [2] of the developed platforms.

    @InProceedings{doroftei2014designing,
    author = {Doroftei, Daniela and Matos, Anibal and De Cubber, Geert},
    booktitle = {Advanced Concepts on Mechanical Engineering (ACME)},
    title = {Designing Search and Rescue Robots towards Realistic User Requirements},
    year = {2014},
    abstract = {In the event of a large crisis (think of typhoon Haiyan or the Tohoku earthquake and tsunami in Japan), a primordial task of the rescue services is the search for human survivors on the incident site. This is a complex and dangerous task, which often leads to loss of lives among the human crisis managers themselves. The introduction of unmanned search and rescue devices can offer a valuable tool to save human lives and to speed up the search and rescue process. In this context, the EU-FP7-ICARUS project [1] concentrates on the development of unmanned search and rescue technologies for detecting, locating and rescuing humans. The complex nature and difficult operating conditions of search and rescue operations pose heavy constraints on the mechanical design of the unmanned platforms. In this paper, we discuss the different user requirements which have an impact on the design of the mechanical systems (air, ground and marine robots). We show how these user requirements are obtained, how they are validated, how they lead to design specifications for operational prototypes which are tested in realistic operational conditions, and we show how the final mechanical design specifications are derived from these different steps. An important aspect of all these design steps which is emphasized in this paper is to always keep the end-users in the loop in order to come to realistic requirements and specifications, ensuring the practical deployability [2] of the developed platforms.},
    doi = {10.4028/www.scientific.net/amm.658.612},
    project = {ICARUS},
    address = {Iasi, Romania},
    url = {http://mecatron.rma.ac.be/pub/2014/Designing Search and Rescue robots towards realistic user requirements - full article -v3.pdf},
    unit= {meca-ras}
    }

  • G. De Cubber, H. Balta, and C. Lietart, “Teodor: A semi-autonomous search and rescue and demining robot," in Advanced Concepts on Mechanical Engineering (ACME), Iasi, Romania, 2014.
    [BibTeX] [Abstract] [Download PDF] [DOI]

    In this paper, we present a ground robotic system which is developed to deal with rough outdoor conditions. The platform is to be used as an environmental monitoring robot for 2 main application areas: 1) Humanitarian demining: The vehicle is equipped with a specialized multichannel metal detector array. An unmanned aerial system supports it by locating suspected mine locations, which can then be confirmed by the ground vehicle. 2) Search and rescue: The vehicle is equipped with human victim detection sensors and a 3D camera enabling it to assess the traversability of the terrain in front of the robot in order to be able to navigate autonomously. This paper discusses both the mechanical design of these platforms and the autonomous perception capabilities on board these vehicles.

    @InProceedings{de2014teodor,
    author = {De Cubber, Geert and Balta, Haris and Lietart, Claude},
    booktitle = {Advanced Concepts on Mechanical Engineering (ACME)},
    title = {Teodor: A semi-autonomous search and rescue and demining robot},
    year = {2014},
    abstract = {In this paper, we present a ground robotic system which is developed to deal with rough outdoor conditions. The platform is to be used as an environmental monitoring robot for 2 main application areas: 1) Humanitarian demining: The vehicle is equipped with a specialized multichannel metal detector array. An unmanned aerial system supports it by locating suspected mine locations, which can then be confirmed by the ground vehicle. 2) Search and rescue: The vehicle is equipped with human victim detection sensors and a 3D camera enabling it to assess the traversability of the terrain in front of the robot in order to be able to navigate autonomously. This paper discusses both the mechanical design of these platforms and the autonomous perception capabilities on board these vehicles.},
    doi = {10.4028/www.scientific.net/amm.658.599},
    project = {ICARUS},
    address = {Iasi, Romania},
    url = {http://mecatron.rma.ac.be/pub/2014/Teodor - A semi-autonomous search and rescue and demining robot - full article.pdf},
    unit= {meca-ras}
    }

  • G. De Cubber, H. Balta, D. Doroftei, and Y. Baudoin, “UAS deployment and data processing during the Balkans flooding," in 2014 IEEE International Symposium on Safety, Security, and Rescue Robotics (2014), Toyako-cho, Hokkaido, Japan, 2014, p. 1–4.
    [BibTeX] [Abstract] [Download PDF] [DOI]

    This project paper provides a report on a real relief operation mission, jointly conducted by two European research projects, in response to the massive flooding in the Balkans in spring 2014. An Unmanned Aerial System was deployed on-site in collaboration with traditional relief workers, to support them with damage assessment, area mapping, visual inspection and the re-localization of the many explosive remnants of war which had been moved by the flooding and landslides. Novel robotic technologies and data processing methodologies were brought from the research labs and applied directly in the field in order to support the relief workers and minimize human suffering.

    @InProceedings{de2014uas,
    author = {De Cubber, Geert and Balta, Haris and Doroftei, Daniela and Baudoin, Yvan},
    booktitle = {2014 IEEE International Symposium on Safety, Security, and Rescue Robotics (2014)},
    title = {{UAS} deployment and data processing during the Balkans flooding},
    year = {2014},
    organization = {IEEE},
    pages = {1--4},
    abstract = {This project paper provides a report on a real relief operation mission, jointly conducted by two European research projects, in response to the massive flooding in the Balkans in spring 2014. An Unmanned Aerial System was deployed on-site in collaboration with traditional relief workers, to support them with damage assessment, area mapping, visual inspection and the re-localization of the many explosive remnants of war which had been moved by the flooding and landslides. Novel robotic technologies and data processing methodologies were brought from the research labs and applied directly in the field in order to support the relief workers and minimize human suffering.},
    doi = {10.1109/ssrr.2014.7017670},
    project = {ICARUS},
    address = {Toyako-cho, Hokkaido, Japan},
    url = {http://mecatron.rma.ac.be/pub/2014/SSRR2014_proj_037.pdf},
    unit= {meca-ras}
    }

  • M. Pelka, K. Majek, J. Bedkowski, P. Musialik, A. Maslowski, G. De Cubber, H. Balta, A. Coelho, R. Goncalves, R. Baptista, J. M. Sanchez, and S. Govindaraj, “Training and Support system in the Cloud for improving the situational awareness in Search and Rescue (SAR) operations," in 2014 IEEE International Symposium on Safety, Security, and Rescue Robotics (2014), Toyako-cho, Hokkaido, Japan, 2014, p. 1–6.
    [BibTeX] [Abstract] [Download PDF] [DOI]

    In this paper, a Training and Support system for Search and Rescue operations is described. The system is a component of the ICARUS project (http://www.fp7-icarus.eu), which has the goal of developing sensor, robotic and communication technologies for human search and rescue teams. The support system for planning and managing complex SAR operations is implemented as a command and control component that integrates different sources of spatial information, such as maps of the affected area, satellite images and sensor data coming from the unmanned robots, in order to provide a situation snapshot to the rescue team who will make the necessary decisions. Support issues include the planning of frequency resources needed for given areas, prediction of coverage conditions, location of fixed communication relays, etc. The training system is developed for the ICARUS operators controlling UGVs (Unmanned Ground Vehicles), UAVs (Unmanned Aerial Vehicles) and USVs (Unmanned Surface Vehicles) from a unified Remote Control Station (RC2). The Training and Support system is implemented in the SaaS model (Software as a Service). Therefore, its functionality is available over the Ethernet. SAR ICARUS teams from different countries can be trained simultaneously on a shared virtual stage. In this paper, we show the multi-robot 3D mapping component (aerial vehicle and ground vehicles). We demonstrate that these 3D maps can be used for training purposes. Finally, we demonstrate the current approach for ICARUS Urban SAR (USAR) and Marine SAR (MSAR) operation training.

    @InProceedings{pelka2014training,
    author = {Michal Pelka and Karol Majek and Janusz Bedkowski and Pawel Musialik and Andrzej Maslowski and Geert De Cubber and Haris Balta and Antonio Coelho and Ricardo Goncalves and Ricardo Baptista and Jose Manuel Sanchez and Shashank Govindaraj},
    booktitle = {2014 {IEEE} International Symposium on Safety, Security, and Rescue Robotics (2014)},
    title = {Training and Support system in the Cloud for improving the situational awareness in Search and Rescue ({SAR}) operations},
    year = {2014},
    month = oct,
    organization = {IEEE},
    pages = {1--6},
    publisher = {{IEEE}},
    abstract = {In this paper, a Training and Support system for Search and Rescue operations is described. The system is a component of the ICARUS project (http://www.fp7-icarus.eu), which has the goal of developing sensor, robotic and communication technologies for human search and rescue teams. The support system for planning and managing complex SAR operations is implemented as a command and control component that integrates different sources of spatial information, such as maps of the affected area, satellite images and sensor data coming from the unmanned robots, in order to provide a situation snapshot to the rescue team who will make the necessary decisions. Support issues include the planning of frequency resources needed for given areas, prediction of coverage conditions, location of fixed communication relays, etc. The training system is developed for the ICARUS operators controlling UGVs (Unmanned Ground Vehicles), UAVs (Unmanned Aerial Vehicles) and USVs (Unmanned Surface Vehicles) from a unified Remote Control Station (RC2). The Training and Support system is implemented in the SaaS model (Software as a Service). Therefore, its functionality is available over the Ethernet. SAR ICARUS teams from different countries can be trained simultaneously on a shared virtual stage. In this paper, we show the multi-robot 3D mapping component (aerial vehicle and ground vehicles). We demonstrate that these 3D maps can be used for training purposes. Finally, we demonstrate the current approach for ICARUS Urban SAR (USAR) and Marine SAR (MSAR) operation training.},
    doi = {10.1109/ssrr.2014.7017644},
    project = {ICARUS},
    address = {Toyako-cho, Hokkaido, Japan},
    url = {https://ieeexplore.ieee.org/document/7017644?arnumber=7017644&sortType=asc_p_Sequence&filter=AND(p_IS_Number:7017643)=},
    unit= {meca-ras}
    }

  • C. Armbrust, G. De Cubber, and K. Berns, “ICARUS Control Systems for Search and Rescue Robots," Field and Assistive Robotics – Advances in Systems and Algorithms, 2014.
    [BibTeX] [Abstract] [Download PDF]

    This paper describes results of the European project ICARUS in the field of search and rescue robotics. It presents the software architectures of two unmanned ground vehicles (a small and a large one) developed in the context of the project. The architectures of the two vehicles share many similarities. This allows for component reuse and thus reduces the overall development effort. Hence, the main contribution of this paper are design concepts that can serve as a basis for the development of different robot control systems.

    @Article{armbrust2014icarus,
    author = {Armbrust, Christopher and De Cubber, Geert and Berns, Karsten},
    journal = {Field and Assistive Robotics - Advances in Systems and Algorithms},
    title = {{ICARUS} Control Systems for Search and Rescue Robots},
    year = {2014},
    abstract = {This paper describes results of the European project ICARUS in the field of search and rescue robotics. It presents the software architectures of two unmanned ground vehicles (a small and a large one) developed in the context of the project. The architectures of the two vehicles share many similarities. This allows for component reuse and thus reduces the overall development effort. Hence, the main contribution of this paper are design concepts that can serve as a basis for the development of different robot control systems.},
    publisher = {Shaker Verlag},
    project = {ICARUS},
    url = {https://pdfs.semanticscholar.org/713d/8c8561eba9b577f17d3059155e1f3953893a.pdf},
    unit= {meca-ras}
    }

  • G. De Cubber and H. Balta, “ICARUS RPAS AND THEIR OPERATIONAL USE IN Bosnia," in RPAS 2014, Brussels, Belgium, 2014.
    [BibTeX] [Abstract] [Download PDF]

    This is a report on the field mission with an unmanned aircraft system in spring 2014 in Bosnia, to help with flood relief and mine clearing operations.

    @InProceedings{de2014icarus,
    author = {De Cubber, Geert and Balta, Haris},
    booktitle = {RPAS 2014},
    title = {{ICARUS RPAS} AND THEIR OPERATIONAL USE IN {Bosnia}},
    year = {2014},
    organization = {UVS International},
    abstract = {This is a report on the field mission with an unmanned aircraft system in spring 2014 in Bosnia, to help with flood relief and mine clearing operations.},
    project = {ICARUS},
    address = {Brussels, Belgium},
    url = {http://mecatron.rma.ac.be/pub/2014/Icarus Project - RPAS in Bosnia_.pdf},
    unit= {meca-ras}
    }

2013

  • J. Bedkowski, K. Majek, I. Ostrowski, P. Musialik, A. Masłowski, A. Adamek, A. Coelho, and G. De Cubber, “Methodology of Training and Support for Urban Search and Rescue With Robots," in Proc. Ninth International Conference on Autonomic and Autonomous Systems (ICAS), Lisbon, Portugal, 2013, p. 77–82.
    [BibTeX] [Abstract] [Download PDF]

    A primordial task of the fire-fighting and rescue services in the event of a large crisis is the search for human survivors on the incident site. This task, being complex and dangerous, often leads to loss of lives. Unmanned search and rescue devices can provide a valuable tool for saving human lives and speeding up the search and rescue operations. The Urban Search and Rescue (USAR) community agrees that operator skill is the main factor for successfully using unmanned robotic platforms. The key training concept is a “train as you fight" mentality. Intervention troops focus on “real training", as a crisis is difficult to simulate. For this reason, this paper proposes a methodology of training and support for USAR with unmanned vehicles. The methodology integrates the Qualitative Spatio-Temporal Representation and Reasoning (QSTRR) framework with USAR tools to decrease the cognitive load on human operators working with sophisticated robotic platforms. Tools for simplifying and improving virtual training environment generation from live data are shown.

    @InProceedings{bedkowski2013methodology,
    author = {Bedkowski, Janusz and Majek, Karol and Ostrowski, Igor and Musialik, Pawe{\l} and Mas{\l}owski, Andrzej and Adamek, Artur and Coelho, Antonio and De Cubber, Geert},
    booktitle = {Proc. Ninth International Conference on Autonomic and Autonomous Systems (ICAS)},
    title = {Methodology of Training and Support for Urban Search and Rescue With Robots},
    year = {2013},
    address = {Lisbon, Portugal},
    month = mar,
    pages = {77--82},
    abstract = {A primordial task of the fire-fighting and rescue services in the event of a large crisis is the search for human survivors on the incident site. This task, being complex and dangerous, often leads to loss of lives. Unmanned search and rescue devices can provide a valuable tool for saving human lives and speeding up the search and rescue operations. The Urban Search and Rescue (USAR) community agrees that operator skill is the main factor for successfully using unmanned robotic platforms. The key training concept is a "train as you fight" mentality. Intervention troops focus on "real training", as a crisis is difficult to simulate. For this reason, this paper proposes a methodology of training and support for USAR with unmanned vehicles. The methodology integrates the Qualitative Spatio-Temporal Representation and Reasoning (QSTRR) framework with USAR tools to decrease the cognitive load on human operators working with sophisticated robotic platforms. Tools for simplifying and improving virtual training environment generation from live data are shown.},
    project = {ICARUS},
    url = {https://www.thinkmind.org/download.php?articleid=icas_2013_3_40_20054},
    unit= {meca-ras}
    }

  • G. De Cubber, “ICARUS Consortium – Providing Unmanned Search and Rescue Tools," in Remotely Piloted Aircraft Systems – The Global Perspective – Yearbook 2013/2014, Brussels, Belgium: Blyenburgh & co, 2013, vol. 11, p. 133–134.
    [BibTeX]
    @InCollection{de2013icarus,
    author = {De Cubber, Geert},
    booktitle = {Remotely Piloted Aircraft Systems - The Global Perspective - Yearbook 2013/2014},
    publisher = {Blyenburgh \& co},
    title = {{ICARUS} Consortium - Providing Unmanned Search and Rescue Tools},
    year = {2013},
    pages = {133--134},
    address = {Brussels, Belgium},
    project = {ICARUS},
    volume = {11},
    unit= {meca-ras}
    }

  • G. De Cubber and H. Sahli, “Augmented Lagrangian-based approach for dense three-dimensional structure and motion estimation from binocular image sequences," IET Computer Vision, 2013.
    [BibTeX] [Abstract] [Download PDF] [DOI]

    In this study, the authors propose a framework for stereo–motion integration for dense depth estimation. They formulate the stereo–motion depth reconstruction problem into a constrained minimisation one. A sequential unconstrained minimisation technique, namely, the augmented Lagrange multiplier (ALM) method has been implemented to address the resulting constrained optimisation problem. ALM has been chosen because of its relative insensitivity to whether the initial design points for a pseudo-objective function are feasible or not. The development of the method and results from solving the stereo–motion integration problem are presented. Although the authors' work is not the only one adopting the ALM framework in the computer vision context, to their knowledge the presented algorithm is the first to use this mathematical framework in a context of stereo–motion integration. This study describes how the stereo–motion integration problem was cast in a mathematical context and solved using the presented ALM method. Results on benchmark and real visual input data show the validity of the approach.
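    The constrained-minimisation scheme described in the abstract can be illustrated on a toy problem. The sketch below runs a generic augmented Lagrange multiplier loop with a plain gradient-descent inner solver; the function names, step sizes and the two-variable example are illustrative assumptions, not the authors' actual stereo–motion formulation.

    ```python
    import numpy as np

    # Toy illustration of the augmented Lagrange multiplier (ALM) scheme:
    # minimise f(x) subject to c(x) = 0 by repeatedly minimising the
    # augmented Lagrangian
    #   L(x) = f(x) + lam * c(x) + (mu / 2) * c(x)**2,
    # then updating the multiplier lam.

    def alm_minimize(c, grad_L, x0, mu=10.0, outer=20, inner=200, lr=1e-2):
        """Minimal ALM loop: gradient descent on L, then a multiplier update."""
        x, lam = np.asarray(x0, dtype=float), 0.0
        for _ in range(outer):
            for _ in range(inner):          # unconstrained inner minimisation
                x = x - lr * grad_L(x, lam, mu)
            lam += mu * c(x)                # lam <- lam + mu * c(x)
        return x

    # Example: minimise ||x||^2 subject to x[0] + x[1] = 1 (solution [0.5, 0.5]).
    c = lambda x: x[0] + x[1] - 1.0
    grad_L = lambda x, lam, mu: 2.0 * x + (lam + mu * c(x)) * np.ones(2)

    x_star = alm_minimize(c, grad_L, np.zeros(2))
    ```

    The multiplier update drives the constraint violation c(x) towards zero across outer iterations, which is what makes the method tolerant of infeasible starting points.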

    @Article{de2013augmented,
    author = {De Cubber, Geert and Sahli, Hichem},
    journal = {IET Computer Vision},
    title = {Augmented Lagrangian-based approach for dense three-dimensional structure and motion estimation from binocular image sequences},
    year = {2013},
    abstract = {In this study, the authors propose a framework for stereo–motion integration for dense depth estimation. They formulate the stereo–motion depth reconstruction problem into a constrained minimisation one. A sequential unconstrained minimisation technique, namely, the augmented Lagrange multiplier (ALM) method has been implemented to address the resulting constrained optimisation problem. ALM has been chosen because of its relative insensitivity to whether the initial design points for a pseudo-objective function are feasible or not. The development of the method and results from solving the stereo–motion integration problem are presented. Although the authors' work is not the only one adopting the ALM framework in the computer vision context, to their knowledge the presented algorithm is the first to use this mathematical framework in a context of stereo–motion integration. This study describes how the stereo–motion integration problem was cast in a mathematical context and solved using the presented ALM method. Results on benchmark and real visual input data show the validity of the approach.},
    doi = {10.1049/iet-cvi.2013.0017},
    publisher = {IET Digital Library},
    project = {ICARUS,ViewFinder,Mobiniss},
    url = {https://digital-library.theiet.org/content/journals/10.1049/iet-cvi.2013.0017},
    unit= {meca-ras}
    }

  • H. Balta, G. De Cubber, D. Doroftei, Y. Baudoin, and H. Sahli, “Terrain traversability analysis for off-road robots using time-of-flight 3d sensing," in 7th IARP International Workshop on Robotics for Risky Environment-Extreme Robotics, Saint-Petersburg, Russia, 2013.
    [BibTeX] [Abstract] [Download PDF]

    In this paper we present a terrain traversability analysis methodology which classifies all pixels in the TOF image as traversable or not, by estimating for each pixel a traversability score based upon the analysis of the 3D (depth data) and 2D (IR data) content of the TOF camera data. This classification result is then used for the (semi-)autonomous navigation of two robotic systems operating in extreme environments: a search and rescue robot and a humanitarian demining robot. Integrated in an autonomous robot control architecture, terrain traversability classification increases the environmental situational awareness and enables a mobile robot to navigate (semi-)autonomously in an unstructured, dynamic outdoor environment.
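    The per-pixel scoring idea in the abstract can be sketched in a few lines. The snippet below thresholds a local roughness measure computed from a depth image; the function name, threshold value and roughness formula are assumptions for illustration, not the paper's actual classifier.

    ```python
    import numpy as np

    # Illustrative per-pixel traversability scoring from a time-of-flight
    # depth image: a local slope/roughness measure is thresholded into
    # traversable vs. non-traversable pixels.

    def traversability_map(depth, slope_thresh=0.15):
        """Boolean mask: True where local depth variation stays below threshold."""
        depth = np.asarray(depth, dtype=float)
        gy, gx = np.gradient(depth)          # finite-difference depth gradients
        roughness = np.hypot(gx, gy)         # gradient magnitude per pixel
        return roughness < slope_thresh

    # Usage: a flat plane is fully traversable; a step edge is flagged.
    flat = np.full((8, 8), 2.0)
    step = flat.copy()
    step[:, 4:] = 3.0                        # 1 m depth discontinuity
    mask_flat = traversability_map(flat)
    mask_step = traversability_map(step)
    ```

    A real pipeline would additionally fuse the 2D IR intensity channel and smooth the score map, but the threshold-on-local-variation step captures the core of the classification.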

    @InProceedings{balta2013terrain,
    author = {Balta, Haris and De Cubber, Geert and Doroftei, Daniela and Baudoin, Yvan and Sahli, Hichem},
    booktitle = {7th IARP International Workshop on Robotics for Risky Environment-Extreme Robotics},
    title = {Terrain traversability analysis for off-road robots using time-of-flight 3d sensing},
    year = {2013},
    abstract = {In this paper we present a terrain traversability analysis methodology which classifies all pixels in the TOF image as traversable or not, by estimating for each pixel a traversability score based upon the analysis of the 3D (depth data) and 2D (IR data) content of the TOF camera data. This classification result is then used for the (semi-)autonomous navigation of two robotic systems operating in extreme environments: a search and rescue robot and a humanitarian demining robot. Integrated in an autonomous robot control architecture, terrain traversability classification increases the environmental situational awareness and enables a mobile robot to navigate (semi-)autonomously in an unstructured, dynamic outdoor environment.},
    project = {ICARUS},
    address = {Saint-Petersburg, Russia},
    url = {http://mecatron.rma.ac.be/pub/2013/Terrain Traversability Analysis ver 4-HS.pdf},
    unit= {meca-ras}
    }

  • Y. Baudoin and G. De Cubber, “TIRAMISU-ICARUS: FP7-Projects Challenges for Robotics Systems," in 7th IARP Workshop on Robotics for Risky Environment – Extreme Robotics, Saint-Petersburg, Russia, 2013, p. 55–69.
    [BibTeX] [Abstract] [Download PDF]

    TIRAMISU: Clearing large civilian areas from anti-personnel landmines and cluster munitions is a difficult problem because of the large diversity of hazardous areas and explosive contamination. A single solution does not exist and many Mine Action actors have called for a toolbox from which they could choose the tools best fit to a given situation. Some have built their own toolboxes, usually specific to their activities, such as clearance. The TIRAMISU project aims at providing the foundation for a global toolbox that will cover the main Mine Action activities, from the survey of large areas to the actual disposal of explosive hazards, including Mine Risk Education. The toolbox produced by the project will provide Mine Action actors with a large set of tools, grouped into thematic modules, which will help them to better perform their job. These tools will have been designed with the help of end-users and validated by them in mine affected countries. ICARUS: Recent dramatic events such as the earthquakes in Haiti and L’Aquila or the flooding in Pakistan have shown that local civil authorities and emergency services have difficulties with adequately managing crises. The result is that these crises lead to major disruption of the whole local society. The goal of ICARUS is to decrease the total cost (both in human lives and in euro) of a major crisis. In order to realise this goal, the ICARUS project proposes to equip first responders with a comprehensive and integrated set of unmanned search and rescue tools, to increase the situational awareness of human crisis managers and to assist search and rescue teams for dealing with the difficult and dangerous, but life-saving task of finding human survivors. As every crisis is different, it is impossible to provide one solution which fits all needs. Therefore, the ICARUS project will concentrate on developing components or building blocks that can be directly used by the crisis managers when arriving on the field. The ICARUS tools consist of assistive unmanned air, ground and sea vehicles, equipped with human detection sensors. The ICARUS unmanned vehicles are intended as the first explorers of the area, as well as in-situ supporters to act as safeguards to human personnel. The unmanned vehicles collaborate as a coordinated team, communicating via ad hoc cognitive radio networking. To ensure optimal human-robot collaboration, these ICARUS tools are seamlessly integrated into the C4I equipment of the human crisis managers and a set of training and support tools is provided to the human crisis managers to learn to use the ICARUS system.

    @InProceedings{baudoin2013tiramisu,
    author = {Baudoin, Yvan and De Cubber, Geert},
    booktitle = {7th IARP Workshop on Robotics for Risky Environment - Extreme Robotics},
    title = {{TIRAMISU-ICARUS}: {FP7}-Projects Challenges for Robotics Systems},
    year = {2013},
    pages = {55--69},
    address = {Saint-Petersburg, Russia},
    abstract = {TIRAMISU: Clearing large civilian areas from anti-personnel landmines and cluster munitions is a difficult problem because of the large diversity of hazardous areas and explosive contamination. A single solution does not exist and many Mine Action actors have called for a toolbox from which they could choose the tools best fit to a given situation. Some have built their own toolboxes, usually specific to their activities, such as clearance. The TIRAMISU project aims at providing the foundation for a global toolbox that will cover the main Mine Action activities, from the survey of large areas to the actual disposal of explosive hazards, including Mine Risk Education. The toolbox produced by the project will provide Mine Action actors with a large set of tools, grouped into thematic modules, which will help them to better perform their job. These tools will have been designed with the help of end-users and validated by them in mine affected countries.
    ICARUS: Recent dramatic events such as the earthquakes in Haiti and L’Aquila or the flooding in Pakistan have shown that local civil authorities and emergency services have difficulties with adequately managing crises. The result is that these crises lead to major disruption of the whole local society. The goal of ICARUS is to decrease the total cost (both in human lives and in euro) of a major crisis. In order to realise this goal, the ICARUS project proposes to equip first responders with a comprehensive and integrated set of unmanned search and rescue tools, to increase the situational awareness of human crisis managers and to assist search and rescue teams for dealing with the difficult and dangerous, but life-saving task of finding human survivors. As every crisis is different, it is impossible to provide one solution which fits all needs. Therefore, the ICARUS project will concentrate on developing components or building blocks that can be directly used by the crisis managers when arriving on the field. The ICARUS tools consist of assistive unmanned air, ground and sea vehicles, equipped with human detection sensors. The ICARUS unmanned vehicles are intended as the first explorers of the area, as well as in-situ supporters to act as safeguards to human personnel. The unmanned vehicles collaborate as a coordinated team, communicating via ad hoc cognitive radio networking. To ensure optimal human-robot collaboration, these ICARUS tools are seamlessly integrated into the C4I equipment of the human crisis managers and a set of training and support tools is provided to the human crisis managers to learn to use the ICARUS system.},
    project = {ICARUS, TIRAMISU},
    url = {http://mecatron.rma.ac.be/pub/2013/KN Paper YB.pdf},
    unit= {meca-ras}
    }

  • G. De Cubber, D. Doroftei, D. Serrano, K. Chintamani, R. Sabino, and S. Ourevitch, “The EU-ICARUS project: developing assistive robotic tools for search and rescue operations," in 2013 IEEE international symposium on safety, security, and rescue robotics (SSRR), Linkoping, Sweden, 2013, p. 1–4.
    [BibTeX] [Abstract] [Download PDF] [DOI]

    The ICARUS EU-FP7 project deals with the development of a set of integrated components to assist search and rescue teams in dealing with the difficult and dangerous, but lifesaving task of finding human survivors. The ICARUS tools consist of assistive unmanned air, ground and sea vehicles, equipped with victim detection sensors. The unmanned vehicles collaborate as a coordinated team, communicating via ad-hoc cognitive radio networking. To ensure optimal human-robot collaboration, these tools are seamlessly integrated into the C4I (command, control, communications, computers, and intelligence) equipment of the human crisis managers and a set of training and support tools is provided to them to learn to use the ICARUS system.

    @InProceedings{de2013eu,
    author = {De Cubber, Geert and Doroftei, Daniela and Serrano, Daniel and Chintamani, Keshav and Sabino, Rui and Ourevitch, Stephane},
    booktitle = {2013 IEEE international symposium on safety, security, and rescue robotics (SSRR)},
    title = {The {EU-ICARUS} project: developing assistive robotic tools for search and rescue operations},
    year = {2013},
    organization = {IEEE},
    pages = {1--4},
    address = {Linkoping, Sweden},
    abstract = {The ICARUS EU-FP7 project deals with the development of a set of integrated components to assist search and rescue teams in dealing with the difficult and dangerous, but lifesaving task of finding human survivors. The ICARUS tools consist of assistive unmanned air, ground and sea vehicles, equipped with victim detection sensors. The unmanned vehicles collaborate as a coordinated team, communicating via ad-hoc cognitive radio networking. To ensure optimal human-robot collaboration, these tools are seamlessly integrated into the C4I (command, control, communications, computers, and intelligence) equipment of the human crisis managers and a set of training and support tools is provided to them to learn to use the ICARUS system.},
    doi = {10.1109/ssrr.2013.6719323},
    project = {ICARUS},
    url = {http://mecatron.rma.ac.be/pub/2013/SSRR2013_ICARUS.pdf},
    unit= {meca-ras}
    }

  • S. Govindaraj, K. Chintamani, J. Gancet, P. Letier, B. van Lierde, Y. Nevatia, G. D. Cubber, D. Serrano, M. E. Palomares, J. Bedkowski, C. Armbrust, J. Sanchez, A. Coelho, and I. Orbe, “The ICARUS project – Command, Control and Intelligence (C2I)," in 2013 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR), Linkoping, Sweden, 2013, p. 1–4.
    [BibTeX] [Abstract] [Download PDF] [DOI]

    This paper describes the features and concepts behind the Command, Control and Intelligence (C2I) system under development in the ICARUS project, which aims at improving crisis management with the use of unmanned search and rescue robotic appliances embedded and integrated into existing infrastructures. A beneficial C2I system should assist the search and rescue process by enhancing first responder situational awareness, decision making and crisis handling by designing intuitive user interfaces that convey detailed and extensive information about the crisis and its evolution. The different components of C2I, their architectural and functional aspects are described along with the robot platform used for development and field testing.

    @InProceedings{govindaraj2013icarus,
    author = {Shashank Govindaraj and Keshav Chintamani and Jeremi Gancet and Pierre Letier and Boris van Lierde and Yashodhan Nevatia and Geert De Cubber and Daniel Serrano and Miguel Esbri Palomares and Janusz Bedkowski and Christopher Armbrust and Jose Sanchez and Antonio Coelho and Iratxe Orbe},
    booktitle = {2013 {IEEE} International Symposium on Safety, Security, and Rescue Robotics ({SSRR})},
    title = {The {ICARUS} project - Command, Control and Intelligence (C2I)},
    year = {2013},
    month = oct,
    organization = {IEEE},
    address = {Linkoping, Sweden},
    pages = {1--4},
    publisher = {{IEEE}},
    abstract = {This paper describes the features and concepts behind the Command, Control and Intelligence (C2I) system under development in the ICARUS project, which aims at improving crisis management with the use of unmanned search and rescue robotic appliances embedded and integrated into existing infrastructures. A beneficial C2I system should assist the search and rescue process by enhancing first responder situational awareness, decision making and crisis handling by designing intuitive user interfaces that convey detailed and extensive information about the crisis and its evolution. The different components of C2I, their architectural and functional aspects are described along with the robot platform used for development and field testing.},
    doi = {10.1109/ssrr.2013.6719356},
    project = {ICARUS},
    url = {http://mecatron.rma.ac.be/pub/2013/Govindaraj_SSRR_WS_Paper_V2.0.pdf},
    unit= {meca-ras}
    }

  • H. Balta, S. Rossi, S. Iengo, B. Siciliano, A. Finzi, and G. De Cubber, “Adaptive behavior-based control for robot navigation: A multi-robot case study," in 2013 XXIV International Conference on Information, Communication and Automation Technologies (ICAT), Sarajevo, Bosnia and Herzegovina, 2013, p. 1–7.
    [BibTeX] [Abstract] [Download PDF] [DOI]

    The main focus of the work presented in this paper is to investigate the application of certain biologically-inspired control strategies in the field of autonomous mobile robots, with particular emphasis on multi-robot navigation systems. The control architecture used in this work is based on the behavior-based approach. The main argument in favor of this approach is its impressive and rapid practical success. This powerful methodology has demonstrated simplicity, parallelism, perception-action mapping and real implementation. When a group of autonomous mobile robots needs to achieve a goal operating in complex dynamic environments, such a task involves high computational complexity and a large volume of data needed for continuous monitoring of internal states and the external environment. Most autonomous mobile robots have limited capabilities in computation power or energy sources with limited capability, such as batteries. Therefore, it becomes necessary to build additional mechanisms on top of the control architecture able to efficiently allocate resources for enhancing the performance of an autonomous mobile robot. For this purpose, it is necessary to build an adaptive behavior-based control system focused on sensory adaptation. This adaptive property will assure efficient use of robot’s limited sensorial and cognitive resources. The proposed adaptive behavior-based control system is then validated through simulation in a multi-robot environment with a task of prey/predator scenario.
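    The resource-aware arbitration idea in the abstract can be sketched as follows. Each behavior proposes a command with an activation level and an abstract sensing/compute cost, and the arbiter keeps only what fits a resource budget before blending commands; all names, the greedy selection and the weighting scheme are illustrative assumptions, not the paper's actual architecture.

    ```python
    from dataclasses import dataclass

    # Minimal behavior-based arbitration with a resource budget, in the
    # spirit of adaptive behavior-based control for mobile robots.

    @dataclass
    class Behavior:
        name: str
        activation: float   # 0..1 priority of the behavior
        cost: float         # abstract sensing/compute cost
        command: tuple      # (forward speed, turn rate) proposal

    def arbitrate(behaviors, budget):
        """Greedily keep the highest-activation behaviors within the cost
        budget, then blend their commands weighted by activation."""
        chosen, spent = [], 0.0
        for b in sorted(behaviors, key=lambda b: b.activation, reverse=True):
            if spent + b.cost <= budget:
                chosen.append(b)
                spent += b.cost
        total = sum(b.activation for b in chosen) or 1.0
        v = sum(b.activation * b.command[0] for b in chosen) / total
        w = sum(b.activation * b.command[1] for b in chosen) / total
        return v, w

    # Usage: obstacle avoidance dominates, exploration is dropped for cost.
    behs = [Behavior("avoid", 0.9, 1.0, (0.0, 1.0)),
            Behavior("goto", 0.5, 1.0, (1.0, 0.0)),
            Behavior("explore", 0.2, 2.0, (0.5, 0.3))]
    cmd = arbitrate(behs, budget=2.0)
    ```

    Lowering the budget at runtime is one simple way to model the sensory adaptation the abstract argues for: less important behaviors (and their sensor processing) are shed first.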

    @InProceedings{balta2013adaptive,
    author = {Balta, Haris and Rossi, Silvia and Iengo, Salvatore and Siciliano, Bruno and Finzi, Alberto and De Cubber, Geert},
    booktitle = {2013 XXIV International Conference on Information, Communication and Automation Technologies (ICAT)},
    title = {Adaptive behavior-based control for robot navigation: A multi-robot case study},
    year = {2013},
    organization = {IEEE},
    pages = {1--7},
    abstract = {The main focus of the work presented in this paper is to investigate the application of certain biologically-inspired control strategies in the field of autonomous mobile robots, with particular emphasis on multi-robot navigation systems. The control architecture used in this work is based on the behavior-based approach. The main argument in favor of this approach is its impressive and rapid practical success. This powerful methodology has demonstrated simplicity, parallelism, perception-action mapping and real implementation. When a group of autonomous mobile robots needs to achieve a goal operating in complex dynamic environments, such a task involves high computational complexity and a large volume of data needed for continuous monitoring of internal states and the external environment. Most autonomous mobile robots have limited capabilities in computation power or energy sources with limited capability, such as batteries. Therefore, it becomes necessary to build additional mechanisms on top of the control architecture able to efficiently allocate resources for enhancing the performance of an autonomous mobile robot. For this purpose, it is necessary to build an adaptive behavior-based control system focused on sensory adaptation. This adaptive property will assure efficient use of robot's limited sensorial and cognitive resources. The proposed adaptive behavior-based control system is then validated through simulation in a multi-robot environment with a task of prey/predator scenario.},
    doi = {10.1109/icat.2013.6684083},
    address = {Sarajevo, Bosnia and Herzegovina},
    project = {ICARUS},
    url = {https://ieeexplore.ieee.org/document/6684083?tp=&arnumber=6684083},
    unit= {meca-ras}
    }

  • H. Balta, G. De Cubber, and D. Doroftei, “Increasing Situational Awareness through Outdoor Robot Terrain Traversability Analysis based on Time-Of-Flight Camera," in Spring School on Developmental Robotics and Cognitive Bootstrapping, Athens, Greece, 2013, p. 8.
    [BibTeX] [Abstract]

    Poster paper

    @InCollection{balta2013increasing,
    author = {Balta, Haris and De Cubber, Geert and Doroftei, Daniela},
    booktitle = {Spring School on Developmental Robotics and Cognitive Bootstrapping},
    title = {Increasing Situational Awareness through Outdoor Robot Terrain Traversability Analysis based on Time-Of-Flight Camera},
    year = {2013},
    number = {Developmental Robotics and Cognitive Bootstrapping},
    pages = {8},
    abstract = {Poster paper},
    address = {Athens, Greece},
    project = {ICARUS},
    unit= {meca-ras}
    }

  • G. De Cubber, D. Serrano, K. Berns, K. Chintamani, R. Sabino, S. Ourevitch, D. Doroftei, C. Armbrust, T. Flamma, and Y. Baudoin, “Search and rescue robots developed by the European Icarus project," in 7th Int Workshop on Robotics for Risky Environments, Saint-Petersburg, Russia, 2013.
    [BibTeX] [Abstract] [Download PDF]

    This paper discusses the efforts of the European ICARUS project towards the development of unmanned search and rescue (SAR) robots. ICARUS project proposes to equip first responders with a comprehensive and integrated set of remotely operated SAR tools, to increase the situational awareness of human crisis managers. In the event of large crises, a primordial task of the fire and rescue services is the search for human survivors on the incident site, which is a complex and dangerous task. The introduction of remotely operated SAR devices can offer a valuable tool to save human lives and to speed up the SAR process. Therefore, ICARUS concentrates on the development of unmanned SAR technologies for detecting, locating and rescuing humans. The remotely operated SAR devices are foreseen to be the first explorers of the area, along with in-situ supporters to act as safeguards to human personnel. While the ICARUS project also considers the development of marine and aerial robots, this paper will mostly concentrate on the development of the unmanned ground vehicles (UGVs) for SAR. Two main UGV platforms are being developed within the context of the project: a large UGV including a powerful arm for manipulation, which is able to make structural changes in disaster scenarios. The large UGV also serves as a base platform for a small UGV (and possibly also a UAV), which is used for entering small enclosures, while searching for human survivors. In order not to increase the cognitive load of the human crisis managers, the SAR robots will be designed to navigate individually or cooperatively and to follow high-level instructions from the base station, being able to navigate in an autonomous and semi-autonomous manner. The robots connect to the base station and to each other using a wireless self-organizing cognitive network of mobile communication nodes which adapts to the terrain. 
The SAR robots are equipped with sensors that detect the presence of humans and will also be equipped with a wide array of other types of sensors. At the base station, the data is processed and combined with geographical information, thus enhancing the situational awareness of the personnel leading the operation with in-situ processed data that can improve decision-making.

    @InProceedings{de2013search,
    author = {De Cubber, Geert and Serrano, Daniel and Berns, Karsten and Chintamani, Keshav and Sabino, Rui and Ourevitch, Stephane and Doroftei, Daniela and Armbrust, Christopher and Flamma, Tommasso and Baudoin, Yvan},
    booktitle = {7th Int Workshop on Robotics for Risky Environments},
    title = {Search and rescue robots developed by the {European} {Icarus} project},
    year = {2013},
    abstract = {This paper discusses the efforts of the European ICARUS project towards the development of unmanned search and rescue (SAR) robots. ICARUS project proposes to equip first responders with a comprehensive and integrated set of remotely operated SAR tools, to increase the situational awareness of human crisis managers. In the event of large crises, a primordial task of the fire and rescue services is the search for human survivors on the incident site, which is a complex and dangerous task. The introduction of remotely operated SAR devices can offer a valuable tool to save human lives and to speed up the SAR process. Therefore, ICARUS concentrates on the development of unmanned SAR technologies for detecting, locating and rescuing humans. The remotely operated SAR devices are foreseen to be the first explorers of the area, along with in-situ supporters to act as safeguards to human personnel. While the ICARUS project also considers the development of marine and aerial robots, this paper will mostly concentrate on the development of the unmanned ground vehicles (UGVs) for SAR. Two main UGV platforms are being developed within the context of the project: a large UGV including a powerful arm for manipulation, which is able to make structural changes in disaster scenarios. The large UGV also serves as a base platform for a small UGV (and possibly also a UAV), which is used for entering small enclosures, while searching for human survivors. In order not to increase the cognitive load of the human crisis managers, the SAR robots will be designed to navigate individually or cooperatively and to follow high-level instructions from the base station, being able to navigate in an autonomous and semi-autonomous manner. The robots connect to the base station and to each other using a wireless self-organizing cognitive network of mobile communication nodes which adapts to the terrain. 
The SAR robots are equipped with sensors that detect the presence of humans and will also be equipped with a wide array of other types of sensors. At the base station, the data is processed and
    combined with geographical information, thus enhancing the situational awareness of the personnel leading the operation with in-situ processed data that can improve decision-making.},
    project = {ICARUS},
    address = {Saint-Petersburg, Russia},
    url = {http://mecatron.rma.ac.be/pub/2013/Search and Rescue robots developed by the European ICARUS project - Article.pdf},
    unit= {meca-ras}
    }

2012

  • J. Będkowski, A. Masłowski, and G. De Cubber, “Real time 3D localization and mapping for USAR robotic application," Industrial Robot: An International Journal, vol. 39, iss. 5, p. 464–474, 2012.
    [BibTeX] [DOI]
    @Article{bkedkowski2012real,
    author = {B{\k{e}}dkowski, Janusz and Mas{\l}owski, Andrzej and De Cubber, Geert},
    journal = {Industrial Robot: An International Journal},
    title = {Real time {3D} localization and mapping for {USAR} robotic application},
    year = {2012},
    number = {5},
    pages = {464--474},
    volume = {39},
    doi = {10.1108/01439911211249751},
    project = {ICARUS},
    publisher = {Emerald Group Publishing Limited},
    unit= {meca-ras}
    }

  • G. De Cubber and H. Sahli, “Partial differential equation-based dense 3D structure and motion estimation from monocular image sequences," IET computer vision, vol. 6, iss. 3, p. 174–185, 2012.
    [BibTeX] [DOI]
    @Article{de2012partial,
    author = {De Cubber, Geert and Sahli, Hichem},
    journal = {IET computer vision},
    title = {Partial differential equation-based dense {3D} structure and motion estimation from monocular image sequences},
    year = {2012},
    number = {3},
    pages = {174--185},
    volume = {6},
    doi = {10.1049/iet-cvi.2011.0174},
    project = {ViewFinder, Mobiniss},
    publisher = {IET Digital Library},
    unit= {meca-ras,vu-etro}
    }

  • Y. Yvinec, Y. Baudoin, G. De Cubber, M. Armada, L. Marques, J. Desaulniers, and M. Bajic, “TIRAMISU: FP7-Project for an integrated toolbox in Humanitarian Demining," in GICHD Technology Workshop, Geneva, Switzerland, 2012.
    [BibTeX] [Abstract] [Download PDF]

    The TIRAMISU project aims at providing the foundation for a global toolbox that will cover the main mine action activities, from the survey of large areas to the actual disposal of explosive hazards, including mine risk education and training tools. After a short description of some tools, particular emphasis will be given to the two topics proposed by the GICHD Technology Workshop, namely the methodology adopted by the explosion of an ammunition storage and the possible use of UAV (or UGV/UAV) in Technical survey and/or Close-in-Detection

    @InProceedings{yvinec2012tiramisu01,
    author = {Yvinec, Yann and Baudoin, Yvan and De Cubber, Geert and Armada, Manuel and Marques, Lino and Desaulniers, Jean-Marc and Bajic, Milan},
    booktitle = {GICHD Technology Workshop},
    title = {{TIRAMISU}: {FP7}-Project for an integrated toolbox in Humanitarian Demining},
    year = {2012},
    abstract = {The TIRAMISU project aims at providing the foundation for a global toolbox that will cover the main mine action activities, from the survey of large areas to the actual disposal of explosive hazards, including mine risk education and training tools. After a short description of some tools, particular emphasis will be given to the two topics proposed by the GICHD Technology Workshop, namely the methodology adopted by the explosion of an ammunition storage and the possible use of UAV (or
    UGV/UAV) in Technical survey and/or Close-in-Detection},
    project = {TIRAMISU},
    address = {Geneva, Switzerland},
    url = {http://mecatron.rma.ac.be/pub/2012/TIRAMISU-TWS-GICHD.pdf},
    unit= {meca-ras,ciss}
    }

  • Y. Yvinec, Y. Baudoin, G. De Cubber, M. Armada, L. Marques, J. Desaulniers, M. Bajic, E. Cepolina, and M. Zoppi, “TIRAMISU: FP7-Project for an integrated toolbox in Humanitarian Demining, focus on UGV, UAV and technical survey," in 6th IARP Workshop on Risky Interventions and Environmental Surveillance (RISE), Warsaw, Poland, 2012.
    [BibTeX] [Abstract] [Download PDF]

    The TIRAMISU project aims at providing the foundation for a global toolbox that will cover the main mine action activities, from the survey of large areas to the actual disposal of explosive hazards, including mine risk education and training tools. After a short description of some tools, particular emphasis will be given to the two topics proposed by the GICHD Technology Workshop, namely the methodology adopted by the explosion of an ammunition storage and the possible use of UAV (or UGV/UAV) in Technical survey and/or Close-in-Detection

    @InProceedings{yvinec2012tiramisu02,
    author = {Yvinec, Yann and Baudoin, Yvan and De Cubber, Geert and Armada, Manuel and Marques, Lino and Desaulniers, Jean-Marc and Bajic, Milan and Cepolina, Emanuela and Zoppi, Marco},
    booktitle = {6th IARP Workshop on Risky Interventions and Environmental Surveillance (RISE)},
    title = {{TIRAMISU}: {FP7}-Project for an integrated toolbox in Humanitarian Demining, focus on UGV, UAV and technical survey},
    year = {2012},
    abstract = {The TIRAMISU project aims at providing the foundation for a global toolbox that will cover the main mine action activities, from the survey of large areas to the actual disposal of explosive hazards, including mine risk education and training tools. After a short description of some tools, particular emphasis will be given to the two topics proposed by the GICHD Technology Workshop, namely the methodology adopted by the explosion of an ammunition storage and the possible use of UAV (or UGV/UAV) in Technical survey and/or Close-in-Detection},
    address = {Warsaw, Poland},
    project = {TIRAMISU},
    url = {http://mecatron.rma.ac.be/pub/2012/RISE-TIRAMISU.pdf},
    unit= {meca-ras,ciss}
    }

  • G. De Cubber, D. Doroftei, Y. Baudoin, D. Serrano, K. Chintamani, R. Sabino, and S. Ourevitch, “ICARUS : Providing Unmanned Search and Rescue Tools," in 6th IARP Workshop on Risky Interventions and Environmental Surveillance (RISE), Warsaw, Poland, 2012.
    [BibTeX] [Abstract] [Download PDF]

    The ICARUS EU-FP7 project deals with the development of a set of integrated components to assist search and rescue teams in dealing with the difficult and dangerous, but life-saving task of finding human survivors. The ICARUS tools consist of assistive unmanned air, ground and sea vehicles, equipped with victim detection sensors. The unmanned vehicles collaborate as a coordinated team, communicating via ad hoc cognitive radio networking. To ensure optimal human-robot collaboration, these tools are seamlessly integrated into the C4I equipment of the human crisis managers and a set of training and support tools is provided to them to learn to use the ICARUS system.

    @InProceedings{de2012icarus01,
    author = {De Cubber, Geert and Doroftei, Daniela and Baudoin, Yvan and Serrano, Daniel and Chintamani, Keshav and Sabino, Rui and Ourevitch, Stephane},
    booktitle = {6th IARP Workshop on Risky Interventions and Environmental Surveillance (RISE)},
    title = {{ICARUS} : Providing Unmanned Search and Rescue Tools},
    year = {2012},
    abstract = {The ICARUS EU-FP7 project deals with the development of a set of integrated components to assist search and rescue teams in dealing with the difficult and dangerous, but life-saving task of finding human survivors. The ICARUS tools consist of assistive unmanned air, ground and sea vehicles, equipped with victim detection sensors. The unmanned vehicles collaborate as a coordinated team, communicating via ad hoc cognitive radio networking. To ensure optimal human-robot collaboration, these tools are seamlessly integrated into the C4I equipment of the human crisis managers and a set of training and support tools is provided to them to learn to use the ICARUS system.},
    project = {ICARUS},
    address = {Warsaw, Poland},
    url = {http://mecatron.rma.ac.be/pub/2012/RISE2012_ICARUS.pdf},
    unit= {meca-ras}
    }

  • D. Doroftei, G. De Cubber, and K. Chintamani, “Towards collaborative human and robotic rescue workers," in 5th International Workshop on Human-Friendly Robotics (HFR2012), Brussels, Belgium, 2012, p. 18–19.
    [BibTeX] [Abstract] [Download PDF]

    This paper discusses some of the main remaining bottlenecks towards the successful introduction of robotic search and rescue (SAR) tools, collaborating with human rescue workers. It also sketches some of the recent advances which are being made in the context of the European ICARUS project to remove these bottlenecks.

    @InProceedings{doroftei2012towards,
    author = {Doroftei, Daniela and De Cubber, Geert and Chintamani, Keshav},
    booktitle = {5th International Workshop on Human-Friendly Robotics (HFR2012)},
    title = {Towards collaborative human and robotic rescue workers},
    year = {2012},
    pages = {18--19},
    abstract = {This paper discusses some of the main remaining bottlenecks towards the successful introduction of robotic search and rescue (SAR) tools, collaborating with human rescue workers. It also sketches some of the recent advances which are being made in the context of the European ICARUS project to remove these bottlenecks.},
    project = {ICARUS},
    address = {Brussels, Belgium},
    url = {http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.303.6697&rep=rep1&type=pdf},
    unit= {meca-ras}
    }

  • A. Conduraru, I. Conduraru, E. Puscalau, G. De Cubber, D. Doroftei, and H. Balta, “Development of an autonomous rough-terrain robot," in IROS2012 Workshop on Robots and Sensors integration in future rescue INformation system (ROSIN’12), Villamoura, Portugal, 2012.
    [BibTeX] [Abstract] [Download PDF]

    In this paper, we discuss the development process of a mobile robot intended for environmental observation applications. The paper describes how a standard tele-operated Explosive Ordnance Disposal (EOD) robot was upgraded with electronics, sensors, computing power and autonomous capabilities, such that it becomes able to execute semi-autonomous missions, e.g. for search & rescue or humanitarian demining tasks. The aim of this paper is not to discuss the details of the navigation algorithms (as these are often task-dependent), but more to concentrate on the development of the platform and its control architecture as a whole.

    @InProceedings{conduraru2012development,
    author = {Conduraru, Alina and Conduraru, Ionel and Puscalau, Emanuel and De Cubber, Geert and Doroftei, Daniela and Balta, Haris},
    booktitle = {IROS2012 Workshop on Robots and Sensors integration in future rescue INformation system (ROSIN'12)},
    title = {Development of an autonomous rough-terrain robot},
    year = {2012},
    abstract = {In this paper, we discuss the development process of a mobile robot intended for environmental observation applications. The paper describes how a standard tele-operated Explosive Ordnance Disposal (EOD) robot was upgraded with electronics, sensors, computing power and autonomous capabilities, such that it becomes able to execute semi-autonomous missions, e.g. for search & rescue or humanitarian demining tasks. The aim of this paper is not to discuss the details of the navigation algorithms (as these are often task-dependent), but more to concentrate on the development of the platform and its control architecture as a whole.},
    project = {ICARUS},
    address = {Villamoura, Portugal},
    url = {https://pdfs.semanticscholar.org/884e/6a80c8768044a1fd68ee91f45f17e5125153.pdf},
    unit= {meca-ras}
    }

  • G. De Cubber, D. Doroftei, Y. Baudoin, D. Serrano, K. Chintamani, R. Sabino, and S. Ourevitch, “Operational RPAS scenarios envisaged for search & rescue by the EU FP7 ICARUS project," in Remotely Piloted Aircraft Systems for Civil Operations (RPAS2012), Brussels, Belgium, 2012.
    [BibTeX] [Abstract] [Download PDF]

    The ICARUS EU-FP7 project deals with the development of a set of integrated components to assist search and rescue teams in dealing with the difficult and dangerous, but life-saving task of finding human survivors. The ICARUS tools consist of assistive unmanned air, ground and sea vehicles, equipped with victim detection sensors. The unmanned vehicles collaborate as a coordinated team, communicating via ad hoc cognitive radio networking. To ensure optimal human-robot collaboration, these tools are seamlessly integrated into the C4I equipment of the human crisis managers and a set of training and support tools is provided to them to learn to use the ICARUS system.

    @InProceedings{de2012operational,
    author = {De Cubber, Geert and Doroftei, Daniela and Baudoin, Yvan and Serrano, Daniel and Chintamani, Keshav and Sabino, Rui and Ourevitch, Stephane},
    booktitle = {Remotely Piloted Aircraft Systems for Civil Operations (RPAS2012)},
    title = {Operational {RPAS} scenarios envisaged for search \& rescue by the {EU FP7 ICARUS} project},
    year = {2012},
    abstract = {The ICARUS EU-FP7 project deals with the development of a set of integrated components to assist search and rescue teams in dealing with the difficult and dangerous, but life-saving task of finding human survivors. The ICARUS tools consist of assistive unmanned air, ground and sea vehicles, equipped with victim detection sensors. The unmanned vehicles collaborate as a coordinated team, communicating via ad hoc cognitive radio networking. To ensure optimal human-robot collaboration, these tools are seamlessly integrated into the C4I equipment of the human crisis managers and a set of training and support tools is provided to them to learn to use the ICARUS system.},
    project = {ICARUS},
    address = {Brussels, Belgium},
    url = {http://mecatron.rma.ac.be/pub/2012/De-Cubber-Geert_RMA_Belgium_WP.pdf},
    unit= {meca-ras}
    }

  • J. Będkowski, G. De Cubber, and A. Masłowski, “6D SLAM with GPGPU computation," Pomiary Automatyka Robotyka, vol. 16, iss. 2, p. 275–280, 2012.
    [BibTeX] [Abstract] [Download PDF]

    The main goal was to improve a state-of-the-art 6D SLAM algorithm with a new GPGPU-based implementation of the data registration module. Data registration is based on the ICP (Iterative Closest Point) algorithm, which is fully implemented on the GPU with the NVIDIA FERMI architecture. In our research we focus on mobile robot inspection and intervention systems applicable in hazardous environments. The goal is to deliver a complete system capable of being used in real life. In this paper we demonstrate our achievements in the field of on-line robot localization and mapping. We demonstrated an experiment in a real, large environment. We compared two strategies of data alignment – simple ICP and ICP using the so-called meta scan.

    @Article{bkedkowski20126d,
    author = {B{\k{e}}dkowski, Janusz and De Cubber, Geert and Mas{\l}owski, Andrzej},
    journal = {Pomiary Automatyka Robotyka},
    title = {{6D SLAM} with {GPGPU} computation},
    year = {2012},
    number = {2},
    pages = {275--280},
    volume = {16},
    project = {ICARUS},
    abstract = {The main goal was to improve a state-of-the-art 6D SLAM algorithm with a new GPGPU-based implementation of the data registration module. Data registration is based on the ICP (Iterative Closest Point) algorithm, which is fully implemented on the GPU with the NVIDIA FERMI architecture. In our research we focus on mobile robot inspection and intervention systems applicable in hazardous environments. The goal is to deliver a complete system capable of being used in real life. In this paper we demonstrate our achievements in the field of on-line robot localization and mapping. We demonstrated an experiment in a real, large environment. We compared two strategies of data alignment - simple ICP and ICP using the so-called meta scan.},
    url = {http://www.par.pl/en/content/download/14036/170476/file/275_280.pdf},
    unit= {meca-ras}
    }
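For readers unfamiliar with the data-registration step this article discusses, the core of point-to-point ICP can be sketched in a few lines of NumPy. This is a plain CPU illustration of the general algorithm, with function names and parameters of my own choosing; it is not the paper's GPGPU/FERMI implementation.

```python
import numpy as np

def best_fit_transform(A, B):
    """Least-squares rigid transform (R, t) mapping point set A onto B (Kabsch/SVD)."""
    cA, cB = A.mean(axis=0), B.mean(axis=0)
    H = (A - cA).T @ (B - cB)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cB - R @ cA
    return R, t

def icp(src, dst, iters=30):
    """Align src to dst by iterating nearest-neighbour matching and re-fitting."""
    P = src.copy()
    for _ in range(iters):
        # Brute-force nearest neighbours; a k-d tree (or the GPU) replaces this at scale.
        dists = np.linalg.norm(P[:, None, :] - dst[None, :, :], axis=2)
        matches = dst[dists.argmin(axis=1)]
        R, t = best_fit_transform(P, matches)
        P = P @ R.T + t
    return P
```

A "meta scan", as compared in the paper, accumulates already-registered scans into `dst`, so each new scan is matched against the merged model rather than against the previous scan alone.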

  • G. De Cubber, D. Doroftei, Y. Baudoin, D. Serrano, K. Chintamani, R. Sabino, and S. Ourevitch, “ICARUS: AN EU-FP7 PROJECT PROVIDING UNMANNED SEARCH AND RESCUE TOOLS," in IROS2012 Workshop on Robots and Sensors integration in future rescue INformation system (ROSIN’12), Villamoura, Portugal, 2012.
    [BibTeX] [Abstract] [Download PDF]

    Overview of the objectives of the ICARUS project

    @InProceedings{de2012icarus02,
    author = {De Cubber, Geert and Doroftei, Daniela and Baudoin, Yvan and Serrano, Daniel and Chintamani, Keshav and Sabino, Rui and Ourevitch, Stephane},
    booktitle = {IROS2012 Workshop on Robots and Sensors integration in future rescue INformation system (ROSIN'12)},
    title = {{ICARUS}: AN {EU-FP7} PROJECT PROVIDING UNMANNED SEARCH AND RESCUE TOOLS},
    year = {2012},
    abstract = {Overview of the objectives of the ICARUS project},
    project = {ICARUS},
    address = {Villamoura, Portugal},
    url = {http://mecatron.rma.ac.be/pub/2012/Icarus - ROSIN2012 Presentation.pdf},
    unit= {meca-ras}
    }

2011

  • G. De Cubber, D. Doroftei, H. Sahli, and Y. Baudoin, “Outdoor Terrain Traversability Analysis for Robot Navigation using a Time-Of-Flight Camera," in RGB-D Workshop on 3D Perception in Robotics, Vasteras, Sweden, 2011.
    [BibTeX] [Abstract] [Download PDF]

    Autonomous robotic systems operating in unstructured outdoor environments need to estimate the traversability of the terrain in order to navigate safely. Traversability estimation is a challenging problem, as the traversability is a complex function of both the terrain characteristics, such as slopes, vegetation, rocks, etc., and the robot mobility characteristics, i.e. locomotion method, wheels, etc. It is thus required to analyze in real-time the 3D characteristics of the terrain and pair this data to the robot capabilities. In this paper, a method is introduced to estimate the traversability using data from a time-of-flight camera.

    @InProceedings{de2011outdoor,
    author = {De Cubber, Geert and Doroftei, Daniela and Sahli, Hichem and Baudoin, Yvan},
    booktitle = {RGB-D Workshop on 3D Perception in Robotics},
    title = {Outdoor Terrain Traversability Analysis for Robot Navigation using a Time-Of-Flight Camera},
    year = {2011},
    abstract = {Autonomous robotic systems operating in unstructured outdoor environments need to estimate the traversability of the terrain in order to navigate safely. Traversability estimation is a challenging problem, as the traversability is a complex function of both the terrain characteristics, such as slopes, vegetation, rocks, etc., and the robot mobility characteristics, i.e. locomotion method, wheels, etc. It is thus required to analyze in real-time the 3D characteristics of the terrain and pair this data to the robot capabilities. In this paper, a method is introduced to estimate the traversability using data from a time-of-flight camera.},
    project = {ViewFinder, Mobiniss},
    address = {Vasteras, Sweden},
    url = {http://mecatron.rma.ac.be/pub/2011/TTA_TOF.pdf},
    unit= {meca-ras}
    }
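As a rough illustration of the idea in this abstract (pairing terrain geometry with the robot's mobility limits), a terrain height grid can be classified by thresholding the local slope and step height against robot capabilities. The grid layout, thresholds and function name below are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def traversable(height, cell=0.1, max_slope_deg=20.0, max_step=0.15):
    """Classify each cell of a terrain height grid (metres) as traversable.

    cell: grid resolution in metres; max_slope_deg / max_step are illustrative
    robot mobility limits (climbable slope, surmountable step height).
    """
    gy, gx = np.gradient(height, cell)              # local terrain gradient
    slope = np.degrees(np.arctan(np.hypot(gx, gy)))
    # Largest height jump to any 8-neighbour, to catch sharp steps.
    pad = np.pad(height, 1, mode='edge')
    H, W = height.shape
    step = np.zeros_like(height)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            nbr = pad[1 + dy:1 + dy + H, 1 + dx:1 + dx + W]
            step = np.maximum(step, np.abs(nbr - height))
    return (slope <= max_slope_deg) & (step <= max_step)
```

In practice the depth image of the time-of-flight camera would first be projected into such a robot-centred height grid before classification.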

  • G. De Cubber and D. Doroftei, “Multimodal terrain analysis for an all-terrain crisis Management Robot," in IARP HUDEM 2011, Sibenik, Croatia, 2011.
    [BibTeX] [Abstract] [Download PDF]

    In this paper, a novel stereo-based terrain-traversability estimation methodology is proposed. The novelty is that, contrary to classic depth-based terrain classification algorithms, all the information of the stereo camera system is used, including the color information. Using this approach, depth and color information are fused in order to obtain a higher classification accuracy than is possible with uni-modal techniques.

    @InProceedings{de2011multimodal,
    author = {De Cubber, Geert and Doroftei, Daniela},
    booktitle = {IARP HUDEM 2011},
    title = {Multimodal terrain analysis for an all-terrain crisis Management Robot},
    year = {2011},
    abstract = {In this paper, a novel stereo-based terrain-traversability estimation methodology is proposed. The novelty is that, contrary to classic depth-based terrain classification algorithms, all the information of the stereo camera system is used, including the color information. Using this approach, depth and color information are fused in order to obtain a higher classification accuracy than is possible with uni-modal techniques.},
    project = {Mobiniss},
    address = {Sibenik, Croatia},
    url = {http://mecatron.rma.ac.be/pub/2011/Multimodal terrain analysis for an all-terrain crisis management robot .pdf},
    unit= {meca-ras}
    }

  • G. De Cubber, D. Doroftei, K. Verbiest, and S. A. Berrabah, “Autonomous camp surveillance with the ROBUDEM robot: challenges and results," in IARP Workshop RISE’2011, Belgium, 2011.
    [BibTeX] [Abstract] [Download PDF]

    Autonomous robotic systems can help for risky interventions to reduce the risk to human lives. An example of such a risky intervention is a camp surveillance scenario, where an environment needs to be patrolled and intruders need to be detected and intercepted. This paper describes the development of a mobile outdoor robot which is capable of performing such a camp surveillance task. The key research issues tackled are the robot design, geo-referenced localization and path planning, traversability estimation, the optimization of the terrain coverage strategy and the development of an intuitive human-robot interface.

    @InProceedings{de2011autonomous,
    author = {De Cubber, Geert and Doroftei, Daniela and Verbiest, Kristel and Berrabah, Sid Ahmed},
    booktitle = {IARP Workshop RISE’2011},
    title = {Autonomous camp surveillance with the {ROBUDEM} robot: challenges and results},
    year = {2011},
    abstract = {Autonomous robotic systems can help for risky interventions to reduce the risk to human lives. An example of such a risky intervention is a camp surveillance scenario, where an environment needs to be patrolled and intruders need to be detected and intercepted. This paper describes the development of a mobile outdoor robot which is capable of performing such a camp surveillance task. The key research issues tackled are the robot design, geo-referenced localization and path planning, traversability estimation, the optimization of the terrain coverage strategy and the development of an intuitive human-robot interface.},
    project = {Mobiniss},
    address = {Belgium},
    url = {http://mecatron.rma.ac.be/pub/2011/ELROB-RISE.pdf},
    unit= {meca-ras}
    }

  • G. De Cubber and D. Doroftei, “Using Robots in Hazardous Environments: Landmine Detection, de-Mining and Other Applications," in Using Robots in Hazardous Environments: Landmine Detection, De-Mining and Other Applications, Y. Baudoin and M. Habib, Eds., Woodhead Publishing, 2011, vol. 1, p. 476–498.
    [BibTeX] [Abstract] [Download PDF]

    This chapter presents three main aspects of the development of a crisis management robot. First, we present an approach for robust victim detection in difficult outdoor conditions. Second, we present an approach where a classification of the terrain in the classes traversable and obstacle is performed using only stereo vision as input data. Lastly, we present behavior-based control architecture, enabling a robot to search for human victims on an incident site, while navigating semi-autonomously, using stereo vision as the main source of sensor information.

    @InBook{de2010human,
    author = {De Cubber, Geert and Doroftei, Daniela},
    editor = {Baudoin, Yvan and Habib, Maki},
    chapter = {Chapter 20},
    pages = {476--498},
    publisher = {Woodhead Publishing},
    title = {Using Robots in Hazardous Environments: Landmine Detection, de-Mining and Other Applications},
    year = {2011},
    isbn = {1845697863},
    volume = {1},
    abstract = {This chapter presents three main aspects of the development of a crisis management robot. First, we present an approach for robust victim detection in difficult outdoor conditions. Second, we present an approach where a classification of the terrain in the classes traversable and obstacle is performed using only stereo vision as input data. Lastly, we present behavior-based control architecture, enabling a robot to search for human victims on an incident site, while navigating semi-autonomously, using stereo vision as the main source of sensor information.},
    booktitle = {Using Robots in Hazardous Environments: Landmine Detection, De-Mining and Other Applications},
    date = {2011-01-11},
    ean = {9781845697860},
    pagetotal = {665},
    project = {Mobiniss},
    url = {http://mecatron.rma.ac.be/pub/2009/Handbook Chapter 4 - Human Victim Detection and Stereo-based Terrain Traversability Analysis for Behavior-Based Robot Navigation.pdf},
    unit= {meca-ras}
    }

2010

  • G. De Cubber, S. A. Berrabah, D. Doroftei, Y. Baudoin, and H. Sahli, “Combining Dense Structure from Motion and Visual SLAM in a Behavior-Based Robot Control Architecture," International Journal of Advanced Robotic Systems, vol. 7, iss. 1, 2010.
    [BibTeX] [Abstract] [Download PDF] [DOI]

    In this paper, we present a control architecture for an intelligent outdoor mobile robot. This enables the robot to navigate in a complex, natural outdoor environment, relying on only a single on-board camera as sensory input. This is achieved through a twofold analysis of the visual data stream: a dense structure from motion algorithm calculates a depth map of the environment and a visual simultaneous localization and mapping algorithm builds a map of the surroundings using image features. This information enables a behavior-based robot motion and path planner to navigate the robot through the environment. In this paper, we show the theoretical aspects of setting up this architecture.

    @Article{de2010combining,
    author = {De Cubber, Geert and Berrabah, Sid Ahmed and Doroftei, Daniela and Baudoin, Yvan and Sahli, Hichem},
    journal = {International Journal of Advanced Robotic Systems},
    title = {Combining Dense Structure from Motion and Visual {SLAM} in a Behavior-Based Robot Control Architecture},
    year = {2010},
    month = mar,
    number = {1},
    volume = {7},
    abstract = {In this paper, we present a control architecture for an intelligent outdoor mobile robot. This enables the robot to navigate in a complex, natural outdoor environment, relying on only a single on-board camera as sensory input. This is achieved through a twofold analysis of the visual data stream: a dense structure from motion algorithm calculates a depth map of the environment and a visual simultaneous localization and mapping algorithm builds a map of the surroundings using image features. This information enables a behavior-based robot motion and path planner to navigate the robot through the environment. In this paper, we show the theoretical aspects of setting up this architecture.},
    doi = {10.5772/7240},
    publisher = {{SAGE} Publications},
    project = {ViewFinder, Mobiniss},
    url = {http://mecatron.rma.ac.be/pub/2010/e_from_motion_and_visual_slam_in_a_behavior-based_robot_control_architecture.pdf},
    unit= {meca-ras,vub-etro}
    }

  • Y. Baudoin, D. Doroftei, G. De Cubber, S. A. Berrabah, E. Colon, C. Pinzon, A. Maslowski, J. Bedkowski, and J. Penders, “VIEW-FINDER: Robotics Assistance to fire-Fighting services," in Mobile Robotics: Solutions and Challenges, 2010, p. 397–406.
    [BibTeX] [Abstract] [Download PDF]

    This paper presents an overview of the View-Finder project.

    @InCollection{baudoin2010view,
    author = {Baudoin, Yvan and Doroftei, Daniela and De Cubber, Geert and Berrabah, Sid Ahmed and Colon, Eric and Pinzon, Carlos and Maslowski, Andrzej and Bedkowski, Janusz and Penders, Jacques},
    booktitle = {Mobile Robotics: Solutions and Challenges},
    title = {{VIEW-FINDER}: Robotics Assistance to fire-Fighting services},
    year = {2010},
    pages = {397--406},
    abstract = {This paper presents an overview of the View-Finder project.},
    project = {ViewFinder},
    unit= {meca-ras},
    url = {https://books.google.be/books?id=zcfFCgAAQBAJ&pg=PA397&lpg=PA397&dq=VIEW-FINDER: Robotics Assistance to fire-Fighting services mobile robots&source=bl&ots=Jh6P63OKCr&sig=O1GPy_c42NPSEdO8Hb_pa9V6K7g&hl=en&sa=X&ved=2ahUKEwiLr76B-5zfAhUMCewKHQS_Af0Q6AEwDXoECAEQAQ#v=onepage&q=VIEW-FINDER: Robotics Assistance to fire-Fighting services mobile robots&f=false},
    }

  • G. De Cubber, “On-line and Off-line 3D Reconstruction for Crisis Management Applications," in Fourth International Workshop on Robotics for risky interventions and Environmental Surveillance-Maintenance, RISE’2010, Sheffield, UK, 2010.
    [BibTeX] [Abstract] [Download PDF]

    We present in this paper a 3D reconstruction methodology. This approach fuses dense stereo and sparse motion data to estimate high-quality instantaneous depth maps. This methodology achieves near real-time processing frame rates, such that it can be directly used on-line by the crisis management teams.

    @InProceedings{de2010line,
    author = {De Cubber, Geert},
    booktitle = {Fourth International Workshop on Robotics for risky interventions and Environmental Surveillance-Maintenance, RISE’2010},
    title = {On-line and Off-line {3D} Reconstruction for Crisis Management Applications},
    year = {2010},
    abstract = {We present in this paper a 3D reconstruction methodology. This approach fuses dense stereo and sparse motion data to estimate high-quality instantaneous depth maps. This methodology achieves near real-time processing frame rates, such that it can be directly used on-line by the crisis management teams.},
    project = {ViewFinder, Mobiniss},
    address = {Sheffield, UK},
    url = {http://mecatron.rma.ac.be/pub/RISE/RISE - 2010/On-line and Off-line 3D Reconstruction_Geert_De_Cubber.pdf},
    unit= {meca-ras}
    }

  • Y. Baudoin, G. De Cubber, E. Colon, D. Doroftei, and S. A. Berrabah, “Robotics Assistance by Risky Interventions: Needs and Realistic Solutions," in Workshop on Robotics for Extreme conditions, Saint-Petersburg, Russia, 2010.
    [BibTeX] [Abstract] [Download PDF]

    This paper discusses the requirements towards robotics systems in the domains of firefighting, CBRN-E and humanitarian demining.

    @InProceedings{baudoin2010robotics,
    author = {Baudoin, Yvan and De Cubber, Geert and Colon, Eric and Doroftei, Daniela and Berrabah, Sid Ahmed},
    booktitle = {Workshop on Robotics for Extreme conditions},
    title = {Robotics Assistance by Risky Interventions: Needs and Realistic Solutions},
    year = {2010},
    abstract = {This paper discusses the requirements towards robotics systems in the domains of firefighting, CBRN-E and humanitarian demining.},
    project = {ViewFinder, Mobiniss},
    address = {Saint-Petersburg, Russia},
    url = {http://mecatron.rma.ac.be/pub/2010/Robotics Assistance by risky interventions.pdf},
    unit= {meca-ras}
    }

  • G. De Cubber, “Variational methods for dense depth reconstruction from monocular and binocular video sequences," PhD Thesis, 2010.
    [BibTeX] [Abstract] [Download PDF]

    This research work tackles the problem of dense three-dimensional reconstruction from monocular and binocular image sequences. Recovering 3D-information has been in the focus of attention of the computer vision community for a few decades now, yet no all-satisfying method has been found so far. The main problem with vision is that the perceived computer image is a two-dimensional projection of the 3D world. Three-dimensional reconstruction can thus be regarded as the process of re-projecting the 2D image(s) back to a 3D model, as such recovering the depth dimension which was lost during projection. In this work, we focus on dense reconstruction, meaning that a depth estimate is sought for each pixel of the input image. Most attention in the 3D reconstruction area has been on stereo-vision based methods, which use the displacement of objects in two (or more) images. Where stereo vision must be seen as a spatial integration of multiple viewpoints to recover depth, it is also possible to perform a temporal integration. The problem arising in this situation is known as the Structure from Motion problem and deals with extracting 3-dimensional information about the environment from the motion of its projection onto a two-dimensional surface. Based upon the observation that the human visual system uses both stereo and structure from motion for 3D reconstruction, this research work also targets the combination of stereo information in a structure from motion-based 3D-reconstruction scheme. The data fusion problem arising in this case is solved by casting it as an energy minimization problem in a variational framework.

    @PhdThesis{de2010variational,
    author = {De Cubber, Geert},
    school = {Vrije Universiteit Brussel-Royal Military Academy},
    title = {Variational methods for dense depth reconstruction from monocular and binocular video sequences},
    year = {2010},
    abstract = {This research work tackles the problem of dense three-dimensional reconstruction from monocular and binocular image sequences. Recovering 3D-information has been in the focus of attention of the computer vision community for a few decades now, yet no all-satisfying method has been found so far. The main problem with vision is that the perceived computer image is a two-dimensional projection of the 3D world. Three-dimensional reconstruction can thus be regarded as the process of re-projecting the 2D image(s) back to a 3D model, as such recovering the depth dimension which was lost during projection.
    In this work, we focus on dense reconstruction, meaning that a depth estimate is sought for each pixel of the input image. Most attention in the 3D reconstruction area has been on stereo-vision based methods, which use the displacement of objects in two (or more) images. Where stereo vision must be seen as a spatial integration of multiple viewpoints to recover depth, it is also possible to perform a temporal integration. The problem arising in this situation is known as the Structure from Motion problem and deals with extracting 3-dimensional information about the environment from the motion of its projection onto a two-dimensional surface. Based upon the observation that the human visual system uses both stereo and structure from motion for 3D reconstruction, this research work also targets the combination of stereo information in a structure from motion-based 3D-reconstruction scheme. The data fusion problem arising in this case is solved by casting it as an energy minimization problem in a variational framework.},
    project = {ViewFinder, Mobiniss},
    url = {http://mecatron.rma.ac.be/pub/2010/PhD_Thesis_Geert_.pdf},
    unit= {meca-ras,vub-etro}
    }
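The variational, energy-minimization formulation at the heart of this thesis can be illustrated on a toy 1-D depth profile: a data term pulls the estimate towards the observations while a smoothness term penalizes depth jumps, and gradient descent minimizes their sum. This is a didactic sketch with made-up parameters, far simpler than the dense 2-D functionals treated in the thesis itself.

```python
import numpy as np

def energy(d, obs, lam):
    """E(d) = data fidelity + lam * first-order smoothness."""
    return np.sum((d - obs) ** 2) + lam * np.sum(np.diff(d) ** 2)

def refine_depth(obs, lam=2.0, step=0.02, iters=2000):
    """Minimise E by plain gradient descent, starting from the observations."""
    d = obs.astype(float).copy()
    for _ in range(iters):
        grad = 2.0 * (d - obs)            # derivative of the data term
        diff = np.diff(d)
        grad[:-1] -= 2.0 * lam * diff     # derivative of the smoothness term
        grad[1:] += 2.0 * lam * diff
        d -= step * grad
    return d
```

With `lam=2` the step size 0.02 stays below the stability bound of this quadratic energy; a larger smoothness weight needs a smaller step (or a direct tridiagonal solve instead of gradient descent).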

  • G. De Cubber, D. Doroftei, and S. A. Berrabah, “Using visual perception for controlling an outdoor robot in a crisis management scenario," in ROBOTICS 2010, Clermont-Ferrand, France, 2010.
    [BibTeX] [Abstract] [Download PDF]

    Crisis management teams (e.g. fire and rescue services, anti-terrorist units …) are often confronted with dramatic situations where critical decisions have to be made within hard time constraints. Therefore, they need correct information about what is happening on the crisis site. In this context, the View-Finder project aims at developing robots which can assist the human crisis managers, by gathering data. This paper gives an overview of the development of such an outdoor robot. The presented robotic system is able to detect human victims at the incident site, by using vision-based human body shape detection. To increase the perceptual awareness of the human crisis managers, the robotic system is capable of reconstructing a 3D model of the environment, based on vision data. Also for navigation, the robot depends mostly on visual perception, as it combines a model-based navigation approach using geo-referenced positioning with stereo-based terrain traversability analysis for obstacle avoidance. The robot control scheme is embedded in a behavior-based robot control architecture, which integrates all the robot capabilities. This paper discusses all the above mentioned technologies.

    @InProceedings{de2010using,
    author = {De Cubber, Geert and Doroftei, Daniela and Berrabah, Sid Ahmed},
    booktitle = {ROBOTICS 2010},
    title = {Using visual perception for controlling an outdoor robot in a crisis management scenario},
    year = {2010},
    abstract = {Crisis management teams (e.g. fire and rescue services, anti-terrorist units ...) are often confronted with dramatic situations where critical decisions have to be made within hard time constraints. Therefore, they need correct information about what is happening on the crisis site. In this context, the View-Finder project aims at developing robots which can assist the human crisis managers, by gathering data. This paper gives an overview of the development of such an outdoor robot. The presented robotic system is able to detect human victims at the incident site, by using vision-based human body shape detection. To increase the perceptual awareness of the human crisis managers, the robotic system is capable of reconstructing a 3D model of the environment, based on vision data. Also for navigation, the robot depends mostly on visual perception, as it combines a model-based navigation approach using geo-referenced positioning with stereo-based terrain traversability analysis for obstacle avoidance. The robot control scheme is embedded in a behavior-based robot control architecture, which integrates all the robot capabilities. This paper discusses all the above mentioned technologies.},
    project = {ViewFinder, Mobiniss},
    address = {Clermont-Ferrand, France},
    unit= {meca-ras},
    url = {http://mecatron.rma.ac.be/pub/2010/Usingvisualperceptionforcontrollinganoutdoorrobotinacrisismanagementscenario (1).pdf},
    }

2009

  • G. De Cubber, D. Doroftei, L. Nalpantidis, G. C. Sirakoulis, and A. Gasteratos, “Stereo-based terrain traversability analysis for robot navigation," in IARP/EURON Workshop on Robotics for Risky Interventions and Environmental Surveillance, Brussels, Belgium, 2009.
    [BibTeX] [Abstract] [Download PDF]

    In this paper, we present an approach where a classification of the terrain into the classes traversable and obstacle is performed using only stereo vision as input data.

    @InProceedings{de2009stereo,
    author = {De Cubber, Geert and Doroftei, Daniela and Nalpantidis, Lazaros and Sirakoulis, Georgios Ch and Gasteratos, Antonios},
    booktitle = {IARP/EURON Workshop on Robotics for Risky Interventions and Environmental Surveillance},
    title = {Stereo-based terrain traversability analysis for robot navigation},
    year = {2009},
    abstract = {In this paper, we present an approach where a classification of the terrain into the classes traversable and obstacle is performed using only stereo vision as input data.},
    project = {ViewFinder, Mobiniss},
    address = {Brussels, Belgium},
    url = {http://mecatron.rma.ac.be/pub/2009/RISE-DECUBBER-DUTH.pdf},
    unit= {meca-ras}
    }
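The ground-plane reasoning behind such a stereo traversability classification can be sketched in a few lines. The sketch below is a hypothetical illustration, not the paper's code: the linear ground-plane disparity model, the synthetic scene, and all numbers are assumptions.

```python
import numpy as np

# For a stereo camera looking over flat ground, the disparity of ground pixels
# grows roughly linearly with the image row (the "v-disparity" ground model).
# Pixels whose disparity rises well above the model prediction protrude from
# the terrain and are labelled obstacles; everything else is traversable.

def traversability_mask(disparity, a, b, threshold=2.0):
    """True where disparity exceeds the ground model a*row + b by > threshold."""
    rows = np.arange(disparity.shape[0], dtype=float)[:, None]
    return (disparity - (a * rows + b)) > threshold

# Synthetic scene: linear ground-plane disparity plus one box-shaped obstacle.
h, w, a, b = 100, 120, 0.5, 1.0
rows = np.arange(h, dtype=float)[:, None]
disparity = a * rows + b + np.zeros((h, w))
disparity[40:60, 50:70] = a * 60 + b + 10.0   # obstacle sticking out of the ground

mask = traversability_mask(disparity, a, b)   # True only on the obstacle patch
```

A real system would fit the ground-model parameters a and b from the disparity data itself (e.g. via a v-disparity histogram) rather than assume them.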

  • G. De Cubber and G. Marton, “Human Victim Detection," in Third International Workshop on Robotics for risky interventions and Environmental Surveillance-Maintenance, RISE, Brussels, Belgium, 2009.
    [BibTeX] [Abstract] [Download PDF]

    This paper presents an approach to achieve robust victim detection from color video images. The applied approach starts from the Viola-Jones algorithm for Haar-feature-based template recognition. This algorithm was adapted to recognize persons lying on the ground in difficult outdoor illumination conditions.

    @InProceedings{de2009human,
    author = {De Cubber, Geert and Marton, Gabor},
    booktitle = {Third International Workshop on Robotics for risky interventions and Environmental Surveillance-Maintenance, RISE},
    title = {Human Victim Detection},
    year = {2009},
    abstract = {This paper presents an approach to achieve robust victim detection from color video images. The applied approach starts from the Viola-Jones algorithm for Haar-feature-based template recognition. This algorithm was adapted to recognize persons lying on the ground in difficult outdoor illumination conditions.},
    project = {ViewFinder, Mobiniss},
    address = {Brussels, Belgium},
    url = {http://mecatron.rma.ac.be/pub/2009/RISE-DECUBBER_BUTE.pdf},
    unit= {meca-ras}
    }
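The two ingredients of the Viola-Jones detector that the abstract builds on, integral images and Haar-like rectangle features, can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation; the test image is invented.

```python
import numpy as np

def integral_image(img):
    """Cumulative sum over both axes; ii[y, x] = sum of img[:y+1, :x+1]."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, y0, x0, y1, x1):
    """Sum of img[y0:y1+1, x0:x1+1] from the integral image in 4 lookups."""
    s = ii[y1, x1]
    if y0 > 0:
        s -= ii[y0 - 1, x1]
    if x0 > 0:
        s -= ii[y1, x0 - 1]
    if y0 > 0 and x0 > 0:
        s += ii[y0 - 1, x0 - 1]
    return s

def two_rect_haar(ii, y0, x0, h, w):
    """Left-minus-right two-rectangle Haar-like feature of size h x 2w."""
    left = rect_sum(ii, y0, x0, y0 + h - 1, x0 + w - 1)
    right = rect_sum(ii, y0, x0 + w, y0 + h - 1, x0 + 2 * w - 1)
    return left - right

img = np.zeros((8, 8))
img[:, :4] = 1.0                    # bright left half, dark right half
ii = integral_image(img)
# Feature over the whole image: left 8x4 block of ones minus right 8x4 of zeros.
assert two_rect_haar(ii, 0, 0, 8, 4) == 32.0
```

The full detector then evaluates thousands of such features in a boosted cascade; the integral image is what makes that evaluation cheap at every window position and scale.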

  • D. Doroftei, G. De Cubber, E. Colon, and Y. Baudoin, “Behavior based control for an outdoor crisis management robot," in Proceedings of the IARP International Workshop on Robotics for Risky Interventions and Environmental Surveillance, Brussels, Belgium, 2009, p. 12–14.
    [BibTeX] [Abstract] [Download PDF]

    The design and development of a control architecture for a robotic crisis management agent raises 3 main questions: 1. How can we design the individual behaviors, such that the robot is capable of avoiding obstacles and of navigating semi-autonomously? 2. How can these individual behaviors be combined in an optimal way, leading to a rational and coherent global robot behavior? 3. How can all these capabilities be combined in a comprehensive and modular framework, such that the robot can handle a high-level task (searching for human victims) with minimal input from human operators, by navigating in a complex, dynamic environment, while avoiding potentially hazardous obstacles? In this paper, we present each of these three main aspects of the general robot control architecture in more detail.

    @InProceedings{doroftei2009behavior,
    author = {Doroftei, Daniela and De Cubber, Geert and Colon, Eric and Baudoin, Yvan},
    booktitle = {Proceedings of the IARP International Workshop on Robotics for Risky Interventions and Environmental Surveillance},
    title = {Behavior based control for an outdoor crisis management robot},
    year = {2009},
    pages = {12--14},
    abstract = {The design and development of a control architecture for a robotic crisis management agent raises 3 main questions:
    1. How can we design the individual behaviors, such that the robot is capable of avoiding obstacles and of navigating semi-autonomously?
    2. How can these individual behaviors be combined in an optimal way, leading to a rational and coherent global robot behavior?
    3. How can all these capabilities be combined in a comprehensive and modular framework, such that the robot can handle a high-level task (searching for human victims) with minimal input from human operators, by navigating in a complex, dynamic environment, while avoiding potentially hazardous obstacles?
    In this paper, we present each of these three main aspects of the general robot control architecture in more detail.},
    project = {ViewFinder, Mobiniss},
    address = {Brussels, Belgium},
    url = {http://mecatron.rma.ac.be/pub/2009/RISE-DOROFTEI.pdf},
    unit= {meca-ras}
    }
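The behaviour-combination problem raised in question 2 of the abstract can be illustrated with a toy coordinator that fuses per-behaviour velocity proposals by a weighted average. All behaviour laws, gains, and weights below are invented for illustration and are not the paper's controller.

```python
import math

def goal_seeking(robot, goal):
    """Propose (v, omega) steering towards the goal; crude proportional law."""
    dx, dy = goal[0] - robot["x"], goal[1] - robot["y"]
    heading_error = math.atan2(dy, dx) - robot["theta"]
    return 1.0, 0.8 * heading_error           # forward speed, turn rate

def obstacle_avoidance(distance_to_obstacle):
    """Propose slowing down and turning away as an obstacle gets closer."""
    if distance_to_obstacle > 2.0:
        return 1.0, 0.0                        # nothing to avoid
    return 0.2, 1.5                            # creep forward, turn away

def fuse(commands_and_weights):
    """Weighted average of (v, omega) proposals from all active behaviours."""
    total = sum(w for _, w in commands_and_weights)
    v = sum(c[0] * w for c, w in commands_and_weights) / total
    omega = sum(c[1] * w for c, w in commands_and_weights) / total
    return v, omega

robot = {"x": 0.0, "y": 0.0, "theta": 0.0}
d = 0.5                                        # an obstacle 0.5 m ahead
w_avoid = 1.0 if d < 2.0 else 0.1              # avoidance dominates when close
v, omega = fuse([(goal_seeking(robot, (5.0, 0.0)), 0.3),
                 (obstacle_avoidance(d), w_avoid)])
```

In a real architecture the weights themselves are computed by an arbiter from context (such as obstacle proximity), which is what keeps the fused global behaviour coherent.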

  • Y. Baudoin, D. Doroftei, G. De Cubber, S. A. Berrabah, C. Pinzon, F. Warlet, J. Gancet, E. Motard, M. Ilzkovitz, L. Nalpantidis, and A. Gasteratos, “VIEW-FINDER : Robotics assistance to fire-fighting services and Crisis Management," in 2009 IEEE International Workshop on Safety, Security & Rescue Robotics (SSRR 2009), Denver, USA, 2009, p. 1–6.
    [BibTeX] [Abstract] [Download PDF] [DOI]

    In the event of an emergency due to a fire or other crisis, a necessary but time-consuming pre-requisite, that could delay the real rescue operation, is to establish whether the ground or area can be entered safely by human emergency workers. The objective of the VIEW-FINDER project is to develop robots which have the primary task of gathering data. The robots are equipped with sensors that detect the presence of chemicals and, in parallel, image data is collected and forwarded to an advanced Control station (COC). The robots will be equipped with a wide array of chemical sensors, on-board cameras, Laser and other sensors to enhance scene understanding and reconstruction. At the Base Station (BS) the data is processed and combined with geographical information originating from a Web of sources; thus providing the personnel leading the operation with in-situ processed data that can improve decision making. This paper will focus on the Crisis Management Information System that has been developed for improving a Disaster Management Action Plan and for linking the Control Station with an off-site Crisis Management Centre, and on the software tools implemented on the mobile robot gathering data in the outdoor area of the crisis.

    @InProceedings{Baudoin2009view01,
    author = {Y. Baudoin and D. Doroftei and G. De Cubber and S. A. Berrabah and C. Pinzon and F. Warlet and J. Gancet and E. Motard and M. Ilzkovitz and L. Nalpantidis and A. Gasteratos},
    booktitle = {2009 {IEEE} International Workshop on Safety, Security {\&} Rescue Robotics ({SSRR} 2009)},
    title = {{VIEW}-{FINDER} : Robotics assistance to fire-fighting services and Crisis Management},
    year = {2009},
    month = nov,
    organization = {IEEE},
    pages = {1--6},
    publisher = {{IEEE}},
    abstract = {In the event of an emergency due to a fire or other crisis, a necessary but time-consuming pre-requisite, that could delay the real rescue operation, is to establish whether the ground or area can be entered safely by human emergency workers. The objective of the VIEW-FINDER project is to develop robots which have the primary task of gathering data. The robots are equipped with sensors that detect the presence of chemicals and, in parallel, image data is collected and forwarded to an advanced Control station (COC). The robots will be equipped with a wide array of chemical sensors, on-board cameras, Laser and other sensors to enhance scene understanding and reconstruction. At the Base Station (BS) the data is processed and combined with geographical information originating from a Web of sources; thus providing the personnel leading the operation with in-situ processed data that can improve decision making. This paper will focus on the Crisis Management Information System that has been developed for improving a Disaster Management Action Plan and for linking the Control Station with an off-site Crisis Management Centre, and on the software tools implemented on the mobile robot gathering data in the outdoor area of the crisis.},
    doi = {10.1109/ssrr.2009.5424172},
    project = {ViewFinder},
    address = {Denver, USA},
    url = {https://ieeexplore.ieee.org/document/5424172},
    unit= {meca-ras}
    }

  • Y. Baudoin, D. Doroftei, G. De Cubber, S. A. Berrabah, C. Pinzon, J. Penders, A. Maslowski, and J. Bedkowski, “VIEW-FINDER : Outdoor Robotics Assistance to Fire-Fighting services," in International Symposium Clawar, Istanbul, Turkey, 2009.
    [BibTeX] [Abstract] [Download PDF]

    In the event of an emergency due to a fire or other crisis, a necessary but time-consuming pre-requisite, that could delay the real rescue operation, is to establish whether the ground or area can be entered safely by human emergency workers. The objective of the VIEW-FINDER project is to develop robots which have the primary task of gathering data. The robots are equipped with sensors that detect the presence of chemicals and, in parallel, image data is collected and forwarded to an advanced Control station (COC). The robots will be equipped with a wide array of chemical sensors, on-board cameras, Laser and other sensors to enhance scene understanding and reconstruction. At the control station the data is processed and combined with geographical information originating from a web of sources; thus providing the personnel leading the operation with in-situ processed data that can improve decision making. The information may also be forwarded to other forces involved in the operation (e.g. fire fighters, rescue workers, police, etc.). The robots will be designed to navigate individually or cooperatively and to follow high-level instructions from the base station. The robots are off-the-shelf units, consisting of wheeled robots. The robots connect wirelessly to the control station. The control station collects in-situ data and combines it with information retrieved from the large-scale GMES-information bases. It will be equipped with a sophisticated human interface to display the processed information to the human operators and operation command.

    @InProceedings{baudoin2009view02,
    author = {Baudoin, Yvan and Doroftei, Daniela and De Cubber, Geert and Berrabah, Sid Ahmed and Pinzon, Carlos and Penders, Jacques and Maslowski, Andrzej and Bedkowski, Janusz},
    booktitle = {International Symposium Clawar},
    title = {{VIEW-FINDER} : Outdoor Robotics Assistance to Fire-Fighting services},
    year = {2009},
    abstract = {In the event of an emergency due to a fire or other crisis, a necessary but time-consuming pre-requisite, that could delay the real rescue operation, is to establish whether the ground or area can be entered safely by human emergency workers. The objective of the VIEW-FINDER project is to develop robots which have the primary task of gathering data. The robots are equipped with sensors that detect the presence of chemicals and, in parallel, image data is collected and forwarded to an advanced Control station (COC). The robots will be equipped with a wide array of chemical sensors, on-board cameras, Laser and other sensors to enhance scene understanding and reconstruction. At the control station the data is processed and combined with geographical information originating from a web of sources; thus providing the personnel leading the operation with in-situ processed data that can improve decision making. The information may also be forwarded to other forces involved in the operation (e.g. fire fighters, rescue workers, police, etc.). The robots will be designed to navigate individually or cooperatively and to follow high-level instructions from the base station. The robots are off-the-shelf units, consisting of wheeled robots. The robots connect wirelessly to the control station. The control station collects in-situ data and combines it with information retrieved from the large-scale GMES-information bases. It will be equipped with a sophisticated human interface to display the processed information to the human operators and operation command.},
    project = {ViewFinder, Mobiniss},
    address = {Istanbul, Turkey},
    url = {http://mecatron.rma.ac.be/pub/2009/CLAWAR2009.pdf},
    unit= {meca-ras}
    }

  • Y. Baudoin, D. Doroftei, G. De Cubber, S. A. Berrabah, E. Colon, C. Pinzon, A. Maslowski, and J. Bedkowski, “View-Finder: a European project aiming the Robotics assistance to Fire-fighting services and Crisis Management," in IARP workshop on Service Robotics and Nanorobotics, Beijing, China, 2009.
    [BibTeX] [Abstract] [Download PDF]

    In the event of an emergency due to a fire or other crisis, a necessary but time-consuming pre-requisite, that could delay the real rescue operation, is to establish whether the ground or area can be entered safely by human emergency workers. The objective of the VIEW-FINDER project is to develop robots which have the primary task of gathering data. The robots are equipped with sensors that detect the presence of chemicals and, in parallel, image data is collected and forwarded to an advanced Control station (COC). The robots will be equipped with a wide array of chemical sensors, on-board cameras, Laser and other sensors to enhance scene understanding and reconstruction. At the control station the data is processed and combined with geographical information originating from a web of sources; thus providing the personnel leading the operation with in-situ processed data that can improve decision making. The information may also be forwarded to other forces involved in the operation (e.g. fire fighters, rescue workers, police, etc.). The robots connect wirelessly to the control station. The control station collects in-situ data and combines it with information retrieved from the large-scale GMES-information bases. It will be equipped with a sophisticated human interface to display the processed information to the human operators and operation command. We will essentially focus in this paper on the steps entrusted to the RMA and PIAP through the work-packages of the project.

    @InProceedings{baudoin2009view03,
    author = {Baudoin, Yvan and Doroftei, Daniela and De Cubber, Geert and Berrabah, Sid Ahmed and Colon, Eric and Pinzon, Carlos and Maslowski, Andrzej and Bedkowski, Janusz},
    booktitle = {IARP workshop on Service Robotics and Nanorobotics},
    title = {{View-Finder}: a European project aiming the Robotics assistance to Fire-fighting services and Crisis Management},
    year = {2009},
    abstract = {In the event of an emergency due to a fire or other crisis, a necessary but time-consuming pre-requisite, that could delay the real rescue operation, is to establish whether the ground or area can be entered safely by human emergency workers. The objective of the VIEW-FINDER project is to develop robots which have the primary task of gathering data. The robots are equipped with sensors that detect the presence of chemicals and, in parallel, image data is collected and forwarded to an advanced Control station (COC). The robots will be equipped with a wide array of chemical sensors, on-board cameras, Laser and other sensors to enhance scene understanding and reconstruction. At the control station the data is processed and combined with geographical information originating from a web of sources; thus providing the personnel leading the operation with in-situ processed data that can improve decision making. The information may also be forwarded to other forces involved in the operation (e.g. fire fighters, rescue workers, police, etc.). The robots connect wirelessly to the control station. The control station collects in-situ data and combines it with information retrieved from the large-scale GMES-information bases. It will be equipped with a sophisticated human interface to display the processed information to the human operators and operation command. We will essentially focus in this paper on the steps entrusted to the RMA and PIAP through the work-packages of the project.},
    project = {ViewFinder},
    address = {Beijing, China},
    url = {http://mecatron.rma.ac.be/pub/2009/IARP-paper2009.pdf},
    unit= {meca-ras}
    }

  • Y. Baudoin, G. De Cubber, S. A. Berrabah, D. Doroftei, E. Colon, C. Pinzon, A. Maslowski, and J. Bedkowski, “VIEW-FINDER: European Project Aiming CRISIS MANAGEMENT TOOLS and the Robotics Assistance to Fire-Fighting Services," in IARP WS on service Robotics, Beijing, China, 2009.
    [BibTeX] [Abstract] [Download PDF]

    Overview of the View-Finder project

    @InProceedings{baudoin2009view04,
    author = {Baudoin, Yvan and De Cubber, Geert and Berrabah, Sid Ahmed and Doroftei, Daniela and Colon, E and Pinzon, C and Maslowski, A and Bedkowski, J},
    booktitle = {IARP WS on service Robotics, Beijing},
    title = {{VIEW-FINDER}: European Project Aiming CRISIS MANAGEMENT TOOLS and the Robotics Assistance to Fire-Fighting Services},
    year = {2009},
    abstract = {Overview of the View-Finder project},
    project = {ViewFinder},
    address = {Beijing, China},
    unit= {meca-ras},
    url = {https://www.academia.edu/2879650/VIEW-FINDER_European_Project_Aiming_CRISIS_MANAGEMENT_TOOLS_and_the_Robotics_Assistance_to_Fire-Fighting_Services},
    }

2008

  • D. Doroftei, E. Colon, and G. De Cubber, “A Behaviour-Based Control and Software Architecture for the Visually Guided Robudem Outdoor Mobile Robot," Journal of Automation Mobile Robotics and Intelligent Systems, vol. 2, iss. 4, p. 19–24, 2008.
    [BibTeX] [Abstract] [Download PDF]

    The design of outdoor autonomous robots requires the careful consideration and integration of multiple aspects: sensors and sensor data fusion, design of a control and software architecture, design of a path planning algorithm and robot control. This paper describes partial aspects of this research work, which is aimed at developing a semi-autonomous outdoor robot for risky interventions. This paper focuses on three main aspects of the design process: visual sensing using stereo vision and image motion analysis, design of a behaviour-based control architecture and implementation of a modular software architecture.

    @Article{doroftei2008behaviour,
    author = {Doroftei, Daniela and Colon, Eric and De Cubber, Geert},
    journal = {Journal of Automation Mobile Robotics and Intelligent Systems},
    title = {A Behaviour-Based Control and Software Architecture for the Visually Guided Robudem Outdoor Mobile Robot},
    year = {2008},
    issn = {1897-8649},
    month = oct,
    number = {4},
    pages = {19--24},
    volume = {2},
    abstract = {The design of outdoor autonomous robots requires the careful consideration and integration of multiple aspects: sensors and sensor data fusion, design of a control and software architecture, design of a path planning algorithm and robot control. This paper describes partial aspects of this research work, which is aimed at developing a semi-autonomous outdoor robot for risky interventions. This paper focuses on three main aspects of the design process: visual sensing using stereo vision and image motion analysis, design of a behaviour-based control architecture and implementation of a modular software architecture.},
    project = {ViewFinder, Mobiniss},
    url = {http://mecatron.rma.ac.be/pub/2008/XXX JAMRIS No8 - Doroftei.pdf},
    unit= {meca-ras}
    }

  • G. De Cubber, L. Nalpantidis, G. C. Sirakoulis, and A. Gasteratos, “Intelligent robots need intelligent vision: visual 3D perception," in RISE’08: Proceedings of the EURON/IARP International Workshop on Robotics for Risky Interventions and Surveillance of the Environment, Benicassim, Spain, 2008.
    [BibTeX] [Abstract] [Download PDF]

    In this paper, we investigate the possibilities of stereo and structure from motion approaches. It is not the aim to compare both theories of depth reconstruction with the goal of designating a winner and a loser. Both methods are capable of providing sparse as well as dense 3D reconstructions and both approaches have their merits and defects. The thorough, year-long research in the field indicates that accurate depth perception requires a combination of methods rather than a sole one. In fact, cognitive research has shown that the human brain uses no less than 12 different cues to estimate depth. Therefore, we also finally introduce in a following section a methodology to integrate stereo and structure from motion.

    @InProceedings{de2008intelligent,
    author = {De Cubber, Geert and Nalpantidis, Lazaros and Sirakoulis, Georgios Ch and Gasteratos, Antonios},
    booktitle = {RISE’08: Proceedings of the EURON/IARP International Workshop on Robotics for Risky Interventions and Surveillance of the Environment},
    title = {Intelligent robots need intelligent vision: visual {3D} perception},
    year = {2008},
    abstract = {In this paper, we investigate the possibilities of stereo and structure from motion approaches. It is not the aim to compare both theories of depth reconstruction with the goal of designating a winner and a loser. Both methods are capable of providing sparse as well as dense 3D reconstructions and both approaches have their merits and defects. The thorough, year-long research in the field indicates that accurate depth perception requires a combination of methods rather than a sole one. In fact, cognitive research has shown that the human brain uses no less than 12 different cues to estimate depth. Therefore, we also finally introduce in a following section a methodology to integrate stereo and structure from motion.},
    project = {ViewFinder, Mobiniss},
    address = {Benicassim, Spain},
    url = {http://mecatron.rma.ac.be/pub/2008/DeCubber.pdf},
    unit= {meca-ras}
    }

  • G. De Cubber, D. Doroftei, and G. Marton, “Development of a visually guided mobile robot for environmental observation as an aid for outdoor crisis management operations," in Proceedings of the IARP Workshop on Environmental Maintenance and Protection, Baden-Baden, Germany, 2008.
    [BibTeX] [Abstract] [Download PDF]

    To solve these issues, an outdoor mobile robotic platform was equipped with a differential GPS system for accurate geo-registered positioning, and a stereo vision system. This stereo vision system serves two purposes: 1) victim detection and 2) obstacle detection and avoidance. For semi-autonomous robot control and navigation, we rely on a behavior-based robot motion and path planner. In this paper, we present each of the three main aspects (victim detection, stereo-based obstacle detection and behavior-based navigation) of the general robot control architecture in more detail.

    @InProceedings{de2008development,
    author = {De Cubber, Geert and Doroftei, Daniela and Marton, Gabor},
    booktitle = {Proceedings of the IARP Workshop on Environmental Maintenance and Protection},
    title = {Development of a visually guided mobile robot for environmental observation as an aid for outdoor crisis management operations},
    year = {2008},
    abstract = {To solve these issues, an outdoor mobile robotic platform was equipped with a differential GPS system for accurate geo-registered positioning, and a stereo vision system. This stereo vision system serves two purposes: 1) victim detection and 2) obstacle detection and avoidance. For semi-autonomous robot control and navigation, we rely on a behavior-based robot motion and path planner. In this paper, we present each of the three main aspects (victim detection, stereo-based obstacle detection and behavior-based navigation) of the general robot control architecture in more detail.},
    project = {ViewFinder, Mobiniss},
    address = {Baden-Baden, Germany},
    url = {http://mecatron.rma.ac.be/pub/2008/environmental observation as an aid for outdoor crisis management operations.pdf},
    unit= {meca-ras}
    }

  • G. De Cubber, “Dense 3D structure and motion estimation as an aid for robot navigation," Journal of Automation Mobile Robotics and Intelligent Systems, vol. 2, iss. 4, p. 14–18, 2008.
    [BibTeX] [Abstract] [Download PDF]

    Three-dimensional scene reconstruction is an important tool in many applications varying from computer graphics to mobile robot navigation. In this paper, we focus on the robotics application, where the goal is to estimate the 3D rigid motion of a mobile robot and to reconstruct a dense three-dimensional scene representation. The reconstruction problem can be subdivided into a number of subproblems. First, the egomotion has to be estimated. For this, the camera (or robot) motion parameters are iteratively estimated by reconstruction of the epipolar geometry. Secondly, a dense depth map is calculated by fusing sparse depth information from point features and dense motion information from the optical flow in a variational framework. This depth map corresponds to a point cloud in 3D space, which can then be converted into a model to extract information for the robot navigation algorithm. Here, we present an integrated approach for the structure and egomotion estimation problem.

    @Article{DeCubber2008,
    author = {De Cubber, Geert},
    journal = {Journal of Automation Mobile Robotics and Intelligent Systems},
    title = {Dense {3D} structure and motion estimation as an aid for robot navigation},
    year = {2008},
    issn = {1897-8649},
    month = oct,
    number = {4},
    pages = {14--18},
    volume = {2},
    abstract = {Three-dimensional scene reconstruction is an important tool in many applications varying from computer graphics to mobile robot navigation. In this paper, we focus on the robotics application, where the goal is to estimate the 3D rigid motion of a mobile robot and to reconstruct a dense three-dimensional scene representation. The reconstruction problem can be subdivided into a number of subproblems. First, the egomotion has to be estimated. For this, the camera (or robot) motion parameters are iteratively estimated by reconstruction of the epipolar geometry. Secondly, a dense depth map is calculated by fusing sparse depth information from point features and dense motion information from the optical flow in a variational framework. This depth map corresponds to a point cloud in 3D space, which can then be converted into a model to extract information for the robot navigation algorithm. Here, we present an integrated approach for the structure and egomotion estimation problem.},
    project = {ViewFinder,Mobiniss},
    url = {http://www.jamris.org/images/ISSUES/ISSUE-2008-04/002 JAMRIS No8 - De Cubber.pdf},
    unit= {meca-ras}
    }
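The final step described in the abstract, converting a dense depth map into a point cloud, follows the pinhole back-projection formulas X = (u - cx) * Z / f and Y = (v - cy) * Z / f. The sketch below is a minimal illustration with made-up intrinsics, not the paper's pipeline.

```python
import numpy as np

def depth_to_cloud(depth, f, cx, cy):
    """Back-project an H x W depth map into an (H*W, 3) point cloud.

    f is the focal length in pixels, (cx, cy) the principal point; each pixel
    (u, v) at depth Z maps to the 3D point ((u-cx)Z/f, (v-cy)Z/f, Z).
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinate grids
    X = (u - cx) * depth / f
    Y = (v - cy) * depth / f
    return np.stack([X, Y, depth], axis=-1).reshape(-1, 3)

depth = np.full((4, 4), 2.0)                 # a flat wall 2 m in front of the camera
cloud = depth_to_cloud(depth, f=100.0, cx=2.0, cy=2.0)
```

The resulting point cloud can then be meshed or voxelised into the model from which the navigation algorithm extracts traversability information.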

2007

  • E. Colon, G. De Cubber, H. Ping, J. Habumuremyi, H. Sahli, and Y. Baudoin, “Integrated Robotic systems for Humanitarian Demining," International Journal of Advanced Robotic Systems, vol. 4, iss. 2, p. 24, 2007.
    [BibTeX] [Abstract] [Download PDF] [DOI]

    This paper summarises the main results of 10 years of research and development in Humanitarian Demining. The Hudem project focuses on mine detection systems and aims at providing different solutions to support the mine detection operations. Robots using different kinds of locomotion systems have been designed and tested on dummy minefields. In order to control these robots, software interfaces, control algorithms, visual positioning and terrain following systems have also been developed. Typical data acquisition results obtained during trial campaigns with robots and data acquisition systems are reported. Lessons learned during the project and future work conclude this paper.

    @Article{colon2007integrated,
    author = {Colon, Eric and De Cubber, Geert and Ping, Hong and Habumuremyi, Jean-Claude and Sahli, Hichem and Baudoin, Yvan},
    journal = {International Journal of Advanced Robotic Systems},
    title = {Integrated Robotic systems for Humanitarian Demining},
    year = {2007},
    month = jun,
    number = {2},
    pages = {24},
    volume = {4},
    abstract = {This paper summarises the main results of 10 years of research and development in Humanitarian Demining. The Hudem project focuses on mine detection systems and aims at providing different solutions to support the mine detection operations. Robots using different kinds of locomotion systems have been designed and tested on dummy minefields. In order to control these robots, software interfaces, control algorithms, visual positioning and terrain following systems have also been developed. Typical data acquisition results obtained during trial campaigns with robots and data acquisition systems are reported. Lessons learned during the project and future work conclude this paper.},
    doi = {10.5772/5694},
    publisher = {{SAGE} Publications},
    project = {Mobiniss},
    url = {http://mecatron.rma.ac.be/pub/2007/10.1.1.691.7544.pdf},
    unit= {meca-ras}
    }

  • G. De Cubber, “Dense 3D structure and motion estimation as an aid for robot navigation," in ISMCR 2007, Warsaw, Poland, 2007.
    [BibTeX] [Abstract] [Download PDF]

    Three-dimensional scene reconstruction is an important tool in many applications varying from computer graphics to mobile robot navigation. In this paper, we focus on the robotics application, where the goal is to estimate the 3D rigid motion of a mobile robot and to reconstruct a dense three-dimensional scene representation. The reconstruction problem can be subdivided into a number of subproblems. First, the egomotion has to be estimated. For this, the camera (or robot) motion parameters are iteratively estimated by reconstruction of the epipolar geometry. Secondly, a dense depth map is calculated by fusing sparse depth information from point features and dense motion information from the optical flow in a variational framework. This depth map corresponds to a point cloud in 3D space, which can then be converted into a model to extract information for the robot navigation algorithm. Here, we present an integrated approach for the structure and egomotion estimation problem.

    @InProceedings{de2007dense,
    author = {De Cubber, Geert},
    booktitle = {ISMCR 2007},
    title = {Dense {3D} structure and motion estimation as an aid for robot navigation},
    year = {2007},
    abstract = {Three-dimensional scene reconstruction is an important tool in many applications varying from computer graphics to mobile robot navigation. In this paper, we focus on the robotics application, where the goal is to estimate the 3D rigid motion of a mobile robot and to reconstruct a dense three-dimensional scene representation. The reconstruction problem can be subdivided into a number of subproblems. First, the egomotion has to be estimated. For this, the camera (or robot) motion parameters are iteratively estimated by reconstruction of the epipolar geometry. Secondly, a dense depth map is calculated by fusing sparse depth information from point features and dense motion information from the optical flow in a variational framework. This depth map corresponds to a point cloud in 3D space, which can then be converted into a model to extract information for the robot navigation algorithm. Here, we present an integrated approach for the structure and egomotion estimation problem.},
    project = {ViewFinder,Mobiniss},
    address = {Warsaw, Poland},
    url = {http://mecatron.rma.ac.be/pub/2007/Dense 3D Structure and Motion Estimation as an aid for Robot Navigation.pdf},
    unit= {meca-ras,vub-etro}
    }
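The egomotion step summarized in the abstract, recovering camera motion through the epipolar geometry, can be illustrated with a minimal sketch. This is not the paper's implementation: it uses the classical linear 8-point algorithm on synthetic, noise-free correspondences, and all names and values below are invented for illustration.

```python
import numpy as np

# Hedged sketch of epipolar-geometry estimation (classical 8-point
# algorithm), not the paper's actual pipeline. Data is synthetic.

rng = np.random.default_rng(0)

# Random 3D points placed in front of both cameras.
X = rng.uniform(-1, 1, (20, 3)) + np.array([0.0, 0.0, 5.0])

def project(X, R, t):
    """Pinhole projection to normalized image coordinates."""
    Xc = X @ R.T + t
    return Xc[:, :2] / Xc[:, 2:3]

# Camera 1 at the origin; camera 2 translated along x.
x1 = project(X, np.eye(3), np.zeros(3))
x2 = project(X, np.eye(3), np.array([0.5, 0.0, 0.0]))

def eight_point(x1, x2):
    """Linear (8-point) estimate of the essential matrix."""
    a = np.column_stack([
        x2[:, 0] * x1[:, 0], x2[:, 0] * x1[:, 1], x2[:, 0],
        x2[:, 1] * x1[:, 0], x2[:, 1] * x1[:, 1], x2[:, 1],
        x1[:, 0], x1[:, 1], np.ones(len(x1)),
    ])
    _, _, vt = np.linalg.svd(a)
    E = vt[-1].reshape(3, 3)
    u, s, vt = np.linalg.svd(E)   # enforce the rank-2 constraint
    return u @ np.diag([s[0], s[1], 0.0]) @ vt

E = eight_point(x1, x2)

# The epipolar constraint x2' E x1 = 0 should hold for every match.
h1 = np.column_stack([x1, np.ones(len(x1))])
h2 = np.column_stack([x2, np.ones(len(x2))])
residuals = np.abs(np.sum((h2 @ E) * h1, axis=1))
print(residuals.max())
```

The recovered matrix is defined only up to scale; a full egomotion pipeline would decompose it into rotation and translation and iterate with robust outlier rejection, as the abstract suggests.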

  • D. Doroftei, E. Colon, and G. De Cubber, “A behaviour-based control and software architecture for the visually guided Robudem outdoor mobile robot," in ISMCR 2007, Warsaw, Poland, 2007.
    [BibTeX] [Abstract] [Download PDF]

    The design of outdoor autonomous robots requires the careful consideration and integration of multiple aspects: sensors and sensor data fusion, design of a control and software architecture, design of a path planning algorithm and robot control. This paper describes partial aspects of this research work, which is aimed at developing a semi-autonomous outdoor robot for risky interventions. It focuses on three main aspects of the design process: visual sensing using stereo and image motion analysis, design of a behaviour-based control architecture and implementation of a modular software architecture.

    @InProceedings{doroftei2007behaviour,
    author = {Doroftei, Daniela and Colon, Eric and De Cubber, Geert},
    booktitle = {ISMCR 2007},
    title = {A behaviour-based control and software architecture for the visually guided {Robudem} outdoor mobile robot},
    year = {2007},
    address = {Warsaw, Poland},
    abstract = {The design of outdoor autonomous robots requires the careful consideration and integration of multiple aspects: sensors and sensor data fusion, design of a control and software architecture, design of a path planning algorithm and robot control. This paper describes partial aspects of this research work, which is aimed at developing a semi-autonomous outdoor robot for risky interventions. It focuses on three main aspects of the design process: visual sensing using stereo and image motion analysis, design of a behaviour-based control architecture and implementation of a modular software architecture.},
    project = {ViewFinder,Mobiniss},
    url = {http://mecatron.rma.ac.be/pub/2007/Doroftei_ISMCR07.pdf},
    unit= {meca-ras}
    }

2006

  • S. A. Berrabah, G. De Cubber, V. Enescu, and H. Sahli, “MRF-Based Foreground Detection in Image Sequences from a Moving Camera," in 2006 International Conference on Image Processing, Atlanta, USA, 2006, p. 1125–1128.
    [BibTeX] [Abstract] [Download PDF] [DOI]

    This paper presents a Bayesian approach for simultaneously detecting the moving objects (foregrounds) and estimating their motion in image sequences taken with a moving camera mounted on the top of a mobile robot. To model the background, the algorithm uses the GMM approach for its simplicity and capability to adapt to illumination changes and small motions in the scene. To overcome the limitations of the GMM approach with its pixel-wise processing, the background model is combined with the motion cue in a maximum a posteriori probability (MAP)-MRF framework. This enables us to exploit the advantages of spatio-temporal dependencies that moving objects impose on pixels and the interdependence of motion and segmentation fields. As a result, the detected moving objects have visually attractive silhouettes and they are more accurate and less affected by noise than those obtained with simple pixel-wise methods. To enhance the segmentation accuracy, the background model is re-updated using the MAP-MRF results. Experimental results and a qualitative study of the proposed approach are presented on image sequences with a static camera as well as with a moving camera.

    @InProceedings{berrabah2006mrf,
    author = {Berrabah, Sid Ahmed and De Cubber, Geert and Enescu, Valentin and Sahli, Hichem},
    booktitle = {2006 International Conference on Image Processing},
    title = {{MRF}-Based Foreground Detection in Image Sequences from a Moving Camera},
    year = {2006},
    month = oct,
    organization = {IEEE},
    pages = {1125--1128},
    publisher = {{IEEE}},
    abstract = {This paper presents a Bayesian approach for simultaneously detecting the moving objects (foregrounds) and estimating their motion in image sequences taken with a moving camera mounted on the top of a mobile robot. To model the background, the algorithm uses the GMM approach for its simplicity and capability to adapt to illumination changes and small motions in the scene. To overcome the limitations of the GMM approach with its pixel-wise processing, the background model is combined with the motion cue in a maximum a posteriori probability (MAP)-MRF framework. This enables us to exploit the advantages of spatio-temporal dependencies that moving objects impose on pixels and the interdependence of motion and segmentation fields. As a result, the detected moving objects have visually attractive silhouettes and they are more accurate and less affected by noise than those obtained with simple pixel-wise methods. To enhance the segmentation accuracy, the background model is re-updated using the MAP-MRF results. Experimental results and a qualitative study of the proposed approach are presented on image sequences with a static camera as well as with a moving camera.},
    doi = {10.1109/icip.2006.312754},
    project = {MOBINISS,ViewFinder},
    address = {Atlanta, USA},
    url = {http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4106732},
    unit= {meca-ras,vub-etro}
    }
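The pixel-wise background-modelling stage described in this abstract can be sketched as follows. This is a deliberately reduced illustration, not the paper's method: a single running Gaussian per pixel stands in for the full mixture, and the MAP-MRF spatial regularization the paper adds on top is omitted. All data is synthetic.

```python
import numpy as np

# Hedged sketch: per-pixel background model with a single running
# Gaussian (the paper uses a GMM plus MAP-MRF regularization).

rng = np.random.default_rng(1)
H, W = 32, 32
mean = np.full((H, W), 100.0)   # background intensity model
var = np.full((H, W), 25.0)
alpha = 0.05                    # learning rate

def update(frame):
    """Classify pixels as foreground, then adapt the background model."""
    d2 = (frame - mean) ** 2
    foreground = d2 > 9.0 * var          # more than 3 sigma from the model
    m = ~foreground                      # adapt only background pixels
    mean[m] += alpha * (frame[m] - mean[m])
    var[m] += alpha * (d2[m] - var[m])
    return foreground

# Feed background-only frames, then one frame with a bright "object".
for _ in range(20):
    update(100.0 + rng.normal(0, 2, (H, W)))
frame = 100.0 + rng.normal(0, 2, (H, W))
frame[10:15, 10:15] = 200.0              # synthetic moving object
fg = update(frame)
print(fg[10:15, 10:15].all(), fg.mean())
```

A pixel-wise classifier like this produces the noisy silhouettes the abstract mentions; the paper's MRF step then smooths the segmentation by penalizing label disagreement between neighbouring pixels.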

  • G. De Cubber, V. Enescu, H. Sahli, E. Demeester, M. Nuttin, and D. Vanhooydonck, “Active stereo vision-based mobile robot navigation for person tracking," Integrated Computer-Aided Engineering, vol. 13, p. 203–222, 2006.
    [BibTeX] [Abstract] [Download PDF]

    In this paper, we propose a mobile robot architecture for person tracking, consisting of an active stereo vision module (ASVM) and a navigation module (NM). The first uses a stereo head equipped with a pan-tilt mechanism to track a moving target (selected by an operator) and keep it centered in the visual field. Its output, i.e. the 3D position of the person, is fed to the NM, which drives the robot towards the target while avoiding obstacles. For this, a hybrid navigation algorithm is adopted with a reactive part that efficiently reacts to the most recent sensor data, and a deliberative part that generates a globally optimal path to a target destination, such as the person’s location. As a peculiarity of the system, there is no feedback from the NM or the robot motion controller (RMC) to the ASVM. While this imparts flexibility in combining the ASVM with a wide range of robot platforms, it puts considerable strain on the ASVM. Indeed, besides the changes in the target dynamics, it has to cope with the robot motion during obstacle avoidance. These disturbances are accommodated via a suitable stochastic dynamic model for the stereo head-target system. Robust tracking is achieved by combining a color-based particle filter with a method to update the color model of the target under changing illumination conditions. The main contributions of this paper lie in (1) devising a robust color-based 3D target tracking method, (2) proposing a hybrid deliberative/reactive navigation scheme, and (3) integrating them on a wheelchair platform for the final goal of person following. Experimental results are presented for ASVM separately and in combination with a wheelchair platform-based implementation of the NM.

    @Article{2c2cd28d2aea4009ae0135448c005050,
    author = {De Cubber, Geert and Enescu, Valentin and Sahli, Hichem and Demeester, Eric and Nuttin, Marnix and Vanhooydonck, Dirk},
    journal = {Integrated Computer-Aided Engineering},
    title = {Active stereo vision-based mobile robot navigation for person tracking},
    year = {2006},
    issn = {1069-2509},
    month = jul,
    pages = {203--222},
    volume = {13},
    abstract = {In this paper, we propose a mobile robot architecture for person tracking, consisting of an active stereo vision module (ASVM) and a navigation module (NM). The first uses a stereo head equipped with a pan-tilt mechanism to track a moving target (selected by an operator) and keep it centered in the visual field. Its output, i.e. the 3D position of the person, is fed to the NM, which drives the robot towards the target while avoiding obstacles. For this, a hybrid navigation algorithm is adopted with a reactive part that efficiently reacts to the most recent sensor data, and a deliberative part that generates a globally optimal path to a target destination, such as the person's location. As a peculiarity of the system, there is no feedback from the NM or the robot motion controller (RMC) to the ASVM. While this imparts flexibility in combining the ASVM with a wide range of robot platforms, it puts considerable strain on the ASVM. Indeed, besides the changes in the target dynamics, it has to cope with the robot motion during obstacle avoidance. These disturbances are accommodated via a suitable stochastic dynamic model for the stereo head-target system. Robust tracking is achieved by combining a color-based particle filter with a method to update the color model of the target under changing illumination conditions. The main contributions of this paper lie in (1) devising a robust color-based 3D target tracking method, (2) proposing a hybrid deliberative/reactive navigation scheme, and (3) integrating them on a wheelchair platform for the final goal of person following. Experimental results are presented for ASVM separately and in combination with a wheelchair platform-based implementation of the NM.},
    day = {24},
    keywords = {mobile robot, active vision, stereo, navigation},
    language = {English},
    project = {Mobiniss, ViewFinder},
    publisher = {IOS Press},
    unit= {meca-ras,vub-etro},
    url = {https://cris.vub.be/en/publications/active-stereo-visionbased-mobile-robot-navigation-for-person-tracking(2c2cd28d-2aea-4009-ae01-35448c005050)/export.html},
    }
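The colour-based particle filter at the heart of this tracker can be sketched in one dimension. This is an illustrative reduction, not the paper's implementation: a Gaussian likelihood around a noisy measurement stands in for the colour-histogram score, and all parameters are invented.

```python
import numpy as np

# Hedged sketch of particle-filter tracking: propagate hypotheses with a
# motion model, weight by a likelihood (a stand-in for the paper's
# colour-model score), then resample. Synthetic 1D example.

rng = np.random.default_rng(2)
n = 500
particles = rng.normal(0.0, 1.0, n)     # position hypotheses
truth = 0.0

for step in range(50):
    truth += 0.1                                   # target moves
    particles += rng.normal(0.1, 0.2, n)           # motion model + noise
    z = truth + rng.normal(0, 0.1)                 # noisy "detection"
    w = np.exp(-0.5 * ((particles - z) / 0.1) ** 2)
    w /= w.sum()
    # Systematic resampling: draw particles in proportion to weight.
    idx = np.searchsorted(np.cumsum(w), (rng.random() + np.arange(n)) / n)
    particles = particles[np.minimum(idx, n - 1)]

estimate = particles.mean()
print(abs(estimate - truth))
```

The multi-hypothesis nature of the filter is what gives the paper's tracker its robustness to occlusions: hypotheses far from the current best guess survive a few frames and can take over when the measurement returns.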

  • K. Cauwerts, G. De Cubber, T. Geerinck, W. Mattheyses, I. Ravyse, H. Sahli, M. Shami, P. Soens, W. Verhelst, and P. Verhoeve, “Audio-Visual Signal Processing: Speech and emotion processing for human-machine interaction," in Second annual IEEE BENELUX/DSP Valley Signal Processing Symposium (SPS-DARTS 2006), Brussels, Belgium, 2006.
    [BibTeX] [Download PDF]
    @InProceedings{cauwerts2006audio,
    author = {Cauwerts, Kenny and De Cubber, Geert and Geerinck, Thomas and Mattheyses, W and Ravyse, Ilse and Sahli, Hichem and Shami, M and Soens, P and Verhelst, Werner and Verhoeve, P},
    booktitle = {Second annual IEEE BENELUX/DSP Valley Signal Processing Symposium (SPS-DARTS 2006)},
    title = {Audio-Visual Signal Processing: Speech and emotion processing for human-machine interaction},
    year = {2006},
    address = {Brussels, Belgium},
    unit= {meca-ras},
    url = {https://www.semanticscholar.org/paper/Audio-Visual-Signal-Processing:-Speech-and-emotion-Cauwerts-Cubber/c6cc775bfc9f5528c8c889d32af53566f1ae8415},
    }

2005

  • V. Enescu, G. De Cubber, H. Sahli, E. Demeester, D. Vanhooydonck, and M. Nuttin, “Active stereo vision-based mobile robot navigation for person tracking," in International Conference on Informatics in Control, Automation and Robotics, Barcelona, Spain, 2005, p. 32–39.
    [BibTeX] [Abstract] [Download PDF] [DOI]

    In this paper, we propose a mobile robot architecture for person tracking, consisting of an active stereo vision module (ASVM) and a navigation module (NM). The first tracks the person in stereo images and controls the pan/tilt unit to keep the target in the visual field. Its output, i.e. the 3D position of the person, is fed to the NM, which drives the robot towards the target while avoiding obstacles. As a peculiarity of the system, there is no feedback from the NM or the robot motion controller (RMC) to the ASVM. While this imparts flexibility in combining the ASVM with a wide range of robot platforms, it puts considerable strain on the ASVM. Indeed, besides the changes in the target dynamics, it has to cope with the robot motion during obstacle avoidance. These disturbances are accommodated by generating target location hypotheses in an efficient manner. Robustness against outliers and occlusions is achieved by employing a multi-hypothesis tracking method – the particle filter – based on a color model of the target. Moreover, to deal with illumination changes, the system adaptively updates the color model of the target. The main contributions of this paper lie in (1) devising a stereo, color-based target tracking method using the stereo geometry constraint and (2) integrating it with a robotic agent in a loosely coupled manner.

    @InProceedings{enescu2005active,
    author = {Enescu, Valentin and De Cubber, Geert and Sahli, Hichem and Demeester, Eric and Vanhooydonck, Dirk and Nuttin, Marnix},
    booktitle = {International Conference on Informatics in Control, Automation and Robotics},
    title = {Active stereo vision-based mobile robot navigation for person tracking},
    year = {2005},
    address = {Barcelona, Spain},
    month = sep,
    pages = {32--39},
    abstract = {In this paper, we propose a mobile robot architecture for person tracking, consisting of an active stereo vision module (ASVM) and a navigation module (NM). The first tracks the person in stereo images and controls the pan/tilt unit to keep the target in the visual field. Its output, i.e. the 3D position of the person, is fed to the NM, which drives the robot towards the target while avoiding obstacles. As a peculiarity of the system, there is no feedback from the NM or the robot motion controller (RMC) to the ASVM. While this imparts flexibility in combining the ASVM with a wide range of robot platforms, it puts considerable strain on the ASVM. Indeed, besides the changes in the target dynamics, it has to cope with the robot motion during obstacle avoidance. These disturbances are accommodated by generating target location hypotheses in an efficient manner. Robustness against outliers and occlusions is achieved by employing a multi-hypothesis tracking method - the particle filter - based on a color model of the target. Moreover, to deal with illumination changes, the system adaptively updates the color model of the target. The main contributions of this paper lie in (1) devising a stereo, color-based target tracking method using the stereo geometry constraint and (2) integrating it with a robotic agent in a loosely coupled manner.},
    project = {Mobiniss, ViewFinder},
    doi = {10.3233/ica-2006-13302},
    url = {http://mecatron.rma.ac.be/pub/2005/f969ee9e1169623340aa409f539fddb9c413.pdf},
    unit= {meca-ras,vub-etro}
    }

2004

  • G. De Cubber, S. A. Berrabah, and H. Sahli, “Color-based visual servoing under varying illumination conditions," Robotics and Autonomous Systems, vol. 47, iss. 4, p. 225–249, 2004.
    [BibTeX] [Abstract] [Download PDF] [DOI]

    Visual servoing, or the control of motion on the basis of image analysis in a closed loop, is more and more recognized as an important tool in modern robotics. Here, we present a new model-driven approach to derive a description of the motion of a target object. This method can be subdivided into an illumination invariant target detection stage and a servoing process which uses an adaptive Kalman filter to update the model of the non-linear system. This technique can be applied to any pan-tilt-zoom camera mounted on a mobile vehicle as well as to a static camera tracking moving environmental features.

    @Article{de2004color,
    author = {De Cubber, Geert and Berrabah, Sid Ahmed and Sahli, Hichem},
    journal = {Robotics and Autonomous Systems},
    title = {Color-based visual servoing under varying illumination conditions},
    year = {2004},
    month = jul,
    number = {4},
    pages = {225--249},
    volume = {47},
    abstract = {Visual servoing, or the control of motion on the basis of image analysis in a closed loop, is more and more recognized as an important tool in modern robotics. Here, we present a new model-driven approach to derive a description of the motion of a target object. This method can be subdivided into an illumination invariant target detection stage and a servoing process which uses an adaptive Kalman filter to update the model of the non-linear system. This technique can be applied to any pan-tilt-zoom camera mounted on a mobile vehicle as well as to a static camera tracking moving environmental features.},
    doi = {10.1016/j.robot.2004.03.015},
    publisher = {Elsevier {BV}},
    project = {Mobiniss},
    url = {https://www.sciencedirect.com/science/article/abs/pii/S0921889004000570},
    unit= {meca-ras,vub-etro}
    }
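The Kalman-filter core of the servoing loop described above can be sketched with a constant-velocity model tracking a target's position from noisy detections. This is a plain (non-adaptive) filter on synthetic data; the paper's adaptive noise estimation and colour-constancy stages are not reproduced here.

```python
import numpy as np

# Hedged sketch: constant-velocity Kalman filter tracking a target
# position, standing in for the adaptive filter used in the paper.

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])    # state transition (pos, vel)
H = np.array([[1.0, 0.0]])               # only position is measured
Q = np.diag([1e-4, 1e-3])                # process noise covariance
R = np.array([[0.05 ** 2]])              # measurement noise covariance

x = np.zeros(2)                          # state estimate [pos, vel]
P = np.eye(2)                            # estimate covariance

rng = np.random.default_rng(3)
truth_pos, truth_vel = 0.0, 1.0

for _ in range(100):
    truth_pos += truth_vel * dt
    z = truth_pos + rng.normal(0, 0.05)  # noisy detection
    # Predict.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.array([z]) - H @ x)
    P = (np.eye(2) - K @ H) @ P

print(x)   # estimated position and velocity after 100 steps
```

An adaptive variant, as in the paper, would additionally re-estimate Q and R online from the innovation sequence instead of keeping them fixed.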

2003

  • G. De Cubber, H. Sahli, E. Colon, and Y. Baudoin, “Visual Servoing under Changing Illumination Conditions," in Proc. International Workshop on Attention and Performance in Computer Vision (ICVS03), Graz, Austria, 2003, p. 47–54.
    [BibTeX] [Abstract] [Download PDF]

    Visual servoing, or the control of motion on the basis of image analysis in a closed loop, is more and more recognized as an important tool in modern robotics. In this paper, we present a new model-driven approach to derive a description of the motion of a target object. This method can be subdivided into an illumination invariant target detection stage and a servoing process which uses an adaptive Kalman filter to update the model of the nonlinear system. This technique can be applied to any pan-tilt-zoom camera mounted on a mobile vehicle as well as to a static camera tracking moving environmental features.

    @InProceedings{de2003visual,
    author = {De Cubber, Geert and Sahli, Hichem and Colon, Eric and Baudoin, Yvan},
    booktitle = {Proc. International Workshop on Attention and Performance in Computer Vision (ICVS03)},
    title = {Visual Servoing under Changing Illumination Conditions},
    year = {2003},
    pages = {47--54},
    address = {Graz, Austria},
    abstract = {Visual servoing, or the control of motion on the basis of image analysis in a closed loop, is more and more recognized as an important tool in modern robotics. In this paper, we present a new model-driven approach to derive a description of the motion of a target object. This method can be subdivided into an illumination invariant target detection stage and a servoing process which uses an adaptive Kalman filter to update the model of the nonlinear system. This technique can be applied to any pan-tilt-zoom camera mounted on a mobile vehicle as well as to a static camera tracking moving environmental features.},
    url = {http://mecatron.rma.ac.be/pub/2003/ICVS03_Geert.pdf},
    project = {Mobiniss},
    unit= {meca-ras,vub-etro}
    }

  • G. De Cubber, S. A. Berrabah, and H. Sahli, “A Bayesian Approach for Color Constancy based Visual Servoing," in 11th International Conference on Advanced Robotics, Coimbra, Portugal, 2003.
    [BibTeX] [Download PDF]
    @InProceedings{de2003bayesian,
    author = {De Cubber, Geert and Berrabah, Sid Ahmed and Sahli, Hichem},
    booktitle = {11th International Conference on Advanced Robotics},
    title = {A Bayesian Approach for Color Constancy based Visual Servoing},
    year = {2003},
    address = {Coimbra, Portugal},
    unit= {meca-ras,vub-etro},
    project = {Mobiniss},
    url = {https://www.semanticscholar.org/paper/A-Bayesian-Approach-for-Color-Constancy-based-Cubber-Berrabah/ed5636626e307f2b8d0c5f4fcc79d5d54a9cc639},
    }

2002

  • G. De Cubber, H. Sahli, and F. Decroos, “Sensor Integration on a Mobile Robot," in ISMCR 2002: Proc. 12th Int’l Symp. Measurement and Control in Robotics, Bourges, France, 2002.
    [BibTeX] [Abstract] [Download PDF]

    The purpose of this paper is to show an application of path planning for a mobile pneumatic robot. The robot is capable of searching for a specific target in the scene and navigating towards it, in an a priori unknown environment. To accomplish this task, the robot uses a colour pan-tilt camera and two ultrasonic sensors. As the camera is only used for target tracking, the robot is left with very incomplete sensor data with a high degree of uncertainty. To counter this, a fuzzy logic-based sensor fusion procedure is set up to aid the map building process in constructing a reliable environmental model. The significance of this work is that it shows that the use of fuzzy logic-based fusion and potential field navigation can achieve good results for path planning.

    @InProceedings{de2002sensor,
    author = {De Cubber, Geert and Sahli, Hichem and Decroos, Francis},
    booktitle = {ISMCR 2002: Proc. 12th Int'l Symp. Measurement and Control in Robotics},
    title = {Sensor Integration on a Mobile Robot},
    year = {2002},
    address = {Bourges, France},
    abstract = {The purpose of this paper is to show an application of path planning for a mobile pneumatic robot. The robot is capable of searching for a specific target in the scene and navigating towards it, in an a priori unknown environment. To accomplish this task, the robot uses a colour pan-tilt camera and two ultrasonic sensors. As the camera is only used for target tracking, the robot is left with very incomplete sensor data with a high degree of uncertainty. To counter this, a fuzzy logic-based sensor fusion procedure is set up to aid the map building process in constructing a reliable environmental model. The significance of this work is that it shows that the use of fuzzy logic-based fusion and potential field navigation can achieve good results for path planning.},
    url = {http://mecatron.rma.ac.be/pub/2002/Paper ISMCR'02 - Sensor Integration on a Mobile Robot.pdf},
    project = {Mobiniss},
    unit= {meca-ras,vub-etro}
    }
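The potential-field navigation mentioned in this abstract can be sketched as follows: an attractive force pulls the robot towards the goal, repulsive forces push it away from nearby obstacles, and the robot follows the resulting force direction. Positions, gains, and radii below are illustrative only, not values from the paper.

```python
import numpy as np

# Hedged sketch of potential-field path planning; all parameters are
# invented for illustration.

goal = np.array([5.0, 5.0])
obstacles = [np.array([2.5, 3.2])]       # one obstacle near the path
pos = np.array([0.0, 0.0])

def force(pos):
    """Attractive pull to the goal plus repulsive push from obstacles."""
    f = 1.0 * (goal - pos)                       # attractive term
    for obs in obstacles:
        d = pos - obs
        dist = np.linalg.norm(d)
        if dist < 1.5:                           # obstacle influence radius
            f += 2.0 * d / dist ** 3             # repulsive term
    return f

for _ in range(300):
    f = force(pos)
    # Cap the step length so the robot moves smoothly along the field.
    pos = pos + 0.05 * f / max(np.linalg.norm(f), 1.0)

print(np.linalg.norm(pos - goal))   # remaining distance to the goal
```

Potential fields are purely reactive and can get trapped in local minima; that limitation is one reason later work (e.g. the 2007 Robudem paper above) combines reactive behaviours with a deliberative planner.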

  • G. De Cubber, H. Sahli, H. Ping, and E. Colon, “A Colour Constancy Approach for Illumination Invariant Colour Target Tracking," in IARP Workshop on Robots for Humanitarian Demining, Vienna, Austria, 2002.
    [BibTeX] [Abstract] [Download PDF]

    Many robotic agents use color vision to retrieve quality information about the environment. In this work, we present a visual servoing technique, where vision is the primary sensing modality and sensing is based upon the analysis of the perceived visual information. We describe how colored targets can be identified and how their position and motion can be estimated quickly and reliably. The visual servoing procedure is essentially a four-stage process, with color target identification, motion parameter estimation, target tracking and target position estimation. These individual parts add up to a global vision system enabling precise positioning for a demining robot.

    @InProceedings{de2002colour,
    author = {De Cubber, Geert and Sahli, Hichem and Ping, Hong and Colon, Eric},
    booktitle = {IARP Workshop on Robots for Humanitarian Demining},
    title = {A Colour Constancy Approach for Illumination Invariant Colour Target Tracking},
    year = {2002},
    address = {Vienna, Austria},
    abstract = {Many robotic agents use color vision to retrieve quality information about the environment. In this work, we present a visual servoing technique, where vision is the primary sensing modality and sensing is based upon the analysis of the perceived visual information. We describe how colored targets can be identified and how their position and motion can be estimated quickly and reliably. The visual servoing procedure is essentially a four-stage process, with color target identification, motion parameter estimation, target tracking and target position estimation. These individual parts add up to a global vision system enabling precise positioning for a demining robot.},
    url = {http://mecatron.rma.ac.be/pub/2002/Paper IARP - Geert De Cubber.pdf},
    project = {Mobiniss},
    unit= {meca-ras,vub-etro}
    }

2001

  • G. De Cubber, “Integration of sensors on a mobile robot," Master Thesis, 2001.
    [BibTeX] [Abstract] [Download PDF]

    The final goal of this project is to add some sort of intelligence to an existing pneumatic mobile robot and, by doing this, to make the robot capable of walking towards a certain designated target in a complex and unknown environment with multiple obstacles, without any user interaction. To realise this desired goal, some sensory equipment was added to the robot, in particular two ultrasonic sensors and a camera. This camera has the specific task of following the target object and returning its position, whereas the ultrasonic sensors have the more general task of retrieving environmental information. This information, coming from the different sensors, is brought together and fused in an intelligent way by a sensor fusion procedure based upon the principles of fuzzy logic. In order to be able to navigate in its environment, the robot makes use of the acquired sensory data to build a map – more specifically a potential field map – as a means of representing its surroundings. This map is used to plan the path to be followed and the actions to be undertaken. A control program was written in order to gather and to coordinate all these different functions, making the robot capable of reaching the goals set up initially.

    @MastersThesis{de2001integration,
    author = {De Cubber, Geert},
    school = {Vrije Universiteit Brussel},
    title = {Integration of sensors on a mobile robot},
    year = {2001},
    abstract = {The final goal of this project is to add some sort of intelligence to an existing pneumatic mobile robot and, by doing this, to make the robot capable of walking towards a certain designated target in a complex and unknown environment with multiple obstacles, without any user interaction. To realise this desired goal, some sensory equipment was added to the robot, in particular two ultrasonic sensors and a camera. This camera has the specific task of following the target object and returning its position, whereas the ultrasonic sensors have the more general task of retrieving environmental information. This information, coming from the different sensors, is brought together and fused in an intelligent way by a sensor fusion procedure based upon the principles of fuzzy logic. In order to be able to navigate in its environment, the robot makes use of the acquired sensory data to build a map – more specifically a potential field map – as a means of representing its surroundings. This map is used to plan the path to be followed and the actions to be undertaken. A control program was written in order to gather and to coordinate all these different functions, making the robot capable of reaching the goals set up initially.},
    publisher = {Vrije Universiteit Brussel},
    url = {http://mecatron.rma.ac.be/pub/2001/ThesisText (2).pdf},
    unit= {meca-ras,vub-etro}
    }