Last updated on November 13, 2023. This conference program is tentative and subject to change.
Technical Program for Tuesday, November 14, 2023

Tu1C Contributed Paper, Convention Hall
Human-Machine Interaction & Cooperation
Chair: Sato, Noritaka | Nagoya Institute of Technology

08:30-08:45, Paper Tu1C.1
Requirements and Challenges for Autonomy and Assistance Functions for Ground Rescue Robots in Reconnaissance Missions
Daun, Kevin | Technische Universität Darmstadt |
von Stryk, Oskar | Technische Universität Darmstadt |
Keywords: Rescue robotics, Safety standards for robots and systems, Performance evaluation and benchmarking
Abstract: While rescue robots are becoming more established as part of disaster response, they are typically teleoperated in actual disasters. (Autonomous) assistance functions can improve performance, extend functionality, and reduce operator overload. Understanding the relevant requirements is necessary to ensure that the developed capabilities address real-world needs. Previous analyses focused on general aspects of rescue robots, leaving a gap in understanding the requirements for (autonomous) assistance functions. We address this gap and provide a detailed, evidence-driven analysis of application requirements and research challenges for (autonomous) assistance functions for rescue robots in reconnaissance missions. We base our analysis on a comprehensive model of technology acceptance and consider reports of past deployments, related analyses, our own experience from deploying robots, and insights from workshops with first responders. We define the relevant aspects of an integrated function capability and analyze general and specific requirements for assistance functions and autonomy. We relate our results to current assistance functions and identify several research challenges. A key insight is the need for an increased research focus on novel approaches that combine the complementary capabilities of human operators and robotic assistance functions.

08:45-09:00, Paper Tu1C.2
Hector UI: A Flexible Human-Robot User Interface for (Semi-)Autonomous Rescue and Inspection Robots
Fabian, Stefan Manuel | Technische Universität Darmstadt
von Stryk, Oskar | Technische Universität Darmstadt |
Keywords: Human-robot interaction, Mobile robotics, Rescue robotics
Abstract: The remote human operator's user interface (UI) is an important link in making the robot an efficient extension of the operator's perception and action. In rescue applications, several studies have investigated the design of operator interfaces based on observations during major robotics competitions or field deployments, and guidelines for good interface design have been empirically identified from this research. However, investigations of the UIs of teams participating in competitions are often based on external observation during use, which may miss relevant requirements such as UI flexibility. In this work, we present an open-source, flexibly configurable user interface based on established guidelines and its exemplary use on wheeled, tracked, and walking robots. We explain the design decisions and cover the insights gained during its highly successful application in multiple robotics competitions and evaluations. The presented UI can be adapted to other robots with little effort and is available as open source.

09:00-09:15, Paper Tu1C.3
ROBO-MOLE: An Assistant Robot System for First Responders in Subterranean Scenarios
Didari, Hamid | Graz University of Technology |
Reitbauer, Eva | Graz University of Technology |
Schmied, Christoph | Graz University of Technology |
Eder, Matthias | Graz University of Technology |
Schraml, Stephan | AIT Austrian Institute of Technology
Aourik, Nizar | IQSOFT GmbH |
Feilhauer, Marius | IQSOFT GmbH |
Kastner, Rene | Disaster Competence Network Austria |
Hieslmayr, Roland | Professional Fire Brigade Linz |
Steinbauer-Wagner, Gerald | Graz University of Technology |
Keywords: Rescue robotics, Localization, Mapping, and Navigation, Robotics and automation for safety and security
Abstract: Disasters in tunnels and underground infrastructure pose significant challenges to responders due to the confined and hazardous nature of these environments. Robots are seen as valuable tools for assisting first responders in such scenarios by providing situational awareness and allowing them to maintain a safe distance from disaster sites during initial reconnaissance. However, using robots with automated skills in confined environments like tunnels comes with its own set of challenges, mainly regarding perception, localization, and navigation. This study presents a comprehensive support system for challenging underground operations, utilizing a robot equipped with situation-awareness sensors and automated navigation skills that address the GNSS-denied and featureless environment. The developed system also includes a communication framework as well as a visualization tool that provides a detailed 3D environment map, monitors temperature, and detects gas and fire sources, enhancing safety and efficiency during subterranean activities. The system was evaluated in simulated rescue scenarios in a highway tunnel, where responders used the system to find victims and to localize a gas leak after an accident in the tunnel.

09:15-09:30, Paper Tu1C.4
Teleoperation for UAVs in Search and Rescue (SAR) - Current State and Future Outlook
Ladig, Robert | University of Southern Denmark (SDU)
Keywords: Aerial robotics, Human-robot interaction, Rescue robotics
Abstract: This paper offers a brief overview of the history of small-scale aerial platform teleoperation, its current state, and potential alternatives to traditional teleoperation techniques. We evaluate the pros and cons of augmented reality, virtual reality, and mixed reality technologies, based on our own and related research, as well as observations from a live search and rescue exercise. Additionally, we emphasize the potential benefits and challenges that new teleoperation systems could bring to search and rescue operations.

09:30-09:45, Paper Tu1C.5
A Verification of a Teleoperation Interface for Rescue Robots Using a Virtual Reality Controller with a Door-Opening Task
Kanazawa, Kotaro | Nagoya Institute of Technology |
Sato, Noritaka | Nagoya Institute of Technology |
Morita, Yoshifumi | Nagoya Institute of Technology |
Keywords: Human-robot interaction, Rescue robotics
Abstract: There is a need for a rescue-robot teleoperation interface that requires less training and reduces the operator's mental burden. In recent years, numerous robot teleoperation interfaces have been developed using commercially available VR headsets and VR controllers. In this study, to investigate the usefulness of VR interfaces in tasks involving both movement and manipulation, a VR interface capable of both was proposed and validated by users in a door-opening task. The validation results confirm that the VR interface yields shorter manipulator working times than the gamepad. Additionally, in the NASA-TLX mental workload assessment, mental demand, temporal demand, and effort tended to be lower for manipulator operation with the VR interface. However, the gamepad was also found to be a suitable input device for driving, so combining both devices, or switching between them depending on the situation, may reduce the operator's workload.

09:45-10:00, Paper Tu1C.6
Characterizing Evacuee Behavior During a Robot-Guided Evacuation
Nayyar, Mollik | The Pennsylvania State University
Paik, Ghanghoon | The Pennsylvania State University
Yuan, Zhenyuan | The Pennsylvania State University
Zheng, Tongjia | University of Notre Dame
Zhu, Minghui | The Pennsylvania State University
Lin, Hai | University of Notre Dame
Wagner, Alan Richard | The Pennsylvania State University
Keywords: Rescue robotics, Multi-robot systems, Human-robot interaction
Abstract: This research explores evacuee responses to robot-guided evacuation. During a human-robot experiment, we trigger a smoke alarm on unsuspecting subjects. The robot offers to lead the subjects to a distant, less familiar exit. We capture the subjects' decision to follow, their evacuation characteristics, and their impressions of the robot across individual and group conditions and two different approaches to designing evacuation robots. We ran a total of 112 subjects across the various conditions and find that, overall, 95.28% of participants followed the robot's guidance. We further show that, although the amount of time evacuees wait before evacuating does not differ between the conditions, the total evacuation time is lower when a multi-robot approach is taken. We also present a series of preliminary results from related exploratory studies.

Tu2C Contributed Paper, Convention Hall
Recognition and Perception
Chair: Kimura, Tetsuya | Nagaoka University of Technology

13:00-13:15, Paper Tu2C.1
Disaster Area Recognition from Aerial Images with Complex-Shape Class Detection
González Navarro, Rubén | University of Málaga
Lin-Yang, Dahui | University of Málaga
Vazquez-Martin, Ricardo | University of Málaga
García-Cerezo, Alfonso | University of Málaga
Keywords: Rescue robotics, Aerial robotics, Perception for navigation, hazard detection, and victim identification
Abstract: This paper presents a convolutional neural network (CNN) model for event detection from Unmanned Aerial Vehicles (UAVs) in disaster environments. The model leverages the YOLOv5 network, specifically adapted to aerial images and optimized for detecting Search and Rescue (SAR) related classes for disaster area recognition. These SAR-related classes are person, vehicle, debris, fire, smoke, and flooded areas. Among these, the latter four pose unique challenges due to their lack of discernible edges and/or shapes in aerial imagery, making their accurate detection, and the evaluation of that detection, particularly intricate. The training methodology involves adapting the pre-trained model to aerial images and subsequently optimizing it for SAR scenarios. Both stages were conducted using public datasets, with the required image labeling for the SAR-related classes. An analysis of the results demonstrates the model's performance and discusses the intricacies of the complex-shape classes. The model and the SAR datasets are publicly available.

13:15-13:30, Paper Tu2C.2
Simultaneous 3D Reconstruction and Vegetation Classification Utilizing a Multispectral Stereo Camera
Vollet, Johannes | Nuremberg Institute of Technology Georg Simon Ohm |
May, Stefan | Nuremberg Institute of Technology Georg Simon Ohm |
Nuechter, Andreas | University of Würzburg |
Keywords: Sensing and sensor fusion, Perception for navigation, hazard detection, and victim identification, Field robotics
Abstract: Obstacle detection is crucial for ensuring the safety of autonomous robots and their surroundings in unstructured outdoor environments. Objects with minimal lateral dimensions can pose risks to the robot or serve as important elements in the infrastructure it operates in. Detecting these structures becomes particularly challenging when tall vegetation is present. Distinguishing between soft, traversable objects, such as tufts of grass, and potentially lethal solid obstacles is paramount to a robot's ability to operate. This paper presents a novel approach that focuses on point cloud generation and vegetation identification to facilitate the safe navigation of autonomous outdoor robots. Our approach uses a single multispectral stereo camera system that employs a novel stereo matching strategy based on binary descriptors for spectrally non-identical image pairs.

13:30-13:45, Paper Tu2C.3
SAR Nets: An Evaluation of Semantic Segmentation Networks with Attention Mechanisms for Search and Rescue Scenes
Salas-Espinales, Andrés | Universidad Técnica de Manabí
Vazquez-Martin, Ricardo | University of Málaga
García-Cerezo, Alfonso | University of Málaga
Mandow, Anthony | University of Málaga
Keywords: Rescue robotics, Artificial Intelligence, Autonomous search and rescue
Abstract: This paper evaluates four semantic segmentation models on Search and Rescue (SAR) scenes captured from ground vehicles. Two base models (U-Net and PSPNet) are used to compare different approaches to semantic segmentation, such as skip connections between encoder and decoder stages and a pyramid pooling module. The better-performing base model is then modified with two attention mechanisms to analyze their performance and computational cost. We conduct a quantitative and qualitative evaluation using our SAR dataset, which defines eleven classes in disaster scenarios. The results demonstrate that the attention mechanisms increase model performance while minimally affecting computation time.

13:45-14:00, Paper Tu2C.4
HabitatDyn Dataset: Salient Object Detection to Kinematics Estimation
Shen, Zhengcheng | TU Berlin |
Gao, Yi | TU Berlin |
Kästner, Linh | T-Mobile, TU Berlin |
Lambrecht, Jens | TU Berlin
Keywords: Sensing and sensor fusion, Performance evaluation and benchmarking, Artificial Intelligence
Abstract: The advancement of computer vision and machine learning has made datasets crucial for further research and applications. However, the development of robots with advanced recognition capabilities is hindered by the lack of appropriate datasets. Existing image and video processing datasets cannot accurately depict observations from a moving robot, and they do not contain the kinematics information necessary for robotic tasks. Synthetic data, on the other hand, are cost-effective to create and offer greater flexibility for adapting to various applications; hence, they are widely utilized in both research and industry. In this paper, we propose the HabitatDyn dataset, which contains synthetic RGB videos, semantic labels, and depth information, as well as kinematics information. HabitatDyn was created from the perspective of a mobile robot with a moving camera and contains 30 scenes featuring six different types of moving objects with varying velocities. To demonstrate the usability of our dataset, two existing segmentation algorithms are used for evaluation, and an approach to estimate the distance between an object and the camera is implemented based on these methods and evaluated on the dataset. With the availability of this dataset, we aspire to foster further advancements in mobile robotics, leading to more capable and intelligent robots that can navigate and interact with their environments more effectively. The code is publicly available at https://github.com/ignc-research/HabitatDyn.

14:00-14:15, Paper Tu2C.5
Autonomous Navigation of Rescue Robot on International Standard Rough Terrain by Using Deep Reinforcement Learning
Matsuo, Hayato | Nagoya Institute of Technology |
Sato, Noritaka | Nagoya Institute of Technology |
Morita, Yoshifumi | Nagoya Institute of Technology |
Keywords: Rescue robotics, Autonomous search and rescue, Robotics simulation
Abstract: Robots that perform search and rescue operations at disaster sites are called rescue robots. These robots should be able to navigate autonomously because remote control is difficult. The objective of this research is to enable rescue robots to navigate autonomously on international standard rough terrain. To achieve this objective, we built a simulated learning environment in the Unity game engine and conducted deep reinforcement learning using ML-Agents, Unity's machine learning framework. Comparative verification against remote control showed that autonomous navigation was superior in both completion time and success rate. This result was found to stem from differences in the motion of the robot.

14:15-14:30, Paper Tu2C.6
Improving Drone Imagery for Computer Vision/Machine Learning in Wilderness Search and Rescue
Murphy, Robin | Texas A&M University
Manzini, Thomas | Texas A&M University
Keywords: Aerial robotics, Autonomous search and rescue, Perception for navigation, hazard detection, and victim identification
Abstract: This paper describes gaps in the acquisition of drone imagery that impair its use with computer vision/machine learning (CV/ML) models and makes five recommendations for maximizing image suitability for CV/ML post-processing. It describes a notional work process for the use of drones in wilderness search and rescue incidents. The large volume of data from the wide-area search phase offers the greatest opportunity for CV/ML techniques because of the large number of images that would otherwise have to be inspected manually. The 2023 Wu-Murad search in Japan, one of the largest missing-person searches conducted in that area, serves as a case study. Although drone teams conducting wide-area searches may not know in advance whether the data they collect will be used for CV/ML post-processing, there are data collection procedures, supported by automated collection software, that can improve the search in general. If the drone teams do expect to use CV/ML, they can exploit knowledge about the model to further optimize flights.