|
TuAT1 |
Room T1 |
Computer Vision for Automation |
Regular session |
Chair: Gammell, Jonathan | University of Oxford |
|
10:00-10:15, Paper TuAT1.1 | |
>Proactive Estimation of Occlusions and Scene Coverage for Planning Next Best Views in an Unstructured Representation |
> Video Attachment
|
|
Border, Rowan | University of Oxford |
Gammell, Jonathan | University of Oxford |
Keywords: Computer Vision for Automation, Visual Servoing, Computer Vision for Other Robotic Applications
Abstract: The process of planning views to observe a scene is known as the Next Best View (NBV) problem. Approaches often aim to obtain high-quality scene observations while reducing the number of views, travel distance and computational cost. Considering occlusions and scene coverage can significantly reduce the number of views and travel distance required to obtain an observation. Structured representations (e.g., a voxel grid or surface mesh) typically use raycasting to evaluate the visibility of represented structures but this is often computationally expensive. Unstructured representations (e.g., point density) avoid the computational overhead of maintaining and raycasting a structure imposed on the scene but as a result do not proactively predict the success of future measurements. This paper presents proactive solutions for handling occlusions and considering scene coverage with an unstructured representation. Their performance is evaluated by extending the density-based Surface Edge Explorer (SEE). Experiments show that these techniques allow an unstructured representation to observe scenes with fewer views and shorter distances while retaining high observation quality and low computational cost.
|
|
10:15-10:30, Paper TuAT1.2 | |
>Indirect Object-To-Robot Pose Estimation from an External Monocular RGB Camera |
> Video Attachment
|
|
Tremblay, Jonathan | Nvidia |
Tyree, Stephen | NVIDIA |
Mosier, Terry | NVIDIA |
Birchfield, Stan | NVIDIA Corporation |
Keywords: Perception for Grasping and Manipulation, Computer Vision for Automation
Abstract: We present a robotic grasping system that uses a single external monocular RGB camera as input. The object-to-robot pose is computed indirectly by combining the output of two neural networks: one that estimates the object-to-camera pose, and another that estimates the robot-to-camera pose. Both networks are trained entirely on synthetic data, relying on domain randomization to bridge the sim-to-real gap. Because the latter network performs online camera calibration, the camera can be moved freely during execution without affecting the quality of the grasp. Experimental results analyze the effect of camera placement, image resolution, and pose refinement in the context of grasping several household objects. We also present results on a new set of 28 textured household toy grocery objects, which have been selected to be accessible to other researchers. To aid reproducibility of the research, we offer 3D scanned textured models, along with pre-trained weights for pose estimation.
|
|
10:30-10:45, Paper TuAT1.3 | |
>Peg-In-Hole Using 3D Workpiece Reconstruction and CNN-Based Hole Detection |
> Video Attachment
|
|
Nigro, Michelangelo | University of Basilicata |
Sileo, Monica | University of Basilicata |
Pierri, Francesco | Università Della Basilicata |
Genovese, Katia | University of Basilicata |
Bloisi, Domenico | University of Basilicata |
Caccavale, Fabrizio | Università Degli Studi Della Basilicata |
Keywords: Compliant Assembly, Computer Vision for Manufacturing, Compliance and Impedance Control
Abstract: This paper presents a method to cope with autonomous assembly tasks in the presence of uncertainties. To this aim, a Peg-in-Hole operation is considered, where the target workpiece position is unknown and the peg-hole clearance is small. Deep learning based hole detection and 3D surface reconstruction techniques are combined for accurate workpiece localization. In detail, the hole is detected by using a convolutional neural network (CNN), while the target workpiece surface is reconstructed via 3D-Digital Image Correlation (3D-DIC). Peg insertion is performed via admittance control that confers the suitable compliance to the peg. Experiments on a collaborative manipulator confirm that the proposed approach can be promising for achieving a better degree of autonomy for a class of robotic tasks in partially structured environments.
|
|
10:45-11:00, Paper TuAT1.4 | |
>Automated Folding of a Deformable Thin Object through Robot Manipulators |
> Video Attachment
|
|
Cui, Zhenxi | The Hong Kong Polytechnic University |
Huang, Kaicheng | The Hong Kong Polytechnic University |
Lu, Bo | The Chinese University of Hong Kong |
Chu, Henry | The Hong Kong Polytechnic University |
Keywords: Computer Vision for Automation, Visual Servoing, Service Robots
Abstract: This paper presents a model-free approach to automate folding of a deformable object with robot manipulators, where its surface was labeled with markers to facilitate vision-based control and alignment. Since performing the task involves nonconvex or nonlinear terms, linearization was first performed to approximate the problem. By using the Levenberg–Marquardt algorithm, the task of folding a deformable thin object can be reformulated as a convex optimization problem. The mapping relationship between the motions of markers on the image and the joint inputs of the robot manipulator was evaluated through a Jacobian matrix. To account for the uncertainty in the matrix due to the deformable object, a two-stage evaluation scheme, which consists of an approximate-rigidity rule and a Broyden-update rule, was employed. The performance and robustness of the proposed approach were examined through simulation using the Bullet simulator. The video of the simulation can be retrieved from the attachment. The results confirm that the thin object can be precisely folded together based on different markers labeled on the surface.
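The Broyden-update step of the abstract's two-stage Jacobian estimation can be illustrated on its own. The sketch below is a generic rank-one Broyden update of an estimated image Jacobian; variable names and the gain `lam` are illustrative, not the authors' implementation:

```python
import numpy as np

def broyden_update(J, dq, ds, lam=1.0):
    """Rank-one Broyden update of an estimated image Jacobian J.

    J  : (m, n) current estimate mapping joint motion to marker motion
    dq : (n,) observed joint displacement
    ds : (m,) observed marker displacement in the image
    lam: update gain in (0, 1]
    """
    denom = dq @ dq
    if denom < 1e-12:          # no joint motion, nothing to learn from
        return J
    residual = ds - J @ dq     # prediction error of the current estimate
    return J + lam * np.outer(residual, dq) / denom
```

After one update, the new estimate reproduces the observed motion exactly along the direction of `dq`, which is the defining property of the Broyden step.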
|
|
11:00-11:15, Paper TuAT1.5 | |
>Uncertainty Aware Texture Classification and Mapping Using Soft Tactile Sensors |
|
Amini, Alexander | Massachusetts Institute of Technology |
Lipton, Jeffrey | University of Washington |
Rus, Daniela | MIT |
Keywords: Computer Vision for Automation
Abstract: Spatial mapping of surface roughness is a critical enabling technology for automating adaptive sanding operations. We leverage GelSight sensors to convert the problem of surface roughness measurement into a vision classification problem. By combining GelSight sensors with Optitrack positioning systems we attempt to develop an accurate spatial mapping of surface roughness comparable to human touch, the current state of the art for large-scale manufacturing. To perform the classification, we propose the use of Bayesian neural networks in conjunction with uncertainty-aware prediction. We compare the sensor and network with a human baseline for both absolute and relative texture classification. To establish a baseline, we collected performance data from humans on their ability to classify 60, 120, and 180 grit sanded pine boards. Our results showed that the probabilistic network performs at the level of human touch for absolute and relative classifications. The Bayesian approach enables establishing a confidence bound on our predictions. We were able to integrate the sensor with Optitrack to provide a spatial map of sanding grit applied to pine boards. From this result, we can conclude that GelSight with Bayesian neural networks can learn accurate representations for sanding, and could be a significant enabling technology for closed-loop robotic sanding operations.
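The uncertainty-aware prediction described above can be sketched generically: averaging several stochastic forward passes (e.g. with dropout left active at test time) yields class probabilities whose predictive entropy serves as a confidence measure. This is a minimal sketch of the general technique, not the authors' network:

```python
import numpy as np

def predictive_uncertainty(stochastic_forward, x, n_samples=50):
    """Monte-Carlo estimate of class probabilities and predictive entropy.

    stochastic_forward(x) must return one softmax vector per call, with
    randomness (e.g. dropout) still active at inference time.
    """
    probs = np.mean([stochastic_forward(x) for _ in range(n_samples)], axis=0)
    entropy = -np.sum(probs * np.log(probs + 1e-12))
    return probs, entropy
```

A prediction whose entropy exceeds a chosen threshold can then be rejected or deferred, which is how a confidence bound on the classification is obtained.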
|
|
11:15-11:30, Paper TuAT1.6 | |
>Estimating Motion Codes from Demonstration Videos |
|
Alibayev, Maxat | University of South Florida |
Paulius, David Andres | University of South Florida |
Sun, Yu | University of South Florida |
Keywords: Computer Vision for Automation, Imitation Learning, Visual Learning
Abstract: A motion taxonomy can encode manipulations as a binary-encoded representation, which we refer to as motion codes. These motion codes innately represent a manipulation action in an embedded space that describes the motion's mechanical features, including contact and trajectory type. The key advantage of using motion codes for embedding is that motions can be more appropriately defined with robotic-relevant features, and their distances can be more reasonably measured using these motion features. In this paper, we develop a deep learning pipeline to extract motion codes from demonstration videos in an unsupervised manner so that knowledge from these videos can be properly represented and used for robots. Our evaluations show that motion codes can be extracted from demonstrations of action in the EPIC-KITCHENS dataset.
|
|
11:30-11:45, Paper TuAT1.7 | |
>HDR Reconstruction Based on the Polarization Camera |
|
Wu, Xuesong | National University of Defense Technology |
Zhang, Hong | University of Alberta |
Hu, Xiaoping | National University of Defense Technology |
Shakeri, Moein | University of Alberta |
Chen, Fan | National University of Defense Technology |
Ting, Juiwen | University of Alberta |
Keywords: Computer Vision for Other Robotic Applications, Computational Geometry, Computer Vision for Automation
Abstract: The recent development of on-chip micropolarizer technology has made it possible to acquire, with the same ease of operation as a conventional camera, spatially aligned and temporally synchronized polarization images in four orientations simultaneously. This development has created new opportunities for interesting applications including those in robotics. In this paper, we investigate the use of this sensor technology in high-dynamic-range (HDR) imaging. Specifically, observing that natural light can be attenuated differently by varying the direction of the polarization filter, we treat the multiple images captured by the polarization camera as a set captured at different exposure times, useful for the reconstruction of an HDR image. In our approach, we first study the radiometric model of the polarization camera, and relate the polarizer direction and the degree and angle of polarization of light to the exposure time of a pixel in the polarization images. Subsequently, by applying the standard radiometric calibration procedure of a camera, we recover the camera response function. With multiple polarization images at known pixel-specific exposure times, we can then proceed to estimate the irradiance maps from the images and generate an HDR image. Two datasets are created to evaluate our approach, and experimental results show that the dynamic range achieved by our approach can be increased by an amount dependent on the light's polarization. We also use two robotics experiments on feature matching and visual odometry to demonstrate the potential benefit of this increased dynamic range.
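The merging step, which treats the four polarization images as differently exposed measurements, can be sketched with a standard weighted log-irradiance average. Assumptions in this sketch: the camera response function has already been inverted, and the per-image effective exposures are known. It illustrates the general HDR merge, not the paper's exact procedure:

```python
import numpy as np

def merge_hdr(images, exposures):
    """Weighted log-average irradiance from images at known exposure times.

    images    : float arrays in [0, 1], already linearized by inverting the
                camera response function
    exposures : per-image effective exposure; for a polarization camera this
                is derived from the polarizer angle and the degree/angle of
                polarization rather than from the shutter
    """
    num = np.zeros_like(images[0])
    den = np.zeros_like(images[0])
    for img, t in zip(images, exposures):
        w = 1.0 - np.abs(2.0 * img - 1.0)   # hat weight: distrust extremes
        num += w * (np.log(img + 1e-8) - np.log(t))
        den += w
    return np.exp(num / np.maximum(den, 1e-8))
```

The hat-shaped weight downweights pixels near saturation or near the noise floor, so each irradiance estimate is dominated by the best-exposed measurement of that pixel.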
|
|
TuAT2 |
Room T2 |
Manufacturing and Logistics |
Regular session |
Chair: Arpenti, Pierluigi | CREATE Consortium |
Co-Chair: Rocco, Paolo | Politecnico Di Milano |
|
10:00-10:15, Paper TuAT2.1 | |
>Zero-Tuning Grinding Process Methodology of Cyber-Physical Robot System |
|
Yang, Hsuan-Yu | National Taiwan University |
Shih, Chih-Hsuan | National Taiwan University |
Lo, Yuan Chieh | Industrial Technology Research Institute |
Lian, Feng-Li | National Taiwan University |
Keywords: Industrial Robots, Intelligent and Flexible Manufacturing, AI-Based Methods
Abstract: Industrial robots play potentially important roles in labor-intensive and high-risk jobs. For example, typical industrial robots have been used in grinding processes. However, the automatic grinding process by robots is complex because it still relies on skillful engineers to adaptively adjust several key parameters, and achieving better grinding quality can take considerable time and effort. Hence, this paper proposes a new framework for a cyber-physical robot system with a zero-tuning methodology that can automatically optimize the process parameters of a robotic grinding process according to the desired quality. To overcome the gaps between the real world and simulation that lead to uncertainty, a proper system calibration helps generate real-environment positions precisely, and a cloud database is constructed to simultaneously record the relevant data during the grinding process. The proposed zero-tuning methodology combines a neural network (NN) model with a genetic algorithm (GA) and is designed to generate the best combination of the corresponding parameters to meet the desired quality. Experimental results show that the average error of the output result is 8.93%. Compared with a CNC machine, our solution shows greater potential for application in the plumbing industry.
|
|
10:15-10:30, Paper TuAT2.2 | |
>An External Stabilization Unit for High-Precision Applications of Robot Manipulators |
> Video Attachment
|
|
Berninger, Tobias Franz Christian | TU Munich |
Slimak, Tomas | TU Munich |
Weber, Tobias | Boeing Research & Technology |
Rixen, Daniel | Technische Universität München |
Keywords: Automation at Micro-Nano Scales, Actuation and Joint Mechanisms, Industrial Robots
Abstract: Because of their large workspace, robot manipulators have the potential to be used for high-precision non-contact manufacturing processes, such as laser cutting or welding, on large complex work pieces. However, most industrial manipulators are unable to meet the necessary accuracy requirements. Mainly because of their flexible structures, they are subject to point-to-point positioning errors and also vibration errors on a smaller scale. The vibration issues are especially hard to deal with. Many published solutions propose to modify the robot's own control system to deal with these problems. However, most modern control techniques require high-fidelity models of the underlying system dynamics, which are quite difficult to obtain for robot manipulators. In this work, we propose an external stabilization unit with an additional set of actuators/sensors to stabilize the process tool, similar to Optical Image Stabilization systems. We show that, because of collocated control, a model of the robot's own dynamic behavior is not needed to achieve high tracking accuracy. We also provide testing results of a prototype stabilizing a dummy tool in two degrees of freedom on a UR10 robot, which reduced its tracking error by two orders of magnitude, to below 20 micrometers.
|
|
10:30-10:45, Paper TuAT2.3 | |
>CUHK-AHU Dataset: Promoting Practical Self-Driving Applications in the Complex Airport Logistics, Hill and Urban Environments |
|
Chen, Wen | The Chinese University of Hong Kong |
Liu, Zhe | University of Cambridge |
Zhao, Hongchao | The Chinese University of Hong Kong |
Zhou, Shunbo | The Chinese University of Hong Kong |
Li, Haoang | The Chinese University of Hong Kong |
Liu, Yunhui | Chinese University of Hong Kong |
Keywords: Industrial Robots, SLAM, Mapping
Abstract: This paper presents a novel dataset targeting three types of challenging environments for autonomous driving, i.e., the industrial logistics environment, the undulating hill environment and the mixed complex urban environment. To the best of the authors' knowledge, a similar dataset has not been published among existing public datasets, especially for the logistics environment collected in the functioning Hong Kong Air Cargo Terminal (HACT). Structural changes often appear suddenly in the airport logistics environment due to the frequent movement of goods in and out. In the structureless and noisy hill environment, non-flat-plane movement is usual. The mixed complex urban environment includes highly dynamic residential blocks, sloped roads and highways in a single collection. The presented dataset includes LiDAR, image, IMU and GPS data collected by repeatedly driving along several paths to capture the structural changes, the illumination changes and the different degrees of undulation of the roads. Baseline trajectories estimated by Simultaneous Localization and Mapping (SLAM) are provided.
|
|
10:45-11:00, Paper TuAT2.4 | |
>A Flexible Robotic Depalletizing System for Supermarket Logistics |
> Video Attachment
|
|
Caccavale, Riccardo | Università Di Napoli "Federico II" |
Arpenti, Pierluigi | CREATE Consortium |
Paduano, Gianmarco | Scuola Politecnica E Delle Scienze Di Base Federico II Di Napoli |
Fontanelli, Giuseppe Andrea | University of Naples Federico II |
Lippiello, Vincenzo | University of Naples FEDERICO II |
Villani, Luigi | Univ. Napoli Federico II |
Siciliano, Bruno | Univ. Napoli Federico II |
Keywords: Logistics, Intelligent and Flexible Manufacturing, AI-Based Methods
Abstract: Depalletizing robotic systems are commonly deployed to automate and speed up parts of logistic processes. Despite this, the necessity of adapting preexisting logistic processes to the automatic systems often prevents the application of such robotic solutions in small businesses such as supermarkets. In this work we propose a robotic depalletizing system designed to be easily integrated into supermarket logistic processes. The system has to schedule, monitor and adapt the depalletizing process, considering both on-line perceptual information from non-invasive sensors and constraints provided by the high-level management system or by a supervising user. We describe the overall system, discussing two case studies in the context of a supermarket logistic process. We show how the proposed system can manage multiple depalletizing strategies and multiple logistic requests.
|
|
11:00-11:15, Paper TuAT2.5 | |
>A Reconfigurable Gripper for Robotic Autonomous Depalletizing in Supermarket Logistics |
> Video Attachment
|
|
Fontanelli, Giuseppe Andrea | University of Naples Federico II |
Paduano, Gianmarco | Scuola Politecnica E Delle Scienze Di Base Federico II Di Napoli |
Caccavale, Riccardo | Università Di Napoli "Federico II" |
Arpenti, Pierluigi | CREATE Consortium |
Lippiello, Vincenzo | University of Naples FEDERICO II |
Villani, Luigi | Univ. Napoli Federico II |
Siciliano, Bruno | Univ. Napoli Federico II |
Keywords: Logistics, Grippers and Other End-Effectors, Mechanism Design
Abstract: Automatic depalletizing is becoming a widely applied practice in some warehouses to automate and speed up logistics. On the other hand, the necessity of adapting preexisting logistic lines to a custom automatic system can limit the application of robotic solutions in smaller facilities such as supermarkets. In this work, we tackle this issue by proposing a flexible and adaptive gripper for robotic depalletizing. The gripper has been designed to be mounted on the end-tip of an industrial robotic arm. A novel patent-pending mechanism allows grasping boxes and products from both the upper and the lateral side, also enabling the depalletizing of boxes with complex shapes. Moreover, the gripper is reconfigurable, with five actuated degrees of freedom that are automatically controlled using the embedded sensors to adapt grasping to different shapes and weights.
|
|
11:15-11:30, Paper TuAT2.6 | |
>Combining Speed and Separation Monitoring with Power and Force Limiting for Safe Collaborative Robotics Applications |
> Video Attachment
|
|
Lucci, Niccolò | Politecnico Di Milano |
Lacevic, Bakir | University of Sarajevo |
Zanchettin, Andrea Maria | Politecnico Di Milano |
Rocco, Paolo | Politecnico Di Milano |
Keywords: Robot Safety, Physical Human-Robot Interaction, Industrial Robots
Abstract: Enabling humans and robots to safely work close to each other deserves careful consideration. With the publication of ISO directives on this matter, two different strategies, namely the Speed and Separation Monitoring and the Power and Force Limiting, have been proposed. This paper proposes a method to efficiently combine the two aforementioned safety strategies for collaborative robotics operations. By exploiting the combination of the two, it is then possible to achieve higher levels of productivity, still preserving safety of the human operators. In a nutshell, the state of motion of each point of the robot is monitored so that at every time instant the robot is able to modulate its speed to eventually come into contact with a body region of the human, consistently with the corresponding biomechanical limit. Validation experiments have been conducted to quantify the benefits of this newly developed strategy with respect to the state-of-the-art.
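The combination of the two ISO strategies can be sketched as a speed-modulation rule: run at full speed while the human-robot separation exceeds an SSM-style protective distance, and otherwise fall back to a contact speed consistent with the PFL biomechanical limit of the exposed body region. The sketch below is a deliberately simplified illustration (constant speeds, lumped intrusion and uncertainty margins), not the paper's per-point monitoring method:

```python
def protective_distance(v_h, v_r, t_react, t_stop, c=0.1, z=0.05):
    """ISO/TS 15066-style protective separation distance, simplified to
    constant human speed v_h and robot speed v_r over the reaction and
    stopping intervals, plus lumped margins c (intrusion) and z (uncertainty).
    All distances in meters, speeds in m/s, times in seconds."""
    return v_h * (t_react + t_stop) + v_r * t_react + 0.5 * v_r * t_stop + c + z

def safe_speed(d, v_h, v_contact, t_react, t_stop, v_max):
    """Combine SSM with PFL: full speed while the measured separation d
    exceeds the protective distance, otherwise the PFL-compatible contact
    speed v_contact for the nearest body region."""
    if d >= protective_distance(v_h, v_max, t_react, t_stop):
        return v_max
    return min(v_contact, v_max)
```

Monitoring this rule for every point of the robot, as the abstract describes, lets the robot keep moving (slowly) inside the protective distance instead of stopping outright, which is where the productivity gain comes from.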
|
|
TuAT3 |
Room T3 |
Scheduling |
Regular session |
Chair: Barton, Kira | University of Michigan at Ann Arbor |
Co-Chair: Chakraborty, Nilanjan | Stony Brook University |
|
10:00-10:15, Paper TuAT3.1 | |
>Distributed Near-Optimal Multi-Robots Coordination in Heterogeneous Task Allocation |
|
Li, Qinyuan | Swinburne University of Technology |
Li, Minyi | RMIT |
Vo, Bao Quoc | Swinburne University of Technology |
Kowalczyk, Ryszard | Swinburne University of Technology |
Keywords: Mechanism Design, Multi-Robot Systems, Planning, Scheduling and Coordination
Abstract: This paper explores the heterogeneous task allocation problem in Multi-robot systems. A game-theoretic formulation of the problem is proposed to align the goal of individual robots with the system objective. The concept of Nash equilibrium is applied to define a desired solution for the task allocation problem in which each robot can allocate itself to an appropriate task group. We also introduce a market-based distributed mechanism, called DisNE, to allow the robots to exchange messages with tasks and move between task groups, eventually reaching an equilibrium solution. We carry out comprehensive empirical studies to demonstrate that DisNE achieves near-optimal system utility in significantly shorter computation times when compared with the state-of-the-art mechanisms.
|
|
10:15-10:30, Paper TuAT3.2 | |
>Heterogeneous Vehicle Routing and Teaming with Gaussian Distributed Energy Uncertainty |
|
Fu, Bo | University of Michigan |
Smith, William | US Army TARDEC |
Rizzo, Denise M. | U.S. Army TARDEC |
Castanier, Matthew P. | US Army GVSC |
Barton, Kira | University of Michigan at Ann Arbor |
Keywords: Multi-Robot Systems, Cooperating Robots, Planning, Scheduling and Coordination
Abstract: For robot swarms operating on complex missions in an uncertain environment, it is important that the decision-making algorithm considers both heterogeneity and uncertainty. This paper presents a stochastic programming framework for the vehicle routing problem with stochastic travel energy costs and heterogeneous vehicles and tasks. We represent the heterogeneity as linear constraints, estimate the uncertain energy cost through Gaussian process regression, formulate this stochasticity as chance constraints or stochastic recourse costs, and then solve the stochastic programs using branch and cut algorithms to minimize the expected energy cost. The performance and practicality are demonstrated through extensive computational experiments and a practical test case.
|
|
10:30-10:45, Paper TuAT3.3 | |
>Long-Run Multi-Robot Planning under Uncertain Action Durations for Persistent Tasks |
|
Azevedo, Carlos | Instituto Superior Técnico - Institute for Systems and Robotics |
Lacerda, Bruno | University of Oxford |
Hawes, Nick | University of Oxford |
Lima, Pedro U. | Instituto Superior Técnico - Institute for Systems and Robotics |
Keywords: Planning, Scheduling and Coordination, Multi-Robot Systems, Task Planning
Abstract: This paper presents an approach for multi-robot long-term planning under uncertainty over the duration of actions. The proposed methodology takes advantage of generalized stochastic Petri nets with rewards (GSPNR) to model multi-robot problems. A GSPNR allows for unified modeling of action selection, uncertainty on the duration of action execution, and for goal specification through the use of transition rewards and rewards per time unit. Our approach relies on the interpretation of the GSPNR model as an equivalent embedded Markov reward automaton (MRA). We then build on a state-of-the-art method to compute the long-run average reward over MRAs, extending it to enable the extraction of the optimal policy. We provide an empirical evaluation of the proposed approach on a simulated multi-robot monitoring problem, evaluating its performance and scalability. The results show that the synthesized policy outperforms a policy obtained from an infinite horizon discounted reward formulation as well as a carefully hand-crafted policy.
|
|
10:45-11:00, Paper TuAT3.4 | |
>Algorithm for Multi-Robot Chance-Constrained Generalized Assignment Problem with Stochastic Resource Consumption |
|
Yang, Fan | Stony Brook University |
Chakraborty, Nilanjan | Stony Brook University |
Keywords: Multi-Robot Systems, Optimization and Optimal Control, Planning, Scheduling and Coordination
Abstract: We present a novel algorithm for the multi-robot generalized assignment problem (GAP) with stochastic resource consumption. In this problem, each robot has a resource (e.g., battery life) constraint and it consumes a certain amount of resource to perform a task. In practice, the resource consumed for performing a task can be uncertain. Therefore, we assume that the resource consumption is a random variable with known mean and variance. The objective is to find an assignment of the robots to tasks that maximizes the team payoff. Each task is assigned to at most one robot and the resource constraint for each robot has to be satisfied with very high probability. We formulate the problem as a chance-constrained combinatorial optimization problem and call it the chance-constrained generalized assignment problem (CC-GAP). This problem is an extension of the deterministic generalized assignment problem, which is an NP-hard problem. We design an iterative algorithm for solving CC-GAP in which each robot maximizes its own objective by solving a chance-constrained knapsack problem in an iterative manner. The approximation ratio of our algorithm is (1+alpha), assuming that the deterministic knapsack problem is solved by an alpha-approximation algorithm. We present simulation results to demonstrate that our algorithm is scalable with the number of robots and tasks.
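For independent Gaussian consumptions, the chance constraint at the heart of CC-GAP reduces to a deterministic inequality on the means and variances, which can be shown in a few lines. The greedy routine below is only an illustration of a per-robot chance-constrained knapsack under that reduction, not the paper's approximation algorithm:

```python
import math
from statistics import NormalDist

def feasible(selection, mu, var, budget, eps=0.05):
    """Check P(total consumption <= budget) >= 1 - eps. For independent
    Gaussian consumptions this is equivalent to
    sum(mu) + z_{1-eps} * sqrt(sum(var)) <= budget."""
    z = NormalDist().inv_cdf(1.0 - eps)
    m = sum(mu[i] for i in selection)
    s = math.sqrt(sum(var[i] for i in selection))
    return m + z * s <= budget

def greedy_cc_knapsack(payoff, mu, var, budget, eps=0.05):
    """Illustrative greedy heuristic: add tasks by payoff per unit of
    expected consumption while the chance constraint stays satisfied."""
    order = sorted(range(len(payoff)), key=lambda i: payoff[i] / mu[i],
                   reverse=True)
    chosen = []
    for i in order:
        if feasible(chosen + [i], mu, var, budget, eps):
            chosen.append(i)
    return chosen
```

The variance term acts as a risk buffer: the tighter eps is, the larger the safety margin subtracted from the budget, so fewer tasks fit.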
|
|
11:00-11:15, Paper TuAT3.5 | |
>The Pluggable Distributed Resource Allocator (PDRA): A Middleware for Distributed Computing in Mobile Robotic Networks |
> Video Attachment
|
|
Rossi, Federico | Jet Propulsion Laboratory - California Institute of Technology |
Vaquero, Tiago | JPL, Caltech |
Sanchez Net, Marc | Jet Propulsion Laboratory - California Institute of Technology |
Saboia Da Silva, Maira | University at Buffalo |
Vander Hook, Joshua | NASA Jet Propulsion Laboratory |
Keywords: Multi-Robot Systems, Planning, Scheduling and Coordination, Software, Middleware and Programming Environments
Abstract: We present the Pluggable Distributed Resource Allocator (PDRA), a middleware for distributed computing in heterogeneous mobile robotic networks. PDRA enables autonomous robotic agents to share computational resources for computationally expensive tasks such as localization and path planning. It sits between an existing single-agent planner/executor and existing computational resources (e.g. ROS packages), intercepts the executor’s requests and, if needed, transparently routes them to other robots for execution. PDRA is pluggable: it can be integrated in an existing single-robot autonomy stack with minimal modifications. Task allocation decisions are performed by a mixed-integer programming algorithm, solved in a shared-world fashion, that models CPU resources, latency requirements, and multi-hop, periodic, bandwidth-limited network communications; the algorithm can minimize overall energy usage or maximize the reward for completing optional tasks. Simulation results show that PDRA can reduce energy and CPU usage by over 50% in representative multi-robot scenarios compared to a naive scheduler; runs on embedded platforms; and performs well in delay- and disruption-tolerant networks (DTNs). PDRA is available to the community under an open-source license.
|
|
11:15-11:30, Paper TuAT3.6 | |
>Learning Scheduling Policies for Multi-Robot Coordination with Graph Attention Networks |
|
Wang, Zheyuan | Georgia Institute of Technology |
Gombolay, Matthew | Georgia Institute of Technology |
Keywords: Planning, Scheduling and Coordination, Imitation Learning, Multi-Robot Systems
Abstract: Increasing interest in integrating advanced robotics within manufacturing has spurred a renewed concentration in developing real-time scheduling solutions to coordinate human-robot collaboration in this environment. Traditionally, the problem of scheduling agents to complete tasks with temporal and spatial constraints has been approached either with exact algorithms, which are computationally intractable for large-scale, dynamic coordination, or approximate methods that require domain experts to craft heuristics for each application. We seek to overcome the limitations of these conventional methods by developing a novel graph attention network-based scheduler to automatically learn features of scheduling problems towards generating high-quality solutions. To learn effective policies for combinatorial optimization problems, we combine imitation learning, which makes use of expert demonstration on small problems, with graph neural networks, in a non-parametric framework, to allow for fast, near-optimal scheduling of robot teams with various sizes, while generalizing to large, unseen problems. Experimental results showed that our network-based policy was able to find high-quality solutions for ~90% of the testing problems involving scheduling 2-5 robots and up to 100 tasks, which significantly outperforms prior state-of-the-art, approximate methods. Those results were achieved with affordable computation cost and up to 100x less computation time compared to exact solvers.
|
|
TuAT4 |
Room T4 |
Robot Computation: Hardware, Software, Datasets |
Regular session |
Chair: Fallon, Maurice | University of Oxford |
Co-Chair: Scaramuzza, Davide | University of Zurich |
|
10:00-10:15, Paper TuAT4.1 | |
>The Newer College Dataset: Handheld LiDAR, Inertial and Vision with Ground Truth
> Video Attachment
|
|
Ramezani, Milad | University of Oxford |
Wang, Yiduo | University of Oxford |
Camurri, Marco | University of Oxford |
Wisth, David | University of Oxford |
Mattamala, Matías | University of Oxford |
Fallon, Maurice | University of Oxford |
Keywords: Big Data in Robotics and Automation, Localization, Mapping
Abstract: In this paper, we present a large dataset with a variety of mobile mapping sensors collected using a handheld device carried at typical walking speeds for nearly 2.2 km around New College, Oxford as well as a series of supplementary datasets with much more aggressive motion and lighting contrast. The datasets include data from two commercially available devices - a stereoscopic-inertial camera and a multibeam 3D LiDAR, which also provides inertial measurements. Additionally, we used a tripod-mounted survey grade LiDAR scanner to capture a detailed millimeter-accurate 3D map of the test location (containing ∼290 million points). Using the map, we generated a 6 Degrees of Freedom (DoF) ground truth pose for each LiDAR scan (with approximately 3 cm accuracy) to enable better benchmarking of LiDAR and vision localisation, mapping and reconstruction systems. This ground truth is the particular novel contribution of this dataset and we believe that it will enable systematic evaluation which many similar datasets have lacked. The large dataset combines both built environments, open spaces and vegetated areas so as to test localisation and mapping systems such as vision-based navigation, visual and LiDAR SLAM, 3D LiDAR reconstruction and appearance-based place recognition, while the supplementary datasets contain very dynamic motions to introduce more challenges for visual-inertial odometry systems. The datasets are available at: ori.ox.ac.uk/datasets/newer-college-dataset
|
|
10:15-10:30, Paper TuAT4.2 | |
>Faster Than FAST: GPU-Accelerated Frontend for High-Speed VIO |
|
Nagy, Balazs | University of Zürich |
Foehn, Philipp | University of Zurich |
Scaramuzza, Davide | University of Zurich |
Keywords: Aerial Systems: Perception and Autonomy, SLAM, Visual Tracking
Abstract: The recent introduction of powerful embedded graphics processing units (GPUs) has allowed for unforeseen improvements in real-time computer vision applications. It has enabled algorithms to run onboard, well above the standard video rates, yielding not only higher information processing capability but also reduced latency. This work focuses on the applicability of efficient low-level, GPU hardware-specific instructions to improve on existing computer vision algorithms in the field of visual-inertial odometry (VIO). While most steps of a VIO pipeline work on visual features, they rely on image data for detection and tracking, both of which are well suited for parallelization. In particular, non-maxima suppression and the subsequent feature selection are prominent contributors to the overall image processing latency. Our work first revisits the problem of non-maxima suppression for feature detection specifically on GPUs, and proposes a solution that selects local response maxima, imposes a spatial feature distribution, and extracts features simultaneously. Our second contribution introduces an enhanced FAST feature detector that applies the aforementioned non-maxima suppression method. Finally, we compare our method to other state-of-the-art CPU and GPU implementations, outperforming all of them in feature tracking and detection and achieving over 1000 fps throughput on an embedded Jetson TX2 platform. Additionally, we demonstrate our work integrated into a VIO pipeline achieving metric state estimation at ~200 fps.
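As an illustration of the cell-based selection idea described in the abstract (a sketch, not the authors' GPU implementation), the following keeps at most one response maximum per fixed-size image cell, which suppresses non-maxima while enforcing a spatial feature distribution; the cell size and the zero threshold are illustrative choices:

```python
import numpy as np

def grid_nms(response, cell=8):
    """Select at most one feature per cell: the local response maximum.

    Keeping one maximum per fixed-size cell suppresses non-maxima and
    enforces a spatially uniform feature distribution in a single pass;
    each cell reduces independently, which is what makes the step
    amenable to GPU parallelization.
    """
    h, w = response.shape
    features = []
    for y0 in range(0, h, cell):
        for x0 in range(0, w, cell):
            block = response[y0:y0 + cell, x0:x0 + cell]
            idx = np.unravel_index(np.argmax(block), block.shape)
            if block[idx] > 0:  # skip cells with no corner response
                features.append((y0 + idx[0], x0 + idx[1], block[idx]))
    return features
```

On a GPU, each cell's reduction would map to a thread block; the serial loops here stand in for that.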
|
|
10:30-10:45, Paper TuAT4.3 | |
>GPU Parallelization of Policy Iteration RRT# |
|
Lawson, R. Connor | Georgia Institute of Technology |
Wills, Linda | Georgia Institute of Technology |
Tsiotras, Panagiotis | Georgia Tech |
Keywords: Motion and Path Planning
Abstract: Sampling-based planning has become a de facto standard for complex robots given its superior ability to rapidly explore high-dimensional configuration spaces. Most existing optimal sampling-based planning algorithms are sequential in nature and cannot take advantage of wide parallelism available on modern computer hardware. Further, tight synchronization of exploration and exploitation phases in these algorithms limits sample throughput and planner performance. Policy Iteration RRT# (PI-RRT#) exposes fine-grained parallelism during the exploitation phase, but this parallelism has not yet been evaluated using a concrete implementation. We first present a novel GPU implementation of PI-RRT#’s exploitation phase and discuss data structure considerations to maximize parallel performance. Our implementation achieves 3–4× speedup over a serial PI-RRT# implementation for a 77.9% decrease in overall planning time on average. As a second contribution, we introduce the Batched-Extension RRT# algorithm, which loosens the synchronization present in PI-RRT# to realize independent 12.97× and 12.54× speedups under serial and parallel exploitation, respectively.
|
|
10:45-11:00, Paper TuAT4.4 | |
>ROS-Lite: ROS Framework for NoC-Based Embedded Many-Core Platform |
> Video Attachment
|
|
Azumi, Takuya | Saitama University |
Maruyama, Yuya | Osaka University |
Kato, Shinpei | Nagoya University |
Keywords: Software, Middleware and Programming Environments, Localization, Motion and Path Planning
Abstract: This paper proposes ROS-lite, a robot operating system (ROS) development framework for embedded many-core platforms based on network-on-chip (NoC) technology. Many-core platforms support the high processing capacity and low power consumption requirements of embedded systems. In this study, a self-driving software platform module is parallelized to run on many-core processors to demonstrate the practicality of embedded many-core platforms. The experimental results show that the proposed framework and the parallelized applications meet the deadlines of low-speed self-driving systems.
|
|
TuAT5 |
Room T5 |
Sim-To-Real |
Regular session |
Chair: Bock, Juergen | KUKA Deutschland GmbH |
Co-Chair: Batra, Dhruv | Facebook AI Research / Georgia Tech |
|
10:00-10:15, Paper TuAT5.1 | |
>Sim2Real Transfer for Reinforcement Learning without Dynamics Randomization |
> Video Attachment
|
|
Kaspar, Manuel | KUKA Deutschland GmbH |
Muñoz Osorio, Juan David | Leibniz University, KUKA Germany GmbH |
Bock, Juergen | KUKA Deutschland GmbH |
Keywords: Reinforcement Learning, Transfer Learning, Compliance and Impedance Control
Abstract: In this work we show how to use the Operational Space Control (OSC) framework under joint and Cartesian constraints for reinforcement learning in Cartesian space. Our method is therefore able to learn fast and with adjustable degrees of freedom, while we are able to transfer policies without additional dynamics randomization on a KUKA LBR iiwa peg-in-hole task. Before learning in simulation starts, we perform a system identification to align the simulation environment as closely as possible with the dynamics of a real robot. Adding constraints to the OSC controller allows us to learn in a safe way on the real robot or to learn a flexible, goal-conditioned policy that can be easily transferred from simulation to the real robot.
|
|
10:15-10:30, Paper TuAT5.2 | |
>Learning the Sense of Touch in Simulation: A Sim-To-Real Strategy for Vision-Based Tactile Sensing |
> Video Attachment
|
|
Sferrazza, Carmelo | ETH Zurich |
Bi, Thomas | ETH Zurich |
D'Andrea, Raffaello | ETHZ |
Keywords: Force and Tactile Sensing, Soft Sensors and Actuators
Abstract: Data-driven approaches to tactile sensing aim to overcome the complexity of accurately modeling contact with soft materials. However, their widespread adoption is impaired by concerns about data efficiency and the capability to generalize when applied to various tasks. This paper focuses on both these aspects with regard to a vision-based tactile sensor, which aims to reconstruct the distribution of the three-dimensional contact forces applied on its soft surface. Accurate models for the soft materials and the camera projection, derived via state-of-the-art techniques in the respective domains, are employed to generate a dataset in simulation. A strategy is proposed to train a tailored deep neural network entirely from the simulation data. The resulting learning architecture is directly transferable across multiple tactile sensors without further training and yields accurate predictions on real data, while showing promising generalization capabilities to unseen contact conditions.
|
|
10:30-10:45, Paper TuAT5.3 | |
>Reinforced Grounded Action Transformation for Sim-To-Real Transfer |
> Video Attachment
|
|
Karnan, Haresh | The University of Texas at Austin |
Desai, Siddharth | The University of Texas at Austin |
Warnell, Garrett | U.S. Army Research Laboratory |
Hanna, Josiah | The University of Texas at Austin |
Stone, Peter | University of Texas at Austin |
Keywords: Reinforcement Learning, Transfer Learning, Neural and Fuzzy Control
Abstract: Robots can learn to do complex tasks in simulation, but often, learned behaviors fail to transfer well to the real world due to simulator imperfections (the “reality gap”). Some existing solutions to this sim-to-real problem, such as Grounded Action Transformation (GAT), use a small amount of real-world experience to minimize the reality gap by “grounding” the simulator. While very effective in certain scenarios, GAT is not robust on problems that use complex function approximation techniques to model a policy. In this paper, we introduce Reinforced Grounded Action Transformation (RGAT), a new sim-to-real technique that uses Reinforcement Learning (RL) not only to update the target policy in simulation, but also to perform the grounding step itself. This novel formulation allows for end-to-end training during the grounding step, which, compared to GAT, produces a better grounded simulator. Moreover, we show experimentally in several MuJoCo domains that our approach leads to successful transfer for policies modeled using neural networks.
|
|
10:45-11:00, Paper TuAT5.4 | |
>Adaptability Preserving Domain Decomposition for Stabilizing Sim2Real Reinforcement Learning |
> Video Attachment
|
|
Gao, Haichuan | Tsinghua University |
Yang, Zhile | Tsinghua University |
Su, Xin | Tsinghua University |
Tan, Tian | Stanford University |
Chen, Feng | Tsinghua University |
Keywords: Reinforcement Learning, Transfer Learning, Big Data in Robotics and Automation
Abstract: In sim-to-real transfer of Reinforcement Learning (RL) policies for robot tasks, Domain Randomization (DR) is a widely used technique for improving adaptability. However, in DR there is a conflict between adaptability and training stability, and heavy DR tends to result in instability or even failure in training. To relieve this conflict, we propose a new algorithm named Domain Decomposition (DD) that decomposes the randomized domain according to environments and trains a separate RL policy for each part. This decomposition stabilizes the training of each RL policy, and as we prove theoretically, the adaptability of the overall policy can be preserved. Our simulation results verify that DD really improves stability in training while preserving ideal adaptability. Further, we complete a complex real-world vision-based patrolling task using DD, which demonstrates DD’s practicality. A video is attached as supplementary material.
|
|
11:00-11:15, Paper TuAT5.5 | |
>Sim-To-Real with Domain Randomization for Tumbling Robot Control |
> Video Attachment
|
|
Schwartzwald, Amalia | CSE, UMN |
Papanikolopoulos, Nikos | University of Minnesota |
Keywords: Model Learning for Control
Abstract: Tumbling locomotion allows small robots to traverse comparatively rough terrain; however, their motion is complex and difficult to control. Existing tumbling robot control methods involve manual control or the assumption of flat terrain. Reinforcement learning allows for the exploration and exploitation of diverse environments. By utilizing reinforcement learning with domain randomization, a robust control policy can be learned in simulation and then transferred to the real world. In this paper, we demonstrate autonomous setpoint navigation with a tumbling robot prototype on flat and non-flat terrain. The flexibility of this system improves the viability of nontraditional robots for navigational tasks.
|
|
11:15-11:30, Paper TuAT5.6 | |
>Sim2Real Predictivity: Does Evaluation in Simulation Predict Real-World Performance?
|
Kadian, Abhishek | Facebook AI Research |
Truong, Joanne | The Georgia Institute of Technology |
Gokaslan, Aaron | Brown University |
Clegg, Alexander | Georgia Institute of Technology |
Wijmans, Erik | Georgia Tech |
Lee, Stefan | Oregon State University |
Savva, Manolis | Simon Fraser University |
Chernova, Sonia | Georgia Institute of Technology |
Batra, Dhruv | Facebook AI Research / Georgia Tech |
Keywords: Visual-Based Navigation, Simulation and Animation
Abstract: Does progress in simulation translate to progress on robots? If one method outperforms another in simulation, how likely is that trend to hold in reality on a robot? We examine this question for embodied PointGoal navigation – developing engineering tools and a research paradigm for evaluating a simulator by its sim2real predictivity. First, we develop Habitat-PyRobot Bridge (HaPy), a library for seamless execution of identical code on simulated agents and robots – transferring simulation-trained agents to a LoCoBot platform with a one-line code change. Second, we investigate the sim2real predictivity of Habitat-Sim for PointGoal navigation. We 3D-scan a physical lab space to create a virtualized replica, and run parallel tests of 9 different models in reality and simulation. We present a new metric called Sim-vs-Real Correlation Coefficient (SRCC) to quantify predictivity. We find that SRCC for Habitat as used for the CVPR19 challenge is low (0.18 for the success metric), suggesting that performance differences in this simulator-based challenge do not persist after physical deployment. This gap is largely due to AI agents learning to exploit simulator imperfections – abusing collision dynamics to ‘slide’ along walls, leading to shortcuts through otherwise non-navigable space. Naturally, such exploits do not work in the real world. Our experiments show that it is possible to tune simulation parameters to improve sim2real predictivity (e.g., improving SRCC for the success metric from 0.18 to 0.844) – increasing confidence that in-simulation comparisons will translate to deployed systems in reality.
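The abstract does not give the exact formula for SRCC; assuming it is a Pearson-type correlation computed over paired (simulation, reality) scores of the evaluated models, it could be sketched as:

```python
def srcc(sim_scores, real_scores):
    """Sim-vs-Real Correlation Coefficient (assumed Pearson form):
    correlation between a metric measured in simulation and the same
    metric measured on the robot, across the same set of models.
    Values near 1 mean simulation rankings predict real-world rankings."""
    n = len(sim_scores)
    ms = sum(sim_scores) / n
    mr = sum(real_scores) / n
    cov = sum((s - ms) * (r - mr) for s, r in zip(sim_scores, real_scores))
    vs = sum((s - ms) ** 2 for s in sim_scores) ** 0.5
    vr = sum((r - mr) ** 2 for r in real_scores) ** 0.5
    return cov / (vs * vr)
```

A low value (such as the reported 0.18) means in-simulation rank order carries little information about deployed performance.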
|
|
TuAT7 |
Room T7 |
Localization |
Regular session |
Chair: Kim, Ayoung | Korea Advanced Institute of Science Technology |
Co-Chair: Liu, Ming | Hong Kong University of Science and Technology |
|
10:00-10:15, Paper TuAT7.1 | |
>Pedestrian Motion Tracking by Using Inertial Sensors on the Smartphone |
|
Wang, Yingying | The Chinese University of Hong Kong |
Cheng, Hu | The Chinese University of Hong Kong |
Meng, Max Q.-H. | The Chinese University of Hong Kong |
Keywords: Localization, Human and Humanoid Motion Analysis and Synthesis
Abstract: The Inertial Measurement Unit (IMU) has long held promise for stable and reliable motion estimation, especially in indoor environments where GPS signal strength is limited. In this paper, we propose a novel method for position and orientation estimation of a moving object using only a sequence of IMU signals collected from a phone. Our main observation is that human motion is monotonous and periodic. We adopt the Extended Kalman Filter and use a learning-based method to dynamically update the measurement noise of the filter. Our pedestrian motion tracking system aims to accurately estimate planar position, velocity and heading direction without restricting the phone’s daily use. The method is not only tested on self-collected signals, but also provides accurate position and velocity estimations on the public RIDI dataset, i.e., the absolute translation error is 1.28 m for a 59-second sequence.
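The learning component aside, the filter step itself is a standard EKF measurement update; a minimal sketch, assuming the learned model supplies the measurement noise covariance R at each step, might look like:

```python
import numpy as np

def ekf_update(x, P, z, H, R):
    """Standard (E)KF measurement update with a linear(ized) model H.
    In the paper's scheme (as we read the abstract), R is not fixed but
    predicted per step by a learned model from the IMU window; here it
    is simply passed in."""
    y = z - H @ x                      # innovation
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```

With equal state and measurement noise (P = R = I), the update splits the difference between prediction and measurement, as expected.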
|
|
10:15-10:30, Paper TuAT7.2 | |
>A Bayesian Approach for Gas Source Localization in Large Indoor Environments |
> Video Attachment
|
|
Prabowo, Yaqub | Institut Teknologi Bandung |
Ranasinghe, Ravindra | University of Technology Sydney |
Dissanayake, Gamini | University of Technology Sydney |
Riyanto, Bambang | Institut Teknologi Bandung |
Yuliarto, Brian | Institut Teknologi Bandung |
Keywords: Localization, Robotics in Hazardous Fields
Abstract: The main contribution of this paper is a probabilistic estimator that assists a mobile robot to locate a gas source in an indoor environment. The scenario is that a robot equipped with a gas sensor enters a building after the gas is released due to a leak or explosion. The problem is discretized by dividing the environment into a set of regions and time into a set of time intervals. Likelihood functions describing the probability of obtaining a certain gas concentration measurement at a given location at a given time interval are assembled using data generated with GADEN, a three-dimensional gas dispersion simulator [1]. Given a measurement of the gas concentration is available, Bayes's rule is used to compute the joint probability density describing the location of the gas source and the time at which it started spreading. To illustrate the estimation process, a relatively simple motion planner that directs the robot towards the most likely gas source location using a cost function based on the marginal probability of the gas source location is used. The motion plan is periodically revised to reflect the latest posterior probability density. Simulation experiments in a large air-conditioned building with turbulence and wind are presented to demonstrate the effectiveness of the proposed technique.
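The core discretized Bayes update described above can be sketched as follows, assuming the per-region likelihoods are looked up from tables precomputed with a dispersion simulator such as GADEN:

```python
def bayes_update(prior, likelihoods):
    """One Bayes step over a discretized source hypothesis space.
    prior[i] is P(source in region i); likelihoods[i] is
    P(measurement | source in region i), e.g. looked up from tables
    precomputed with a gas dispersion simulator."""
    posterior = [p * l for p, l in zip(prior, likelihoods)]
    total = sum(posterior)
    return [p / total for p in posterior]
```

Repeating this update as the robot collects concentration measurements concentrates probability mass on the true source region, and the marginal over regions drives the planner's cost function.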
|
|
10:30-10:45, Paper TuAT7.3 | |
>Towards Real-Time Non-Gaussian SLAM for Underdetermined Navigation |
|
Fourie, Dehann | Massachusetts Institute of Technology and Woods Hole Oceanographic Institution |
Rypkema, Nicholas Rahardiyan | Massachusetts Institute of Technology |
Claassens, Samuel David | Semisorted Technologies |
Vaz Teixeira, Pedro | Massachusetts Institute of Technology |
Leonard, John | MIT |
Fischell, Erin Marie | Woods Hole Oceanographic Institution |
Keywords: SLAM, Range Sensing, Marine Robotics
Abstract: This paper presents a method for processing sparse, non-Gaussian multimodal data in a simultaneous localization and mapping (SLAM) framework using factor graphs. Our approach demonstrates the feasibility of using a sum-product inference strategy to recover functional belief marginals from highly non-Gaussian situations, relaxing the prolific unimodal Gaussian assumption. The method is more focused than conventional multi-hypothesis approaches, but still captures dominant modes via multi-modality. The proposed algorithm exists in a trade space that spans the anticipated uncertainty of measurement data, task-specific performance, sensor quality, and computational cost. This work leverages several major algorithm design constructs, including clique recycling, to put an upper bound on the allowable computational expense -- a major challenge in non-parametric methods. To better demonstrate robustness, experimental results show the feasibility of the method on at least two of four major sources of non-Gaussian behavior: i) the first introduces a canonical range-only problem which is always underdetermined although composed exclusively from Gaussian measurements; ii) a real-world AUV dataset, demonstrating how ambiguous acoustic correlator measurements are directly incorporated into a non-Gaussian SLAM solution, while using dead reckon tethering to overcome short term computational requirements.
|
|
10:45-11:00, Paper TuAT7.4 | |
>An Augmented Reality Spatial Referencing System for Mobile Robots |
|
Chacko, Sonia | NYU Tandon School of Engineering |
Granado, Armando | New York University Tandon School of Engineering |
Rajkumar, Ashwin | New York University Tandon School of Engineering |
Kapila, Vikram | NYU Tandon School of Engineering |
Keywords: Virtual Reality and Interfaces, Task Planning, Service Robotics
Abstract: The deployment of a mobile service robot in domestic settings is a challenging task due to the dynamic and unstructured nature of such environments. Successful operation of the robot requires continuous human supervision to update its spatial knowledge about the dynamic environment. Thus, it is essential to develop a human-robot interaction (HRI) strategy that is suitable for novice end users to effortlessly provide task-specific spatial information to the robot. Although several approaches have been developed for this purpose, most of them are not feasible or convenient for use in domestic environments. In response, we have developed an augmented reality (AR) spatial referencing system (SRS), which allows a non-expert user to tag any specific locations on a physical surface to allocate tasks to be performed by the robot at those locations. Specifically, in the AR-SRS, the user provides a spatial reference by creating an AR virtual object with a semantic label. The real-world location of the user-created virtual object is estimated and stored as spatial data along with the user-specified semantic label. We present three different approaches to establish the correspondence between the user-created virtual object locations and the real-world coordinates on an a priori static map of the service area available to the robot. The performance of each approach is evaluated and reported. We also present use-case scenarios to demonstrate potential applications of the AR-SRS for mobile service robots.
|
|
11:00-11:15, Paper TuAT7.5 | |
>GMMLoc: Structure Consistent Visual Localization with Gaussian Mixture Models |
> Video Attachment
|
|
Huang, Huaiyang | The Hong Kong University of Science and Technology |
Ye, Haoyang | The Hong Kong University of Science and Technology |
Sun, Yuxiang | Hong Kong University of Science and Technology |
Liu, Ming | Hong Kong University of Science and Technology |
Keywords: Automation Technologies for Smart Cities, Localization, Visual-Based Navigation
Abstract: Incorporating prior structure information into visual state estimation can generally improve localization performance. In this letter, we aim to address the paradox between accuracy and efficiency in coupling visual factors with structure constraints. To this end, we present a cross-modality method that tracks a camera in a prior map modelled by a Gaussian Mixture Model (GMM). With the pose initially estimated by the front-end, the local visual observations and map components are associated efficiently, and the visual structure from triangulation is refined simultaneously. By introducing the hybrid structure factors into the joint optimization, the camera poses are bundle-adjusted together with the local visual structure. By evaluating our complete system, namely GMMLoc, on the public dataset, we show that our system provides centimeter-level localization accuracy with only trivial computational overhead. In addition, comparative studies with state-of-the-art vision-dominant state estimators demonstrate the competitive performance of our method.
|
|
11:15-11:30, Paper TuAT7.6 | |
>HDMI-Loc: Exploiting High Definition Map Image for Precise Localization Via Bitwise Particle Filter |
> Video Attachment
|
|
Jeong, Jinyong | KAIST |
Cho, Younggun | KAIST |
Kim, Ayoung | Korea Advanced Institute of Science Technology |
Keywords: Localization, Autonomous Vehicle Navigation, Visual-Based Navigation
Abstract: In this paper, we propose a method for accurately estimating the 6-Degree Of Freedom (DOF) pose in an urban environment when a High Definition (HD) map is available. An HD map expresses 3D geometric data with semantic information in a compressed format and thus is more memory-efficient than point cloud maps. The small capacity of HD maps can be a significant advantage for autonomous vehicles in terms of map storage and updates within a large urban area. Unfortunately, existing approaches failed to sufficiently exploit HD maps by only estimating partial pose. In this study, we present a full 6-DOF localization against an HD map using an onboard stereo camera with semantic information from roads. We introduce an 8-bit representation for road information, which allows for effective bitwise operations when matching between query data and the HD map. For the pose estimation, we leverage a particle filter followed by a full 6-DOF pose optimization. Our experimental results show a median error of approximately 0.3 m in the lateral and longitudinal directions for a drive of approximately 11 km. These results can be used by autonomous vehicles to correct the global position without using Global Positioning System (GPS) data in highly complex urban environments. The median operation speed is approximately 60 ms, supporting 10 Hz.
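The exact 8-bit road encoding is not specified in the abstract; assuming any per-cell byte descriptor, the bitwise matching used to weight a particle can be sketched with an XOR and a popcount per cell:

```python
def bitwise_match_score(query, map_cells):
    """Score one particle: compare the 8-bit road descriptors observed
    by the camera (query) against the HD-map cells that the particle's
    pose projects onto (map_cells). The 8-bit encoding makes the
    comparison a cheap XOR + popcount per cell; the bit layout here is
    a placeholder, since the abstract does not specify it."""
    score = 0
    for q, m in zip(query, map_cells):
        score += 8 - bin((q ^ m) & 0xFF).count("1")  # count matching bits
    return score / (8 * len(query))  # normalized to [0, 1]
```

In a particle filter, each particle's weight would be proportional to such a score before resampling.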
|
|
11:30-11:45, Paper TuAT7.7 | |
>Visual SLAM with Drift-Free Rotation Estimation in Manhattan World |
|
Liu, Jiacheng | Tsinghua University |
Meng, Ziyang | Tsinghua University |
Keywords: Localization, SLAM, Visual-Based Navigation
Abstract: This paper presents an efficient and accurate simultaneous localization and mapping (SLAM) system for man-made environments. The Manhattan world assumption is imposed, with which the global orientation is obtained. The drift-free rotational motion estimation is derived from structural regularities using line features. In particular, a two-stage vanishing points (VPs) estimation method is developed, which consists of a short-term tracking module to track the clustered line features and a long-term searching module to generate abundant sets of VP candidates and retrieve the optimal one. A least squares problem is constructed and solved to provide refined VPs with the clusters of structural line features every frame. We make full use of the absolute orientation estimation to benefit the whole SLAM process. In particular, we utilize the absolute orientation estimation to increase the localization accuracy in the front end, and formulate a linear batch camera pose refinement problem with the known rotations to improve the real-time performance in the back end. Experiments on both synthesized and real-world scenes demonstrate high precision in real-time camera pose estimation and high speed in pose graph optimization compared with existing state-of-the-art methods.
|
|
TuAT8 |
Room T8 |
Localization: Other Modalities I |
Regular session |
Chair: Martinez, Julieta | Uber |
Co-Chair: Wei, Bo | Northumbria University |
|
10:00-10:15, Paper TuAT8.1 | |
>Pit30M: A Benchmark for Global Localization in the Age of Self-Driving Cars |
> Video Attachment
|
|
Martinez, Julieta | Uber |
Doubov, Sasha | University of Waterloo |
Fan, Jack | Uber ATG |
Bârsan, Ioan Andrei | Uber ATG / University of Toronto |
Wang, Shenlong | University of Toronto |
Mattyus, Gellert | Uber ATG |
Urtasun, Raquel | University of Toronto |
Keywords: Big Data in Robotics and Automation, Localization, Multi-Modal Perception
Abstract: We are interested in understanding whether retrieval-based localization approaches are good enough in the context of self-driving vehicles (SDVs). Towards this goal, we introduce Pit30M, a new image and LiDAR dataset with over 30 million frames, which is 10 to 100 times larger than those used in previous work. Pit30M is captured under diverse conditions (i.e., season, weather, time of the day, traffic), and provides accurate localization ground truth. We also automatically annotate our dataset with historical weather and astronomical data, as well as with image and LiDAR semantic segmentation (as a proxy measure for occlusion). We benchmark multiple existing methods for image and LiDAR retrieval and, in the process, introduce a simple, yet effective convolutional network-based LiDAR retrieval method that is competitive with the state-of-the-art. Our work provides, for the first time, a benchmark for sub-metre retrieval-based localization at city scale.
|
|
10:15-10:30, Paper TuAT8.2 | |
>SolarSLAM: Battery-Free Loop Closure for Indoor Localisation |
|
Wei, Bo | Northumbria University |
Xu, Weitao | City University of Hong Kong |
Luo, Chengwen | Shenzhen University |
Zoppi, Guillaume | Northumbria University |
Ma, Dong | University of Cambridge |
Wang, Sen | Edinburgh Centre for Robotics, Heriot-Watt University |
Keywords: Localization, SLAM, Sensor Networks
Abstract: In this paper, we propose SolarSLAM, a battery-free loop closure method for indoor localisation. Inertial Measurement Unit (IMU) based indoor localisation methods have been widely used due to the ubiquity of IMUs in mobile devices, such as mobile phones, smartwatches and wearable bands. However, they suffer from unavoidable long-term drift. To mitigate the localisation error, many loop closure solutions have been proposed using sophisticated sensors, such as cameras, lasers, etc. Despite achieving high-precision localisation performance, these sensors consume a huge amount of energy. Different from those solutions, the proposed SolarSLAM takes advantage of an energy-harvesting solar cell as a sensor and achieves an effective battery-free loop closure method. The proposed method uses key-point dynamic time warping for detecting loops and robust simultaneous localisation and mapping (SLAM) as the optimiser to remove falsely recognised loop closures. Extensive evaluations in real environments demonstrate the advantageous photocurrent characteristics for indoor localisation and the good localisation accuracy of the proposed method.
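As a reference point for the loop-detection step, here is the plain dynamic-time-warping distance between two 1-D signal traces (e.g. solar-cell photocurrent recordings from two passes through the same place); the paper's key-point variant restricts the warp to selected key points, which this baseline does not:

```python
def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two 1-D sequences.
    A small distance between two photocurrent traces suggests a
    loop-closure candidate. This is the plain O(len(a)*len(b)) baseline,
    not the paper's key-point variant."""
    inf = float("inf")
    n, m = len(a), len(b)
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]
```

DTW tolerates the differing walking speeds between passes, which a plain Euclidean comparison would not.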
|
|
10:30-10:45, Paper TuAT8.3 | |
>Robot-To-Robot Relative Pose Estimation Based on Semidefinite Relaxation Optimization |
|
Li, Ming | Chinese University of Hong Kong, Shenzhen |
Liang, Guanqi | The Chinese University of Hong Kong, Shenzhen |
Luo, Haobo | The Chinese University of Hong Kong, Shenzhen |
Qian, Huihuan | Shenzhen Institute of Artificial Intelligence and Robotics for S |
Lam, Tin Lun | The Chinese University of Hong Kong, Shenzhen |
Keywords: Localization, Multi-Robot Systems
Abstract: In this paper, the 2D robot-to-robot relative pose (position and orientation) estimation problem based on egomotion and noisy distance measurements is considered. We address this problem using an optimization-based method. In particular, we start from a state-of-the-art method named square distances weighted least squares (SD-WLS), and reformulate it as a non-convex quadratically constrained quadratic programming (QCQP) problem. To handle its non-convex nature, an SDP relaxation optimization based method is proposed, and we prove that the relaxation is theoretically tight when the measurements are free from noise or are corrupted only by small noise. Further, to obtain the optimal solution of the relative pose estimation problem in the sense of maximum likelihood estimation (MLE), a theoretically optimal WLS method is developed to refine the estimate from the SDP optimization. Extensive simulations and real-data experiments are presented to validate the performance of the proposed algorithm and compare its accuracy to existing approaches.
|
|
10:45-11:00, Paper TuAT8.4 | |
>A Model-Based Approach to Acoustic Reflector Localization with a Robotic Platform |
> Video Attachment
|
|
Saqib, Usama | Aalborg University |
Jensen, Jesper Rindom | Aalborg University |
Keywords: Localization, Robot Audition, Mapping
Abstract: Constructing a spatial map of an indoor environment, e.g., a typical office environment with glass surfaces, is a difficult and challenging task. Current state-of-the-art camera- and laser-based approaches are unsuitable for detecting transparent surfaces; hence, the spatial maps generated with these approaches are often inaccurate. In this paper, a method that utilizes echolocation with sound in the audible frequency range is proposed to robustly localize the position of an acoustic reflector, e.g., walls, glass surfaces, etc., which could be used to construct a spatial map of an indoor environment as the robot moves. The proposed method estimates the acoustic reflector's position using only a single microphone and a loudspeaker, both of which are present on many socially assistive robot platforms such as the NAO robot. The experimental results show that the proposed method can robustly detect an acoustic reflector up to a distance of 1.5 m in more than 60% of the trials and works efficiently even under low SNRs. To test the proposed method, a proof-of-concept robotic platform was built to construct a spatial map of an indoor environment.
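The underlying round-trip geometry is simple: the one-way distance to a reflector is half the time of flight times the speed of sound. A minimal sketch, assuming the echo delay has already been extracted (e.g. by matched filtering of the loudspeaker signal against the microphone recording):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 °C

def reflector_distance(emit_time, echo_time, c=SPEED_OF_SOUND):
    """Distance to an acoustic reflector from a round-trip echo: the
    pulse travels to the surface and back, so the one-way distance is
    half the time of flight times the speed of sound. How the echo
    delay is detected (e.g. matched filtering) is outside this sketch."""
    tof = echo_time - emit_time
    return c * tof / 2.0
```

At the paper's 1.5 m working range, the round trip takes under 9 ms, so echoes must be separated from the direct loudspeaker-to-microphone path at millisecond resolution.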
|
|
11:00-11:15, Paper TuAT8.5 | |
>TP-TIO: A Robust Thermal-Inertial Odometry with Deep ThermalPoint |
> Video Attachment
|
|
Zhao, Shibo | Carnegie Mellon University |
Wang, Peng | Faculty of Robot Science and Engineering, Northeastern University |
Zhang, Hengrui | Carnegie Mellon University |
Fang, Zheng | Northeastern University |
Scherer, Sebastian | Carnegie Mellon University |
Keywords: Localization, Sensor Fusion, SLAM
Abstract: To achieve robust motion estimation in GPS-denied and visually degraded environments such as dust, fog, and smoke, thermal odometry has attracted attention in the robotics community. However, most thermal odometry methods are purely based on classical feature extractors applied to re-scaled thermal images, which makes it difficult to establish robust correspondences in successive frames due to sudden photometric changes and large thermal noise. To overcome the limitations of feature-based thermal odometry, we propose ThermalPoint, a lightweight feature detection network specifically tailored for producing keypoints on thermal images, providing notable anti-noise improvements compared to other state-of-the-art methods. We also combine ThermalPoint with a novel radiometric feature tracking method, which directly makes use of full radiometric data and establishes reliable correspondences between sequential frames. Finally, taking advantage of an optimization-based visual-inertial framework, a deep feature-based thermal-inertial odometry (TP-TIO) estimation framework is proposed and evaluated thoroughly, from thermal feature tracking to pose estimation, in various visually degraded environments. Experiments show that our method outperforms state-of-the-art visual and laser odometry methods in smoke-filled environments and achieves competitive accuracy in normal environments.
|
|
11:15-11:30, Paper TuAT8.6 | |
>Versatile 3D Multi-Sensor Fusion for Lightweight 2D Localization |
|
Geneva, Patrick | University of Delaware |
Merrill, Nathaniel | University of Delaware |
Yang, Yulin | University of Delaware |
Chen, Chuchu | University of Delaware |
Lee, Woosik | University of Delaware |
Huang, Guoquan (Paul) | University of Delaware |
Keywords: Localization, Sensor Fusion, Calibration and Identification
Abstract: Aiming for a lightweight and robust localization solution for low-cost, low-power autonomous robot platforms, such as educational or industrial ground vehicles, under challenging conditions (e.g., poor sensor calibration, low lighting and dynamic objects), we propose a two-stage localization system which incorporates both offline prior map building and online multi-modal localization. In particular, we develop an occupancy grid mapping system with probabilistic odometry fusion, accurate scan-to-submap covariance modeling, and accelerated loop-closure detection, which is further aided by 2D line features that exploit the environmental structural constraints. We then develop a versatile EKF-based online localization system which optimally (up to linearization) fuses multi-modal information provided by the pre-built occupancy grid map, IMU, odometry, and 2D LiDAR measurements with low computational requirements. Importantly, the spatiotemporal calibration between these sensors is also estimated online to account for poor initial calibration and make the system more "plug-and-play", which improves both the accuracy and flexibility of the proposed multi-sensor fusion framework. In our experiments, our mapping system is shown to be more accurate than the state-of-the-art Google Cartographer. Extensive Monte-Carlo simulations are then performed to verify the accuracy, consistency and efficiency of the proposed map-based localization system with full spatiotemporal calibration. We also validate the complete system (prior map building and online localization) with building-scale real-world datasets.
|
|
TuAT9 |
Room T9 |
Localization: Other Modalities II |
Regular session |
Chair: Westerlund, Tomi | University of Turku |
Co-Chair: Pang, Shuo | Embry-Riddle Aeronautical University |
|
10:00-10:15, Paper TuAT9.1 | |
>UWB-Based System for UAV Localization in GNSS-Denied Environments: Characterization and Dataset |
|
Peña Queralta, Jorge | University of Turku |
Martinez Almansa, Carmen | University of Turku |
Schiano, Fabrizio | Ecole Polytechnique Federale De Lausanne, EPFL |
Floreano, Dario | Ecole Polytechnique Federal, Lausanne |
Westerlund, Tomi | University of Turku |
Keywords: Localization, Aerial Systems: Perception and Autonomy, Search and Rescue Robots
Abstract: Small unmanned aerial vehicles (UAVs) have penetrated multiple domains over the past years. In GNSS-denied or indoor environments, aerial robots require a robust and stable localization system, often with external feedback, in order to fly safely. Motion capture systems are typically utilized indoors when accurate localization is needed. However, these systems are expensive and most require a fixed setup. In this paper, we study and characterize an ultra-wideband (UWB) system for indoor navigation and localization of aerial robots, based on Decawave's DWM1001 UWB node. The system is portable, inexpensive and can be entirely battery powered. We show the viability of this system for autonomous flight of UAVs, and provide open-source methods and data that enable its widespread application even with movable anchor systems. We characterize the accuracy based on the position of the UAV with respect to the anchors, its altitude and speed, and the distribution of the anchors in space. Finally, we analyze the accuracy of the self-calibration of the anchors' positions.
|
|
10:15-10:30, Paper TuAT9.2 | |
>Ultra-Wideband Aided UAV Positioning Using Incremental Smoothing with Ranges and Multilateration |
> Video Attachment
|
|
Kang, Jungwon | York University |
Park, Kunwoo | York University |
Arjmandi, Zahra | York University |
Sohn, Gunho | York University |
Shahbazi, Mozhdeh | Centre De Géomatique Du Québec |
Menard, Patrick | CGQ |
Keywords: Localization, Aerial Systems: Applications, Sensor Fusion
Abstract: In this paper, we present a novel smoothing approach for ultra-wideband (UWB) aided unmanned aerial vehicle (UAV) positioning. Existing works based on smoothing or filtering estimate the 3D position of a UAV by updating the solution for each single 1D UWB range measurement. However, a single low-dimensional range measurement merely acts as a weak constraint in the solution space for UAV position estimation, and thus can often lead to incorrect estimates under unfavorable conditions. Inspired by the idea that the multilateration outcome can be utilized as a measurement providing a strong constraint, we utilize two types of UWB-based measurements: (i) each single 1D range as a high-rate measurement with a weak constraint, and (ii) the multilateration outcome as a low-rate measurement with a strong constraint. We propose an incremental smoothing-based method that seamlessly integrates these two types of UWB-based measurements and inertial measurements into a unified factor graph framework. Through experiments under a variety of scenarios, we demonstrate the effectiveness of the proposed method.
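The multilateration step the abstract describes (turning several 1D ranges into a strong 3D position constraint) can be sketched with a standard linearized least-squares solver. This is a generic closed-form formulation, not code from the paper: subtracting the first anchor's range equation from the others removes the quadratic term in the unknown position.

```python
import numpy as np

def multilaterate(anchors, ranges):
    """Estimate a 3D position from anchor positions and range measurements.

    Subtracting the first anchor's range equation from the others yields
    a linear system A x = b in the unknown position x (a textbook
    multilateration step, shown here as an illustrative sketch).
    """
    anchors = np.asarray(anchors, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    p0, r0 = anchors[0], ranges[0]
    # Each row i: 2 (p_i - p0) . x = r0^2 - r_i^2 + |p_i|^2 - |p0|^2
    A = 2.0 * (anchors[1:] - p0)
    b = (r0**2 - ranges[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(p0**2))
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x
```

With four or more non-coplanar anchors the system is (over)determined in 3D, and the least-squares solution is the position estimate that would serve as the low-rate, strong-constraint factor.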
|
|
10:30-10:45, Paper TuAT9.3 | |
>BRM Localization: UAV Localization in GNSS-Denied Environments Based on Matching of Numerical Map and UAV Images |
|
Choi, Junho | KAIST |
Myung, Hyun | KAIST (Korea Adv. Inst. Sci. & Tech.) |
Keywords: Localization, Visual-Based Navigation, Autonomous Vehicle Navigation
Abstract: Localization is one of the most important technologies needed to deploy Unmanned Aerial Vehicles (UAVs) in real-world settings. Currently, most UAVs use GNSS to estimate their position. Recently, there have been attacks that target this dependence, such as jamming the GNSS signal to crash a UAV or sending fake GNSS signals to hijack it. To avoid such situations, this paper proposes an algorithm that addresses the localization problem of UAVs in GNSS-denied environments. We propose a localization method, named BRM (Building Ratio Map based) localization, which matches an existing numerical map with UAV images. The building area is extracted from the UAV images, and the ratio of the image frame occupied by buildings is calculated and matched against the building information in the numerical map. Position estimation starts over an area of several square kilometers, so it can be performed without knowing the exact initial coordinates. Only freely available maps are used for the training data set and for ground-truth matching. Finally, we use real UAV images, IMU data, and GNSS data from a UAV flight to show that the proposed method achieves better performance than conventional methods.
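The core matching quantity in the abstract, the fraction of the image occupied by buildings, is simple to compute from a segmentation mask. The snippet below is a minimal sketch under assumed conventions (an integer label mask and a hypothetical per-cell ratio map), not the paper's implementation:

```python
import numpy as np

def building_ratio(seg_mask, building_label=1):
    """Fraction of an image frame occupied by building pixels.

    seg_mask: 2D integer array of per-pixel class labels from a
    segmentation network (the label encoding here is an assumption).
    """
    seg_mask = np.asarray(seg_mask)
    return float(np.mean(seg_mask == building_label))

def best_map_cell(obs_ratio, map_ratios):
    """Index of the map grid cell whose stored building ratio best
    matches the observed one (nearest-ratio matching, as an
    illustrative stand-in for the paper's matching step)."""
    map_ratios = np.asarray(map_ratios, dtype=float)
    return int(np.argmin(np.abs(map_ratios - obs_ratio)))
```

In practice the matching would compare sequences of ratios along a candidate trajectory rather than a single scalar, which is what makes the estimate discriminative over a several-square-kilometer search area.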
|
|
10:45-11:00, Paper TuAT9.4 | |
>Inertial Velocity Estimation for Indoor Navigation through Magnetic Gradient-Based EKF and LSTM Learning Model |
|
Zmitri, Makia | CNRS/GIPSA-Lab |
Fourati, Hassen | GIPSA-Lab / University of Grenoble |
Prieur, Christophe | CNRS |
Keywords: Localization, Sensor Fusion, AI-Based Methods
Abstract: This paper presents a novel method to improve the inertial velocity estimation of a mobile body for indoor navigation, using only raw data from a triad of inertial sensors (accelerometer and gyroscope) together with a magnetometer array in a known arrangement. The key idea of the method is the use of deep neural networks to dynamically tune the measurement covariance matrix of an Extended Kalman Filter (EKF). To do so, a Long Short-Term Memory (LSTM) model is derived to produce a pseudo-measurement of the inertial velocity of the target under investigation. This measurement is afterwards used to dynamically adapt the measurement noise parameters of a magnetic field gradient-based EKF. As shown in the literature, there is a strong relation between inertial velocity and the magnetic field gradient, which is highlighted by the approach proposed in this paper. Its performance is tested on the OpenShoe dataset, and the obtained results compete with the INS/ZUPT approach, which, unlike the proposed solution, can only be applied in foot-mounted applications and is not suited to all walking paces.
|
|
11:00-11:15, Paper TuAT9.5 | |
>An Implementation of the Adaptive Neuro-Fuzzy Inference System (ANFIS) for Odor Source Localization |
|
Wang, Lingxiao | Embry-Riddle Aeronautical University |
Pang, Shuo | Embry-Riddle Aeronautical University |
Keywords: Neural and Fuzzy Control, AI-Based Methods, Autonomous Vehicle Navigation
Abstract: In this paper, we investigate the viability of applying machine learning (ML) algorithms to the odor source localization (OSL) problem. The primary objective is to obtain an ML model that guides and navigates a mobile robot to an odor source without explicitly programmed search algorithms. To achieve this goal, an adaptive neuro-fuzzy inference system (ANFIS) model is employed to generate the olfactory-based navigation strategy. To train the ANFIS model, multiple training data sets are acquired by applying two traditional olfactory-based navigation methods, namely the moth-inspired and Bayesian-inference methods, in hundreds of simulated OSL tests with different environments. After training with the hybrid-learning algorithm, the ANFIS model is validated in multiple OSL tests with varying search conditions. Experimental results show that the ANFIS model can imitate other olfactory-based navigation methods and correctly locate the odor source. Moreover, when trained with the fused training data set, the ANFIS model outperforms the two traditional navigation methods in terms of average search time.
|
|
TuAT10 |
Room T10 |
Visual Localization I |
Regular session |
Chair: Huang, Guoquan (Paul) | University of Delaware |
Co-Chair: Stiller, Christoph | Karlsruhe Institute of Technology |
|
10:00-10:15, Paper TuAT10.1 | |
>Visual-Inertial-Wheel Odometry with Online Calibration |
> Video Attachment
|
|
Lee, Woosik | University of Delaware |
Eckenhoff, Kevin | University of Delaware |
Yang, Yulin | University of Delaware |
Geneva, Patrick | University of Delaware |
Huang, Guoquan (Paul) | University of Delaware |
Keywords: Localization, Calibration and Identification, Wheeled Robots
Abstract: In this paper, we introduce a novel visual-inertial-wheel odometry (VIWO) system for ground vehicles, which efficiently fuses multi-modal visual, inertial and 2D wheel odometry measurements in a sliding-window filtering fashion. As multi-sensor fusion requires both intrinsic and extrinsic (spatiotemporal) calibration parameters, which may vary over time during terrain navigation, we propose to perform VIWO along with online sensor calibration of the wheel encoders' intrinsic and extrinsic parameters. To this end, we analytically derive the 2D wheel odometry measurement model from the raw wheel encoder readings and optimally fuse this 2D relative motion information with 3D visual-inertial measurements. Additionally, an observability analysis is performed for the linearized VIWO system, which identifies five commonly seen degenerate motions for the wheel calibration parameters. The proposed system has been validated extensively in both Monte-Carlo simulations and real-world experiments in large-scale urban driving scenarios.
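The 2D wheel odometry measurement the abstract derives from raw encoder readings is, in its simplest form, the standard differential-drive arc model. The sketch below assumes encoder ticks have already been converted to per-wheel distances (i.e., the intrinsic scale factors the paper calibrates online are taken as known); it is illustrative, not the paper's derivation:

```python
import math

def wheel_odometry_step(x, y, theta, d_left, d_right, track_width):
    """Propagate a 2D pose from differential-drive wheel increments.

    d_left / d_right: distances traveled by each wheel over one encoder
    interval (meters). Uses the constant-curvature (arc) motion model.
    """
    d = 0.5 * (d_right + d_left)               # midpoint translation
    dtheta = (d_right - d_left) / track_width  # heading change
    if abs(dtheta) < 1e-9:                     # straight-line limit
        x += d * math.cos(theta)
        y += d * math.sin(theta)
    else:
        r = d / dtheta                         # turning radius
        x += r * (math.sin(theta + dtheta) - math.sin(theta))
        y += -r * (math.cos(theta + dtheta) - math.cos(theta))
    theta += dtheta
    return x, y, theta
```

This relative (x, y, theta) increment is the kind of 2D motion measurement that would then be fused with the 3D visual-inertial state in the sliding-window filter.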
|
|
10:15-10:30, Paper TuAT10.2 | |
>Active Perception for Outdoor Localisation with an Omnidirectional Camera |
> Video Attachment
|
|
Jayasuriya, Maleen | University of Technology Sydney |
Ranasinghe, Ravindra | University of Technology Sydney |
Dissanayake, Gamini | University of Technology Sydney |
Keywords: Localization, Omnidirectional Vision, Autonomous Vehicle Navigation
Abstract: This paper presents a novel localisation framework based on an omnidirectional camera, targeted at outdoor urban environments. Bearing-only information about persistent and easily observable high-level semantic landmarks (such as lamp-posts, street signs and trees) is perceived using a Convolutional Neural Network (CNN). The framework utilises an information-theoretic strategy to decide the best viewpoint to serve as input to the CNN, instead of the full 360 degree coverage offered by an omnidirectional camera, in order to leverage the advantage of a higher field of view without compromising performance. Environmental landmark observations are supplemented with observations of ground surface boundaries corresponding to high-level features such as manhole covers, pavement edges and lane markings, extracted from a second CNN. Localisation is carried out in an Extended Kalman Filter (EKF) framework using a sparse 2D map of the environmental landmarks and a Vector Distance Transform (VDT) based representation of the ground surface boundaries. This is in contrast to traditional vision-only localisation systems that have to carry out Visual Odometry (VO) or Simultaneous Localisation and Mapping (SLAM), since low-level features (such as SIFT, SURF, ORB) do not persist over long time frames due to radical appearance changes (illumination, occlusions, etc.) and dynamic objects. As the proposed framework relies on high-level persistent semantic features of the environment, it offers the opportunity to carry out localisation on a prebuilt map, which is significantly more resource efficient and robust. Experiments using a Personal Mobility Device (PMD) driven in a representative urban environment are presented to demonstrate and evaluate the effectiveness of the proposed localiser against relevant state-of-the-art techniques.
|
|
10:30-10:45, Paper TuAT10.3 | |
>Ground Texture Based Localization: Do We Need to Detect Keypoints? |
|
Schmid, Jan Fabian | Robert Bosch GmbH; Goethe University Frankfurt |
Simon, Stephan F. | Robert Bosch GmbH |
Mester, Rudolf | NTNU Trondheim |
Keywords: Localization, Mapping, SLAM
Abstract: Localization using ground texture images recorded with a downward-facing camera is a promising approach to achieve reliable high-accuracy vehicle positioning. A common way to accomplish the task is to focus on prominent features of the ground texture such as stones and cracks. Our results indicate that with an approximately known camera pose it is sufficient to use arbitrary ground regions, i.e. extracting features at random positions without significant loss in localization performance. Additionally, we propose a real-time capable CPU-only localization method based on this idea, and suggest possible improvements for further research.
|
|
10:45-11:00, Paper TuAT10.4 | |
>Vision Global Localization with Semantic Segmentation and Interest Feature Points |
|
Li, Kai | Alibaba Group |
Zhang, Xudong | OPPO |
Li, Kun | Alibaba Group |
Zhang, Shuo | Alibaba Group |
Keywords: Localization, Computer Vision for Other Robotic Applications, Visual Tracking
Abstract: In this work, we present a vision-only global localization architecture for autonomous vehicle applications that achieves centimeter-level accuracy and high robustness in various scenarios. We first apply pixel-wise segmentation to a front-view mono camera and extract semantic features, e.g., pole-like objects, lane markings, and curbs, which are robust to lighting conditions, viewing angles and seasonal changes. For scenes without enough semantic information, we extract interest feature points on the static background, such as the ground surface and buildings, assisted by our semantic segmentation. We create the visual global map with a semantic features map layer extracted from a LiDAR point-cloud semantic map and a point features map layer built with fixed-pose structure from motion. A lumped Levenberg-Marquardt optimization solver is then applied to minimize the cost from the two types of observations. We further evaluate the accuracy and robustness of our method with road tests on Alibaba's autonomous delivery vehicles in multiple scenarios, as well as on a KAIST urban dataset.
|
|
11:00-11:15, Paper TuAT10.5 | |
>Monocular Camera Localization in Prior LiDAR Maps with 2D-3D Line Correspondences |
> Video Attachment
|
|
Yu, Huai | Carnegie Mellon University; Wuhan University |
Zhen, Weikun | Carnegie Mellon University |
Yang, Wen | Wuhan University |
Zhang, Ji | Carnegie Mellon University |
Scherer, Sebastian | Carnegie Mellon University |
Keywords: Localization, Sensor Fusion
Abstract: Lightweight camera localization in existing maps is essential for vision-based navigation. Currently, visual and visual-inertial odometry (VO&VIO) techniques are well developed for state estimation but suffer from inevitable accumulated drift and pose jumps upon loop closure. To overcome these problems, we propose an efficient monocular camera localization method in prior LiDAR maps using direct 2D-3D line correspondences. To handle the appearance differences and modality gaps between LiDAR point clouds and images, geometric 3D lines are extracted offline from LiDAR maps while robust 2D lines are extracted online from video sequences. With the pose prediction from VIO, we can efficiently obtain coarse 2D-3D line correspondences. Then the camera poses and 2D-3D correspondences are iteratively optimized by minimizing the projection error of the correspondences and rejecting outliers. Experimental results on the EuRoC MAV dataset and our collected dataset demonstrate that the proposed method can efficiently estimate camera poses without accumulated drift or pose jumps in structured environments.
|
|
11:15-11:30, Paper TuAT10.6 | |
>Monocular Localization in HD Maps by Combining Semantic Segmentation and Distance Transform |
> Video Attachment
|
|
Pauls, Jan-Hendrik | Karlsruhe Institute of Technology (KIT) |
Petek, Kürsat | Karlsruher Institut Für Technologie (KIT) |
Poggenhans, Fabian | FZI Research Center for Information Technology |
Stiller, Christoph | Karlsruhe Institute of Technology |
Keywords: Localization, SLAM, Intelligent Transportation Systems
Abstract: Easy, yet robust long-term localization is still an open research topic. Existing approaches require either dense maps, expensive sensors, specialized map features or proprietary detectors. We propose using semantic segmentation on a monocular camera to localize directly in an HD map as used for automated driving. This combines lightweight, yet powerful HD maps with the simplicity of monocular vision and the flexibility of neural networks. The major challenges arising from this combination are data association and robustness against misdetections. Association is solved efficiently by applying a distance transform on binary per-class images. This provides not only a fast lookup table with a smooth gradient, as needed for pose-graph optimization, but also dynamic association by default. A sliding-window pose graph optimization combines single-image detections with vehicle odometry, smoothing the results and helping overcome even misclassifications in consecutive frames. Evaluation against a highly accurate 6D visual localization shows that our approach can achieve the accuracy levels required for automated driving, while being one of the most lightweight and flexible methods to do so.
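The distance-transform trick described in the abstract, precomputing, for each class, every pixel's distance to the nearest detection so that association becomes a table lookup, can be sketched as follows. This is a brute-force illustration (a real system would use a linear-time algorithm such as Felzenszwalb-Huttenlocher), and `association_cost` is a hypothetical name for the residual a pose-graph optimizer could minimize:

```python
import numpy as np

def distance_transform(mask):
    """Per-pixel Euclidean distance to the nearest True pixel of a
    binary per-class detection mask (brute force, for illustration)."""
    mask = np.asarray(mask, dtype=bool)
    ys, xs = np.nonzero(mask)
    if len(ys) == 0:
        return np.full(mask.shape, np.inf)
    gy, gx = np.mgrid[0:mask.shape[0], 0:mask.shape[1]]
    # Distance from every pixel to every detection pixel; keep the min.
    d2 = (gy[..., None] - ys) ** 2 + (gx[..., None] - xs) ** 2
    return np.sqrt(d2.min(axis=-1))

def association_cost(dt, projected_pts):
    """Sum of lookup distances for map elements projected into the
    image: association happens implicitly via the nearest detection."""
    return sum(dt[int(round(v)), int(round(u))] for u, v in projected_pts)
```

Because the transform is precomputed once per frame and class, evaluating a candidate pose costs only one table lookup per projected map element, which is what makes the association step cheap enough for online optimization.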
|
|
TuAT11 |
Room T11 |
Visual Localization II |
Regular session |
Chair: Forbes, James Richard | McGill University |
Co-Chair: Stachniss, Cyrill | University of Bonn |
|
10:00-10:15, Paper TuAT11.1 | |
>Learning an Overlap-Based Observation Model for 3D LiDAR Localization |
|
Chen, Xieyuanli | University of Bonn |
Läbe, Thomas | University of Bonn |
Nardi, Lorenzo | University of Bonn |
Behley, Jens | University of Bonn |
Stachniss, Cyrill | University of Bonn |
Keywords: Localization, SLAM
Abstract: Localization is a crucial capability for mobile robots and autonomous cars. In this paper, we address learning an observation model for Monte-Carlo localization using 3D LiDAR data. We propose a novel, neural network-based observation model that computes the expected overlap of two 3D LiDAR scans. The model predicts the overlap and yaw angle offset between the current sensor reading and virtual frames generated from a pre-built map. We integrate this observation model into a Monte-Carlo localization framework and test it on urban datasets collected with a car in different seasons. The experiments presented in this paper illustrate that our method can reliably localize a vehicle in typical urban environments. We furthermore provide comparisons to a beam-endpoint and a histogram-based method, indicating superior global localization performance of our method with fewer particles.
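Within Monte-Carlo localization, an overlap-style observation model slots in as the particle weighting step. The sketch below is a generic illustration, not the paper's method: `overlap_fn` stands in for the learned network that scores the overlap between the current scan and a virtual map frame rendered at a particle's pose.

```python
def update_particle_weights(particles, overlap_fn):
    """Weight each particle by the predicted overlap between the
    current scan and a virtual map frame at the particle's pose.

    overlap_fn: callable mapping a particle (pose hypothesis) to an
    overlap score in [0, 1] (a stand-in for a learned model).
    Returns normalized weights summing to one.
    """
    weights = [max(overlap_fn(p), 1e-12) for p in particles]  # avoid zeros
    total = sum(weights)
    return [w / total for w in weights]
```

Particles whose rendered view overlaps well with the current scan dominate the posterior after normalization, which is why a discriminative overlap predictor lets the filter converge with fewer particles.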
|
|
10:15-10:30, Paper TuAT11.2 | |
>Global Localization Over 2D Floor Plans with Free-Space Density Based on Depth Information |
|
Maffei, Renan | Federal University of Rio Grande Do Sul |
Pittol, Diego | Federal University of Rio Grande Do Sul |
Mantelli, Mathias Fassini | Federal University of Rio Grande Do Sul |
Prestes, Edson | UFRGS |
Kolberg, Mariana | UFRGS |
Keywords: Localization, RGB-D Perception
Abstract: Many applications with mobile robots require self-localization in indoor maps. While such maps can be generated beforehand by SLAM strategies, various localization approaches use 2D floor plans as the reference input. In this paper, we present a localization strategy that uses a floor plan as the map and is based on spatial density information computed from the dense depth data of RGB-D cameras. We propose an interval-based model, called Interval Free-Space Density, that bounds the uncertainty of observations and minimizes the effects of movable objects in the environment. Our model was applied in a Monte Carlo Localization strategy and compared with traditional observation models. Experimental results show the robustness of the proposed method in single-camera and multi-camera experiments in home environments.
|
|
10:30-10:45, Paper TuAT11.3 | |
>A Point Cloud Registration Pipeline Using Gaussian Process Regression for Bathymetric SLAM |
|
Hitchcox, Thomas | McGill University |
Forbes, James Richard | McGill University |
Keywords: SLAM, Marine Robotics, Visual-Based Navigation
Abstract: Point cloud registration is a means of achieving loop closure correction within a simultaneous localization and mapping (SLAM) algorithm. Data association is a critical component of point cloud registration, and can be very challenging in feature-depleted environments such as the seabed. This paper presents a point cloud registration pipeline for performing loop closure correction in feature-depleted subsea environments using data collected with an optical scanner. The pipeline uses Gaussian process regression to extract keypoint sets, and a weighted network alignment algorithm to propose point correspondences. A variant of the iterative closest point (ICP) registration algorithm is used to perform fine alignment, with point correspondences informed by the mappings determined in the network alignment step. The developed registration pipeline is deployed successfully on a challenging section of field data containing topography that cannot be resolved using conventional imaging sonar.
|
|
10:45-11:00, Paper TuAT11.4 | |
>A Robust Multi-Stereo Visual-Inertial Odometry Pipeline |
|
Jaekel, Joshua | Carnegie Mellon University |
Mangelson, Joshua | Brigham Young University |
Scherer, Sebastian | Carnegie Mellon University |
Kaess, Michael | Carnegie Mellon University |
Keywords: Localization, SLAM, Visual-Based Navigation
Abstract: In this paper we present a novel multi-stereo visual-inertial odometry (VIO) framework which aims to improve the robustness of a robot's state estimate during aggressive motion and in visually challenging environments. Our system uses a fixed-lag smoother which jointly optimizes for poses and landmarks across all stereo pairs. We propose a 1-point RANdom SAmple Consensus (RANSAC) algorithm which is able to perform outlier rejection across features from all stereo pairs. To handle the problem of noisy extrinsics, we account for uncertainty in the calibration of each stereo pair and model it in both our front-end and back-end. The result is a VIO system which is able to maintain an accurate state estimate under conditions that have typically proven to be challenging for traditional state-of-the-art VIO systems. We demonstrate the benefits of our proposed multi-stereo algorithm by evaluating it with both simulated and real world data. We show that our proposed algorithm is able to maintain a state estimate in scenarios where traditional VIO algorithms fail.
|
|
11:00-11:15, Paper TuAT11.5 | |
>Globally Optimal Consensus Maximization for Robust Visual Inertial Localization in Point and Line Map |
|
Jiao, Yanmei | Zhejiang University |
Wang, Yue | Zhejiang University |
Fu, Bo | Zhejiang University, the State Key Laboratory of Industrial Cont |
Tan, Qimeng | Beijing Institute of Spacecraft System Engineering |
Chen, Lei | Beijing Institute of Spacecraft System Engineering |
Wang, Minhang | Huawei |
Huang, Shoudong | University of Technology, Sydney |
Xiong, Rong | Zhejiang University |
Keywords: Localization, Sensor Fusion
Abstract: Map-based visual inertial localization is a crucial step to reduce drift in the state estimation of mobile robots. The underlying localization problem is to estimate the pose from a set of 3D-2D feature correspondences, of which the main challenge is the presence of outliers, especially in changing environments. In this paper, we propose a robust solution based on efficient global optimization of the consensus maximization problem, which is insensitive to a high percentage of outliers. We first introduce translation invariant measurements (TIMs) for both points and lines to decouple the consensus maximization problem into rotation and translation subproblems, allowing for a two-stage solver with reduced solution dimensions. Then we show that (i) the rotation can be calculated by minimizing TIMs using only 1-dimensional branch-and-bound (BnB), and (ii) the translation can be found by running a 1-dimensional search three times with prioritized progressive voting. Compared with popular randomized solvers, our solver achieves deterministic global convergence without depending on an initial value, while compared with existing BnB-based methods, ours is exponentially faster. Finally, by evaluating the performance on both simulation and real-world datasets, we show that our approach gives an accurate pose even when 90% of the correspondences are outliers (only 2 inliers).
|
|
11:15-11:30, Paper TuAT11.6 | |
>The Invariant Rauch-Tung-Striebel Smoother |
|
van der Laan, Niels | Delft University of Technology |
Cohen, Mitchell | McGill University |
Arsenault, Jonathan | McGill University |
Forbes, James Richard | McGill University |
Keywords: Localization, Autonomous Vehicle Navigation, Sensor Fusion
Abstract: This paper presents an invariant Rauch-Tung-Striebel (IRTS) smoother applicable to systems with states that are an element of a matrix Lie group. In particular, the extended Rauch-Tung-Striebel (RTS) smoother is adapted to work within a matrix Lie group framework. The main advantage of the IRTS smoother is that the linearization of the process and measurement models is independent of the state estimate, resulting in state-estimate-independent Jacobians when certain technical requirements are met. A sample problem is considered that involves estimating the three-dimensional pose of a rigid body on SE(3), along with sensor biases. The multiplicative RTS (MRTS) smoother is also reviewed and used as a direct comparison to the proposed IRTS smoother using experimental data. Both smoothing methods are also compared to invariant and multiplicative versions of the Gauss-Newton approach to solving the batch state estimation problem.
|
|
TuAT12 |
Room T12 |
Visual Localization III |
Regular session |
Chair: Barfoot, Timothy | University of Toronto |
Co-Chair: Oishi, Shuji | National Institute of Advanced Industrial Science and Technology (AIST) |
|
10:00-10:15, Paper TuAT12.1 | |
>C*: Cross-Modal Simultaneous Tracking and Rendering for 6-DoF Monocular Camera Localization Beyond Modalities |
> Video Attachment
|
|
Oishi, Shuji | National Institute of Advanced Industrial Science and Technology |
Kawamata, Yasunori | Toyohashi University of Technology |
Yokozuka, Masashi | Nat. Inst. of Advanced Industrial Science and Technology |
Koide, Kenji | National Institute of Advanced Industrial Science and Technology |
Banno, Atsuhiko | National Institute of Advanced Industrial Science and Technology |
Miura, Jun | Toyohashi University of Technology |
Keywords: Localization, Visual Tracking, Multi-Modal Perception
Abstract: We present a monocular camera localization technique for a three-dimensional prior map. Visual localization has been attracting considerable attention as a lightweight and widely available localization technique for all kinds of mobile platforms; however, it still suffers from appearance changes and a high computational cost. With a view to achieving robust and real-time visual localization, we first reduce the localization problem to alternating local tracking and occasional keyframe rendering, following a simultaneous tracking and rendering algorithm. At the same time, by using an information-theoretic metric termed normalized information distance in the local tracking, we develop a 6-DoF localization method robust to intensity variations between modalities and varying sensor properties. We quantitatively evaluated the accuracy and robustness of our method using both synthetic and real datasets and achieved reliable and practical localization even in the case of extreme appearance changes.
|
|
10:15-10:30, Paper TuAT12.2 | |
>Denoising IMU Gyroscopes with Deep Learning for Open-Loop Attitude Estimation |
|
Brossard, Martin | Mines ParisTech |
Bonnabel, Silvere | Mines ParisTech |
Barrau, Axel | Safran |
Keywords: Localization, Calibration and Identification
Abstract: This paper proposes a learning method for denoising the gyroscopes of Inertial Measurement Units (IMUs) using ground truth data, and for estimating in real time the orientation (attitude) of a robot by dead reckoning. The obtained algorithm outperforms the state-of-the-art on the (unseen) test sequences. These performances are achieved thanks to a well-chosen model, a proper loss function for orientation increments, and the identification of key points when training with high-frequency inertial data. Our approach builds upon a neural network based on dilated convolutions, without requiring any recurrent neural network. We demonstrate how efficient our strategy is for 3D attitude estimation on the EuRoC and TUM-VI datasets. Interestingly, we observe that our dead reckoning algorithm manages to beat top-ranked visual-inertial odometry systems in terms of attitude estimation although it does not use vision sensors. We believe this paper offers new perspectives for visual-inertial localization and constitutes a step toward more efficient learning methods involving IMUs. Our open-source implementation is available at https://github.com/mbrossar/denoise-imu-gyro.
|
|
10:30-10:45, Paper TuAT12.3 | |
>Variational Inference with Parameter Learning Applied to Vehicle Trajectory Estimation |
|
Wong, Jeremy Nathan | University of Toronto |
Yoon, David Juny | University of Toronto |
Schoellig, Angela P. | University of Toronto |
Barfoot, Timothy | University of Toronto |
Keywords: Localization, SLAM, Sensor Fusion
Abstract: We present parameter learning in a Gaussian variational inference setting using only noisy measurements (i.e., no groundtruth). This is demonstrated in the context of vehicle trajectory estimation, although the method we propose is general. The paper extends the Exactly Sparse Gaussian Variational Inference (ESGVI) framework, which has previously been used for large-scale nonlinear batch state estimation. Our contribution is to additionally learn parameters of our system models (which may be difficult to choose in practice) within the ESGVI framework. In this paper, we learn the covariances for the motion and sensor models used within vehicle trajectory estimation. Specifically, we learn the parameters of a white-noise-on-acceleration motion model and the parameters of an Inverse-Wishart prior over measurement covariances for our sensor model. We demonstrate our technique using a 36 km dataset consisting of a car using lidar to localize against a high-definition map; we learn the parameters on a training section of the data and then show that we achieve high-quality state estimates on a test section, even in the presence of outliers. Lastly, we show that our framework can be used to solve pose graph optimization even with many false loop closures.
|
|
10:45-11:00, Paper TuAT12.4 | |
>Time-Relative RTK-GNSS: GNSS Loop Closure in Pose Graph Optimization |
|
Suzuki, Taro | Chiba Institute of Technology |
Keywords: Localization, Sensor Fusion, SLAM
Abstract: A pose-graph-based optimization technique is widely used to estimate robot poses using various sensor measurements from devices such as laser scanners and cameras. The global navigation satellite system (GNSS) has recently been used to estimate the absolute 3D position of outdoor mobile robots. However, since the accuracy of GNSS single-point positioning is only a few meters, the GNSS is not used for the loop closure of a pose graph. The main purpose of this study is to generate a loop closure of a pose graph using a time-relative real-time kinematic GNSS (TR-RTK-GNSS) technique. The proposed TR-RTK-GNSS technique uses time-differential carrier phase positioning, which is based on carrier-phase-based differential GNSS with a single GNSS receiver. Unlike a conventional RTK-GNSS, we can directly compute the robot's relative position using only a stand-alone GNSS receiver. The initial pose graph is generated from the accumulated velocity computed from GNSS Doppler measurements. To reduce the accumulated error of velocity, we use the TR-RTK-GNSS technique for the loop closure in the graph-based optimization framework. The kinematic positioning tests were performed using an unmanned aerial vehicle to confirm the effectiveness of the proposed technique. From the tests, we can estimate the vehicle's trajectory with approximately 3 cm accuracy using only a stand-alone GNSS receiver.
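The accumulate-then-close-the-loop structure can be illustrated in one dimension. The linear error distribution below is a common simple correction standing in for the paper's full graph optimization (all values are illustrative):

```python
def accumulate(velocities, dt=1.0):
    """Dead-reckoned 1-D positions from Doppler-derived velocities."""
    pos, traj = 0.0, []
    for v in velocities:
        pos += v * dt
        traj.append(pos)
    return traj

def close_loop(traj, measured_rel):
    """Sketch of the loop-closure step: a precise relative position between
    the first and last epochs (as TR-RTK-GNSS provides) exposes the drift,
    and the residual is spread linearly along the trajectory."""
    err = traj[-1] - measured_rel
    n = len(traj)
    return [p - err * (i + 1) / n for i, p in enumerate(traj)]

traj = accumulate([1.0, 1.02, 0.98, 1.05])   # slight velocity bias drifts
print(round(close_loop(traj, 4.0)[-1], 6))   # 4.0
```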
|
|
11:00-11:15, Paper TuAT12.5 | |
>Improving Visual SLAM in Car-Navigated Urban Environments with Appearance Maps |
|
Jaenal, Alberto | University of Malaga |
Zuñiga-Noël, David | University of Malaga |
Gomez-Ojeda, Ruben | University of Málaga |
Gonzalez-Jimenez, Javier | University of Malaga |
Keywords: Localization, Recognition, Visual-Based Navigation
Abstract: This paper describes a method that corrects errors of a VSLAM-estimated trajectory for cars driving in GPS-denied environments, by applying constraints from public databases of geo-tagged images (Google Street View, Mapillary, etc.). The method, dubbed Appearance-based Geo-Alignment for Simultaneous Localisation and Mapping (AGA-SLAM), encodes the available image database as an appearance map, which represents the space with a compact holistic descriptor for each image plus its associated geo-tag. The VSLAM trajectory is corrected on-line by incorporating constraints from the recognized places along the trajectory into a position-based optimization framework. The paper presents a seamless formulation to combine local and absolute metric observations with associations from Visual Place Recognition. The robustness of the holistic image descriptor to changes due to weather or illumination variations ensures a long-term consistent method to improve car localization. The proposed method has been extensively evaluated on more than 70 sequences from 4 different datasets, demonstrating its effectiveness and robustness to appearance challenges.
|
|
11:15-11:30, Paper TuAT12.6 | |
>ROVINS: Robust Omnidirectional Visual Inertial Navigation System |
> Video Attachment
|
|
Seok, Hochang | Hanyang University |
Lim, Jongwoo | Hanyang University |
Keywords: SLAM, Visual-Based Navigation, Omnidirectional Vision
Abstract: Visual odometry is an essential component in robot navigation and autonomous driving. However, visual sensors are vulnerable to fast motion or sudden illumination changes. To compensate for such weaknesses, inertial measurement units (IMUs) can be used to maintain the short-term motion estimate when visual sensing is unstable, and to enhance the quality of the estimated motion with inertial information. Recently, ROVO (omnidirectional visual odometry) demonstrated superior performance and stability due to the unceasing feature observation of the omnidirectional setup. However, it still has the shortcomings of visual odometry. In this paper we propose an omnidirectional visual-inertial odometry system, which seamlessly integrates the inertial information into the omnidirectional visual odometry algorithm. First, soft relative pose constraints from the inertial measurements are added to the pose optimization formulation, which enables blind motion estimation when all visual features are lost. Second, by initializing the visual features in tracking using predictions from the estimated velocity, the feature tracking becomes more robust to visual disturbances. The experimental results show that the proposed visual-inertial algorithm outperforms the vision-only algorithm by significant margins.
|
|
TuAT13 |
Room T13 |
Mapping |
Regular session |
Chair: Olson, Edwin | University of Michigan |
Co-Chair: Zheng, Nanning | Xi'an Jiaotong University |
|
10:00-10:15, Paper TuAT13.1 | |
>CoBigICP: Robust and Precise Point Set Registration Using Correntropy Metrics and Bidirectional Correspondence |
> Video Attachment
|
|
Yin, Pengyu | Xi'an Jiaotong University |
Wang, Di | Xi'an Jiaotong University |
Du, Shaoyi | Xi'an Jiaotong University |
Ying, Shihui | School of Science, Shanghai University |
Gao, Yue | Tsinghua University |
Zheng, Nanning | Xi'an Jiaotong University |
Keywords: Probability and Statistical Methods, Mapping, Localization
Abstract: In this paper, we propose a novel probabilistic variant of iterative closest point (ICP) dubbed CoBigICP. The method leverages both local geometrical information and global noise characteristics. Locally, the 3D structure of both target and source clouds is incorporated into the objective function through bidirectional correspondence. Globally, the correntropy error metric is introduced as a noise model to resist outliers. Importantly, the close resemblance between the normal-distributions transform (NDT) and correntropy is revealed. To ease the minimization step, an on-manifold parameterization of the special Euclidean group is proposed. Extensive experiments validate that CoBigICP outperforms several well-known and state-of-the-art methods.
|
|
10:15-10:30, Paper TuAT13.2 | |
>The Masked Mapper: Masked Metric Mapping |
|
Haggenmiller, Acshi | University of Michigan |
Kabacinski, Cameron | University of Michigan |
Krogius, Maximilian | University of Michigan |
Olson, Edwin | University of Michigan |
Keywords: SLAM, Mapping, Localization
Abstract: In this paper, we propose a flexible mapping scheme that uses a masking function (mask) to focus the attention of a pose graph SLAM (Simultaneous Localization and Mapping) system. The masking function takes the robot's observations and returns true if the robot is in an important location. State-of-the-art methods in SLAM generate dense metric lidar maps, creating precise maps at a high computational cost by storing lidar scans for each pose node and continually attempting to close loops. In many cases, trying to always make loop closures is unnecessary for localization and even risky because of perceptual aliasing and false positives. By masking out these less useful positions, our method can create more accurate maps despite performing far fewer scan matches. We evaluate our system with three simple mask functions on a 2.5 km trajectory with significant angular drift. We compare the number of scan matches performed under each mask as well as the accuracy of the loop closures.
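The masking idea is easy to state in code. A minimal sketch, assuming a hypothetical doorway-style mask over range scans (the function name and threshold are ours, not one of the paper's three masks):

```python
def doorway_mask(scan_ranges, opening_thresh=1.5):
    """Hypothetical mask: returns True when the scan suggests a distinctive
    nearby structure (e.g., a narrow opening) worth keeping for loop
    closure; open, featureless areas are masked out."""
    return min(scan_ranges) < opening_thresh

def maybe_add_keyframe(pose_graph, pose, scan, mask):
    # Only masked-in poses keep their scan and participate in matching.
    if mask(scan):
        pose_graph.append((pose, scan))

graph = []
maybe_add_keyframe(graph, (0, 0, 0), [5.0, 4.2, 6.1], doorway_mask)  # skipped
maybe_add_keyframe(graph, (1, 0, 0), [0.9, 4.2, 6.1], doorway_mask)  # stored
print(len(graph))  # 1
```

The SLAM back end then only attempts scan matches among the stored keyframes, which is where the reduction in scan-match count comes from.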
|
|
10:30-10:45, Paper TuAT13.3 | |
>Allocating Limited Sensing Resources to Accurately Map Dynamic Environments |
> Video Attachment
|
|
Mitchell, Derek | Carnegie Mellon University |
Michael, Nathan | Carnegie Mellon University |
Keywords: Energy and Environment-Aware Automation, Environment Monitoring and Management, Mapping
Abstract: This work addresses the problem of learning a model of a dynamic environment using many independent Hidden Markov Models (HMMs) with a limited number of observations available per iteration. Many techniques exist to model dynamic environments, but do not consider how to deploy robots to build this model. Additionally, there are many techniques for exploring environments that do not consider how to prioritize regions when resources, in terms of robots to deploy and deployment durations, are limited. Here, we consider an environment model consisting of a series of HMMs that evolve over time independently and can be directly observed. At each iteration, we must determine which HMMs to observe in order to maximize the gain in model accuracy. We present a utility measure that balances a Pearson's chi-squared goodness-of-fit of the dynamics model with Mutual Information (MI) to ensure that observations are allocated to maximize the convergence rate of all HMMs, resulting in a faster convergence to higher steady-state model confidence and accuracy than either chi-squared or MI alone.
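The utility measure can be illustrated with a toy two-state model. Here the entropy of the state belief stands in for the mutual-information term, and the equal weighting is our choice; both are simplifications of the paper's formulation:

```python
import math

def chi_squared(observed, expected):
    """Pearson's chi-squared statistic between observed transition counts
    and the counts the current HMM predicts."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

def entropy(probs):
    return -sum(p * math.log(p) for p in probs if p > 0)

def utility(observed, expected, belief, w=0.5):
    """Sketch of balancing model misfit (chi-squared) against an
    information term (entropy as an MI surrogate): HMMs whose dynamics
    disagree with the data, or whose state is uncertain, attract
    observations first."""
    return w * chi_squared(observed, expected) + (1 - w) * entropy(belief)

# A poorly-fit, uncertain HMM outranks a well-fit, confident one.
print(utility([8, 2], [5.0, 5.0], [0.5, 0.5]) >
      utility([5, 5], [5.0, 5.0], [0.9, 0.1]))  # True
```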
|
|
10:45-11:00, Paper TuAT13.4 | |
>Adaptive Kernel Inference for Dense and Sharp Occupancy Grids |
> Video Attachment
|
|
Kwon, Youngsun | KAIST |
Moon, Bochang | Gwangju Institute of Science and Technology |
Yoon, Sung-eui | KAIST |
Keywords: Mapping, SLAM
Abstract: In this paper, we present a new approach, AKIMap, that uses an adaptive kernel inference for dense and sharp occupancy grid representations. Our approach is based on the multivariate kernel estimation, and we propose a simple, two-stage based method that selects an adaptive bandwidth matrix for an efficient and accurate occupancy estimation. To utilize correlations of occupancy observations given sparse and non-uniform distributions of point samples, we propose to use the covariance matrix as an initial bandwidth matrix, and then optimize the bandwidth matrix by adjusting its scale in an efficient, data-driven way for on-the-fly mapping. We demonstrate that the proposed technique estimates occupancy states more accurately than state-of-the-art methods given equal-data or equal-time settings, thanks to our adaptive inference. Furthermore, we show the practical benefits of the proposed work in on-the-fly mapping and observe that our adaptive approach produces dense and sharp occupancy representations in a real environment.
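A one-dimensional sketch of the kernel inference: the bandwidth is seeded from the sample variance of nearby points (the paper uses the full covariance matrix in 3D) and adjusted by a data-driven scale factor. All parameters and the 1-D reduction are our simplifications:

```python
import math

def sample_variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def occupancy_estimate(query, points, labels, scale=1.0):
    """Kernel-weighted occupancy at a query location: each nearby
    measurement votes with a Gaussian weight whose (squared) bandwidth h2
    adapts to the local spread of the data."""
    h2 = scale * sample_variance(points)
    num = den = 0.0
    for x, occ in zip(points, labels):
        w = math.exp(-((query - x) ** 2) / (2.0 * h2))
        num += w * occ
        den += w
    return num / den

pts = [0.0, 0.1, 0.2, 1.0, 1.1]
occ = [1, 1, 1, 0, 0]           # 1 = hit (occupied), 0 = free
print(round(occupancy_estimate(0.05, pts, occ), 2))
```

A query near the cluster of hits returns a high occupancy probability; tightening `scale` sharpens the boundary between the occupied and free regions, which is the lever the data-driven optimization turns.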
|
|
11:00-11:15, Paper TuAT13.5 | |
>Object-Based Pose Graph for Dynamic Indoor Environments |
> Video Attachment
|
|
Gomez, Clara | University Carlos III of Madrid |
Hernandez Silva, Alejandra Carolina | University Carlos III of Madrid |
Derner, Erik | Czech Technical University in Prague |
Barber, Ramon | Universidad Carlos III of Madrid |
Babuska, Robert | Delft University of Technology |
Keywords: Mapping, Dynamics, Service Robotics
Abstract: Relying on static representations of the environment limits the use of mapping methods in most real-world tasks. Real-world environments are dynamic and undergo changes that need to be handled through map adaptation. In this work, an object-based pose graph is proposed to solve the problem of mapping in indoor dynamic environments with mobile robots. In contrast to state-of-the-art methods where binary classifications between movable and static objects are used, we propose a new method to capture the probability of different objects over time. Object probability represents how likely it is to find a specific object in its previous location and it gives a quantification of how movable specific objects are. In addition, grouping object probabilities according to object class allows us to evaluate the movability of different object classes. We validate our object-based pose graph in real-world dynamic environments. Results in mapping and map adaptation with a real robot show efficient map maintenance through several mapping sessions, and results in object classification according to movability show an improvement compared to binary classification.
|
|
11:15-11:30, Paper TuAT13.6 | |
>UFOMap: An Efficient Probabilistic 3D Mapping Framework That Embraces the Unknown |
|
Duberg, Daniel | KTH - Royal Institute of Technology |
Jensfelt, Patric | KTH - Royal Institute of Technology |
Keywords: Mapping, RGB-D Perception, Motion and Path Planning
Abstract: 3D models are an essential part of many robotic applications. In applications where the environment is unknown a priori, or where only a part of the environment is known, it is important that the 3D model can handle the unknown space efficiently. Path planning, exploration, and reconstruction all fall into this category. In this paper we present an extension to OctoMap which we call UFOMap. UFOMap uses an explicit representation of all three states in the map, i.e., unknown, free, and occupied. This gives, surprisingly, a more memory-efficient representation. We provide methods that allow for significantly faster insertions into the octree. Furthermore, UFOMap supports fast queries based on occupancy state using so-called indicators and based on location by exploiting the octree structure and bounding volumes. This enables real-time colored octree mapping at high resolution (below 1 cm). UFOMap is contributed as a C++ library that can be used standalone but is also integrated into ROS.
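The three-state representation and indicator-based queries can be sketched as follows. The class names and the indicator-as-a-set encoding are our illustration, not UFOMap's actual C++ API:

```python
from enum import Enum

class State(Enum):
    UNKNOWN = 0
    FREE = 1
    OCCUPIED = 2

class Node:
    """Minimal three-state octree node whose 'indicator' summarizes which
    states occur anywhere in its subtree."""
    def __init__(self, state=State.UNKNOWN, children=None):
        self.children = children or []
        self.indicator = ({state} if not self.children else
                          set().union(*(c.indicator for c in self.children)))

def contains_unknown(node):
    # The indicator lets a query accept or prune an entire subtree
    # without descending into it.
    return State.UNKNOWN in node.indicator

leaves = [Node(State.FREE), Node(State.OCCUPIED), Node(State.FREE),
          Node(State.UNKNOWN)]
root = Node(children=leaves)
print(contains_unknown(root))  # True
```

Exploration-style queries ("is there unknown space in this region?") become a constant-time check at inner nodes, which is what makes such queries fast in practice.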
|
|
TuAT14 |
Room T14 |
Mapping for Navigation |
Regular session |
Chair: Gawel, Abel Roman | ETH Zurich |
Co-Chair: Bertrand, Sylvain | Institute for Human and Machine Cognition |
|
10:00-10:15, Paper TuAT14.1 | |
>Detecting Usable Planar Regions for Legged Robot Locomotion |
|
Bertrand, Sylvain | Institute for Human and Machine Cognition |
Lee, Inho | IHMC |
Mishra, Bhavyansh | Institute of Human and Machine Cognition, University of West Flo |
Calvert, Duncan | IHMC |
Pratt, Jerry | Inst. for Human and Machine Cognition |
Griffin, Robert J. | Institute for Human and Machine Cognition (IHMC) |
Keywords: Mapping, Legged Robots, Visual-Based Navigation
Abstract: Awareness of the environment is essential for mobile robots. Perception for legged robots requires high levels of reliability and accuracy in order to walk stably in the types of complex, cluttered environments we are interested in. In this paper, we present a usable environmental perception algorithm designed to detect steppable areas and obstacles for the autonomous generation of desired footholds for legged robots. To produce an efficient representation of the environment, the proposed perception algorithm is designed to cluster point cloud data into planar regions composed of convex polygons. We describe in this paper the end-to-end pipeline from data collection to generation of the regions, where we first compose an octree in order to create a more efficient data representation. We then group the leaves in the tree using a nearest neighbor search into a planar region, which is composed of the concave hull of points that is decomposed into convex polygons. We present a variety of environments, and illustrate the usability of this approach with the Atlas humanoid robots walking over rough terrain. We also discuss various challenges we faced and insights we gained in the development of this approach.
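The grouping of octree leaves into planar regions can be sketched as a greedy normal-similarity clustering. The 2-D normals and the angular threshold below are our simplifications of the pipeline, not the authors' implementation:

```python
def cluster_by_normal(leaf_normals, angle_cos=0.95):
    """Greedy sketch of planar-region growing: a leaf joins an existing
    region when its normal is near-parallel to the region's seed normal,
    otherwise it seeds a new region."""
    regions = []
    for n in leaf_normals:
        for seed, members in regions:
            if abs(seed[0] * n[0] + seed[1] * n[1]) >= angle_cos:
                members.append(n)
                break
        else:
            regions.append((n, [n]))
    return [members for _, members in regions]

# Two floor-like normals and one wall-like normal -> two regions.
print(len(cluster_by_normal([(0.0, 1.0), (0.05, 0.999), (1.0, 0.0)])))  # 2
```

Each resulting region would then be bounded (concave hull) and decomposed into convex polygons before being handed to the foothold planner.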
|
|
10:15-10:30, Paper TuAT14.2 | |
>Accurate Mapping and Planning for Autonomous Racing |
> Video Attachment
|
|
Andresen, Leiv | ETH Zurich, Autonomous Systems Lab |
Brandemuehl, Adrian | ETH Zurich, Autonomous Systems Lab |
Hönger, Alex | ETH Zurich, Autonomous Systems Lab |
Kuan, Benson | ETH Zurich |
Vödisch, Niclas | ETH Zurich, Autonomous Systems Lab |
Blum, Hermann | ETH Zurich |
Reijgwart, Victor | ETH Zurich |
Bernreiter, Lukas | ETH Zurich, Autonomous Systems Lab |
Schaupp, Lukas | ETH Zurich |
Chung, Jen Jen | Eidgenössische Technische Hochschule Zürich |
Bürki, Mathias | Autonomous Systems Lab, ETH Zuerich |
Oswald, Martin R. | ETH Zurich |
Siegwart, Roland | ETH Zurich |
Gawel, Abel Roman | ETH Zurich |
Keywords: Mapping, Motion and Path Planning, Sensor Fusion
Abstract: This paper presents the perception, mapping, and planning pipeline implemented on an autonomous race car. It was developed by the 2019 AMZ driverless team for the Formula Student Germany (FSG) 2019 driverless competition, where it won 1st place overall. The presented solution combines early fusion of camera and LiDAR data, a layered mapping approach, and a planning approach that uses Bayesian filtering to achieve high-speed driving on unknown race tracks while creating accurate maps. We benchmark the method against our team’s previous solution, which won FSG 2018, and show improved accuracy when driving at the same speeds. Furthermore, the new pipeline makes it possible to reliably raise the maximum driving speed in unknown environments from 3 m/s to 12 m/s while still mapping with an acceptable RMSE of 0.29 m.
|
|
10:30-10:45, Paper TuAT14.3 | |
>Crowdsourced 3D Mapping: A Combined Multi-View Geometry and Self-Supervised Learning Approach |
|
Chawla, Hemang | Navinfo Europe |
Jukola, Matti | Navinfo EU |
Brouns, Terence | NavInfo Europe |
Arani, Elahe | Navinfo Europe |
Zonooz, Bahram | Navinfo Europe |
Keywords: Mapping, SLAM, Deep Learning for Visual Perception
Abstract: The ability to efficiently utilize crowd-sourced visual data carries immense potential for the domains of large-scale dynamic mapping and autonomous driving. However, state-of-the-art methods for crowdsourced 3D mapping assume prior knowledge of camera intrinsics. In this work we propose a framework that estimates the 3D positions of semantically meaningful landmarks such as traffic signs without assuming known camera intrinsics, using only a monocular color camera and GPS. We utilize multi-view geometry as well as deep learning based self-calibration, depth, and ego-motion estimation for traffic sign positioning, and show that combining their strengths is important for increasing the map coverage. To facilitate research on this task, we construct and make available a KITTI based 3D traffic sign ground truth positioning dataset. Using our proposed framework, we achieve an average single-journey relative and absolute positioning accuracy of 39 cm and 1.26 m respectively, on this dataset.
|
|
10:45-11:00, Paper TuAT14.4 | |
>Efficient Multiresolution Scrolling Grid for Stereo Vision-Based MAV Obstacle Avoidance |
|
Dexheimer, Eric | Carnegie Mellon University |
Mangelson, Joshua | Brigham Young University |
Scherer, Sebastian | Carnegie Mellon University |
Kaess, Michael | Carnegie Mellon University |
Keywords: Mapping, Aerial Systems: Perception and Autonomy, Collision Avoidance
Abstract: Fast, aerial navigation in cluttered environments requires a suitable map representation for path planning. In this paper, we propose the use of an efficient, structured multiresolution representation that expands the sensor range of dense local grids for memory-constrained platforms. While similar data structures have been proposed, we avoid processing redundant occupancy information and use the organization of the grid to improve efficiency. By layering 3D circular buffers that double in resolution at each level, obstacles near the robot are represented at finer resolutions while coarse spatial information is maintained at greater distances. We also introduce a novel method for efficiently calculating the Euclidean distance transform on the multiresolution grid by leveraging its structure. Lastly, we utilize our proposed framework to demonstrate improved stereo camera-based MAV obstacle avoidance with an optimization-based planner in simulation.
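The layering scheme (resolution doubling per level) determines which buffer represents a point at a given distance from the robot. A sketch under assumed parameters; the base resolution, cells per side, and layer count are illustrative, not the paper's:

```python
def layer_for_distance(dist, base_res=0.1, cells_per_side=64, num_layers=4):
    """Return the finest multiresolution layer whose circular buffer still
    covers a point at the given distance. Layer k has resolution
    base_res * 2**k, so its cube covers cells_per_side * base_res * 2**k
    per side, centered on the robot."""
    for k in range(num_layers):
        half_extent = cells_per_side * base_res * (2 ** k) / 2.0
        if dist <= half_extent:
            return k  # finest layer that reaches this far
    return None  # beyond the coarsest layer: not represented

print(layer_for_distance(2.0), layer_for_distance(20.0))
```

Nearby obstacles thus land in the finest layer while distant structure is kept only coarsely, which is how the sensor range is extended at roughly constant memory.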
|
|
11:00-11:15, Paper TuAT14.5 | |
>DenseFusion: Large-Scale Online Dense Pointcloud and DSM Mapping for UAVs |
> Video Attachment
|
|
Chen, Lin | Northwestern Polytechnical University |
Zhao, Yong | Northwestern Polytechnic University |
Xu, Shibiao | Institute of Automation, Chinese Academy of Sciences |
Bu, Shuhui | Northwestern Polytechnical University |
Han, Pengcheng | Northwestern Polytechnical University |
Wan, Gang | Information Engineering University |
Keywords: Mapping, SLAM, Localization
Abstract: With the rapidly developing unmanned aerial vehicles, the requirements of generating maps efficiently and quickly are increasing. To realize online mapping, we develop a real-time dense mapping framework named DenseFusion which can incrementally generates dense geo-referenced 3D point cloud, digital orthophoto map (DOM) and digital surface model (DSM) from sequential aerial images with optional GPS information. The proposed method works in real-time on standard CPUs even for processing high resolution images. Based on the advanced monocular SLAM, our system first estimates appropriate camera poses and extracts effective keyframes, and next constructs virtual stereo-pair from consecutive frame to generate pruned dense 3D point clouds; then a novel real-time DSM fusion method is proposed which can incrementally process dense point cloud. Finally, a high efficiency visualization system is developed to adopt dynamic levels of detail method, which makes it render dense point cloud and DSM smoothly. The performance of the proposed method is evaluated through qualitative and quantitative experiments. The results indicate that compared to traditional structure from motion based approaches, the presented framework is able to output both large-scale high-quality DOM and DSM in real-time with low computational cost.
|
|
TuAT15 |
Room T15 |
Search and Mapping |
Regular session |
Chair: Ayanian, Nora | University of Southern California |
|
10:00-10:15, Paper TuAT15.1 | |
>Sampling-Based Search for a Semi-Cooperative Target |
|
Vandermeulen, Isaac | IRobot Corporation |
Gross, Roderich | The University of Sheffield |
Kolling, Andreas | Amazon |
Keywords: Path Planning for Multiple Mobile Robots or Agents, Multi-Robot Systems, Motion and Path Planning
Abstract: Searching for a lost teammate is an important task for multirobot systems. We present a variant of rapidly-exploring random trees (RRT) for generating search paths based on a probabilistic belief of the target teammate’s position. The belief is updated using a hidden Markov model built from knowledge of the target’s planned or historic behavior. For any candidate search path, this belief is used to compute a discounted reward which is a weighted sum of the connection probability at each time step. The RRT search algorithm uses randomly sampled locations to generate candidate vertices and adds candidate vertices to a planning tree based on bounds on the discounted reward. Candidate vertices are along the shortest path from an existing vertex to the sampled location, biasing the search based on the topology of the environment. This method produces high-quality search paths which are not constrained to a grid and can be computed fast enough to be used in real time. Compared with two other strategies, it found the target significantly faster in the most difficult 60% of situations and was similar in the easier 40% of situations.
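The discounted reward used to rank candidate search paths can be written directly as a weighted sum of per-step connection probabilities. The geometric discount factor gamma below is our choice of weighting:

```python
def discounted_reward(connect_probs, gamma=0.9):
    """Weighted sum of per-step connection probabilities along a candidate
    search path; earlier expected connections are worth more."""
    return sum((gamma ** t) * p for t, p in enumerate(connect_probs))

early = [0.6, 0.2, 0.1]   # likely to connect with the target soon
late = [0.1, 0.2, 0.6]    # same total probability mass, but later
print(discounted_reward(early) > discounted_reward(late))  # True
```

Because the discount favors early connection, the planner prefers paths that intercept the target teammate sooner even when two paths have the same overall connection probability.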
|
|
10:15-10:30, Paper TuAT15.2 | |
>Mixed-Integer Linear Programming Models for Multi-Robot Non-Adversarial Search |
|
Arruda Asfora, Beatriz | Cornell University |
Banfi, Jacopo | Cornell University |
Campbell, Mark | Cornell University |
Keywords: Path Planning for Multiple Mobile Robots or Agents, Multi-Robot Systems, Search and Rescue Robots
Abstract: In this letter, we consider the Multi-Robot Efficient Search Path Planning (MESPP) problem, where a team of robots is deployed in a graph-represented environment to capture a moving target within a given deadline. We prove this problem to be NP-hard, and present the first set of Mixed-Integer Linear Programming (MILP) models to tackle the MESPP problem. Our models are the first to encompass multiple searchers, arbitrary capture ranges, and false negatives simultaneously. While state-of-the-art algorithms for MESPP are based on simple path enumeration, the adoption of MILP as a planning paradigm allows us to leverage the powerful techniques of modern solvers, yielding better computational performance and, as a consequence, longer planning horizons. The models are designed for computing optimal solutions offline, but can be easily adapted for a distributed online approach. Our simulations show that it is possible to achieve a 98% decrease in computational time relative to the previous state-of-the-art. We also show that the distributed approach performs nearly as well as the centralized, within 6% in the settings studied in this letter, with the advantage of requiring significantly less time – an important consideration in practical search missions.
|
|
10:30-10:45, Paper TuAT15.3 | |
>Decentralised Self-Organising Maps for Multi-Robot Information Gathering |
|
Best, Graeme | Oregon State University |
Hollinger, Geoffrey | Oregon State University |
Keywords: Multi-Robot Systems, Planning, Scheduling and Coordination, Environment Monitoring and Management
Abstract: This paper presents a new coordination algorithm for decentralised multi-robot information gathering. We consider planning for an online variant of the multi-agent orienteering problem with neighbourhoods. This formulation closely aligns with a number of important tasks in robotics, including inspection, surveillance, and reconnaissance. We propose a decentralised variant of the self-organising map (SOM) learning procedure, named Dec-SOM, which efficiently plans sequences of waypoints for a team of robots. Decentralisation is achieved by performing a distributed allocation scheme jointly with a series of SOM adaptations. We also offer an efficient heuristic to select when to perform negotiations, which reduces communication resource usage. Simulation results in two settings, including an infrastructure inspection scenario with a real-world dataset of oil rigs, demonstrate that Dec-SOM outperforms baseline methods and other SOM variants, is competitive with centralised SOM, and is a viable solution for decentralised information gathering.
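One SOM adaptation step moves the winning waypoint and its neighbours toward a sampled goal location, which is the core operation underlying the Dec-SOM procedure. A 1-D sketch; the learning rate, neighbourhood decay, and radius are illustrative choices, not the paper's:

```python
def som_adapt(waypoints, goal, lr=0.5, radius=1):
    """One self-organising-map update: the waypoint nearest the sampled
    goal (the 'winner') and its ring neighbours move toward the goal,
    with influence halving per step of ring distance."""
    win = min(range(len(waypoints)), key=lambda i: abs(waypoints[i] - goal))
    out = list(waypoints)
    for i in range(len(out)):
        d = abs(i - win)
        if d <= radius:
            out[i] += lr * (0.5 ** d) * (goal - out[i])
    return out

print(som_adapt([0.0, 1.0, 2.0, 3.0], goal=1.2))
```

Repeating this over many sampled goals bends each robot's waypoint sequence toward the regions it has been allocated, while the neighbourhood term keeps the sequence smooth.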
|
|
10:45-11:00, Paper TuAT15.4 | |
>Asynchronous Adaptive Sampling and Reduced-Order Modeling of Dynamic Processes by Robot Teams Via Intermittently Connected Networks |
> Video Attachment
|
|
Rovina, Hannes Kaspar | Swiss Federal Institute of Technology Lausanne, EPFL |
Salam, Tahiya | 1995 |
Kantaros, Yiannis | University of Pennsylvania |
Hsieh, M. Ani | University of Pennsylvania |
Keywords: Distributed Robot Systems, Path Planning for Multiple Mobile Robots or Agents, Environment Monitoring and Management
Abstract: This work presents an asynchronous multi-robot adaptive sampling strategy through the synthesis of an intermittently connected mobile robot communication network. The objective is to enable a team of robots to adaptively sample and model a nonlinear dynamic spatiotemporal process. By employing an intermittently connected communication network, the team is not required to maintain an all-time connected network, enabling them to cover larger areas, especially when the team size is small. The approach first determines the next meeting locations for data exchange and, as the robots move towards these predetermined locations, they take measurements along the way. The data is then shared with other team members at the designated meeting locations and a reduced-order model (ROM) of the process is obtained in a distributed fashion. The ROM is used to estimate field values in areas without sensor measurements, which informs the path planning algorithm when determining a new meeting location for the team. The main contribution of this work is an intermittent communication framework for asynchronous adaptive sampling of dynamic spatiotemporal processes. We demonstrate the framework in simulation and compare different reduced-order models under full, all-time and intermittent connectivity.
|
|
11:00-11:15, Paper TuAT15.5 | |
>Inter-Robot Range Measurements in Pose Graph Optimization |
> Video Attachment
|
|
Boroson, Elizabeth | University of Southern California |
Hewitt, Robert | Jet Propulsion Laboratory |
Ayanian, Nora | University of Southern California |
de la Croix, Jean-Pierre | Jet Propulsion Laboratory, California Institute of Technology |
Keywords: SLAM, Multi-Robot Systems, Field Robots
Abstract: For multiple robots performing exploration in a previously unmapped environment, such as planetary exploration, maintaining accurate localization and building a consistent map are vital. If the robots do not have a map to localize against and do not explore the same area, they may not be able to find visual loop closures to constrain their relative poses, making traditional SLAM impossible. This paper presents a method for using UWB ranging sensors in multi-robot SLAM, which allows the robots to localize and build a map together even without visual loop closures. The ranging measurements are added to the pose graph as edges and used in optimization to estimate the robots’ relative poses. This method builds a map using all robots’ observations that is consistent and usable. It performs similarly to visual loop closures when they are available, and provides a good map when they are not, which other methods cannot do. The method is demonstrated on PUFFER robots, developed for autonomous planetary exploration, in an unstructured environment.
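A UWB range edge constrains only the distance between two poses, not their relative orientation. A minimal 2-D sketch of the residual such an edge contributes to the pose graph (not the authors' implementation):

```python
import math

def range_residual(pose_i, pose_j, measured_range):
    """Residual of one inter-robot UWB edge: difference between the
    measured range and the distance implied by the current (x, y)
    estimates. The optimizer drives these residuals toward zero across
    all edges, aligning the robots' maps without visual loop closures."""
    dx = pose_i[0] - pose_j[0]
    dy = pose_i[1] - pose_j[1]
    return measured_range - math.hypot(dx, dy)

# Consistent pose estimates give a near-zero residual.
print(round(range_residual((0.0, 0.0), (3.0, 4.0), 5.0), 6))  # 0.0
```

A single range edge leaves the relative bearing unconstrained, which is why many such measurements collected along the trajectories are needed before the relative poses become well determined.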
|
|
11:15-11:30, Paper TuAT15.6 | |
>An Approach to Reduce Communication for Multi-Agent Mapping Applications |
|
Kepler, Michael | Virginia Polytechnic Institute and State University |
Stilwell, Daniel | Virginia Tech |
Keywords: Multi-Robot Systems, Mapping, Distributed Robot Systems
Abstract: In the context of a multi-agent system that uses a Gaussian process to estimate a spatial field of interest, we propose an approach that enables an agent to reduce the amount of data it shares with other agents. The main idea of the strategy is to rigorously assign a novelty metric to each measurement as it is collected, and only measurements that are sufficiently novel are communicated. We consider the ideal scenario where an agent can instantly share novel measurements, and we also consider the more practical scenario in which communication suffers from low bandwidth and is range-limited. For this scenario, an agent can only broadcast an informative subset of the novel measurements when the agent encounters other agents. We explore three different informative criteria for subset selection, namely entropy, mutual information, and a new criterion that reflects the value of a measurement. We apply our approach to three real-world datasets relevant to robotic mapping. The empirical findings show that an agent can reduce the amount of communicated measurements by two orders of magnitude and that the new criterion for subset selection yields superior predictive performance relative to entropy and mutual information.
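A rough sketch of the novelty-filtering loop: with an RBF kernel, Gaussian process predictive variance at a new input grows as the input moves away from the stored data, so thresholding the maximum kernel similarity is a simplified stand-in for the paper's novelty metric (the threshold and length scale are our choices):

```python
import math

def is_novel(x, stored, length_scale=1.0, thresh=0.5):
    """Simplified novelty proxy: a measurement location is novel when its
    RBF similarity to every stored location is low, i.e. the GP would be
    uncertain there. Redundant measurements are filtered out and never
    communicated."""
    if not stored:
        return True
    k_max = max(math.exp(-abs(x - s) ** 2 / (2 * length_scale ** 2))
                for s in stored)
    return (1.0 - k_max) > thresh

kept = []
for meas in [0.0, 0.1, 2.5, 2.6, 5.0]:  # 1-D measurement locations
    if is_novel(meas, kept):
        kept.append(meas)
print(kept)  # [0.0, 2.5, 5.0]
```

Measurements taken close to already-kept ones add little predictive value and are dropped, which is the mechanism behind the two-orders-of-magnitude communication reduction reported above.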
|
|
TuAT16 |
Room T16 |
Sensor Fusion for Localization and Mapping |
Regular session |
Chair: Weiss, Stephan | Universität Klagenfurt |
Co-Chair: Min, Byung-Cheol | Purdue University |
|
10:00-10:15, Paper TuAT16.1 | |
>Pi-Map: A Decision-Based Sensor Fusion with Global Optimization for Indoor Mapping |
> Video Attachment
|
|
Yang, Zhiliu | Clarkson University |
Yu, Bo | PerceptIn |
Hu, Wei | PerceptIn Inc |
Tang, Jie | South China University of Technology |
Liu, Shaoshan | PerceptIn |
Liu, Chen | Clarkson University |
Keywords: Mapping, Sensor Fusion
Abstract: In this paper, we propose pi-map, an affordable, reliable, and scalable indoor mapping system for autonomous robot navigation. First, we split the sensors between localization and mapping roles according to their precision. Only LiDAR range data is used for global pose estimation with loop closure. Both LiDAR and sonar are used for map registration in a Bayesian filter fashion. Then, a tightly-coupled decision-based sensor fusion is performed by trajectory revisiting and ray casting. A trajectory fitting mechanism is also introduced to handle the node density mismatch between different sensors. The whole system uses only economical off-the-shelf sensors for map construction. Our experimental results quantitatively demonstrate the effectiveness of the proposed method, which is able to produce high-quality maps in both small-scale and large-scale real-world environments.
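A Bayesian-filter map registration step of the kind described can be sketched with standard log-odds occupancy updates; the per-sensor hit probabilities below are illustrative, not values from the paper:

```python
import math

def logodds(p):
    return math.log(p / (1.0 - p))

def fuse_cell(prior=0.5, lidar_hits=(), sonar_hits=()):
    """Fuse LiDAR and sonar observations of one grid cell in log-odds
    form: each reading adds the log-odds of its sensor's inverse model,
    with LiDAR trusted more than the noisier sonar."""
    l = logodds(prior)
    for hit in lidar_hits:
        l += logodds(0.8 if hit else 0.3)   # LiDAR inverse model
    for hit in sonar_hits:
        l += logodds(0.6 if hit else 0.45)  # sonar inverse model
    return 1.0 / (1.0 + math.exp(-l))       # back to probability

p = fuse_cell(lidar_hits=[True, True], sonar_hits=[True])
print(p > 0.9)  # True
```

Working in log-odds makes the per-sensor updates additive, so readings arriving at different rates (LiDAR vs. sonar) can be fused into the same cell independently.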
|
|
10:15-10:30, Paper TuAT16.2 | |
>MOZARD: Multi-Modal Localization for Autonomous Vehicles in Urban Outdoor Environments |
> Video Attachment
|
|
Schaupp, Lukas | ETH Zurich |
Pfreundschuh, Patrick | ETH Zurich |
Bürki, Mathias | Autonomous Systems Lab, ETH Zuerich |
Cadena Lerma, Cesar | ETH Zurich |
Siegwart, Roland | ETH Zurich |
Nieto, Juan | ETH Zürich |
Keywords: Sensor Fusion, Mapping, Localization
Abstract: Visually poor scenarios are one of the main sources of failure for visual localization systems in outdoor environments. To address this challenge, we present MOZARD, a multi-modal localization system for urban outdoor environments using vision and LiDAR. By extending our preexisting keypoint-based visual multi-session localization approach with semantic data, an improved localization recall can be achieved across vastly different appearance conditions. In particular, we focus on the use of curbstones because of their broad distribution and reliability within urban environments. We present thorough experimental evaluations over several kilometers of driving in challenging urban outdoor environments, analyze the recall and accuracy of our localization system, and demonstrate in a case study possible failure cases of each subsystem. We show that MOZARD is able to bridge scenarios where our previous work, VIZARD, fails, yielding increased recall while achieving a similar localization accuracy of 0.2 m.
|
|
10:30-10:45, Paper TuAT16.3 | |
>Consistent Covariance Pre-Integration for Invariant Filters with Delayed Measurements |
|
Allak, Eren | Universität Klagenfurt |
Fornasier, Alessandro | University of Klagenfurt |
Weiss, Stephan | Universität Klagenfurt |
Keywords: Sensor Fusion, Localization, Autonomous Vehicle Navigation
Abstract: Sensor fusion systems merging (multiple) delayed sensor signals through a statistical approach are challenging setups, particularly for resource-constrained platforms. For statistical consistency, one would be required to keep an appropriate history, apply the correcting signal at the given time stamp in the past, and re-apply all information received until the present time. This re-calculation becomes impractical (the bottleneck being the re-propagation of the covariance matrices for estimator consistency) for platforms with multiple sensors/states and low compute power. This work presents a novel approach for consistent covariance pre-integration, allowing delayed sensor signals to be incorporated in a statistically consistent fashion with very low complexity. We leverage recent insights in Invariant Extended Kalman Filters (IEKF) and their log-linear, state-independent error propagation, together with insights from scattering theory, to mimic the re-calculation process as a medium through which we can propagate waves (covariance information in this case) in single operation steps. We support our findings in simulation and with real data.
|
|
10:45-11:00, Paper TuAT16.4 | |
>Synchronization of Microphones Based on Rank Minimization of Warped Spectrum for Asynchronous Distributed Recording |
|
Itoyama, Katsutoshi | Tokyo Institute of Technology |
Nakadai, Kazuhiro | Honda Research Inst. Japan Co., Ltd |
Keywords: Robot Audition, Sensor Fusion, Sensor Networks
Abstract: This paper describes a new method for synchronizing microphones based on spectral warping in an asynchronous microphone array. In an audio signal observed by an asynchronous microphone array, two factors are involved: the time lag caused by a mismatch of the sampling rate and offset between microphones, and the modulation caused by differences in the spatial transfer function between the sound source and each microphone. A spectrum warping matrix representing a resampling effect in the frequency domain is formulated, and an observation model of the audio (spectrum) mixture in an asynchronous microphone array is constructed. The proposed synchronization method uses an iterative optimization algorithm based on gradient descent of a new objective function, formulated as the logarithmic determinant of a spectrum correlation matrix derived from a relaxation of a rank minimization problem. Experimental results showed that the proposed method effectively estimates the modulated sampling rate and outperforms an existing synchronization method.
|
|
11:00-11:15, Paper TuAT16.5 | |
>Self-Supervised Neural Audio-Visual Sound Source Localization Via Probabilistic Spatial Modeling |
> Video Attachment
|
|
Masuyama, Yoshiki | Waseda University |
Bando, Yoshiaki | Kyoto University |
Yatabe, Kohei | Waseda University |
Sasaki, Yoko | National Inst. of Advanced Industrial Science and Technology |
Onishi, Masaki | National Inst. of AIST |
Oikawa, Yasuhiro | Waseda University |
Keywords: Robot Audition, Multi-Modal Perception, Sensor Fusion
Abstract: Detecting sound source objects within visual observations is important for autonomous robots to comprehend their surrounding environments. Since sounding objects in our living environments are highly varied, with different appearances, labeling all of them is impossible in practice. This calls for self-supervised learning, which does not require manual labeling. Most conventional self-supervised learning methods use monaural audio signals and images and cannot distinguish sound source objects with similar appearances due to the poor spatial information in the audio signals. To solve this problem, this paper presents a self-supervised training method using 360-degree images and multichannel audio signals. By incorporating the spatial information in multichannel audio signals, our method trains deep neural networks (DNNs) to distinguish multiple sound source objects. Our system for localizing sound source objects in the image is composed of audio and visual DNNs. The visual DNN is trained to localize sound source candidates within an input image, and the audio DNN verifies whether each candidate actually produces sound. These DNNs are jointly trained in a self-supervised manner based on a probabilistic spatial audio model. Experimental results with simulated data showed that the DNNs trained by our method localized multiple speakers. We also demonstrate that the visual DNN detected objects, including talking visitors and specific exhibits, in real data recorded in a science museum.
|
|
11:15-11:30, Paper TuAT16.6 | |
>Material Mapping in Unknown Environments Using Tapping Sound |
> Video Attachment
|
|
Kannan, Shyam Sundar | Purdue University |
Jo, Wonse | Purdue University |
Parasuraman, Ramviyas | University of Georgia |
Min, Byung-Cheol | Purdue University |
Keywords: Mapping, Multi-Modal Perception, Motion and Path Planning
Abstract: In this paper, we propose an autonomous exploration and tapping mechanism-based material mapping system for a mobile robot in unknown environments. The goal of the proposed system is to integrate simultaneous localization and mapping (SLAM) modules and sound-based material classification to enable a mobile robot to explore an unknown environment autonomously while identifying the various objects and materials in it. This creates a material map that localizes the various materials in the environment, which has potential applications for search-and-rescue scenarios. A tapping mechanism and tapping audio signal processing based on machine learning techniques are exploited for the robot to identify the objects and materials. We demonstrate the proposed system through experiments using a mobile robot platform equipped with a Velodyne LiDAR, a linear solenoid, and microphones in an exploration-like scenario with various materials. Experiment results demonstrate that the proposed system can create useful material maps in unknown environments.
|
|
TuAT17 |
Room T17 |
Cooperative SLAM |
Regular session |
Chair: Heckman, Christoffer | University of Colorado at Boulder |
Co-Chair: Kim, Jinwhan | KAIST |
|
10:00-10:15, Paper TuAT17.1 | |
>Dense Decentralized Multi-Robot SLAM Based on Locally Consistent TSDF Submaps |
> Video Attachment
|
|
Dubois, Rodolphe | ONERA |
Eudes, Alexandre | ONERA |
Moras, Julien | ONERA |
Fremont, Vincent | Ecole Centrale De Nantes, CNRS, LS2N, UMR 6004 |
Keywords: SLAM, Multi-Robot Systems
Abstract: This article introduces a decentralized multi-robot algorithm for Simultaneous Localization And Mapping (SLAM) inspired from the work of Duhautbout et al. (2019). This method makes each robot jointly build and exchange i) a collection of 3D dense locally consistent submaps, based on a Truncated Signed Distance Field (TSDF) representation of the environment, and ii) a pose-graph representation which encodes the relative pose constraints between the TSDF submaps and the trajectory keyframes, derived from the odometry, inter-robot observations and loop closures. Such loop closures are spotted by aligning and fusing the TSDF submaps. The performances of this method have been evaluated on the EuRoC dataset (Burri et al., 2016).
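The locally consistent TSDF submaps mentioned in the abstract are typically built with the standard weighted-average fusion rule of Curless and Levoy; a minimal per-voxel sketch, with the names and defaults chosen for illustration:

```python
def fuse_tsdf(voxel, new_dist, new_weight=1.0, max_weight=100.0):
    """One weighted-average TSDF update for a single voxel.

    voxel: dict with 'd' (truncated signed distance) and 'w' (weight).
    new_dist: truncated signed distance observed in the current depth frame.
    The weight is capped so the map stays responsive to new observations.
    """
    w = voxel['w']
    voxel['d'] = (w * voxel['d'] + new_weight * new_dist) / (w + new_weight)
    voxel['w'] = min(w + new_weight, max_weight)
    return voxel
```

Each incoming depth frame applies this update to every voxel it intersects; the zero crossing of the fused distances gives the reconstructed surface.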
|
|
10:15-10:30, Paper TuAT17.2 | |
>A Decentralized Framework for Simultaneous Calibration, Localization and Mapping with Multiple LiDARs |
> Video Attachment
|
|
Lin, Jiarong | The University of Hong Kong |
Liu, Xiyuan | The University of Hong Kong |
Zhang, Fu | University of Hong Kong |
Keywords: SLAM, Sensor Fusion, Calibration and Identification
Abstract: LiDAR is playing an increasingly essential role in autonomous driving vehicles for object detection, self-localization and mapping. A single LiDAR frequently suffers from hardware failure (e.g., temporary loss of connection) due to the harsh vehicle environment (e.g., temperature, vibration, etc.), or from performance degradation due to a lack of sufficient geometric features, especially for solid-state LiDARs with a small field of view (FoV). To improve system robustness and performance in self-localization and mapping, we develop a decentralized framework for simultaneous calibration, localization and mapping with multiple LiDARs. Our proposed framework is based on an extended Kalman filter (EKF), but is specially formulated for decentralized implementation. Such an implementation could potentially distribute the intensive computation among smaller computing devices or resources dedicated to each LiDAR and remove the single-point-of-failure problem. This decentralized formulation is implemented on an unmanned ground vehicle (UGV) carrying 5 low-cost LiDARs and moving at 1.36 m/s in urban environments. Experiment results show that the proposed method can successfully and simultaneously estimate the vehicle state (i.e., pose and velocity) and all LiDAR extrinsic parameters. The localization accuracy is up to 0.2% on the two datasets we collected. To share our findings, contribute to the community, and enable readers to verify our work, we will release all our source code and hardware design blueprints on GitHub.
|
|
10:30-10:45, Paper TuAT17.3 | |
>Better Together: Online Probabilistic Clique Change Detection in 3D Landmark-Based Maps |
|
Bateman, Samuel | University of Colorado - Boulder |
Harlow, Kyle | University of Colorado Boulder |
Heckman, Christoffer | University of Colorado at Boulder |
Keywords: SLAM, Probability and Statistical Methods, Mapping
Abstract: Many modern simultaneous localization and mapping (SLAM) techniques rely on sparse landmark-based maps due to their real-time performance. However, these techniques frequently assert that these landmarks are fixed in position over time, known as the "static-world assumption." This is rarely, if ever, the case in most real-world environments. Even worse, over long deployments, robots are bound to observe traditionally static landmarks change, e.g., when an autonomous vehicle encounters a construction zone. This work addresses this challenge, accounting for changes in complex three-dimensional environments by creating a probabilistic filter that operates on the features that give rise to landmarks. To accomplish this, landmarks are clustered into cliques and a filter is developed to estimate their persistence jointly among observations of the landmarks in a clique. This filter uses estimated spatial-temporal priors of geometric objects, allowing dynamic and semi-static objects to be removed from a formerly static map. The proposed algorithm is validated in a 3D simulated environment.
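The persistence estimation described above can be illustrated, for a single landmark, by a recursive Bayesian persistence update; a simplified sketch in which the survival prior, detection model, and function name are all assumptions (the paper estimates persistence jointly over landmark cliques with spatial-temporal priors):

```python
import math

def update_persistence(p_prev, dt, lam=0.01, p_detect=0.9, observed=True):
    """One Bayesian persistence update for a landmark (illustrative).

    p_prev:   prior probability the landmark still persists.
    lam:      assumed exponential decay rate of persistence over time dt.
    p_detect: probability of re-observing a landmark that persists.
    observed: whether the landmark was re-detected on this pass.
    """
    # Survival prior: persisting landmarks decay exponentially over time.
    p_pred = p_prev * math.exp(-lam * dt)
    if observed:
        # A detection can only come from a persisting landmark
        # (assuming a negligible false-positive rate).
        return 1.0
    # Missed detection: either the landmark vanished, or it persists
    # but was missed with probability (1 - p_detect).
    num = (1.0 - p_detect) * p_pred
    return num / (num + (1.0 - p_pred))
```

Landmarks whose persistence falls below a threshold would then be pruned from the map.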
|
|
10:45-11:00, Paper TuAT17.4 | |
>Robust Loop Closure Method for Multi-Robot Map Fusion by Integration of Consistency and Data Similarity |
|
Do, Haggi | KAIST |
Hong, Seonghun | Keimyung University |
Kim, Jinwhan | KAIST |
Keywords: SLAM, Multi-Robot Systems
Abstract: For efficient collaboration of a multi-robot system during missions, it is essential for the system to create a global map and localize the robots in it. However, the relative poses among robots may be unknown, preventing the system from generating the reference map. In such cases, the necessary information must be inferred through inter-robot loop closures, which are mainly perception-derived measurements obtained when robots observe the same place. However, as perception-derived measurements rely on the similarity of sensor data, different places can be wrongly identified as the same location if they exhibit similar appearances. This phenomenon, called perceptual aliasing, produces inaccurate loop closures that can severely distort the global map. This study presents a robust inter-robot loop closure selection method for map fusion that utilizes the degrees of both consistency and data similarity of the loop closures for accurate measurement determination. We define the combination of this information as the measurement pair score and employ it as weights in the objective function of a combinatorial optimization problem that can be solved as a maximum edge-weight clique problem from graph theory. The algorithm is tested on an experimental dataset for performance evaluation and the result is discussed in comparison to a state-of-the-art method.
|
|
11:00-11:15, Paper TuAT17.5 | |
>Real-Time Multi-SLAM System for Agent Localization and 3D Mapping in Dynamic Scenarios |
> Video Attachment
|
|
Ireta Muñoz, Fernando Israel | INRIA |
Roussel, David | IBISC, UEVE, Université Paris Saclay |
Alliez, Pierre | INRIA Sophia-Antipolis |
Bonardi, Fabien | Université De Rouen |
Bouchafa, Samia | Univ d'Evry Val d'Essonne/Université Paris Saclay |
Didier, Jean-Yves | Université D'Evry |
Kachurka, Viachaslau | Universite Paris Saclay, Univ Evry |
Rault, Bastien | Innodura TB |
Hadj-Abdelkader, Hicham | IBISC |
Robin, Maxime | Innodura TB |
Keywords: Sensor Fusion, SLAM, Agent-Based Systems
Abstract: This paper introduces a wearable SLAM system that performs indoor and outdoor SLAM in real time. The related project is part of the MALIN challenge, which aims at creating a system to track emergency response agents in complex scenarios (such as dark environments, smoke-filled rooms, repetitive patterns, building floor transitions and doorway crossings), where GPS technology is insufficient or inoperative. The proposed system fuses different SLAM technologies to compensate for the lack of robustness of each, while estimating the pose individually. LiDAR and visual SLAM are fused with an inertial sensor in such a way that the system is able to maintain GPS coordinates that are sent via radio to a ground station for real-time tracking. More specifically, LiDAR and monocular vision technologies are tested in dynamic scenarios, where the main advantages of each have been evaluated and compared. Finally, 3D reconstruction up to three levels of detail is performed.
|
|
11:15-11:30, Paper TuAT17.6 | |
>Asynchronous and Parallel Distributed Pose Graph Optimization |
> Video Attachment
|
|
Tian, Yulun | Massachusetts Institute of Technology |
Koppel, Alec | University of Pennsylvania |
Bedi, Amrit Singh | US Army Research Lab |
How, Jonathan Patrick | Massachusetts Institute of Technology |
Keywords: SLAM, Distributed Robot Systems, Multi-Robot Systems
Abstract: We present Asynchronous Stochastic Parallel Pose Graph Optimization (ASAPP), the first asynchronous algorithm for distributed pose graph optimization (PGO) in multi-robot simultaneous localization and mapping. By enabling robots to optimize their local trajectory estimates without synchronization, ASAPP offers resiliency against communication delays and alleviates the need to wait for stragglers in the network. Furthermore, ASAPP can be applied to the rank-restricted relaxations of PGO, a crucial class of non-convex Riemannian optimization problems that underlies recent breakthroughs on globally optimal PGO. Under bounded delay, we establish the global first-order convergence of ASAPP using a sufficiently small stepsize. The derived stepsize depends on the worst-case delay and inherent problem sparsity, and matches known results for synchronous algorithms when there is no delay. Numerical evaluations on simulated and real-world datasets demonstrate favorable performance compared to state-of-the-art synchronous approaches, and show ASAPP's resilience against a wide range of delays in practice.
|
|
TuAT18 |
Room T18 |
Visual SLAM I |
Regular session |
Chair: Pradalier, Cedric | GeorgiaTech Lorraine |
Co-Chair: Scherer, Sebastian | Carnegie Mellon University |
|
10:00-10:15, Paper TuAT18.1 | |
>TartanAir: A Dataset to Push the Limits of Visual SLAM |
> Video Attachment
|
|
Wang, Wenshan | Carnegie Mellon University |
Zhu, Delong | The Chinese University of Hong Kong |
Wang, Xiangwei | Tongji University |
Hu, Yaoyu | Carnegie Mellon University |
Qiu, Yuheng | Carnegie Mellon University |
Wang, Chen | Carnegie Mellon University |
Hu, Yafei | Carnegie Mellon University |
Kapoor, Ashish | MicroSoft |
Scherer, Sebastian | Carnegie Mellon University |
Keywords: SLAM, Visual Learning, Localization
Abstract: We present a challenging dataset, the TartanAir, for robot navigation tasks and more. The data is collected in photo-realistic simulation environments with the presence of moving objects, changing light and various weather conditions. By collecting data in simulation, we are able to obtain multi-modal sensor data and precise ground truth labels such as stereo RGB images, depth images, segmentation, optical flow, camera poses, and LiDAR point clouds. We set up large numbers of environments with various styles and scenes, covering challenging viewpoints and diverse motion patterns that are difficult to achieve with physical data collection platforms. To enable data collection at such a large scale, we develop an automatic pipeline, including mapping, trajectory sampling, data processing, and data verification. We evaluate the impact of various factors on visual SLAM algorithms using our data. The results of state-of-the-art algorithms reveal that the visual SLAM problem is far from solved. Methods that show good performance on established datasets such as KITTI do not perform well in more difficult scenarios. Although we use simulation, our goal is to push the limits of visual SLAM algorithms in the real world by offering a challenging benchmark for testing new methods, as well as large, diverse training data for learning-based methods. Our dataset is available at http://theairlab.org/tartanair-dataset.
|
|
10:15-10:30, Paper TuAT18.2 | |
>From Points to Planes - Adding Planar Constraints to Monocular SLAM Factor Graphs |
> Video Attachment
|
|
Arndt, Charlotte | Robert Bosch GmbH, Corporate Sector Research and Advance Enginee |
Sabzevari, Reza | Robert Bosch GmbH, Corporate Sector Research and Advance Enginee |
Civera, Javier | Universidad De Zaragoza |
Keywords: SLAM, Mapping
Abstract: Planar structures are common in man-made environments. Their addition to monocular SLAM algorithms is of relevance in order to achieve more complete and higher-level scene representations. Also, the additional constraints they introduce can reduce estimation errors in certain situations. In this paper we present a novel formulation to incorporate plane landmarks and planar constraints into feature-based monocular SLAM. Specifically, we enforce in-plane points to lie exactly in the plane they belong to, propagating such information to the rest of the states. Our formulation, differently from the state of the art, allows us to incorporate general planes, independently of depth information or CNN segmentation being available (although we could also use them). We evaluate our method on several sequences of public databases, showing accurate plane estimates and pose accuracy on par with state-of-the-art point-only monocular SLAM.
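The in-plane constraint described above amounts to driving a point-to-plane residual to zero in the factor graph; a minimal sketch of that residual, with a parametrization that is an illustrative assumption rather than the paper's exact formulation:

```python
import numpy as np

def point_on_plane_residual(n, d, p):
    """Signed distance of a 3D point p to the plane n.x + d = 0.

    n is the plane normal (normalized here for safety), d the offset.
    In a factor graph, this scalar residual is driven to zero for every
    point assigned to the plane, coupling point and plane estimates.
    """
    n = n / np.linalg.norm(n)
    return float(n @ p + d)
```

Enforcing the residual exactly (rather than as a soft penalty) is what makes the in-plane points lie precisely in their plane.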
|
|
10:30-10:45, Paper TuAT18.3 | |
>Robust Monocular Edge Visual Odometry through Coarse-To-Fine Data Association |
|
Wu, Xiaolong | Georgia Institute of Technology |
Vela, Patricio | Georgia Institute of Technology |
Pradalier, Cedric | GeorgiaTech Lorraine |
Keywords: SLAM, Localization, Mapping
Abstract: This work describes a monocular visual odometry framework, which exploits the best attributes of edge features for illumination-robust camera tracking, while at the same time ameliorating the performance degradation of edge mapping. In the front-end, an ICP-based edge registration provides robust motion estimation and coarse data association under lighting changes. In the back-end, a novel edge-guided data association pipeline searches for the best photometrically matched points along geometrically possible edges through template matching, so that the matches can be further refined in later bundle adjustment. The core of our proposed data association strategy lies in a point-to-edge geometric uncertainty analysis, which analytically derives (1) a probabilistic search length formula that significantly reduces the search space and (2) a geometric confidence metric for mapping degradation detection based on the predicted depth uncertainty. Moreover, a match confidence based patch size adaption strategy is integrated into our pipeline to reduce matching ambiguity. We present extensive analysis and evaluation of our proposed system on synthetic and real-world benchmark datasets under the influence of illumination changes and large camera motions, where our proposed system outperforms current state-of-art algorithms.
|
|
10:45-11:00, Paper TuAT18.4 | |
>SaD-SLAM: A Visual SLAM Based on Semantic and Depth Information |
|
Yuan, Xun | University of Science and Technology of China |
Chen, Song | University of Science and Technology of China |
Keywords: SLAM
Abstract: Simultaneous Localization and Mapping (SLAM) is considered significant for intelligent mobile robot autonomous pathfinding. Over the past years, many successful SLAM systems have been developed and work satisfactorily in static environments. However, in some dynamic scenes containing moving objects, the camera pose estimation error would be unacceptable, or the systems even lose their locations. In this paper, we present SaD-SLAM, a visual SLAM system that, building on ORB-SLAM2, achieves excellent performance in dynamic environments. With the help of semantic and depth information, we find feature points that belong to movable objects and detect whether those feature points are currently static. To make the system perform accurately and robustly in dynamic scenes, we use both feature points extracted from static objects and static feature points derived from movable objects to fine-tune the camera pose estimation. We evaluate our algorithm on the TUM RGB-D datasets. The results demonstrate that the absolute trajectory accuracy of SaD-SLAM can be improved significantly compared with the original ORB-SLAM2. We also compare our algorithm with DynaSLAM and DS-SLAM, which are designed for dynamic scenes.
|
|
11:00-11:15, Paper TuAT18.5 | |
>Exploit Semantic and Public Prior Information in MonoSLAM |
|
Ye, Chenxi | University College London |
Wang, Yiduo | University of Oxford |
Lu, Ziwen | University College London |
Gilitschenski, Igor | Massachusetts Institute of Technology |
Parsley, Martin Peter | University College London |
Julier, Simon | University College London |
Keywords: SLAM, Semantic Scene Understanding, Visual-Based Navigation
Abstract: In this paper, we propose a method to use semantic information to improve the use of map priors in a sparse, feature-based MonoSLAM system. To incorporate the priors, the features in the prior and SLAM maps must be associated with one another. Most existing systems build a map using SLAM and then align it with the prior map. However, this approach assumes that the local map is accurate, and the majority of the features within it can be constrained by the prior. We use the intuition that many prior maps are created to provide semantic information. Therefore, valid associations only exist if the features in the SLAM map arise from the same kind of semantic object as the prior map. Using this intuition, we extend ORB-SLAM2 using an open source pre-trained semantic segmentation network (DeepLabV3+) to incorporate prior information from Open Street Map building footprint data. We show that the amount of drift, before loop closing, is significantly smaller than that for original ORB-SLAM2. Furthermore, we show that when ORB-SLAM2 is used as a prior-aided visual odometry system, the tracking accuracy is equal to or better than the full ORB-SLAM2 system without the need for global mapping or loop closure.
|
|
11:15-11:30, Paper TuAT18.6 | |
>Dual-SLAM: A Framework for Robust Single Camera Navigation |
> Video Attachment
|
|
Huang, Huajian | The Hong Kong University of Science and Technology |
Lin, Wen-Yan | Singapore Management University |
Liu, Siying | Institute for Infocomm Research, Singapore |
Zhang, Dong | Sun Yat-Sen University |
Yeung, Sai-Kit | Hong Kong University of Science and Technology |
Keywords: SLAM, Autonomous Vehicle Navigation
Abstract: SLAM (Simultaneous Localization And Mapping) seeks to provide a moving agent with real-time self-localization. To achieve real-time speed, SLAM incrementally propagates position estimates. This makes SLAM fast but also vulnerable to local pose estimation failures. As local pose estimation is ill-conditioned, local pose estimation failures happen regularly, making the overall SLAM system brittle. This paper attempts to correct this problem. We note that while local pose estimation is ill-conditioned, pose estimation over longer sequences is well-conditioned. Thus, local pose estimation errors eventually manifest themselves as mapping inconsistencies. When this occurs, we save the current map and activate two new SLAM threads. One processes incoming frames to create a new map and the other, a recovery thread, backtracks to link the new and old maps together. This creates a Dual-SLAM framework that maintains real-time performance while being robust to local pose estimation failures. Evaluation on benchmark datasets shows Dual-SLAM can reduce failures by a dramatic 88%.
|
|
TuAT19 |
Room T19 |
Visual SLAM II |
Regular session |
Chair: Tombari, Federico | Technische Universität München |
Co-Chair: Kerr, Dermot | University of Ulster |
|
10:00-10:15, Paper TuAT19.1 | |
>Deep Keypoint-Based Camera Pose Estimation with Geometric Constraints |
> Video Attachment
|
|
Jau, You-Yi | University of California San Diego |
Zhu, Rui | University of California San Diego |
Su, Hao | UCSD |
Chandraker, Manmohan | University of California, San Diego |
Keywords: SLAM, Deep Learning for Visual Perception, Localization
Abstract: Estimating relative camera poses from consecutive frames is a fundamental problem in visual odometry (VO) and simultaneous localization and mapping (SLAM), where classic methods consisting of hand-crafted features and sampling-based outlier rejection have been a dominant choice for over a decade. Although multiple works propose to replace these modules with learning-based counterparts, most have not yet been as accurate, robust and generalizable as conventional methods. In this paper, we design an end-to-end trainable framework consisting of learnable modules for detection, feature extraction, matching and outlier rejection, while directly optimizing for the geometric pose objective. We show both quantitatively and qualitatively that pose estimation performance may be achieved on par with the classic pipeline. Moreover, we are able to show by end-to-end training, the key components of the pipeline could be significantly improved, which leads to better generalizability to unseen datasets compared to existing learning-based methods.
|
|
10:15-10:30, Paper TuAT19.2 | |
>DXSLAM: A Robust and Efficient Visual SLAM System with Deep Features |
> Video Attachment
|
|
Li, Dongjiang | Beijing Jiaotong University |
Shi, Xuesong | Intel |
Long, Qiwei | Beijingjiaotong University |
Liu, Shenghui | Intel Corporation |
Yang, Wei | Beijing Jiaotong University, School of Electronic and Information |
Wang, Fangshi | Beijing Jiaotong University |
Wei, Qi | Tsinghua University |
Qiao, Fei | Tsinghua University |
Keywords: SLAM, Localization
Abstract: A robust and efficient Simultaneous Localization and Mapping (SLAM) system is essential for robot autonomy. For visual SLAM algorithms, though the theoretical framework has been well established for most aspects, feature extraction and association is still empirically designed in most cases, and can be vulnerable in complex environments. This paper shows that feature extraction with deep convolutional neural networks (CNNs) can be seamlessly incorporated into a modern SLAM framework. The proposed SLAM system utilizes a state-of-the-art CNN to detect keypoints in each image frame, and to give not only keypoint descriptors, but also a global descriptor of the whole image. These local and global features are then used by different SLAM modules, resulting in much more robustness against environmental changes and viewpoint changes compared with using hand-crafted features. We also train a visual vocabulary of local features with a Bag of Words (BoW) method. Based on the local features, global features, and the vocabulary, a highly reliable loop closure detection method is built. Experimental results show that all the proposed modules significantly outperform the baseline, and the full system achieves much lower trajectory errors and much higher correct rates on all evaluated data. Furthermore, by optimizing the CNN with the Intel OpenVINO toolkit and utilizing the FBoW library, the system benefits greatly from the SIMD (single-instruction-multiple-data) capabilities of modern CPUs. The full system can run in real time without any GPU or other accelerators. The code is publicly available at https://github.com/ivipsourcecode/dxslam.
|
|
10:30-10:45, Paper TuAT19.3 | |
>EAO-SLAM: Monocular Semi-Dense Object SLAM Based on Ensemble Data Association |
> Video Attachment
|
|
Wu, Yanmin | Northeastern University |
Zhang, Yunzhou | Northeastern University |
Zhu, Delong | The Chinese University of Hong Kong |
Feng, Yonghui | Northeastern University |
Coleman, Sonya | University of Ulster |
Kerr, Dermot | University of Ulster |
Keywords: SLAM, Computer Vision for Automation, Perception for Grasping and Manipulation
Abstract: Object-level data association and pose estimation play a fundamental role in semantic SLAM, and remain unsolved due to the lack of robust and accurate algorithms. In this work, we propose an ensemble data association strategy for integrating parametric and nonparametric statistical tests. By exploiting the nature of different statistics, our method can effectively aggregate the information of different measurements, and thus significantly improve the robustness and accuracy of data association. We then present an accurate object pose estimation framework, in which an outlier-robust centroid and scale estimation algorithm and an object pose initialization algorithm are developed to help improve the optimality of pose estimation results. Furthermore, we build a SLAM system that can generate semi-dense or lightweight object-oriented maps with a monocular camera. Extensive experiments are conducted on three publicly available datasets and a real scenario. The results show that our approach significantly outperforms state-of-the-art techniques in accuracy and robustness. The source code is available at https://github.com/yanmin-wu/EAO-SLAM.
|
|
10:45-11:00, Paper TuAT19.4 | |
>Dynamic Object Tracking and Masking for Visual SLAM |
> Video Attachment
|
|
Vincent, Jonathan | Université De Sherbrooke |
Labbé, Mathieu | Université De Sherbrooke |
Lauzon, Jean-Samuel | Université De Sherbrooke |
Grondin, Francois | Massachusetts Institute of Technology |
Comtois-Rivet, Pier-Marc | Institut Du Vehicule Innovant |
Michaud, Francois | Universite De Sherbrooke |
Keywords: SLAM, Mapping
Abstract: In dynamic environments, performance of visual SLAM techniques can be impaired by visual features taken from moving objects. One solution is to identify those objects so that their visual features can be removed for localization and mapping. This paper presents a simple and fast pipeline that uses deep neural networks, extended Kalman filters and visual SLAM to improve both localization and mapping in dynamic environments (around 14 fps on a GTX 1080). Results on the dynamic sequences from the TUM dataset, using RTAB-Map for visual SLAM, suggest that the approach achieves localization performance similar to other state-of-the-art methods while also providing the positions of the tracked dynamic objects, a 3D map free of those objects, and better loop closure detection, with the whole pipeline able to run on a robot moving at moderate speed.
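The combination of deep detection and Kalman filtering for object tracking can be illustrated with a minimal filter. The sketch below is a plain 2D constant-velocity Kalman filter (with a linear model, the extended Kalman filter reduces to this form); the state layout and noise values are our assumptions, not the paper's.

```python
import numpy as np

class ConstantVelocityEKF:
    """Minimal 2D constant-velocity Kalman filter for one tracked object.

    State is [x, y, vx, vy]; the paper's actual state and noise models are not
    specified in the abstract, so these choices are purely illustrative.
    """
    def __init__(self, q=1e-2, r=1e-1):
        self.x = np.zeros(4)
        self.P = np.eye(4)
        self.Q = q * np.eye(4)                  # process noise covariance
        self.R = r * np.eye(2)                  # measurement noise covariance
        self.H = np.array([[1., 0., 0., 0.],
                           [0., 1., 0., 0.]])   # observe position only

    def predict(self, dt):
        F = np.eye(4)
        F[0, 2] = F[1, 3] = dt                  # position += velocity * dt
        self.x = F @ self.x
        self.P = F @ self.P @ F.T + self.Q

    def update(self, z):
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P
```

Each detection from the neural network (an object centroid, say) would feed `update`, while `predict` bridges the frames between detections.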
|
|
11:00-11:15, Paper TuAT19.5 | |
>Structure-SLAM: Low-Drift Monocular SLAM in Indoor Environments |
|
Li, Yanyan | Technical University of Munich |
Brasch, Nikolas | Technical University of Munich |
Wang, Yida | Technical University of Munich |
Navab, Nassir | TU Munich |
Tombari, Federico | Technische Universität München |
Keywords: SLAM, Visual Tracking
Abstract: In this paper a low-drift monocular SLAM method is proposed targeting indoor scenarios, where monocular SLAM often fails due to the lack of textured surfaces. Our approach decouples rotation and translation estimation of the tracking process to reduce the long-term drift in indoor environments. In order to take full advantage of the available geometric information in the scene, surface normals are predicted by a convolutional neural network from each input RGB image in real-time. First, a drift-free rotation is estimated based on lines and surface normals using spherical mean-shift clustering, leveraging the weak Manhattan World assumption. Then translation is computed from point and line features. Finally, the estimated poses are refined with a map-to-frame optimization strategy. The proposed method outperforms the state of the art on common SLAM benchmarks such as ICL-NUIM and TUM RGB-D.
|
|
11:15-11:30, Paper TuAT19.6 | |
>Comparing Visual Odometry Systems in Actively Deforming Simulated Colon Environments |
> Video Attachment
|
|
Fulton, Mitchell | University of Colorado at Boulder |
Prendergast, Joseph Micah | University of Colorado at Boulder |
DiTommaso, Emily Rose | University of Colorado Boulder |
Rentschler, Mark | University of Colorado at Boulder |
Keywords: SLAM, Localization, Computer Vision for Medical Robotics
Abstract: This paper presents a new open-source dataset with ground truth position in a simulated colon environment to promote development of real-time feedback systems for physicians performing colonoscopies. Four systems (DSO, LSD-SLAM, SfMLearner, ORB-SLAM2) are tested on this dataset and their failures are analyzed. A data collection platform was fabricated and used to take the dataset in a colonoscopy training simulator that was affixed to a flat surface. The noise in the ground truth positional data induced by the metal in the data collection platform was then characterized and corrected. The Absolute Trajectory Error (ATE) RMSE and Relative Error (RE) metrics were computed for each of the sequences in the dataset for each of the Simultaneous Localization And Mapping (SLAM) systems. While these systems all had good performance in idealized conditions, more realistic conditions in the harder sequences caused them to produce poor results or fail completely. These failures would hinder physicians in a real-world scenario, so future systems made for this environment must be more robust to the difficulties found in the colon, even at the expense of trajectory accuracy. The authors believe that this is the first open-source dataset with ground truth data displaying a simulated in vivo environment with active deformation, and that this is the first step toward achieving useful SLAM within the colon. The dataset is available at www.colorado.edu/lab/amtl/datasets.
|
|
TuAT20 |
Room T20 |
Visual SLAM III |
Regular session |
Chair: Ila, Viorela | The University of Sydney |
Co-Chair: Indelman, Vadim | Technion - Israel Institute of Technology |
|
10:00-10:15, Paper TuAT20.1 | |
>Speed and Memory Efficient Dense RGB-D SLAM in Dynamic Scenes |
> Video Attachment
|
|
Canovas, Bruce | GIPSA-Lab |
Rombaut, Michele | Universite Grenoble Alpes |
Pellerin, Denis | Univ. Grenoble Alpes, CNRS, Grenoble INP, GIPSA-Lab |
Negre, Amaury | Cnrs Gipsa Lab |
Keywords: SLAM, Mapping, RGB-D Perception
Abstract: Real-time dense 3D localization and mapping systems are required to enable robotics platforms to interact in and with their environments. Several solutions have used surfel representations to model the world. While they produce impressive results, they require heavy and costly hardware to operate properly. Many of them are also limited to static environments and small inter-frame motions. Whereas most of the state of the art approaches focus on the accuracy of the reconstruction, we assume that many robotics applications do not require a high resolution level in the rebuilt surface and can benefit from a less accurate but less expensive map, so as to gain in run-time and memory efficiency. In this paper we propose a fast RGB-D SLAM articulated around a rough and lightweight 3D representation for dense compact mapping in dynamic indoor environment, targeting mainstream computing platforms. A simple and fast formulation to detect and filter out dynamic elements is also presented. We show the robustness of our system, its low memory requirement and the good performance it enables.
|
|
10:15-10:30, Paper TuAT20.2 | |
>DUI-VIO: Depth Uncertainty Incorporated Visual Inertial Odometry Based on an RGB-D Camera |
> Video Attachment
|
|
Zhang, He | Virginia Commonwealth University |
Ye, Cang | Virginia Commonwealth University |
Keywords: SLAM, Service Robotics, RGB-D Perception
Abstract: This paper presents a new RGB-D-camera-based visual-inertial odometry (VIO), termed DUI-VIO, for estimating the motion state of the camera. First, a Gaussian mixture model (GMM) is employed to model the uncertainty of the depth data for each pixel on the camera’s color image. Second, the uncertainties are incorporated into the VIO’s initialization and optimization processes to make the state estimate more accurate. To perform the initialization process, we propose a hybrid perspective-n-point (PnP) method to compute the pose change between two camera frames and use the result to triangulate the depth for an initial set of visual features whose depth values are unavailable from the camera. Hybrid-PnP first uses a 2D-2D scheme to compute rotation, so that more visual features may be used to obtain a more accurate rotation estimate. It then uses a 3D-2D scheme to compute translation by taking into account the uncertainties of the depth data, resulting in a more accurate translation estimate. The more accurate pose change estimated by Hybrid-PnP helps improve the initialization result and thus the VIO performance in state estimation. In addition, Hybrid-PnP makes it possible to compute the pose change using a small number of features with known depth, improving the reliability of the initialization process. Finally, DUI-VIO incorporates the uncertainties of the inverse depth measurements into the nonlinear optimization process, leading to a reduced state estimation error. Experimental results validate that the proposed DUI-VIO method outperforms state-of-the-art VIO methods in terms of accuracy and reliability.
|
|
10:30-10:45, Paper TuAT20.3 | |
>Probabilistic Qualitative Localization and Mapping |
|
Mor, Roee | Technion - Israel Institute of Technology |
Indelman, Vadim | Technion - Israel Institute of Technology |
Keywords: Autonomous Vehicle Navigation, Mapping, SLAM
Abstract: Simultaneous localization and mapping (SLAM) is essential in numerous robotics applications such as autonomous navigation. Traditional SLAM approaches infer the metric state of the robot along with a metric map of the environment. While existing algorithms exhibit good results, they are still sensitive to measurement noise, sensor quality, and data association, and are still computationally expensive. Alternatively, we note that some navigation and mapping missions can be achieved using only qualitative geometric information, an approach known as qualitative spatial reasoning (QSR). In this work we contribute a novel probabilistic qualitative localization and mapping approach, which extends the state of the art by also inferring the qualitative state of the camera poses (localization), as well as incorporating probabilistic connections between views (in time and in space). Our method is particularly appealing in scenarios with a small number of salient landmarks and sparse landmark tracks. We evaluate our approach in simulation and on a real-world dataset, and show its superior performance and low complexity compared to the state of the art.
|
|
10:45-11:00, Paper TuAT20.4 | |
>Robust Ego and Object 6-DoF Motion Estimation and Tracking |
> Video Attachment
|
|
Zhang, Jun | Australian National University |
Henein, Mina | Australian National University |
Mahony, Robert | Australian National University |
Ila, Viorela | The University of Sydney |
Keywords: SLAM, RGB-D Perception, Visual Tracking
Abstract: The problem of tracking self-motion as well as motion of objects in the scene using information from a camera is known as multi-body visual odometry and is a challenging task. This paper proposes a robust solution to achieve accurate estimation and consistent trackability for dynamic multi-body visual odometry. A compact and effective framework is proposed leveraging recent advances in semantic instance-level segmentation and accurate optical flow estimation. A novel formulation, jointly optimizing SE(3) motion and optical flow, is introduced that improves the quality of the tracked points and the motion estimation accuracy. The proposed approach is evaluated on the virtual KITTI dataset and tested on the real KITTI dataset, demonstrating its applicability to autonomous driving applications. For the benefit of the community, we make the source code public.
|
|
11:00-11:15, Paper TuAT20.5 | |
>SeqSphereVLAD: Sequence Matching Enhanced Orientation-Invariant Place Recognition |
> Video Attachment
|
|
Yin, Peng | Carnegie Mellon University |
Wang, Fuying | Tsinghua University |
Egorov, Anton | Skolkovo Institute of Science and Technology |
Hou, Jiafan | The Chinese University of Hong Kong, Shenzhen |
Zhang, Ji | Carnegie Mellon University |
Choset, Howie | Carnegie Mellon University |
Keywords: SLAM, Mapping, Recognition
Abstract: Human beings and animals are capable of recognizing places from a previous journey when viewing them under different environmental conditions (e.g., illumination and weather). This paper seeks to provide robots with a human-like place recognition ability using a new point cloud feature learning method. This is a challenging problem due to the difficulty of extracting invariant local descriptors from the same place under various orientation differences and dynamic obstacles. In this paper, we propose a novel lightweight 3D place recognition method, SeqSphereVLAD, which is capable of recognizing places from a previous trajectory regardless of the viewpoint and temporary observation differences. The major contributions of our method lie in two modules: (1) the spherical convolution feature extraction module, which produces orientation-invariant local place descriptors, and (2) the coarse-to-fine sequence matching module, which ensures both accurate loop-closure detection and real-time performance. Despite its apparent simplicity, our proposed approach outperforms the state of the art for place recognition on datasets that combine orientation and context differences. Compared with these methods, ours achieves above 95% average recall for the best match with only 18% of the inference time of PointNet-based place recognition methods.
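The sequence matching idea can be sketched simply: score each candidate start index in the database by the summed descriptor distance over an aligned sub-sequence and keep the best. The sketch below is a simplified, SeqSLAM-style illustration under that assumption, not the paper's coarse-to-fine module.

```python
import numpy as np

def best_sequence_match(query_seq, db_descs, seq_len=5):
    """Score each database start index by the summed descriptor distance
    over an aligned sub-sequence and return the best-matching start.

    `query_seq` is a (seq_len, D) array of consecutive query descriptors and
    `db_descs` is an (N, D) array of database descriptors. All names are
    illustrative.
    """
    n = len(db_descs) - seq_len + 1
    scores = np.array([
        np.linalg.norm(db_descs[i:i + seq_len] - query_seq, axis=1).sum()
        for i in range(n)
    ])
    return int(np.argmin(scores)), float(scores.min())
```

Matching over a sequence rather than a single frame suppresses spurious single-frame matches, at the cost of a short alignment window.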
|
|
11:15-11:30, Paper TuAT20.6 | |
>Online Visual Place Recognition Via Saliency Re-Identification |
> Video Attachment
|
|
Wang, Han | Nanyang Technological University |
Wang, Chen | Carnegie Mellon University |
Xie, Lihua | Nanyang Technological University |
Keywords: SLAM, Computer Vision for Other Robotic Applications, Recognition
Abstract: As an essential component of visual simultaneous localization and mapping (SLAM), place recognition is crucial for robot navigation and autonomous driving. Existing methods often formulate visual place recognition as feature matching, which is computationally expensive for many robotic applications with limited computing power, e.g., autonomous driving. Inspired by the fact that human beings always recognize a place by remembering salient regions or objects that are more attractive or interesting than others, we formulate visual place recognition as saliency re-identification, which is natural and straightforward. In order to reduce computational cost, we propose to perform both saliency detection and re-identification in frequency domain, in which all operations become element-wise. The experiments show that our proposed method achieves competitive accuracy and much higher speed than the state-of-the-art feature-based methods. The proposed method is open-sourced.
|
|
TuAT21 |
Room T21 |
SLAM |
Regular session |
Chair: Steckel, Jan | University of Antwerp |
Co-Chair: Mangelson, Joshua | Brigham Young University |
|
10:00-10:15, Paper TuAT21.1 | |
>ARAS: Ambiguity-Aware Robust Active SLAM Based on Multi-Hypothesis State and Map Estimations |
> Video Attachment
|
|
Hsiao, Ming | Carnegie Mellon University |
Mangelson, Joshua | Brigham Young University |
Suresh, Sudharshan | Carnegie Mellon University |
Debrunner, Chris | Lockheed Martin |
Kaess, Michael | Carnegie Mellon University |
Keywords: SLAM, Mapping, Motion and Path Planning
Abstract: In this paper, we introduce an ambiguity-aware robust active SLAM (ARAS) framework that makes use of multi-hypothesis state and map estimations to achieve better robustness. Ambiguous measurements can result in multiple probable solutions in a multi-hypothesis SLAM (MH-SLAM) system if they are temporarily unsolvable (due to insufficient information). ARAS aims at taking all these probable estimations into account explicitly for decision making and planning, which, to the best of our knowledge, has not yet been covered by any previous active SLAM approach (most consider a single hypothesis at a time). This novel ARAS framework 1) adopts local contours for efficient multi-hypothesis exploration, 2) incorporates an active loop closing module that revisits mapped areas to acquire information for hypothesis pruning to maintain overall computational efficiency, and 3) demonstrates how to use the output target pose for path planning under multi-hypothesis estimations. Through extensive simulations and a real-world experiment, we demonstrate that the proposed ARAS algorithm can actively map general indoor environments more robustly than a similar single-hypothesis approach in the presence of ambiguities.
|
|
10:15-10:30, Paper TuAT21.2 | |
>On-Plate Localization and Mapping for an Inspection Robot Using Ultrasonic Guided Waves: A Proof of Concept |
> Video Attachment
|
|
Pradalier, Cedric | GeorgiaTech Lorraine |
Ouabi, Othmane-Latif | Umi 2958 Gt-Cnrs |
Pomarede, Pascal | GeorgiaTech Lorraine |
Steckel, Jan | University of Antwerp |
Keywords: SLAM, Industrial Robots, Probability and Statistical Methods
Abstract: This paper presents a proof-of-concept for a localization and mapping system for magnetic crawlers performing inspection tasks on structures made of large metal plates. By relying on ultrasonic guided waves reflected from the plate edges, we demonstrate that it is possible to recover the plate geometry and robot trajectory to a precision comparable to the signal wavelength. The approach is tested using real acoustic signals acquired on test metal plates using lawn-mower paths and random walks. In contrast to related works, this paper focuses on the practical details of the localization and mapping algorithm.
|
|
10:30-10:45, Paper TuAT21.3 | |
>Plug-And-Play SLAM: A Unified SLAM Architecture for Modularity and Ease of Use |
> Video Attachment
|
|
Colosi, Mirco | Sapienza, University of Rome |
Aloise, Irvin | Sapienze University of Rome |
Guadagnino, Tiziano | Sapienza University of Rome |
Schlegel, Dominik | Sapienza - University of Rome |
Della Corte, Bartolomeo | Sapienza University of Rome |
Arras, Kai Oliver | Bosch Research |
Grisetti, Giorgio | Sapienza University of Rome |
Keywords: SLAM, Mapping
Abstract: Simultaneous Localization and Mapping (SLAM) is considered a mature research field with numerous applications and publicly available open-source systems. Despite this maturity, existing SLAM systems often rely on ad-hoc implementations or are tailored to predefined sensor setups. In this work, we tackle these issues, proposing a novel unified SLAM architecture specifically designed to standardize the SLAM problem and to address heterogeneous sensor configurations. Thanks to its modularity and design patterns, the presented framework is easy to extend, maximizes code reuse and improves computational efficiency. We show in our experiments with a variety of typical sensor configurations that these advantages come without compromising state-of-the-art SLAM performance. The result demonstrates the architecture’s relevance for facilitating further research in (multi-sensor) SLAM and its transfer into practical applications.
|
|
10:45-11:00, Paper TuAT21.4 | |
>Majorization Minimization Methods for Distributed Pose Graph Optimization with Convergence Guarantees |
|
Fan, Taosha | Northwestern University |
Murphey, Todd | Northwestern University |
Keywords: SLAM, Mapping, Optimization and Optimal Control
Abstract: In this paper, we consider the problem of distributed pose graph optimization (PGO) that has extensive applications in multi-robot simultaneous localization and mapping (SLAM). We propose majorization minimization methods for distributed PGO and show that our methods are guaranteed to converge to first-order critical points under mild conditions. Furthermore, since our methods rely on a proximal operator of distributed PGO, the convergence rate can be significantly accelerated with Nesterov's method, and more importantly, the acceleration induces no compromise of convergence guarantees. In addition, we also present accelerated majorization minimization methods for the distributed chordal initialization that have quadratic convergence, which can be used to compute an initial guess for distributed PGO. The efficacy of this work is validated through applications on a number of 2D and 3D SLAM datasets and comparisons with existing state-of-the-art methods, indicating that our methods have faster convergence and result in better solutions to distributed PGO.
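A generic majorization minimization iteration with Nesterov acceleration can be sketched as below, using a quadratic surrogate for an L-smooth cost. This is an illustrative toy on a simple problem, not the paper's distributed PGO solver.

```python
import numpy as np

def mm_nesterov(grad, x0, lipschitz, iters=100):
    """Majorization minimization with a quadratic surrogate and Nesterov
    acceleration. For an L-smooth cost f, minimizing the surrogate
    g(x | y) = f(y) + grad(y)^T (x - y) + (L/2) ||x - y||^2
    gives the step x = y - grad(y) / L; acceleration follows Nesterov's
    standard two-sequence scheme. All names here are illustrative.
    """
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(iters):
        x_next = y - grad(y) / lipschitz                  # surrogate minimizer
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t)) # momentum schedule
        y = x_next + (t - 1.0) / t_next * (x_next - x)    # extrapolated point
        x, t = x_next, t_next
    return x
```

The appeal of MM in the distributed setting is that each surrogate decouples across robots, so each agent can minimize its own block independently.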
|
|
11:00-11:15, Paper TuAT21.5 | |
>Variational Filtering with Copula Models for SLAM |
|
Martin, John D. | Stevens Institute of Technology |
Doherty, Kevin | Massachusetts Institute of Technology |
Cyr, Caralyn | Stevens Institute of Technology |
Englot, Brendan | Stevens Institute of Technology |
Leonard, John | MIT |
Keywords: SLAM, Localization
Abstract: The ability to infer map variables and estimate pose is crucial to the operation of autonomous mobile robots. In most cases the shared dependency between these variables is modeled through a multivariate Gaussian distribution, but there are many situations where that assumption is unrealistic. Our paper shows how it is possible to relax this assumption and perform simultaneous localization and mapping (SLAM) with a larger class of distributions, whose multivariate dependency is represented with a copula model. We integrate the distribution model with copulas into a Sequential Monte Carlo estimator and show how unknown model parameters can be learned through gradient-based optimization. We demonstrate our approach is effective in settings where Gaussian assumptions are clearly violated, such as environments with uncertain data association and nonlinear transition models.
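The copula construction can be illustrated by sampling: draw correlated Gaussians, push them through the standard normal CDF to obtain uniforms, then apply arbitrary inverse marginal CDFs. The sketch below is a generic Gaussian copula sampler under those assumptions, not the paper's Sequential Monte Carlo estimator.

```python
import numpy as np
from math import erf, sqrt

def gaussian_copula_sample(corr, inv_cdfs, n, rng=None):
    """Draw samples whose dependence follows a Gaussian copula with
    correlation matrix `corr`, but whose marginals are arbitrary, given by
    the list of inverse CDFs `inv_cdfs`. All names are illustrative.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    L = np.linalg.cholesky(corr)
    z = rng.standard_normal((n, len(inv_cdfs))) @ L.T    # correlated Gaussians
    u = 0.5 * (1.0 + np.vectorize(erf)(z / sqrt(2.0)))   # Phi(z): map to uniforms
    return np.column_stack([F(u[:, i]) for i, F in enumerate(inv_cdfs)])
```

This separation of dependence (the copula) from marginal shape is what lets the filter model, e.g., heavy-tailed range errors that still co-vary with pose.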
|
|
11:15-11:30, Paper TuAT21.6 | |
>Cluster-Based Penalty Scaling for Robust Pose Graph Optimization |
|
Wu, Fang | Ecole Polytechnique De Montreal |
Beltrame, Giovanni | Ecole Polytechnique De Montreal |
Keywords: SLAM, Mapping
Abstract: Robust pose graph optimization is essential for reliable pose estimation in Simultaneous Localization and Mapping (SLAM) system. Due to the nature of loop closures, even one spurious measurement could trick the SLAM estimator and severely distort the mapping results. Existing methods to avoid this problem mostly focus on ensuring local measurement consistency by evaluating measurements independently, often requiring parameters that are difficult to tune. This paper proposes a cluster-based penalty scaling (CPS) method to ensure both the local and global consistency by first evaluating the edge quality locally, and then integrating this information into the optimization formulation.
|
|
11:30-11:45, Paper TuAT21.7 |
>A Theory of Fermat Paths for 3D Imaging Sonar Reconstruction |
|
Westman, Eric | Carnegie Mellon University |
Gkioulekas, Ioannis | Carnegie Mellon University |
Kaess, Michael | Carnegie Mellon University |
Keywords: Marine Robotics, Mapping, Field Robots
Abstract: In this work, we present a novel method for reconstructing particular 3-D surface points using an imaging sonar sensor. We derive the two-dimensional Fermat flow equation, which may be applied to the planes defined by each discrete azimuth angle in the sonar image. We show that the Fermat flow equation applies to boundary points and surface points which correspond to specular reflections within the 2-D plane defined by their azimuth angle measurement. The Fermat flow equation can be used to resolve the 2-D location of these surface points within the plane, and therefore also their full 3-D location. This is achieved by translating the sensor to estimate the spatial gradient of the range measurement. This method does not rely on the precise image intensity values or the reflectivity of the imaged surface to solve for the surface point locations. We demonstrate the effectiveness of our proposed method by reconstructing 3-D object points on both simulated and real-world datasets.
|
|
TuAT22 |
Room T22 |
Sensor Fusion for SLAM |
Regular session |
Chair: Atanasov, Nikolay | University of California, San Diego |
Co-Chair: Nakamura, Yoshihiko | University of Tokyo |
|
10:00-10:15, Paper TuAT22.1 | |
>Tightly-Coupled Fusion of Global Positional Measurements in Optimization-Based Visual-Inertial Odometry |
|
Cioffi, Giovanni | University of Zurich |
Scaramuzza, Davide | University of Zurich |
Keywords: SLAM, Sensor Fusion
Abstract: Motivated by the goal of achieving robust, drift-free pose estimation in long-term autonomous navigation, in this work we propose a methodology to fuse global positional information with visual and inertial measurements in a tightly-coupled nonlinear-optimization-based estimator. Differently from previous works, which are loosely-coupled, the tightly-coupled approach allows exploiting the correlations amongst all the measurements. A sliding window of the most recent system states is estimated by minimizing a cost function that includes visual re-projection errors, relative inertial errors, and global positional residuals. We use IMU preintegration to formulate the inertial residuals and leverage the outcome of this algorithm to efficiently compute the global position residuals. The experimental results show that the proposed method achieves accurate and globally consistent estimates, with a negligible increase in computational cost. Our method consistently outperforms the loosely-coupled fusion approach. The mean position error is reduced by up to 50% with respect to the loosely-coupled approach in Unmanned Aerial Vehicle (UAV) flights with a travelled distance of about 1 km, where the global position information is given by noisy GPS measurements. To the best of our knowledge, this is the first work in which global positional measurements are tightly fused in an optimization-based visual-inertial odometry (VIO) algorithm, leveraging the IMU preintegration method to define the global positional factors.
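The sliding-window cost described in the abstract can be written schematically as follows (the notation is ours, not the paper's):

```latex
\min_{\mathcal{X}} \;
\sum_{k \in \mathcal{V}} \big\| r_{\mathrm{vis}}^{k}(\mathcal{X}) \big\|^{2}_{\Sigma_{\mathrm{vis}}}
+ \sum_{k \in \mathcal{I}} \big\| r_{\mathrm{imu}}^{k}(\mathcal{X}) \big\|^{2}_{\Sigma_{\mathrm{imu}}}
+ \sum_{k \in \mathcal{G}} \big\| r_{\mathrm{gps}}^{k}(\mathcal{X}) \big\|^{2}_{\Sigma_{\mathrm{gps}}}
```

where \(\mathcal{X}\) is the sliding window of states, the three sums collect visual re-projection, preintegrated inertial, and global positional residuals respectively, and each residual is weighted by its covariance.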
|
|
10:15-10:30, Paper TuAT22.2 | |
>GR-SLAM: Vision-Based Sensor Fusion SLAM for Ground Robots on Complex Terrain |
> Video Attachment
|
|
Su, Yun | Shenyang Institute of Automation |
Wang, Ting | Robotics Lab., Shenyang Institute of Automation, CAS |
Yao, Chen | Shenyang Institute of Automation, Chinese Academy of Sciences |
Shao, Shiliang | SIA |
Wang, Zhidong | Chiba Institute of Technology |
Keywords: SLAM, Sensor Fusion, Visual-Based Navigation
Abstract: In recent years, many excellent camera-based SLAM methods, especially camera-IMU fusion (VIO) methods, have emerged, greatly improving the accuracy and robustness of SLAM. However, we find through experiments that most existing VIO methods perform well on drones or drone datasets, but cannot continuously provide accurate and robust localization for ground robots on complex terrain. Some researchers have proposed methods for ground robots, but most have limited applications due to the assumption of planar motion. Therefore, this paper proposes GR-SLAM for the localization of ground robots on complex terrain, which can fuse camera, IMU, and encoder data in a tightly coupled scheme to provide accurate and robust state estimation for robots. First, an odometer increment model is proposed, which fuses the encoder and IMU data to calculate the robot pose increment on the manifold, and computes frame constraints through the pre-integrated increment. We then propose an evaluation algorithm for multi-sensor measurements, which can detect abnormal data and adjust their optimization weights. Finally, we implement a complete sliding-window factor graph optimization framework, which tightly couples camera, IMU, and encoder data to perform state estimation. Extensive experiments are conducted on a real ground robot and the results show that GR-SLAM can provide accurate and robust state estimation for ground robots.
|
|
10:30-10:45, Paper TuAT22.3 | |
>OrcVIO: Object Residual Constrained Visual-Inertial Odometry |
> Video Attachment
|
|
Shan, Mo | University of California San Diego |
Feng, Qiaojun | University of California, San Diego |
Atanasov, Nikolay | University of California, San Diego |
Keywords: SLAM, Semantic Scene Understanding, Object Detection, Segmentation and Categorization
Abstract: Introducing object-level semantic information into a simultaneous localization and mapping (SLAM) system is critical. It not only improves the performance but also enables tasks specified in terms of meaningful objects. This work presents OrcVIO, a visual-inertial odometry tightly coupled with tracking and optimization over structured object models. OrcVIO differentiates through semantic feature and bounding-box reprojection errors to perform batch optimization over the pose and shape of objects. The estimated object states aid in real-time incremental optimization over the IMU-camera states. The ability of OrcVIO to produce accurate trajectory estimates and large-scale object-level maps is evaluated using real data.
|
|
10:45-11:00, Paper TuAT22.4 | |
>LIC-Fusion 2.0: LiDAR-Inertial-Camera Odometry with Sliding-Window Plane-Feature Tracking |
> Video Attachment
|
|
Zuo, Xingxing | Zhejiang University |
Yang, Yulin | University of Delaware |
Geneva, Patrick | University of Delaware |
Lv, Jiajun | Zhejiang University |
Liu, Yong | Zhejiang University |
Huang, Guoquan (Paul) | University of Delaware |
Pollefeys, Marc | ETH Zurich |
Keywords: Sensor Fusion, Localization, SLAM
Abstract: Multi-sensor fusion of multi-modal measurements from commodity inertial, visual and LiDAR sensors to provide robust and accurate 6DOF pose estimation holds great potential in robotics and beyond. In this paper, building upon our prior work (i.e., LIC-Fusion), we develop a sliding-window-filter-based LiDAR-inertial-camera odometry with online spatiotemporal calibration (i.e., LIC-Fusion 2.0), which introduces a novel sliding-window plane-feature tracking for efficiently processing 3D LiDAR point clouds. In particular, after motion compensation for LiDAR points by leveraging IMU data, low-curvature planar points are extracted and tracked across the sliding window. A novel outlier rejection criterion is proposed in the plane-feature tracking for high-quality data association. Only the tracked planar points belonging to the same plane are used for plane initialization, which makes the plane extraction efficient and robust. Moreover, we perform the observability analysis for the IMU-LiDAR subsystem under consideration and report the degenerate cases for spatiotemporal calibration using plane features. While the estimation consistency and identified degenerate motions are validated in Monte-Carlo simulations, different real-world experiments are also conducted to show that the proposed LIC-Fusion 2.0 outperforms its predecessor and other state-of-the-art methods.
|
|
11:00-11:15, Paper TuAT22.5 | |
>Leveraging Planar Regularities for Point Line Visual-Inertial Odometry |
> Video Attachment
|
|
Li, Xin | Peking University |
He, Yijia | Institute of Automation, Chinese Academy of Sciences |
Lin, Jinlong | Peking University |
Liu, Xiao | Megvii Technology Inc |
Keywords: SLAM, Mapping, Sensor Fusion
Abstract: With a monocular Visual-Inertial Odometry (VIO) system, the 3D point cloud and camera motion can be estimated simultaneously. Because pure sparse 3D points provide a structureless representation of the environment, generating a 3D mesh from sparse points can further model the environment topology and produce dense mapping. To improve the accuracy of 3D mesh generation and localization, we propose a tightly-coupled monocular VIO system, PLP-VIO, which exploits point and line features as well as plane regularities. The co-planarity constraints are used to leverage additional structural information for more accurate estimation of 3D points and spatial lines in the state estimator. To detect planes and 3D meshes robustly, we combine line features with point features in the detection method. The effectiveness of the proposed method is verified on both synthetic data and public datasets and is compared with other state-of-the-art algorithms.
|
|
11:15-11:30, Paper TuAT22.6 | |
>SplitFusion: Simultaneous Tracking and Mapping for Non-Rigid Scenes |
> Video Attachment
|
|
Li, Yang | The University of Tokyo |
Zhang, Tianwei | The University of Tokyo |
Nakamura, Yoshihiko | University of Tokyo |
Harada, Tatsuya | The University of Tokyo |
Keywords: Mapping, SLAM, Localization
Abstract: We present SplitFusion, a novel dense RGB-D SLAM framework that simultaneously performs tracking and volumetric reconstruction for both rigid and non-rigid components of the scene. SplitFusion first adopts a deep-learning-based semantic instance segmentation technique to split the scene into rigid or non-rigid geometric surfaces. The split surfaces are independently tracked via rigid or non-rigid ICP and reconstructed through incremental depth-map volumetric fusion. Experimental results show that the proposed approach can provide not only accurate environment maps but also well-reconstructed non-rigid targets, e.g., moving humans.
|
|
TuAT23 |
Room T23 |
Range SLAM |
Regular session |
Chair: Wang, Sen | Edinburgh Centre for Robotics, Heriot-Watt University |
Co-Chair: Tan, U-Xuan | Singapore University of Technology and Design |
|
10:00-10:15, Paper TuAT23.1 | |
>LIO-SAM: Tightly-Coupled Lidar Inertial Odometry Via Smoothing and Mapping |
|
Shan, Tixiao | Massachusetts Institute of Technology |
Englot, Brendan | Stevens Institute of Technology |
Meyers, Drew | MIT |
Wang, Wei | Massachusetts Institute of Technology |
Ratti, Carlo | Massachusetts Institute of Technology |
Rus, Daniela | MIT |
Keywords: Sensor Fusion, Range Sensing, Mapping
Abstract: We propose a framework for tightly-coupled lidar inertial odometry via smoothing and mapping, LIO-SAM, that achieves highly accurate, real-time mobile robot trajectory estimation and map-building. LIO-SAM formulates lidar-inertial odometry atop a factor graph, allowing a multitude of relative and absolute measurements, including loop closures, to be incorporated from different sources as factors into the system. The estimated motion from inertial measurement unit (IMU) pre-integration de-skews point clouds and produces an initial guess for lidar odometry optimization. The obtained lidar odometry solution is used to estimate the bias of the IMU. To ensure high performance in real-time, we marginalize old lidar scans for pose optimization, rather than matching lidar scans to a global map. Scan-matching at a local scale instead of a global scale significantly improves the real-time performance of the system, as does the selective introduction of keyframes, and an efficient sliding window approach that registers a new keyframe to a fixed-size set of prior "sub-keyframes." The proposed method is extensively evaluated on datasets gathered from three platforms over various scales and environments.
|
|
10:15-10:30, Paper TuAT23.2 | |
>LiTAMIN: LiDAR Based Tracking and MappINg by Stabilized ICP for Geometry Approximation with Normal Distributions |
> Video Attachment
|
|
Yokozuka, Masashi | Nat. Inst. of Advanced Industrial Science and Technology |
Koide, Kenji | National Institute of Advanced Industrial Science and Technology |
Oishi, Shuji | National Institute of Advanced Industrial Science and Technology |
Banno, Atsuhiko | National Institute of Advanced Industrial Science and Technology |
Keywords: SLAM, Mapping, Localization
Abstract: This paper proposes a 3D LiDAR SLAM method that improves accuracy, robustness and computational efficiency for iterative closest point (ICP) registration employing locally approximated geometry with clusters of normal distributions. In comparison with previous normal-distribution-based ICP methods, such as NDT and GICP, our ICP method is simply stabilized through normalization of the cost function by the Frobenius norm and regularization of the covariance matrix. The previous methods are stabilized with principal component analysis (PCA), whose computational cost is higher than that of our method. Moreover, our SLAM method can reduce the effect of wrong loop closure constraints. Experimental results show that our SLAM method has advantages over open-source state-of-the-art methods, namely LOAM, LeGO-LOAM and hdl_graph_slam.
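A minimal numpy sketch of the two stabilization ideas the abstract names: regularizing a cluster's covariance matrix and normalizing the point-pair cost by the covariance's Frobenius norm. The exact formulas LiTAMIN uses are an assumption here; function names and the regularization constant are illustrative.

```python
import numpy as np

def regularized_covariance(points, eps=1e-3):
    """Covariance of a local point cluster, regularized with a small
    diagonal term so it stays well-conditioned without needing a PCA /
    eigendecomposition step (the stabilization idea described above)."""
    c = np.cov(points.T)
    return c + eps * np.eye(points.shape[1])

def normalized_mahalanobis_cost(p, q, cov):
    """Point-pair ICP cost normalized by the Frobenius norm of the
    covariance, which bounds the influence of poorly shaped clusters."""
    d = p - q
    m = d @ np.linalg.solve(cov, d)  # Mahalanobis-style distance
    return m / np.linalg.norm(cov, ord='fro')
```

In an ICP loop, this per-pair cost would be summed over all correspondences and minimized over the rigid transform; only the cost itself is sketched here.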
|
|
10:30-10:45, Paper TuAT23.3 | |
>GOSMatch: Graph-Of-Semantics Matching for Detecting Loop Closures in 3D LiDAR Data |
|
Zhu, Yachen | Sun Yat-Sen University |
Ma, Yanyang | Sun Yat-Sen University |
Chen, Long | Sun Yat-Sen University |
Liu, Cong | Sun Yat-Sen University |
Ye, Maosheng | Wuhan University |
Li, Lingxi | Indiana University-Purdue University Indianapolis |
Keywords: SLAM, Localization
Abstract: Detecting loop closures in 3D Light Detection and Ranging (LiDAR) data is a challenging task since point-level methods always suffer from instability. This paper presents a semantic-level approach named GOSMatch to perform reliable place recognition. Our method leverages novel descriptors, which are generated from the spatial relationship between semantics, to perform frame description and data association. We also propose a coarse-to-fine strategy to efficiently search for loop closures. Besides, GOSMatch can give an accurate 6-DOF initial pose estimation once a loop closure is confirmed. Extensive experiments have been conducted on the KITTI odometry dataset and the results show that GOSMatch can achieve robust loop closure detection performance and outperform existing methods.
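One way to turn "spatial relationships between semantics" into a frame descriptor is a histogram of pairwise distances between semantic object centroids, one histogram per class pair. This is a simplified sketch of the graph-of-semantics idea only; GOSMatch's actual descriptor layout, class set, and binning are assumptions here.

```python
import numpy as np
from itertools import combinations

def semantic_pair_descriptor(centroids, labels, classes=('car', 'trunk', 'pole'),
                             bins=10, max_dist=50.0):
    """Concatenate one distance histogram per unordered class pair
    (including same-class pairs) over all semantic object centroids."""
    class_pairs = list(combinations(range(len(classes)), 2)) \
        + [(i, i) for i in range(len(classes))]
    desc = []
    for a, b in class_pairs:
        wanted = {classes[a], classes[b]} if a != b else {classes[a]}
        dists = [np.linalg.norm(centroids[i] - centroids[j])
                 for i, j in combinations(range(len(labels)), 2)
                 if {labels[i], labels[j]} == wanted]
        hist, _ = np.histogram(dists, bins=bins, range=(0.0, max_dist))
        desc.append(hist)
    return np.concatenate(desc).astype(float)
```

Two frames can then be compared by a distance between their descriptors (e.g. cosine), with candidate matches verified geometrically afterwards.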
|
|
10:45-11:00, Paper TuAT23.4 | |
>Seed: A Segmentation-Based Egocentric 3D Point Cloud Descriptor for Loop Closure Detection |
|
Fan, Yunfeng | Singapore University of Technology and Design |
He, Yichang | SUTD |
Tan, U-Xuan | Singapore University of Technology and Design |
Keywords: SLAM, Mapping
Abstract: Place recognition is essential for a SLAM system since it is critical for loop closure, helping to correct the accumulated drift and produce a globally consistent map. Unlike visual SLAM, which can use diverse feature-detection methods to describe the scene, few works have been reported that represent a place using a single LiDAR scan. In this paper, we propose a segmentation-based egocentric descriptor, termed Seed, that describes the scene using a single LiDAR scan. Through the segmentation approach, we first obtain distinct segmented objects, which reduces noise and resolution effects and makes the descriptor more robust. Then, the topological information of the segmented objects is encoded into the descriptor. Unlike other reported approaches, the proposed method is rotation invariant and insensitive to translation variation. The feasibility of the proposed method is evaluated on the KITTI dataset, and the results show that it outperforms the state-of-the-art method in terms of accuracy.
|
|
11:00-11:15, Paper TuAT23.5 | |
>RadarSLAM: Radar Based Large-Scale SLAM in All Weathers |
> Video Attachment
|
|
Hong, Ziyang | Heriot-Watt University |
Petillot, Yvan R. | Heriot-Watt University |
Wang, Sen | Edinburgh Centre for Robotics, Heriot-Watt University |
Keywords: SLAM, Localization, Mapping
Abstract: Numerous Simultaneous Localization and Mapping (SLAM) algorithms have been presented in the last decade using different sensor modalities. However, robust SLAM in extreme weather conditions is still an open research problem. In this paper, RadarSLAM, a full radar-based graph SLAM system, is proposed for reliable localization and mapping in large-scale environments. It is composed of pose tracking, local mapping, loop closure detection and pose graph optimization, enhanced by novel feature matching and probabilistic point cloud generation on radar images. Extensive experiments are conducted on a public radar dataset and several self-collected radar sequences, demonstrating state-of-the-art reliability and localization accuracy in various adverse weather conditions, such as dark night, dense fog and heavy snowfall.
|
|
11:15-11:30, Paper TuAT23.6 | |
>GP-SLAM+: Real-Time 3D Lidar SLAM Based on Improved Regionalized Gaussian Process Map Reconstruction |
> Video Attachment
|
|
Ruan, Jianyuan | Zhejiang University |
Li, Bo | Zhejiang University |
Wang, Yinqiang | Zhejiang University |
Fang, Zhou | Zhejiang University |
Keywords: SLAM, Mapping, Localization
Abstract: This paper presents a 3D lidar SLAM system based on improved regionalized Gaussian process (GP) map reconstruction to provide both low-drift state estimation and mapping in real-time for robotics applications. We utilize spatial GP regression to model the environment. This tool enables us to recover surfaces including those in sparsely scanned areas and obtain uniform samples with uncertainty. Those properties facilitate robust data association and map updating in our scan-to-map registration scheme, especially when working with sparse range data. Compared with previous GP-SLAM, this work overcomes the prohibitive computational complexity of GP and redesigns the registration strategy to meet the accuracy requirements in 3D scenarios. For large-scale tasks, a two-thread framework is employed to suppress the drift further. Aerial and ground-based experiments demonstrate that our method allows robust odometry and precise mapping in real-time. It also outperforms the state-of-the-art lidar SLAM systems in our tests with light-weight sensors.
|
|
TuBT1 |
Room T1 |
Imitation Learning I |
Regular session |
Chair: Stone, Peter | University of Texas at Austin |
Co-Chair: Taniguchi, Tadahiro | Ritsumeikan University |
|
11:45-12:00, Paper TuBT1.1 | |
>Domain-Adversarial and -Conditional State Space Model for Imitation Learning |
> Video Attachment
|
|
Okumura, Ryo | Panasonic Corporation |
Okada, Masashi | Panasonic Corporation |
Taniguchi, Tadahiro | Ritsumeikan University |
Keywords: Imitation Learning, Representation Learning, Model Learning for Control
Abstract: State representation learning (SRL) in partially observable Markov decision processes has been studied to learn abstract features of data useful for robot control tasks. For SRL, acquiring domain-agnostic states is essential for achieving efficient imitation learning. Without these states, imitation learning is hampered by domain-dependent information that is useless for control. However, existing methods fail to remove such disturbances from the states when the data from experts and agents show large domain shifts. To overcome this issue, we propose a domain-adversarial and -conditional state space model (DAC-SSM) that enables control systems to obtain domain-agnostic and task- and dynamics-aware states. DAC-SSM jointly optimizes the state inference, observation reconstruction, forward dynamics, and reward models. To remove domain-dependent information from the states, the model is trained with domain discriminators in an adversarial manner, and the reconstruction is conditioned on domain labels. We experimentally evaluated the model predictive control performance via imitation learning for continuous control of sparse-reward tasks in simulators and compared it with the performance of the existing SRL method. The agents from DAC-SSM achieved performance comparable to experts and more than twice that of the baselines. We conclude that domain-agnostic states are essential for imitation learning under large domain shifts and can be obtained using DAC-SSM.
|
|
12:00-12:15, Paper TuBT1.2 | |
>Planning on the Fast Lane: Learning to Interact Using Attention Mechanisms in Path Integral Inverse Reinforcement Learning |
> Video Attachment
|
|
Rosbach, Sascha | Volkswagen AG |
Li, Xing | Volkswagen AG |
Grossjohann, Simon | Volkswagen AG |
Homoceanu, Silviu | Volkswagen AG |
Roth, Stefan | TU Darmstadt |
Keywords: Learning from Demonstration, Motion and Path Planning, Imitation Learning
Abstract: General-purpose trajectory planning algorithms for automated driving utilize complex reward functions to perform a combined optimization of strategic, behavioral, and kinematic features. The specification and tuning of a single reward function is a tedious task and does not generalize over a large set of traffic situations. Deep learning approaches based on path integral inverse reinforcement learning have been successfully applied to predict local situation-dependent reward functions using features of a set of sampled driving policies. Sample-based trajectory planning algorithms are able to approximate a spatio-temporal subspace of feasible driving policies that can be used to encode the context of a situation. However, the interaction with dynamic objects requires an extended planning horizon, which depends on sequential context modeling. In this work, we are concerned with the sequential reward prediction over an extended time horizon. We present a neural network architecture that uses a policy attention mechanism to generate a low-dimensional context vector by concentrating on trajectories with a human-like driving style. Apart from this, we propose a temporal attention mechanism to identify context switches and allow for stable adaptation of rewards. We evaluate our results on complex simulated driving situations, including other moving vehicles. Our evaluation shows that our policy attention mechanism learns to focus on collision-free policies in the configuration space. Furthermore, the temporal attention mechanism learns persistent interaction with other vehicles over an extended planning horizon.
|
|
12:15-12:30, Paper TuBT1.3 | |
>A Geometric Perspective on Visual Imitation Learning |
> Video Attachment
|
|
Jin, Jun | University of Alberta |
Petrich, Laura | University of Alberta |
Dehghan, Masood | University of Alberta |
Jagersand, Martin | University of Alberta |
Keywords: Visual Learning, Imitation Learning, Visual Servoing
Abstract: We consider the problem of visual imitation learning without human kinesthetic teaching or teleoperation, nor access to an interactive reinforcement learning training environment. We present a geometric perspective to this problem where geometric feature correspondences are learned from one training video and used to execute tasks via visual servoing. Specifically, we propose VGS-IL (Visual Geometric Skill Imitation Learning), an end-to-end geometry-parameterized task concept inference method, to infer globally consistent geometric feature association rules from human demonstration video frames. We show that, instead of learning actions from image pixels, learning a geometry-parameterized task concept provides an explainable and invariant representation across demonstrator to imitator under various environmental settings. Moreover, such a task concept representation provides a direct link with geometric vision based controllers (e.g. visual servoing), allowing for efficient mapping of high-level task concepts to low-level robot actions.
|
|
12:30-12:45, Paper TuBT1.4 | |
>RIDM: Reinforced Inverse Dynamics Modeling for Learning from a Single Observed Demonstration |
> Video Attachment
|
|
Pavse, Brahma | University of Texas at Austin |
Torabi, Faraz | University of Texas at Austin |
Hanna, Josiah | The University of Texas at Austin |
Warnell, Garrett | U.S. Army Research Laboratory |
Stone, Peter | University of Texas at Austin |
Keywords: Imitation Learning, Reinforcement Learning
Abstract: Augmenting reinforcement learning with imitation learning is often hailed as a method by which to improve upon learning from scratch. However, most existing methods for integrating these two techniques are subject to several strong assumptions---chief among them that information about demonstrator actions is available. In this paper, we investigate the extent to which this assumption is necessary by introducing and evaluating reinforced inverse dynamics modeling (RIDM), a novel paradigm for combining imitation from observation (IfO) and reinforcement learning with no dependence on demonstrator action information. Moreover, RIDM requires only a single demonstration trajectory and is able to operate directly on raw (unaugmented) state features. We find experimentally that RIDM performs favorably compared to a baseline approach for several tasks in simulation as well as for tasks on a real UR5 robot arm. Experiment videos can be found at https://sites.google.com/view/ridm-reinforced-inverse-dynami.
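The inverse dynamics model at the heart of imitation-from-observation pipelines maps a state transition (s, s') to the action that produced it. A toy stand-in fit by linear least squares is sketched below; RIDM itself uses more expressive learned models and reinforcement, so the function names and the linear form are illustrative assumptions only.

```python
import numpy as np

def fit_inverse_dynamics(states, next_states, actions):
    """Fit a linear inverse dynamics model a ~ W @ [s, s'] by least squares
    from transitions observed in a demonstration (actions known only
    during this offline fitting stage in this toy setup)."""
    X = np.hstack([states, next_states])
    W, *_ = np.linalg.lstsq(X, actions, rcond=None)
    return W

def infer_actions(W, states, next_states):
    """Recover the actions that would reproduce the observed transitions."""
    return np.hstack([states, next_states]) @ W
```

Given a single action-free demonstration, such a model lets the imitator propose the action needed to move from its current state toward the demonstrator's next state.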
|
|
12:45-13:00, Paper TuBT1.5 | |
>Learn by Observation: Imitation Learning for Drone Patrolling from Videos of a Human Navigator |
|
Fan, Yue | Johns Hopkins University |
Chu, Shilei | Shandong Univ |
Zhang, Wei | Shandong University |
Song, Ran | Shandong University |
Li, Yibin | Shandong University |
Keywords: Imitation Learning, Deep Learning for Visual Perception, Autonomous Vehicle Navigation
Abstract: We present an imitation learning method for autonomous drone patrolling based only on raw videos. Different from previous methods, we propose to let the drone learn patrolling in the air by observing and imitating how a human navigator does it on the ground. The observation process enables the automatic collection and annotation of data using inter-frame geometric consistency, resulting in less manual effort and higher accuracy. A newly designed neural network is then trained on the annotated data to predict appropriate directions and translations for the drone to patrol in a lane-keeping manner, as humans do. Our method allows the drone to fly at a high altitude with a broad view and low risk. It can also detect all accessible directions at crossroads and further integrate available user instructions with autonomous patrolling control commands. Extensive experiments are conducted to demonstrate the accuracy of the proposed imitation learning process as well as the reliability of the holistic system for autonomous drone navigation. The codes, datasets and video demonstrations are available at https://vsislab.github.io/uavpatrol.
|
|
13:00-13:15, Paper TuBT1.6 | |
>Imitation Learning Based on Bilateral Control for Human–Robot Cooperation |
> Video Attachment
|
|
Sasagawa, Ayumu | Saitama University |
Fujimoto, Kazuki | Saitama University |
Sakaino, Sho | University of Tsukuba |
Tsuji, Toshiaki | Saitama University |
Keywords: Imitation Learning, Cognitive Human-Robot Interaction, Manipulation Planning
Abstract: Robots are required to respond autonomously to changing situations. Imitation learning is a promising candidate for achieving generalization performance, and extensive results have been demonstrated in object manipulation. However, cooperative work between humans and robots is still a challenging issue because robots must control dynamic interactions among themselves, humans, and objects. Furthermore, it is difficult to follow subtle perturbations that may occur among coworkers. In this study, we find that cooperative work can be accomplished by imitation learning using bilateral control. Because bilateral control can separate response values from command values, the human skills used to control dynamic interactions can be extracted. We then consider the task of serving food. The experimental results clearly demonstrate the importance of force control, and the dynamic interactions can be controlled by the inferred action force.
|
|
TuBT2 |
Room T2 |
Imitation Learning II |
Regular session |
Chair: Urain De Jesus, Julen | TU Darmstadt |
Co-Chair: Kolathaya, Shishir | Indian Institute of Science |
|
11:45-12:00, Paper TuBT2.1 | |
>Multi-Instance Aware Localization for End-To-End Imitation Learning |
> Video Attachment
|
|
Gubbi Venkatesh, Sagar | Indian Institute of Science |
Upadrashta, Raviteja | Indian Institute of Science |
Kolathaya, Shishir | Indian Institute of Science |
Amrutur, Bharadwaj | Indian Institute of Science |
Keywords: Imitation Learning, Localization, Learning from Demonstration
Abstract: Existing architectures for imitation learning using image-to-action policy networks perform poorly when presented with an input image containing multiple instances of the object of interest, especially when the number of expert demonstrations available for training are limited. We show that end-to-end policy networks can be trained in a sample efficient manner by (a) appending the feature map output of the vision layers with an embedding that can indicate instance preference or take advantage of an implicit preference present in the expert demonstrations, and (b) employing an autoregressive action generator network for the control layers. The proposed architecture for localization has improved accuracy and sample efficiency and can generalize to the presence of more instances of objects than seen during training. When used for end-to-end imitation learning to perform reach, push, and pick-and-place tasks on a real robot, training is achieved with as few as 15 expert demonstrations.
|
|
12:00-12:15, Paper TuBT2.2 | |
>ImitationFlow: Learning Deep Stable Stochastic Dynamic Systems by Normalizing Flows |
|
Urain De Jesus, Julen | TU Darmstadt |
Ginesi, Michele | University of Verona |
Tateo, Davide | Technische Universität Darmstadt |
Peters, Jan | Technische Universität Darmstadt |
Keywords: Learning from Demonstration, Novel Deep Learning Methods, Motion Control
Abstract: We introduce ImitationFlow, a novel Deep generative model that allows learning complex globally stable, stochastic, nonlinear dynamics. Our approach extends the Normalizing Flows framework to learn stable Stochastic Differential Equations. We prove the Lyapunov stability for a class of Stochastic Differential Equations and we propose a learning algorithm to learn them from a set of demonstrated trajectories. Our model extends the set of stable dynamical systems that can be represented by state-of-the-art approaches, eliminates the Gaussian assumption on the demonstrations, and outperforms the previous algorithms in terms of representation accuracy. We show the effectiveness of our method with both standard datasets and a real robot experiment.
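The normalizing-flows machinery ImitationFlow builds on evaluates an exact density through the change-of-variables formula: log p(x) = log N(f(x); 0, I) + log|det df/dx|. A single invertible affine layer, shown below, is the smallest instance of this; the paper's stable stochastic differential equation construction is not reproduced here, and the class name is illustrative.

```python
import numpy as np

class AffineFlow:
    """Elementwise invertible map z = s * x + t with exact log-density
    via the change-of-variables formula (base density: standard normal)."""
    def __init__(self, s, t):
        self.s = np.asarray(s, float)
        self.t = np.asarray(t, float)

    def forward(self, x):
        return self.s * x + self.t

    def inverse(self, z):
        return (z - self.t) / self.s

    def log_prob(self, x):
        # log p(x) = log N(f(x); 0, I) + log|det df/dx|
        z = self.forward(x)
        dim = x.shape[-1]
        base = -0.5 * np.sum(z ** 2, axis=-1) - 0.5 * dim * np.log(2 * np.pi)
        return base + np.sum(np.log(np.abs(self.s)))
```

Training a stack of such layers amounts to maximizing `log_prob` over the demonstrated trajectories, which removes the Gaussian assumption on the demonstrations mentioned in the abstract.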
|
|
12:15-12:30, Paper TuBT2.3 | |
>Standard Deep Generative Models for Density Estimation in Configuration Spaces: A Study of Benefits, Limits and Challenges |
|
Gieselmann, Robert | KTH Royal Institute of Technology |
Pokorny, Florian T. | KTH Royal Institute of Technology |
Keywords: Imitation Learning, Motion and Path Planning, Probability and Statistical Methods
Abstract: Deep Generative Models such as Generative Adversarial Networks (GAN) and Variational Autoencoders (VAE) have found multiple applications in Robotics, with recent works suggesting the potential use of these methods as a generic solution for the estimation of sampling distributions for motion planning in parameterized sets of environments. In this work we provide a first empirical study of challenges, benefits and drawbacks of utilizing vanilla GANs and VAEs for the approximation of probability distributions arising from sampling-based motion planner path solutions. We present an evaluation on a sequence of simulated 2D configuration spaces of increasing complexity and a 4D planar robot arm scenario and find that vanilla GANs and VAEs both outperform classical statistical estimation by an n-dimensional histogram in our chosen scenarios. We furthermore highlight differences in convergence and noisiness between the trained models and propose and study a benchmark sequence of planar C-space environments parameterized by opened or closed doors. In this setting, we find that the chosen geometrical embedding of the parameters of the family of considered C-spaces is a key performance contributor that relies heavily on human intuition about C-space structure at present. We discuss some of the challenges of parameter selection and convergence for applying this approach with an out-of-the box GAN and VAE model.
|
|
12:30-12:45, Paper TuBT2.4 | |
>Progressive Automation of Periodic Tasks on Planar Surfaces of Unknown Pose with Hybrid Force/position Control |
> Video Attachment
|
|
Dimeas, Fotios | Aristotle University of Thessaloniki |
Doulgeri, Zoe | Aristotle University of Thessaloniki |
Keywords: Learning from Demonstration
Abstract: This paper presents a teaching-by-demonstration method for contact tasks with periodic movement on planar surfaces of unknown pose. To learn the motion on the plane, we utilize frequency oscillators with periodic movement primitives and we propose modified adaptation rules along with an extraction method for the task's fundamental frequency that automatically discards near-zero frequency components. Additionally, we utilize an online estimate of the normal vector to the plane, so that the robot is able to quickly adapt to rotated hinged surfaces such as a window or a door. Using the framework of progressive automation for compliance adaptation, the robot transitions seamlessly and bi-directionally between hand guidance and autonomous operation within a few repetitions of the task. As the level of automation increases, a hybrid force/position controller is progressively engaged for the autonomous operation of the robot. Our methodology is verified experimentally on surfaces of different orientations, with the robot being able to adapt to surface orientation perturbations.
|
|
TuBT3 |
Room T3 |
Model Learning I |
Regular session |
Chair: Kelly, Jonathan | University of Toronto |
Co-Chair: Mouret, Jean-Baptiste | Inria |
|
11:45-12:00, Paper TuBT3.1 | |
>Learning Hybrid Object Kinematics for Efficient Hierarchical Planning under Uncertainty |
> Video Attachment
|
|
Jain, Ajinkya | University of Texas at Austin |
Niekum, Scott | University of Texas at Austin |
Keywords: Model Learning for Control, Learning from Demonstration, Manipulation Planning
Abstract: Sudden changes in the dynamics of robotic tasks, such as contact with an object or the latching of a door, are often viewed as inconvenient discontinuities that make manipulation difficult. However, when these transitions are well-understood, they can be leveraged to reduce uncertainty or aid manipulation---for example, wiggling a screw to determine if it is fully inserted or not. Current model-free reinforcement learning approaches require large amounts of data to learn to leverage such dynamics, scale poorly as problem complexity grows, and do not transfer well to significantly different problems. By contrast, hierarchical POMDP planning-based methods scale well via plan decomposition, work well on novel problems, and directly consider uncertainty, but often rely on precise hand-specified models and task decompositions. To combine the advantages of these opposing paradigms, we propose a new method, MICAH, which given unsegmented data of an object's motion under applied actions, (1) detects changepoints in the object motion model using action-conditional inference, (2) estimates the individual local motion models with their parameters, and (3) converts them into a hybrid automaton that is compatible with hierarchical POMDP planning. We show that model learning under MICAH is more accurate and robust to noise than prior approaches. Further, we combine MICAH with a hierarchical POMDP planner to demonstrate that the learned models are rich enough to be used for performing manipulation tasks under uncertainty that require the objects to be used in novel ways not encountered during training.
|
|
12:00-12:15, Paper TuBT3.2 | |
>Learning State-Dependent Losses for Inverse Dynamics Learning |
|
Morse, Kristen | Facebook AI Research |
Das, Neha | Facebook |
Lin, Yixin | Facebook AI Research |
Wang, Austin S. | Carnegie Mellon University |
Rai, Akshara | Facebook AI Research |
Meier, Franziska | Facebook |
Keywords: Model Learning for Control, Novel Deep Learning Methods, Transfer Learning
Abstract: Being able to quickly adapt to changes in dynamics is paramount in model-based control for object manipulation tasks. In order to influence fast adaptation of the inverse dynamics model's parameters, data efficiency is crucial. Given observed data, a key element to how an optimizer updates model parameters is the loss function. In this work, we propose to apply meta-learning to learn structured, state-dependent loss functions during a meta-training phase. We then replace standard losses with our learned losses during online adaptation tasks. We evaluate our proposed approach on inverse dynamics learning tasks, both in simulation and on real hardware data. In both settings, the structured and state-dependent learned losses improve online adaptation speed, when compared to standard, state-independent loss functions.
|
|
12:15-12:30, Paper TuBT3.3 | |
>Fast Online Adaptation in Robotics through Meta-Learning Embeddings of Simulated Priors |
> Video Attachment
|
|
Kaushik, Rituraj | INRIA - Nancy Grand Est, France |
Anne, Timothée | ENS Rennes |
Mouret, Jean-Baptiste | Inria |
Keywords: Model Learning for Control, Reinforcement Learning
Abstract: Meta-learning algorithms can accelerate the model-based reinforcement learning (MBRL) algorithms by finding an initial set of parameters for the dynamical model such that the model can be trained to match the actual dynamics of the system with only a few data-points. However, in the real world, a robot might encounter any situation starting from motor failures to finding itself in a rocky terrain where the dynamics of the robot can be significantly different from one another. In this paper, first, we show that when meta-training situations (the prior situations) have such diverse dynamics, using a single set of meta-trained parameters as a starting point still requires a large number of observations from the real system to learn a useful model of the dynamics. Second, we propose an algorithm called FAMLE that mitigates this limitation by meta-training several initial starting points (i.e., initial parameters) for training the model and allows robots to select the most suitable starting point to adapt the model to the current situation with only a few gradient steps. We compare FAMLE to MBRL, MBRL with a meta-trained model with MAML, and model-free policy search algorithm PPO for various simulated and real robotic tasks, and show that FAMLE allows robots to adapt to novel damages in significantly fewer time-steps than the baselines.
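The selection-then-adaptation step the abstract describes can be sketched in a few lines: score each meta-trained starting point on the most recent observations, pick the best, and run a few gradient steps from it. All names here are illustrative, and FAMLE's actual embedding-conditioned model and training procedure are not reproduced.

```python
import numpy as np

def select_initialization(candidates, loss_fn, recent_data):
    """Pick the meta-trained initial parameters whose model best explains
    the few most recent observations from the real system."""
    losses = [loss_fn(p, recent_data) for p in candidates]
    return candidates[int(np.argmin(losses))]

def adapt(params, grad_fn, data, lr=0.1, steps=5):
    """A few plain gradient steps from the selected starting point."""
    for _ in range(steps):
        params = params - lr * grad_fn(params, data)
    return params
```

With diverse prior situations (motor failures, rough terrain), starting from the closest prior rather than a single shared initialization is what keeps the number of real-system observations small.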
|
|
12:30-12:45, Paper TuBT3.4 | |
>Heteroscedastic Uncertainty for Robust Generative Latent Dynamics |
|
Limoyo, Oliver | University of Toronto |
Chan, Bryan | University of Toronto |
Maric, Filip | University of Toronto Institute for Aerospace Studies |
Wagstaff, Brandon | University of Toronto |
Mahmood, Ashique Rupam | Kindred Inc |
Kelly, Jonathan | University of Toronto |
Keywords: Representation Learning, Model Learning for Control, Reinforcement Learning
Abstract: Learning or identifying dynamics from a sequence of high-dimensional observations is a difficult challenge in many domains, including reinforcement learning and control. The problem has recently been studied from a generative perspective through latent dynamics, where the high-dimensional observations are embedded into a lower-dimensional space in which the dynamics can be learned. Despite some successes, latent dynamics models have not yet been applied to real-world robotic systems where learned representations must be robust to a variety of perceptual confounds and noise sources not seen during training. In this paper, we present a method to jointly learn a latent state representation and the associated dynamics that is amenable for long-term planning and closed-loop control under perceptually difficult conditions. As our main contribution, we describe how our representation is able to capture a notion of heteroscedastic or input-specific uncertainty at test time by detecting novel or out-of-distribution (OOD) inputs. We present results from prediction and control experiments on two image-based tasks: a simulated pendulum balancing task and a real-world robotic manipulator reaching task. We demonstrate that our model produces significantly more accurate predictions and exhibits improved control performance, compared to a model that assumes homoscedastic uncertainty only, in the presence of varying degrees of input degradation.
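The standard way to capture heteroscedastic (input-specific) uncertainty is to have the model predict a per-input mean and log-variance and train it under the Gaussian negative log-likelihood, so that hard or out-of-distribution inputs are absorbed into a large predicted variance rather than a biased mean. The loss below is that generic objective, not the paper's exact formulation:

```python
import numpy as np

def heteroscedastic_nll(mean, log_var, target):
    """Gaussian negative log-likelihood with a predicted, input-dependent
    variance: 0.5 * mean( exp(-log_var) * (y - mu)^2 + log_var ).
    Minimized when log_var matches the log squared residual."""
    inv_var = np.exp(-log_var)
    return 0.5 * np.mean(inv_var * (target - mean) ** 2 + log_var)
```

At test time, the predicted variance itself serves as the novelty / OOD signal the abstract describes.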
|
|
12:45-13:00, Paper TuBT3.5 | |
>Multi-Robot Active Sensing and Environmental Model Learning with Distributed Gaussian Process |
> Video Attachment
|
|
Jang, Dohyun | Seoul National University |
Yoo, Jaehyun | Hankyong National University |
Son, Clark Youngdong | Seoul National University |
Kim, Dabin | Seoul National University |
Kim, H. Jin | Seoul National University |
Keywords: Multi-Robot Systems, Distributed Robot Systems, Networked Robots
Abstract: This paper deals with the problem of multiple robots working together to explore and gather at the global maximum of the unknown field. Given noisy sensor measurements obtained at the location of robots with no prior knowledge about the environmental map, Gaussian process regression can be an efficient solution to construct a map that represents spatial information with confidence intervals. However, because the conventional Gaussian process algorithm operates in a centralized manner, it is difficult to process information coming from multiple distributed sensors in real-time. In this work, we propose a multi-robot exploration algorithm that deals with the following challenges: 1) distributed environmental map construction using networked sensing platforms; 2) online learning using successive measurements suitable for a multi-robot team; 3) multi-agent coordination to discover the highest peak of an unknown environmental field with collision avoidance. We demonstrate the effectiveness of our algorithm via simulation and a topographic survey experiment with multiple UAVs.
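The centralized building block the paper distributes across robots is plain Gaussian process regression: posterior mean and variance of the field at query points, given noisy samples at robot locations. A minimal RBF-kernel sketch (hyperparameter values and function names are illustrative):

```python
import numpy as np

def gp_predict(X_train, y_train, X_test, length=1.0, sigma_f=1.0, sigma_n=0.1):
    """GP regression with an RBF kernel: returns posterior mean and
    variance of the unknown field at the query points X_test."""
    def k(a, b):
        d2 = (np.sum(a ** 2, 1)[:, None] + np.sum(b ** 2, 1)[None, :]
              - 2.0 * a @ b.T)
        return sigma_f ** 2 * np.exp(-0.5 * d2 / length ** 2)

    K = k(X_train, X_train) + sigma_n ** 2 * np.eye(len(X_train))
    Ks = k(X_test, X_train)
    mean = Ks @ np.linalg.solve(K, y_train)
    var = sigma_f ** 2 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return mean, var
```

The posterior variance is what drives exploration: robots head for regions where the confidence interval on the field is still wide, while the mean guides them toward the estimated maximum.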
|
|
13:00-13:15, Paper TuBT3.6 | |
>Gaussians on Riemannian Manifolds: Applications for Robot Learning and Adaptive Control (I) |
|
Calinon, Sylvain | Idiap Research Institute |
|
|
TuBT4 |
Room T4 |
Model Learning II |
Regular session |
Chair: Posner, Ingmar | Oxford University |
Co-Chair: Boularias, Abdeslam | Rutgers University |
|
11:45-12:00, Paper TuBT4.1 | |
>Self-Adapting Recurrent Models for Object Pushing from Learning in Simulation |
> Video Attachment
|
|
Cong, Lin | University of Hamburg |
Görner, Michael | University of Hamburg |
Ruppel, Philipp | University of Hamburg |
Liang, Hongzhuo | University of Hamburg |
Hendrich, Norman | University of Hamburg |
Zhang, Jianwei | University of Hamburg |
Keywords: Model Learning for Control, Reinforcement Learning, AI-Based Methods
Abstract: Planar pushing remains a challenging research topic, where building the dynamic model of the interaction is the core issue. Even an accurate analytical dynamic model is inherently unstable because physics parameters such as inertia and friction can only be approximated. Data-driven models usually rely on large amounts of training data, but data collection is time-consuming when working with real robots. In this paper, we collect all training data in a physics simulator and build an LSTM-based model to fit the pushing dynamics. Domain Randomization is applied to capture the pushing trajectories of a generalized class of objects. When executed on the real robot, the trained recurrent model adapts to the tracked object's real dynamics within a few steps. We propose the algorithm Recurrent Model Predictive Path Integral (RMPPI) as a variation of the traditional MPPI approach, employing state-dependent recurrent models. As a comparison, we also train a Deep Deterministic Policy Gradient (DDPG) network as a model-free baseline, which is also used as the action generator in the data collection phase. During policy training, Hindsight Experience Replay is used to improve exploration efficiency. Pushing experiments on our UR5 platform demonstrate the model's adaptability and the effectiveness of the proposed framework.
|
|
12:00-12:15, Paper TuBT4.2 | |
>A Probabilistic Model for Planar Sliding of Objects with Unknown Material Properties: Identification and Robust Planning |
> Video Attachment
|
|
Song, Changkyu | Rutgers University |
Boularias, Abdeslam | Rutgers University |
Keywords: Model Learning for Control, Manipulation Planning, Probability and Statistical Methods
Abstract: This paper introduces a new technique for learning probabilistic models of mass and friction distributions of unknown objects, and performing robust sliding actions by using the learned models. The proposed method is executed in two consecutive phases. In the exploration phase, a table-top object is poked by a robot from different angles. The observed motions of the object are compared against simulated motions with various hypothesized mass and friction models. The simulation-to-reality gap is then differentiated with respect to the unknown mass and friction parameters, and the analytically computed gradient is used to optimize those parameters. Since it is difficult to disentangle the mass from the friction coefficients in low-data and quasi-motion regimes, our approach retains a set of locally optimal pairs of mass and friction models. A probability distribution on the models is computed based on the relative accuracy of each pair of models. In the exploitation phase, a probabilistic planner is used to select a goal configuration and waypoints that are stable with high confidence. The proposed technique is evaluated on real objects and using a real manipulator. The results show that this technique can not only accurately identify the mass and friction coefficients of non-uniform, heterogeneous objects, but can also be used to successfully slide an unknown object to the edge of a table and pick it up from there, without any human assistance or feedback.
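The abstract's step of weighting the retained (mass, friction) hypotheses by their relative accuracy can be illustrated with a minimal sketch, assuming a softmax over negative simulation-to-reality errors; the softmax form and the `temperature` knob are illustrative assumptions, not the authors' exact weighting:

```python
import numpy as np

def model_posterior(errors, temperature=1.0):
    """Turn per-hypothesis simulation-to-reality errors into a probability
    distribution over (mass, friction) model pairs via a softmax on the
    negative error; lower error yields higher probability."""
    logits = -np.asarray(errors, dtype=float) / temperature
    logits -= logits.max()  # shift for numerical stability
    weights = np.exp(logits)
    return weights / weights.sum()

# Three hypothesized (mass, friction) pairs with increasing prediction error.
probs = model_posterior([0.1, 0.5, 2.0])
print(probs.round(3))  # the most accurate hypothesis gets the largest weight
```

A planner can then treat goal configurations as "stable with high confidence" when the stability probability, averaged under this distribution, exceeds a threshold.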
|
|
12:15-12:30, Paper TuBT4.3 | |
>Hindsight for Foresight: Unsupervised Structured Dynamics Models from Physical Interaction |
> Video Attachment
|
|
Nematollahi, Iman | University of Freiburg |
Mees, Oier | Albert-Ludwigs-Universität |
Hermann, Lukas | University of Freiburg |
Burgard, Wolfram | Toyota Research Institute |
Keywords: Model Learning for Control, Representation Learning, Novel Deep Learning Methods
Abstract: A key challenge for an agent learning to interact with the world is to reason about physical properties of objects and to foresee their dynamics under the effect of applied forces. In order to scale learning through interaction to many objects and scenes, robots should be able to improve their own performance from real-world experience without requiring human supervision. To this end, we propose a novel approach for modeling the dynamics of a robot’s interactions directly from unlabeled 3D point clouds and images. Unlike previous approaches, our method does not require ground-truth data associations provided by a tracker or any pre-trained perception network. To learn from unlabeled real-world interaction data, we enforce consistency of estimated 3D clouds, actions and 2D images with observed ones. Our joint forward and inverse network learns to segment a scene into salient object parts and predicts their 3D motion under the effect of applied actions. Moreover, our object-centric model outputs action-conditioned 3D scene flow, object masks and 2D optical flow as emergent properties. Our extensive evaluation both in simulation and with real-world data demonstrates that our formulation leads to effective, interpretable models that can be used for visuomotor control and planning. Videos, code and dataset are available at http://hind4sight.cs.uni-freiburg.de
|
|
12:30-12:45, Paper TuBT4.4 | |
>Multi-Sparse Gaussian Process: Learning Based Semi-Parametric Control |
> Video Attachment
|
|
Khan, Mouhyemen | Georgia Institute of Technology |
Patel, Akash | Georgia Institute of Technology |
Chatterjee, Abhijit | Georgia Institute of Technology |
Keywords: Model Learning for Control, Aerial Systems: Mechanics and Control
Abstract: A key challenge with controlling complex dynamical systems is to accurately model them. However, this requirement is very hard to satisfy in practice. Data-driven approaches such as Gaussian processes (GPs) have proved quite effective by employing regression-based methods to capture the unmodeled dynamical effects. However, GPs scale cubically with the number of data points n, and it is often a challenge to perform real-time regression. In this paper, we propose a semi-parametric framework exploiting sparsity for learning-based control. We combine the parametric model of the system with multiple sparse GP models to capture any unmodeled dynamics. Multi-Sparse Gaussian Process (MSGP) uses multiple sparse models with unique hyperparameters for each one, thereby preserving the richness and uniqueness of each sparse model. For a query point, a weighted sparse posterior prediction is performed based on N neighboring sparse models. Hence, the prediction complexity is significantly reduced from O(n^3) to O(Npu^2), where p and u are the number of data points and pseudo-inputs, respectively, for each sparse model. We validate MSGP’s learning performance for a quadrotor using a geometric controller in simulation. Comparison with GP, sparse GP, and local GP shows that MSGP has higher prediction accuracy than sparse and local GP, with significantly lower time complexity than all three. We also validate MSGP on a real quadrotor setup for unmodeled mass, inertia, and disturbances. The experiment video can be seen at: https://youtu.be/zUk1ISux6ao.
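The weighted prediction over N neighboring sparse models can be sketched as follows; this is a minimal numpy illustration in which the model dictionary fields and the inverse-distance weighting are assumptions, not the authors' implementation:

```python
import numpy as np

def msgp_predict(query, models, n_neighbors=3):
    """Weighted posterior-mean prediction over the n_neighbors sparse
    models whose centers lie closest to the query point."""
    centers = np.array([m["center"] for m in models])
    dists = np.linalg.norm(centers - query, axis=1)
    idx = np.argsort(dists)[:n_neighbors]
    # Inverse-distance weights over the selected neighbor models.
    w = 1.0 / (dists[idx] + 1e-9)
    w /= w.sum()
    preds = np.array([models[i]["predict"](query) for i in idx])
    return float(np.dot(w, preds))

# Toy sparse models whose "posterior mean" is a constant for illustration.
models = [
    {"center": np.array([0.0, 0.0]), "predict": lambda q: 1.0},
    {"center": np.array([1.0, 0.0]), "predict": lambda q: 2.0},
    {"center": np.array([5.0, 5.0]), "predict": lambda q: 10.0},
]
print(msgp_predict(np.array([0.1, 0.0]), models, n_neighbors=2))
```

Only the chosen neighbors contribute, which is what keeps the per-query cost at O(Npu^2) rather than scaling with the full data set.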
|
|
12:45-13:00, Paper TuBT4.5 | |
>Decentralized Deep Reinforcement Learning for a Distributed and Adaptive Locomotion Controller of a Hexapod Robot |
> Video Attachment
|
|
Schilling, Malte | Bielefeld University |
Konen, Kai | Neuroinformatics Group, Bielefeld University |
Ohl, Frank | Leibniz Institute for Neurobiology |
Korthals, Timo | Bielefeld University |
Keywords: Multi-legged Robots, Parallel Robots, Reinforcement Learning
Abstract: Locomotion is a prime example of adaptive behavior in animals, and biological control principles have inspired control architectures for legged robots. While machine learning has been successfully applied to many tasks in recent years, Deep Reinforcement Learning approaches still appear to struggle when applied to real-world robots in continuous control tasks and in particular do not appear as robust solutions that can handle uncertainties well. Therefore, there is a new interest in incorporating biological principles into such learning architectures. While inducing a hierarchical organization as found in motor control has already shown some success, we here propose a decentralized organization as found in insect motor control for the coordination of different legs. A decentralized and distributed architecture is introduced on a simulated hexapod robot and the details of the controller are learned through Deep Reinforcement Learning. We first show that such a concurrent local structure is able to learn good walking behavior. Second, we show that this simpler organization is learned faster than holistic approaches.
|
|
13:00-13:15, Paper TuBT4.6 | |
>First Steps: Latent-Space Control with Semantic Constraints for Quadruped Locomotion |
> Video Attachment
|
|
Mitchell, Alexander Luis | University of Oxford |
Engelcke, Martin | University of Oxford |
Parker Jones, Oiwi | University of Oxford |
Surovik, David | University of Oxford |
Gangapurwala, Siddhant | University of Oxford |
Melon, Oliwier Aleksander | University of Oxford |
Havoutis, Ioannis | University of Oxford |
Posner, Ingmar | Oxford University |
Keywords: Model Learning for Control
Abstract: Traditional approaches to quadruped control frequently employ simplified, hand-derived models. This significantly reduces the capability of the robot since its effective kinematic range is curtailed. In addition, kinodynamic constraints are often non-differentiable and difficult to implement in an optimisation approach. In this work, these challenges are addressed by framing quadruped control as optimisation in a structured latent space. A deep generative model captures a statistical representation of feasible joint configurations, whilst complex dynamic and terminal constraints are expressed via high-level, semantic indicators and represented by learned classifiers operating upon the latent space. As a consequence, complex constraints are rendered differentiable and evaluated an order of magnitude faster than analytical approaches. We validate the feasibility of locomotion trajectories optimised using our approach both in simulation and on a real-world ANYmal quadruped. Our results demonstrate that this approach is capable of generating smooth and realisable trajectories. To the best of our knowledge, this is the first time latent space control has been successfully applied to a complex, real robot platform.
|
|
TuBT5 |
Room T5 |
Transfer Learning |
Regular session |
Chair: Kroeger, Torsten | Karlsruher Institut Für Technologie (KIT) |
Co-Chair: Johns, Edward | Imperial College London |
|
11:45-12:00, Paper TuBT5.1 | |
>Stir to Pour: Efficient Calibration of Liquid Properties for Pouring Actions |
> Video Attachment
|
|
Lopez-Guevara, Tatiana | University of Edinburgh |
Pucci, Rita | University of Udine |
Taylor, Nicholas K. | Heriot-Watt University |
Gutmann, Michael U. | University of Edinburgh |
Ramamoorthy, Subramanian | The University of Edinburgh |
Subr, Kartic | The University of Edinburgh |
Keywords: Calibration and Identification, Cognitive Control Architectures, Transfer Learning
Abstract: Humans use simple probing actions to develop intuition about the physical behavior of common objects. Such intuition is particularly useful for adaptive estimation of favorable manipulation strategies of those objects in novel contexts. For example, observing the effect of tilt on a transparent bottle containing an unknown liquid provides clues on how the liquid might be poured. It is desirable to equip general-purpose robotic systems with this capability because it is inevitable that they will encounter novel objects and scenarios. In this paper, we teach a robot to use a simple, specified probing strategy --stirring with a stick-- to reduce spillage when pouring unknown liquids. In the probing step, we continuously observe the effects of a real robot stirring a liquid, while simultaneously tuning the parameters of a model (simulator) until the two outputs are in agreement. We obtain optimal simulation parameters, characterizing the unknown liquid, via a Bayesian Optimizer that minimizes the discrepancy between real and simulated outcomes. Then, we optimize the pouring policy conditioned on the optimal simulation parameters determined via stirring. We show that using stirring as a probing strategy results in reduced spillage for three qualitatively different liquids when executed on a UR10 robot, compared to probing via pouring. Finally, we provide quantitative insights into the reason for stirring being a suitable calibration task for pouring --a step towards automatic discovery of probing strategies.
|
|
12:00-12:15, Paper TuBT5.2 | |
>Haptic Knowledge Transfer between Heterogeneous Robots Using Kernel Manifold Alignment |
|
Tatiya, Gyan | Tufts University |
Shukla, Yash | Worcester Polytechnic Institute |
Edegware, Michael | Tufts University |
Sinapov, Jivko | Tufts University |
Keywords: Transfer Learning, Haptics and Haptic Interfaces, Multi-Robot Systems
Abstract: Humans learn about object properties using multiple modes of perception. Recent advances show that robots can use non-visual sensory modalities (i.e., haptic and tactile sensory data) coupled with exploratory behaviors (e.g., grasping, lifting, pushing, and dropping) for learning objects' properties such as shape, weight, material and affordances. However, non-visual sensory representations cannot be easily transferred from one robot to another, as different robots have different bodies and sensors. Therefore, each robot needs to learn its task-specific sensory models from scratch. To address this challenge, we propose a framework for knowledge transfer using kernel manifold alignment (KEMA) that enables source robots to transfer haptic knowledge about objects to a target robot. The idea behind our approach is to learn a common latent space from multiple robots' feature spaces produced by respective sensory data while interacting with objects. To test the method, we used a dataset in which three simulated robots interacted with 25 objects and showed that our framework speeds up haptic object recognition and allows novel object recognition.
|
|
12:15-12:30, Paper TuBT5.3 | |
>Robo-Gym – an Open Source Toolkit for Distributed Deep Reinforcement Learning on Real and Simulated Robots |
> Video Attachment
|
|
Lucchi, Matteo | Joanneum Research |
Zindler, Friedemann | Joanneum Research |
Mühlbacher-Karrer, Stephan | JOANNEUM RESEARCH Forschungsgesellschaft mbH - ROBOTICS |
Pichler, Horst | Joanneum Research Robotics |
Keywords: Reinforcement Learning, Transfer Learning, Software, Middleware and Programming Environments
Abstract: Applying Deep Reinforcement Learning (DRL) to complex tasks in the field of robotics has proven to be very successful in recent years. However, most of the publications focus either on applying it to a task in simulation or to a task in a real-world setup. Although there are great examples of combining the two worlds with the help of transfer learning, it often requires a lot of additional work and fine-tuning to make the setup work effectively. In order to increase the use of DRL with real robots and reduce the gap between simulation and real-world robotics, we propose an open source toolkit: robo-gym. We demonstrate a unified setup for simulation and real environments which enables a seamless transfer from training in simulation to application on the robot. We showcase the capabilities and the effectiveness of the framework with two real-world applications featuring industrial robots: a mobile robot and a robot arm. The distributed capabilities of the framework enable several advantages like using distributed algorithms, separating the workload of simulation and training on different physical machines as well as enabling the future opportunity to train in simulation and the real world at the same time. Finally, we offer an overview and comparison of robo-gym with other frequently used state-of-the-art DRL frameworks.
|
|
12:30-12:45, Paper TuBT5.4 | |
>Crossing the Gap: A Deep Dive into Zero-Shot Sim-To-Real Transfer for Dynamics |
> Video Attachment
|
|
Valassakis, Eugene | Imperial College London |
Ding, Zihan | Imperial College London |
Johns, Edward | Imperial College London |
Keywords: Transfer Learning, Simulation and Animation, Reinforcement Learning
Abstract: Zero-shot sim-to-real transfer of tasks with complex dynamics is a highly challenging and unsolved problem. A number of solutions have been proposed in recent years, but we have found that many works do not present a thorough evaluation in the real world, or underplay the significant engineering effort and task-specific fine tuning that is required to achieve the published results. In this paper, we dive deeper into the sim-to-real transfer challenge, investigate why this is such a difficult problem, and present objective evaluations of a number of transfer methods across a range of real-world tasks. Surprisingly, we found that a method which simply injects random forces into the simulation performs just as well as more complex methods, such as those which randomise the simulator's dynamics parameters, or adapt a policy online using recurrent network architectures.
|
|
12:45-13:00, Paper TuBT5.5 | |
>Tensor Action Spaces for Multi-Agent Robot Transfer Learning |
> Video Attachment
|
|
Schwab, Devin | Carnegie Mellon University |
Zhu, Yifeng | University of Texas, Austin |
Veloso, Manuela | Carnegie Mellon University |
Keywords: Transfer Learning, Reinforcement Learning, Multi-Robot Systems
Abstract: We explore using reinforcement learning on single and multi-agent systems such that, after learning is finished, we can apply a policy zero-shot to new environment sizes as well as different numbers of agents and entities. Building off previous work, we show how to map back and forth between the state and action space of a standard Markov Decision Process (MDP) and multi-dimensional tensors such that zero-shot transfer in these cases is possible. Like in previous work, we use a special network architecture designed to work well with the tensor representation, known as the Fully Convolutional Q-Network (FCQN). Our simulation results show that this tensor state and action space combined with the FCQN architecture can learn faster than traditional representations in our environments. We also show that the performance of a transferred policy is comparable to the performance of a policy trained from scratch in the modified environment sizes and with modified numbers of agents and entities, and that this zero-shot transfer performance holds across team sizes and environment sizes. Finally, we demonstrate that our simulation-trained policies can be applied to real robots and real sensor data with performance comparable to our simulation results. Using such policies, we can run variable-sized teams of robots in a variable-sized operating environment with no changes to the policy and no additional learning necessary.
|
|
13:00-13:15, Paper TuBT5.6 | |
>TrueÆdapt: Learning Smooth Online Trajectory Adaptation with Bounded Jerk, Acceleration and Velocity in Joint Space |
> Video Attachment
|
|
Kiemel, Jonas | Karlsruhe Institute of Technology |
Weitemeyer, Robin | Karlsruhe Institute of Technology |
Meißner, Pascal | University of Aberdeen |
Kroeger, Torsten | Karlsruher Institut Für Technologie (KIT) |
Keywords: Reactive and Sensor-Based Planning, Transfer Learning, Motion and Path Planning
Abstract: We present TrueÆdapt, a model-free method to learn online adaptations of robot trajectories based on their effects on the environment. Given sensory feedback and future waypoints of the original trajectory, a neural network is trained to predict joint accelerations at regular intervals. The adapted trajectory is generated by linear interpolation of the predicted accelerations, leading to continuously differentiable joint velocities and positions. Bounded jerks, accelerations and velocities are guaranteed by calculating the range of valid accelerations at each decision step and clipping the network’s output accordingly. A deviation penalty during the training process causes the adapted trajectory to follow the original one. Smooth movements are encouraged by penalizing high accelerations and jerks. We evaluate our approach by training a simulated KUKA iiwa robot to balance a ball on a plate while moving and demonstrate that the balancing policy can be directly transferred to a real robot.
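The clipping step described in the abstract can be sketched as intersecting the acceleration intervals implied by the velocity, acceleration and jerk limits; the variable names below are illustrative placeholders, not taken from the paper:

```python
def clip_acceleration(a_cmd, v, a_prev, dt, v_max, a_max, j_max):
    """Clip a commanded joint acceleration to the range that keeps the
    acceleration, the jerk (change of acceleration per step) and the
    next-step velocity within their respective bounds."""
    lo = max(-a_max, a_prev - j_max * dt, (-v_max - v) / dt)
    hi = min(a_max, a_prev + j_max * dt, (v_max - v) / dt)
    assert lo <= hi, "limits must leave a feasible acceleration range"
    return min(max(a_cmd, lo), hi)

# A large network output is clipped by the jerk bound: 0.0 + 2.0 * 0.1 = 0.2.
print(clip_acceleration(5.0, v=0.0, a_prev=0.0, dt=0.1,
                        v_max=1.0, a_max=3.0, j_max=2.0))
```

Because the clipped acceleration always lies inside the feasible interval, linear interpolation between decision steps yields continuously differentiable velocities and positions, as the abstract states.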
|
|
TuBT6 |
Room T6 |
Learning from Demonstration |
Regular session |
Chair: Lee, Dongheui | Technical University of Munich |
Co-Chair: Calinon, Sylvain | Idiap Research Institute |
|
11:45-12:00, Paper TuBT6.1 | |
>Active Improvement of Control Policies with Bayesian Gaussian Mixture Model |
|
Girgin, Hakan | EPFL, Idiap Research Institute |
Pignat, Emmanuel | Idiap Research Institute, Martigny, Switzerland |
Jaquier, Noémie | Idiap Research Institute |
Calinon, Sylvain | Idiap Research Institute |
Keywords: Learning from Demonstration, Model Learning for Control, Imitation Learning
Abstract: Learning from demonstration (LfD) is an intuitive framework allowing non-expert users to easily (re-)program robots. However, the quality and quantity of demonstrations have a great influence on the generalization performances of LfD approaches. In this paper, we introduce a novel active learning framework in order to improve the generalization capabilities of control policies. The proposed approach is based on the epistemic uncertainties of Bayesian Gaussian mixture models (BGMMs). We determine the new query point location by optimizing a closed-form information-density cost based on the quadratic Rényi entropy. Furthermore, to better represent uncertain regions and to avoid the local optima problem, we propose to approximate the active learning cost with a Gaussian mixture model (GMM). We demonstrate our active learning framework in the context of a reaching task in a cluttered environment with an illustrative toy example and a real experiment with a Panda robot.
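For intuition on why the quadratic Rényi entropy admits a closed form for Gaussian densities, here is a sketch for a single 1-D Gaussian; the paper's GMM-based cost generalizes this, and the single-component case is shown purely for illustration:

```python
import numpy as np

def h2_gaussian(sigma):
    """Quadratic Renyi entropy H2 = -log(integral of p(x)^2 dx) of a 1-D
    Gaussian N(mu, sigma^2); the integral evaluates to 1/(2*sigma*sqrt(pi)),
    so H2 = log(2 * sigma * sqrt(pi)) in closed form."""
    return np.log(2.0 * sigma * np.sqrt(np.pi))

# Sanity check against direct numerical integration of p(x)^2.
sigma = 0.7
x = np.linspace(-10.0, 10.0, 200001)
p = np.exp(-x**2 / (2.0 * sigma**2)) / (sigma * np.sqrt(2.0 * np.pi))
h2_numeric = -np.log(np.sum(p**2) * (x[1] - x[0]))
print(abs(h2_gaussian(sigma) - h2_numeric) < 1e-6)
```

The closed form means an information-density query cost built on this entropy can be evaluated and optimized without numerical integration.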
|
|
12:00-12:15, Paper TuBT6.2 | |
>Collaborative Programming of Conditional Robot Tasks |
> Video Attachment
|
|
Willibald, Christoph | German Aerospace Center (DLR) |
Eiband, Thomas | German Aerospace Center (DLR) |
Lee, Dongheui | Technical University of Munich |
Keywords: Learning from Demonstration, Imitation Learning, Human-Centered Robotics
Abstract: Conventional robot programming methods are not suited for non-experts to intuitively teach robots new tasks. For this reason, the potential of collaborative robots for production cannot yet be fully exploited. In this work, we propose an active learning framework, in which the robot and the user collaborate to incrementally program a complex task. Starting with a basic model, the robot’s task knowledge can be extended over time if new situations require additional skills. An on-line anomaly detection algorithm therefore automatically identifies new situations during task execution by monitoring the deviation between measured and commanded sensor values. The robot then triggers a teaching phase, in which the user decides to either refine an existing skill or demonstrate a new skill. The different skills of a task are encoded in separate probabilistic models and structured in a high-level graph, guaranteeing robust execution and successful transitions between skills. In the experiments, our approach is compared to two state-of-the-art Programming by Demonstration frameworks on a real system. The experiments show increased intuitiveness and task performance of the method, allowing shop-floor workers to program industrial tasks with our framework.
|
|
12:15-12:30, Paper TuBT6.3 | |
>Learning Constraint-Based Planning Models from Demonstrations |
|
Loula, João | MIT |
Allen, Kelsey | Massachusetts Institute of Technology |
Silver, Tom | MIT |
Tenenbaum, Joshua | Massachusetts Institute of Technology |
Keywords: Hybrid Logical/Dynamical Planning and Verification, Learning from Demonstration, Deep Learning in Grasping and Manipulation
Abstract: How can we learn representations for planning that are both efficient and flexible? Hybrid models are a good candidate, having been very successful in long-horizon planning tasks—however, they've proved challenging for learning, relying mostly on hand-coded representations. We present a framework for learning constraint-based task and motion planning models using gradient descent. Our model observes expert demonstrations of a task and decomposes them into modes---segments which specify a set of constraints on a trajectory optimization problem. We show that our model learns these modes from few demonstrations, that modes can be used to plan flexibly in different environments and to achieve different types of goals, and that the model can recombine these modes in novel ways.
|
|
12:30-12:45, Paper TuBT6.4 | |
>Learning Object Manipulation with Dexterous Hand-Arm Systems from Human Demonstration |
> Video Attachment
|
|
Ruppel, Philipp | University of Hamburg |
Zhang, Jianwei | University of Hamburg |
Keywords: Learning from Demonstration, Dexterous Manipulation, Deep Learning in Grasping and Manipulation
Abstract: We present a novel learning and control framework that combines artificial neural networks with online trajectory optimization to learn dexterous manipulation skills from human demonstration and to transfer the learned behaviors to real robots. Humans can perform the demonstrations with their own hands and with real objects. An instrumented glove is used to record motions and tactile data. Our system learns neural control policies that generalize to modified object poses directly from limited amounts of demonstration data. Outputs from the neural policy network are combined at runtime with kinematic and dynamic safety and feasibility constraints as well as a learned regularizer to obtain commands for a real robot through online trajectory optimization. We test our approach on multiple tasks and robots.
|
|
12:45-13:00, Paper TuBT6.5 | |
>MixGAIL: Autonomous Driving Using Demonstrations with Mixed Qualities |
> Video Attachment
|
|
Lee, Gunmin | Seoul National University |
Kim, Dohyeong | Seoul National University |
Oh, Wooseok | Seoul National University |
Lee, Kyungjae | Seoul National University |
Oh, Songhwai | Seoul National University |
Keywords: Imitation Learning, Autonomous Vehicle Navigation, Collision Avoidance
Abstract: In this paper, we consider autonomous driving of a vehicle using imitation learning. Generative adversarial imitation learning (GAIL) is a widely used algorithm for imitation learning. This algorithm leverages positive demonstrations to imitate the behavior of an expert. In this paper, we propose a novel method, called mixed generative adversarial imitation learning (MixGAIL), which incorporates both expert demonstrations and negative demonstrations, such as vehicle collisions. To this end, the proposed method utilizes an occupancy measure and a constraint function. The occupancy measure is used to follow expert demonstrations and provides positive feedback. On the other hand, the constraint function is used for negative demonstrations to provide negative feedback. Experimental results show that the proposed algorithm converges faster than the other baseline methods. Also, hardware experiments using a real-world RC car show outstanding performance and faster convergence compared with existing methods.
|
|
13:00-13:15, Paper TuBT6.6 | |
>Driving through Ghosts: Behavioral Cloning with False Positives |
> Video Attachment
|
|
Bühler, Andreas | ETH Zürich |
Gaidon, Adrien | Toyota Research Institute |
Cramariuc, Andrei | ETHZ |
Ambrus, Rares | Toyota Research Institute |
Rosman, Guy | Massachusetts Institute of Technology |
Burgard, Wolfram | Toyota Research Institute |
Keywords: Learning from Demonstration, Motion and Path Planning, Autonomous Vehicle Navigation
Abstract: Safe autonomous driving requires robust detection of other traffic participants. However, robust does not mean perfect, and safe systems typically minimize missed detections at the expense of a higher false positive rate. This results in conservative and yet potentially dangerous behavior such as avoiding imaginary obstacles. In the context of behavioral cloning, perceptual errors at training time can lead to learning difficulties or wrong policies, as expert demonstrations might be inconsistent with the perceived world state. In this work, we propose a behavioral cloning approach that can safely leverage imperfect perception without being conservative. Our core contribution is a novel representation of perceptual uncertainty for learning to plan. We propose a new probabilistic bird's-eye-view semantic grid to encode the noisy output of object perception systems. We then leverage expert demonstrations to learn an imitative driving policy using this probabilistic representation. Using the CARLA simulator, we show that our approach can safely overcome critical false positives that would otherwise lead to catastrophic failures or conservative behavior.
|
|
TuBT7 |
Room T7 |
Policy Learning |
Regular session |
Chair: Gonzalez, Joseph E. | UC Berkeley |
Co-Chair: Ramos, Fabio | University of Sydney, NVIDIA |
|
11:45-12:00, Paper TuBT7.1 | |
>Proximal Deterministic Policy Gradient |
|
Maggipinto, Marco | University of Padova |
Susto, Gian Antonio | University of Padova |
Chaudhari, Pratik | University of Pennsylvania |
Keywords: Reinforcement Learning
Abstract: This paper introduces two simple techniques to improve off-policy Reinforcement Learning (RL) algorithms. First, we formulate off-policy RL as a stochastic proximal point iteration. The target network plays the role of the variable of optimization and the value network computes the proximal operator. Second, we exploit the two value functions commonly employed in state-of-the-art off-policy algorithms to provide an improved action value estimate through bootstrapping, with a limited increase in computational cost. Further, we demonstrate significant performance improvement over state-of-the-art algorithms on standard continuous-control RL benchmarks.
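The two-value-function bootstrap builds on the clipped double-Q idea used by modern off-policy methods; a minimal sketch follows, with the caveat that the paper's exact estimator may differ from this standard form:

```python
import numpy as np

def td_target(reward, q1_next, q2_next, gamma=0.99, done=False):
    """Bootstrapped TD target using two value estimates: taking the minimum
    of the two next-state Q predictions curbs overestimation bias."""
    q_min = np.minimum(q1_next, q2_next)
    return reward + (0.0 if done else gamma) * q_min

print(td_target(1.0, q1_next=2.0, q2_next=3.0, gamma=0.9))
```

Maintaining two critics costs only one extra forward pass per update, which is why the improved estimate comes with a limited increase in computation.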
|
|
12:00-12:15, Paper TuBT7.2 | |
>Online BayesSim for Combined Simulator Parameter Inference and Policy Improvement |
> Video Attachment
|
|
Possas, Rafael | University of Sydney |
Barcelos, Lucas | University of Sydney |
Oliveira, Rafael | University of Sydney |
Fox, Dieter | University of Washington |
Ramos, Fabio | University of Sydney, NVIDIA |
Keywords: Model Learning for Control, Reinforcement Learning, Optimization and Optimal Control
Abstract: Recent advancements in Bayesian likelihood-free inference enable a probabilistic treatment for the problem of estimating simulation parameters and their uncertainty given sequences of observations. Domain randomization can be performed much more effectively when a posterior distribution provides the correct uncertainty over parameters in a simulated environment. In this paper, we study the integration of simulation parameter inference with both model-free reinforcement learning and model-based control in a novel sequential algorithm that alternates between learning a better estimation of parameters and improving the controller. This approach exploits the interdependence between the two problems to generate computational efficiencies and improved reliability when a black-box simulator is available. Experimental results suggest that both control strategies have better performance when compared to traditional domain randomization methods.
|
|
12:15-12:30, Paper TuBT7.3 | |
>An Online Training Method for Augmenting MPC with Deep Reinforcement Learning |
> Video Attachment
|
|
Bellegarda, Guillaume | University of California, Santa Barbara |
Byl, Katie | UCSB |
Keywords: Reinforcement Learning, Nonholonomic Motion Planning, AI-Based Methods
Abstract: Recent breakthroughs both in reinforcement learning and trajectory optimization have made significant advances towards real world robotic system deployment. Reinforcement learning (RL) can be applied to many problems without needing any modeling or intuition about the system, at the cost of high sample complexity and the inability to prove any metrics about the learned policies. Trajectory optimization (TO) on the other hand allows for stability and robustness analyses on generated motions and trajectories, but is only as good as the often over-simplified derived model, and may have prohibitively expensive computation times for real-time control, for example in contact rich environments. This paper seeks to combine the benefits from these two areas while mitigating their drawbacks by (1) decreasing RL sample complexity by using existing knowledge of the problem with real-time optimal control, and (2) allowing online policy deployment at any point in the training process by using the TO (MPC) as a baseline or worst-case scenario action, while continuously improving the combined learned-optimized policy with deep RL. This method is evaluated on tasks of successively navigating a car model to a series of goal destinations over slippery terrains as fast as possible, in which drifting will allow the system to more quickly change directions while maintaining high speeds.
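The idea of using the MPC action as a baseline or worst-case scenario while the learned policy improves can be sketched as a simple action selector. The critic `q_fn` and the two candidate actions are hypothetical names, not the paper's interfaces:

```python
def select_action(state, rl_action, mpc_action, q_fn):
    """Sketch of an MPC-as-fallback selector: deploy the learned action
    only when the critic rates it at least as highly as the MPC baseline,
    so the combined policy is never worse than the optimizer alone
    (to the extent the critic's estimates are trustworthy)."""
    if q_fn(state, rl_action) >= q_fn(state, mpc_action):
        return rl_action
    return mpc_action
```

This lets the policy be deployed at any point during training: early on the MPC action dominates; as the critic and actor improve, the learned action is used more often.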
|
|
12:30-12:45, Paper TuBT7.4 | |
>Stochastic Neural Control Using Raw Pointcloud Data and Building Information Models |
> Video Attachment
|
|
Ferguson, Max | Stanford University |
Law, Kincho H. | Stanford University |
Keywords: Autonomous Agents, Reinforcement Learning, Path Planning for Multiple Mobile Robots or Agents
Abstract: Recently, there has been a lot of excitement surrounding the use of reinforcement learning for robot control and navigation. However, many of these algorithms encounter difficulty navigating long or complex trajectories. This paper presents a new mobile robot control system called Stochastic Neural Control (SNC), which uses a stochastic policy gradient algorithm for local control and a modified probabilistic roadmap planner for global motion planning. In SNC, each mobile robot control decision is conditioned on observations from the robot sensors as well as pointcloud data, allowing the robot to safely operate within geometrically complex environments. SNC is tested on a number of challenging navigation tasks and learns advanced policies for navigation, collision-avoidance and fall-prevention. Three variants of the SNC system are evaluated against a conventional motion planning baseline. SNC outperforms the baseline and four other similar RL navigation systems in many of the trials. Finally, we present a strategy for transferring SNC from a simulated environment to a real robot. We empirically show that the SNC system exhibits good policies for mobile robot navigation when controlling a real mobile robot.
|
|
12:45-13:00, Paper TuBT7.5 | |
>RILaaS: Robot Inference and Learning As a Service |
|
Tanwani, Ajay Kumar | UC Berkeley |
Anand, Raghav | UC Berkeley |
Gonzalez, Joseph E. | UC Berkeley |
Goldberg, Ken | UC Berkeley |
Keywords: Networked Robots, Behavior-Based Systems, Distributed Robot Systems
Abstract: Programming robots is complicated due to the lack of `plug-and-play' modules for skill acquisition. Virtualizing deployment of deep learning models can facilitate large-scale use/re-use of off-the-shelf functional behaviors. Deploying deep learning models on robots entails real-time, accurate and reliable inference service under varying query load. This paper introduces a novel Robot-Inference-and-Learning-as-a-Service (RILaaS) platform for low-latency and secure inference serving of deep models on robots. Unique features of RILaaS include: 1) low-latency and reliable serving with gRPC under dynamic loads by distributing queries over multiple servers on Edge and Cloud, 2) SSH-based authentication coupled with SSL/TLS-based encryption for security and privacy of the data, and 3) a front-end REST API for sharing, monitoring and visualizing performance metrics of the available models. We report experiments to evaluate the RILaaS platform under varying loads of batch size, number of robots, and various model placement hosts on Cloud, Edge, and Fog for providing benchmark applications of object recognition and grasp planning as a service. We address the complexity of load balancing with a Q-learning algorithm that optimizes simulated profiles of networked robots, outperforming several baselines including round robin, least connections, and least model time with a 68.30% and 14.04% decrease in round-trip latency time across models compared to the worst and the next best baseline respectively. Details and updates are available at: https://sites.google.com/view/rilaas
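A Q-learning load balancer of the kind the abstract mentions can be sketched as a bandit-style router whose reward is negative round-trip latency. The class below is an illustrative simplification, not RILaaS's actual formulation over simulated robot profiles:

```python
import random

class QBalancer:
    """Epsilon-greedy Q-learning router sketch: each server is an action,
    and its Q-value tracks an exponentially weighted estimate of the
    (negated) round-trip latency it delivers."""

    def __init__(self, n_servers, alpha=0.2, eps=0.1, seed=0):
        self.q = [0.0] * n_servers
        self.alpha, self.eps = alpha, eps
        self.rng = random.Random(seed)

    def route(self):
        if self.rng.random() < self.eps:   # explore occasionally
            return self.rng.randrange(len(self.q))
        # exploit: pick the server with the best latency estimate
        return max(range(len(self.q)), key=self.q.__getitem__)

    def update(self, server, latency):
        # Reward is negative latency, so lower-latency servers win.
        self.q[server] += self.alpha * (-latency - self.q[server])
```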
|
|
13:00-13:15, Paper TuBT7.6 | |
>Actor-Critic Reinforcement Learning for Control with Stability Guarantee |
> Video Attachment
|
|
Han, Minghao | Harbin Institute of Technology |
Zhang, Lixian | Harbin Institute of Technology |
Wang, Jun | University College London |
Pan, Wei | Delft University of Technology |
Keywords: Reinforcement Learning, Motion Control
Abstract: Reinforcement Learning (RL) and its integration with deep learning have achieved impressive performance in various robotic control tasks, ranging from motion planning and navigation to end-to-end visual manipulation. However, stability is not guaranteed in model-free RL by solely using data. From a control-theoretic perspective, stability is the most important property for any control system, since it is closely related to the safety, robustness, and reliability of robotic systems. In this paper, we propose an actor-critic RL framework for control which can guarantee closed-loop stability by employing the classic Lyapunov method from control theory. First, a data-based stability theorem is proposed for stochastic nonlinear systems modeled by a Markov decision process. Then we show that the stability condition can be exploited as the critic in actor-critic RL to learn a controller/policy. Finally, the effectiveness of our approach is evaluated on several well-known 3-dimensional robot control tasks and a synthetic biology gene network tracking task in three different popular physics simulation platforms. As an empirical evaluation of the advantage of stability, we show that the learned policies can enable the systems to recover to the equilibrium or way-points, to a certain extent, when perturbed by uncertainties such as system parametric variations and external disturbances.
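The core of a Lyapunov-based critic is a decrease condition checked on sampled transitions. The check below is a generic sketch of such a condition, with illustrative names and constants rather than the paper's notation:

```python
def lyapunov_decrease(L, s, s_next, alpha=0.05):
    """Sketch of a data-based stability condition: the learned critic L
    should act as a Lyapunov function, decreasing along transitions by at
    least a fraction alpha of its current value,
        L(s') - L(s) <= -alpha * L(s).
    In an actor-critic loop, violations of this inequality would be
    penalized when updating the policy."""
    return L(s_next) - L(s) <= -alpha * L(s)
```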
|
|
TuBT8 |
Room T8 |
Reinforcement Learning Algorithms |
Regular session |
Chair: Torras, Carme | Csic - Upc |
Co-Chair: Guan, Yisheng | Guangdong University of Technology |
|
11:45-12:00, Paper TuBT8.1 | |
>TTR-Based Reward for Reinforcement Learning with Implicit Model Priors |
|
Lyu, Xubo | Simon Fraser University |
Chen, Mo | Simon Fraser University |
Keywords: Reinforcement Learning, Optimization and Optimal Control
Abstract: Model-free reinforcement learning (RL) is a powerful approach for learning control policies directly from high-dimensional states and observations. However, it tends to be data-inefficient, which is especially costly in robotic learning tasks. On the other hand, optimal control does not require data if the system model is known, but cannot scale to models with high-dimensional states and observations. To exploit the benefits of both model-free RL and optimal control, we propose time-to-reach-based (TTR-based) reward shaping, an optimal control-inspired technique to alleviate data inefficiency while retaining the advantages of model-free RL. This is achieved by summarizing key system model information using a TTR function to greatly speed up the RL process, as shown in our simulation results. The TTR function is defined as the minimum time required to move from any state to the goal under assumed system dynamics constraints. Since the TTR function is computationally intractable for systems with high-dimensional states, we compute it for approximate, lower-dimensional system models that still capture key dynamic behaviors. Our approach can be flexibly and easily incorporated into any model-free RL algorithm without altering the original algorithm structure, and is compatible with any other techniques that may facilitate the RL process. We evaluate our approach on two representative robotic learning tasks and three well-known model-free RL algorithms, and show significant improvements in data efficiency and performance.
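TTR-based shaping amounts to rewarding states by the negated minimum time to reach the goal. As a toy illustration only, for a trivial 1-D kinematic model the TTR is distance over top speed; real TTR functions are computed for approximate lower-dimensional dynamics, not this closed form:

```python
def ttr_reward(state, goal, max_speed=1.0):
    """Sketch of TTR-based reward shaping on a 1-D single-integrator toy:
    the minimum time-to-reach is |goal - state| / max_speed, and the
    shaped reward is its negative, so states closer (in time) to the goal
    are rewarded more."""
    ttr = abs(goal - state) / max_speed
    return -ttr
```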
|
|
12:00-12:15, Paper TuBT8.2 | |
>Learning Hierarchical Acquisition Functions for Bayesian Optimization |
|
Rottmann, Nils | University of Luebeck |
Kunavar, Tjasa | Jozef Stefan Institute |
Babic, Jan | Jozef Stefan Institute |
Peters, Jan | Technische Universität Darmstadt |
Rueckert, Elmar | University of Luebeck |
Keywords: Reinforcement Learning, Humanoid Robot Systems, Human and Humanoid Motion Analysis and Synthesis
Abstract: Learning control policies in robotic tasks requires a large number of interactions due to small learning rates, bounds on the updates or unknown constraints. In contrast, humans can infer protective and safe solutions after a single failure or unexpected observation. In order to reach similar performance, we developed a hierarchical Bayesian optimization algorithm that replicates the cognitive inference and memorization process for avoiding failures in motor control tasks. A Gaussian Process implements the modeling and the sampling of the acquisition function. This enables rapid learning with large learning rates, while a mental replay phase ensures that policy regions that led to failures are inhibited during the sampling process. The features of the hierarchical Bayesian optimization method are evaluated in a simulated and physiological humanoid postural balancing task. The method outperforms standard optimization techniques, such as Bayesian Optimization, in the number of interactions needed to solve the task, in the computational demands and in the frequency of observed failures. Further, we show that our method performs similarly to humans for learning the postural balancing task by comparing our simulation results with real human data.
|
|
12:15-12:30, Paper TuBT8.3 | |
>Reinforcement Learning in Latent Action Sequence Space |
|
Kim, Heecheol | The University of Tokyo |
Yamada, Masanori | NTT |
Miyoshi, Kosuke | Narrative Nights Inc |
Iwata, Tomoharu | NTT |
Yamakawa, Hiroshi | The Whole Brain Architecture Initiative |
Keywords: Reinforcement Learning, Transfer Learning, Learning from Demonstration
Abstract: One problem in real-world applications of reinforcement learning is the high dimensionality of the action search spaces, which comes from the combination of actions over time. To reduce the dimensionality of action sequence search spaces, macro actions have been studied, which are sequences of primitive actions to solve tasks. However, previous studies relied on humans to define macro actions or assumed macro actions to be repetitions of the same primitive actions. We propose encoded action sequence reinforcement learning (EASRL), a reinforcement learning method that learns flexible sequences of actions in a latent space for a high-dimensional action sequence search space. With EASRL, encoder and decoder networks are trained with demonstration data by using variational autoencoders for mapping macro actions into the latent space. Then, we learn a policy network in the latent space, which is a distribution over encoded macro actions given a state. By learning in the latent space, we can reduce the dimensionality of the action sequence search space and handle various patterns of action sequences. We experimentally demonstrate that the proposed method outperforms other reinforcement learning methods on tasks that require an extensive amount of search.
|
|
12:30-12:45, Paper TuBT8.4 | |
>Deep Adversarial Reinforcement Learning for Object Disentangling |
|
Laux, Melvin | Technische Universität Darmstadt |
Arenz, Oleg | TU Darmstadt |
Peters, Jan | Technische Universität Darmstadt |
Pajarinen, Joni | Tampere University |
Keywords: Reinforcement Learning, Robust/Adaptive Control of Robotic Systems, Transfer Learning
Abstract: Deep learning in combination with improved training techniques and high computational power has led to recent advances in the field of reinforcement learning (RL) and to successful robotic RL applications such as in-hand manipulation. However, most robotic RL relies on a well known initial state distribution. In real-world tasks, this information is however often not available. For example, when disentangling waste objects the actual position of the robot w.r.t. the objects may not match the positions the RL policy was trained for. To solve this problem, we present a novel adversarial reinforcement learning (ARL) framework. The ARL framework utilizes an adversary, which is trained to steer the original agent, the protagonist, to challenging states. We train the protagonist and the adversary jointly to allow them to adapt to the changing policy of their opponent. We show that our method can generalize from training to test scenarios by training an end-to-end system for robot control to solve a challenging object disentangling task. Experiments with a KUKA LBR+ 7-DOF robot arm show that our approach outperforms the baseline method in disentangling when starting from different initial states than provided during training.
|
|
12:45-13:00, Paper TuBT8.5 | |
>Contextual Policy Search for Micro-Data Robot Motion Learning through Covariate Gaussian Process Latent Variable Models |
> Video Attachment
|
|
Delgado-Guerrero, Juan Antonio | IRI |
Colomé, Adrià | Institut De Robòtica I Informàtica Industrial (CSIC-UPC), Q28180 |
Torras, Carme | Csic - Upc |
Keywords: Learning from Demonstration, Reinforcement Learning, Robust/Adaptive Control of Robotic Systems
Abstract: In the next few years, the amount and variety of context-aware robotic manipulator applications is expected to increase significantly, especially in household environments. In such spaces, thanks to programming by demonstration, non-expert people will be able to teach robots how to perform specific tasks, for which the adaptation to the environment is imperative, for the sake of effectiveness and users' safety. These robot motion learning procedures allow the encoding of such tasks by means of parameterized trajectory generators, usually a Movement Primitive (MP) conditioned on contextual variables. However, naively sampled solutions from these MPs are generally suboptimal/inefficient, according to a given reward function. Hence, Policy Search (PS) algorithms leverage the information of the experienced rewards to improve the robot performance over executions, even for new context configurations. Given the complexity of the aforementioned tasks, PS methods face the challenge of exploring in high-dimensional parameter search spaces. In this work, a solution combining Bayesian Optimization, a data-efficient PS algorithm, with covariate Gaussian Process Latent Variable Models, a recent Dimensionality Reduction technique, is presented. It enables reducing dimensionality and exploiting prior demonstrations to converge in few iterations, while also being compliant with context requirements. Thus, contextual variables are considered in the latent search space, from which a surrogate model for the reward function is built. Then, samples are generated in a low-dimensional latent space, and mapped to a context-dependent trajectory. This allows us to drastically reduce the search space with the covariate GPLVM, e.g. from 10^5 to 2 parameters, plus a few contextual features. Experimentation in two different scenarios proves the data-efficiency and the power of dimensionality reduction of our approach.
|
|
13:00-13:15, Paper TuBT8.6 | |
>Invariant Transform Experience Replay: Data Augmentation for Deep Reinforcement Learning |
> Video Attachment
|
|
Lin, Yijiong | Guangdong University of Technology |
Huang, Jiancong | Guangdong University of Technology |
Zimmer, Matthieu | Shanghai Jiao Tong University |
Guan, Yisheng | Guangdong University of Technology |
Rojas, Juan | Chinese University of Hong Kong |
Weng, Paul | Shanghai Jiao Tong University |
Keywords: Reinforcement Learning, Deep Learning in Grasping and Manipulation, AI-Based Methods
Abstract: Deep reinforcement learning (DRL) is a promising approach for adaptive robot control, but its application to robotics is currently hindered by high sample requirements. To alleviate this issue, we propose to exploit the symmetries present in robotic tasks. Intuitively, symmetries of real trajectories define transformations that leave the space of feasible RL trajectories invariant and can be used to generate new feasible trajectories for training. Based on this data augmentation idea, we formulate a general framework, called Invariant Transform Experience Replay, which we present with two techniques. First, Kaleidoscope Experience Replay exploits reflectional symmetries. Second, Goal-augmented Experience Replay takes advantage of lax goal definitions. In the Fetch tasks from OpenAI Gym, our experimental results show significant increases in learning rates and success rates. In particular, we attain an 8x speed-up in multi-goal tasks. Invariant transforms of RL trajectories are a promising methodology to speed up learning in DRL.
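A reflectional-symmetry augmentation of the kind Kaleidoscope Experience Replay exploits can be sketched as mirroring a stored transition about a symmetry plane of the task. The coordinate layout below is illustrative; validity presumes the task really is reflection-symmetric about the chosen plane:

```python
def reflect_transition(obs, action, goal, axis=1):
    """Sketch of reflection-based replay augmentation: flip the sign of
    one coordinate in the observation, action, and goal to synthesize a
    mirrored, feasible transition for free. `axis` indexes the coordinate
    normal to the (assumed) symmetry plane."""
    def flip(v):
        w = list(v)
        w[axis] = -w[axis]
        return w
    return flip(obs), flip(action), flip(goal)
```

Each real transition stored in the replay buffer can thus yield one (or, with several symmetry planes, many) additional training transitions.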
|
|
TuBT9 |
Room T9 |
Reinforcement Learning Applications |
Regular session |
Chair: Büscher, Daniel | Albert-Ludwigs-Universität Freiburg |
Co-Chair: Fantini, Michael | Rice University |
|
11:45-12:00, Paper TuBT9.1 | |
>Efficiency and Equity Are Both Essential: A Generalized Traffic Signal Controller with Deep Reinforcement Learning |
> Video Attachment
|
|
Yan, Shengchao | University of Freiburg |
Zhang, Jingwei | Albert Ludwigs University of Freiburg |
Büscher, Daniel | Albert-Ludwigs-Universität Freiburg |
Burgard, Wolfram | Toyota Research Institute |
Keywords: Novel Deep Learning Methods, Reinforcement Learning, AI-Based Methods
Abstract: Traffic signal controllers play an essential role in today’s traffic system. However, the majority of them are currently not sufficiently flexible or adaptive to generate optimal traffic schedules. In this paper we present an approach to learn policies for signal controllers using deep reinforcement learning, aiming for optimized traffic flow. Our method uses a novel formulation of the reward function that simultaneously considers efficiency and equity. We furthermore present a general approach to find the bound for the proposed equity factor, and we introduce an adaptive discounting approach that greatly stabilizes learning and helps to maintain a high flexibility of green light duration. The experimental evaluations on both simulated and real-world data demonstrate that our proposed algorithm achieves state-of-the-art performance (previously held by traditional non-learning methods) on a wide range of traffic situations.
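A reward that simultaneously considers efficiency and equity can be sketched as a throughput term minus a spread penalty on per-lane waiting times. The combination below is hypothetical; the paper's exact equity factor and its bound are not reproduced here:

```python
from statistics import pvariance

def signal_reward(vehicles_passed, lane_waits, w_equity=0.5):
    """Sketch of an efficiency-plus-equity signal-controller reward:
    reward throughput during the control step (efficiency), and penalize
    the variance of per-lane waiting times (inequity), weighted by a
    tunable factor w_equity."""
    return vehicles_passed - w_equity * pvariance(lane_waits)
```

With equal waits across lanes the penalty vanishes; starving one approach to boost throughput is discouraged in proportion to `w_equity`.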
|
|
12:00-12:15, Paper TuBT9.2 | |
>Ultrasound-Guided Robotic Navigation with Deep Reinforcement Learning |
|
Hase, Hannes | Technical University of Munich |
Azampour, Mohammad Farid | Technical University of Munich |
Tirindelli, Maria | Computer Aided Medical Procedures, Technical University of Munich |
Paschali, Magdalini | Technical University of Munich |
Simson, Walter | Technical University Munich |
Fatemizadeh, Emad | Sharif University of Technology |
Navab, Nassir | TU Munich |
Keywords: Reinforcement Learning, Medical Robots and Systems, Autonomous Agents
Abstract: In this paper, we introduce the first reinforcement learning (RL) based robotic navigation method which utilizes ultrasound (US) images as an input. Our approach combines state-of-the-art RL techniques, specifically deep Q-networks (DQN) with memory buffers and a binary classifier for deciding when to terminate the task. Our method is trained and evaluated on an in-house collected dataset of 34 volunteers, and when compared to pure RL and supervised learning (SL) techniques, it performs substantially better, which highlights the suitability of RL navigation for US-guided procedures. When testing our proposed model, we obtained an 82.91% chance of navigating correctly to the sacrum from 165 different starting positions on 5 different unseen simulated environments.
|
|
12:15-12:30, Paper TuBT9.3 | |
>Deep R-Learning for Continual Area Sweeping |
|
Shah, Rishi | The University of Texas at Austin |
Jiang, Yuqian | University of Texas at Austin |
Hart, Justin | University of Texas at Austin |
Stone, Peter | University of Texas at Austin |
Keywords: Reinforcement Learning, AI-Based Methods, Service Robots
Abstract: Coverage path planning is a well-studied problem in robotics in which a robot must plan a path that passes through every point in a given area repeatedly, usually with a uniform frequency. To address the scenario in which some points need to be visited more frequently than others, this problem has been extended to non-uniform coverage planning. This paper considers the variant of non-uniform coverage in which the robot does not know the distribution of relevant events beforehand and must nevertheless learn to maximize the rate of detecting events of interest. This continual area sweeping problem has been previously formalized in a way that makes strong assumptions about the environment, and to date only a greedy approach has been proposed. We generalize the continual area sweeping formulation to include fewer environmental constraints, and propose a novel approach based on reinforcement learning in a Semi-Markov Decision Process. This approach is evaluated in an abstract simulation and in a high fidelity Gazebo simulation. These evaluations show significant improvement upon the existing approach in general settings, which is especially relevant in the growing area of service robotics. We also present a video demonstration on a real service robot.
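The paper's title names R-learning, the classic average-reward RL algorithm its deep, Semi-MDP variant builds on. A tabular sketch of one R-learning step (using the common variant that reuses the TD error for the average-reward update; names and constants are illustrative):

```python
def r_learning_update(Q, rho, s, a, r, s_next, alpha=0.1, beta=0.01):
    """One tabular R-learning step (average-reward RL). Q maps
    state -> {action: value}; rho is the running estimate of the average
    reward per step, which replaces discounting -- a natural fit for
    continual tasks like perpetual area sweeping."""
    delta = r - rho + max(Q[s_next].values()) - Q[s][a]
    greedy = Q[s][a] == max(Q[s].values())  # was the taken action greedy?
    Q[s][a] += alpha * delta
    if greedy:
        # Only greedy actions inform the average-reward estimate.
        rho += beta * delta
    return rho
```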
|
|
12:30-12:45, Paper TuBT9.4 | |
>Deep Reinforcement Learning for Industrial Insertion Tasks with Visual Inputs and Natural Rewards |
> Video Attachment
|
|
Schoettler, Gerrit | Siemens Corporation |
Nair, Ashvin | UC Berkeley |
Luo, Jianlan | UC Berkeley |
Bahl, Shikhar | UC Berkeley |
Aparicio Ojea, Juan | Siemens |
Solowjow, Eugen | Siemens Corporation |
Levine, Sergey | UC Berkeley |
Keywords: Reinforcement Learning, Deep Learning in Grasping and Manipulation, Industrial Robots
Abstract: Connector insertion and many other tasks commonly found in modern manufacturing settings involve complex contact dynamics and friction. Since it is difficult to capture related physical effects with first-order modeling, traditional control methods often result in brittle and inaccurate controllers, which have to be manually tuned. Reinforcement learning (RL) methods have been demonstrated to be capable of learning controllers in such environments from autonomous interaction with the environment, but running RL algorithms in the real world poses sample efficiency and safety challenges. Moreover, in practical real-world settings, we cannot assume access to perfect state information or dense reward signals. In this paper, we consider a variety of difficult industrial insertion tasks with visual inputs and different natural reward specifications, namely sparse rewards and goal images. We show that methods that combine RL with prior information, such as classical controllers or demonstrations, can solve these tasks from a reasonable amount of real-world interaction.
|
|
12:45-13:00, Paper TuBT9.5 | |
>Robotic Table Tennis with Model-Free Reinforcement Learning |
> Video Attachment
|
|
Gao, Wenbo | Columbia University |
Graesser, Laura | Google |
Choromanski, Krzysztof | Google Brain Robotics |
Song, Xingyou | Google Brain |
Lazic, Nevena | Deepmind |
Sanketi, Pannag | Google |
Sindhwani, Vikas | Google Brain, NYC |
Jaitly, Navdeep | Google Research |
Keywords: Reinforcement Learning, Novel Deep Learning Methods, Humanoid Robot Systems
Abstract: We propose a model-free algorithm for learning efficient policies capable of returning table tennis balls by controlling robot joints at a rate of 100Hz. We demonstrate that evolutionary search (ES) methods acting on CNN-based policy architectures for non-visual inputs and convolving across time learn compact controllers leading to smooth motions. Furthermore, we show that with appropriately tuned curriculum learning on the task and rewards, policies are capable of developing multi-modal styles, specifically forehand and backhand strokes, whilst achieving an 80% return rate on a wide range of ball throws. We observe that multi-modality does not require any architectural priors, such as multi-head architectures or hierarchical policies.
|
|
13:00-13:15, Paper TuBT9.6 | |
>Optimizing a Continuum Manipulator's Search Policy through Model-Free Reinforcement Learning |
|
Frazelle, Chase | Clemson University |
Rogers, Jonathan | NASA Johnson Space Center |
Karamouzas, Ioannis | Clemson University |
Walker, Ian | Clemson University |
Keywords: Flexible Robots, Reinforcement Learning, Modeling, Control, and Learning for Soft Robots
Abstract: Continuum robots have long held a great potential for applications in inspection of remote, hard-to-reach environments. In future environments such as the Deep Space Gateway, remote deployment of robotic solutions will require a high level of autonomy due to communication delays and unavailability of human crews. In this work, we explore the application of policy optimization methods through Actor-Critic gradient descent in order to optimize a continuum manipulator’s search method for an unknown object. We show that we can deploy a continuum robot without prior knowledge of a goal object location and converge to a policy that finds the goal and can be reused in future deployments. We also show that the method can be quickly extended for multiple Degrees-of-Freedom and that we can restrict the policy with virtual and physical obstacles. These two scenarios are highlighted using a simulation environment with 15 and 135 unique states, respectively.
|
|
TuBT10 |
Room T10 |
Reinforcement Learning |
Regular session |
Chair: Driggs-Campbell, Katherine | University of Illinois at Urbana-Champaign |
Co-Chair: Niekum, Scott | University of Texas at Austin |
|
11:45-12:00, Paper TuBT10.1 | |
>Hypothesis-Driven Skill Discovery for Hierarchical Deep Reinforcement Learning |
> Video Attachment
|
|
Chuck, Caleb | University of Texas at Austin |
Chockchowwat, Supawit | The University of Texas at Austin |
Niekum, Scott | University of Texas at Austin |
Keywords: AI-Based Methods, Model Learning for Control, Visual Learning
Abstract: Deep reinforcement learning (DRL) is capable of learning high-performing policies on a variety of complex high-dimensional tasks, ranging from video games to robotic manipulation. However, standard DRL methods often suffer from poor sample efficiency, partially because they aim to be entirely problem-agnostic. In this work, we introduce a novel approach to exploration and hierarchical skill learning that derives its sample efficiency from intuitive assumptions it makes about the behavior of objects both in the physical world and simulations which mimic physics. Specifically, we propose the Hypothesis Proposal and Evaluation (HyPE) algorithm, which discovers objects from raw pixel data, generates hypotheses about the controllability of observed changes in object state, and learns a hierarchy of skills to test these hypotheses. We demonstrate that HyPE can dramatically improve the sample efficiency of policy learning in two different domains: a simulated robotic block-pushing domain, and a popular benchmark task: Breakout. In these domains, HyPE learns high-scoring policies an order of magnitude faster than several state-of-the-art reinforcement learning methods.
|
|
12:00-12:15, Paper TuBT10.2 | |
>Robot Sound Interpretation: Combining Sight and Sound in Learning-Based Control |
> Video Attachment
|
|
Chang, Peixin | University of Illinois at Urbana Champaign |
Liu, Shuijing | University of Illinois at Urbana Champaign |
Chen, Haonan | Zhejiang University-University of Illinois at Urbana-Champaign Institute |
Driggs-Campbell, Katherine | University of Illinois at Urbana-Champaign |
Keywords: Cognitive Control Architectures, Robot Audition, AI-Based Methods
Abstract: We explore the interpretation of sound for robot decision making, inspired by human speech comprehension. While previous methods separate the sound processing unit and the robot controller, we propose an end-to-end deep neural network which directly interprets sound commands for visual-based decision making. The network is trained using reinforcement learning with auxiliary losses on the sight and sound networks. We demonstrate our approach on two robots, a TurtleBot3 and a Kuka-IIWA arm, which hear a command word, identify the associated target object, and perform precise control to reach the target. For both robots, we show the effectiveness of our network in generalization to sound types and robotic tasks empirically. We successfully transfer the policy learned in simulation to a real-world TurtleBot3.
|
|
12:15-12:30, Paper TuBT10.3 | |
>"Good Robot!": Efficient Reinforcement Learning for Multi-Step Visual Tasks with Sim to Real Transfer |
> Video Attachment
|
|
Hundt, Andrew | Johns Hopkins University |
Killeen, Benjamin | Johns Hopkins University |
Greene, Nicholas | Johns Hopkins University |
Wu, Hongtao | Johns Hopkins University |
Kwon, Heeyeon | Johns Hopkins University |
Paxton, Chris | NVIDIA Research |
Hager, Gregory | Johns Hopkins University |
Keywords: Deep Learning in Grasping and Manipulation, Computer Vision for Other Robotic Applications, Reinforcement Learning
Abstract: Current Reinforcement Learning (RL) algorithms struggle with long-horizon tasks where time can be wasted exploring dead ends and task progress may be easily reversed. We develop the SPOT framework, which explores within action safety zones, learns about unsafe regions without exploring them, and prioritizes experiences that reverse earlier progress to learn with remarkable efficiency. The SPOT framework successfully completes simulated trials of a variety of tasks, improving a baseline trial success rate from 13% to 100% when stacking 4 cubes, from 13% to 99% when creating rows of 4 cubes, and from 84% to 95% when clearing toys arranged in adversarial patterns. Efficiency with respect to actions per trial typically improves by 30% or more, while training takes just 1-20k actions, depending on the task. Furthermore, we demonstrate direct sim to real transfer. We are able to create real stacks in 100% of trials with 61% efficiency and real rows in 100% of trials with 59% efficiency by directly loading the simulation-trained model on the real robot with no additional real-world fine-tuning. To our knowledge, this is the first instance of reinforcement learning with successful sim to real transfer applied to long term multi-step tasks such as block-stacking and row-making with consideration of progress reversal. Code is available at https://github.com/jhu-lcsr/good_robot.
|
|
12:30-12:45, Paper TuBT10.4 | |
>Deep Reinforcement Learning for Tactile Robotics: Learning to Type on a Braille Keyboard |
> Video Attachment
|
|
Church, Alex | University of Bristol |
Lloyd, John | University of Bristol |
Hadsell, Raia | DeepMind |
Lepora, Nathan | University of Bristol |
Keywords: Force and Tactile Sensing, Reinforcement Learning, Biomimetics
Abstract: Artificial touch would seem well-suited for Reinforcement Learning (RL), since both paradigms rely on interaction with an environment. Here we propose a new environment and set of tasks to encourage development of tactile reinforcement learning: learning to type on a braille keyboard. Four tasks are proposed, progressing in difficulty from arrow to alphabet keys and from discrete to continuous actions. A simulated counterpart is also constructed by sampling tactile data from the physical environment. Using state-of-the-art deep RL algorithms, we show that all of these tasks can be successfully learnt in simulation, and 3 out of 4 tasks can be learned on the real robot. A lack of sample efficiency currently makes the continuous alphabet task impractical on the robot. To the best of our knowledge, this work presents the first demonstration of successfully training deep RL agents in the real world using observations that exclusively consist of tactile images. To aid future research utilising this environment, the code for this project has been released along with designs of the braille keycaps for 3D printing and a guide for recreating the experiments.
|
|
12:45-13:00, Paper TuBT10.5 | |
>Encoding Formulas As Deep Networks: Reinforcement Learning for Zero-Shot Execution of LTL Formulas |
|
Kuo, Yen-Ling | MIT |
Katz, Boris | MIT |
Barbu, Andrei | MIT |
Keywords: AI-Based Methods, Reinforcement Learning
Abstract: We demonstrate a reinforcement learning agent which uses a compositional recurrent neural network that takes as input an LTL formula and determines satisfying actions. The input LTL formulas have never been seen before, yet the network performs zero-shot generalization to satisfy them. This is a novel form of multi-task learning for RL agents where agents learn from one diverse set of tasks and generalize to a new set of diverse tasks. The formulation of the network enables this capacity to generalize. We demonstrate this ability in two domains. In a symbolic domain, the agent finds a sequence of letters that is accepted. In a Minecraft-like environment, the agent finds a sequence of actions that conform to the formula. While prior work could learn to execute one formula reliably given examples of that formula, we demonstrate how to encode all formulas reliably. This could form the basis of new multi-task agents that discover sub-tasks and execute them without any additional training, as well as the agents which follow more complex linguistic commands. The structures required for this generalization are specific to LTL formulas, which opens up an interesting theoretical question: what structures are required in neural networks for zero-shot generalization to different logics?
|
|
TuBT11 |
Room T11 |
Representation Learning |
Regular session |
Chair: Jenkins, Odest Chadwicke | University of Michigan |
Co-Chair: Sharf, Inna | McGill University |
|
11:45-12:00, Paper TuBT11.1 | |
>PlaNet of the Bayesians: Reconsidering and Improving Deep Planning Network by Incorporating Bayesian Inference |
> Video Attachment
|
|
Okada, Masashi | Panasonic Corporation |
Kosaka, Norio | Panasonic Corporation |
Taniguchi, Tadahiro | Ritsumeikan University |
Keywords: Representation Learning, Reinforcement Learning, Probability and Statistical Methods
Abstract: In the present paper, we propose an extension of the Deep Planning Network (PlaNet), also referred to as PlaNet of the Bayesians (PlaNet-Bayes). There has been a growing demand for model predictive control (MPC) in partially observable environments in which complete information is unavailable because of, for example, the lack of expensive sensors. PlaNet is a promising solution to realize such latent MPC, as it is used to train state-space models via model-based reinforcement learning (MBRL) and to conduct planning in the latent space. However, recent state-of-the-art strategies from the MBRL literature, such as incorporating uncertainty into training and planning, have not been considered, significantly suppressing the training performance. The proposed extension makes PlaNet uncertainty-aware on the basis of Bayesian inference, in which both model and action uncertainty are incorporated. Uncertainty in latent models is represented using a neural network ensemble to approximately infer model posteriors. An ensemble of optimal action candidates is also employed to capture multimodal uncertainty in the optimality. The concept of the action ensemble relies on a general variational inference MPC (VI-MPC) framework and its instance, probabilistic action ensemble with trajectory sampling (PaETS). In this paper, we extend VI-MPC and PaETS, which were originally introduced in previous literature, to address partially observable cases. We experimentally compare the performances on continuous control tasks, and conclude that our method consistently improves the asymptotic performance compared with PlaNet.
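The neural-network ensemble used to approximate model posteriors can be sketched in miniature: each member predicts the next (here scalar) latent state, and the spread across members serves as an epistemic-uncertainty estimate. The member functions and scalar states below are illustrative stand-ins for learned dynamics models, not the paper's networks.

```python
import statistics

def ensemble_predict(models, state, action):
    """Predict the next latent state with an ensemble and report the
    spread across members as an uncertainty estimate (scalar sketch;
    members stand in for learned latent dynamics models)."""
    preds = [m(state, action) for m in models]
    # Mean is the point prediction; population variance across members
    # approximates model (epistemic) uncertainty.
    return statistics.fmean(preds), statistics.pvariance(preds)
```

A planner can then penalize or explore high-variance predictions, which is the role uncertainty plays in the proposed uncertainty-aware planning.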
|
|
12:00-12:15, Paper TuBT11.2 | |
>Latent Space Roadmap for Visual Action Planning of Deformable and Rigid Object Manipulation |
> Video Attachment
|
|
Lippi, Martina | Università Degli Studi Di Salerno |
Poklukar, Petra | KTH Royal Institute of Technology |
Welle, Michael C. | KTH Royal Institute of Technology |
Varava, Anastasiia | KTH, the Royal Institute of Technology |
Yin, Hang | KTH |
Marino, Alessandro | University of Cassino and Southern Lazio |
Kragic, Danica | KTH |
Keywords: Representation Learning, Perception-Action Coupling, Novel Deep Learning Methods
Abstract: We present a framework for visual action planning of complex manipulation tasks with high-dimensional state spaces, such as manipulation of deformable objects. Planning is performed in a low-dimensional latent state space that embeds images. We define and implement a Latent Space Roadmap (LSR), a graph-based structure that globally captures the latent system dynamics. Our framework consists of two main components: a Visual Foresight Module (VFM) that generates a visual plan as a sequence of images, and an Action Proposal Network (APN) that predicts the actions between them. We show the effectiveness of the method on a simulated box stacking task as well as a T-shirt folding task performed with a real robot.
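The roadmap-building step can be sketched as connecting latent codes that lie close together in the embedding space. This is a deliberately simplified stand-in: the paper builds its graph over clustered latent regions connected by feasible actions, and the `eps` threshold here is an invented parameter.

```python
def build_latent_roadmap(codes, eps):
    """Connect latent codes closer than eps (Euclidean distance),
    giving a graph over the latent space in the spirit of a Latent
    Space Roadmap (simplified sketch of the graph construction)."""
    edges = set()
    for i in range(len(codes)):
        for j in range(i + 1, len(codes)):
            # Euclidean distance between two latent vectors.
            d = sum((x - y) ** 2 for x, y in zip(codes[i], codes[j])) ** 0.5
            if d < eps:
                edges.add((i, j))
    return edges
```

Graph search over such edges yields a sequence of latent states, which the VFM decodes into a visual plan and the APN turns into actions.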
|
|
12:15-12:30, Paper TuBT11.3 | |
>Learning the Latent Space of Robot Dynamics for Cutting Interaction Inference |
> Video Attachment
|
|
Rezaei-Shoshtari, Sahand | McGill University |
Meger, David Paul | McGill University |
Sharf, Inna | McGill University |
Keywords: Representation Learning, Model Learning for Control, Robotics in Agriculture and Forestry
Abstract: Utilization of the latent space to capture a lower-dimensional representation of a complex dynamics model is explored in this work. The targeted application is a robotic manipulator executing a complex environment interaction task, in particular, cutting a wooden object. We train two flavours of Variational Autoencoders---standard and Vector-Quantised---to learn the latent space, which is then used to infer certain properties of the cutting operation, such as whether the robot is cutting or not, as well as the material and geometry of the object being cut. The two VAE models are evaluated with reconstruction, prediction, and combined reconstruction/prediction decoders. The results demonstrate the expressiveness of the latent space for robotic interaction inference and the competitive prediction performance against recurrent neural networks.
|
|
12:30-12:45, Paper TuBT11.4 | |
>SwingBot: Learning Physical Features from In-Hand Tactile Exploration for Dynamic Swing-Up Manipulation |
> Video Attachment
|
|
Wang, Chen | Shanghai Jiao Tong University |
Wang, Shaoxiong | MIT |
Romero, Branden | Massachusetts Institute of Technology |
Veiga, Filipe Fernandes | MIT |
Adelson, Edward | MIT |
Keywords: Representation Learning, Force and Tactile Sensing, In-Hand Manipulation
Abstract: Several robot manipulation tasks are extremely sensitive to variations of the physical properties of the manipulated objects. One such task is manipulating objects by using gravity or arm accelerations, increasing the importance of mass, center of mass, and friction information. We present SwingBot, a robot that is able to learn the physical features of a held object through tactile exploration. Two exploration actions (tilting and shaking) provide the tactile information used to create a physical feature embedding space. With this embedding, SwingBot is able to predict the swing angle achieved by a robot performing dynamic swing-up manipulations on a previously unseen object. Using these predictions, it is able to search for the optimal control parameters for a desired swing-up angle. We show that with the learned physical features our end-to-end self-supervised learning pipeline is able to substantially improve the accuracy of swinging up unseen objects. We also show that objects with similar dynamics are closer to each other in the embedding space and that the embedding can be disentangled into values of specific physical properties.
|
|
12:45-13:00, Paper TuBT11.5 | |
>Representation and Experience-Based Learning of Explainable Models for Robot Action Execution |
|
Mitrevski, Alex | Hochschule Bonn-Rhein-Sieg |
Plöger, Paul G. | Hochschule Bonn Rhein Sieg |
Lakemeyer, Gerhard | Computer Science Department, RWTH Aachen University |
Keywords: Representation Learning, Probability and Statistical Methods, Cognitive Control Architectures
Abstract: For robots acting in human-centered environments, the ability to improve based on experience is essential for reliable and adaptive operation; however, particularly in the context of robot failure analysis, experience-based improvement is practically useful only if robots are also able to reason about and explain the decisions they make during execution. In this paper, we describe and analyse a representation of execution-specific knowledge that combines (i) a relational model in the form of qualitative attributes that describe the conditions under which actions can be executed successfully and (ii) a continuous model in the form of a Gaussian process that can be used for generating parameters for action execution, but also for evaluating the expected execution success given a particular action parameterisation. The proposed representation is based on prior, modelled knowledge about actions and is combined with a learning process that is supervised by a teacher. We analyse the benefits of this representation in the context of two actions - grasping handles and pulling an object on a table - such that the experiments demonstrate that the joint relational-continuous model allows a robot to improve its execution based on experience, while reducing the severity of failures experienced during execution.
|
|
13:00-13:15, Paper TuBT11.6 | |
>TSBP: Tangent Space Belief Propagation for Manifold Learning |
|
Cohn, Thomas | University of Michigan |
Jenkins, Odest Chadwicke | University of Michigan |
Desingh, Karthik | University of Michigan |
Zeng, Zhen | University of Michigan |
Keywords: Representation Learning
Abstract: We present Tangent Space Belief Propagation (TSBP) as a method for graph denoising to improve the robustness of manifold learning algorithms. Dimension reduction by manifold learning relies heavily on the accurate selection of nearest neighbors, which has proven an open problem for sparse and noisy datasets. TSBP performs loopy nonparametric belief propagation to accurately infer the tangent spaces of the underlying manifold at each data point. Edges of the neighborhood graph that deviate from the tangent spaces are then removed. The resulting denoised graph can then be embedded into a lower-dimensional space using methods from existing manifold learning algorithms, such as ISOMAP. Artificially generated manifold data, as well as simulated sensor data from a mobile robot, are used to demonstrate the efficacy of our TSBP method in comparison to existing manifold learning algorithms.
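The edge-removal step that TSBP enables can be sketched for 2D data: given a unit tangent estimate at each point (which TSBP infers via belief propagation, but which is simply supplied here), edges of the neighborhood graph that deviate from the tangent at either endpoint are dropped. The angle threshold is an invented parameter.

```python
import math

def prune_deviating_edges(points, tangents, edges, max_angle_deg=30.0):
    """Drop neighborhood-graph edges that deviate from the estimated
    unit tangent direction at either endpoint -- the denoising step
    enabled by TSBP (2D sketch; tangents are given, not inferred)."""
    kept = []
    cos_thresh = math.cos(math.radians(max_angle_deg))
    for i, j in edges:
        ex = points[j][0] - points[i][0]
        ey = points[j][1] - points[i][1]
        norm = math.hypot(ex, ey)
        # Keep the edge only if it aligns with the tangent at both ends
        # (abs() makes the test direction-agnostic).
        aligned = all(
            abs(ex * tx + ey * ty) / norm >= cos_thresh
            for tx, ty in (tangents[i], tangents[j])
        )
        if aligned:
            kept.append((i, j))
    return kept
```

The surviving edges form the denoised graph that is then handed to an embedding method such as ISOMAP.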
|
|
13:15-13:30, Paper TuBT11.7 | |
>Improving Unimodal Object Recognition with Multimodal Contrastive Learning |
|
Meyer, Johannes | University of Freiburg |
Eitel, Andreas | University of Freiburg |
Brox, Thomas | University of Freiburg |
Burgard, Wolfram | Toyota Research Institute |
Keywords: Representation Learning, Visual Learning, RGB-D Perception
Abstract: Robots perceive their environment using various sensor modalities, e.g., vision, depth, sound or touch. Each modality provides complementary information for perception. However, while it can be assumed that all modalities are available for training, when deploying the robot in real-world scenarios the sensor setup often varies. In order to gain flexibility with respect to the deployed sensor setup we propose a new multimodal approach within the framework of contrastive learning. In particular, we consider the case of learning from RGB-D images while testing with one modality available, i.e., exclusively RGB or depth. We leverage contrastive learning to capture high-level information between different modalities in a compact feature embedding. We extensively evaluate our multimodal contrastive learning method on the Falling Things dataset and learn representations that outperform prior methods for RGB-D object recognition on the NYU-D dataset.
|
|
TuBT12 |
Room T12 |
Collision Avoidance I |
Regular session |
Chair: Gnanasekera, Manaram | University of New South Wales |
|
11:45-12:00, Paper TuBT12.1 | |
>Roadmap Subsampling for Changing Environments |
|
Murray, Sean | Duke University |
Konidaris, George | Brown University |
Sorin, Daniel | Duke University |
Keywords: Collision Avoidance, Motion and Path Planning
Abstract: Precomputed roadmaps can enable effective multi-query motion planning: a roadmap can be built for a robot as if no obstacles were present, and then after edges invalidated by obstacles observed at query time are deleted, path search through the remaining roadmap returns a collision-free plan. However, large roadmaps are memory intensive to store, and can be too slow for practical use. We present an algorithm for compressing a large roadmap so that the collision detection phase fits into a computational budget, while retaining a high probability of finding high-quality paths. Our algorithm adapts work from graph theory and data mining by treating roadmaps as unreliable networks, where the probability of edge failure models the probability of a query-time obstacle causing a collision. We experimentally evaluate the quality of the resulting roadmaps in a suite of four motion planning benchmarks.
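The unreliable-network view of a roadmap reduces, for a single path, to a simple survival computation: each edge fails (is invalidated by a query-time obstacle) independently with some probability, so a path survives with the product of its edges' survival probabilities. The edge names and failure probabilities below are placeholders, not values from the paper.

```python
def path_reliability(path_edges, fail_prob):
    """Probability that a roadmap path survives query time, treating
    each edge as failing independently -- the unreliable-network model
    of roadmap edges used when subsampling (illustrative values)."""
    r = 1.0
    for e in path_edges:
        # Each edge survives with probability (1 - fail_prob[e]).
        r *= 1.0 - fail_prob[e]
    return r
```

A subsampling procedure can then prefer to retain edges that keep high-reliability paths available within the computational budget.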
|
|
12:00-12:15, Paper TuBT12.2 | |
>Robot Navigation in Crowded Environments Using Deep Reinforcement Learning |
> Video Attachment
|
|
Liu, Lucia | ETH Zurich |
Dugas, Daniel | ETH Zurich |
Cesari, Gianluca | ETH Zurich |
Siegwart, Roland | ETH Zurich |
Dubé, Renaud | ETH Zürich |
Keywords: Collision Avoidance, Motion and Path Planning, Reinforecment Learning
Abstract: Mobile robots operating in public environments require the ability to navigate among humans and other obstacles in a socially compliant and safe manner. This work presents a combined imitation learning and deep reinforcement learning approach for motion planning in such crowded and cluttered environments. By separately processing information related to static and dynamic objects, we enable our network to learn motion patterns that are tailored to real-world environments. Our model is also designed to handle the common case in which robots are equipped with sensor suites that offer only a limited field of view. Our model outperforms current state-of-the-art approaches, as shown in simulated environments containing human-like agents and static obstacles. Additionally, we demonstrate the real-time performance and applicability of our model by successfully navigating a robotic platform through real-world environments.
|
|
12:15-12:30, Paper TuBT12.3 | |
>Configuration Space Decomposition for Learning-Based Collision Checking in High-DOF Robots |
|
Han, Yiheng | Tsinghua University |
Zhao, Wang | Tsinghua University |
Pan, Jia | University of Hong Kong |
Liu, Yong-Jin | Tsinghua University |
Keywords: Collision Avoidance, Motion and Path Planning
Abstract: Motion planning for robots with high degrees of freedom (DOFs) is an important problem in robotics, with sampling-based methods in configuration space C as one popular solution. Recently, machine learning methods have been introduced into sampling-based motion planning, training a classifier to distinguish the collision-free subspace from the in-collision subspace in C. In this paper, we propose a novel configuration space decomposition method and show two nice properties resulting from this decomposition. Using these two properties, we build a composite classifier that works compatibly with previous machine learning methods by using them as the elementary classifiers. Experimental results are presented, showing that our composite classifier outperforms state-of-the-art single-classifier methods by a large margin. A real application of motion planning in a multi-robot system for plant phenotyping using three UR5 robotic arms is also presented.
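The composition of elementary classifiers over a decomposed configuration space can be sketched as a dispatcher: route each query configuration to the classifier of the region containing it. The region predicates and elementary classifiers below are placeholders for the learned decomposition and learners, and the conservative fallback is an assumption of this sketch, not necessarily the paper's choice.

```python
def make_composite_classifier(region_tests, classifiers):
    """Build a composite collision classifier over a decomposed
    configuration space: each query is answered by the elementary
    classifier of its region. Returns True for collision-free.
    (Sketch of the composition idea with placeholder components.)"""
    def classify(q):
        for contains, clf in zip(region_tests, classifiers):
            if contains(q):
                return clf(q)
        # Conservative fallback: unknown region treated as in-collision.
        return False
    return classify
```

Because each elementary classifier only has to model its own region, simpler learners can be used per region than one global classifier over all of C.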
|
|
12:30-12:45, Paper TuBT12.4 | |
>A Time Optimal Reactive Collision Avoidance Method for UAVs Based on a Modified Collision Cone Approach |
> Video Attachment
|
|
Gnanasekera, Manaram | University of New South Wales |
Katupitiya, Jayantha | The University of New South Wales |
Keywords: Collision Avoidance, Motion and Path Planning, Autonomous Vehicle Navigation
Abstract: UAVs, or Unmanned Aerial Vehicles, are an emerging technology that has eased human lifestyles in many ways. Due to this trend, future skies risk becoming congested. In such a situation, time-optimal collision avoidance is vital for travelling in the shortest possible time while avoiding collisions. This paper proposes a novel method for time-optimal collision avoidance for UAVs. The proposed algorithm is constructed as a three-stage approach based on the Collision Cone method with slight modifications. A sliding mode controller is used as the control law for navigation. Mathematical proofs are included to verify the time optimality of the proposed method. The efficiency and applicability of the work are confirmed by both simulation and experimental results. An automated Matrice 600 Pro hexacopter has been used for the experiments.
|
|
12:45-13:00, Paper TuBT12.5 | |
>Computationally Efficient Obstacle Avoidance Trajectory Planner for UAVs Based on Heuristic Angular Search Method |
> Video Attachment
|
|
Chen, Han | The Hongkong Polytechnic University |
Lu, Peng | The Hong Kong Polytechnic University |
Keywords: Collision Avoidance, Motion and Path Planning
Abstract: For accomplishing a variety of missions in challenging environments, the capability of navigating with full autonomy while avoiding unexpected obstacles is the most crucial requirement for UAVs in real applications. In this paper, we propose a computationally efficient obstacle avoidance trajectory planner that can be used in unknown cluttered environments. Because of the narrow field of view of the single depth camera on a UAV, the information about surrounding obstacles is quite limited, so the shortest entire path is difficult to achieve. We therefore focus on the time cost of the trajectory planner and on safety rather than other factors. This planner is mainly composed of a point cloud processor, a waypoint publisher with the Heuristic Angular Search (HAS) method, and a motion planner with minimum acceleration optimization. Furthermore, we propose several techniques to enhance safety by making the possibility of finding a feasible trajectory as large as possible. The proposed approach is implemented to run onboard in real time and is tested extensively in simulation; the average control-output computation time per iteration step is less than 18 ms.
|
|
13:00-13:15, Paper TuBT12.6 | |
>Closing the Loop: Real-Time Perception and Control for Robust Collision Avoidance with Occluded Obstacles |
> Video Attachment
|
|
Tulbure, Andreea Roxana | ETH |
Khatib, Oussama | Stanford University |
Keywords: Collision Avoidance, Whole-Body Motion Planning and Control, Perception-Action Coupling
Abstract: Robots have been successfully used in well-structured and deterministic environments, but they are still unable to function in unstructured environments mainly because of missing reliable real-time systems that integrate perception and control. In this paper, we close the loop between perception and control for real-time obstacle avoidance by introducing a new robust perception algorithm and a new collision avoidance strategy, which combines local artificial potential fields with global elastic planning to maintain the convergence towards the goal. We evaluate our new approach in real-world experiments using a Franka Panda robot and show that it is able to robustly avoid dynamic or even partially occluded obstacles while performing position or path following tasks.
|
|
TuBT13 |
Room T13 |
Collision Avoidance II |
Regular session |
Chair: Shames, Iman | The University of Melbourne |
|
11:45-12:00, Paper TuBT13.1 | |
>A Modified Hybrid Reciprocal Velocity Obstacles Approach for Multi-Robot Motion Planning without Communication |
> Video Attachment
|
|
Sainte Catherine, Maxime | CEA |
Lucet, Eric | CEA Tech |
Keywords: Motion and Path Planning, Collision Avoidance, Wheeled Robots
Abstract: Ensuring safe online motion planning despite a large number of moving agents is the problem addressed in this paper. Collision avoidance is achieved without communication between the agents and without a global localization system. The proposed solution is a modification of the Hybrid Reciprocal Velocity Obstacles (HRVO) approach combined with a tracking error estimation, in order to adapt the Velocity Obstacle paradigm to agents with kinodynamic constraints and unreliable velocity estimates. This solution, evaluated in simulation and in a real test scenario with three dynamic unicycle-type robots, shows an improvement over HRVO.
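The geometric test underlying velocity-obstacle methods such as HRVO can be sketched as checking whether a relative velocity points into the collision cone induced by another agent. This uses the infinite-cone approximation of the classic Velocity Obstacle; the reciprocal and hybrid refinements, and the paper's tracking-error margin, are omitted from this sketch.

```python
import math

def in_velocity_obstacle(rel_pos, rel_vel, combined_radius):
    """Infinite-cone velocity-obstacle test: does the ray from the
    origin along the relative velocity hit the disc of the combined
    radius centred at the relative position? (Core VO geometry only;
    (H)RVO refinements are omitted.)"""
    dist = math.hypot(rel_pos[0], rel_pos[1])
    if dist <= combined_radius:
        return True  # already overlapping
    speed = math.hypot(rel_vel[0], rel_vel[1])
    if speed == 0.0:
        return False  # standing still never closes the gap in this model
    # Angle between the relative velocity and the line to the other agent,
    # compared with the collision cone's half-angle.
    cos_to_centre = (rel_pos[0] * rel_vel[0] + rel_pos[1] * rel_vel[1]) / (dist * speed)
    angle = math.acos(max(-1.0, min(1.0, cos_to_centre)))
    return angle < math.asin(combined_radius / dist)
```

A planner then selects, among velocities outside all such cones, the one closest to its preferred velocity.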
|
|
12:00-12:15, Paper TuBT13.2 | |
>Safe and Effective Picking Paths in Clutter Given Discrete Distributions of Object Poses |
> Video Attachment
|
|
Wang, Rui | Rutgers University |
Mitash, Chaitanya | Rutgers University |
Lu, Shiyang | University of Michigan, Ann Arbor |
Boehm, Daniel | Rutgers University |
Bekris, Kostas E. | Rutgers, the State University of New Jersey |
Keywords: Motion and Path Planning, Collision Avoidance, Manipulation Planning
Abstract: Picking an item in the presence of other objects can be challenging as it involves occlusions and partial views. Given object models, one approach is to perform object pose estimation and use the most likely candidate pose per object to pick the target without collisions. This approach, however, ignores the uncertainty of the perception process regarding both the target's and the surrounding objects' poses. This work first proposes a perception process for 6D pose estimation, which returns a discrete distribution of object poses in a scene. Then, an open-loop planning pipeline is proposed to return safe and effective solutions for moving a robotic arm to pick, which (a) minimizes the probability of collision with the obstructing objects; and (b) maximizes the probability of reaching the target item. The planning framework models the challenge as a stochastic variant of the Minimum Constraint Removal (MCR) problem. The effectiveness of the methodology is verified on both simulated and real data in different scenarios. The experiments demonstrate the importance of considering the uncertainty of the perception process in terms of safe execution. The results also show that the methodology is more effective than conservative MCR approaches, which avoid all possible object poses regardless of the reported uncertainty.
|
|
12:15-12:30, Paper TuBT13.3 | |
>Collision Avoidance Based on Robust Lexicographic Task Assignment |
|
Wood, Tony A. | University of Melbourne |
Khoo, Mitchell | The University of Melbourne |
Michael, Elad | The University of Melbourne |
Manzie, Chris | University of Melbourne |
Shames, Iman | The University of Melbourne |
Keywords: Collision Avoidance, Path Planning for Multiple Mobile Robots or Agents, Task Planning
Abstract: Traditional task assignment approaches for multi-agent motion control do not take the possibility of collisions into account. This can lead to challenging requirements for path planning. We derive an assignment method that not only minimises the largest distance between an agent and its assigned destination but also provides local constraints for guaranteed collision avoidance. To this end, we introduce a sequential bottleneck optimisation problem and define a notion of robustness of an optimising assignment to changes of individual assignment costs. Conditioned on a sufficient level of robustness in relation to the size of the agents, we construct time-varying position bounds for every individual agent. These local constraints are a direct byproduct of the assignment procedure and only depend on the initial agent positions, the destinations that are to be visited, and a timing parameter. We prove that no agent that is assigned to move to one of the target locations collides with any other agent if all agents satisfy their local position constraints. We demonstrate the method in an illustrative case study.
|
|
12:30-12:45, Paper TuBT13.4 | |
>Risk-Averse MPC Via Visual-Inertial Input and Recurrent Networks for Online Collision Avoidance |
> Video Attachment
|
|
Schperberg, Alexander | University of California Los Angeles |
Chen, Kenny | University of California, Los Angeles |
Tsuei, Stephanie | University of California, Los Angeles |
Jewett, Michael | University of California, Los Angeles |
Hooks, Joshua | UCLA |
Soatto, Stefano | University of California, Los Angeles |
Mehta, Ankur | UCLA |
Hong, Dennis | UCLA |
Keywords: Motion and Path Planning, Collision Avoidance, Visual-Based Navigation
Abstract: In this paper, we propose an online path planning architecture that extends the model predictive control (MPC) formulation to consider future location uncertainties for safer navigation through cluttered environments. Our algorithm combines an object detection pipeline with a recurrent neural network (RNN) which infers the covariance of state estimates through each step of our MPC’s finite time horizon. The RNN model is trained on a dataset that comprises robot and landmark poses generated from camera images and inertial measurement unit (IMU) readings via a state-of-the-art visual-inertial odometry framework. To detect and extract object locations for avoidance, we use a custom-trained convolutional neural network model in conjunction with a feature extractor to retrieve 3D centroid and radii boundaries of nearby obstacles. The robustness of our methods is validated on complex quadruped robot dynamics and can be generally applied to most robotic platforms, demonstrating autonomous behaviors that can plan fast and collision-free paths towards a goal point.
|
|
12:45-13:00, Paper TuBT13.5 | |
>A Data-Driven Framework for Proactive Intention-Aware Motion Planning of a Robot in a Human Environment |
> Video Attachment
|
|
Peddi, Rahul | University of Virginia |
Di Franco, Carmelo | University of Virginia |
Gao, Shijie | University of Virginia |
Bezzo, Nicola | University of Virginia |
Keywords: Collision Avoidance, Motion and Path Planning, Social Human-Robot Interaction
Abstract: For safe and efficient human-robot interaction, a robot needs to predict and understand the intentions of humans who share the same space. Mobile robots are traditionally built to be reactive, moving in unnatural ways without following social protocol, hence forcing people to behave very differently from human-human interaction rules; this can be overcome if robots are instead proactive. In this paper, we build an intention-aware proactive motion planning strategy for mobile robots that coexist with multiple humans. We propose a framework that uses Hidden Markov Model (HMM) theory with a history of observations to: i) predict future states and estimate the likelihood that humans will cross the path of a robot, and ii) concurrently learn, update, and improve the predictive model with new observations at run-time. Stochastic reachability analysis is proposed to identify multiple possibilities of future states, and a control scheme that leverages temporal virtual physics inspired by spring-mass systems is proposed to enable safe proactive motion planning. The proposed approach is validated with simulations and experiments involving an unmanned ground vehicle (UGV) performing go-to-goal operations in the presence of multiple humans, demonstrating improved performance and effectiveness of online learning when compared to reactive obstacle avoidance approaches.
|
|
13:00-13:15, Paper TuBT13.6 | |
>Frozone: Freezing-Free, Pedestrian-Friendly Navigation in Human Crowds |
> Video Attachment
|
|
Sathyamoorthy, Adarsh Jagan | University of Maryland |
Patel, Utsav | University of Maryland |
Guan, Tianrui | University of Maryland |
Manocha, Dinesh | University of Maryland |
Keywords: Collision Avoidance, Motion and Path Planning, Computational Geometry
Abstract: We present Frozone, a novel algorithm to deal with the Freezing Robot Problem (FRP) that arises when a robot navigates through dense scenarios and crowds. Our method senses and explicitly predicts the trajectories of pedestrians and constructs a Potential Freezing Zone (PFZ): a spatial zone where the robot could freeze or be obtrusive to humans. Our formulation computes a deviation velocity to avoid the PFZ, which also accounts for social constraints. Furthermore, Frozone is designed for robots equipped with sensors with a limited sensing range and field of view. We ensure that the robot's deviation is bounded, thus avoiding sudden angular motion which could lead to the loss of perception data of the surrounding obstacles. We have combined Frozone with a Deep Reinforcement Learning-based (DRL) collision avoidance method and use our hybrid approach to handle crowds of varying densities. Our overall approach results in smooth and collision-free navigation in dense environments. We have evaluated our method's performance in simulation and on real differential drive robots in challenging indoor scenarios. We highlight the benefits of our approach over prior methods in terms of success rates (up to 50% increase), pedestrian-friendliness (100% increase) and the rate of freezing (> 80% decrease) in challenging scenarios.
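The PFZ membership test can be sketched by linearly extrapolating pedestrian positions and checking whether a candidate robot velocity enters any pedestrian's safety disc over a short horizon. The horizon, time step, and radius below are invented constants, and the real method additionally computes a bounded, socially constrained deviation velocity rather than a yes/no test.

```python
def enters_freezing_zone(robot_pos, vel, pedestrians,
                         horizon=2.0, dt=0.5, radius=0.6):
    """Return True if a candidate robot velocity drives the robot
    within `radius` of any pedestrian's linearly extrapolated position
    during the horizon -- a simplified sketch of a PFZ check.
    pedestrians: list of ((px, py), (pvx, pvy)) position/velocity pairs."""
    steps = int(horizon / dt)
    for k in range(1, steps + 1):
        t = k * dt
        # Robot position under the candidate constant velocity.
        rx, ry = robot_pos[0] + vel[0] * t, robot_pos[1] + vel[1] * t
        for (px, py), (pvx, pvy) in pedestrians:
            # Constant-velocity pedestrian prediction.
            fx, fy = px + pvx * t, py + pvy * t
            if (rx - fx) ** 2 + (ry - fy) ** 2 < radius ** 2:
                return True
    return False
```

Velocities flagged by such a test are the ones a freezing-aware planner deviates away from before the robot becomes stuck or obtrusive.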
|
|
TuBT14 |
Room T14 |
Perception for Navigation |
Regular session |
Chair: Waslander, Steven Lake | University of Toronto |
Co-Chair: Zhang, Xiaolin | Shanghai Institute of Microsystem and Information Technology, Chinese Academy of Sciences |
|
11:45-12:00, Paper TuBT14.1 | |
>Dynamic Attention-Based Visual Odometry |
> Video Attachment
|
|
Kuo, Xin-Yu | National Tsing Hua University |
Liu, Chien | National Tsing Hua University |
Lin, Kai-Chen | National Tsing Hua University |
Luo, Evan | National Tsing Hua University |
Chen, Yu-Wen | National Tsing Hua University |
Lee, Chun-Yi | National Tsing Hua University |
Keywords: Localization, Visual Learning
Abstract: This paper proposes a dynamic attention-based visual odometry framework (DAVO), a learning-based VO method, for estimating the ego-motion of a monocular camera. DAVO dynamically adjusts the attention weights on different semantic categories for different motion scenarios based on optical flow maps. These weighted semantic categories can then be used to generate attention maps that highlight the relative importance of different semantic regions in input frames for pose estimation. In order to examine the proposed DAVO, we perform a number of experiments on the KITTI Visual Odometry and SLAM benchmark suite to quantitatively and qualitatively inspect the impacts of the dynamically adjusted weights on the accuracy of the evaluated trajectories. Moreover, we design a set of ablation analyses to justify each of our design choices, and validate the effectiveness as well as the advantages of DAVO. Our experiments on the KITTI dataset show that the proposed DAVO framework provides satisfactory performance in ego-motion estimation, and is able to deliver competitive performance when compared to contemporary VO methods.
|
|
12:00-12:15, Paper TuBT14.2 | |
>Richer Aggregated Features for Optical Flow Estimation with Edge-Aware Refinement |
|
Wang, Xianshun | Shanghai Institute of Microsystem and Information Technology, Ch |
Zhu, Dongchen | Shanghai Institute of Microsystem and Information Technology, Chi |
Song, Jiafei | SIMIT |
Liu, Yanqing | Shanghai Institute of Microsystem and Information Technology, Ch |
Li, Jiamao | Shanghai Institute of Microsystem and Information Technology, Chi |
Zhang, Xiaolin | Shanghai Institute of Microsystem and Information Technology, Chi |
Keywords: Computer Vision for Other Robotic Applications, Deep Learning for Visual Perception
Abstract: Recent CNN-based optical flow approaches have a separated structure for feature extraction and flow estimation. The core task of optical flow is finding corresponding points, and rich feature representation is key to such matching problems. However, prior work usually pays more attention to the design of the flow decoder than to feature extraction. In this paper, we present a novel optical flow estimation network to enrich the feature representation of each pyramid level, with a hierarchical dilated architecture and a bottom-up aggregation scheme. In addition, inspired by edge-guided classical methods, we bring the edge-aware idea into our approach and propose an edge-aware refinement (EAR) subnetwork to handle motion boundaries. Using the same decoding structure as PWC-Net, our network outperforms it by a large margin and leads all its derivatives on both KITTI-2012 and KITTI-2015. Further performance analysis proves the effectiveness of the proposed ideas.
|
|
12:15-12:30, Paper TuBT14.3 | |
>LiDAR Iris for Loop-Closure Detection |
|
Wang, Ying | Nanjing University of Science and Technology |
Sun, Zezhou | Nanjing University of Science and Technology |
Xu, Cheng-Zhong | University of Macau |
Sarma, Sanjay E. | MIT |
Yang, Jian | Nanjing University of Science & Technology |
Kong, Hui | Nanjing University of Science and Technology |
Keywords: Computer Vision for Other Robotic Applications, Localization
Abstract: In this paper, a global descriptor for a LiDAR point cloud, called LiDAR Iris, is proposed for fast and accurate loop-closure detection. A binary signature image can be obtained for each point cloud after several LoG-Gabor filtering and thresholding operations on the LiDAR-Iris image representation. Given two point clouds, their similarity can be calculated as the Hamming distance between the two corresponding binary signature images. Our LiDAR-Iris method can achieve pose-invariant loop-closure detection at the descriptor level with the Fourier transform of the LiDAR-Iris representation when assuming a 3D (x, y, yaw) pose space, although our method can generally be applied to a 6D pose space by re-aligning point clouds with an additional IMU sensor. Experimental results on five road-scene sequences demonstrate its excellent performance in loop-closure detection.
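The descriptor comparison described above reduces to a Hamming distance between binary images. A minimal illustrative sketch follows (not the authors' implementation), with a brute-force circular column shift standing in for the Fourier-based yaw alignment mentioned in the abstract:

```python
def hamming_distance(sig_a, sig_b):
    """Normalized Hamming distance between two equally sized binary
    signatures, each given as a list of rows of 0/1 values."""
    total = sum(len(row) for row in sig_a)
    diff = sum(
        1
        for row_a, row_b in zip(sig_a, sig_b)
        for bit_a, bit_b in zip(row_a, row_b)
        if bit_a != bit_b
    )
    return diff / total

def roll_columns(sig, shift):
    """Circularly shift every row of the signature by `shift` columns
    (a column shift corresponds to a yaw rotation of the sensor)."""
    return [row[-shift:] + row[:-shift] if shift else row[:] for row in sig]

def rotation_invariant_distance(sig_a, sig_b):
    """Minimum Hamming distance over all circular column shifts of sig_b;
    a brute-force stand-in for the Fourier-based alignment in the paper."""
    width = len(sig_b[0])
    return min(hamming_distance(sig_a, roll_columns(sig_b, s)) for s in range(width))
```

A yaw-rotated copy of a signature then scores a distance of zero under the rotation-invariant comparison, while the plain Hamming distance does not.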
|
|
12:30-12:45, Paper TuBT14.4 | |
>Confidence Guided Stereo 3D Object Detection with Split Depth Estimation |
> Video Attachment
|
|
Li, Chengyao | University of Toronto |
Ku, Jason | University of Toronto |
Waslander, Steven Lake | University of Toronto |
Keywords: Object Detection, Segmentation and Categorization, Deep Learning for Visual Perception, Computer Vision for Transportation
Abstract: Accurate and reliable 3D object detection is vital to safe autonomous driving. Despite recent developments, the performance gap between stereo-based methods and LiDAR-based methods is still considerable. Accurate depth estimation is crucial to the performance of stereo-based 3D object detection methods, particularly for those pixels associated with objects in the foreground. Moreover, stereo-based methods suffer from high variance in the depth estimation accuracy, which is often not considered in the object detection pipeline. To tackle these two issues, we propose CG-Stereo, a confidence-guided stereo 3D object detection pipeline that uses separate decoders for foreground and background pixels during depth estimation, and leverages the confidence estimation from the depth estimation network as a soft attention mechanism in the 3D object detector. Our approach outperforms all state-of-the-art stereo-based 3D detectors on the KITTI benchmark.
|
|
12:45-13:00, Paper TuBT14.5 | |
>End-To-End Contextual Perception and Prediction with Interaction Transformer |
|
Li, Lingyun | Uber Advanced Technologies Group |
Yang, Bin | University of Toronto |
Liang, Ming | Uber |
Ren, Mengye | University of Toronto, Uber ATG |
Zeng, Wenyuan | University of Toronto, Uber |
Segal, Sean | Uber ATG, University of Toronto |
Urtasun, Raquel | University of Toronto |
Keywords: Computer Vision for Transportation, Novel Deep Learning Methods, Collision Avoidance
Abstract: In this paper, we tackle the problem of detecting objects in 3D and forecasting their future motion in the context of self-driving. Towards this goal, we design a novel approach that explicitly takes into account the interactions between actors. To capture the spatial-temporal dependency between actors, we propose a recurrent neural network with a novel Transformer architecture, which we call the Interaction Transformer. Importantly, our model can be trained end-to-end, and runs in real-time. We validate our approach on two challenging real-world datasets: ATG4D and nuScenes. We show that our approach can outperform the state-of-the-art results on both datasets. In particular, we significantly improve the social compliance between the estimated future trajectories, resulting in far fewer collisions between the predicted actors.
|
|
13:00-13:15, Paper TuBT14.6 | |
>Inferring Spatial Uncertainty in Object Detection |
> Video Attachment
|
|
Wang, Zining | University of California, Berkeley |
Feng, Di | Technical University of Munich |
Zhou, Yiyang | University of California, Berkeley |
Rosenbaum, Lars | Robert Bosch GmbH |
Timm, Fabian | Robert Bosch GmbH |
Dietmayer, Klaus | University of Ulm |
Tomizuka, Masayoshi | University of California |
Zhan, Wei | Univeristy of California, Berkeley |
Keywords: Object Detection, Segmentation and Categorization, Deep Learning for Visual Perception, Computer Vision for Transportation
Abstract: The availability of real-world datasets is a prerequisite for developing object detection methods for autonomous driving. While ambiguity exists in object labels due to error-prone annotation processes or sensor observation noise, current object detection datasets only provide deterministic annotations without considering their uncertainty. This precludes an in-depth evaluation among different object detection methods, especially for those that explicitly model predictive probability. In this work, we propose a generative model to estimate bounding box label uncertainties from LiDAR point clouds, and define a new representation of the probabilistic bounding box through a spatial distribution. Comprehensive experiments show that the proposed model represents uncertainties commonly seen in driving scenarios. Based on the spatial distribution, we further propose an extension of IoU, called the Jaccard IoU (JIoU), as a new evaluation metric that incorporates label uncertainty. Experiments on the KITTI and Waymo Open datasets show that JIoU is superior to IoU when evaluating probabilistic object detectors.
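As a hedged illustration only, the sketch below shows standard axis-aligned IoU alongside one natural probabilistic generalization of the Jaccard index (sum of minima over sum of maxima), which reduces to IoU for binary membership maps. The paper's actual JIoU formulation may differ in its details:

```python
def iou(box_a, box_b):
    """Axis-aligned 2D IoU; boxes are (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

def soft_jaccard(p, q):
    """Generalized Jaccard index between two spatial probability maps
    (flat lists of per-cell membership probabilities): sum(min)/sum(max).
    Equals the standard Jaccard index (IoU) when p and q are binary."""
    num = sum(min(pi, qi) for pi, qi in zip(p, q))
    den = sum(max(pi, qi) for pi, qi in zip(p, q))
    return num / den if den > 0 else 0.0
```

The soft variant lets uncertain (fractional) cell memberships contribute partial overlap instead of an all-or-nothing decision.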
|
|
TuBT15 |
Room T15 |
Vision-Based Navigation I |
Regular session |
Chair: Fermuller, Cornelia | University of Maryland |
Co-Chair: Tran, Quang | AIOZ |
|
11:45-12:00, Paper TuBT15.1 | |
>One-Shot Informed Robotic Visual Search in the Wild |
> Video Attachment
|
|
Koreitem, Karim | McGill University |
Shkurti, Florian | University of Toronto |
Manderson, Travis | McGill University |
Chang, Wei-Di | McGill University |
Gamboa Higuera, Juan Camilo | McGill University |
Dudek, Gregory | McGill University |
Keywords: Visual-Based Navigation, Field Robots, Representation Learning
Abstract: We consider the task of underwater robot navigation for the purpose of collecting scientifically relevant video data for environmental monitoring. The majority of field robots that currently perform monitoring tasks in unstructured natural environments navigate via path-tracking a pre-specified sequence of waypoints. Although this navigation method is often necessary, it is limiting because the robot does not have a model of what the scientist deems to be relevant visual observations. Thus, the robot can neither visually search for particular types of objects, nor focus its attention on parts of the scene that might be more relevant than the pre-specified waypoints and viewpoints. In this paper we propose a method that enables informed visual navigation via a learned visual similarity operator that guides the robot’s visual search towards parts of the scene that look like an exemplar image, which is given by the user as a high-level specification for data collection. We propose and evaluate a weakly supervised video representation learning method that outperforms ImageNet embeddings for similarity tasks in the underwater domain. We also demonstrate the deployment of this similarity operator during informed visual navigation in collaborative environmental monitoring scenarios, in large-scale field trials, where the robot and a human scientist collaboratively search for relevant visual content. Code: https://github.com/rvl-lab-utoronto/visual_search_in_the_wild
|
|
12:00-12:15, Paper TuBT15.2 | |
>Perception-Aware Path Planning for UAVs Using Semantic Segmentation |
> Video Attachment
|
|
Bartolomei, Luca | ETH Zurich |
Teixeira, Lucas | ETH Zurich |
Chli, Margarita | ETH Zurich |
Keywords: Visual-Based Navigation, Aerial Systems: Perception and Autonomy, Autonomous Vehicle Navigation
Abstract: In this work, we present a perception-aware path-planning pipeline for Unmanned Aerial Vehicles (UAVs) for navigation in challenging environments. The objective is to reach a given destination safely and accurately by relying on monocular camera-based state estimators, such as Keyframe-based Visual-Inertial Odometry (VIO) systems. Motivated by the recent advances in semantic segmentation using deep learning, our path-planning architecture takes into consideration the semantic classes of parts of the scene that are perceptually more informative than others. This work proposes a planning strategy capable of avoiding both texture-less regions and problematic areas, such as lakes and oceans, that may cause large drift or failures in the robot's pose estimation, by using the semantic information to compute the next best action with respect to perception quality. We design a hierarchical planner, composed of an A* path-search step followed by B-Spline trajectory optimization. While the A* steers the UAV towards informative areas, the optimizer keeps the most promising landmarks in the camera's field of view. We extensively evaluate our approach in a set of photo-realistic simulations, showing a remarkable improvement with respect to the state-of-the-art in active perception.
|
|
12:15-12:30, Paper TuBT15.3 | |
>Learning Your Way without Map or Compass: Panoramic Target Driven Visual Navigation |
> Video Attachment
|
|
Watkins-Valls, David | Columbia University |
Xu, Jingxi | Columbia University |
Waytowich, Nicholas | University of North Florida |
Allen, Peter | Columbia University |
Keywords: Visual-Based Navigation, Big Data in Robotics and Automation, Imitation Learning
Abstract: We present a robot navigation system that uses an imitation learning framework to successfully navigate in complex environments. Our framework takes a pre-built 3D scan of a real environment and trains an agent from pre-generated expert trajectories to navigate to any position given a panoramic view of the goal and the current visual input, without relying on a map, compass, odometry, or the relative position of the target at runtime. Our end-to-end trained agent uses RGB and depth (RGBD) information and can handle large environments (up to 1031 m^2) across multiple rooms (up to 40) and generalizes to unseen targets. We show that when compared to several baselines our method (1) requires fewer training examples and less training time, (2) reaches the goal location with higher accuracy, and (3) produces better solutions with shorter paths for long-range navigation tasks.
|
|
12:30-12:45, Paper TuBT15.4 | |
>Autonomous Navigation in Complex Environments with Deep Multimodal Fusion Network |
> Video Attachment
|
|
Nguyen, Anh | Imperial College London |
Nguyen, Ngoc | AIOZ Pte Ltd |
Tran, Xuan Kim | Company |
Tjiputra, Erman | AIOZ |
Tran, Quang | AIOZ |
Keywords: Visual-Based Navigation, Novel Deep Learning Methods, Deep Learning for Visual Perception
Abstract: Autonomous navigation in complex environments is a crucial task in time-sensitive scenarios such as disaster response or search and rescue. However, complex environments pose significant challenges for autonomous platforms due to their properties: constrained narrow passages, unstable pathways with debris and obstacles, or irregular geological structures and poor lighting conditions. In this work, we propose a multimodal fusion approach to address the problem of autonomous navigation in complex environments such as collapsed cities or natural caves. We first simulate the complex environments in a physics-based simulation engine and collect a large-scale dataset for training. We then propose a Navigation Multimodal Fusion Network (NMFNet) with three branches to effectively handle three visual modalities: laser, RGB images, and point cloud data. The extensive experimental results show that our NMFNet outperforms the recent state of the art by a fair margin while achieving real-time performance. We further show that the use of multiple modalities is essential for autonomous navigation in complex environments. Finally, we successfully deploy our network to both simulated and real mobile robots.
|
|
12:45-13:00, Paper TuBT15.5 | |
>Unsupervised Learning of Dense Optical Flow, Depth and Egomotion with Event-Based Sensors |
|
Ye, Chengxi | University of Maryland |
Mitrokhin, Anton | University of Maryland, College Park |
Yorke, James | University of Maryland, College Park |
Fermuller, Cornelia | University of Maryland |
Aloimonos, Yiannis | University of Maryland |
Keywords: Autonomous Vehicle Navigation, Visual-Based Navigation, Deep Learning for Visual Perception
Abstract: We present an unsupervised learning pipeline for dense depth, optical flow and egomotion estimation for autonomous driving applications, using the event-based output of the Dynamic Vision Sensor (DVS) as input. The backbone of our pipeline is a bioinspired encoder-decoder neural network architecture, ECN. To train the pipeline, we introduce a covariance normalization technique which resembles the lateral inhibition mechanism found in animal neural systems. Our work is the first monocular pipeline that generates dense depth and optical flow from sparse event data only, and is able to transfer from day to night scenes without any additional training. The network works in self-supervised mode and has just 150k parameters. We evaluate our pipeline on the MVSEC self-driving dataset and present results for depth, optical flow and egomotion estimation. Thanks to the efficient design, we are able to achieve inference rates of 300 FPS on a single Nvidia 1080Ti GPU. Our experiments demonstrate significant improvements upon works that used deep learning on event data, as well as the ability to perform well during both day and night.
|
|
13:00-13:15, Paper TuBT15.6 | |
>HouseExpo: A Large-Scale 2D Indoor Layout Dataset for Learning-Based Algorithms on Mobile Robots |
> Video Attachment
|
|
Li, Tingguang | The Chinese University of Hong Kong |
Ho, Danny | The Chinese University of Hong Kong |
Li, Chenming | The Chinese University of Hong Kong |
Zhu, Delong | The Chinese University of Hong Kong |
Wang, Chaoqun | The Chinese University of HongKong |
Meng, Max Q.-H. | The Chinese University of Hong Kong |
Keywords: AI-Based Methods, Big Data in Robotics and Automation, Visual-Based Navigation
Abstract: As one of the most promising areas, mobile robotics has drawn much attention in recent years. Current work in this field is often evaluated in a few manually designed scenarios, due to the lack of a common experimental platform. Meanwhile, with the recent development of deep learning techniques, some researchers attempt to apply learning-based methods to mobile robot tasks, which require a substantial amount of data. To satisfy this demand, in this paper we build HouseExpo, a large-scale indoor layout dataset containing 35,126 2D floor plans with 252,550 rooms in total. Alongside it, we develop Pseudo-SLAM, a lightweight and efficient simulation platform to accelerate the data generation procedure, thereby speeding up the training process. In our experiments, we build models to tackle obstacle avoidance and autonomous exploration from a learning perspective, in simulation as well as in real-world experiments, to verify the effectiveness of our simulator and dataset. All the data and code are available online, and we hope HouseExpo and Pseudo-SLAM can meet the need for data and benefit the whole community.
|
|
TuBT16 |
Room T16 |
Vision-Based Navigation II |
Regular session |
Chair: Gammell, Jonathan | University of Oxford |
Co-Chair: Yu, Changbin (Brad) | The Australian National University |
|
11:45-12:00, Paper TuBT16.1 | |
>Multimodal Aggregation Approach for Memory Vision-Voice Indoor Navigation with Meta-Learning |
> Video Attachment
|
|
Yan, Liqi | Fudan University |
Liu, Dongfang | Purdue University |
Song, Yaoxian | Fudan University |
Yu, Changbin (Brad) | The Australian National University |
Keywords: Visual-Based Navigation, Reinforcement Learning, Motion and Path Planning
Abstract: Vision and voice are two vital keys for agents' interaction and learning. In this paper, we present a novel indoor navigation model called Memory Vision-Voice Indoor Navigation (MVV-IN), which receives voice commands and analyzes multimodal information from visual observation in order to enhance robots' environment understanding. We make use of single RGB images taken by a first-view monocular camera. We also apply a self-attention mechanism to keep the agent focusing on key areas. Memory is important for the agent to avoid repeating certain tasks unnecessarily and to adapt adequately to new scenes; therefore, we make use of meta-learning. We have experimented with various functional features extracted from visual observation. Comparative experiments prove that our methods outperform state-of-the-art baselines.
|
|
12:00-12:15, Paper TuBT16.2 | |
>Occlusion-Robust MVO: Multimotion Estimation through Occlusion Via Motion Closure |
> Video Attachment
|
|
Judd, Kevin Michael | University of Oxford |
Gammell, Jonathan | University of Oxford |
Keywords: Visual-Based Navigation, Visual Tracking, Autonomous Vehicle Navigation
Abstract: Visual motion estimation is an integral and well-studied challenge in autonomous navigation. Recent work has focused on addressing multimotion estimation, which is especially challenging in highly dynamic environments. Such environments not only comprise multiple, complex motions but also tend to exhibit significant occlusion. Previous work in object tracking focuses on maintaining the integrity of object tracks but usually relies on specific appearance-based descriptors or constrained motion models. These approaches are very effective in specific applications but do not generalize to the full multimotion estimation problem. This paper presents a pipeline for estimating multiple motions, including the camera egomotion, in the presence of occlusions. This approach uses an expressive motion prior to estimate the SE(3) trajectory of every motion in the scene, even during temporary occlusions, and identifies the reappearance of motions through motion closure. The performance of this occlusion-robust multimotion visual odometry (MVO) pipeline is evaluated on real-world data and the Oxford Multimotion Dataset.
|
|
12:15-12:30, Paper TuBT16.3 | |
>IDOL: A Framework for IMU-DVS Odometry Using Lines |
> Video Attachment
|
|
Le Gentil, Cedric | University of Technology Sydney |
Tschopp, Florian | ETH Zurich |
Alzugaray, Ignacio | ETH Zürich |
Vidal-Calleja, Teresa A. | University of Technology Sydney |
Siegwart, Roland | ETH Zurich |
Nieto, Juan | ETH Zürich |
Keywords: Visual-Based Navigation, SLAM, Sensor Fusion
Abstract: In this paper, we introduce IDOL, an optimization-based framework for IMU-DVS Odometry using Lines. Event cameras, also called Dynamic Vision Sensors (DVSs), generate highly asynchronous streams of events triggered upon illumination changes for each individual pixel. This novel paradigm presents advantages in low illumination conditions and high-speed motions. Nonetheless, this unconventional sensing modality brings new challenges to perform scene reconstruction or motion estimation. The proposed method offers to leverage a continuous-time representation of the inertial readings to associate each event with timely accurate inertial data. The method's front-end extracts event clusters that belong to line segments in the environment whereas the back-end estimates the system's trajectory alongside the lines' 3D position by minimizing point-to-line distances between individual events and the lines' projection in the image space. A novel attraction/repulsion mechanism is presented to accurately estimate the lines' extremities, avoiding their explicit detection in the event data. The proposed method is benchmarked against a state-of-the-art frame-based visual-inertial odometry framework using public datasets. The results show that IDOL performs at the same order of magnitude on most datasets and even shows better orientation estimates. These findings can have a great impact on new algorithms for DVS.
|
|
12:30-12:45, Paper TuBT16.4 | |
>Point Cloud Based Reinforcement Learning for Sim-To-Real and Partial Observability in Visual Navigation |
> Video Attachment
|
|
Lobos-Tsunekawa, Kenzo | Universidad De Chile |
Harada, Tatsuya | The University of Tokyo |
Keywords: Visual-Based Navigation, Reinforcement Learning, AI-Based Methods
Abstract: Reinforcement Learning (RL), among other learning-based methods, represents a powerful tool to solve complex robotic tasks (e.g., actuation, manipulation, navigation, etc.), with the need for real-world training data as one of its most important limitations. The use of simulators is one way to address this issue, yet knowledge acquired in simulation does not transfer directly to the real world, which is known as the sim-to-real transfer problem. While previous works focus on the nature of the images used as observations (e.g., textures and lighting), which has proven useful for sim-to-sim transfer, they neglect other concerns regarding said observations, such as precise geometrical meanings, failing at robot-to-robot, and thus at sim-to-real, transfers. We propose a method that learns on an observation space constructed from point clouds and environment randomization, generalizing among robots and simulators to achieve sim-to-real transfer while also addressing partial observability. We demonstrate the benefits of our methodology on the point goal navigation task, in which our method proves to be highly unaffected by unseen scenarios produced by robot-to-robot transfer, outperforms image-based baselines in robot-randomized experiments, and presents high performance in sim-to-sim conditions. Finally, we perform several experiments to validate the sim-to-real transfer to a physical domestic robot platform, confirming the out-of-the-box performance of our system.
|
|
12:45-13:00, Paper TuBT16.5 | |
>Autonomous Robot Navigation Based on Multi-Camera Perception |
|
Zhu, Kunyan | Shandong University |
Chen, Wei | Shandong University |
Zhang, Wei | Shandong University |
Song, Ran | Shandong University |
Li, Yibin | Shandong University |
Keywords: Visual-Based Navigation, Collision Avoidance, Motion and Path Planning
Abstract: In this paper, we propose an autonomous method for robot navigation based on a multi-camera setup that takes advantage of a wide field of view. A new multi-task network is designed to process the visual information supplied by the left, central and right cameras to find the passable area, detect intersections and infer the steering. Based on the outputs of the network, three navigation indicators are generated and then combined with the high-level control commands extracted by the proposed MapNet, which are finally fed into the driving controller. The indicators are also used by the controller to adjust the driving velocity, helping the robot modulate its speed to smoothly bypass obstacles. Experiments in real-world environments demonstrate that our method performs well in both local obstacle avoidance and global goal-directed navigation tasks.
|
|
TuBT17 |
Room T17 |
Vision-Based Navigation III |
Regular session |
Chair: Kim, H. Jin | Seoul National University |
Co-Chair: Tombari, Federico | Technische Universität München |
|
11:45-12:00, Paper TuBT17.1 | |
>Model Quality Aware RANSAC: A Robust Camera Motion Estimator |
|
Yeh, Shu-Hao | Texas A&M University |
Lu, Yan | Google |
Song, Dezhen | Texas A&M University |
Keywords: Visual-Based Navigation, SLAM, Computer Vision for Other Robotic Applications
Abstract: Robust estimation of camera motion under the presence of outlier noise is a fundamental problem in robotics and computer vision. Despite existing efforts that focus on detecting motion and scene degeneracies, the best existing approach that builds on Random Sample Consensus (RANSAC) still has a non-negligible failure rate. Since a single failure can lead to the failure of the entire visual simultaneous localization and mapping system, it is important to further improve the robust estimation algorithm. We propose a new robust camera motion estimator (RCME) by incorporating two main changes: a model-sample consistency test at the model instantiation step and an inlier set quality test that verifies model-inlier consistency using differential entropy. We have implemented our RCME algorithm and tested it on many public datasets. The results show a consistent reduction in failure rate when compared to the RANSAC-based Gold Standard approach and two recent variations of RANSAC methods.
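For context, the textbook RANSAC loop that this abstract builds on can be sketched for 2D line fitting; RCME's model-sample consistency and entropy-based inlier-quality tests would be layered on top of a loop like this. This is a generic illustration, not the paper's code:

```python
import random

def ransac_line(points, iters=200, inlier_tol=0.1, seed=0):
    """Textbook RANSAC for a 2D line ax + by + c = 0 with |(a, b)| = 1.

    points: list of (x, y) tuples. Returns (model, inliers) for the
    model with the largest consensus set found in `iters` trials.
    """
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        a, b = y2 - y1, x1 - x2          # normal of the sampled line
        norm = (a * a + b * b) ** 0.5
        if norm == 0:
            continue                     # degenerate (coincident) sample
        a, b = a / norm, b / norm
        c = -(a * x1 + b * y1)
        # Consensus: points within inlier_tol of the candidate line.
        inliers = [p for p in points if abs(a * p[0] + b * p[1] + c) < inlier_tol]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (a, b, c), inliers
    return best_model, best_inliers
```

RCME's extra tests reject candidate models that disagree with their own minimal sample and score inlier sets by more than their raw count, which is where the failure-rate reduction comes from.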
|
|
12:00-12:15, Paper TuBT17.2 | |
>A Fast and Robust Place Recognition Approach for Stereo Visual Odometry Using LiDAR Descriptors |
> Video Attachment
|
|
Mo, Jiawei | University of Minnesota, Twin Cities |
Sattar, Junaed | University of Minnesota |
Keywords: Visual-Based Navigation, Autonomous Vehicle Navigation, SLAM
Abstract: Place recognition is a core component of Simultaneous Localization and Mapping (SLAM) algorithms. Particularly in visual SLAM systems, previously-visited places are recognized by measuring the appearance similarity between images representing these locations. However, such approaches are sensitive to visual appearance change and also can be computationally expensive. In this paper, we propose an alternative approach adapting LiDAR descriptors for 3D points obtained from stereo-visual odometry for place recognition. 3D points are potentially more reliable than 2D visual cues (e.g., 2D features) against environmental changes (e.g., variable illumination) and this may benefit visual SLAM systems in long-term deployment scenarios. Stereo-visual odometry generates 3D points with an absolute scale, which enables us to use LiDAR descriptors for place recognition with high computational efficiency. Through extensive evaluations on standard benchmark datasets, we demonstrate the accuracy, efficiency, and robustness of using 3D points for place recognition over 2D methods.
|
|
12:15-12:30, Paper TuBT17.3 | |
>KLIEP-Based Density Ratio Estimation for Semantically Consistent Synthetic to Real Images Adaptation in Urban Traffic Scenes |
|
Savkin, Artem | TUM |
Tombari, Federico | Technische Universität München |
Keywords: Simulation and Animation, Computer Vision for Transportation, Autonomous Vehicle Navigation
Abstract: Synthetic data has been applied in many deep learning based computer vision tasks. The limited performance of algorithms trained solely on synthetic data has been addressed with domain adaptation techniques, such as those based on the generative adversarial framework. We demonstrate how adversarial training alone can introduce semantic inconsistencies in translated images. To tackle this issue, we propose a density pre-matching strategy using a KLIEP-based density ratio estimation procedure. Finally, we show that the aforementioned strategy improves the quality of the translated images of the underlying method and their usability for the semantic segmentation task in the context of autonomous driving.
|
|
12:30-12:45, Paper TuBT17.4 | |
>Graduated Assignment Graph Matching for Realtime Matching of Image Wireframes |
|
Menke, Joseph | University of California, Berkeley |
Yang, Allen | University of California, Berkeley |
Keywords: Visual Tracking, Mapping, Semantic Scene Understanding
Abstract: We present an algorithm for the realtime matching of wireframe extractions in pairs of images. Here we treat extracted wireframes as graphs and propose a simplified Graduated Assignment algorithm to use with this problem. Using this algorithm we achieve a 30% accuracy improvement over the baseline method. We show that, for this problem, the simplified Graduated Assignment algorithm can achieve realtime performance without a significant drop in accuracy as compared to the standard Graduated Assignment algorithm. We further demonstrate a method of utilizing this simplified Graduated Assignment algorithm for achieving a similar realtime improvement in the matching quality of standard features without wireframe detection.
|
|
12:45-13:00, Paper TuBT17.5 | |
>Edge-Based Visual Odometry with Stereo Cameras Using Multiple Oriented Quadtrees |
> Video Attachment
|
|
Kim, Changhyeon | Seoul National University |
Kim, Junha | Seoul National University |
Kim, H. Jin | Seoul National University |
Keywords: Visual-Based Navigation, Localization, Mapping
Abstract: We propose an efficient edge-based stereo visual odometry (VO) using multiple quadtrees created according to image gradient orientations. To characterize edges, we classify them into eight orientation groups according to their image gradient directions. Using the edge groups, we construct eight quadtrees and set overlapping areas belonging to adjacent quadtrees for robust and efficient matching. For further acceleration, previously visited tree nodes are stored and reused at the next iteration as a warm start. We propose an edge culling method to extract prominent edgelets and prune redundant edges. The camera motion is estimated by minimizing point-to-edge distances within a re-weighted iterative closest points (ICP) framework, and simultaneously, 3-D structures are recovered by static and temporal stereo settings. To analyze the effects of the proposed methods, we conduct extensive simulations with various settings. Quantitative results on public datasets confirm that our approach has competitive performance with state-of-the-art stereo methods. In addition, we demonstrate the practical value of our system in author-collected modern building scenes containing only curved edges.
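The eight-way grouping of edges by image-gradient direction can be illustrated with a small sketch (an illustrative stand-in, not the authors' implementation; the bin layout is an assumption):

```python
import math

def orientation_group(gx, gy, num_groups=8):
    """Map an image-gradient direction to one of `num_groups` orientation
    bins, as in the abstract's grouping of edges into eight quadtrees.

    Bin 0 is centred on the +x direction; bins increase counter-clockwise,
    each covering an angular width of 2*pi / num_groups.
    """
    angle = math.atan2(gy, gx) % (2.0 * math.pi)   # wrap into [0, 2*pi)
    width = 2.0 * math.pi / num_groups
    # Offset by half a bin so each bin is centred on its direction.
    return int((angle + 0.5 * width) // width) % num_groups
```

Edges whose gradients fall near a bin boundary would, per the abstract, also be registered in the overlapping area of the adjacent quadtree.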
|
|
TuBT18 |
Room T18 |
Vision-Based Navigation IV |
Regular session |
Chair: Karaman, Sertac | Massachusetts Institute of Technology |
Co-Chair: Jawahar, C.V. | IIIT, Hyderabad |
|
11:45-12:00, Paper TuBT18.1 | |
>Perception-Aware Path Finding and Following of Snake Robot in Unknown Environment |
> Video Attachment
|
|
Yang, Weixin | University of Nevada, Reno |
Wang, Gang | University of Nevada |
Shen, Yantao | University of Nevada, Reno |
Keywords: Perception-Action Coupling, Biomimetics, Visual-Based Navigation
Abstract: In this paper, we investigate perception-aware path finding, planning and following for a class of snake robots autonomously serpentining in an unmodeled and unknown environment. An onboard LiDAR sensor mounted on the head of the snake robot is used to reconstruct the local environment; from this reconstruction, a modified rapidly-exploring random tree method obtains a feasible path from the robot's current position to a locally selected target position. Next, a parametric cubic spline interpolation path-planning method and potential functions are applied to smooth the path and prevent the multi-link, elongated robot body from hitting obstacles. To steer, a time-varying line-of-sight control law is designed to ensure that the robot moves to the local target position along the path generated by the perception-aware method. The robot repeatedly performs this search-find-move strategy until it reaches the final predefined target point. Simulation and experimental results demonstrate the good performance of the proposed perception-aware approach: the elongated and underactuated snake robot is capable of autonomously navigating in an unknown environment.
|
|
12:00-12:15, Paper TuBT18.2 | |
>Joint Feature Selection and Time Optimal Path Parametrization for High Speed Vision-Aided Navigation |
|
Spasojevic, Igor | MIT |
Murali, Varun | Massachusetts Institute of Technology |
Karaman, Sertac | Massachusetts Institute of Technology |
Keywords: Perception-Action Coupling, Visual-Based Navigation, Motion and Path Planning
Abstract: We study a problem in vision-aided navigation in which an autonomous agent has to traverse a specified path in minimal time while ensuring the extraction of a steady stream of visual percepts with low latency. Vision-aided robots extract motion estimates from the sequence of images of their on-board cameras by registering the change in bearing to landmarks in their environment. The computational burden of this procedure grows with the range of apparent motion undertaken by the projections of the landmarks, incurring a lag in pose estimates that should be minimized when navigating at high speeds. This paper addresses the problem of selecting a desired number of landmarks in the environment, together with the time parametrization of the path, to allow the agent to execute it in minimal time while both (i) ensuring that the computational burden of extracting motion estimates stays below a set threshold and (ii) respecting the actuation constraints of the agent. We provide two efficient approximation algorithms for the aforementioned problem. We also show how it can be reduced to a mixed integer linear program, for which well-developed optimization packages exist. Finally, we illustrate the performance of our algorithms in experiments using a quadrotor.
|
|
12:15-12:30, Paper TuBT18.3 | |
>AVP-SLAM: Semantic Visual Mapping and Localization for Autonomous Vehicles in the Parking Lot |
> Video Attachment
|
|
Qin, Tong | Hong Kong University of Science and Technology |
Chen, Tongqing | Huawei Technology |
Chen, Yilun | Huawei Technology |
Su, Qing | Huawei Technologies Co., Ltd |
Keywords: Localization, Computer Vision for Automation, Visual-Based Navigation
Abstract: Autonomous valet parking is a specific application of autonomous vehicles. In this task, vehicles need to navigate narrow, crowded and GPS-denied parking lots, so accurate localization is of great importance. Traditional visual methods suffer from tracking loss due to texture-less regions, repeated structures, and appearance changes. In this paper, we exploit robust semantic features to build a map and localize vehicles in parking lots. Semantic features include guide signs, parking lines, speed bumps, etc., which typically appear in parking lots. Compared with traditional features, these semantic features are stable over the long term and robust to perspective and illumination changes. We adopt four surround-view cameras to increase the perception range. Assisted by an IMU (Inertial Measurement Unit) and wheel encoders, the proposed system generates a global visual semantic map. This map is then used to localize vehicles at the centimeter level. We analyze the accuracy and recall of our system and compare it against other methods in real experiments. Furthermore, we demonstrate the practicability of the proposed system in an autonomous parking application.
|
|
12:30-12:45, Paper TuBT18.4 | |
>DGAZE: Driver Gaze Mapping on Road |
> Video Attachment
|
|
Dua, Isha | IIIT Hyderabad |
John, Thrupthi Ann | IIIT Hyderabad |
Gupta, Riya | IIIT Hyderabad |
Jawahar, C.V. | IIIT, Hyderabad |
Keywords: Computer Vision for Transportation, Intelligent Transportation Systems, Deep Learning for Visual Perception
Abstract: Driver gaze mapping is crucial to estimate driver attention and determine which objects the driver is focusing on while driving. We introduce DGAZE, the first large-scale driver gaze mapping dataset. Unlike previous works, our dataset does not require expensive wearable eye-gaze trackers and instead relies on mobile phone cameras for data collection. The data was collected in a lab setting designed to mimic real driving conditions and has point and object-level annotation. It consists of 227,178 road-driver image pairs collected from 20 drivers and contains 103 unique objects on the road belonging to 7 classes: cars, pedestrians, traffic signals, motorbikes, auto-rickshaws, buses and signboards. We also present I-DGAZE, a fused convolutional neural network for predicting driver gaze on the road, which was trained on the DGAZE dataset. Our architecture combines facial features such as face location and head pose along with the image of the left eye to get optimum results. Our model achieves an error of 186.89 pixels on the road view of resolution 1920x1080 pixels. We compare our model with state-of-the-art eye gaze works and present extensive ablation results.
|
|
TuBT19 |
Room T19 |
Navigation and Collision Avoidance |
Regular session |
Chair: Bera, Aniket | University of Maryland |
Co-Chair: Feng, Chen | New York University |
|
11:45-12:00, Paper TuBT19.1 | |
>Autonomous Obstacle Avoidance for UAV Based on Fusion of Radar and Monocular Camera |
> Video Attachment
|
|
Yu, Hang | Northwestern Polytechnical University |
Zhang, Fan | Northwestern Polytechnical University |
Huang, Panfeng | Northwestern Polytechnical University |
Wang, Chen | Chang’an University |
Yuanhao, Li | Northwestern Polytechnical University |
Keywords: Sensor Fusion, Collision Avoidance, Visual-Based Navigation
Abstract: UAVs face many challenges in autonomous obstacle avoidance in large outdoor scenarios: communication distances to ground stations are long, the computing power of onboard computers is limited, and unknown obstacles are hard to detect accurately. In this paper, an autonomous obstacle avoidance scheme based on the fusion of millimeter-wave radar and a monocular camera is proposed. A visual detector is designed to detect unknown obstacles and is more robust than traditional algorithms. Extended Kalman filter (EKF) data fusion is then used to recover accurate 3D coordinates of the obstacles. Finally, an efficient path planning algorithm computes a path that avoids the obstacles. Based on this design, an experimental platform is built to verify the proposed UAV autonomous obstacle avoidance scheme. The experimental results show that the proposed scheme can not only detect different kinds of unknown obstacles, but also runs on an onboard computer using very little computing resource. An outdoor flight experiment shows the feasibility of the proposed scheme.
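The fusion step in this abstract combines two sensors with complementary error profiles. A generic Kalman measurement-update sketch illustrates the idea (the paper's full EKF, with nonlinear radar/camera measurement models, is not reproduced; all covariance values below are illustrative assumptions):

```python
import numpy as np

def kf_update(x, P, z, R):
    """One Kalman measurement update with a direct observation model (H = I)."""
    I = np.eye(len(x))
    S = P + R                    # innovation covariance (H = I)
    K = P @ np.linalg.inv(S)     # Kalman gain
    x = x + K @ (z - x)          # correct the state toward the measurement
    P = (I - K) @ P              # uncertainty shrinks with each fused sensor
    return x, P

# Sequentially fuse a radar fix (accurate along one axis, coarse on the others)
# and a camera fix with the opposite error profile.
x, P = np.zeros(3), 10.0 * np.eye(3)
x, P = kf_update(x, P, np.array([10.0, 0.0, 0.0]), np.diag([0.1, 5.0, 5.0]))
x, P = kf_update(x, P, np.array([10.0, 0.0, 0.0]), np.diag([5.0, 0.1, 0.1]))
```

After both updates, the fused estimate is accurate along every axis, which is what lets the scheme recover exact 3D obstacle coordinates from two individually incomplete sensors.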
|
|
12:00-12:15, Paper TuBT19.2 | |
>UST: Unifying Spatio-Temporal Context for Trajectory Prediction in Autonomous Driving |
|
He, Hao | TuSimple |
Dai, Hengchen | TuSimple |
Wang, Naiyan | TuSimple |
Keywords: Big Data in Robotics and Automation, Autonomous Agents
Abstract: Trajectory prediction has always been a challenging problem for autonomous driving, since it needs to infer the latent intention from the behaviors of and interactions among traffic participants. This problem is intrinsically hard because each participant may behave differently under different environments and interactions. The key is to effectively model the interlaced influence of both spatial and temporal context. Existing work usually encodes these two types of context separately, which leads to inferior modeling of the scenario. In this paper, we propose a unified approach that treats the time and space dimensions equally when modeling spatio-temporal context. The proposed module is simple and can be implemented in a few lines of code. In contrast to existing methods, which rely heavily on recurrent neural networks for temporal context and hand-crafted structures for spatial context, our method automatically partitions the spatio-temporal space to adapt to the data. Lastly, we test the proposed framework on two recently released trajectory prediction datasets, ApolloScape and Argoverse. We show that the proposed method substantially outperforms previous state-of-the-art methods while maintaining its simplicity. These encouraging results further validate the superiority of our approach.
|
|
12:15-12:30, Paper TuBT19.3 | |
>Automatic Failure Recovery and Re-Initialization for Online UAV Tracking with Joint Scale and Aspect Ratio Optimization |
|
Ding, Fangqiang | Tongji University |
Fu, Changhong | Tongji University |
Li, Yiming | Tongji University |
Jin, Jin | Tongji University |
Feng, Chen | New York University |
Keywords: Visual-Based Navigation, Computer Vision for Other Robotic Applications, Visual Learning
Abstract: Current unmanned aerial vehicle (UAV) visual tracking algorithms are primarily limited with respect to: (i) the kind of size variation they can deal with, and (ii) their implementation speed, which hardly meets real-time requirements. In this work, a real-time UAV tracking algorithm with powerful size estimation is proposed. Specifically, the overall tracking task is allocated to two 2D filters: (i) a translation filter for location prediction in the spatial domain, and (ii) a size filter for scale and aspect-ratio optimization in the size domain. In addition, an efficient two-stage re-detection strategy is introduced for long-term UAV tracking tasks. Large-scale experiments on four UAV benchmarks demonstrate the superiority of the presented method, which remains computationally feasible on a low-cost CPU.
|
|
12:30-12:45, Paper TuBT19.4 | |
>Asynchronous Event-Based Line Tracking for Time-To-Contact Maneuvers in UAS |
> Video Attachment
|
|
Gómez Eguíluz, Augusto | University of Seville |
Rodriguez-Gomez, Juan Pablo | University of Seville |
Martinez-de-Dios, Jose Ramiro | University of Seville |
Ollero, Anibal | University of Seville |
Keywords: Aerial Systems: Perception and Autonomy, Computer Vision for Other Robotic Applications
Abstract: This paper presents a bio-inspired event-based perception scheme for agile aerial robot maneuvering. It mimics birds, which perform purposeful maneuvers by closing the separation in the retinal image (w.r.t. the goal) to follow time-to-contact trajectories. The proposed approach is based on event cameras, also called artificial retinas, which provide fast response and robustness against motion blur and lighting conditions. Our scheme guides the robot by adjusting only the position of features extracted in the event image plane toward their goal positions at a predefined time, using smooth time-to-contact trajectories. The proposed scheme is robust and efficient and can be added on top of commonly used aerial robot velocity controllers. It has been validated on board a UAV with real-time computation on low-cost hardware in sets of experiments with different descent maneuvers and lighting conditions.
|
|
12:45-13:00, Paper TuBT19.5 | |
>Enhanced Transfer Learning for Autonomous Driving with Systematic Accident Simulation |
|
Akhauri, Shivam | University of Maryland College Park |
Zheng, Laura | University of Maryland, College Park |
Lin, Ming C. | University of Maryland at College Park |
Keywords: Collision Avoidance, Transfer Learning, Autonomous Agents
Abstract: Simulation data can be used to extend real-world driving data in order to cover edge cases, such as vehicle accidents. The importance of handling edge cases can be seen in the high societal cost of car accidents, as well as the potential danger to human drivers. To cover a wide and diverse range of edge cases, we systematically parameterize and simulate the most common accident scenarios. Applying this data to autonomous driving models, we show that transfer learning on simulated data sets provides better generalization and collision avoidance than random initialization methods. Our results illustrate that knowledge from a model trained on simulated data can be transferred to a model trained on real-world data, indicating the potential of simulation data to influence real-world models and advance the handling of anomalous driving scenarios.
|
|
13:00-13:15, Paper TuBT19.6 | |
>A Framework for Online Updates to Safe Sets for Uncertain Dynamics |
> Video Attachment
|
|
Shih, Jennifer | UC Berkeley |
Meier, Franziska | Facebook |
Rai, Akshara | Facebook AI Research |
Keywords: Collision Avoidance, Robot Safety, Reinforcement Learning
Abstract: Safety is crucial for deploying robots in the real world. One way of reasoning about the safety of robots is by building safe sets through Hamilton-Jacobi (HJ) reachability. However, safe sets are often computed offline, assuming perfect knowledge of the dynamics, due to the high compute time. In the presence of uncertainty, a safe set computed offline becomes inaccurate online, potentially leading to dangerous situations on the robot. We propose a novel framework to learn a safe control policy in simulation and use it to generate online safe sets under uncertain dynamics. We start with a conservative safe set and update it online as we gather more information about the robot dynamics. We also show an application of our framework to a model-based reinforcement learning problem, proposing a safe model-based RL setup. Our framework enables robots to simultaneously learn about their dynamics, accomplish tasks, and update their safe sets. It also generalizes to complex high-dimensional dynamical systems, like 3-link manipulators and quadrotors, and reliably avoids obstacles while achieving a task, even in the presence of unmodeled noise.
|
|
13:00-13:15, Paper TuBT19.7 | |
>Nonlinear MPC for Collision Avoidance and Control of UAVs with Dynamic Obstacles |
> Video Attachment
|
|
Lindqvist, Björn | Luleå University of Technology |
Mansouri, Sina Sharif | Lulea University of Technology |
Agha-mohammadi, Ali-akbar | NASA-JPL, Caltech |
Nikolakopoulos, George | Luleå University of Technology |
Keywords: Collision Avoidance, Aerial Systems: Applications
Abstract: This article proposes a novel Nonlinear Model Predictive Control (NMPC) scheme for the navigation and obstacle avoidance of an Unmanned Aerial Vehicle (UAV). The proposed NMPC formulation allows for a fully parametric obstacle trajectory, and in this article we apply a classification scheme to differentiate between different kinds of trajectories in order to predict future obstacle positions. The trajectory calculation is done from an initial condition and fed to the NMPC as an additional input. The solver used is the nonlinear, non-convex solver Proximal Averaged Newton for Optimal Control (PANOC) and its associated software OpEn (Optimization Engine), in which we apply a penalty method to properly consider the obstacles and other constraints during navigation. The proposed NMPC scheme allows for real-time solutions using a sampling time of 50 ms and a two-second prediction horizon for both the obstacle trajectory and the NMPC problem, which implies that the scheme can be considered a local path planner. This paper presents the NMPC cost function and constraint formulation, as well as the methodology for dealing with the dynamic obstacles. We include multiple laboratory experiments to demonstrate the efficacy of the proposed control architecture and to show that the proposed method delivers fast and computationally stable solutions in dynamic obstacle avoidance scenarios.
|
|
TuBT20 |
Room T20 |
Learning for Mapping and Navigation |
Regular session |
Chair: Hollinger, Geoffrey | Oregon State University |
Co-Chair: Bauer, Daniel | RWTH |
|
11:45-12:00, Paper TuBT20.1 | |
>DMLO: Deep Matching LiDAR Odometry |
|
Li, Zhichao | Tusimple.ai |
Wang, Naiyan | TuSimple |
Keywords: Novel Deep Learning Methods, SLAM
Abstract: LiDAR odometry is a fundamental task in areas such as robotics and autonomous driving. The problem is difficult because it requires systems to be highly robust when running on noisy real-world data. Existing methods are mostly local iterative methods. Feature-based global registration methods are not preferred, since extracting accurate matching pairs from nonuniform and sparse LiDAR data remains challenging. In this paper, we present Deep Matching LiDAR Odometry (DMLO), a novel learning-based framework that makes feature matching applicable to the LiDAR odometry task. Unlike many recent learning-based methods, DMLO explicitly enforces geometric constraints in the framework. Specifically, DMLO decomposes the 6-DoF pose estimation into two parts: a learning-based matching network that provides accurate correspondences between two scans, and rigid transformation estimation with a closed-form solution by Singular Value Decomposition (SVD). Comprehensive experimental results on the real-world KITTI and Argoverse datasets demonstrate that DMLO dramatically outperforms existing learning-based methods and is comparable with state-of-the-art geometry-based approaches.
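The closed-form SVD step named in this abstract is the classic Kabsch/Umeyama alignment of matched point sets. A minimal sketch (the learned matching network that produces the correspondences is not shown, and this is not the authors' code):

```python
import numpy as np

def rigid_transform_svd(src, dst):
    """Closed-form rigid transform (R, t) with dst ≈ R @ src_i + t, via SVD."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)           # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t
```

Given exact correspondences between two scans, this recovers the 6-DoF pose in a single step, which is why accurate matching is the hard part of the pipeline.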
|
|
12:00-12:15, Paper TuBT20.2 | |
>Accurate and Robust Teach and Repeat Navigation by Visual Place Recognition: A CNN Approach |
> Video Attachment
|
|
Camara, Luis G. | CIIRC CTU Prague |
Pivoňka, Tomáš | Czech Institute of Informatics, Robotics and Cybernetics |
Jilek, Martin | Czech Technical University in Prague |
Gäbert, Carl | Czech Institute of Informatics, Robotics and Cybernetics |
Kosnar, Karel | Czech Technical University in Prague |
Preucil, Libor | Czech Technical University in Prague |
Keywords: Localization, Deep Learning for Visual Perception, Visual Servoing
Abstract: We propose a novel teach-and-repeat navigation system, SSM-Nav, which is based on the output of the recently introduced SSM visual place recognition methodology. During the teach phase, a teleoperated wheeled robot stores in a database features of images taken along an arbitrary route. During the repeat (navigation) phase, a CNN-based comparison of each captured image is performed against the database. With the help of a particle filter, the image associated with the most likely location is selected at each time step, and its horizontal offset with respect to the current scene is used to correct the steering of the robot and to navigate. Indoor tests in our lab show a maximum error of less than 10 cm and excellent robustness to perturbations such as drastic changes in illumination, lateral displacements, different starting positions, or even kidnapping. Preliminary outdoor tests on a 0.22 km route show promising results, with an estimated maximum error of less than 25 cm.
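The particle-filter localization over the teach database can be sketched as filtering over database indices. This is an illustrative sketch only: the CNN similarity scores are faked with a synthetic peak, and all names and parameters are assumptions, not the SSM-Nav implementation:

```python
import numpy as np

def pf_step(particles, advance, similarities, rng, noise=1):
    """One particle-filter update over teach-database indices.
    particles: array of candidate database indices
    advance: expected index increment from odometry since the last image
    similarities: per-index image-similarity scores (higher = better match)"""
    n = len(similarities)
    # motion model: odometry advances each index hypothesis, with small jitter
    particles = np.clip(
        particles + advance + rng.integers(-noise, noise + 1, particles.shape),
        0, n - 1)
    # measurement model: weight each hypothesis by the similarity at its index
    w = similarities[particles].astype(float) + 1e-12
    w /= w.sum()
    # resample in proportion to the weights
    return particles[rng.choice(len(particles), size=len(particles), p=w)]
```

Starting from particles spread over the whole route, repeated updates concentrate the set around the database image that best matches the current view; that image's horizontal offset then drives the steering correction.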
|
|
12:15-12:30, Paper TuBT20.3 | |
>Self-Supervised Simultaneous Alignment and Change Detection |
|
Furukawa, Yukuko | National Institute of Advanced Industrial Science and Technology |
Suzuki, Kumiko | The National Institute of Advanced Industrial Science and Technology |
Hamaguchi, Ryuhei | National Institute of Advanced Industrial Science and Technology |
Onishi, Masaki | National Inst. of AIST |
Sakurada, Ken | National Institute of Advanced Industrial Science and Technology |
Keywords: Semantic Scene Understanding, Deep Learning for Visual Perception, Recognition
Abstract: This study proposes a self-supervised method for detecting scene changes from an image pair. For mobile cameras such as drive recorders, image alignment and change detection must be optimized simultaneously to alleviate the difference in camera viewpoints, because the two tasks depend on each other. Moreover, lighting conditions make scene change detection more difficult because they vary widely between images taken at different times. To solve these challenges, we propose a self-supervised simultaneous alignment and change detection network (SACD-Net). The proposed network is robust to differences in camera viewpoint and lighting conditions, simultaneously estimating warping parameters and multi-scale change probability maps while excluding change regions from the computation of the feature-consistency and semantic losses. Based on a comparative analysis of our self-supervised model against previous supervised models, as well as an ablation study of the losses of SACD-Net, the results show the effectiveness of the proposed method on a synthetic dataset and our new real dataset.
|
|
12:30-12:45, Paper TuBT20.4 | |
>Deep Inverse Sensor Models As Priors for Evidential Occupancy Mapping |
> Video Attachment
|
|
Bauer, Daniel | RWTH |
Kuhnert, Lars | University of Siegen |
Keywords: Deep Learning for Visual Perception, Mapping
Abstract: With the recent boost in autonomous driving, increased attention has been paid to radars as an input for occupancy mapping. Besides their many benefits, inferring occupied space from radar detections is notoriously difficult because of data sparsity and environment-dependent noise (e.g., multipath reflections). Recently, deep learning-based inverse sensor models, from here on called deep ISMs, have been shown to improve over their geometric counterparts in retrieving occupancy information [weston2018probably, sless2019road, bauer2019deep]. Nevertheless, these methods perform a data-driven interpolation which has to be verified later on in the presence of measurements. In this work, we describe a novel approach to integrating deep ISMs together with geometric ISMs into the evidential occupancy mapping framework. Our method leverages the capability of the data-driven approach to initialize cells not yet observable by the geometric model, effectively enhancing the perception field and convergence speed, while at the same time using the precision of the geometric ISM to converge to sharp boundaries. We further define a lower limit on the certainty of the deep ISM's estimates, together with analytical proofs of convergence, which we use to distinguish cells that are solely allocated by the deep ISM from cells already verified using the geometric approach.
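The evidential update underlying this framework is Dempster's rule of combination over the occupancy frame {occ, free}, with an explicit ignorance mass on the whole frame. A minimal sketch of that rule (field names are illustrative; the paper's deep/geometric ISM integration is not reproduced):

```python
# Combine two occupancy mass functions m1, m2 over {occ, free}, where
# "unk" is the ignorance mass assigned to the whole set {occ, free}.
def ds_combine(m1, m2):
    conflict = m1["occ"] * m2["free"] + m1["free"] * m2["occ"]
    s = 1.0 - conflict                      # Dempster normalization factor
    return {
        "occ":  (m1["occ"] * m2["occ"] + m1["occ"] * m2["unk"] + m1["unk"] * m2["occ"]) / s,
        "free": (m1["free"] * m2["free"] + m1["free"] * m2["unk"] + m1["unk"] * m2["free"]) / s,
        "unk":  (m1["unk"] * m2["unk"]) / s,
    }
```

Combining a deep-ISM prior mass with a geometric-ISM measurement mass sharpens the cell's belief: agreeing evidence raises the occupied mass while ignorance shrinks with every observation.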
|
|
12:45-13:00, Paper TuBT20.5 | |
>Online Exploration of Tunnel Networks Leveraging Topological CNN-Based World Predictions |
|
Saroya, Manish | Oregon State University |
Best, Graeme | Oregon State University |
Hollinger, Geoffrey | Oregon State University |
Keywords: Novel Deep Learning Methods, Reactive and Sensor-Based Planning, Mining Robotics
Abstract: Robotic exploration requires adaptively selecting navigation goals that result in the rapid discovery and mapping of an unknown world. In many real-world environments, subtle structural cues can provide insight about the unexplored world, which may be exploited by a decision maker to improve the speed of exploration. In sparse subterranean tunnel networks, these cues come in the form of topological features, such as loops or dead-ends, that are often common across similar environments. We propose a method for learning these topological features using techniques borrowed from topological image segmentation and image inpainting to learn from a database of worlds. These world predictions then inform a frontier-based exploration policy. Our simulated experiments with a set of real-world mine environments and a database of procedurally-generated artificial tunnel networks demonstrate a substantial increase in the rate of area explored compared to techniques that do not attempt to predict and exploit topological features of the unexplored world.
|
|
13:00-13:15, Paper TuBT20.6 | |
>Building Energy-Cost Maps from Aerial Images and Ground Robot Measurements with Semi-Supervised Deep Learning |
|
Wei, Minghan | University of Minnesota |
Isler, Volkan | University of Minnesota |
Keywords: Energy and Environment-Aware Automation, Motion and Path Planning, Field Robots
Abstract: Planning energy-efficient paths is an important capability in many robotics applications. Obtaining an energy-cost map for a given environment enables planning such paths between any pair of locations within the environment. However, efficiently building an energy map is challenging, especially for large environments. Some prior work uses physics-based laws (friction and gravity forces) to model energy costs across environments. These methods work well for uniform surfaces, but they do not generalize well to uneven terrains. In this paper, we present a method that addresses this mapping problem in a data-driven fashion for cases where an aerial image of the environment can be obtained. To efficiently build an energy-cost map, we train a neural network that learns to predict complete energy maps by combining aerial images and sparse ground-robot energy-consumption measurements. Field experiments are performed to validate our results. We show that our method can efficiently and accurately build an energy-cost map, even across different types of ground robots.
|
|
TuBT21 |
Room T21 |
Learning for Navigation |
Regular session |
Chair: Michmizos, Konstantinos | Rutgers University |
Co-Chair: Kanezaki, Asako | National Institute of Advanced Industrial Science and Technology |
|
11:45-12:00, Paper TuBT21.1 | |
>Learning Local Planners for Human-Aware Navigation in Indoor Environments |
> Video Attachment
|
|
Güldenring, Ronja | Mobile Industrial Robots ApS |
Görner, Michael | University of Hamburg |
Hendrich, Norman | University of Hamburg |
Jacobsen, Niels Jul | Mobile Industrial Robots A/S |
Zhang, Jianwei | University of Hamburg |
Keywords: Autonomous Vehicle Navigation, Reinforcement Learning, Intelligent Transportation Systems
Abstract: Established indoor robot navigation frameworks build on the separation between global and local planners. Whereas global planners rely on traditional graph search algorithms, local planners are expected to handle driving dynamics and resolve minor conflicts. We present a system to train neural-network policies for such a local planner component, explicitly accounting for humans navigating the space. DRL-agents are trained in randomized virtual 2D environments with simulated human interaction. Transferability to the real world is achieved through sufficiently abstract state representations, relying on 2D lidar. The trained agents can be deployed as a drop-in replacement for other local planners and significantly improve on traditional implementations. Performance is demonstrated on a MiR-100 transport robot.
|
|
12:00-12:15, Paper TuBT21.2 | |
>Efficient Exploration in Constrained Environments with Goal-Oriented Reference Path |
|
Ota, Kei | Mitsubishi Electric |
Sasaki, Yoko | National Inst. of Advanced Industrial Science and Technology |
Jha, Devesh | Mitsubishi Electric Research Laboratories |
Yoshiyasu, Yusuke | CNRS-AIST JRL |
Kanezaki, Asako | National Institute of Advanced Industrial Science and Technology |
Keywords: Reinforcement Learning, Motion and Path Planning, AI-Based Methods
Abstract: In this paper, we consider the problem of building learning agents that can efficiently learn to navigate in constrained environments. The main goal is to design agents that can efficiently learn to understand and generalize to different environments using high-dimensional inputs (a 2D map), while following feasible paths that avoid obstacles in obstacle-cluttered environments. To achieve this, we make use of traditional path planning algorithms, supervised learning, and reinforcement learning algorithms in a synergistic way. The key idea is to decouple the navigation problem into planning and control; the former is achieved by supervised learning, whereas the latter is done by reinforcement learning. Specifically, we train a deep convolutional network that can predict collision-free paths based on a map of the environment; this is then used by a reinforcement learning algorithm to learn to closely follow the path. This allows the trained agent to achieve good generalization while learning faster. We test our proposed method in the recently proposed Safety Gym suite, which allows testing of safety constraints during the training of learning agents. We compare our proposed method with existing work and show that our method consistently improves sample efficiency and the capability to generalize to novel environments.
|
|
12:15-12:30, Paper TuBT21.3 | |
>Multiplicative Controller Fusion: Leveraging Algorithmic Priors for Sample-Efficient Reinforcement Learning and Safe Sim-To-Real Transfer |
> Video Attachment
|
|
Rana, Krishan | Queensland University of Technology |
Dasagi, Vibhavari | Queensland University of Technology |
Talbot, Ben | Queensland University of Technology |
Milford, Michael J | Queensland University of Technology |
Sünderhauf, Niko | Queensland University of Technology |
Keywords: Reactive and Sensor-Based Planning, Reinforcement Learning, Collision Avoidance
Abstract: Learning-based approaches often outperform hand-coded algorithmic solutions for many problems in robotics. However, learning long-horizon tasks on real robot hardware can be intractable, and transferring a learned policy from simulation to reality is still extremely challenging. We present a novel approach to model-free reinforcement learning that can leverage existing sub-optimal solutions as an algorithmic prior during training and deployment. During training, our gated fusion approach enables the prior to guide the initial stages of exploration, increasing sample efficiency and enabling learning from sparse long-horizon reward signals. Importantly, the policy can learn to improve beyond the performance of the sub-optimal prior, since the prior's influence is annealed gradually. During deployment, the policy's uncertainty provides a reliable strategy for transferring a simulation-trained policy to the real world by falling back to the prior controller in uncertain states. We show the efficacy of our Multiplicative Controller Fusion approach on the task of robot navigation and demonstrate safe transfer from simulation to the real world without any fine-tuning.
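Multiplicative fusion of a prior controller and a learned policy can be illustrated with 1-D Gaussian action distributions: the normalized product of two Gaussians is again Gaussian, with a precision-weighted mean. This is a generic product-of-Gaussians sketch under that assumption, not the paper's code:

```python
# Fuse a prior controller and a learned policy, each modeled as a 1-D
# Gaussian over an action dimension. The product distribution weights
# each component by its precision (inverse variance).
def fuse_gaussians(mu_prior, var_prior, mu_policy, var_policy):
    var = 1.0 / (1.0 / var_prior + 1.0 / var_policy)
    mu = var * (mu_prior / var_prior + mu_policy / var_policy)
    return mu, var
```

A useful property for sim-to-real transfer: when the learned policy is very uncertain (large variance), the fused action collapses toward the confident prior, which matches the fallback behavior described above.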
|
|
12:30-12:45, Paper TuBT21.4 | |
>Reinforcement Learning-Based Hierarchical Control for Path Following of a Salamander-Like Robot |
> Video Attachment
|
|
Zhang, Xueyou | Nankai University |
Guo, Xian | Nankai University |
Fang, Yongchun | Nankai University |
Zhu, Wei | Nankai University |
Keywords: Biologically-Inspired Robots, Reinforcement Learning, Legged Robots
Abstract: Path following is a challenging task for legged robots. In this paper, we present a hierarchical control architecture for path following of a quadruped salamander-like robot, in which the tracking problem is decomposed into two sub-tasks: high-level policy learning based on the framework of reinforcement learning (RL), and low-level traditional controller design. More specifically, the high-level policy is learned in a physics simulator with a low-level controller designed in advance. To improve tracking accuracy and eliminate static errors, a Soft Actor-Critic algorithm with state integral compensation is proposed. Additionally, to enhance generalization and transferability, a compact state representation is proposed that contains only the information of the target path and abstract actions similar to front-back and left-right. The proposed algorithm is trained offline in the simulation environment and tested on the self-developed real quadruped salamander-like robot on different path following tasks. Simulation and experimental results validate the satisfactory performance of the proposed method.
|
|
12:45-13:00, Paper TuBT21.5 | |
>Hierarchical Reinforcement Learning Method for Autonomous Vehicle Behavior Planning |
> Video Attachment
|
|
Qiao, Zhiqian | Carnegie Mellon University |
Tyree, Zachariah | General Motors Research and Development |
Mudalige, Priyantha | General Motors |
Schneider, Jeff | Carnegie Mellon University |
Dolan, John M. | Carnegie Mellon University |
Keywords: Behavior-Based Systems, Reinforcement Learning, Autonomous Agents
Abstract: Behavioral decision making is an important aspect of autonomous vehicles (AV). In this work, we propose a behavior planning structure based on hierarchical reinforcement learning (HRL) which is capable of performing autonomous vehicle planning tasks in simulated environments with multiple sub-goals. In this hierarchical structure, the network is capable of 1) learning one task with multiple sub-goals simultaneously; 2) extracting attentions of states according to changing sub-goals during the learning process; 3) reusing the well-trained network of sub-goals for other tasks with the same sub-goals. A hybrid reward mechanism is designed for different hierarchical layers in the proposed HRL structure. Compared to traditional RL methods, our algorithm is more sample-efficient, since its modular design allows reusing the policies of sub-goals across similar tasks for various transportation scenarios. The results show that the proposed method converges to an optimal policy faster than traditional RL methods.
|
|
13:00-13:15, Paper TuBT21.6 | |
>Reinforcement Co-Learning of Deep and Spiking Neural Networks for Energy-Efficient Mapless Navigation with Neuromorphic Hardware |
> Video Attachment
|
|
Tang, Guangzhi | Rutgers University |
Kumar, Neelesh | Rutgers University |
Michmizos, Konstantinos | Rutgers University |
Keywords: Neurorobotics, Reinforcement Learning, Motion and Path Planning
Abstract: Energy-efficient mapless navigation is crucial for mobile robots as they explore unknown environments with limited on-board resources. Although the recent deep reinforcement learning (DRL) approaches have been successfully applied to navigation, their high energy consumption limits their use in several robotic applications. Here, we propose a neuromorphic approach that combines the energy-efficiency of spiking neural networks with the optimality of DRL and benchmark it in learning control policies for mapless navigation. Our hybrid framework, spiking deep deterministic policy gradient (SDDPG), consists of a spiking actor network (SAN) and a deep critic network, where the two networks were trained jointly using gradient descent. The co-learning enabled synergistic information exchange between the two networks, allowing them to overcome each other's limitations through a shared representation learning. To evaluate our approach, we deployed the trained SAN on Intel's Loihi neuromorphic processor. When validated on simulated and real-world complex environments, our method on Loihi consumed 75 times less energy per inference as compared to DDPG on Jetson TX2, and also exhibited a higher rate of successful navigation to the goal, which ranged from 1% to 4.2% and depended on the forward-propagation timestep size. These results reinforce our ongoing efforts to design brain-inspired algorithms for controlling autonomous robots with neuromorphic hardware.
|
|
TuBT22 |
Room T22 |
RL for Navigation and Locomotion |
Regular session |
Chair: Meger, David Paul | McGill University |
Co-Chair: Ruiz-del-Solar, Javier | Universidad De Chile |
|
11:45-12:00, Paper TuBT22.1 | |
>Learning Agile Locomotion Via Adversarial Training |
> Video Attachment
|
|
Tang, Yujin | Google |
Tan, Jie | Google |
Harada, Tatsuya | The University of Tokyo |
Keywords: Reinforcement Learning, Multi-Robot Systems, Legged Robots
Abstract: Developing controllers for agile locomotion is a long-standing challenge for legged robots. Reinforcement learning (RL) and Evolution Strategy (ES) hold the promise of automating the design process of such controllers. However, dedicated and careful human effort is required to design training environments to promote agility. In this paper, we present a multi-agent learning system, in which a quadruped robot (protagonist) learns to chase another robot (adversary) while the latter learns to escape. We find that this adversarial training process not only encourages agile behaviors but also effectively alleviates the laborious environment design effort. In contrast to prior works that used only one adversary, we find that training an ensemble of adversaries, each of which specializes in a different escaping strategy, is essential for the protagonist to master agility. Through extensive experiments, we show that the locomotion controller learned with adversarial training significantly outperforms carefully designed baselines.
|
|
12:00-12:15, Paper TuBT22.2 | |
>Stochastic Grounded Action Transformation for Robot Learning in Simulation |
> Video Attachment
|
|
Desai, Siddharth | The University of Texas at Austin |
Karnan, Haresh | The University of Texas at Austin |
Hanna, Josiah | The University of Texas at Austin |
Warnell, Garrett | U.S. Army Research Laboratory |
Stone, Peter | University of Texas at Austin |
Keywords: Reinforcement Learning, Humanoid and Bipedal Locomotion, Transfer Learning
Abstract: Robot control policies learned in simulation do not often transfer well to the real world. Many existing solutions to this sim-to-real problem, such as the Grounded Action Transformation (GAT) algorithm, seek to correct for—or ground—these differences by matching the simulator to the real world. However, the efficacy of these approaches is limited if they do not explicitly account for stochasticity in the target environment. In this work, we analyze the problems associated with grounding a deterministic simulator in a stochastic real world environment, and we present examples where GAT fails to transfer a good policy due to stochastic transitions in the target domain. In response, we introduce the Stochastic Grounded Action Transformation (SGAT) algorithm, which models this stochasticity when grounding the simulator. We find experimentally—for both simulated and physical target domains—that SGAT can find policies that are robust to stochasticity in the target domain.
|
|
12:15-12:30, Paper TuBT22.3 | |
>Learning Domain Randomization Distributions for Training Robust Locomotion Policies |
|
Mozifian, Melissa | McGill University |
Gamboa Higuera, Juan Camilo | McGill University |
Meger, David Paul | McGill University |
Dudek, Gregory | McGill University |
Keywords: Reinforcement Learning, Transfer Learning
Abstract: This paper considers the problem of learning behaviors in simulation without knowledge of the precise dynamical properties of the target robot platform(s). In this context, our learning goal is to jointly maximize task efficacy in each environment considered and generalization across the widest possible range of environmental conditions. The physical parameters of the simulator are modified by a component of our technique that learns the Domain Randomization (DR) distribution that is appropriate at each learning epoch to maximally challenge the current behavior policy, without being overly challenging, which can hinder learning progress. This so-called sweet-spot distribution is a selection of simulated domains with the following properties: 1) the trained policy should be successful in environments sampled from the domain randomization distribution; and 2) the DR distribution is made as wide as possible, to increase variability in the environments. These properties aim to ensure that the trajectories encountered in the target system are close to those observed during training, as existing methods in machine learning are better suited for interpolation than extrapolation. We show how adapting the DR distribution while training context-conditioned policies results in improvements in jump-start and asymptotic performance when transferring a learned policy to the target environment. Our code is available at https://github.com/melfm/lsdr.
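The sweet-spot intuition described in the abstract above can be sketched with a toy heuristic: a uniform randomization range over a single simulator parameter that widens when the policy copes well and shrinks when it struggles. The threshold and step sizes are made-up assumptions; the paper learns the distribution with a different, optimization-based objective:

```python
import random

def update_dr_bounds(low, high, success_rate, widen=0.1, shrink=0.1, target=0.7):
    """Widen a uniform domain-randomization range when the current policy
    copes well, shrink it when the sampled domains are overly challenging."""
    span = high - low
    if success_rate >= target:
        low -= widen * span
        high += widen * span
    else:
        low += shrink * span
        high -= shrink * span
    return low, high

def sample_param(low, high):
    """Draw one simulator parameter (e.g. ground friction) for the next episode."""
    return random.uniform(low, high)

# e.g. a friction range around [0.8, 1.2], widened after a successful epoch:
low, high = update_dr_bounds(0.8, 1.2, success_rate=0.9)
print(round(low, 2), round(high, 2))  # 0.76 1.24
```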
|
|
12:30-12:45, Paper TuBT22.4 | |
>Robust RL-Based Map-Less Local Planning: Using 2D Point Clouds As Observations |
|
Leiva, Francisco | Universidad De Chile |
Ruiz-del-Solar, Javier | Universidad De Chile |
Keywords: Reinforcement Learning, Reactive and Sensor-Based Planning
Abstract: In this paper, we propose a robust approach to train map-less navigation policies that rely on variable-size 2D point clouds, using Deep Reinforcement Learning (Deep RL). The navigation policies are trained in simulation using the DDPG algorithm. Through experimental evaluations in simulated and real-world environments, we showcase the benefits of our approach compared to more classical RL-based formulations: better performance, the ability to interchange sensors at deployment time, and the ability to easily augment environment observability through sensor preprocessing and/or sensor fusion. Videos showing trajectories traversed by agents trained with the proposed approach can be found at https://youtu.be/AzvRJyN6rwQ.
|
|
12:45-13:00, Paper TuBT22.5 | |
>Deep Reinforcement Learning for Safe Local Planning of a Ground Vehicle in Unknown Rough Terrain |
> Video Attachment
|
|
Josef, Shirel | Technion - Israel Institute of Technology |
Degani, Amir | Technion - Israel Institute of Technology |
Keywords: Reinforcement Learning, Autonomous Vehicle Navigation, Motion and Path Planning
Abstract: Safe unmanned ground vehicle navigation in unknown rough terrain is crucial for various tasks such as exploration, search and rescue, and agriculture. Offline global planning is often not possible when operating in harsh, unknown environments, and therefore, online local planning must be used. Most online rough-terrain local planners require heavy computational resources, used for optimal trajectory searching and for estimating vehicle orientation at positions within the range of the sensors. In this work, we present a deep reinforcement learning approach for local planning in unknown rough terrain with zero-range to local-range sensing, achieving superior results compared to potential fields or local motion-planning search-space methods. Our approach includes reward shaping, which provides a dense reward signal. We incorporate self-attention modules into our deep reinforcement learning architecture in order to increase the explainability of the learnt policy. The attention modules provide insight regarding the relative importance of sensed inputs during training and planning. We extend and validate our approach in a dynamic simulation, demonstrating successful safe local planning in environments with continuous terrain and a variety of discrete obstacles. By adding the geometric transformation between two successive timesteps and the corresponding action as inputs, our architecture is able to navigate on surfaces with different levels of friction.
|
|
13:00-13:15, Paper TuBT22.6 | |
>Exploration Strategy Based on Validity of Actions in Deep Reinforcement Learning |
|
Yoon, Hyungsuk | Seoul National University |
Lee, Sang-Hyun | Seoul National University |
Seo, Seung-Woo | Seoul National University |
Keywords: Reinforcement Learning, Autonomous Vehicle Navigation, Motion and Path Planning
Abstract: How to explore environments is one of the most critical factors for the performance of an agent in reinforcement learning. Conventional exploration strategies such as the epsilon-greedy algorithm and Gaussian exploration noise simply depend on pure randomness. However, an agent must consider its training progress and the long-term usefulness of actions to efficiently explore complex environments, which remains a major challenge in reinforcement learning. To address this challenge, we propose a novel exploration method that selects actions based on their validity. The key idea behind our method is to estimate the validity of actions by leveraging the zero-avoiding property of the Kullback-Leibler divergence to comprehensively evaluate actions in terms of both exploration and exploitation. We also introduce a framework that allows an agent to explore efficiently in environments where reward is sparse or cannot be defined intuitively. The framework uses expert demonstrations to guide an agent to visit task-relevant state space by combining our exploration strategy with imitation learning. We demonstrate our exploration strategy on several tasks ranging from classical control tasks to high-dimensional urban autonomous driving scenarios at roundabouts. The results show that our exploration strategy encourages an agent to visit task-relevant state space to enhance the validity of actions, outperforming several previous methods.
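The zero-avoiding property mentioned in the abstract above is a standard fact about the forward KL divergence: D(p||q) is infinite whenever q assigns zero mass to an outcome that p can produce, so minimizing it forces q to cover all of p's support. A small self-contained demonstration (the distributions are made up for illustration):

```python
import math

def kl(p, q):
    """Discrete KL divergence D(p || q); infinite when p > 0 where q = 0."""
    total = 0.0
    for pi, qi in zip(p, q):
        if pi > 0:
            if qi == 0:
                return math.inf
            total += pi * math.log(pi / qi)
    return total

p = [0.5, 0.4, 0.1]  # puts mass on all three outcomes
q = [0.6, 0.4, 0.0]  # ignores the last outcome

print(kl(p, q))            # inf: q fails to cover p's support
print(round(kl(q, p), 3))  # 0.109: the reverse direction stays finite
```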
|
|
13:00-13:15, Paper TuBT22.7 | |
>Autonomous Exploration under Uncertainty Via Deep Reinforcement Learning on Graphs |
> Video Attachment
|
|
Chen, Fanfei | Stevens Institute of Technology |
Martin, John D. | Stevens Institute of Technology |
Huang, Yewei | Stevens Institute of Technology |
Wang, Jinkun | Stevens Institute of Technology |
Englot, Brendan | Stevens Institute of Technology |
Keywords: Reactive and Sensor-Based Planning, Reinforcement Learning, Sensor-based Control
Abstract: We consider an autonomous exploration problem in which a range sensing mobile robot is tasked with accurately mapping the landmarks in an a priori unknown environment efficiently in real-time; it must choose sensing actions that both curb localization uncertainty and achieve information gain. For this problem, belief space planning methods that forward-simulate robot sensing and estimation may often fail in real-time implementation, scaling poorly with increasing size of the state, belief and action spaces. We propose a novel approach that uses graph neural networks (GNNs) in conjunction with deep reinforcement learning (DRL), enabling decision-making over graphs containing exploration information to predict a robot's optimal sensing action in belief space. The policy, which is trained in different random environments without human intervention, offers a real-time, scalable decision-making process whose high-performance exploratory sensing actions yield accurate maps and high rates of information gain.
|
|
TuBT23 |
Room T23 |
Semantic Mapping and Navigation |
Regular session |
Chair: Wang, Danwei | Nanyang Technological University |
Co-Chair: Buerger, Stephen P. | Sandia National Laboratories |
|
11:45-12:00, Paper TuBT23.1 | |
>No Map, No Problem: A Local Sensing Approach for Navigation in Human-Made Spaces Using Signs |
|
Liang, Claire Yilan | Cornell University |
Knepper, Ross | -- |
Pokorny, Florian T. | KTH Royal Institute of Technology |
Keywords: Reactive and Sensor-Based Planning, Human-Centered Robotics, Service Robotics
Abstract: Robot navigation in human spaces today largely relies on the construction of precise geometric maps and a global motion plan. In this work, we navigate with only local sensing by using available signage --- as designed for humans --- in human-made environments such as airports. We propose a formalization of "signage" and define four levels of signage that we call complete, fully-specified, consistent and valid. The signage formalization can be used on many space skeletonizations, but we specifically provide an approach for navigation on the medial axis. We prove that we can achieve global completeness guarantees without requiring a global map to plan. We validate with two sets of experiments: (1) real-world airports and their real signs and (2) real New York City neighborhoods. In (1), we show that we can use real-world airport signage to improve on a simple random-walk approach, and we augment signage to further examine signs' impact on trajectory length. In (2), we navigate in variously sized subsets of New York City to show that, since we only use local sensing, our approach scales linearly with trajectory length rather than with free-space area.
|
|
12:00-12:15, Paper TuBT23.2 | |
>Rapid Autonomous Semantic Mapping |
|
Parikh, Anup | Sandia National Laboratories |
Koch, Mark | Sandia National Laboratories |
Blada, Timothy | Sandia National Laboratories |
Buerger, Stephen P. | Sandia National Laboratories |
Keywords: Mapping, Semantic Scene Understanding, Task Planning
Abstract: A semantic understanding of the environment is needed to enable high-level autonomy in robotic systems. Recent results have demonstrated rapid progress in underlying technology areas, but few results have been reported on end-to-end systems that enable effective autonomous perception in complex environments. In this paper, we describe an approach for rapidly and autonomously mapping unknown environments with integrated semantic and geometric information. We use surfel-based RGB-D SLAM techniques, with incremental object segmentation and classification methods to update the map in real time. Information-theoretic and heuristic measures are used to quickly plan sensor motion and drive down map uncertainty. Preliminary experimental results in simple and cluttered environments are reported.
|
|
12:15-12:30, Paper TuBT23.3 | |
>Lifelong Update of Semantic Maps in Dynamic Environments |
|
Narayana, Manjunath | IRobot Corp |
Kolling, Andreas | Amazon |
Nardelli, Lucio | IRobot |
Fong, Philip | IRobot |
Keywords: Mapping, SLAM, Visual-Based Navigation
Abstract: A robot understands its world through the raw information it senses from its surroundings. This raw information is not suitable as a shared representation between the robot and its user. A semantic map, containing high-level information that both the robot and user understand, is better suited to be a shared representation. We use the semantic map as the user-facing interface on our fleet of floor-cleaning robots. Jitter in the robot's sensed raw map, dynamic objects in the environment, and exploration of new space by the robot are common challenges for robots. Solving these challenges effectively in the context of semantic maps is key to enabling semantic maps for lifelong mapping. First, as a robot senses new changes and alters its raw map in successive missions, the semantics must be updated appropriately. We update the map using a spatial transfer of semantics. Second, it is important to keep semantics and their relative constraints consistent even in the presence of dynamic objects. Inconsistencies are automatically determined and resolved through the introduction of a map layer of meta-semantics. Finally, a discovery phase allows the semantic map to be updated with new semantics whenever the robot uncovers new information. Deployed commercially on thousands of floor-cleaning robots in real homes, our user-facing semantic maps provide an intuitive user experience through a lifelong mapping robot.
|
|
12:30-12:45, Paper TuBT23.4 | |
>Efficient Object Search through Probability-Based Viewpoint Selection |
> Video Attachment
|
|
Hernandez Silva, Alejandra Carolina | University Carlos III of Madrid |
Derner, Erik | Czech Technical University in Prague |
Gomez, Clara | University Carlos III of Madrid |
Barber, Ramon | Universidad Carlos III of Madrid |
Babuska, Robert | Delft University of Technology |
Keywords: Service Robots, Semantic Scene Understanding
Abstract: The ability to search for objects is a precondition for various robotic tasks. In this paper, we address the problem of finding objects in partially known indoor environments. Using the knowledge of the floor plan and the mapped objects, we consider object–object and object–room co-occurrences as prior information for identifying promising locations where an unmapped object can be present. We propose an efficient search strategy that determines the best pose of the robot based on the analysis of the candidate locations. We optimize the probability of finding the target object and the distance travelled through a cost function. To evaluate our method, several experiments in simulated and real-world environments were performed. The results show that the robot successfully finds the target object in the environment while covering only a small portion of the search space. The real-world experiments with the TurtleBot 2 mobile robot validate the proposed approach and demonstrate that the method performs well also in real environments.
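The abstract above states that the probability of finding the target and the distance travelled are combined in a cost function, without giving its form. The sketch below uses a hypothetical linear trade-off (the weight `w` and the candidate tuples are made-up examples) just to show what such probability-based viewpoint selection could look like:

```python
def best_viewpoint(candidates, w=0.1):
    """Pick the candidate viewpoint maximizing a simple utility:
    probability of finding the target minus a distance penalty.

    `candidates` is a list of (label, p_find, distance) tuples; the linear
    form and the weight `w` are assumptions, not the paper's cost function.
    """
    return max(candidates, key=lambda c: c[1] - w * c[2])

views = [("desk", 0.6, 2.0), ("shelf", 0.8, 5.0), ("table", 0.5, 0.5)]
print(best_viewpoint(views))  # ('table', 0.5, 0.5): close and reasonably likely
```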
|
|
12:45-13:00, Paper TuBT23.5 | |
>Dense Incremental Metric-Semantic Mapping Via Sparse Gaussian Process Regression |
> Video Attachment
|
|
Zobeidi, Ehsan | University of California San Diego |
Koppel, Alec | University of Pennsylvania |
Atanasov, Nikolay | University of California, San Diego |
Keywords: Mapping, Semantic Scene Understanding, RGB-D Perception
Abstract: We develop an online probabilistic metric-semantic mapping approach for autonomous robots relying on streaming RGB-D observations. We cast this problem as a Bayesian inference task, requiring encoding both the geometric surfaces and semantic labels (e.g., chair, table, wall) of the unknown environment. We propose an online Gaussian Process (GP) training and inference approach, which avoids the complexity of GP classification by regressing a truncated signed distance function representation of the regions occupied by different semantic classes. Online regression is enabled through sparse GP approximation, compressing the training data to a finite set of inducing points, and through spatial domain partitioning into an Octree data structure with overlapping leaves. Our experiments demonstrate the effectiveness of this technique for large-scale probabilistic metric-semantic mapping of 3D environments. A distinguishing feature of our approach is that the generated maps contain full continuous distributional information about the geometric surfaces and semantic labels, making them appropriate for uncertainty-aware planning.
|
|
13:00-13:15, Paper TuBT23.6 | |
>Collaborative Semantic Perception and Relative Localization Based on Map Matching |
|
Yue, Yufeng | Nanyang Technological University |
Zhao, Chunyang | Nanyang Technological University |
Wen, Mingxing | Nanyang Technological University |
Wu, Zhenyu | Nanyang Technological University |
Wang, Danwei | Nanyang Technological University |
Keywords: Mapping, Semantic Scene Understanding, Cooperating Robots
Abstract: In order to enable a team of robots to operate successfully, retrieving the accurate relative transformation between robots is a fundamental requirement. So far, most research on relative localization has mainly focused on geometric features such as points, lines and planes. To address this limitation, collaborative semantic map matching is proposed to perform semantic perception and relative localization. This paper performs semantic perception, probabilistic data association and nonlinear optimization within an integrated framework. Since the voxel correspondence between partial maps is a hidden variable, a probabilistic semantic data association algorithm is proposed based on Expectation-Maximization. Instead of specifying hard geometric data association, semantic and geometric associations are jointly updated and estimated. Experimental verification on the SemanticKITTI benchmark demonstrates the improved robustness and accuracy.
|
|
TuCT1 |
Room T1 |
Performance Evaluation and Benchmarking |
Regular session |
Chair: Ye, Cang | Virginia Commonwealth University |
Co-Chair: Paull, Liam | Université De Montréal |
|
14:00-14:15, Paper TuCT1.1 | |
>3D Odor Source Localization Using a Micro Aerial Vehicle: System Design and Performance Evaluation |
> Video Attachment
|
|
Ercolani, Chiara | EPFL |
Martinoli, Alcherio | EPFL |
Keywords: Performance Evaluation and Benchmarking, Aerial Systems: Applications, Environment Monitoring and Management
Abstract: Finding chemical compounds in the air has applications in situations such as gas leaks, environmental emergencies and toxic chemical dispersion. Enabling robots to undertake this task would provide a powerful tool to prevent dangerous situations and assist humans when emergencies arise. While the dispersion of chemical compounds in the air is intrinsically a three-dimensional (3D) phenomenon, the scientific community has so far primarily tackled two-dimensional (2D) scenarios. This is mainly due to the challenge of developing a platform able to successfully sample chemical compounds throughout a 3D space. In this paper, a 3D bio-inspired algorithm for odor source localization, previously validated in a controlled physical environment leveraging a robotic manipulator, is adapted for deployment on a micro aerial vehicle equipped with an odor sensor. Given the effect that the propellers have on the gas distribution, the algorithmic adaptation focuses on enhancing the sensing strategy of the platform. Additionally, two sensor placement configurations are assessed to determine which one yields the best sensing results. A performance evaluation in different environmental scenarios is carried out to test the robustness of the implementation. Two different localization systems are used for the performance evaluation experiments to quantify the impact of localization accuracy on the algorithm's outcome.
|
|
14:15-14:30, Paper TuCT1.2 | |
>BARK: Open Behavior Benchmarking in Multi-Agent Environments |
|
Bernhard, Julian | Fortiss GmbH |
Esterle, Klemens | Fortiss GmbH |
Hart, Patrick | Fortiss GmbH |
Kessler, Tobias | Fortiss GmbH |
Keywords: Performance Evaluation and Benchmarking, Agent-Based Systems, Planning, Scheduling and Coordination
Abstract: Predicting and planning interactive behaviors in complex traffic situations presents a challenging task. Especially in scenarios with multiple densely interacting traffic participants, autonomous vehicles still struggle to interpret situations and to eventually achieve their own driving goal. As driving tests are costly and challenging scenarios are hard to find and reproduce, simulation is widely used to develop, test, and benchmark behavior models. However, most simulations rely on datasets and simplistic behavior models for traffic participants and do not cover the full complexity. In this work, we introduce our open-source behavior benchmarking environment BARK, which is designed to mitigate the above-stated shortcomings. In BARK, behavior models are (re-)used for planning, prediction, and simulation. Currently, a wide range of models is available, such as an interaction-aware Monte Carlo Tree Search and a Reinforcement Learning-based behavior model. We use a public dataset and sampling-based scenario generation to show the interchangeability of the behavior models. We evaluate how well the models cope with interactions and how robust they are towards exchanging behavior models.
|
|
14:30-14:45, Paper TuCT1.3 | |
>The VCU-RVI Benchmark: Evaluating Visual Inertial Odometry for Indoor Navigation Applications with an RGB-D Camera |
> Video Attachment
|
|
Zhang, He | Virginia Commonwealth University |
Jin, Lingqiu | Virginia Commonwealth University |
Ye, Cang | Virginia Commonwealth University |
Keywords: Performance Evaluation and Benchmarking, Visual-Based Navigation, SLAM
Abstract: This paper presents VCU-RVI, a new visual inertial odometry (VIO) benchmark with a set of diverse data sequences in different indoor scenarios. The benchmark was captured using a Structure Core (SC) sensor, consisting of an RGB-D camera and an IMU. It provides aligned color and depth images with 640x480 resolution at 30 Hz. The camera’s data is synchronized with the IMU’s data at 100 Hz. Thirty-nine data sequences covering a total trajectory of ~3.7 kilometers were recorded in various indoor environments by two experimental setups: hand-holding the SC sensor or installing it on a wheeled robot. For the data sequences from the handheld SC, some were recorded in our laboratory under three challenging conditions: fast sensor motion, radical illumination changes, and dynamic objects; the rest were collected in various indoor spaces outside the laboratory in the East Engineering Building, including corridors, halls, and stairways, during long-distance navigation scenarios. For the data sequences captured using the wheeled robot, half were recorded with sufficient IMU excitation at the beginning of the sequence, to meet the need of testing VIO methods that require sufficient motion for initialization. We placed three bumpers on the floor of the lab to create an uneven terrain and make the robot’s motion 6-DOF. The sequences also include data collected from navigational courses with a long trajectory. For trajectory evaluation, a motion capture system is used to generate accurate pose data (at a rate of 120 Hz), which is used as the ground truth. We conducted experiments to evaluate state-of-the-art VIO algorithms using our benchmark. These algorithms, together with the evaluation tools and the VCU-RVI dataset, are made publicly available.
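Trajectory evaluation against motion-capture ground truth, as described above, is commonly reported as absolute trajectory error (ATE) RMSE over position estimates. The helper below is a generic sketch of that metric, not the benchmark's own evaluation tool, and assumes the two trajectories are already time-synchronized and spatially aligned:

```python
import math

def ate_rmse(gt, est):
    """Root-mean-square absolute trajectory error between time-aligned
    ground-truth and estimated positions, given as lists of (x, y, z)."""
    assert len(gt) == len(est) and gt, "trajectories must be aligned and non-empty"
    se = sum((gx - ex) ** 2 + (gy - ey) ** 2 + (gz - ez) ** 2
             for (gx, gy, gz), (ex, ey, ez) in zip(gt, est))
    return math.sqrt(se / len(gt))

gt  = [(0, 0, 0), (1, 0, 0), (2, 0, 0)]      # e.g. motion-capture poses
est = [(0, 0, 0), (1.1, 0, 0), (2, 0.2, 0)]  # e.g. a VIO estimate
print(round(ate_rmse(gt, est), 4))  # 0.1291
```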
|
|
14:45-15:00, Paper TuCT1.4 | |
>A Framework for Human-Robot Interaction User Studies |
> Video Attachment
|
|
Rajendran, Vidyasagar | University of Waterloo |
Carreno, Pamela | Monash University |
Fisher, Wesley | University of Waterloo |
Werner, Alexander | University of Waterloo |
Kulic, Dana | Monash University |
Keywords: Performance Evaluation and Benchmarking, Human-Centered Robotics, Physical Human-Robot Interaction
Abstract: Human-Robot Interaction (HRI) user studies are challenging to evaluate and compare due to a lack of standardization and the infrastructure required to implement each study. The lack of experimental infrastructure also makes it difficult to systematically evaluate the impact of individual components (e.g., the quality of perception software) on overall system performance. This work proposes a framework to ease the implementation and reproducibility of human-robot interaction user studies. The framework utilizes ROS middleware and is implemented with four modules: perception, decision, action, and metrics. The perception module aggregates sensor data to be used by the decision and action modules. The decision module is the task-level executive and can be designed by the HRI researcher for their specific task. The action module takes subtask requests from the decision module and breaks them down into motion primitives for execution on the robot. The metrics module tracks and generates quantitative metrics for the study. The framework is implemented with modular interfaces to allow for alternate implementations within each module and can be generalized for a variety of tasks and human/robot roles. The framework is illustrated through an example scenario involving a human and a Franka Emika Panda arm collaboratively assembling a toolbox together.
|
|
15:00-15:15, Paper TuCT1.5 | |
>Autonomous Vehicle Benchmarking Using Unbiased Metrics |
> Video Attachment
|
|
Paz, David | University of California, San Diego |
Lai, Po-Jung | University of California San Diego |
Chan, Nathan | UCSD |
Jiang, Yuqing | UC San Diego |
Christensen, Henrik Iskov | UC San Diego |
Keywords: Performance Evaluation and Benchmarking, Autonomous Vehicle Navigation, Robot Safety
Abstract: With the recent development of autonomous vehicle technology, there have been active efforts to deploy this technology at different scales, including urban and highway driving. While many of the prototypes showcased have been shown to operate under specific cases, little effort has been made to better understand their shortcomings and generalizability to new areas. Distance, uptime and the number of manual disengagements performed during autonomous driving provide a high-level idea of the performance of an autonomous system, but without proper data normalization, testing location information, and the number of vehicles involved in testing, disengagement reports alone do not fully capture system performance and robustness. Thus, in this study a complete set of metrics is applied for benchmarking autonomous vehicle systems in a variety of scenarios, and these metrics can be extended for comparison with human drivers and other autonomous vehicle systems. These metrics have been used to benchmark UC San Diego’s autonomous vehicle platforms during early deployments for micro-transit and autonomous mail delivery applications.
|
|
15:15-15:30, Paper TuCT1.6 | |
>Integrated Benchmarking and Design for Reproducible and Accessible Evaluation of Robotic Agents |
|
Tani, Jacopo | Swiss Federal Institute of Technology in Zurich (ETH Zurich) |
Daniele, Andrea F | Toyota Technological Institute at Chicago |
Camus, Amaury | ETHZ |
Petrov, Aleksandar | ETH Zurich |
Courchesne, Anthony | Mila, Université De Montréal |
Mehta, Bhairav | Mila |
Suri, Rohit | ETH Zurich |
Bernasconi, Gianmarco | ETHZ |
Walter, Matthew | Toyota Technological Institute at Chicago |
Frazzoli, Emilio | ETH |
Paull, Liam | Université De Montréal |
Censi, Andrea | ETH Zürich & NuTonomy |
Keywords: Performance Evaluation and Benchmarking
Abstract: As robotics matures and increases in complexity, it is more necessary than ever that robot autonomy research be "reproducible". Compared to other sciences, there are specific challenges to benchmarking autonomy, such as the complexity of the software stacks, the variability of the hardware and the reliance on data-driven techniques, amongst others. In this paper, we describe a new concept for reproducible robotics research that integrates development and benchmarking, so that reproducibility is obtained "by design" from the beginning of the research/development processes. We first provide the overall conceptual objectives to achieve this goal and then a concrete instance that we have built: the DUCKIENet. One of the central components of this setup is the Duckietown Autolab, a remotely accessible standardized setup that is itself also relatively low-cost and reproducible. When evaluating agents, careful definition of interfaces allows users to choose among local versus remote evaluation using simulation, logs, or remote automated hardware setups. We validate the system by analyzing the repeatability of experiments conducted using the infrastructure and show that there is low variance across different robot hardware and across different remote labs.
|
|
TuCT2 |
Room T2 |
Robot Safety |
Regular session |
Chair: Sadigh, Dorsa | Stanford University |
Co-Chair: Spenko, Matthew | Illinois Institute of Technology |
|
14:00-14:15, Paper TuCT2.1 | |
>Provably Safe Trajectory Optimization in the Presence of Uncertain Convex Obstacles |
|
Dawson, Charles | MIT |
M. Jasour, Ashkan | MIT |
Hofmann, Andreas | MIT |
Williams, Brian | MIT |
Keywords: Motion and Path Planning, Robot Safety, Optimization and Optimal Control
Abstract: Real-world environments are inherently uncertain, and to operate safely in these environments robots must be able to plan around this uncertainty. In the context of motion planning, we desire systems that can maintain an acceptable level of safety as the robot moves, even when the exact locations of nearby obstacles are not known. In this paper, we solve this chance-constrained motion planning problem using a sequential convex optimization framework. To constrain the risk of collision incurred by planned movements, we employ geometric objects called epsilon-shadows to compute upper bounds on the risk of collision between the robot and uncertain obstacles. We use these epsilon-shadow-based estimates as constraints in a nonlinear trajectory optimization problem, which we then solve by iteratively linearizing the non-convex risk constraints. This sequential optimization approach quickly finds trajectories that accomplish the desired motion while maintaining a user-specified limit on collision risk. Our method can be applied to robots and environments with arbitrary convex geometry; even in complex environments, it runs in less than a second and provides provable guarantees on the safety of planned trajectories, enabling fast, reactive, and safe robot motion in realistic environments.
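The core loop of the abstract's approach, iteratively linearizing a non-convex risk constraint inside a convex subproblem, can be sketched in one dimension. The Gaussian risk function and trust-region size below are illustrative stand-ins for the paper's epsilon-shadow risk bounds, not the authors' formulation:

```python
import math

def risk(x):
    """Toy collision-risk bound: an obstacle at x = 0, risk decays with distance."""
    return math.exp(-x * x)

def plan_step(x0, goal, delta, trust=0.3, iters=25):
    """Move a 1-D robot from x0 toward goal subject to risk(x) <= delta.

    The feasible set {x : risk(x) <= delta} is non-convex, so at each
    iterate the constraint is linearized and the resulting convex
    subproblem (min (x - goal)^2 s.t. linear constraint, trust region)
    is solved in closed form. Assumes x > 0 throughout.
    """
    x = x0
    for _ in range(iters):
        g = risk(x) - delta                  # constraint value at iterate
        dg = -2.0 * x * risk(x)              # constraint gradient (< 0 for x > 0)
        x_bound = x - g / dg                 # linearized safe boundary
        x_new = max(goal, x_bound)           # closest-to-goal feasible point
        x_new = min(max(x_new, x - trust), x + trust)  # trust region
        if abs(x_new - x) < 1e-9:
            break
        x = x_new
    return x

x_final = plan_step(x0=2.0, goal=0.2, delta=0.1)
```

The iterates settle on the true risk boundary sqrt(ln(1/delta)), i.e., the planner gets as close to the (unsafe) goal as the chance constraint permits.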
|
|
14:15-14:30, Paper TuCT2.2 | |
>Safety Considerations in Deep Control Policies with Safety Barrier Certificates under Uncertainty |
> Video Attachment
|
|
Hirshberg, Tom | Technion |
Vemprala, Sai | Texas A&M University |
Kapoor, Ashish | Microsoft
Keywords: Robot Safety, Collision Avoidance, Perception-Action Coupling
Abstract: Recent advances in Deep Machine Learning have shown promise in solving complex perception and control loops via methods such as reinforcement and imitation learning. However, guaranteeing safety for such learned deep policies has been a challenge due to issues such as partial observability and difficulties in characterizing the behavior of the neural networks. While much of the emphasis in safe learning has been placed on training, it is non-trivial to guarantee safety at deployment or test time. This paper shows how, under mild assumptions, Safety Barrier Certificates can be used to guarantee safety with deep control policies despite uncertainty arising due to perception and other latent variables. Specifically, for scenarios where the dynamics are smooth and uncertainty has a finite support, the proposed framework wraps around an existing deep control policy and generates safe actions by dynamically evaluating and modifying the policy from the embedded network. Our framework utilizes control barrier functions to create spaces of control actions that are safe under uncertainty, and when the original actions are found to be in violation of the safety constraint, uses quadratic programming to minimally modify the original actions to ensure they lie in the safe set. Representations of the environment are built through Euclidean signed distance fields that are then used to infer the safety of actions and to guarantee forward invariance. We implement this method in simulation in a drone-racing environment and show that our method results in safer actions compared to a baseline that only relies on imitation learning to generate control actions.
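The "minimally modify the original actions" step reduces, for a single-constraint toy system, to a closed-form quadratic program. The following one-dimensional sketch (a single integrator with an assumed barrier, not the paper's ESDF-based setup) shows the mechanism:

```python
def cbf_safety_filter(u_nom, x, x_max, alpha=1.0):
    """Minimally modify a nominal control so a 1-D single integrator
    (x_dot = u) never crosses x_max.

    With barrier h(x) = x_max - x, the CBF condition
    h_dot + alpha * h >= 0 reduces to u <= alpha * (x_max - x),
    so the min-norm QP solution is simply a clamp of u_nom.
    """
    return min(u_nom, alpha * (x_max - x))

# forward-invariance check: an aggressive nominal policy (u = 2.0) is
# filtered so the state approaches the boundary but never crosses it
x, dt = 0.0, 0.01
for _ in range(500):
    x += dt * cbf_safety_filter(u_nom=2.0, x=x, x_max=1.0)
```

With more dimensions or multiple barriers the clamp no longer has a closed form, and a numerical QP solver takes its place, which is the general setting the abstract describes.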
|
|
14:30-14:45, Paper TuCT2.3 | |
>Infusing Reachability-Based Safety into Planning and Control for Multi-Agent Interactions |
|
Wang, Xinrui | Stanford University |
Leung, Karen | Stanford University |
Pavone, Marco | Stanford University |
Keywords: Robot Safety, Collision Avoidance, Path Planning for Multiple Mobile Robots or Agents
Abstract: Within a robot autonomy stack, the planner and controller are typically designed separately, and serve different purposes. As such, there is often a diffusion of responsibilities when it comes to ensuring safety for the robot. We propose that a planner and controller should share the same interpretation of safety but apply this knowledge in a different yet complementary way. To achieve this, we use Hamilton-Jacobi (HJ) reachability theory at the planning level to provide the robot planner with the foresight to avoid entering regions with possible inevitable collision. However, this alone does not guarantee safety. In conjunction with this HJ reachability-infused planner, we propose a minimally-interventional multi-agent safety-preserving controller also derived via HJ-reachability theory. The safety controller maintains safety for the robot without unduly impacting planner performance. We demonstrate the benefits of our proposed approach in a multi-agent highway scenario where a robot car is rewarded to navigate through traffic as fast as possible, and we show that our approach provides strong safety assurances yet achieves the highest performance compared to other safety controllers.
|
|
14:45-15:00, Paper TuCT2.4 | |
>Multi-Agent Safe Planning with Gaussian Processes |
|
Zhu, Zheqing | Stanford University |
Bıyık, Erdem | Stanford University |
Sadigh, Dorsa | Stanford University |
Keywords: Robot Safety, Multi-Robot Systems
Abstract: Multi-agent safe systems have become an increasingly important area of study as we can now easily have multiple AI-powered systems operating together. In such settings, we need to ensure the safety of not only each individual agent, but also the overall system. In this paper, we introduce a novel multi-agent safe learning algorithm that enables decentralized safe navigation when there are multiple different agents in the environment. This algorithm makes mild assumptions about other agents and is trained in a decentralized fashion, i.e., with very little prior knowledge about other agents' policies. Experiments show that our algorithm performs well alongside robots running other algorithms, across a variety of optimization objectives.
|
|
15:00-15:15, Paper TuCT2.5 | |
>Safe Path Planning with Multi-Model Risk Level Sets |
> Video Attachment
|
|
Huang, Zefan | Singapore-MIT Alliance for Research and Technology |
Schwarting, Wilko | Massachusetts Institute of Technology (MIT) |
Pierson, Alyssa | Massachusetts Institute of Technology |
Hongliang, Guo | Singapore-MIT Alliance for Research and Technology
Ang Jr, Marcelo H | National University of Singapore |
Rus, Daniela | MIT |
Keywords: Robot Safety, Motion and Path Planning
Abstract: This paper investigates the safe path planning problem with a large number of moving objects in cluttered environments. Some of the objects can be detected and tracked very well with canonical perception algorithms, while others can only be roughly detected from LiDAR scan snapshot differences. For objects with good detection and tracking algorithms, we use a Gaussian Process (GP) regulated risk map to describe the risk information; for objects with poorer detection and/or tracking results, we construct an overall occupancy and velocity field from LiDAR scan snapshots and use the results for risk level set (RLS) calculation. Several methods are proposed for combining the GP risk map and RLS, and the resultant hybrid risk map is used for the proposed safe path planning algorithm. Experimental results show that the hybrid risk map enables the safe path planner to navigate the autonomous testbed within cluttered environments.
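A minimal sketch of combining the two risk maps on a shared grid. The abstract proposes several combination methods without detailing them here, so the elementwise maximum (a conservative union) and the fixed-weight blend below are illustrative guesses, not the authors' formulas:

```python
import numpy as np

def hybrid_risk_map(gp_risk, rls_risk, method="max"):
    """Combine a GP-regulated risk map with a risk-level-set (RLS) map.

    Both inputs are 2-D grids with risk values in [0, 1] on the same
    discretization of the workspace.
    """
    if method == "max":
        return np.maximum(gp_risk, rls_risk)   # conservative union
    if method == "blend":
        w = 0.5                                # assumed fixed weight
        return w * gp_risk + (1 - w) * rls_risk
    raise ValueError(f"unknown method: {method}")

gp = np.array([[0.1, 0.9], [0.0, 0.5]])    # risk from tracked objects
rls = np.array([[0.4, 0.2], [0.0, 0.8]])   # risk from LiDAR snapshot field
combined = hybrid_risk_map(gp, rls)
```

A planner can then treat the combined grid like any single risk map, e.g., penalizing path cells by their risk value.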
|
|
15:15-15:30, Paper TuCT2.6 | |
>Localization Safety Validation for Autonomous Robots |
|
Duenas Arana, Guillermo | Illinois Institute of Technology |
Abdul Hafez, Osama | Illinois Institute of Technology |
Joerger, Mathieu | Virginia Tech |
Spenko, Matthew | Illinois Institute of Technology |
Keywords: Localization, Autonomous Vehicle Navigation, Robot Safety
Abstract: This paper presents a method to validate localization safety for a preplanned trajectory in a given environment. Localization safety is defined as integrity risk and quantified as the probability of an undetected localization failure. Integrity risk differs from previously used metrics in robotics in that it accounts for unmodeled faults and evaluates safety under the worst possible combination of faults. The methodology can be applied prior to mission execution and thus can be employed to evaluate the safety of potential trajectories. The work has been formulated for localization via smoothing, which differs from previously reported integrity monitoring methods that rely on Kalman filtering. Simulation and experimental results are analyzed to show that localization safety is effectively quantified.
|
|
TuCT3 |
Room T3 |
Trust and Explainability |
Regular session |
Chair: Bryant, De'Aira | Georgia Institute of Technology |
Co-Chair: Soh, Harold | National University of Singapore
|
14:00-14:15, Paper TuCT3.1 | |
>Human-Robot Trust Assessment Using Motion Tracking & Galvanic Skin Response |
> Video Attachment
|
|
Hald, Kasper | Aalborg University |
Rehm, Matthias | Aalborg University |
Moeslund, Thomas B. | Aalborg University |
Keywords: Cooperating Robots, Visual Tracking, Human-Centered Robotics
Abstract: In this study we set out to design a computer vision-based system to assess human-robot trust in real time during close-proximity human-robot collaboration. This paper presents the setup and hardware for an augmented reality-enabled human-robot collaboration cell as well as a method of measuring operator proximity using an infrared camera. We tested this setup as a tool for assessing trust through physical apprehension signals in a collaborative drawing task, where participants hold a piece of paper on a table while the robot draws between their hands. Midway through the test we attempted to induce a decrease in trust with an unexpected change in robot speed and evaluated subject motions along with self-reported trust and emotional arousal through galvanic skin response. After performing the experiment with forty participants, we found that reported trust was significantly affected when robot movement speed was increased. The galvanic skin response measurements were not significantly different between the test conditions. The motion tracking method used in this study did not suggest that subjects' motions were significantly affected by the decrease in trust.
|
|
14:15-14:30, Paper TuCT3.2 | |
>Organizing the Internet of Robotic Things: The Effect of Organization Structure on Users’ Evaluation and Compliance Toward IoRT Service Platform |
|
Moon, Byeong June | Seoul National University |
Kwak, Sonya Sona | Korea Institute of Science and Technology (KIST) |
Choi, Jongsuk | Korea Inst. of Sci. and Tech |
Keywords: Social Human-Robot Interaction, Service Robots, Domestic Robots
Abstract: As robots and robotic things come to have more agency, the IoRT, which consists of robots and robotic things, can be considered a social organization. Accordingly, the social organization structure of an IoRT could affect users’ behavior toward and perception of it. In this study, in order to examine the effect of social organization structure on people’s acceptance of the IoRT, we conducted a 2 (social organization structure: flat vs. hierarchical) within-participants experiment (N=30). In the experiment, a participant was asked to take part in a cooking task with the aid of a robot, a robotic measuring cup, and a robotic mixer. We administered a post-experimental survey and measured how long participants followed the instructions given by the platform. Participants gave higher trustworthiness and purchase-intention scores to the platform with a flat organization structure than to the one with a hierarchical structure. In contrast, participants were more compliant with the hierarchical IoRT service platform than with the flat one. Implications for the theory and design of IoRT are discussed.
|
|
14:30-14:45, Paper TuCT3.3 | |
>Getting to Know One Another: Calibrating Intent, Capabilities, and Trust for Human-Robot Collaboration |
|
Lee, Joshua Kai Sheng | National University of Singapore |
Fong, Jeffrey | National University of Singapore |
Kok, Bing Cai | National University of Singapore |
Soh, Harold | National University of Singapore
Keywords: Cognitive Human-Robot Interaction, Human Factors and Human-in-the-Loop, Social Human-Robot Interaction
Abstract: Common experience suggests that agents who know each other well are better able to work together. In this work, we address the problem of calibrating intention and capabilities in human-robot collaboration. In particular, we focus on scenarios where the robot is attempting to assist a human who is unable to directly communicate her intent. Moreover, both agents may have differing capabilities that are unknown to one another. We adopt a decision-theoretic approach and propose the TICC-POMDP for modeling this setting, with an associated online solver. Experiments show our approach leads to better team performance both in simulation and in a real-world study with human subjects.
|
|
14:45-15:00, Paper TuCT3.4 | |
>Online Explanation Generation for Planning Tasks in Human-Robot Teaming |
|
Zakershahrak, Mehrdad | Arizona State University |
Gong, Ze | Arizona State University |
Sadassivam, Nikhillesh | Arizona State University |
Zhang, Yu (Tony) | Arizona State University |
Keywords: Cognitive Human-Robot Interaction, Task Planning, Human Factors and Human-in-the-Loop
Abstract: As AI becomes an integral part of our lives, the development of explainable AI, embodied in the decision-making process of an AI or robotic agent, becomes imperative. For a robotic teammate, the ability to generate explanations to justify its behavior is one of the key requirements of explainable agency. Prior work on explanation generation has focused on supporting the rationale behind the robot's decision or behavior. These approaches, however, fail to consider the mental demand for understanding the received explanation. In other words, the human teammate is expected to understand an explanation no matter how much information is presented. In this work, we argue that explanations, especially those of a complex nature, should be made in an online fashion during the execution, which helps spread out the information to be explained and thus reduce the mental workload of humans in highly cognitively demanding tasks. However, a challenge here is that the different parts of an explanation may be dependent on each other, which must be taken into account when generating online explanations. To this end, a general formulation of online explanation generation is presented with three variations satisfying different “online” properties. The new explanation generation methods are based on a model reconciliation setting introduced in our prior work. We evaluated our methods both with human subjects in a simulated rover domain, using NASA Task Load Index (TLX), and synthetically with ten different problems across two standard IPC domains. Results strongly suggest that our methods generate explanations that are perceived as less cognitively demanding and much preferred over the baselines and are computationally efficient.
|
|
TuCT4 |
Room T4 |
Actuator & Joint Mechanisms I |
Regular session |
Chair: Taylor, Rebecca | Carnegie Mellon University |
Co-Chair: Gaponov, Igor | Innopolis University |
|
14:00-14:15, Paper TuCT4.1 | |
>IMU-Based Parameter Identification and Position Estimation in Twisted String Actuators |
|
Nedelchev, Simeon | Innopolis University |
Kirsanov, Daniil | Innopolis University |
Gaponov, Igor | Innopolis University |
Keywords: Calibration and Identification, Actuation and Joint Mechanisms, Kinematics
Abstract: This study proposes a technique to estimate the output state of twisted string actuators (TSAs) based on the payload's acceleration measurements. We outline the differential kinematics relationships of the actuator, reformulate them into a nonlinear parameter identification problem, and then apply linearization techniques to solve it efficiently as a quadratic program. Using accurate estimates of string parameters obtained with the proposed method, we can predict the TSA position with sub-millimeter accuracy via conventional kinematic relationships. In addition, the proposed method supports accurate estimation under varying operating conditions, unpredictable perturbations, and poorly-excited trajectories. This technique can be employed to improve the accuracy of trajectory tracking when the use of direct position measurements is challenging, with the list of potential applications including flexible and soft robots, long-span cable robots, multi-DOF joints and others.
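The reformulation trick can be illustrated with the standard TSA kinematic model x = L - sqrt(L^2 - (r*theta)^2): squaring it makes the model linear in a = 2L and b = r^2, so samples can be fitted by ordinary least squares. This is a simplified stand-in for the paper's IMU/acceleration-based formulation, which is more involved:

```python
import numpy as np

# Standard TSA kinematics: contraction x = L - sqrt(L^2 - (r*theta)^2),
# with string length L, string radius r, and motor angle theta.
# Squaring gives 2*L*x - r^2*theta^2 = x^2, which is linear in the
# unknowns a = 2L and b = r^2 -- a convex (least-squares) problem.

def identify_tsa(theta, x):
    """Fit L and r from motor-angle / contraction samples."""
    A = np.column_stack([x, -theta**2])
    (a, b), *_ = np.linalg.lstsq(A, x**2, rcond=None)
    return a / 2.0, np.sqrt(b)   # L, r

# synthetic check with hypothetical string parameters
L_true, r_true = 0.30, 0.0008            # 30 cm string, 0.8 mm radius
theta = np.linspace(0.0, 250.0, 50)      # motor angle in radians
x = L_true - np.sqrt(L_true**2 - (r_true * theta)**2)
L_est, r_est = identify_tsa(theta, x)
```

On noise-free data the fit recovers the true parameters; with sensor noise, the same least-squares structure still applies, which is what makes the quadratic-program formulation attractive.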
|
|
14:15-14:30, Paper TuCT4.2 | |
>Reliable Chattering-Free Simulation of Friction Torque in Joints Presenting High Stiction |
|
Cisneros Limon, Rafael | National Institute of Advanced Industrial Science and Technology |
Benallegue, Mehdi | AIST Japan |
Kikuuwe, Ryo | Hiroshima University |
Morisawa, Mitsuharu | National Inst. of AIST |
Kanehiro, Fumio | National Inst. of AIST |
Keywords: Simulation and Animation, Contact Modeling, Actuation and Joint Mechanisms
Abstract: The simulation of static friction, and especially the effect of stiction, is cumbersome to perform in discrete time due to its discontinuity at zero velocity and its switching behavior. However, it is essential to achieve reliable simulations of friction to develop compliant torque control algorithms, as such algorithms are strongly disturbed by this phenomenon. This paper builds on an elastoplastic friction model, which is free from chattering and drift. It proposes two closed-form solutions that can be used to reliably simulate the effect of stiction consistently with the physics-based Stribeck model. These solutions consider the nonlinearity and velocity dependency, which are the main characteristics of lubricated joints. One is directly inspired by the Stribeck nonlinear terms, and the other is a simplified rational approximation. The reliability of this method is shown in simulation, where consistency and stability are assessed. We also demonstrate the accuracy of these methods by comparing them to experimental data obtained from a robot joint equipped with a high gear reduction harmonic drive.
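For reference, the physics-based Stribeck curve the abstract mentions, in its naive velocity-to-force form (parameter values below are placeholders). This direct form is precisely what chatters around zero velocity in discrete time; the paper's elastoplastic closed-form solutions, not reproduced here, avoid that:

```python
import math

def stribeck(v, F_c=1.0, F_s=1.5, v_s=0.05, B=0.2):
    """Stribeck friction force as a function of velocity v.

    F_c: Coulomb (kinetic) level, F_s: stiction level,
    v_s: Stribeck velocity, B: viscous coefficient.
    """
    if v == 0.0:
        return 0.0   # undefined point; the source of discrete-time chattering
    dry = F_c + (F_s - F_c) * math.exp(-(v / v_s) ** 2)
    return math.copysign(dry, v) + B * v
```

Near zero velocity the curve jumps between roughly +F_s and -F_s, so a fixed-step integrator that samples it directly oscillates around v = 0, which is the chattering problem the elastoplastic formulation is designed to remove.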
|
|
14:30-14:45, Paper TuCT4.3 | |
>A Study on the Elongation Behaviour of Synthetic Fibre Ropes under Cyclic Loading |
|
Asane, Deoraj | Waseda University |
Schmitz, Alexander | Waseda University |
Wang, Yushi | Waseda University |
Sugano, Shigeki | Waseda University |
Keywords: Actuation and Joint Mechanisms, Mechanism Design, Performance Evaluation and Benchmarking
Abstract: Synthetic fibre ropes have a high tensile strength and a lower friction coefficient, and are more flexible than steel ropes; they are therefore increasingly used in robotics. However, their characteristics are not well studied. In particular, previous work investigated the long-term behaviour only under static loading. In this paper, we investigate the elongation behaviour of synthetic fibre ropes under cyclic loading. In particular, we use ropes made from Dyneema DM20 (UHMWPE) and ZYLON HM (PBO), which according to prior work have low creep. While Dyneema is more widely used, Zylon has a higher tensile strength. We show that under cyclic loading the Dyneema DM20 rope elongated by more than 9% and kept extending even after 500 cycles. Zylon exhibited a more stable, lower elongation of less than 3%.
|
|
14:45-15:00, Paper TuCT4.4 | |
>Steering Magnetic Robots in Two Axes with One Pair of Maxwell Coils |
> Video Attachment
|
|
Benjaminson, Emma | Carnegie Mellon University |
Travers, Matthew | Carnegie Mellon University |
Taylor, Rebecca | Carnegie Mellon University |
Keywords: Actuation and Joint Mechanisms
Abstract: This work demonstrates a novel approach to steering a magnetic swimming robot in two dimensions with a single pair of Maxwell coils. By leveraging the curvature of the magnetic field gradient, we achieve motion along two axes. This method allows us to control medical magnetic robots using only existing MRI technology, without requiring additional hardware or posing any additional risk to the patient. We implement a switching time optimization algorithm which generates a schedule of control inputs that direct the swimming robot to a goal location in the workspace. By alternating the direction of the magnetic field gradient produced by the single pair of coils per this schedule, we are able to move the swimmer to desired points in two dimensions. Finally, we demonstrate the feasibility of our approach with an experimental implementation on the millimeter scale and discuss future opportunities to expand this work to the microscale, as well as other control problems and real-world applications.
|
|
TuCT5 |
Room T5 |
Actuator & Joint Mechanisms II |
Regular session |
Chair: Park, Jaeheung | Seoul National University |
Co-Chair: Verstraten, Tom | Vrije Universiteit Brussel |
|
14:00-14:15, Paper TuCT5.1 | |
>Scaling Laws for Parallel Motor-Gearbox Arrangements |
|
Saerens, Elias | Vrije Universiteit Brussel |
Crispel, Stein | Vrije Universiteit Brussel |
Lopez Garcia, Pablo | Vrije Universiteit Brussel |
Ducastel, Vincent | Vrije Universiteit Brussel |
Beckers, Jarl | Vrije Universiteit Brussel |
De Winter, Joris | Vrije Universiteit Brussel |
Furnémont, Raphaël | Vrije Universiteit Brussel |
Vanderborght, Bram | Vrije Universiteit Brussel |
Verstraten, Tom | Vrije Universiteit Brussel |
Lefeber, Dirk | Vrije Universiteit Brussel |
Keywords: Mechanism Design, Actuation and Joint Mechanisms
Abstract: Research towards (compliant) actuators, especially redundant ones like the Series Parallel Elastic Actuator (SPEA), has led to the development of drive trains which have been demonstrated to increase efficiency, torque-to-mass ratio, power-to-mass ratio, etc. In the field of robotics such drive trains can be implemented, enabling technological improvements like safe, adaptable and energy-efficient robots. The choice of motor and transmission system, as well as of the compliant elements composing the drive train, is highly dependent on the application and more specifically on the allowable weight and size. In order to optimally design an actuator adapted to the desired characteristics and the available space, scaling laws governing the specific actuator can simplify and enhance the reliability of the design process. Although scaling laws of electric motors and links are known, none have been investigated for a complete redundant drive train. The present study proposes to fill this gap by providing scaling laws for electric motors in combination with their transmission system. These laws are extended towards parallelization, i.e., replacing one big motor with a gearbox by several smaller ones in parallel. The results of this study show that the torque/mass ratio for a motor-gearbox cannot be increased by parallelization, but that parallelization can increase the torque/volume ratio. This is, however, only the case if a good topology is chosen.
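The torque/mass claim can be checked numerically under commonly cited motor scaling assumptions (torque proportional to l**3.5 and mass proportional to l**3 for a motor geometrically scaled by l). These exponents are illustrative assumptions, not the paper's full motor-gearbox model:

```python
def parallelize(n):
    """Torque/mass ratio when one motor is replaced by n scaled-down
    motors in parallel delivering the same total torque.

    Assumed scaling laws: torque ~ l**3.5, mass ~ l**3.
    """
    l = n ** (-1.0 / 3.5)      # scale factor so n motors match the torque
    torque = n * l ** 3.5      # total torque (= 1 by construction)
    mass = n * l ** 3          # total mass of the n motors
    return torque / mass

for n in (1, 2, 4, 8):
    print(n, parallelize(n))
```

Under these assumptions the ratio only falls as n grows, consistent with the abstract's conclusion that parallelization cannot raise the torque/mass ratio (the torque/volume gains come from packing, which this sketch does not model).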
|
|
14:15-14:30, Paper TuCT5.2 | |
>A Concept of a Miniaturized MR Clutch Utilizing MR Fluid in Squeeze Mode |
|
Pisetskiy, Sergey | Western University |
Kermani, Mehrdad R. | University of Western Ontario |
Keywords: Mechanism Design, Compliant Assembly, Physical Human-Robot Interaction
Abstract: This paper presents a novel design concept of a miniaturized Magneto-Rheological (MR) clutch. The design uses a set of spur gears as a means to control the torque. MR clutches with various configurations, such as disk-, drum-, and armature-based, have been reported in the literature. However, to the best of our knowledge, the design of a clutch with spur gears that uses MR fluid in squeeze mode is a novel concept that has never been reported previously. After a brief description of the MR clutch principles, the details of the mechanical design of the spur gear MR clutch are discussed. The distribution of the magnetic flux inside the MR clutch is studied using finite element analysis in COMSOL Multiphysics software. Preliminary experimental results obtained with a prototype MR clutch, which validate the new concept, are presented next. To clearly show the performance of the proposed design, we compared the torque capacity of our MR clutch, obtained experimentally, with that of a simulated disk-type MR clutch of a similar size.
|
|
14:30-14:45, Paper TuCT5.3 | |
>Development and Evaluation of a Linear Series Clutch Actuator for Vertical Joint Application with Static Balancing |
|
Kulkarni, Shardul | Waseda University |
Schmitz, Alexander | Waseda University |
Funabashi, Satoshi | Waseda University, Sugano Lab |
Sugano, Shigeki | Waseda University |
Keywords: Industrial Robots, Mechanism Design, Robot Safety
Abstract: Future robots are expected to share their workspace with humans. Controlling and limiting the forces that such robots exert on their environment is crucial. While force control can be achieved actively with the help of force sensing, passive mechanisms have no time delay in their response to external forces, and would therefore be preferable. Series clutch actuators can be used to achieve high levels of safety and backdriveability. This work presents the first implementation of a linear series clutch actuator. It can exert forces of more than 110 N while weighing less than 2 kg. Force controllability and safety are demonstrated. Static balancing, which is important for the application in a vertical joint, is also implemented. The power consumption is evaluated, and for a payload of 3 kg and with the maximum speed of 94 mm/s, the power consumed by the actuator is 11 W. Overall, a practical implementation of a linear series clutch actuator is reported, which can be used for future collaborative robots.
|
|
14:45-15:00, Paper TuCT5.4 | |
>Elastomeric Continuously Variable Transmission Combined with Twisted String Actuator |
> Video Attachment
|
|
Kim, Seungyeon | Graduate School of Convergence Science and Technology, Seoul Nat |
Sim, Jaehoon | Graduate School of Convergence Science and Technology, Seoul Nat |
Park, Jaeheung | Seoul National University |
Keywords: Mechanism Design, Actuation and Joint Mechanisms
Abstract: Electric motors, with a fixed reduction ratio, have a large unused operating region when mimicking muscle movements owing to the difference between the force-velocity curves of electric motors and muscles. This unused region can be reduced by changing the reduction ratio according to the external force. However, conventional continuously variable transmissions (CVTs) are large and heavy. The elastomeric CVT (ElaCVT), a new CVT concept, is designed in this study. The primary purpose of the ElaCVT is to expand the operating region of a twisted string actuator (TSA) and duplicate the force-velocity curve of muscles by passively changing the reduction ratio according to the external load applied to the end of the TSA. A combination of ElaCVT and TSA (ElaCVT-TSA) is proposed as a linear actuator that mimics the characteristics of muscles. The deformation of the elastomer changes the reduction ratio without the need for complicated mechanisms. This enables the CVT to be small and lightweight so that it can be applied to various robotic systems. The performance of the ElaCVT-TSA was evaluated experimentally, and the results show that the reduction ratio was passively and continuously adjusted as the external load changed. The ElaCVT has a cylindrical shape with a length of 27 mm, a diameter of 24 mm, and weighs 12 g. The reduction ratio in the maximum velocity mode is approximately 2.31 times the reduction ratio in the maximum torque mode.
|
|
15:00-15:15, Paper TuCT5.5 | |
>Low-Cost Coil-Shaped Optical Fiber Displacement Sensor for a Twisted and Coiled Polymer Fiber Actuator Unit |
|
Masuya, Ken | Tokyo Institute of Technology |
Keywords: Soft Sensors and Actuators, Soft Robot Applications, Modeling, Control, and Learning for Soft Robots
Abstract: This study proposes a low-cost coil-shaped optical fiber (COF) sensor that can be used in the twisted and coiled polymer fiber (TCPF) actuator unit. The TCPF is a thermally driven artificial muscle with large stroke and large power ratio. To improve its response, several actuator units, which combine the TCPF with a cooling system, were developed. However, unlike the encoder for a motor, displacement sensors for these units have not been established. Although several methods are available to estimate the TCPF displacement, they are based on the resistance of the heating wire and are affected by TCPF actuation and load changes. Therefore, to accurately measure the displacement, this study focuses on the bending loss of optical fibers, which is often exploited in soft robotics. Because the bending loss of COFs is caused by the change in length of the coil, the displacement can be obtained by measuring the light intensity through the optical fiber. This study experimentally demonstrates that the COF remains usable even while the TCPF actuator is being driven and the load is changed.
|
|
TuCT6 |
Room T6 |
Mechanism Design I |
Regular session |
Chair: Zarrouk, David | Ben Gurion University |
Co-Chair: Ma, Shugen | Ritsumeikan University |
|
14:00-14:15, Paper TuCT6.1 | |
>Static Characteristics of Fire Hose Actuators and Design of a Compliant Pneumatic Rotary Drive for Robotics |
> Video Attachment
|
|
Stoll, Johannes T. | Fraunhofer Institute for Manufacturing Engineering and Automation
Schanz, Kevin | Fraunhofer Institute for Manufacturing Engineering and Automation
Derstroff, Michael | Fraunhofer Institut - IPA |
Pott, Andreas | University of Stuttgart |
Keywords: Hydraulic/Pneumatic Actuators, Actuation and Joint Mechanisms, Soft Sensors and Actuators
Abstract: In this work, we present and explain in detail the design of a new type of pneumatic actuator made of fire hose, the fire hose actuator (FHA), see Fig. 1. We model the force output of this type of actuator and compare the theoretical results to data measured on a laboratory test stand. Furthermore, we present the design of a pneumatic rotary drive that is actuated by four of the above-mentioned FHAs. The drive unit features intrinsic compliance and is capable of high-precision positioning. Due to these characteristics, the design concept of the rotary drive is suitable for potential use in robotics, especially in human-robot collaboration. In addition, we model the static torque distribution of the rotary drive and compare the theoretical results to data measured on the realized laboratory test stand. Moreover, we discuss the most important characteristics of the rotary drive: we present measurements of the adjustable stiffness and show that high-precision positioning is possible with the system, which under ideal conditions resolves every step of the 17-bit encoder used, corresponding to a resolution of 0.0027 degrees. The drive unit is also capable of continuous rotation, with a maximum continuous torque of 63.1 Nm.
|
|
14:15-14:30, Paper TuCT6.2 | |
>Long-Reach Compact Robotic Arm with LMPA Joints for Monitoring of Reactor Interior |
|
Seino, Akira | Fukushima University |
Seto, Noriaki | Fukushima University |
Canete, Luis | University of San Carlos |
Takahashi, Takayuki | Fukushima University |
Keywords: Actuation and Joint Mechanisms, Mechanism Design, Robotics in Hazardous Fields
Abstract: To reduce the risk of radiation leakages similar to the incident at the Fukushima Daiichi Nuclear Power Station, robots have been employed to remove fuel debris from reactors. To perform this process safely, it is important to monitor the interior of the reactor. A camera and neutron sensors are attached to the end of a robotic arm to monitor the reactor interior. The basic design requirement for the monitoring system is that the arm must be highly extendable and rigid. To achieve this, a novel compact long-reach manipulator with a joint structure built using a low-melting-point alloy (LMPA) is proposed. The LMPA enables switching between the free and locked states of the rotational joints of the manipulator. Herein, we first explain the design of the proposed joint structure and verify whether it has adequate mechanical strength. The maximum torque to be sustained by the structure was calculated using a cantilever model, and the actual breaking torque was measured by a tensile test. Experimental results confirmed that the joint could withstand approximately 1.86 times the required torque. Finally, the effectiveness of induction heating, which is used to switch between the free and locked states of the joints, was evaluated experimentally. The LMPA arm was installed in the coil of the induction heating module, and the time required to melt the LMPA was measured. The experimental results confirmed that induction heating can change the state of the LMPA joint and that the melting takes approximately 30.3 s. These findings show that the proposed system can help avert nuclear disasters by preventing radiation leakages at nuclear plants.
|
|
14:30-14:45, Paper TuCT6.3 | |
>An In-Pipe Manipulator for Contamination-Less Rehabilitation of Water Distribution Pipes |
|
Yeung, Yip Fun | MIT |
Youcef-Toumi, Kamal | Massachusetts Institute of Technology |
Keywords: Service Robotics, Product Design, Development and Prototyping, Kinematics
Abstract: The recent development of in-pipe robots (IPRs) with locomotion and inspection functions offers a new possibility for water distribution pipe maintenance: rehabilitating pipe defects internally. Yet only a limited number of rehabilitation in-pipe robots (R-IPRs) have been proposed. One primary concern that impedes the development of R-IPRs is the excessive amount of contamination generated during the rehabilitation process. Correspondingly, we propose a novel concept, Contamination-Less in-pipe Rehabilitation (CLR), and develop the CLR in-pipe robot as an innovative solution. The proposed robot contains three modules for pipe-surface sealing, pipe-wall cleaning, and in-pipe manipulation. This paper centers on the comprehensive design of the manipulator module. First, the manipulator features a high-DoF configuration to deploy the other two modules simultaneously. Second, the configuration adopts a nested outer-inner architecture to ensure the seal always encloses the pipe-wall cleaning device. The holistic and detailed design process of the manipulator, including the design concept, kinematics, load requirements, design for manufacturing, and simulated deployment, is presented. Eventually, the fully implemented robot accomplished the first Contamination-Less in-pipe Rehabilitation.
|
|
14:45-15:00, Paper TuCT6.4 | |
>Design and Implementation of a Pipeline Inspection Robot with Camera Image Compensation |
> Video Attachment
|
|
Yuan, Zhaohan | Shanghai JiaoTong University |
Yuan, Jianjun | Shanghai University, China |
Ma, Shugen | Ritsumeikan University |
Keywords: Mechanism Design, Flexible Robots
Abstract: In this paper, we present an updated inspection robot with passive adaptation ability, which is used to inspect small-diameter water supply pipelines. Through geometric calculation and kinematic verification, the static model of the robot is checked to ensure flexible movement in the pipeline. In addition, an inertial measurement unit is leveraged to simultaneously detect the attitude of the robot, and different algorithms are tested to compensate for the rotation of the camera image, stabilizing the image output.
|
|
15:00-15:15, Paper TuCT6.5 | |
>LineSpyX: A Power Line Inspection Robot Based on Digital Radiography |
> Video Attachment
|
|
Gao, Yuan | Southeast University |
Song, Guangming | Southeast University |
Li, Songtao | Southeast University |
Zhen, Fushuai | Southeast University |
Chen, Dabing | State Grid Jiangsu Electric Power Co., Ltd Research Institute |
Song, Aiguo | Southeast University |
Keywords: Industrial Robots, Product Design, Development and Prototyping
Abstract: Most current power line inspection robots use cameras and LiDARs to inspect power line surfaces and the surrounding environment, but it is still difficult to detect internal defects in the power lines. In this paper, the design and implementation of LineSpyX, a novel power line inspection robot based on digital radiography (DR), is introduced to solve the problem of non-destructive testing (NDT) of overhead Aluminum Conductor Composite Core (ACCC) wires. The proposed robot has a stable wrapped mechanical structure with a moving system, a live-work system, and an NDT system. The wheeled moving system enables the robot to move on the wires and cross obstacles such as vibration dampers. The NDT system consists of a portable X-ray generator and a DR detection panel. When the robot performs an inspection task, the X-ray passes up through the ACCC wire to the panel, where X-ray images of the internal carbon fiber cores are recorded. A deep-learning-based defect diagnosis method combined with manual diagnosis is proposed to detect potential defects. The main functionalities of the developed robot are verified by lab experiments and field tests.
|
|
15:15-15:30, Paper TuCT6.6 | |
>The AmphiSTAR High Speed Amphibious Sprawl Tuned Robot: Design and Experiments |
> Video Attachment
|
|
Cohen, Avi | Ben Gurion University of the Negev |
Zarrouk, David | Ben Gurion University |
Keywords: Mechanism Design, Field Robots, Search and Rescue Robots
Abstract: This paper details the development, modeling, and performance of AmphiSTAR, a novel high-speed amphibious robot. The palm-sized AmphiSTAR, which belongs to the family of STAR robots, is a “wheeled” robot fitted with propellers at its bottom that allow it to crawl on the ground and run (i.e., hover) on water at high speeds. The AmphiSTAR is inspired by two members of the animal kingdom: it possesses a sprawling mechanism inspired by cockroaches, and it is designed to run on water at high speeds like the basilisk lizard. We start by presenting the mechanical design of the robot and its control system. Then we model AmphiSTAR when crawling, swimming, and running on water. We then report experiments measuring the robot's lift and thrust forces in its on-water running mode and evaluating its energy consumption. The results show that in the on-water running mode, the lift forces are a function of the work volume of the propellers, whereas the thrust forces are a linear function of the propellers' rotating speed. Based on these results, the final version of the 3D-printed robot was built and experimentally tested in multiple scenarios. The experimental robot can crawl over the ground with performance similar to the original STAR robot and can attain speeds of 3.6 m/s. The robot can run continuously on water surfaces at speeds of 1.5 m/s. It can also swim (i.e., float while advancing by rotating its propellers) at low speeds and transition from swimming to crawling (see video).
|
|
TuCT7 |
Room T7 |
Mechanism Design II |
Regular session |
Chair: Baek, Stanley | United States Air Force Academy |
Co-Chair: Fuchiwaki, Ohmi | Yokohama National University (YNU) |
|
14:00-14:15, Paper TuCT7.1 | |
>Design of an Underactuated Peristaltic Robot on Soft Terrain |
|
Scheraga, Scott | University of Michigan-Dearborn |
Mohammadi, Alireza | University of Michigan, Dearborn |
Kim, Taehyung | University of Michigan-Dearborn |
Baek, Stanley | United States Air Force Academy |
Keywords: Mechanism Design, Multi-legged Robots, Underactuated Robots
Abstract: This paper presents an innovative robotic mechanism for generating peristaltic motion for robotic locomotion systems. The designed underactuated peristaltic robot uses a minimal amount of electromechanical hardware. Such a minimal electromechanical design not only reduces the number of potential failure modes but also gives the robot design great potential for scaling to larger and smaller applications. We performed several speed and force-generation tests atop a variety of granular media. Our experiments demonstrate the effectiveness of the mechanism design: the robot can travel at 6.0 mm/s atop sand, generating a 2.45 N force with a small input power (1.14 W).
|
|
14:15-14:30, Paper TuCT7.2 | |
>Design, Analysis and Preliminary Validation of a 3-DOF Rotational Inertia Generator |
> Video Attachment
|
|
Tremblay-Bugeaud, Jean-Félix | Université Laval |
Laliberte, Thierry | Universite Laval |
Gosselin, Clement | Université Laval |
Keywords: Haptics and Haptic Interfaces, Virtual Reality and Interfaces, Dynamics
Abstract: This paper investigates the design of a three-degree-of-freedom rotational inertia generator using the gyroscopic effect to provide ungrounded torque feedback. It uses a rotating mass in order to influence the torques needed to move the device, creating a perceived inertia. The dynamic model and the control law of the device are derived, along with those of a comparable concept using three flywheels instead of a gyroscope. Both models are then validated through simulations. Further simulations are conducted to establish motor torque and velocity requirements, and the gyroscopic concept is identified as having the less demanding requirements. The mechatronic design of a prototype of an inertia generator is presented, along with modifications to the dynamic model. Preliminary experimental validations are conducted. As the prototype faces instability issues when using the flywheels at high velocities, they are conducted using 0 RPM initial velocities. The results confirm that it is possible to both reduce and increase the rendered inertia even with current limitations. Finally, improvements for a second version of the prototype are discussed.
|
|
14:30-14:45, Paper TuCT7.3 | |
>Development of a Spherical 2-DOF Wrist Employing Spatial Parallelogram Structure |
> Video Attachment
|
|
Jeong, Hyunhwan | Korea University |
Baek, Sunhyuk | Korea University |
Kim, Whee Kuk | Korea University |
Yi, Byung-Ju | Hanyang University |
Keywords: Grasping, Grippers and Other End-Effectors, Mechanism Design
Abstract: A spherical two-degree-of-freedom wrist adopting the structure of a spatial parallelogram is proposed. A U-type extended link out of the three UU-type limbs of the spatial parallelogram is selected as the output link. As a result, the wrist can be interpreted as being formed by the combination of a U-type limb and a (2-UU)+U-type hybrid limb. Screw theory is employed to analyze its first-order kinematic model. Then, a compact wrist prototype suitable as a wrist module supporting a robot hand is designed and implemented. Finally, experiments with the prototype confirm that the wrist has very high potential for use in wrist modules in terms of dexterity and maximum load-handling capacity.
|
|
14:45-15:00, Paper TuCT7.4 | |
>Design of a Linear Gravity Compensator for a Prismatic Joint |
> Video Attachment
|
|
Kim, Do-Won | Korea University |
Lee, Won-Bum | Korea University, Intelligence Robotics Laboratory |
Song, Jae-Bok | Korea University |
Keywords: Mechanism Design, Service Robots
Abstract: Most existing mechanical gravity compensators have been developed for the revolute joints found in the majority of articulated robot arms. However, robots such as patient transport robots use prismatic joints, which need to handle a heavy payload. In this study, a high-capacity linear gravity compensator (LGC), which comprises purely mechanical components such as coil springs, a rack-and-pinion gear, a cam, and a wire, is proposed to compensate for the payload applied to a prismatic joint. The LGC is designed to generate a constant compensation force regardless of the payload position. The device can be manufactured at low cost and has a significantly long lifespan because it uses coil springs as its elastic body. Experiments demonstrate that, with the same motors, the robot equipped with the LGC can handle a load 100 kg greater than the robot without it.
|
|
15:15-15:30, Paper TuCT7.6 | |
>Development of Δ-Type Mobile Robot Driven by 3 Standing Wave Type Piezoelectric Ultrasonic Motors |
> Video Attachment
|
|
Zhou, Juntian | Yokohama National University |
Suzuki, Masaki | Yokohama National University |
Takahashi, Ryoma | Yokohama National University |
Tanabe, Kengo | Yokohama National University |
Nishiyama, Yuki | Yokohama National University |
Sugiuchi, Hajime | Yokohama National University |
Maeda, Yusuke | Yokohama National University |
Fuchiwaki, Ohmi | Yokohama National University (YNU) |
Keywords: Mechanism Design, Kinematics, Mobile Manipulation
Abstract: Herein, we introduce a newly proposed mobile robot that uses three standing-wave-type ultrasonic motors (USMs), each composed of two stacked-type piezoelectric actuators. Recently, with the miniaturization of electronic and MEMS devices and progress in biomedical science, the demand for multifunctional manipulation of such chip parts and biomedical cells has increased. Conventional multiaxial stages are too bulky for multifunctional manipulation, where multiple manipulators are required. Conventional precise mobile robots are feasible for miniaturizing multifunctional manipulation, although their cables affect positioning repeatability. USMs are feasible actuators for realizing cableless robots because their energy efficiency is relatively high compared with other millimeter-scale motors, although no omnidirectional mobile robot using USMs has been reported thus far. The aim of this study is to develop a new type of omnidirectional mobile robot driven by USMs. In experiments, we evaluated its feasibility by investigating the velocity, positioning deviation, and repeatability of translational movements under open-loop control. Here, we define repeatability as the ratio of the standard deviation of the final points to the average path length. The proposed mobile robot achieves velocities of 18.6 to 31.4 mm/s and repeatability of 4.1 to 9.1% while carrying a 200 g weight.
|
|
TuCT8 |
Room T8 |
Mechanism Design III |
Regular session |
Chair: Tavakoli, Mahdi | University of Alberta |
Co-Chair: Inoue, Syuya | Department of Robotics, Ritsumeikan University |
|
14:00-14:15, Paper TuCT8.1 | |
>Locomotion Performance of a Configurable Paddle-Wheel Robot Over Dry Sandy Terrain |
|
Shen, Yayi | Tokyo Institute of Technology |
Ma, Shugen | Ritsumeikan University |
Zhang, Guoteng | Shandong University |
Inoue, Syuya | Department of Robotics, Ritsumeikan University |
Keywords: Mechanism Design, Wheeled Robots, Legged Robots
Abstract: To access rough terrain and enhance mobility on sandy terrain, a configurable paddle-wheel robot was proposed. This report addresses the paddle terradynamics and the experimental verification of the locomotion performance of the robot over dry sandy terrain. To study the interactive forces between the paddle and the media, a terradynamic model is built and verified through experiments. To explore the locomotion performance, an indoor platform that allows the paddle-wheel module to move freely in both the horizontal and vertical directions is created. Forward locomotion speed, height variation, and specific resistance are evaluated for different configurations. The protruding paddles successfully reduce slippage and thereby increase locomotion efficiency on sandy terrain. The performance of the whole robot has also been verified on outdoor sandy terrain.
|
|
14:15-14:30, Paper TuCT8.2 | |
>Optimal Design of a Novel Spherical Scissor Linkage Remote Center of Motion Mechanism for Medical Robotics |
|
Afshar, Mehrnoosh | University of Alberta |
Carriere, Jay | University of Alberta |
Meyer, Tyler | Baker Cancer Centre |
Sloboda, Ronald | Cross Cancer Institute |
Husain, Siraj | Tom Baker Cancer Centre |
Usmani, Nawaid | Cross Cancer Institute |
Tavakoli, Mahdi | University of Alberta |
Keywords: Mechanism Design, Medical Robots and Systems
Abstract: In this paper, a new remote center of motion (RCM) mechanism is presented whose end-effector is able to move through an entire hemisphere. In general minimally invasive surgery (MIS) applications, an elliptic cone workspace with vertex angles of 60° and 90° gives the surgeon enough freedom to operate; therefore, the majority of developed RCM mechanisms have such a cone as their workspace. However, there are still situations in which a larger workspace is required, such as breast ultrasound scanning, in which the RCM mechanism should be able to move over a hemisphere to scan the breast. The proposed RCM mechanism is developed based upon a spherical scissor linkage and benefits from the high stiffness of parallel structures while eliminating the common problem of linkage collision in parallel structures. It has two rotational degrees of freedom that are decoupled from each other. The Jacobian and the stiffness of the mechanism, accounting for the bending of the links, are calculated through the virtual joints method (VJM). The kinemato-static equations and the methodology for calculating stiffness are described in detail. The optimal arc angle of the mechanism's links is found using a multi-objective genetic algorithm optimization. A prototype of the mechanism is built, and the forward kinematics of the proposed mechanism are examined experimentally. The experiments indicate that the proposed mechanism is able to provide a hemisphere as its workspace while the RCM point of the mechanism remains fixed in space.
|
|
14:30-14:45, Paper TuCT8.3 | |
>Computational Design of Balanced Open Link Planar Mechanisms with Counterweights from User Sketches |
> Video Attachment
|
|
Takahashi, Takuto | Waseda University |
Okuno, Hiroshi G. | Waseda University |
Sugano, Shigeki | Waseda University |
Coros, Stelian | Carnegie Mellon University |
Thomaszewski, Bernhard | Université De Montréal |
Keywords: Product Design, Development and Prototyping, Mechanism Design, Software, Middleware and Programming Environments
Abstract: We consider the design of under-actuated articulated mechanisms that are able to maintain stable static balance. Our method augments a user-provided design with counterweights whose masses and attachment locations are automatically computed. The optimized counterweights adjust the center of gravity such that, for bounded external perturbations, the mechanism returns to its original configuration. Using our sketch-based system, we present several examples illustrating that a wide range of user-provided designs can be successfully converted into statically balanced mechanisms. We further validate our results with a set of physical prototypes.
|
|
14:45-15:00, Paper TuCT8.4 | |
>A Multi-Link In-Pipe Inspection Robot Composed of Active and Passive Compliant Joints |
> Video Attachment
|
|
Kakogawa, Atsushi | Ritsumeikan University |
Ma, Shugen | Ritsumeikan University |
Keywords: Search and Rescue Robots, Field Robots, Mechanism Design
Abstract: AIRo-5.1, an in-pipe inspection robot comprising two passive compliant joints and a single active compliant joint driven by a series elastic actuator (SEA), is presented in this study. As an aid in pipeline maintenance, AIRo-5.1 controls the angles and torques of its middle joints, enabling it to adapt to bent, branched, and vertical pipes as well as slippery surfaces. To sense the joint torques, an improved durable polyurethane rubber spring was installed. To smoothly pass through T-branches, the angle trajectory of the middle joints was calculated based on the pipe geometry and interpolated using a cosine curve. Experiments were conducted to verify the robot's performance in bent and T-branch pipes and its joint angle and torque control.
|
|
15:00-15:15, Paper TuCT8.5 | |
>An Algorithm to Design Redundant Manipulators of Optimally Fault-Tolerant Kinematic Structure |
|
Almarkhi, Ahmad | University |
Maciejewski, Anthony A. | Colorado State University |
Chong, Edwin K. P. | Colorado State University |
Keywords: Redundant Robots, Kinematics, Motion Control
Abstract: One measure of the global fault tolerance of a redundant robot is the size of its self-motion manifold. If this size is defined as the range of its joint angles, then the optimal self-motion manifold size for an n-degree-of-freedom (DoF) robot is n × 2π, which is not typical of existing robot designs. This paper presents a novel two-step algorithm to optimize the kinematic structure of a redundant manipulator to have an optimal self-motion manifold size. The algorithm exploits the fact that singularities occur on large self-motion manifolds by optimizing the robot's kinematic parameters around a singularity. Because a gradient for the self-motion manifold size does not exist, the kinematic parameter optimization uses a coordinate-ascent procedure. The algorithm was used to design 4-DoF, 7-DoF, and 8-DoF manipulators to illustrate its efficacy at generating optimally fault-tolerant robots of any kinematic structure.
|
|
15:15-15:30, Paper TuCT8.6 | |
>Continuously Variable Stiffness Mechanism Using Nonuniform Patterns on Coaxial Tubes for Continuum Microsurgical Robot (I) |
|
Kim, Jongwoo | The Hospital for Sick Children, University of Toronto |
Choi, Woo-Young | Seoul National University |
Kang, Sung-Chul | Samsung Research, Samsung Electronics |
Kim, Chunwoo | Korea Institute of Science and Technology (KIST) |
Cho, Kyu-Jin | Seoul National University, Biorobotics Laboratory |
|
|
TuCT9 |
Room T9 |
Modular Robots & Actuators |
Regular session |
Chair: Gorissen, Benjamin | Harvard University |
Co-Chair: Yang, Woosung | Kwangwoon University |
|
14:00-14:15, Paper TuCT9.1 | |
>Introduction to 7-DoF CoSMo-Arm : High Torque Density Manipulator Based on CoSMoA and E-CoSMo |
> Video Attachment
|
|
Noh, Jaeho | Kwangwoon University |
Lee, Jaeyong | Kwangwoon University |
Cheon, Seyoung | Kwangwoon University |
Yang, Woosung | Kwangwoon University |
Keywords: Actuation and Joint Mechanisms, Mechanism Design, Parallel Robots
Abstract: This study proposes a novel 7-DOF robotic manipulator called CoSMo-Arm, a high-torque-density multi-link robotic platform based on the concentrically stacked modular actuator (CoSMoA) and the extended coaxial spherical joint module (E-CoSMo) introduced in previous research. The CoSMoA is an actuator module designed to improve thermal characteristics by stacking the motor actuator parts so that adjacent actuator modules share a heat dissipation device, theoretically amplifying motor performance by approximately 3.2 times. The E-CoSMo is a parallel joint mechanism connected to the end of the CoSMoA to create four degrees of freedom of point-centered rotation. This joint module has a large range of motion in specific rotational directions and a maximum output of up to approximately four times the actuator output in a specific workspace. The CoSMo-Arm is designed to take advantage of these novel concept modules, achieving a payload higher than its own weight. To verify the benefits of the proposed mechanism, we performed kinematic analysis and dynamics simulations. Experimental verification with a real prototype confirms its feasibility and validity as a multi-DOF robotic manipulator.
|
|
14:15-14:30, Paper TuCT9.2 | |
>FreeBOT: A Freeform Modular Self-Reconfigurable Robot with Arbitrary Connection Point - Design and Implementation |
> Video Attachment
|
|
Liang, Guanqi | The Chinese University of Hong Kong, Shenzhen |
Luo, Haobo | The Chinese University of Hong Kong, Shenzhen |
Li, Ming | Chinese University of Hong Kong, Shenzhen |
Qian, Huihuan | Shenzhen Institute of Artificial Intelligence and Robotics for Society |
Lam, Tin Lun | The Chinese University of Hong Kong, Shenzhen |
Keywords: Cellular and Modular Robots, Mechanism Design
Abstract: This paper proposes a novel modular self-reconfigurable robot (MSRR), “FreeBOT”, which can be connected freely at any point on other robots. FreeBOT is mainly composed of two parts: a spherical ferromagnetic shell and an internal magnet. The connection between the modules is genderless and instant, since the internal magnet can freely attract the spherical ferromagnetic shells of other FreeBOTs and does not need to be precisely aligned with a specified connector. This connection method has fewer physical constraints, so the FreeBOT system can be extended to more configurations to meet more functional requirements. Although it has only two motors, FreeBOT can accomplish multiple tasks: independent module movement, connector management, and system reconfiguration. FreeBOT can move independently on a plane and even climb ferromagnetic walls, and a group of FreeBOTs can traverse complex terrain. Many experiments have been conducted to test these functions, showing that the FreeBOT system has great potential to realize a freeform robotic system.
|
|
14:30-14:45, Paper TuCT9.3 | |
>Design and Modelling of a Minimally Actuated Serial Robot |
> Video Attachment
|
|
Ayalon, Yotam | Ben Gurion University of the Negev |
Damti, Lior | BGU |
Zarrouk, David | Ben Gurion University |
Keywords: Mechanism Design, Underactuated Robots, Simulation and Animation
Abstract: In this paper, we present a minimally actuated, overly redundant serial robot (MASR). The robot is composed of a planar arm comprising ten passive rotational joints and a single mobile actuator that travels over the links to reach designated joints and rotate them. The joints remain locked, using a worm gear setup, after the mobile actuator moves to another link. A gripper is attached to the mobile actuator, allowing it to transport objects along the links to reduce joint actuation and working time. A linear stepper motor controls the vertical motion of the robot in 3D space. We present the mechanical design of the robot with its 10 passive joints and the automatic actuation of the mobile actuator. We also present an optimization algorithm and simulations designed to minimize the working time and the traveled distance of the mobile actuator. Multiple experiments conducted with a robotic prototype demonstrate the advantages of the MASR robot: its very low weight compared to similar robots, its high modularity, and the ease of replacing its parts, since there is no wiring along the arm, as shown in the accompanying video.
|
|
14:45-15:00, Paper TuCT9.4 | |
>R-Track: Separable Modular Climbing Robot Design for Wall-To-Wall Transition |
|
Park, Changmin | RoDEL |
Bae, Jangho | Seoul National University |
Ryu, Sijun | Hanyang University |
Lee, Jiseok | Hanyang University |
Seo, TaeWon | Hanyang University |
Keywords: Mechanism Design, Cellular and Modular Robots
Abstract: This paper presents the development of a reconfigurable wall-climbing robot (WCR) called R-Track. R-Track is designed to apply an adhesive force to a surface with magnetic tracks because it operates inside metal structures. By applying a modular design concept, R-Track can perform various wall-to-wall transitions. Each module of R-Track can be connected or disconnected without an additional actuator. R-Track is capable of all kinds of perpendicular wall-to-wall transitions. In particular, external wall transitions, which have been difficult for previous WCRs to realize, have been achieved by R-Track through cooperation between modules. The statics of R-Track during wall transitions was analyzed to identify and verify an appropriate reconfiguration strategy. Experiments on wall-to-wall transitions were conducted to demonstrate the performance of R-Track. The results indicate that R-Track successfully performed all kinds of perpendicular wall-to-wall transitions.
|
|
15:00-15:15, Paper TuCT9.5 | |
>A Soft, Modular, and Bi-Stable Dome Actuator for Programmable Multi-Modal Locomotion |
> Video Attachment
|
|
Bell, Michael | Harvard School of Engineering and Applied Sciences |
Cattani, Luca | EPFL |
Gorissen, Benjamin | Harvard University |
Bertoldi, Katia | Harvard University |
Weaver, James | Harvard University/ Wyss |
Wood, Robert | Harvard University |
Keywords: Hydraulic/Pneumatic Actuators, Soft Sensors and Actuators, Soft Robot Materials and Design
Abstract: Movement in bio-inspired robots typically relies on the use of a series of actuators and transmissions with one or more degrees of freedom (DOF), allowing asymmetrical ellipsoidal gaits for use in walking, running, swimming, and crawling. In an effort to simplify these multi-component systems, we present a novel, modular, soft, bi-stable, one DOF dome actuator platform that is capable of complex gaits through mechanical programming, driven by simple periodic fluid input. With a modular, reconfigurable design, the end effectors of these bi-stable dome actuators can be quickly modified for use on a variety of surfaces for specific applications. In the present study, we describe the finite element modeling, manufacturing, and characterization of different end effectors and outline a workflow for the implementation of these soft bi-stable dome actuators for the production of functional robotic prototypes.
|
|
TuCT10 |
Room T10 |
Parallel Robots |
Regular session |
Chair: Rojas, Nicolas | Imperial College London |
Co-Chair: Bergeles, Christos | King's College London |
|
14:00-14:15, Paper TuCT10.1 | |
>Modeling, Calibration, and Evaluation of a Tendon-Actuated Planar Parallel Continuum Robot |
> Video Attachment
|
|
Nuelle, Kathrin | Leibniz Universität Hannover |
Sterneck, Tim | Leibniz University Hannover |
Lilge, Sven | University of Toronto Mississauga |
Xiong, Dezhu | Institute of Mechatronic Systems, Leibniz University Hannover |
Burgner-Kahrs, Jessica | University of Toronto Mississauga |
Ortmaier, Tobias | Leibniz University Hanover |
Keywords: Parallel Robots, Flexible Robots, Calibration and Identification
Abstract: In this work, a novel planar parallel continuum robot (PCR) is introduced, consisting of three kinematic chains that are coupled at a triangular end-effector platform and include tendon-actuated continuum segments. The kinematics of the resulting structure are derived by adapting the descriptions for conventional planar parallel manipulators to include constant curvature bending of the utilized continuous segments. To account for friction and non-linear material effects, a data-driven model is used to relate tendon displacements and curvature of the utilized continuum segments. A calibration of the derived kinematic model is conducted to specifically represent the constructed prototype. This includes the calibration of geometric parameters for each kinematic chain and for the end-effector platform. During evaluation, positioning repeatability of 1.0% in relation to one continuum segment length of the robot, and positioning accuracy of 1.4%, are achieved. These results are comparable to commonly used kineto-static modeling approaches for PCR. The presented model achieves high path accuracies regarding the robot’s end-effector pose in an open-loop control scenario.
|
|
14:15-14:30, Paper TuCT10.2 | |
>Transferability in an 8-DoF Parallel Robot with a Configurable Platform |
|
Dahmouche, Redwan | Université De Franche Comté |
Wen, Kefei | Université Laval |
Gosselin, Clement | Université Laval |
Keywords: Parallel Robots, Kinematics, Micro/Nano Robots
Abstract: Parallel robots with configurable platforms (PRCPs) combine the benefits of parallel robots with additional functionalities such as grasping and cutting. However, some of the theoretical tools used to study classical parallel robots do not apply to parallel robots with configurable platforms. This paper uses screw theory to study the transferable wrenches from the robot's limbs to the configurable platform of an 8-DoF parallel robot. Deriving the transferable wrenches allows one to construct the screw system that is applied to each part of the configurable platform. Based on the analytical expressions of the limb and platform wrenches that have been derived and numerically validated, the mathematical tools that are used to study parallel kinematic structures, such as Grassmann line geometry, can thus be applied to the presented parallel robot with a configurable platform.
|
|
14:30-14:45, Paper TuCT10.3 | |
>Design, Modelling, and Implementation of a 7-DOF Cable-Driven Haptic Device with a Configurable Cable Platform |
|
Lambert, Patrice | King's College London |
Da Cruz, Lyndon | Moorfields Eye Hospital |
Bergeles, Christos | King's College London |
Keywords: Parallel Robots, Kinematics, Haptics and Haptic Interfaces
Abstract: This article introduces a novel 7 Degree Of Freedom (DOF) cable-driven haptic device based on the concept of a configurable cable platform. In the proposed concept, a 1-DOF pinch grasping capability is provided via a network of ten passive cables kept in tension. The coordinated action on the cable platform of eight active cables driven from the base, fully controls the position, orientation, and grasping configuration of the device. This constitutes the first 7-DOF cable-driven robot that is made of a network of cables instead of a pure parallel architecture. Original static and kinematic models were developed to address the particularities of the proposed architecture. They are detailed in this manuscript and used to define the workspace and the control algorithm of the design. A working prototype illustrating an implementation of the theory is presented.
|
|
14:45-15:00, Paper TuCT10.4 | |
>Continuous Tension Validation for Cable-Driven Parallel Robots |
> Video Attachment
|
|
Bury, Diane | Tecnalia France |
Izard, Jean-Baptiste | Tecnalia Research & Innovation |
Gouttefarde, Marc | CNRS |
Lamiraux, Florent | CNRS |
Keywords: Parallel Robots, Motion and Path Planning
Abstract: This paper deals with continuous tension validation for Cable-Driven Parallel Robots (CDPRs). The proposed method aims at determining whether or not a quasi-static path is feasible regarding cable tension limits. The available wrench set (AWS) is the set of wrenches that can be generated with cable tensions within given minimum and maximum limits. A pose of the robot is considered valid regarding the tensions if and only if the wrench induced by the platform weight is inside the AWS. The hyperplane shifting method gives a geometric representation of the AWS as the intersection of half-spaces. For each facet-defining hyperplane of the AWS, we define a value which is positive when the pose is valid, i.e. when the corresponding wrench lies on the proper side of the hyperplane. Using this value and an upper bound on its time derivative along the path, the half-length of a valid time interval is obtained. Intervals are repeatedly validated for each hyperplane until either the whole path is validated or a non-valid pose is found. The presented method is integrated within the open-source software Humanoid Path Planner (HPP) and implementation results using the configuration of the CDPR CoGiRo are presented.
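The interval-validation idea in this abstract (a positive validity value plus an upper bound on its time derivative yielding a provably valid interval) can be sketched generically. Here `h` and `L` are placeholders for the per-hyperplane value and its derivative bound, not the authors' implementation:

```python
def validate_path(h, L, t0=0.0, t1=1.0, eps=1e-6):
    """Check h(t) > 0 on [t0, t1], given |dh/dt| <= L along the path.

    Returns (True, None) if the whole interval is valid, otherwise
    (False, t) at the first time found where h(t) <= 0.
    """
    t = t0
    while t <= t1:
        v = h(t)
        if v <= 0.0:
            return False, t
        # Since |dh/dt| <= L, h cannot reach zero sooner than v / L
        # from t, so the interval [t, t + v / L] is certified valid.
        t += max(v / L, eps)
    return True, None
```

The step `v / L` is the half-length argument from the abstract: a larger margin `v` certifies a longer interval, so validation concentrates samples near poses that are close to the boundary of the available wrench set.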
|
|
15:00-15:15, Paper TuCT10.5 | |
>Improving Disturbance Rejection and Dynamics of Cable Driven Parallel Robots with On-Board Propellers |
> Video Attachment
|
|
Khayour, Imane | University of Strasbourg |
Cuvillon, Loic | University of Strasbourg |
Butin, Côme | INSA Strasbourg |
Yigit, Arda | University of Strasbourg |
Durand, Sylvain | INSA Strasbourg & ICube |
Gangloff, Jacques | University of Strasbourg |
Keywords: Dynamics, Redundant Robots, Aerial Systems: Mechanics and Control
Abstract: This work studies redundant actuation for both trajectory tracking and disturbance rejection on flexible cable-driven parallel robots (CDPR). High dynamics/bandwidth unidirectional force generators, like air propellers, are used in combination with conventional but slower cable winding winches. To optimally balance the action of the two types of actuation within their saturation constraints, a model predictive controller is used. Experiments show the added value of on-board propulsion units with respect to winch-only control in order to improve the overall CDPR dynamic behavior.
|
|
15:15-15:30, Paper TuCT10.6 | |
>On the False Positives and False Negatives of the Jacobian Matrix in Kinematically Redundant Parallel Mechanisms (I) |
|
Baron, Nicholas | University of Sussex |
Philippides, Andrew | University of Sussex |
Rojas, Nicolas | Imperial College London |
Keywords: Kinematics, Parallel Robots, Redundant Robots
Abstract: The Jacobian matrix is a highly popular tool for the control and performance analysis of closed-loop robots. Its usefulness in parallel mechanisms is certainly apparent, yet its application to solve motion planning problems, or other higher-level questions, has seldom been questioned, or has been limited to nonredundant systems. In this article, we discuss the shortcomings of the use of the Jacobian matrix under redundancy, in particular when applied to kinematically redundant parallel architectures with non-serially connected actuators. These architectures have become fairly popular recently as they allow the end-effector to achieve full rotations, which is an impossible task with traditional topologies. The problems with the Jacobian matrix in these novel systems arise from the need to eliminate the redundant variables forming it, resulting both in situations where the Jacobian incorrectly identifies singularities (false positives) and in situations where it fails to identify singularities (false negatives). These issues have, thus far, remained unaddressed in the literature. We highlight these limitations herein by demonstrating several cases using numerical examples of both planar and spatial architectures.
|
|
TuCT11 |
Room T11 |
Formal Methods and Planning I |
Regular session |
Chair: Gao, Xifeng | Florida State University |
Co-Chair: Feng, Lu | University of Virginia |
|
14:00-14:15, Paper TuCT11.1 | |
>Generating New Lower Abstract Task Operator Using Grid-TLI |
> Video Attachment
|
|
Tokuda, Shumpei | Tokyo Institute of Technology |
Katayama, Mizuho | NEC Corporation |
Yamakita, Masaki | Tokyo Inst. of Technology |
Oyama, Hiroyuki | NEC Corporation |
Keywords: Formal Methods in Robotics and Automation, Foundations of Automation, Task Planning
Abstract: We propose a method of subdividing robot tasks into new lower-abstraction tasks. Describing robot tasks in an abstract manner is effective for motion planning for complex tasks and for teaching robot movements in various environments. However, a more efficient task description may be obtained by using a lower level of abstraction suited to the work environment. We argue that a higher-abstraction task can be expressed as new lower-abstraction subtasks by applying Grid-based Temporal Logic Inference (Grid-TLI). We show that a new task can be completed using the Signal Temporal Logic formula for each cluster. We demonstrate the efficiency of our method through computer simulations of a 2-D security robot task.
|
|
14:15-14:30, Paper TuCT11.2 | |
>Inner-Approximation of Manipulable and Reachable Regions Using Bilinear Matrix Inequalities |
|
Pan, Zherong | The University of North Carolina at Chapel Hill |
Gao, Xifeng | Florida State University |
Liang, He | University of North Carolina at Chapel Hill |
Keywords: Optimization and Optimal Control, Robot Safety, Robust/Adaptive Control of Robotic Systems
Abstract: Given an articulated robot arm, we present a method to identify two regions with non-empty interiors. The first region is a subset of the configuration space where every point in the region is manipulable. The second region is a subset of the workspace where every point in the region is reachable by the end-effector. Our method expresses the kinematic state of the robot arm using maximal coordinates, so that the kinematic constraints take polynomial forms. We then reformulate the optimization-based inverse kinematics (IK) algorithm as gradient flows. Finally, we use sum-of-squares (SOS) programming to certify the convergence of each gradient flow. As our main result, we show that the feasibility of an SOS programming problem is a sufficient condition for the manipulability and reachability of the sublevel sets of polynomial functions. Our method can be used to certify manipulable or reachable regions by solving linear matrix inequalities (LMIs) or to maximize the volume of a region by solving a set of bilinear matrix inequalities (BMIs). These identified regions can then be used in various motion planning problems as hard safety constraints.
|
|
14:30-14:45, Paper TuCT11.3 | |
>Towards Transparent Robotic Planning Via Contrastive Explanations
|
Chen, Shenghui | University of Virginia |
Boggess, Kayla | University of Virginia |
Feng, Lu | University of Virginia |
Keywords: Formal Methods in Robotics and Automation, Motion and Path Planning, Human-Centered Automation
Abstract: Providing explanations of chosen robotic actions can help to increase the transparency of robotic planning and improve users' trust. Social sciences suggest that the best explanations are contrastive, explaining not just why one action is taken, but why one action is taken instead of another. We formalize the notion of contrastive explanations for robotic planning policies based on Markov decision processes, drawing on insights from the social sciences. We present methods for the automated generation of contrastive explanations with three key factors: selectiveness, constrictiveness and responsibility. The results of a user study with 100 participants on the Amazon Mechanical Turk platform show that our generated contrastive explanations can help to increase users' understanding and trust of robotic planning policies, while reducing users' cognitive burden.
|
|
14:45-15:00, Paper TuCT11.4 | |
>Decentralized Safe Reactive Planning under TWTL Specifications |
|
Peterson, Ryan | University of Minnesota - Twin Cities |
Buyukkocak, Ali Tevfik | University of Minnesota |
Aksaray, Derya | University of Minnesota |
Yazicioglu, Yasin | University of Minnesota |
Keywords: Formal Methods in Robotics and Automation, Motion and Path Planning, Collision Avoidance
Abstract: We investigate a multi-agent planning problem, where each agent aims to achieve an individual task while avoiding collisions with others. We assume that each agent's task is expressed as a Time-Window Temporal Logic (TWTL) specification defined over a 3D environment. We propose a decentralized receding horizon algorithm for online planning of trajectories. We show that when the environment is sufficiently connected, the resulting agent trajectories are always safe (collision-free) and lead to the satisfaction of the TWTL specifications or their finite temporal relaxations. Accordingly, deadlocks are always avoided and each agent is guaranteed to safely achieve its task with a finite time-delay in the worst case. Performance of the proposed algorithm is demonstrated via numerical simulations and experiments with quadrotors.
|
|
TuCT12 |
Room T12 |
Formal Methods and Planning II |
Regular session |
Chair: O'Kane, Jason | University of South Carolina |
Co-Chair: Dolan, John M. | Carnegie Mellon University |
|
14:00-14:15, Paper TuCT12.1 | |
>Fast LTL-Based Flexible Planning for Dual-Arm Manipulation |
> Video Attachment
|
|
Katayama, Mizuho | NEC Corporation |
Tokuda, Shumpei | Tokyo Institute of Technology |
Yamakita, Masaki | Tokyo Inst. of Technology |
Oyama, Hiroyuki | NEC Corporation |
Keywords: Formal Methods in Robotics and Automation, Hybrid Logical/Dynamical Planning and Verification, Manipulation Planning
Abstract: In this paper, we propose a method for automatically generating object handling actions based on simple action definitions. The need to replace human workers with robots is increasing, and many research projects have addressed robot control with simple motion definitions. However, most such applications are for mobile robots such as drones; if these methods are applied directly to object handling, like a pick-and-place operation, humans must give detailed instructions. Hence, our contribution is to propose a model that simulates the real world with an augmented hybrid system that includes the states of objects. It then becomes possible to automatically generate robot motions from simple motion definitions and to compute them within a reasonable time. We demonstrate through computer simulation with a dual-arm robot that robot motions can be generated from simple definitions even if the environment changes to a certain degree.
|
|
14:15-14:30, Paper TuCT12.2 | |
>Geometrical Interpretation and Detection of Multiple Task Conflicts Using a Coordinate Invariant Index |
|
Schettino, Vincenzo | KUKA Deutschland GmbH |
Fiore, Mario Daniele | KUKA Deutschland GmbH |
Pecorella, Claudia | Università Degli Studi Di Napoli Federico II |
Ficuciello, Fanny | Università Di Napoli Federico II |
Allmendinger, Felix | KUKA Roboter GmbH |
Lachner, Johannes | University of Twente |
Stramigioli, Stefano | University of Twente |
Siciliano, Bruno | Univ. Napoli Federico II |
Keywords: Kinematics, Redundant Robots
Abstract: Modern robots act in dynamic and partially unknown environments where path replanning can be mandatory if changes in the environment are observed. Task-prioritized control strategies are well-known and effective solutions to ensure local adaptation of robot behavior. The highest priority in a stack of tasks is typically given to managing correct robot operation or safe interaction with the environment, such as avoiding obstacles or joint limits, which we can consider as constraints. If a constraint makes achieving a certain task impossible, such as tracking a Cartesian trajectory, a local control algorithm partially sacrifices the latter, which is then only accomplished to the best of the robot's ability to generate internal motions. In this control framework, problems may occur in some applications, such as the surgical domain, where it is not safe for some tasks to be simply sacrificed without prior notice. The contribution of this work is to introduce a coordinate-invariant index that provides a geometrical interpretation of task conflicts in a task-priority control framework, and to develop a method for online detection of algorithmic singularities, with the goal of increasing safety and performance during robot operations.
|
|
14:30-14:45, Paper TuCT12.3 | |
>What to Do When You Can't Do It All: Temporal Logic Planning with Soft Temporal Logic Constraints |
|
Rahmani, Hazhar | University of South Carolina |
O'Kane, Jason | University of South Carolina |
Keywords: Formal Methods in Robotics and Automation, Task Planning
Abstract: In this paper, we consider a temporal logic planning problem in which the objective is to find an infinite trajectory that satisfies an optimal selection from a set of soft specifications expressed in linear temporal logic (LTL) while nevertheless satisfying a hard specification expressed in LTL. Our previous work considered a similar problem in which linear dynamic logic for finite traces (LDLf), rather than LTL, was used to express the soft constraints. In that work, LDLf was used to impose constraints on finite prefixes of the infinite trajectory. By using LTL, one is able not only to impose constraints on the finite prefixes of the trajectory, but also to set 'soft' goals across the entirety of the infinite trajectory. Our algorithm first constructs a product automaton, on which the planning problem is reduced to computing a lasso with minimum cost. Among all such lassos, it is desirable to compute a shortest one. Though we prove that computing such a shortest lasso is computationally hard, we also introduce an efficient greedy approach to synthesize short lassos nonetheless. We present two case studies describing an implementation of this approach, and report results of our experiment comparing our greedy algorithm with an optimal baseline.
|
|
14:45-15:00, Paper TuCT12.4 | |
>ReachFlow: An Online Safety Assurance Framework for Waypoint-Following of Self-Driving Cars |
> Video Attachment
|
|
Lin, Qin | Carnegie Mellon University |
Chen, Xin | University of Dayton |
Khurana, Aman | Carnegie Mellon University |
Dolan, John M. | Carnegie Mellon University |
Keywords: Robot Safety, Formal Methods in Robotics and Automation
Abstract: Learning-enabled components have been widely deployed in autonomous systems. However, due to the weak interpretability and the prohibitively high complexity of large-scale machine learning models such as neural networks, reliability has been a crucial concern for safety-critical autonomous systems. This work proposes an online monitor called ReachFlow for fault prevention of waypoint-following tasks for self-driving cars. It mainly consists of two components: (a) an online verification tool which conservatively checks the safety of the system behavior in the near future, and (b) a fallback controller which steers the system back to a desired state when the system is potentially unsafe. We implement ReachFlow in a self-driving racing car governed by a reinforcement learning-based controller. We demonstrate the effectiveness by rigorously verifying a safe waypoint-following control and providing a fallback control for an unsafe situation in which a large deviation from the planned path is predicted.
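The monitor-plus-fallback pattern described in this abstract can be sketched abstractly. Here `safe_ahead` stands in for a conservative near-future safety check in the spirit of ReachFlow's online verification; all names are hypothetical, not the paper's API:

```python
def monitored_step(state, learned_ctrl, fallback_ctrl, safe_ahead):
    """One control step of a runtime safety monitor.

    The learned controller is used only when a conservative check
    certifies the near-future behavior under its proposed command;
    otherwise the fallback controller steers the system back toward
    a desired (safe) state.
    """
    u = learned_ctrl(state)
    if safe_ahead(state, u):
        return u
    return fallback_ctrl(state)
```

The key design point is that `safe_ahead` must be conservative: a false "safe" answer breaks the guarantee, while a false "unsafe" answer only costs performance by invoking the fallback unnecessarily.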
|
|
TuCT13 |
Room T13 |
Geometric Methods in Planning |
Regular session |
Chair: Tapia, Lydia | University of New Mexico |
Co-Chair: Denny, Jory | University of Richmond |
|
14:00-14:15, Paper TuCT13.1 | |
>Competitive Coverage: (Full) Information As a Game Changer
> Video Attachment
|
|
Samson, Moshe | Bar-Ilan University |
Agmon, Noa | Bar Ilan University |
Keywords: Motion and Path Planning, Formal Methods in Robotics and Automation, Simulation and Animation
Abstract: This paper introduces the Competitive Coverage problem, a new variant of the robotic coverage problem in which a robot R competes with another robot O in order to be the first to cover an area. In the variant discussed in this paper, the asymmetric competitive coverage, O is unaware of the existence of R, which attempts to take that fact into consideration in order to succeed in being the first to cover as many parts of the environment as possible. We consider different information models of R that define how much it knows about the location of O and its planned coverage path. We present an optimal algorithm for R in the full-information case, and show that unless R has information about O's initial location, it is as if it has no information at all. Lastly, we describe a correlation between the time it takes R to reach O's initial location and the quality of the coverage paths, and present a heuristic algorithm for the case in which R has information only about O's initial location, showing its superiority compared to other coverage algorithms in rigorous simulation experiments.
|
|
14:15-14:30, Paper TuCT13.2 | |
>Planning for Robust Visibility-Based Pursuit-Evasion |
> Video Attachment
|
|
Stiffler, Nicholas | University of South Carolina |
O'Kane, Jason | University of South Carolina |
Keywords: Motion and Path Planning, Computational Geometry, Planning, Scheduling and Coordination
Abstract: This paper addresses the problem of planning for visibility-based pursuit evasion, in contexts where the pursuer robot may experience some positioning errors as it moves in search of the evader. Specifically, we consider the case in which a pursuer with an omnidirectional sensor searches a known environment to locate an evader that may move arbitrarily quickly. Known algorithms for this problem are based on decompositions of the environment into regions, followed by a search for a sequence of those regions through which the pursuer should pass. In this paper, we note that these regions can be arbitrarily small, and thus that the movement accuracy required of the pursuer may be arbitrarily high. To resolve this limitation, we introduce the notion of an ɛ-robust solution strategy, in which ɛ is an upper bound on the positioning error that the pursuer may experience. We establish sufficient conditions under which a solution strategy is ɛ-robust, and introduce an algorithm that determines, for a given environment, the largest value of ɛ for which a solution strategy satisfying those sufficient conditions exists. We describe an implementation and show simulated results demonstrating the effectiveness of the approach.
|
|
14:30-14:45, Paper TuCT13.3 | |
>Topology-Guided Roadmap Construction with Dynamic Region Sampling |
|
Sandstrom, Read | Texas A&M University |
Uwacu, Diane | Texas A&M University |
Denny, Jory | University of Richmond |
Amato, Nancy | University of Illinois |
Keywords: Motion and Path Planning, Computational Geometry
Abstract: Many types of planning problems require discovery of multiple pathways through the environment, such as multi-robot coordination or protein ligand binding. The Probabilistic Roadmap algorithm (PRM) is a powerful tool for this case, but often cannot efficiently connect the roadmap in the presence of narrow passages. In this paper, we present a guidance mechanism that encourages the rapid construction of well-connected roadmaps with PRM methods. We leverage a topological skeleton of the workspace to track the algorithm's progress in both covering and connecting distinct neighborhoods, and employ this information to focus computation on the uncovered and unconnected regions. We demonstrate how this guidance improves PRM's efficiency in building a roadmap that can answer multiple queries in both robotics and protein ligand binding applications.
|
|
14:45-15:00, Paper TuCT13.4 | |
>Quaternion-Based Trajectory Optimization of Human Postures for Inducing Target Muscle Activation Patterns |
|
Teramae, Tatsuya | ATR Computational Neuroscience Laboratories |
Matsubara, Takamitsu | Nara Institute of Science and Technology |
Noda, Tomoyuki | ATR Computational Neuroscience Laboratories |
Morimoto, Jun | ATR Computational Neuroscience Labs |
Keywords: Motion and Path Planning, Rehabilitation Robotics
Abstract: In exercise and rehabilitation, to effectively train the human body, the human motion trajectory is essential because it induces muscle activity patterns. In this paper, we develop a novel framework for the trajectory optimization of human postures, including the head, the limbs, and the body, to induce patterns of target muscle activities. Our framework has the following features: 1) a data-driven muscle-skeleton model for managing user-specific features; 2) a quaternion-based state representation amenable to IMU sensors for human posture measurement; 3) joint optimization of human postures to replicate therapists, who adjust not only paralyzed limbs but also the patient's other limbs and body posture. We experimentally investigated the effectiveness of our framework with a shoulder-joint assistive exoskeleton robot for rehabilitation.
|
|
15:00-15:15, Paper TuCT13.5 | |
>Deep Prediction of Swept Volume Geometries: Robots and Resolutions |
> Video Attachment
|
|
Baxter, John | University of New Mexico |
Yousefi, Mohammad R. | University of New Mexico |
Sugaya, Satomi | The University of New Mexico |
Morales, Marco | Instituto Tecnológico Autónomo De México |
Tapia, Lydia | University of New Mexico |
Keywords: Motion and Path Planning, Computational Geometry
Abstract: Computation of the volume of space required for a robot to execute a sweeping motion from a start to a goal has long been identified as a critical primitive operation in both task and motion planning. However, swept volume computation is particularly challenging for multi-link robots with geometric complexity, e.g., manipulators, due to the non-linear geometry. While earlier work has shown that deep neural networks can approximate the swept volume quantity, a useful parameter in sampling-based planning, general network structures do not lend themselves to outputting geometries. In this paper we train and evaluate the learning of a deep neural network that predicts the swept volume geometry from pairs of robot configurations and outputs discretized voxel grids. We perform this training on a variety of robots from 6 to 16 degrees of freedom. We show that most errors in the prediction of the geometry lie within a distance of 3 voxels from the surface of the true geometry and it is possible to adjust the rates of different error types using a heuristic approach. We also show it is possible to train these networks at varying resolutions by training networks with up to 4x smaller grid resolution with errors remaining close to the boundary of the true swept volume geometry surface.
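As a point of reference, the ground-truth voxel grids such a network is trained against can be produced by brute force: voxelize sampled robot surface points along the swept motion. A minimal sketch, with the input sampling format assumed rather than taken from the paper:

```python
import numpy as np

def swept_voxels(points_per_config, res=32, lo=-1.0, hi=1.0):
    """Brute-force swept-volume voxelization.

    points_per_config: iterable of (N, 3) arrays of robot surface
    points, one array per interpolated configuration along the motion.
    Marks every voxel of a res^3 grid over [lo, hi]^3 touched by any
    sampled point.
    """
    grid = np.zeros((res, res, res), dtype=bool)
    scale = res / (hi - lo)
    for pts in points_per_config:
        idx = np.clip(((pts - lo) * scale).astype(int), 0, res - 1)
        grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return grid
```

This is exactly the expensive computation the learned predictor is meant to replace: cost grows with the number of interpolated configurations and surface samples, whereas a trained network produces the discretized grid in a single forward pass.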
|
|
15:15-15:30, Paper TuCT13.6 | |
>Asymptotically-Optimal Topological Nearest-Neighbor Filtering |
|
Sandstrom, Read | Texas A&M University |
Denny, Jory | University of Richmond |
Amato, Nancy | University of Illinois |
Keywords: Motion and Path Planning, Computational Geometry, Manipulation Planning
Abstract: Nearest-neighbor finding is a major bottleneck for sampling-based motion planning algorithms. The cost of finding nearest neighbors grows with the size of the roadmap, leading to a significant computational bottleneck for problems which require many configurations to find a solution. In this work, we develop a method of mapping configurations of a jointed robot to neighborhoods in the workspace that supports fast search for configurations in nearby neighborhoods. This expedites nearest-neighbor search by locating a small set of the most likely candidates for connecting to the query with a local plan. We show that this filtering technique can preserve asymptotically-optimal guarantees with modest requirements on the distance metric. We demonstrate the method's efficacy in planning problems for rigid bodies and both fixed and mobile-base manipulators.
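The filtering idea, searching only candidates whose workspace neighborhood matches the query's, can be illustrated with a toy sketch. This illustrates the candidate-filtering concept only, not the paper's mechanism for preserving asymptotic optimality, and all names are hypothetical:

```python
def filtered_nearest(query_region, query_cfg, region_of, configs, dist):
    """Nearest-neighbor search restricted to one workspace region.

    region_of maps a configuration to its workspace neighborhood;
    only configurations in the query's region are compared, falling
    back to the full set when that region holds no candidates.
    """
    candidates = [c for c in configs if region_of(c) == query_region]
    pool = candidates if candidates else configs
    return min(pool, key=lambda c: dist(c, query_cfg))
```

The filter trades a full scan of the roadmap for a scan of one region's occupants, which is the source of the speedup; the true nearest neighbor may live in an adjacent region, which is why the paper's guarantees hinge on conditions relating the regions to the distance metric.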
|
|
TuCT14 |
Room T14 |
Motion and Path Planning I |
Regular session |
Chair: Balkcom, Devin | Dartmouth College |
Co-Chair: Schoellig, Angela P. | University of Toronto |
|
14:00-14:15, Paper TuCT14.1 | |
>PLRC*: A Piecewise Linear Regression Complex for Approximating Optimal Robot Motion |
|
Zhao, Luyang | Dartmouth College |
Putman, Josiah | Dartmouth College |
Wang, Weifu | University at Albany, SUNY |
Balkcom, Devin | Dartmouth College |
Keywords: Motion and Path Planning
Abstract: Discrete graphs are commonly used to approximately represent configuration spaces used in robot motion planning. This paper explores a representation in which the costs of crossing local regions of the configuration space are represented using piecewise linear regression (PLR). We explore a few simple motion planning problems, and show that for these problems, the memory required to store the representation compares favorably to that required for standard discrete vertex-and-edge models, while preserving the quality of paths returned from searches.
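The core primitive, one linear cost model per local region fitted by least squares, might look like the following sketch; the dictionary input format is an assumption for illustration, not the paper's data structure:

```python
import numpy as np

def fit_region_costs(samples):
    """Fit one linear crossing-cost model per region: cost ~ w.x + b.

    samples: {region_id: (X, y)} where X is an (N, d) array of sampled
    crossing parameters and y the (N,) observed crossing costs.
    Returns {region_id: coefficients}, with the bias b as last entry.
    """
    models = {}
    for rid, (X, y) in samples.items():
        A = np.hstack([X, np.ones((len(X), 1))])  # append bias column
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        models[rid] = coef
    return models
```

Storing a few coefficients per region, rather than explicit vertices and edges with per-edge costs, is what yields the memory savings the abstract reports.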
|
|
14:15-14:30, Paper TuCT14.2 | |
>Relevant Region Exploration on General Cost-Maps for Sampling-Based Motion Planning |
> Video Attachment
|
|
Joshi, Sagar | Georgia Institute of Technology |
Tsiotras, Panagiotis | Georgia Tech |
Keywords: Motion and Path Planning, Motion Control, Collision Avoidance
Abstract: Asymptotically optimal sampling-based planners require an intelligent exploration strategy to accelerate convergence. After an initial solution is found, a necessary condition for improvement is to generate new samples in the so-called “Informed Set”. However, Informed Sampling can be ineffective in focusing search if the chosen heuristic fails to provide a good estimate of the solution cost. This work proposes an algorithm to sample the “Relevant Region” instead, which is a subset of the Informed Set. The Relevant Region utilizes cost-to-come information from the planner's tree structure, reduces dependence on the heuristic, and further focuses the search. Benchmarking tests in uniform and general cost-space settings demonstrate the efficacy of Relevant Region sampling.
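For context, the Informed Set membership test with a Euclidean distance heuristic reduces to a prolate-hyperspheroid check; the Relevant Region further restricts this set using cost-to-come values from the search tree. A minimal sketch of the informed check:

```python
import numpy as np

def in_informed_set(x, start, goal, c_best):
    """Euclidean-heuristic informed-set test.

    A sample x can only improve the current solution cost c_best if
    ||start - x|| + ||x - goal|| < c_best, i.e. x lies inside the
    prolate hyperspheroid with foci at start and goal.
    """
    return np.linalg.norm(x - start) + np.linalg.norm(x - goal) < c_best
```

When the Euclidean heuristic badly underestimates the true cost (e.g. in general cost maps), this ellipsoid stays large and poorly focused, which is the failure mode the Relevant Region is designed to address.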
|
|
14:30-14:45, Paper TuCT14.3 | |
>Robot Calligraphy Using Pseudospectral Optimal Control in Conjunction with a Novel Dynamic Brush Model
|
Wang, Sen | Georgia Institute of Technology |
Chen, Jiaqi | Georgia Institute of Technology |
Deng, Xuanliang | Georgia Institute of Technology |
Hutchinson, Seth | Georgia Institute of Technology |
Dellaert, Frank | Georgia Institute of Technology |
Keywords: Motion and Path Planning, Optimization and Optimal Control, Modeling, Control, and Learning for Soft Robots
Abstract: Chinese calligraphy is a unique art form with great artistic value but difficult to master. In this paper, we formulate the calligraphy writing problem as a trajectory optimization problem, and propose an improved virtual brush model for simulating the real writing process. Our approach is inspired by pseudospectral optimal control in that we parameterize the actuator trajectory for each stroke as a Chebyshev polynomial. The proposed dynamic virtual brush model plays a key role in formulating the objective function to be optimized. Our approach shows excellent performance in drawing aesthetically pleasing characters, and does so much more efficiently than previous work, opening up the possibility to achieve real-time closed-loop control.
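Parameterizing a stroke's actuator trajectory as one Chebyshev polynomial per dimension can be sketched with NumPy's Chebyshev utilities; the degree and the random coefficients below are placeholders for the values the paper's optimizer would produce:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Hypothetical coefficients for one stroke's (x, y, z) actuator
# trajectory: each dimension is a degree-4 Chebyshev polynomial
# on the normalized time interval [-1, 1].
coeffs = np.random.default_rng(0).normal(size=(3, 5))

def stroke(t):
    """Evaluate the actuator pose at normalized time t in [-1, 1]."""
    return np.array([C.chebval(t, c) for c in coeffs])

ts = np.linspace(-1.0, 1.0, 50)
path = np.stack([stroke(t) for t in ts])  # 50 x 3 waypoints
```

Optimizing a handful of Chebyshev coefficients per stroke, rather than dense waypoints, is what keeps the trajectory-optimization problem small enough for the efficiency the abstract claims.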
|
|
14:45-15:00, Paper TuCT14.4 | |
>Towards General Infeasibility Proofs in Motion Planning |
|
Li, Sihui | Colorado School of Mines |
Dantam, Neil | Colorado School of Mines |
Keywords: Motion and Path Planning
Abstract: We present a general approach for constructing proofs of motion planning infeasibility. Effective high-dimensional motion planners, such as sampling-based methods, are limited to probabilistic completeness, so when no plan exists, these planners either do not terminate or can only run until a timeout. We address this completeness challenge by augmenting a sampling-based planner with a method to create an infeasibility proof in conjunction with building the search tree. An infeasibility proof is a closed polytope that separates the start and goal into disconnected components of the free configuration space. We identify possible facets of the polytope via a nonlinear optimization procedure using sampled points in the non-free configuration space. We identify the set of facets forming the separating polytope via a linear constraint satisfaction problem. This proof construction is valid for general (i.e., non-Cartesian) configuration spaces. We demonstrate this approach on the low-dimensional Jaco manipulator and discuss engineering approaches to scale to higher dimensional spaces.
|
|
15:00-15:15, Paper TuCT14.5 | |
>Accelerating Bi-Directional Sampling-Based Search for Motion Planning of Non-Holonomic Mobile Manipulators |
> Video Attachment
|
|
Thakar, Shantanu | University of Southern California |
Rajendran, Pradeep | University of Southern California |
Kim, Hyojeong | University of Southern California |
Kabir, Ariyan M | University of Southern California |
Gupta, Satyandra K. | University of Southern California |
Keywords: Motion and Path Planning, Mobile Manipulation, Nonholonomic Motion Planning
Abstract: Determining a feasible path for nonholonomic mobile manipulators operating in congested environments is challenging. Sampling-based methods, especially bi-directional tree search-based approaches, are amongst the most promising candidates for quickly finding feasible paths. However, sampling uniformly when using these methods may result in high computation time. This paper introduces two techniques to accelerate the motion planning of such robots. The first one is coordinated focusing of samples for the manipulator and the mobile base based on the information from robot surroundings. The second one is a heuristic for making connections between the two search trees, which is challenging owing to the nonholonomic constraints on the mobile base. Incorporating these two techniques into the bi-directional RRT framework results in about 5x faster and 10x more successful computation of paths as compared to the baseline method.
|
|
15:15-15:30, Paper TuCT14.6 | |
>Catch the Ball: Accurate High-Speed Motions for Mobile Manipulators Via Inverse Dynamics Learning |
> Video Attachment
|
|
Dong, Ke | University of Toronto |
Pereida Perez, Karime | University of Toronto |
Shkurti, Florian | University of Toronto |
Schoellig, Angela P. | University of Toronto |
Keywords: Mobile Manipulation, Motion and Path Planning, Optimization and Optimal Control
Abstract: Mobile manipulators consist of a mobile platform equipped with one or more robot arms and are of interest for a wide array of challenging tasks because of their extended workspace and dexterity. Typically, mobile manipulators are deployed in slow-motion collaborative robot scenarios. In this paper, we consider scenarios where accurate high-speed motions are required. We introduce a framework for this regime of tasks including two main components: (i) a bi-level motion optimization algorithm for real-time trajectory generation, which relies on Sequential Quadratic Programming (SQP) and Quadratic Programming (QP), respectively; and (ii) a learning-based controller optimized for precise tracking of high-speed motions via a learned inverse dynamics model. We evaluate our framework with a mobile manipulator platform through numerous high-speed ball catching experiments, where we show a success rate of 85.33%. To the best of our knowledge, this success rate exceeds the reported performance of existing related systems and sets a new state of the art.
|
|
TuCT15 |
Room T15 |
Motion and Path Planning II |
Regular session |
Chair: Liu, Cunjia | Loughborough University |
Co-Chair: McMahon, James | The Naval Research Laboratory |
|
14:00-14:15, Paper TuCT15.1 | |
>Informative Path Planning for Gas Distribution Mapping in Cluttered Environments |
> Video Attachment
|
|
Rhodes, Callum | Loughborough University |
Liu, Cunjia | Loughborough University |
Chen, Wen-Hua | Loughborough University |
Keywords: Robotics in Hazardous Fields, Environment Monitoring and Management, Motion and Path Planning
Abstract: Mobile robotic gas distribution mapping (GDM) is a useful tool for hazardous scene assessment where a quick and accurate representation of gas concentration levels is required throughout a staging area. However, research in robotic path planning for GDM has primarily focused on mapping in open spaces or estimating the source term in dispersion models. Whilst this may be appropriate for environment monitoring in general, the vast majority of GDM applications involve obstacles, and path planning for autonomous robots must account for this. This paper aims to tackle this challenge by integrating a GDM function with an informative path planning framework. Several GDM methods are explored for their suitability in cluttered environments and the GMRF method is chosen due to its ability to account for obstacle interactions within the plume. Based on the outputs of the GMRF, several reward functions are proposed for the informative path planner. These functions are compared to a lawnmower sweep in a high fidelity simulation, where the RMSE of the modelled gas distribution is recorded over time. It is found that informing the robot with uncertainty, normalised concentration and time cost, significantly reduces the time required for a single robot to achieve an accurate map in a large-scale, urban environment. In the context of a hazardous gas release scenario, this time reduction could save lives as well as further gas ingress.
|
|
14:15-14:30, Paper TuCT15.2 | |
>Intent-Driven Strategic Tactical Planning for Autonomous Site Inspection Using Cooperative Drones
|
Buksz, Rares-dorian | King's College London |
Mujumdar, Anusha | Ericsson Research |
Orlic, Marin | Ericsson Research |
Mohalik, Swarup | GM R&D |
Daoutis, Marios | Ericsson |
Ramamurthy, Badrinath | Ericsson |
Magazzeni, Daniele | King's College London |
Cashmore, Michael | University of Strathclyde |
Vulgarakis Feljan, Aneta | Ericsson Research |
Keywords: Planning, Scheduling and Coordination, Task Planning, Robotics in Hazardous Fields
Abstract: Realization of industry-scale, goal-driven, autonomous systems with AI planning technology faces several challenges: flexibly specifying planning goal states in varying situations, synthesizing plans in large state spaces, replanning in dynamic situations, and facilitating humans to supervise and provide inputs. In this paper, we present Intent-driven Strategic Tactical Planning (ISTP) to address these challenges and demonstrate its efficacy through its application for radio base station inspection across several locations using drones. The inspection task involves capturing images, thermal images or signal measurements - called knowledge-objects - of various components of the base stations for downstream processing. In the ISTP approach, an operator indicates her goals by flying the drone to different components of interest. These goals are generalized to capture the intent of the operator, which are then instantiated in new situations to generate goals dynamically. Towards planning and replanning in large state spaces to achieve these goals efficiently, we extend the Strategic-Tactical Planning paradigm. All the components of ISTP are integrated in an intuitive UI and demonstrated through a real-life use-case built on the UNITY simulator platform.
|
|
14:30-14:45, Paper TuCT15.3 | |
>Extended Performance Guarantees for Receding Horizon Search with Terminal Cost |
|
Biggs, Benjamin | Virginia Polytechnic Institute and State University |
Stilwell, Daniel | Virginia Tech |
McMahon, James | The Naval Research Laboratory |
Keywords: Marine Robotics, Motion and Path Planning, Field Robots
Abstract: The computational difficulty of planning search paths that seek to maximize a general deterministic value function increases dramatically as desired path lengths increase. Mobile search agents with limited computational resources often utilize receding horizon methods to address the path planning problem. Unfortunately, receding horizon planners may perform poorly due to myopic planning horizons. We provide methods of incorporating terminal costs in the construction of receding horizon paths that provide a theoretical lower bound on the performance of the search paths produced. The results presented in this paper are of particular value in subsea search applications. We present results from simulated subsea search missions that use real-world data acquired by an autonomous underwater vehicle during a subsea survey of Boston Harbor.
|
|
14:45-15:00, Paper TuCT15.4 | |
>Path Planning for Mobile Manipulators under Nonholonomic and Task Constraints |
> Video Attachment
|
|
Pardi, Tommaso | University of Birmingham |
Maddali, Vamsi Krishna | University of Birmingham |
Ortenzi, Valerio | University of Birmingham |
Stolkin, Rustam | University of Birmingham |
Marturi, Naresh | University of Birmingham |
Keywords: Robotics in Hazardous Fields, Kinematics, Motion and Path Planning
Abstract: This paper presents a path planner, which enables a nonholonomic mobile manipulator to move its end-effector on an observed surface with a constrained orientation, given start and destination points. A partial point cloud of the environment is captured using a vision-based sensor, but no prior knowledge of the surface shape is assumed. We consider the multi-objective optimisation problem of finding robot paths which account for the nonholonomic constraints of the base, maximise the robot's manipulability throughout the motion, while also minimising surface-distance travelled between the two points. This work has application in industrial problems of rough robotic cutting, e.g., demolition of legacy nuclear plants, where dismantling does not require a precise path. We show how our approach embeds the nonholonomic constraints into an extended Jacobian, and further consider constraints at the end-effector to stay in contact with the surface to cut. We use this constrained Jacobian to plan the robot configurations. Also, we show how our novel cost function is suitable for classical path planners, like RRT*. We present several empirical experiments on different scenarios, where a simulated nonholonomic mobile manipulator follows a trajectory, which is generated on real-world noisy point clouds. Our planner (RRT*-CRMM) enables successful task completion by optimising the path over the travelled distance, the manipulability of the arm, and the movements of the base.
|
|
15:00-15:15, Paper TuCT15.5 | |
>Inverse Kinematics of Redundant Manipulators with Dynamic Bounds on Joint Movements |
|
Faroni, Marco | National Research Council of Italy |
Beschi, Manuel | National Research Council of Italy |
Pedrocchi, Nicola | National Research Council of Italy (CNR) |
Keywords: Motion and Path Planning, Redundant Robots, Optimization and Optimal Control
Abstract: Redundant manipulators are usually required to perform tasks in the operational space, but collision-free path planning is computed in the configuration space. Limiting the deviation with respect to the collision-free configuration-space trajectory may allow the robot to avoid collisions without modifying the primary task. This paper proposes a method to guarantee that the solution of the inverse kinematic problem deviates from the nominal joint-space trajectory less than a desired threshold. The excursion limitation is ensured by means of linear constraints and the automatic regulation of the weights of secondary tasks. Numerical and experimental results prove the validity of the proposed approach.
|
|
15:15-15:30, Paper TuCT15.6 | |
>Trajectory Planning Over Regular General Surfaces with Application in Robot-Guided Deposition Printing |
> Video Attachment
|
|
Hosseini Jafari, Bashir | University of Texas at Dallas |
Gans, Nicholas (Nick) | University of Texas at Arlington |
Keywords: Motion and Path Planning, Computational Geometry, Reactive and Sensor-Based Planning
Abstract: In this work, we present a novel approach to design and carry out trajectories over regular curved surfaces. This has application in a number of robot path planning problems, including our primary interest in deposition printing. Existing solutions are often ad-hoc in terms of path generation and control of robot pose and velocity. Our approach provides a unified methodology for surface fitting from 3D surface measurements and mapping a curve from 2D onto a 3D surface with minimal distortion. Robot pose control is investigated to guide the end effector along the trajectory while maintaining a standoff distance and keeping the end effector normal to the surface. Simulations and experiments show the performance and necessity of our approach in the application of deposition printing on different 3D surfaces.
|
|
TuCT16 |
Room T16 |
Motion and Path Planning III |
Regular session |
Chair: Cefalo, Massimo | Sapienza University of Rome |
Co-Chair: Ankarali, Mustafa Mert | Middle East Technical University |
|
14:00-14:15, Paper TuCT16.1 | |
>Fast Sequence Rejection for Multi-Goal Planning with Dubins Vehicle |
|
Faigl, Jan | Czech Technical University in Prague |
Váňa, Petr | Czech Technical University in Prague |
Drchal, Jan | Czech Technical University in Prague |
Keywords: Motion and Path Planning, Planning, Scheduling and Coordination, Nonholonomic Motion Planning
Abstract: Multi-goal curvature-constrained planning such as the Dubins Traveling Salesman Problem (DTSP) combines NP-hard combinatorial routing with continuous optimization to determine the optimal vehicle heading angle for each target location. The problem can be addressed as combinatorial routing using a finite set of heading samples at target locations. In such a case, optimal heading samples can be determined for a sequence of targets in polynomial time, and the DTSP can be solved as searching for a sequence with the minimal cost. However, the examination of sequences can be computationally demanding for high numbers of heading samples and target locations. A fast rejection schema is proposed to quickly examine unfavorable sequences using lower bound estimation of Dubins tour cost based on windowing technique that evaluates short subtours of the sequences. Furthermore, the computation using small problem instances can benefit from reusing stored results and thus speed up the search. The reported results indicate that the computational burden is decreased about two orders of magnitude, and the proposed approach supports finding high-quality solutions of routing problems with Dubins vehicle.
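The fast-rejection idea can be illustrated with a simpler stand-in for the paper's windowing schema: the Euclidean distance between consecutive targets lower-bounds any curvature-constrained Dubins path between them, so a cheap sum of straight-line legs is a valid lower bound on the tour cost. The function names below are assumptions for illustration, not the authors' code.

```python
import math

# Illustrative stand-in (assumed, not the paper's exact schema): quickly
# reject a candidate visiting sequence when a cheap lower bound on its
# Dubins tour cost already exceeds the best tour found so far.
def tour_lower_bound(targets):
    """Sum of straight-line legs: a valid lower bound on the Dubins tour."""
    return sum(math.dist(targets[i], targets[i + 1])
               for i in range(len(targets) - 1))

def reject(sequence, incumbent_cost):
    """True if the sequence cannot beat the incumbent and may be skipped."""
    return tour_lower_bound(sequence) > incumbent_cost

seq = [(0.0, 0.0), (3.0, 4.0), (6.0, 8.0)]   # assumed example targets
```

The paper's schema tightens this idea by evaluating short Dubins subtours over sliding windows of the sequence, which yields stronger bounds than straight-line legs alone.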
|
|
14:15-14:30, Paper TuCT16.2 | |
>Experience-Based Prediction of Unknown Environments for Enhanced Belief Space Planning |
|
Asraf, Omri | Technion |
Indelman, Vadim | Technion - Israel Institute of Technology |
Keywords: Task Planning, Learning Categories and Concepts, SLAM
Abstract: Autonomous navigation missions require online decision making abilities, in order to choose from a given set of candidate actions an action that will lead to the best outcome. In a partially observable setting, decision making under uncertainty, also known as belief space planning (BSP), involves reasoning about belief evolution considering realizations of future observations. Yet, when candidate actions lead the robot to an unknown environment the decision making mission becomes a very challenging problem since without a map it is hard to foresee future observations. In this paper we develop a data-driven approach for predicting a distribution over an unexplored map, generating future observations, and combining these observations within BSP. We examine our approach and compare it to existing BSP methods in a Gazebo simulation, and demonstrate it often yields improved performance.
|
|
14:30-14:45, Paper TuCT16.3 | |
>Anytime Kinodynamic Motion Planning Using Region-Guided Search |
|
Westbrook, Matthew | University of New Hampshire |
Ruml, Wheeler | University of New Hampshire |
Keywords: Motion and Path Planning, Nonholonomic Motion Planning
Abstract: Many kinodynamic motion planners have been developed that guarantee probabilistic completeness and asymptotic optimality for systems for which steering functions are available. Recently, some planners have been developed that achieve these properties of completeness and optimality without requiring a steering function. However, these planners have not taken strong advantage of heuristic guidance to speed their search. This paper introduces Region Informed Optimal Trees (RIOT), a sampling-based, asymptotically optimal motion planner for systems without steering functions. RIOT's search is guided by a low-dimensional abstraction of the state space that is updated during planning for better guidance. Simulation results suggest RIOT is adaptable, scalable, and more effective on difficult problems than previous work.
|
|
14:45-15:00, Paper TuCT16.4 | |
>MPC-Graph: Feedback Motion Planning Using Sparse Sampling Based Neighborhood Graph |
> Video Attachment
|
|
Karagoz, Osman Kaan | Middle East Technical University |
Atasoy, Simay | Middle East Technical University |
Ankarali, Mustafa Mert | Middle East Technical University |
Keywords: Motion and Path Planning, Motion Control, Optimization and Optimal Control
Abstract: Robust and safe feedback motion planning and navigation is a critical task for autonomous mobile robotic systems, considering the highly dynamic and uncertain nature of modern application scenarios. For these reasons, motion planning and navigation algorithms with deep roots in feedback control theory have been at the center stage of this domain recently. However, the vast majority of such policies still rely on the idea that a motion planner first generates a set of open-loop, possibly time-dependent trajectories, and then a set of feedback control policies track these trajectories in closed loop while providing some error bounds and guarantees around them. In contrast to trajectory-based approaches, some researchers have developed feedback motion planning strategies based on connected obstacle-free regions, where the task of the local control policies is to drive the robot(s) between these connected regions. In this paper, we propose a feedback motion planning algorithm based on sparse random neighborhood graphs and constrained nonlinear Model Predictive Control (MPC). The algorithm first generates a sparse neighborhood graph as a set of connected simple rectangular regions. After that, during navigation, an MPC-based online feedback control policy funnels the robot with nonlinear dynamics from one rectangle to the next in the network, ensuring that no constraint violation on state and input variables occurs, with guaranteed stability. In this framework, we can drive the robot to any goal location provided that the connected region network covers both the initial condition and the goal position. We demonstrate the effectiveness and validity of the algorithm in simulation studies.
|
|
15:00-15:15, Paper TuCT16.5 | |
>A Disturbance-Aware Trajectory Planning Scheme Based on Model Predictive Control |
|
Paparusso, Luca | Politecnico Di Milano |
Kashiri, Navvab | Istituto Italiano Di Tecnologia |
Tsagarakis, Nikos | Istituto Italiano Di Tecnologia |
Keywords: Motion and Path Planning, Compliance and Impedance Control
Abstract: Despite the development of numerous trajectory planners based on computationally fast algorithms targeting accurate motion of robots, today's robotic applications, which require compliance for interaction with the environment, demand more comprehensive schemes to cope with unforeseen situations. This paper discusses the problem of online Cartesian trajectory planning, targeting a final state in a desired time interval, in such a way that the generated trajectories accommodate the tracking abnormalities due to considerable motion disturbances. We propose a planning scheme based on Model Predictive Control. It utilises a novel strategy to monitor the tracking performance via state feedback and consequently update the trajectory. It also ensures the continuity of the generated reference while accounting for realistic implementation constraints, particularly due to computational capacity limits. To validate the efficacy of the proposed scheme, we examine a practical robotic manipulation scenario in which a given task is executed via a Cartesian impedance controller while an external interaction interrupts the motion. The performance of the proposed strategy as compared to that of a state-of-the-art study is demonstrated in simulation. Finally, a set of experiments verified the effectiveness of the proposed scheme in practice.
|
|
15:15-15:30, Paper TuCT16.6 | |
>An Opportunistic Strategy for Motion Planning in the Presence of Soft Task Constraints |
> Video Attachment
|
|
Cefalo, Massimo | Sapienza University of Rome |
Ferrari, Paolo | Sapienza University of Rome |
Oriolo, Giuseppe | Sapienza University of Rome |
Keywords: Motion and Path Planning, Collision Avoidance, Kinematics
Abstract: Consider the problem of planning collision-free motions for a robot that is assigned a soft task constraint, i.e., a desired path in task space with an associated error tolerance. To this end, we propose an opportunistic planning strategy in which two subplanners take turns in generating motions. The hard planner guarantees exact realization of the desired task path until an obstruction is detected in configuration space; at this point, it invokes the soft planner, which is in charge of exploiting the available task tolerance to bypass the obstruction and returning control to the hard planner as soon as possible. As a result, the robot will perform the desired task for as long as possible, and deviate from it only when strictly needed to avoid a collision. We present several planning experiments performed in V-REP for the PR2 mobile manipulator in order to show the effectiveness of the proposed planner.
|
|
15:15-15:30, Paper TuCT16.7 | |
>Adaptive Reliable Shortest Path in Gaussian Process Regulated Environment |
|
Hou, Xuejie | University of Electronic Science and Technology of China |
Hongliang, Guo | University of Electronic Science and Technology of China |
Zhang, Yucheng | Intelligent Agricultural Machinery Equipment Laboratory of China |
Keywords: Motion and Path Planning, Intelligent Transportation Systems, Probability and Statistical Methods
Abstract: This paper studies the adaptive reliable shortest path (RSP) planning problem in a Gaussian process (GP) regulated environment. With the reasonable assumption that the travel times of the underlying transportation network follow a multi-variate Gaussian distribution, we propose two algorithms namely, Gaussian process reactive path planning (GPRPP), and Gaussian process proactive path planning (GP4), to generate online adaptive routing policies for the reliable shortest path. Both algorithms take advantage of the posterior analytical representation of GPs given past and/or imagined future observations of certain links in the network, and calculate the corresponding adaptive routing strategy for RSP. Theoretical analysis and simulation results (on Sioux Falls Network and Singapore road networks) show the superior performance of GPRPP and GP4 over that of the state of the arts.
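The posterior update that adaptive RSP policies of this kind build on is standard multivariate Gaussian conditioning: observing the travel time of one link shifts the belief over correlated links. The sketch below is illustrative only (the three-link network, its means, and its covariance are assumed values, not from the paper).

```python
import numpy as np

# Assumed toy network: 3 links with jointly Gaussian travel times.
mu = np.array([10.0, 12.0, 8.0])
Sigma = np.array([[4.0, 1.5, 0.5],
                  [1.5, 3.0, 0.8],
                  [0.5, 0.8, 2.0]])

def condition(mu, Sigma, obs_idx, obs_val):
    """Posterior mean/covariance of unobserved links given observed ones."""
    hid = np.setdiff1d(np.arange(len(mu)), obs_idx)
    S_oo = Sigma[np.ix_(obs_idx, obs_idx)]
    S_ho = Sigma[np.ix_(hid, obs_idx)]
    K = S_ho @ np.linalg.inv(S_oo)          # gain mapping surprise to update
    mu_post = mu[hid] + K @ (obs_val - mu[obs_idx])
    Sigma_post = Sigma[np.ix_(hid, hid)] - K @ S_ho.T
    return mu_post, Sigma_post

# Observing link 0 running slow raises the posterior mean of correlated
# links and shrinks their variance, which an adaptive policy exploits.
mu_post, Sigma_post = condition(mu, Sigma, np.array([0]), np.array([14.0]))
```

An adaptive routing policy would recompute the reliable shortest path against `mu_post` and `Sigma_post` after each such observation.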
|
|
TuCT17 |
Room T17 |
Motion and Path Planning: Coverage |
Regular session |
Chair: Nili Ahmadabadi, Zahra | San Diego State University |
Co-Chair: Karydis, Konstantinos | University of California, Riverside |
|
14:00-14:15, Paper TuCT17.1 | |
>Exploration of Unknown Environments with a Tethered Mobile Robot |
> Video Attachment
|
|
Shapovalov, Danylo | West Virginia University |
Pereira, Guilherme | West Virginia University |
Keywords: Motion and Path Planning
Abstract: This paper presents a tangle-free frontier based exploration algorithm for planar mobile robots equipped with limited length and anchored tethers. After planning a path to the closest point in the frontier between free and unknown space, the robot computes an estimate of the future length of its tether and decides, by comparing the anticipated length with the minimum possible tether length, whether the path should be followed or not. If the anticipated tether is longer than the minimum tether by a function of the expected radius of the obstacles, a path planner with homotopic constraints is used to plan a path that brings the robot tether to the same homotopy class of the shortest tether. This behavior will not only limit the tether length but also will prevent tether entangling on the obstacles of the environment. We evaluate our method in different simulated environments and illustrate the approach with an actual tethered robot.
|
|
14:15-14:30, Paper TuCT17.2 | |
>Completeness Seeking Probabilistic Coverage Estimation Using Uncertain State Estimates |
|
Mahajan, Aditya | Stanford University |
Rock, Stephen | Stanford |
Keywords: Autonomous Vehicle Navigation, Motion and Path Planning, Marine Robotics
Abstract: This paper develops a coverage-centric adaptive path planner to visually survey a planar environment. This is achieved by modifying an existing path planning architecture to use a novel coverage estimation approach called convolved coverage estimation (CCE). The planner maximizes the probability of terrain coverage and exploits terrain features for loop closure to keep path uncertainty in check. The developed algorithm considers multi-dimensional uncertainty, operates in real-time, and does not require external correction methods like GPS. These characteristics are validated in high-fidelity simulation and flight tests on an unmanned aerial vehicle (UAV).
|
|
14:30-14:45, Paper TuCT17.3 | |
>A Multi-System Chaotic Path Planner for Fast and Unpredictable Online Coverage of Terrains |
> Video Attachment
|
|
Sridharan, Karan | Wichita State University |
Nili Ahmadabadi, Zahra | Wichita State University |
Keywords: Motion and Path Planning, Autonomous Agents, Surveillance Systems
Abstract: Coverage path planning (CPP) algorithms customize an autonomous robot’s trajectory for various applications. In surveillance and exploration of unknown environments, random CPP can be very effective in searching and finding objects of interest. Robots with random-like search algorithms must be unpredictable in their motion and simultaneously scan an uncertain environment, avoiding intruders and obstacles in their path. Inducing chaos into the robot’s controller system makes its navigation unpredictable, accounts for better scanning coverage, and avoids any hurdles (obstacles and intruders) without the need for a map of the environment. The unpredictability, however, will come at the cost of increased coverage time. Due to the associated challenges, previous studies have ignored the coverage time and focused instead on the coverage rate only. This paper establishes a novel method that addresses the coverage time challenge of chaotic path planners. The method here combines the properties of two chaotic systems and manipulates them to achieve a fast coverage of the environment. The outcome has been a technique that can fully cover an area in at least 81% less time compared to state-of-the-art methods.
|
|
14:45-15:00, Paper TuCT17.4 | |
>Online Exploration and Coverage Planning in Unknown Obstacle-Cluttered Environments |
> Video Attachment
|
|
Kan, Xinyue | University of California, Riverside |
Teng, Hanzhe | University of California, Riverside |
Karydis, Konstantinos | University of California, Riverside |
Keywords: Nonholonomic Motion Planning, Motion and Path Planning, Robotics in Agriculture and Forestry
Abstract: Online coverage planning can be useful in applications like field monitoring and search and rescue. Without prior information of the environment, achieving resolution-complete coverage considering the non-holonomic mobility constraints in commonly-used vehicles (e.g., wheeled robots) remains a challenge. In this paper, we propose a hierarchical, hex-decomposition-based coverage planning algorithm for unknown, obstacle-cluttered environments. The proposed approach ensures resolution-complete coverage, can be tuned to achieve fast exploration, and plans smooth paths for Dubins vehicles to follow at constant velocity in real-time. Gazebo simulations and hardware experiments with a non-holonomic wheeled robot show that our approach can successfully tradeoff between coverage and exploration speed and can outperform existing online coverage algorithms in terms of total covered area or exploration speed according to how it is tuned.
|
|
15:00-15:15, Paper TuCT17.5 | |
>Visual Coverage Path Planning for Urban Environments |
> Video Attachment
|
|
Peng, Cheng | University of Minnesota, Twin Cities |
Isler, Volkan | University of Minnesota |
Keywords: Motion and Path Planning, Computational Geometry
Abstract: View planning for visual coverage is a fundamental robotics problem. Coverage for small objects (e.g. for inspection) or small-scale indoor scenes has been studied extensively. However, view planning to cover a large-scale urban environment remains challenging. Algorithms that can scale up to the size of such environments while providing performance guarantees are missing. In this paper, we model urban environments as a set of surfaces with k distinct surface normals whose viewing cones must be visited by a robot. We model the resulting coverage problem as a novel variant of the Traveling Salesman Problem with Neighborhoods (TSPN). The neighborhoods are defined as cones, which constrain the path coverage quality. We present a polynomial time algorithm which admits an approximation factor of O((k / tan(alpha)) * max{L_B, W_B, H_B}), where alpha is the maximum viewing angle, and L_B, W_B, H_B are respectively the length, width, and height of a minimum enclosing box of a city scene B. In addition to the analytical upper bounds, we show in simulations that our method outperforms two baseline methods in both trajectory length and run-time. We also demonstrate our method and evaluate the coverage quality of a city containing more than 70 buildings in photo-realistic rendering software.
|
|
15:15-15:30, Paper TuCT17.6 | |
>Max Orientation Coverage: Efficient Path Planning to Avoid Collisions in the CNC Milling of 3D Objects |
|
Chen, Xin | Georgia Institute of Technology |
Tucker, Thomas M. | Tucker Innovations |
Kurfess, Thomas | Georgia Tech |
Vuduc, Richard | Georgia Institute of Technology |
Hu, Liting | Florida International University |
Keywords: Motion and Path Planning, Collision Avoidance
Abstract: Most path planning algorithms covering complex 3D objects ignore limitations or constraints on robots. In reality, these constraints are very likely to consume a large number of resources and thus degrade overall performance. This work considers a scenario in a CNC milling application, where a robot needs to cover the surface of a complex 3D object under the constraint that, at any point on the generated path, the robot must be assigned an accessible orientation to avoid collision with the object. Our proposed approach, which we call max orientation coverage, employs a two-step optimization scheme. It can improve path efficiency with respect to both the path length cost and the cost of dealing with the constraints of avoiding collisions. We evaluate our approach through extensive simulation studies on four CAD benchmarks against a state-of-the-art baseline. We show that our proposed approach can improve the efficiency of the path by 29.7% on average compared with the baseline, and the improvement goes up to 46.5% for certain complex objects.
|
|
TuCT18 |
Room T18 |
Task Planning |
Regular session |
Chair: Fazli, Pooyan | San Francisco State University |
Co-Chair: Sarkar, Chayan | TCS Research & Innovation |
|
14:00-14:15, Paper TuCT18.1 | |
>Task Planning with Belief Behavior Trees |
> Video Attachment
|
|
Safronov, Evgenii | Istituto Italiano Di Tecnologia |
Colledanchise, Michele | IIT - Italian Institute of Technology |
Natale, Lorenzo | Istituto Italiano Di Tecnologia |
Keywords: Behavior-Based Systems, Task Planning
Abstract: In this paper, we propose Belief Behavior Trees (BBTs), an extension to Behavior Trees (BTs) that allows to automatically create a policy that controls a robot in partially observable environments. We extend the semantic of BTs to account for the uncertainty that affects both the conditions and action nodes of the BT. The tree gets synthesized following a planning strategy for BTs proposed recently: from a set of goal conditions we iteratively select a goal and find the action, or in general the subtree, that satisfies it. Such action may have preconditions that do not hold. For those preconditions, we find an action or subtree in the same fashion. We extend this approach by including, in the planner, actions that have the purpose to reduce the uncertainty that affects the value of a condition node in the BT (for example, turning on the lights to have better lighting conditions). We demonstrate that BBTs allows task planning with non-deterministic outcomes for actions. We provide experimental validation of our approach in a real robotic scenario and - for sake of reproducibility - in a simulated one.
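The BT semantic that BBTs extend can be sketched minimally: nodes return Success, Failure, or Running, a sequence fails on the first non-successful child, and a fallback tries children until one does not fail. The node names and state dictionary below are assumptions for illustration, not the paper's implementation (and the belief machinery of BBTs is not shown).

```python
# Minimal behavior-tree sketch (assumed for illustration).
SUCCESS, FAILURE, RUNNING = "S", "F", "R"

def sequence(children):
    """Tick children in order; stop at the first non-successful child."""
    def tick(state):
        for child in children:
            status = child(state)
            if status != SUCCESS:
                return status
        return SUCCESS
    return tick

def fallback(children):
    """Tick children in order; stop at the first non-failing child."""
    def tick(state):
        for child in children:
            status = child(state)
            if status != FAILURE:
                return status
        return FAILURE
    return tick

# Example policy "be at goal, else move" expressed as a fallback node.
at_goal = lambda s: SUCCESS if s["pos"] == s["goal"] else FAILURE
move = lambda s: (s.__setitem__("pos", s["pos"] + 1) or RUNNING)
tree = fallback([at_goal, move])
```

Ticking `tree` repeatedly on a state dict returns Running while moving and Success once the goal condition holds; BBTs replace the crisp condition values with beliefs and add actions whose purpose is to reduce condition uncertainty.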
|
|
14:15-14:30, Paper TuCT18.2 | |
>Cleaning Robot Operation Decision Based on Causal Reasoning and Attribute Learning |
|
Li, Yapeng | Xiangtan University |
Zhang, Dongbo | Xiangtan University |
Yin, Feng | Xiangtan University |
Zhang, Ying | Xiangtan University |
Keywords: Domestic Robots, Learning Categories and Concepts, Human-Centered Robotics
Abstract: In order to improve the operation ability of cleaning robots, this paper proposes a decision method for a cleaning robot's operation mode. First, we use the hierarchical representation ability of a deep network to obtain attributes of garbage such as state, shape, distribution, and size. The causal relationship between the attributes and the operation modes is then built by jointly learning associated attributes with a deep network model and causal inference. Based on this, a fuzzy inference decision network is designed. With the help of causal analysis, the structure of the decision model is greatly simplified: compared with conventional fuzzy neural networks, the total number of model parameters is reduced by two thirds. The proposed method imitates the way humans dispose of different types of garbage and has good interpretability. The experimental results verify the effectiveness of the proposed method.
|
|
14:30-14:45, Paper TuCT18.3 | |
>Robust Task and Motion Planning for Long-Horizon Problems |
> Video Attachment
|
|
Hartmann, Valentin Noah | University of Stuttgart |
Oguz, Ozgur S. | University of Stuttgart |
Driess, Danny | University of Stuttgart |
Toussaint, Marc | Tu Berlin |
Menges, Achim | Institute for Computational Design and Construction, University O |
Keywords: Task Planning, Motion and Path Planning, Robotics in Construction
Abstract: Interest in the integration of robotic systems into architectural and construction processes is rising. Enabling autonomy for these systems can provide analysis tools and facilitate faster design iteration cycles for designers and engineers. However, current use cases mostly comprise conventional robotics functionalities without autonomy. The necessary long-horizon planning is also beyond the capabilities of current task-and-motion planning (TAMP) approaches. In this paper, we develop a multi-agent TAMP robotic framework for long-horizon problems such as constructing a full-scale building. The previously introduced Logic-Geometric Programming framework is extended by sampling-based motion planning and various task-specific strategies (e.g., optimizing the structural stability of placed parts) that allow an effective decomposition of the task. We show that our framework is capable of autonomously constructing, from start to end, a large pavilion built from several hundred geometrically unique building elements. It enables exploration of robotic construction sequences. Overall, the flexible motion planning integration and the task-specific decomposition objectives allow our framework to tackle complex, long-horizon problems.
|
|
14:45-15:00, Paper TuCT18.4 | |
>Task Planning from Complex Natural Instructions by a Collocating Robot |
> Video Attachment
|
|
Pramanick, Pradip | TCS Research & Innovation |
Barua, Hrishav Bakul | TCS Research & Innovation |
Sarkar, Chayan | TCS Research & Innovation |
Keywords: Cognitive Human-Robot Interaction, Social Human-Robot Interaction, Task Planning
Abstract: As the number of robots in our daily surroundings such as homes, offices, restaurants, and factory floors increases rapidly, the development of natural human-robot interaction mechanisms becomes more vital, as it dictates the usability and acceptability of the robots. One of the valued features of such a cohabitant robot is that it performs tasks instructed in natural language. However, executing the human-intended tasks is not trivial, as natural language expressions can have large linguistic variations. Existing works assume either that a single task instruction is given to the robot at a time, or that an instruction contains multiple independent tasks. Complex task instructions composed of multiple inter-dependent tasks, however, are not handled efficiently in the literature. There can be ordering dependency among the tasks, i.e., the tasks have to be executed in a certain order, or there can be execution dependency, i.e., an input parameter or the execution of a task depends on the outcome of another task. Understanding such dependencies in a complex instruction is not trivial if unconstrained natural language is allowed. In this work, we propose a method to find the intended order of execution of multiple inter-dependent tasks given in a natural language instruction. Our experiments show that our system is very accurate in generating a viable execution plan from a complex instruction.
|
|
15:00-15:15, Paper TuCT18.5 | |
>Leveraging Multiple Environments for Learning and Decision Making: A Dismantling Use Case |
|
Suárez-Hernández, Alejandro | CSIC-UPC |
Gaugry, Thierry | Univ. Rennes, INSA, IRISA |
Segovia-Aguas, Javier | Institut De Robòtica I Informàtica Industrial (IRI), CSIC-UPC |
Bernardin, Antonin | INSA Rennes |
Torras, Carme | Csic - Upc |
Marchal, Maud | INSA/INRIA |
Alenyà, Guillem | CSIC-UPC |
Keywords: Task Planning, Simulation and Animation, Probability and Statistical Methods
Abstract: Learning is usually performed by observing real robot executions. Physics-based simulators are a good alternative, providing highly valuable information while avoiding costly and potentially destructive robot executions. We present a novel approach for learning the probabilities of symbolic robot action outcomes. This is done by leveraging different environments, such as physics-based simulators, at execution time. To this end, we propose MENID (Multiple Environment Noise Indeterministic Deictic) rules, a novel representation able to cope with the inherent uncertainties present in robotic tasks. MENID rules explicitly represent each possible outcome of an action, keep a record of the source of the experience, and maintain the probability of success of each outcome. We also introduce an algorithm to distribute actions among environments, based on previous experiences and expected gain. Before using physics-based simulations, we propose a methodology for evaluating different simulation settings and determining the least time-consuming model that can be used while still producing coherent results. We demonstrate the validity of the approach in a dismantling use case, using a simulation with reduced quality as the simulated system, and a full-resolution simulation, in which we add noise to the trajectories and some physical parameters, as a representation of the real system.
|
|
15:15-15:30, Paper TuCT18.6 | |
>Multi-Robot Task Allocation with Time Window and Ordering Constraints |
|
Suslova, Elina | San Francisco State University |
Fazli, Pooyan | San Francisco State University |
Keywords: Multi-Robot Systems, Task Planning, Planning, Scheduling and Coordination
Abstract: The multi-robot task allocation problem comprises task assignment, coalition formation, task scheduling, and routing. We extend the distributed constraint optimization problem (DCOP) formalism to allocate tasks to a team of robots. The tasks have time window and ordering constraints. Each robot creates a simple temporal network to maintain the tasks in its schedule. The proposed layered framework, called L-DCOP, forms efficient coalitions among robots to accomplish the tasks more efficiently as a result of their collective abilities. We conduct extensive experiments to assess the performance of the proposed algorithm and compare it against a benchmark auction-based approach. The results show that L-DCOP increases the task completion rate and task completion frequency by 1.7% and 10.1%, respectively, and reduces the task execution time by 52.5% on average.
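The simple temporal networks that each robot maintains in this abstract are a standard formalism: time-window and ordering constraints become edges of a distance graph, and the schedule is feasible iff that graph has no negative cycle. The sketch below is a generic illustrative consistency check (not the authors' L-DCOP implementation; the encoding convention is an assumption):

```python
def stn_consistent(n, constraints):
    """Check consistency of a simple temporal network with n time points.

    constraints: list of (i, j, ub) meaning t_j - t_i <= ub.  A window
    [lb, ub] on t_j - t_i is encoded as (i, j, ub) and (j, i, -lb).
    The STN is consistent iff its distance graph has no negative cycle.
    """
    INF = float("inf")
    d = [[0.0 if i == j else INF for j in range(n)] for i in range(n)]
    for i, j, ub in constraints:
        d[i][j] = min(d[i][j], ub)
    for k in range(n):                      # Floyd-Warshall all-pairs shortest paths
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return all(d[i][i] >= 0 for i in range(n))
```

For example, requiring a task to start between 5 and 10 time units after a reference point is consistent, while additionally demanding it start within 3 units is not.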
|
|
TuCT19 |
Room T19 |
Learning in Motion Planning |
Regular session |
Chair: Yip, Michael C. | University of California, San Diego |
Co-Chair: Pan, Jia | University of Hong Kong |
|
14:00-14:15, Paper TuCT19.1 | |
>Optimal Robot Motion Planning in Constrained Workspaces Using Reinforcement Learning |
|
Rousseas, Panagiotis | NTUA |
Bechlioulis, Charalampos | National Technical University of Athens |
Kyriakopoulos, Kostas | National Technical Univ. of Athens |
Keywords: Motion and Path Planning, Reactive and Sensor-Based Planning, Optimization and Optimal Control
Abstract: In this work, a novel solution to the optimal motion planning problem is proposed, through a continuous, deterministic, and provably correct approach with guaranteed safety and convergence, based on a parametrized Artificial Potential Field (APF). In particular, Reinforcement Learning (RL) is applied to appropriately adjust the parameters of the underlying potential field towards minimizing the Hamilton-Jacobi-Bellman (HJB) error. The proposed method consistently outperforms the asymptotically optimal Rapidly-exploring Random Tree method (RRT*) and constitutes a promising advance on the optimal motion planning problem.
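The artificial potential field underlying this abstract combines an attractive pull toward the goal with repulsion inside an influence radius around each obstacle. The sketch below is the textbook APF form only; the gains k_att, k_rep, and radius rho0 stand in (as fixed assumptions) for the parameters that the paper tunes with reinforcement learning:

```python
import numpy as np

def apf_gradient(q, goal, obstacles, k_att=1.0, k_rep=100.0, rho0=1.0):
    """Negative gradient of a standard attractive + repulsive potential.

    The gains k_att, k_rep and the influence radius rho0 are illustrative
    stand-ins for the field parameters the paper adjusts via RL.
    """
    grad = -k_att * (q - goal)                       # attractive pull toward the goal
    for obs in obstacles:
        d = np.linalg.norm(q - obs)
        if 0.0 < d < rho0:                           # repulsion only inside rho0
            grad += k_rep * (1.0 / d - 1.0 / rho0) * (q - obs) / d**3
    return grad

def descend(q, goal, obstacles, step=0.05, iters=200):
    """Follow the negative potential gradient with a fixed step size."""
    q, goal = np.asarray(q, float), np.asarray(goal, float)
    for _ in range(iters):
        q = q + step * apf_gradient(q, goal, obstacles)
    return q
```

In an obstacle-free workspace this descent converges to the goal; poorly chosen fixed gains can create local minima near obstacles, which is precisely the shortcoming that motivates learning the parameters.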
|
|
14:15-14:30, Paper TuCT19.2 | |
>Learning to Use Adaptive Motion Primitives in Search-Based Motion Planning for Navigation |
|
Sood, Raghav | Carnegie Mellon University |
Vats, Shivam | Carnegie Mellon University |
Likhachev, Maxim | Carnegie Mellon University |
Keywords: Nonholonomic Motion Planning
Abstract: Heuristic-based graph search algorithms like A* are frequently used to solve motion planning problems in many domains. For most practical applications, it is infeasible and unnecessary to pre-compute the graph representing the whole search space. Instead, these algorithms generate the graph incrementally by applying a fixed set of actions (frequently called motion primitives) to find the successors of every node that they need to evaluate. In many domains, it is possible to define actions (called adaptive motion primitives) that are not pre-computed but generated on the fly. The generation and validation of these adaptive motion primitives is usually quite expensive compared to pre-computed motion primitives. However, they have been shown to drastically speed up search if used judiciously. In prior work, ad hoc techniques like fixed thresholds have been used to limit unsuccessful evaluations of these actions. In this paper, we propose a learning-based approach to make more intelligent decisions about when to evaluate them. We do a thorough empirical evaluation of our model on a 3 degree-of-freedom (dof) motion planning problem for navigation using the Reeds-Shepp path as an adaptive motion primitive. Our experiments show that using our approach in conjunction with search algorithms leads to over 2x speedup in planning time.
|
|
14:30-14:45, Paper TuCT19.3 | |
>Adaptive Dynamic Window Approach for Local Navigation |
|
Dobrevski, Matej | University of Ljubljana |
Skocaj, Danijel | University of Ljubljana |
Keywords: Motion and Path Planning, Collision Avoidance, Reinforcement Learning
Abstract: Local navigation is an essential ability of any mobile robot working in a real-world environment. One of the most commonly used methods for local navigation is the Dynamic Window Approach, which, however, heavily depends on the settings of the parameters in its cost function. Since the optimal choice of the parameters depends on the environment, which may vary significantly and change at any time, the parameters should be chosen dynamically in a data-driven way. To cope with this problem, we propose a novel deep convolutional neural network that dynamically predicts these parameters from the sensor readings. The network is trained using a state-of-the-art reinforcement learning algorithm. In this way, we combine the power of data-driven learning and the dynamic model of the robot, enabling adaptation to the current environment as well as guaranteeing collision-free movement and smooth trajectories of the mobile robot. The experimental results show that the proposed method outperforms the DWA method as well as its recent extension.
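The Dynamic Window Approach cost function whose parameters the network in this abstract predicts is conventionally a weighted sum of heading, clearance, and velocity terms. The sketch below is a minimal illustrative scoring function; the weights and the term normalizations are assumptions, not the authors' implementation:

```python
import math

def dwa_score(v, heading_err, obstacle_dist, params, v_max):
    """Classic DWA objective G = alpha*heading + beta*dist + gamma*velocity.

    heading_err: angle between candidate heading and the goal direction (rad).
    obstacle_dist: clearance to the nearest obstacle along the candidate arc.
    """
    alpha, beta, gamma = params
    heading = 1.0 - abs(heading_err) / math.pi   # 1.0 when aligned with the goal
    dist = min(obstacle_dist, 1.0)               # clamped clearance term
    velocity = v / v_max                         # prefer faster admissible motion
    return alpha * heading + beta * dist + gamma * velocity

def best_command(candidates, params, v_max):
    """Pick the admissible (v, w, heading_err, obstacle_dist) with the highest score."""
    return max(candidates, key=lambda c: dwa_score(c[0], c[2], c[3], params, v_max))
```

Changing (alpha, beta, gamma) can flip which candidate wins, which is why a fixed hand-tuned setting struggles across environments and motivates predicting the weights from sensor data.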
|
|
14:45-15:00, Paper TuCT19.4 | |
>Dynamically Constrained Motion Planning Networks for Non-Holonomic Robots |
> Video Attachment
|
|
Johnson, Jacob | UCSD |
Linjun, Li | University of California San Diego |
Liu, Fei | University of California San Diego |
Qureshi, Ahmed Hussain | University of California San Diego |
Yip, Michael C. | University of California, San Diego |
Keywords: Nonholonomic Motion Planning, Motion and Path Planning, Model Learning for Control
Abstract: Reliable real-time planning for robots is essential in today's rapidly expanding automated ecosystem. In such environments, traditional methods that plan by relaxing constraints become unreliable or slow down for kinematically constrained robots. This paper describes Dynamic Motion Planning Networks (Dynamic MPNet), an extension of Motion Planning Networks to non-holonomic robots that addresses the challenge of real-time motion planning using a neural planning approach. We propose modifications to the training and planning networks that make real-time planning possible while improving the data efficiency of training and the generalizability of the trained models. We evaluate our model in simulation on planning tasks for a non-holonomic robot. We also demonstrate experimental results for an indoor navigation task using a Dubins car.
|
|
15:00-15:15, Paper TuCT19.5 | |
>Defensive Escort Teams for Navigation in Crowds Via Multi-Agent Deep Reinforcement Learning |
> Video Attachment
|
|
Hasan, Yazied | University of New Mexico |
Garg, Arpit | University of New Mexico |
Sugaya, Satomi | The University of New Mexico |
Tapia, Lydia | University of New Mexico |
Keywords: Motion and Path Planning
Abstract: Coordinated defensive escorts can aid a navigating payload by positioning themselves strategically in order to maintain the safety of the payload from obstacles. In this paper, we present a novel, end-to-end solution for coordinating an escort team to protect high-value payloads in a space crowded with interacting obstacles. Our solution employs deep reinforcement learning to train a team of escorts to maintain payload safety while navigating alongside the payload. The escorts utilize a trained centralized policy in a distributed fashion (i.e., no explicit communication between the escorts), relying only on range-limited positional information about the environment. Given this observation, escorts automatically prioritize obstacles to intercept and determine where to intercept them, using their repulsive interaction force to actively manipulate the environment. Compared to a payload navigating with a state-of-the-art algorithm for obstacle avoidance, our defensive escort team increased navigation success by up to 83% over escorts in static formation, up to 69% over orbiting escorts, and up to 66% over an analytic method providing guarantees in crowded environments. We also show that our learned solution is robust to several adaptations of the scenario, including: a changing number of escorts in the team, changing obstacle density, unexpected obstacle behavior, changes in payload conformation, and added sensor noise.
|
|
15:15-15:30, Paper TuCT19.6 | |
>DeepMNavigate: Deep Reinforced Multi-Robot Navigation Unifying Local & Global Collision Avoidance |
> Video Attachment
|
|
Tan, Qingyang | University of Maryland at College Park |
Fan, Tingxiang | The University of Hong Kong |
Pan, Jia | University of Hong Kong |
Manocha, Dinesh | University of Maryland |
Keywords: Motion and Path Planning, Collision Avoidance
Abstract: We present a novel algorithm (DeepMNavigate) for global multi-agent navigation in dense scenarios using deep reinforcement learning (DRL). Our approach uses local and global information for each robot from motion information maps. We use a three-layer CNN that takes these maps as input to generate a suitable action to drive each robot to its goal position. Our approach is general, learns an optimal policy using a multi-scenario, multi-state training algorithm, and can directly handle raw sensor measurements for local observations. We demonstrate the performance on dense, complex benchmarks with narrow passages and environments with tens of agents. We highlight the algorithm's benefits over prior learning methods and geometric decentralized algorithms in complex scenarios.
|
|
TuCT20 |
Room T20 |
Planning and Safety |
Regular session |
Chair: Dames, Philip | Temple University |
Co-Chair: Tron, Roberto | Boston University |
|
14:00-14:15, Paper TuCT20.1 | |
>Model-Adaptive High-Speed Collision Detection for Serial-Chain Robot Manipulators |
|
Baradaran Birjandi, Seyed Ali | Technical University of Munich |
Haddadin, Sami | Technical University of Munich |
Keywords: Collision Avoidance, Sensor Fusion, Robot Safety
Abstract: In this paper, we introduce a novel regressor-based observer method to adapt an initially erroneous dynamics model of serial manipulators for improving collision detection sensitivity. Specifically, we assume that the robot joint velocity and acceleration can be accurately estimated via our previously introduced nonlinear estimator [1], [2], which fuses inertial measurement unit (IMU) measurements with the robot's proprioceptive sensing. Given the relatively high bandwidth of today's IMUs compared to standard robot sensing, the estimated kinematic joint variables support the prompt detection of unpredictable collisions. Compared to the state of the art, our algorithm notably improves collision detection accuracy and sensitivity, surpassing traditional methods such as the well-established momentum-based scheme. We support our claims and demonstrate the performance of our algorithm on a 7-degree-of-freedom (DoF) robot manipulator, both in simulation and in experiments.
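The momentum-based scheme that this abstract uses as its baseline detects collisions through a generalized-momentum observer: a residual r tracks the external torque without needing acceleration measurements. The sketch below is a generic single-joint illustration of that baseline (a point mass m with dynamics m*qddot = tau + tau_ext), not the paper's adaptive method; the gain ko and the discretization are assumptions:

```python
def momentum_residual(m, qdots, taus, dt, ko=50.0):
    """Discrete generalized-momentum observer for one joint.

    r follows  rdot = ko * (pdot - tau - r),  so r converges to the
    external torque tau_ext; thresholding |r| signals a collision.
    qdots, taus: sampled joint velocity and commanded torque.
    """
    r, integ = 0.0, 0.0
    p0 = m * qdots[0]                      # initial generalized momentum
    residuals = []
    for qd, tau in zip(qdots, taus):
        integ += (tau + r) * dt            # integral of modeled momentum flow
        r = ko * (m * qd - p0 - integ)     # residual = observed minus modeled momentum
        residuals.append(r)
    return residuals
```

With an accurate mass m, the residual settles on the true external torque; an erroneous model biases r, which is the sensitivity limitation the paper's model adaptation targets.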
|
|
14:15-14:30, Paper TuCT20.2 | |
>Collision-Free Distributed Multi-Target Tracking Using Teams of Mobile Robots with Localization Uncertainty |
> Video Attachment
|
|
Chen, Jun | Temple University |
Dames, Philip | Temple University |
Keywords: Distributed Robot Systems, Cooperating Robots
Abstract: Accurately tracking dynamic targets relies on robots accounting for uncertainties in their own states to share information and maintain safety. The problem becomes even more challenging when there is an unknown and time-varying number of targets in the environment. In this paper we address this problem by introducing four new distributed algorithms that allow large teams of robots to: i) run the prediction and ii) update steps of a distributed recursive Bayesian multi-target tracker, iii) determine the set of local neighbors that must exchange data, and iv) exchange data in a consistent manner. All of these algorithms account for a bounded level of localization uncertainty in the robots by leveraging our recent introduction of the convex uncertainty Voronoi (CUV) diagram, which extends the traditional Voronoi diagram to account for localization uncertainty. The CUV diagram introduces a tessellation over the environment, which we use in this work both to distribute the multi-target tracker and to make control decisions about where to search next. We examine the efficacy of our method via a series of simulations and compare them to our previous work which assumed perfect localization.
|
|
14:30-14:45, Paper TuCT20.3 | |
>Augmenting Control Policies with Motion Planning for Robust and Safe Multi-Robot Navigation |
|
Pan, Tianyang | Rice University |
Verginis, Christos | Electrical Engineering, KTH Royal Institute of Technology |
Wells, Andrew | Rice University |
Kavraki, Lydia | Rice University |
Dimarogonas, Dimos V. | KTH Royal Institute of Technology |
Keywords: Path Planning for Multiple Mobile Robots or Agents, Multi-Robot Systems, Motion and Path Planning
Abstract: This work proposes a novel method of incorporating calls to a motion planner inside a potential field control policy for safe multi-robot navigation with uncertain dynamics. The proposed framework can handle more general scenes than the control policy and has low computational costs. Our work is robust to uncertain dynamics and quickly finds high-quality paths in scenarios generated from real-world floor plans. In the proposed approach, we attempt to follow the control policy as much as possible, and use calls to the motion planner to escape local minima. Trajectories returned from the motion planner are followed using a path-following controller guaranteeing robustness. We demonstrate the utility of our approach with experiments based on floor plans gathered from real buildings.
|
|
14:45-15:00, Paper TuCT20.4 | |
>Lloyd-Based Approach for Robots Navigation in Human-Shared Environments |
> Video Attachment
|
|
Boldrer, Manuel | University of Trento |
Palopoli, Luigi | University of Trento |
Fontanelli, Daniele | University of Trento |
Keywords: Multi-Robot Systems, Autonomous Vehicle Navigation, Distributed Robot Systems
Abstract: We present a Lloyd-based navigation solution for robots that are required to move in a dynamic environment, where static obstacles (e.g., furniture, parked cars) and unpredicted moving obstacles (e.g., humans, other robots) have to be detected and avoided on the fly. The algorithm can be computed in real time and falls into the category of reactive methods. Its simplicity, the small amount of information required to synthesize the control inputs, and the low number of parameters to be tuned are the highlights of this method. Moreover, we propose an extension to the multi-agent case that deals with cohesion and cooperation between agents. The effectiveness of the method is demonstrated through extensive simulations and, for single-agent navigation in a human-shared environment, through experiments on a unicycle-like robot.
|
|
15:00-15:15, Paper TuCT20.5 | |
>Multi-Agent Path Planning under Observation Schedule Constraints |
> Video Attachment
|
|
Yang, Ziqi | Boston University |
Tron, Roberto | Boston University |
Keywords: Path Planning for Multiple Mobile Robots or Agents, Optimization and Optimal Control, Robot Safety
Abstract: We consider the problem of enhanced security of multi-robot systems to prevent cyber-attackers from taking control of one or more robots in the group. We build upon a recently proposed solution that utilizes the physical measurement capabilities of the robots to perform introspection, i.e., detect the malicious actions of compromised agents using other members of the group. In particular, the proposed solution finds multi-agent paths on discrete spaces combined with a set of mutual observations at specific locations to detect robots with significant deviations from the preordained routes. In this paper, we develop a planner that works on continuous configuration spaces while also taking into account similar spatio-temporal constraints. In addition, the planner allows for more general tasks that can be formulated as arbitrary smooth cost functions to be specified. The combination of constraints and objectives considered in this paper are not easily handled by popular path planning algorithms (e.g., sampling-based methods), thus we propose a method based on the Alternating Direction Method of Multipliers (ADMM). ADMM is capable of finding locally optimal solutions to problems involving different kinds of objectives and non-convex temporal and spatial constraints, and allows for infeasible initialization. We benchmark our proposed method on multi-agent map exploration with minimum-uncertainty cost function, obstacles, and observation schedule constraints.
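ADMM, the optimization method this abstract builds on, alternates between two easy proximal subproblems and a dual update, and tolerates infeasible initialization. The sketch below is a minimal, generic scaled-form ADMM on a toy problem (box-constrained least squares via the splitting x = z), purely to illustrate the iteration structure, not the paper's multi-agent planner:

```python
import numpy as np

def admm_box_ls(a, lo, hi, rho=1.0, iters=100):
    """ADMM for  min 0.5*||x - a||^2  s.t.  lo <= x <= hi,
    split as f(x) = 0.5*||x - a||^2, g(z) = indicator of the box, x = z."""
    a = np.asarray(a, float)
    x = np.zeros_like(a)
    z = np.zeros_like(a)
    u = np.zeros_like(a)                        # scaled dual variable
    for _ in range(iters):
        x = (a + rho * (z - u)) / (1.0 + rho)   # x-update: prox of f
        z = np.clip(x + u, lo, hi)              # z-update: projection onto the box
        u = u + x - z                           # dual update on the residual x - z
    return z
```

In the paper's setting, f and g would instead encode the smooth planning cost and the non-convex spatio-temporal observation constraints, with the same alternating structure.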
|
|
15:15-15:30, Paper TuCT20.6 | |
>Game-Theoretic Planning for Risk-Aware Interactive Agents |
|
Wang, Mingyu | Stanford University |
Mehr, Negar | Stanford University |
Gaidon, Adrien | Toyota Research Institute |
Schwager, Mac | Stanford University |
Keywords: Motion and Path Planning, Optimization and Optimal Control
Abstract: Modeling the stochastic behavior of interacting agents is key for safe motion planning. In this paper, we study the interaction of risk-aware agents in a game-theoretical framework. Under the entropic risk measure, we derive an iterative algorithm for approximating the intractable feedback Nash equilibria of a risk-sensitive dynamic game. We use an iteratively linearized approximation of the system dynamics and a quadratic approximation of the cost function in solving a backward recursion for finding feedback Nash equilibria. In this respect, the algorithm shares a similar structure with DDP and iLQR methods. We conduct experiments in a set of challenging scenarios such as roundabouts. Compared to ignoring the game interaction or the risk sensitivity, we show that our risk-sensitive game-theoretic framework leads to more time-efficient, intuitive, and safe behaviors when facing underlying risks and uncertainty.
|
|
15:15-15:30, Paper TuCT20.7 | |
>Energy Autonomy for Resource-Constrained Multi Robot Missions |
> Video Attachment
|
|
Fouad, Hassan | Computer Engineering Dept., École Polytechnique De Montréal, Can |
Beltrame, Giovanni | Ecole Polytechnique De Montreal |
Keywords: Multi-Robot Systems, Planning, Scheduling and Coordination, Robot Safety
Abstract: One of the key factors for extended autonomy and resilience of multi-robot systems, especially when robots operate on batteries, is their ability to maintain energy sufficiency by recharging when needed. In situations with limited access to charging facilities, robots need to be able to share and coordinate recharging activities, with guarantees that no robot will run out of energy. In this work, we present an approach based on Control Barrier Functions (CBFs) to enforce both energy sufficiency (assuring that no robot runs out of battery) and coordination constraints (guaranteeing mutually exclusive use of an available charging station), all in a mission-agnostic fashion. Moreover, we investigate the system capacity in terms of the relation between feasible requirements of charging cycles and individual robot properties. We show simulation results using a physics-based simulator, as well as real-robot experiments, to demonstrate the effectiveness of the proposed approach.
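A control barrier function enforces a constraint by filtering a nominal input so that h(x) >= 0 is forward invariant, i.e., hdot >= -alpha*h. The sketch below is a scalar toy version for the energy-sufficiency case in this abstract (h = state of charge minus a reserve); the drain model and gain alpha are illustrative assumptions, not the paper's formulation:

```python
def cbf_safe_input(u_nom, soc, soc_min, drain_rate, alpha=1.0):
    """Scalar CBF filter on battery state of charge.

    Barrier: h(soc) = soc - soc_min, required to satisfy hdot >= -alpha*h.
    Model: u in [0, 1] scales task effort and hdot = -drain_rate * u,
    so the CBF condition gives u <= alpha * h / drain_rate.
    Returns the least-restrictive admissible effort.
    """
    h = soc - soc_min
    u_max = alpha * h / drain_rate
    return max(0.0, min(u_nom, u_max))
```

As the battery approaches the reserve level, the filter smoothly throttles task effort toward zero, guaranteeing the robot never crosses soc_min; the paper layers coordination constraints of the same CBF form on top.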
|
|
TuCT21 |
Room T21 |
Planning in Challenging Environments |
Regular session |
Chair: Neumann, Gerhard | Karlsruhe Institute of Technology |
Co-Chair: Isele, David | University of Pennsylvania, Honda Research Institute USA |
|
14:00-14:15, Paper TuCT21.1 | |
>Improving Autonomous Rover Guidance in Round-Trip Missions Using Dynamic Cost Map |
|
Paz Delgado, Gonzalo Jesús | University of Málaga |
Azkarate, Martin | European Space Agency (ESA) - ESTEC |
Sanchez, Ricardo | University of Malaga |
Perez-del-Pulgar, Carlos | Universidad De Málaga |
Gerdes, Levin | ESA/ESTEC |
García-Cerezo, Alfonso | University of Malaga |
Keywords: Motion and Path Planning, Space Robotics and Automation, Autonomous Vehicle Navigation
Abstract: Autonomous round-trip missions have become an interesting topic since ESA and NASA agreed to bring rock samples back from Mars. This work proposes a new method to improve autonomous rover guidance for this kind of mission. It focuses on dynamically updated cost maps that are used to plan the rover path for a round trip. The main advantage of the proposed method is its use of the information gathered by the rover during the traverse. The cost map is updated in two ways: first, encountered obstacles are included within the cost map; second, terrain features are used to assign different costs to different patches of a segmented terrain. The generated cost map is then used to plan the return path based on the information previously obtained by the rover. To validate the proposed method, it has been implemented in a simulation environment using the software-in-the-loop concept. Additionally, a field test in a Mars-like environment has been carried out. Results show that the return traverse is improved by means of the proposed method.
|
|
14:15-14:30, Paper TuCT21.2 | |
>A Comprehensive Trajectory Planner for a Person-Following ATV |
> Video Attachment
|
|
Febbo, Huckleberry | University of Michigan, Honda Research Institute USA |
Huang, Jiawei | Honda Research Institute USA |
Isele, David | University of Pennsylvania, Honda Research Institute USA |
Keywords: Autonomous Vehicle Navigation, Motion and Path Planning, Motion Control
Abstract: This paper presents a trajectory planning algorithm for person following that is more comprehensive than existing algorithms. This algorithm is tailored for a front-wheel-steered vehicle and is designed to follow a person while avoiding collisions with both static and moving obstacles, simultaneously optimizing speed and steering, and minimizing control effort. This algorithm uses nonlinear model predictive control, where the underlying trajectory optimization problem is approximated using a simultaneous method. Results collected in an unknown environment show that the proposed planning algorithm works well with a perception algorithm to follow a person in uneven grass near obstacles and over ditches and curbs, and on asphalt over train tracks and near buildings and cars. Overall, the results indicate that the proposed algorithm can safely follow a person in unknown, dynamic environments.
|
|
14:30-14:45, Paper TuCT21.3 | |
>Energy-Efficient Motion Planning for Multi-Modal Hybrid Locomotion |
> Video Attachment
|
|
Suh, Hyung Ju Terry | Massachusetts Institute of Technology |
Xiong, Xiaobin | California Institute of Technology |
Singletary, Andrew | California Institute of Technology |
Ames, Aaron | Caltech |
Burdick, Joel | California Institute of Technology |
Keywords: Motion and Path Planning, Hybrid Logical/Dynamical Planning and Verification, Optimization and Optimal Control
Abstract: Hybrid locomotion, which combines multiple modalities of locomotion within a single robot, enables robots to carry out complex tasks in diverse environments. This paper presents a novel method for planning multi-modal locomotion trajectories using approximate dynamic programming. We formulate this problem as a shortest-path search through a state-space graph, where the edge cost is assigned as optimal transport cost along each segment. This cost is approximated from batches of offline trajectory optimizations, which allows the complex effects of vehicle under-actuation and dynamic constraints to be approximately captured in a tractable way. Our method is illustrated on a hybrid double-integrator, an amphibious robot, and a flying-driving drone, showing the practicality of the approach.
|
|
14:45-15:00, Paper TuCT21.4 | |
>Navigation on the Line: Traversability Analysis and Path Planning for Extreme-Terrain Rappelling Rovers |
> Video Attachment
|
|
Paton, Michael | Jet Propulsion Laboratory |
Strub, Marlin Polo | University of Oxford |
Brown, Travis | NASA Jet Propulsion Laboratory, California Institute of Technolo |
Greene, Rebecca J. | Johns Hopkins University |
Lizewski, Jacob | Georgia Tech |
Patel, Vandan | Georgia Institute of Technology |
Gammell, Jonathan | University of Oxford |
Nesnas, Issa | Jet Propulsion Laboratory |
Keywords: Motion and Path Planning, Space Robotics and Automation, Field Robots
Abstract: Many areas of scientific interest in planetary exploration, such as lunar pits, icy-moon crevasses, and Martian craters, are inaccessible to current wheeled rovers. Rappelling rovers can safely traverse these steep surfaces, but require techniques to navigate their complex terrain. This dynamic navigation is inherently time-critical, and communication constraints (e.g., delays and small communication windows) will require planetary systems to have some autonomy. Autonomous navigation for Martian rovers is well studied on moderately sloped and locally planar surfaces, but these methods do not readily transfer to tethered systems in non-planar 3D environments. Rappelling rovers in these situations face additional challenges, including terrain-tether interaction and its effects on rover stability, path planning, and control. This paper presents novel traversability analysis and path planning algorithms for rappelling rovers operating on steep terrains that account for terrain-tether interaction and the unique stability and reachability constraints of a rappelling system. The system is evaluated with a series of simulations and an analogue mission. In simulation, the planner was shown to reliably find safe paths down a 55-degree slope when a stable tether-terrain configuration exists and never recommended an unsafe path when one did not. In a planetary analogue mission, elements of the system were used to autonomously navigate Axel, a JPL rappelling rover, down a 30-degree slope with 95% autonomy by distance traveled over 46 meters.
|
|
15:00-15:15, Paper TuCT21.5 | |
>Probabilistic Approach to Physical Object Disentangling |
|
Pajarinen, Joni | Tampere University |
Arenz, Oleg | TU Darmstadt |
Peters, Jan | Technische Universität Darmstadt |
Neumann, Gerhard | Karlsruhe Institute of Technology |
Keywords: Motion and Path Planning, Robotics in Hazardous Fields, Manipulation Planning
Abstract: Physically disentangling entangled objects from each other is a problem encountered in waste segregation or in any task that requires disassembly of structures. Often there are no object models, and, especially with cluttered irregularly shaped objects, the robot cannot create a model of the scene due to occlusion. One of our key insights is that, based on previous sensory input, we are only interested in moving an object out of the entanglement, around obstacles. That is, we only need to know where the robot can successfully move in order to plan the disentangling. Due to this uncertainty, we integrate information about blocked movements into a probability map. The map defines the probability of the robot successfully moving to a specific configuration. Using the failure probability of a sequence of movements as the cost, we can then plan and execute disentangling iteratively. Since our approach circumvents only obstacles that it already knows about, new movements will yield information about unknown obstacles that block movement, until the robot has learned to circumvent all obstacles and disentangling succeeds. In the experiments, we use a special probabilistic version of the Rapidly Exploring Random Tree (RRT) algorithm for planning and demonstrate successful disentanglement of objects in both 2-D and 3-D simulation, and on a KUKA LBR 7-DOF robot. Moreover, our approach outperforms baseline methods.
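The failure-probability cost described in the abstract admits a compact sketch (hypothetical helper names; per-movement success probabilities are assumed independent, so minimizing the summed log-cost maximizes the probability that the whole movement sequence succeeds):

```python
import math

def sequence_failure_probability(edge_success_probs):
    """Failure probability of a movement sequence, assuming each
    movement succeeds independently with the given probability."""
    p_success = 1.0
    for p in edge_success_probs:
        p_success *= p
    return 1.0 - p_success

def edge_cost(p_success):
    """Additive edge cost: a planner that minimizes the sum of these
    costs along a branch maximizes the sequence success probability."""
    return -math.log(max(p_success, 1e-12))
```

An RRT-style planner that accumulates `edge_cost` along each branch is then implicitly minimizing the failure probability of the resulting movement sequence.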
|
|
TuCT22 |
Room T22 |
Reactive and Sensor-Based Planning I |
Regular session |
Chair: Karydis, Konstantinos | University of California, Riverside |
Co-Chair: Tanner, Herbert G. | University of Delaware |
|
14:00-14:15, Paper TuCT22.1 | |
>PC-NBV: A Point Cloud Based Deep Network for Efficient Next Best View Planning |
> Video Attachment
|
|
Zeng, Rui | Tsinghua University |
Zhao, Wang | Tsinghua University |
Liu, Yong-Jin | Tsinghua University |
Keywords: Reactive and Sensor-Based Planning, Novel Deep Learning Methods
Abstract: The Next Best View (NBV) problem is important in active robotic reconstruction. It enables the robot system to perform scanning actions in a reasonable view sequence and fulfil the reconstruction task in an effective way. Previous works mainly follow volumetric methods, which convert the point cloud information collected by sensors into a voxel representation space and evaluate candidate views through ray casting simulations to pick the NBV. However, the process of volumetric data transformation and ray casting is often time-consuming. To address this issue, in this paper, we propose a point cloud based deep neural network called PC-NBV to achieve efficient view planning without these computationally expensive operations. The PC-NBV network takes the raw point cloud data and current view selection states as input, and then directly predicts the information gain of all candidate views. By avoiding costly data transformation and ray casting, and utilizing a powerful neural network to learn structure priors from point clouds, our method can achieve efficient and effective NBV planning. Experiments on multiple datasets show the proposed method outperforms state-of-the-art NBV methods, giving better views for the robot system with much less inference time. Furthermore, we demonstrate the robustness of our method against noise and its ability to extend to multi-view systems, making it more applicable to various scenarios.
|
|
14:15-14:30, Paper TuCT22.2 | |
>Reactive Receding Horizon Planning and Control for Quadrotors with Limited On-Board Sensing |
> Video Attachment
|
|
Yadav, Indrajeet | University of Delaware |
Tanner, Herbert G. | University of Delaware |
Keywords: Reactive and Sensor-Based Planning, Collision Avoidance, Autonomous Vehicle Navigation
Abstract: The paper presents a receding horizon planning strategy for a quadrotor-type MAV to navigate through an unknown cluttered environment at high speeds. Given a lightweight on-board short-range sensor that generates point clouds within a narrow field of view (FoV), the proposed strategy generates safe and dynamically feasible trajectories within the FoV of the sensor, which the MAV uses to navigate through the workspace without reliance on any global planner or prior information about the environment. The effectiveness of the planner has been demonstrated in both indoor and outdoor tests featuring speeds of up to 3.5 m/s. It is shown how, with minor adjustments, the local motion planner can be utilized for interception and chasing of a moving target, and evidence to this effect is provided in the form of numerical (Gazebo) simulations. Given the absence of any global information about the robot's workspace, the extent to which the local planner can provide convergence guarantees is limited; when complemented by a global planner and/or target tracker, the reported lower-level, sensor-driven reactive motion control strategy completes the autonomous MAV navigation stack, enabling navigation in dynamic, uncertain, and partially-known environments with guaranteed convergence to any static or dynamic target.
|
|
14:30-14:45, Paper TuCT22.3 | |
>Motion Planning for Collision-Resilient Mobile Robots in Obstacle-Cluttered Unknown Environments with Risk Reward Trade-Offs |
|
Lu, Zhouyu | University of California, Riverside |
Liu, Zhichao | University of California, Riverside |
Correa, Gustavo | University of California Riverside |
Karydis, Konstantinos | University of California, Riverside |
Keywords: Reactive and Sensor-Based Planning, Motion and Path Planning, Collision Avoidance
Abstract: Collision avoidance in unknown obstacle-cluttered environments may not always be feasible. This paper focuses on an emerging paradigm shift in which potential collisions with the environment can be harnessed instead of being avoided altogether. To this end, we introduce a new sampling-based online planning algorithm that can explicitly handle the risk of colliding with the environment and can switch between collision avoidance and collision exploitation. Central to the planner's capabilities is a novel joint optimization function that evaluates the effect of possible collisions using a reflection model. This way, the planner can make deliberate decisions to collide with the environment if such a collision is expected to help the robot make progress toward its goal. To make the algorithm online, we present a state expansion pruning technique that significantly reduces the search space while ensuring completeness. The proposed algorithm is evaluated experimentally with a holonomic wheeled robot, built in-house, that can withstand collisions. We perform an extensive parametric study to investigate trade-offs between (user-tuned) levels of risk, deliberate collision decision making, and trajectory statistics such as time to reach the goal and path length.
|
|
14:45-15:00, Paper TuCT22.4 | |
>Reactive Semantic Planning in Unexplored Semantic Environments Using Deep Perceptual Feedback |
> Video Attachment
|
|
Vasilopoulos, Vasileios | University of Pennsylvania |
Pavlakos, Georgios | University of Pennsylvania |
Bowman, Sean | University of Pennsylvania |
Caporale, J. Diego | University of Pennsylvania |
Daniilidis, Kostas | University of Pennsylvania |
Pappas, George J. | University of Pennsylvania |
Koditschek, Daniel | University of Pennsylvania |
Keywords: Reactive and Sensor-Based Planning, Motion and Path Planning, Semantic Scene Understanding
Abstract: This paper presents a reactive planning system that enriches the topological representation of an environment with a tightly integrated semantic representation, achieved by incorporating and exploiting advances in deep perceptual learning and probabilistic semantic reasoning. Our architecture combines object detection with semantic SLAM, affording robust, reactive logical as well as geometric planning in unexplored environments. Moreover, by incorporating a human mesh estimation algorithm, our system is capable of reacting and responding in real time to semantically labeled human motions and gestures. New formal results allow tracking of suitably non-adversarial moving targets, while maintaining the same collision avoidance guarantees. We demonstrate the empirical utility of the proposed control architecture with a numerical study, including comparisons with a state-of-the-art dynamic replanning algorithm, and with a physical implementation on both wheeled and legged platforms in different settings with both geometric and semantic goals.
|
|
TuCT23 |
Room T23 |
Reactive and Sensor-Based Planning II |
Regular session |
Chair: Srivastava, Vaibhav | Michigan State University |
Co-Chair: Lauri, Mikko | University of Hamburg |
|
14:00-14:15, Paper TuCT23.1 | |
>Localization Uncertainty-Driven Adaptive Framework for Controlling Ground Vehicle Robots |
> Video Attachment
|
|
Kent, Daniel | Michigan State University |
McKinley, Philip | Michigan State University |
Radha, Hayder | Michigan State University |
Keywords: Reactive and Sensor-Based Planning, Perception-Action Coupling, Autonomous Vehicle Navigation
Abstract: Modern localization techniques allow ground vehicle robots to determine their position with centimeter-level accuracy under nominal conditions, enabling them to utilize fixed maps to navigate their environments. However, when localization measurements become unavailable, position accuracy drops and uncertainty increases. While research and development on localization estimation seeks to reduce the severity of these outages, the question of what actions a robot should take under high localization uncertainty is still unresolved, and can vary on a platform-by-platform and mission-by-mission basis. In this paper, we exploit localization uncertainty measures to adapt system control parameters in real time. Offline, we optimize non-linear activation functions whose control parameters and relevant weights are trained and learned using an Evolutionary Algorithm (EA). Subsequently, in real time, we apply the optimized adaptation functions to the controller look-ahead distance and intermediate linear and angular velocity commands, which we identify as the most sensitive to localization error. Evolutionary runs are conducted in which a simulated target vehicle is tasked with following a randomly generated path while minimizing cross-track error, with time-varying localization uncertainty added. These runs produce situation-dependent weights for parameters of the adaptation functions, which are transferred to the physical platform, a 1:5-scale autonomous vehicle. In simulation, our system was able to reduce cross-track error, which in certain cases exceeds 250 centimeters on non-adapted systems, to below 15 centimeters on average using EA-derived weights and parameters applied to our proposed adaptation system. Evaluation on the physical platform demonstrates that without the adaptation module in place, the platform is unable to successfully follow the path; with the adaptation module, the platform automatically adjusts its velocity and look-ahead distance to compensate for localization uncertainty.
|
|
14:15-14:30, Paper TuCT23.2 | |
>Skill-Based Programming Framework for Composable Reactive Robot Behaviors |
> Video Attachment
|
|
Pane, Yudha Prawira | KU Leuven |
Aertbelien, Erwin | KU Leuven |
De Schutter, Joris | KU Leuven |
Decré, Wilm | Katholieke Universiteit Leuven |
Keywords: Reactive and Sensor-Based Planning, Software, Middleware and Programming Environments, Sensor-based Control
Abstract: This paper introduces a constraint-based skill framework for programming robot applications. Existing skill frameworks allow application developers to reuse skills and compose them sequentially or in parallel. However, they typically assume that the skills run independently and under nominal conditions. This limitation hinders their application in more involved and realistic scenarios, e.g., when the skills need to run synchronously and in the presence of disturbances. This paper addresses this problem in two steps. First, we revisit how constraint-based skills are modeled. We classify different skill types based on how their progress can be evaluated over time. Our skill model separates the constraints that impose task-consistency from the constraints that make the skills progress, i.e., reach their end conditions. Second, this paper introduces composition patterns that couple skills in parallel such that they are executed in a synchronized manner and are reactive to disturbances. The effectiveness of our framework is evaluated on a dual-arm robotic setup that performs an industrial assembly task in the presence of disturbances.
|
|
14:30-14:45, Paper TuCT23.3 | |
>Expedited Multi-Target Search with Guaranteed Performance Via Multi-Fidelity Gaussian Processes |
> Video Attachment
|
|
Wei, Lai | Michigan State University |
Tan, Xiaobo | Michigan State University |
Srivastava, Vaibhav | Michigan State University |
Keywords: Reactive and Sensor-Based Planning, Optimization and Optimal Control
Abstract: We consider a scenario in which an autonomous vehicle equipped with a downward-facing camera operates in a 3D environment and is tasked with searching for an unknown number of stationary targets on the 2D floor of the environment. The key challenge is to minimize the search time while ensuring a high detection accuracy. We model the sensing field using a multi-fidelity Gaussian process that systematically describes the sensing information available at different altitudes from the floor. Based on the sensing model, we design a novel algorithm called Expedited Multi-Target Search (EMTS) that (i) addresses the coverage-accuracy trade-off: sampling at locations farther from the floor provides a wider field of view but less accurate measurements, (ii) computes an occupancy map of the floor within a prescribed accuracy and quickly eliminates unoccupied regions from the search space, and (iii) travels efficiently to collect the required samples for target detection. We rigorously analyze the algorithm and establish formal guarantees on the target detection accuracy and the detection time. We illustrate the algorithm using a simulated multi-target search scenario.
|
|
14:45-15:00, Paper TuCT23.4 | |
>Multi-Sensor Next-Best-View Planning As Matroid-Constrained Submodular Maximization |
|
Lauri, Mikko | University of Hamburg |
Pajarinen, Joni | Tampere University |
Peters, Jan | Technische Universität Darmstadt |
Frintrop, Simone | University of Hamburg |
Keywords: Reactive and Sensor-Based Planning, RGB-D Perception, Multi-Robot Systems
Abstract: 3D scene models are useful in robotics for tasks such as path planning, object manipulation, and structural inspection. We consider the problem of creating a 3D model using depth images captured by a team of multiple robots. Each robot selects a viewpoint and captures a depth image from it, and the images are fused to update the scene model. The process is repeated until a scene model of desired quality is obtained. Next-best-view planning uses the current scene model to select the next viewpoints. The objective is to select viewpoints so that the images captured using them improve the quality of the scene model the most. In this paper, we address next-best-view planning for multiple depth cameras. We propose a utility function that scores sets of viewpoints and avoids overlap between multiple sensors. We show that multi-sensor next-best-view planning with this utility function is an instance of submodular maximization under a matroid constraint. This allows the planning problem to be solved by a polynomial-time greedy algorithm that yields a solution within a constant factor of the optimal. We evaluate the performance of our planning algorithm in simulated experiments with up to 8 sensors, and in real-world experiments using two robot arms equipped with depth cameras.
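The greedy routine implied by the matroid-constrained formulation can be sketched as follows (hypothetical names, not the authors' code; `utility` must be monotone submodular over viewpoint sets for the constant-factor guarantee to hold, and the partition matroid here assigns at most one viewpoint per sensor):

```python
def greedy_next_best_views(sensors, viewpoints, utility):
    """Greedy submodular maximization under a partition matroid:
    each sensor receives at most one viewpoint; at every step the
    (sensor, viewpoint) pair with the largest marginal gain is added."""
    selected = {}  # sensor -> assigned viewpoint
    while len(selected) < len(sensors):
        base = utility(list(selected.values()))
        best = None  # (gain, sensor, viewpoint)
        for s in sensors:
            if s in selected:
                continue  # matroid constraint: one viewpoint per sensor
            for v in viewpoints:
                gain = utility(list(selected.values()) + [v]) - base
                if best is None or gain > best[0]:
                    best = (gain, s, v)
        if best is None or best[0] <= 0:
            break  # no remaining assignment improves the model
        selected[best[1]] = best[2]
    return selected
```

With a coverage-style utility (e.g., the number of distinct surface patches seen by the selected views), this greedy assignment naturally avoids overlap between sensors, since a viewpoint covering already-seen patches has zero marginal gain.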
|
|
TuDT1 |
Room T1 |
Control and System Identification |
Regular session |
Chair: Vela, Patricio | Georgia Institute of Technology |
|
16:30-16:45, Paper TuDT1.1 | |
>Adversarial Generation of Informative Trajectories for Dynamics System Identification |
> Video Attachment
|
|
Jegorova, Marija | University of Edinburgh |
Smith, Joshua | University of Edinburgh |
Mistry, Michael | University of Edinburgh |
Hospedales, Timothy | University of Edinburgh |
Keywords: Novel Deep Learning Methods, Calibration and Identification
Abstract: Dynamic system identification approaches usually rely heavily on evolutionary and gradient-based optimisation techniques to produce optimal excitation trajectories for determining the physical parameters of robot platforms. Current optimisation techniques tend to generate single trajectories. This is expensive, and intractable for longer trajectories, thus limiting their efficacy for system identification. We propose to tackle this issue by using multiple shorter cyclic trajectories, which can be generated in parallel and subsequently combined to achieve the same effect as a longer trajectory. Crucially, we show how to scale this approach even further by increasing the generation speed and quality of the dataset through the use of generative adversarial network (GAN) based architectures to produce large databases of valid and diverse excitation trajectories. To the best of our knowledge, this is the first robotics work to explore system identification with multiple cyclic trajectories and to develop GAN-based techniques for scalably producing excitation trajectories that are diverse in both control-parameter and inertial-parameter spaces. We show that our approach dramatically accelerates trajectory optimisation, while simultaneously providing more accurate system identification than the conventional approach.
|
|
16:45-17:00, Paper TuDT1.2 | |
>Q-VAE for Disentangled Representation Learning and Latent Dynamical Systems |
|
Kobayashi, Taisuke | Nara Institute of Science and Technology |
Keywords: Novel Deep Learning Methods, Model Learning for Control, Representation Learning
Abstract: A variational autoencoder (VAE) derived from Tsallis statistics, called q-VAE, is proposed. In the proposed method, a standard VAE is employed to statistically extract the latent space hidden in sampled data, and this latent space helps make robots controllable in feasible computational time and cost. To improve the usefulness of the latent space, this paper focuses on disentangled representation learning, for which beta-VAE is the baseline. Starting from a Tsallis statistics perspective, a new lower bound for the proposed q-VAE is derived to maximize the likelihood of the sampled data, which can be considered an adaptive beta-VAE with deformed Kullback-Leibler divergence. To verify the benefits of the proposed q-VAE, a benchmark task to extract the latent space from the MNIST dataset was performed. The results demonstrate that the proposed q-VAE improved disentangled representation while maintaining the reconstruction accuracy of the data. In addition, it relaxes the independency condition between data points, which is demonstrated by learning the latent dynamics of nonlinear dynamical systems. Combined with disentangled representation, the proposed q-VAE achieves stable and accurate long-term state prediction from the initial state and the action sequence.
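The Tsallis deformation underlying q-VAE is built on the q-deformed logarithm, which recovers the ordinary natural logarithm as q approaches 1; a minimal numeric sketch (function name is hypothetical):

```python
import math

def q_log(x, q):
    """Tsallis q-deformed logarithm: ln_q(x) = (x**(1-q) - 1) / (1-q).
    As q -> 1 this converges to the natural logarithm ln(x)."""
    if abs(q - 1.0) < 1e-12:
        return math.log(x)  # the q -> 1 limit
    return (x ** (1.0 - q) - 1.0) / (1.0 - q)
```

Replacing ln with ln_q when deriving the evidence lower bound is what yields the deformed Kullback-Leibler term the abstract refers to.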
|
|
17:00-17:15, Paper TuDT1.3 | |
>Target Tracking Control of a Wheel-Less Snake Robot Based on a Supervised Multi-Layered SNN |
> Video Attachment
|
|
Jiang, Zhuangyi | Technical University of Munich |
Otto, Richard | Technical University of Munich |
Bing, Zhenshan | Technical University of Munich |
Huang, Kai | Sun Yat-Sen University |
Knoll, Alois | Tech. Univ. Muenchen TUM |
Keywords: Biologically-Inspired Robots, Perception-Action Coupling, Visual Tracking
Abstract: The wheel-less snake-like robot is a bio-inspired robot whose high degree of freedom makes autonomous locomotion control challenging. The use of a Spiking Neural Network (SNN), a biologically plausible artificial neural network, can help achieve autonomous locomotion behavior of snake robots in an energy-efficient manner. Approaches that use an SNN without hidden layers have been applied to the single-target tracking task. However, due to the complexity of 3D gaits on a wheel-less snake robot and the imprecision of pose control while in motion, they exhibit fluctuations that adversely affect their performance. In this work, we design two multi-layered SNNs with different topologies for a wheel-less snake robot to track a moving object. The visual signals obtained from a Dynamic Vision Sensor (DVS) are fed into the SNN to drive the locomotion controller. Furthermore, the Reward-modulated Spike-Timing-Dependent Plasticity (R-STDP) learning rule is utilized to train the SNN end-to-end. Compared to an SNN without hidden layers, the proposed multi-layered SNN with a separate hidden layer shows its advantage in terms of robustness.
|
|
17:15-17:30, Paper TuDT1.4 | |
>CAZSL: Zero-Shot Regression for Pushing Models by Generalizing through Context |
|
Zhang, Wenyu | Cornell University |
Seto, Skyler | Cornell University |
Jha, Devesh | Mitsubishi Electric Research Laboratories |
Keywords: Novel Deep Learning Methods, Model Learning for Control
Abstract: Learning accurate models of the physical world is required for many robotic manipulation tasks. However, during manipulation, robots are expected to interact with unknown workpieces, so building predictive models that can generalize over a number of these objects is highly desirable. In this paper, we study the problem of designing learning agents that can generalize their models of the physical world by building context-aware learning models. The purpose of these agents is to quickly adapt and/or generalize their notion of the physics of interaction in the real world based on certain features of the interacting objects that provide different contexts to the predictive models. With this motivation, we present context-aware zero-shot learning (CAZSL, pronounced as casual) models, an approach utilizing a Siamese network architecture, embedding-space masking and regularization based on context variables, which allows us to learn a model that can generalize to different parameters or features of the interacting objects. We test our proposed learning algorithm on the recently released Omnipush dataset, which allows testing of meta-learning capabilities using low-dimensional data. Code for CAZSL is available at https://www.merl.com/research/license/CAZSL.
|
|
17:30-17:45, Paper TuDT1.5 | |
>Synthesis of Control Barrier Functions Using a Supervised Machine Learning Approach |
|
Srinivasan, Mohit | Georgia Institute of Technology |
Dabholkar, Amogh | Birla Institute of Technology and Science, Pilani |
Coogan, Samuel | Georgia Tech |
Vela, Patricio | Georgia Institute of Technology |
Keywords: Collision Avoidance, Optimization and Optimal Control, Model Learning for Control
Abstract: Control barrier functions are mathematical constructs used to guarantee safety for robotic systems. When integrated as constraints in a quadratic programming optimization problem, instantaneous control synthesis with real-time performance demands can be achieved for robotics applications. Prevailing use has assumed full knowledge of the safety barrier functions; however, there are cases where the safe regions must be estimated online from sensor measurements. In these cases, the corresponding barrier function must be synthesized online. This paper describes a learning framework for estimating control barrier functions from sensor data. Doing so affords system operation in unknown state space regions without compromising safety. Here, a support vector machine classifier provides the barrier function specification as determined by sets of safe and unsafe states obtained from sensor measurements. Theoretical safety guarantees are provided. Experimental ROS-based simulation results for an omnidirectional robot equipped with LiDAR demonstrate safe operation.
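For the simplest case of a 1-DOF single integrator, the quadratic program the abstract mentions has a closed-form solution; the sketch below (hypothetical names, not the authors' implementation; in their framework, h and its gradient would come from the learned SVM decision function) shows the minimal-intervention safety filter:

```python
def cbf_safety_filter(u_nom, grad_h, h, gamma=1.0, eps=1e-9):
    """Closed-form solution of
        min (u - u_nom)**2   s.t.   grad_h * u >= -gamma * h
    for a single integrator x_dot = u, where h(x) >= 0 defines the safe set."""
    if abs(grad_h) < eps:
        return u_nom  # degenerate constraint; no correction possible
    if grad_h * u_nom >= -gamma * h:
        return u_nom  # nominal input already satisfies the barrier condition
    return -gamma * h / grad_h  # project onto the active constraint
```

When the nominal controller would push the state toward the boundary of the safe set, the filter returns the closest input that still satisfies the barrier condition; otherwise it passes the nominal input through unchanged.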
|
|
TuDT2 |
Room T2 |
Compliance and Impedance Control |
Regular session |
Chair: Yu, Ningbo | Nankai University |
Co-Chair: Chrysostomou, Dimitrios | Aalborg University |
|
16:30-16:45, Paper TuDT2.1 | |
>Variable Stiffness Control with Strict Frequency Domain Constraints for Physical Human-Robot Interaction |
|
Zou, Wulin | Hong Kong University of Science and Technology |
Duan, Pu | Xeno Dynamics Co., Ltd |
Chen, Yawen | The Hong Kong University of Science and Technology |
Yu, Ningbo | Nankai University |
Shi, Ling | The Hong Kong University of Science and Technology |
Keywords: Compliance and Impedance Control, Physical Human-Robot Interaction, Force Control
Abstract: Variable impedance control is advantageous for physical human-robot interaction to improve safety, adaptability and many other aspects. This paper presents a gain-scheduled variable stiffness control approach under strict frequency-domain constraints. Firstly, to reduce conservativeness, we characterize and constrain the impedance rendering, actuator saturation, disturbance/noise rejection and passivity requirements within their specific frequency bands. This relaxation makes sense because of the restricted frequency properties of interactive robots. Secondly, a gain-scheduled method is adopted to regulate the controller gains with respect to the desired stiffness. Thirdly, the scheduling function is parameterized via a nonsmooth optimization method. Finally, the proposed approach is validated by simulations, experiments and comparisons with a gain-fixed passivity-based PID method.
|
|
16:45-17:00, Paper TuDT2.2 | |
>An Energy-Based Approach for the Integration of Collaborative Redundant Robots in Restricted Work Environments |
> Video Attachment
|
|
Hjorth, Sebastian | Aalborg University |
Lachner, Johannes | University of Twente |
Stramigioli, Stefano | University of Twente |
Madsen, Ole | Aalborg University |
Chrysostomou, Dimitrios | Aalborg University |
Keywords: Compliance and Impedance Control, Energy and Environment-Aware Automation, Redundant Robots
Abstract: To this day, most robots are installed behind safety fences, separated from humans. New use-case scenarios demand collaborative robots, e.g., to assist humans with physically challenging tasks. These robots are mainly installed in work environments with limited space, e.g., existing production lines. This brings certain challenges for the control of such robots. The presented work addresses several of these challenges, namely: stable and safe behaviour in contact scenarios; avoidance of restricted workspace areas; and prevention of joint limits in automatic mode and manual guidance. The control approach in this paper extends an energy-aware impedance controller with repulsive potential fields in order to comply with Cartesian and joint constraints. The presented controller was verified for a KUKA LBR iiwa 7 R800 both in simulation and on the real robot.
|
|
17:00-17:15, Paper TuDT2.3 | |
>Passivity Filter for Variable Impedance Control |
> Video Attachment
|
|
Bednarczyk, Maciej | ICube Laboratory, University of Strasbourg, Strasbourg |
Omran, Hassan | ICube Laboratory, University of Strasbourg, Strasbourg |
Bayle, Bernard | University of Strasbourg |
Keywords: Compliance and Impedance Control
Abstract: While impedance control is one of the most commonly used strategies for robot interaction control, variable impedance control is a more recent development. Although designing impedance control with varying parameters increases the system's flexibility and dexterity, it remains a challenging issue, as it may result in a loss of passivity of the control system. This has an important impact on the stability, and therefore on the safety, of the interaction. In this paper, we propose methods to design passivity filters that guarantee passivity of the interaction. They aim at either checking whether a desired impedance profile is passive, or modifying it if required.
|
|
17:15-17:30, Paper TuDT2.4 | |
>Coupled Task-Space Admittance Controller Using Dual Quaternion Logarithmic Mapping |
> Video Attachment
|
|
de Paula Assis Fonseca, Mariana | Universidade Federal De Minas Gerais |
Adorno, Bruno Vilhena | Federal University of Minas Gerais (UFMG) |
Fraisse, Philippe | LIRMM |
Keywords: Compliance and Impedance Control
Abstract: This paper proposes a six-DOF task-space admittance controller using the dual quaternion logarithmic mapping, coupling the translation and rotation impedance in a single mathematical structure. The controller is designed based on the energy of the system, and the stiffness matrix is built to be consistent with the task geometry. Moreover, the formulation is free of topological obstruction, and we present a solution for the unwinding phenomenon based on a switched error function. The closed-loop system is composed of an inner motion control loop that ensures trajectory tracking of the end-effector pose, while an outer loop imposes a desired apparent impedance on the robot. Experiments executed on a KUKA LWR4+ robot with a force/torque sensor at the end-effector, together with statistical analyses, show better performance of the proposed controller over one of the main six-DOF controllers from the state of the art. More specifically, our controller presents an exponential decay in all situations and a task-error closed-loop behavior closer to the desired one, and it is free from topological obstruction and unwinding, while presenting a statistically equivalent control effort.
|
|
17:30-17:45, Paper TuDT2.5 | |
>Compliant Control and Compensation for a Compact Cable-Driven Robotic Manipulator |
> Video Attachment
|
|
Li, Jing | The University of Hong Kong |
Lam, James | University of Hong Kong |
Wang, Zheng | The University of Hong Kong |
Keywords: Medical Robots and Systems, Compliance and Impedance Control, Mechanism Design
Abstract: Cable-driven robotic manipulators are desirable for medical applications because of their form-factor flexibility after separating actuation from the distal end. However, when intended to work under tight spatial constraints, such as dental or other surgical applications, severe cable elongations raise control challenges ranging from inaccuracy to excessive compliance. It is critical to proactively regulate the system compliance in order to achieve both the compliant behavior needed to avoid tissue damage and the rigid behavior necessary for dental drilling. Both ends of this challenge have been extensively studied in the literature, with rigidity achieved by cable elongation compensation and virtual compliance regulated by impedance control. However, each approach has worked within its own turf, with little study of how to blend the two sources of compliance strategically. In this work, blending virtual compliance modulated by impedance control with transmission compliance induced by cable elasticity was investigated and demonstrated in a modified design of our proprietary dental manipulator. It was shown that direct application of impedance control in a cable-driven system would not simply increase compliance, and may cause instability. Instead, we proposed a compliance-blending framework with Cartesian-space superposition of cable motion compensation and impedance control, and validated its efficacy on the 6-DOF dental manipulator platform. Desirable results were achieved using highly common approaches in both impedance control and cable compensation, making the proposed approach applicable to a wide range of cable-driven robotic systems for impedance control.
|
|
17:45-18:00, Paper TuDT2.6 | |
>Learning Force Control for Contact-Rich Manipulation Tasks with Rigid Position-Controlled Robots |
> Video Attachment
|
|
Beltran-Hernandez, Cristian Camilo | Osaka University |
Petit, Damien | Osaka University |
Ramirez-Alpizar, Ixchel Georgina | National Institute of Advanced Industrial Science and Technology |
Nishi, Takayuki | Fujifilm Corporation |
Kikuchi, Shinichi | FUJIFILM Corporation |
Matsubara, Takamitsu | Nara Institute of Science and Technology |
Harada, Kensuke | Osaka University |
Keywords: Compliance and Impedance Control, Reinforcement Learning, Compliant Assembly
Abstract: Reinforcement Learning (RL) methods have proven successful in solving manipulation tasks autonomously. However, RL is still not widely adopted on real robotic systems because working with real hardware entails additional challenges, especially when using rigid position-controlled manipulators. These challenges include the need for a robust controller to avoid undesired behaviors that risk damaging the robot and its environment, and the need for constant supervision by a human operator. The main contributions of this work are threefold. First, we propose a learning-based force control framework combining RL techniques with traditional force control. Within this control scheme, we implement two conventional approaches to achieving force control with position-controlled robots: a modified parallel position/force control and an admittance control. Second, we empirically study both control schemes when used as the action space of the RL agent. Third, we develop a fail-safe mechanism for safely training an RL agent on manipulation tasks using a real rigid robot manipulator. The proposed methods are validated both in simulation and on a real UR3 e-series robotic arm.
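As a rough illustration of the admittance scheme mentioned in the abstract (one of the two conventional force-control options for position-controlled arms), here is a minimal discrete admittance loop. The inertia, damping, and stiffness gains and the time step are assumed for illustration only:

```python
# Minimal discrete admittance controller: converts a force error into a
# position offset that a rigid position-controlled arm can track.
# Gains (m, d, k) and dt are illustrative, not taken from the paper.

def admittance_step(x, xd, f_err, m=1.0, d=20.0, k=100.0, dt=0.002):
    """One Euler step of m*xdd + d*xd + k*x = f_err; returns (x, xd)."""
    xdd = (f_err - d * xd - k * x) / m
    xd = xd + xdd * dt
    x = x + xd * dt
    return x, xd

# Drive toward a 5 N contact force: the offset settles at f_err / k = 0.05.
x, xd = 0.0, 0.0
for _ in range(20000):
    x, xd = admittance_step(x, xd, f_err=5.0)
print(round(x, 3))  # settles near 0.05
```

In a learning setup of the kind the abstract describes, an RL policy would modulate the force reference or the admittance gains rather than command joint positions directly.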
|
|
TuDT3 |
Room T3 |
Control Applications |
Regular session |
Chair: Burdet, Etienne | Imperial College London |
Co-Chair: Valdastri, Pietro | University of Leeds |
|
16:30-16:45, Paper TuDT3.1 | |
>A Frequency-Dependent Impedance Controller for an Active-Macro/passive-Mini Robotic System |
|
Badeau, Nicolas | Université Laval |
Gosselin, Clement | Université Laval |
Keywords: Physical Human-Robot Interaction, Compliance and Impedance Control
Abstract: This paper presents an alternative impedance controller for a macro-mini robotic system in which the mini robot is unactuated. The approach is verified experimentally on a simple two-degree-of-freedom macro-mini robot. The dynamic analysis of the robot is first presented. Then, a standard impedance controller is derived and analysed. Such a controller is shown to be experimentally unstable when used with the present macro-mini mechanism. An alternative impedance controller is then proposed and analysed. While slightly more complex than the standard controller, it provides a more stable behaviour experimentally. The alternative controller also increases the effectiveness of the control by reducing the response to high-frequency motion such as hand tremor.
|
|
16:45-17:00, Paper TuDT3.2 | |
>Compliance Control of Cable-Suspended Aerial Manipulator Using Hierarchical Control Framework |
> Video Attachment
|
|
Gabellieri, Chiara | University of Pisa |
Sarkisov, Yuri | Skolkovo Institute of Science and Technology |
Coelho, Andre | German Aerospace Center (DLR) |
Pallottino, Lucia | Università Di Pisa |
Kondak, Konstantin | German Aerospace Center |
Kim, Min Jun | DLR |
Keywords: Aerial Systems: Applications, Compliance and Impedance Control
Abstract: Aerial robotic manipulation is an emergent trend that poses several challenges. To overcome some of these, the DLR cable-Suspended Aerial Manipulator (SAM) has been envisioned. SAM is composed of a fully actuated multi-rotor anchored to a main carrier through a cable, with a KUKA LWR attached below the multi-rotor. This work presents a control method that allows SAM, which is a holonomically constrained system, to perform physical interaction tasks using a hierarchical control framework. Within this framework, compliance control of the manipulator end-effector has the highest priority. The second priority is the control of the oscillations induced by, for example, the motion of the arm or physical contact with the environment. A third-priority task is related to the internal motion of the manipulator. The proposed approach is validated through simulations and experiments.
|
|
17:00-17:15, Paper TuDT3.3 | |
>Perceptive Model Predictive Control for Continuous Mobile Manipulation |
> Video Attachment
|
|
Pankert, Johannes | ETH Zuerich |
Hutter, Marco | ETH Zurich |
Keywords: Mobile Manipulation, Robotics in Construction, Whole-Body Motion Planning and Control
Abstract: A mobile robot needs to be aware of its environment to interact with it safely. We propose a receding-horizon control scheme for mobile manipulators that tracks task-space reference trajectories. It uses visual information to avoid obstacles and haptic sensing to control interaction forces. Additional constraints for mechanical stability and joint limits are met. The proposed method is faster than state-of-the-art sampling-based planners, is available as open source, and can be implemented on a broad class of robots. We validate the method both in simulation and through extensive hardware experiments with a multitude of mobile manipulation platforms. The resulting software package is released with this paper.
|
|
17:15-17:30, Paper TuDT3.4 | |
>Dual-Arm Control for Enhanced Magnetic Manipulation |
> Video Attachment
|
|
Pittiglio, Giovanni | University of Leeds |
Chandler, James Henry | University of Leeds |
Richter, Michiel | University of Twente |
Kalpathy Venkiteswaran, Venkatasubramanian | University of Twente |
Misra, Sarthak | University of Twente |
Valdastri, Pietro | University of Leeds |
Keywords: Medical Robots and Systems, Dual Arm Manipulation, Force Control
Abstract: Magnetically actuated soft robots have recently been identified for application in medicine, due to their potential to perform minimally invasive exploration of human cavities. Magnetic solutions permit further miniaturization compared to other actuation techniques, without loss of functionality. Our long-term goal is to propose a novel actuation method for magnetically actuated soft robots, based on dual-arm collaborative magnetic manipulation. A fundamental step in this direction is to show that this actuation method is capable of controlling up to 8 coincident, independent Degrees of Freedom (DOFs). In the present paper, we prove this concept by measuring the independent wrench components on a second pair of static permanent magnets by means of a high-resolution 6-axis load cell. The experiments show dominant activation of the desired DOFs, with a mean cross-activation error on the undesired DOFs ranging from 2% to 10%.
|
|
17:30-17:45, Paper TuDT3.5 | |
>Improving Tracking through Human-Robot Sensory Augmentation |
|
Li, Yanan | University of Sussex |
Eden, Jonathan | Imperial College London |
Carboni, Gerolamo | Imperial College London |
Burdet, Etienne | Imperial College London |
Keywords: Control Architectures and Programming, Physical Human-Robot Interaction, Human-Centered Robotics
Abstract: This paper introduces a sensory augmentation technique enabling a contact robot to understand its human user's control in real-time and integrate their reference trajectory information into its own sensory feedback to improve the tracking performance. The human's control is formulated as a feedback controller with unknown control gains and desired trajectory. An unscented Kalman filter is used to estimate first the control gains and then the desired trajectory. The estimated human's desired trajectory is used as augmented sensory information about the system and combined with the robot's measurement to estimate a reference trajectory. Simulations and an implementation on a robotic interface demonstrate that the reactive control can robustly identify the human user's control, and that the sensory augmentation improves the robot's tracking performance.
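The paper estimates the human's control with an unscented Kalman filter. As a much simpler linear stand-in, the sketch below recovers a constant scalar feedback gain from noisy force measurements with an ordinary scalar Kalman filter; the true gain, noise levels, and excitation signal are all invented for illustration:

```python
# Simplified linear analogue of the estimation step: recover a constant
# human feedback gain L from noisy measurements f = L * e. The paper uses
# an unscented Kalman filter; this scalar Kalman filter only illustrates
# the recursive estimate/update idea.

import random
random.seed(0)

L_true = 3.0            # unknown human gain to recover (made up)
L_hat, P = 0.0, 10.0    # initial estimate and covariance
R, Q = 0.05, 1e-6       # measurement and process noise variances

for _ in range(500):
    e = random.uniform(-1.0, 1.0)            # tracking error (excitation)
    f = L_true * e + random.gauss(0.0, 0.1)  # noisy force measurement
    P += Q                                   # predict (gain assumed constant)
    S = e * P * e + R                        # innovation variance
    K = P * e / S                            # Kalman gain
    L_hat += K * (f - L_hat * e)             # update estimate
    P *= (1.0 - K * e)                       # update covariance

print(round(L_hat, 1))  # close to 3.0
```

Once the human's gain and desired trajectory are estimated, they can be fused with the robot's own measurements, which is the sensory-augmentation step the abstract describes.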
|
|
TuDT4 |
Room T4 |
Control Architectures and Software |
Regular session |
Chair: Wrede, Sebastian | Bielefeld University |
Co-Chair: Inaba, Masayuki | The University of Tokyo |
|
16:30-16:45, Paper TuDT4.1 | |
>Formalization of Robot Skills with Descriptive and Operational Models |
|
Lesire, Charles | ONERA |
Doose, David | Onera - the French Aerospace Lab |
Grand, Christophe | ONERA |
Keywords: Control Architectures and Programming, Formal Methods in Robotics and Automation
Abstract: In this paper, we propose a formal language to specify robot skills, i.e., the elementary behaviours or functions provided by the robot platform in order to perform an autonomous mission. The advantage of the proposed language is that it integrates a wide range of elements and provides automatic translation both to operational models, used online to control skill execution, and to descriptive models, which allow reasoning about the expected skill execution and then applying automated planning or model checking that integrates the skill models.
|
|
16:45-17:00, Paper TuDT4.2 | |
>STORM: Screw Theory Toolbox for Robot Manipulator and Mechanisms |
> Video Attachment
|
|
Sagar, Keerthi | University of Genoa, Italy |
Ramadoss, Vishal | PMAR Robotics, University of Genova |
Zlatanov, Dimiter | University of Genoa |
Zoppi, Matteo | University of Genoa, Italy |
Keywords: Software, Middleware and Programming Environments, Kinematics, Mechanism Design
Abstract: Screw theory is a powerful mathematical tool for the kinematic analysis of mechanisms and has become a cornerstone of modern kinematics. Although screw theory has rooted itself as a core concept, there is a lack of generic software tools for visualizing the geometric pattern of the screw elements. This paper presents STORM, an educational and research-oriented framework for the analysis and visualization of reciprocal screw systems for a class of robot manipulators and mechanisms. This platform has been developed to bridge the gap between the theory and practice of applying screw theory to the constraint and motion analysis of robot mechanisms. STORM utilizes an abstracted software architecture that enables the user to study different structures of robot manipulators. The example case studies demonstrate the potential to perform analyses on mechanisms, visualize the screw entities, and conveniently add new models and analyses.
|
|
17:00-17:15, Paper TuDT4.3 | |
>Model-Based Specification of Control Architectures for Compliant Interaction with the Environment |
> Video Attachment
|
|
Wigand, Dennis | Bielefeld University |
Dehio, Niels | Karlsruhe Institute of Technology |
Wrede, Sebastian | Bielefeld University |
Keywords: Control Architectures and Programming, Compliance and Impedance Control, Contact Modeling
Abstract: In recent years, the need for manipulation tasks that require compliant interaction with the environment has risen in both the industrial and the service robotics domains. Since then, an increasing number of publications have used a model-driven approach to describe these tasks. High-level tasks and sequences of skills are coordinated to achieve a desired motion for, e.g., screwing, polishing, or snap mounting. Even though awareness of the environment, especially in terms of contact situations, is essential for successful task execution, it is too often neglected or considered insufficiently. In this paper, we present a model-based approach, using domain-specific languages (DSLs), that allows the explicit modeling of the environment in terms of contact situations. Decoupling the environment model from the skills fosters exchangeability and thus allows adaptation to different environmental situations. This way, an explicit but non-invasive link is established to the skills, enabling the environment model to provide a context that constrains the execution of the skills. Further, we present a synthesis from the modeled contact situations to a real-time component-based control architecture, which executes the skills subject to the active environmental context. A dual-arm yoga-mat rolling task is used to show the impact of the environment model on skill execution.
|
|
17:15-17:30, Paper TuDT4.4 | |
>Verification of System-Wide Safety Properties of ROS Applications |
|
Carvalho, Bruno | Universidade Do Minho |
Cunha, Alcino | Universidade Do Minho |
Macedo, Nuno | INESC TEC & University of Minho |
Santos, André | University of Minho |
Keywords: Formal Methods in Robotics and Automation, Software, Middleware and Programming Environments, Robot Safety
Abstract: Robots are currently deployed in safety-critical domains, but proper techniques to assess the functional safety of their software are yet to be adopted. This is particularly critical in ROS, where highly configurable robots are built by composing third-party modules. To promote adoption, we advocate the use of lightweight formal methods: automatic techniques with minimal user input and intuitive feedback. This paper proposes a technique to automatically verify system-wide safety properties of ROS-based applications at static time. It is based on the formalization of ROS architectural models and node behaviour in Electrum, over which system-wide specifications are subsequently model checked. To automate the analysis, it is deployed as a plug-in for HAROS, a framework for the assessment of ROS software quality aimed at the ROS community. The technique is evaluated on a real robot, AgRob V16, with positive results.
|
|
17:30-17:45, Paper TuDT4.5 | |
>Basic Implementation of FPGA-GPU Dual SoC Hybrid Architecture for Low-Latency Multi-DOF Robot Motion Control |
|
Nagamatsu, Yuya | The University of Tokyo |
Sugai, Fumihito | The University of Tokyo |
Okada, Kei | The University of Tokyo |
Inaba, Masayuki | The University of Tokyo |
Keywords: Control Architectures and Programming, Humanoid Robot Systems, Motion Control
Abstract: This paper describes the basic implementation of an embedded controller board based on a hybrid architecture equipped with an Intel FPGA SoC and an NVIDIA GPU SoC. An embedded distributed network involving motor drivers and other embedded boards is constructed with a low-latency optical transmission link. The central controller for high-level motion planning is connected via Gigabit Ethernet. The controller board with the hybrid architecture provides lower-latency feedback control performance. The computing performance of the FPGA SoC, the GPU SoC, and the central controller is evaluated via the computation time of matrix multiplication. The total feedback latency is then estimated to show the performance of the hybrid architecture.
|
|
17:45-18:00, Paper TuDT4.6 | |
>XBot Real-Time Software Framework for Robotics: From the Developer to the User Perspective (I) |
|
Muratore, Luca | Istituto Italiano Di Tecnologia |
Laurenzi, Arturo | Istituto Italiano Di Tecnologia |
Mingo Hoffman, Enrico | Fondazione Istituto Italiano Di Tecnologia |
Tsagarakis, Nikos | Istituto Italiano Di Tecnologia |
Keywords: Software, Middleware and Programming Environments, Control Architectures and Programming
Abstract: The spread of robotics into new application domains outside industrial workplace settings necessitates robotic systems that demonstrate functionalities far beyond those of classical industrial robotic machines. The implementation of these additional capabilities significantly increases the complexity of the robot hardware, software, and control components. As a result, the complexity of today's robots targeting new domains in partially unstructured environments has reached a noticeable extent; such robots typically consist of a large number of sensors, actuators, and processors executing numerous control modules that communicate through several diverse interfaces. These emerging applications involve complex tasks that also vary and have to be carried out within a partially unknown environment, requiring autonomy and adaptability, which further increase the intricacy of the system software architecture. To cope with these demands and the consequent complexity of robotic systems and their control, software infrastructures are needed that can be quickly and seamlessly adapted to these requirements while providing transparent and standardized interfaces to robotics developers and users. In this work we introduce the XBot software framework. The development of XBot was driven by the need to provide a software framework that abstracts the diverse variability of robotic hardware (effectively becoming a cross-robot-platform framework), provides deterministic hard Real-Time (RT) performance, incorporates interfaces that permit the integration of state-of-the-art robot control frameworks, and delivers enhanced flexibility through a plug-in architecture. The paper presents the insights of the XBot framework from the developer to the user perspective, discussing the details of the implementation mechanisms adopted as well as providing tangible examples of the use of the framework.
|
|
TuDT5 |
Room T5 |
Dynamics |
Regular session |
Chair: Bajcinca, Naim | TU Kaiserslautern |
Co-Chair: Zhao, Huijing | Peking University |
|
16:30-16:45, Paper TuDT5.1 | |
>Identification of Dynamic Parameters for Rigid Robots Based on Polynomial Approximation |
|
Lomakin, Alexander | Universität Erlangen-Nürnberg |
Deutscher, Joachim | Universität Erlangen-Nürnberg |
Keywords: Calibration and Identification, Dynamics, Industrial Robots
Abstract: In this paper, an approach for the identification of the dynamic parameters, i.e., base parameters, of rigid robots is presented. By using the polynomial approximation operator, an equation is obtained for the identification of the parameters that depends solely on measurable signals and thereby contains no equation error. The resulting expressions can be evaluated online or offline by filtering the measurable signals with FIR filters. In order to identify the parameters on the basis of measurements, an algorithm is presented that calculates the parameters in a numerically stable way, without a singular value decomposition, even if the data is obtained sequentially. The parameters can be determined meaningfully by considering box constraints that ensure physical feasibility. The presented methods are finally used to identify the dynamic parameters of a delta robot and are compared to the standard approach.
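For models linear in the parameters, sequential estimation without an SVD can be illustrated with textbook recursive least squares. This toy scalar example is not the paper's algorithm, only the general idea of updating the estimate as data arrives; the regressor and the "true" parameter are made up:

```python
# Textbook recursive least squares for y = phi * theta, processing
# measurements one at a time (no batch decomposition needed).
# The regressor sequence and true parameter are illustrative.

def rls_update(theta, P, phi, y, lam=1.0):
    """One RLS step for scalar regressor phi and measurement y."""
    K = P * phi / (lam + phi * P * phi)
    theta = theta + K * (y - phi * theta)
    P = (P - K * phi * P) / lam
    return theta, P

theta, P = 0.0, 1000.0
for k in range(1, 101):
    phi = float(k % 7 + 1)   # persistently exciting regressor
    y = 2.5 * phi            # noise-free measurement, true theta = 2.5
    theta, P = rls_update(theta, P, phi, y)
print(round(theta, 4))  # 2.5
```

In the vector case, `phi` becomes the regressor row of the robot's identification model and `P` a covariance matrix, but the recursion has the same shape.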
|
|
16:45-17:00, Paper TuDT5.2 | |
>Nonlinear Balance Control of an Unmanned Bicycle: Design and Experiments |
> Video Attachment
|
|
Cui, Leilei | New York University |
Wang, Shuai | Tencent |
Lai, Jie | Tencent |
Chen, Xiangyu | TENCENT |
Yang, Sicheng | Tencent |
Zhang, Zhengyou | Tencent |
Jiang, Zhong-Ping | New York University |
Keywords: Body Balancing, Dynamics, Wheeled Robots
Abstract: In this paper, nonlinear control techniques are exploited to balance an unmanned bicycle with an enlarged stability domain. We consider two cases. In the first case, when the autonomous bicycle is balanced by the flywheel, the steering angle is set to zero and the torque of the flywheel is used as the control input. The controller is designed based on the Interconnection and Damping Assignment Passivity-Based Control (IDA-PBC) method. In the second case, when the bicycle is balanced by the handlebar, the bicycle's velocity is high and the flywheel is turned off. The angular velocity of the handlebar is used as the control input, and the balance controller is designed based on feedback linearization. In both cases, the global stability of the closed-loop unmanned bicycle is theoretically proved based on Lyapunov theory. Experiments are conducted to validate the efficacy of the proposed nonlinear balance controllers.
|
|
17:00-17:15, Paper TuDT5.3 | |
>Modeling Cable-Driven Joint Dynamics and Friction: A Bond-Graph Approach |
> Video Attachment
|
|
Ludovico, Daniele | Istituto Italiano Di Tecnologia |
Guardiani, Paolo | Istituto Italiano Di Tecnologia |
Pistone, Alessandro | Istituto Italiano Di Tecnologia |
Lee, Jinoh | Fondazione Istituto Italiano Di Tecnologia (IIT) |
Cannella, Ferdinando | Istituto Italiano Di Tecnologia |
Caldwell, Darwin G. | Istituto Italiano Di Tecnologia |
Canali, Carlo | Istituto Italiano Di Tecnologia, Via Morego, 30, 16163 Genova |
Keywords: Dynamics, Calibration and Identification
Abstract: Cable-driven joints have proved to be an effective solution in a wide variety of applications, ranging from medical to industrial fields, where light structures, interaction with unstructured and constrained environments, and precise motion are required. These requirements are achieved by moving the actuators from the joints to the robot chassis. Despite these positive properties, a cable-driven robotic arm requires complex cable routing within the entire structure to transmit motion to all joints. The main effect of this routing is a friction phenomenon that reduces the accuracy of the motion of the robotic device. In this paper, a bond-graph approach is presented to model a family of cable-driven joints, including a novel friction model that can easily be implemented in a control algorithm to compensate for the friction forces induced by the rope sliding through bushings.
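The friction induced by a rope sliding over pulleys and through bushings is classically captured by the capstan equation, which relates the tensions on the two sides of a wrapped contact. The values below are purely illustrative, not from the paper's model:

```python
# Capstan equation: tension needed on the pulling side to overcome
# friction over a wrap of angle theta with friction coefficient mu.
# Load, mu, and wrap angle are illustrative values.

import math

def capstan_tension(t_hold, mu, theta):
    """Pulling-side tension T2 = T1 * exp(mu * theta)."""
    return t_hold * math.exp(mu * theta)

# A 10 N load, mu = 0.2, wrapped 90 degrees:
t_pull = capstan_tension(10.0, 0.2, math.pi / 2)
print(round(t_pull, 2))  # about 13.69 N
```

Chaining this factor over every routing element shows why long cable runs lose a significant fraction of the transmitted force, which is the effect the paper's bond-graph model compensates for.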
|
|
17:15-17:30, Paper TuDT5.4 | |
>Cross Scene Prediction Via Modeling Dynamic Correlation Using Latent Space Shared Auto-Encoders |
|
Hu, Shaochi | Peking University |
Xu, Donghao | Peking University |
Zhao, Huijing | Peking University |
Keywords: Dynamics, Mapping
Abstract: This work addresses the following problem: given a set of unsynchronized historical observations of two scenes whose dynamic changes are correlated, the purpose is to learn a cross-scene predictor, so that from the observation of one scene a robot can predict online the dynamic state of the other. A method is proposed to solve the problem by modeling the dynamic correlation using latent-space-shared auto-encoders. It is assumed that the inherent correlation of the scene dynamics can be represented by a shared latent space, in which a common latent state is reached if the observations of both scenes are taken at approximately the same time. A learning model is developed by connecting two auto-encoders through the latent space, and a prediction model is built by concatenating the encoder of the input scene with the decoder of the target one. Simulation datasets are generated imitating the dynamic flows at two adjacent gates of a campus, where the dynamic changes are triggered by a common working and teaching schedule. Similar scenarios can also be found at successive intersections on a single road, at the gates of a subway station, etc. The accuracy of cross-scene prediction is examined under various conditions of scene correlation and pairwise observation. The potential of the proposed method is demonstrated by comparison with conventional end-to-end methods and linear predictions.
|
|
17:30-17:45, Paper TuDT5.5 | |
>Dynamic Parameter Estimation Utilizing Optimized Trajectories |
|
Tika, Argtim | Technische Universität Kaiserslautern |
Ulmen, Jonas | TU Kaiserslautern |
Bajcinca, Naim | TU Kaiserslautern |
Keywords: Dynamics, Optimization and Optimal Control, Calibration and Identification
Abstract: We suggest a procedure for dynamic parameter estimation of serial robot manipulators. Its basic idea relies on the synthesis of an optimal manipulation trajectory, which is based on properly introduced parameter aggregates to ensure a collection of numerically well-conditioned data-sets, yielding an accurate computation of parameter estimates. The optimal trajectory itself is computed by using a memetic algorithm, which represents a metaheuristic combination of genetic and gradient based algorithms. The algorithm is experimentally verified by estimating the parameters of the UR5 robot by Universal Robots.
|
|
17:45-18:00, Paper TuDT5.6 | |
>Modeling and Experimental Verification of a Cable-Constrained Synchronous Rotating Mechanism Considering Friction Effect |
|
Li, Yanan | Harbin Institute of Technology |
Liu, Yu | Harbin Institute of Technology |
Meng, Deshan | Graduate School at Shenzhen, Tsinghua University |
Wang, Xueqian | Center for Artificial Intelligence and Robotics, Graduate School |
Liang, Bin | Center for Artificial Intelligence and Robotics, Graduate School |
Keywords: Search and Rescue Robots, Dynamics, Flexible Robots
Abstract: The Cable-Constrained Synchronous Rotating Mechanism (CCSRM) has important application prospects in the field of cable-driven robots, as it can greatly reduce the number of driving motors while ensuring a light and slender body. However, there are significant cable friction effects and elastic deformations in a CCSRM. These nonlinear characteristics have a significant impact on synchronous motion performance. In this paper, a model of the CCSRM considering cable friction is proposed, which integrates the effects of cable pretension, elastic deformation, and friction between cable and pulley on the system characteristics. The distribution law of cable tension under the influence of friction and the resulting motion hysteresis in reverse rotation are discussed in detail. Then, an improved LuGre friction model is proposed to address the line-contact friction between cables and pulleys. Further, a dynamic model of the CCSRM is established to simulate the motion characteristics of the whole process, including the discontinuous friction phenomenon in reverse rotation. Finally, an experimental prototype of a two-axis synchronous rotating system is built, and the friction coefficient is identified. The experimental results show that the dynamic model simulates the motion characteristics of the CCSRM well.
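For reference, the standard LuGre model that the abstract's improved variant builds on can be simulated in a few lines. All parameters below are generic textbook values, not the identified ones from the paper:

```python
# Standard LuGre friction model (the paper proposes an improved variant;
# this is only the textbook baseline). Parameters are illustrative.

import math

def lugre_step(z, v, dt, s0=1e3, s1=10.0, s2=0.1, fc=1.0, fs=1.5, vs=0.01):
    """One Euler step of the LuGre bristle state; returns (z, friction force)."""
    g = (fc + (fs - fc) * math.exp(-(v / vs) ** 2)) / s0  # Stribeck curve
    zdot = v - abs(v) * z / g
    z = z + zdot * dt
    return z, s0 * z + s1 * zdot + s2 * v

# Constant sliding velocity: force approaches the Coulomb level fc + s2*v.
z, v = 0.0, 0.1
for _ in range(5000):
    z, f = lugre_step(z, v, dt=1e-4)
print(round(f, 3))  # near 1.01
```

The internal bristle state `z` is what lets LuGre reproduce presliding displacement and the hysteresis on velocity reversal that the abstract highlights for reverse rotation.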
|
|
17:45-18:00, Paper TuDT5.7 | |
>Model-Based Coupling for Co-Simulation of Robotic Contact Tasks |
|
Peiret, Albert | McGill University |
Gonzalez, Francisco | University of a Coruna |
Kovecses, Jozsef | McGill University |
Teichmann, Marek | CMLabs Simulations Inc |
Enzenhoefer, Andreas | CM Labs Simulations Inc |
Keywords: Contact Modeling, Grasping, Dynamics
Abstract: Co-simulation of complex robotic systems allows the different components to be modelled and simulated independently using methods and tools tailored to their nature and time-scale, which makes the implementation process more modular and flexible. Some applications require the use of non-iterative coupling schemes for optimal performance, such as real-time interactive environments and human and hardware-in-the-loop setups. Stability of non-iterative schemes is challenging due to the restricted and delayed information that is exchanged between subsystems, and robust prediction of interface variables is key. Here, we propose a framework for exchanging model information between mechanical systems with contact, where reduced-order models approximate the interface dynamics of the subsystems. Effective mass and force terms are formulated using a reduced representation of the model, which can then be exchanged between subsystems and integrated in their simulation. The analysis of several simulations of challenging robotic contact tasks, such as grasping and insertion with jamming, shows that model-based coupling allows for stable co-simulation with larger interface stiffness values, resulting in stronger coupling and higher simulation accuracy.
|
|
TuDT6 |
Room T6 |
Estimation and Identification |
Regular session |
Chair: Leonessa, Alexander | Virginia Tech |
Co-Chair: Huang, Xiaowei | University of Liverpool |
|
16:30-16:45, Paper TuDT6.1 | |
>Assessment of Soil Strength Using a Robotically Deployed and Retrieved Penetrometer |
> Video Attachment
|
|
Montano, Victor | University of Houston |
Shah, Ami | University of Houston |
Akinwande, Samuel | University of Houston |
Jafari, Navid | Louisiana State University |
Becker, Aaron | University of Houston |
Keywords: Force and Tactile Sensing, Field Robots, Aerial Systems: Applications
Abstract: This paper presents a method for performing free-fall penetrometer tests for soft soils using an instrumented dart deployed by a quadcopter. Tests were performed with three soil types and used to examine the effect of drop height on the penetration depth and the deceleration profile. Further tests analyzed the force required to remove a dart from the soil and the effect of pulling at different speeds and angles. The pull force of a consumer drone was measured, and tests were performed where a drone delivered and removed darts in soil representative of a wetland environment.
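A back-of-envelope relation between drop height and penetration depth follows from assuming a constant soil deceleration, which is a strong simplification of the full deceleration profiles the paper measures; the deceleration value below is invented:

```python
# Constant-deceleration toy model of a free-fall penetrometer:
# impact speed v = sqrt(2*g*h), stopping depth d where v^2 = 2*a*d,
# so d = h * g / a. The soil deceleration is an illustrative value.

G = 9.81  # gravitational acceleration, m/s^2

def penetration_depth(drop_height, soil_decel):
    """Penetration depth (m) for drop height (m) and constant deceleration (m/s^2)."""
    return drop_height * G / soil_decel

print(round(penetration_depth(2.0, 200.0), 4))  # 0.0981 m
```

Inverting the same relation, a measured deceleration profile and penetration depth constrain the soil's dynamic resistance, which is the quantity of interest in the experiments.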
|
|
16:45-17:00, Paper TuDT6.2 | |
>Guaranteed Parameter Estimation of Hunt-Crossley Model with Chebyshev Polynomial Approximation for Teleoperation |
|
Budolak, Daniel | Virginia Tech |
Leonessa, Alexander | Virginia Tech |
Keywords: Telerobotics and Teleoperation, Contact Modeling, Calibration and Identification
Abstract: In haptic time-delayed teleoperation, as the time delay of the communication channel increases, teleoperation system stability and performance degrade. To increase performance and provide better stability margins, various estimation methods and observers have been implemented in the literature to more accurately capture the force exerted by the remote system. Previous solutions focused on environment force estimation methods that primarily rely on linearization of the Hunt-Crossley (HC) contact model, which rests on limiting assumptions. This work addresses the shortcomings of the aforementioned methods by investigating alternative HC parameter estimation techniques. A new application of Chebyshev polynomial approximation for adaptive parameter estimation of the HC model is proposed. This approximation is compared to current linearization methods as well as to nonlinear estimation methods that are not well covered in the literature. Moreover, the Chebyshev approximation is used in a new estimation approach that provides control via backstepping with adaptive parameter estimation using Lyapunov methods. This method reduces excitation requirements by using nonlinear swapping and the data accumulation concept to guarantee parameter convergence. A simulated full teleoperation system with time delay demonstrates the effectiveness of this approach.
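The Hunt-Crossley model whose parameters are being estimated is nonlinear in the penetration through the exponent n, which is why linearized estimators need restrictive assumptions. One common parameterization (the paper may use an equivalent form; the values below are illustrative):

```python
# Hunt-Crossley contact model, f = k * x^n * (1 + lam * xdot):
# stiffness k, exponent n, and damping-like term lam are illustrative.

def hunt_crossley(x, xdot, k=1000.0, n=1.5, lam=0.5):
    """Contact force for penetration x >= 0 and penetration rate xdot."""
    if x <= 0.0:
        return 0.0
    return k * x ** n * (1.0 + lam * xdot)

print(round(hunt_crossley(0.01, 0.2), 4))  # 1.1 N at 1 cm penetration
```

Estimating (k, n, lam) jointly from force and penetration data is the nonlinear problem the Chebyshev approximation is meant to make tractable for adaptive estimation.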
|
|
17:00-17:15, Paper TuDT6.3 | |
>Practical Verification of Neural Network Enabled State Estimation System for Robotics |
|
Huang, Wei | University of Liverpool |
Zhou, Yifan | University of Liverpool |
Sun, Youcheng | Queen's University Belfast |
Sharp, James | Dstl |
Maskell, Simon | University of Liverpool |
Huang, Xiaowei | University of Liverpool |
Keywords: Formal Methods in Robotics and Automation, Failure Detection and Recovery, Model Learning for Control
Abstract: We study for the first time the verification problem for learning-enabled state estimation systems in robotics, which use a Bayes filter for localisation and a deep neural network to process sensory input into observations for the Bayes filter. Specifically, we are interested in a robustness property of such systems: given an adversary with a certain ability to attack the neural network without being noticed, is the state estimation system able to function with only a minor loss of localisation precision? For verification purposes, we reduce the state estimation systems to a novel class of labelled transition systems with payoffs and partial order relations, and formally express the robustness property as a constrained optimisation objective. Based on this, practical verification algorithms are developed. As a major case study, we work with a real-world dynamic tracking system that uses a Kalman filter (a special case of the Bayes filter) to localise and track a ground vehicle. Its perception system, based on convolutional neural networks, processes a high-resolution Wide Area Motion Imagery (WAMI) data stream. Experimental results show that our algorithms can not only verify the robustness of the WAMI tracking system but also provide useful counterexamples.
|
|
17:15-17:30, Paper TuDT6.4 | |
>Markov Decision Processes with Unknown State Feature Values for Safe Exploration Using Gaussian Processes |
> Video Attachment
|
|
Budd, Matthew | University of Oxford |
Lacerda, Bruno | University of Oxford |
Duckworth, Paul | University of Oxford |
West, Andrew | The University of Manchester |
Lennox, Barry | The University of Manchester |
Hawes, Nick | University of Oxford |
Keywords: Discrete Event Dynamic Automation Systems, Probability and Statistical Methods, Robotics in Hazardous Fields
Abstract: When exploring an unknown environment, a mobile robot must decide where to observe next. It must do this whilst minimising the risk of failure, by only exploring areas that it expects to be safe. In this context, safety refers to the robot remaining in regions where critical environment features (e.g. terrain steepness, radiation levels) are within ranges the robot is able to tolerate. More specifically, we consider a setting where a robot explores an environment modelled with a Markov decision process, subject to bounds on the values of one or more environment features which can only be sensed at runtime. We use a Gaussian process to predict the value of the environment feature in unvisited regions, and propose an estimated Markov decision process, a model that integrates the Gaussian process predictions with the environment model transition probabilities. Building on this model, we propose an exploration algorithm that, contrary to previous approaches, considers probabilistic transitions and explicitly reasons about the uncertainty over the Gaussian process predictions. Furthermore, our approach increases the speed of exploration by selecting locations to visit further away from the currently explored area. We evaluate our approach on a real-world gamma radiation dataset, tackling the challenge of a nuclear material inspection robot exploring an a priori unknown area.
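The safety test the abstract describes — treating a location as safe only when the pessimistic GP prediction of the environment feature stays within tolerance — can be sketched with a small hand-rolled GP regression (the 1-D locations, unit-variance RBF kernel, and threshold below are illustrative assumptions, not the paper's setup):

```python
import numpy as np

def rbf(a, b, ell=1.0):
    # Squared-exponential kernel between two sets of 1-D locations.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

# Feature values (e.g. radiation level) observed at visited locations.
X = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.1, 0.2, 0.8, 0.3])

# GP posterior mean and variance at candidate locations.
Xs = np.linspace(0.0, 5.0, 6)
K = rbf(X, X) + 1e-6 * np.eye(len(X))   # jitter for numerical stability
Ks = rbf(Xs, X)
mean = Ks @ np.linalg.solve(K, y)
var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
std = np.sqrt(np.maximum(var, 0.0))

# A candidate is deemed safe only if the pessimistic (mean + 2*std)
# estimate of the feature stays below the tolerance threshold.
threshold = 1.0
safe = mean + 2.0 * std < threshold
print(safe)
```

Far-away candidates revert to the prior, so their uncertainty alone marks them unsafe, which is exactly what pushes the robot to expand the frontier gradually.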
|
|
TuDT7 |
Room T7 |
Force and Torque Sensing |
Regular session |
Chair: Johnson, Aaron | Carnegie Mellon University |
Co-Chair: Choi, Hyouk Ryeol | Sungkyunkwan University |
|
16:30-16:45, Paper TuDT7.1 | |
>Contact Localization Using Velocity Constraints |
> Video Attachment
|
|
Wang, Sean J. | Carnegie Mellon University |
Bhatia, Ankit | Carnegie Mellon University |
Mason, Matthew T. | Carnegie Mellon University |
Johnson, Aaron | Carnegie Mellon University |
Keywords: Force and Tactile Sensing, Perception for Grasping and Manipulation, Legged Robots
Abstract: Localizing contacts and collisions is an important aspect of failure detection and recovery for robots and can aid perception and exploration of the environment. Contrary to state-of-the-art methods that rely on forces and torques measured on the robot, this paper proposes a kinematic method for proprioceptive contact localization on compliant robots using velocity measurements. The method is validated on two planar robots, the quadrupedal Minitaur and the two-fingered Direct Drive (DD) Hand which are compliant due to inherent transparency from direct drive actuation. Comparisons to other state-of-the-art proprioceptive methods are shown in simulation. Preliminary results on further extensions to complex geometry (through numerical methods) and spatial robots (with a particle filter) are discussed.
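For intuition on velocity-based contact localization (a much-simplified planar caricature, not the paper's full method): if a link is momentarily pinned at an unknown contact, the contact is the point whose rigid-body velocity vanishes, recoverable directly from measured linear and angular velocities:

```python
import numpy as np

def pinned_contact_point(v, omega):
    """Planar rigid body with body-origin velocity v = (vx, vy) and
    angular rate omega: solve v + omega * perp(p) = 0 for the point p
    that is instantaneously stationary (the pinned contact)."""
    vx, vy = v
    return np.array([-vy / omega, vx / omega])

# Body translating at (0.0, 0.2) m/s while rotating at 2.0 rad/s:
p = pinned_contact_point((0.0, 0.2), 2.0)
print(p)  # contact at (-0.1, 0.0)
```

Note this uses only kinematic (velocity) measurements, with no force/torque sensing, which is the key contrast the abstract draws with wrench-based methods.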
|
|
16:45-17:00, Paper TuDT7.2 | |
>Ultra-Thin Joint Torque Sensor with Enhanced Sensitivity for Robotic Application |
|
Seok, Dong-Yeop | Sungkyunkwan University |
Kim, Yong Bum | Sungkyunkwan University |
Lee, Seung Yeon | Sungkyunkwan University |
Kim, Jae Yun | SUNGKYUNKWAN, Mechanical Engineering, Robottory |
Choi, Hyouk Ryeol | Sungkyunkwan University |
Keywords: Force and Tactile Sensing, Mechanism Design
Abstract: As advanced robotic technologies such as human-robot interaction and automatic assembly processes have emerged, torque sensors have become an essential component for robots. However, commercial torque sensors are not suitable for robotic applications because of their large sizes, heavy weights, narrow options, and high prices. In this letter, we develop a novel capacitive joint torque sensor with an ultra-thin structure, high performance, and low cost. To achieve these goals, novel designs are applied to both the sensing and deformable parts, the most important elements of the torque sensor. To obtain high sensitivity, a novel electrode structure called the wedge electrode is applied to the sensing part, and a new deformable structure is designed to be ultra-thin and easy to manufacture. The electrode and deformable structures are then implemented in a single torque sensor. The developed torque sensor was calibrated based on an artificial neural network (ANN) model and verified, by comparison with a commercial torque sensor, to achieve high accuracy and sensitivity with low crosstalk. Finally, a high-performance torque sensor was realized in an ultra-thin package with a diameter of 108 mm and a thickness of 13 mm.
|
|
17:00-17:15, Paper TuDT7.3 | |
>Six-Axis Force/Torque Fingertip Sensor for an Anthropomorphic Robot |
|
Kim, Uikyum | Korea Institute of Machinery & Materials (KIMM) |
Jeong, Heeyeon | Korea Institute of Machinery & Materials |
Do, Hyun Min | Korea Institute of Machinery and Materials |
Park, Jongwoo | Korea Institute of Machinery & Materials |
Park, Chanhun | KIMM |
Keywords: Force and Tactile Sensing, Multifingered Hands
Abstract: To manipulate objects using a robot hand, it is important to measure the information of the various forces on the fingertips. In this paper, a six-axis force/torque (F/T) fingertip sensor for a robot hand is introduced. The sensor was developed to provide the ability to measure six-axis F/T while remaining feasible for robot fingertip integration because of its miniaturization, light weight, and low cost (thanks to its simple manufacturing process). In particular, a novel highly sensitive shear force measurement method is proposed that uses the eccentricity of two cylinders. The designed six-axis F/T sensor was also fabricated. We demonstrate that the developed sensor can be easily installed into the human-sized fingertip of a robot, and we performed an accuracy evaluation using several experiments.
|
|
17:15-17:30, Paper TuDT7.4 | |
>A Flexible Dual-Core Optical Waveguide Sensor for Simultaneous and Continuous Measurement of Contact Force and Position |
> Video Attachment
|
|
Zhang, Zhong | City University of Hong Kong |
Li, Xiong | Tencent |
Pan, Jia | University of Hong Kong |
Li, Kaiwei | Institute of Photonics Technology, Jinan University |
Zheng, Yu | Tencent |
Zhang, Zhengyou | Tencent |
Keywords: Force and Tactile Sensing, Soft Sensors and Actuators, Perception for Grasping and Manipulation
Abstract: Owing to their chemical inertness, immunity to electromagnetic interference, light weight, small size, and softness, optical waveguides have recently attracted much attention for tactile sensing. This paper presents a new waveguide design with two layers of cores, one of uniform width and the other of incrementally increasing width. It is deduced and verified that the contact force can be derived from the light power loss in the uniform-width core, while the contact position can be derived from the light power loss in the other core together with the estimated force. With this dual-core design, a single waveguide can simultaneously and continuously measure the contact force and position along its length, which makes it well suited for integration on thin, long robotic parts such as robotic fingers. A hardware experiment demonstrates its effectiveness on a two-finger gripper in an assembly task. The dual-core waveguide achieves 2 mm spatial resolution and 0.1 N sensitivity.
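The decoding logic the abstract implies — force from the uniform core alone, position from the tapered core once the force is known — can be sketched with hypothetical linear calibration constants (the constants A and B and the loss models below are assumptions for illustration, not the paper's calibration):

```python
# Hypothetical calibration: light-power loss in the uniform-width core
# scales with contact force alone, while loss in the tapered core
# scales with both force and position along the finger.
A = 0.8   # dB per newton (assumed)
B = 0.3   # dB per newton per millimetre (assumed)

def decode(loss_uniform_db, loss_tapered_db):
    # Step 1: force from the uniform core.
    force = loss_uniform_db / A
    # Step 2: position from the tapered core, given the force estimate.
    position = loss_tapered_db / (B * force) if force > 0 else None
    return force, position

# Simulated readings for a 2 N contact at 10 mm along the waveguide:
f, p = decode(0.8 * 2.0, 0.3 * 2.0 * 10.0)
print(f, p)  # 2.0 N at 10.0 mm
```

The two-step structure is the point: a single sensing element per axis would leave force and position entangled, while the second, width-varying core disambiguates them.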
|
|
17:30-17:45, Paper TuDT7.5 | |
>Bi-Modal Hemispherical Sensors for Dynamic Locomotion and Manipulation |
> Video Attachment
|
|
Epstein, Lindsay | Massachusetts Institute of Technology |
SaLoutos, Andrew | MIT |
Kim, Donghyun | Massachusetts Institute of Technology |
Kim, Sangbae | Massachusetts Institute of Technology |
Keywords: Force and Tactile Sensing, Soft Sensors and Actuators, Sensor Fusion
Abstract: The ability to measure multi-axis contact forces and contact surface normals in real time is critical to allow robots to improve their dexterous manipulation and locomotion abilities. This paper presents a new fingertip sensor for 3-axis contact force and contact location detection, as well as improvements on an existing footpad sensor through use of a new artificial neural network estimator. The fingertip sensor is intended for use in manipulation, while the footpad sensor is intended for high force use in locomotion. Both sensors consist of pressure sensing elements embedded within a rubber hemisphere, and utilize an artificial neural network to estimate the applied forces (f_x, f_y, and f_z), and contact angles (theta and phi) from the individual sensor element readings. The sensors are designed to be inherently robust, and the hemispherical shape allows for easy integration into point feet and fingertips. Both the fingertip and footpad sensors demonstrate the ability to track forces and angles accurately over the surface of the hemisphere (theta = +/- 45 degrees and phi = +/- 45 degrees ) and can experience up to 25N and 450N normal force, respectively, without saturating. The performance of the sensor is demonstrated with experimental results of dynamic control of a robotic arm with real-time sensor feedback.
|
|
17:45-18:00, Paper TuDT7.6 | |
>6-Axis Force/Torque Sensor with a Novel Autonomous Weight Compensating Capability for Robotic Applications |
> Video Attachment
|
|
Kim, Yong Bum | Sungkyunkwan University |
Seok, Dong-Yeop | Sungkyunkwan University |
Lee, Seung Yeon | Sungkyunkwan University |
Kim, Jae Yun | SUNGKYUNKWAN, Mechanical Engineering, Robottory |
Kang, Gitae | Sungkyunkwan University |
Kim, Uikyum | Korea Institute of Machinery & Materials (KIMM) |
Choi, Hyouk Ryeol | Sungkyunkwan University |
Keywords: Force and Tactile Sensing, Physical Human-Robot Interaction, Industrial Robots
Abstract: Force/Torque (F/T) sensing technology enables dexterous robot control such as direct teaching, master-slave systems, and pick-and-place tasks. In general, a 6-axis F/T sensor is attached to the end-effector of the robot manipulator to support advanced robot systems. However, in actual applications, various tools such as robotic grippers, robotic hands, and grinders are attached to the sensor, and these cause F/T offsets with respect to gravity. In this letter, an Autonomous Weight Compensating (AWC) technique for a 6-axis F/T sensor is presented. The proposed AWC technique reduces the F/T offsets by estimating them using an installed Inertial Measurement Unit (IMU). In this study, the 6-axis F/T is measured based on a capacitive sensing scheme, and a 9-axis IMU is installed inside the sensor to estimate its orientation. The F/T offsets are then calibrated via an Artificial Neural Network (ANN) model. Finally, the performance of the proposed method is demonstrated by comparing the compensated F/T data on both trained and untrained data.
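For reference, the analytic rigid-body version of the tool-weight offset (which the paper replaces with a learned ANN model) follows from the IMU orientation alone: rotate the tool's weight into the sensor frame and take the moment about the sensor origin. A sketch with an assumed tool mass and centre of mass:

```python
import numpy as np

G_WORLD = np.array([0.0, 0.0, -9.81])   # gravity vector, world frame

def gravity_offset(R_sw, mass, com):
    # R_sw: rotation taking sensor-frame vectors to the world frame
    # (obtainable from the IMU orientation estimate).
    f_offset = R_sw.T @ (mass * G_WORLD)   # tool weight, sensor frame
    t_offset = np.cross(com, f_offset)     # induced torque offset
    return f_offset, t_offset

def compensate(raw_ft, R_sw, mass, com):
    f_off, t_off = gravity_offset(R_sw, mass, com)
    return raw_ft - np.concatenate([f_off, t_off])

# Sensor level (identity orientation), 1 kg gripper 0.05 m along the
# sensor z-axis; the raw reading contains only the tool's weight:
raw = np.array([0.0, 0.0, -9.81, 0.0, 0.0, 0.0])
clean = compensate(raw, np.eye(3), 1.0, np.array([0.0, 0.0, 0.05]))
print(clean)  # zeros: the static offset is fully removed
```

An ANN calibration like the paper's can additionally absorb effects this rigid-body model misses (sensor nonlinearity, mounting misalignment, IMU bias).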
|
|
TuDT8 |
Room T8 |
Force Control |
Regular session |
Chair: Wen, John | Rensselaer Polytechnic Institute |
Co-Chair: Riener, Robert | ETH Zurich |
|
16:30-16:45, Paper TuDT8.1 | |
>Model Predictive Position and Force Trajectory Tracking Control for Robot-Environment Interaction |
> Video Attachment
|
|
Gold, Tobias | Friedrich-Alexander-Universität Erlangen-Nürnberg |
Völz, Andreas | Friedrich-Alexander-Universität Erlangen-Nürnberg |
Graichen, Knut | Friedrich Alexander University Erlangen-Nürnberg |
Keywords: Force Control, Compliance and Impedance Control, Soft Robot Applications
Abstract: The development of modern sensitive lightweight robots allows the use of robot arms in numerous new scenarios. Especially in applications where interaction between the robot and an object is desired, e.g. in assembly, conventional purely position-controlled robots fail. Former research has focused, among others, on control methods that center on robot-environment interaction. However, these methods often consider only separate scenarios, such as a pure force control scenario. The present paper addresses this drawback and proposes a control framework for robot-environment interaction that allows a wide range of possible interaction types. At the same time, the approach can be used for setpoint generation of position-controlled robot arms where no interaction takes place. Thus, switching between different controller types for specific kinds of interaction is not necessary. This versatility is achieved by a model predictive control-based framework which allows trajectory-following control of joint or end-effector position as well as of forces for compliant or rigid robot-environment interactions. For this purpose, the robot motion is predicted by an approximated dynamic model and the force behavior by an interaction model. The characteristics of the approach are discussed on the basis of two scenarios on a lightweight robot.
|
|
16:45-17:00, Paper TuDT8.2 | |
>Learning-Based Optimization Algorithms Combining Force Control Strategies for Peg-In-Hole Assembly |
> Video Attachment
|
|
Zou, Peng | Zhejiang University |
Zhu, Qiuguo | Zhejiang University |
Wu, Jun | Zhejiang University |
Xiong, Rong | Zhejiang University |
Keywords: Force Control, Industrial Robots, Compliant Assembly
Abstract: In this paper, an approach for automatic peg-in-hole assembly is proposed. The task is divided into two main steps: a searching phase and an inserting phase. First, a multilayer perceptron network is designed to address the hole-search problem, and a hybrid force/position controller is introduced to ensure a safe and stable interaction with the external environment. Then, for the inserting phase, a variable impedance controller based on a fuzzy Q-learning algorithm is adopted to yield compliant behavior from the robot during the hole insertion process. This approach is a practical and general way to address complex peg-in-hole assembly tasks by taking advantage of both learning-based algorithms and force control strategies, which can greatly improve the efficiency and safety of the industrial manufacturing process without identifying the unknown contact model or tediously tuning parameters. Finally, peg-in-hole experiments on an industrial robot verified the effectiveness and robustness of the proposed approach.
|
|
17:00-17:15, Paper TuDT8.3 | |
>A Variable Impedance Control Strategy for Object Manipulation Considering Non-Rigid Grasp |
> Video Attachment
|
|
Logothetis, Michalis | National Technical University of Athens, School of Mechanical En |
Karras, George | National Technical University of Athens |
Alevizos, Konstantinos | National Technical University of Athens |
Kyriakopoulos, Kostas | National Technical Univ. of Athens |
Keywords: Force Control, Compliance and Impedance Control, Motion Control
Abstract: This paper presents a novel control strategy for the compensation of the slippage effect during non-rigidly grasped object manipulation. A detailed dynamic model of the interconnected system composed of the robotic manipulator, the object and the internal forces and torques induced by the slippage effect is provided. Next, we design a model-based variable impedance control scheme, in order to achieve simultaneously zero convergence for the trajectory tracking error and the slippage velocity of the object. The desired damping and stiffness matrices are formulated online, by taking into account the measurement of the slippage velocity on the contact. A formal Lyapunov-based analysis guarantees the stability and convergence properties of the resulting control scheme. A set of extensive simulation studies clarifies the proposed method and verifies its efficacy.
|
|
17:15-17:30, Paper TuDT8.4 | |
>Towards Dynamic Transparency: Robust Interaction Force Tracking Using Multi-Sensory Control on an Arm Exoskeleton |
> Video Attachment
|
|
Zimmermann, Yves Dominic | ETH Zurich |
Farshidian, Farbod | ETH Zurich |
Küçüktabak, Emek Barış | Northwestern University, Shirley Ryan Ability Lab |
Hutter, Marco | ETH Zurich |
Riener, Robert | ETH Zurich |
Keywords: Force Control, Physical Human-Robot Interaction, Haptics and Haptic Interfaces
Abstract: High-quality free-motion rendering is one of the most vital traits of an immersive human-robot interaction. Rendering free motion is notably challenging for rehabilitation exoskeletons due to their relatively high weight and the powerful actuators required for strength training and support. In the presence of dynamic human movements, accurate feedback linearization of the robot's dynamics is necessary to allow for a linear synthesis of interaction wrench controllers. Hence, we introduce a virtual model controller that uses two 6-DoF force sensors to control the interaction wrenches of a multi-DoF torque-controlled exoskeleton over the joint accelerations and inverse dynamics. Furthermore, we propose a disturbance observer for controlling the joint acceleration to diminish the influence of modeling errors on the inverse dynamics. To provide a high-bandwidth, low-bias estimation of the system's acceleration, we introduce a bias observer which fuses the information from joint encoders and seven low-priced IMUs. We have validated the performance of our proposed control structure on the shoulder and arm exoskeleton ANYexo. The experimental comparison of the controllers shows a reduction of the felt inertia and maximum reflected joint torque by a factor of more than three compared to the state of the art. The controllers' robustness w.r.t. a model mismatch is validated. The experiments show that the closed-loop acceleration control improves the tracking, particularly at joints with low inertia. The proposed controllers' performance sets a new benchmark in haptic transparency for comparable devices and should be transferable to other applications.
|
|
17:30-17:45, Paper TuDT8.5 | |
>Robotic Deep Rolling with Iterative Learning Motion and Force Control |
|
Chen, Shuyang | Rensselaer Polytechnic Institute |
Wang, Zhigang | Raytheon Technologies Research Center |
Chakraborty, Abhijit | United Technologies Research Center |
Klecka, Michael | Raytheon Technologies Research Center |
Saunders, Glenn | Rensselaer Polytechnic Institute |
Wen, John | Rensselaer Polytechnic Institute |
Keywords: Force Control, Industrial Robots
Abstract: Large industrial robots offer an attractive option for deep rolling in terms of cost and flexibility. These robots are typically designed for fast and precise motion, but may be commanded to perform force control by adjusting the position setpoint based on the measurements from a wrist-mounted force/torque sensor. Contact force during deep rolling may be as high as 2000 N. The force control performance is affected by robot dynamics, robot joint servo controllers, and motion-induced inertial force. In this paper, we compare three deep rolling force control strategies: position-based rolling with open-loop force control, impedance control, and gradient-based iterative learning control (ILC). Open-loop force control is easy to implement but does not correct for any force deviation. Impedance control uses force feedback, but does not track non-constant force profiles well. The ILC augments the impedance control by updating the commanded motion and force profiles based on the motion and force error trajectories in the previous iteration. The update is based on the gradient of the motion and force trajectories with respect to the commanded motion and force. We show that this gradient may be generated experimentally without the need for an explicit model. This is possible because the mapping from the commanded joint motion to the actual joint motion is nearly identical for all joints in industrial robots. We have evaluated the approach on a physical testbed using an ABB robot and demonstrated the convergence of the ILC scheme. The final ILC tracking performance of a trapezoidal force profile improves by over 70% in terms of the RMS error compared with the impedance controller.
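The gradient-based iterate-and-correct structure can be illustrated on a toy lifted system (the geometric impulse response, learning gain, and trapezoidal reference below are assumptions for illustration, not the paper's identified plant):

```python
import numpy as np

N = 50
h = 0.5 ** np.arange(N)              # assumed impulse response
P = np.zeros((N, N))                 # lifted commanded->actual force map
for i in range(N):
    P[i, : i + 1] = h[: i + 1][::-1]

# Trapezoidal force reference, mimicking the deep-rolling profile.
y_des = np.concatenate(
    [np.linspace(0, 1, 20), np.ones(10), np.linspace(1, 0, 20)])

u = np.zeros(N)                      # commanded force profile
gamma = 0.2                          # learning gain
for _ in range(300):                 # one pass = one rolling iteration
    e = y_des - P @ u                # tracking error of this iteration
    u = u + gamma * (P.T @ e)        # gradient step on 0.5*||e||^2

rms = np.sqrt(np.mean((y_des - P @ u) ** 2))
print(rms)  # shrinks toward zero over iterations
```

The point the abstract makes is that P.T @ e (the gradient) can be obtained experimentally rather than from a model, because the commanded-to-actual map is nearly identical across joints.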
|
|
17:45-18:00, Paper TuDT8.6 | |
>Torque-Bounded Admittance Control Realized by a Set-Valued Algebraic Feedback (I) |
|
Kikuuwe, Ryo | Hiroshima University |
Keywords: Force Control, Robot Safety, Physical Human-Robot Interaction
Abstract: This paper proposes a new admittance controller that realizes safe behavior even under torque saturation. The new controller is analytically equivalent to a conventional admittance controller as long as the actuator torque is not saturated, but is free from unsafe behaviors such as snapping back, oscillation, or overshoots, which may happen with conventional admittance controllers after torque saturation. The new controller is described by a differential algebraic inclusion, and can be understood as a conventional admittance controller expanded with an additional algebraic loop through a normal-cone operator. Its continuous-time representation involves a nonsmooth, set-valued function, but its discrete-time implementation is free from set-valuedness and given as a closed-form algorithm as a result of the use of implicit (backward) Euler discretization. The controller is tested with one joint of an industrial manipulator equipped with a force sensor.
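In one dimension, the flavour of the result can be sketched as follows: after implicit (backward) Euler discretization, the set-valued feedback reduces to a closed-form clamping, so the admittance reference never demands more torque than the actuator can deliver (this toy single-DoF version is our own caricature of the idea, not the paper's algorithm):

```python
def admittance_step(v, f_ext, M=1.0, D=5.0, dt=0.001, tau_max=2.0):
    # Unsaturated implicit-Euler update of M*vdot + D*v = f_ext:
    v_free = (M * v + dt * f_ext) / (M + dt * D)
    # Torque needed to realize it, clamped to the actuator bound; the
    # set-valued algebraic loop collapses to this closed-form saturation.
    tau = M * (v_free - v) / dt
    tau = max(-tau_max, min(tau_max, tau))
    # The reference velocity advances only as far as the torque allows,
    # so there is no wound-up state to snap back from after saturation.
    return v + dt * tau / M

print(admittance_step(0.0, 1000.0))  # limited by tau_max: 0.002
print(admittance_step(0.0, 1.0))     # unsaturated: matches v_free
```

Keeping the internal admittance state consistent with the saturated torque is what removes the snap-back and overshoot behaviour of the conventional controller.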
|
|
TuDT9 |
Room T9 |
Whole-Body Motion Planning and Control I |
Regular session |
Chair: Ott, Christian | German Aerospace Center (DLR) |
|
16:30-16:45, Paper TuDT9.1 | |
>Redundancy Resolution under Hard Joint Constraints: A Generalized Approach to Rank Updates |
> Video Attachment
|
|
Ziese, Anton | Technische Universität Darmstadt |
Fiore, Mario Daniele | KUKA Deutschland GmbH |
Peters, Jan | Technische Universität Darmstadt |
Zimmermann, Uwe E. | KUKA Deutschland GmbH |
Adamy, Jürgen | Technische Universität Darmstadt |
Keywords: Whole-Body Motion Planning and Control, Redundant Robots, Motion and Path Planning
Abstract: The increasing interest in autonomous robots with a high number of degrees of freedom for industrial applications and service robotics has also increased the demand for efficient control algorithms. The unstructured environments these robots operate in often impose constraints on the joint motion, an important type being the joint limits of the robot itself. These circumstances demand control algorithms that handle multiple tasks as well as constraints efficiently. This paper shows that both kinematic and torque control of redundant robots under hard joint constraints can be formulated in a single framework as a constrained optimization problem. To solve this problem, a generalization of the Fast-SNS algorithm to weighted pseudoinverses is proposed, which fulfills the demand for efficient and reliable handling of joint constraints.
|
|
16:45-17:00, Paper TuDT9.2 | |
>Task Priority Matrix at the Acceleration Level: Collision Avoidance under Relaxed Constraints |
> Video Attachment
|
|
Khatib, Maram | Sapienza University of Rome |
Al Khudir, Khaled | Sapienza University of Rome |
De Luca, Alessandro | Sapienza University of Rome |
Keywords: Motion Control, Collision Avoidance, Redundant Robots
Abstract: We propose a new approach for executing the main Cartesian tasks assigned to a redundant robot while guaranteeing whole-body collision avoidance. The robot degrees of freedom are fully utilized by introducing relaxed constraints in the definition of operational and collision avoidance tasks. Desired priorities for each task are assigned using the so-called Task Priority Matrix (TPM) method, which is independent from the redundancy resolution law and handles efficiently switching of priorities. To ensure smooth motion during such task reordering, a control scheme with a suitable task allocation algorithm is developed at the acceleration level. The proposed approach is validated with MATLAB simulations and an experimental evaluation using the 7-dof KUKA LWR manipulator.
|
|
17:00-17:15, Paper TuDT9.3 | |
>Hierarchical Tracking Control with Arbitrary Task Dimensions: Application to Trajectory Tracking on Submanifolds |
|
Garofalo, Gianluca | German Aerospace Center (DLR) |
Ott, Christian | German Aerospace Center (DLR) |
Keywords: Whole-Body Motion Planning and Control, Compliance and Impedance Control, Redundant Robots
Abstract: Hierarchical impedance control has been recently shown to effectively allow trajectory tracking, while guaranteeing the order of priorities during the execution. Nevertheless, the tasks must be chosen such that, after being properly decoupled, they are all feasible and lead to an invertible Jacobian matrix. In this work, a modification is proposed that removes both of these restrictions. The user is free to specify as many tasks as desired, without necessarily guaranteeing in advance that none of them will become singular during execution. Whenever tasks with higher priority use up all the degrees of freedom, all remaining tasks are naturally ignored. Still, as soon as some of the higher-priority tasks become singular, the freed-up controllability is used to execute the next task in the stack. This happens automatically, without any rearrangement of the tasks in the priority stack. As an application, trajectory tracking on a submanifold of the workspace is considered, in which multiple charts of the atlas are used for the tasks. Simulations are used to validate the stability analysis.
|
|
17:15-17:30, Paper TuDT9.4 | |
>Feedback Whole-Body Control of Wheeled Inverted Pendulum Humanoids Using Operational Space |
> Video Attachment
|
|
Murtaza, Muhammad Ali | Georgia Institute of Technology |
Azimi, Vahid | Georgia Institute of Technology |
Hutchinson, Seth | Georgia Institute of Technology |
Keywords: Whole-Body Motion Planning and Control, Optimization and Optimal Control, Wheeled Robots
Abstract: We present a hierarchical framework for trajectory optimization and optimal feedback whole-body control of a wheeled inverted pendulum (WIP) humanoid robot. The framework extends rapidly exponentially stabilizing control Lyapunov functions (RES-CLF) to the operational space for controlling WIP humanoid robots while utilizing a hierarchical framework to compute an optimal policy. The upper level of the hierarchy encodes locomotion tasks, while the lower level incorporates the full system dynamics, including the manipulation tasks to be performed. The framework computes an optimal policy directly in the operational space, thus avoiding explicit computation of inverse kinematics or inverse dynamics. It can handle torque and task constraints while guaranteeing exponential convergence and min-norm control from the RES-CLF. The efficacy of the framework is demonstrated on the 18 degree-of-freedom (DoF) WIP humanoid robot Golem Krang and a 7-DoF planar WIP humanoid robot.
|
|
17:30-17:45, Paper TuDT9.5 | |
>Optimizing Dynamic Trajectories for Robustness to Disturbances Using Polytopic Projections |
> Video Attachment
|
|
Ferrolho, Henrique | The University of Edinburgh |
Merkt, Wolfgang Xaver | University of Oxford |
Ivan, Vladimir | University of Edinburgh |
Wolfslag, Wouter | University of Edinburgh |
Vijayakumar, Sethu | University of Edinburgh |
Keywords: Optimization and Optimal Control, Whole-Body Motion Planning and Control, Manipulation Planning
Abstract: This paper focuses on robustness to disturbance forces and uncertain payloads. We present a novel formulation to optimize the robustness of dynamic trajectories. A straightforward transcription of this formulation into a nonlinear programming problem is not tractable for state-of-the-art solvers, but it is possible to overcome this complication by exploiting the structure induced by the kinematics of the robot. The non-trivial transcription proposed allows trajectory optimization frameworks to converge to highly robust dynamic solutions. We demonstrate the results of our approach using a quadruped robot equipped with a manipulator.
|
|
17:45-18:00, Paper TuDT9.6 | |
>Learning an Optimal Sampling Distribution for Efficient Motion Planning |
> Video Attachment
|
|
Cheng, Richard | California Institute of Technology |
Shankar, Krishna | Toyota Research Institute |
Burdick, Joel | California Institute of Technology |
Keywords: Whole-Body Motion Planning and Control, Reinforcement Learning, Motion and Path Planning
Abstract: Sampling-based motion planners (SBMP) are commonly used to generate motion plans by incrementally constructing a search tree through a robot’s configuration space. For high degree-of-freedom systems, sampling is often done in a lower-dimensional space, with a steering function responsible for local planning in the higher-dimensional configuration space. However, for highly-redundant systems with complex kinematics, this approach is problematic due to the high computational cost of evaluating the steering function, especially in cluttered environments. Therefore, having an efficient, informed sampler becomes critical to online robot operation. In this study, we develop a learning-based approach with policy improvement to compute an optimal sampling distribution for use in SBMPs. Motivated by the challenge of whole-body planning for a 31 degree-of-freedom mobile robot built by the Toyota Research Institute, we combine our learning-based approach with classical graph search to obtain a constrained sampling distribution. Over multiple learning iterations, the algorithm learns a probability distribution weighting areas of low cost and high probability of success, which a graph search algorithm then uses to obtain an optimal sampling distribution for the robot. On challenging motion planning tasks for the robot, we observe significant computational speed-up, fewer edge evaluations, and more efficient paths with minimal computational overhead. We show the efficacy of our approach with a number of experiments in whole-body motion planning.
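The policy-improvement flavour of learning a sampling distribution can be illustrated with a toy weighted sampler over workspace cells (the cell grid, reinforcement rule, and gain are assumptions for illustration, not the paper's algorithm):

```python
import random

# Uniform initial weights over ten workspace cells; successful low-cost
# plans reinforce the cells their samples came from.
weights = {c: 1.0 for c in range(10)}

def sample_cell():
    # Draw a cell with probability proportional to its weight.
    total = sum(weights.values())
    r = random.uniform(0.0, total)
    acc = 0.0
    for c, w in weights.items():
        acc += w
        if r <= acc:
            return c
    return c  # numerical edge case: return the last cell

def reinforce(cells_on_path, eta=0.5):
    # Policy improvement step: up-weight cells used by a successful plan.
    for c in cells_on_path:
        weights[c] *= (1.0 + eta)

for _ in range(5):
    reinforce([3, 4])   # pretend the planner succeeded through cells 3, 4

print(weights[3] / weights[0])   # 1.5**5 = 7.59375
```

Subsequent sampling then concentrates in cells 3 and 4, which is the mechanism by which the learned distribution cuts down steering-function evaluations.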
|
|
TuDT10 |
Room T10 |
Whole-Body Motion Planning and Control II |
Regular session |
Chair: Hsieh, M. Ani | University of Pennsylvania |
Co-Chair: Haghshenas-Jaryani, Mahdi | New Mexico State University |
|
16:30-16:45, Paper TuDT10.1 | |
>A Topological Approach to Path Planning for a Magnetic Millirobot |
> Video Attachment
|
|
Mansfield, Ariella | University of Pennsylvania |
Kularatne, Dhanushka | Drexel University |
Steager, Edward | University of Pennsylvania |
Hsieh, M. Ani | University of Pennsylvania |
Keywords: Motion and Path Planning, Micro/Nano Robots
Abstract: We present a path planning strategy for a magnetic millirobot where the nonlinearities in the external magnetic force field (MFF) are encoded in the graph used for planning. The strategy creates a library of candidate MFFs and characterizes their topologies by identifying the unstable manifolds in the workspace. The path planning problem is then posed as a graph search problem where the computed path consists of a sequence of unstable manifold segments and their associated MFFs. By tracking the robot's position and sequentially applying the MFFs, the robot navigates along each unstable manifold until it reaches the goal. We discuss the theoretical guarantees of the proposed strategy and experimentally validate the strategy.
|
|
16:45-17:00, Paper TuDT10.2 | |
>Optimizing Coordinate Choice for Locomotion Systems with Toroidal Shape Spaces |
|
Lin, Bo | Georgia Institute of Technology |
Zhong, Baxi | Georgia Institute of Technology |
Ozkan-Aydin, Yasemin | Georgia Institute of Technology |
Aydin, Enes | Georgia Institute of Technology |
Choset, Howie | Carnegie Mellon University |
Goldman, Daniel | Georgia Institute of Technology |
Blekherman, Grigoriy | Georgia Institute of Technology |
Keywords: Biologically-Inspired Robots, Multi-Contact Whole-Body Motion Planning and Control, Nonholonomic Motion Planning
Abstract: In a geometric mechanics framework, the configuration space is decomposed into a shape space and a position space. The internal motion of the system is prescribed by a closed loop in the shape space, which causes net motion in the position space. If the shape space is a simply connected domain in a Euclidean space, then with an optimal choice of the body frame, the displacement in the position space is reasonably approximated by the surface integral of the height function, a functional relationship between the internal shape and position space variables. Our recent work has extended the scope of geometric methods from limbless undulatory systems to those with legs; interestingly, the shape space for such systems has a torus structure. However, to the best of our knowledge, the optimal choice of the body frame on a torus shape space has not been explored. In this paper, we develop a method to optimally choose the body frame on the torus, which results in a good approximation of the displacement by the integral of the height function. We apply our methods to a centipede locomotion system and observe quantitative agreement between our predictions and experimental results.
|
|
17:00-17:15, Paper TuDT10.3 | |
>Autonomous Navigation and Obstacle Avoidance of a Snake Robot with Combined Velocity-Heading Control |
|
Haghshenas-Jaryani, Mahdi | New Mexico State University |
Sevil, Hakki Erhan | University of West Florida |
Keywords: Biologically-Inspired Robots, Motion and Path Planning, Motion Control
Abstract: This paper presents combined velocity-heading control of a planar snake robot for autonomous navigation and obstacle avoidance in a simulation environment. The kinematics and dynamics of the snake robot were derived using the articulated-body algorithm without considering the nonholonomic constraints. A double-layer controller was designed to control both the heading direction and the average velocity through joint motion control. We adopt a rule-based expert system for autonomous navigation while avoiding obstacles and restricted areas. The guidance commands are realized by two proportional controllers that use feedback of the estimated speed and heading of the robot. To validate the combined velocity-heading controller, a series of simulations was carried out for a snake robot with 6 links (8 DOF). The autonomous navigation and obstacle-avoidance algorithms provided the desired commands to follow the targeted trajectories. The simulation results showed the effectiveness of the controller in following the desired heading directions and achieving the targeted velocities with small errors, reaching the goal position while avoiding obstacles.
|
|
17:15-17:30, Paper TuDT10.4 | |
>Dynamic Legged Manipulation of a Ball through Multi-Contact Optimization |
|
Yang, Chenyu | Shanghai Jiao Tong University(SJTU) |
Zhang, Bike | University of California, Berkeley |
Zeng, Jun | University of California, Berkeley |
Agrawal, Ayush | University of California at Berkeley |
Sreenath, Koushil | University of California, Berkeley |
Keywords: Humanoid and Bipedal Locomotion, Whole-Body Motion Planning and Control
Abstract: The feet of robots are typically used to design locomotion strategies, such as balancing, walking, and running. However, they also have great potential to perform manipulation tasks. In this paper, we propose a model predictive control (MPC) framework for a quadrupedal robot to dynamically balance on a ball and simultaneously manipulate it to follow various trajectories such as straight lines, sinusoids, circles and in-place turning. We numerically validate our controller on the Mini Cheetah robot using different gaits including trotting, bounding, and pronking on the ball.
|
|
17:30-17:45, Paper TuDT10.5 | |
>Explore Bravely: Wheeled-Legged Robots Traversing in Unknown Rough Environment |
> Video Attachment
|
|
Haddeler, Garen | National University of Singapore, Agency for Science, Technolo |
Chan, Jianle | Institute for Infocomm Research |
You, Yangwei | Institute for Infocomm Research |
Verma, Saurab | Institute of Infocomm Research, Agency for Science, Technology A |
Adiwahono, Albertus Hendrawan | I2R A-STAR |
Chew, Chee Meng | National University of Singapore |
Keywords: Legged Robots, Whole-Body Motion Planning and Control
Abstract: This paper addresses the challenging problem of wheeled-legged robots with high degrees of freedom exploring unknown rough environments. The proposed method works as a pipeline comprising three primary modules to achieve prioritized exploration: traversability analysis, frontier-based exploration, and hybrid locomotion planning. Traversability analysis provides the robot with an evaluation of the surrounding terrain according to various criteria (roughness, slope, etc.) and other semantic information (small steps, stairs, bridges, etc.), while a novel gravity-point frontier-based exploration algorithm effectively decides which direction to go, even in unknown environments, based on the robot's current and desired poses. Given this information, the hybrid locomotion planner generates a path with an encoded motion mode (driving or walking) by optimizing over different objectives and constraints. Lastly, our approach is verified in both simulation and experiments on the wheeled quadrupedal robot Pholus.
|
|
TuDT11 |
Room T11 |
Visual Servoing |
Regular session |
Chair: Jing, Wei | A*STAR |
|
16:30-16:45, Paper TuDT11.1 | |
>KOVIS: Keypoint-Based Visual Servoing with Zero-Shot Sim-To-Real Transfer for Robotics Manipulation |
> Video Attachment
|
|
Puang, En Yen | Agency for Science, Technology and Research (A*STAR) |
Tee, Keng Peng | Institute for Infocomm Research |
Jing, Wei | A*STAR |
Keywords: Visual Servoing, Deep Learning in Grasping and Manipulation, Perception-Action Coupling
Abstract: We present KOVIS, a novel learning-based, calibration-free visual servoing method for fine robotic manipulation tasks with an eye-in-hand stereo camera system. We train the deep neural network only in a simulated environment, and the trained model can be directly used for real-world visual servoing tasks. KOVIS consists of two networks. The first, a keypoint network, learns a keypoint representation from the image using an autoencoder. The second, a visual servoing network, then learns the motion based on keypoints extracted from the camera image. The two networks are trained end-to-end in the simulated environment by self-supervised learning without manual data labeling. After training with data augmentation, domain randomization, and adversarial examples, we achieve zero-shot sim-to-real transfer to real-world robotic manipulation tasks. We demonstrate the effectiveness of the proposed method in both simulated environments and real-world experiments with different robotic manipulation tasks, including grasping, peg-in-hole insertion with 4 mm clearance, and M13 screw insertion.
|
|
16:45-17:00, Paper TuDT11.2 | |
>FlowControl: Optical Flow Based Visual Servoing |
> Video Attachment
|
|
Argus, Maximilian | University of Freiburg |
Hermann, Lukas | University of Freiburg |
Long, Jonathan | Symbio Robotics, Inc |
Brox, Thomas | University of Freiburg |
Keywords: Visual Servoing, Learning from Demonstration, Perception for Grasping and Manipulation
Abstract: Few-shot imitation is a demonstration-based approach to effectively control robots without tedious manual programming. We address this problem in the context of imitating manipulation tasks with a visual servoing approach that uses modern learning-based optical flow to find correspondences between demonstration frames and live videos during control. Our approach, which we call FlowControl, can successively track a demonstration video using only a specified foreground mask to focus and shift attention to the object being manipulated. The approach relies on RGB-D observations and has several advantageous properties: it is easy to set up, requires only a single demonstration, and does not require any 3D models of the objects being manipulated. Moreover, it exploits the robustness of learned optical flow methods to variation in visual appearance, enabling generalization to variations in the observed scene. We demonstrate these properties on a range of problems, some requiring very precise motions and others the ability to generalize.
|
|
17:00-17:15, Paper TuDT11.3 | |
>Monocular Visual Shape Tracking and Servoing for Isometrically Deforming Objects |
> Video Attachment
|
|
Aranda, Miguel | SIGMA Clermont, Institut Pascal |
Corrales Ramon, Juan Antonio | Sigma-Clermont Engineering School |
Mezouar, Youcef | SIGMA-Clermont |
Bartoli, Adrien | UCA |
Ozgur, Erol | SIGMA-Clermont / Institut Pascal |
Keywords: Visual Servoing, Perception for Grasping and Manipulation, Computer Vision for Manufacturing
Abstract: We address the monocular visual shape servoing problem. This pushes the challenging visual servoing problem one step further from rigid object manipulation towards deformable object manipulation. Explicitly, it implies deforming the object towards a desired shape in 3D space by robots using monocular 2D vision. We specifically concentrate on a scheme capable of controlling large isometric deformations. Two important open subproblems arise for implementing such a scheme. (P1) Since it is concerned with large deformations, perception requires tracking the deformable object's 3D shape from monocular 2D images which is a severely underconstrained problem. (P2) Since rigid robots have fewer degrees of freedom than a deformable object, the shape control becomes underactuated. We propose a template-based shape servoing scheme in which we solve these two problems. The template allows us to both infer the object's shape using an improved Shape-from-Template algorithm and steer the object's deformation by means of the robots' movements. We validate the scheme via simulations and real experiments.
|
|
17:15-17:30, Paper TuDT11.4 | |
>Integrating Features Acceleration in Visual Predictive Control |
> Video Attachment
|
|
Fusco, Franco | LS2N Centrale Nantes |
Kermorgant, Olivier | École Centrale Nantes |
Martinet, Philippe | INRIA |
Keywords: Visual Servoing
Abstract: This paper proposes new prediction models for Visual Predictive Control that can lead to both better motions in the feature space and shorter sensor trajectories in 3D. Contrary to existing first-order models based only on the interaction matrix, it is proposed to integrate acceleration information provided by second-order models. This allows the evolution of the image features to be estimated more accurately and, consequently, control inputs to be computed that properly steer the system to a desired configuration. By means of simulations, the performance of these new predictors is shown and compared to that of a classical model. Experiments using both image point features and polar coordinates confirm the validity and generality of the approach, showing that the increased complexity of the predictors does not prevent real-time implementation.
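The distinction the abstract draws can be made concrete with a minimal sketch: a classical first-order predictor propagates features with the interaction matrix only, while a second-order predictor adds a feature-acceleration term. For simplicity this sketch assumes a constant feature acceleration `a_feat`, which is a simplification of the paper's second-order models:

```python
def predict_first_order(s, L, v, dt, steps):
    """Classical predictor: s_{k+1} = s_k + L v dt (interaction matrix only)."""
    s = list(s)
    for _ in range(steps):
        s = [si + dt * sum(Lij * vj for Lij, vj in zip(row, v))
             for si, row in zip(s, L)]
    return s

def predict_second_order(s, L, v, a_feat, dt, steps):
    """Second-order predictor adding a feature-acceleration term (assumed
    constant here): s_{k+1} = s_k + L v dt + 0.5 a dt^2."""
    s = list(s)
    for _ in range(steps):
        s = [si + dt * sum(Lij * vj for Lij, vj in zip(row, v))
             + 0.5 * ai * dt ** 2
             for si, row, ai in zip(s, L, a_feat)]
    return s

# One point feature with an identity interaction matrix and camera velocity v.
s1 = predict_first_order([0.0, 0.0], [[1.0, 0.0], [0.0, 1.0]], [1.0, 2.0],
                         dt=0.1, steps=10)
s2 = predict_second_order([0.0, 0.0], [[1.0, 0.0], [0.0, 1.0]], [1.0, 2.0],
                          a_feat=[1.0, 0.0], dt=0.1, steps=10)
```

Over the prediction horizon the acceleration term shifts the predicted feature trajectory, which is what the MPC-style optimization then exploits.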
|
|
17:30-17:45, Paper TuDT11.5 | |
>Automatic Shape Control of Deformable Wires Based on Model-Free Visual Servoing |
> Video Attachment
|
|
Lagneau, Romain | INSA Rennes |
Krupa, Alexandre | INRIA Rennes - Bretagne Atlantique |
Marchal, Maud | INSA/INRIA |
Keywords: Visual Servoing
Abstract: In this paper, we propose a novel approach to automatically control the 3D shape of deformable wires using robots. Our approach combines a novel visual feature with a novel shape servoing method to enable dual-arm manipulation of deformable wires. The visual feature relies on a geometric B-spline model and Sequential Importance Resampling (SIR) particle filtering to track the 3D deformed shape of a wire over time. The shape servoing method is an adaptive, model-free method that iteratively updates the deformation Jacobian matrix using weighted least-squares minimization with a sliding window and an eigenvalue-based confidence criterion. We performed several experiments on wires with different mechanical properties. The results show that our approach succeeded in controlling the 3D shape of various wires for many different desired deformations while running at interactive rates. We also show that the shape servoing method can handle large deformations by subdividing the task into successive intermediate targets. These promising results pave the way for automatic control of the 3D shapes of deformable wires in many fields, such as catheter insertion in medicine or wire manipulation in industry.
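The paper's Jacobian update uses sliding-window weighted least squares; a simpler, related model-free estimator is the classic Broyden rank-one update, sketched here on an invented 2-D example to illustrate how a deformation Jacobian can be refined online from observed motion increments alone:

```python
def broyden_update(J, dx, dy, step=1.0):
    """Rank-one update of an estimated Jacobian from one observed pair
    (robot increment dx, feature increment dy):
    J <- J + step * ((dy - J dx) dx^T) / (dx^T dx)."""
    denom = sum(v * v for v in dx)
    if denom == 0.0:
        return J  # no motion, nothing to learn
    residual = [dyi - sum(Jij * dxj for Jij, dxj in zip(row, dx))
                for dyi, row in zip(dy, J)]
    return [[Jij + step * ri * dxj / denom for Jij, dxj in zip(row, dx)]
            for row, ri in zip(J, residual)]

# True (unknown) map: dy = [[2, 0], [0, 3]] dx.  Start from an identity guess.
J = [[1.0, 0.0], [0.0, 1.0]]
moves = [([1.0, 0.0], [2.0, 0.0]), ([0.0, 1.0], [0.0, 3.0])]
for dx, dy in moves:
    J = broyden_update(J, dx, dy)
```

A sliding-window least-squares estimator, as in the paper, additionally weights and forgets old samples, but the online-correction principle is the same.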
|
|
17:45-18:00, Paper TuDT11.6 | |
>Fast Model Predictive Image-Based Visual Servoing for Quadrotors |
> Video Attachment
|
|
Roque, Pedro | KTH Royal Institute of Technology, Stockholm, Sweden |
Bin, Elisa | KTH Royal Institute of Technology |
Miraldo, Pedro | Instituto Superior Técnico, Lisboa |
Dimarogonas, Dimos V. | KTH Royal Institute of Technology |
Keywords: Visual Servoing, Aerial Systems: Perception and Autonomy, Visual-Based Navigation
Abstract: This paper studies the problem of Image-Based Visual Servo Control (IBVS) for quadrotors. Although the control of quadrotors has been extensively studied in the last decades, combining the IBVS module with the quadrotor's dynamics is still hard, mainly due to the under-actuation of quadrotor control as opposed to the 6-DoF control outputs generated by IBVS modules. We propose an alternative formulation to solve this problem, in particular using linear Model Predictive Control (MPC), that allows us to relax the UAV's under-actuation issues. Stability guarantees of the proposed scheme are presented. The proposed model is validated with synthetic data and tested on a real UAV setup.
|
|
TuDT12 |
Room T12 |
Motion Control |
Regular session |
Chair: Zhang, Mingming | Southern University of Science and Technology |
Co-Chair: Berman, Spring | Arizona State University |
|
16:30-16:45, Paper TuDT12.1 | |
>Robust Internal Model Control for Motor Systems Based on Sliding Mode Technique and Extended State Observer |
|
Li, Ping | Southern University of Science and Technology |
Guo, Kaiqi | Southern University of Science and Technology |
Sun, Chenyang | Southern University of Science and Technology |
Zhang, Mingming | Southern University of Science and Technology |
Keywords: Motion Control
Abstract: Electric motors have been widely used as the actuators of robot and automation systems. This paper aims at achieving high-precision position control of motor drive systems. For this purpose, a robust control scheme is presented by combining the internal model principle, the sliding mode technique, and the extended state observer (ESO). A PID-type controller is first designed using internal model control (IMC) rules. Since the analysis of the IMC system is performed via a sliding surface, a robust sliding mode control (SMC) law is then synthesized to enhance the system's robustness to uncertainties. However, this robust solution must trade off chattering attenuation against control accuracy. To handle this drawback, a linear ESO is employed to compensate for modeling errors and achieve higher control accuracy. Stability analysis is provided via a Lyapunov-based method, and the superiority of the proposed approach is validated by comparative experiments on a motor drive platform.
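The ESO component the abstract mentions can be sketched in discrete time. This is a generic third-order linear ESO for a double-integrator plant with an unknown constant disturbance, using the common bandwidth gain parameterization; the plant, gains, and bandwidth are illustrative, not the paper's:

```python
def eso_step(z, u, y, dt, b0, beta1, beta2, beta3):
    """One Euler update of a 3rd-order linear ESO for y'' = f + b0*u,
    with z = [position, velocity, total disturbance f] estimates."""
    e = y - z[0]  # innovation: measured position minus estimated position
    return [z[0] + dt * (z[1] + beta1 * e),
            z[1] + dt * (z[2] + b0 * u + beta2 * e),
            z[2] + dt * (beta3 * e)]

# Plant y'' = u + d with an unknown constant disturbance d; the observer
# gains follow the bandwidth parameterization with w = 20 rad/s.
d, dt, w = 2.0, 0.001, 20.0
x, v = 0.0, 0.0
z = [0.0, 0.0, 0.0]
for _ in range(5000):
    z = eso_step(z, u=0.0, y=x, dt=dt, b0=1.0,
                 beta1=3 * w, beta2=3 * w ** 2, beta3=w ** 3)
    x, v = x + dt * v, v + dt * d
```

After a few observer time constants, the third state converges to the total disturbance, which the controller can then cancel to sharpen accuracy without raising the SMC gain.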
|
|
16:45-17:00, Paper TuDT12.2 | |
>Trajectory Tracking of a One-Link Flexible Arm Via Iterative Learning Control |
> Video Attachment
|
|
Pierallini, Michele | Centro Di Ricerca E. Piaggio - Università Di Pisa |
Angelini, Franco | University of Pisa |
Mengacci, Riccardo | Università Di Pisa |
Palleschi, Alessandro | University of Pisa |
Bicchi, Antonio | Università Di Pisa |
Garabini, Manolo | Università Di Pisa |
Keywords: Natural Machine Motion, Flexible Robots, Motion Control
Abstract: Trajectory tracking of flexible link robots is a classical control problem. Historically, the link elasticity was considered something to be removed. Hence, control performance was guaranteed by adopting high-gain feedback loops and, possibly, dynamic compensation, with the result of stiffening the robot's dynamic behavior. Nowadays, robots are pushed more and more towards safe physical interaction with less and less structured environments. Hence, robot design and control have moved towards the deliberate introduction of highly compliant elements in robot bodies, the so-called soft robotics, and towards control approaches that aim to provide tracking performance without substantially changing the robot's dynamic behavior. Following this approach, we present an iterative learning controller that relies mainly on a feedforward component, and hence preserves the robot dynamics, for trajectory tracking of a one-link flexible arm. We provide a condition, based on the system dynamics and similar to the Strong Inertially Coupled property, that ensures the applicability of the proposed control method. Finally, we report simulation and experimental tests to validate the theoretical results.
|
|
17:00-17:15, Paper TuDT12.3 | |
>H-Infinity-Optimal Tracking Controller for Three-Wheeled Omnidirectional Mobile Robots with Uncertain Dynamics |
|
Salimi Lafmejani, Amir | Arizona State University |
Farivarnejad, Hamed | Arizona State University |
Berman, Spring | Arizona State University |
Keywords: Motion Control, Wheeled Robots, Optimization and Optimal Control
Abstract: In this paper, we present an optimal control approach using Linear Matrix Inequalities (LMIs) for trajectory tracking control of a three-wheeled omnidirectional mobile robot in the presence of external disturbances on the robot's actuators and noise in the robot's sensor measurements. First, a state-space representation of the omnidirectional robot dynamics is derived using a point-mass dynamic model. Then, we propose an LMI-based full-state feedback H-infinity-optimal controller for the tracking problem. The robot's tracking performance with the H-infinity-optimal controller is compared to its performance with a classical full-state feedback tracking controller in simulations with circular and bowtie-shaped reference trajectories. In order to evaluate our proposed controller in practice, we also implement the H-infinity-optimal and classical controllers for these reference trajectories on a three-wheeled omnidirectional robot. The H-infinity-optimal controller guarantees stabilization of the robot motion and attenuates the effects of frictional disturbances and measurement noise on the robot's tracking performance. Using the H-infinity-optimal controller, the robot is able to track the reference trajectories with up to a 47.8% and 45.8% decrease in the maximum pose and twist errors, respectively, over a full cycle of the trajectory compared to the classical controller. The simulation and experimental results show that our LMI-based H-infinity-optimal controller is robust to undesired effects of disturbances and noise on the dynamic behavior of the robot during trajectory tracking and can outperform the classical controller in attenuating their effects.
|
|
17:15-17:30, Paper TuDT12.4 | |
>Gain Scheduled Controller Design for Balancing an Autonomous Bicycle |
> Video Attachment
|
|
Wang, Shuai | Tencent |
Cui, Leilei | New York University |
Lai, Jie | Tencent |
Yang, Sicheng | Tencent |
Chen, Xiangyu | TENCENT |
Zheng, Yu | Tencent |
Zhang, Zhengyou | Tencent |
Jiang, Zhong-Ping | New York University |
Keywords: Motion Control, Body Balancing, Underactuated Robots
Abstract: In this paper, the gain scheduling technique is applied to design a balance controller for an autonomous bicycle with an inertia wheel. Previously, two different balance controllers were needed depending on whether the bicycle was stationary or moving, and switching between them could destabilize the bicycle. Our proposed gain scheduled controller can balance the autonomous bicycle in both the stationary and dynamic cases. A physical system is built and experiments are carried out to demonstrate the effectiveness of the gain scheduled controller.
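One common form of gain scheduling — smoothly blending a stationary-balance gain set into a moving-balance gain set as a function of forward speed, so that no hard controller switch occurs — can be sketched as follows; the gain values and speed thresholds are invented for illustration:

```python
def scheduled_gain(v, k_static, k_dynamic, v_low=0.2, v_high=1.0):
    """Blend stationary and moving balance gains by forward speed |v|,
    with a linear ramp between the two operating regimes."""
    speed = abs(v)
    if speed <= v_low:
        return list(k_static)
    if speed >= v_high:
        return list(k_dynamic)
    alpha = (speed - v_low) / (v_high - v_low)  # 0 at v_low, 1 at v_high
    return [(1 - alpha) * ks + alpha * kd
            for ks, kd in zip(k_static, k_dynamic)]

# Hypothetical roll/roll-rate gains for the inertia-wheel balance loop.
k_stationary = [10.0, 2.0]
k_moving = [4.0, 1.0]
k_mid = scheduled_gain(0.6, k_stationary, k_moving)
```

Because the scheduled gain varies continuously with speed, the control law avoids the discontinuity that a hard switch between two controllers would introduce.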
|
|
17:30-17:45, Paper TuDT12.5 | |
>Online System for Dynamic Multi-Contact Motion with Impact Force Based on Contact Wrench Estimation and Current-Based Torque Control |
> Video Attachment
|
|
Fukazawa, Kazuki | The University of Tokyo |
Hiraoka, Naoki | The University of Tokyo |
Kojima, Kunio | The University of Tokyo |
Noda, Shintaro | The University of Tokyo |
Bando, Masahiro | The University of Tokyo |
Okada, Kei | The University of Tokyo |
Inaba, Masayuki | The University of Tokyo |
Keywords: Multi-Contact Whole-Body Motion Planning and Control, Whole-Body Motion Planning and Control
Abstract: Humanoid robots are expected to play a major role at distress and disaster sites. There is a variety of multi-contact locomotion forms other than bipedal walking, such as crawling through tight spaces, climbing onto rubble using the knees and elbows, or jumping in and rolling over obstacles. If such multi-contact locomotion forms can be achieved, robots can reach environments that are currently unreachable and conduct the tasks required there. To achieve this, robots must bring various parts of their bodies into contact with the environment, as humans do. However, it is difficult for parts without 6-axis force sensors to achieve a target force while adapting to the environment against impact forces, and it is also difficult to measure contact wrenches without such sensors. In this paper, by allowing errors in the contact state, we propose an online system for realizing dynamic motions in which impact forces occur on parts of the whole body through contact with the environment. In the proposed system, we apply current-based torque control to the joints so that the robot's whole-body parts adapt to the environment, and we modify the motion in real time to stabilize the ZMP by estimating contact wrenches at contact positions where no force sensors are mounted. In addition, during motion planning, we generate motions that are more feasible for a torque-controlled robot by using evolutionary computation that advances the search with the behavior of torque control. We demonstrate the effectiveness of the proposed system with experimental results of sitting-posture locomotion using a JAXON robot, in which impact forces occur on the backs of the thighs, which have no force sensors.
|
|
17:45-18:00, Paper TuDT12.6 | |
>Enhancement of Force Exertion Capability of a Mobile Manipulator by Kinematic Reconfiguration |
|
Xing, Hongjun | Harbin Institute of Technology |
Torabi, Ali | University of Alberta |
Ding, Liang | Harbin Institute of Technology |
Gao, Haibo | Harbin Institute of Technology |
Deng, Zongquan | Harbin Institute of Technology |
Tavakoli, Mahdi | University of Alberta |
Keywords: Motion Control, Mobile Manipulation, Redundant Robots
Abstract: With the increasing applications of wheeled mobile manipulators (WMMs), new challenges have arisen in terms of executing high-force tasks while maintaining precise trajectory tracking. A WMM, which consists of a manipulator mounted on a mobile base, is often a kinematically redundant robot. The existing WMM configuration optimization methods for redundant WMMs are conducted in the null-space of the entire system. Such methods do not consider the differences between the mobile base and the manipulator, such as their different kinematics, dynamics, or operating conditions. This may inevitably reduce the force exertion capability and degrade the tracking precision of the WMM. To enhance the force exertion capability of a WMM, this paper maximizes the directional manipulability (DM) of the manipulator, with consideration of the joint torque differences, first in Cartesian space and then in the null-space of the robotic system. To maintain precise end-effector trajectory tracking, this paper proposes a novel coordination method between the mobile base and the manipulator via a weighting matrix. The advantages and effectiveness of the proposed approach are demonstrated through experiments.
|
|
TuDT13 |
Room T13 |
Optimization and Optimal Control I |
Regular session |
Chair: Hong, Dennis | UCLA |
|
16:30-16:45, Paper TuDT13.1 | |
>Learning-Based Controller Optimization for Repetitive Robotic Tasks |
|
Li, Xiaocong | A*STAR |
Zhu, Haiyue | Singapore Institute of Manufacturing Technology |
Ma, Jun | National University of Singapore |
Teo, Tat Joo | Singapore Institute of Manufacturing Technology |
Teo, Chek Sing | SIMTech |
Tomizuka, Masayoshi | University of California |
Lee, Tong Heng | National University of Singapore |
Keywords: Optimization and Optimal Control, Model Learning for Control, Motion Control
Abstract: Dynamic control for robotic automation tasks is traditionally designed and optimized with a model-based approach, and the performance relies heavily on accurate system modeling. However, modeling the true dynamics of increasingly complex robotic systems is an extremely challenging task, and it often leaves the automation system operating in a non-optimal condition. Notably, many industrial robotic applications involve repetitive motions and constantly generate a large amount of motion data under this non-optimal condition. These motion data contain rich information, and an intelligent automation system should therefore be able to learn from them to drive the system towards optimal operation in a data-driven manner. In this paper, we propose a learning-based controller optimization algorithm for repetitive robotic tasks. To achieve this, a multi-objective cost function is designed that takes into consideration both trajectory tracking accuracy and smoothness, and a data-driven approach is developed to estimate the gradient and Hessian from the motion data for optimization, without relying on the dynamic model. Experiments on a magnetically levitated nanopositioning system demonstrate the effectiveness and practical appeal of the proposed algorithm in repetitive robotic automation tasks.
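A hedged sketch of the data-driven idea — estimating the gradient of a tracking cost from measured runs alone, without a dynamic model — using symmetric finite differences on a toy quadratic cost; the cost function, gains, and step sizes are illustrative, not the paper's estimator:

```python
def estimate_gradient(cost, params, delta=1e-3):
    """Estimate dJ/dp by symmetric perturbation of each controller
    parameter, using only measured costs of repeated runs."""
    grad = []
    for i in range(len(params)):
        up = list(params); up[i] += delta
        down = list(params); down[i] -= delta
        grad.append((cost(up) - cost(down)) / (2 * delta))
    return grad

def gradient_descent(cost, params, rate=0.1, iters=200):
    """Iteratively improve controller parameters between repetitions."""
    p = list(params)
    for _ in range(iters):
        g = estimate_gradient(cost, p)
        p = [pi - rate * gi for pi, gi in zip(p, g)]
    return p

# Toy tracking-error surrogate with optimum at gains (3, -1).
cost = lambda p: (p[0] - 3.0) ** 2 + (p[1] + 1.0) ** 2
gains = gradient_descent(cost, [0.0, 0.0])
```

In a repetitive task each cost evaluation corresponds to one execution of the motion, so the controller improves from one repetition to the next without ever identifying the plant model.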
|
|
16:45-17:00, Paper TuDT13.2 | |
>Unilateral Constraints for Torque-Based Whole-Body Control |
> Video Attachment
|
|
Muñoz Osorio, Juan David | Leibniz University, KUKA Germany GmbH |
Abdelazim, Abdelrahman | KUKA Germany GmbH |
Allmendinger, Felix | KUKA Deutschland GmbH |
Zimmermann, Uwe E. | KUKA Deutschland GmbH |
Keywords: Optimization and Optimal Control, Collision Avoidance, Physical Human-Robot Interaction
Abstract: This work uses quadratic programming to perform torque control on an industrial collaborative robot while respecting defined constraints. Limits on rotational and translational coordinates are considered at the position, velocity, and acceleration levels. Although the problem of hardware and safety limitations has been considered before, existing solutions usually rely on functions that need proper tuning. The proposed control scheme is shown to work on a real robot, avoiding not only static but also dynamic obstacles without any empirical tuning. The method is also tested under physical human-robot interaction (pHRI), showing smooth behaviour of the robot despite external forces.
|
|
17:00-17:15, Paper TuDT13.3 | |
>Learning High-Level Policies for Model Predictive Control |
|
Song, Yunlong | University of Zurich |
Scaramuzza, Davide | University of Zurich |
Keywords: Optimization and Optimal Control, Reinforcement Learning, Aerial Systems: Mechanics and Control
Abstract: The combination of policy search and deep neural networks holds the promise of automating a variety of decision-making tasks. Model Predictive Control (MPC) provides robust solutions to robot control tasks by making use of a dynamical model of the system and solving an optimization problem online over a short planning horizon. In this work, we combine probabilistic decision-making approaches and the generalization capability of artificial neural networks with this powerful online optimization by learning a deep high-level policy for the MPC (High-MPC). Conditioned on the robot's local observations, the trained neural network policy adaptively selects high-level decision variables for the low-level MPC controller, which then generates optimal control commands for the robot. First, we formulate the search for high-level decision variables for MPC as a policy search problem, specifically, a probabilistic inference problem, which can be solved in closed form. Second, we propose a self-supervised learning algorithm for learning a neural network high-level policy, which is useful for online hyperparameter adaptation in highly dynamic environments. We demonstrate the importance of incorporating online adaptation into autonomous robots by using the proposed method to solve a challenging control problem, where the task is to control a simulated quadrotor to fly through a swinging gate. We show that our approach can handle situations that are difficult for standard MPC.
|
|
17:15-17:30, Paper TuDT13.4 | |
>Squash-Box Feasibility Driven Differential Dynamic Programming |
> Video Attachment
|
|
Marti-Saumell, Josep | Institut De Robòtica I Informàtica Industrial, CSIC-UPC |
Solà, Joan | Institut De Robòtica I Informàtica Industrial |
Mastalli, Carlos | University of Edinburgh |
Santamaria-Navarro, Angel | NASA Jet Propulsion Laboratory, Caltech |
Keywords: Optimization and Optimal Control, Legged Robots, Aerial Systems: Mechanics and Control
Abstract: Recently, Differential Dynamic Programming (DDP) and other similar algorithms have become the solvers of choice when performing non-linear Model Predictive Control (nMPC) with modern robotic devices. The reason is that they have a lower computational cost per iteration than off-the-shelf Non-Linear Programming (NLP) solvers, which enables their online operation. However, they cannot handle constraints and are known to have poor convergence capabilities. In this paper, we propose a method to solve the optimal control problem with control bounds through a squashing function (i.e., a sigmoid, which is bounded by construction). It has been shown that naive use of squashing functions damages the convergence rate. To tackle this, we first propose adding a quadratic barrier that avoids the difficulty of the plateau produced by the sigmoid. Second, we add an outer loop that adapts both the sigmoid and the barrier, making the optimal control problem with the squashing function converge to the original control-bounded problem. To validate our method, we present simulation results for different types of platforms, including a multi-rotor, a biped, a quadruped, and a humanoid robot.
|
|
17:45-18:00, Paper TuDT13.6 | |
>Optimal Linearization Via Quadratic Programming |
|
Shen, Junjie | UCLA |
Hong, Dennis | UCLA |
Keywords: Optimization and Optimal Control, Performance Evaluation and Benchmarking
Abstract: The technique of linearizing nonlinear systems around an operating point has been widely used to analyze and synthesize system behavior within a certain operating range. Conventional linearization methods include the analytical linearization (AL) method using the Jacobian matrix, whose result usually holds only in a sufficiently small region, and the numerical linearization (NL) method based on small perturbations, whose accuracy is usually not guaranteed. In this letter, we propose an optimal linearization method via quadratic programming (OLQP). We start with uniform data sampling within a neighborhood of the operating point based on the nonlinear ordinary differential equation (ODE). We then find the best linear model that fits these sample points with a QP formulation. The OLQP solution is derived in closed form with proven convergence to the AL solution. Two nonlinear systems are investigated and the results of the linearization methods are compared, showing that the proposed OLQP method strikes a good balance between model accuracy and computational complexity. Moreover, the OLQP method offers additional options in controller design through the tuning of its parameters.
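The OLQP recipe — uniform sampling around the operating point followed by a least-squares (QP) fit — reduces, in the scalar case, to closed-form linear regression. The sketch below (an illustration under that scalar simplification, not the paper's general formulation) shows the fitted slope approaching the analytical Jacobian slope as the sampling radius shrinks:

```python
import math

def fit_linear_model(f, x0, radius, n=21):
    """Fit f(x) ~ a*(x - x0) + b over n uniform samples in
    [x0 - radius, x0 + radius] by least squares (the closed-form
    solution of the 1-D quadratic program)."""
    xs = [x0 - radius + 2 * radius * i / (n - 1) for i in range(n)]
    ys = [f(x) for x in xs]
    dxs = [x - x0 for x in xs]
    # Normal equations for the two unknowns (a, b).
    sxx = sum(d * d for d in dxs)
    sx = sum(dxs)
    sy = sum(ys)
    sxy = sum(d * y for d, y in zip(dxs, ys))
    denom = n * sxx - sx * sx
    a = (n * sxy - sx * sy) / denom
    b = (sxx * sy - sx * sxy) / denom
    return a, b

# For f = sin at x0 = 0, the AL (Jacobian) slope is cos(0) = 1.  A small
# radius recovers it; a wide radius trades pointwise accuracy for a
# better fit over the whole operating region.
a_small, b_small = fit_linear_model(math.sin, 0.0, 0.01)
a_large, _ = fit_linear_model(math.sin, 0.0, 1.0)
```

This mirrors the convergence property stated in the abstract: as the sampling neighborhood shrinks, the OLQP fit converges to the AL solution, while larger neighborhoods yield models tailored to the intended operating range.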
|
|
TuDT14 |
Room T14 |
Optimization and Optimal Control II |
Regular session |
Chair: Hovakimyan, Naira | University of Illinois at Urbana-Champaign |
Co-Chair: Bajcinca, Naim | TU Kaiserslautern |
|
16:30-16:45, Paper TuDT14.1 | |
>Model Predictive Control for a Tendon-Driven Surgical Robot with Safety Constraints in Kinematics and Dynamics |
> Video Attachment
|
|
Cursi, Francesco | Imperial College London |
Modugno, Valerio | Sapienza University of Rome |
Kormushev, Petar | Imperial College London |
Keywords: Optimization and Optimal Control, Robot Safety, Surgical Robotics: Planning
Abstract: In fields such as minimally invasive surgery, effective control strategies are needed to guarantee safety and accuracy of the surgical task. Mechanical designs and actuation schemes have inevitable limitations such as backlash and joint limits. Moreover, surgical robots need to operate in narrow pathways, which may give rise to additional environmental constraints. Therefore, the control strategies must be capable of satisfying the desired motion trajectories and the imposed constraints. Model Predictive Control (MPC) has proven effective for this purpose, allowing an optimal control problem to be solved while taking into account the evolution of the system states, cost function, and constraints over time. The high nonlinearities in tendon-driven systems, adopted in many surgical robots, are difficult to model analytically. In this work, we use a model learning approach for the dynamics of tendon-driven robots. The dynamic model is then employed to impose constraints on the torques of the robot under consideration and solve an optimal constrained control problem for trajectory tracking by using MPC. To assess the capabilities of the proposed framework, both simulated and real world experiments have been conducted.
|
|
16:45-17:00, Paper TuDT14.2 | |
>L1-Adaptive MPPI Architecture for Robust and Agile Control of Multirotors |
> Video Attachment
|
|
Pravitra, Jintasit | Georgia Institute of Technology |
Ackerman, Kasey | University of Illinois at Urbana-Champaign |
Cao, Chengyu | University of Connecticut |
Hovakimyan, Naira | University of Illinois at Urbana-Champaign |
Theodorou, Evangelos | Georgia Institute of Technology |
Keywords: Optimization and Optimal Control, Robust/Adaptive Control of Robotic Systems, Aerial Systems: Mechanics and Control
Abstract: This paper presents a multirotor control architecture, where Model Predictive Path Integral Control (MPPI) and L1 adaptive control are combined to achieve both fast model predictive trajectory planning and robust trajectory tracking. MPPI provides a framework to solve nonlinear MPC with complex cost functions in real-time. However, it often lacks robustness, especially when the simulated dynamics are different from the true dynamics. We show that the L1 adaptive controller robustifies the architecture, allowing the overall system to behave similarly to the nominal system simulated with MPPI. The architecture is validated in a simulated multirotor racing environment.
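The MPPI half of the architecture can be sketched as a single importance-weighted control update on a toy system; the dynamics, cost, and hyperparameters below are illustrative, and the L1 adaptive augmentation is omitted:

```python
import numpy as np

def mppi_step(x0, u_nom, dynamics, cost, n_samples=256, sigma=0.5, lam=1.0, seed=0):
    """One MPPI update: perturb the nominal control sequence, roll out the
    (simulated) dynamics, and re-weight by exponentiated negative cost."""
    rng = np.random.default_rng(seed)
    H = u_nom.shape[0]
    eps = rng.normal(0.0, sigma, size=(n_samples, H))
    costs = np.empty(n_samples)
    for k in range(n_samples):
        x, c = x0, 0.0
        for t in range(H):
            x = dynamics(x, u_nom[t] + eps[k, t])
            c += cost(x)
        costs[k] = c
    w = np.exp(-(costs - costs.min()) / lam)
    w /= w.sum()
    return u_nom + w @ eps  # importance-weighted control update

# toy 1-D example: x_{t+1} = x_t + 0.1 u, drive x toward 1
dyn = lambda x, u: x + 0.1 * u
cst = lambda x: (x - 1.0) ** 2
u = mppi_step(0.0, np.zeros(5), dyn, cst)
```

A robustifying inner-loop controller (the L1 component) would then track the trajectory this update implies, compensating for model mismatch.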
|
|
17:00-17:15, Paper TuDT14.3 | |
>Learning-Based Distributionally Robust Motion Control with Gaussian Processes |
> Video Attachment
|
|
Hakobyan, Astghik | Seoul National University |
Yang, Insoon | Seoul National University |
Keywords: Optimization and Optimal Control, Robot Safety, Motion Control
Abstract: Safety is a critical issue in learning-based robotic and autonomous systems as learned information about their environments is often unreliable and inaccurate. In this paper, we propose a risk-aware motion control tool that is robust against errors in learned distributional information about obstacles moving with unknown dynamics. The salient feature of our model predictive control (MPC) method is its capability of limiting the risk of unsafety even when the true distribution deviates from the distribution estimated by Gaussian process (GP) regression, within an ambiguity set. Unfortunately, the distributionally robust MPC problem with GP is intractable because the worst-case risk constraint involves an infinite-dimensional optimization problem over the ambiguity set. To remove the infinite-dimensionality issue, we develop a systematic reformulation approach exploiting modern distributionally robust optimization techniques. The performance and utility of our method are demonstrated through simulations using a nonlinear car-like vehicle model for autonomous driving.
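The GP regression component of the pipeline above can be sketched in a few lines; the RBF kernel, hyperparameters, and 1-D toy data are assumptions, and the distributionally robust MPC reformulation itself is not shown:

```python
import numpy as np

def gp_predict(X, y, Xs, length=0.3, var=1.0, noise=1e-2):
    """Vanilla GP regression with an RBF kernel: posterior mean and variance
    of the learned (e.g. obstacle-motion) function at test points Xs."""
    def k(A, B):
        d = A[:, None] - B[None, :]
        return var * np.exp(-0.5 * (d / length) ** 2)
    K = k(X, X) + noise * np.eye(len(X))   # noisy training covariance
    Ks = k(Xs, X)
    mean = Ks @ np.linalg.solve(K, y)
    varp = np.diag(k(Xs, Xs) - Ks @ np.linalg.solve(K, Ks.T))
    return mean, varp

X = np.linspace(0.0, 1.0, 8)
y = np.sin(2 * np.pi * X)                  # toy observations of obstacle motion
mean, varp = gp_predict(X, y, np.array([0.5]))
```

In the paper, the ambiguity set for the worst-case risk constraint would be centered on a distribution estimated this way; here only the point prediction is illustrated.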
|
|
17:15-17:30, Paper TuDT14.4 | |
>Synchronous Minimum-Time Cooperative Manipulation Using Distributed Model Predictive Control |
> Video Attachment
|
|
Tika, Argtim | Technische Universität Kaiserslautern |
Bajcinca, Naim | TU Kaiserslautern |
Keywords: Optimization and Optimal Control, Planning, Scheduling and Coordination, Distributed Robot Systems
Abstract: A hierarchical algorithm involving two-layer optimization-based control policies with varying degrees of abstraction is proposed, including upper layer task scheduling and lower layer local path planning. A scenario with two robot arms performing cooperative pick-and-place tasks for moving objects is specifically addressed. The main focus of the paper lies on the bottom layer of the hierarchical control scheme, more precisely on the online generation of the synchronous robot trajectories using distributed minimum-time model predictive control (DMPC) algorithms. To this end, we introduce a decelerating coupling term in the cost functions of the individual distributed optimization algorithms to synchronize the overall robot motion. The performance of the algorithm is illustrated by extensive simulations with high-fidelity robot dynamic models.
|
|
17:30-17:45, Paper TuDT14.5 | |
>Finite-Horizon LQR Control of Quadrotors on SE_2(3) |
> Video Attachment
|
|
Cohen, Mitchell | McGill University |
Abdulrahim, Khairi | Universiti Sains Islam Malaysia |
Forbes, James Richard | McGill University |
Keywords: Optimization and Optimal Control
Abstract: This paper considers optimal control of a quadrotor unmanned aerial vehicle (UAV) using the discrete-time, finite-horizon, linear quadratic regulator (LQR). The state of a quadrotor UAV is represented as an element of the matrix Lie group of double direct isometries, SE_2(3). The nonlinear system is linearized using a left-invariant error about a reference trajectory, leading to an optimal gain sequence that can be calculated offline. The reference trajectory is calculated using the differentially flat properties of the quadrotor. Monte-Carlo simulations demonstrate the robustness of the proposed control scheme to parametric uncertainty, state-estimation error, and initial error. Additionally, when compared to an LQR controller that uses a conventional error definition, the proposed controller demonstrates better performance when initial errors are large.
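The offline gain sequence mentioned above comes from a standard backward Riccati recursion; the sketch below uses a hypothetical double-integrator in place of the linearized left-invariant SE_2(3) error dynamics, which are more involved:

```python
import numpy as np

def finite_horizon_lqr(A, B, Q, R, Qf, N):
    """Backward Riccati recursion for the discrete-time finite-horizon LQR;
    returns the offline gain sequence K_0, ..., K_{N-1}."""
    P = Qf
    gains = []
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]  # computed backward, applied forward in time

# illustrative double-integrator error dynamics (NOT the SE_2(3) linearization)
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
K = finite_horizon_lqr(A, B, np.eye(2), np.eye(1), 10 * np.eye(2), N=50)
```

Applying u_t = -K_t x_t then drives the (linearized) error to zero over the horizon.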
|
|
17:45-18:00, Paper TuDT14.6 | |
>Safe Optimal Control under Parametric Uncertainties |
> Video Attachment
|
|
Makkapati, Venkata Ramana | Georgia Institute of Technology |
Sarabu, Hemanth | Georgia Tech |
Comandur, Vinodhini | Georgia Institute of Technology |
Tsiotras, Panagiotis | Georgia Tech |
Hutchinson, Seth | Georgia Institute of Technology |
Keywords: Optimization and Optimal Control, Robot Safety, Collision Avoidance
Abstract: We address the issue of safe optimal path planning under parametric uncertainties using a novel regularizer that allows trading off optimality with safety. The proposed regularizer leverages the notion that collisions may be modeled as constraint violations in an optimal control setting in order to produce open-loop trajectories with reduced risk of collisions. The risk of constraint violation is evaluated using a state-dependent relevance function and first-order variations in the constraint function with respect to parametric variations. The approach is generic and can be adapted to any optimal control formulation that deals with constraints under parametric uncertainty. Simulations using a holonomic robot avoiding multiple dynamic obstacles with uncertain velocities are used to demonstrate the effectiveness of the proposed approach. Finally, we introduce the car vs. train problem to emphasize the dependence of the resultant risk aversion behavior on the form of the constraint function used to derive the regularizer.
|
|
TuDT15 |
Room T15 |
Robust/Adaptive Control of Robotic Systems I |
Regular session |
Chair: Hereid, Ayonga | Ohio State University |
Co-Chair: Roy, Spandan | International Institute of Information Technology, Hyderabad (IIIT-H) |
|
16:30-16:45, Paper TuDT15.1 | |
>Online Gain Setting Method for Path Tracking Using CMA-ES: Application to Off-Road Mobile Robot Control |
|
Hill, Ashley William David | CEA |
Laneurit, Jean | Irstea |
Lenain, Roland | Irstea |
Lucet, Eric | CEA Tech |
Keywords: Robust/Adaptive Control of Robotic Systems, AI-Based Methods, Neural and Fuzzy Control
Abstract: This paper proposes a new approach for online control law gains adaptation, through the use of neural networks and the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) algorithm, in order to optimize the behavior of the robot with respect to an objective function. The neural network considered takes as input the current observed state as well as its uncertainty, and provides as output the control law gains. It is trained, using the CMA-ES algorithm, on a simulator reproducing the vehicle dynamics. Then, it is tested in real conditions on an agricultural mobile robot at different speeds. The transferability of this method from simulation to a real system is demonstrated, as well as its robustness to environmental changes, such as GPS signal degradation or ground variation. As a result, path following errors are reduced, while ensuring tracking stability.
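A rough sketch of the gain-tuning loop described above, substituting a simplified isotropic evolution strategy for full CMA-ES and a one-state toy path-tracking simulator for the vehicle dynamics model (all values illustrative):

```python
import numpy as np

def simulate(gain, x0=1.0, steps=50, dt=0.1):
    """Toy path-following simulator: proportional steering toward the path;
    returns accumulated squared lateral error (lower is better)."""
    x, cost = x0, 0.0
    for _ in range(steps):
        x += -gain * x * dt
        cost += x * x * dt
    return cost

def es_tune(objective, mean, sigma, iters=30, pop=16, seed=0):
    """Simplified isotropic ES standing in for CMA-ES: sample candidate
    gains, rank them on the simulator, recombine the elite half."""
    rng = np.random.default_rng(seed)
    for _ in range(iters):
        cand = mean + sigma * rng.normal(size=pop)
        cand = np.maximum(cand, 1e-3)           # keep gains positive
        order = np.argsort([objective(c) for c in cand])
        mean = cand[order[: pop // 2]].mean()   # move toward the best half
        sigma *= 0.95                           # shrink the search step
    return mean

gain = es_tune(simulate, mean=0.5, sigma=0.5)
```

In the paper this optimization trains a neural network mapping the observed state and its uncertainty to gains; here the gain itself is optimized directly for brevity.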
|
|
16:45-17:00, Paper TuDT15.2 | |
>Velocity Regulation of 3D Bipedal Walking Robots with Uncertain Dynamics through Adaptive Neural Network Controller |
> Video Attachment
|
|
Castillo, Guillermo | The Ohio State University |
Weng, Bowen | The Ohio State University |
Stewart, Terrence C | University of Waterloo |
Zhang, Wei | Southern University of Science and Technology |
Hereid, Ayonga | Ohio State University |
Keywords: Robust/Adaptive Control of Robotic Systems, Humanoid and Bipedal Locomotion, Model Learning for Control
Abstract: This paper presents a neural-network based adaptive feedback control structure to regulate the velocity of 3D bipedal robots under dynamics uncertainties. Existing Hybrid Zero Dynamics (HZD)-based controllers regulate velocity through the implementation of heuristic regulators that do not consider model and environmental uncertainties, which may significantly affect the tracking performance of the controllers. In this paper, we address the uncertainties in the robot dynamics from the perspective of the reduced dimensional representation of virtual constraints and propose the integration of an adaptive neural network-based controller to regulate the robot velocity in the presence of model parameter uncertainties. The proposed approach yields improved tracking performance under dynamics uncertainties. The shallow adaptive neural network used in this paper does not require training a priori and has the potential to be implemented on a real-time robotic controller. A comparative simulation study of a 3D Cassie robot is presented to illustrate the performance of the proposed approach under various scenarios.
|
|
17:00-17:15, Paper TuDT15.3 | |
>Aerial Transportation of Unknown Payloads: Adaptive Path Tracking for Quadrotors |
|
Sankaranarayanan, Viswa Narayanan | International Institute of Information Technology, Hyderabad (II |
Roy, Spandan | International Institute of Information Technology, Hyderabad (II |
Baldi, Simone | TU Delft |
Keywords: Robust/Adaptive Control of Robotic Systems, Aerial Systems: Mechanics and Control
Abstract: With the advent of intelligent transport, quadrotors are becoming an attractive aerial transport solution during emergency evacuations, construction works, etc. During such operations, dynamic variations in (possibly unknown) payload and unknown external disturbances cause considerable control challenges for path tracking algorithms. In fact, the state-dependent nature of the resulting uncertainties makes state-of-the-art adaptive control solutions ineffective against such uncertainties that can be completely unknown and possibly unbounded a priori. This paper, to the best of the knowledge of the authors, proposes the first adaptive control solution for quadrotors which does not require any a priori knowledge of the parameters of quadrotor dynamics as well as of external disturbances. The stability of the closed-loop system is studied analytically via Lyapunov theory and the effectiveness of the proposed solution is verified on a realistic simulator.
|
|
17:15-17:30, Paper TuDT15.4 | |
>Robust Force Tracking Impedance Control of an Ultrasonic Motor-Actuated End-Effector in a Soft Environment |
> Video Attachment
|
|
Liang, Wenyu | Institute for Infocomm Research, A*STAR |
Feng, Zhao | Wuhan University |
Wu, Yan | A*STAR Institute for Infocomm Research |
Gao, Junli | Guangdong University of Technology |
Ren, Qinyuan | Zhejiang University |
Lee, Tong Heng | National University of Singapore |
Keywords: Robust/Adaptive Control of Robotic Systems, Compliance and Impedance Control, Medical Robots and Systems
Abstract: Robotic systems are increasingly required not only to generate precise motions to complete their tasks but also to handle the interactions with the environment or human. Significantly, soft interaction brings great challenges on the force control due to the nonlinear, viscoelastic and inhomogeneous properties of the soft environment. In this paper, a robust impedance control scheme utilizing integral backstepping technology and integral terminal sliding mode control is proposed to achieve force tracking for an ultrasonic motor-actuated end-effector in a soft environment. In particular, the steady-state performance of the target impedance while in contact with soft environment is derived and analyzed with the nonlinear Hunt-Crossley model. Finally, the dynamic force tracking performance of the proposed control scheme is verified via several experiments.
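The nonlinear Hunt-Crossley model used in the steady-state contact analysis can be written down directly; the stiffness, damping, and exponent values below are illustrative assumptions, not identified soft-tissue parameters:

```python
def hunt_crossley_force(x, xdot, k=1000.0, lam=0.5, n=1.5):
    """Nonlinear Hunt-Crossley contact force F = k x^n + lam x^n xdot.
    Unlike the linear Kelvin-Voigt model, the damping term scales with
    penetration, so the force vanishes smoothly at zero penetration."""
    x = max(x, 0.0)  # no tension: force only during penetration
    return k * x ** n + lam * x ** n * xdot

# force grows with penetration depth and with approach velocity
f_static = hunt_crossley_force(0.01, 0.0)
f_moving = hunt_crossley_force(0.01, 0.05)
```

This is the environment model against which the target impedance's steady-state force tracking behavior is analyzed in the paper.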
|
|
TuDT16 |
Room T16 |
Robust/Adaptive Control of Robotic Systems II |
Regular session |
Chair: Matni, Nikolai | University of Pennsylvania |
Co-Chair: Clement, Benoit | ENSTA-Bretagne |
|
16:30-16:45, Paper TuDT16.1 | |
>A Horse Inspired Eight-Wheel Unmanned Ground Vehicle with Four-Swing Arms |
> Video Attachment
|
|
He, Miaolei | Central South University |
Jilin, He | Central South University |
Changji, Ren | Central South University |
Qinghua, He | Central South University |
Kang, Wu | Sunward Intelligent Equipment Co. Ltd |
Keywords: Robust/Adaptive Control of Robotic Systems
Abstract: Rigid-terrain unmanned ground vehicles (UGVs) can operate in field environments owing to their advanced adaptive ability. This paper presents a novel horse-inspired rigid-terrain eight-wheel vehicle with four swing arms. The unmanned ground vehicle is driven by distributed hydraulic motors. By coordinating its four swing arms and eight wheels, the vehicle can climb over obstacles on complex ground much as a horse does. The mechanism, bionic obstacle-surmounting algorithm, and operation strategy are analyzed in detail. The posture planning of the wheel arms and the kinematic model of the UGV are studied. Automatic Dynamic Analysis of Mechanical Systems (ADAMS) simulations and prototype experiments are carried out to verify the analysis and strategy. The results show that this type of unmanned ground vehicle performs well at crossing obstacles and running on rigid terrain.
|
|
16:45-17:00, Paper TuDT16.2 | |
>Non-Linear Control under State Constraints with Validated Trajectories for a Mobile Robot Towing a Trailer |
> Video Attachment
|
|
Tillet, Joris | ENSTA Bretagne |
Jaulin, Luc | ENSTA-Bretagne |
Le Bars, Fabrice | ENSTA Bretagne |
Keywords: Robust/Adaptive Control of Robotic Systems, Nonholonomic Motion Planning, Collision Avoidance
Abstract: In this paper, we propose a set-inversion approach to validate the controller of a nonlinear system that should satisfy some state constraints. We introduce the notion of follow set which corresponds to the set of all output vectors such that the desired dynamics can be followed without violating the state-constraints. This follow set can then be used to choose feasible trajectories that a mobile robot will be able to follow. An illustrative example with a robot towing a trailer is presented. This example is motivated by the safe control of a boat towing a marine magnetic sensor to find wrecks.
|
|
17:00-17:15, Paper TuDT16.3 | |
>Robust, Perception Based Control with Quadrotors |
> Video Attachment
|
|
Jarin-Lipschitz, Laura | UPenn |
Li, Rebecca | University of Pennsylvania |
Nguyen, Ty | University of Pennsylvania |
Kumar, Vijay | University of Pennsylvania |
Matni, Nikolai | University of Pennsylvania |
Keywords: Robust/Adaptive Control of Robotic Systems, Model Learning for Control, Aerial Systems: Perception and Autonomy
Abstract: Traditionally, controllers and state estimators in robotic systems are designed independently. Controllers are often designed assuming perfect state estimation. However, state estimation methods such as Visual Inertial Odometry (VIO) drift over time and can cause the system to misbehave. While state estimation error can be corrected with the aid of GPS or motion capture, these complementary sensors are not always available or reliable. Recent work has shown that this issue can be dealt with by synthesizing robust controllers using a data-driven characterization of the perception error, and can bound the system's response to state estimation error using a robustness constraint. We investigate the application of this robust perception-based approach to a quadrotor model using VIO for state estimation and demonstrate the benefits and drawbacks of using this technique in simulation and hardware. Additionally, to make tuning easier, we introduce a new cost function to use in the control synthesis which allows one to take an existing controller and "robustify" it. To the best of our knowledge, this is the first robust perception-based controller implemented in real hardware, as well as one utilizing a data-driven perception model. We believe this is an important step towards safe, robust robots that explicitly account for the inherent dependence between perception and control.
|
|
17:15-17:30, Paper TuDT16.4 | |
>Robust Control Synthesis and Verification for Wire-Borne Underactuated Brachiating Robots Using Sum-Of-Squares Optimization |
> Video Attachment
|
|
Farzan, Siavash | Georgia Institute of Technology |
Hu, Ai-Ping | Georgia Tech Research Institute |
Bick, Michael | Georgia Institute of Technology |
Rogers, Jonathan | Georgia Institute of Technology |
Keywords: Underactuated Robots, Robust/Adaptive Control of Robotic Systems, Dynamics
Abstract: Control of wire-borne underactuated brachiating robots requires a robust feedback control design that can deal with dynamic uncertainties, actuator constraints and unmeasurable states. In this paper, we develop a robust feedback control for brachiating on flexible cables, building on previous work on optimal trajectory generation and time-varying LQR controller design. We propose a novel simplified model for approximation of the flexible cable dynamics, which enables inclusion of parametric model uncertainties in the system. We then use semidefinite programming (SDP) and sum-of-squares (SOS) optimization to synthesize a time-varying feedback control with formal robustness guarantees to account for model uncertainties and unmeasurable states in the system. Through simulation, hardware experiments and comparison with a time-varying LQR controller, it is shown that the proposed robust controller results in relatively large robust backward reachable sets and is able to reliably track a pre-generated optimal trajectory and achieve the desired brachiating motion in the presence of parametric model uncertainties, actuator limits, and unobservable states.
|
|
TuDT17 |
Room T17 |
Bio-Inspired Control |
Regular session |
Chair: Cheng, Bo | Pennsylvania State University |
Co-Chair: Ma, Shugen | Ritsumeikan University |
|
16:30-16:45, Paper TuDT17.1 | |
>A Bayesian-Based Controller for Snake Robot Locomotion in Unstructured Environments |
|
Jia, Yuanyuan | Ritsumeikan University |
Ma, Shugen | Ritsumeikan University |
Keywords: Biologically-Inspired Robots, Redundant Robots, Probability and Statistical Methods
Abstract: This paper presents a novel Bayesian-based controller for snake robots in cluttered environments. It extends conventional shape-based compliant control into a statistical setting, providing an explicit mathematical formulation with a Bayesian network. A sequential density propagation rule is derived by introducing several probability densities in a unified framework. Specifically, two input influence densities are proposed to model the cumulative effect of the various external forces that the snake robot undergoes. Moreover, the measurement likelihood model is exploited to give a more robust closed-loop feedback. Overall, the proposed approach provides an innovative way to handle challenging snake robot control tasks in complicated environments. Experimental results are demonstrated on both simulated and real-world data.
|
|
16:45-17:00, Paper TuDT17.2 | |
>Learning to Locomote with Artificial Neural-Network and CPG-Based Control in a Soft Snake Robot |
> Video Attachment
|
|
Liu, Xuan | Worcester Polytechnic Institute |
Gasoto, Renato | Worcester Polytechnic Institute, NVIDIA |
Jiang, Ziyi | Xidian University |
Fu, Jie | Worcester Polytechnic Institute |
Onal, Cagdas | WPI |
Keywords: Modeling, Control, and Learning for Soft Robots, Biomimetics, Neurorobotics
Abstract: In this paper, we present a new locomotion control method for soft robot snakes. Inspired by biological snakes, our control architecture is composed of two key modules: A deep reinforcement learning (RL) module for achieving adaptive goal-tracking behaviors with changing goals, and a central pattern generator (CPG) system with Matsuoka oscillators for generating stable and diverse locomotion patterns. The two modules are interconnected into a closed-loop system: The RL module, analogizing the locomotion region located in the midbrain of vertebrate animals, regulates the input to the CPG system given state feedback from the robot. The output of the CPG system is then translated into pressure inputs to pneumatic actuators of the soft snake robot. Based on the fact that the oscillation frequency and wave amplitude of the Matsuoka oscillator can be independently controlled under different time scales, we further adapt the option-critic framework to improve the learning performance measured by optimality and data efficiency. The performance of the proposed controller is experimentally validated with both simulated and real soft snake robots.
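A two-neuron Matsuoka oscillator of the kind used in the CPG module above can be sketched as follows; the time constants and gains are common textbook-style values chosen to produce oscillation, not the paper's tuned parameters:

```python
import numpy as np

def matsuoka(steps=20000, dt=0.001, tau=0.1, T=0.2, beta=2.5, a=3.0, u=1.0):
    """Euler-integrate a two-neuron Matsuoka oscillator (mutual inhibition a,
    self-adaptation beta, tonic input u); returns the output y1 - y2."""
    x = np.array([0.1, 0.0])   # membrane states (asymmetric start breaks symmetry)
    v = np.zeros(2)            # adaptation states
    out = np.empty(steps)
    for k in range(steps):
        y = np.maximum(x, 0.0)                      # rectified firing rates
        xdot = (-x - beta * v - a * y[::-1] + u) / tau
        vdot = (-v + y) / T
        x += dt * xdot
        v += dt * vdot
        out[k] = y[0] - y[1]
    return out

signal = matsuoka()
```

In the paper, the RL module modulates inputs to such oscillators, exploiting the fact that frequency and amplitude can be adjusted independently on different time scales.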
|
|
17:00-17:15, Paper TuDT17.3 | |
>The Omega Turn: A Biologically-Inspired Turning Strategy for Elongated Limbless Robots |
> Video Attachment
|
|
Wang, Tianyu | Carnegie Mellon University |
Zhong, Baxi | Georgia Institute of Technology |
Diaz, Kelimar | Georgia Institute of Technology |
Whitman, Julian | Carnegie Mellon University |
Lu, Hang | Georgia Institute of Technology |
Travers, Matthew | Carnegie Mellon University |
Goldman, Daniel | Georgia Institute of Technology |
Choset, Howie | Carnegie Mellon University |
Keywords: Biologically-Inspired Robots, Search and Rescue Robots, Nonholonomic Motion Planning
Abstract: Snake robots have the potential to locomote through tightly packed spaces, but turning effectively within unmodelled and unsensed environments remains challenging. Inspired by a behavior observed in the tiny nematode worm C. elegans, we propose a novel in-place turning gait for elongated limbless robots. To simplify the control of the robots' many internal degrees-of-freedom, we introduce a biologically-inspired template in which two co-planar traveling waves are superposed to produce an in-plane turning motion, the omega turn. The omega turn gait arises from modulating the wavelengths and amplitudes of the two traveling waves. We experimentally test the omega turn on a snake robot, and show that this turning gait outperforms previous turning gaits: it results in a larger angular displacement and a smaller area swept by the body over a gait cycle, allowing the robot to turn in highly confined spaces.
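The two-wave template above can be sketched directly: joint angles are the superposition of two co-planar traveling waves, and the turn is shaped by modulating their amplitudes and spatial frequencies (the values below are illustrative, not the paper's):

```python
import numpy as np

def omega_turn_angles(t, n_joints=12, A1=0.6, A2=0.3, k1=1.0, k2=2.0,
                      omega=2 * np.pi, phi=0.5):
    """Joint angles from two superposed co-planar traveling waves along the
    body; A_i, k_i, and phi are illustrative modulation parameters."""
    s = np.linspace(0.0, 1.0, n_joints)  # normalized position along the body
    wave1 = A1 * np.sin(omega * t - 2 * np.pi * k1 * s)
    wave2 = A2 * np.sin(omega * t - 2 * np.pi * k2 * s + phi)
    return wave1 + wave2

angles = omega_turn_angles(0.0)
```

Setting A2 = 0 recovers an ordinary single-wave serpenoid gait; nonzero A2 with a different spatial frequency biases the body into the in-place turning shape.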
|
|
17:15-17:30, Paper TuDT17.4 | |
>Bio-Inspired Inverted Landing Strategy in a Small Aerial Robot Using Policy Gradient |
|
Liu, Pan | Pennsylvania State University |
Geng, Junyi | The Pennsylvania State University |
Li, Yixian | Penn State University |
Cao, Yanran | Penn State University |
Bayiz, Yagiz Efe | Pennsylvania State University |
Langelaan, Jack W. | Penn State University |
Cheng, Bo | Pennsylvania State University |
Keywords: Biologically-Inspired Robots, Aerial Systems: Mechanics and Control, Reinforcement Learning
Abstract: Landing upside down on a ceiling is challenging as it requires a flier to invert its body and land against gravity, a process that demands stringent spatiotemporal coordination of body translational and rotational motion. Although such an aerobatic feat is routinely performed by biological fliers such as flies, it has not yet been achieved in aerial robots using onboard sensors. This work describes the development of a bio-inspired inverted landing strategy using the computationally efficient Relative Retinal Expansion Velocity (RREV) as a visual cue. The landing strategy consists of a sequence of two motions, i.e., an upward acceleration and a rapid angular maneuver. A policy search algorithm is applied to optimize the landing strategy and improve its robustness by learning the transition timing between the two motions and the magnitude of the target body angular velocity. Simulation results show that the aerial robot is able to achieve robust inverted landing, and that it tends to exploit its maximal maneuverability. In addition to the computational aspects of the landing strategy, the robustness of landing also depends significantly on the mechanical design of the landing gear, the upward velocity at the start of body rotation, and the timing of rotor shutdown.
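The RREV cue is the inverse of time-to-contact, and the transition trigger can be sketched as below; the threshold value is a hypothetical placeholder for the quantity learned by the policy search:

```python
def rrev(distance, closing_speed):
    """Relative Retinal Expansion Velocity: closing speed over remaining
    distance (the inverse of time-to-contact), obtainable from the rate of
    visual expansion without an explicit distance estimate."""
    return closing_speed / distance

def should_flip(distance, closing_speed, threshold=5.0):
    """Trigger the rapid angular maneuver once RREV crosses a threshold.
    threshold=5.0 is an illustrative value, not the paper's learned policy."""
    return rrev(distance, closing_speed) >= threshold
```

As the robot accelerates upward toward the ceiling, RREV grows, and the crossing of the threshold marks the transition between the two motions in the landing sequence.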
|
|
17:30-17:45, Paper TuDT17.5 | |
>A Bio-Inspired Framework for Joint Angle Estimation from Non-Collocated Sensors in Tendon-Driven Systems |
> Video Attachment
|
|
Hagen, Daniel | University of Southern California |
Marjaninejad, Ali | University of Southern California |
Valero-Cuevas, Francisco, J | University of Southern California |
Keywords: Biologically-Inspired Robots, Sensorimotor Learning, Sensor Fusion
Abstract: Estimates of limb posture are critical for the control of robotic systems. This is generally accomplished by utilizing on-location joint angle encoders which may complicate the design, increase limb inertia, and add noise to the system. Conversely, some innovative or smaller robotic morphologies can benefit from non-collocated sensors when encoder size becomes prohibitively larger or the joints are less accessible or subject to damage (e.g., distal joints of a robotic hand or foot sensors subject to repeated impact). These concerns are especially important for tendon-driven systems where motors (and their sensors) are not placed at the joints. Here we create a framework for joint angle estimation by which artificial neural networks (ANNs) use limited-experience from motor babbling to predict joint angles. We draw inspiration from Nature where (i) muscles and tendons have mechanoreceptors, (ii) there are no dedicated joint-angle sensors, and (iii) dedicated neural networks perform sensory fusion. We simulated an inverted pendulum driven by an agonist-antagonist pair of motors that pull on tendons with nonlinear elasticity. We then compared the contributions of different sets of non-collocated sensory information when training ANNs to predict joint angle. By comparing performance across different movement tasks we were able to determine how well each ANN (trained on the different sensory sets of babbling data) generalizes to tasks it has not been exposed to (sinusoidal and point-to-point). Lastly, we evaluated performance as a function of amount of babbling data. We find that training an ANN with actuator states (i.e., motor positions/velocities/accelerations) as well as tendon tension data produces more accurate estimates of joint angles than those ANNs trained without tendon tension data. Moreover, we show that ANNs trained on motor positions/velocities and tendon tensions (i.e., the bio-inspired set) (i) can reliably estimate joint angles with as little as 2 minutes of motor babbling and (ii) generalize well across tasks. We demonstrate a novel framework that can utilize limited-experience to provide accurate and efficient joint angle estimation during dynamical tasks using non-collocated actuator and tendon tension measurements. This enables novel designs of versatile and data-efficient robots that do not require on-location joint angle sensors.
|
|
17:45-18:00, Paper TuDT17.6 | |
>Biomimetic Control Scheme for Musculoskeletal Humanoids Based on Motor Directional Tuning in the Brain |
> Video Attachment
|
|
Toshimitsu, Yasunori | University of Tokyo |
Kawaharazuka, Kento | The University of Tokyo |
Tsuzuki, Kei | University of Tokyo |
Onitsuka, Moritaka | The University of Tokyo |
Nishiura, Manabu | University of Tokyo |
Koga, Yuya | The University of Tokyo |
Omura, Yusuke | The University of Tokyo |
Tomita, Motoki | University of Tokyo |
Asano, Yuki | The University of Tokyo |
Okada, Kei | The University of Tokyo |
Kawasaki, Koji | The University of Tokyo |
Inaba, Masayuki | The University of Tokyo |
Keywords: Biomimetics, Modeling, Control, and Learning for Soft Robots, Modeling and Simulating Human
Abstract: In this research, we have taken a biomimetic approach to the control of musculoskeletal humanoids. A controller was designed based on the motor directional tuning phenomenon seen in the motor cortex of primates. Despite the simple implementation of the control scheme, complex coordinated movements, such as reaching for target objects with the upper body, were achieved. The controller does not require an internal model, and instead constantly observes its body in relation to the external world to update motor commands. We claim that such an embodied approach to the control of musculoskeletal robots will be able to effectively take advantage of their complex bodies to achieve motion.
|
|
TuDT18 |
Room T18 |
Biologically-Inspired Robots I |
Regular session |
Chair: Jung, Gwang-Pil | SeoulTech |
Co-Chair: Ramezani, Alireza | Northeastern University |
|
16:30-16:45, Paper TuDT18.1 | |
>Development and Analysis of Digging and Soil Removing Mechanisms for Mole-Bot: Bio-Inspired Mole-Like Drilling Robot |
|
Lee, Junseok | Korea Advanced Institute of Science and Technology (KAIST) |
Tirtawardhana, Christian | Korea Advanced Institute of Science and Technology (KAIST) |
Myung, Hyun | KAIST (Korea Adv. Inst. Sci. & Tech.) |
Keywords: Biologically-Inspired Robots, Biomimetics
Abstract: Interest in the exploration of new energy resources is increasing due to the exhaustion of existing resources. To explore new energy sources, various studies have been conducted to improve the drilling performance of equipment for deep and strong ground. However, with better performance, modern drilling equipment has become bulky and inconvenient to install and operate, requiring complex procedures on complex terrain. Moreover, environmental issues are also a concern because of the excessive use of mud and slurry to remove excavated soil. To overcome these limitations, a mechanism is proposed that combines an expandable drill bit and a link structure to reproduce the function of the teeth and forelimbs of a mole. In this paper, the proposed expandable drill bit simplifies the complexity and high number of degrees of freedom of the animal's head. In addition, a debris removal mechanism mimicking the shoulder structure and forefoot movement is proposed. For efficient debris removal, the proposed mechanism enables simultaneous rotation and expanding/folding motions of the drill bit using a single actuator. The performance of the proposed system is evaluated by dynamic simulations and experiments.
|
|
16:45-17:00, Paper TuDT18.2 | |
>Snatcher: A Highly Mobile Chameleon-Inspired Shooting and Rapidly Retracting Manipulator |
> Video Attachment
|
|
Lee, Dong-Jun | SeoulTech |
Jung, Gwang-Pil | SeoulTech |
Keywords: Biologically-Inspired Robots, Soft Robot Applications, Mechanism Design
Abstract: Chameleon tongue-like manipulators have the potential to be quite useful for mobile systems, allowing them to overcome access issues by reaching distant targets in an instant. For example, a quadrotor with such a manipulator would be able to snatch distant targets instead of hovering and picking them up. In this letter, we present a chameleon-inspired shooting and rapidly retracting manipulator that is lightweight, compact, and ultimately suitable for mobile systems. To make this possible, a novel actuation system is proposed. The main idea is to release the pre-stored energy at the right place and the right time. With this approach, the whole manipulation system measures 120 x 85 x 85 mm, weighs 117.48 g, and retrieves a 30 g mass located 0.8 m away within 600 ms.
|
|
17:00-17:15, Paper TuDT18.3 | |
>Computational Structure Design of a Bio-Inspired Armwing Mechanism |
> Video Attachment
|
|
Sihite, Eric | Northeastern University |
Kelly, Peter | Northeastern University |
Ramezani, Alireza | Northeastern University |
Keywords: Biomimetics, Soft Robot Materials and Design, Mechanism Design
Abstract: Bat membranous wings possess unique functions that make them a good source of inspiration for transforming current aerial drones. In contrast with other flying vertebrates, bats have an extremely articulated musculoskeletal system, which is key to their energetic efficiency and impressively adaptive, multimodal locomotion. Biomimicry of this flight apparatus is a significant engineering ordeal, and we seek to achieve mechanical intelligence through sophisticated interactions of morphology. Such morphological computation, or mechanical intelligence, draws attention to the close interconnection between morphology and closed-loop feedback. In this work, we demonstrate that several biologically meaningful degrees of freedom can be interconnected to one another by mechanical intelligence and, as a result, the responsibility of feedback-driven components (e.g., actuated joints) is subsumed under computational morphology. The results reported in this work contribute significantly to the design of bio-inspired Micro Aerial Vehicles (MAVs) with articulated bodies and attributes such as efficiency, safety, and collision tolerance.
|
|
17:15-17:30, Paper TuDT18.4 | |
>Optimization-Based Investigation of Bioinspired Variable Gearing of the Distributed Actuation Mechanism to Maximize Velocity and Force |
|
Kim, Jong Ho | Korea Advanced Institute of Science and Technology |
Jang, In Gwun | Korea Advanced Institute of Science and Technology |
Keywords: Biologically-Inspired Robots, Actuation and Joint Mechanisms, Optimization and Optimal Control
Abstract: Transmission between high-speed and high-force motions is a classic but challenging problem in robotics as well as most engineering disciplines. This study optimizes the performance (i.e., both velocity and force) of the distributed actuation mechanism (DAM) based on the novel concept of continuously variable gearing, which is inspired by muscle movement. To quantify continuously variable gearing in the DAM, the structural gear ratio (defined as joint speed/motor speed) is mathematically derived in terms of the slider position and the joint angle. Then, for a DAM-based three-revolute-joint manipulator, a multi-objective optimization problem is formulated to determine the maximum end-effector velocity under varying payloads. An optimization framework consisting of analysis and optimization modules is constructed to verify the proposed concept through a comparison with an equivalent joint actuation mechanism (JAM)-based three-revolute-joint manipulator. The numerical results demonstrate that the bioinspired variable gearing of the DAM allows for a significant enhancement of end-effector velocity and force, depending on the given task.
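The structural gear ratio defined in the abstract (joint speed over motor speed) can be sketched numerically. The lever geometry below is a made-up stand-in for the paper's actual DAM kinematics, chosen only to show how the ratio varies continuously with slider position:

```python
import math

def joint_angle(x, h=0.05):
    """Joint angle (rad) for a toy lever driven by a slider at position x (m).

    The slider pushes a point at fixed height h; this geometry is a
    hypothetical stand-in for the paper's distributed actuation mechanism.
    """
    return math.atan2(h, x)

def structural_gear_ratio(x, dx=1e-6, h=0.05):
    """Structural gear ratio = joint speed / slider (motor) speed = d(theta)/dx,
    estimated by a central finite difference."""
    return (joint_angle(x + dx, h) - joint_angle(x - dx, h)) / (2 * dx)

# Moving the slider close to the pivot gives a high gear ratio (fast joint);
# moving it away gives a low ratio (strong but slow): continuously variable gearing.
near = abs(structural_gear_ratio(0.02))
far = abs(structural_gear_ratio(0.20))
print(near > far)  # True
```

The same finite-difference check can be reused against any analytic gear-ratio expression derived for a real linkage.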
|
|
17:30-17:45, Paper TuDT18.5 | |
>Stable Flight of a Flapping-Wing Micro Air Vehicle under Wind Disturbance |
> Video Attachment
|
|
Lee, Jonggu | Seoul National University |
Ryu, Seungwan | Seoul National University |
Kim, H. Jin | Seoul National University |
Keywords: Biologically-Inspired Robots, Robust/Adaptive Control of Robotic Systems, Biomimetics
Abstract: Flapping-wing micro air vehicles (FWMAVs) inspired by nature are interesting flight platforms due to their efficiency, concealment, and agility. However, most studies have been conducted in indoor environments where external disturbance is excluded, because FWMAVs are susceptible to disturbance due to their complex dynamics and small size. For these bio-inspired robots to perform various tasks outdoors, a capability to react robustly to external disturbance is essential. In this paper, we propose an algorithm that allows a FWMAV to fly well even under external disturbance. First, we derive the attitude dynamics of the FWMAV from flight data. Then, we design a robust attitude controller using disturbance-observer-based control (DOBC) based on these dynamics. We also add a flight mode selector to recognize disturbance autonomously and switch to the robust control mode. Finally, we conduct outdoor flight experiments on the FWMAV under wind disturbance. The FWMAV recognizes the presence of disturbance autonomously and produces additional control inputs to compensate for it. The proposed algorithm is validated with experiments.
|
|
17:45-18:00, Paper TuDT18.6 | |
>A Bio-Inspired Quadruped Robot Exploiting Flexible Shoulder for Stable and Efficient Walking |
> Video Attachment
|
|
Fukuhara, Akira | Tohoku University |
Gunji, Megu | National Museum of Nature and Science, Tokyo |
Masuda, Yoichi | Osaka University |
Tadakuma, Kenjiro | Tohoku University |
Ishiguro, Akio | Tohoku University |
Keywords: Biologically-Inspired Robots, Legged Robots
Abstract: While most modern-day quadruped robots crouch their limbs during the stance phase to stabilize the trunk, mammals exploit the inverted-pendulum motions of their limbs and realize both efficient and stable walking. Although the flexibility of the shoulder region of mammals is expected to contribute to reconciling the discrepancy between forelimb and hindlimb motions for natural walking, the complex body structure makes it difficult to understand the functionality of animal morphology. In this study, we developed a simple robot model that mimics the flexibility of the shoulder region in the sagittal plane, and we conducted a two-dimensional simulation. The results suggest that the flexibility of the shoulder contributes to absorbing the differing motions of the forelimbs and hindlimbs.
|
|
TuDT19 |
Room T19 |
Biologically-Inspired Robots II |
Regular session |
Chair: Valdivia y Alvarado, Pablo | Singapore University of Technology and Design, MIT |
Co-Chair: Tadakuma, Kenjiro | Tohoku University |
|
16:30-16:45, Paper TuDT19.1 | |
>An Earthworm-Like Soft Robot with Integration of Single Pneumatic Actuator and Cellular Structures for Peristaltic Motion |
> Video Attachment
|
|
Liu, Mingcan | National University of Singapore |
Xu, Zhaoyi | University of Toronto |
Ong, Jing Jie | National University of Singapore |
Zhu, Jian | National University of Singapore |
Lu, Wenfeng | National University of Singapore |
Keywords: Biologically-Inspired Robots, Soft Robot Materials and Design, Product Design, Development and Prototyping
Abstract: Earthworm-like soft robots have been widely studied for various applications, such as medical endoscopy and pipeline inspection. Many actuation modes have been chosen to drive these soft robots, including pneumatic actuators, dielectric elastomer actuators, and shape memory actuators. Pneumatic actuators stand out because pneumatically actuated soft robots can produce relatively large forces and displacements with relative ease of fabrication. Currently, several pneumatic actuators are typically used to realize the elongating and anchoring movements of the earthworm's peristaltic motion. More pneumatic actuators not only require more pumps and valves to actuate and control the earthworm, but also lead to less efficient movement control. To address this issue, a new design integrating a single pneumatic actuator with cellular structures is developed to realize the elongating and anchoring movements of the earthworm-like soft robot in peristaltic motion. With the new design, a simulation model is developed to simulate both the elongating and anchoring movements of the earthworm. A 3D-printed prototype of the earthworm-like soft robot is fabricated to validate the proposed design and simulation model. Experimental results show good agreement with the simulation: the difference between the simulated and experimental elongations is 5.8% over one cycle of the peristaltic motion.
|
|
16:45-17:00, Paper TuDT19.2 | |
>Pneumatic Duplex-Chambered Inchworm Mechanism for Narrow Pipes Driven by Only Two Air Supply Lines |
> Video Attachment
|
|
Yamamoto, Tomonari | National Institute of Advanced Industrial Science and Technology |
Sakama, Sayako | National Institute of Advanced Industrial Science and Technology |
Kamimura, Akiya | National Institute of Advanced Industrial Science and Technology |
Keywords: Biologically-Inspired Robots, Soft Robot Materials and Design, Mechanism Design
Abstract: Small in-pipe robots are key to improving pipe inspection procedures, especially for narrow diameters. However, robotic locomotion in such spaces, namely achieving a high locomotion performance with a narrow and flexible mechanism, is difficult. The novel in-pipe locomotion mechanism proposed in this paper achieves rapid locomotion through narrow pipes by a unique duplex-chambered structure. The mechanism achieves smooth bi-directional inchworm locomotion by a combination of expandable silicone rubber and a coil spring and is fully controlled by only two air supply lines. The concept and locomotion technique, including a mathematical analysis and discussion from the viewpoint of operational pressure, are presented herein. Several experiments on the prototyped mechanism were performed to elucidate its characteristics. The results of locomotion tests through horizontal, vertical, and bent pipes showed that the mechanism can horizontally navigate through 25-mm pipes at 45.5 mm/s, which is the fastest yet reported for this size of bi-directional in-pipe robot.
|
|
17:00-17:15, Paper TuDT19.3 | |
>Development of a Maneuverable Un-Tethered Multi-Fin Soft Robot |
> Video Attachment
|
|
Van Tien, Truong | Singapore University of Technology and Design |
Mysa, Ravi Chaithanya | Singapore University of Technology and Design |
Stalin, Thileepan | Singapore University of Technology and Design |
Plamootil Mathai, Aby Raj | Singapore University of Technology and Design |
Valdivia y Alvarado, Pablo | Singapore University of Technology and Design, MIT |
Keywords: Biologically-Inspired Robots, Soft Robot Applications, Underactuated Robots
Abstract: In this paper, the design, fabrication, numerical studies, and preliminary characterization of a multi-fin soft robot are presented. The design is simple, robust, and fully autonomous. The robot has a 216mm body length and displays great potential to achieve uncoupled surge (forwards and backwards), sway, and heave motions. Computational fluid dynamic (CFD) studies are employed to evaluate appropriate fin control approaches and their influence on force generation. By using asymmetric input functions to actuate all fins in phase, the robot can achieve close to pure heave motions while single fin symmetric actuation enables forwards, backwards, and sway motions.
|
|
17:15-17:30, Paper TuDT19.4 | |
>Emergence of Swing-To-Stance Transition from Interlocking Mechanism in Horse Hindlimb |
> Video Attachment
|
|
Miyashita, Kazuhiro | Osaka University |
Masuda, Yoichi | Osaka University |
Gunji, Megu | National Museum of Nature and Science, Tokyo |
Fukuhara, Akira | Tohoku University |
Tadakuma, Kenjiro | Tohoku University |
Ishikawa, Masato | Osaka University |
Keywords: Biologically-Inspired Robots, Legged Robots, Passive Walking
Abstract: The bodies of quadrupeds have very complex muscle-tendon structures. In particular, it is known that in the horse hindlimb, multiple joints are remarkably interlocked by the muscle-tendon structure. Although the function of these interlocking mechanisms during standing has been investigated in the field of anatomy, their function in the emergence of limb trajectories during dynamic walking has not been revealed. To investigate the role of the interlocking mechanism, we developed a robot model imitating the muscle-tendon arrangement and dynamics of a horse hindlimb. In the walking experiment, the robot autonomously generated a limb trajectory with a smooth transition between the swing phase and the stance phase by simply swinging the hip joint with a sinusoidal input. Moreover, we compared the joint angles between successful and failed walking. The comparison indicates that the extension of the fetlock joint after hoof touchdown plays a crucial role in the emergence of the body-supporting function.
|
|
17:30-17:45, Paper TuDT19.5 | |
>Emergent Adaptive Gait Generation through Hebbian Sensor-Motor Maps by Morphological Probing |
> Video Attachment
|
|
Dujany, Matthieu | EPFL |
Hauser, Simon | École Polytechnique Fédérale De Lausanne (EPFL) |
Mutlu, Mehmet | École Polytechnique Fédérale De Lausanne (EPFL) |
van der Sar, Martijn | EPFL |
Arreguit, Jonathan | École Polytechnique Fédérale De Lausanne |
Kano, Takeshi | Tohoku University |
Ishiguro, Akio | Tohoku University |
Ijspeert, Auke | EPFL |
Keywords: Biologically-Inspired Robots, Multi-legged Robots, Sensorimotor Learning
Abstract: Gait emergence and adaptation in animals is unmatched in robotic systems. Animals can create and recover locomotive functions "on-the-fly" after an injury whereas locomotion controllers for robots lack robustness to morphological changes. In this work, we extend previous research on emergent interlimb coordination of legged robots based on coupled phase oscillators with force feedback terms. We investigate how the coupling weights between these phase oscillators can be extracted from the morphology with a fast and computationally lightweight method based on a combination of twitching and Hebbian learning to form sensor-motor maps. The coefficients of these maps create naturally scaled weights, which not only lead to robust gait limit cycles, but can also adapt to morphological modifications such as sensor loss and limb injuries within a few gait cycles. We demonstrate the approach on a robotic quadruped and hexapod.
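A minimal sketch of the coupled-phase-oscillator substrate this work builds on. Here the Hebbian-learned coupling weight is replaced by a hard-coded anti-phase coupling strength `eps` (an assumption, standing in for the learned sensor-motor map), to show how two leg phases lock into an alternating gait:

```python
import math

def simulate(steps=2000, dt=0.01, omega=2 * math.pi, eps=0.5):
    """Two leg-phase oscillators with anti-phase coupling (Kuramoto form).

    In the paper, coupling weights come from twitching plus Hebbian learning;
    the fixed weight `eps` here is an illustrative stand-in.
    """
    p1, p2 = 0.3, 0.0  # initial phases (rad), deliberately out of lockstep
    for _ in range(steps):
        # each oscillator is pulled toward being pi out of phase with the other
        d1 = omega + eps * math.sin(p2 - p1 - math.pi)
        d2 = omega + eps * math.sin(p1 - p2 - math.pi)
        p1 += d1 * dt
        p2 += d2 * dt
    return p1, p2

p1, p2 = simulate()
diff = (p1 - p2) % (2 * math.pi)
print(abs(diff - math.pi) < 0.05)  # True: the legs settle into an alternating gait
```

Swapping the fixed `eps` for a weight updated from load-sensor correlations is, roughly, where the paper's Hebbian map would enter.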
|
|
TuDT20 |
Room T20 |
Insect-Inspired Robotics |
Regular session |
Chair: Gravish, Nick | UC San Diego |
Co-Chair: Perez-Arancibia, Nestor O | University of Southern California (USC) |
|
16:30-16:45, Paper TuDT20.1 | |
>Soft Microrobotic Transmissions Enable Rapid Ground-Based Locomotion |
|
Zhou, Wei | University of California San Diego |
Gravish, Nick | UC San Diego |
Keywords: Micro/Nano Robots, Soft Robot Materials and Design
Abstract: In this paper we present the design, fabrication, testing, and control of a 0.4 g milliscale robot employing a soft polymer flexure transmission for rapid ground movement. The robot was constructed through a combination of two methods: a smart-composite-manufacturing (SCM) process to fabricate the actuators and robot chassis, and silicone elastomer molding and casting to fabricate a soft flexure transmission. We actuate the flexure transmission using two customized piezoelectric (PZT) actuators that attach to the transmission inputs. Through high-frequency oscillations, the actuators are capable of exciting vibrational resonance modes of the transmission which result in motion amplification at the transmission output. Directional spines on the transmission output generate traction force with the ground and drive the robot forward. By varying the excitation frequency of the soft transmission we can control locomotion speed, and when the transmission is oscillated at its resonance frequency we achieve high speeds with a peak speed of 439 mm/s (22 body lengths/s). By exciting traveling waves through the soft transmission, we were able to control the steering direction. Overall this paper demonstrates the feasibility of generating resonance behavior in millimeter-scale soft robotic structures to achieve high-speed controllable locomotion.
|
|
16:45-17:00, Paper TuDT20.2 | |
>An Untethered 216-Mg Insect-Sized Jumping Robot with Wireless Power Transmission |
> Video Attachment
|
|
Kurniawan, Riccy | University of Washington, Seattle |
Fukudome, Tamaki | Institute of Industrial Science, the University of Tokyo |
Qiu, Hao | Institute of Industrial Science, the University of Tokyo |
Takamiya, Makoto | Institute of Industrial Science, the University of Tokyo |
Kawahara, Yoshihiro | The University of Tokyo |
Yang, Jinkyu | University of Washington, Seattle |
Niiyama, Ryuma | University of Tokyo |
Keywords: Biologically-Inspired Robots, Micro/Nano Robots, Soft Robot Applications
Abstract: We present the first demonstration of a battery-free, untethered, wirelessly powered, sub-gram jumping robot at insect scale. To operate an insect-sized robot autonomously, the limitations of batteries motivate a wireless power transmission system as the onboard power solution. We designed a wireless power transmission system based on inductive coupling to power the shape memory alloy (SMA), which serves as both the elastic energy storage element and the actuator for the jumping robot. The assembled mechanical structure, onboard power, and electronics yield a 2 mm (high) x 24 mm (long) x 12 mm (wide) robot with a weight of 216 mg. The experiments show that our jumping robot wirelessly lifts off to up to 5.75 times its body length and repeats the jump around 7 times per minute. Among the untethered sub-gram insect-scale jumping robots with onboard power reported to date, ours is the first to be wirelessly powered and achieves the highest jumping performance. The novelty of this work, which addresses the engineering challenges of insect-scale jumping robots, is an untethered, wirelessly powered design that achieves dynamic jumping maneuvers and has self-righting ability.
|
|
17:00-17:15, Paper TuDT20.3 | |
>Towards the Long-Endurance Flight of an Insect-Inspired, Tailless, Two-Winged, Flapping-Wing Flying Robot |
|
Phan, Hoang Vu | Konkuk University |
Aurecianus, Steven | Konkuk University |
Au, Thi Kim Loan | Konkuk University |
Kang, Taesam | Konkuk University |
Park, Hoon Cheol | Konkuk University |
Keywords: Biologically-Inspired Robots, Biomimetics
Abstract: A hover-capable insect-inspired flying robot that can remain airborne for a long time has shown its potential for completing assigned tasks in both confined indoor and outdoor applications. In this letter, we report improvements in the flight endurance of our 15.8 g robot, named KUBeetle-S, using a low-voltage power source. The robot is equipped with a simple but effective control mechanism that can modulate the stroke plane for attitude stabilization and control. Due to the demand for extended flight, we performed a series of experiments on the lift generation and power requirement of the robot with different stroke amplitudes and wing areas. We show that a larger wing with less inboard wing area improves the lift-to-power ratio and produces a peak lift-to-weight ratio of 1.34 at 3.7 V. Flight tests show that the robot employing the selected wing could hover for 8.8 minutes. Moreover, the robot could perform maneuvers in any direction, fly outdoors, and carry a payload, demonstrating its readiness to enter the next phase of autonomous flight.
|
|
17:15-17:30, Paper TuDT20.4 | |
>SMALLBug: A 30-Mg Crawling Robot Driven by a High-Frequency Flexible SMA Microactuator |
> Video Attachment
|
|
Calderon, Ariel, A | University of Southern California |
Nguyen, Xuan-Truc | University of Southern California |
Rigo, Alberto | USC |
Ge, Joey Zaoyuan | University of Southern California |
Perez-Arancibia, Nestor O | University of Southern California (USC) |
Keywords: Biomimetics, Micro/Nano Robots, Mechanism Design
Abstract: We present the design, fabrication and experimental testing of SMALLBug, a 30-mg crawling microrobot that is 13 mm in length and can locomote at actuation frequencies of up to 20 Hz. The robot is driven by an electrically-powered 6-mg bending actuator that is composed of thin shape-memory alloy (SMA) wires and a carbon-fiber piece that acts as a loading leaf-spring. This configuration enables the generation of high-speed thermally-induced phase transformations of the SMA material in order to produce high-frequency periodic actuation. During development, several actuator prototypes with different mechanical stiffnesses were tested and characterized by measuring their bending motions when excited with pulse-width modulation (PWM) voltages with a variety of frequencies and duty cycles (DCs). In a similar manner, the displacement-force characteristic of the actuator chosen to drive SMALLBug was identified by measuring its bending displacements under a number of different loads ranging from 4.22 to 83.8 mN. The locomotion capabilities of SMALLBug were experimentally tested at three different input actuation frequencies, which were observed to produce three distinct gaits. At the low frequency of 2 Hz, the robot locomotes with a crawling gait similar to that of inchworms; at the moderate frequency of 10 Hz, the robot advances smoothly at an approximately constant speed using a shuffling gait; and at the high frequency of 20 Hz, the robot executes small and fast jumps in a galloping gait, which can reach an average speed of up to 17 mm/s, equivalent to 1.3 body-lengths per second (BLPS).
|
|
17:30-17:45, Paper TuDT20.5 | |
>Inverted and Inclined Climbing Using Capillary Adhesion in a Quadrupedal Insect-Scale Robot |
> Video Attachment
|
|
Chen, YuFeng | Massachusetts Institute of Technology |
Doshi, Neel | MIT |
Wood, Robert | Harvard University |
Keywords: Micro/Nano Robots, Biologically-Inspired Robots, Mechanism Design
Abstract: Many insects demonstrate remarkable locomotive capabilities on inclined or even inverted surfaces. Achieving inverted locomotion is a challenge for legged insect-scale robots because repeated attachment and detachment to a surface usually requires the design of special climbing gaits, adhesion mechanisms, sensing, and feedback control. In this study, we propose a novel adhesion method that leverages capillary and lubrication effects to achieve simultaneous adhesion and sliding. We design a 47 mg adhesion pad and install it on a 1.4 g insect-scale quadrupedal robot to demonstrate locomotion on inverted and inclined surfaces. On an inverted acrylic surface, the robot's climbing and turning speeds are 0.3 cm/s and 23.6 °/s, respectively. Further, the robot can climb a 30° inclined acrylic surface at 0.04 cm/s. This light-weight, passively stable, and versatile adhesion design is suitable for insect-scale robots with limited sensing, actuation, and control capabilities.
|
|
17:45-18:00, Paper TuDT20.6 | |
>Coordinated Appendages Accumulate More Energy to Self-Right on the Ground |
|
Xuan, Qihan | Johns Hopkins University |
Li, Chen | Johns Hopkins University |
Keywords: Biologically-Inspired Robots, Dynamics, Motion Control
Abstract: Animals and robots must right themselves after flipping over on the ground. The discoid cockroach pushes its wings against the ground in an attempt to dynamically self-right by a somersault. However, because this maneuver is strenuous, the animal often fails to overcome the potential energy barrier and makes continual attempts. In this process, the animal flails its legs, whose lateral perturbation eventually leads it to roll to the side to self-right. Our previous work developed a cockroach-inspired robot capable of leg-assisted, winged self-righting, and a robot simulation study revealed that the outcome of this strategy depends sensitively on wing-leg coordination (measured by the phase between their motions). Here, we further elucidate why this is the case by developing a template to model the complex hybrid dynamics resulting from discontinuous contact and actuation. We used the template to calculate the potential energy barrier that the body must overcome to self-right, the mechanical energy contributed by wing pushing and leg flailing, and the mechanical energy dissipated by wing-ground collision. The template revealed that wing-leg coordination (phase) strongly affects the self-righting outcome by changing the mechanical energy budget. Well-coordinated appendage motions (good phase) accumulate more mechanical energy than poorly coordinated motions (bad phase), thereby overcoming the potential energy barrier to self-right more successfully. Finally, we demonstrated practical use of the template for predicting a new control strategy to further increase self-righting performance and for informing robot design.
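The template's energy-budget argument can be caricatured in a few lines: self-righting succeeds when accumulated mechanical energy, net of collision losses, exceeds the potential energy barrier, and the leg contribution depends on wing-leg phase. All energy values and the phase dependence below are invented for illustration, not taken from the paper:

```python
import math

def can_self_right(phase, barrier=1.0, e_wing=0.7, e_leg_max=0.6, e_loss=0.1):
    """Toy energy-budget check inspired by the template.

    phase: wing-leg phase offset (rad); leg energy contribution is assumed
    to peak at phase 0 ("good phase") and vanish at pi ("bad phase").
    All numbers are illustrative, not measured quantities.
    """
    e_leg = e_leg_max * max(0.0, math.cos(phase))
    return e_wing + e_leg - e_loss > barrier

print(can_self_right(0.0))      # True: well-coordinated appendages clear the barrier
print(can_self_right(math.pi))  # False: poor coordination wastes the leg contribution
```

Sweeping `phase` over [0, 2*pi] with such a model reproduces the qualitative claim that the success region is a band around the good phase.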
|
|
TuDT21 |
Room T21 |
Autonomous Agents |
Regular session |
Chair: Miao, Jinghao | Baidu |
Co-Chair: Tran-Thanh, Long | University of Warwick |
|
16:30-16:45, Paper TuDT21.1 | |
>Cooperative Simultaneous Tracking and Jamming for Disabling a Rogue Drone |
|
Papaioannou, Savvas | KIOS CoE, University of Cyprus |
Kolios, Panayiotis | KIOS Research and Innovation Center of Excellence, University of Cyprus |
Panayiotou, Christos | University of Cyprus |
Polycarpou, Marios | KIOS Center of Excellence, University of Cyprus |
Keywords: Agent-Based Systems, Aerial Systems: Applications, Autonomous Agents
Abstract: This work investigates the problem of simultaneous tracking and jamming of a rogue drone in 3D space with a team of cooperative unmanned aerial vehicles (UAVs). We propose a decentralized estimation, decision and control framework in which a team of UAVs cooperate in order to a) optimally choose their mobility control actions that result in accurate target tracking and b) select the desired transmit power levels which cause uninterrupted radio jamming and thus ultimately disrupt the operation of the rogue drone. The proposed decision and control framework allows the UAVs to reconfigure themselves in 3D space such that the cooperative simultaneous tracking and jamming (CSTJ) objective is achieved, while at the same time ensuring that the unwanted inter-UAV jamming interference caused during CSTJ is kept below a specified critical threshold. Finally, we formulate this problem under challenging conditions, i.e., uncertain dynamics, noisy measurements, and false alarms. Extensive simulation experiments illustrate the performance of the proposed approach.
|
|
16:45-17:00, Paper TuDT21.2 | |
>SpCoMapGAN: Spatial Concept Formation-Based Semantic Mapping with Generative Adversarial Networks |
> Video Attachment
|
|
Katsumata, Yuki | Ritsumeikan University |
Taniguchi, Akira | Ritsumeikan University |
El Hafi, Lotfi | Ritsumeikan University |
Hagiwara, Yoshinobu | Ritsumeikan University |
Taniguchi, Tadahiro | Ritsumeikan University |
Keywords: Autonomous Agents, Representation Learning, Cognitive Human-Robot Interaction
Abstract: In semantic mapping, which connects semantic information to an environment map, it is a challenging task for robots to deal with both local and global information of environments. In addition, it is important to estimate the semantic information of unobserved areas from already acquired partial observations in a newly visited environment. On the other hand, previous studies on spatial concept formation enabled a robot to relate multiple words to places from bottom-up observations even when the vocabulary was not provided beforehand. However, the robot could not transfer global information related to room arrangement between semantic maps from other environments. In this paper, we propose SpCoMapGAN, which generates the semantic map in a newly visited environment by training an inference model on previously estimated semantic maps. SpCoMapGAN uses generative adversarial networks (GANs) to transfer semantic information based on room arrangements to a newly visited environment. Our proposed method assigns semantics to the map of an unknown environment using the prior distribution over maps trained in known environments and the multimodal observations in the unknown environment. We experimentally show in simulation that SpCoMapGAN can use global information for estimating the semantic map and is superior to previous methods. Finally, we also demonstrate in a real environment that SpCoMapGAN can accurately 1) deal with local information, and 2) acquire the semantic information of real places.
|
|
17:00-17:15, Paper TuDT21.3 | |
>To Ask or Not to Ask: A User Annoyance Aware Preference Elicitation Framework for Social Robots |
|
Gucsi, Bálint | University of Southampton |
Tarapore, Danesh | University of Southampton |
Yeoh, William | Washington University St Louis |
Amato, Christopher | Northeastern University |
Tran-Thanh, Long | University of Warwick |
Keywords: Autonomous Agents, Planning, Scheduling and Coordination, Social Human-Robot Interaction
Abstract: In this paper we investigate how social robots can efficiently gather user preferences without exceeding the allowed user annoyance threshold. To do so, we use a Gazebo based simulated office environment with a TIAGo Steel robot. We then formulate the user annoyance aware preference elicitation problem as a combination of tensor completion and knapsack problems. We then test our approach on the aforementioned simulated environment and demonstrate that it can accurately estimate user preferences.
|
|
17:15-17:30, Paper TuDT21.4 | |
>Visual Task Progress Estimation with Appearance Invariant Embeddings for Robot Control and Planning |
> Video Attachment
|
|
Maeda, Guilherme Jorge | Preferred Networks |
Vaatainen, Joni | Waseda University |
Yoshida, Hironori | www.hy-ma.com |
Keywords: Autonomous Agents, Representation Learning, Perception-Action Coupling
Abstract: One of the challenges of full autonomy is to have robots capable of manipulating their current environment to achieve another environment configuration. This paper is a step towards this challenge, focusing on the visual understanding of the task. Our approach trains a deep neural network to represent images as measurable features that are useful for estimating the progress (or phase) of a task. The training uses numerous variations of images of identical tasks taken under the same phase index. The goal is to make the network sensitive to differences in task progress but insensitive to the appearance of the images. To this end, our method builds upon Time-Contrastive Networks (TCNs) to train a network using only discrete snapshots taken at different stages of a task. A robot can then solve long-horizon tasks by using the trained network to identify the progress of the current task and by iteratively calling a motion planner until the task is solved. We quantify the granularity achieved by the network in two simulated environments: in the first, detecting the number of objects in a scene, and in the second, measuring the volume of particles in a cup. Our experiments leverage this granularity to make a mobile robot move a desired number of objects into a storage area and to control the amount poured into a cup.
|
|
17:30-17:45, Paper TuDT21.5 | |
>Lane-Attention: Predicting Vehicles' Moving Trajectories by Learning Their Attention Over Lanes |
|
Pan, Jiacheng | UCLA, Baidu |
Sun, Hongyi | Baidu USA |
Xu, Kecheng | Baidu USA LLC |
Jiang, Yifei | Baidu USA LLC |
Xiao, Xiangquan | Baidu USA LLC |
Hu, Jiangtao | Baidu USA |
Miao, Jinghao | Baidu |
Keywords: Autonomous Agents, Collision Avoidance, Deep Learning in Grasping and Manipulation
Abstract: Accurately forecasting the future movements of surrounding vehicles is essential for the safe and efficient operation of autonomous driving cars. This task is difficult because a vehicle's moving trajectory is greatly determined by its driver's intention, which is often hard to estimate. By leveraging attention mechanisms along with long short-term memory (LSTM) networks, this work learns the relation between a driver's intention and the vehicle's changing positions relative to road infrastructures, and uses it to guide the prediction. Different from other state-of-the-art solutions, our work treats the on-road lanes as non-Euclidean structures, unfolds the vehicle's moving history to form a spatio-temporal graph, and uses methods from Graph Neural Networks to solve the problem. Not only is our approach a pioneering attempt at using non-Euclidean methods to process static environmental features around a predicted object, but our model also outperforms other state-of-the-art models in several metrics. The practicability and interpretability analysis of the model shows great potential for large-scale deployment in various autonomous driving systems in addition to our own.
|
|
17:45-18:00, Paper TuDT21.6 | |
>Pedestrian Intention Prediction for Autonomous Driving Using a Multiple Stakeholder Perspective Model |
> Video Attachment
|
|
Kim, Kyungdo | Seoul National University |
Lee, Yoon Kyung | Seoul National University |
Ahn, Hyemin | Technical University of Munich |
Hahn, Sowon | Seoul National University |
Oh, Songhwai | Seoul National University |
Keywords: Autonomous Agents, Virtual Reality and Interfaces, Deep Learning for Visual Perception
Abstract: This paper proposes a multiple stakeholder perspective model (MSPM) which predicts the future pedestrian trajectory observed from the vehicle's point of view. The motivation for the MSPM is that a human driver exploits the experience of being a pedestrian when he or she encounters a pedestrian crossing the street. For vehicle-pedestrian interaction, the estimation of the pedestrian's intention is a key factor. However, even though this interaction is commonly initiated by both the human (pedestrian) and the agent (driver), current research focuses on developing neural networks trained with data from the driver's perspective only. In this paper, we apply the MSPM to pedestrian intention prediction. The model combines the driver (stakeholder 1) and the pedestrian (stakeholder 2) by separating the information based on perspective. The dataset from the pedestrian's perspective has been collected through a virtual reality experiment, and a network that can reflect the perspectives of both pedestrian and driver is proposed. Our model achieves the best performance on the existing pedestrian intention dataset, reducing the trajectory prediction error by an average of 4.48% in the short-term (0.5s) and middle-term (1.0s) predictions, and by 11.14% in the long-term (1.5s) prediction, compared to the previous state-of-the-art.
|
|
TuDT22 |
Room T22 |
Cooperating Robots |
Regular session |
Chair: Chernova, Sonia | Georgia Institute of Technology |
Co-Chair: Tokekar, Pratap | University of Maryland |
|
16:30-16:45, Paper TuDT22.1 | |
>Computing High-Quality Clutter Removal Solutions for Multiple Robots |
> Video Attachment
|
|
Tang, Wei N. | Rutgers University |
Han, Shuai D. | Rutgers University |
Yu, Jingjin | Rutgers University |
Keywords: Multi-Robot Systems, Task Planning, Cooperating Robots
Abstract: We investigate the task and motion planning problem of clearing clutter from a workspace with limited ingress/egress access for multiple robots. We call this problem multi-robot clutter removal (MRCR). Targeting practical applications where motion planning is non-trivial but is not a bottleneck, we limit our focus to feasible MRCR instances and seek high-quality solutions, which depends on the ability to efficiently compute high-quality object removal sequences. Despite several additional challenges in the multi-robot setting, our proposed search algorithms based on A*, dynamic programming, and best-first heuristics all produce solutions for tens of objects that significantly outperform single-robot solutions. Realistic simulations with multiple Kuka youBots further confirm the effectiveness of our algorithmic solutions. In contrast, we also show that deciding the optimal object removal sequence for MRCR is computationally intractable.
|
|
16:45-17:00, Paper TuDT22.2 | |
>Adaptive Partitioning for Coordinated Multi-Agent Perimeter Defense |
|
Guimarães Macharet, Douglas | Universidade Federal De Minas Gerais |
Chen, Austin Ku | University of Pennsylvania |
Shishika, Daigo | University of Pennsylvania |
Pappas, George J. | University of Pennsylvania |
Kumar, Vijay | University of Pennsylvania, School of Engineering and Applied Sc |
Keywords: Multi-Robot Systems, Cooperating Robots, Autonomous Agents
Abstract: Multi-robot systems have recently been employed in different applications and have advantages over single-robot systems, such as increased robustness and task performance efficiency. We consider such assemblies specifically in the scenario of perimeter defense, where the task is to defend a circular perimeter by intercepting radially approaching targets. Possible intruders appear randomly at a fixed distance from the perimeter and with azimuthal location determined by some unknown probability density. Coordination among multiple defenders is a complex combinatorial optimization problem. In this work, we focus on the following two aspects: (i) estimating the probability density that describes the direction from which the next intruders are going to arrive, and (ii) partitioning of the space so that the defenders focus on capturing disjoint subsets of intruders. Results show that the proposed strategy increases the number of captures over a naive baseline strategy, especially in scenarios with non-uniform spatial distributions of intruder arrival. The proposed approach is also efficient and able to quickly adapt to time-varying intruder distributions.
|
|
17:00-17:15, Paper TuDT22.3 | |
>Approximated Dynamic Trait Models for Heterogeneous Multi-Robot Teams |
> Video Attachment
|
|
Neville, Glen | Georgia Institute of Technology |
Ravichandar, Harish | Georgia Institute of Technology |
Shaw, Kenneth | Georgia Institute of Technology |
Chernova, Sonia | Georgia Institute of Technology |
Keywords: Cooperating Robots, Multi-Robot Systems
Abstract: To realize effective heterogeneous multi-agent teams, we must be able to leverage individual agents' relative strengths. Recent work has addressed this challenge by introducing trait-based task assignment approaches that exploit the agents' relative advantages. These approaches, however, assume that the agents' traits remain static. Indeed, in real-world scenarios, traits are likely to vary as agents execute tasks. In this paper, we present a transformation-based modeling framework to bridge the gap between state-of-the-art task assignment algorithms and the reality of dynamic traits. We define a transformation as a function that approximates dynamic traits with static traits based on a specific statistical measure. We define different candidate transformations and investigate their effects on different dynamic trait models and on the resulting task performance. Further, we propose a variance-based transformation as a general solution that approximates a variety of dynamic models, eliminating the need for hand specification. Finally, we demonstrate the benefits of reasoning about dynamic traits both in simulation and in a physical experiment involving the game of capture-the-flag.
|
|
17:15-17:30, Paper TuDT22.4 | |
>Cooperative Control of Mobile Robots with Stackelberg Learning |
> Video Attachment
|
|
Koh, Joewie J. | University of Colorado Boulder |
Ding, Guohui | University of Colorado Boulder |
Heckman, Christoffer | University of Colorado at Boulder |
Chen, Lijun | University of Colorado at Boulder |
Roncone, Alessandro | University of Colorado Boulder |
Keywords: Cooperating Robots, Multi-Robot Systems, Reinforcement Learning
Abstract: Multi-robot cooperation requires agents to make decisions that are consistent with the shared goal without disregarding action-specific preferences that might arise from asymmetry in capabilities and individual objectives. To accomplish this goal, we propose a method named SLiCC: Stackelberg Learning in Cooperative Control. SLiCC models the problem as a partially observable stochastic game composed of Stackelberg bimatrix games, and uses deep reinforcement learning to obtain the payoff matrices associated with these games. Appropriate cooperative actions are then selected with the derived Stackelberg equilibria. Using a bi-robot cooperative object transportation problem, we validate the performance of SLiCC against centralized multi-agent Q-learning and demonstrate that SLiCC achieves better combined utility.
|
|
17:30-17:45, Paper TuDT22.5 | |
>Sparse Discrete Communication Learning for Multi-Agent Cooperation through Backpropagation |
> Video Attachment
|
|
Freed, Benjamin | Carnegie Mellon University |
James, Rohan | Carnegie Mellon University |
Sartoretti, Guillaume Adrien | National University of Singapore (NUS) |
Choset, Howie | Carnegie Mellon University |
Keywords: Multi-Robot Systems, Reinforcement Learning, Cooperating Robots
Abstract: Recent approaches to multi-agent reinforcement learning (MARL) with inter-agent communication have often overlooked important considerations of real-world communication networks, such as limits on bandwidth. In this paper, we propose an approach to learning sparse discrete communication through backpropagation in the context of MARL, in which agents are incentivized to communicate as little as possible while still achieving high reward. Building on our prior work on differentiable discrete communication learning, we develop a regularization-inspired message-length penalty term that encourages agents to send shorter messages and avoid unnecessary communication. To this end, we introduce a variable-length message code that provides agents with a general means of modulating message length while keeping the overall learning objective differentiable. We present simulation results on a partially observable robot navigation task, where we first show how our approach allows learning of sparse communication behavior while still solving the task. Finally, we demonstrate that our approach can even learn effective sparse communication behavior from demonstrations of an expert (potentially communication-free) policy.
|
|
17:45-18:00, Paper TuDT22.6 | |
>Multi-Robot Coordinated Planning in Confined Environments under Kinematic Constraints |
|
Mangette, Clayton | Virginia Polytechnic Institute |
Tokekar, Pratap | University of Maryland |
Keywords: Path Planning for Multiple Mobile Robots or Agents, Nonholonomic Motion Planning, Motion and Path Planning
Abstract: We investigate the problem of multi-robot coordinated planning in environments where the robots may have to operate in close proximity to each other. We seek computationally efficient planners that ensure safe paths and adherence to kinematic constraints. We extend the centralized planner dRRT* with our variant, fast-dRRT (fdRRT), intended for use in tight environments that lead to a high degree of coupling between robots. Our algorithm is empirically shown to achieve a favorable trade-off between computation time and solution quality, especially in tight environments. We also demonstrate that our algorithm can be adapted to the online planning problem while maintaining computational efficiency.
|
|
TuDT23 |
Room T23 |
Swarms |
Regular session |
Chair: Floreano, Dario | Ecole Polytechnique Federal, Lausanne |
Co-Chair: De Schutter, Joris | KU Leuven |
|
16:30-16:45, Paper TuDT23.1 | |
>SwarmLab: A MATLAB Drone Swarm Simulator |
> Video Attachment
|
|
Soria, Enrica | EPFL |
Schiano, Fabrizio | Ecole Polytechnique Federale De Lausanne, EPFL |
Floreano, Dario | Ecole Polytechnique Federal, Lausanne |
Keywords: Swarms, Agent-Based Systems, Simulation and Animation
Abstract: Among the available solutions for drone swarm simulations, we identified a lack of simulation frameworks that allow easy algorithm prototyping, tuning, debugging, and performance analysis. Moreover, users who want to dive into the research field of drone swarms often need to interface with multiple programming languages. We present SwarmLab, software written entirely in MATLAB, that aims at the creation of standardized processes and metrics to quantify the performance and robustness of swarm algorithms, with a particular focus on drones. We showcase the functionalities of SwarmLab by comparing two decentralized algorithms from the state of the art for the navigation of aerial swarms in cluttered environments, Olfati-Saber’s and Vasarhelyi’s. We analyze the variability of the inter-agent distances and agents’ speeds during flight. We also study some of the performance metrics presented, i.e. order, inter- and extra-agent safety, union, and connectivity. While Olfati-Saber’s approach results in a faster crossing of the obstacle field, Vasarhelyi’s approach allows the agents to fly smoother trajectories, without oscillations. We believe that SwarmLab is relevant for both the biological and robotics research communities, and for education, since it allows fast algorithm development, the automatic collection of simulated data, and the systematic analysis of swarming behaviors with performance metrics inherited from the state of the art.
|
|
16:45-17:00, Paper TuDT23.2 | |
>An Actor-Based Programming Framework for Swarm Robotic Systems |
|
Yi, Wei | National Innovation Institute of Defense Technology |
Di, Bin | Artificial Intelligence Research Center (AIRC), National Innovat |
Li, Ruihao | National Innovation Institute of Defense Technology (NIIDT) |
Dai, Huadong | National Innovation Institute of Defense Technology |
Yi, Xiaodong | National Innovation Institute of Defense Technology |
Wang, Yanzhen | School of Computer, National University of Defense Technology |
Yang, Xuejun | National University of Defense Technology |
Keywords: Software, Middleware and Programming Environments, Multi-Robot Systems, Swarms
Abstract: Programming cooperative tasks for autonomous swarm robotic systems has always been challenging. In this paper, we introduce the concept of an 'Actor' as a virtualization of a robot platform. Every robot platform in the swarm robotic system carries out its task and interacts with others as an Actor. We design an Actor-based framework for the management of autonomous swarm robotic systems, including modules and interfaces for the Actor, the collective Actor, and task management. The Actor-based framework enables task developers to explicitly model cooperative tasks without dealing with the intricacies of the underlying robotic algorithms or specific robot brands, and eases the burden on robotic algorithm developers by providing common functionalities. The proposed framework is implemented in C++ and validated quantitatively and qualitatively with a swarm of thirty drones in simulation and a swarm of ten drones in in-field tests.
|
|
17:00-17:15, Paper TuDT23.3 | |
>A Distributed Range-Only Collision Avoidance Approach for Low-Cost Large-Scale Multi-Robot Systems |
> Video Attachment
|
|
Han, Ruihua | Southern University of Science and Technology |
Chen, Shengduo | Southern University of Science and Technology |
Hao, Qi | Southern University of Science and Technology |
Keywords: Collision Avoidance, Multi-Robot Systems, Path Planning for Multiple Mobile Robots or Agents
Abstract: The challenges of developing low-cost, large-scale multi-robot navigation systems include noisy measurements, a large number of robots, and computational efficiency for collision avoidance. This paper presents a distributed motion planning framework for a large number of robots to navigate with robust collision avoidance using low-cost range-only measurements. The novelty of this work is threefold: (1) developing a distributed collision-free navigation system for a large-scale robot group in which each robot performs motion planning based on the noisy range measurements of neighboring robots; (2) developing a set of algorithms for each robot to accurately estimate the relative positions and orientations of its neighbors based on the range measurements and relative velocities; (3) developing a velocity obstacle (VO) based motion planning algorithm for each robot that takes into account the estimation uncertainties in the relative positions and orientations. The proposed approach is tested with various numbers of differential-drive robots in the Gazebo simulator and in real-world experiments. Both simulation and experimental results validate the superior performance of the proposed approach compared to other state-of-the-art technologies.
|
|
17:15-17:30, Paper TuDT23.4 | |
>Automatic Control Synthesis for Swarm Robots from Formation and Location-Based High-Level Specifications |
> Video Attachment
|
|
Chen, Ji | Cornell University |
Wang, Hanlin | Northwestern University |
Rubenstein, Michael | Northwestern University |
Kress-Gazit, Hadas | Cornell University |
Keywords: Swarms, Multi-Robot Systems, Motion Control
Abstract: In this paper, we propose an abstraction that captures high-level formation and location-based swarm behaviors, and an automated control synthesis framework for generating safe controls. Our abstraction includes symbols representing both possible formations and physical locations in the workspace. We allow users to write linear temporal logic (LTL) specifications over the symbols to specify high-level tasks for the swarm. To satisfy a specification, we automatically synthesize a centralized symbolic plan, and environment and swarm-size-dependent motion controllers that are guaranteed to implement the symbolic transitions. In addition, using integer programming (IP), we assign robots to different sub-swarms to execute the synthesized symbolic plan. Our framework gives insights into controlling a large fleet of autonomous robots to achieve complex tasks which require composition of behaviors at different locations and coordination among different groups of robots in a correct-by-construction way. We demonstrate the proposed framework in simulation with 16 UAVs and 8 ground vehicles, and on a physical platform with 20 ground robots, showcasing the generality of the approach and discussing the implications of controlling constrained physical hardware.
|
|
17:30-17:45, Paper TuDT23.5 | |
>Low-Viewpoint Forest Depth Dataset for Sparse Rover Swarms |
> Video Attachment
|
|
Niu, Chaoyue | Agents, Interaction and Complexity Group, Electronics and Comput |
Tarapore, Danesh | University of Southampton |
Zauner, Klaus-Peter | University of Southampton |
Keywords: Swarms, Visual-Based Navigation, RGB-D Perception
Abstract: Rapid progress in embedded computing hardware increasingly enables on-board image processing on small robots. This development opens the path to replacing costly sensors with sophisticated computer vision techniques. A case in point is the prediction of scene depth information from a monocular camera for autonomous navigation. Motivated by the aim to develop a robot swarm suitable for sensing, monitoring, and search applications in forests, we have collected a set of RGB images and corresponding depth maps. Over 100,000 RGB/depth image pairs were recorded with a custom rig from the perspective of a small ground rover moving through a forest. Taken under different weather and lighting conditions, the images include scenes with grass, bushes, standing and fallen trees, tree branches, leaves, and dirt. In addition, GPS, IMU, and wheel encoder data were recorded. From the calibrated, synchronized, aligned, and timestamped frames, about 9,700 image-depth map pairs were selected for sharpness and variety. We provide this dataset to the community to fill a need identified in our own research and hope it will accelerate progress in robots navigating the challenging forest environment. This paper describes our custom hardware and methodology for collecting the data, the subsequent processing and quality of the data, and how to access it.
|