Last updated on May 27, 2019. This conference program is tentative and subject to change.
Technical Program for Wednesday May 22, 2019
|
WePL Plenary Session, 210 |
Plenary Session III |
|
|
Chair: Dudek, Gregory | McGill University |
|
08:30-09:30, Paper WePL.1 |
A Future with Affordable Self-Driving Vehicles |
Urtasun, Raquel | University of Toronto |
Keywords: Computer Vision for Transportation
Abstract: Raquel Urtasun is the Chief Scientist of Uber ATG and the Head of Uber ATG Toronto. She is also an Associate Professor in the Department of Computer Science at the University of Toronto, a Canada Research Chair in Machine Learning and Computer Vision and a co-founder of the Vector Institute for AI. She received her Ph.D. degree from the Computer Science department at Ecole Polytechnique Federale de Lausanne (EPFL) in 2006 and did her postdoc at MIT and UC Berkeley. She is a world-leading expert in AI for self-driving cars. Her research interests include machine learning, computer vision, robotics and remote sensing. Her lab was selected as an NVIDIA NVAIL lab. She is a recipient of an NSERC EWR Steacie Award, an NVIDIA Pioneers of AI Award, a Ministry of Education and Innovation Early Researcher Award, three Google Faculty Research Awards, an Amazon Faculty Research Award, a Connaught New Researcher Award, a Fallona Family Research Award and two Best Paper Runner-Up Prizes, awarded at CVPR in 2013 and 2017, respectively.
|
|
WeAT1 |
220 |
PODS: Wednesday Session I |
Interactive Session |
|
09:40-10:55, Subsession WeAT1-01, 220 | |
Marine Robotics V - 3.1.01 Interactive Session, 6 papers |
|
09:40-10:55, Subsession WeAT1-02, 220 | |
Mapping and Reconstruction - 3.1.02 Interactive Session, 6 papers |
|
09:40-10:55, Subsession WeAT1-03, 220 | |
Robots and Language - 3.1.03 Interactive Session, 6 papers |
|
09:40-10:55, Subsession WeAT1-04, 220 | |
Path Planning II - 3.1.04 Interactive Session, 6 papers |
|
09:40-10:55, Subsession WeAT1-05, 220 | |
Learning from Demonstration II - 3.1.05 Interactive Session, 6 papers |
|
09:40-10:55, Subsession WeAT1-06, 220 | |
Semantic Scene Understanding I - 3.1.06 Interactive Session, 5 papers |
|
09:40-10:55, Subsession WeAT1-07, 220 | |
SLAM - Session VII - 3.1.07 Interactive Session, 6 papers |
|
09:40-10:55, Subsession WeAT1-08, 220 | |
AI-Based Methods I - 3.1.08 Interactive Session, 6 papers |
|
09:40-10:55, Subsession WeAT1-09, 220 | |
Perception for Manipulation III - 3.1.09 Interactive Session, 6 papers |
|
09:40-10:55, Subsession WeAT1-10, 220 | |
Object Recognition & Segmentation III - 3.1.10 Interactive Session, 6 papers |
|
09:40-10:55, Subsession WeAT1-11, 220 | |
Manipulation III - 3.1.11 Interactive Session, 6 papers |
|
09:40-10:55, Subsession WeAT1-12, 220 | |
Mechanism Design II - 3.1.12 Interactive Session, 6 papers |
|
09:40-10:55, Subsession WeAT1-13, 220 | |
Soft Robots V - 3.1.13 Interactive Session, 6 papers |
|
09:40-10:55, Subsession WeAT1-14, 220 | |
Legged Robots III - 3.1.14 Interactive Session, 6 papers |
|
09:40-10:55, Subsession WeAT1-15, 220 | |
Robot Safety I - 3.1.15 Interactive Session, 6 papers |
|
09:40-10:55, Subsession WeAT1-16, 220 | |
Wheeled Robotics I - 3.1.16 Interactive Session, 5 papers |
|
09:40-10:55, Subsession WeAT1-17, 220 | |
Actuators - 3.1.17 Interactive Session, 6 papers |
|
09:40-10:55, Subsession WeAT1-18, 220 | |
Autonomous Agents - 3.1.18 Interactive Session, 6 papers |
|
09:40-10:55, Subsession WeAT1-19, 220 | |
Contact Modeling - 3.1.19 Interactive Session, 5 papers |
|
09:40-10:55, Subsession WeAT1-20, 220 | |
Hybrid Logical/Dynamical Planning and Verification - 3.1.20 Interactive Session, 5 papers |
|
09:40-10:55, Subsession WeAT1-21, 220 | |
Aerial Systems - 3.1.21 Interactive Session, 6 papers |
|
09:40-10:55, Subsession WeAT1-22, 220 | |
Learning from Demonstration III - 3.1.22 Interactive Session, 6 papers |
|
09:40-10:55, Subsession WeAT1-23, 220 | |
Learning from Demonstration IV - 3.1.23 Interactive Session, 6 papers
|
09:40-10:55, Subsession WeAT1-24, 220 | |
Learning and Manipulation I - 3.1.24 Interactive Session, 6 papers |
|
09:40-10:55, Subsession WeAT1-25, 220 | |
Learning and Manipulation II - 3.1.25 Interactive Session, 6 papers |
|
WeAT1-01 Interactive Session, 220 |
Marine Robotics V - 3.1.01 |
|
|
|
09:40-10:55, Paper WeAT1-01.1 |
Visual Diver Recognition for Underwater Human-Robot Collaboration |
Xia, Youya | University of Minnesota, Twin Cities |
Sattar, Junaed | University of Minnesota |
Keywords: Marine Robotics, Deep Learning in Robotics and Automation, Computer Vision for Other Robotic Applications
Abstract: This paper presents an approach for autonomous underwater robots to visually detect and identify divers. The proposed approach enables an autonomous underwater robot to detect multiple divers in a visual scene and distinguish between them. Such methods are useful for robots to identify a human leader, for example, in multi-human/robot teams where only designated individuals are allowed to command or lead a team of robots. Initial diver identification is performed using the Faster R-CNN algorithm with a region proposal network which produces bounding boxes around the divers' locations. Subsequently, a suite of spatial and frequency domain descriptors is extracted from the bounding boxes to create a feature vector. A K-Means clustering algorithm, with "k" set to the number of detected bounding boxes, thereafter identifies the detected divers based on these feature vectors. We evaluate the performance of the proposed approach on video footage of divers swimming in front of a mobile robot and demonstrate its accuracy.
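
A minimal Python sketch of the identification step described above, assuming detections have already been reduced to fixed-length feature vectors; the pooling of descriptors over recent frames and the use of scikit-learn's KMeans are illustrative choices, not the authors' implementation.

import numpy as np
from sklearn.cluster import KMeans

def identify_divers(descriptors, k):
    # descriptors: (n, d) spatial/frequency-domain feature vectors pooled
    # over recent detections; k: number of bounding boxes returned by the
    # Faster R-CNN detector in the current frame, as in the paper.
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(descriptors)
    return km.labels_, km.cluster_centers_  # per-detection diver identities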
|
|
09:40-10:55, Paper WeAT1-01.2 |
An Integrated Approach to Navigation and Control in Micro Underwater Robotics Using Radio-Frequency Localization |
Duecker, Daniel Andre | Hamburg University of Technology |
Johannink, Tobias | Hamburg University of Technology |
Kreuzer, Edwin | Hamburg University of Technology |
Rausch, Viktor | Hamburg University of Technology |
Solowjow, Eugen | Siemens Corporation |
Keywords: Marine Robotics, Field Robots, Localization
Abstract: Navigation and control are largely unsolved problems for micro autonomous underwater vehicles (uAUVs). The main challenges are due to the lack of accurate underwater localization systems that fit on board uAUVs. In this work, we present an integrated navigation and control architecture consisting of a low-cost embedded localization module and an underwater way-point tracking controller, which fulfills the requirements of uAUVs. The performance of the navigation and control system is benchmarked in two different experimental scenarios.
|
|
09:40-10:55, Paper WeAT1-01.3 |
Online Utility-Optimal Trajectory Design for Time-Varying Ocean Environments |
Nutalapati, Mohan Krishna | Indian Institute of Technology Kanpur |
Joshi, Shruti | Indian Institute of Technology Kanpur |
Rajawat, Ketan | IIT Kanpur |
Keywords: Energy and Environment-Aware Automation, Marine Robotics, Motion and Path Planning
Abstract: This paper considers the problem of online optimal trajectory design under time-varying environments. Of particular interest is the design of energy-efficient trajectories under strong and uncertain disturbances in ocean environments with a time-varying goal location. We formulate the problem within the constrained online convex optimization formalism and motivate a modified online gradient descent algorithm. The mobility constraints are met using a carefully chosen step-size, and the proposed algorithm is shown to incur sublinear regret. Different from the state-of-the-art algorithms that entail planning and re-planning the full trajectory using forecast data at each time instant, the proposed algorithm is entirely online and relies mostly on the current ocean velocity measurements at the vehicle locations. The trade-off between excess delay incurred in reaching the goal and the overall energy consumption is examined via numerical tests carried out on real data obtained from the regional ocean modelling system. Compared to the state-of-the-art algorithms, the proposed algorithm is not only energy-efficient but also several orders of magnitude more computationally efficient.
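
A minimal sketch of one online update of the kind described above, assuming a quadratic goal-attraction surrogate perturbed by the locally measured current; the gradient and the step-size projection enforcing the mobility constraint are illustrative, not the paper's exact formulation.

import numpy as np

def online_step(x, goal, current_vel, eta, v_max, dt):
    # One projected online gradient-descent step toward a (possibly moving)
    # goal, using only the measured ocean velocity at the vehicle location.
    grad = (x - goal) - current_vel        # assumed surrogate cost gradient
    x_new = x - eta * grad
    move = x_new - x
    dist = np.linalg.norm(move)
    if dist > v_max * dt:                  # mobility constraint ||move|| <= v_max*dt
        x_new = x + move * (v_max * dt / dist)
    return x_new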
|
|
09:40-10:55, Paper WeAT1-01.4 |
Rendezvous Planning for Multiple AUVs with Mobile Charging Stations in Dynamic Currents |
Li, Bingxi | Michigan Technological University |
Page, Brian | Purdue University |
Hoffman, John | Michigan Technological University |
Moridian, Barzin | Purdue University |
Mahmoudian, Nina | Purdue University |
Keywords: Marine Robotics, Planning, Scheduling and Coordination
Abstract: Operation of Autonomous Underwater Vehicles (AUVs) in large spatiotemporal missions is currently challenged by onboard energy resources that require manned support. With current methods, AUVs are programmed to return to a static charging station based on a threshold in their energy level. Although this approach has shown success in extending operational life, it becomes impractical due to the interruption of AUV operation and the energy lost in returning to the charging station. It is also impractical for large networks due to a shortage of charging stations. We introduce mobile onsite power delivery, which will fundamentally change the range and duration of underwater operations. This paper presents a mission planning method to generate mobile charger trajectories, given pre-defined working AUV trajectories, considering environmental constraints such as currents and obstacles. The problem is formulated as a Multiple Generalized Traveling Salesman Problem (MGTSP) that is then transformed into a Traveling Salesman Problem (TSP). Energy cost in dynamic currents is integrated with a path planning algorithm using a grid-based environment model. A scheduling strategy extends the problem over multiple charging cycles. Simulation results show that the planning method significantly improves mission success and energy expenditure. Field experiments in Lake Superior validate feasibility of the planned trajectories for long-term marine missions.
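
Since the MGTSP is ultimately transformed into a TSP, a nearest-neighbour heuristic gives a compact stand-in for the tour construction; the cost callback is where a current-aware energy model from the grid-based planner would plug in, with Euclidean distance as a placeholder only.

import math

def charger_tour(points, start=0, cost=None):
    # Greedy TSP tour over AUV rendezvous points for one mobile charger.
    if cost is None:
        cost = lambda a, b: math.dist(a, b)   # placeholder for energy cost
    todo = set(range(len(points))) - {start}
    tour, cur = [start], start
    while todo:
        nxt = min(todo, key=lambda j: cost(points[cur], points[j]))
        tour.append(nxt)
        todo.remove(nxt)
        cur = nxt
    return tour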
|
|
09:40-10:55, Paper WeAT1-01.5 |
Towards a Generic Diver-Following Algorithm: Balancing Robustness and Efficiency in Deep Visual Detection |
Islam, Md Jahidul | University of Minnesota-Twin Cities |
Fulton, Michael | University of Minnesota |
Sattar, Junaed | University of Minnesota |
Keywords: Field Robots, Marine Robotics, Human Detection and Tracking
Abstract: This paper explores the design and development of a class of robust diver detection algorithms for autonomous diver following applications. By considering the operational challenges for underwater visual tracking in diverse real-world settings, we formulate a set of desired features of a generic diver-following algorithm. We attempt to accommodate these features and maximize general tracking performance by exploiting the state-of-the-art deep object detection models. We fine-tune the building blocks of these models with a goal of balancing the trade-off between robustness and efficiency in an on-board setting under real-time constraints. Subsequently, we design an architecturally simple Convolutional Neural Network (CNN)-based diver detection model that is much faster than the state-of-the-art deep models yet provides comparable detection performances. In addition, we validate the performance and effectiveness of the proposed model through a number of diver-following experiments in closed-water and open-water environments.
|
|
09:40-10:55, Paper WeAT1-01.6 |
RoboScallop: A Bivalve Inspired Swimming Robot |
Robertson, Matthew | EPFL |
Efremov, Filip | EPFL |
Paik, Jamie | Ecole Polytechnique Federale De Lausanne |
Keywords: Biologically-Inspired Robots, Marine Robotics, Soft Material Robotics
Abstract: Underwater robots permit remote access to over 70% of the Earth’s surface that is covered in water for a variety of scientific, environmental, tactical, or industrial purposes. Many practical applications for robots in this setting include sensing, monitoring, exploration, reconnaissance, or inspection tasks. In the interest of expanding this activity and opportunity within aquatic environments, this paper describes a swimming robot with a simple, robust, and scalable design. RoboScallop is inspired by the locomotion of bivalve scallops, utilizing two articulating rigid shells and an elastic membrane to produce water jet propulsion. A one-DoF, reciprocating mechanism enclosed within the robot shells is used to generate pulsating thrust, and the performance of this novel swimming method is evaluated by characterization of the robot jet force and swimming speed. This is the first time jet propulsion is demonstrated for a robot swimming in normal, Newtonian fluid using a bivalve morphology. We found the robot metrics to be comparable to its biological counterpart but free from metabolic limitations which prevent sustained free swimming in living species. Leveraging this locomotion principle provides unique benefits over other existing underwater propulsion techniques, including robustness, scalability, resistance to entanglement, and possible implicit water treatment capabilities, toward further development of a new class of self-contained, hybrid-stiffness underwater robots.
|
|
WeAT1-02 Interactive Session, 220 |
Mapping and Reconstruction - 3.1.02 |
|
|
|
09:40-10:55, Paper WeAT1-02.1 |
Online Continuous Mapping Using Gaussian Process Implicit Surfaces |
Lee, Bhoram | University of Pennsylvania |
Zhang, Clark | University of Pennsylvania |
Huang, Zonghao | University of Pennsylvania |
Lee, Daniel | Cornell Tech |
Keywords: Mapping, Range Sensing, Perception for Grasping and Manipulation
Abstract: The representation of the environment strongly affects how robots can move and interact with it. This paper presents an online approach for continuous mapping using Gaussian Process Implicit Surfaces (GPIS). Compared with grid-based methods, GPIS better utilizes sparse measurements to represent the world seamlessly. It provides direct access to the signed-distance function (SDF) and its derivatives which are invaluable for other robotic tasks and it incorporates uncertainty in the sensor measurements. Our approach incrementally and efficiently updates GPIS by employing a regressor on observations and a spatial tree structure. The effectiveness of the suggested approach is demonstrated using simulations and real world 2D/3D data.
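
For concreteness, the batch form of GP regression of the signed-distance function is sketched below with a squared-exponential kernel; the paper's incremental update via a regressor on observations and a spatial tree structure is omitted, and the kernel and hyperparameters are assumptions.

import numpy as np

def gpis_sdf(X, y, Xq, length=0.5, noise=1e-2):
    # X: (n, d) observed points, y: (n,) signed-distance observations,
    # Xq: (m, d) query points; returns posterior mean and variance of the SDF.
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / length ** 2)
    K = k(X, X) + noise * np.eye(len(X))
    Ks = k(Xq, X)
    mean = Ks @ np.linalg.solve(K, y)
    var = 1.0 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
    return mean, var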
|
|
09:40-10:55, Paper WeAT1-02.2 |
Dense 3D Visual Mapping Via Semantic Simplification |
Morreale, Luca | Politecnico Di Milano |
Romanoni, Andrea | Politecnico Di Milano |
Matteucci, Matteo | Politecnico Di Milano |
Keywords: Mapping, Semantic Scene Understanding
Abstract: Dense 3D visual mapping estimates as many pixel depths as possible for each image. This results in very dense point clouds that often contain redundant and noisy information, especially for surfaces that are roughly planar, for instance, the ground or the walls in the scene. In this paper we leverage semantic image segmentation to discriminate which regions of the scene require simplification and which should be kept at a high level of detail. We propose four different point cloud simplification methods which decimate the perceived point cloud by relying on class-specific local and global statistics, while maintaining more points in the proximity of class boundaries to preserve the inter-class edges and discontinuities. A dense 3D model is obtained by fusing the point clouds in a 3D Delaunay Triangulation to deal with variable point cloud density. The experimental evaluation shows that, by leveraging semantics, it is possible to simplify the model and diminish the noise affecting the point clouds.
|
|
09:40-10:55, Paper WeAT1-02.3 |
Predicting the Layout of Partially Observed Rooms from Grid Maps |
Luperto, Matteo | Università Degli Studi Di Milano |
Arcerito, Valerio | Politecnico Di Milano |
Amigoni, Francesco | Politecnico Di Milano |
Keywords: Mapping
Abstract: In several applications, autonomous mobile robots benefit from knowing the structure of the indoor environments where they operate. This knowledge can be extracted from the metric maps built (e.g., using SLAM algorithms) from the data perceived by the robots' sensors. The layout is a way to represent the structure of an indoor environment with geometrical primitives. Most of the current methods for reconstructing the layout from a metric map represent the parts of the environment that have been fully observed. In this paper, we propose an approach that predicts the layout of rooms which are only partially known in a 2D metric grid map. The prediction is made according to the global structure of the environment, as identified from its known parts. Experiments show that our approach is able to effectively predict the layout of several indoor environments that have been observed to different degrees.
|
|
09:40-10:55, Paper WeAT1-02.4 |
Dense Surface Reconstruction from Monocular Vision and LiDAR |
Li, Zimo | Carnegie Mellon University |
Gogia, Prakruti | Carnegie Mellon University |
Kaess, Michael | Carnegie Mellon University |
Keywords: Mapping, SLAM, Range Sensing
Abstract: In this work, we develop a new surface reconstruction pipeline that combines monocular camera images and LiDAR measurements from a moving sensor rig to reconstruct dense 3D mesh models of indoor scenes. For surface reconstruction, the 3D LiDAR and camera are widely deployed for gathering geometric information from environments. Current state-of-the-art multi-view stereo or LiDAR-only reconstruction methods cannot reconstruct indoor environments accurately due to shortcomings of each sensor type. In our approach, LiDAR measurements are integrated into a multi-view stereo pipeline for point cloud densification and tetrahedralization. In addition to that, a graph cut algorithm is utilized to generate a watertight surface mesh. Because our proposed method leverages the complementary nature of these two sensors, the accuracy and completeness of the output model are improved. The experimental results on real world data show that our method significantly outperforms both the state-of-the-art camera-only and LiDAR-only reconstruction methods in accuracy and completeness.
|
|
09:40-10:55, Paper WeAT1-02.5 |
FSMI: Fast Computation of Shannon Mutual Information for Information-Theoretic Mapping |
Zhang, Zhengdong | Massachusetts Institute of Technology |
Henderson, Trevor | Massachusetts Institute of Technology |
Sze, Vivienne | Massachusetts Institute of Technology |
Karaman, Sertac | Massachusetts Institute of Technology |
Keywords: Mapping, Planning, Scheduling and Coordination, Motion and Path Planning
Abstract: Information-based mapping algorithms are critical to robot exploration tasks in several applications ranging from disaster response to space exploration. Unfortunately, most existing information-based mapping algorithms are plagued by the computational difficulty of evaluating the Shannon mutual information between potential future sensor measurements and the map. This has led researchers to develop approximate methods, such as Cauchy-Schwarz Quadratic Mutual Information (CSQMI). In this paper, we propose a new algorithm, called Fast Shannon Mutual Information (FSMI), which is significantly faster than existing methods at computing the exact Shannon mutual information. The key insight behind FSMI is recognizing that the integral over the sensor beam can be evaluated analytically, removing an expensive numerical integration. In addition, we provide a number of approximation techniques for FSMI, which significantly improve computation time. Equipped with these approximation techniques, the FSMI algorithm is more than three orders of magnitude faster than the existing computation for Shannon mutual information; it also outperforms the CSQMI algorithm significantly, being roughly twice as fast, in our experiments.
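
To make the quantity concrete, the sketch below evaluates the exact Shannon mutual information between a single beam measurement and the occupancy cells it traverses, for an idealised noise-free beam model where enumeration suffices; FSMI's contribution is the analytic evaluation under realistic noisy beam models, which this toy version does not reproduce.

import numpy as np

def bernoulli_entropy(p):
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def beam_mutual_information(p):
    # p[i]: prior occupancy probability of the i-th cell along the beam.
    H_prior = bernoulli_entropy(p).sum()
    reach = np.concatenate(([1.0], np.cumprod(1 - p)))  # P(beam reaches cell i)
    mi = 0.0
    for j in range(len(p)):          # outcome: first return at cell j
        pz = reach[j] * p[j]         # cells 0..j resolved, rest untouched
        mi += pz * (H_prior - bernoulli_entropy(p[j + 1:]).sum())
    mi += reach[-1] * H_prior        # outcome: no return, all cells freed
    return mi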
|
|
09:40-10:55, Paper WeAT1-02.6 |
Real-Time Scalable Dense Surfel Mapping |
Wang, Kaixuan | Hong Kong University of Science and Technology |
Gao, Fei | Hong Kong University of Science and Technology |
Shen, Shaojie | Hong Kong University of Science and Technology |
Keywords: Mapping, Sensor Fusion, Aerial Systems: Perception and Autonomy
Abstract: In this paper, we propose a novel dense surfel mapping system that scales well in different environments with only CPU computation. Using a sparse SLAM system to estimate camera poses, the proposed mapping system can fuse intensity images and depth images into a globally consistent model. The system is carefully designed so that it can build from room-scale environments to urban-scale environments using depth images from RGB-D cameras, stereo cameras or even a monocular camera. First, superpixels extracted from both intensity and depth images are used to model surfels in the system. Superpixel-based surfels make our method both run-time efficient and memory efficient. Second, surfels are further organized according to the pose graph of the SLAM system to achieve O(1) fusion time regardless of the scale of reconstructed models. Third, a fast map deformation using the optimized pose graph enables the map to achieve global consistency in real-time. The proposed surfel mapping system is compared with other state-of-the-art methods on synthetic datasets. The performances of urban-scale and room-scale reconstruction are demonstrated using the KITTI dataset and autonomous aggressive flights, respectively. The code is available for the benefit of the community.
|
|
WeAT1-03 Interactive Session, 220 |
Robots and Language - 3.1.03 |
|
|
|
09:40-10:55, Paper WeAT1-03.1 |
Inferring Compact Representations for Efficient Natural Language Understanding of Robot Instructions |
Patki, Siddharth | University of Rochester |
Daniele, Andrea F | Toyota Technological Institute at Chicago |
Walter, Matthew | Toyota Technological Institute at Chicago |
Howard, Thomas | University of Rochester |
Keywords: Cognitive Human-Robot Interaction, Semantic Scene Understanding
Abstract: The speed and accuracy with which robots are able to interpret natural language is fundamental to realizing effective human-robot interaction. A great deal of attention has been paid to developing models and approximate inference algorithms that improve the efficiency of language understanding. However, existing methods still attempt to reason over a representation of the environment that is flat and unnecessarily detailed, which limits scalability. An open problem is then to develop methods capable of producing the most compact environment model sufficient for accurate and efficient natural language understanding. We propose a model that leverages environment-related information encoded within instructions to identify the subset of observations and perceptual classifiers necessary to perceive a succinct, instruction-specific environment representation. The framework uses three probabilistic graphical models trained from a corpus of annotated instructions to infer salient scene semantics, perceptual classifiers, and grounded symbols. Experimental results on two robots operating in different environments demonstrate that by exploiting the content and the structure of the instructions, our method learns compact environment representations that significantly improve the efficiency of natural language symbol grounding.
|
|
09:40-10:55, Paper WeAT1-03.2 |
Improving Grounded Natural Language Understanding through Human-Robot Dialog |
Thomason, Jesse | University of Washington |
Padmakumar, Aishwarya | University of Texas at Austin |
Sinapov, Jivko | Tufts University |
Walker, Nick | The University of Washington |
Jiang, Yuqian | The University of Texas at Austin |
Yedidsion, Harel | University of Texas at Austin |
Hart, Justin | University of Texas at Austin |
Stone, Peter | University of Texas at Austin |
Mooney, Raymond | University of Texas at Austin |
Keywords: Social Human-Robot Interaction, Learning and Adaptive Systems
Abstract: Natural language understanding for robotics can require substantial domain- and platform-specific engineering. For example, for mobile robots to pick-and-place objects in an environment to satisfy human commands, we can specify the language humans use to issue such commands, and connect concept words like red can to physical object properties. One way to alleviate this engineering for a new domain is to enable robots in human environments to adapt dynamically---continually learning new language constructions and perceptual concepts. In this work, we present an end-to-end pipeline for translating natural language commands to discrete robot actions, and use clarification dialogs to jointly improve language parsing and concept grounding. We train and evaluate this agent in a virtual setting on Amazon Mechanical Turk, and we transfer the learned agent to a physical robot platform to demonstrate it in the real world.
|
|
09:40-10:55, Paper WeAT1-03.3 |
Prospection: Interpretable Plans from Language by Predicting the Future |
Paxton, Chris | NVIDIA Research |
Bisk, Yonatan | University of Washington |
Thomason, Jesse | University of Washington |
Byravan, Arunkumar | University of Washington |
Fox, Dieter | University of Washington |
Keywords: Visual Learning, Task Planning, Cognitive Human-Robot Interaction
Abstract: High-level human instructions often correspond to behaviors with multiple implicit steps. In order for robots to be useful in the real world, they must be able to reason over both motions and intermediate goals implied by human instructions. In this work, we propose a framework for learning representations that convert from a natural-language command to a sequence of intermediate goals for execution on a robot. A key feature of this framework is prospection, training an agent not just to correctly execute the prescribed command, but to predict a horizon of consequences of an action before taking it. We demonstrate the fidelity of plans generated by our framework when interpreting real, crowd-sourced natural language commands for a robot in simulated scenes.
|
|
09:40-10:55, Paper WeAT1-03.4 |
Flight, Camera, Action! Using Natural Language and Mixed Reality to Control a Drone |
Huang, Baichuan | Brown University |
Bayazit, Deniz | Brown University |
Ullman, Daniel | Brown University |
Gopalan, Nakul | Brown University |
Tellex, Stefanie | Brown |
Keywords: Virtual Reality and Interfaces, Cognitive Human-Robot Interaction, Aerial Systems: Perception and Autonomy
Abstract: In this paper, we present an interface that uses natural language grounding within an MR environment to solve high-level task and navigational instructions given to an autonomous drone. To the best of our knowledge, this is the first work to perform fully autonomous language grounding in an MR setting for a robot. Given a map, our interface first grounds natural language commands to reward specifications within a Markov Decision Process (MDP) framework. Then, it passes the reward specification to an MDP solver. Finally, the drone performs the desired operations in the real world while planning and localizing itself. Our approach uses MR to provide a set of known virtual landmarks, enabling the drone to understand commands referring to objects without being equipped with object detectors for multiple novel objects or a predefined environment model. We conducted an exploratory user study to assess users’ experience of our MR interface with and without natural language, as compared to a web interface. We found that users were able to command the drone more quickly via both MR interfaces as compared to the web interface, with roughly equal system usability scores across all three interfaces.
|
|
09:40-10:55, Paper WeAT1-03.5 |
An Interactive Scene Generation Using Natural Language |
Cheng, Yu | Michigan State University |
Shi, Yan | Michigan State University |
Sun, Zhiyong | The University of Hong Kong |
Feng, Dezhi | Michigan State University |
Dong, Lixin | Michigan State University |
Keywords: Social Human-Robot Interaction, Human-Centered Automation
Abstract: Scene generation is an important step of robotic drawing. Recent works have shown success in scene generation conditioned on text using a variety of approaches, but the generated scenes cannot be revised after generation. To allow modification of generated scenes, we model the scene generation process as a discrete event system. Instead of training text-to-pixel mappings using large datasets, the proposed approach uses object instances retrieved from the Internet to synthesize scenes. Evaluated on 128 experiments using the MSCOCO evaluation dataset, the results show the scene generation performance has increased by 197%, 22.3%, and 55.7% compared with the state-of-the-art approach on three standard metrics (CIDEr, ROUGE-L, METEOR), respectively. Human evaluation conducted on Amazon Mechanical Turk shows over 80% of generated scenes are considered to have higher recognizability and better alignment with natural language descriptions than baseline works.
|
|
09:40-10:55, Paper WeAT1-03.6 |
Efficient Generation of Motion Plans from Attribute-Based Natural Language Instructions Using Dynamic Constraint Mapping |
Park, Jae Sung | University of North Carolina at Chapel Hill |
Jia, Biao | University of Maryland at College Park |
Bansal, Mohit | Unc Chapel Hill |
Manocha, Dinesh | University of Maryland |
Keywords: AI-Based Methods, Task Planning, Manipulation Planning
Abstract: We present an algorithm for combining natural language processing (NLP) and fast robot motion planning to automatically generate robot movements. Our formulation uses a novel concept called Dynamic Constraint Mapping to transform complex, attribute-based natural language instructions into appropriate cost functions and parametric constraints for optimization-based motion planning. We generate a factor graph from natural language instructions called the Dynamic Grounding Graph (DGG), which takes latent parameters into account. The coefficients of this factor graph are learned based on conditional random fields (CRFs) and are used to dynamically generate the constraints for motion planning. We map the cost function directly to the motion parameters of the planner and compute smooth trajectories in dynamic scenes. We highlight the performance of our approach in a simulated environment and via a human interacting with a 7-DOF Fetch robot using intricate language commands including negation, orientation specification, and distance constraints.
|
|
WeAT1-04 Interactive Session, 220 |
Path Planning II - 3.1.04 |
|
|
|
09:40-10:55, Paper WeAT1-04.1 |
Safe and Fast Path Planning in Cluttered Environment Using Contiguous Free-Space Partitioning |
Sadhu, Arup Kumar | Tata Consultancy Services |
Shukla, Shubham | Tata Consultancy Services |
Bera, Titas | TCS Innovation Labs |
Dasgupta, Ranjan | TCS Research |
Keywords: Motion and Path Planning
Abstract: The paper proposes a path planning algorithm for cluttered environments and mazes. The proposed planning algorithm exploits the merits of convex optimization while forming the convex navigable free-spaces, ensuring safety of the vehicle. The contiguous convex free-spaces are iteratively computed from a random-walk based seed generation method to create a contiguous navigable geometry. Inside this contiguous navigable geometry an undirected graph is then created, in which each node and edge belongs to at least one convex region, which reduces the path planning problem to a graph search problem. In addition, the proposed multiple-query planning algorithm can merge the user-provided feasible initial and goal configurations with the existing undirected graph in each plan, without deteriorating the planning performance in terms of run-time and path length. Simulation and experimental results confirm the superiority of the proposed planning algorithm jointly in terms of both path length and run-time by a significant margin.
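
Once every node and edge of the graph lies inside a convex free region, the query phase reduces to a standard shortest-path search, as in the sketch below; the adjacency list and edge weights are assumed to come from the partitioning stage, and the goal is assumed reachable.

import heapq

def dijkstra(adj, start, goal):
    # adj[u] = [(v, w), ...]; every edge lies in some convex free region,
    # so the returned path is collision-free by construction.
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float('inf')):
            continue
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1]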
|
|
09:40-10:55, Paper WeAT1-04.2 |
Probabilistic Completeness of RRT for Geometric and Kinodynamic Planning with Forward Propagation |
Kleinbort, Michal | Tel Aviv University |
Solovey, Kiril | Stanford University |
Littlefield, Zakary | Rutgers University |
Bekris, Kostas E. | Rutgers, the State University of New Jersey |
Halperin, Dan | Tel Aviv University |
Keywords: Motion and Path Planning, Dynamics, Nonholonomic Motion Planning
Abstract: The Rapidly-exploring Random Tree (RRT) algorithm has been one of the most prevalent and popular motion-planning techniques for two decades now. Surprisingly, in spite of its centrality, there has been an active debate over the conditions under which RRT is probabilistically complete. We provide two new proofs of probabilistic completeness (PC) of RRT with a reduced set of assumptions. The first is for the purely geometric setting, where we only require that the solution path has a certain clearance from the obstacles. For the kinodynamic case with forward propagation of random controls and duration, we additionally require only mild Lipschitz-continuity conditions. These proofs fill a gap in the study of RRT itself. They also lay sound foundations for a variety of more recent and alternative sampling-based methods, whose PC property relies on that of RRT.
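
For reference, a minimal geometric RRT of the kind analysed in the paper is sketched below; the clearance assumption of the first proof only asks that some solution path keeps a positive distance from the obstacles. The collision checker and sampling bounds are assumed inputs.

import math
import random

def rrt(start, goal, segment_free, bounds, step=0.1, iters=5000, bias=0.05):
    # segment_free(p, q): True if the straight segment p-q is collision-free.
    nodes, parent = [start], {0: None}
    for _ in range(iters):
        s = goal if random.random() < bias else \
            tuple(random.uniform(lo, hi) for lo, hi in bounds)
        i = min(range(len(nodes)), key=lambda j: math.dist(nodes[j], s))
        near, d = nodes[i], math.dist(nodes[i], s)
        new = s if d <= step else tuple(a + step * (b - a) / d
                                        for a, b in zip(near, s))
        if segment_free(near, new):
            parent[len(nodes)] = i
            nodes.append(new)
            if math.dist(new, goal) <= step and segment_free(new, goal):
                return nodes, parent      # goal connected
    return nodes, parent                  # sampling budget exhausted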
|
|
09:40-10:55, Paper WeAT1-04.3 |
Contact-Implicit Trajectory Optimization Using Orthogonal Collocation |
Patel, Amir | University of Cape Town |
Shield, Stacey Leigh | University of Cape Town |
Kazi, Saif | Carnegie Mellon University |
Johnson, Aaron | Carnegie Mellon University |
Biegler, Lorenz | Carnegie Mellon University |
Keywords: Motion and Path Planning, Optimization and Optimal Control, Hybrid Logical/Dynamical Planning and Verification
Abstract: In this paper we propose a method to improve the accuracy of trajectory optimization for dynamic robots with intermittent contact by using orthogonal collocation. Until recently, most trajectory optimization methods for systems with contacts employed mode-scheduling, which requires a priori knowledge of the contact order and thus cannot produce complex or non-intuitive behaviors. Contact-implicit trajectory optimization methods offer a solution to this by allowing the optimization to make or break contacts as needed, but thus far have suffered from poor accuracy. Here, we combine methods from direct collocation using higher order orthogonal polynomials with contact-implicit optimization to generate trajectories with significantly improved accuracy. The key insight is to increase the order of the polynomial representation while maintaining the physics assumption that impact occurs over the duration of one finite element.
|
|
09:40-10:55, Paper WeAT1-04.4 |
Energy-Efficient Coverage Path Planning for General Terrain Surfaces |
Wu, Chenming | Tsinghua University |
Dai, Chengkai | Delft University of Technology |
Gong, Xiaoxi | Nanjing University of Aeronautics and Astronautics |
Liu, Yong-Jin | Tsinghua University |
Wang, Jun | Nanjing University of Aeronautics and Astronautics |
Gu, Xianfeng | Stony Brook University |
Wang, Charlie C.L. | The Chinese University of Hong Kong |
Keywords: Motion and Path Planning, Energy and Environment-Aware Automation, Field Robots
Abstract: This paper tackles the problem of energy-efficient coverage path planning for exploring general surfaces by an autonomous vehicle. Efficient algorithms are developed to generate paths on freeform 3D surfaces according to a special design pattern, the peak-aware smooth Fermat spiral, for this purpose. Using exact boundary-sourced geodesic distances, we first introduce how to generate Fermat spiral paths that cover a general surface. Then, heuristics for energy-efficiency are incorporated to add peak points of a height-field as sources for geodesic computation. Lastly, the paths are further smoothed on the given surface to avoid sharp turns. The paths generated by our method can significantly reduce the cost caused by gravity and turning. Physical experiments have been conducted on different terrain surfaces to demonstrate the effectiveness of our approach.
|
|
09:40-10:55, Paper WeAT1-04.5 |
A New Approach to Time-Optimal Path Parameterization Based on Reachability Analysis (I) |
Pham, Hung | Nanyang Technological University |
Pham, Quang-Cuong | NTU Singapore |
Keywords: Motion and Path Planning, Industrial Robots, Motion Control
Abstract: Time-Optimal Path Parameterization (TOPP) is a well-studied problem in robotics and has a wide range of applications. There are two main families of methods to address TOPP: Numerical Integration (NI) and Convex Optimization (CO). NI-based methods are fast but difficult to implement and suffer from robustness issues, while CO-based approaches are more robust but at the same time significantly slower. Here we propose a new approach to TOPP based on Reachability Analysis (RA). The key insight is to recursively compute reachable and controllable sets at discretized positions on the path by solving small Linear Programs (LPs). The resulting algorithm is faster than NI-based methods and as robust as CO-based ones (100% success rate), as confirmed by extensive numerical evaluations. Moreover, the proposed approach offers unique additional benefits: Admissible Velocity Propagation and robustness to parametric uncertainty can be derived from it in a simple and natural way.
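
The backward (controllable-set) pass is easy to state for the special case of box bounds on the path acceleration, where each of the paper's small LPs has a closed-form solution; the state x_i below is the squared path velocity at grid point s_i, with dynamics x_{i+1} = x_i + 2*ds*u_i. General linear constraints require the LPs of the full algorithm.

import numpy as np

def controllable_sets(x_max, u_min, u_max, ds):
    # x_max[i]: bound on squared path velocity at s_i;
    # u_min[i] <= u_i <= u_max[i]: path acceleration bounds (u_min < 0).
    N = len(x_max)
    K = np.zeros((N, 2))
    K[-1] = (0.0, x_max[-1])                    # terminal controllable set
    for i in range(N - 2, -1, -1):
        lo = max(0.0, K[i + 1, 0] - 2 * ds * u_max[i])
        hi = min(x_max[i], K[i + 1, 1] - 2 * ds * u_min[i])
        if lo > hi:                             # no valid parameterization
            raise ValueError(f"empty controllable set at stage {i}")
        K[i] = (lo, hi)
    return K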
|
|
09:40-10:55, Paper WeAT1-04.6 |
On Optimal Pursuit Trajectories for Visibility-Based Target-Tracking Game (I) |
Zou, Rui | MathWorks |
Bhattacharya, Sourabh | Iowa State University |
Keywords: Motion and Path Planning, Surveillance Systems, Visual Tracking
Abstract: In this paper, we address a class of visibility-based pursuit-evasion games in which a mobile observer tries to maintain a line-of-sight (LOS) with a mobile target in an environment containing obstacles. The observer knows the current position of the target as long as the target is in the observer's LOS. At first, we address this problem in an environment containing a single corner. We formulate the game as an optimal control problem of maximizing the time for which the observer can keep the reachability set of the target in its field-of-view. Using Pontryagin's principle, we show that the primitives for optimal motion of the observer are straight lines (ST) and spiral-like curves (C). Next, we present the synthesis of the optimal trajectories from any given initial position of the observer. We show that the optimal path of the observer belongs to the class {ST, C-ST, ST-C-ST}. Given any initial position of the target, we present a partition of the workspace around a corner based on the optimal control policy of the observer.
|
|
WeAT1-05 Interactive Session, 220 |
Learning from Demonstration II - 3.1.05 |
|
|
|
09:40-10:55, Paper WeAT1-05.1 |
Learning from Extrapolated Corrections |
Zhang, Jason | UC Berkeley |
Dragan, Anca | University of California Berkeley |
Keywords: Learning from Demonstration, Learning and Adaptive Systems, Physical Human-Robot Interaction
Abstract: Our goal is to enable robots to learn cost functions from user guidance. Often it is difficult or impossible for users to provide full demonstrations, so corrections have emerged as an easier guidance channel. However, when robots learn cost functions from corrections rather than demonstrations, they have to extrapolate a small amount of information – the change of a waypoint along the way – to the rest of the trajectory. We cast this extrapolation problem as online function approximation, which exposes different ways in which the robot can interpret what trajectory the person intended, depending on the function space used for the approximation. Our simulation results and user study suggest that using function spaces with non-Euclidean norms can better capture what users intend, particularly if environments are uncluttered. This, in turn, can lead to the robot learning a more accurate cost function and improves the user’s subjective perceptions of the robot.
|
|
09:40-10:55, Paper WeAT1-05.2 |
Merging Position and Orientation Motion Primitives |
Saveriano, Matteo | German Aerospace Center (DLR) |
Franzel, Felix | Technical University of Munich |
Lee, Dongheui | Technical University of Munich |
Keywords: Learning from Demonstration, Learning and Adaptive Systems
Abstract: In this paper, we focus on generating complex robotic trajectories by merging sequential motion primitives. A robotic trajectory is a time series of positions and orientations ending at a desired target. Hence, we first discuss the generation of converging pose trajectories via dynamical systems, providing a rigorous stability analysis. Then, we present approaches to merge motion primitives which represent both the position and the orientation part of the motion. The developed approaches preserve the shape of each learned movement and allow for continuous transitions among succeeding motion primitives. The presented methodologies are theoretically described and experimentally evaluated, showing that it is possible to generate a smooth pose trajectory out of multiple motion primitives.
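
One way to realise such a continuous transition, sketched for the position part only: a sigmoid hand-over between two stable dynamical-system primitives. The blending weight and its time constant are assumptions; the paper also treats the orientation part and proves stability of the merged system.

import numpy as np

def merged_velocity(x, t, f_prev, f_next, t_switch, tau=0.3):
    # f_prev, f_next: stable first-order DS primitives, f(x) -> velocity.
    z = np.clip(-(t - t_switch) / tau, -60.0, 60.0)  # avoid exp overflow
    a = 1.0 / (1.0 + np.exp(z))                      # 0 -> 1 across switch
    return (1.0 - a) * f_prev(x) + a * f_next(x)     # smooth hand-over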
|
|
09:40-10:55, Paper WeAT1-05.3 |
Learning Haptic Exploration Schemes for Adaptive Task Execution |
Eiband, Thomas | German Aerospace Center (DLR) |
Saveriano, Matteo | German Aerospace Center (DLR) |
Lee, Dongheui | Technical University of Munich |
Keywords: Learning from Demonstration, Learning and Adaptive Systems, Force and Tactile Sensing
Abstract: The recent generation of compliant robots enables kinesthetic teaching of novel skills by human demonstration. This enables strategies to transfer tasks to the robot in a more intuitive way than conventional programming interfaces. Programming physical interactions can be achieved by manually guiding the robot to learn the behavior from the motion and force data. To let the robot react to changes in the environment, force sensing can be used to identify constraints and act accordingly. While autonomous exploration strategies in the whole workspace are time-consuming, we propose a way to learn these schemes from human demonstrations in an object-targeted manner. The presented teaching strategy and learning framework allow the generation of adaptive robot behaviors relying on the robot's sense of touch in a systematically changing environment. A generated behavior consists of a hierarchical representation of skills, where haptic exploration skills are used to touch the environment with the end effector, and relative manipulation skills, which are parameterized according to previous exploration events. The effectiveness of the approach has been proven in a manipulation task, where the adaptive task structure is able to generalize to unseen object locations. The robot autonomously manipulates objects without relying on visual feedback.
|
|
09:40-10:55, Paper WeAT1-05.4 |
Learning Motion Trajectories from Phase Space Analysis of the Demonstration |
Gesel, Paul | University of New Hampshire |
Begum, Momotaz | University of New Hampshire |
LaRoche, Dain | University of New Hampshire |
Keywords: Learning from Demonstration, Learning and Adaptive Systems, Physical Human-Robot Interaction
Abstract: A major goal of learning from demonstration is task generalization via observation of a teacher. In this paper, we propose a novel framework for learning motion from a single demonstration. Our approach reconstructs the demonstrated trajectory's phase space curve via a linear piece-wise regression method. We approximate dynamics of trajectory segments with linear time invariant equations, each yielding closed form solutions. We show convergence to desired phase space states via an energy-based analysis. The robustness of the model is evaluated on a robot for a sequential trajectory task. Additionally, we show the advantages that the phase space model has over the dynamic motion primitive for a kinematic based task.
|
|
09:40-10:55, Paper WeAT1-05.5 |
Relationship between the Order for Motor Skill Transfer and Motion Complexity in Reinforcement Learning |
Cho, Nam Jun | Hanyang University |
Lee, Sang Hyoung | Korea Institute of Industrial Technology |
Suh, Il Hong | Hanyang University |
Kim, Hong-Seok | Korea Institute of Industrial Technology |
Keywords: Learning from Demonstration, Learning and Adaptive Systems
Abstract: We propose a method to generate an order for learning and transferring motor skills based on motion complexity, then evaluate the order to learn motor skills of a task and transfer them to another task as a form of reinforcement learning (RL). Here, motion complexity refers to the complexity calculated from multiple motion trajectories of a task. To do this, multiple human demonstrations are extracted and clustered to calculate motion complexity and identify the motor skills involved in a task. The motion trajectories of the task are then used to calculate the motion complexity considering temporal entropy and spatial entropy. Finally, both orders [Simple-to-Complex] and [Complex-to-Simple] are generated to learn and transfer motor skills based on the motion complexities of multiple tasks. To evaluate these orders, two tasks [Drawing] and [Fitting] are performed using an actual robotic arm. To verify the learning and transfer processes, we apply our method to three different figures as well as to pegs and holes of three different shapes and analyze the experimental results. In addition, we provide guidelines for using the [Simple-to-Complex] and [Complex-to-Simple] orders in RL.
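
A crude, hypothetical stand-in for the complexity measure: histogram-based Shannon entropies of where the demonstrations go (spatial) and of their per-step displacements (temporal), summed; the bin count and the time alignment of demonstrations are assumptions, and the paper defines its own entropy terms. Sorting tasks by this score would yield the [Simple-to-Complex] order.

import numpy as np

def motion_complexity(trajs, bins=16):
    # trajs: (n_demos, T, d) time-aligned demonstration trajectories.
    def entropy(samples):
        hist, _ = np.histogramdd(samples, bins=bins)
        p = hist.ravel() / hist.sum()
        p = p[p > 0]
        return -(p * np.log2(p)).sum()
    spatial = entropy(trajs.reshape(-1, trajs.shape[-1]))
    temporal = entropy(np.diff(trajs, axis=1).reshape(-1, trajs.shape[-1]))
    return spatial + temporal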
|
|
09:40-10:55, Paper WeAT1-05.6 |
Learning Task Priorities from Demonstrations (I) |
Silvério, João | Istituto Italiano Di Tecnologia |
Calinon, Sylvain | Idiap Research Institute |
Rozo, Leonel | Bosch Center for Artificial Intelligence |
Caldwell, Darwin G. | Istituto Italiano Di Tecnologia |
Keywords: Learning from Demonstration, Learning and Adaptive Systems, Humanoid Robots
Abstract: Bimanual operations in humanoids offer the possibility to carry out more than one manipulation task at the same time, which in turn introduces the problem of task prioritization. We address this problem from a learning from demonstration perspective, by extending the task-parameterized Gaussian mixture model to Jacobian and null space structures. The proposed approach is tested on bimanual skills but can be applied in any scenario where the prioritization between potentially conflicting tasks needs to be learned. We evaluate the proposed framework in: two different tasks with humanoids requiring the learning of priorities and a loco-manipulation scenario, showing that the approach can be exploited to learn the prioritization of multiple tasks in parallel.
|
|
WeAT1-06 Interactive Session, 220 |
Semantic Scene Understanding I - 3.1.06 |
|
|
|
09:40-10:55, Paper WeAT1-06.1 |
I Can See Clearly Now: Image Restoration Via De-Raining
Porav, Horia | University of Oxford |
Bruls, Tom | University of Oxford |
Newman, Paul | Oxford University |
Keywords: Semantic Scene Understanding, Deep Learning in Robotics and Automation, Performance Evaluation and Benchmarking
Abstract: We present a method for improving segmentation tasks on images affected by adherent rain drops and streaks. We introduce a novel stereo dataset recorded using a system that allows one lens to be affected by real water droplets while keeping the other lens clear. We train a denoising generator using this dataset and show that it is effective at removing the effect of real water droplets, in the context of image reconstruction and road marking segmentation. To further test our de-noising approach, we describe a method of adding computer-generated adherent water droplets and streaks to any images, and use this technique as a proxy to demonstrate the effectiveness of our model in the context of general semantic segmentation. We benchmark our results using the CamVid road marking segmentation dataset, Cityscapes semantic segmentation datasets and our own real-rain dataset, and show significant improvement on all tasks.
|
|
09:40-10:55, Paper WeAT1-06.2 |
Bonnet: An Open-Source Training and Deployment Framework for Semantic Segmentation in Robotics Using CNNs |
Milioto, Andres | University of Bonn |
Stachniss, Cyrill | University of Bonn |
Keywords: Object Detection, Segmentation and Categorization, Semantic Scene Understanding, Deep Learning in Robotics and Automation
Abstract: The ability to interpret a scene is an important capability for a robot that is supposed to interact with its environment. The knowledge of what is in front of the robot is, for example, relevant for navigation, manipulation, or planning. Semantic segmentation labels each pixel of an image with a class label and thus provides a detailed semantic annotation of the surroundings to the robot. Convolutional neural networks (CNNs) are popular methods for addressing this type of problem. The available software for training and the integration of CNNs for real robots, however, is quite fragmented and often difficult to use for non-experts, despite the availability of several high-quality open-source frameworks for neural network implementation and training. In this paper, we propose a tool called Bonnet, which addresses this fragmentation problem by building a higher abstraction that is specific for the semantic segmentation task. It provides a modular approach to simplify the training of a semantic segmentation CNN independently of the used dataset and the intended task. Furthermore, we also address the deployment on a real robotic platform. Thus, we do not propose a new CNN approach in this paper. Instead, we provide a stable and easy-to-use tool to make this technology more approachable in the context of autonomous systems. In this sense, we aim at closing a gap between computer vision research and its use in robotics research.
|
|
09:40-10:55, Paper WeAT1-06.3 |
Real-Time Joint Semantic Segmentation and Depth Estimation Using Asymmetric Annotations |
Nekrasov, Vladimir | University of Adelaide |
Dharmasiri, Thanuja | Monash University |
Spek, Andrew | Monash University |
Drummond, Tom | Monash University |
Shen, Chunhua | The University of Adelaide |
Reid, Ian | University of Adelaide |
Keywords: Visual Learning, Semantic Scene Understanding, SLAM
Abstract: Deployment of deep learning models in robotics as sensory information extractors can be a daunting task to handle, even using generic GPU cards. Here, we address three of its most prominent hurdles, namely, i) the adaptation of a single model to perform multiple tasks at once (in this work, we consider depth estimation and semantic segmentation crucial for acquiring geometric and semantic understanding of the scene), while ii) doing it in real-time, and iii) using asymmetric datasets with uneven numbers of annotations per each modality. To overcome the first two issues, we adapt a recently proposed real-time semantic segmentation network, making changes to further reduce the number of floating point operations. To approach the third issue, we embrace a simple solution based on hard knowledge distillation under the assumption of having access to a powerful `teacher' network. We showcase how our system can be easily extended to handle more tasks, and more datasets, all at once, performing depth estimation and segmentation both indoors and outdoors with a single model. Quantitatively, we achieve results equivalent to (or better than) current state-of-the-art approaches with one forward pass costing just 13ms and 6.5 GFLOPs on 640x480 inputs. This efficiency allows us to directly incorporate the raw predictions of our network into the SemanticFusion framework for dense 3D semantic reconstruction of the scene.
|
|
09:40-10:55, Paper WeAT1-06.4 |
Semantic Mapping for View-Invariant Relocalization |
Li, Jimmy | McGill University |
Meger, David Paul | McGill University |
Dudek, Gregory | McGill University |
Keywords: Semantic Scene Understanding, Visual-Based Navigation, SLAM
Abstract: We propose a system for visual simultaneous localization and mapping (SLAM) that combines traditional local appearance-based features with semantically meaningful object landmarks to achieve both accurate local tracking and highly view-invariant object-driven relocalization. Our mapping process uses a sampling-based approach to efficiently infer the 3D pose of object landmarks from 2D bounding box object detections. These 3D landmarks then serve as a view-invariant representation which we leverage to achieve camera relocalization even when the viewing angle changes by more than 125 degrees. This level of view-invariance cannot be attained by local appearance-based features (e.g. SIFT) since the same set of surfaces are not even visible when the viewpoint changes significantly. Our experiments show that even when existing methods fail completely for viewpoint changes of more than 70 degrees, our method continues to achieve a relocalization rate of around 90%, with a mean rotational error of around 8 degrees.
|
|
09:40-10:55, Paper WeAT1-06.5 |
Automatic Targeting of Plant Cells Via Cell Segmentation and Robust Scene-Adaptive Tracking |
Paranawithana, Ishara | Singapore University of Technology and Design |
Chau, Zhong Hoo | Singapore University of Technology and Design |
Yang, Liangjing | Zhejiang University |
Chen, Zhong | National Institute of Education, Nanyang Technological University |
Youcef-Toumi, Kamal | Massachusetts Institute of Technology |
Tan, U-Xuan | Singapore University of Technology and Design |
Keywords: Automation at Micro-Nano Scales, Automation in Life Sciences: Biotechnology, Pharmaceutical and Health Care, Visual Tracking
Abstract: Automatic targeting of plant cells to perform tasks like extraction of chloroplast is often desired in the study of plant biology. Hence, this paper proposes an improved cell segmentation method combined with a robust tracking algorithm for vision-guided micromanipulation in plant cells. The objective of this work is to develop an automatic plant cell detection and localization technique to complete the automated workflow for plant cell manipulation. The complex structural properties of plant cells make both segmentation of cells and visual tracking of the microneedle immensely challenging, unlike single animal cell applications. Thus, an improved version of watershed segmentation with adaptive thresholding is proposed to detect the plant cells without the need for staining of the cells or additional tedious preparations. To manipulate the needle to reach the identified centroid of the cells, tracking of the needle tip is required. Visual and motion information from two data sources, namely template tracking and the projected manipulator trajectory, are combined using score-based normalized weighted averaging to continuously track the microneedle. Experimental results validate the effectiveness of the proposed method by detecting plant cell centroids accurately, tracking the microneedle constantly and reaching the plant cell of interest despite the presence of visual disturbances.
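
The fusion rule named in the abstract admits a very small sketch, assuming the two confidence scores (e.g. a template-matching correlation and a trajectory-consistency measure) are non-negative and not both zero:

import numpy as np

def fuse_tip_estimates(p_template, s_template, p_traj, s_traj):
    # p_*: candidate needle-tip positions; s_*: non-negative confidences.
    w = np.array([s_template, s_traj], dtype=float)
    w /= w.sum()                       # normalise scores into weights
    return w[0] * np.asarray(p_template) + w[1] * np.asarray(p_traj)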
|
|
WeAT1-07 Interactive Session, 220 |
SLAM - Session VII - 3.1.07 |
|
|
|
09:40-10:55, Paper WeAT1-07.1 |
Real-Time Monocular Object-Model Aware Sparse SLAM |
Hosseinzadeh, Mehdi | The University of Adelaide |
Li, Kejie | The University of Adelaide |
Latif, Yasir | University of Adelaide |
Reid, Ian | University of Adelaide |
Keywords: SLAM
Abstract: Simultaneous Localization And Mapping (SLAM) is a fundamental problem in mobile robotics. While sparse point-based SLAM methods provide accurate camera localization, the generated maps lack semantic information. On the other hand, state-of-the-art object detection methods provide rich information about entities present in the scene from a single image. This work incorporates a real-time deep-learned object detector into the monocular SLAM framework, representing generic objects as quadrics that permit detections to be seamlessly integrated while maintaining real-time performance. Finer reconstruction of an object, learned by a CNN network, is also incorporated and provides a shape prior for the quadric, leading to further refinement. To capture the dominant structure of the scene, additional planar landmarks are detected by a CNN-based plane detector and modelled as independent landmarks in the map. Extensive experiments support our proposed inclusion of semantic objects and planar structures directly in the bundle adjustment of SLAM - Semantic SLAM - which enriches the reconstructed map semantically while significantly improving camera localization.
|
|
09:40-10:55, Paper WeAT1-07.2 | Add to My Program |
Probabilistic Projective Association and Semantic Guided Relocalization for Dense Reconstruction |
Yang, Sheng | Tsinghua University |
Kuang, Zheng-fei | Tsinghua University |
Cao, Yanpei | Tsinghua University |
Lai, Yu-Kun | Cardiff University |
Hu, Shi-Min | Tsinghua University |
Keywords: SLAM, RGB-D Perception, Object Detection, Segmentation and Categorization
Abstract: We present a real-time dense mapping system which uses predicted 2D semantic labels to optimize the geometric quality of reconstruction. With a combination of a CNN for 2D labeling and a SLAM system for camera trajectory estimation, recent approaches have succeeded in incrementally fusing and labeling 3D scenes. However, the geometric quality of the reconstruction can be further improved by incorporating such semantic prediction results, which is not sufficiently exploited by existing methods. In this paper, we propose to use semantic information to improve two crucial modules in the reconstruction pipeline, namely tracking and loop detection, to obtain mutual benefits in geometric reconstruction and semantic recognition. Specifically, for tracking, we use a novel probabilistic projective association approach to efficiently pick out candidate correspondences, where the confidence of these correspondences is quantified according to similarities over all available short-term invariant features. For loop detection, we incorporate these semantic labels into the original encoding through Randomized Ferns to generate a more comprehensive representation for retrieving candidate loop frames. Evaluations on a publicly available synthetic dataset show the effectiveness of our approach, which treats such semantic hints as a reliable feature for achieving higher geometric quality.
|
|
09:40-10:55, Paper WeAT1-07.3 | Add to My Program |
MRS-VPR: A Multi-Resolution Sampling Based Visual Place Recognition Method |
Yin, Peng | Carnegie Mellon University |
Rangaprasad, Arun Srivatsan | Carnegie Mellon University |
Chen, Yin | Beijing University of Posts and Telecommunications |
Li, Xueqian | SIA |
Zhang, Hongda | SIA |
Xu, Lingyun | Chinese Academy of Sciences |
Li, Lu | Carnegie Mellon University |
Jia, Zhenzhong | Carnegie Mellon University |
Ji, Jianmin | University of Science and Technology of China |
He, Yuqing | Shenyang Institute of Automation, Chinese Academy of Sciences |
Keywords: SLAM, Deep Learning in Robotics and Automation, Visual Learning
Abstract: Place recognition and loop closure detection are challenging for long-term visual navigation tasks. SeqSLAM is considered one of the most successful approaches to achieving long-term localization under varying environmental conditions and changing viewpoints, but it depends on a brute-force, time-consuming sequential matching method. We propose MRS-VPR, a multi-resolution, sampling-based place recognition method that can significantly improve matching efficiency and accuracy in sequential matching. The novelty of this method lies in its coarse-to-fine searching pipeline and a particle filter-based global sampling scheme, which balance matching efficiency and accuracy in long-term navigation tasks. Moreover, our model works much better than SeqSLAM when the testing sequence is of a much smaller scale than the reference sequence. Our experiments demonstrate that the proposed method is efficient in locating short temporary trajectories within long-term reference ones without losing accuracy compared to SeqSLAM.
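The coarse-to-fine search can be sketched as follows; this minimal version replaces the paper's particle-filter sampling with a plain windowed scan and assumes hypothetical per-frame descriptor arrays:

```python
import numpy as np

def coarse_to_fine_match(query, reference, levels=3):
    """Locate a short query sequence inside a long reference sequence by
    matching at a coarse temporal resolution first, then shrinking the
    search window at each finer level (vs. brute-force sequential search).
    query, reference: arrays of per-frame descriptors, shape (n, d)."""
    lo, hi = 0, len(reference)
    best_start = lo
    for level in reversed(range(levels)):
        step = 2 ** level                       # temporal subsampling factor
        q = query[::step]
        best_cost = np.inf
        for start in range(lo, hi - len(q) * step + 1, step):
            r = reference[start:start + len(q) * step:step]
            cost = np.linalg.norm(q - r, axis=1).sum()
            if cost < best_cost:
                best_cost, best_start = cost, start
        margin = 2 * step * len(q)              # shrink window around the match
        lo = max(0, best_start - margin)
        hi = min(len(reference), best_start + margin)
    return best_start
```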
|
|
09:40-10:55, Paper WeAT1-07.4 | Add to My Program |
Robust Low-Overlap 3-D Point Cloud Registration for Outlier Rejection |
Stechschulte, John | University of Colorado, Boulder |
Ahmed, Nisar | University of Colorado Boulder |
Heckman, Christoffer | University of Colorado at Boulder |
Keywords: SLAM, RGB-D Perception, Probability and Statistical Methods
Abstract: When registering 3-D point clouds it is expected that some points in one cloud do not have corresponding points in the other cloud. These non-correspondences are likely to occur near one another, as surface regions visible from one sensor pose are obscured or out of frame for another. In this work, a hidden Markov random field model is used to capture this prior within the framework of the iterative closest point algorithm. The EM algorithm is used to estimate the distribution parameters and learn the hidden component memberships. Experiments are presented demonstrating that this method outperforms several other outlier rejection methods when the point clouds have low or moderate overlap.
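The mixture at the heart of this approach can be sketched without the Markov random field coupling: residuals are explained by a Gaussian inlier component plus a uniform outlier component, and EM alternates between soft membership assignment and parameter re-estimation. Variable names are ours:

```python
import numpy as np

def e_step_inlier_responsibilities(residuals, sigma, outlier_ratio, outlier_density):
    """Posterior probability that each ICP correspondence is an inlier under a
    Gaussian-inlier / uniform-outlier mixture. (The paper additionally couples
    neighboring memberships with a hidden Markov random field.)"""
    inlier = (1.0 - outlier_ratio) * np.exp(-0.5 * (residuals / sigma) ** 2) \
             / (sigma * np.sqrt(2.0 * np.pi))
    outlier = outlier_ratio * outlier_density
    return inlier / (inlier + outlier)

def m_step_sigma(residuals, gamma):
    """Re-estimate the inlier noise scale from responsibility-weighted residuals."""
    return np.sqrt(np.sum(gamma * residuals ** 2) / np.sum(gamma))
```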
|
|
09:40-10:55, Paper WeAT1-07.5 | Add to My Program |
Unified Representation and Registration of Heterogeneous Sets of Geometric Primitives |
Nardi, Federico | Sapienza Univ of Rome |
Della Corte, Bartolomeo | Sapienza University of Rome |
Grisetti, Giorgio | Sapienza University of Rome |
Keywords: SLAM, Localization
Abstract: Registering models is an essential building block of many robotic applications. In the case of 3D data, the models to be aligned usually consist of point clouds. In this work we propose a formalism to represent, in a uniform manner, scenes consisting of high-level geometric primitives, including lines and planes. Additionally, we derive both an iterative and a direct method to determine the transformation between heterogeneous scenes (solver). We analyzed the convergence behavior of this solver on synthetic data. Furthermore, we conducted comparative experiments on a full registration pipeline that operates on raw data, implemented on top of our solver. To this end we used public benchmark datasets and compared against state-of-the-art approaches. Finally, we provide an implementation of our solver together with scripts to ease the reproduction of the results presented in this work.
|
|
09:40-10:55, Paper WeAT1-07.6 | Add to My Program |
Direct Relative Edge Optimization, a Robust Alternative for Pose Graph Optimization |
Jackson, James | Brigham Young University |
Brink, Kevin | AFRL |
Forsgren, Brendon | Brigham Young University |
Wheeler, David | Brigham Young University |
McLain, T.W. | Brigham Young University |
Keywords: SLAM, Mapping, Multi-Robot Systems
Abstract: Pose graph optimization is a common problem in robotics and associated fields. Most commonly, pose graph optimization is performed by finding the set of pose estimates that is most likely for a given set of measurements. In some situations, arbitrarily large errors in pose graph initialization are unavoidable and can cause these pose-based methods to diverge or fail. This paper details the parameterization of the classic pose graph problem in a relative context, optimizing directly over the relative edge constraints between vertices in the pose graph rather than over the poses themselves. Unlike previous literature on relative optimization, this paper details relative optimization over an entire pose graph, instead of a subset of edges, resulting in greater robustness to arbitrarily large errors than classic pose-based optimization or prior relative methods. Several small-scale simulation comparison studies and single- and multi-agent hardware experiments are presented, with results pointing to Relative Edge Optimization as a strong candidate for solving real-world pose graph optimization problems containing arbitrarily large initialization errors that have proven problematic for the global approach thus far.
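For reference, the classic pose-based formulation the paper departs from is the maximum-likelihood problem below (standard pose-graph notation, not taken from the paper); the relative parameterization instead treats the edge transforms themselves as the decision variables, with consistency enforced around cycles of the graph:

```latex
X^{\ast} = \operatorname*{arg\,min}_{X_1,\dots,X_n}
  \sum_{(i,j)\in\mathcal{E}}
  \bigl\| \log\bigl( Z_{ij}^{-1} \, X_i^{-1} X_j \bigr)^{\vee} \bigr\|_{\Sigma_{ij}}^2
```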
|
|
WeAT1-08 Interactive Session, 220 |
Add to My Program |
AI-Based Methods I - 3.1.08 |
|
|
|
09:40-10:55, Paper WeAT1-08.1 | Add to My Program |
Generalized Controllers in POMDP Decision-Making |
Wray, Kyle | University of Massachusetts Amherst |
Zilberstein, Shlomo | University of Massachusetts |
Keywords: AI-Based Methods, Autonomous Agents, Planning, Scheduling and Coordination
Abstract: We present a general policy formulation for partially observable Markov decision processes (POMDPs) called controller family policies that may be used as a framework to facilitate the design of new policy forms. We prove that modern approximate policy forms (point-based, finite-state controller (FSC), and belief compression) are instances of this family of generalized controller policies. Our analysis provides a deeper understanding of the POMDP model and suggests novel ways to design POMDP solutions that combine the benefits of different state-of-the-art methods. We illustrate this capability by creating a new customized POMDP policy form called the belief-integrated FSC (BI-FSC), tailored to overcome the shortcomings of a state-of-the-art algorithm that uses non-linear programming (NLP). Specifically, experiments show that the BI-FSC, solved with NLP, offers improved performance over a vanilla FSC-based policy form on benchmark domains. Furthermore, we demonstrate the BI-FSC's execution on a real robot navigating in a maze environment. Results confirm the value of using the controller family policy as a framework to design customized policies in POMDP robotic solutions.
|
|
09:40-10:55, Paper WeAT1-08.2 | Add to My Program |
Continuous Value Iteration (CVI) Reinforcement Learning and Imaginary Experience Replay (IER) for Learning Multi-Goal, Continuous Action and State Space Controllers |
Gerken, Andreas Konrad Richard | Technische Universität Berlin |
Spranger, Michael | Sony Computer Science Laboratories Inc |
Keywords: AI-Based Methods, Learning and Adaptive Systems, Optimization and Optimal Control
Abstract: This paper presents a novel model-free Reinforcement Learning algorithm for learning behavior in continuous action, state, and goal spaces. The algorithm approximates optimal value functions using non-parametric estimators. It is able to efficiently learn to reach multiple arbitrary goals in deterministic and nondeterministic environments. To improve generalization in the goal space, we propose a novel sample augmentation technique. Using these methods, robots learn faster and obtain overall better controllers. We benchmark the performance using simulation and a real-world voltage-control robot that learns to control a Cartesian task space without direct sensory access.
|
|
09:40-10:55, Paper WeAT1-08.3 | Add to My Program |
IX-BSP: Belief Space Planning through Incremental Expectation |
Farhi, Elad I. | Technion - Israel Institute of Technology |
Indelman, Vadim | Technion - Israel Institute of Technology |
Keywords: AI-Based Methods, Path Planning for Multiple Mobile Robots or Agents, Optimization and Optimal Control
Abstract: Belief space planning (BSP) is a fundamental problem in robotics. Determining an optimal action quickly grows intractable, as it involves calculating the expected accumulated cost (reward), where the expectation accounts for all future measurement realizations. State-of-the-art approaches therefore resort to simplifying assumptions and approximations to reduce computational complexity. Importantly, while re-planning is essential in robotics, these approaches calculate each planning session from scratch. In this work we contribute a novel approach, iX-BSP, based on the key insight that calculations in consecutive planning sessions are similar in nature and can thus be re-used. Our approach performs incremental calculation of the expectation by appropriately re-using computations already performed in a precursory planning session, while accounting for the information obtained in inference between the two planning sessions. The formulation of our approach considers general distributions and accounts for data association aspects. We evaluate iX-BSP in statistical simulation and show that incremental expectation calculations significantly reduce runtime without impacting performance.
|
|
09:40-10:55, Paper WeAT1-08.4 | Add to My Program |
What Am I Touching? Learning to Classify Terrain Via Haptic Sensing |
Bednarek, Jakub | Poznań University of Technology |
Bednarek, Michał | Poznan University of Technology |
Wellhausen, Lorenz | ETH Zürich |
Hutter, Marco | ETH Zurich |
Walas, Krzysztof, Tadeusz | Poznan University of Technology |
Keywords: AI-Based Methods, Legged Robots, Force and Tactile Sensing
Abstract: Mobile robots are becoming very popular in real-world outdoor applications, where there are many challenges in robot control and perception. One of the most critical problems is to characterise the terrain traversed by the robot. This knowledge is indispensable for optimal terrain negotiation. Currently, most approaches perform terrain classification from vision, but there is not enough research on terrain identification from direct interaction of the robot with the environment. In our work, we propose new methods for classifying force/torque data from the interaction of a legged robot's foot with the ground, gathered during walking. We provide machine learning methods for terrain classification from raw force/torque signals, for which we achieve 93% accuracy on a challenging dataset with 160 minutes of recorded fixed-length steps. We also worked on a dataset where the assumption of a fixed-length step is not valid; in this case, the final result is around 80% accuracy. Most importantly, the data in both cases was recorded while the robot was walking; no particular movements or controlled environment were needed. Additionally, we propose a clustering method that allows us to learn class membership from the recorded data only, without any human supervision.
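As a rough sketch of the learning setup (the paper learns from raw signals with deep models; a classic classifier on flattened force/torque windows with stand-in random data illustrates only the data shapes and workflow):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Stand-in data: one 6-axis force/torque window per step, flattened.
rng = np.random.default_rng(0)
n_steps, window, channels, n_terrains = 800, 100, 6, 4
X = rng.normal(size=(n_steps, window * channels))    # hypothetical F/T windows
y = rng.integers(0, n_terrains, size=n_steps)        # terrain class labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("terrain classification accuracy:", clf.score(X_te, y_te))
```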
|
|
09:40-10:55, Paper WeAT1-08.5 | Add to My Program |
Multi-Object Search Using Object-Oriented POMDPs |
Wandzel, Arthur | Brown University |
Oh, Yoonseon | Brown University |
Fishman, Michael | Brown University |
Kumar, Nishanth | Brown University |
Wong, Lawson L.S. | Northeastern University |
Tellex, Stefanie | Brown |
Keywords: AI-Based Methods
Abstract: A core capability of robots is to reason about multiple objects under uncertainty. Partially Observable Markov Decision Processes (POMDPs) provide a means of reasoning under uncertainty for sequential decision making, but are computationally intractable in large domains. In this paper, we propose Object-Oriented POMDPs (OO-POMDPs), which represent the state and observation spaces in terms of classes and objects. The structure afforded by OO-POMDPs supports a factorization of the agent's belief into independent object distributions, which enables the size of the belief to scale linearly rather than exponentially in the number of objects. We formulate a novel Multi-Object Search (MOS) task as an OO-POMDP for mobile robotics domains in which the agent must find the locations of multiple objects. Our solution exploits the structure of OO-POMDPs by using human language to selectively update the belief at task onset. Using this structure, we develop a new algorithm for efficiently solving OO-POMDPs: Object-Oriented Partially Observable Monte-Carlo Planning (OO-POMCP). We show that OO-POMCP with grounded language commands is sufficient for solving challenging MOS tasks both in simulation and on a physical mobile robot.
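The factorization argument can be made concrete with a small sketch: each object keeps its own distribution over locations, so a belief update touches each factor once (linear in the number of objects) instead of a joint distribution over all object configurations (exponential). Names are illustrative:

```python
import numpy as np

def update_factored_belief(beliefs, likelihoods):
    """Bayes update of a belief factored into independent per-object factors.

    beliefs[k][c]     = P(object k is at location c)
    likelihoods[k][c] = P(observation | object k is at location c)
    """
    updated = []
    for b, l in zip(beliefs, likelihoods):
        post = b * l
        updated.append(post / post.sum())   # renormalize each factor separately
    return updated

# Two objects over 5 candidate locations: 2 * 5 numbers, not 5 ** 2.
beliefs = [np.full(5, 0.2), np.full(5, 0.2)]
likelihoods = [np.array([0.9, 0.1, 0.1, 0.1, 0.1]), np.ones(5)]
print(update_factored_belief(beliefs, likelihoods))
```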
|
|
09:40-10:55, Paper WeAT1-08.6 | Add to My Program |
Depth Generation Network: Estimating Real World Depth from Stereo and Depth Images |
Dong, Zhipeng | Northeastern University |
Gao, Yi | Northeastern University |
Ren, Qinyuan | Zhejiang University |
Yan, Yunhui | Northeastern University |
Chen, Fei | Istituto Italiano Di Tecnologia |
Keywords: AI-Based Methods, RGB-D Perception, Range Sensing
Abstract: In this work, we propose the Depth Generation Network (DGN) to address the problem of dense depth estimation by exploiting the variational method and deep-learning techniques. In particular, we focus on improving the feasibility of depth estimation under complex scenarios given stereo RGB images, where the stereo pairs and/or depth ground truth captured by real sensors may be deteriorated, and the stereo setting parameters may be unavailable or unreliable, hence hampering efforts to establish the correspondence between image pairs via supervised learning or epipolar geometric cues. Instead of relying on real data, we supervise the training of our model using synthetic depth maps generated by a simulator, which delivers complex scenes and reliable data with ease. Two non-trivial challenges arise: (i) attaining a reasonable amount of realistic samples for training, and (ii) developing a model that adapts to both synthetic and real scenes. In this work we mainly address the latter, while leveraging the state-of-the-art Falling Things (FAT) dataset to overcome the first. Experiments on the FAT and KITTI datasets demonstrate that our model estimates relative dense depth in fine detail, potentially generalizable to real scenes without knowing the stereo geometric and optical settings.
|
|
WeAT1-09 Interactive Session, 220 |
Add to My Program |
Perception for Manipulation III - 3.1.09 |
|
|
|
09:40-10:55, Paper WeAT1-09.1 | Add to My Program |
Multi-Task Template Matching for Object Detection, Segmentation and Pose Estimation Using Depth Images |
Park, Kiru | TU Wien |
Patten, Timothy | Technical University of Vienna |
Prankl, Johann | University of Technology Vienna |
Vincze, Markus | Vienna University of Technology |
Keywords: Perception for Grasping and Manipulation, RGB-D Perception, Computer Vision for Automation
Abstract: Template matching has been shown to accurately estimate the pose of a new object given a limited number of samples. However, pose estimation of occluded objects is still challenging. Furthermore, many robot application domains encounter texture-less objects for which depth images are more suitable than color images. In this paper, we propose a novel framework, Multi-Task Template Matching (MTTM), that finds the nearest template of a target object from a depth image while predicting segmentation masks and a pose transformation between the template and a detected object in the scene using the same feature map of the object region. The proposed feature comparison network computes segmentation masks and pose predictions by comparing feature maps of templates and cropped features of a scene. The segmentation result from this network improves the robustness of the pose estimation by excluding points that do not belong to the object. Experimental results show that MTTM outperforms baseline methods for segmentation and pose estimation of occluded objects despite using only depth images.
|
|
09:40-10:55, Paper WeAT1-09.2 | Add to My Program |
A Clustering Approach to Categorizing 7 Degree-Of-Freedom Arm Motions During Activities of Daily Living |
Gloumakov, Yuri | Yale University |
Spiers, Adam | Max Planck Institute for Intelligent Systems |
Dollar, Aaron | Yale University |
Keywords: Perception for Grasping and Manipulation, Dual Arm Manipulation
Abstract: In this paper we present a novel method of categorizing naturalistic human arm motions during activities of daily living using clustering techniques. While many current approaches attempt to define all arm motions using heuristic interpretation, or a combination of several abstract motion primitives, our unsupervised approach generates a hierarchical description of natural human motion with well-recognized groups. Reliable recommendation of a subset of motions for task achievement is beneficial to various fields, such as robotic and semi-autonomous prosthetic device applications. The proposed method makes use of well-known techniques such as dynamic time warping (DTW) to obtain a divergence measure between motion segments, DTW barycenter averaging (DBA) to compute a motion average, and Ward's distance criterion to build the hierarchical tree. The clusters that emerge summarize the variety of recorded motions into the following general tasks: reach-to-front, transfer-box, drinking from a vessel, on-table motion, turning a key or door knob, and reach-to-back-pocket. The clustering methodology is justified by comparing against an alternative measure of divergence using Bezier coefficients and K-medoids clustering.
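Since the pipeline is built from well-known components, it can be sketched end to end; this toy version uses 1-D stand-in trajectories and omits the DBA averaging step:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def dtw(a, b):
    """Dynamic time warping divergence between two 1-D motion segments."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Stand-in for recorded arm motion segments of varying length.
rng = np.random.default_rng(1)
motions = [rng.normal(size=rng.integers(40, 60)).cumsum() for _ in range(12)]

# Pairwise DTW divergences, then a hierarchical tree via Ward's criterion.
n = len(motions)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        dist[i, j] = dist[j, i] = dtw(motions[i], motions[j])
tree = linkage(squareform(dist), method="ward")
labels = fcluster(tree, t=4, criterion="maxclust")   # cut into 4 clusters
```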
|
|
09:40-10:55, Paper WeAT1-09.3 | Add to My Program |
Factored Pose Estimation of Articulated Objects Using Efficient Nonparametric Belief Propagation |
Desingh, Karthik | University of Michigan |
Lu, Shiyang | University of Michigan, Ann Arbor |
Opipari, Anthony | University of Michigan |
Jenkins, Odest Chadwicke | University of Michigan |
Keywords: Perception for Grasping and Manipulation
Abstract: Robots working in human environments often encounter a wide range of articulated objects, such as tools, cabinets, and other jointed objects. Such articulated objects can take an infinite number of possible poses, as a point in a potentially high-dimensional continuous space. A robot must perceive this continuous pose in order to manipulate the object to a desired pose. This problem of perception and manipulation of articulated objects remains a challenge due to its high dimensionality and multi-modal uncertainty. In this paper, we propose a factored approach to estimate the poses of articulated objects using an efficient nonparametric belief propagation algorithm. We consider inputs as geometrical models with articulation constraints, and observed 3D sensor data. The proposed framework produces object-part pose beliefs iteratively. The problem is formulated as a pairwise Markov Random Field (MRF) where each hidden node (continuous pose variable) models an observed object-part's pose and each edge denotes an articulation constraint between a pair of parts. We propose articulated pose estimation by a Pull Message Passing algorithm for Nonparametric Belief Propagation (PMPNBP) and evaluate its convergence properties over scenes with articulated objects.
|
|
09:40-10:55, Paper WeAT1-09.4 | Add to My Program |
Domain Randomization for Active Pose Estimation |
Ren, Xinyi | University of California, Berkeley |
Luo, Jianlan | UC Berkeley |
Solowjow, Eugen | Siemens Corporation |
Aparicio Ojea, Juan | Siemens |
Gupta, Abhishek | UC Berkeley |
Tamar, Aviv | UC Berkeley |
Abbeel, Pieter | UC Berkeley |
Keywords: Perception for Grasping and Manipulation, Simulation and Animation
Abstract: Accurate state estimation is a fundamental component of robotic control. In robotic manipulation tasks, which are the focus of this work, state estimation is essential for identifying the positions of objects in the scene, forming the basis of the manipulation plan. However, pose estimation typically requires expensive 3D cameras or additional instrumentation such as fiducial markers to perform accurately. Recently, Tobin et al. introduced an approach to pose estimation based on domain randomization, where a neural network is trained to predict pose directly from a 2D image of the scene. The network is trained on computer-generated images with high variation in textures and lighting, thereby generalizing to real-world images. In this work, we investigate how to improve the accuracy of domain-randomization-based pose estimation. Our main idea is that active perception, moving the robot to get a better estimate of pose, can be trained in simulation and transferred to the real world using domain randomization. In our approach, the robot learns in a domain-randomized simulation how to estimate pose from a sequence of images. We show that our approach can significantly improve the accuracy of standard pose estimation in several scenarios: when the robot holding an object moves, or when reference objects are moved in the scene.
|
|
09:40-10:55, Paper WeAT1-09.5 | Add to My Program |
GraspFusion: Realizing Complex Motion by Learning and Fusing Grasp Modalities with Instance Segmentation |
Hasegawa, Shun | The University of Tokyo |
Wada, Kentaro | The University of Tokyo |
Kitagawa, Shingo | University of Tokyo |
Uchimi, Yuto | The University of Tokyo |
Okada, Kei | The University of Tokyo |
Inaba, Masayuki | The University of Tokyo |
Keywords: Perception for Grasping and Manipulation, Visual Learning, Grasping
Abstract: Recent progress in deep learning has improved the capability of a robot to find a proper grasp of a novel object for different grasp modalities (e.g., pinch and suction). While these previous studies consider multiple modalities separately, several works have developed multi-modal grippers that can achieve simultaneous pinch and suction grasps (multi-modal grasp fusion) for more capable and stable object manipulation. However, previous studies with these grippers are restricted to simple object geometries and uncluttered environments. To overcome these difficulties, we propose a system that consists of: 1) object-class-agnostic grasp modality detection; 2) object-class-agnostic instance segmentation; and 3) grasp template matching for different modalities. The key idea of our work is the introduction of instance segmentation to fuse multiple modalities for each instance while avoiding grasping multiple objects at once. In the experiments, we evaluated the proposed system on a real-world picking task in clutter. The experimental results show the effectiveness of the modality detection, the instance segmentation, and the integrated system as a whole.
|
|
09:40-10:55, Paper WeAT1-09.6 | Add to My Program |
Factored Contextual Policy Search with Bayesian Optimization |
Pinsler, Robert | University of Cambridge |
Karkus, Peter | National University of Singapore |
Kupcsik, Andras | Bosch Center for AI |
Hsu, David | National University of Singapore |
Lee, Wee Sun | National University of Singapore |
Keywords: Learning and Adaptive Systems
Abstract: Scarce data is a major challenge to scaling robot learning to truly complex tasks, as we need to generalize locally learned policies over different task contexts. Contextual policy search offers data-efficient learning and generalization by explicitly conditioning the policy on a parametric context space. In this paper, we further structure the contextual policy representation. We propose to factor contexts into two components: target contexts that describe the task objectives, e.g. target position for throwing a ball; and environment contexts that characterize the environment, e.g. initial position or mass of the ball. Our key observation is that experience can be directly generalized over target contexts. We show that this can be easily exploited in contextual policy search algorithms. In particular, we apply factorization to a Bayesian optimization approach to contextual policy search both in sampling-based and active learning settings. Our simulation results show faster learning and better generalization in various robotic domains. See our supplementary video: https://youtu.be/MNTbBAOufDY.
|
|
WeAT1-10 Interactive Session, 220 |
Add to My Program |
Object Recognition & Segmentation III - 3.1.10 |
|
|
|
09:40-10:55, Paper WeAT1-10.1 | Add to My Program |
Structured Domain Randomization: Bridging the Reality Gap by Context-Aware Synthetic Data |
Prakash, Aayush | NVIDIA |
Boochoon, Shaad | NVIDIA |
Brophy, Mark Austin | NVIDIA |
Acuna, David | Nvidia/ University of Toronto |
Cameracci, Eric | NVIDIA |
State, Gavriel | NVIDIA |
Shapira, Omer | NVIDIA |
Birchfield, Stan | NVIDIA |
Keywords: Object Detection, Segmentation and Categorization, Computer Vision for Transportation, Deep Learning in Robotics and Automation
Abstract: We present structured domain randomization (SDR), a variant of domain randomization (DR) that takes into account the structure of the scene in order to add context to the generated data. In contrast to DR, which places objects and distractors randomly according to a uniform probability distribution, SDR places objects and distractors randomly according to probability distributions that arise from the specific problem at hand. In this manner, SDR-generated imagery enables the neural network to take the context around an object into consideration during detection. We demonstrate the power of SDR for the problem of 2D bounding box car detection, achieving competitive results on real data after training only on synthetic data. On the KITTI easy, moderate, and hard tasks, we show that SDR outperforms other approaches to generating synthetic data (VKITTI, Sim 200k, or DR), as well as real data collected in a different domain (BDD100K). Moreover, synthetic SDR data combined with real KITTI data outperforms real KITTI data alone.
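The difference between DR and SDR placement comes down to which distribution object poses are drawn from; the lane geometry below is a hypothetical straight-road simplification, not the paper's scene generator:

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_dr_scene(n_cars, extent=50.0):
    """Plain domain randomization: cars placed uniformly anywhere."""
    return rng.uniform(0.0, extent, size=(n_cars, 2))

def sample_sdr_scene(n_cars, lane_centers=(2.0, 6.0), lane_sigma=0.3, extent=50.0):
    """Structured domain randomization: cars drawn from a problem-specific
    distribution, here concentrated along the lanes of a straight road."""
    x = rng.uniform(0.0, extent, size=n_cars)       # position along the road
    lanes = rng.choice(lane_centers, size=n_cars)   # pick a lane per car
    y = rng.normal(lanes, lane_sigma)               # small lateral offset
    return np.column_stack([x, y])
```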
|
|
09:40-10:55, Paper WeAT1-10.2 | Add to My Program |
Probabilistic Active Filtering for Object Search in Clutter |
Poon, James | NAIST |
Cui, Yunduan | Nara Institute of Science and Technology |
Ooga, Jun'ichiro | Toshiba Corporation |
Ogawa, Akihito | TOSHIBA CORPORATION |
Matsubara, Takamitsu | Nara Institute of Science and Technology |
Keywords: Probability and Statistical Methods, Learning and Adaptive Systems, Optimization and Optimal Control
Abstract: This paper proposes a probabilistic approach for object search in clutter. Due to heavy occlusions, it is vital for an agent to be able to gradually reduce uncertainty in observations of the objects in its workspace by systematically rearranging them. Probabilistic methodologies present a promising sample-efficient alternative to handle the massively complex state-action space that inherently comes with this problem, avoiding the need for both exhaustive training samples and the accompanying heuristics for traversing a large-scale model during runtime. We approach the object search problem by extending a Gaussian Process active filtering strategy with an additional model for capturing state dynamics as the objects are moved over the course of the activity. This allows viable models to be built upon relatively scarce training data, while the complexity of the action space is also reduced by shifting objects over relatively short distances. Validation in both simulation and with a real Baxter robot with a limited number of training samples demonstrates the efficacy of the proposed approach.
|
|
09:40-10:55, Paper WeAT1-10.3 | Add to My Program |
Robust 3D Object Classification by Combining Point Pair Features and Graph Convolution |
Weibel, Jean-Baptiste | TU Wien |
Patten, Timothy | Technical University of Vienna |
Vincze, Markus | Vienna University of Technology |
Keywords: Object Detection, Segmentation and Categorization, Visual Learning, Computer Vision for Other Robotic Applications
Abstract: Object classification is an important capability for robots, as it provides vital semantic information that underpins most practical high-level tasks. Classic handcrafted features, such as point pair features, have demonstrated their robustness for this task. Combining these features with modern deep learning methods provides discriminative features that are rotation invariant and robust to various sources of noise. In this work, we aim to improve the descriptiveness of point pair features while retaining their robustness. We propose a method to achieve more structured sampling of pairs and combine this information through the use of graph convolutional networks. We introduce a novel attention model based on a repeatable local reference frame. Experiments show that our approach significantly improves the state of the art for object classification on large-scale reconstructions such as the Stanford 3D indoor dataset and ScanNet, and obtains competitive accuracy on the artificial dataset ModelNet.
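The classic point pair feature the paper builds on (in the style of Drost et al.) is simple to state; the structured sampling, graph convolution, and attention stages are omitted here:

```python
import numpy as np

def point_pair_feature(p1, n1, p2, n2):
    """4-D point pair feature of two oriented points: the pair distance and
    the three angles between the normals and the difference vector."""
    d = p2 - p1
    dist = np.linalg.norm(d)
    d_unit = d / dist

    def angle(u, v):
        return np.arccos(np.clip(np.dot(u, v), -1.0, 1.0))

    return np.array([dist, angle(n1, d_unit), angle(n2, d_unit), angle(n1, n2)])
```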
|
|
09:40-10:55, Paper WeAT1-10.4 | Add to My Program |
Discrete Rotation Equivariance for Point Cloud Recognition |
Li, Jiaxin | National University of Singapore |
Bi, Yingcai | National University of Singapore |
Lee, Gim Hee | National University of Singapore |
Keywords: Object Detection, Segmentation and Categorization, AI-Based Methods, Computer Vision for Automation
Abstract: Despite the recent active research on processing point clouds with deep networks, little attention has been paid to the sensitivity of the networks to rotations. In this paper, we propose a deep learning architecture that achieves discrete SO(2)/SO(3) rotation equivariance for point cloud recognition. Specifically, rotating an input point cloud by an element of the rotation group amounts to shuffling the feature vectors generated by our approach. The equivariance is easily reduced to invariance by eliminating the permutation with operations such as maximum or average. Our method can be directly applied to any existing point cloud based network, resulting in significant improvements in performance for rotated inputs. We show state-of-the-art results in classification tasks with various datasets under both SO(2) and SO(3) rotations. In addition, we further analyze the necessary conditions for applying our approach to PointNet based networks.
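The equivariance-then-invariance idea can be sketched for the SO(2) case: features of all rotated copies are stacked, rotating the input then cyclically shifts the stack, and a symmetric reduction such as the maximum removes the permutation. The feature function is a stand-in for any point cloud network:

```python
import numpy as np

def rotations_so2(n):
    """The cyclic group of n discrete in-plane rotations."""
    angles = 2.0 * np.pi * np.arange(n) / n
    return [np.array([[np.cos(a), -np.sin(a)],
                      [np.sin(a),  np.cos(a)]]) for a in angles]

def equivariant_features(points, feature_fn, n=8):
    """Stack features of all rotated copies of a 2-D point cloud; rotating
    the input by a group element cyclically shifts the rows of this stack."""
    return np.stack([feature_fn(points @ R.T) for R in rotations_so2(n)])

def invariant_features(points, feature_fn, n=8):
    """Reduce equivariance to invariance with a symmetric operation (max)."""
    return equivariant_features(points, feature_fn, n).max(axis=0)
```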
|
|
09:40-10:55, Paper WeAT1-10.5 | Add to My Program |
MVX-Net: Multimodal VoxelNet for 3D Object Detection |
Sindagi, Vishwanath | Johns Hopkins University |
Zhou, Yin | Apple |
Tuzel, Oncel | Apple |
Keywords: Object Detection, Segmentation and Categorization, Computer Vision for Other Robotic Applications
Abstract: Many recent works on 3D object detection have focused on designing neural network architectures that can consume point cloud data. While these approaches demonstrate encouraging performance, they are typically based on a single modality and are unable to leverage information from other modalities, such as a camera. Although a few approaches fuse data from different modalities, these methods either use a complicated pipeline to process the modalities sequentially, or perform late-fusion and are unable to learn interaction between different modalities at early stages. In this work, we present PointFusion and VoxelFusion: two simple yet effective early-fusion approaches to combine the RGB and point cloud modalities, by leveraging the recently introduced VoxelNet architecture. Evaluation on the KITTI dataset demonstrates significant improvements in performance over approaches which only use point cloud data. Furthermore, the proposed method provides results competitive with the state-of-the-art multimodal algorithms, achieving top-2 ranking in five of the six bird’s eye view and 3D detection categories on the KITTI benchmark, by using a simple single stage network.
|
|
09:40-10:55, Paper WeAT1-10.6 | Add to My Program |
Segmenting Unknown 3D Objects from Real Depth Images Using Mask R-CNN Trained on Synthetic Data |
Danielczuk, Michael | UC Berkeley |
Matl, Matthew | University of California, Berkeley |
Gupta, Saurabh | UC Berkeley |
Lee, Andrew | University of California, Berkeley |
Li, Andrew | UC Berkeley |
Mahler, Jeffrey | University of California, Berkeley |
Goldberg, Ken | UC Berkeley |
Keywords: Object Detection, Segmentation and Categorization, Computer Vision for Automation
Abstract: Recent computer vision research has demonstrated that Mask R-CNN can be trained to segment specific categories of objects in RGB images when massive hand-labeled datasets are available. As generating these datasets is time-consuming, we instead train with synthetic depth images. Many robots use depth sensors, and recent results suggest training on synthetic depth data can transfer successfully to the real world. We present a method for automated dataset generation and rapidly generate a synthetic training dataset of 50,000 depth images and 320,000 object masks using simulated heaps of 3D CAD models. We train a variant of Mask R-CNN with domain randomization on the generated dataset to perform category-agnostic instance segmentation without any hand-labeled data. We evaluate the trained network, which we refer to as Synthetic Depth (SD) Mask R-CNN, on a set of real, high-resolution depth images of challenging, densely-cluttered bins containing objects with highly-varied geometry. SD Mask R-CNN outperforms point cloud clustering baselines by an absolute 15% in Average Precision and 20% in Average Recall on COCO benchmarks, and achieves performance levels similar to a Mask R-CNN trained on a massive, hand-labeled RGB dataset and fine-tuned on real images from the experimental setup. We deploy the model in an instance-specific grasping pipeline to demonstrate its usefulness in a robotics application. See https://bit.ly/2letCuE for code, datasets, and supplementary material.
|
|
WeAT1-11 Interactive Session, 220 |
Add to My Program |
Manipulation III - 3.1.11 |
|
|
|
09:40-10:55, Paper WeAT1-11.1 | Add to My Program |
The Task Motion Kit (I) |
Dantam, Neil | Colorado School of Mines |
Chaudhuri, Swarat | Rice University |
Kavraki, Lydia | Rice University |
Keywords: Manipulation Planning, Task Planning, Motion and Path Planning
Abstract: Robots require novel reasoning systems to achieve complex objectives in new environments. Everyday activities in the physical world couple discrete and continuous reasoning. For example, to set the table in Fig. 1, the robot must make discrete decisions about which objects to pick and the order in which to do so, and execute these decisions by computing continuous motions to reach objects or desired locations. Robotics has traditionally treated these issues in isolation: reasoning about discrete events is referred to as task planning, while reasoning about and computing continuous motions is the realm of motion planning. However, several recent works have shown that separating task planning from motion planning, that is, first finding a series of actions that are later executed through continuous motion, is problematic; for example, the next discrete action may specify picking an object, but there may be no continuous motion for the robot to bring its hand to a configuration that can actually grasp the object to pick it up. Instead, Task-Motion Planning (TMP) tightly couples task planning and motion planning, producing a sequence of steps that can actually be executed by a real robot to bring the world from an initial to a final state. This article provides an introduction to TMP and discusses the implementation and use of an open-source TMP framework that is adaptable to new robots, scenarios, and algorithms.
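The coupling the article describes can be summarized as a planning loop in which motion-level failures are fed back to the task level; the interfaces below are hypothetical, not the Task Motion Kit API:

```python
def task_motion_plan(task_planner, motion_planner, start, goal, max_iters=100):
    """Sketch of a TMP loop: the task planner proposes a discrete action
    sequence, the motion planner tries to realize each action, and any
    failure becomes a constraint for the next round of task planning."""
    constraints = set()
    for _ in range(max_iters):
        actions = task_planner.plan(start, goal, constraints)
        if actions is None:
            return None                              # task-level infeasible
        trajectory, state = [], start
        for action in actions:
            motion = motion_planner.solve(state, action)
            if motion is None:
                constraints.add((state, action))     # forbid this step; replan
                break
            trajectory.append(motion)
            state = motion.end_state
        else:
            return trajectory                        # every action realized
    return None
```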
|
|
09:40-10:55, Paper WeAT1-11.2 | Add to My Program |
A Soft Modular End-Effector for Underwater Manipulation (I) |
Mura, Domenico | University of Pisa |
Barbarossa, Manuel | Istituto Italiano Di Tecnologia |
Dinuzzi, Giacomo | Istituto Italiano Di Tecnologia |
Grioli, Giorgio | Istituto Italiano Di Tecnologia |
Caiti, Andrea | University of Pisa |
Catalano, Manuel Giuseppe | Istituto Italiano Di Tecnologia |
Keywords: Marine Robotics, Grasping, Grippers and Other End-Effectors
Abstract: Current underwater end-effector technology has limits in terms of finesse and versatility. Because of this, the execution of several underwater operations, such as archeological recovery and biological sampling, often still requires direct intervention by human operators, exposing them to the risks of working in a difficult environment. This article proposes the design and implementation of an underactuated and compliant underwater end-effector that embodies grasp capabilities comparable to those of a scuba diver's hand as well as the large grasping envelope of grippers.
|
|
09:40-10:55, Paper WeAT1-11.3 | Add to My Program |
Multimodal Aerial Locomotion: An Approach to Active Tool Handling (I) |
Wopereis, Han Willem | University of Twente |
Ridder, van de, L. W. | University of Twente |
Lankhorst, Tom J. W. | University of Twente |
Klooster, Lucian | University of Twente |
Bukai, Evyatar | University of Twente |
Wuthier, David | Aalborg University Copenhagen |
Nikolakopoulos, George | Luleå University of Technology |
Stramigioli, Stefano | University of Twente |
Engelen, Johan B. C. | University of Twente |
Fumagalli, Matteo | Aalborg University |
Keywords: Aerial Systems: Mechanics and Control, Aerial Systems: Applications, Service Robots
Abstract: This work reports on the development and evaluation of an aerial system for active tool handling on remote locations. In the proposed approach a multirotor UAV is responsible for moving an end-effector with a tool to the region of interest and providing sufficient contact force for the end-effector to accomplish the desired task. The end-effector is equipped with actuated wheels that rely on the contact force to both allow an operator to re-position while in contact with the environment and perform the tool operation. Preliminary experiments validate the approach in a cleaning scenario and demonstrate the repeatability in an experiment with 18 consecutive repetitions of the approach.
|
|
09:40-10:55, Paper WeAT1-11.4 | Add to My Program |
Tele-MAGMaS: An Aerial-Ground Co-Manipulator System (I) |
Staub, Nicolas | Czech Technical University |
Mohammadi, Mostafa | University of Siena |
Bicego, Davide | LAAS-CNRS |
Delamare, Quentin | University of Rennes 1 |
Yang, Hyunsoo | Seoul National University |
Prattichizzo, Domenico | Università Di Siena |
Robuffo Giordano, Paolo | Centre National De La Recherche Scientifique (CNRS) |
Lee, Dongjun | Seoul National University |
Franchi, Antonio | LAAS-CNRS |
Keywords: Aerial Systems: Applications, Aerial Systems: Mechanics and Control, Cooperative Manipulators
Abstract: This paper highlights the key components of Tele-MAGMaS and demonstrates practically the implementation of the described algorithms in software and hardware for aerial-ground co-manipulation. To the best of our knowledge, this is the first time a Flying Assistant has been implemented; in particular, a cooperative manipulation task between a ground industrial manipulator and an aerial manipulator has been robustly demonstrated for the first time. Tele-MAGMaS necessitated developing software in a modular way to allow easier development; this required the careful design of the full software architecture used in this work, which implements a control framework allowing for the different modalities of i) full autonomy, ii) tele-operation, and iii) shared control of the system, thus allowing the system to cope with the varying complexity levels of different environments by also leveraging (when needed) the cognitive abilities of a human operator.
|
|
09:40-10:55, Paper WeAT1-11.5 | Add to My Program |
A Smart Companion Robot for Heavy Payload Transport and Manipulation in Automotive Assembly (I) |
Chen, Yi | Clemson University |
Wang, Weitian | Clemson University |
Abdollahi, Zoleikha | Clemson University |
Wang, Zebin | Clemson University |
Schulte, Joerg | BMW Manufacturing Co. LLC |
Krovi, Venkat | Clemson University |
Jia, Yunyi | Clemson University |
Keywords: Cooperative Manipulators, Compliant Assembly
Abstract: Almost all existing mobile manipulators employ a combination of a serial manipulator with a mobile platform. In order to handle heavy payloads, they have to utilize a large manipulator and a large, high-payload mobile base as well. This results in an unwieldy and unoptimized robotic system, which tends to be unsuitable for many automotive assembly applications. To solve this, we develop an innovative mobile parallel manipulator to form a "Smart Companion Robot" which can cooperate with human workers in automotive assembly tasks. The robot employs an omnidirectional mobile base and a parallel manipulator to handle the payload. To the best of our knowledge, this is the first robot with such a structure for heavy payload manipulation and transport in six degrees of freedom. Experimental results suggest that human workers can be effectively, flexibly, and conveniently assisted in handling heavy parts by taking advantage of the Smart Companion Robot, which has great potential benefits in increasing automotive assembly production efficiency and quality as well as in improving ergonomics.
|
|
09:40-10:55, Paper WeAT1-11.6 | Add to My Program |
Multi-Modal Geometric Learning for Grasping and Manipulation |
Watkins-Valls, David | Columbia University |
Varley, Jacob | Columbia University |
Allen, Peter | Columbia University |
Keywords: Sensor Fusion, Object Detection, Segmentation and Categorization, Perception for Grasping and Manipulation
Abstract: This work provides an architecture that incorporates depth and tactile information to create rich and accurate 3D models useful for robotic manipulation tasks. This is accomplished through the use of a 3D convolutional neural network (CNN). Offline, the network is provided with both depth and tactile information and trained to predict the object's geometry, thus filling in regions of occlusion. At runtime, the network is provided a partial view of an object. Tactile information is acquired to augment the captured depth information. The network can then reason about the object's geometry by utilizing both the collected tactile and depth information. We demonstrate that even small amounts of additional tactile information can be incredibly helpful in reasoning about object geometry. This is particularly true when information from depth alone fails to produce an accurate geometric prediction. Our method is benchmarked against and outperforms other visual-tactile approaches to general geometric reasoning. We also provide experimental results comparing grasping success with our method.
|
|
WeAT1-12 Interactive Session, 220 |
Add to My Program |
Mechanism Design II - 3.1.12 |
|
|
|
09:40-10:55, Paper WeAT1-12.1 | Add to My Program |
Panthera: Design of a Reconfigurable Pavement Sweeping Robot |
Hayat, Abdullah Aamir | Singapore University of Technology and Design |
Parween, Rizuwana | SUTD |
Elara, Mohan Rajesh | Singapore University of Technology and Design |
Parasuraman, Karthikeyan | Singapore University of Technology and Design |
Prathap, kandasamy S | SUTD |
Keywords: Automation Technologies for Smart Cities, Environment Monitoring and Management, Mechanism Design
Abstract: Pavement cleaning is essential to maintain urban hygiene and keep long stretches of pavement spick and span. This paper reports on the development of a novel reconfigurable pavement cleaning robot named Panthera. Reconfiguration in Panthera is achieved by expansion and contraction of the body frame using a single lead-screw shaft and a linkage mechanism, giving the robot the capability to reshape itself based on factors like pavement width and pedestrian density. Independent steering is achieved using two in-wheel motors for each steering axis. This imparts flexibility in motion, makes the system omnidirectional, and allows convenient movement of the robot in any direction along the pavement. It is powered by onboard batteries, which generate less noise than existing gasoline-powered solutions. The modeling and steering kinematics are presented, along with experimental results of the path followed and a discussion supporting the robot's capabilities.
|
|
09:40-10:55, Paper WeAT1-12.2 | Add to My Program |
Automatic Leg Regeneration for Robot Mobility Recovery |
Wang, Liyu | University of California at Berkeley |
Fearing, Ronald | University of California at Berkeley |
Keywords: Cellular and Modular Robots, Biologically-Inspired Robots, Mechanism Design
Abstract: Automatic repair of mechanical structures would enable a robot to recover or improve functions after physical damage. Little work exists on real-world execution of automatic repair in robotic systems. The state of the art takes a modular approach, where the robotic system is modular and a replacement module is available. However, the modular approach suffers from low granularity in repair, even with tens of motors. In addition, there is a lack of quantitative evaluation of the effect of automatic repair on robot functionality. Here we propose a cooperative method for automatic repair in a robotic system. Our method is regeneration-based rather than module-based and does not assume the availability of a replacement part. It integrates an on-the-fly fabrication process for robot structure regeneration. With a system that consists of a regenerating robot, a legged robot, and a pre-engineered ribbon, we demonstrate end-to-end execution of automated repair of the legged robot's leg by the regenerating robot in 335 seconds. Experiments on repeatability show a 100% success rate for sub-processes such as positioning, leg fabrication, and legged robot disengagement, and a 90% success rate for leg detachment. We quantify the effect of leg regeneration on mobility recovery and find a 90% recovery of forward speed, a 19.7% increase in peak power, and a 9.3% reduction in cost of transport with a regenerated leg.
|
|
09:40-10:55, Paper WeAT1-12.3 | Add to My Program |
Geometric Interpretation of the General POE Model for a Serial-Link Robot Via Conversion into D-H Parameterization |
Wu, Liao | University of New South Wales |
Crawford, Ross | Queensland University of Technology |
Roberts, Jonathan | Queensland University of Technology |
Keywords: Kinematics, Calibration and Identification
Abstract: While the Product of Exponentials (POE) formula has been gaining popularity in modeling the kinematics of serial-link robots, the Denavit-Hartenberg (D-H) notation is still the most widely used, due to its intuitive and concise geometric interpretation of the robot. This paper develops an analytical solution to automatically convert a POE model into a D-H model for a robot with revolute, prismatic, and helical joints, which form the complete set of three basic one-degree-of-freedom lower-pair joints for constructing a serial-link robot. The conversion algorithm can be used in applications such as calibration, where it is necessary to convert the D-H model to the POE model for identification and then back to the D-H model for compensation. The equivalence of the two models proved in this paper also benefits the analysis of the identifiability of the kinematic parameters. It is found that the maximum number of identifiable parameters in a general POE model is 5h + 4r + 2t + n + 6, where h, r, t, and n stand for the number of helical, revolute, prismatic, and general joints, respectively. It is also suggested that the identifiability of the base frame and the tool frame in the D-H model is restricted, rather than comprising the arbitrary six parameters assumed previously.
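The parameter count quoted in the abstract is easy to evaluate for concrete arms; this encodes the formula exactly as stated there:

```python
def max_identifiable_params(h, r, t, n):
    """Maximum number of identifiable kinematic parameters of a general POE
    model per the abstract: 5h + 4r + 2t + n + 6, with h, r, t, n the counts
    of helical, revolute, prismatic, and general joints respectively."""
    return 5 * h + 4 * r + 2 * t + n + 6

# e.g. an arm with six revolute joints and no other joint types
print(max_identifiable_params(h=0, r=6, t=0, n=0))   # -> 30
```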
|
|
09:40-10:55, Paper WeAT1-12.4 | Add to My Program |
Dynamic Friction Model with Thermal and Load Dependency: Modeling, Compensation, and External Force Estimation |
Iskandar, Maged | DLR |
Wolf, Sebastian | DLR - German Aerospace Center |
Keywords: Mechanism Design, Calibration and Identification
Abstract: A physically-motivated friction model with a parametric description of the nonlinear dependency of the temperature and velocity as well as the dependency on external load is presented. The fully parametric approach extends a static friction model in the gross sliding regime. We show how it can be seamlessly integrated in standard dynamic friction models such as Lund Grenoble (LuGre) and Generalized-Maxwell-Slip (GMS). Parameters of a Harmonic Drive CSD 25 gear are experimentally identified and the final model is evaluated on a dedicated test-bed. We show the integration and effectiveness in dynamic simulation, friction compensation, and external torque estimation.
|
|
09:40-10:55, Paper WeAT1-12.5 | Add to My Program |
Bundled Wire Drive: Proposal and Feasibility Study of a Novel Tendon-Driven Mechanism Using Synthetic Fiber Ropes |
Endo, Gen | Tokyo Institute of Technology |
Wakabayashi, Youki | Tokyo Institute of Technology |
Nabae, Hiroyuki | Tokyo Institute of Technology |
Suzumori, Koichi | Tokyo Institute of Technology |
Keywords: Tendon/Wire Mechanism, Mechanism Design
Abstract: This paper proposes a new wire-driven mechanism that relays many ropes very simply and compactly. Ropes pass through a joint while bundled; synthetic fiber rope can slide and twist, exploiting its low friction coefficient. In order to use this mechanism, it is necessary to investigate the influence of sliding on tension transmission efficiency and rope strength. The results of this study reveal that it is feasible for a robot arm using this mechanism to have more than 15 joints, and that sliding has little influence on rope strength. The feasibility of the system was studied through hardware experiments, and its mechanical performance was evaluated by constructing a horizontally extendable manipulator with three degrees of freedom.
|
|
09:40-10:55, Paper WeAT1-12.6 | Add to My Program |
Shape Locking Mechanism of Flexible Joint Using Mechanical Latch with Electromagnetic Force |
Chung, Deok Gyoon | KAIST |
Kim, Joonhwan | The University of Tokyo |
Baek, DongHoon | KAIST |
Kim, Joonyeong | Korea Advanced Institute of Science and Technology (KAIST) |
Kwon, Dong-Soo | KAIST |
Keywords: Medical Robots and Systems, Surgical Robotics: Laparoscopy, Mechanism Design
Abstract: Single-incision laparoscopic surgery (SILS) has emerged as a procedure to further improve cosmetic outcomes and reduce the postoperative pain of multiport laparoscopic surgery. However, SILS is a difficult operation due to the limited workspace and accessibility. To improve surgical convenience, flexible surgical robots should be developed and applied to SILS to provide a large workspace. Flexible robots can penetrate a single incision point around the navel area and provide a large workspace within the abdominal cavity; however, it is difficult for them to support the force required for surgical intervention. In this study, a novel mechanism to lock the shape of a flexible joint, supporting the external forces encountered during SILS, is proposed. The developed shape-locking mechanism places a latch between the joints; the shape locks by engagement of the latches via an electromagnetic force. Because the mechanism is implemented through mechanical coupling, it can withstand large loads. Furthermore, the driving force of the mechanism is small, because it only needs to engage the latch structure. This paper discusses the development of the mechanism, magnetic force simulation, and a payload experiment.
|
|
WeAT1-13 Interactive Session, 220 |
Add to My Program |
Soft Robots V - 3.1.13 |
|
|
|
09:40-10:55, Paper WeAT1-13.1 | Add to My Program |
Echinoderm Inspired Variable Stiffness Soft Actuator with Connected Ossicle Structure |
Jeong, Hwayeong | KAIST |
Kim, Jung | KAIST |
Keywords: Soft Material Robotics, Biologically-Inspired Robots
Abstract: An echinoderm can actively modulate the structural stiffness of its body wall by as much as 10 times, using the material and structural features that make up its body, including calcite ossicles, connective tissue, and inter-ossicular muscle. This capacity for variable stiffness makes it possible to adapt to the kinematics and dynamics required by a given task and the surrounding environment. This characteristic can improve the ability of soft material robots, which currently have limited application because of their low load-bearing capability. This paper presents a stiffness modulation method inspired by the connected ossicle structures of echinoderms. We introduce the mechanism, structure, and stiffness variation of the proposed design with respect to different ossicle shapes, intervals, and elastomers. We then built a finger-shaped stiffening structure using the proposed design, measured its stiffness as a function of vacuum level, and showed its load-bearing capacity under control. The proposed design was then applied to a robotic gripper, a typical device that interacts with unpredictable environments and needs variable stiffening ability.
|
|
09:40-10:55, Paper WeAT1-13.2 | Add to My Program |
Controllability Pre-Verification of Silicon Soft Robots Based on Finite-Element Method |
Zheng, Gang | INRIA |
Goury, Olivier | Inria - Lille Nord Europe |
Thieffry, Maxime | Lamih |
Kruszewski, Alexandre | Centrale Lille |
Duriez, Christian | INRIA |
Keywords: Soft Material Robotics, Flexible Robots, Optimization and Optimal Control
Abstract: Soft robotics is an emergent research field with various promising applications. However, the design of soft robots today still follows a trial-and-error process, which is inefficient. This paper proposes to design soft robots by pre-checking controllability during the numerical design phase. The finite-element method is used to model the dynamics of silicon soft robots, based on which differential geometric methods are applied to analyze the controllability of the points of interest. Such verification is also investigated via model order reduction techniques and Galerkin projection. The proposed methodology is finally validated by numerically designing a controllable parallel soft robot.
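For linear(ized) reduced-order models, the simplest controllability pre-check is the Kalman rank test; the paper's differential-geometric analysis is more general, so this is only an illustrative stand-in on a toy system:

```python
import numpy as np

def is_controllable(A, B, tol=1e-9):
    """Kalman rank test for x' = A x + B u: (A, B) is controllable iff
    [B, AB, ..., A^(n-1) B] has full row rank."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.linalg.matrix_rank(np.hstack(blocks), tol=tol) == n

# Toy reduced-order model: 3 states, 1 actuator input (companion form).
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-1.0, -2.0, -3.0]])
B = np.array([[0.0], [0.0], [1.0]])
print(is_controllable(A, B))   # True
```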
|
|
09:40-10:55, Paper WeAT1-13.3 | Add to My Program |
A Vacuum-Driven Origami "Magic-Ball" Soft Gripper |
Li, Shuguang | MIT/Harvard University |
Stampfli, John | Massachusetts Institute of Technology |
Xu, Helen | Massachusetts Institute of Technology |
Malkin, Elian | Massachusetts Institute of Technology |
Villegas Diaz, Evelin | St. Mary’s University |
Rus, Daniela | MIT |
Wood, Robert | Harvard University |
Keywords: Soft Material Robotics, Flexible Robots, Grippers and Other End-Effectors
Abstract: Soft robotics has yielded numerous examples of soft grippers that utilize compliance to achieve impressive grasping performance with great simplicity, adaptability, and robustness. Designing soft grippers with substantial grasping strength while remaining compliant and gentle is one of the most important challenges in this field. In this paper, we present a light-weight, vacuum-driven soft robotic gripper made of an origami "magic-ball" and a flexible thin membrane. We also describe the design and fabrication method to rapidly manufacture the gripper with different combinations of low-cost materials for diverse applications. Grasping experiments demonstrate that our gripper can lift a large variety of objects, including delicate foods, heavy bottles, and other miscellaneous items. The grasp force on 3D printed objects is also characterized through mechanical load tests. The results reveal that our soft gripper can produce significant grasp force on various shapes using negative pneumatic pressure (vacuum). This new gripper holds the potential for many practical applications that require safe, strong, and simple grasping.
|
|
09:40-10:55, Paper WeAT1-13.4 | Add to My Program |
Azimuthal Shear Deformation of a Novel Soft Fiber-Reinforced Rotary Pneumatic Actuator |
Lee, Young Min | SKKU |
Lee, Hyuk Jin | SungKyunKwan University |
Moon, Hyungpil | Sungkyunkwan University |
Choi, Hyouk Ryeol | Sungkyunkwan University |
Koo, Ja Choon | Sungkyunkwan University |
Keywords: Soft Material Robotics, Hydraulic/Pneumatic Actuators, Mechanism Design
Abstract: Elastic Inflatable Actuators (EIAs) have several advantages, such as inherent compliance, owing to bodies made of soft materials such as silicone. Among them, the soft fiber-reinforced actuator is based on the principle that expansion of an enclosure constrained by a fiber pattern produces a desired motion. While much of the research on such actuators has addressed linear and bending motions, there are only a few studies on rotary, or torsional, motions. In this paper, we propose a new actuator that produces azimuthal deformation through the restriction of anisotropically distributed fiber elements along the radial direction and the expansion of the hyperelastic material. The structural design and fabrication process of the actuator are presented. Subsequently, FEM simulations and experiments are carried out to measure the rotation angles of the actuator as a function of the applied pressure.
|
|
09:40-10:55, Paper WeAT1-13.5 | Add to My Program |
INFORA: A Novel Inflatable Origami-Based Actuator |
Leylavi Shoushtari, Ali | Istituto Italiano Di Tecnologia |
Naselli, Giovanna A. | Italian Institute of Technology |
Sadeghi, Ali | Istituto Italiano Di Tecnologia |
Mazzolai, Barbara | Istituto Italiano Di Tecnologia |
Keywords: Grippers and Other End-Effectors, Grasping, Soft Material Robotics
Abstract: Pneumatic actuators have gained huge popularity in the field of soft robotics. One class of such devices exploits inflatable thin membranes which generate a desired displacement upon inflation, but often without providing sufficient force/torque to perform their task. In this paper, we propose a novel actuator combining a membrane and a rigid foldable structure. Experimental tests show that this INFlatable ORigami Actuator (INFORA) is characterized by relatively high stiffness compared to other actuators of the same class. We provide a mathematical model to be used for design purposes and we describe the fabrication process. In addition, we show how the INFORA can be used to build a tendril-like structure capable of performing grasping tasks.
|
|
09:40-10:55, Paper WeAT1-13.6 | Add to My Program |
Pellicular Morphing Surfaces for Soft Robots |
Digumarti, Krishna Manaswi | Bristol Robotics Laboratory |
Conn, Andrew | University of Bristol |
Rossiter, Jonathan | University of Bristol |
Keywords: Soft Material Robotics, Biologically-Inspired Robots, Biomimetics
Abstract: Soft structures in nature endow organisms across scales with the ability to drastically deform their bodies and exhibit complex behaviours while overcoming challenges in their environments. Inspired by microstructures found in the cell membranes of the Euglena family of microorganisms, which exhibit giant changes in shape during their characteristic euglenoid movement, this paper presents the design, fabrication and characterisation of bio-inspired deforming surfaces. The result is a surface of interconnected strips that deforms in 2D and 3D due to simple shear between adjacent members. We fabricate flexible polymeric strips and demonstrate three different shapes arising out of the same actuation by imposing various constraints. We characterise the strips in terms of the force required to separate them and show that the bio-inspired cross section of these strips enables them to hold up to 8 N of force with a meagre 0.5 mm of material thickness, while remaining flexible enough to deform. Further, we present the design of a soft robot module with an actively deformable surface that replicates the mechanism of shape change seen in the euglena. This work shows the potential of this new form of shape-morphing surface in realising bio-mimetic soft robots exhibiting large changes in shape.
|
|
WeAT1-14 Interactive Session, 220 |
Add to My Program |
Legged Robots III - 3.1.14 |
|
|
|
09:40-10:55, Paper WeAT1-14.1 | Add to My Program |
Dynamic Period-Two Gait Generation in a Hexapod Robot Based on the Fixed-Point Motion of a Reduced-Order Model |
Lu, Wei-Chun | National Taiwan University |
Lin, Pei-Chun | National Taiwan University |
Keywords: Legged Robots, Biologically-Inspired Robots, Dynamics
Abstract: This research explored the generation of period-two dynamic running motion in a robot, based on the passive dynamic period-two motion of the reduced-order rolling spring-loaded inverted pendulum (R-SLIP) model. Each cycle of period-two motion consists of two stance phases separated by two flight phases. The distribution of the period-two fixed points of the model was analyzed using a return map. Models with the same or different landing angles per motion cycle were studied, and two sets of period-two motion trajectories were implemented in a robot for experimental evaluation. Without sensory feedback or control, this evaluation relied on the open-loop trajectory of the model. Based on the experiments, the robot was capable of performing dynamic period-two motion.
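To illustrate the return-map analysis in the simplest possible setting, the sketch below finds a period-two fixed point of a toy one-dimensional apex map and checks its orbital stability; the map f is a made-up stand-in, whereas the paper's map comes from integrating the R-SLIP stance dynamics.

```python
import numpy as np
from scipy.optimize import brentq

# Toy apex return map y_{k+1} = f(y_k); purely illustrative.
def f(y):
    return y + 0.8 * np.sin(2.0 * np.pi * y)

# A period-two point satisfies f(f(y)) = y while f(y) != y.
g = lambda y: f(f(y)) - y
y_star = brentq(g, 0.05, 0.45)        # bracket chosen to exclude f's own fixed points

# Orbital stability of the period-two cycle: |d f(f(y))/dy| < 1 at the point.
eps = 1e-6
slope = (f(f(y_star + eps)) - f(f(y_star - eps))) / (2 * eps)
print(f"cycle: {y_star:.4f} -> {f(y_star):.4f},",
      "stable" if abs(slope) < 1 else "unstable")
```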
|
|
09:40-10:55, Paper WeAT1-14.2 | Add to My Program |
Realizing Learned Quadruped Locomotion Behaviors through Kinematic Motion Primitives |
Singla, Abhik | Indian Institute of Science (IISc), Bangalore |
Bhattacharya, Shounak | Indian Institute of Science |
Dholakiya, Dhaivat | Indian Institute of Science |
Bhatnagar, Shalabh | Indian Institute of Science, Bangalore |
Ghosal, Ashitava | Indian Institute of Science (IISc) |
Amrutur, Bharadwaj | Indian Institute of Science |
Kolathaya, Shishir | Indian Institute of Science |
Keywords: Legged Robots, Deep Learning in Robotics and Automation, Biologically-Inspired Robots
Abstract: Humans and animals are believed to use a very minimal set of trajectories to perform a wide variety of tasks including walking. Our main objective in this paper is twofold: 1) obtain an effective tool to realize these basic motion patterns for quadrupedal walking, called kinematic motion primitives (kMPs), via trajectories learned from deep reinforcement learning (D-RL), and 2) realize a set of behaviors, namely trot, walk, gallop and bound, from these kinematic motion primitives in our custom four-legged robot, called the “Stoch”. D-RL is a data-driven approach, which has been shown to be very effective for realizing all kinds of robust locomotion behaviors, both in simulation and in experiment. On the other hand, kMPs are known to capture the underlying structure of walking and yield a set of derived behaviors. We first generate walking gaits from D-RL, which uses policy gradient based approaches. We then analyze the resulting walking by using principal component analysis. We observe that the kMPs extracted from PCA followed a similar pattern irrespective of the type of gaits generated. Leveraging this underlying structure, we then realize walking in Stoch by a straightforward reconstruction of joint trajectories from kMPs. This type of methodology improves the transferability of these gaits to real hardware, lowers the computational overhead on-board, and also avoids multiple training iterations by generating a set of derived behaviors from a single learned gait.
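The PCA extraction step lends itself to a compact sketch. Below, synthetic joint trajectories stand in for the gaits learned from D-RL; the first few principal directions play the role of kMPs, and the gait is reconstructed from them.

```python
import numpy as np

# Stand-in gait data: T time steps x J joint angles (the paper's data would
# come from a learned walking policy, not this synthetic sinusoid).
T, J = 500, 8
t = np.linspace(0, 10 * np.pi, T)
X = np.sin(t[:, None] + np.linspace(0, np.pi, J)[None, :])
X += 0.05 * np.random.default_rng(0).standard_normal((T, J))

# PCA via SVD of the mean-centered trajectories.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
kmps = Vt[:4]                 # first four principal directions ("kMPs")
scores = Xc @ kmps.T          # their time-varying activations

# Reconstruct joint trajectories from the primitives alone.
X_rec = X.mean(axis=0) + scores @ kmps
print("relative reconstruction error:",
      np.linalg.norm(X - X_rec) / np.linalg.norm(X))
```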
|
|
09:40-10:55, Paper WeAT1-14.3 | Add to My Program |
Single-Shot Foothold Selection and Constraint Evaluation for Quadruped Locomotion |
Belter, Dominik | Poznan University of Technology |
Bednarek, Jakub | Poznań University of Technology |
Lin, Hsiu-Chin | University of Edinburgh |
Xin, Guiyang | The University of Edinburgh |
Mistry, Michael | University of Edinburgh |
Keywords: Legged Robots, Deep Learning in Robotics and Automation, Reactive and Sensor-Based Planning
Abstract: In this paper, we propose a method for selecting the optimal footholds for legged systems. The goal of the proposed method is to find the best foothold for the swing leg on a local elevation map. First, we evaluate the geometric characteristics of each cell on the elevation map and check kinematic constraints and collisions. Then, we apply a Convolutional Neural Network to learn the relationship between the local elevation map and the quality of potential footholds. During execution, the controller obtains a qualitative measurement of each potential foothold from the neural model. This method evaluates hundreds of potential footholds and checks multiple constraints in a single step, which takes 10 ms on a standard computer without a GPU. The experiments were carried out on a quadruped robot walking over rough terrain, both in simulation and on a real robotic platform.
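A minimal sketch of the learned scoring model, assuming PyTorch: a small fully convolutional network maps a local elevation patch to one quality score per cell, so all candidate footholds are evaluated in a single forward pass. The architecture and sizes are illustrative, not the paper's exact network.

```python
import torch
import torch.nn as nn

class FootholdNet(nn.Module):
    """Toy fully convolutional foothold-quality model (illustrative)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),   # one quality score per map cell
        )

    def forward(self, elev):       # elev: (batch, 1, H, W) elevation patch
        return self.net(elev)

model = FootholdNet()
patch = torch.randn(1, 1, 32, 32)  # local elevation map around the swing foot
scores = model(patch)
best = torch.argmax(scores.flatten())
print("best cell (row, col):", divmod(best.item(), 32))
```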
|
|
09:40-10:55, Paper WeAT1-14.4 | Add to My Program |
Optimized Jumping on the MIT Cheetah 3 Robot |
Nguyen, Quan | Massachusetts Institute of Technology |
Powell, Matthew | Massachusetts Institute of Technology |
Katz, Benjamin | Massachusetts Institute of Technology |
Di Carlo, Jared | Massachusetts Institute of Technology |
Kim, Sangbae | Massachusetts Institute of Technology |
Keywords: Legged Robots, Optimization and Optimal Control, Motion and Path Planning
Abstract: This paper presents a novel methodology for implementing optimized jumping behavior on quadruped robots. Our method includes efficient trajectory optimization, a precise high-frequency tracking controller, and a robust landing controller for stabilizing the robot body position and orientation after impact. Experimental validation was successfully conducted on the MIT Cheetah 3, enabling the robot to repeatably jump onto, and down from, a desk with a height of 30'' (0.76 m). The result demonstrates the advantages of the approach as well as the capability of the robot hardware itself.
|
|
09:40-10:55, Paper WeAT1-14.5 | Add to My Program |
Lift Your Leg: Mechanics of Running through Fluids |
Alicea, Ryan | Florida State University |
Ladyko, Kyle | Florida State University |
Clark, Jonathan | Florida State University |
Keywords: Legged Robots, Biologically-Inspired Robots
Abstract: In order for legged robotic platforms to become adept enough to operate in unstructured, outdoor environments, it is critical that they have the ability to adapt to a variety of terrains. One class of terrain to consider is regions of shallow, dense fluids, such as beachheads, stream banks, snow, or mud. This work examines the behavior of a simulated SLIP runner operating in such a viscous medium. Simulation results show that intelligently retracting the leg during flight can have a profound effect on the maximum achievable velocity of the runner, the stability of the resulting gait, and the cost of transport of the runner. Results also show that trudging gaits, in which the leg is positioned behind the center of mass, can be favorable in certain situations in terms of energy consumption and forward velocity.
|
|
09:40-10:55, Paper WeAT1-14.6 | Add to My Program |
CENTAURO: A Hybrid Locomotion and High Power Resilient Manipulation Platform |
Kashiri, Navvab | Istituto Italiano Di Tecnologia |
Baccelliere, Lorenzo | Istituto Italiano Di Tecnologia |
Muratore, Luca | Istituto Italiano Di Tecnologia |
Laurenzi, Arturo | Istituto Italiano Di Tecnologia |
Ren, Zeyu | Istituto Italiano Di Tecnologia |
Mingo Hoffman, Enrico | Fondazione Istituto Italiano Di Tecnologia |
Kamedula, Malgorzata | Istituto Italiano Di Tecnologia |
Rigano, Giuseppe Francesco | Istituto Italiano Di Tecnologia |
Malzahn, Jörn | Istituto Italiano Di Tecnologia |
Cordasco, Stefano | Istituto Italiano Di Tecnologia (IIT) |
Margan, Alessio | Istituto Italiano Di Tecnologia |
Tsagarakis, Nikos | Istituto Italiano Di Tecnologia |
Guria, Paolo | Istituto Italiano Di Tecnologia |
Keywords: Legged Robots, Field Robots, Compliant Joint/Mechanism
Abstract: Despite the development of a large number of mobile manipulation robots, very few platforms can demonstrate the required strength and mechanical sturdiness to accommodate the needs of real-world applications with high payload and moderate/harsh physical interaction demands, e.g. in disaster-response scenarios or heavy logistics/collaborative tasks. In this work we introduce the design of a wheeled-legged mobile manipulation platform capable of executing demanding manipulation tasks, and demonstrating significant physical resilience while possessing a body size (height/width) and weight comparable to those of a human. The achieved performance is the result of combining a number of design and implementation principles related to the actuation system, the integration of body structure and actuation, and the wheeled-legged mobility concept. These design principles are discussed, and the solutions adopted for various robot components are detailed. Finally, the robot performance is demonstrated in a set of experiments validating its power and strength capability when manipulating heavy payloads and executing tasks involving high-impact physical interactions.
|
|
WeAT1-15 Interactive Session, 220 |
Add to My Program |
Robot Safety I - 3.1.15 |
|
|
|
09:40-10:55, Paper WeAT1-15.1 | Add to My Program |
Safe and Complete Real-Time Planning and Exploration in Unknown Environments |
Fridovich-Keil, David | University of California, Berkeley |
Fisac, Jaime F. | University of California, Berkeley |
Tomlin, Claire | UC Berkeley |
Keywords: Nonholonomic Motion Planning, Robot Safety, Reactive and Sensor-Based Planning
Abstract: We present a new framework for motion planning that wraps around existing kinodynamic planners and guarantees recursive feasibility when operating in a priori unknown, static environments. Our approach makes strong guarantees about overall safety and collision avoidance by utilizing a robust controller derived from reachability analysis. We ensure that motion plans never exit the safe backward reachable set of the initial state, while safely exploring the space. This preserves the safety of the initial state, and guarantees that we will eventually find the goal if it is possible to do so while exploring safely. We implement our framework in the Robot Operating System (ROS) software environment and demonstrate it in a real-time simulation.
|
|
09:40-10:55, Paper WeAT1-15.2 | Add to My Program |
Handling Robot Constraints within a Set-Based Multi-Task Priority Inverse Kinematics Framework |
Di Lillo, Paolo Augusto | University of Cassino and Southern Lazio |
Chiaverini, Stefano | Università Di Cassino E Del Lazio Meridionale |
Antonelli, Gianluca | Univ. of Cassino and Southern Lazio |
Keywords: Kinematics, Robot Safety, Task Planning
Abstract: Set-Based Multi-Task Priority is a recent framework to handle inverse kinematics for redundant structures. Both equality tasks, i.e., control objectives to be driven to a desired value, and set-based tasks, i.e., control objectives to be satisfied within a set/range of values, can be addressed in a rigorous manner within a priority framework. In addition, optimization tasks, driven by the gradient of a proper function, may be considered as well, usually as lower-priority tasks. In this paper, the proper design of the tasks, their priorities, and the use of a Set-Based Multi-Task Priority framework are proposed in order to handle several constraints simultaneously in real time. It is shown that safety-related tasks, e.g., joint limits or kinematic singularities, may be properly handled by considering them both at a higher priority as set-based tasks and at a lower priority within a proper optimization functional. Experimental results on a 7-DOF Jaco^2 arm with and without the proposed approach show the effectiveness of the proposed method.
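For intuition on the underlying priority mechanism, here is a minimal two-level task-priority IK sketch using null-space projection: the secondary task is resolved only in the null space of the primary task Jacobian, so it can never disturb the primary objective. The Jacobians and task errors are random stand-ins for a real 7-DOF arm, and set-based task activation/deactivation is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
J1 = rng.standard_normal((3, 7))   # primary task (e.g., end-effector position)
J2 = rng.standard_normal((1, 7))   # secondary task (e.g., joint-limit gradient)
e1 = rng.standard_normal(3)        # desired primary task velocity
e2 = rng.standard_normal(1)        # desired secondary task velocity

J1_pinv = np.linalg.pinv(J1)
N1 = np.eye(7) - J1_pinv @ J1      # projector onto the null space of J1

# Classic prioritized solution: the second term lies entirely in null(J1).
qdot = J1_pinv @ e1 + np.linalg.pinv(J2 @ N1) @ (e2 - J2 @ J1_pinv @ e1)
print("primary-task residual:", np.linalg.norm(J1 @ qdot - e1))  # ~0
```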
|
|
09:40-10:55, Paper WeAT1-15.3 | Add to My Program |
Compliant Limb Sensing and Control for Safe Human-Robot Interactions |
Miyata, Colin | Carleton University |
Ahmadi, Mojtaba | Carleton University |
Keywords: Robot Safety, Force Control, Compliance and Impedance Control
Abstract: The current paper proposes a control methodology for ensuring safety during human–robot interaction based on a compliant sensor covering the robot links as a lightweight shell. The method can be used with existing robots without the need for mechanical redesign. To assess the behaviour of the proposed control law, the controller is analysed using a linear robot model. Stability analysis is performed and requirements on the controller parameters are derived. The effect of the controller parameters on the perceived impedance and the maximum safe operating velocity of the robot are determined via the linear model. The adverse impact of dry friction is analysed in simulation and methods are developed to mitigate the effects. The controller is implemented on a 1 DoF robotic joint and the results are compared to those of a traditional admittance control law, demonstrating comparable transient response while maintaining a simple control structure and decreased risk of instability.
|
|
09:40-10:55, Paper WeAT1-15.4 | Add to My Program |
Hybrid Nonsmooth Barrier Functions with Applications to Provably Safe and Composable Collision Avoidance for Robotic Systems |
Glotfelter, Paul | Georgia Institute of Technology |
Buckley, Ian | Georgia Institute of Technology |
Egerstedt, Magnus | Georgia Institute of Technology |
Keywords: Robot Safety, Formal Methods in Robotics and Automation, Collision Avoidance
Abstract: Robots are entering an age of ubiquity, and to operate effectively, these systems must typically satisfy a series of constraints (e.g., collision avoidance, obeying speed limits, maintaining connectivity). In addition, modern applications hinge on the completion of particular tasks, like driving to a certain location or monitoring a crop patch. The dichotomy between satisfying constraints and completing objectives creates a need for constraint-satisfaction frameworks that are composable with a pre-existing primary objective. Barrier functions have recently emerged as a practical and composable method for constraint satisfaction, and prior results demonstrate a system of Boolean logic for nonsmooth barrier functions as well as a composable controller-synthesis framework; however, this prior work does not consider dynamically changing constraints (e.g., a robot sensing and avoiding an obstacle). Consequently, the main theoretical contribution of this paper extends nonsmooth barrier functions to time-varying barrier functions with jumps. In a practical instantiation of the theoretical main results, this work revisits a classic problem by formulating a collision-avoidance framework and composing it with a nominal controller. Experimental results show the efficacy of this framework on a LIDAR-equipped differential-drive robot in a real-time obstacle-avoidance scenario.
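As a taste of barrier-function-based constraint composition, the sketch below minimally modifies a nominal velocity command so that a single smooth barrier constraint stays satisfied; with one constraint the safety QP has a closed-form projection solution. Geometry, gains, and the single-integrator model are illustrative and much simpler than the paper's hybrid nonsmooth setting.

```python
import numpy as np

def cbf_filter(x, x_obs, u_nom, d_safe=1.0, gamma=1.0):
    """Project u_nom onto the half-space  grad_h . u >= -gamma * h(x),
    where h(x) = ||x - x_obs||^2 - d_safe^2 is the barrier function."""
    h = np.dot(x - x_obs, x - x_obs) - d_safe**2
    grad = 2.0 * (x - x_obs)
    slack = grad @ u_nom + gamma * h
    if slack >= 0:                       # nominal input already safe
        return u_nom
    return u_nom - slack * grad / (grad @ grad)

x = np.array([0.0, 0.0])
u = cbf_filter(x, x_obs=np.array([1.5, 0.0]), u_nom=np.array([1.0, 0.0]))
print(u)   # velocity scaled back so the barrier constraint holds with equality
```

Because the filter is a projection around any nominal controller, it composes with a pre-existing primary objective exactly in the spirit described above.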
|
|
09:40-10:55, Paper WeAT1-15.5 | Add to My Program |
VUNet: Dynamic Scene View Synthesis for Traversability Estimation Using an RGB Camera |
Hirose, Noriaki | Stanford University |
Sadeghian, Amir | Stanford University |
Xia, Fei | Stanford University |
Martín-Martín, Roberto | Stanford University |
Savarese, Silvio | Stanford University |
Keywords: Robot Safety, Computer Vision for Other Robotic Applications, Collision Avoidance
Abstract: We present a novel view synthesis method for mobile robots in dynamic environments and its application to the estimation of future traversability. Our method predicts future images for given virtual robot velocity commands using only the RGB images from previous and current time steps. The future images result from applying two types of image changes to the previous and current images: 1) changes caused by a different camera pose, and 2) changes due to the motion of dynamic obstacles. We learn to predict these two types of changes disjointly using two novel network architectures, SNet and DNet. We combine SNet and DNet to synthesize future images that we pass to our previously presented method GONet to estimate the traversable areas around the robot. Our quantitative and qualitative evaluations indicate that our approach for view synthesis predicts accurate future images in both static and dynamic environments. We also show that these virtual images can be used to estimate future traversability correctly. We apply our view-synthesis-based traversability estimation method to two applications for assisted teleoperation.
|
|
09:40-10:55, Paper WeAT1-15.6 | Add to My Program |
Adaptive Update of Reference Capacitances in Conductive Fabric Based Robotic Skin |
Matsuno, Takahiro | Ritsumeikan Univ |
Wang, Zhongkui | Ritsumeikan University |
Althoefer, Kaspar | Queen Mary University of London |
Hirai, Shinichi | Ritsumeikan Univ |
Keywords: Soft Material Robotics, Robot Safety, Flexible Robots
Abstract: This paper proposes a sensor using conductive fabric that can detect proximity and contact by measuring the capacitance between the sensor and the surrounding environment. Due to the flexibility of the sensor used, it can be easily integrated with industrial robot arms to monitor proximity and contact between the robot and the surrounding environment including humans for safety reasons. However, the surrounding environment is constantly changing and significantly affects the capacitance measurements. To apply such proximity sensors in this scenario, the environmental variations have to be considered and the influences on the capacitance measurements have to be eliminated to ensure stable and robust proximity measurements. Therefore, in this paper, we propose an approach to adaptively update the reference capacitance to eliminate the influence of the environment. To experimentally validate the proposed sensor and approach, we developed a two-link robot arm and embedded the proposed sensing technology with each link. Experimental results demonstrate that proximity and contact can be successfully detected by the proposed sensor, independently of whether the robot arm is at rest or moving in a potentially dynamic environment.
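The adaptive-reference idea can be sketched in a few lines: when no event is detected, the reference slowly tracks the measured baseline so that environmental drift is absorbed, while a fast, large deviation from the reference is reported as proximity/contact. Thresholds, rates, and readings below are illustrative, not the paper's calibration.

```python
def update(c_meas, c_ref, alpha=0.01, threshold=5.0):
    """Return (new reference capacitance, detection flag)."""
    deviation = c_meas - c_ref
    if abs(deviation) > threshold:            # likely proximity or contact
        return c_ref, True                    # freeze reference during event
    return c_ref + alpha * deviation, False   # slowly absorb environmental drift

c_ref = 100.0
for c in [100.2, 100.4, 100.5, 112.0, 100.6]:  # hypothetical readings (pF)
    c_ref, detected = update(c, c_ref)
    print(f"ref={c_ref:.3f}  detected={detected}")
```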
|
|
WeAT1-16 Interactive Session, 220 |
Add to My Program |
Wheeled Robotics I - 3.1.16 |
|
|
|
09:40-10:55, Paper WeAT1-16.1 | Add to My Program |
Ascento: A Two-Wheeled Jumping Robot |
Klemm, Victor | ETH Zürich |
Morra, Alessandro | ETH Zürich |
Salzmann, Ciro | ETH Zürich |
Tschopp, Florian | ETH Zurich |
Bodie, Karen | ETH Zurich |
Gulich, Lionel | ETH Zürich |
Küng, Nicola | ETH Zürich |
Mannhart, Dominik | ETH Zürich |
Pfister, Corentin | ETH Zürich |
Vierneisel, Marcus | ETH Zürich |
Weber, Florian | ETH Zürich |
Deuber, Robin | ETH Zürich |
Siegwart, Roland | ETH Zurich |
Keywords: Wheeled Robots, Optimization and Optimal Control, Additive Manufacturing
Abstract: Applications of mobile ground robots demand high speed and agility while navigating complex indoor environments; these requirements present an ongoing challenge in mobile robotics. A system with these specifications would be of great use for a wide range of indoor inspection tasks. This paper introduces Ascento, a compact wheeled bipedal robot that is able to move quickly on flat terrain and to overcome obstacles by jumping. The mechanical design and overall architecture of the system are presented, as well as the development of various controllers for different scenarios. A series of experiments with the final prototype system validates these behaviors in realistic scenarios.
|
|
09:40-10:55, Paper WeAT1-16.2 | Add to My Program |
Path Following Controller for Differentially Driven Planar Robots with Limited Torques and Uncertain and Changing Dynamics |
Pitkänen, Ville | University of Oulu |
Halonen, Veikko | University of Oulu |
Kemppainen, Anssi Juhani | University of Oulu |
Röning, Juha Jaakko | University of Oulu |
Keywords: Wheeled Robots, Dynamics, Robust/Adaptive Control of Robotic Systems
Abstract: This paper presents a path following controller that is suitable for asymmetrical planar robots with significant mass and limited motor torques. The controller is robust to environmental forces and inaccurate estimates of the robot's inertia, estimating their effects with an Unscented Kalman Filter. The controller outputs wheel torque commands which take into account the motor torque limits and the given relative priorities of internal control elements. The method presented is thoroughly explained, and the simulation results demonstrate the performance of the controller.
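The core estimation trick is to augment the state with the unknown disturbance and let the filter estimate it. The toy below does this with a plain linear Kalman filter on a 1-D velocity model (the paper uses a UKF on the full nonlinear robot dynamics); all values are illustrative.

```python
import numpy as np

dt, m = 0.02, 10.0
F = np.array([[1.0, dt / m], [0.0, 1.0]])  # state: [velocity, disturbance force]
B = np.array([[dt / m], [0.0]])            # commanded wheel force enters velocity
H = np.array([[1.0, 0.0]])                 # we measure velocity only
Q = np.diag([1e-4, 1e-3])                  # disturbance modeled as a random walk
R = np.array([[1e-2]])

x, P = np.zeros((2, 1)), np.eye(2)
rng = np.random.default_rng(0)
true_dist, v_true = -3.0, 0.0
for _ in range(500):
    u = 5.0                                        # constant commanded force
    v_true += dt / m * (u + true_dist)             # "true" plant
    z = v_true + 0.1 * rng.standard_normal()       # noisy velocity measurement
    x = F @ x + B * u                              # predict
    P = F @ P @ F.T + Q
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # update
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P
print("estimated disturbance force:", x[1, 0])     # converges near -3.0
```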
|
|
09:40-10:55, Paper WeAT1-16.3 | Add to My Program |
Nonlinear Tire Cornering Stiffness Observer for a Double Steering Off-Road Mobile Robot |
Fnadi, Mohamed | Sorbonne University, ISIR, Paris 6 |
Plumet, Frederic | UPMC |
Ben Amar, Faiz | Université Pierre Et Marie Curie, Paris 6 |
Keywords: Wheeled Robots, Sensor Fusion, Dynamics
Abstract: Path tracking controllers for an autonomous vehicle are often designed by using either a dynamic model or a kinematic one, and some models are related to wheel-ground contact, which makes the efficiency of the controller highly dependent on ground parameter estimation, especially for off-road mobile robots intended to navigate in open environments. This paper proposes a new nonlinear observer designed to estimate, in real time, the front and rear contact cornering stiffnesses, which depend on both tire and soil properties. These are estimated using steering angles as well as the yaw rate and lateral velocity, which are provided by a preliminary Kalman-Bucy observer. The performance of the proposed nonlinear observer combined with an LQR controller is evaluated by both advanced simulations and experiments in real conditions at different speeds.
|
|
09:40-10:55, Paper WeAT1-16.4 | Add to My Program |
Hierarchical Optimization for Whole-Body Control of Wheeled Inverted Pendulum Humanoids |
Zafar, Munzir | Georgia Institute of Technology |
Hutchinson, Seth | University of Illinois |
Theodorou, Evangelos | Georgia Institute of Technology |
Keywords: Wheeled Robots, Mobile Manipulation, Humanoid Robots
Abstract: In this paper, we present a whole-body control framework for Wheeled Inverted Pendulum (WIP) Humanoids. WIP humanoids are redundant manipulators dynamically balancing themselves on wheels. Characterized by several degrees of freedom, they have the ability to perform several tasks simultaneously, such as balancing, maintaining a body pose, controlling the gaze, lifting a load, or maintaining an end-effector configuration in operational space. The problem of whole-body control is to enable simultaneous performance of these tasks with optimal participation of all degrees of freedom at specified priorities for each objective. The control also has to obey constraints of angle and torque limits on each joint. The proposed approach is hierarchical, with a low-level controller for body-joint manipulation and a high-level controller that defines center of mass (CoM) targets for the low-level controller to control the zero dynamics of the system driving the wheels. The low-level controller plans over shorter horizons while considering the more complete dynamics of the system, while the high-level controller plans over a longer horizon based on an approximate model of the robot for computational efficiency.
|
|
09:40-10:55, Paper WeAT1-16.5 | Add to My Program |
Efficient and Stable Locomotion for Impulse-Actuated Robots Using Strictly Convex Foot Shapes (I) |
Giardina, Fabio | University of Cambridge |
Iida, Fumiya | University of Cambridge |
Keywords: Dynamics, Wheeled Robots, Contact Modeling
Abstract: Impulsive actuation enables robots to perform agile maneuvers and surpass difficult terrain, yet its capacity to induce continuous and stable locomotion has not been explored. We claim that strictly convex foot shapes can improve the impulse effectiveness (impulse used per travelled distance) and locomotion speed by facilitating periodicity and stability. To test this premise, we introduce a theoretical 2-D model based on rigid-body mechanics to prove stability. We then implement a more elaborate model in simulation to study transient behavior and impulse effectiveness. Finally, we test our findings on a robot platform to prove their physical validity. Our results prove that continuous and stable locomotion can be achieved in the strictly convex case of a disk with an off-centered mass. In keeping with our theory, stable limit cycles of the off-centered disk outperform the theoretical performance of a cube in simulation and experiment, using up to 10 times less impulse per distance to travel at the same locomotion speed.
|
|
WeAT1-17 Interactive Session, 220 |
Add to My Program |
Actuators - 3.1.17 |
|
|
|
09:40-10:55, Paper WeAT1-17.1 | Add to My Program |
An Actively Controlled Variable Stiffness Structure Via Layer Jamming and Pneumatic Actuation |
Mikol, Collin | The Ohio State University |
Su, Hai-Jun | The Ohio State University |
Keywords: Hydraulic/Pneumatic Actuators, Compliant Joint/Mechanism, Soft Material Robotics
Abstract: Current robotics industry trends show an increased interest in the interaction between humans and robots in a variety of fields, ranging from collaborative robots in manufacturing to assistive medical devices. One limiting factor in present applications is the ability to actively morph these robotic structures and control their stiffness using the same type of actuation system. This paper focuses on developing an actively controlled, variable stiffness structure that uses a pneumatic system for both morphing and locking the structure's shape. The structure design integrates Pneumatic Artificial Muscles (PAMs) that are pressurized to control shape morphing. The pressurization of the PAM provides a radial force that allows bi-directional morphing based on the pressurization scheme. Layer jamming, which exploits the pressure-dependent friction between thin sheets, is used to control the variable stiffness of the structure. In this paper, a control model is developed to predict the morphed curvature of the structure based on the input actuator pressure. This experimental control model is also validated using a theoretical pseudo-rigid-body model. The repeatability and accuracy of morphing are also discussed. Through experimental testing, a measure of the stiffness variation range of the structure is also developed. This research would positively impact the robotics field by creating lightweight morphing structures that are flexible and easily deformed.
|
|
09:40-10:55, Paper WeAT1-17.2 | Add to My Program |
A Floating-Piston Hydrostatic Linear Actuator and Remote-Direct-Drive 2-DOF Gripper |
Schwarm, Eric | Northeastern University |
Gravesmill, Kevin | Northeastern University |
Whitney, John Peter | Northeastern University |
Keywords: Hydraulic/Pneumatic Actuators, Grippers and Other End-Effectors, Mechanism Design
Abstract: Dexterous, serial-chain motor-driven robotic arms have high moving mass, since most of the actuators must be located in the arm itself. This necessitates high gear ratios, sacrificing passive compliance, backdrivability, and the capacity for delicate motion. We introduce the concept of a remote direct-drive (RDD) manipulator, in which every motor is located in the base, connected to remote joints via a low-friction hydrostatic transmission. We have designed a new hydrostatic linear actuator with a fully-floating piston; the piston floats within the cylinder using a pair of soft fiber-elastomer rolling-diaphragm seals. This eliminates static friction from seal rubbing and piston/rod misalignment. Actuators were developed with a 20mm bore, weighing 55 grams each with a 400:1 bidirectional strength-to-weight ratio (+/- 230N), which drive a 2-DOF manipulator (wrist pitch/finger pinch; 120-degree range-of-motion; 6.6 Nm max grip strength). The gripper is hydrostatically coupled to remotely-located direct-drive/backdrivable brushless electric motors. System hysteresis and friction are 1 percent of full-range force. This low-mass low-friction configuration is of great interest for powered prosthetic hand design, and passively-safe high dynamic range robot arms.
|
|
09:40-10:55, Paper WeAT1-17.3 | Add to My Program |
3D Printed Ferrofluid Based Soft Actuators |
Sachyani, Ela | Hebrew University of Jerusalem |
Epstein, Alexander R. | University of Maryland, College Park |
Soreni Harari, Michal | University of Maryland, College Park |
St. Pierre, Ryan | University of Maryland |
Magdassi, Shlomo | Hebrew University of Jerusalem |
Bergbreiter, Sarah | Carnegie Mellon University |
Keywords: Soft Material Robotics, Hydraulic/Pneumatic Actuators, Additive Manufacturing
Abstract: This work demonstrates 3D printed soft actuators with complex shapes and remote actuation using an external magnetic field. Instead of embedding magnetic particles in a polymeric matrix, we fabricated a novel ferrofluid-based actuator, in which the fluid can be moved to different locations in the actuator to affect actuator response. We studied the effect of both the ferrofluid and the 3D printed material on the motion of simple actuators using 3D printed tubes. In addition, we 3D printed more complex actuators mimicking a human hand and a worm to demonstrate more complex motion.
|
|
09:40-10:55, Paper WeAT1-17.4 | Add to My Program |
A Simple Tripod Mobile Robot Using Soft Membrane Vibration Actuators |
Kim, DongWook | Seoul National University |
Kim, Jae In | Seoul National University |
Park, Yong-Lae | Seoul National University |
Keywords: Soft Material Robotics, Hydraulic/Pneumatic Actuators, Model Learning for Control
Abstract: Recent research on mobile robots has focused on increasing their adaptability to unpredictable and unstructured environments using soft materials and structures, and pneumatic actuators have been widely used in soft mobile robots. In spite of their advantages, existing pneumatic actuators have limitations with regard to control and speed. We propose soft membrane vibration actuators for a simple tripod robot. The proposed actuators are composed of a rigid housing and a soft membrane that creates vibration based on the input air pressure. The mobile robot proposed here has three vibration actuators arranged in an equilateral triangle configuration. Based on the pressure on each actuator, the robot can easily realize both translational and rotational motions. We constructed an analytical model of the proposed actuator and characterized the behavior of the robot experimentally. We also implemented model-free control based on a Gaussian process. The robot demonstrated the ability to follow the trajectories of various polygons. It could also carry a payload five times heavier than its own weight.
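The Gaussian-process idea can be illustrated in a few lines, assuming scikit-learn is available: learn the mapping from actuator pressure commands to robot displacement from data, then query the model (with uncertainty) when choosing commands. The data and response model below are synthetic stand-ins.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Synthetic training data: pressures on three actuators -> displacement.
rng = np.random.default_rng(0)
P = rng.uniform(0, 50, size=(40, 3))             # pressure commands (kPa)
disp = (P @ np.array([0.02, -0.01, 0.015])       # hypothetical true response
        + 0.05 * rng.standard_normal(40))        # measurement noise

gp = GaussianProcessRegressor(kernel=RBF(length_scale=10.0), alpha=1e-2)
gp.fit(P, disp)

# Query a candidate command; the predictive std quantifies model confidence.
mean, std = gp.predict([[30.0, 10.0, 20.0]], return_std=True)
print(f"predicted displacement {mean[0]:.3f} +/- {std[0]:.3f}")
```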
|
|
09:40-10:55, Paper WeAT1-17.5 | Add to My Program |
Long-Stroke Rolling Diaphragm Actuators for Haptic Display of Forces in Teleoperation |
Gruebele, Alexander | Stanford University |
Frishman, Samuel | Stanford University |
Cutkosky, Mark | Stanford University |
Keywords: Soft Material Robotics, Haptics and Haptic Interfaces, Telerobotics and Teleoperation
Abstract: We present a new rolling diaphragm actuator for transmitting forces in a teleoperated system. The initial application is for MR-guided biopsy procedures, providing accurate transmission of motions and forces between the fingertips of a physician and a biopsy needle being inserted into tissue. Desirable actuator qualities include low hysteresis, high axial stiffness, and long travel. The actuator uses an anisotropic laser-patterned fabric embedded in a soft silicone sleeve for a combination of low stretch in the axial direction and sufficient stretch in the radial direction so that a taper is not required; hence the actuator can have almost any length. We present results for a prototype input/output system with 6 cm stroke, 1 cm diameter and a minimum force of 0.3N to initiate motion. We compare its performance to a system using commercial rolling diaphragm actuators and show that the new system provides an improved combination of long stroke, high stiffness, and accurate transmission of fingertip forces.
|
|
09:40-10:55, Paper WeAT1-17.6 | Add to My Program |
High-Performance Continuous Hydraulic Motor for MR Safe Robotic Teleoperation |
Dong, Ziyang | The University of Hong Kong |
Guo, Ziyan | The University of Hong Kong |
Lee, Kit-Hang | The University of Hong Kong |
Fang, Ge | The University of Hong Kong |
Tang, Wai Lun | The University of Hong Kong |
Chang, Hing-Chiu | The University of Hong Kong |
Chan, Tat-Ming | Prince of Wales Hospital |
Kwok, Ka-Wai | The University of Hong Kong |
Keywords: Hydraulic/Pneumatic Actuators, Medical Robots and Systems, Surgical Robotics: Steerable Catheters/Needles
Abstract: Magnetic resonance imaging (MRI)-guided intervention has drawn increasing attention over the last decade. This is attributed to its capability of monitoring any physiological change of soft tissue with high-contrast MR images. This also gives rise to the demand for precise tele-manipulation of interventional instruments. However, there is still a lack of MR-safe actuators that provide high-fidelity robot manipulation. In this paper, we present a three-cylinder hydraulic motor using rolling-diaphragm-sealed cylinders, which can provide continuous bidirectional rotation with unlimited range. Both kinematic and dynamic models of the presented motor were studied, which facilitate its overall design optimization and position/torque control. Motor performance, such as step response, frequency response, and accuracy, was experimentally evaluated. We also integrate the motor into our catheter robot prototype designed for intra-operative MRI-guided cardiac electrophysiology (EP), which is capable of providing full-degree-of-freedom and precise manipulation of a standard EP catheter.
|
|
WeAT1-18 Interactive Session, 220 |
Add to My Program |
Autonomous Agents - 3.1.18 |
|
|
|
09:40-10:55, Paper WeAT1-18.1 | Add to My Program |
Learning Primitive Skills for Mobile Robots |
Zhu, Yifeng | Carnegie Mellon University |
Schwab, Devin | Carnegie Mellon University |
Veloso, Manuela | Carnegie Mellon University |
Keywords: Autonomous Agents, Deep Learning in Robotics and Automation
Abstract: Achieving effective task performance on real mobile robots is a great challenge when hand-coding algorithms, both due to the amount of effort involved and the manually tuned parameters required for each skill. Learning algorithms instead have the potential to ease this challenge by using a single set of training parameters for learning different skills, but the feasibility of such learning on real robots remains a research pursuit. We focus on one kind of mobile robot system, the robot soccer "small-size" domain, in which tactical and high-level team strategies build upon individual robot ball-based skills. In this paper, we present our work using Deep Reinforcement Learning to learn three real-robot primitive skills in continuous action space: go-to-ball, turn-and-shoot and shoot-goalie, for which there is a clear success metric of reaching a destination or scoring a goal. We introduce the state and action representation, as well as the reward and network architecture. We describe our training and testing using a simulator of high physical and hardware fidelity. Then we test the policies trained in simulation on real robots. Our results show that the learned skills achieve an overall better success rate at the expense of being 0.29 seconds slower on average across all three skills. In the end, we show that our policies trained in simulation have good performance on real robots by directly transferring the policy.
|
|
09:40-10:55, Paper WeAT1-18.2 | Add to My Program |
Coverage Path Planning in Belief Space |
Schirmer, Robert | Robert Bosch GmbH |
Biber, Peter | Robert Bosch GmbH |
Stachniss, Cyrill | University of Bonn |
Keywords: Autonomous Agents, Learning and Adaptive Systems, Motion and Path Planning
Abstract: For safety reasons, robotic lawn mowers and similar devices are required to stay within a predefined working area. Keeping the robot within its workspace is typically achieved by special safeguards such as a wire installed in the ground. In the case of robotic lawn mowers, this causes a certain customer reluctance. It is more desirable to fulfill those safety-critical tasks by safe navigation and path planning. In this paper, we tackle the problem of planning a coverage path composed of parallel lanes that maximizes robot safety under the constraints of cheap, low range sensors and thus substantial uncertainty in the robot's belief and ability to execute actions. Our approach uses a map of the environment to estimate localizability at all locations, and it uses these estimates to search for an uncertainty-aware coverage path while avoiding collisions. We implemented our approach using C++ and ROS and thoroughly tested it on real garden data. The experiment shows that our approach leads to safer meander patterns for the lawn mower and takes expected localizability information into account.
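To make the "parallel lanes" structure concrete, here is a minimal meander-lane generator over a rectangular workspace. It omits the paper's key ingredient, scoring candidate paths by expected localizability, and all dimensions are illustrative.

```python
def meander(width, height, lane_spacing):
    """Return a list of alternating up/down lane segments covering a
    width x height rectangle (a simple boustrophedon pattern)."""
    lanes, x, upward = [], lane_spacing / 2, True
    while x < width:
        y0, y1 = (0.0, height) if upward else (height, 0.0)
        lanes.append(((x, y0), (x, y1)))
        upward = not upward          # alternate lane direction
        x += lane_spacing
    return lanes

for seg in meander(2.0, 3.0, 0.5):   # a 2 m x 3 m garden, 0.5 m lane spacing
    print(seg)
```

In the paper's setting, each candidate lane direction would additionally be evaluated against the localizability map before committing to a pattern.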
|
|
09:40-10:55, Paper WeAT1-18.3 | Add to My Program |
Continuous Control for High-Dimensional State Spaces: An Interactive Learning Approach |
Pérez Dattari, Rodrigo Javier | University of Chile |
Celemin, Carlos | Advanced Mining Technology Center, Department of Electrical Engineering |
Ruiz-del-Solar, Javier | Universidad De Chile |
Kober, Jens | TU Delft |
Keywords: Deep Learning in Robotics and Automation, Agent-Based Systems, Learning from Demonstration
Abstract: Deep Reinforcement Learning (DRL) has become a powerful methodology to solve complex decision-making problems. However, DRL has several limitations when used in real-world problems (e.g., robotics applications). For instance, long training times are required and cannot be accelerated in contrast to simulated environments, and reward functions may be hard to specify/model and/or to compute. Moreover, the transfer of policies learned in a simulator to the real-world has limitations (reality gap). On the other hand, machine learning methods that rely on the transfer of human knowledge to an agent have shown to be time efficient for obtaining well performing policies and do not require a reward function. In this context, we analyze the use of human corrective feedback during task execution to learn policies with high-dimensional state spaces, by using the D-COACH framework, and we propose new variants of this framework. D-COACH is a Deep Learning based extension of COACH (COrrective Advice Communicated by Humans), where humans are able to shape policies through corrective advice. The enhanced version of D-COACH, which is proposed in this paper, largely reduces the time and effort of a human for training a policy. Experimental results validate the efficiency of the D-COACH framework in three different problems (simulated and with real robots), and show that its enhanced version reduces the human training effort considerably.
|
|
09:40-10:55, Paper WeAT1-18.4 | Add to My Program |
A Predictive Reward Function for Human-Like Driving Based on a Transition Model of Surrounding Environment |
Hayashi, Daiki | Graduate School of Informatics, Nagoya University |
Xu, Yunfei | Michigan State University |
Bando, Takashi | DENSO International America, Inc |
Takeda, Kazuya | Nagoya University |
Keywords: Autonomous Agents, Learning from Demonstration, Cognitive Human-Robot Interaction
Abstract: Driving is a complex task that requires the perception of the surrounding environment, decision making, and control of the vehicle. Human drivers predict how surrounding objects move and decide an appropriate driving behavior. As with human drivers, autonomous driving vehicles should consider the condition of the surrounding environment and behave naturally so as not to disturb the traffic flow. We propose a reward function for learning how natural the driving is, based on the hypothesis that the movement of surrounding vehicles becomes unpredictable when the ego vehicle takes an unnatural driving behavior. The reward function is based on the prediction error of a deep predictive network that models the transition of the surrounding environment. An occupancy grid image is used to perceive the surrounding environment, and predictions up to two seconds ahead are used to calculate the reward function. We evaluated the reward function using both simulated and real-world data. We trained the prediction network using real driving data and trained a reinforcement learning agent based on the reward function. Then we compared the speed planned by the agent and a human driver, which showed a correlation of 0.52. We also confirmed the benefit of taking prediction into account by observing the behavior of the agent in a specific traffic scenario.
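The reward construction reduces to a one-liner once a transition predictor exists: penalize the agent in proportion to how badly the surroundings were predicted. In this sketch the predictor is a stand-in identity function; in the paper it is the deep transition model over occupancy grids.

```python
import numpy as np

def prediction_reward(predictor, grid_prev, grid_now, scale=1.0):
    """Reward = negative per-cell squared prediction error of the
    surrounding-environment model (natural driving -> low error -> high reward)."""
    predicted = predictor(grid_prev)
    error = np.mean((predicted - grid_now) ** 2)
    return -scale * error

rng = np.random.default_rng(0)
g0 = rng.random((64, 64))                      # occupancy grid at time t-1
g1 = g0 + 0.01 * rng.standard_normal((64, 64)) # nearly-predictable next grid
print(prediction_reward(lambda g: g, g0, g1))  # small penalty, close to zero
```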
|
|
09:40-10:55, Paper WeAT1-18.5 | Add to My Program |
ADAPS: Autonomous Driving Via Principled Simulations |
Li, Weizi | University of North Carolina at Chapel Hill |
Wolinski, David | University of North Carolina at Chapel Hill |
Lin, Ming C. | University of North Carolina |
Keywords: Autonomous Agents, Simulation and Animation, Visual Learning
Abstract: Autonomous driving has seen significant advancements in recent years. However, obtaining a robust control policy for driving remains challenging, as it requires training data from a variety of scenarios, including rare situations (e.g., accidents), an effective policy architecture, and an efficient learning mechanism. We propose ADAPS for producing robust control policies for autonomous vehicles. ADAPS consists of two simulation platforms for generating and analyzing accidents to automatically produce labeled training data, and a memory-enabled hierarchical control policy. Additionally, ADAPS offers a more efficient online learning mechanism that reduces the number of iterations required in learning compared to existing methods such as DAGGER [1]. We present both theoretical and experimental results. The latter are produced in simulated environments, where qualitative and quantitative results are generated to demonstrate the benefits of ADAPS.
|
|
09:40-10:55, Paper WeAT1-18.6 | Add to My Program |
Planning Coordinated Event Observation for Structured Narratives |
Shell, Dylan | Texas A&M University |
Huang, Li | University of Houston |
Becker, Aaron | University of Houston |
O'Kane, Jason | University of South Carolina |
Keywords: Autonomous Agents, Agent-Based Systems, Motion and Path Planning
Abstract: This paper addresses the problem of using autonomous robots to record events that obey narrative structure. The work is motivated by a vision of robot teams that can, for example, produce individualized highlight videos for each runner in a large-scale road race such as a marathon. We introduce a method for specifying the desired structure as a function that describes how well the captured events can be used to produce an output that meets the specification. This function is specified in a compact, legible form similar to a weighted finite automaton. Then we describe a planner that uses simple predictions of future events to coordinate the robots' efforts to capture the most important events, as determined by the specification. We describe an implementation of this approach and demonstrate its effectiveness in a simulated race scenario both in simulation and in a hardware testbed.
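The specification format can be illustrated with a tiny weighted-automaton scorer: transitions map a (state, event) pair to a next state and a weight, and a captured event sequence is scored by the weights it accumulates. The states, events, and weights below are invented for illustration.

```python
# Hypothetical narrative specification in weighted-finite-automaton style.
transitions = {
    ("start",  "runner_at_start"):   ("racing", 5.0),
    ("racing", "runner_midcourse"):  ("racing", 2.0),
    ("racing", "runner_at_finish"):  ("done",  10.0),
}

def score(events, state="start"):
    """Sum of transition weights along the run induced by the event sequence;
    events with no enabled transition contribute nothing."""
    total = 0.0
    for e in events:
        if (state, e) not in transitions:
            continue
        state, w = transitions[(state, e)]
        total += w
    return total

print(score(["runner_at_start", "runner_midcourse", "runner_at_finish"]))  # 17.0
```

A planner in this spirit would steer the robots toward events whose capture maximizes such a score, given predictions of where and when events will occur.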
|
|
WeAT1-19 Interactive Session, 220 |
Add to My Program |
Contact Modeling - 3.1.19 |
|
|
|
09:40-10:55, Paper WeAT1-19.1 | Add to My Program |
Algorithmic Resolution of Multiple Impacts in Nonsmooth Mechanical Systems with Switching Constraints |
Li, Yangzhi | Singapore University of Technology and Design |
Yu, Haoyong | National University of Singapore |
Braun, David | Singapore University of Technology and Design |
Keywords: Contact Modeling, Dynamics, Simulation and Animation
Abstract: We present a differential-algebraic formulation with switching constraints to model the nonsmooth dynamics of robotic systems subject to changing constraints and multiple impacts. The formulation combines a single structurally simple governing equation, a set of switching kinematic constraints, and the plastic impact law, to represent the dynamics of robots that interact with their environment. The main contribution of this formulation is a novel algorithmic impact resolution method which provides an explicit solution to the classical plastic impact law in the case of multiple simultaneous impacts. This method serves as an alternative to prior linear-complementarity-based formulations which offer an implicit impact resolution through iterative calculation. We demonstrate the utility of the proposed method by simulating the locomotion of a planar anthropometric biped.
|
|
09:40-10:55, Paper WeAT1-19.2 | Add to My Program |
Rigid Body Motion Prediction with Planar Non-Convex Contact Patch |
Xie, Jiayin | Stony Brook University |
Chakraborty, Nilanjan | Stony Brook University |
Keywords: Contact Modeling, Dynamics, Simulation and Animation
Abstract: We present a principled method for motion prediction via dynamic simulation for rigid bodies in intermittent contact with each other where the contact is assumed to be a planar non-convex contact patch. The planar non-convex contact patch can either be a topologically connected set or disconnected set. Such algorithms are useful in planning and control for robotic manipulation. Most work in rigid body dynamic simulation assumes that the contact between objects is a point contact, which may not be valid in many applications. In this paper, by using the convex hull of the contact patch, we build on our recent work on simulating rigid bodies with convex contact patches, for simulating the motion of objects with planar non-convex contact patches. We formulate a discrete-time mixed complementarity problem where we solve the contact detection and integration of the equations of motion simultaneously. Thus, our method is a geometrically-implicit method and we prove that in our formulation, there is no artificial penetration between the contacting rigid bodies. We solve for the equivalent contact point (ECP) and contact impulse of each contact patch simultaneously along with the state, i.e., configuration and velocity of the objects. We provide empirical evidence to show that our method can seamlessly capture transition between different contact modes like patch contact to multiple or single point contact during simulation.
|
|
09:40-10:55, Paper WeAT1-19.3 | Add to My Program |
A Data-Driven Approach for Fast Simulation of Robot Locomotion on Granular Media |
Zhu, Yifan | Duke University |
Abdulmajeid, Laith | University of Wisconsin Madison |
Hauser, Kris | Duke University |
Keywords: Contact Modeling, Simulation and Animation, Legged Robots
Abstract: In this paper, we propose a semi-empirical approach for simulating robot locomotion on granular media. We first develop a contact model based on the stick-slip behavior between rigid objects and granular grains, which is then learned through running extensive experiments. The contact model represents all possible contact wrenches that the granular substrate can provide as a convex volume, which our method formulates as constraints in an optimization-based contact force solver. During simulation, granular substrates are treated as rigid objects that allow penetration and the contact solver solves for wrenches that maximize frictional dissipation. We show that our method is able to simulate plausible interaction response with several granular media at interactive rates.
|
|
09:40-10:55, Paper WeAT1-19.4 | Add to My Program |
On the Similarities and Differences among Contact Models in Robot Simulation |
Horak, Peter | Charles Stark Draper Laboratory |
Trinkle, Jeff | Rensselaer Polytechnic Institute |
Keywords: Contact Modeling, Simulation and Animation, Dynamics
Abstract: Over the past several decades, as affordable computational power has increased, simulation has become increasingly important in robot analysis, planning, and control. Smooth robot dynamics can be simulated efficiently and accurately and therefore readily used in model-based control schemes. However, some of the most difficult and important problems in robotics, such as running, grasping, and parts assembly, involve intermittent frictional contacts, which introduce extreme nonlinearities into the dynamics. Numerous approaches have been developed to simulate contact dynamics. Even though real bodies are not rigid, idealized rigid contact models have been used widely and productively for decades. However, the resulting non-smooth dynamics can be computationally difficult to solve or use in model-based planning and control, motivating researchers to propose various relaxations of the idealized contact models. The varied origins and formulations of these approaches can obscure their similarities and differences. In this paper, we identify and explain differences between four contact models. We present the models in the context of one solver that is applicable to all of them, namely Projected Gauss-Seidel, in order to highlight their common structure and to avoid confounding their comparison with differences in the solution methods. Simulation results from sliding, wedging, grasping, and stacking experiments illustrate consequences of the differences.
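Since Projected Gauss-Seidel is the common solver in this comparison, a minimal frictionless version is easy to state: find impulses lam >= 0 with A lam + b >= 0 and complementarity lam^T (A lam + b) = 0, sweeping one contact at a time and projecting onto the nonnegative orthant. The matrix A (the Delassus operator) and vector b below are illustrative.

```python
import numpy as np

def pgs(A, b, iters=100):
    """Projected Gauss-Seidel for the LCP: lam >= 0, A lam + b >= 0,
    lam . (A lam + b) = 0. A must have positive diagonal entries."""
    lam = np.zeros(len(b))
    for _ in range(iters):
        for i in range(len(b)):
            # Residual at contact i, excluding its own current impulse.
            r = b[i] + A[i] @ lam - A[i, i] * lam[i]
            lam[i] = max(0.0, -r / A[i, i])   # project onto lam_i >= 0
    return lam

A = np.array([[2.0, 0.5], [0.5, 1.0]])  # symmetric positive definite example
b = np.array([-1.0, 0.3])
lam = pgs(A, b)
print("impulses:", lam, " post-impact velocities:", A @ lam + b)
```

Friction cones enter by enlarging the projection step; the four contact models discussed in the paper then differ mainly in how that projection and the constraint terms are formed.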
|
|
09:40-10:55, Paper WeAT1-19.5 | Add to My Program |
Grasping Interface with Wet Adhesion and Patterned Morphology: Case of Thin Shell |
Nguyen, Van Pho | Japan Advanced Institute of Science and Technology (JAIST) |
Ho, Van | Japan Advanced Institute of Science and Technology |
Keywords: Contact Modeling, Grippers and Other End-Effectors, Grasping
Abstract: In this paper, we present an analytical method for studying the role of the micro-patterned morphology of a soft finger with wet adhesion in grasping deformable thin-shell objects. This work originated from a project on a robotic platform for autonomously attaching/removing soft contact lenses to/from human eyes in a wet environment. In this scenario, a contact lens (a hemispherical thin shell) is gripped by a soft-fingered hand under three conditions: inside/outside a liquid environment, and in contact with a curved substrate (such as an eye). The fingertip is fitted with pads of two kinds: a flat surface, and a micro-patterned surface inspired by the adhesion mechanism of a tree frog's toes. The pattern comprises 3600 square cells, each of size 85 μm × 85 μm. The proposed analytical model is used to evaluate the grasp forces for both kinds of pads, and is verified in an actual application of grasping a contact lens. The experimental results showed good agreement with the analysis, indicating that the micro-structured pads reduced the applied preload and the deformation of the shell to 1.5-2 times lower than those of the flat surface. This work could be extended to modeling grasping interfaces with deformable curved objects in wet and highly moisturized environments.
|
|
WeAT1-20 Interactive Session, 220 |
Add to My Program |
Hybrid Logical/Dynamical Planning and Verification - 3.1.20 |
|
|
|
09:40-10:55, Paper WeAT1-20.1 | Add to My Program |
Controller Synthesis for Discrete-Time Hybrid Polynomial Systems Via Occupation Measures |
Han, Weiqiao | Massachusetts Institute of Technology |
Tedrake, Russ | Massachusetts Institute of Technology |
Keywords: Optimization and Optimal Control
Abstract: We consider the feedback design for stabilizing a rigid body system by making and breaking multiple contacts with the environment without prespecifying the timing or the number of contacts. We model such a system as a discrete-time hybrid polynomial system, where the state-input space is partitioned into several polytopic regions, each associated with a different polynomial dynamics equation. Based on the notion of occupation measures, we present a novel controller synthesis approach that solves finite-dimensional semidefinite programs as approximations to an infinite-dimensional linear program in order to stabilize the system. The optimization formulation is simple and convex, and for any fixed degree of approximation the computational complexity is polynomial in the state and control input dimensions. We illustrate our approach on several robotics examples.
|
|
09:40-10:55, Paper WeAT1-20.2 | Add to My Program |
Optimal Path Planning for ω-Regular Objectives with Abstraction-Refinement |
Leong, Yoke Peng | California Institute of Technology |
Prabhakar, Pavithra | Kansas State University |
Keywords: Formal Methods in Robotics and Automation, Motion and Path Planning, Optimization and Optimal Control
Abstract: This paper presents an abstraction-refinement based framework for optimal controller synthesis of discrete-time systems with respect to ω-regular objectives. It first abstracts the discrete-time concrete system into a finite weighted transition system using a finite partition of the state-space. Then, a two-player mean payoff parity game is solved on the product of the abstract system and the Büchi automaton corresponding to the ω-regular objective, to obtain an optimal abstract controller that satisfies the ω-regular objective. The abstract controller is guaranteed to be implementable in the concrete discrete-time system, with a sub-optimal cost. The abstraction is refined with finer partitions to reduce the sub-optimality. In contrast to existing formal controller synthesis algorithms based on abstractions, this technique provides an upper bound on the trajectory cost when implementing the suboptimal controller. A robot surveillance scenario is presented to illustrate the feasibility of the approach.
|
|
09:40-10:55, Paper WeAT1-20.3 | Add to My Program |
Sampling-Based Polytopic Trees for Approximate Optimal Control of Piecewise Affine Systems |
Sadraddini, Sadra | Boston University |
Tedrake, Russ | Massachusetts Institute of Technology |
Keywords: Hybrid Logical/Dynamical Planning and Verification, Formal Methods in Robotics and Automation, Motion and Path Planning
Abstract: Piecewise affine (PWA) systems are widely used to model highly nonlinear behaviors such as contact dynamics in robot locomotion and manipulation. Existing control techniques for PWA systems have computational drawbacks, both in offline design and online implementation. In this paper, we introduce a method to obtain feedback control policies and a corresponding set of admissible initial conditions for discrete-time PWA systems such that all the closed-loop trajectories reach a goal polytope while a cost function is optimized. The idea is conceptually similar to LQR-trees (Tedrake et al., 2010), which consist of three steps: (1) open-loop trajectory optimization, (2) feedback control to compute "funnels" of states around trajectories, and (3) repeating (1) and (2) so that the funnels grow backward from the goal in a tree fashion and fill the state space as much as possible. We show that PWA dynamics can be exploited to combine steps (1) and (2) into a single step that is tackled using mixed-integer convex programming, which makes the method suitable for dealing with hard constraints. Illustrative examples on contact-based dynamics are presented.
|
|
09:40-10:55, Paper WeAT1-20.4 | Add to My Program |
A Classification-Based Approach for Approximate Reachability |
Rubies Royo, Vicenc | UC Berkeley |
Fridovich-Keil, David | University of California, Berkeley |
Herbert, Sylvia | UC Berkeley |
Tomlin, Claire | UC Berkeley |
Keywords: Hybrid Logical/Dynamical Planning and Verification, Optimization and Optimal Control, AI-Based Methods
Abstract: Hamilton-Jacobi (HJ) reachability analysis has been developed over the past decades into a widely-applicable tool for determining goal satisfaction and safety verification in nonlinear systems. While HJ reachability can be formulated very generally, computational complexity can be a serious impediment for many systems of practical interest. Much prior work has been devoted to computing approximate solutions to large reachability problems, yet many of these methods apply only to very restrictive problem classes, do not generate controllers, and/or can be extremely conservative. In this paper, we present a new method for approximating the optimal controller of the HJ reachability problem for control-affine systems. While this is again a specific problem class, many dynamical systems of interest are, or can be well approximated by, control-affine models. We explicitly avoid storing a representation of the reachability value function, and instead learn a controller as a sequence of simple binary classifiers. We compare our approach to existing grid-based methodologies in HJ reachability and demonstrate its utility on several examples, including a physical quadrotor navigation task.
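A conceptual sketch of the core idea, learning the controller as a sequence of binary classifiers instead of storing the value function; this assumes a scalar, bounded control (so the reachability-optimal control is bang-bang) and labels obtained from a conventional grid-based HJ solve, and is not the authors' implementation:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_classifier_sequence(states_per_step, u_star_per_step):
    """One binary classifier per backward timestep, each predicting
    the sign of the optimal control at sampled states."""
    return [LogisticRegression().fit(X, (u > 0).astype(int))
            for X, u in zip(states_per_step, u_star_per_step)]

def controller(classifiers, k, x, u_max=1.0):
    """Evaluate the learned policy at step k; no value function stored."""
    return u_max if classifiers[k].predict(x[None])[0] else -u_max
```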
|
|
09:40-10:55, Paper WeAT1-20.5 | Add to My Program |
Practical Resolution Methods for MDPs in Robotics Exemplified with Disassembly Planning |
Suárez-Hernández, Alejandro | CSIC-UPC |
Alenyà, Guillem | CSIC-UPC |
Torras, Carme | CSIC-UPC |
Keywords: Planning, Scheduling and Coordination, Hybrid Logical/Dynamical Planning and Verification, Task Planning
Abstract: In this paper we focus on finding practical resolution methods for Markov Decision Processes (MDPs) in robotics. Two of the main difficulties of applying MDPs to real-world robotics problems are: (1) having to deal with huge state spaces; and (2) designing a method that is robust to dead ends. These complications restrict, or make more difficult, the application of methods such as Value Iteration, Policy Iteration, or Labeled Real Time Dynamic Programming (LRTDP). We see in determinization and heuristic search a way to work around these problems. In addition, we believe that many practical use cases offer the opportunity to identify hierarchies of subtasks and solve smaller, simplified problems. We propose a decision-making unit that operates in a probabilistic planning setting through Stochastic Shortest Path Problems (SSPPs), which generalize the most common types of MDPs. Our decision-making unit combines: (1) automatic hierarchical organization of subtasks; and (2) on-line resolution via determinization. We argue that several applications of planning benefit from these two strategies. We exemplify our approach with a robotized disassembly application. The disassembly problem is modeled in Probabilistic Planning Definition Language (PPDDL), and serves to define our experiments. Our results show many advantages of our method over LRTDP, such as a better capability to handle problems with large state spaces.
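For context, a minimal sketch of one standard determinization scheme (most-likely outcome) that resolution-via-determinization approaches build on; the flat action encoding is a hypothetical stand-in for parsed PPDDL, and the paper may use a different variant:

```python
def most_likely_determinization(actions):
    """actions: list of (name, [(effect, probability), ...]).
    Replace each probabilistic action by a deterministic one keeping
    only its highest-probability outcome; a classical planner can then
    solve the determinized problem online."""
    return [(name, max(outcomes, key=lambda o: o[1])[0])
            for name, outcomes in actions]

# Example: an 'unscrew' action from a disassembly domain.
ops = most_likely_determinization(
    [("unscrew", [("screw-removed", 0.8), ("screw-stuck", 0.2)])])
print(ops)  # [('unscrew', 'screw-removed')]
```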
|
|
WeAT1-21 Interactive Session, 220 |
Add to My Program |
Aerial Systems - 3.1.21 |
|
|
|
09:40-10:55, Paper WeAT1-21.1 | Add to My Program |
Improving Drone Localisation Around Wind Turbines Using Monocular Model-Based Tracking |
Moolan-Feroze, Oliver | University of Bristol |
Karachalios, Konstantinos | Perceptual Robotics |
Nikolaidis, Dimitrios | Perceptual Robotics |
Calway, Andrew | University of Bristol |
Keywords: Localization, Deep Learning in Robotics and Automation, Aerial Systems: Applications
Abstract: We present a novel method of integrating image-based measurements into a drone navigation system for the automated inspection of wind turbines. We take a model-based tracking approach, where a 3D skeleton representation of the turbine is matched to the image data. Matching is based on comparing the projection of the representation to that inferred from images using a convolutional neural network. This enables us to find image correspondences using a generic turbine model that can be applied to a wide range of turbine shapes and sizes. To estimate the 3D pose of the drone, we fuse the network output with GPS and IMU measurements using a pose graph optimiser. Results illustrate that the use of the image measurements significantly improves the accuracy of the localisation over that obtained using GPS and IMU alone.
|
|
09:40-10:55, Paper WeAT1-21.2 | Add to My Program |
Experimental Assessment of Plume Mapping Using Point Measurements from Unmanned Vehicles |
Hutchinson, Michael | Loughborough University |
Ladosz, Pawel | Loughborough University |
Liu, Cunjia | Loughborough University |
Chen, Wen-Hua | Loughborough University |
Keywords: Robotics in Hazardous Fields, Environment Monitoring and Management, Aerial Systems: Applications
Abstract: This paper presents experiments to assess the plume mapping performance of unmanned autonomous vehicles. The paper compares several mapping algorithms, including Gaussian process regression, neural networks, and polynomial and piecewise linear interpolation. The methods are compared in Monte Carlo simulations using a well-known plume model and in indoor experiments using a ground robot. Unlike previous work on mapping using unmanned vehicles, the indoor experiments were performed in a controlled, repeatable manner where a steady-state ground truth could be obtained in order to properly assess the various regression methods using data from a real dispersive source and sensor. The effect of sampling time during data collection on mapping accuracy was assessed, and the data collected during the experiments has been made available. Overall, the Gaussian process method was found to perform the best among the regression algorithms, showing more robustness to the noisy measurements obtained from short sampling periods and enabling an accurate map to be produced in significantly less time. Finally, plume mapping results are presented in uncontrolled outdoor conditions, using an unmanned aerial vehicle, to demonstrate the system in a realistic uncontrolled environment.
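For context, a minimal sketch of Gaussian process regression for mapping a plume from point concentration measurements, the approach the study found most robust; the kernel, synthetic data, and hyperparameters are illustrative only:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
X = rng.uniform(0, 10, size=(60, 2))          # sample locations (m)
c = np.exp(-0.3 * np.sum((X - 5.0) ** 2, 1))  # stand-in concentrations
c += 0.05 * rng.normal(size=len(c))           # short-sampling noise

# The WhiteKernel absorbs measurement noise from short sampling periods.
gp = GaussianProcessRegressor(kernel=RBF(2.0) + WhiteKernel(1e-3))
gp.fit(X, c)

# Dense posterior mean/uncertainty over the area gives the plume map.
xx, yy = np.meshgrid(np.linspace(0, 10, 40), np.linspace(0, 10, 40))
mean, std = gp.predict(np.c_[xx.ravel(), yy.ravel()], return_std=True)
```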
|
|
09:40-10:55, Paper WeAT1-21.3 | Add to My Program |
Online Deep Learning for Improved Trajectory Tracking of Unmanned Aerial Vehicles Using Expert Knowledge |
Sarabakha, Andriy | Nanyang Technological University |
Kayacan, Erdal | Aarhus University |
Keywords: Neural and Fuzzy Control, Learning and Adaptive Systems, Aerial Systems: Applications
Abstract: This work presents an online learning-based control method for improved trajectory tracking of unmanned aerial vehicles using both deep learning and expert knowledge. The proposed method does not require the exact model of the system to be controlled, and it is robust against variations in system dynamics as well as operational uncertainties. The learning is divided into two phases: offline (pre-)training and online (post-)training. In the former, a conventional controller performs a set of trajectories and, based on the input-output dataset, the deep neural network (DNN)-based controller is trained. In the latter, the trained DNN, which mimics the conventional controller, controls the system. Unlike existing approaches in the literature, the network continues to be trained online on sets of trajectories that were not used in the offline training phase. Thanks to the rule base, which contains the expert knowledge, the proposed framework learns the system dynamics and operational uncertainties in real time. The experimental results show that the proposed online learning-based approach gives better trajectory tracking performance than a network trained only offline.
|
|
09:40-10:55, Paper WeAT1-21.4 | Add to My Program |
Decentralized Collaborative Transport of Fabrics Using Micro-UAVs |
Cotsakis, Ryan | University of British Columbia, Faculty of Applied Science |
St-Onge, David | Ecole Polytechnique De Montreal |
Beltrame, Giovanni | Ecole Polytechnique De Montreal |
Keywords: Swarms, Cooperating Robots, Aerial Systems: Applications
Abstract: Small unmanned aerial vehicles (UAVs) generally have little capacity to carry payloads. Through collaboration, UAVs can increase their joint payload capacity and carry more significant loads. For maximum flexibility in dynamic and unstructured environments and under varying task demands, we propose a fully decentralized control infrastructure based on a swarm-specific scripting language, Buzz. In this paper, we describe the control infrastructure and use it to compare two algorithms for collaborative transport: field potentials and spring-damper. We test the performance of our approach with a fleet of micro-UAVs, demonstrating the potential of decentralized control for collaborative transport.
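For context, minimal sketches of the two compared control strategies as per-UAV velocity commands; the gains, desired spacing, and exact potential shape are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def spring_damper_cmd(p, v, p_anchor, k=1.5, d=0.7):
    """Virtual spring-damper pulling UAV position p toward its anchor
    point on the fabric, damped by the UAV velocity v."""
    return k * (p_anchor - p) - d * v

def field_potential_cmd(p, neighbors, d0=1.0, k=0.8):
    """Pairwise potential keeping a desired spacing d0 to each
    neighbor: attract when too far, repel when too close."""
    cmd = np.zeros_like(p, dtype=float)
    for q in neighbors:
        r = p - q
        dist = np.linalg.norm(r)
        cmd += k * (dist - d0) * (-r / dist)
    return cmd
```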
|
|
09:40-10:55, Paper WeAT1-21.5 | Add to My Program |
Precision Stationary Flight of a Robotic Hummingbird |
Roshanbin, Ali | Université Libre De Bruxelles |
Garone, Emanuele | Université Libre De Bruxelles |
Preumont, André | ULB |
Keywords: Biologically-Inspired Robots
Abstract: This paper describes recent developments in a robotic hummingbird project aimed at achieving precision stationary hovering. To this end, we modified the early version of our flapping mechanism; besides being more efficient, the new mechanism significantly reduces the asymmetry of the wing trajectory present in the previous version. A cascade control strategy is used to compensate for the residual parasitic torques and the misalignment of the lift vector, and serves as the autopilot.
|
|
09:40-10:55, Paper WeAT1-21.6 | Add to My Program |
Robust Attitude Estimation Using an Adaptive Unscented Kalman Filter |
Chiella, Antonio Carlos Bana | Federal University of Minas Gerais |
Teixeira, Bruno Otávio Soares | Federal University of Minas Gerais |
Pereira, Guilherme | West Virginia University |
Keywords: Sensor Fusion, Aerial Systems: Perception and Autonomy, Localization
Abstract: This paper presents the robust Adaptive unscented Kalman filter (RAUKF) for attitude estimation. Since the proposed algorithm represents attitude as a unit quaternion, all basic tools used, including the standard UKF, are adapted to the unit quaternion algebra. Additionally, the algorithm adopts an outlier detector algorithm to identify abrupt changes in the UKF innovation and an adaptive strategy based on covariance matching to tune the measurement covariance matrix online. Adaptation and outlier detection make the proposed algorithm robust to fast and slow perturbations such as magnetic field interference and linear accelerations. Experimental results with a manipulator robot suggest that our method overcomes other algorithms found in the literature.
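For context, a minimal sketch of the two adaptive ingredients described, innovation gating for outlier detection and covariance matching over a sliding window; the threshold and window length are illustrative, and the paper's RAUKF details may differ:

```python
import numpy as np
from collections import deque

class InnovationAdapter:
    def __init__(self, window=30, chi2_gate=9.0):
        self.history = deque(maxlen=window)
        self.chi2_gate = chi2_gate

    def step(self, nu, S):
        """nu: UKF innovation; S: innovation covariance.
        Returns an updated empirical innovation covariance, or None
        when the measurement is gated out as an outlier (e.g.,
        magnetic interference or abrupt linear acceleration)."""
        if nu @ np.linalg.solve(S, nu) > self.chi2_gate:
            return None
        self.history.append(nu)
        # Covariance matching: average of innovation outer products,
        # used to retune the measurement covariance R online.
        return np.mean([np.outer(v, v) for v in self.history], axis=0)
```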
|
|
WeAT1-22 Interactive Session, 220 |
Add to My Program |
Learning from Demonstration III - 3.1.22 |
|
|
|
09:40-10:55, Paper WeAT1-22.1 | Add to My Program |
One-Shot Learning of Multi-Step Tasks from Observation Via Activity Localization in Auxiliary Video |
Goo, Wonjoon | University of Texas at Austin |
Niekum, Scott | University of Texas at Austin |
Keywords: Learning from Demonstration, Deep Learning in Robotics and Automation, Computer Vision for Automation
Abstract: Due to burdensome data requirements, learning from demonstration often falls short of its promise to allow users to quickly and naturally program robots. Demonstrations are inherently ambiguous and incomplete, making correct generalization to unseen situations difficult without a large number of demonstrations in varying conditions. By contrast, humans are often able to learn complex tasks from a single demonstration (typically observations without action labels) by leveraging context learned over a lifetime. Inspired by this capability, our goal is to enable robots to perform one-shot learning of multi-step tasks from observation by leveraging auxiliary video data as context. Our primary contribution is a novel system that achieves this goal by: (1) using a single user-segmented demonstration to define the primitive actions that comprise a task, (2) localizing additional examples of these actions in unsegmented auxiliary videos via a metalearning-based approach, (3) using these additional examples to learn a reward function for each action, and (4) performing reinforcement learning on top of the inferred reward functions to learn action policies that can be combined to accomplish the task. We empirically demonstrate that a robot can learn multi-step tasks more effectively when provided auxiliary video, and that performance greatly improves when localizing individual actions, compared to learning from unsegmented videos.
|
|
09:40-10:55, Paper WeAT1-22.2 | Add to My Program |
LVIS: Learning from Value Function Intervals for Contact-Aware Robot Controllers |
Deits, Robin | MIT |
Koolen, Twan | Massachusetts Institute of Technology |
Tedrake, Russ | Massachusetts Institute of Technology |
Keywords: Learning from Demonstration, Optimization and Optimal Control, Legged Robots
Abstract: Guided policy search is a popular approach for training controllers for high-dimensional systems, but it has a number of pitfalls. Non-convex trajectory optimization has local minima, and non-uniqueness in the optimal policy itself can mean that independently-optimized samples do not describe a coherent policy from which to train. We introduce LVIS, which circumvents the issue of local minima through global mixed-integer optimization and the issue of non-uniqueness through learning the optimal value function rather than the optimal policy. To avoid the expense of solving the mixed-integer programs to full global optimality, we instead solve them only partially, extracting intervals containing the true cost-to-go from early termination of the branch-and-bound algorithm. These interval samples are used to weakly supervise the training of a neural net which approximates the true cost-to-go. Online, we use that learned cost-to-go as the terminal cost of a one-step model-predictive controller, which we solve via a small mixed-integer optimization. We demonstrate LVIS on piecewise affine models of a cart-pole system with walls and a planar humanoid robot, and show that it can be applied to a fundamentally hard problem in feedback control: control through contact.
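A minimal sketch of the interval-based weak supervision idea: the predicted cost-to-go is penalized only when it leaves the [lower, upper] interval extracted from early-terminated branch-and-bound (the exact loss used in LVIS may differ):

```python
import torch

def interval_loss(pred, lb, ub):
    """pred, lb, ub: tensors of predicted cost-to-go values and the
    branch-and-bound lower/upper bounds for each sample."""
    below = torch.relu(lb - pred)   # penalty for dipping under lb
    above = torch.relu(pred - ub)   # penalty for exceeding ub
    return (below ** 2 + above ** 2).mean()
```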
|
|
09:40-10:55, Paper WeAT1-22.3 | Add to My Program |
Augmenting Action Model Learning by Non-Geometric Features |
Nematollahi, Iman | University of Freiburg |
Kuhner, Daniel | University of Freiburg |
Welschehold, Tim | Albert-Ludwigs-Universität Freiburg |
Burgard, Wolfram | University of Freiburg |
Keywords: Learning from Demonstration, Domestic Robots
Abstract: Learning from demonstration is a powerful tool for teaching manipulation actions to a robot. It remains an unsolved problem, however, how to take into account knowledge about the world and action-induced reactions, such as forces imposed on the gripper or liquid levels measured during pouring, without explicit and case-dependent programming. In this paper, we present a novel approach that includes such knowledge directly in the form of measured features. To this end, we use action demonstrations together with external features to learn a motion encoded by a dynamic system in a Gaussian Mixture Model (GMM) representation. Accordingly, during action imitation, the system is able to couple the geometric trajectory of the motion to features measured in the scene. We demonstrate the feasibility of our approach with a broad range of external features in real-world robot experiments, including a drinking, a handover, and a pouring task.
|
|
09:40-10:55, Paper WeAT1-22.4 | Add to My Program |
Skill Acquisition Via Automated Multi-Coordinate Cost Balancing |
Ravichandar, Harish | Georgia Institute of Technology |
Ahmadzadeh, S. Reza | University of Massachusetts Lowell |
Rana, Muhammad Asif | Georgia Institute of Technology |
Chernova, Sonia | Georgia Institute of Technology |
Keywords: Learning from Demonstration
Abstract: We propose a learning framework, named Multi-Coordinate Cost Balancing (MCCB), to address the problem of acquiring point-to-point movement skills from demonstrations. MCCB encodes demonstrations simultaneously in multiple differential coordinates that specify local geometric properties. MCCB generates reproductions by solving a convex optimization problem with a multi-coordinate cost function and linear constraints on the reproductions, such as initial, target, and via points. Further, since the relative importance of each coordinate system in the cost function might be unknown for a given skill, MCCB learns optimal weighting factors that balance the cost function. We demonstrate the effectiveness of MCCB via detailed experiments conducted on one handwriting dataset and three complex skill datasets.
|
|
09:40-10:55, Paper WeAT1-22.5 | Add to My Program |
Real-Time Multisensory Affordance-Based Control for Adaptive Object Manipulation |
Chu, Vivian | Georgia Institute of Technology |
Gutierrez, Reymundo A. | University of Texas at Austin |
Chernova, Sonia | Georgia Institute of Technology |
Thomaz, Andrea Lockerd | University of Texas at Austin |
Keywords: Learning from Demonstration, Perception for Grasping and Manipulation, Learning and Adaptive Systems
Abstract: We address the challenge of how a robot can adapt its actions to successfully manipulate objects it has not previously encountered. We introduce Real-time Multisensory Affordance-based Control (RMAC), which uses multisensory inputs and Hidden Markov Models to enable a robot to determine how much to adapt an existing affordance model. We show that using the combination of haptic, audio, and visual information with RMAC allows the robot to learn affordance models and adaptively manipulate two very different objects (a drawer and a lamp) in multiple novel configurations. Offline evaluations and real-time online evaluations show that RMAC allows the robot to accurately open different drawer configurations and turn on novel lamps with an average accuracy of 75%.
|
|
09:40-10:55, Paper WeAT1-22.6 | Add to My Program |
Learning Behavior Trees from Demonstration |
French, Kevin | University of Michigan |
Wu, Shiyu | University of Michigan |
Pan, Tianyang | University of Michigan |
Zhou, Zheming | University of Michigan |
Jenkins, Odest Chadwicke | University of Michigan |
Keywords: Learning from Demonstration
Abstract: Robotic Learning from Demonstration (LfD) allows anyone, not just experts, to program a robot for an arbitrary task. Many LfD methods focus on low-level primitive actions such as manipulator trajectories. Complex multi-step tasks with many primitive actions must also be learnable from demonstration if LfD is to encompass the full range of tasks a user may desire. Existing methods represent the high-level task in various forms, including finite state machines, decision trees, and formal logic. We propose behavior trees as an alternative representation of high-level tasks. Behavior trees are an execution model for robot control designed for real-time execution, modularity, and, consequently, transparency. Real-time execution allows the robot to perform the task reactively. Modularity allows the reuse of learned primitive actions and high-level tasks in new situations, speeding up learning in new scenarios. Transparency allows users to understand and interactively modify the learned model. We represent high-level tasks with behavior trees by building on their close relationship with decision trees. We demonstrate a human teaching our Fetch robot a household cleaning task.
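For context, a minimal sketch of the behavior-tree execution model (ticks propagating through Sequence and Fallback nodes); this is the standard formulation, not the authors' learning code:

```python
SUCCESS, FAILURE, RUNNING = "success", "failure", "running"

class Action:
    """Leaf node wrapping a primitive action; fn returns a status."""
    def __init__(self, fn): self.fn = fn
    def tick(self): return self.fn()

class Sequence:
    """Tick children left to right; fail or stay running as soon as a
    child does, succeed only if all succeed."""
    def __init__(self, *children): self.children = children
    def tick(self):
        for child in self.children:
            status = child.tick()
            if status != SUCCESS:
                return status
        return SUCCESS

class Fallback:
    """Tick children left to right; return on the first child that
    does not fail, fail only if all fail."""
    def __init__(self, *children): self.children = children
    def tick(self):
        for child in self.children:
            status = child.tick()
            if status != FAILURE:
                return status
        return FAILURE

# Example: try to grasp; if that fails, fall back to a recovery action.
tree = Fallback(Action(lambda: FAILURE), Action(lambda: SUCCESS))
print(tree.tick())  # success
```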
|
|
WeAT1-23 Interactive Session, 220 |
Add to My Program |
Learning from Demonstration IV - 3.1.23 |
|
|
|
09:40-10:55, Paper WeAT1-23.1 | Add to My Program |
Leveraging Temporal Reasoning for Policy Selection in Learning from Demonstration |
Carpio Mazariegos, Estuardo Rene | University of New Hampshire |
Clark-Turner, Madison | University of New Hampshire |
Gesel, Paul | University of New Hampshire |
Begum, Momotaz | University of New Hampshire |
Keywords: Learning from Demonstration, Human-Centered Automation, Cognitive Human-Robot Interaction
Abstract: High-level human activities often have rich temporal structures that determine the order in which atomic actions are executed. We propose the Temporal Context Graph (TCG), a temporal reasoning model that integrates probabilistic inference with Allen's interval algebra to capture these temporal structures. TCGs are capable of modeling tasks with cyclical atomic actions that consist of sequential and parallel temporal relations. We present Learning from Demonstration as the application domain where the use of TCGs can improve policy selection and address the problem of perceptual aliasing. Experiments validating the model are presented for learning two tasks from demonstration, both involving structured human-robot interactions.
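For context, a minimal sketch of classifying the basic Allen interval relations that TCGs build on (inverse relations collapsed for brevity; this is standard interval algebra, not the authors' model):

```python
def allen_relation(a, b):
    """Return the basic Allen relation of interval a = (start, end)
    relative to interval b = (start, end)."""
    (a0, a1), (b0, b1) = a, b
    if a1 < b0:               return "before"
    if a1 == b0:              return "meets"
    if a0 == b0 and a1 == b1: return "equal"
    if a0 == b0 and a1 < b1:  return "starts"
    if a0 > b0 and a1 == b1:  return "finishes"
    if a0 > b0 and a1 < b1:   return "during"
    if a0 < b0 < a1 < b1:     return "overlaps"
    return "inverse"  # one of the seven inverse relations

print(allen_relation((0, 2), (1, 3)))  # overlaps
```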
|
|
09:40-10:55, Paper WeAT1-23.2 | Add to My Program |
Specifying Dual-Arm Robot Planning Problems through Natural Language and Demonstration |
Behrens, Jan Kristof | Robert Bosch GmbH |
Stepanova, Karla | Czech Technical University |
Lange, Ralph | Robert Bosch GmbH |
Skoviera, Radoslav | Czech Institute of Informatics, Robotics, and Cybernetics; Czech |
Keywords: Learning from Demonstration, Planning, Scheduling and Coordination, Dual Arm Manipulation
Abstract: Multi-modal robot programming with natural language and demonstration is a promising technique for efficient teaching of manipulation tasks in industrial environments. In particular, with modern dual-arm robots designed to quickly take over tasks at typical industrial workbenches, the direct teaching of task sequences hardly utilizes the robots' capabilities. We therefore propose a two-staged approach that combines natural language instructions and demonstration with simultaneous task allocation and motion scheduling based on constraint programming. Instead of providing a task description and demonstrations that are replayed to a large extent, the user describes the tasks to be scheduled with all relevant constraints and demonstrates relevant locations relative to workpieces and other objects. With explicitly stated constraints on the partial ordering of tasks, the solver allocates the tasks to the robot arms and schedules them in time while avoiding self-collisions, reducing the makespan in our experiment by 33%. The linguistic concepts of naming and grouping enable systematic reuse of sub-task ensembles. The proposed approach is evaluated with four variants of a gluing use-case from furniture assembly in user studies with ten participants. In these user studies, we observed a more than six-fold speed-up in task definition compared to a textual specification of the planning problems using the Python-based planner API.
|
|
09:40-10:55, Paper WeAT1-23.3 | Add to My Program |
Learning to Serve: An Experimental Study for a New Learning from Demonstrations Framework |
Koc, Okan | Max Planck Institute for Intelligent Systems |
Peters, Jan | Technische Universität Darmstadt |
Keywords: Learning from Demonstration, AI-Based Methods, Learning and Adaptive Systems
Abstract: Learning from demonstrations is an easy and intuitive way to show examples of successful behavior to a robot. However, the fact that humans optimize or take advantage of their own body rather than the robot's, usually called the embodiment problem in robotics, often prevents industrial robots from executing the task in a straightforward way. The demonstrated movements often do not, or cannot, utilize the degrees of freedom of the robot efficiently, and moreover can suffer from excessive execution errors. In this letter, we explore a variety of solutions that address these shortcomings. In particular, we learn sparse movement primitive parameters from several demonstrations of a successful table tennis serve. The number of parameters learned using our procedure is independent of the degrees of freedom of the robot. Moreover, the parameters can be ranked according to their importance in the regression task. Learning few, ranked parameters is a desirable feature to combat the curse of dimensionality in reinforcement learning. Real robot experiments on the Barrett WAM for a table tennis serve using the learned movement primitives show that the representation can successfully capture the style of the movement with few parameters.
|
|
09:40-10:55, Paper WeAT1-23.4 | Add to My Program |
Imitating Human Search Strategies for Assembly |
Ehlers, Dennis | Aalto University |
Suomalainen, Markku | University of Oulu |
Lundell, Jens | Aalto University |
Kyrki, Ville | Aalto University |
Keywords: Learning from Demonstration, Compliant Assembly
Abstract: We present a Learning from Demonstration method for teaching robots to perform search strategies imitated from humans in scenarios where alignment tasks fail due to position uncertainty. The method utilizes human demonstrations to learn both a state-invariant dynamics model and an exploration distribution that captures the search area covered by the demonstrator. We present two alternative algorithms for computing a search trajectory from the exploration distribution, one based on sampling and another based on deterministic ergodic control. We augment the search trajectory with forces learnt through the dynamics model to enable searching in both the force and position domains. An impedance controller with superposed forces is used for reproducing the learnt strategy. We experimentally evaluate the method on a KUKA LWR4+ performing a 2D peg-in-hole and a 3D electricity socket task. Results show that, with only a few human demonstrations, the proposed method can learn to complete the search task.
|
|
09:40-10:55, Paper WeAT1-23.5 | Add to My Program |
Combining Imitation Learning with Constraint-Based Task Specification and Control |
Vergara Perico, Cristian Alejandro | KU Leuven |
De Schutter, Joris | KU Leuven |
Aertbelien, Erwin | KU Leuven |
Keywords: Learning from Demonstration, Sensor-based Control, Human-Centered Automation
Abstract: This paper combines an imitation learning approach with a model-based and constraint-based task specification and control methodology. Imitation learning provides an intuitive way for the end user to specify the context of a new robot application without the need for traditional programming skills. On the other hand, constraint-based robot programming allows us to define complex tasks involving different kinds of sensor input. Combination of both enables adaptation of complex tasks to new environments and new objects with a small number of demonstrations. The proposed method uses a statistical uni-modal model to describe the demonstrations in terms of a number of weighted basis functions. This is then combined with model-based descriptions of other aspects of the task at hand. This approach was tested in a use case inspired by an industrial application, in which the required transfer motions were learned from a small number of demonstrations, and gradually improved by adding new demonstrations. Information on a collision-free path was introduced through a small number of demonstrations. The method showed a high level of composability with force and vision controlled tasks. The use case showed that the deployment of a complex constraint-based task with sensor interactions can be expedited using imitation learning.
|
|
09:40-10:55, Paper WeAT1-23.6 | Add to My Program |
Incorporating Safety into Parametric Dynamic Movement Primitives |
Kim, Hyoin | Seoul National University |
Seo, Hoseong | Seoul National University |
Choi, Seungwon | Seoul National University |
Tomlin, Claire | UC Berkeley |
Kim, H. Jin | Seoul National University |
Keywords: Learning from Demonstration, Motion and Path Planning, Manipulation Planning
Abstract: Parametric dynamic movement primitives (PDMPs) are powerful motion representation algorithms that encode multiple demonstrations and generalize over them. Since an online trajectory from PDMPs emulates the provided demonstrations, managing qualified demonstrations for a given scenario is an important issue. This paper presents a process for managing motion demonstrations in PDMPs when some demonstrations are poor. Our proposed process distinguishes safe motion primitives from unsafe ones. To establish a criterion for determining whether a motion is safe, we calculate the safe region of the PDMP parameters using an optimization technique. In the optimization formulation, we calculate the unsafe style parameters that produce the motion closest to the unsafe point. By eliminating unsafe demonstrations whose parameters fall within the unsafe criterion and replacing them with safe ones, we incorporate safety into the PDMPs framework. Simulation and experimental results validate that the proposed process can extend the motion primitives in the PDMPs framework to new environmental settings by efficiently utilizing previous demonstrations.
|
|
WeAT1-24 Interactive Session, 220 |
Add to My Program |
Learning and Manipulation I - 3.1.24 |
|
|
|
09:40-10:55, Paper WeAT1-24.1 | Add to My Program |
Active Multi-Contact Continuous Tactile Exploration with Gaussian Process Differential Entropy |
Driess, Danny | University of Stuttgart |
Hennes, Daniel | University of Stuttgart |
Toussaint, Marc | University of Stuttgart |
Keywords: Learning and Adaptive Systems, Force and Tactile Sensing, Motion and Path Planning
Abstract: In the present work, we propose an active tactile exploration framework to obtain a surface model of an unknown object utilizing multiple contacts simultaneously. To incorporate these multiple contacts, the exploration strategy is based on the differential entropy of the underlying Gaussian process implicit surface model, which formalizes exploration with multiple contacts within an information-theoretic context and additionally allows for nonmyopic multi-step planning. In contrast to many previous approaches, the robot continuously slides along the surface with its end-effectors to gather the tactile stimuli, instead of touching it at discrete locations. This is realized by closely integrating the surface model into the compliant controller framework. Furthermore, we extend our recently proposed sliding-based tactile exploration approach to handle non-convex objects. In the experiments, it is shown that using multiple contacts simultaneously leads to a more efficient exploration of complex, non-convex objects, not only in terms of time, but also with respect to the total distance moved by all end-effectors. Finally, we demonstrate our methodology with a real PR2 robot that explores an object with both of its arms.
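For context, a minimal sketch of the exploration criterion: the differential entropy of the Gaussian process posterior over a candidate set of contact points, 0.5 log det(2πe Σ); the RBF kernel and lengthscale are illustrative assumptions:

```python
import numpy as np

def rbf(A, B, ell=0.2):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell ** 2)

def candidate_entropy(X_touched, X_cand, noise=1e-4):
    """Differential entropy of the GP posterior at candidate contact
    points, given surface points already touched; candidates with the
    largest entropy are the most informative to slide toward."""
    K = rbf(X_touched, X_touched) + noise * np.eye(len(X_touched))
    Ks = rbf(X_touched, X_cand)
    Sigma = rbf(X_cand, X_cand) - Ks.T @ np.linalg.solve(K, Ks)
    _, logdet = np.linalg.slogdet(
        2 * np.pi * np.e * (Sigma + noise * np.eye(len(X_cand))))
    return 0.5 * logdet
```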
|
|
09:40-10:55, Paper WeAT1-24.2 | Add to My Program |
Learning Robust Manipulation Skills with Guided Policy Search Via Generative Motor Reflexes |
Ennen, Philipp | RWTH Aachen University |
Bresenitz, Pia | RWTH Aachen University |
Vossen, Rene | RWTH Aachen University |
Hees, Frank | RWTH Aachen University |
Keywords: Learning and Adaptive Systems, Deep Learning in Robotics and Automation, Motion Control
Abstract: Guided Policy Search enables robots to learn control policies for complex manipulation tasks efficiently. Therein, the control policies are represented as high-dimensional neural networks which derive robot actions based on states. However, due to the small number of real-world trajectory samples in Guided Policy Search, the resulting neural networks are only robust in the neighbourhood of the trajectory distribution explored by real-world interactions. In this paper, we present a new policy representation called Generative Motor Reflexes, which is able to generate robust actions over a broader state space compared to previous methods. In contrast to prior state-action policies, Generative Motor Reflexes map states to parameters for a state-dependent motor reflex, which is then used to derive actions. Robustness is achieved by generating similar motor reflexes for many states. We evaluate the presented method in simulated and real-world manipulation tasks, including contact-rich peg-in-hole tasks. Using these evaluation tasks, we show that policies represented as Generative Motor Reflexes lead to robust manipulation skills also outside the explored trajectory distribution with less training needs compared to previous methods.
|
|
09:40-10:55, Paper WeAT1-24.3 | Add to My Program |
Incremental Learning of Spatial-Temporal Features in Human Motion Patterns with Mixture Model for Planning Motion of a Collaborative Robot in Assembly Lines |
Kanazawa, Akira | Tohoku University |
Kinugawa, Jun | Tohoku University |
Kosuge, Kazuhiro | Tohoku University |
Keywords: Learning and Adaptive Systems, Industrial Robots, Intelligent and Flexible Manufacturing
Abstract: Collaborative robots are expected to work in cooperation with humans to improve productivity and maintain the quality of products. In a previous study, we proposed an incremental learning system for adaptively scheduling the motion of a collaborative robot based on a worker's behavior. Although this system could model the worker's motion pattern precisely and robustly without collecting the worker's data in advance, it required two separate models for the worker's spatial and temporal features and was not designed with generalization in mind. In this paper, we extend the previous incremental learning system by integrating the spatial and temporal models using a mixture model. In addition, we introduce a new incremental learning algorithm that improves the generalization capability of the mixture model and avoids overfitting in situations where prior information is limited. We implement the proposed algorithm and evaluate the effectiveness of the proposed system through experiments with several workers and several assembly processes.
|
|
09:40-10:55, Paper WeAT1-24.4 | Add to My Program |
Learning Quickly to Plan Quickly Using Modular Meta-Learning |
Chitnis, Rohan | Massachusetts Institute of Technology |
Kaelbling, Leslie | MIT |
Lozano-Perez, Tomas | MIT |
Keywords: Learning and Adaptive Systems, Task Planning, Deep Learning in Robotics and Automation
Abstract: Multi-object manipulation problems in continuous state and action spaces can be solved by planners that search over sampled values for the continuous parameters of operators. The efficiency of these planners depends critically on the effectiveness of the samplers used, but effective sampling in turn depends on details of the robot, environment, and task. Our strategy is to learn functions called "specializers" that generate values for continuous operator parameters, given a state description and values for the discrete parameters. Rather than trying to learn a single specializer for each operator from large amounts of data on a single task, we take a modular meta-learning approach. We train on multiple tasks and learn a variety of specializers that, on a new task, can be quickly adapted using relatively little data -- thus, our system "learns quickly to plan quickly" using these specializers. We validate our approach experimentally in simulated 3D pick-and-place tasks with continuous state and action spaces. Visit http://tinyurl.com/chitnis-icra-19 for a supplementary video.
|
|
09:40-10:55, Paper WeAT1-24.5 | Add to My Program |
Deep Multi-Sensory Object Category Recognition Using Interactive Behavioral Exploration |
Tatiya, Gyan | Tufts University |
Sinapov, Jivko | Tufts University |
Keywords: Learning and Adaptive Systems, Object Detection, Segmentation and Categorization, Deep Learning in Robotics and Automation
Abstract: When identifying an object and its properties, humans use features from multiple sensory modalities produced when manipulating the object. Motivated by this cognitive process, we propose a deep learning methodology for object category recognition which uses visual, auditory, and haptic sensory data coupled with exploratory behaviors (e.g., grasping, lifting, pushing, etc.). In our method, as the robot performs an action on an object, it uses a Tensor-Train Gated Recurrent Unit network to process its visual data, and Convolutional Neural Networks to process haptic and auditory data. We propose a novel strategy to train a single neural network that inputs video, audio and haptic data, and demonstrate that its performance is better than separate neural networks for each sensory modality. The proposed method was evaluated on a dataset in which the robot explored 100 different objects, each belonging to one of 20 categories. While the visual information was the dominant modality for most categories, adding the additional haptic and auditory networks further improves the robot's category recognition accuracy. For some of the behaviors, our approach outperforms the previous published baseline for the dataset which used handcrafted features for each modality. We also show that a robot does not need the sensory data from the entire interaction, but instead can make a good prediction early on during behavior execution.
|
|
09:40-10:55, Paper WeAT1-24.6 | Add to My Program |
Force, Impedance, and Trajectory Learning for Contact Tooling and Haptic Identification (I) |
Li, Yanan | University of Sussex |
Ganesh, Gowrishankar | Centre National De La Recherche Scientifique (CNRS) |
Jarrassé, Nathanael | UMR7222, Centre National De La Recherche Scientifique (CNRS) |
Haddadin, Sami | Technical University of Munich |
Albu-Schäffer, Alin | DLR - German Aerospace Center |
Burdet, Etienne | Imperial College London |
Keywords: Learning and Adaptive Systems, Robust/Adaptive Control of Robotic Systems
Abstract: Humans can skilfully use tools and interact with the environment by adapting their movement trajectory, contact force, and impedance. Motivated by this human versatility, we develop here a robot controller that concurrently adapts feedforward force, impedance, and reference trajectory when interacting with an unknown environment. In particular, the robot's reference trajectory is adapted to limit the interaction force and maintain it at a desired level, while feedforward force and impedance adaptation compensates for the interaction with the environment. An analysis of the interaction dynamics using Lyapunov theory yields the conditions for convergence of the closed-loop interaction mediated by this controller. Simulations exhibit adaptive properties similar to human motor adaptation. The implementation of this controller for typical interaction tasks including drilling, cutting, and haptic exploration shows that it can outperform conventional controllers in contact tooling.
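A heavily simplified, illustrative sketch of concurrently adapting the three quantities the abstract names; the gains, the combined-error definition, and the forgetting term are assumptions for illustration, not the paper's adaptation laws:

```python
import numpy as np

def adaptation_step(u_ff, K, r, e, de, f_meas, f_des,
                    alpha=0.1, beta=0.1, gamma=0.01, eta=0.005):
    """e, de: position/velocity tracking errors; f_meas, f_des:
    measured and desired interaction forces."""
    eps = e + 0.5 * de                           # combined tracking error
    u_ff = u_ff + alpha * eps                    # feedforward force
    K = K + beta * np.outer(eps, e) - gamma * K  # impedance, with forgetting
    r = r - eta * (f_meas - f_des)               # reference trajectory
    return u_ff, K, r
```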
|
|
WeAT1-25 Interactive Session, 220 |
Add to My Program |
Learning and Manipulation II - 3.1.25 |
|
|
|
09:40-10:55, Paper WeAT1-25.1 | Add to My Program |
Discontinuity-Sensitive Optimal Control Learning by Mixture of Experts |
Tang, Gao | Duke University |
Hauser, Kris | Duke University |
Keywords: Deep Learning in Robotics and Automation, Optimization and Optimal Control
Abstract: This paper proposes a machine learning method to predict the solutions of related nonlinear optimal control problems given some parametric input, such as the initial state. The map between problem parameters to optimal solutions is called the problem-optimum map, and is often discontinuous due to nonconvexity, discrete homotopy classes, and control switching. This causes difficulties for traditional function approximators such as neural networks, which assume continuity of the underlying function. This paper proposes a mixture of experts (MoE) model composed of a classifier and several regressors, where each regressor is tuned to a particular continuous region. A novel training approach is proposed that trains classifier and regressors independently. MoE greatly outperforms standard neural networks, and achieves highly reliable trajectory prediction (over 99% accuracy) in several underactuated control problems.
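For context, a minimal mixture-of-experts sketch in the spirit described, a classifier routing each problem parameter to a regressor tuned to one continuous region; the off-the-shelf models and the given region labels are assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPRegressor

class MixtureOfExperts:
    def __init__(self, n_experts):
        self.gate = LogisticRegression(max_iter=1000)
        self.experts = [MLPRegressor(max_iter=2000)
                        for _ in range(n_experts)]

    def fit(self, X, y, region):
        """X: problem parameters; y: optimal solutions; region: which
        continuity region each sample belongs to (here assumed given;
        classifier and regressors are trained independently)."""
        self.gate.fit(X, region)
        for k, expert in enumerate(self.experts):
            expert.fit(X[region == k], y[region == k])
        return self

    def predict(self, X):
        ks = self.gate.predict(X)
        return np.array([self.experts[k].predict(x[None])[0]
                         for k, x in zip(ks, X)])
```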
|
|
09:40-10:55, Paper WeAT1-25.2 | Add to My Program |
Wormhole Learning |
Zanardi, Alessandro | ETH Zürich |
Zilly, Julian | ETH Zurich |
Aumiller, Andreas Jianhao | ETH Zürich |
Censi, Andrea | ETH Zürich & NuTonomy |
Frazzoli, Emilio | ETH Zürich |
Keywords: AI-Based Methods, Computer Vision for Automation
Abstract: Typically, enlarging the operating domain of an object detector requires more labeled training data. We describe a method called wormhole learning, which extends the operating domain without additional labeled data, using only temporary access to an auxiliary sensor with certain invariance properties. We describe the instantiation of this principle with a regular visible-light RGB camera as the main sensor and an infrared sensor as the temporary sensor. We start with a pre-trained RGB detector; then we train the infrared detector based on the RGB-inferred labels; then we re-train the RGB detector based on the infrared-inferred labels. After these two transfer-learning steps, the RGB detector has enlarged its operating domain by inheriting part of the infrared sensor's invariance to illumination; in particular, the RGB detector is now able to see much better at night. We analyze the wormhole learning phenomenon by bounding the possible gain in accuracy in terms of mutual information properties of the two sensors and the labels.
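A toy, runnable sketch of the two transfer-learning steps on synthetic features; logistic models stand in for detectors, and the feature dimensions and correlations are fabricated purely to show the data flow:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-ins: RGB features (8-d) and paired, illumination-invariant
# IR features (4-d, correlated with the first RGB dimensions).
day_rgb = rng.normal(size=(200, 8))
day_ir = day_rgb[:, :4] + 0.1 * rng.normal(size=(200, 4))
night_rgb = rng.normal(size=(100, 8))
night_ir = night_rgb[:, :4] + 0.1 * rng.normal(size=(100, 4))

# Pre-trained RGB detector, reliable only in its original (day) domain.
rgb_det = LogisticRegression().fit(day_rgb, (day_rgb[:, 0] > 0).astype(int))

# Step 1: RGB-inferred labels on paired day frames train the IR detector.
ir_det = LogisticRegression().fit(day_ir, rgb_det.predict(day_rgb))

# Step 2: IR-inferred labels on night frames retrain the RGB detector,
# enlarging its operating domain to low illumination.
rgb_det = LogisticRegression().fit(night_rgb, ir_det.predict(night_ir))
```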
|
|
09:40-10:55, Paper WeAT1-25.3 | Add to My Program |
Sharing the Load: Human-Robot Team Lifting Using Muscle Activity |
DelPreto, Joseph | Massachusetts Institute of Technology |
Rus, Daniela | MIT |
Keywords: Physical Human-Robot Interaction, Human-Centered Robotics, Cooperating Robots
Abstract: Seamless communication of desired motions and goals is essential for enabling effective physical human-robot collaboration. In such cases, muscle activity measured via surface electromyography (EMG) can provide insight into a person's intentions while minimally distracting from the task. The presented system uses two muscle signals to create a control framework for team lifting tasks in which a human and robot lift an object together. A continuous setpoint algorithm uses biceps activity to estimate changes in the user's hand height, and also allows the user to explicitly adjust the robot by stiffening or relaxing their arm. In addition to this pipeline, a neural network trained only on previous users classifies biceps and triceps activity to detect up or down gestures on a rolling basis; this enables finer control over the robot and expands the feasible workspace. The resulting system is evaluated by 10 untrained subjects performing a variety of team lifting and assembly tasks with rigid and flexible objects.
|
|
09:40-10:55, Paper WeAT1-25.4 | Add to My Program |
Position Control of Medical Cable-Driven Flexible Instruments by Combining Machine Learning and Kinematic Analysis |
Aleluia Porto, Rafael | University of Strasbourg |
Nageotte, Florent | University of Strasbourg |
Zanne, Philippe | University of Strasbourg |
de Mathelin, Michel | University of Strasbourg |
Keywords: Medical Robots and Systems, Flexible Robots, Model Learning for Control
Abstract: Non-linearities in cable transmissions are important limitations for the accurate control of flexible instruments used in medical endoscopic systems. Hysteresis effects greatly impact the accuracy of conventional kinematic models. This is especially critical for implementing automatic motions in flexible medical robotic systems. In this paper, we propose a method for improving open-loop accuracy of flexible instruments by implementing a Position Inverse Kinematic Model which is able to take into account hysteresis effects. In order to avoid complex physical modeling, the method relies on the off-line learning of the behavior of the instruments. Basic knowledge of the kinematic is also incorporated in the learning process in order to make it fast. The validity of the approach is demonstrated by the execution of 2D and 3D trajectories with the instruments of the STRAS medical robot. The accuracy is shown to be significantly improved with respect to other learning-based methods.
|
|
09:40-10:55, Paper WeAT1-25.5 | Add to My Program |
Online Learning for Proactive Obstacle Avoidance with Powered Transfemoral Prostheses |
Gordon, Max | North Carolina State University |
Thatte, Nitish | Carnegie Mellon University |
Geyer, Hartmut | Carnegie Mellon University |
Keywords: Prosthetics and Exoskeletons, Learning and Adaptive Systems, Collision Avoidance
Abstract: Avoiding obstacles poses a significant challenge for amputees using mechanically-passive transfemoral prosthetic limbs due to their lack of direct knee control. In contrast, powered prostheses can potentially improve obstacle avoidance via their ability to add energy to the system. In past work, researchers have proposed stumble recovery systems for powered prosthetic limbs that provide assistance in the event of a trip. However, these systems only aid recovery after an obstacle has disrupted the user's gait and do not proactively help the amputee avoid obstacles. To address this problem, we designed an adaptive system that learns online to use kinematic data from the prosthetic limb to detect the user's obstacle avoidance intent in early swing. When the system detects an obstacle, it alters the planned swing trajectory to help avoid trips. Additionally, the system uses a regression model to predict the required knee flexion angle for the trip response. We validated the system by comparing obstacle avoidance success rates with and without the obstacle avoidance system. For a non-amputee subject wearing the prosthesis through an adapter, the trip avoidance system improved the obstacle negotiation success rate from 37% to 89%, while an amputee subject improved his success rate from 35% to 71% when compared to utilizing minimum jerk trajectories for the knee and ankle joints.
|
|
09:40-10:55, Paper WeAT1-25.6 | Add to My Program |
Passive Dynamic Object Locomotion by Rocking and Walking Manipulation |
Nazir, Syed Abdullah | The Hong Kong University of Science and Technology |
Seo, Jungwon | The Hong Kong University of Science and Technology |
Keywords: Mobile Manipulation, Manipulation Planning
Abstract: This paper presents a novel robotic manipulation technique for transporting objects on the ground in a passive dynamic, nonprehensile manner. The object is manipulated to rock from side to side repeatedly; in the meantime, the force of gravity enables the object to roll along a zigzag path that is eventually heading forward. We call it rock-and-walk object locomotion. First, we examine the kinematics and dynamics of the rocking motion to understand how the states of the object evolve. We then discuss how to control the robot to connect individual rocking motions into a stable gait of the object. Our rock-and-walk object transportation technique is implemented using a conventional manipulator arm and a simple end-effector, interacting with the object in a nonprehensile manner in favor of the passive dynamics of the object. A set of experiments demonstrates successful object locomotion.
|
|
WeBT1 |
220 |
PODS: Wednesday Session II |
Interactive Session |
|
11:30-12:45, Subsession WeBT1-01, 220 | |
Marine Robotics VI - 3.2.01 Interactive Session, 6 papers |
|
11:30-12:45, Subsession WeBT1-02, 220 | |
Human Robot Communication - 3.2.02 Interactive Session, 6 papers |
|
11:30-12:45, Subsession WeBT1-03, 220 | |
Cooperative and Distributed Robot Systems I - 3.2.03 Interactive Session, 6 papers |
|
11:30-12:45, Subsession WeBT1-04, 220 | |
Cognitive HRI - 3.2.04 Interactive Session, 5 papers |
|
11:30-12:45, Subsession WeBT1-05, 220 | |
Calibration and Identification - 3.2.05 Interactive Session, 6 papers |
|
11:30-12:45, Subsession WeBT1-06, 220 | |
Semantic Scene Understanding II - 3.2.06 Interactive Session, 5 papers |
|
11:30-12:45, Subsession WeBT1-07, 220 | |
SLAM - Session VIII - 3.2.07 Interactive Session, 5 papers |
|
11:30-12:45, Subsession WeBT1-08, 220 | |
AI-Based Methods II - 3.2.08 Interactive Session, 5 papers |
|
11:30-12:45, Subsession WeBT1-09, 220 | |
Simulation and Animation - 3.2.09 Interactive Session, 5 papers |
|
11:30-12:45, Subsession WeBT1-10, 220 | |
Object Recognition & Segmentation IV - 3.2.10 Interactive Session, 6 papers |
|
11:30-12:45, Subsession WeBT1-11, 220 | |
Haptics and Manipulation - 3.2.11 Interactive Session, 6 papers |
|
11:30-12:45, Subsession WeBT1-12, 220 | |
Compliant Actuators I - 3.2.12 Interactive Session, 5 papers |
|
11:30-12:45, Subsession WeBT1-13, 220 | |
Soft Robots VI - 3.2.13 Interactive Session, 6 papers |
|
11:30-12:45, Subsession WeBT1-14, 220 | |
Legged Robots IV - 3.2.14 Interactive Session, 6 papers |
|
11:30-12:45, Subsession WeBT1-15, 220 | |
Robot Safety II - 3.2.15 Interactive Session, 6 papers |
|
11:30-12:45, Subsession WeBT1-16, 220 | |
Wheeled Robotics II - 3.2.16 Interactive Session, 5 papers |
|
11:30-12:45, Subsession WeBT1-17, 220 | |
Motion Planning - 3.2.17 Interactive Session, 6 papers |
|
11:30-12:45, Subsession WeBT1-18, 220 | |
Autonomous Vehicles II - 3.2.18 Interactive Session, 6 papers |
|
11:30-12:45, Subsession WeBT1-19, 220 | |
Manipulation IV - 3.2.19 Interactive Session, 6 papers |
|
11:30-12:45, Subsession WeBT1-20, 220 | |
Medical Computer Vision - 3.2.20 Interactive Session, 6 papers |
|
11:30-12:45, Subsession WeBT1-21, 220 | |
Active Perception - 3.2.21 Interactive Session, 6 papers |
|
11:30-12:45, Subsession WeBT1-22, 220 | |
Planning - 3.2.22 Interactive Session, 6 papers |
|
11:30-12:45, Subsession WeBT1-23, 220 | |
Vision-Based Navigation - 3.2.23 Interactive Session, 6 papers |
|
11:30-12:45, Subsession WeBT1-24, 220 | |
Medical Robotics VIII - 3.2.24 Interactive Session, 6 papers |
|
WeBT1-01 Interactive Session, 220 |
Add to My Program |
Marine Robotics VI - 3.2.01 |
|
|
|
11:30-12:45, Paper WeBT1-01.1 | Add to My Program |
Autonomous Latching System for Robotic Boats |
Mateos, Luis | MIT |
Wang, Wei | Massachusetts Institute of Technology |
Gheneti, Banti | Massachusetts Institute of Technology |
Duarte, Fábio | Massachusetts Institute of Technology |
Ratti, Carlo | Massachusetts Institute of Technology |
Rus, Daniela | MIT |
Keywords: Marine Robotics, Multi-Robot Systems, Field Robots
Abstract: Autonomous robotic boats are devised to transport people and goods, similar to self-driving cars. One attractive feature, especially applicable in water environments, is the ability to dynamically link and join multiple boats into one unit in order to form floating infrastructure such as bridges, markets, or concert stages, and then autonomously detach to perform individual tasks. In this paper we present a novel latching system that enables robotic boats to create dynamic united floating infrastructure while overcoming water disturbances. The proposed latching mechanism is based on a spherical joint (ball and socket) that allows rotation and free movement in two planes at the same time. In this configuration, the latching system is capable of securely and efficiently assembling/disassembling floating structures. The vision-based robot controller guides the self-driving robotic boats to latch with high accuracy, in the millimeter range. Moreover, in case a robotic boat fails to latch due to harsh weather, the autonomous latching system is capable of recomputing and repositioning to latch successfully. We present experimental results from latching and docking in indoor environments, as well as results from latching a pair of robotic boats outdoors in open water with calm and turbulent currents.
|
|
11:30-12:45, Paper WeBT1-01.2 | Add to My Program |
Streaming Scene Maps for Co-Robotic Exploration in Bandwidth Limited Environments |
Girdhar, Yogesh | Woods Hole Oceanographic Institution |
Cai, Levi | Massachusetts Institute of Technology |
Jamieson, Stewart | Massachusetts Institute of Technology |
McGuire, Nathan | Northeastern University |
Flaspohler, Genevieve | Massachusetts Institute of Technology |
Suman, Stefano | Woods Hole Oceanographic Institution |
Claus, Brian | Woods Hole Oceanographic Institution |
Keywords: Marine Robotics, Semantic Scene Understanding, Cognitive Human-Robot Interaction
Abstract: This paper proposes a bandwidth tunable technique for real-time probabilistic scene modeling and mapping to enable co-robotic exploration in communication constrained environments such as the deep sea. The parameters of the system enable the user to characterize the scene complexity represented by the map, which in turn determines the bandwidth requirements. The approach is demonstrated using an underwater robot that learns an unsupervised scene model of the environment and then uses this scene model to communicate the spatial distribution of various high-level semantic scene constructs to a human operator. Preliminary experiments in an artificially constructed tank environment as well as simulated missions over a 10 m × 10 m coral reef using real data show the tunability of the maps to different bandwidth constraints and science interests. To our knowledge this is the first paper to quantify how the free parameters of the unsupervised scene model impact both the scientific utility of and bandwidth required to communicate the resulting scene model.
|
|
11:30-12:45, Paper WeBT1-01.3 | Add to My Program |
UWStereoNet: Unsupervised Learning for Depth Estimation and Color Correction of Underwater Stereo Imagery |
Skinner, Katherine A. | University of Michigan |
Zhang, Junming | University of Michigan
Olson, Elizabeth | University of Michigan |
Johnson-Roberson, Matthew | University of Michigan |
Keywords: Marine Robotics, Deep Learning in Robotics and Automation, Computer Vision for Other Robotic Applications
Abstract: Stereo cameras are widely used for sensing and navigation of underwater robotic systems. They provide high resolution color views of a scene; the constrained camera geometry enables metrically accurate depth estimation; they are also relatively cost-effective. Traditional stereo vision algorithms rely on feature detection and matching to enable triangulation of points for estimating disparity. However, for underwater applications, the effects of underwater light propagation lead to image degradation, reducing image quality and contrast. This makes it challenging to detect and match features, especially from varying viewpoints. Recently, deep learning has shown success in end-to-end learning of dense disparity maps from stereo images. Many state-of-the-art methods are supervised and require ground truth depth, which is challenging to gather in subsea environments. Simultaneously, deep learning has also been applied to the problem of underwater image restoration. Again, it is difficult to gather real ground truth data for this problem. In this work, we present an unsupervised network that takes input raw underwater stereo imagery and outputs dense depth maps and color corrected imagery of underwater scenes. We leverage a model of the process of underwater image formation, image processing techniques, as well as geometric constraints inherent to the stereo vision problem to develop a modular network that outperforms existing methods.
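For intuition, the underwater image formation model such networks typically leverage is commonly written as a per-channel direct-attenuation term plus backscatter; below is a minimal numpy sketch of that standard model, not the authors' network, and the coefficients are illustrative assumptions rather than values from the paper:

```python
import numpy as np

def underwater_image(clear_rgb, depth_m, beta=(0.40, 0.12, 0.05), b_inf=(0.05, 0.35, 0.45)):
    """Simulate underwater degradation of a clear image (values in [0, 1]).

    Standard attenuation model: observed = direct + backscatter, where
      direct      = J * exp(-beta * d)
      backscatter = B_inf * (1 - exp(-beta * d))
    beta and b_inf are per-channel (R, G, B) coefficients; the values
    here are illustrative assumptions, not measured constants.
    """
    d = depth_m[..., None]                 # (H, W, 1) range per pixel
    t = np.exp(-np.asarray(beta) * d)      # per-channel transmission
    return clear_rgb * t + np.asarray(b_inf) * (1.0 - t)
```

Inverting this model, given an estimated depth map, is what yields color-corrected imagery.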
|
|
11:30-12:45, Paper WeBT1-01.4 | Add to My Program |
Design and Parameter Optimization of a 3-PSR Parallel Mechanism for Replicating Wave and Boat Motion |
Talke, Kurt | Spawar Systems Center Pacific |
Drotman, Dylan | University of California, San Diego |
Stroumtsos, Nicholas | Spawar Systems Center Pacific |
de Oliveira, Mauricio | University of California, San Diego |
Bewley, Thomas | Flow Control & Coordinated Robotics Labs |
Keywords: Mechanism Design, Marine Robotics, Field Robots
Abstract: We present a low-cost, three-degree-of-freedom (3-DOF) prismatic-spherical-revolute (PSR) parallel mechanism used as a testing platform for an unmanned aerial vehicle (UAV) tethered to an unmanned surface vehicle (USV). The mechanism has three actuated linear rails kinematically linked to a platform which replicates boat motion up to 2.5 m vertical heave (sea state 4, Douglas Sea Scale). A lookup table relating relative slider heights to platform roll and pitch was developed numerically leveraging geometric constraints. A design parameter study optimized the arm length, platform size, and ball joint mounting angle relative to the overall radius to maximize the workspace. For this design, a maximum roll and pitch range from -32° to 32° and -25° to 35°, respectively, is achievable. A prototype was manufactured to carry the tethered UAV winch payload. Experimental testing confirmed the workspace and demonstrated boat motion replication, validated using an inertial measurement unit (IMU).
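As a rough illustration of how relative slider heights map to platform orientation, the sketch below fits a rigid plane through three ball joints assumed to sit at 120° spacing on a circle. This is a simplified stand-in for the paper's numerically derived lookup table, with an assumed geometry and sign convention:

```python
import numpy as np

def platform_roll_pitch(h1, h2, h3, radius=0.5):
    """Approximate platform roll/pitch (rad) from three slider heights.

    Simplified stand-in for the paper's lookup table: the three joints
    are assumed to lie on a rigid plane at 120-degree spacing on a
    circle of the given radius (an assumed geometry).
    """
    ang = np.deg2rad([90.0, 210.0, 330.0])
    pts = np.stack([radius * np.cos(ang), radius * np.sin(ang),
                    np.array([h1, h2, h3])], axis=1)
    n = np.cross(pts[1] - pts[0], pts[2] - pts[0])  # plane normal
    n /= np.linalg.norm(n)
    if n[2] < 0:
        n = -n
    roll = np.arctan2(n[1], n[2])    # about x; sign convention assumed
    pitch = -np.arctan2(n[0], n[2])  # about y; sign convention assumed
    return roll, pitch
```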
|
|
11:30-12:45, Paper WeBT1-01.5 | Add to My Program |
Autonomous Navigation for Unmanned Underwater Vehicles: Real-Time Experiments Using Computer Vision |
Manzanilla, Adrián | Centro De Investigación Y De Estudios Avanzados Del Instituto Po |
Reyes Sanchez, Sergio | UMI LAFMIA, CINVESTAV
Garcia Rangel, Miguel Angel | CINVESTAV |
Mercado Ravell, Diego Alberto | Catedra CONACyT, CIMAT-Zacatecas |
Lozano, Rogelio | Université De Tech. De Compiègne |
Keywords: Marine Robotics, Autonomous Vehicle Navigation, Visual-Based Navigation
Abstract: This letter studies the problem of autonomous navigation for unmanned underwater vehicles, using computer vision for localization. Parallel tracking and mapping is employed to localize the vehicle with respect to a visual map, using a single camera, whereas an extended Kalman filter (EKF) is used to fuse the visual information with data from an inertial measurement unit, in order to recover the scale of the map and improve the pose estimation. A proportional integral derivative controller with compensation of the restoring forces is proposed to accomplish trajectory tracking, where a pressure sensor and a magnetometer provide feedback for depth and yaw control, respectively, while the remaining states are provided by the EKF. Real-time experiments are presented to validate the navigation strategy, using a commercial remotely operated vehicle (ROV), the BlueROV2, which was adapted to perform as an autonomous underwater vehicle with the help of the Robot Operating System (ROS).
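A minimal sketch of the controller structure described above: PID feedback on depth plus a constant term compensating the restoring (buoyancy/gravity) forces. The gains and compensation value are illustrative assumptions, not those used on the BlueROV2:

```python
class DepthPID:
    """PID depth controller with restoring-force compensation (a sketch).

    u = Kp*e + Ki*int(e) + Kd*de/dt + g_comp, where g_comp cancels the
    net buoyancy/gravity term. All numeric values are assumptions.
    """
    def __init__(self, kp=40.0, ki=2.0, kd=10.0, g_comp=5.0):
        self.kp, self.ki, self.kd, self.g_comp = kp, ki, kd, g_comp
        self.integral, self.prev_err = 0.0, None

    def update(self, depth_ref, depth_meas, dt):
        err = depth_ref - depth_meas
        self.integral += err * dt
        deriv = 0.0 if self.prev_err is None else (err - self.prev_err) / dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv + self.g_comp
```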
|
|
11:30-12:45, Paper WeBT1-01.6 | Add to My Program |
A Framework for On-Line Learning of Underwater Vehicles Dynamic Models |
Wehbe, Bilal | German Research Center for Artificial Intelligence |
Hildebrandt, Marc | DFKI RIC Bremen |
Kirchner, Frank | University of Bremen |
Keywords: Model Learning for Control, Marine Robotics, Learning and Adaptive Systems
Abstract: Learning the dynamics of robots from data can help achieve more accurate tracking controllers, or aid their navigation algorithms. However, when the actual dynamics of the robots change due to external conditions, on-line adaptation of their models is required to maintain high fidelity performance. In this work, a framework for on-line learning of robot dynamics is developed to adapt to such changes. The proposed framework employs an incremental support vector regression method to learn the model sequentially from data streams. In combination with the incremental learning, strategies for including and forgetting data are developed to obtain better generalization over the whole state space. The framework is tested in simulation and real experimental scenarios demonstrating its adaptation capabilities to changes in the robot's dynamics.
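To make the include/forget idea concrete, here is a window-limited sketch: old samples are forgotten as new ones stream in, and the model is re-estimated from the retained set. The paper uses a truly incremental SVR, so the batch refit below is only a stand-in:

```python
from collections import deque

import numpy as np
from sklearn.svm import SVR

class OnlineDynamicsModel:
    """Sketch of on-line dynamics model learning with a forgetting rule.

    A bounded buffer drops the oldest samples (forgetting), letting the
    model track changes in the vehicle's dynamics; the refit-per-sample
    below stands in for the paper's incremental SVR update.
    """
    def __init__(self, window=500):
        self.buf = deque(maxlen=window)   # forgetting: oldest samples drop out
        self.model = SVR(kernel="rbf", C=10.0)

    def observe(self, state_action, accel):
        self.buf.append((state_action, accel))
        X, y = map(np.asarray, zip(*self.buf))
        self.model.fit(X, y)              # re-estimate from the retained stream

    def predict(self, state_action):
        return self.model.predict(np.asarray(state_action).reshape(1, -1))[0]
```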
|
|
WeBT1-02 Interactive Session, 220 |
Add to My Program |
Human Robot Communication - 3.2.02 |
|
|
|
11:30-12:45, Paper WeBT1-02.1 | Add to My Program |
Incorporating End-To-End Speech Recognition Models for Sentiment Analysis |
Lakomkin, Egor | University of Hamburg |
Zamani, Mohammad Ali | University of Hamburg |
Weber, Cornelius | Knowledge Technology Group, University of Hamburg |
Magg, Sven | University of Hamburg |
Wermter, Stefan | University of Hamburg |
Keywords: Cognitive Human-Robot Interaction, Robot Audition
Abstract: Previous work on emotion recognition demonstrated a synergistic effect of combining several modalities, such as auditory, visual, and transcribed text, to estimate the affective state of a speaker. Among these, the linguistic modality is crucial for the evaluation of an expressed emotion. However, in practice, manually transcribed spoken text cannot be given as input to a system. We argue that using ground truth transcriptions during training and evaluation phases leads to a significant discrepancy in performance compared to real-world conditions, as the spoken text has to be recognized on the fly and can contain speech recognition mistakes. In this paper, we propose a method of integrating an automatic speech recognition (ASR) output with a character-level recurrent neural network for sentiment recognition. In addition, we conduct several experiments investigating sentiment recognition in human-robot interaction in a noisy, realistic scenario that is challenging for ASR systems. We quantify the improvement compared to using only the acoustic modality in sentiment recognition. We demonstrate the effectiveness of this approach on the Multimodal Corpus of Sentiment Intensity (MOSI) by achieving 73.6% accuracy in a binary sentiment classification task, exceeding previously reported results that use only acoustic input. In addition, we set a new state-of-the-art performance on the MOSI dataset (80.4% accuracy, 2% absolute improvement).
|
|
11:30-12:45, Paper WeBT1-02.2 | Add to My Program |
Improved Optical Flow for Gesture-Based Human Robot Interaction |
Chang, Jen-Yen | The University of Tokyo
Tejero-de-Pablos, Antonio | The University of Tokyo |
Harada, Tatsuya | The University of Tokyo |
Keywords: Computer Vision for Automation, Gesture, Posture and Facial Expressions, Social Human-Robot Interaction
Abstract: Gesture interaction is a natural way of communicating with a robot as an alternative to speech. Gesture recognition methods leverage optical flow in order to understand human motion. However, while accurate (i.e., traditional) optical flow estimation methods are costly in terms of runtime, fast (i.e., deep learning) estimation methods leave room for improvement in accuracy. In this paper, we present a pipeline for gesture-based human-robot interaction that uses a novel optical flow estimation method in order to achieve an improved speed-accuracy trade-off. Our optical flow estimation method introduces four improvements to previous deep learning-based methods: strong feature extractors, attention to contours, mid-way features, and a combination of these three. This results in a better understanding of motion and a finer representation of silhouettes. In order to evaluate our pipeline, we generated our own dataset, MIBURI, which contains gestures to command a house service robot. In our experiments, we show how our method improves not only optical flow estimation, but also gesture recognition, offering a speed-accuracy trade-off more realistic for practical robot applications.
|
|
11:30-12:45, Paper WeBT1-02.3 | Add to My Program |
Decentralization of Multiagent Policies by Learning What to Communicate |
Paulos, James | University of Pennsylvania |
Chen, Steven W | University of Pennsylvania |
Shishika, Daigo | University of Pennsylvania |
Kumar, Vijay | University of Pennsylvania |
Keywords: Multi-Robot Systems, Deep Learning in Robotics and Automation, Swarms
Abstract: Effective communication is required for teams of robots to solve sophisticated collaborative tasks. In practice it is typical for both the encoding and semantics of communication to be manually defined by an expert; this is true regardless of whether the behaviors themselves are bespoke, optimization based, or learned. We present an agent architecture and training methodology using neural networks to learn task-oriented communication semantics based on the example of a communication-unaware expert policy. A perimeter defense game illustrates the system's ability to handle dynamically changing numbers of agents and its graceful degradation in performance as communication constraints are tightened or the expert's observability assumptions are broken.
|
|
11:30-12:45, Paper WeBT1-02.4 | Add to My Program |
Acquisition of Word-Object Associations from Human-Robot and Human-Human Dialogues |
Sadeghi, Sepideh | Tufts University |
Oosterveld, Brad | Tufts University |
Krause, Evan | Tufts University |
Scheutz, Matthias | Tufts University |
Keywords: AI-Based Methods, Learning from Demonstration, Social Human-Robot Interaction
Abstract: Past work on acquisition of word-object associations in robots has focused on either fast instruction-based methods which accept highly constrained input or gradual cross-situational learning methods, but not a mixture of both. In this paper, we present an integrated robotic system which allows for a combination of these methods to contribute to the task of learning the labels of objects in AI agents. We demonstrate the expanded word learning capabilities of the resulting system and how learning from both human-human and human-robot dialogues can be achieved in one integrated system.
|
|
11:30-12:45, Paper WeBT1-02.5 | Add to My Program |
Robot Object Referencing through Situated Legible Projections |
Weng, Thomas | Carnegie Mellon University |
Perlmutter, Leah | University of Washington |
Nikolaidis, Stefanos | Carnegie Mellon University |
Srinivasa, Siddhartha | University of Washington |
Cakmak, Maya | University of Washington |
Keywords: Human-Centered Robotics, Physical Human-Robot Interaction, Virtual Reality and Interfaces
Abstract: The ability to reference objects in the environment is a key communication skill that robots need for complex, task-oriented human-robot collaborations. In this paper we explore the use of projections, which are a powerful communication channel for robot-to-human information transfer as they allow for situated, instantaneous, and parallelized visual referencing. We focus on the question of what makes a good projection for referencing a target object. To that end, we mathematically formulate legibility of projections intended to reference an object, and propose alternative arrow-object match functions for optimally computing the placement of an arrow to indicate a target object in a cluttered scene. We implement our approach on a PR2 robot with a head-mounted projector. Through an online (48 participants) and an in-person (12 participants) user study we validate the effectiveness of our approach, identify the types of scenes where projections may fail, and characterize the differences between alternative match functions.
|
|
11:30-12:45, Paper WeBT1-02.6 | Add to My Program |
Security-Aware Synthesis of Human-UAV Protocols |
Elfar, Mahmoud | Duke University |
Zhu, Haibei | Duke University |
Cummings, M. L. | Duke |
Pajic, Miroslav | Duke University |
Keywords: Human Factors and Human-in-the-Loop, Cognitive Human-Robot Interaction, Planning, Scheduling and Coordination
Abstract: In this work, we synthesize collaboration protocols for human-unmanned aerial vehicle (H-UAV) command and control systems, where the human operator aids in securing the UAV by intermittently performing geolocation tasks to confirm its reported location. We first present a stochastic game-based model for the system that accounts for both the operator and an adversary capable of launching stealthy false-data injection attacks, causing the UAV to deviate from its path. We also describe a synthesis challenge due to the UAV's hidden-information constraint. Next, we perform human experiments using a developed RESCHU-SA testbed to recognize the geolocation strategies that operators adopt. Furthermore, we deploy machine learning techniques on the collected experimental data to predict the correctness of a geolocation task at a given location based on its geographical features. By representing the model as a delayed-action game and formalizing the system objectives, we utilize off-the-shelf model checkers to synthesize protocols for the human-UAV coalition that satisfy these objectives. Finally, we demonstrate the usefulness of the H-UAV protocol synthesis through a case study where the protocols are experimentally analyzed and further evaluated by human operators.
|
|
WeBT1-03 Interactive Session, 220 |
Add to My Program |
Cooperative and Distributed Robot Systems I - 3.2.03 |
|
|
|
11:30-12:45, Paper WeBT1-03.1 | Add to My Program |
Underwater Communication Using Full-Body Gestures and Optimal Variable-Length Prefix Codes |
Koreitem, Karim | McGill University |
Li, Jimmy | McGill University |
Karp, Ian | McGill University |
Manderson, Travis | McGill University |
Dudek, Gregory | McGill University |
Keywords: Cooperating Robots, Multi-Robot Systems
Abstract: In this paper we consider inter-robot communication in the context of joint activities. In particular, we focus on convoying and passive communication for radio-denied environments by using whole-body gestures to provide cues regarding future actions. We develop a communication protocol whereby information described by codewords is transmitted by a series of actions executed by a swimming robot. These action sequences are chosen to optimize robustness and transmission duration given the observability and natural activity of the robot and the frequency of different messages. Our approach uses a convolutional network to make core observations of the pose of the robot being tracked, which is the robot sending messages. The observer robot then uses an adaptation of classical decoding methods to infer the message being transmitted. The system is trained and validated using simulated data, tested in the pool, and is targeted for deployment in the open ocean. Our decoder achieves 0.94 precision and 0.66 recall on real footage of robot gesture execution recorded in a swimming pool.
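For the coding side, the textbook construction of a frequency-optimal variable-length prefix code is Huffman coding; the sketch below illustrates only that part, whereas the paper's optimization additionally weights codewords by gesture duration and robustness:

```python
import heapq

def huffman_code(freqs):
    """Optimal variable-length prefix code for given message frequencies.

    Classic Huffman construction; the paper's codes also account for
    gesture duration and robustness, which this sketch does not.
    """
    heap = [(f, i, {m: ""}) for i, (m, f) in enumerate(sorted(freqs.items()))]
    heapq.heapify(heap)
    uid = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {m: "0" + w for m, w in c1.items()}
        merged.update({m: "1" + w for m, w in c2.items()})
        heapq.heappush(heap, (f1 + f2, uid, merged))
        uid += 1
    return heap[0][2]

print(huffman_code({"follow": 0.5, "surface": 0.3, "abort": 0.2}))
# -> {'follow': '0', 'abort': '10', 'surface': '11'}
```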
|
|
11:30-12:45, Paper WeBT1-03.2 | Add to My Program |
WISDOM: WIreless Sensing-Assisted Distributed Offline Mapping |
Adhivarahan, Charuvahan | University at Buffalo, State University of New York |
Dantu, Karthik | University at Buffalo
Keywords: Cooperating Robots, Multi-Robot Systems, Networked Robots
Abstract: Spatial sensing is a fundamental requirement for applications in robotics and augmented reality. In urban spaces such as malls, airports, and apartment buildings, it is quite challenging for a single robot to map the whole environment, so we employ a swarm of robots to perform mapping. One challenge with this approach is the mechanism to merge maps. In this work, we use wireless access points, which are ubiquitous in most urban spaces, to provide a coarse orientation between sub-maps, and use a custom ICP algorithm to refine this orientation to merge them. We demonstrate our approach with maps from a building on campus and evaluate it using two approaches. Our results show that, in the building we studied, we can achieve an Absolute Trajectory Error of 0.2 m and a Root Mean Square Error of 1.3 m in known landmark positions.
|
|
11:30-12:45, Paper WeBT1-03.3 | Add to My Program |
Learning Recursive Bayesian Nonparametric Modeling of Moving Targets Via Mobile Decentralized Sensors |
Liu, Chang | University of California, Berkeley |
Chen, Yucheng | Cornell University |
Gemerek, Jake | Cornell University |
Yang, Hengye | Cornell University |
Ferrari, Silvia | Cornell University |
Keywords: Distributed Robot Systems, Sensor Fusion, Learning and Adaptive Systems
Abstract: Bayesian nonparametric models, such as the Dirichlet Process Gaussian Process (DPGP), have been shown to be very effective at learning models of dynamic targets exclusively from data. Previous work on batch DPGP learning and inference, however, ceases to be efficient in multi-sensor applications that require decentralized measurements to be obtained sequentially over time. Batch processing, in this case, leads to redundant computations that may hinder online applicability. This paper develops a recursive approach for DPGP learning and inference in which a novel Dirichlet Process prior based on the Wasserstein metric is used for measuring the similarity between multiple Gaussian Processes (GPs). Combined with the GP recursive fusion law, the proposed recursive DPGP fusion approach enables efficient online data fusion. The problem of active sensing for recursive DPGP learning and inference is also investigated by uncertainty reduction via expected mutual information. Simulation and experimental results show that the proposed approach successfully learns the models of moving targets and outperforms existing benchmark methods.
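The Wasserstein similarity at the heart of such a DP prior has a closed form for Gaussians; shown here for finite-dimensional Gaussians as an illustration (the paper applies the idea to GPs):

```python
import numpy as np
from scipy.linalg import sqrtm

def wasserstein2_gaussians(m1, c1, m2, c2):
    """Squared 2-Wasserstein distance between N(m1, c1) and N(m2, c2).

    Closed form:
      W2^2 = ||m1 - m2||^2 + tr(c1 + c2 - 2 (c2^{1/2} c1 c2^{1/2})^{1/2})
    """
    s2 = sqrtm(c2)
    cross = sqrtm(s2 @ c1 @ s2)
    return float(np.sum((m1 - m2) ** 2) + np.trace(c1 + c2 - 2 * np.real(cross)))
```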
|
|
11:30-12:45, Paper WeBT1-03.4 | Add to My Program |
UAV/UGV Autonomous Cooperation: UAV Assists UGV to Climb a Cliff by Attaching a Tether |
Miki, Takahiro | University of Tokyo |
Khrapchenkov, Petr | The University of Tokyo |
Hori, Koichi | University of Tokyo |
Keywords: Cooperating Robots, Climbing Robots, Autonomous Vehicle Navigation
Abstract: This paper proposes a novel cooperative system for an Unmanned Aerial Vehicle (UAV) and an Unmanned Ground Vehicle (UGV) which utilizes the UAV not only as a flying sensor but also as a tether attachment device. The two robots are connected with a tether, allowing the UAV to anchor the tether to a structure located at the top of steep terrain that is impossible for a UGV to reach. This enhances the otherwise poor traversability of the UGV, not only by providing a wider range of scanning and mapping from the air, but also by allowing the UGV to climb steep terrain by winding the tether. In addition, we present an autonomous framework for collaborative navigation and tether attachment in an unknown environment. The UAV employs visual inertial navigation with 3D voxel mapping and obstacle avoidance planning. The UGV makes use of the voxel map and generates an elevation map to execute path planning based on a traversability analysis. Furthermore, we compare the pros and cons of possible methods for tether anchoring from multiple points of view. To increase the probability of successful anchoring, we evaluated the anchoring strategy with an experiment. Finally, the feasibility and capability of our proposed system were demonstrated by an autonomous mission experiment in the field with an obstacle and a cliff.
|
|
11:30-12:45, Paper WeBT1-03.5 | Add to My Program |
Distributed Motion Tomography for Reconstruction of Flow Fields |
Chang, Dongsik | University of Michigan |
Zhang, Fumin | Georgia Institute of Technology |
Sun, Jing | University of Michigan |
Keywords: Distributed Robot Systems, Multi-Robot Systems, Optimization and Optimal Control
Abstract: This paper considers a group of mobile sensing agents in a flow field and presents a distributed method for motion tomography (MT) that estimates the underlying flow field. MT formulates an underdetermined nonlinear system of equations as an inverse problem. Inspired by the Kaczmarz method which is an optimization approach for solving a linear system of equations, our previous work developed a nonlinear Kaczmarz method that solves the system of equations associated with MT. Considering distributed multi-agent systems for MT, this paper extends the nonlinear Kaczmarz method into a distributed framework. The distributed nonlinear Kaczmarz method is developed by formulating a constrained consensus problem that belongs to a class of projected consensus algorithms. To study the convergence and consensus for the method, its linear case is analyzed first and then its nonlinear case is discussed. The nonlinear case of the method is further validated through simulations by estimating a gyre flow field using mobile sensor networks with different numbers of neighboring agents. Resulting estimated flow fields are compared with a flow field estimated by its centralized counterpart.
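For reference, the classical (linear) Kaczmarz iteration that the paper's distributed nonlinear method generalizes projects the iterate onto one equation's hyperplane at a time:

```python
import numpy as np

def kaczmarz(A, b, x0=None, sweeps=100):
    """Classical Kaczmarz iteration for the linear system A x = b.

    Each step projects x onto the hyperplane of one equation; rows of A
    are assumed nonzero. The paper extends this projection idea to a
    nonlinear, distributed motion-tomography setting.
    """
    m, n = A.shape
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    for _ in range(sweeps):
        for i in range(m):
            a = A[i]
            x += (b[i] - a @ x) / (a @ a) * a
    return x
```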
|
|
11:30-12:45, Paper WeBT1-03.6 | Add to My Program |
Adaptive Sampling and Reduced Order Modeling of Dynamic Processes by Robot Teams |
Salam, Tahiya | University of Pennsylvania
Hsieh, M. Ani | University of Pennsylvania |
Keywords: Distributed Robot Systems, Environment Monitoring and Management, Sensor Networks
Abstract: This paper presents a strategy to enable a team of mobile robots to adaptively sample and track a dynamic process. We propose a distributed strategy, where robots collect sparse sensor measurements, create a reduced-order model of a spatio-temporal process, and use this model to estimate field values for areas without sensor measurements of the dynamic process. The robots then use these estimates of the field, or inferences about the process, to adapt the model and reconfigure their sensing locations. The key contributions of this work are two-fold: 1) leveraging the dynamics of the process of interest to determine where to sample and how to estimate the process, and 2) maintaining fully distributed models, sensor measurements, and estimates of the time-varying process. We illustrate the application of the proposed solution in simulation and compare it to centralized and global approaches. We then test our approach with physical marine robots sampling a process in a water tank.
|
|
WeBT1-04 Interactive Session, 220 |
Add to My Program |
Cognitive HRI - 3.2.04 |
|
|
|
11:30-12:45, Paper WeBT1-04.1 | Add to My Program |
Who Takes What: Using RGB-D Camera and Inertial Sensor for Unmanned Monitor |
Kao, Hsin-Wei | National Chiao Tung University |
Ke, Ting-Yuan | National Chiao Tung University |
Lin, Kate Ching-Ju | National Chiao Tung University |
Tseng, Yu-Chee | National Chiao Tung University |
Keywords: Cognitive Human-Robot Interaction, Calibration and Identification, Human Detection and Tracking
Abstract: Advanced Internet of Things (IoT) techniques have made human-environment interaction much easier. Existing solutions usually enable such interactions without knowing the identities of the action performers. However, identifying the users who are interacting with an environment is key to enabling personalized services. To provide such an add-on service, we propose WTW (who takes what), a system that identifies which user takes what object. Unlike traditional vision-based approaches, which are typically vulnerable to blockage, WTW combines the feature information of three types of data, i.e., images, skeletons, and IMU data, to enable reliable user-object matching and identification. By correlating the moving trajectory of a user monitored by inertial sensors with the movement of an object recorded in the video, WTW reliably identifies a user and matches him/her with the object on action. Our prototype evaluation shows that WTW achieves a recognition rate of over 90% even in a crowd. The system remains reliable even when users stand close to one another and take objects at roughly the same time.
|
|
11:30-12:45, Paper WeBT1-04.2 | Add to My Program |
Sound-Indicated Visual Object Detection for Robotic Exploration |
Wang, Feng | Tsinghua University |
Guo, Di | Tsinghua University |
Liu, Huaping | Tsinghua University |
Keywords: Cognitive Human-Robot Interaction, Robot Audition
Abstract: Robots are usually equipped with microphones and cameras to perceive and understand the physical world. Though visual object detection technology has achieved great success, the detection in other modalities remains unsolved. In this paper, we establish a novel robotic sound-indicated visual object detection framework, and develop a two-stream weakly-supervised deep learning architecture to connect the visual and audio modalities for localizing the sounding object. A dataset is constructed from the AudioSet to validate the proposed method and some promising applications are demonstrated on robotic platforms.
|
|
11:30-12:45, Paper WeBT1-04.3 | Add to My Program |
HG-DAgger: Interactive Imitation Learning with Human Experts |
Kelly, Michael | Stanford University |
Sidrane, Chelsea Rose | Stanford University |
Driggs-Campbell, Katherine Rose | Stanford University |
Kochenderfer, Mykel | Stanford University |
Keywords: Human-Centered Automation, Learning from Demonstration, Deep Learning in Robotics and Automation
Abstract: Imitation learning has proven to be useful for many real-world problems, but approaches such as behavioral cloning suffer from data mismatch and compounding error issues. One attempt to address these limitations is the DAGGER algorithm, which uses the state distribution induced by the novice to sample corrective actions from the expert. Such sampling schemes, however, require the expert to provide action labels without being fully in control of the system. This can decrease safety and, when using humans as experts, is likely to degrade the quality of the collected labels due to perceived actuator lag. In this work, we propose HG-DAGGER, a variant of DAGGER that is more suitable for interactive imitation learning from human experts in real-world systems. In addition to training a novice policy, HG-DAGGER also learns a safety threshold for a model-uncertainty-based risk metric that can be used to predict the performance of the fully trained novice in different regions of the state space. We evaluate our method on both a simulated and real-world autonomous driving task, and demonstrate improved performance over both DAGGER and behavioral cloning.
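A schematic of the HG-DAgger interaction loop, with every interface name below (expert_gate, env.step, novice.act/uncertainty/train) assumed for illustration: the human labels states only while actually in control, and the intervention states calibrate a risk threshold on the novice's model uncertainty:

```python
def hg_dagger(novice, expert_gate, env, rounds=10):
    """Schematic HG-DAgger loop; the environment/policy API is assumed.

    expert_gate(obs) returns an expert action when the human chooses to
    intervene and None otherwise, so labels are only collected while
    the expert is in control (unlike plain DAgger).
    """
    data, doubts = [], []
    for _ in range(rounds):
        obs, done = env.reset(), False
        while not done:
            expert_action = expert_gate(obs)
            if expert_action is not None:           # human takes over
                data.append((obs, expert_action))
                doubts.append(novice.uncertainty(obs))
                obs, done = env.step(expert_action)
            else:                                   # novice stays in control
                obs, done = env.step(novice.act(obs))
        novice.train(data)
    # Risk threshold calibrated from uncertainty at intervention states.
    return novice, (max(doubts) if doubts else None)
```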
|
|
11:30-12:45, Paper WeBT1-04.4 | Add to My Program |
Proximity Human-Robot Interaction Using Pointing Gestures and a Wrist-Mounted IMU |
Gromov, Boris | IDSIA |
Abbate, Gabriele | University of Milano-Bicocca |
Gambardella, Luca | USI-SUPSI |
Giusti, Alessandro | IDSIA Lugano, SUPSI |
Keywords: Cognitive Human-Robot Interaction, Localization, Gesture, Posture and Facial Expressions
Abstract: We present a system for interaction between co-located humans and mobile robots, which uses pointing gestures sensed by a wrist-mounted IMU. The operator begins by pointing, for a short time, at a moving robot. The system thus simultaneously determines: that the operator wants to interact; the robot they want to interact with; and the relative pose between the two. Then, the system can reconstruct pointed locations in the robot's own reference frame, and provide real-time feedback about them so that the user can adapt to misalignments. We discuss the challenges to be solved to implement such a system and propose practical solutions, including variants for fast flying robots and slow ground robots. We report different experiments with real robots and untrained users, validating the individual components and the system as a whole.
|
|
11:30-12:45, Paper WeBT1-04.5 | Add to My Program |
Bayesian Active Learning for Collaborative Task Specification Using Equivalence Regions |
Wilde, Nils | University of Waterloo |
Kulic, Dana | University of Waterloo |
Smith, Stephen L. | University of Waterloo |
Keywords: Cognitive Human-Robot Interaction, Motion and Path Planning, Learning and Adaptive Systems
Abstract: Specifying complex task behaviours while ensuring good robot performance may be difficult for untrained users. We study a framework for users to specify rules for acceptable behaviour in a shared environment such as industrial facilities. As non-expert users might have little intuition about how their specification impacts the robot's performance, we design a learning system that interacts with the user to find an optimal solution. Using active preference learning, we iteratively show alternative paths that the robot could take on an interface. From the user feedback ranking the alternatives, we learn about the weights that users place on each part of their specification. We extend the user model from our previous work to a discrete Bayesian learning model and introduce a greedy algorithm for proposing alternatives that operates on the notion of equivalence regions of user weights. We prove that with this algorithm the active learning process converges on the user-optimal path. In simulations on realistic industrial environments, we demonstrate the convergence and robustness of our approach.
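The discrete Bayesian update underlying such preference learning can be sketched as follows, under an assumed symmetric noise model on the user's answers (the paper's exact user model and equivalence-region machinery are richer than this):

```python
import numpy as np

def update_weight_posterior(weights, posterior, feat_a, feat_b, user_prefers_a, eps=0.05):
    """Discrete Bayesian update over K candidate user weight vectors.

    weights: (K, d) candidate weights; feat_a/feat_b: (d,) path features.
    Each candidate predicts the lower-cost path; the user's answer is
    assumed correct with probability 1 - eps (an assumed noise model).
    """
    cost_a = weights @ feat_a
    cost_b = weights @ feat_b
    agrees = (cost_a < cost_b) == user_prefers_a
    likelihood = np.where(agrees, 1.0 - eps, eps)
    posterior = posterior * likelihood
    return posterior / posterior.sum()
```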
|
|
WeBT1-05 Interactive Session, 220 |
Add to My Program |
Calibration and Identification - 3.2.05 |
|
|
|
11:30-12:45, Paper WeBT1-05.1 | Add to My Program |
Lidar Measurement Bias Estimation Via Return Waveform Modelling in a Context of 3D Mapping |
Laconte, Johann | Institut Pascal |
Deschênes, Simon-Pierre | Laval University |
Labussière, Mathieu | Institut Pascal |
Pomerleau, Francois | Laval University |
Keywords: Calibration and Identification, Range Sensing, Mapping
Abstract: In a context of 3D mapping, it is very important to obtain accurate measurements from sensors. In particular, Light Detection And Ranging (LIDAR) measurements are typically treated as corrupted by zero-mean Gaussian noise. We show that this assumption leads to predictable localisation drifts, especially when a bias related to measuring obstacles with high incidence angles is not taken into consideration. Moreover, we present a way to physically understand and model this bias, which generalizes to multiple sensors. Using an experimental setup, we measured the bias of the Sick LMS151, Velodyne HDL-32E, and Robosense RS-LiDAR-16 as a function of depth and incidence angle, and showed that the bias can reach 20 cm for high incidence angles. We then used our model to remove the bias from the measurements, leading to more accurate maps and a reduced localisation drift.
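Once a per-sensor bias model has been calibrated as a function of depth and incidence angle, applying it is a one-line correction. The quadratic placeholder model below is purely hypothetical, loosely matching only the ~20 cm scale reported at grazing angles:

```python
import numpy as np

def corrected_range(measured_range, incidence_angle, bias_model):
    """Remove an incidence-angle-dependent range bias before mapping.

    bias_model(range_m, angle_rad) -> bias in meters is assumed to come
    from a per-sensor calibration like the paper's.
    """
    return measured_range - bias_model(measured_range, incidence_angle)

# Hypothetical placeholder: bias grows quadratically with incidence angle,
# reaching ~0.2 m near grazing incidence.
toy_bias = lambda r, a: 0.2 * (np.clip(a, 0.0, np.pi / 2) / (np.pi / 2)) ** 2
print(corrected_range(10.0, np.deg2rad(80.0), toy_bias))
```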
|
|
11:30-12:45, Paper WeBT1-05.2 | Add to My Program |
An Extrinsic Calibration Tool for Radar, Camera and Lidar |
Domhof, Joris | Delft University of Technology |
Kooij, Julian | TU Delft |
Gavrila, Dariu | Daimler |
Keywords: Calibration and Identification, Sensor Fusion
Abstract: We present a novel open-source tool for extrinsic calibration of radar, camera and lidar. Unlike currently available offerings, our tool facilitates joint extrinsic calibration of all three sensing modalities on multiple measurements. Furthermore, our calibration target design extends existing work to obtain simultaneous measurements for all these modalities. We study how various factors of the calibration procedure affect the outcome on real multi-modal measurements of the target. Three different configurations of the optimization criterion are considered, namely using error terms for a minimal amount of sensor pairs, or using terms for all sensor pairs with additional loop closure constraints, or by adding terms for structure estimation in a probabilistic model. The experiments further evaluate how the number of calibration boards affects calibration performance, and robustness against different levels of zero-mean Gaussian noise. Our results show that all configurations achieve good results for lidar to camera errors and that fully connected pose estimation shows the best performance for lidar to radar errors when more than five board locations are used.
|
|
11:30-12:45, Paper WeBT1-05.3 | Add to My Program |
Unified Motion-Based Calibration of Mobile Multi-Sensor Platforms with Time Delay Estimation |
Della Corte, Bartolomeo | Sapienza University of Rome |
Andreasson, Henrik | Örebro University |
Stoyanov, Todor | Örebro University |
Grisetti, Giorgio | Sapienza University of Rome |
Keywords: Calibration and Identification
Abstract: The ability to maintain and continuously update geometric calibration parameters of a mobile platform is a key functionality for every robotic system. These parameters include the intrinsic kinematic parameters of the platform, the extrinsic parameters of the sensors mounted on it and their time delays. In this paper, we present a unified pipeline for motion-based calibration of mobile platforms equipped with multiple heterogeneous sensors. We formulate a unified optimization problem to concurrently estimate the platform kinematic parameters, the sensors' extrinsic parameters and their time delays. We analyze the influence of the trajectory followed by the robot on the accuracy of the estimate. Our framework automatically selects appropriate trajectories to maximize the information gathered and to obtain a more accurate parameter estimate. In combination with that, our pipeline observes the evolution of the parameters in long-term operation to detect possible changes in the parameter values. The experiments conducted on real data show smooth convergence along with the ability to detect changes in parameter values. We release an open-source version of our framework to the community.
|
|
11:30-12:45, Paper WeBT1-05.4 | Add to My Program |
Degenerate Motion Analysis for Aided INS with Online Spatial and Temporal Sensor Calibration |
Yang, Yulin | University of Delaware |
Geneva, Patrick | University of Delaware |
Eckenhoff, Kevin | University of Delaware |
Huang, Guoquan | University of Delaware |
Keywords: Calibration and Identification, SLAM, Localization
Abstract: In this paper we perform an in-depth observability analysis for both spatial and temporal calibration parameters of an aided inertial navigation system (INS) with global and/or local sensing modalities. In particular, we analytically show that both spatial and temporal calibration parameters are observable if the sensor platform undergoes random motion. More importantly, we identify four degenerate motion primitives that harm the calibration accuracy and thus should be avoided in reality whenever possible. Interestingly, we also prove that these degenerate motions would still hold even in the case where global pose measurements are available. Leveraging a particular multi-state constrained Kalman filter (MSCKF)-based vision-aided inertial navigation system (VINS) with online spatial and temporal calibration, we perform extensive Monte-Carlo simulations and real-world experiments with the identified degenerate motions to validate our analysis.
|
|
11:30-12:45, Paper WeBT1-05.5 | Add to My Program |
Compensation of Measurement Noise and Bias in Geometric Attitude Estimation |
Mitikiri, Yujendra | University of Florida |
Mohseni, Kamran | University of Florida at Gainesville |
Keywords: Calibration and Identification, Formal Methods in Robotics and Automation, Sensor Fusion
Abstract: A geometry-based analytic attitude estimation using a rate measurement and measurement of a single reference vector has been recently proposed. Because rigid body attitude estimation is a fundamentally nonlinear problem, the geometry-based method yields a better attitude estimate when compared to other methods. A critical source of residual error in the geometric solution is on account of the noise and bias in the vector and rate measurements. A methodical perturbation analysis of the attitude estimate is performed in this letter that reveals the effects of measurement noise and bias, and provides means to compensate for, or filter out, such errors. Application of the filter and compensation provides better attitude estimation than a standard Extended Kalman filter using an optimal Kalman gain. The geometric method is first verified in experiments, and then simulation results are provided that validate the improved performance of the geometric attitude and bias estimator.
|
|
11:30-12:45, Paper WeBT1-05.6 | Add to My Program |
Geometric Calibration of Continuum Robots: Joint Space and Equilibrium Shape Deviations (I) |
Wang, Long | Columbia University |
Simaan, Nabil | Vanderbilt University |
Keywords: Calibration and Identification, Kinematics, Medical Robots and Systems
Abstract: Currently, surgical continuum robots (CRs) are predominantly used as telemanipulators where modeling errors are overcome by the user. Such errors preclude their use for autonomous tasks. In this paper, we investigate the calibration of CRs with specific focus on capturing joint space errors due to homing offsets, assembly errors causing twist about the robot’s backbone, and uncertainty in the equilibrium bending shapes of segments of these robots. A kinematic framework focusing on multibackbone CRs is presented with emphasis on deriving calibration identification Jacobians. This framework captures the coupling between twist and the equilibrium shapes of a continuum segment as a function of its bending angle. To capture equilibrium shape variations as a function of bending, a homotopy of curves is defined and represented by respective modal coefficients. The estimation of the calibration parameters is cast as a nonlinear least-squares problem. The framework is validated by simulations and experimentally using a single-port access surgery robot. We believe this calibration framework will facilitate semiautomation of surgical tasks carried out by CRs.
|
|
WeBT1-06 Interactive Session, 220 |
Add to My Program |
Semantic Scene Understanding II - 3.2.06 |
|
|
|
11:30-12:45, Paper WeBT1-06.1 | Add to My Program |
Hierarchical Depthwise Graph Convolutional Neural Network for 3D Semantic Segmentation of Point Clouds |
Liang, Zhidong | Shanghai Jiao Tong University |
Yang, Ming | Shanghai Jiao Tong University |
Deng, Liuyuan | Shanghai Jiao Tong University |
Wang, Chunxiang | Shanghai Jiaotong University |
Wang, Bing | Shanghai Jiao Tong University |
Keywords: Semantic Scene Understanding, AI-Based Methods, RGB-D Perception
Abstract: This paper proposes a hierarchical depthwise graph convolutional neural network (HDGCN) for point cloud semantic segmentation. The main challenge for learning on point clouds is to capture local structures or relationships. Graph convolution has a strong ability to extract local shape information from neighbors. Inspired by depthwise convolution, we propose a depthwise graph convolution which requires less memory consumption compared with the previous graph convolution. While depthwise graph convolution aggregates features channel-wise, pointwise convolution is used to learn features across different channels. A customized block called DGConv is specially designed for local feature extraction based on depthwise graph convolution and pointwise convolution. The DGConv block can extract features from points and transfer features to neighbors while being invariant to different point orders. HDGCN is constructed from a series of DGConv blocks using a hierarchical structure which can extract both local and global features of point clouds. Experiments show that HDGCN achieves state-of-the-art performance on the indoor dataset S3DIS and the outdoor dataset Paris-Lille-3D.
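The memory saving of depthwise graph convolution comes from aggregating each channel independently before a pointwise (1x1) mixing step; below is a permutation-invariant numpy sketch of that split, where the shapes and the mean aggregator are assumptions rather than the paper's exact operator:

```python
import numpy as np

def dgconv(features, neighbors, w_depth, w_point):
    """Depthwise-then-pointwise graph convolution, a minimal sketch.

    features  : (N, C) point features
    neighbors : (N, K) integer indices of K neighbors per point
    w_depth   : (C,)   per-channel (depthwise) weights
    w_point   : (C, C_out) pointwise (1x1) weights mixing channels
    Mean aggregation over neighbors keeps the step order-invariant;
    touching each channel independently is where the memory saving
    over full graph convolution comes from.
    """
    gathered = features[neighbors]               # (N, K, C) neighbor features
    depthwise = gathered.mean(axis=1) * w_depth  # (N, C): channel-wise step
    return np.maximum(depthwise @ w_point, 0.0)  # pointwise mixing + ReLU
```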
|
|
11:30-12:45, Paper WeBT1-06.2 | Add to My Program |
Monocular Semantic Occupancy Grid Mapping with Convolutional Variational Encoder-Decoder Networks |
Lu, Chenyang | Eindhoven University of Technology |
van de Molengraft, Marinus Jacobus Gerardus | University of Technology Eindhoven |
Dubbelman, Gijs | Eindhoven University of Technology |
Keywords: Semantic Scene Understanding, Object Detection, Segmentation and Categorization, Computer Vision for Transportation
Abstract: In this work, we research and evaluate end-to-end learning of monocular semantic-metric occupancy grid mapping from weak binocular ground truth. The network learns to predict four classes, as well as a camera to bird's eye view mapping. At the core, it utilizes a variational encoder-decoder network that encodes the front-view visual information of the driving scene and subsequently decodes it into a 2-D top-view Cartesian coordinate system. The evaluations on Cityscapes show that the end-to-end learning of semantic-metric occupancy grids outperforms the deterministic mapping approach with flat-plane assumption by more than 12% mean IoU. Furthermore, we show that the variational sampling with a relatively small embedding vector brings robustness against vehicle dynamic perturbations, and generalizability for unseen KITTI data. Our network achieves real-time inference rates of approx. 35 Hz for an input image with a resolution of 256x512 pixels and an output map with 64x64 occupancy grid cells using a Titan V GPU.
|
|
11:30-12:45, Paper WeBT1-06.3 | Add to My Program |
Asynchronous Spatial Image Convolutions for Event Cameras |
Scheerlinck, Cedric | The Australian National University |
Barnes, Nick | National ICT Australia |
Mahony, Robert | Australian National University |
Keywords: Computer Vision for Other Robotic Applications, Visual Tracking, Computer Vision for Automation
Abstract: Spatial convolution is arguably the most fundamental of 2D image processing operations. Conventional spatial image convolution can only be applied to a conventional image, that is, an array of pixel values (or similar image representation) that are associated with a single instant in time. Event cameras have serial, asynchronous output with no natural notion of an image frame, and each event arrives with a different timestamp. In this paper, we propose a method to compute the convolution of a linear spatial kernel with the output of an event camera. The approach operates on the event stream output of the camera directly without synthesising pseudo-image frames as is common in the literature. The key idea is the introduction of an internal state that directly encodes the convolved image information, which is updated asynchronously as each event arrives from the camera. The state can be read off whenever, and as often as, required for use in higher-level vision algorithms for real-time robotic systems. We demonstrate the application of our method to corner detection, providing an implementation of a Harris corner-response "state" that can be used in real-time for feature detection and tracking on robotic systems.
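The internal-state idea can be sketched compactly: each event adds a polarity-signed copy of the kernel to the state patch around its pixel, so the convolved image is available at any instant. This minimal version is an illustration only and omits details such as temporal decay and border handling:

```python
import numpy as np

class EventConvolutionState:
    """Asynchronously maintained convolved-image state (a minimal sketch).

    Each event at pixel (x, y) with polarity p in {-1, +1} adds
    p * kernel to the state patch centred on (x, y); border events are
    skipped here for brevity. Odd-sized kernels are assumed.
    """
    def __init__(self, height, width, kernel):
        self.state = np.zeros((height, width))
        self.kernel = np.asarray(kernel, dtype=float)

    def on_event(self, x, y, polarity):
        kh, kw = self.kernel.shape
        r, c = kh // 2, kw // 2
        if r <= y < self.state.shape[0] - r and c <= x < self.state.shape[1] - c:
            self.state[y - r:y + r + 1, x - c:x + c + 1] += polarity * self.kernel

# Example: a Sobel-x kernel turns the state into an edge-response image.
sobel_x = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
conv = EventConvolutionState(180, 240, sobel_x)
conv.on_event(x=120, y=90, polarity=+1)
```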
|
|
11:30-12:45, Paper WeBT1-06.4 | Add to My Program |
Where Should I Walk? Predicting Terrain Properties from Images Via Self-Supervised Learning |
Wellhausen, Lorenz | ETH Zürich |
Dosovitskiy, Alexey | Intel |
Ranftl, Rene | Intel |
Walas, Krzysztof, Tadeusz | Poznan University of Technology |
Cadena Lerma, Cesar | ETH Zurich |
Hutter, Marco | ETH Zurich |
Keywords: Semantic Scene Understanding, Visual-Based Navigation, Visual Learning
Abstract: Legged robots have the potential to traverse diverse and rugged terrain. To find a safe and efficient navigation path and to carefully select individual footholds, it is useful to be able to predict properties of the terrain ahead of the robot. In this work, we propose a method to collect data from robot-terrain interaction and associate it to images. Using sparse data acquired in teleoperation experiments with a quadrupedal robot, we train a neural network to generate a dense prediction of the terrain properties in front of the robot. To generate training data, we project the foothold positions from the robot trajectory into on-board camera images. We then attach labels to these footholds by identifying the dominant features of the force-torque signal measured with sensorized feet. We show that data collected in this fashion can be used to train a convolutional network for terrain property prediction as well as weakly supervised semantic segmentation. Finally, we show that the predicted terrain properties can be used for autonomous navigation of the ANYmal quadruped robot.
|
|
11:30-12:45, Paper WeBT1-06.5 | Add to My Program |
Adapting Semantic Segmentation Models for Changes in Illumination and Camera Perspective |
Zhou, Wei | University of Sydney |
Zyner, Alex | The University of Sydney |
Worrall, Stewart | University of Sydney |
Nebot, Eduardo | Unversity of Sydney |
Keywords: Computer Vision for Transportation, Intelligent Transportation Systems
Abstract: Semantic segmentation using deep neural networks has been widely explored to generate high-level contextual information for autonomous vehicles. To acquire a complete 180° semantic understanding of the forward surroundings, we propose to stitch semantic images from multiple cameras with varying orientations. However, previously trained semantic segmentation models showed unacceptable performance after significant changes to the camera orientations and the lighting conditions. To avoid time-consuming hand labeling, we explore and evaluate the use of data augmentation techniques, specifically skew and gamma correction, from a practical real-world standpoint to extend the existing model and provide more robust performance. The experimental results presented show significant improvements under varying illumination and camera perspective changes. A comparison of the results from a high-performance network (PSPNet) and a real-time-capable network (ENet) is provided.
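Both augmentations are cheap to implement; below is a numpy sketch in which the parameter ranges are illustrative assumptions and the skew is implemented as a simple per-row shift:

```python
import numpy as np

def gamma_augment(img, gamma):
    """Gamma correction on an image in [0, 1]; simulates lighting changes."""
    return np.clip(img, 0.0, 1.0) ** gamma

def random_lighting_and_skew(img, rng):
    """Sketch of the two augmentations discussed: gamma for illumination
    and a horizontal shear (skew) for camera-orientation changes.
    Ranges are assumptions, not the paper's settings."""
    out = gamma_augment(img, rng.uniform(0.5, 2.0))
    shift = rng.uniform(-0.2, 0.2)   # shear factor
    for row in range(out.shape[0]):
        out[row] = np.roll(out[row], int(shift * row), axis=0)
    return out
```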
|
|
WeBT1-07 Interactive Session, 220 |
Add to My Program |
SLAM - Session VIII - 3.2.07 |
|
|
|
11:30-12:45, Paper WeBT1-07.1 | Add to My Program |
CELLO-3D: Estimating the Covariance of ICP in the Real World |
Landry, David | Laval University |
Pomerleau, Francois | Laval University |
Giguere, Philippe | Université Laval |
Keywords: SLAM, Range Sensing, Learning and Adaptive Systems
Abstract: The fusion of Iterative Closest Point (ICP) registrations in existing state estimation frameworks relies on an accurate estimation of their uncertainty. In this paper, we study the estimation of this uncertainty in the form of a covariance. First, we scrutinize the limitations of existing closed-form covariance estimation algorithms over 3D datasets. Then, we set out to estimate the covariance of ICP registrations through a data-driven approach, with over 5 100 000 registrations on 1020 pairs from real 3D point clouds. We assess our solution upon a wide spectrum of environments, ranging from structured to unstructured and indoor to outdoor. The capacity of our algorithm to predict covariances is accurately assessed, as well as the usefulness of these estimations for uncertainty estimation over trajectories. The proposed method estimates covariances better than existing closed-form solutions, and makes predictions that are consistent with observed trajectories.
|
|
11:30-12:45, Paper WeBT1-07.2 | Add to My Program |
Probabilistic Appearance-Based Place Recognition through Bag of Tracked Words |
Tsintotas, Konstantinos A. | Democritus University of Thrace |
Bampis, Loukas | Democritus University of Thrace |
Gasteratos, Antonios | Democritus University of Thrace |
Keywords: SLAM, Visual-Based Navigation, Recognition
Abstract: A key capability in robotics applications is to recognize whether the current environment observation corresponds to a previously visited location. Should the place be recognized by the robot, a Loop Closure Detection (LCD) has occurred. This letter presents a novel low-complexity LCD method based on representing the route by unique Visual Features (VFs). Each of these VFs, referred to as a “Tracked Word” (TW), is generated on-line through a tracking technique coupled with a guided-feature-detection mechanism and belongs to a group of successive images. During the robot’s navigation, new TWs are added to the database, forming a bag of tracked words. When querying the database for loop closures, the new local-feature descriptors are associated with the nearest neighboring TWs in the map, casting votes for the corresponding instances. The system relies on a probabilistic method to select the most suitable loop-closing pair, based on the number of votes each location polls. The proposed system depends solely on the appearance information of the scenes on the trajectory, without requiring any pre-training phase. The evaluation of the method is administered via a variety of tests on several community datasets, thus proving its capability of achieving high recall rates at perfect precision.
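The voting step can be sketched as follows, with database.nearest(descriptor) -> (tracked_word_id, location_id) an assumed interface; the paper replaces the fixed vote threshold used here with a probabilistic test:

```python
from collections import Counter

def query_loop_closure(query_descriptors, database, min_votes=8):
    """Sketch of the voting step: each query descriptor votes for the
    location of its nearest tracked word in the database.

    The nearest-neighbor interface and the fixed threshold are
    assumptions made for illustration.
    """
    votes = Counter(database.nearest(d)[1] for d in query_descriptors)
    if not votes:
        return None
    location, count = votes.most_common(1)[0]
    return location if count >= min_votes else None
```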
|
|
11:30-12:45, Paper WeBT1-07.3 | Add to My Program |
A White-Noise-On-Jerk Motion Prior for Continuous-Time Trajectory Estimation on SE(3) |
Tang, Tim Yuqing | University of Toronto |
Yoon, David Juny | University of Toronto |
Barfoot, Timothy | University of Toronto |
Keywords: SLAM
Abstract: Simultaneous trajectory estimation and mapping (STEAM) offers an efficient approach to continuous-time trajectory estimation, by representing the trajectory as a Gaussian process (GP). Previous formulations of the STEAM framework use a GP prior that assumes white-noise-on-acceleration, with the prior mean encouraging constant body-centric velocity. We show that such a prior cannot sufficiently represent trajectory sections with non-zero acceleration, resulting in a bias to the posterior estimates. This paper derives a novel motion prior that assumes white-noise-on-jerk, where the prior mean encourages constant body-centric acceleration. With the new prior, we formulate a variation of STEAM that estimates the pose, body-centric velocity, and body-centric acceleration. By evaluating across several datasets, we show that the new prior greatly outperforms the white-noise-on-acceleration prior in terms of solution accuracy.
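The contrast between the two priors is compact when written as stochastic differential equations; a sketch in generic notation (symbols assumed, not the paper's exact formulation):

```latex
% Previous STEAM prior: white noise on acceleration (constant-velocity mean)
\ddot{\mathbf{x}}(t) = \mathbf{w}(t)
% Proposed prior: white noise on jerk (constant-acceleration mean)
\dddot{\mathbf{x}}(t) = \mathbf{w}(t), \qquad
\mathbf{w}(t) \sim \mathcal{GP}\!\left(\mathbf{0},\; \mathbf{Q}_c\,\delta(t - t')\right)
```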
|
|
11:30-12:45, Paper WeBT1-07.4 | Add to My Program |
Low-Latency Visual SLAM with Appearance-Enhanced Local Map Building |
Zhao, Yipu | Georgia Institute of Technology |
Ye, Wenkai | Georgia Institute of Technology |
Vela, Patricio | Georgia Institute of Technology |
Keywords: SLAM
Abstract: A local map module is often implemented in modern VO/VSLAM systems to improve data association and pose estimation. Conventionally, the local map contents are determined by co-visibility. While co-visibility is cheap to establish, it utilizes the relatively weak temporal prior (i.e. seen before, likely to be seen now), therefore admitting more features into the local map than necessary. This paper describes an enhancement to co-visibility local map building by incorporating a strong appearance prior, which leads to a more compact local map and latency reduction in downstream data association. The appearance prior collected from the current image influences the local map contents: only the map features visually similar to the current measurements are potentially useful for data association. To that end, mapped features are indexed and queried with Multi-index Hashing (MIH). An online hash table selection algorithm is developed to further reduce the query overhead of MIH and the local map size. The proposed appearance-based local map building method is integrated into a state-of-the-art VO/VSLAM system. When evaluated on two public benchmarks, the size of the local map, as well as the latency of real-time pose tracking in VO/VSLAM, are significantly reduced. Meanwhile, the mean VO/VSLAM performance is preserved or improved.
|
|
11:30-12:45, Paper WeBT1-07.5 | Add to My Program |
Incremental Visual-Inertial 3D Mesh Generation with Structural Regularities |
Rosinol Vidal, Antoni | MIT |
Sattler, Torsten | Chalmers University of Technology |
Pollefeys, Marc | ETH Zurich |
Carlone, Luca | Massachusetts Institute of Technology |
Keywords: SLAM, Visual-Based Navigation, Sensor Fusion
Abstract: Visual-Inertial Odometry (VIO) algorithms typically rely on a point cloud representation of the scene that does not model the topology of the environment. A 3D mesh instead offers a richer, yet lightweight, model. Nevertheless, building a 3D mesh out of the sparse and noisy 3D landmarks triangulated by a VIO algorithm often results in a mesh that does not fit the real scene. In order to regularize the mesh, previous approaches decouple state estimation from the 3D mesh regularization step, and either limit the 3D mesh to the current frame [1], [2] or let the mesh grow indefinitely [3], [4]. We propose instead to tightly couple mesh regularization and state estimation by detecting and enforcing structural regularities in a novel factor-graph formulation. We also propose to incrementally build the mesh by restricting its extent to the time-horizon of the VIO optimization; the resulting 3D mesh covers a larger portion of the scene than a per-frame approach while its memory usage and computational complexity remain bounded. We show that our approach successfully regularizes the mesh, while improving localization accuracy, when structural regularities are present, and remains operational in scenes without regularities.
|
|
WeBT1-08 Interactive Session, 220 |
Add to My Program |
AI-Based Methods II - 3.2.08 |
|
|
|
11:30-12:45, Paper WeBT1-08.1 | Add to My Program |
Unsupervised Out-Of-Context Action Understanding |
Kataoka, Hirokatsu | National Institute of Advanced Industrial Science and Technology |
Satoh, Yutaka | AIST |
Keywords: Gesture, Posture and Facial Expressions, Surveillance Systems
Abstract: The paper presents an unsupervised out-of-context action (O2CA) paradigm that facilitates understanding by separately presenting human action and context within a video sequence. As a means of generating an unsupervised label, we comprehensively evaluate responses from action-based (ActionNet) and context-based (ContextNet) convolutional neural networks (CNNs). Additionally, we have created three synthetic databases based on the human action (UCF101, HMDB51) and motion capture (mocap) (SURREAL) datasets. We then conducted experimental comparisons between our approach and conventional approaches. We also compared our unsupervised learning method with supervised learning using an O2CA ground truth given by synthetic data. Our approach achieves F-scores of 96.8 on Synth-UCF, 96.8 on Synth-HMDB, and 89.0 on SURREAL-O2CA.
|
|
11:30-12:45, Paper WeBT1-08.2 | Add to My Program |
Air-To-Ground Surveillance Using Predictive Pursuit |
Dutta, Sourav | University at Albany |
Ekenna, Chinwe | University at Albany |
Keywords: Surveillance Systems, Localization, Motion and Path Planning
Abstract: This paper introduces a probabilistic prediction model with a novel variant of the Markov decision process to improve tracking time and location detection accuracy in an air-to-ground robot surveillance scenario. While most surveillance algorithms focus mainly on control of an unmanned aerial vehicle (UAV) and its camera for faster tracking of an unmanned ground vehicle (UGV), this paper proposes a way of minimizing detection and tracking time by applying a prediction model to the first observed path taken by the UGV. We present a pursuit algorithm that addresses the problem of target (UGV) localization by combining prediction of the planning algorithm used by the target with application of that same planning algorithm to predict future trajectories. Our results show high predictive accuracy based on the final position attained by the target and the location predicted by our model.
|
|
11:30-12:45, Paper WeBT1-08.3 | Add to My Program |
Online Planning for Target Object Search in Clutter under Partial Observability |
Xiao, Yuchen | Northeastern University |
Katt, Sammie | Northeastern |
ten Pas, Andreas | Northeastern University |
Chen, Shengjian | Tsinghua University |
Amato, Christopher | Northeastern University |
Keywords: AI-Based Methods, Task Planning, Mobile Manipulation
Abstract: The problem of finding and grasping a target object in a cluttered, uncertain environment, target object search, is a common and important problem in robotics. One key challenge is the uncertainty of locating and recognizing each object in a cluttered environment due to noisy perception and occlusions. Furthermore, the uncertainty in localization makes manipulation difficult and uncertain. To cope with these challenges, we formulate the target object search task as a partially observable Markov decision process (POMDP), enabling the robot to reason about perceptual and manipulation uncertainty while searching. To further address the manipulation difficulty, we propose Parameterized Action Partially Observable Monte-Carlo Planning (PA-POMCP), an algorithm that evaluates manipulation actions by taking into account the effect of the robot’s current belief on the success of the action execution. In addition, a novel run-time initial belief generator and a state value estimator are introduced in this paper to facilitate the PA-POMCP algorithm. Our experiments show that our methods solve the target object search task in settings where simpler methods either require more object movements or fail.
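A loose sketch of the belief-dependent action evaluation that PA-POMCP performs (the particle belief, the simulate() oracle, and the sample count are assumptions; the full algorithm embeds this inside Monte-Carlo tree search):

```python
import random

def success_prob(belief_particles, action, simulate, n=100):
    """Estimate an action's success probability under the current belief."""
    # simulate(state, action) -> 1 on success, 0 on failure (assumed oracle)
    wins = sum(simulate(random.choice(belief_particles), action) for _ in range(n))
    return wins / n
```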
|
|
11:30-12:45, Paper WeBT1-08.4 | Add to My Program |
Learning to Drive in a Day |
Kendall, Alex | University of Cambridge |
Hawke, Jeffrey | Wayve |
Janz, David | University of Cambridge |
Mazur, Przemysław | Wayve Technologies |
Reda, Daniele | Wayve |
Allen, John-Mark A. | Wayve Technologies |
Lam, Vinh-Dieu | Wayve Technologies |
Bewley, Alex | Wayve |
Shah, Amar | Wayve |
Keywords: AI-Based Methods, Deep Learning in Robotics and Automation, Computer Vision for Transportation
Abstract: We demonstrate the first application of deep reinforcement learning to autonomous driving. From randomly initialised parameters, our model is able to learn a policy for lane following in a handful of training episodes using a single monocular image as input. We provide a general and easy-to-obtain reward: the distance travelled by the vehicle without the safety driver taking control. We use a continuous, model-free deep reinforcement learning algorithm, with all exploration and optimisation performed on-vehicle. This demonstrates a new framework for autonomous driving which moves away from reliance on defined logical rules, mapping, and direct supervision. We discuss the challenges and opportunities to scale this approach to a broader range of autonomous driving tasks.
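The episode structure implied by this reward is simple to sketch; env, agent, and their methods below are placeholders, not Wayve's implementation:

```python
def run_episode(env, agent):
    obs, done, distance = env.reset(), False, 0.0
    while not done:
        action = agent.act(obs)                    # steering/speed from one monocular image
        obs_next, metres, done = env.step(action)  # done = safety driver takes control
        distance += metres                         # reward: distance before intervention
        agent.observe(obs, action, metres, obs_next, done)
        obs = obs_next
    agent.train()                                  # model-free update after each episode
    return distance
```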
|
|
11:30-12:45, Paper WeBT1-08.5 | Add to My Program |
Human-Robot Collaborative Site Inspection under Resource Constraints (I) |
Cai, Hong | University of California, Santa Barbara |
Mostofi, Yasamin | University of California Santa Barbara |
Keywords: AI-Based Methods, Human Factors and Human-in-the-Loop, Surveillance Systems
Abstract: This paper is on human-robot collaborative site inspection and target classification. We consider the realistic case that human visual performance is imperfect (depending on the sensory input quality), and that the robot has constraints in communication with the human (e.g., limited chances for query, poor channel quality). The robot has limited onboard motion and communication energy and operates in realistic channel environments experiencing path loss, shadowing, and multipath. We then show how to co-optimize motion, sensing, and human queries. Given a probabilistic assessment of human visual performance and a probabilistic channel prediction, we pose the co-optimization as multiple-choice multidimensional knapsack problems. We then propose a linear program-based efficient near-optimal solution, mathematically characterize the optimality gap, showing it to be very small, as well as properties of the optimum solution. We comprehensively validate the proposed approach with extensive real human data (from Amazon MTurk) and real channel data (from downtown San Francisco), confirming that the proposed approach significantly outperforms benchmark methodologies.
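To make the optimization concrete, here is a toy LP relaxation of a multiple-choice knapsack with one resource dimension; all numbers are made up, and the paper's problem has multiple resource dimensions and a characterized optimality gap:

```python
import numpy as np
from scipy.optimize import linprog

values = np.array([3.0, 5.0, 2.0, 4.0])   # 2 decision groups x 2 options, flattened
energy = np.array([1.0, 2.0, 1.5, 1.0])   # resource use of each option (e.g., energy)
A_eq = np.array([[1, 1, 0, 0],            # pick exactly one option per group
                 [0, 0, 1, 1]])
res = linprog(-values, A_ub=energy[None, :], b_ub=[2.5],
              A_eq=A_eq, b_eq=[1, 1], bounds=[(0, 1)] * 4)
print(res.x)  # fractional near-optimal choice; round/repair for the final plan
```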
|
|
WeBT1-09 Interactive Session, 220 |
Add to My Program |
Simulation and Animation - 3.2.09 |
|
|
|
11:30-12:45, Paper WeBT1-09.1 | Add to My Program |
Generating Adversarial Driving Scenarios in High-Fidelity Simulators |
Abeysirigoonawardena, Yasasa | McGill University |
Shkurti, Florian | University of Toronto |
Dudek, Gregory | McGill University |
Keywords: Simulation and Animation, Learning and Adaptive Systems, Autonomous Vehicle Navigation
Abstract: In recent years self-driving vehicles have become more commonplace on public roads, with the promise of bringing safety and efficiency to modern transportation systems. Increasing the reliability of these vehicles on the road requires an extensive suite of software tests, ideally performed on high-fidelity simulators, where multiple vehicles and pedestrians interact with the self-driving vehicle. It is therefore of critical importance to ensure that self-driving software is assessed against a wide range of challenging simulated driving scenarios. The state of the art in driving scenario generation, as adopted by some of the front-runners of the self-driving car industry, still relies on human input. In this paper we propose to automate the process using Bayesian Optimization to generate adversarial self-driving scenarios that expose poorly-engineered or poorly-trained self-driving policies, and increase the risk of collision with simulated pedestrians and vehicles. We show that by incorporating the generated scenarios into the training set of the self-driving policy, and by fine-tuning the policy using vision-based imitation learning we obtain safer self-driving behavior.
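A hedged sketch of such a Bayesian-optimization loop (the scenario parameterization, the risk scores, and the UCB acquisition rule are illustrative; the paper's exact acquisition choice may differ):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def propose_scenario(X, y, candidates, kappa=2.0):
    """X: past scenario parameters, y: their measured collision risk."""
    gp = GaussianProcessRegressor().fit(X, y)
    mu, std = gp.predict(candidates, return_std=True)
    return candidates[np.argmax(mu + kappa * std)]  # most promising adversarial scenario
```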
|
|
11:30-12:45, Paper WeBT1-09.2 | Add to My Program |
Data-Driven Contact Clustering for Robot Simulation |
Kim, Myungsin | Seoul National University |
Yoon, Jaemin | Seoul National University |
Son, Dongwon | Seoul National University |
Lee, Dongjun | Seoul National University |
Keywords: Simulation and Animation
Abstract: We propose a novel data-driven, learning-based contact clustering framework (i.e., clustering of contact points and contact normals) for rigid-body robot simulation, with its accuracy established and verified against real experimental data. We first construct an experimental robotic setup with force/torque (F/T) sensors to collect real contact motion/force data. We then design a multilayer perceptron (MLP) network for the contact clustering based on the full motion and force/torque information of the contacts. We also adopt a constraint-based optimization contact solver to facilitate the learning of our MLP network during training. Our proposed framework is then verified against the experimental setup, compared with other techniques and simulators, and shown to significantly enhance the accuracy of contact simulation.
|
|
11:30-12:45, Paper WeBT1-09.3 | Add to My Program |
Pavilion: Bridging Photo-Realism and Robotics |
Jiang, Fan | Southern University of Science and Technology |
Hao, Qi | Southern University of Science and Technology |
Keywords: Simulation and Animation, Software, Middleware and Programming Environments, Big Data in Robotics and Automation
Abstract: Simulation environments play a central role in research on sensor fusion and robot control. This paper presents Pavilion, a novel open-source simulation system for robot perception and kinematic control based on the Unreal Engine and the Robot Operating System (ROS). The novelty of this work is threefold: (1) a shader-based method to generate optical flow ground-truth data with the Unreal Engine, (2) a toolset that removes binary incompatibility between ROS and the Unreal Engine to enable real-time interaction, and (3) a method to directly import Simulation Description Format (SDF) robot models into the Unreal Engine at runtime. Finally, a Gazebo-compatible real-time simulation system is developed to enable training and evaluation of a large number of sensor fusion, planning, decision, and control algorithms. The system can be deployed on both Linux and macOS with the latest version of ROS. Various experiments have been performed to validate the superior performance of the proposed simulation environment over other state-of-the-art simulators in terms of number of modalities, simulation accuracy, latency, and degree of integration difficulty.
|
|
11:30-12:45, Paper WeBT1-09.4 | Add to My Program |
A Real-Time Interactive Augmented Reality Depth Estimation Technique for Surgical Robotics |
Kalia, Megha | University of British Columbia |
Navab, Nassir | TU Munich |
Salcudean, Septimiu E. | University of British Columbia |
Keywords: Surgical Robotics: Planning, Virtual Reality and Interfaces, Simulation and Animation
Abstract: Augmented reality (AR) is a promising technology that lets the surgeon see a medical abnormality in the context of the patient, making anatomy of interest visible that otherwise would not be. It can result in better surgical precision and therefore potentially better surgical outcomes and faster recovery times. Despite these benefits, current AR systems suffer from two major challenges: first, incorrect depth perception and, second, the lack of suitable evaluation systems. In this paper we address both of these problems. We propose a color depth encoding (CDE) technique to estimate the distance between the tumor and the tissue surface using a surgical instrument, mapping that distance to the blue-red color spectrum. For evaluation of and interaction with our AR technique, we propose a virtual surgical instrument method using the CAD model of the instrument. Users were asked to reach the judged distance in the surgical field using the virtual tool. Realistic tool movement was simulated by collecting the forward kinematics joint encoder data. The results showed significant improvement in depth estimation, time for task completion, and confidence using our CDE technique with and without stereo versus the other two cases, namely Stereo-No CDE and No Stereo-No CDE.
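A minimal sketch of such a color depth encoding; the distance range and which end of the spectrum means "close" are assumptions, not the paper's calibrated values:

```python
import numpy as np

def cde_color(distance_mm, d_min=0.0, d_max=30.0):
    """Map tumor-to-surface distance onto a blue-red spectrum (RGB tuple)."""
    t = np.clip((distance_mm - d_min) / (d_max - d_min), 0.0, 1.0)
    return (1.0 - t, 0.0, t)  # assumed: red when close, blue when far
```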
|
|
11:30-12:45, Paper WeBT1-09.5 | Add to My Program |
Force-Based Heterogeneous Traffic Simulation for Autonomous Vehicle Testing |
Chao, Qianwen | Xidian University |
Jin, Xiaogang | State Key Lab of CAD&CG, Zhejiang University |
Huang, Hen-Wei | MIT |
Foong, Shaohui | Singapore University of Technology and Design |
Yu, Lap-Fai | University of Massachusetts Boston |
Yeung, Sai-Kit | Singapore University of Technology and Design |
Keywords: Intelligent Transportation Systems, Simulation and Animation, Autonomous Agents
Abstract: Recent failures in real-world self-driving tests have suggested a paradigm shift from directly learning on real-world roads to building a high-fidelity driving simulator as an alternative, effective, and safe tool to handle intricate traffic environments in urban areas. To date, traffic simulation can construct virtual urban environments with various weather conditions, day and night, and traffic control for autonomous vehicle testing. However, mutual interactions between autonomous vehicles and pedestrians are rarely modeled in existing simulators. Besides vehicles and pedestrians, the usage of personal mobility devices is increasing in congested cities as an alternative to the traditional transport system. A simulator that considers all potential road users in a realistic urban environment is urgently desired. In this work, we propose a novel, extensible, and microscopic method to build heterogeneous traffic simulation using the force-based concept. This force-based approach can accurately replicate the sophisticated behaviors of various road users and their interactions in a simple and unified way. Furthermore, we validate our approach through simulation experiments and comparisons to the popular simulators currently used for research and development of autonomous vehicles.
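As a generic illustration of the force-based concept (gains and force terms below are placeholders, not the paper's calibrated model), each road user can be advanced by integrating goal attraction and neighbor repulsion:

```python
import numpy as np

def step(pos, vel, goal, neighbours, dt=0.1, k_goal=1.0, k_rep=2.0):
    """One force-based update for a generic road user (2D numpy vectors)."""
    force = k_goal * (goal - pos)                       # attraction toward the goal
    for q in neighbours:                                # repulsion from nearby road users
        d = pos - q
        force += k_rep * d / (np.linalg.norm(d) ** 2 + 1e-6)
    vel = vel + dt * force
    return pos + dt * vel, vel
```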
|
|
WeBT1-10 Interactive Session, 220 |
Add to My Program |
Object Recognition & Segmentation IV - 3.2.10 |
|
|
|
11:30-12:45, Paper WeBT1-10.1 | Add to My Program |
Dual Refinement Network for Single-Shot Object Detection |
Chen, Xingyu | Institute of Automation, Chinese Academy of Science |
Yang, Xiyuan | University of Chinese Academy of Sciences |
Kong, Shihan | Institute of Automation, Chinese Academy of Sciences |
Wu, Zhengxing | Chinese Academy of Sciences |
Yu, Junzhi | Chinese Academy of Sciences |
Keywords: Object Detection, Segmentation and Categorization, Computer Vision for Other Robotic Applications, Visual Learning
Abstract: Object detection methods fall into two categories, i.e., two-stage and single-stage detectors. The former is characterized by high detection accuracy, while the latter usually has considerable inference speed. Hence, it is imperative to fuse their merits for a better accuracy vs. speed trade-off. To this end, we propose a dual refinement network (DRN) to boost the performance of the single-stage detector. Inheriting the advantages of two-stage approaches (i.e., two-step regression and accurate features for detection), anchor refinement and feature offset refinement are conducted in a novel anchor-offset detection, where the detection head is comprised of deformable convolutions. Moreover, to leverage contextual information for describing objects, we design a multi-deformable head, in which multiple detection paths with different receptive field sizes devote themselves to detecting objects. Extensive experiments on the PASCAL VOC and ImageNet VID datasets are conducted, and we achieve state-of-the-art detection performance in terms of both accuracy and inference speed.
|
|
11:30-12:45, Paper WeBT1-10.2 | Add to My Program |
Distant Vehicle Detection Using Radar and Vision |
Chadwick, Simon | University of Oxford |
Maddern, Will | Nuro |
Newman, Paul | Oxford University |
Keywords: Object Detection, Segmentation and Categorization, Intelligent Transportation Systems, Computer Vision for Transportation
Abstract: For autonomous vehicles to operate successfully, they need to be aware of other vehicles with sufficient time to make safe, stable plans. Given the possible closing speeds between two vehicles, this necessitates the ability to accurately detect distant vehicles. Many current image-based object detectors using convolutional neural networks exhibit excellent performance on existing datasets such as KITTI. However, the performance of these networks falls when detecting small (distant) objects. We demonstrate that incorporating radar data can boost performance in these difficult situations. We also introduce an efficient automated method for training data generation using cameras of different focal lengths.
|
|
11:30-12:45, Paper WeBT1-10.3 | Add to My Program |
Customizing Object Detectors for Indoor Robots |
Alabachi, Saif | University of Central Florida |
Sukthankar, Gita | University of Central Florida |
Sukthankar, Rahul | Intel Labs and Carnegie Mellon |
Keywords: Object Detection, Segmentation and Categorization, Deep Learning in Robotics and Automation, Aerial Systems: Perception and Autonomy
Abstract: Object detection models based on convolutional neural networks (CNNs) demonstrate impressive performance when trained on large-scale labeled datasets. While a generic object detector trained on such a dataset performs adequately in applications where the input data is similar to user photographs, the detector performs poorly on small objects, particularly ones with limited training data or imaged from uncommon viewpoints. Also, a specific room will have many objects that are missed by standard object detectors, frustrating a robot that continually operates in the same indoor environment. This paper describes a system for rapidly creating customized object detectors. Data is collected from a quadcopter that is teleoperated with an interactive interface. Once an object is selected, the quadcopter autonomously photographs the object from multiple viewpoints to create training data that is used by DUNet (Dense Upscaled Net), our specialized architecture for learning customized object detectors with small amounts of data. Our experiments compare the performance of learning models from scratch with DUNet vs. fine-tuning existing state-of-the-art object detectors with the training data.
|
|
11:30-12:45, Paper WeBT1-10.4 | Add to My Program |
Semi Supervised Deep Quick Instance Detection and Segmentation |
Kumar, Ashish | Indian Institute of Technology, Kanpur |
Behera, Laxmidhar | IIT Kanpur |
Keywords: Object Detection, Segmentation and Categorization, Computer Vision for Automation, Deep Learning in Robotics and Automation
Abstract: In this paper, we present a semi-supervised deep quick learning framework for instance detection and pixel-wise semantic segmentation of images in a dense clutter of items. The framework can quickly and incrementally learn novel items in an online manner by acquiring data in real time and generating the corresponding ground truths on its own. To learn various combinations of items, it can synthesize cluttered scenes in real time. The overall approach is based on the tutor-child analogy, in which a deep network (tutor) is pretrained for class-agnostic object detection and generates labeled data for another deep network (child). The child utilizes a customized convolutional neural network head for the purpose of quick learning. There are broadly four key components of the proposed framework: semi-supervised labeling, occlusion-aware clutter synthesis, a customized convolutional neural network head, and instance detection. The initial version of this framework was implemented during our participation in the Amazon Robotics Challenge (ARC), 2017, where our system was ranked 3rd, 4th, and 5th worldwide in the pick, stow-pick, and stow tasks, respectively. The proposed framework improves over ARC’17 with novel features such as instance detection and online learning.
|
|
11:30-12:45, Paper WeBT1-10.5 | Add to My Program |
Mixed Frame-/Event-Driven Fast Pedestrian Detection |
Jiang, Zhuangyi | Technical University of Munich |
Xia, Pengfei | Technical University of Munich |
Huang, Kai | Sun Yat-Sen University |
Stechele, Walter | Technical University of Munich |
Chen, Guang | Tongji University |
Bing, Zhenshan | Technical University of Munich |
Knoll, Alois | Tech. Univ. Muenchen TUM |
Keywords: Object Detection, Segmentation and Categorization, Intelligent Transportation Systems, Surveillance Systems
Abstract: Pedestrian detection has attracted enormous research attention in the field of Intelligent Transportation Systems (ITS) because pedestrians are the most vulnerable traffic participants. So far, almost all pedestrian detection solutions are based on conventional frame-based cameras. However, these cannot perform well in scenarios with poor lighting conditions and high-speed motion. In this work, a Dynamic and Active Pixel Sensor (DAVIS), whose two channels concurrently output conventional gray-scale frames and asynchronous low-latency temporal contrast events of light intensity, was first used to detect pedestrians in a traffic monitoring scenario. Data from the two camera channels were fed into Convolutional Neural Networks (CNNs), including three YOLOv3 models and three YOLO-tiny models, to gather bounding boxes of pedestrians with respective confidence maps. Furthermore, a confidence map fusion method combining the CNN-based detection results from both DAVIS channels was proposed to obtain higher accuracy. The experiments were conducted on a custom dataset collected on the TUM campus. Benefiting from the high speed, low latency, and wide dynamic range of the event channel, our method achieved a higher frame rate and lower latency than those using only a conventional camera. Additionally, it reached higher average precision by using the fusion approach.
|
|
11:30-12:45, Paper WeBT1-10.6 | Add to My Program |
Real-Time Vehicle Detection from Short-Range Aerial Image with Compressed MobileNet |
He, Yuhang | Wuhan University, Hubei, China |
Pan, Ziyu | Sun Yat-Sen University |
Li, Lingxi | Indiana University-Purdue University Indianapolis |
Shan, Yunxiao | Sun Yat-Sen University |
Cao, Dongpu | University of Waterloo |
Chen, Long | Sun Yat-Sen University |
Keywords: Object Detection, Segmentation and Categorization, Deep Learning in Robotics and Automation, Computer Vision for Transportation
Abstract: Vehicle detection from short-range aerial images faces challenges including vehicle blocking, irrelevant object interference, motion blurring, and color variation, making it difficult to achieve high detection accuracy and real-time detection speed. In this paper, benefiting from recent developments in the MobileNet family of network engineering, we propose a compressed MobileNet which is not only internally resistant to the above challenges but also achieves the best detection accuracy/speed trade-off compared with the original MobileNet. In a nutshell, we reduce the number of bottleneck architectures during the feature map downsampling stage but add more bottlenecks during the feature map plateau stage; thus no extra FLOPs or parameters are involved, while reduced inference time and better accuracy are expected. We conduct experiments on our collected 5k short-range aerial images, containing seven vehicle categories: truck, car, bus, bicycle, motorcycle, crowded bicycles, and crowded motorcycles. Our proposed compressed MobileNet achieves 110 FPS (GPU), 31 FPS (CPU), and 15 FPS (mobile phone), 1.2 times faster and 2% more accurate (mAP) than the original MobileNet.
|
|
WeBT1-11 Interactive Session, 220 |
Add to My Program |
Haptics and Manipulation - 3.2.11 |
|
|
|
11:30-12:45, Paper WeBT1-11.1 | Add to My Program |
Guaranteed Active Constraints Enforcement on Point Cloud-Approximated Regions for Surgical Applications |
Kastritsi, Theodora | Aristotle University of Thessaloniki |
Papageorgiou, Dimitrios | Aristotle University of Thessaloniki |
Sarantopoulos, Iason | Aristotle University of Thessaloniki |
Stavridis, Sotiris | Aristotle University of Thessaloniki |
Doulgeri, Zoe | Aristotle University of Thessaloniki |
Rovithakis, George | Aristotel University of Thessaloniki |
Keywords: Surgical Robotics: Laparoscopy, Physical Human-Robot Interaction
Abstract: In this work, a passive physical human-robot interaction (pHRI) controller is proposed to intraoperatively ensure that sensitive tissues will not be damaged by the robot’s tool. The proposed scheme uses the point cloud of the restricted region’s surface as the constraint definition and artificial potential fields for constraint enforcement. The controller is proven to be passive with respect to the interaction force and to guarantee constraint satisfaction in all cases. The proposed methodology is experimentally validated by kinesthetic guidance of a KUKA LWR4+ robot’s end-effector driving a virtual slave KUKA in the vicinity of a 3D point cloud of a kidney and its adjacent vessels.
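For intuition, a classic artificial-potential-field repulsion from the nearest point of the constraint cloud looks as follows (gain and influence radius are assumptions; the paper's controller additionally proves passivity and guaranteed constraint satisfaction):

```python
import numpy as np

def repulsive_force(tool_pos, cloud, rho0=0.02, eta=1.0):
    """Repulsion from the closest point of an (N, 3) point cloud."""
    d_vec = tool_pos - cloud[np.argmin(np.linalg.norm(cloud - tool_pos, axis=1))]
    rho = np.linalg.norm(d_vec)
    if rho >= rho0:
        return np.zeros(3)                  # outside the influence region
    return eta * (1.0 / rho - 1.0 / rho0) * d_vec / rho ** 3  # classic APF gradient
```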
|
|
11:30-12:45, Paper WeBT1-11.2 | Add to My Program |
Designing an Accurate and Customizable Epidural Anaesthesia Haptic Simulator |
Sénac, Thibault | Ecole Centrale Lyon |
Lelevé, Arnaud | INSA De Lyon (Institut National Des Sciences Appliquees), Univer |
Moreau, Richard | INSA-Lyon |
Krahenbuhl, Laurent | Ecole Centrale Lyon |
Sigwalt, Florent | Hôpital De La Croix Rousse |
Bauer, Christian | Department of Anesthesia and Intensive Care, Hopital De La Croix |
Rouby, Quentin | INSA Lyon |
Keywords: Haptics and Haptic Interfaces, Force Control
Abstract: Epidural anesthesia, despite being a relatively common medical procedure, remains quite demanding in terms of skill, as it is performed mostly blind and is thus heavily reliant on haptic sensation. Although some training support solutions exist, anesthetists consider them mostly inefficient or impractical. A few attempts at creating a simulator for this particular procedure exist, but each one lacks one of the important requirements of the procedure. This article introduces a haptic simulator featuring a more complete and realistic simulation of the procedure than we could observe in existing simulators. The simulator is composed of a generic electrical haptic interface coupled with a pneumatic cylinder.
|
|
11:30-12:45, Paper WeBT1-11.3 | Add to My Program |
Sleeve Pneumatic Artificial Muscles for Antagonistically Actuated Joints |
Cullinan, Michael F. | Trinity College Dublin |
McGinn, Conor | Trinity College Dublin |
Kelly, Kevin | Trinity College Dublin |
Keywords: Hydraulic/Pneumatic Actuators, Compliant Joint/Mechanism, Physical Human-Robot Interaction
Abstract: Pneumatic artificial muscles (PAMs) have been researched for applications in powered exoskeletons, orthoses, and robotics. Their high force-to-mass ratio, low cost, and inherent compliance are particularly advantageous for systems requiring physical interaction with humans. Sleeve PAMs, which introduce an internal structure to the actuator, offer improved force capacity, contraction ratio, efficiency, and operating bandwidth. In this paper, sleeve PAMs are applied to a popular muscle configuration: a joint operated antagonistically by two muscles. It is shown that, depending on the joint configuration, the sleeve PAM can increase the range of joint rotation by 14% or the load capacity by over 50% relative to a comparable joint actuated with traditional PAMs. The stiffness of joints actuated with both PAM types is also studied, particularly in closed-system operation (where the mass of air in the PAMs is constant), where the reduced volume of the sleeve PAM significantly increases the observed stiffness. Finally, energy consumption is considered, showing substantial savings for joints actuated with sleeve PAMs.
|
|
11:30-12:45, Paper WeBT1-11.4 | Add to My Program |
Sensing Shear Forces During Food Manipulation: Resolving the Trade-Off between Range and Sensitivity |
Song, Hanjun | University of Washington |
Bhattacharjee, Tapomayukh | University of Washington |
Srinivasa, Siddhartha | University of Washington |
Keywords: Haptics and Haptic Interfaces
Abstract: Autonomous assistive feeding systems need to acquire deformable food items of varying physical characteristics to be able to feed users. However, bite acquisition of these deformable food items is challenging without force feedback of appropriate range and sensitivity. We developed custom solutions using two widely-used shear sensing fingertip tactile sensors and calibrated them to the range of forces needed for manipulating food items. We compared their performance with traditional force/torque sensors and showed the trade-off between the range and the sensitivity of the fingertip tactile sensors in detecting potential bite acquisition successes for food items with widely varying weights and compliance. We then developed a control policy, using which a robotic gripper equipped with the fingertip tactile sensors can autonomously regulate the sensing range and the sensitivity to be able to skewer food items of different compliance and detect their bite acquisition success attempts.
|
|
11:30-12:45, Paper WeBT1-11.5 | Add to My Program |
Benchmarking Resilience of Artificial Hands |
Negrello, Francesca | Istituto Italiano Di Tecnologia |
Catalano, Manuel Giuseppe | Istituto Italiano Di Tecnologia |
Garabini, Manolo | Università Di Pisa |
Grioli, Giorgio | Istituto Italiano Di Tecnologia |
Tsagarakis, Nikos | Istituto Italiano Di Tecnologia |
Bicchi, Antonio | Università Di Pisa |
Keywords: Performance Evaluation and Benchmarking, Grippers and Other End-Effectors
Abstract: The deployment of robotics in real-world scenarios, which may involve harsh and irregular physical interactions with the environment, such as those experienced by robots operating in a disaster scenario or by prosthetic devices, demands hardware which is physically resilient. The end-effectors, as the main media of interaction, are probably the parts at the highest risk. The capability of robotic hands to survive severe impacts is thus a necessity for the effective deployment of reliable robotic solutions in real-world tasks. Although this robustness capability has long been noted and discussed in the robotics community, the literature provides neither a systematic study nor a proposal of a standardized test or metric to evaluate hand resilience. In this work, inspired by the works of Charpy and Izod on the systematic definition of resilience and toughness of materials through impact tests, we consider extending the standard test to robot hands. We introduce a resilience evaluation framework, including a precisely defined experimental set-up and test procedure. As an example of application of the procedure, we apply it to experimentally characterize two robot hands with a similar conceptual architecture but different size and material. From these tests we obtain several insights, including the observation that the dominant factor in hand resilience is their compliance and actuation principle, and that the use, under certai
|
|
11:30-12:45, Paper WeBT1-11.6 | Add to My Program |
CHiMP: A Contact Based Hilbert Map Planner |
Uhde, Constantin | Technical University of Munich |
Dean-Leon, Emmanuel | Technischen Universitaet Muenchen |
Cheng, Gordon | Technical University of Munich |
Keywords: Force and Tactile Sensing, Motion and Path Planning, Reactive and Sensor-Based Planning
Abstract: This work presents a new contact-based 3D path planning approach for manipulators using robot skin. We make use of the Stochastic Functional Gradient Path Planner, extending it to the 3D case, and assess its usefulness in combination with multi-modal robot skin. Our proposed algorithm is verified on a 6 DOF robot arm that has been covered with multi-modal robot skin. The experimental platform is combined with a skin-based compliant controller, making the robot inherently reactive. We implement different state-of-the-art planners within our contact-based robot system to compare their performance under the same conditions. In this way, all the planners use the same skin compliant control during evaluation. Furthermore, we extend the stochastic planner with tactile-based explorative behavior to improve its performance, especially for unknown environments. We show that CHiMP is able to outperform state-of-the-art algorithms when working with skin-based sparse contact data.
|
|
WeBT1-12 Interactive Session, 220 |
Add to My Program |
Compliant Actuators I - 3.2.12 |
|
|
|
11:30-12:45, Paper WeBT1-12.1 | Add to My Program |
A Novel Reconfigurable Revolute Joint with Adjustable Stiffness |
Li, Zhongyi | Aalborg University |
Chen, Weihai | Beijing University of Aeronautics and Astronautics |
Bai, Shaoping | Aalborg University |
Keywords: Compliant Joint/Mechanism, Mechanism Design, Physical Human-Robot Interaction
Abstract: In this paper, a novel revolute joint of adjustable stiffness with reconfigurability (JASR) is presented. The JASR is designed with a zero-length base-link four-bar linkage and allows adjusting its stiffness to achieve soft- and hard-spring behaviour. The new joint has a compact and lightweight structure and can be integrated into robots and transmissions for different applications. Mathematical models are developed for the JASR, with which the influence of design parameters on stiffness performance is analyzed. A prototype of the JASR is constructed, and preliminary test results demonstrate the compliance properties of the new joint.
|
|
11:30-12:45, Paper WeBT1-12.2 | Add to My Program |
A Novel Force Sensor with Zero Stiffness at Contact Transition Based on Optical Line Generation |
Begey, Jérémy | University of Strasbourg |
Nierenberger, Mathieu | University of Strasbourg, ICube |
Pfeiffer, Pierre | University of Strasbourg |
Lecler, Sylvain | University of Strasbourg |
Renaud, Pierre | ICube AVR |
Keywords: Force and Tactile Sensing, Mechanism Design
Abstract: Robotization of medical acts often requires the evaluation of contacts between a robotic system and a patient, for safety or efficiency reasons. When contact occurs with a stiff environment, instabilities and vibrations can appear, and a passive compliance is therefore needed. In this paper, we propose to embed compliance in a force sensor and develop a novel force sensor with large compliance, i.e., zero stiffness at contact transition, to ease robot control. To obtain both a satisfying measurement range and low off-axis sensitivity, an optical measurement process is exploited, based on an optical line generated thanks to additive manufacturing. A compliant sensor body providing the desired stiffness profile is presented, and the specific optical measurement technique is developed. Finally, a prototype of the proposed force sensor is evaluated experimentally.
|
|
11:30-12:45, Paper WeBT1-12.3 | Add to My Program |
Hydraulically-Actuated Compliant Revolute Joint for Medical Robotic Systems Based on Multimaterial Additive Manufacturing |
Pfeil, Antoine | ICUBE - University of Strasbourg |
Siegfarth, Marius | Fraunhofer Institute for Manufacturing Engineering and Automatio |
Geiskopf, Francois | INSA De Strasbourg |
Pusch, Tim Philipp | Fraunhofer Institute for Manufacturing Engineering and Automatio |
Barbé, Laurent | University of Strasbourg, ICUBE CNRS |
Renaud, Pierre | ICube AVR |
Keywords: Hydraulic/Pneumatic Actuators, Compliant Joint/Mechanism, Mechanism Design
Abstract: In this paper, an active compliant revolute joint actuated by hydraulic energy is developed. The joint is made of polymer for integration in medical robotic systems, even in a challenging environment such as Magnetic Resonance Imaging (MRI). The use of multimaterial additive manufacturing allows us to develop two original aspects. First, a new seal design is proposed to build miniature hydraulic cylinders embedded in the active joint with a low level of friction. Second, a rack-and-pinion mechanism is integrated into a compliant revolute joint to obtain a high level of compactness. Design and experimental assessment of the hydraulic cylinder and the compliant joint with embedded rack-and-pinion are presented, as well as an illustration in the context of needle manipulation with passive teleoperation.
|
|
11:30-12:45, Paper WeBT1-12.4 | Add to My Program |
Model-Based On-Line Estimation of Time-Varying Nonlinear Joint Stiffness on an E-Series Universal Robots Manipulator |
Madsen, Emil | Aarhus University |
Rosenlund, Oluf Skov | Universal Robots A/S |
Brandt, David | Universal-Robots |
Zhang, Xuping | Aarhus University |
Keywords: Industrial Robots, Calibration and Identification, Flexible Robots
Abstract: Flexibility commonly exists in the joints of many industrial robots due to the elasticity of the lightweight strain-wave type transmissions being used. This leads to a dynamic time-varying displacement between the position of the drive actuator and that of the driven link. Furthermore, the joint flexibility changes with time due to the material slowly being worn off at the gear meshing. Knowing the stiffness of the robot joints is of great value, e.g. in the design of new model-based feedforward and feedback controllers, and for predictive maintenance in the case of gearing unit failure. In this paper, we address on-line estimation of robot joint stiffness using a recursive least squares strategy based on a parametric model taking into account the elastic torques' nonlinear dependency on transmission deformation. Robustness is achieved in the presence of measurement noise and in poor excitation conditions. The method can be easily extended to general classes of serial-link multi-degree-of-freedom robots. The estimation technique uses only feedback signals that are readily available on Universal Robots' e-Series manipulators. Experiments on the new UR5e manipulator demonstrate the effectiveness of the proposed method.
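A textbook recursive least squares update of the kind the abstract describes (the regressor phi built from transmission deformation, the forgetting factor, and the initialization are assumptions standing in for the paper's nonlinear elastic-torque model):

```python
import numpy as np

class RLS:
    def __init__(self, n, lam=0.995):
        self.theta, self.P, self.lam = np.zeros(n), 1e3 * np.eye(n), lam

    def update(self, phi, tau):
        """phi: regressor vector, tau: measured elastic torque."""
        K = self.P @ phi / (self.lam + phi @ self.P @ phi)
        self.theta += K * (tau - phi @ self.theta)
        self.P = (self.P - np.outer(K, phi @ self.P)) / self.lam
        return self.theta  # current stiffness-parameter estimate
```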
|
|
11:30-12:45, Paper WeBT1-12.5 | Add to My Program |
A Rolling Flexure Mechanism for Progressive Stiffness Actuators |
Malzahn, Jörn | Istituto Italiano Di Tecnologia |
Barrett, Eamon | (Fondazione) Istituto Italiano Di Tecnologia |
Tsagarakis, Nikos | Istituto Italiano Di Tecnologia |
Keywords: Compliant Joint/Mechanism, Mechanism Design, Force Control
Abstract: Linear Series Elastic Actuators exhibit a restricted design space. This inevitably leads to design trade-offs that translate into robot performance limitations, preventing robots from reaching human-comparable, soft yet powerful physical interaction performance. This work presents a novel fixed passive rolling flexure design principle enabling the realization of a wide range of progressive torque-deflection characteristics. The proposed principle displays low hysteresis and can be manufactured as single 2D components. The paper derives the analytic foundation for the rolling flexure principle, supported by numerical finite element analyses. The theory is validated by experimental results obtained on two laboratory prototypes.
|
|
WeBT1-13 Interactive Session, 220 |
Add to My Program |
Soft Robots VI - 3.2.13 |
|
|
|
11:30-12:45, Paper WeBT1-13.1 | Add to My Program |
Locomotion Dynamics of a Miniature Wave-Like Robot, Modeling and Experiments |
Drory, Lee-Hee | Ben Gurion University of the Negev |
Zarrouk, David | Ben Gurion University |
Keywords: Biologically-Inspired Robots, Mechanism Design, Underactuated Robots
Abstract: In a recent study, we developed a minimally actuated wave-like robot and analyzed its kinematics. In this paper, we present the dynamic locomotion analysis of a miniature version of this wave robot. We examine different crawling environments, determine under which conditions it can advance, and evaluate its propulsion force. We first developed two locomotion models to characterize the cases where the robot is crawling between two straight surfaces or over a single flat surface. We specify the conditions in which the robot will advance and the advance time ratio as a function of the friction forces and weight of the robot. Next, we developed highly flexible tube-like shapes that we molded from silicone rubber to experimentally test the forces acting on the robot inside these tubes. Finally, we designed a miniature model of the robot and experimentally validated its crawling conditions (see video).
|
|
11:30-12:45, Paper WeBT1-13.2 | Add to My Program |
Fabric Soft Poly-Limbs for Physical Assistance of Daily Living Tasks |
Pham, Huy Nguyen | Arizona State University |
Mohd, Imran Irfan Bin | Arizona State University |
Sparks, Curtis | Arizona State University |
Lopez Arellano, Francisco | Arizona State University |
Zhang, Wenlong | Arizona State University |
Polygerinos, Panagiotis | Arizona State University |
Keywords: Soft Material Robotics, Wearable Robots
Abstract: This paper presents the design and development of a highly articulated, continuum, wearable, fabric-based Soft Poly-Limb (fSPL). This fabric soft arm acts as an additional limb that provides the wearer with mobile manipulation assistance through soft actuators made with high-strength inflatable fabrics. In this work, a set of systematic design rules is presented for the creation of highly compliant soft robotic limbs through an understanding of the behavior of the fabric-based components as a function of input pressure. These design rules are generated by investigating a range of parameters through computational finite-element method (FEM) models focusing on the fSPL’s articulation capabilities and payload capacity in 3D space. The theoretical motion and payload outputs of the fSPL and its components are experimentally validated, and additional evaluations verify its capability to safely carry loads of 10.1x its body weight by wrapping around the object. Finally, we demonstrate how the fully collapsible fSPL can comfortably be stored in a soft waist belt and interact with the wearer through spatial mobility and preliminary pick-and-place control experiments.
|
|
11:30-12:45, Paper WeBT1-13.3 | Add to My Program |
Design of a Soft Ankle-Foot Orthosis Exosuit for Foot Drop Assistance |
Thalman, Carly | Arizona State University |
Hsu, Joshua | Arizona State University |
Snyder, Laura | Barrow Neurological Institute at Dignity Health St. Joseph’s Hos |
Polygerinos, Panagiotis | Arizona State University |
Keywords: Soft Material Robotics, Wearable Robots
Abstract: This paper presents the design of a soft ankle-foot orthosis (AFO) exosuit to aid natural gait restoration for individuals suffering from foot drop. The sock-like AFO comprises soft actuators made from fabric-based, thermally-bonded nylon and is designed to be worn over the user’s shoes. The system assists dorsiflexion during the swing phase of the gait cycle using a contracting soft actuator, and provides ankle joint proprioception during stance with a variable stiffness soft actuator. A computational model is developed using finite element analysis to optimize the performance characteristics of the fabric actuators prior to fabrication, maximize contraction, and minimize overall volume. The dorsiflexion actuator is able to achieve a linear tensile force of 197 N at 200 kPa. The variable stiffness actuator generates up to 1.2 Nm of torque at the same pressure. The computational model and soft AFO are experimentally validated with a healthy participant through kinematic and electromyography studies. When active, the AFO reduces the activity of the muscle responsible for ankle dorsiflexion during the swing phase by 13.3%.
|
|
11:30-12:45, Paper WeBT1-13.4 | Add to My Program |
A Depth Camera-Based Soft Fingertip Device for Contact Region Estimation and Perception-Action Coupling |
Huang, Isabella | UC Berkeley |
Liu, Jingjun | University of Wisconsin, Madison |
Bajcsy, Ruzena | Univ of California, Berkeley |
Keywords: Soft Material Robotics, Force and Tactile Sensing, Flexible Robots
Abstract: As the demand for robotic applications in unconstrained and dynamic environments rises, so does the benefit of advancing the state of the art in soft robotic technologies. However, the complex capabilities of soft robots elicited by their high-dimensional, non-linear characteristics simultaneously yield difficult challenges in control and sensing. Moreover, embedding tactile sensing capabilities in soft materials is often expensive and difficult to fabricate. In recent years, however, the invention of small-scale depth-sensing cameras introduced a promising channel for soft tactile sensor design. In this work, we propose a novel soft device inspired by the human fingertip that not only utilizes a small depth camera as the perception mechanism, but also possesses compliance-modulating capabilities. We demonstrate its ability to accurately estimate contact regions upon interaction with an external obstacle, and show that the estimation sensitivity can be modulated via internal fluid states. In addition, we determine an empirical model of the device's force-deformation characteristics under simplifying assumptions, and validate its performance with real-time force matching control experiments.
|
|
11:30-12:45, Paper WeBT1-13.5 | Add to My Program |
A Pipe-Climbing Soft Robot |
Singh, Gaurav | University of Illinois Urbana Champaign |
Patiballa, Sreekalyan | University of Illinois Urbana-Champaign |
Zhang, Xiaotian | University of Illinois at Urbana-Champaign |
Krishnan, Girish | University of Illinois Urbana Champaign |
Keywords: Soft Material Robotics, Climbing Robots, Grasping
Abstract: This paper presents the design and testing of a bioinspired soft pneumatic robot that can achieve locomotion along the outside of a cylinder. The robot uses soft pneumatic actuators called FREEs (Fiber Reinforced Elastomeric Enclosures), which can exhibit a wide range of deformation behavior upon pressurization. Being soft and compliant, the robot can grasp and move along cylinders of varying dimensions. Two different types of FREEs are used in the robot: (a) extending FREEs and (b) bending FREEs. These actuators are arranged such that the bending actuators grip the pipe while the extending actuators generate forward motion as well as bending for direction control. The modular design of the robot provides simplicity and ease of maintenance. The entire robot is made of flexible actuators and can withstand external impact with minimal to no damage. The maximum speed achieved is 4.2 mm/s for horizontal motion and 2.1 mm/s for vertical motion.
|
|
11:30-12:45, Paper WeBT1-13.6 | Add to My Program |
Bio-Inspired Terrestrial Motion of Magnetic Soft Millirobots |
Kalpathy Venkiteswaran, Venkatasubramanian | University of Twente |
Peña Samaniego, Luis Fernando | University of Twente |
Sikorski, Jakub | University of Twente |
Misra, Sarthak | University of Twente |
Keywords: Soft Material Robotics, Biomimetics, Biologically-Inspired Robots
Abstract: Magnetic soft robots have the combined advantages of contactless actuation, requiring no on-board power source, and having flexible bodies that can adapt to unstructured environments. In this study, four milli-scale soft robots are designed (Inchworm, Turtle, Quadruped and Millipede) and their actuation under external magnetic fields is investigated with the objective of reproducing multi-limbed motion patterns observed in nature. Magnetic properties are incorporated into a silicone polymer by mixing in ferromagnetic microparticles (PrFeB) before curing. The magnet-polymer composite is used to fabricate soft magnetic parts, with pre-determined magnetization profiles achieved using a 1 T field. The resulting soft robots are actuated under external magnetic fields of 10-35 mT which are controlled using an array of six electromagnetic coils. The achieved motion patterns are analyzed over five iterations and the motions are quantified in terms of body lengths traversed per actuation cycle and speed of displacement. The speed of the specimens is calculated to be in the range of 0.15-0.37 mm/s for the actuation field used here. The ability of the soft robots to traverse uneven terrain is also tested, with the Turtle and the Millipede demonstrating successful motion.
|
|
WeBT1-14 Interactive Session, 220 |
Add to My Program |
Legged Robots IV - 3.2.14 |
|
|
|
11:30-12:45, Paper WeBT1-14.1 | Add to My Program |
Generation of Stealth Walking Gait on Low-Friction Road Surface |
Asano, Fumihiko | Japan Advanced Institute of Science and Technology |
Keywords: Legged Robots, Motion Control, Underactuated Robots
Abstract: The author has investigated the method of stealth walking for generating adaptive walking gaits of underactuated walkers without control torques at the feet. This approach is also effective for achieving careful walking on a frictionless road surface by applying angular momentum constraint control (AMCC); the generated gait completes in one step while maintaining the horizontal ground reaction force at zero. The result is mathematically thorough, but not realistic, because it cannot tolerate any uncertainties in the system. This paper therefore discusses a more realistic, sliding-resistant situation: stealth walking on a low-friction road surface. First, we introduce a model of a planar underactuated rimless wheel and describe the equation of motion and the control input for AMCC. Second, we specify the linearized equation of motion with AMCC and derive the analytical solution of the stance-leg motion, which is used as a desired trajectory for the nonlinear model. Furthermore, we discuss the optimality of the upper-body control during the double-limb support phase from the point of view of sliding-resistant characteristics, through mathematical and numerical investigations.
|
|
11:30-12:45, Paper WeBT1-14.2 | Add to My Program |
Support Surface Estimation for Legged Robots |
Homberger, Timon | ETH Zurich |
Wellhausen, Lorenz | ETH Zürich |
Fankhauser, Péter | ETH Zurich |
Hutter, Marco | ETH Zurich |
Keywords: Legged Robots, Mapping, Robotics in Agriculture and Forestry
Abstract: The high agility of legged systems allows them to operate in rugged outdoor environments. In these situations, knowledge about the terrain geometry is key for foothold planning to enable safe locomotion. However, on penetrable or highly compliant terrain (e.g. grass) the visibility of the supporting ground surface is obstructed, i.e. it cannot directly be perceived by depth sensors. We present a method to estimate the underlying terrain topography by fusing haptic information about foot contact closure locations with exteroceptive sensing. To obtain a dense support surface estimate from sparsely sampled footholds we apply Gaussian process regression. Exteroceptive information is integrated into the support surface estimation procedure by estimating the height of the penetrable surface layer from discrete penetration depth measurements at the footholds. The method is designed such that it provides a continuous support surface estimate even if there is only partial exteroceptive information available due to shadowing effects. Field experiments with the quadrupedal robot ANYmal show how the robot can smoothly and safely navigate in dense vegetation.
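A minimal sketch of the regression step (kernel choice, length scale, and the toy foothold data are assumptions; the paper additionally fuses a penetration-depth estimate from exteroception):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

footholds_xy = np.array([[0.0, 0.0], [0.4, 0.1], [0.8, 0.0]])  # contact locations
footholds_z = np.array([0.02, 0.05, 0.03])                     # support heights at contact
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3)).fit(footholds_xy, footholds_z)
z_hat, z_std = gp.predict(np.array([[0.6, 0.05]]), return_std=True)  # dense estimate + uncertainty
```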
|
|
11:30-12:45, Paper WeBT1-14.3 | Add to My Program |
ALMA - Articulated Locomotion and Manipulation for a Torque-Controllable Robot |
Bellicoso, C. Dario | ETH Zurich |
Krämer, Koen | ETH Zurich |
Stäuble, Markus | ETH Zürich |
Sako, Dhionis | ETH Zurich -Robotic System Lab |
Jenelten, Fabian | ETH Zurich |
Bjelonic, Marko | ETH Zurich |
Hutter, Marco | ETH Zurich |
Keywords: Legged Robots, Mobile Manipulation, Optimization and Optimal Control
Abstract: The task of robotic mobile manipulation poses several scientific challenges that need to be addressed to execute complex manipulation tasks in unstructured environments, in which collaboration with humans might be required. Therefore, we present ALMA, a motion planning and control framework for a torque-controlled quadrupedal robot equipped with a six degrees of freedom robotic arm capable of performing dynamic locomotion while executing manipulation tasks. The online motion planning framework, together with a whole-body controller based on a hierarchical optimization algorithm, enables the system to walk, trot and pace while executing operational space end-effector control, reactive human-robot collaboration and torso posture optimization to increase the arm’s workspace. The torque control of the whole system enables the implementation of compliant behavior, allowing a user to safely interact with the robot. We verify our framework on the real robot by performing tasks such as opening a door and carrying a payload together with a human.
|
|
11:30-12:45, Paper WeBT1-14.4 | Add to My Program |
Real-Time Model Predictive Control for Versatile Dynamic Motions in Quadrupedal Robots |
Ding, Yanran | University of Illinois at Urbana-Champaign |
Pandala, Abhishek | University of Illinois at Urbana–Champaign |
Park, Hae-Won | University of Illinois at Urbana Champaign |
Keywords: Legged Robots, Optimization and Optimal Control, Underactuated Robots
Abstract: This paper presents a new Model Predictive Control (MPC) framework for controlling various dynamic movements of a quadrupedal robot. System dynamics are represented by linearizing single rigid body dynamics in three-dimensional (3D) space. Our formulation linearizes rotation matrices without resorting to parameterizations like Euler angles and quaternions, avoiding issues of singularity and the unwinding phenomenon, respectively. With a carefully chosen configuration error function, the MPC control law is transcribed into a Quadratic Program (QP) which can be solved efficiently in real time. Our formulation can stabilize a wide range of periodic quadrupedal gaits and acrobatic maneuvers. We show various simulation as well as experimental results to validate our control strategy. Experiments show that this framework with a custom QP solver can reach execution rates of 160 Hz on embedded platforms.
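A toy transcription of linear MPC into a QP, in the spirit of the formulation above; the dynamics, costs, and horizon are illustrative placeholders, not the paper's single-rigid-body model:

```python
import cvxpy as cp
import numpy as np

n, m, N = 4, 2, 10                                   # state dim, input dim, horizon
A = np.eye(n)
B = 0.1 * np.vstack([np.zeros((2, 2)), np.eye(2)])
x, u = cp.Variable((n, N + 1)), cp.Variable((m, N))
x0 = np.array([0.1, -0.05, 0.0, 0.0])
cost = sum(cp.sum_squares(x[:, k]) + 0.01 * cp.sum_squares(u[:, k]) for k in range(N))
constraints = [x[:, 0] == x0] + [x[:, k + 1] == A @ x[:, k] + B @ u[:, k] for k in range(N)]
cp.Problem(cp.Minimize(cost), constraints).solve()   # a QP, solvable at control rates
```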
|
|
11:30-12:45, Paper WeBT1-14.5 | Add to My Program |
Online Gait Transitions and Disturbance Recovery for Legged Robots Via the Feasible Impulse Set |
Boussema, Chiheb | Ecole Polytechnique Fédérale De Lausanne |
Powell, Matthew | Massachusetts Institute of Technology |
Bledt, Gerardo | Massachusetts Institute of Technology (MIT) |
Ijspeert, Auke | EPFL |
Wensing, Patrick M. | University of Notre Dame |
Kim, Sangbae | Massachusetts Institute of Technology |
Keywords: Legged Robots, Motion Control, Underactuated Robots
Abstract: Gaits in legged robots are often hand-tuned and time-based, either explicitly or through an internal clock, for instance in the form of central pattern generators. This strategy requires trial and error to identify leg timings, which may not be suitable in challenging terrains. In this paper, we introduce new concepts to quantify leg capabilities for online gait emergence and adaptation, without fixed timings or predefined foothold sequences. Specifically, we introduce the Feasible Impulse Set, a notion that extends aspects of the classical wrench cone to include a prediction horizon into the future. By considering the impulses that can be delivered by the legs, quantified notions of leg utility are proposed for coordinating adaptive lift-off and touch-down of stance legs. The proposed methods provide push recovery and emergent gait transitions with speed. These advances are validated in experiments with the MIT Cheetah 3 robot, where the framework is shown to automatically coordinate aperiodic behaviors on a partially-moving walkway.
|
|
11:30-12:45, Paper WeBT1-14.6 | Add to My Program |
Walking and Running with Passive Compliance: Lessons from Engineering a Live Demonstration of the ATRIAS Biped (I) |
Hubicki, Christian | Florida State University |
Abate, Andy | Agility Robotics |
Clary, Patrick | Oregon State University |
Rezazadeh, Siavash | University of Texas at Dallas |
Jones, Mikhail | Agility Robotics |
Peekema, Andrew | Oregon State University |
Van Why, Johnathan | Oregon State University |
Domres, Ryan | Oregon State University |
Wu, Albert | Carnegie Mellon University |
Martin, William | Carnegie Mellon University Robotics Institute |
Geyer, Hartmut | Carnegie Mellon University |
Hurst, Jonathan | Oregon State University |
Keywords: Humanoid and Bipedal Locomotion, Legged Robots, Compliant Joint/Mechanism
Abstract: Biological bipeds have long been thought to take advantage of compliance and passive dynamics to walk and run, but realizing robotic locomotion in this fashion has been difficult in practice. ATRIAS is a bipedal robot designed to take advantage of inherent stabilizing effects that emerge as a result of tuned mechanical compliance. We describe the mechanics of the biped and how our controller exploits the interplay between passive dynamics and actuation to achieve robust locomotion. We outline our development process for incremental design and testing of our controllers through rapid iteration. By show time at the DARPA Robotics Challenge, ATRIAS was able to walk with robustness to large human kicks, locomote in terrain from asphalt to grass to artificial turf, and traverse changes in surface height as large as 15 cm without planning or visual feedback. Further, ATRIAS can accelerate from rest, transition smoothly to an airborne running gait, and reach a top speed of 2.5 m/s (9 kph). This endeavor culminated in seven live shows of ATRIAS walking and running, with disturbances and without falling, in front of a live audience at the DARPA Robotics Challenge. We conclude by enumerating what we believe were the key lessons learned in the process of developing these capabilities.
|
|
WeBT1-15 Interactive Session, 220 |
Add to My Program |
Robot Safety II - 3.2.15 |
|
|
|
11:30-12:45, Paper WeBT1-15.1 | Add to My Program |
Scanning the Internet for ROS: A View of Security in Robotics Research |
DeMarinis, Nicholas | Brown University |
Tellex, Stefanie | Brown |
Kemerlis, Vasileios P. | Brown University |
Konidaris, George | Brown University |
Fonseca, Rodrigo | Brown University |
Keywords: Networked Robots, Robot Safety
Abstract: Security is particularly important in robotics, as robots can directly perceive and affect the physical world. We describe the results of a scan of the entire IPv4 address space of the Internet for instances of the Robot Operating System (ROS), a widely used robotics software platform. We identified a number of hosts supporting ROS that are exposed to the public Internet, thereby allowing anyone to access robotic sensors and actuators. As a proof of concept, and with the consent of the relevant researchers, we were able to read image sensor information from and actuate a physical robot present in a research lab at an American university. This paper gives an overview of our findings, including our methodology, the geographic distribution of publicly-accessible platforms, the sorts of sensor and actuator data that are available, and the different kinds of robots and sensors that our scan uncovered. Additionally, we offer recommendations on best practices to mitigate these security issues in the future.
|
|
11:30-12:45, Paper WeBT1-15.2 | Add to My Program |
Risk Averse Robust Adversarial Reinforcement Learning |
Pan, Xinlei | UC Berkeley |
Seita, Daniel | University of California, Berkeley |
Gao, Yang | UC Berkeley |
Canny, John F. | University of California, Berkeley |
Keywords: Robot Safety, Autonomous Vehicle Navigation, Computer Vision for Transportation
Abstract: Deep reinforcement learning has recently made significant progress in solving computer games and robotic control tasks. A known problem, though, is that policies overfit to the training environment and may not avoid rare, catastrophic events such as automotive accidents. A classical technique for improving the robustness of reinforcement learning algorithms is to train on a set of randomized environments, but this approach only guards against common situations. Recently, robust adversarial reinforcement learning (RARL) was developed, which allows efficient applications of random and systematic perturbations by a trained adversary. A limitation of RARL is that only the expected control objective is optimized; there is no explicit modeling or optimization of risk. Thus the agents do not consider the probability of catastrophic events (i.e., those inducing abnormally large negative reward), except through their effect on the expected objective. In this paper we introduce risk-averse robust adversarial reinforcement learning (RARARL), using a risk-averse protagonist and a risk-seeking adversary. We test our approach on a self-driving vehicle controller. We use an ensemble of policy networks to model risk as the variance of value functions. We show through experiments that a risk-averse agent is better equipped to handle a risk-seeking adversary, and experiences substantially fewer crashes compared to agents trained without an adversary.
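A minimal sketch of the ensemble-variance risk model described above (the q_ensemble interface is hypothetical: a list of callables mapping a state to a vector of per-action Q-values):

```python
import numpy as np

def risk_sensitive_action(q_ensemble, state, k, risk_seeking=False):
    """Pick a discrete action using an ensemble of Q-networks; risk is
    modeled as the across-ensemble variance of the Q-values, as the
    abstract describes. The protagonist penalizes variance (risk-averse),
    the adversary rewards it (risk-seeking)."""
    qs = np.stack([q(state) for q in q_ensemble])   # (n_models, n_actions)
    mean, var = qs.mean(axis=0), qs.var(axis=0)
    sign = 1.0 if risk_seeking else -1.0
    return int(np.argmax(mean + sign * k * var))

# a_protagonist = risk_sensitive_action(ensemble, s, k=1.0, risk_seeking=False)
# a_adversary   = risk_sensitive_action(ensemble, s, k=1.0, risk_seeking=True)
```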
|
|
11:30-12:45, Paper WeBT1-15.3 | Add to My Program |
Bounded Collision Force by the Sobolev Norm |
Haninger, Kevin | Fraunhofer IPK |
Surdilovic, Dragoljub | Fraunhofer IPK |
Keywords: Robot Safety, Compliance and Impedance Control, Compliant Joint/Mechanism
Abstract: A robot making contact with an environment or human presents potential safety risks, including excessive collision force. Experimental works have established the role of robot inertia, relative velocity, and interface stiffness in several collision characteristics, but analytical models for maximum collision force are limited to a simplified mass-spring robot model. This limits the study of control (force/torque, impedance, or admittance) and of robots with joint and end-effector compliance. Here, the Sobolev norm is adapted to be a system norm, giving rigorous bounds on the maximum force on a stiffness element in a general dynamic system and allowing the study of collision with more accurate models and feedback control. The Sobolev norm can be found through the H2 norm of a transformed system, allowing efficient computation, connection with existing control theory, and controller synthesis to minimize collision force. The Sobolev norm is validated, first experimentally with an admittance-controlled robot, then in simulation with a linear flexible-joint robot. It is then used to investigate the impact of control, joint flexibility and end-effector compliance on collision, and a trade-off between collision performance and environmental estimation uncertainty is shown.
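For readers unfamiliar with the H2 machinery, the sketch below computes the H2 norm of a stable LTI system via its controllability Gramian; the 1-DOF mass-spring-damper whose output is the contact-spring force is a made-up stand-in for the paper's transformed system, not its actual model:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def h2_norm(A, B, C):
    """H2 norm of a stable LTI system (A, B, C): sqrt(trace(C P C^T)),
    with P the controllability Gramian solving A P + P A^T + B B^T = 0."""
    P = solve_continuous_lyapunov(A, -B @ B.T)
    return float(np.sqrt(np.trace(C @ P @ C.T)))

# Toy 1-DOF example: mass m, damping b, contact stiffness ke; the output
# is the force in the contact spring (numbers are invented).
m, b, ke = 2.0, 10.0, 1e4
A = np.array([[0.0, 1.0], [-ke / m, -b / m]])
B = np.array([[0.0], [1.0 / m]])
C = np.array([[ke, 0.0]])  # output = spring force ke * x
print(h2_norm(A, B, C))
```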
|
|
11:30-12:45, Paper WeBT1-15.4 | Add to My Program |
Liability, Ethics, and Culture-Aware Behavior Specification Using Rulebooks |
Censi, Andrea | ETH Zürich & NuTonomy |
Slutsky, Konstantin | NuTonomy |
Wongpiromsarn, Tichakorn | NuTonomy |
Pendleton, Scott Drew | NuTonomy |
Fu, James Guo Ming | NuTonomy |
Yershov, Dmitry | NuTonomy |
Frazzoli, Emilio | ETH Zürich |
Keywords: Robot Safety, Formal Methods in Robotics and Automation, Motion and Path Planning
Abstract: The behavior of self-driving cars must be compatible with an enormous set of conflicting and ambiguous objectives, from law, from ethics, from the local culture, and so on. This paper describes a new way to conveniently define the desired behavior for autonomous agents, which we use on the self-driving cars developed at nuTonomy. We define a “rulebook” as a pre-ordered set of “rules”, each akin to a violation metric on the possible outcomes (“realizations”). The rules are partially ordered by priority. The semantics of a rulebook imposes a pre-order on the set of realizations. We study the compositional properties of the rulebooks, and we derive which operations we can allow on the rulebooks to preserve previously-introduced constraints. While we demonstrate the application of these techniques in the self-driving domain, the methods are domain-independent.
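A toy sketch of how a rulebook's priority order decides between two candidate outcomes. This restricts to a totally-ordered rulebook for clarity (the paper's pre-orders are more general), and the rules shown are invented examples:

```python
def violates_less(rulebook, realization_a, realization_b):
    """True if realization_a is strictly preferred to realization_b under
    a priority-sorted rulebook. Each rule is a callable returning a
    nonnegative violation score for a realization."""
    for rule in rulebook:                  # highest priority first
        va, vb = rule(realization_a), rule(realization_b)
        if va != vb:
            return va < vb                 # decided at this priority level
    return False                           # tied on every rule

# Hypothetical rules, in decreasing priority:
rulebook = [
    lambda r: r["collisions"],             # safety above all
    lambda r: r["lane_violation_m"],       # then traffic law
    lambda r: r["discomfort"],             # then local norms / comfort
]
a = {"collisions": 0, "lane_violation_m": 0.4, "discomfort": 2.1}
b = {"collisions": 0, "lane_violation_m": 0.0, "discomfort": 0.3}
print(violates_less(rulebook, b, a))       # True: b is preferred
```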
|
|
11:30-12:45, Paper WeBT1-15.5 | Add to My Program |
Early Failure Detection of Deep End-To-End Control Policy by Reinforcement Learning |
Lee, Keuntaek | Georgia Institute of Technology |
Saigol, Kamil | Georgia Institute of Technology |
Theodorou, Evangelos | Georgia Institute of Technology |
Keywords: Failure Detection and Recovery, Robot Safety, Learning from Demonstration
Abstract: We propose the use of Bayesian networks, which provide both a mean value and an uncertainty estimate as output, to enhance the safety of learned control policies under circumstances in which a test-time input differs significantly from the training set. Our algorithm combines reinforcement learning and end-to-end imitation learning to simultaneously learn a control policy as well as a threshold over the predictive uncertainty of the learned model, with no hand-tuning required. Corrective action, such as a return of control to the model predictive controller or human expert, is taken before task failure when the uncertainty threshold is exceeded. We validate our method on fully-observable and vision-based partially-observable systems, with cart-pole and autonomous driving simulations using deep convolutional Bayesian neural networks. We demonstrate that our method is robust to uncertainty resulting from varying system dynamics as well as from partial state observability.
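A minimal sketch of the uncertainty-thresholded fallback described above (interfaces hypothetical: policy_sample is one stochastic forward pass of the Bayesian network, sigma_max the learned threshold):

```python
import numpy as np

def act_with_failure_check(policy_sample, obs, sigma_max, fallback, n=32):
    """Run n stochastic forward passes to get a predictive mean and an
    uncertainty estimate; if the uncertainty exceeds the learned
    threshold, return control to the fallback (model predictive
    controller or human expert) before the task can fail."""
    samples = np.array([policy_sample(obs) for _ in range(n)])
    mean, std = samples.mean(), samples.std()
    return fallback(obs) if std > sigma_max else mean
```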
|
|
11:30-12:45, Paper WeBT1-15.6 | Add to My Program |
Bridging Hamilton-Jacobi Safety Analysis and Reinforcement Learning |
Fisac, Jaime F. | University of California, Berkeley |
Lugovoy, Neil | UC Berkeley |
Rubies Royo, Vicenc | UC Berkeley |
Ghosh, Shromona | University of California, Berkeley |
Tomlin, Claire | UC Berkeley |
Keywords: Robot Safety, Deep Learning in Robotics and Automation, Optimization and Optimal Control
Abstract: Safety analysis is a necessary component in the design and deployment of autonomous robotic systems. Techniques from robust optimal control theory, such as Hamilton-Jacobi reachability analysis, allow a rigorous formalization of safety as guaranteed constraint satisfaction. Unfortunately, the computational complexity of these tools for general dynamical systems scales poorly with state dimension, making existing tools impractical beyond small problems. Modern reinforcement learning methods have shown promising ability to find approximate yet proficient solutions to optimal control problems in complex and high-dimensional systems; however, their application has in practice been restricted to problems with an additive payoff over time, unsuitable for reasoning about safety. In recent work, we introduced a time-discounted modification of the problem of maximizing the minimum payoff over time, central to safety analysis, through a modified dynamic programming equation that induces a contraction mapping. Here, we show how a similar contraction mapping can render reinforcement learning techniques amenable to quantitative safety analysis as tools to approximate the safe set and optimal safety policy. We validate the correctness of our formulation by comparing safety results computed through Q-learning to analytic and numerical solutions, and demonstrate its scalability by learning safe sets and control policies for simulated systems of up to 18 state dimensions.
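In our reading of the formulation, the time-discounted safety backup amounts to a one-line change to tabular Q-learning, replacing the usual additive reward with the minimum payoff over time (l(s) is a signed safety margin, negative when constraints are violated):

```python
import numpy as np

def safety_q_update(Q, s, a, s_next, l, gamma, alpha):
    """One tabular Q-learning step for the time-discounted safety
    Bellman equation sketched above:
        target = (1 - gamma) * l(s) + gamma * min(l(s), max_a' Q(s', a'))
    The discount gamma < 1 induces the contraction mapping mentioned in
    the abstract. Q is a (n_states, n_actions) array; l maps a state to
    its safety margin."""
    target = (1 - gamma) * l(s) + gamma * min(l(s), np.max(Q[s_next]))
    Q[s, a] += alpha * (target - Q[s, a])
    return Q

# After convergence, the learned safe set is {s : max_a Q[s, a] > 0}.
```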
|
|
WeBT1-16 Interactive Session, 220 |
Add to My Program |
Wheeled Robotics II - 3.2.16 |
|
|
|
11:30-12:45, Paper WeBT1-16.1 | Add to My Program |
Trajectory Planning for a Tractor with Multiple Trailers in Extremely Narrow Environments: A Unified Approach |
Li, Bai | JD Inc |
Zhang, Youmin | Concordia University |
Acarman, Tankut | Computer Engineering Department, Galatasaray University |
Kong, Qi | JDR&D Center of Automated Driving, JD Inc |
Zhang, Yue | Boston University |
Keywords: Nonholonomic Motion Planning, Optimization and Optimal Control, Planning, Scheduling and Coordination
Abstract: Trajectory planning for a tractor-trailer vehicle is challenging because the vehicle kinematics consists of underactuated and nonholonomic constraints that are highly coupled. Prevalent sampling-based or search-based planners suitable for rigid-body vehicles are not capable of handling tractor-trailer vehicles. This work aims to deal with generic n-trailer cases in extremely narrow environments. To this end, an optimal control problem is formulated, which is beneficial in being accurate, straightforward, and unified. An adaptively homotopic warm-starting approach is proposed to facilitate the numerical solution process of the formulated optimal control problem. Compared with existing sequential warm-starting strategies, our proposal can adaptively define the subproblems so as to make the gaps between adjacent subproblems “pleasant” for the solver. The unification and efficiency of the proposed adaptively homotopic warm-starting approach have been investigated in several extremely narrow scenarios. Our planner finds solutions that other existing planners cannot. Online planning opportunities are briefly discussed as well.
|
|
11:30-12:45, Paper WeBT1-16.2 | Add to My Program |
A Friction-Based Kinematic Model for Skid-Steer Wheeled Mobile Robots |
Rabiee, Sadegh | University of Massachusetts Amherst |
Biswas, Joydeep | University of Massachusetts Amherst |
Keywords: Kinematics, Wheeled Robots, Contact Modeling
Abstract: Skid-steer drive systems are widely used in mobile robot platforms. Such systems are subject to significant slippage and skidding during normal operation, owing to the very nature of skid steering. The ability to predict and compensate for such slippage in the forward kinematics of these robots is of great importance and provides the means for accurate control and safe navigation. In this work, we propose a new kinematic model capable of slip prediction for skid-steer wheeled mobile robots (SSWMRs). The proposed model outperforms the state-of-the-art in terms of both translational and rotational prediction error on a dataset composed of more than 6 km of trajectories traversed by a skid-steer robot. We also publicly release our dataset to serve as a benchmark for system identification and model learning research for SSWMRs.
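For context, a common way to capture average slip in skid-steer forward kinematics is via effective instantaneous-centre-of-rotation (ICR) parameters, as in the sketch below; the paper derives its correction from a friction-based model instead, so this parametrization is only an illustration of the problem being solved:

```python
import numpy as np

def skid_steer_twist(v_left, v_right, y_icr_l, y_icr_r, x_icr):
    """Slip-aware forward kinematics for a skid-steer robot using
    empirically identified ICR parameters (hypothetical parametrization).
    y_icr_l / y_icr_r: effective lateral tread offsets, whose magnitudes
    grow with slip; x_icr: longitudinal ICR offset producing lateral creep.
    Returns the body twist (v_x, v_y, omega)."""
    omega = (v_right - v_left) / (y_icr_l - y_icr_r)  # yaw rate with slip
    v_x = v_left + omega * y_icr_l                    # forward speed
    v_y = omega * x_icr                               # lateral skid velocity
    return np.array([v_x, v_y, omega])

# With no slip (y_icr_l = +W/2, y_icr_r = -W/2, x_icr = 0 for track width W)
# this reduces to the ideal differential-drive model.
```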
|
|
11:30-12:45, Paper WeBT1-16.3 | Add to My Program |
Turning a Corner with a Dubins Car |
Koval, Alan | University of Minnesota Twin Cities |
Isler, Volkan | University of Minnesota |
Keywords: Nonholonomic Motion Planning, Nonholonomic Mechanisms and Systems, Motion and Path Planning
Abstract: We study the problem of computing shortest collision-free Dubins paths when turning a corner. We present a sufficient condition for a closed-form solution. Specifically, consider S as the set consisting of paths of the form RSRSR, RSRSL, LSRSR and LSRSL that pass through the interior corner, where sub-paths RSR, RSL, and LSR are elementary Dubins paths composed of segments which are either straight (S) or turning left (L) or right (R). We find the closed-form optimal path around a corner when S is nonempty. Our solution can be used in an efficient path planner, for example, when navigating corridors. It can also be used as a subroutine for planners such as RRTs.
|
|
11:30-12:45, Paper WeBT1-16.4 | Add to My Program |
Modeling and State Estimation of a Micro Ball-Balancing Robot Using a High Yaw-Rate Dynamic Model and an Extended Kalman Filter |
Sihite, Eric | University of California San Diego |
Yang, Daniel | University of California San Diego |
Bewley, Thomas | Flow Control & Coordinated Robotics Labs |
Keywords: Dynamics, Sensor Fusion, Wheeled Robots
Abstract: The state estimation and control of a ball-balancing robot under high yaw rate is a challenging problem due to its highly nonlinear 3D dynamics. The small size and low-cost components of our Micro Ball-Balancing Robot make the system inherently very noisy, which further increases the complexity of the problem. In order to drive the robot more aggressively, such as translating and spinning at the same time, a good state estimator which works well under high yaw rates is required. This paper presents the derivation of a high yaw-rate Ball-Balancing Robot dynamic model and the implementation of said model in an Extended Kalman Filter (EKF) using raw on-board sensor measurements. The EKF using the new model is then compared to a Kalman Filter which uses a linearized dynamic model. The accuracy of the attitude estimates and the controller performance under high yaw rates were verified using a motion capture system.
|
|
11:30-12:45, Paper WeBT1-16.5 | Add to My Program |
Near-Optimal Path Planning for a Car-Like Robot Visiting a Set of Waypoints with Field of View Constraints |
Rathinam, Sivakumar | Texas A&M University |
Manyam, Satyanarayana Gupta | Infoscitex Corporation |
Zhang, Yuntao | Texas A&M University |
Keywords: Nonholonomic Motion Planning, Optimization and Optimal Control, Wheeled Robots
Abstract: This article considers two variants of a shortest path problem for a car-like robot visiting a set of waypoints. The sequence of waypoints to be visited is specified in the first variant, while the robot is allowed to visit the waypoints in any sequence in the second variant. The shortest path problem is first solved for two waypoints with heading angle constraints at the waypoints using Pontryagin's minimum principle. Using the results for the two-point problem, tight lower and upper bounds on the length of the shortest path are developed for visiting n points by relaxing the requirement that the arrival angle must be equal to the departure angle of the robot at each waypoint. Theoretical bounds are also provided on the length of the feasible solutions obtained by the proposed algorithm. Simulation results verify the performance of the bounds for instances with 20 waypoints.
|
|
WeBT1-17 Interactive Session, 220 |
Add to My Program |
Motion Planning - 3.2.17 |
|
|
|
11:30-12:45, Paper WeBT1-17.1 | Add to My Program |
Orientation-Aware Motion Planning in Complex Workspaces Using Adaptive Harmonic Potential Fields |
Vlantis, Panagiotis | National Technical University of Athens |
Vrohidis, Constantinos | National Technical University of Athens |
Bechlioulis, Charalampos | National Technical University of Athens |
Kyriakopoulos, Kostas | National Technical Univ. of Athens |
Keywords: Motion and Path Planning, Autonomous Vehicle Navigation, Collision Avoidance
Abstract: In this work, a hybrid control scheme is presented in order to address the navigation problem for a planar robotic platform of arbitrary shape that is moving inside an obstacle-cluttered workspace. Given an initial and a desired robot configuration, we propose a methodology, based on approximate configuration space decomposition techniques, that makes use of heuristics to adaptively refine a partition of the configuration space into non-overlapping, adjacent slices. Furthermore, we employ appropriate workspace transformations and adaptive potential field based control laws that integrate elegantly with this type of configuration space representation, in order to safely navigate within a given cell and successfully cross over to the next, for almost all initial configurations, until the desired configuration is reached. Finally, we present simulation results that demonstrate the efficacy of the proposed control scheme.
|
|
11:30-12:45, Paper WeBT1-17.2 | Add to My Program |
Energy-Aware Temporal Logic Motion Planning for Mobile Robots |
Kundu, Tanmoy | Indian Institute of Technology - Kanpur |
Saha, Indranil | IIT Kanpur |
Keywords: Motion and Path Planning, Formal Methods in Robotics and Automation, Factory Automation
Abstract: This paper presents a methodology for synthesizing a motion plan for a mobile robot that ensures the robot never depletes its battery charge while carrying out its mission successfully. The robot's mission is specified as an LTL (Linear Temporal Logic) formula. A trajectory satisfying an LTL formula may contain a loop whose repetitive execution drains the robot's battery. The motion plan generated by our methodology ensures that the robot visits the charging station periodically in such a way that its battery is never depleted while the mission is carried out optimally. Given a set of potential charging station locations and an LTL specification, our algorithm also finds the best location for the charging station along with the optimal trajectory for the robot. We encode the motion planning problem as an SMT (Satisfiability Modulo Theories) solving problem and use the off-the-shelf SMT solver Z3 to solve the constraints, finding the location of the charging station and generating an optimal trajectory for the robot. We apply our methodology to synthesize energy-aware trajectories for robots with different dynamics in various workspaces and for various LTL specifications.
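A toy z3py sketch in the spirit of the encoding described above: a 1-D corridor stands in for the workspace, reaching the last cell stands in for the LTL mission, every step (including waiting) costs one unit of charge, and the charging-station cell is itself a decision variable. All constants are invented:

```python
from z3 import Int, Solver, And, Or, If, sat

T, N, B_MAX = 8, 6, 4              # horizon, corridor cells, battery capacity
s = Solver()
pos = [Int(f"pos_{t}") for t in range(T)]
bat = [Int(f"bat_{t}") for t in range(T)]
station = Int("station")           # charging-station location is a decision variable

s.add(And(pos[0] == 0, bat[0] == B_MAX, station >= 0, station < N))
for t in range(T - 1):
    s.add(pos[t] >= 0, pos[t] < N)
    s.add(Or(pos[t + 1] == pos[t] - 1, pos[t + 1] == pos[t],
             pos[t + 1] == pos[t] + 1))               # unit moves only
    # Each step costs 1 unit of charge; the station refills the battery.
    s.add(bat[t + 1] == If(pos[t + 1] == station, B_MAX, bat[t] - 1))
    s.add(bat[t + 1] >= 0)                            # never depleted
s.add(pos[T - 1] == N - 1)         # toy stand-in for the LTL mission goal

if s.check() == sat:
    m = s.model()
    print("station at", m[station], [m[p] for p in pos])
```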
|
|
11:30-12:45, Paper WeBT1-17.3 | Add to My Program |
Using Local Experiences for Global Motion Planning |
Chamzas, Constantinos | Rice University |
Shrivastava, Anshumali | Rice University |
Kavraki, Lydia | Rice University |
Keywords: Motion and Path Planning, Learning and Adaptive Systems
Abstract: Sampling-based motion planners are effective in many real-world applications such as robotic manipulation, navigation, and even protein modeling. However, it is often challenging to generate a collision-free path in environments where key areas are hard to sample. In the absence of any prior information, sampling-based planners are forced to explore uniformly or heuristically, which can lead to degraded performance. One way to improve performance is to use prior knowledge of environments to adapt the sampling strategy to the problem at hand. In this work, we decompose the workspace into local primitives and then memorize local experiences, in the form of local samplers, by storing them in a database. We synthesize an efficient global sampler by retrieving local experiences relevant to the given situation. Our method transfers knowledge effectively between diverse environments that share local primitives and speeds up performance dramatically. Our results show, in terms of solution time, an improvement of multiple orders of magnitude in two traditionally challenging high-dimensional problems compared to state-of-the-art approaches.
|
|
11:30-12:45, Paper WeBT1-17.4 | Add to My Program |
DMP Based Trajectory Tracking for a Nonholonomic Mobile Robot with Automatic Goal Adaptation and Obstacle Avoidance |
Sharma, Radhe Shyam | IIT Kanpur |
Shukla, Santosh | IIT Kanpur |
Karki, Hamad | Petroleum Institute |
Shukla, Amit | The Petroleum Institute, Abu Dhabi |
Behera, Laxmidhar | IIT Kanpur |
Subramanian, K. Venkatesh | Indian Intitute of Technology Kanpur |
Keywords: Motion and Path Planning, Learning from Demonstration, Autonomous Vehicle Navigation
Abstract: The Dynamic Movement Primitive (DMP), which is popular for motion planning of robot manipulators, has been adapted for a nonholonomic mobile robot to track a desired trajectory. A DMP is a simple damped spring model with a forcing function that learns the trajectory. The damped spring model attracts the robot towards the goal position, and the forcing function forces the robot to follow the given trajectory. Two Radial Basis Function Networks (RBFNs) have been used to learn the forcing function associated with the DMP model. Weight update laws are derived using the gradient descent approach to train the RBFNs. Fuzzy logic based steering angle dynamics is proposed to handle the asymmetric nature of an obstacle. The proposed scheme is capable of generating a smooth trajectory in the presence of an obstacle even when start and goal positions are altered, without losing the spatial information embedded during training. Convergence of the robot to the goal position has been shown using Lyapunov stability theory-based analysis. The approach has been extended to multiple static and dynamic obstacles for the successful convergence of the robot to the goal position. Both simulation and experimental results are provided to confirm the efficacy of the proposed scheme.
|
|
11:30-12:45, Paper WeBT1-17.5 | Add to My Program |
Predictive Collision Avoidance for the Dynamic Window Approach |
Missura, Marcell | University of Bonn |
Bennewitz, Maren | University of Bonn |
Keywords: Motion and Path Planning, Nonholonomic Motion Planning, Wheeled Robots
Abstract: Foresighted navigation is an essential skill for robots to rise from rigid factory floor installations to much more versatile mobile robots that partake in our everyday environment. The current state of the art that provides this mobility to some extent is the Dynamic Window Approach combined with a global start-to-target path planner. However, neither the Dynamic Window Approach nor the path planner are equipped to predict the motion of other objects in the environment. We propose a change in the Dynamic Window Approach - a dynamic collision model - that is capable of predicting future collisions with the environment by also taking into account the motion of other objects. We show in simulated experiments that our new way of computing the Dynamic Window Approach significantly reduces the number of collisions in a dynamic setting with nonholonomic vehicles while still being computationally efficient.
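A minimal sketch of the key change: a time-indexed collision check against constant-velocity obstacle predictions, rather than the classical static check. The paper's dynamic collision model differs in detail, and all interfaces here are invented:

```python
import numpy as np

def dynamic_collision_free(v, w, robot, obstacles, horizon=2.0, dt=0.1):
    """Predictive collision check for one Dynamic-Window candidate
    velocity (v, w): roll the robot's unicycle model forward and, at each
    future instant, compare against obstacles extrapolated with a
    constant-velocity model; this is the key change over classical DWA,
    which checks only obstacles' current positions.
    robot = (x, y, theta, radius); obstacle = (x, y, vx, vy, radius)."""
    x, y, th, r = robot
    for t in np.arange(dt, horizon, dt):
        x += v * np.cos(th) * dt
        y += v * np.sin(th) * dt
        th += w * dt
        for ox, oy, ovx, ovy, orad in obstacles:
            # Obstacle position predicted at the same future time t.
            if np.hypot(x - (ox + ovx * t), y - (oy + ovy * t)) < r + orad:
                return False
    return True
```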
|
|
11:30-12:45, Paper WeBT1-17.6 | Add to My Program |
Kinematic Constraints Based Bi-Directional RRT (KB-RRT) with Parameterized Trajectories for Robot Path Planning in Cluttered Environment |
Ghosh, Dibyendu | Intel Corporation |
Nandakumar, Ganeshram | Intel Technology India Pvt Ltd |
Narayanan, Karthik | Intel Corporation |
Honkote, Vinayak | Intel Corporation |
Sharma, Sidharth | IIIT Delhi |
Keywords: Motion and Path Planning, Autonomous Vehicle Navigation, Motion Control
Abstract: Optimal path planning and smooth trajectory planning are critical for effective navigation of mobile robots working towards accomplishing complex missions. For autonomous, real-time and extended operations of mobile robots, the navigation capability needs to be executed at the edge. Thus, efficient computation, minimal memory utilization and smooth trajectories are the key parameters that drive the successful operation of autonomous mobile robots. Traditionally, navigation solutions focus on developing robust path planning algorithms which are complex and compute/memory intensive. Bidirectional RRT (Bi-RRT) based path planning algorithms have gained increased attention due to their effectiveness and computational efficiency in generating feasible paths. However, these algorithms neither optimize memory nor guarantee smooth trajectories. To this end, we propose a kinematically constrained Bi-RRT (KB-RRT) algorithm, which restricts the number of nodes generated without compromising accuracy and incorporates kinodynamic constraints for generating smooth trajectories, together resulting in efficient navigation of autonomous mobile robots. The proposed algorithm is tested in a highly cluttered environment on an Ackermann-steering vehicle model with severe kinematic constraints. The experimental results demonstrate that KB-RRT achieves three times (3X) better performance in terms of convergence rate and memory utilization compared to a standard Bi-RRT algorithm.
|
|
WeBT1-18 Interactive Session, 220 |
Add to My Program |
Autonomous Vehicles II - 3.2.18 |
|
|
|
11:30-12:45, Paper WeBT1-18.1 | Add to My Program |
Predicting Vehicle Behaviors Over an Extended Horizon Using Behavior Interaction Network |
Ding, Wenchao | Hong Kong University of Science and Technology |
Chen, Jing | Hong Kong University of Science and Technology |
Shen, Shaojie | Hong Kong University of Science and Technology |
Keywords: Autonomous Vehicle Navigation, Deep Learning in Robotics and Automation, Intelligent Transportation Systems
Abstract: Anticipating possible behaviors of traffic participants is an essential capability of autonomous vehicles. Many behavior detection and maneuver recognition methods only have a very limited prediction horizon that leaves inadequate time and space for planning. To avoid unsatisfactory reactive decisions, it is essential to account for long-term future rewards in planning, which requires extending the prediction horizon. In this paper, we uncover that clues to vehicle behaviors over an extended horizon can be found in vehicle interaction, which makes it possible to anticipate the likelihood of a certain behavior even in the absence of any clear maneuver pattern. We adopt a recurrent neural network (RNN) for observation encoding, and based on that, we propose a novel vehicle behavior interaction network (VBIN) to capture the vehicle interaction from the hidden states and connection features of each interaction pair. The output of our method is a probabilistic likelihood of multiple behavior classes, which matches the multimodal and uncertain nature of the distant future. A systematic comparison of our method against two state-of-the-art methods and two further baseline methods on a publicly available real highway dataset is provided, showing that our method has superior accuracy and advanced capability for interaction modeling.
|
|
11:30-12:45, Paper WeBT1-18.2 | Add to My Program |
Multimodal Spatio-Temporal Information in End-To-End Networks for Automotive Steering Prediction |
Abouhussein, Mohamed | University of Freiburg |
Boedecker, Joschka | University of Freiburg |
Muller, Stefan | BMW |
Keywords: Autonomous Vehicle Navigation, Deep Learning in Robotics and Automation, Visual Learning
Abstract: We study the end-to-end steering problem using visual input data from an onboard vehicle camera. An empirical comparison between spatial, spatio-temporal and multimodal models is performed, assessing each concept's performance from two points of evaluation: first, how closely the model predicts and imitates a real-life driver's behavior; second, the smoothness of the predicted steering command. The latter is a newly proposed metric. Building on our results, we propose a new recurrent multimodal model. The suggested model has been tested on a custom dataset recorded by BMW, as well as the public dataset provided by Udacity. Results show that it outperforms previously published scores. Further, a steering correction concept for recovering from off-lane driving through the inclusion of correction frames is presented. We show that our suggestion leads to promising results empirically.
|
|
11:30-12:45, Paper WeBT1-18.3 | Add to My Program |
OVPC Mesh: 3D Free-Space Representation for Local Ground Vehicle Navigation |
Ruetz, Fabio | ETH Zurich |
Hernandez, Emili | CSIRO |
Pfeiffer, Mark | ETH Zurich |
Oleynikova, Helen | ETH Zürich |
Cox, Mark | CSIRO |
Lowe, Tom | CSIRO |
Borges, Paulo Vinicius Koerich | CSIRO |
Keywords: Autonomous Vehicle Navigation, Field Robots, Mapping
Abstract: This paper presents a novel approach for local 3D environment representation for autonomous unmanned ground vehicle (UGV) navigation called On Visible Point Clouds Mesh (OVPC Mesh). Our approach represents the surroundings of the robot as a watertight 3D mesh generated from local point cloud data in order to represent the free space surrounding the robot. It is a conservative estimation of the free space and provides a desirable trade-off between representation precision and computational efficiency, without having to discretize the environment into a fixed grid size. Our experiments analyze the usability of the approach for UGV navigation in rough terrain, both in simulation and in a fully integrated real-world system. Additionally, we compare our approach to well-known state-of-the-art solutions, such as Octomap and Elevation Mapping, and show that OVPC Mesh can provide reliable 3D information for trajectory planning while fulfilling real-time constraints.
|
|
11:30-12:45, Paper WeBT1-18.4 | Add to My Program |
Attention-Based Lane Change Prediction |
Scheel, Oliver | BMW Group |
Nagaraja, Naveen Shankar | BMW Group |
Schwarz, Loren | BMW Group |
Navab, Nassir | TU Munich |
Tombari, Federico | Technische Universität München |
Keywords: Autonomous Vehicle Navigation, Deep Learning in Robotics and Automation, Intelligent Transportation Systems
Abstract: Lane change prediction of surrounding vehicles is a key building block of path planning. The focus has been on increasing the accuracy of prediction by posing it purely as a function estimation problem at the cost of model understandability. However, the efficacy of any lane change prediction model can be improved when both corner and failure cases are humanly understandable. We propose an attention-based recurrent model to tackle both understandability and prediction quality. We also propose metrics which reflect the discomfort felt by the driver. We show encouraging results on a publicly available dataset and proprietary fleet data.
|
|
11:30-12:45, Paper WeBT1-18.5 | Add to My Program |
Safe Reinforcement Learning with Model Uncertainty Estimates |
Lutjens, Bjorn | Massachusetts Institute of Technology |
Everett, Michael | Massachusetts Institute of Technology |
How, Jonathan Patrick | Massachusetts Institute of Technology |
Keywords: Autonomous Vehicle Navigation, Failure Detection and Recovery, Deep Learning in Robotics and Automation
Abstract: Many current autonomous systems are being designed with a strong reliance on black box predictions from deep neural networks (DNNs). However, DNNs tend to be overconfident in predictions on unseen data and can give unpredictable results for far-from-distribution test data. The importance of predictions that are robust to this distributional shift is evident for safety-critical applications, such as collision avoidance around pedestrians. Measures of model uncertainty can be used to identify unseen data, but the state-of-the-art extraction methods such as Bayesian neural networks are mostly intractable to compute. This paper uses MC-Dropout and Bootstrapping to give computationally tractable and parallelizable uncertainty estimates. The methods are embedded in a Safe Reinforcement Learning framework to form uncertainty-aware navigation around pedestrians. The result is a collision avoidance policy that knows what it does not know and cautiously avoids pedestrians that exhibit unseen behavior. The policy is demonstrated in simulation to be more robust to novel observations and take safer actions than an uncertainty-unaware baseline.
|
|
11:30-12:45, Paper WeBT1-18.6 | Add to My Program |
Using DP towards a Shortest Path Problem-Related Application |
Jiao, Jianhao | The Hong Kong University of Science and Technology |
Fan, Rui | The Hong Kong University of Science and Technology |
Ma, Han | Tsinghua University |
Liu, Ming | Hong Kong University of Science and Technology |
Keywords: Autonomous Vehicle Navigation, Computer Vision for Automation, Intelligent Transportation Systems
Abstract: The detection of curved lanes is still challenging for autonomous driving systems. Although current cutting-edge approaches have performed well in real applications, most of them are based on strict model assumptions. Similar to other visual recognition tasks, lane detection can be formulated as a two-dimensional graph searching problem, which can be solved by searching several optimal paths along line segments and boundaries. In this paper, we present a directed graph model in which dynamic programming is used to deal with a specific shortest path problem. This model is particularly suitable for representing objects with a long continuous shape structure, such as lanes and roads. We apply the designed model and propose an algorithm for detecting lanes by formulating lane detection as a shortest path problem. To evaluate the performance of our proposed algorithm, we tested five sequences (comprising 1573 frames) from the KITTI database. The results show that our method achieves an average successful detection precision of 97.5%.
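A minimal sketch of the DP recursion for the DAG special case (one lane point per image row, transitions limited to neighboring columns; the paper's directed graph model is more general):

```python
import numpy as np

def dp_lane_path(cost):
    """Dynamic-programming shortest path through a per-pixel cost map
    (rows = image rows). Each row picks one column, reachable from
    columns c-1, c, c+1 of the previous row. Returns one column index
    per image row (the detected lane)."""
    H, W = cost.shape
    acc = cost.copy()                       # accumulated cost
    back = np.zeros((H, W), dtype=int)      # backpointers
    for r in range(1, H):
        for c in range(W):
            lo, hi = max(0, c - 1), min(W, c + 2)
            j = lo + int(np.argmin(acc[r - 1, lo:hi]))
            acc[r, c] += acc[r - 1, j]
            back[r, c] = j
    path = [int(np.argmin(acc[-1]))]        # best endpoint in last row
    for r in range(H - 1, 0, -1):
        path.append(back[r, path[-1]])
    return path[::-1]
```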
|
|
WeBT1-19 Interactive Session, 220 |
Add to My Program |
Manipulation IV - 3.2.19 |
|
|
|
11:30-12:45, Paper WeBT1-19.1 | Add to My Program |
Improving Dual-Arm Assembly by Master-Slave Compliance |
Suomalainen, Markku | University of Oulu |
Calinon, Sylvain | Idiap Research Institute |
Pignat, Emmanuel | Idiap Research Institute, Martigny, Switzerland |
Kyrki, Ville | Aalto University |
Keywords: Dual Arm Manipulation, Compliant Assembly, Learning from Demonstration
Abstract: In this paper we show how different choices regarding compliance affect a dual-arm assembly task. In addition, we present how the compliance parameters can be learned from a human demonstration. Compliant motions can be used in assembly tasks to mitigate pose errors originating from, for example, inaccurate grasping. We present analytical background and accompanying experimental results on how to choose the center of compliance to enhance the convergence region of an alignment task. Then we present possible ways of choosing the compliant axes for accomplishing alignment in a scenario where orientation error is present. We show that a previously presented Learning from Demonstration method can be used to learn the motion and compliance parameters of an impedance controller for both manipulators. The learning requires a human demonstration with a single teleoperated manipulator only, which eases the execution of the demonstration and enables the use of manipulators in hard-to-reach locations. Finally, we experimentally verify our claim that having both manipulators compliant in both rotation and translation can accomplish the alignment task with less total joint motion and in a shorter time than moving one manipulator only. In addition, we show that the learning method produces the parameters that achieve the best results in our experiments.
|
|
11:30-12:45, Paper WeBT1-19.2 | Add to My Program |
Generation of Synchronized Configuration Space Trajectories of Multi-Robot Systems |
Kabir, Ariyan M | University of Southern California |
Kanyuck, Alec | University of Southern California |
Malhan, Rishi | University of Southern California |
Shembekar, Aniruddha | University of Southern California |
Thakar, Shantanu | University of Southern California |
Shah, Brual C. | University of Maryland, College Park |
Gupta, Satyandra K. | University of Southern California |
Keywords: Dual Arm Manipulation, Optimization and Optimal Control, Motion and Path Planning
Abstract: We pose the problem of path-constrained trajectory generation for the synchronous motion of multi-robot systems as a non-linear optimization problem. Our method determines appropriate parametric representation for the configuration variables, generates an approximate solution as a starting point for the optimization method, and uses successive refinement techniques to solve the problem in a computationally efficient manner. We have demonstrated the effectiveness of the proposed method on challenging simulation and physical experiments with high degrees of freedom robotic systems.
|
|
11:30-12:45, Paper WeBT1-19.3 | Add to My Program |
REPLAB: A Reproducible Low-Cost Arm Benchmark for Robotic Learning |
Yang, Brian | University of California, Berkeley |
Jayaraman, Dinesh | University of California, Berkeley |
Zhang, Jesse | UC Berkeley |
Levine, Sergey | UC Berkeley |
Keywords: Performance Evaluation and Benchmarking, Deep Learning in Robotics and Automation
Abstract: Standardized evaluation measures have aided in the progress of machine learning approaches in disciplines such as computer vision and machine translation. In this paper, we make the case that robotic learning would also benefit from benchmarking, and present a template for a vision-based manipulation benchmark. Our benchmark is built on "REPLAB", a reproducible and self-contained hardware stack (robot arm, camera, and workspace) that costs about 2000 USD and occupies a cuboid of size 70x40x60 cm. Each REPLAB cell may be assembled within a few hours. Through this low-cost, compact design, REPLAB aims to drive wide participation by lowering the barrier to entry into robotics and to enable easy scaling to many robots. We envision REPLAB as a framework for reproducible research across manipulation tasks, and as a step in this direction, we define a grasping benchmark consisting of a task definition, evaluation protocol, performance measures, and a dataset of over 50,000 grasp attempts. We implement, evaluate, and analyze several previously proposed grasping approaches to establish baselines for this benchmark. Project page with assembly instructions, additional details, and videos: https://goo.gl/5F9dP4.
|
|
11:30-12:45, Paper WeBT1-19.4 | Add to My Program |
Stable Bin Packing of Non-Convex 3D Objects with a Robot Manipulator |
Wang, Fan | Duke University |
Hauser, Kris | Duke University |
Keywords: Factory Automation, Motion and Path Planning, Computational Geometry
Abstract: Recent progress in the field of robotic manipulation has generated interest in fully automatic object packing in warehouses. This paper proposes a formulation of the packing problem that is tailored to the automated warehousing domain. Besides minimizing waste space inside a container, the problem requires stability of the object pile during packing and the feasibility of the robot motion executing the placement plans. To address this problem, a set of constraints are formulated, and a constructive packing pipeline is proposed to solve these constraints. The pipeline is able to pack geometrically complex, non-convex objects while satisfying stability and robot packability constraints. In particular, a new 3D positioning heuristic called Heightmap-Minimization heuristic is proposed, and heightmaps are used to speed up the search. Experimental evaluation of the method is conducted with a realistic physical simulator on a dataset of scanned real-world items, demonstrating stable and high-quality packing plans compared with other 3D packing methods.
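A simplified reading of the Heightmap-Minimization idea, scoring one candidate placement by the container heightmap volume it would produce. Stability and robot-packability constraints, which are central to the paper, are omitted here, and the interface is invented:

```python
import numpy as np

def placement_score(heightmap, item_footprint, item_heights, x, y):
    """Score a candidate (x, y) placement: the item rests at the maximum
    height under its footprint, and a lower resulting total heightmap
    volume means a tighter pack. item_footprint: boolean mask;
    item_heights: per-cell item height above its base (float arrays;
    the footprint is assumed to fit inside the container at (x, y))."""
    h, w = item_footprint.shape
    patch = heightmap[x:x + h, y:y + w]
    rest_z = np.max(np.where(item_footprint, patch, -np.inf))
    new_patch = np.where(item_footprint,
                         np.maximum(patch, rest_z + item_heights), patch)
    # Total heightmap volume after placement; lower is better.
    return heightmap.sum() - patch.sum() + new_patch.sum()
```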
|
|
11:30-12:45, Paper WeBT1-19.5 | Add to My Program |
A Constraint Programming Approach to Simultaneous Task Allocation and Motion Scheduling for Industrial Dual-Arm Manipulation Tasks |
Behrens, Jan Kristof | Robert Bosch GmbH |
Lange, Ralph | Robert Bosch GmbH |
Mansouri, Masoumeh | Örebro University |
Keywords: Dual Arm Manipulation, Planning, Scheduling and Coordination, Manipulation Planning
Abstract: Modern lightweight dual-arm robots bring the physical capabilities to quickly take over tasks at typical industrial workplaces designed for workers. Low setup times - including the instructing/specifying of new tasks - are crucial to stay competitive. We propose a constraint programming approach to simultaneous task allocation and motion scheduling for such industrial manipulation and assembly tasks. Our approach covers the robot as well as connected machines. The key concept is Ordered Visiting Constraints, a descriptive and extensible model to specify such tasks with their spatiotemporal requirements and combinatorial or ordering constraints. Our solver integrates such task models and robot motion models into constraint optimization problems and solves them efficiently using various heuristics to produce makespan-optimized robot programs. For large manipulation tasks with 200 objects, our solver, implemented using Google's Operations Research tools, requires less than a minute to compute usable plans. The proposed task model is robot-independent and can easily be deployed to other robotic platforms. This portability is validated through several simulation-based experiments.
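A toy CP-SAT sketch (Google OR-Tools) in the spirit of the approach: allocate four tasks to two arms and schedule them to minimize makespan. Ordered Visiting Constraints, motion models and machine coupling are omitted, and all task data are invented:

```python
from ortools.sat.python import cp_model

durations, horizon = [3, 2, 4, 2], 20
m = cp_model.CpModel()
starts, ends, arm_of = [], [], []
intervals = {0: [], 1: []}                 # per-arm interval lists
for i, d in enumerate(durations):
    s = m.NewIntVar(0, horizon, f"s{i}")
    e = m.NewIntVar(0, horizon, f"e{i}")
    a = m.NewBoolVar(f"arm{i}")            # 0 = left arm, 1 = right arm
    for arm in (0, 1):
        lit = a if arm else a.Not()
        intervals[arm].append(
            m.NewOptionalIntervalVar(s, d, e, lit, f"iv{i}_{arm}"))
    starts.append(s); ends.append(e); arm_of.append(a)
for arm in (0, 1):
    m.AddNoOverlap(intervals[arm])         # one task at a time per arm
m.Add(starts[1] >= ends[0])                # a simple ordering constraint
makespan = m.NewIntVar(0, horizon, "makespan")
m.AddMaxEquality(makespan, ends)
m.Minimize(makespan)
solver = cp_model.CpSolver()
if solver.Solve(m) == cp_model.OPTIMAL:
    print([(solver.Value(s), solver.Value(a)) for s, a in zip(starts, arm_of)])
```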
|
|
11:30-12:45, Paper WeBT1-19.6 | Add to My Program |
Exploiting Symmetries in Reinforcement Learning of Bimanual Robotic Tasks |
Amadio, Fabio | Università Di Padova |
Colomé, Adrià | Institut De Robòtica I Informàtica Industrial (CSIC-UPC), Q28180 |
Torras, Carme | Csic - Upc |
Keywords: Dual Arm Manipulation, Learning from Demonstration, Learning and Adaptive Systems
Abstract: Movement Primitives (MPs) have been widely adopted for representing and learning robotic movements using reinforcement learning policy search. Probabilistic Movement Primitives (ProMPs) are a kind of MP based on a stochastic representation over sets of trajectories, able to capture the variability allowed while executing a movement. This approach has proved effective in learning a wide range of robotic movements, but it comes with the necessity of dealing with a high-dimensional parameter space. This may be a critical problem when learning tasks with two robotic manipulators, and this work proposes an approach to reduce the dimension of the parameter space by exploiting symmetry. A symmetrization method for ProMPs is presented and used to represent two movements, employing a single ProMP for the first arm and a symmetry surface that maps that ProMP to the second arm. This symmetric representation is then adopted in reinforcement learning of bimanual tasks (from user-provided demonstrations), using the Relative Entropy Policy Search (REPS) algorithm. The symmetry-based approach has been tested in a cloth manipulation experiment, showing faster learning of the task.
|
|
WeBT1-20 Interactive Session, 220 |
Add to My Program |
Medical Computer Vision - 3.2.20 |
|
|
|
11:30-12:45, Paper WeBT1-20.1 | Add to My Program |
Self-Supervised Surgical Tool Segmentation Using Kinematic Information |
da Costa Rocha, Cristian | NTNU |
Padoy, Nicolas | University of Strasbourg |
Rosa, Benoît | CNRS, France |
Keywords: Computer Vision for Medical Robotics, Deep Learning in Robotics and Automation, Medical Robots and Systems
Abstract: Surgical tool segmentation in endoscopic images is the first step towards pose estimation and (sub-)task automation in challenging minimally invasive surgical operations. While many approaches in the literature have shown great results using modern machine learning methods such as convolutional neural networks, the main bottleneck lies in the acquisition of a large number of manually-annotated images for efficient learning. This is especially true in surgical context, where patient-to-patient differences impede the overall generalizability. In order to cope with this lack of annotated data, we propose a self-supervised approach in a robot-assisted context. To our knowledge, the proposed approach is the first to make use of the kinematic model of the robot in order to generate training labels. The core contribution of the paper is to propose an optimization method to obtain good labels for training despite an unknown hand-eye calibration and an imprecise kinematic model. The labels can subsequently be used for fine-tuning a fully-convolutional neural network for pixel-wise classification. As a result, the tool can be segmented in the endoscopic images without needing a single manually-annotated image. Experimental results on ex vivo and in vivo datasets obtained using a flexible robotized endoscopy system are very promising.
|
|
11:30-12:45, Paper WeBT1-20.2 | Add to My Program |
Needle Localization for Robot-Assisted Subretinal Injection Based on Deep Learning |
Zhou, Mingchuan | Technische Universität München |
Wang, Xijia | Technische Universität München |
Weiss, Jakob | Technische Universität München |
Eslami, Abouzar | Carl Zeiss Meditec AG |
Huang, Kai | Sun Yat-Sen University |
Maier, Mathias | Klinikum Rechts Der Isar Der TU München |
Lohmann, Chris P. | Klinikum Rechts Der Isar Der TU München |
Navab, Nassir | TU Munich |
Knoll, Alois | Tech. Univ. Muenchen TUM |
Nasseri, M. Ali | Technische Universitaet Muenchen |
Keywords: Computer Vision for Medical Robotics, Medical Robots and Systems
Abstract: Subretinal injection is known to be a complicated task for ophthalmologists to perform; the main sources of difficulty are the fine anatomy of the retina, insufficient visual feedback, and the high surgical precision required. Image-guided robot-assisted surgery is a promising solution that brings significant enhancement in treatment outcome and reduces the physical limitations of human surgeons. In this paper, we demonstrate a robust framework for needle detection and localization in subretinal injection using microscope-integrated Optical Coherence Tomography (MI-OCT) based on deep learning. The proposed method consists of two main steps: a) preprocessing of OCT volumetric images; b) needle localization in the processed images. The first step coarsely localizes the needle position based on the needle information above the retinal surface and crops the original image to a small region of interest (ROI). Afterward, the cropped image is fed into a well-trained network for detection and localization of the needle segment. The entire framework is extensively validated in ex-vivo pig eye experiments with robotic subretinal injection. The results show that the proposed method can localize the needle accurately with a confidence of 99.2%.
|
|
11:30-12:45, Paper WeBT1-20.3 | Add to My Program |
Robust Generalized Point Set Registration Using Inhomogeneous Hybrid Mixture Models Via Expectation Maximization |
Min, Zhe | The Chinese University of Hong Kong |
Meng, Max Q.-H. | The Chinese University of Hong Kong |
Keywords: Computer Vision for Medical Robotics, Medical Robots and Systems
Abstract: Point set registration (PSR) is an important problem in the computer vision, robotics and biomedical engineering communities. Usually, only positional information at each point is adopted in a registration. In this paper, the orientational vector (or normal vector) associated with each point is also utilized. Generalized point set registration is formulated and solved under the Expectation-Maximization (EM) framework. In the E-step, the posterior probabilities representing the correspondence probabilities are computed. In the M-step, rigid transformation parameters, including the rotation matrix and the translation vector, are updated. The proposed algorithm stops when it converges to the optimal solution or a maximum number of iterations is reached. The observed position set and normal vector set are assumed to follow Gaussian Mixture Models (GMMs) and Fisher distribution Mixture Models (FMMs), respectively. To further improve our algorithm's robustness, the hybrid mixture models (HMMs) are assumed to be inhomogeneous. Experimental results on surface points extracted from a human femur's CT model show that our algorithm achieves lower registration error and is more robust to noise and outliers than state-of-the-art registration methods.
|
|
11:30-12:45, Paper WeBT1-20.4 | Add to My Program |
Visual Guidance and Automatic Control for Robotic Personalized Stent Graft Manufacturing |
Guo, Yu | Imperial College London |
Sun, Miao | Imperial College London |
Lo, Po Wen | Imperial College London |
Lo, Benny Ping Lai | Imperial College London |
Keywords: Computer Vision for Medical Robotics, Deep Learning in Robotics and Automation, Visual Servoing
Abstract: Personalized stent grafts are designed to treat Abdominal Aortic Aneurysms (AAA). Due to individual differences in arterial structures, a stent graft has to be custom made for each AAA patient. Robotic platforms for autonomous personalized stent graft manufacturing have been proposed recently, relying upon stereo vision systems to coordinate multiple robots for fabricating customized stent grafts. This paper proposes a novel hybrid vision system for real-time visual servoing for personalized stent-graft manufacturing. To coordinate the robotic arms, this system projects a dynamic stereo microscope coordinate system onto a static wide-angle-view stereo webcam coordinate system. The multiple stereo camera configuration enables accurate localization of the needle in 3D during the sewing process. The scale-invariant feature transform (SIFT) method and color filtering are implemented for stereo matching and feature identification for object localization. To maintain a clear view of the sewing process, a visual-servoing system is developed for guiding the stereo microscopes to track the needle movements. The deep deterministic policy gradient (DDPG) reinforcement learning algorithm is developed for real-time intelligent robotic control. Experimental results show that the robotic arm can learn to reach the desired targets autonomously.
|
|
11:30-12:45, Paper WeBT1-20.5 | Add to My Program |
3D Path Planning from a Single 2D Fluoroscopic Image for Robot Assisted Fenestrated Endovascular Aortic Repair |
Zheng, Jian-Qing | Imperial College London |
Zhou, Xiao-Yun | Imperial College London |
Riga, Celia | Imperial College London |
Yang, Guang-Zhong | Imperial College London |
Keywords: Computer Vision for Medical Robotics, Surgical Robotics: Planning, Medical Robots and Systems
Abstract: The current standard of intra-operative navigation during Fenestrated Endovascular Aortic Repair (FEVAR) calls for 3D alignment between inserted devices and aortic branches. Navigation, commonly via 2D fluoroscopic images, lacks anatomical information, resulting in longer operation hours and radiation exposure. In this paper, a skeleton instantiation framework of Abdominal Aortic Aneurysm (AAA) from a single 2D fluoroscopic image is introduced for real-time 3D robotic path planning. A graph matching method is proposed to establish the correspondences between the 3D pre-operative and 2D intra-operative AAA skeletons, and then the two skeletons are registered by skeleton deformation and regularization with respect to skeleton length and smoothness. Furthermore, deep learning is used to segment the 3D pre-operative AAA from Computed Tomography (CT) scans to facilitate the framework's automation. Simulation, phantom and patient AAA data sets have been used to validate the proposed framework. A 3D distance error of 2 mm was achieved in the phantom setup. Performance advantages were also achieved in terms of accuracy, robustness, and time-efficiency.
|
|
11:30-12:45, Paper WeBT1-20.6 | Add to My Program |
Context-Aware Depth and Pose Estimation for Bronchoscopic Navigation |
Shen, Mali | The Hamlyn Centre for Robotic Surgery, Imperial College London |
Gu, Yun | SJTU |
Liu, Ning | Imperial College London |
Yang, Guang-Zhong | Imperial College London |
Keywords: Computer Vision for Medical Robotics, Deep Learning in Robotics and Automation, Visual Learning
Abstract: Endobronchial intervention is increasingly used as a minimally invasive means of lung intervention. Vision-based localization approaches are often sensitive to image artifacts in bronchoscopic videos. In this paper, a robust navigation system based on a context-aware depth recovery approach for monocular video images is presented. To handle the artifacts, a conditional generative adversarial learning framework is proposed for reliable depth recovery. The accuracy of depth estimation and camera localization is validated on an in vivo dataset. Both quantitative and qualitative results demonstrate that the depth recovered with the proposed method preserves better structural information of airway lumens in the presence of image artifacts, and the improved camera localization accuracy demonstrates its clinical potential for bronchoscopic navigation.
|
|
WeBT1-21 Interactive Session, 220 |
Add to My Program |
Active Perception - 3.2.21 |
|
|
|
11:30-12:45, Paper WeBT1-21.1 | Add to My Program |
Multi-View Picking: Next-Best-View Reaching for Improved Grasping in Clutter |
Morrison, Douglas | Australian Centre for Robotic Vision |
Corke, Peter | Queensland University of Technology |
Leitner, Jurgen | Australian Centre for Robotic Vision / QUT |
Keywords: Perception for Grasping and Manipulation, Grasping
Abstract: Camera viewpoint selection is an important aspect of visual grasp detection, especially in clutter where many occlusions are present. Where other approaches use a static camera position or fixed data collection routines, our Multi-View Picking (MVP) controller acts directly on a distribution of grasp pose estimates. While reaching to a grasp, informative viewpoints for an eye-in-hand camera are chosen in real time to reduce uncertainty in the grasp pose caused by clutter and occlusions. In trials of grasping 20 objects from clutter, our MVP controller achieves 80% grasp success, outperforming a single-viewpoint grasp detector by 12%. We also show that our approach is both more accurate and more efficient than approaches which consider multiple fixed viewpoints.
|
|
11:30-12:45, Paper WeBT1-21.2 | Add to My Program |
A Multi-Sensor Next-Best-View Framework for Geometric Model-Based Robotics Applications |
Cui, Jinda | Rensselaer Polytechnic Institute |
Wen, John | Rensselaer Polytechnic Institute |
Trinkle, Jeff | Rensselaer Polytechnic Institute |
Keywords: Computer Vision for Automation, Computer Vision for Other Robotic Applications, Reactive and Sensor-Based Planning
Abstract: Geometric models are crucial for many robotics applications. Current robotic 3D reconstruction systems focus only on specific reconstruction goals, which makes them hard to adapt to different tasks. In this paper we present a next-best-view framework which allows robots to construct a geometric model incrementally through consecutive sensing actions. Instead of limiting the type and total number of sensors, in each sensing step we evaluate actions from all available sensors and pick the best to execute. Our framework is more comprehensive since the model building process can be designed to best accomplish different tasks. The system has been demonstrated in two experiments, on 3D reconstruction and weld seam inspection, yielding promising results.
|
|
11:30-12:45, Paper WeBT1-21.3 | Add to My Program |
Model-Free Optimal Estimation and Sensor Placement Framework for Elastic Kinematic Chain |
Ahn, Joonmo | Seoul National University |
Yoon, Jaemin | Seoul National University |
Lee, Jeongseob | Seoul National University |
Lee, Dongjun | Seoul National University |
Keywords: Flexible Robots, Soft Material Robotics
Abstract: We propose a novel model-free optimal estimation and sensor placement framework for a high-DOF (degree-of-freedom) EKC (elastic kinematic chain) with only a limited number of IMU (inertial measurement unit) sensors based on POD (proper orthogonal decomposition) and MAP (maximum a posteriori) estimation. First, we (off-line) excite the system richly enough, collect the data and perform the POD to extract dominant and non-dominant modes. We then decide the minimum number of IMUs according to the dominant modes, and construct the prior distribution of the output (i.e., top-end position of EKC) based on the singular value of each POD mode. We also formulate the MAP estimation given the prior distribution and different placements of the IMUs and choose the optimal IMU placement to maximize the posterior probability. This optimal placement is then used for real-time output estimation of the EKC. Experiments are also performed to verify the theory.
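A minimal sketch of the off-line POD step described above, extracting the dominant modes from excitation snapshots; the number of retained modes then suggests the minimum number of IMUs. The interface and threshold are hypothetical:

```python
import numpy as np

def pod_modes(snapshots, energy=0.99):
    """Proper orthogonal decomposition of excitation data: columns of
    `snapshots` are measured configurations collected during rich
    excitation. Keeps the smallest set of modes capturing the given
    fraction of the total variance; returns (modes, singular values)."""
    centered = snapshots - snapshots.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(centered, full_matrices=False)
    frac = np.cumsum(s ** 2) / np.sum(s ** 2)
    k = int(np.searchsorted(frac, energy)) + 1   # dominant mode count
    return U[:, :k], s[:k]
```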
|
|
11:30-12:45, Paper WeBT1-21.4 | Add to My Program |
Efficient Autonomous Exploration Planning of Large Scale 3D-Environments |
Selin, Magnus | Linköping University |
Tiger, Mattias | Department of Computer and Information Science, Linköping University |
Duberg, Daniel | KTH - Royal Institute of Technology |
Heintz, Fredrik | Linköping University |
Jensfelt, Patric | KTH - Royal Institute of Technology |
Keywords: Search and Rescue Robots, Motion and Path Planning, Mapping
Abstract: Exploration is an important aspect of robotics, whether it is for mapping, rescue missions or path planning in an unknown environment. Frontier Exploration planning (FEP) and Receding Horizon Next-Best-View planning (RH-NBVP) are two different approaches with different strengths and weaknesses. FEP explores a large environment consisting of separate regions with ease, but is slow at reaching full exploration due to moving back and forth between regions. RH-NBVP shows great potential and efficiently explores individual regions, but has the disadvantage that it can get stuck in large environments not exploring all regions. In this work we present a method that combines both approaches, with FEP as a global exploration planner and RH-NBVP for local exploration. We also present techniques to estimate potential information gain faster, to cache previously estimated gains and to exploit these to efficiently estimate new queries.
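The combination of the two planners can be pictured as a simple supervisory loop: exploit RH-NBVP while its estimated information gain stays useful, and fall back to a frontier goal when it stalls. The generator below is an illustrative reading of that loop; all function names and the threshold are placeholders, not the paper's implementation.

```python
def explore(world_map, plan_rh_nbvp, plan_frontier, gain_threshold=0.1):
    """Yield motion targets until the map is fully explored."""
    while not world_map.fully_explored():
        branch, gain = plan_rh_nbvp(world_map)   # local receding-horizon NBV
        if gain > gain_threshold:
            yield branch                         # keep exploring this region
        else:
            yield plan_frontier(world_map)       # jump to another region (FEP)
```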
|
|
11:30-12:45, Paper WeBT1-21.5 | Add to My Program |
Tree Search Techniques for Minimizing Detectability and Maximizing Visibility |
Zhang, Zhongshun | Virginia Tech |
Lee, Joseph | U.S. Army TARDEC |
Smereka, Jonathon M. | U.S. Army TARDEC |
Sung, Yoonchang | Virginia Tech |
Zhou, Lifeng | Virginia Tech |
Tokekar, Pratap | Virginia Tech |
Keywords: Surveillance Systems, Planning, Scheduling and Coordination, Autonomous Agents
Abstract: We introduce and study the problem of planning a trajectory for an agent to carry out a reconnaissance mission while avoiding detection by an adversarial guard. This introduces a multi-objective version of the classical visibility-based target search and pursuit-evasion problem. In our formulation, the agent receives a positive reward for increasing its visibility (by exploring new regions) and a negative penalty every time it is detected by the guard. The objective is to find a finite-horizon path for the agent that balances the trade-off between maximizing visibility and minimizing detectability. We model this problem as a discrete, sequential, two-player, zero-sum game. We use two types of game-tree search algorithms to solve this problem: minimax tree search and Monte-Carlo tree search. Both can yield the optimal policy but may require exponential computation time and space. We propose several pruning techniques that reduce the computational cost while still preserving optimality guarantees. Simulation results show that the proposed strategy expands roughly three orders of magnitude fewer nodes than the brute-force strategy. We also find that Monte-Carlo tree search saves approximately one order of magnitude in computation time compared to minimax tree search.
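For readers unfamiliar with game-tree search in this zero-sum setting, the standard minimax recursion with alpha-beta pruning is sketched below; it is shown only to fix ideas, and the paper's own pruning rules for the visibility/detectability game are more specialized.

```python
def minimax(state, depth, alpha=float("-inf"), beta=float("inf"), maximizing=True):
    """Alpha-beta pruned minimax over a two-player, zero-sum game tree."""
    if depth == 0 or state.is_terminal():
        return state.reward()    # e.g. visibility reward minus detection penalty
    if maximizing:               # the reconnaissance agent
        value = float("-inf")
        for child in state.successors():
            value = max(value, minimax(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:
                break            # prune: the guard already has a better reply
        return value
    value = float("inf")         # the adversarial guard
    for child in state.successors():
        value = min(value, minimax(child, depth - 1, alpha, beta, True))
        beta = min(beta, value)
        if alpha >= beta:
            break
    return value
```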
|
|
11:30-12:45, Paper WeBT1-21.6 | Add to My Program |
Autonomous Exploration of Complex Underwater Environments Using a Probabilistic Next-Best-View Planner |
Palomeras, Narcis | Universitat De Girona |
Hurtos, Natalia | University of Girona |
Vidal Garcia, Eduard | Universitat De Girona |
Carreras, Marc | Universitat De Girona |
Keywords: Marine Robotics, Reactive and Sensor-Based Planning, Mapping
Abstract: Autonomous underwater vehicles (AUVs) have been extensively used for open-sea exploration. However, the mapping or inspection of complex underwater structures, which have an interest from the scientific and the industrial point of view, is still carried out by professional divers or remotely operated vehicles (ROVs). We propose a probabilistic next-best-view (NBV) planner, targeted to hover-capable AUVs, that will allow them to explore these complex environments without an a priori model. The proposed method is based on scanning the area from different viewpoints in an iterative way. At each step, a viewpoint is chosen from a set of random samples according to a utility function. An obstacle-free path to the selected viewpoint is planned, and the vehicle navigates to it to gather a new scan that will be registered with the previous ones. To evaluate the proposed method we present four different tests using the Girona 500 AUV, both in simulation and in real scenarios. The results demonstrate the capability to explore complex environments autonomously, producing models of the environment with a high degree of coverage that can enable mapping and inspection applications.
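One iteration of the sampling-based NBV loop described here can be summarized in a few lines: draw random candidate viewpoints, score them with a utility function, and keep the best. The sketch below assumes hypothetical `sample_viewpoint` and `utility` callables and is not the authors' code.

```python
import numpy as np

def next_best_view(occupancy_map, sample_viewpoint, utility, n_samples=100):
    candidates = [sample_viewpoint(occupancy_map) for _ in range(n_samples)]
    scores = [utility(view, occupancy_map) for view in candidates]
    return candidates[int(np.argmax(scores))]   # then plan an obstacle-free path
```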
|
|
WeBT1-22 Interactive Session, 220 |
Add to My Program |
Planning - 3.2.22 |
|
|
|
11:30-12:45, Paper WeBT1-22.1 | Add to My Program |
Chance Constrained Motion Planning for High-Dimensional Robots |
Dai, Siyu | Massachusetts Institute of Technology |
Schaffert, Shawn | Massachusetts Institute of Technology |
M. Jasour, Ashkan | MIT |
Hofmann, Andreas | MIT |
Williams, Brian | MIT |
Keywords: Probability and Statistical Methods, Manipulation Planning, Motion and Path Planning
Abstract: This paper introduces Probabilistic Chekov (p-Chekov), a chance-constrained motion planning system that can be applied to high degree-of-freedom (DOF) robots under motion uncertainty and imperfect state information. Given process and observation noise models, it can find feasible trajectories which satisfy a user-specified bound on the probability of collision. Leveraging our previous work in deterministic motion planning, which integrated trajectory optimization into a sparse roadmap framework, p-Chekov achieves superior planning speed for high-dimensional tasks. P-Chekov incorporates a linear-quadratic Gaussian motion planning approach into the estimation of the robot state probability distribution, applies quadrature theories to waypoint collision risk estimation, and adapts risk allocation approaches to assign allowable probabilities of failure among waypoints. Unlike other existing risk-aware planners, p-Chekov can be applied to high-DOF robotic planning tasks without convexification of the environment. The experimental results in this paper show that p-Chekov can effectively reduce collision risk and satisfy user-specified chance constraints in typical real-world planning scenarios for high-DOF robots.
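The risk-allocation idea can be made concrete with the simplest possible scheme: split a joint collision bound across waypoints via Boole's inequality and check each waypoint against its share. p-Chekov adapts the allocation rather than fixing it uniformly; the snippet below is only a baseline illustration.

```python
def satisfies_chance_constraint(waypoint_collision_probs, delta):
    """Uniform risk allocation: by the union bound, if every waypoint stays
    within delta/n, the total collision probability is at most delta."""
    budget = delta / len(waypoint_collision_probs)
    return all(p <= budget for p in waypoint_collision_probs)
```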
|
|
11:30-12:45, Paper WeBT1-22.2 | Add to My Program |
Complete and Near-Optimal Path Planning for Simultaneous Sensor-Based Inspection and Footprint Coverage in Robotic Crack Filling |
Yu, Kaiyan | Binghamton University |
Guo, Chaoke | Rutgers University |
Yi, Jingang | Rutgers University |
Keywords: Robotics in Construction, Planning, Scheduling and Coordination, Field Robots
Abstract: A simultaneous robotic footprint and sensor coverage planning scheme is proposed to efficiently detect all the unknown targets with range sensors and cover the targets with the robot’s footprint in a structured environment. The proposed online Sensor-based Complete Coverage (online SCC) planning minimizes the total traveling distance of the robot, guarantees the complete sensor coverage of the whole free space, and achieves near-optimal footprint coverage of all the targets. The planning strategy is applied to a crack-filling robotic prototype to detect and fill all the unknown cracks on ground surfaces. Simulation and experimental results are presented that confirm the efficiency and effectiveness of the proposed online planning algorithm.
|
|
11:30-12:45, Paper WeBT1-22.3 | Add to My Program |
Approximate Stability Analysis for Drystacked Structures |
Liu, Yifang | University at Buffalo |
Saboia Da Silva, Maira | University at Buffalo |
Thangavelu, Vivekanandhan | University at Buffalo |
Napp, Nils | SUNY Buffalo |
Keywords: Building Automation, Assembly, Robotics in Construction
Abstract: We introduce a fast approximate stability analysis into an automated dry-stacking procedure. Evaluating structural stability is essential for any type of construction, but it is especially challenging in techniques where building elements remain distinct and do not use fasteners or adhesives. Due to the irregular shape of construction materials, autonomous agents have restricted knowledge of contact geometry, which makes existing analysis tools difficult to deploy. A geometric safety factor called the kern is used to estimate how much the contact interface can shrink while the structure remains feasible, where feasibility can be checked efficiently using linear programming. We validate the stability measure by comparing the proposed method with fully simulated shaking tests in 2D. We also improve existing heuristics-based planning by adding the proposed measure to the assembly process.
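The linear-programming feasibility check mentioned in the abstract can be sketched generically: stability holds if some set of non-negative contact force intensities balances gravity, and shrinking the admissible contact region by the kern margin tightens the constraint matrix. The helper below, with problem-specific A_eq/b_eq left to the caller, is an assumed reduction rather than the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import linprog

def equilibrium_feasible(A_eq, b_eq):
    """A_eq stacks each block's force/torque balance rows; columns are
    candidate contact force basis vectors (e.g. friction-cone edges).
    Feasibility of the LP with non-negative intensities indicates
    static stability of the stack."""
    n = A_eq.shape[1]
    res = linprog(c=np.zeros(n), A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * n)
    return res.success
```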
|
|
11:30-12:45, Paper WeBT1-22.4 | Add to My Program |
User-Guided Offline Synthesis of Robot Arm Motion from 6-DoF Paths |
Praveena, Pragathi | University of Wisconsin-Madison |
Rakita, Daniel | University of Wisconsin-Madison |
Mutlu, Bilge | University of Wisconsin–Madison |
Gleicher, Michael | University of Wisconsin - Madison |
Keywords: Kinematics, Motion and Path Planning, Human Factors and Human-in-the-Loop
Abstract: We present an offline method to generate smooth, feasible motion for robot arms such that end-effector pose goals of a 6-DoF path are matched within acceptable limits specified by the user. Our approach aims to accurately match the position and orientation goals of the given path, and allows deviation from these goals if there is danger of self-collisions, joint-space discontinuities or kinematic singularities. Our method generates multiple candidate trajectories, and selects the best by incorporating sparse user input that specifies what kinds of deviations are acceptable. We apply our method to a range of challenging paths and show that our method generates solutions that achieve smooth, feasible motions while closely approximating the given pose goals and adhering to user specifications.
|
|
11:30-12:45, Paper WeBT1-22.5 | Add to My Program |
Visual Robot Task Planning |
Paxton, Chris | NVIDIA Research |
Barnoy, Yotam | Johns Hopkins |
Katyal, Kapil | Johns Hopkins University Applied Physics Lab |
Arora, Raman | Johns Hopkins University |
Hager, Gregory | Johns Hopkins University |
Keywords: Task Planning, Cognitive Human-Robot Interaction, Visual Learning
Abstract: Prospection is key to solving challenging problems in new environments, but it has not been deeply explored as applied to task planning for perception-driven robotics. We propose visual robot task planning, in which we take an input image and must generate a sequence of high-level actions and associated observations that achieve some task. In this paper, we describe a neural network architecture and associated planning algorithm that (1) learns a representation of the world that can generate prospective futures, (2) uses this generative model to simulate the result of sequences of high-level actions in a variety of environments, and (3) evaluates these actions via a variant of Monte Carlo Tree Search to find a viable solution to a particular problem. Our approach allows us to visualize intermediate motion goals and learn to plan complex activity from visual information, and we use it to generate and visualize task plans on held-out examples of a block-stacking simulation.
|
|
11:30-12:45, Paper WeBT1-22.6 | Add to My Program |
Towards Blended Reactive Planning and Acting Using Behavior Trees |
Colledanchise, Michele | IIT - Italian Institute of Technology |
Almeida, Diogo | Royal Institute of Technology, KTH |
Ogren, Petter | Royal Institute of Technology (KTH) |
Keywords: Behavior-Based Systems, Reactive and Sensor-Based Planning, Autonomous Agents
Abstract: In this paper, we show how a planning algorithm can be used to automatically create and update a Behavior Tree (BT), controlling a robot in a dynamic environment. The planning part of the algorithm is based on the idea of back-chaining. Starting from a goal condition, we iteratively select actions to achieve that goal, and if those actions have unmet preconditions, they are extended with actions to achieve them in the same way. The fact that BTs are inherently modular and reactive makes the proposed solution blend acting and planning in a way that enables the robot to efficiently react to external disturbances. If an external agent undoes an action, the robot re-executes it without replanning, and if an external agent helps the robot, it skips the corresponding actions, again without replanning. We illustrate our approach in two different robotics scenarios.
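The back-chaining construction can be written compactly: a goal condition becomes the first child of a fallback node, and every action that could achieve it is wrapped in a sequence that first establishes the action's own preconditions, recursively. The sketch below uses minimal stand-in node classes (not any particular BT library) and assumes an acyclic precondition structure.

```python
class Fallback:      # ticks children in order until one succeeds
    def __init__(self, children): self.children = children

class Sequence:      # ticks children in order until one fails
    def __init__(self, children): self.children = children

def expand(condition, actions_achieving):
    """Build a BT that establishes `condition`, back-chaining as needed."""
    subtrees = [condition]                       # succeed at once if it holds
    for action in actions_achieving(condition):
        # each action runs only after its own preconditions are established
        pre = [expand(p, actions_achieving) for p in action.preconditions]
        subtrees.append(Sequence(pre + [action]))
    return Fallback(subtrees)
```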
|
|
WeBT1-23 Interactive Session, 220 |
Add to My Program |
Vision-Based Navigation - 3.2.23 |
|
|
|
11:30-12:45, Paper WeBT1-23.1 | Add to My Program |
Visual Representations for Semantic Target Driven Navigation |
Mousavian, Arsalan | NVIDIA |
Toshev, Alexander | Google |
Fiser, Marek | Google |
Kosecka, Jana | George Mason University |
Wahid, Ayzaan | Google |
Davidson, James | Google Inc |
Keywords: Visual-Based Navigation, Computer Vision for Other Robotic Applications, Deep Learning in Robotics and Automation
Abstract: What is a good visual representation for navigation? We study this question in the context of semantic visual navigation, which is the problem of a robot finding its way through a previously unseen environment to a target object, e.g., "go to the refrigerator". Instead of acquiring a metric semantic map of an environment and using planning for navigation, our approach learns navigation policies on top of representations that capture spatial layout and semantic contextual cues. We propose to use semantic segmentation and detection masks, obtained from state-of-the-art computer vision algorithms, as observations, and use a deep network to learn the navigation policy. The availability of the same representations in simulated environments enables joint training using real and simulated data and alleviates the need for the domain adaptation or domain randomization commonly used to tackle the sim-to-real transfer of learned policies. Both the representation and the navigation policy can be readily applied to real non-synthetic environments, as demonstrated on the Active Vision Dataset. Our approach successfully reaches the target in 54% of cases in unexplored environments, compared to 46% for a non-learning-based approach and 28% for a learning-based baseline.
|
|
11:30-12:45, Paper WeBT1-23.2 | Add to My Program |
Deep Object-Centric Policies for Autonomous Driving |
Wang, Dequan | UC Berkeley |
Devin, Coline | University of California, Berkeley |
Cai, Qi-Zhi | Nanjing University |
Yu, Fisher | UC Berkeley |
Darrell, Trevor | UC Berkeley |
Keywords: Deep Learning in Robotics and Automation, Learning and Adaptive Systems, Autonomous Vehicle Navigation
Abstract: While learning visuomotor skills in an end-to-end manner is appealing, deep neural networks are often uninterpretable and fail in surprising ways. For robotics tasks, such as autonomous driving, models that explicitly represent objects may be more robust to new scenes and provide intuitive visualizations. We describe a taxonomy of “object-centric” models which leverage both object instances and end-to-end learning. In the Grand Theft Auto V simulator, we show that object-centric models outperform object-agnostic methods in scenes with other vehicles and pedestrians, even with an imperfect detector. We also demonstrate that our architectures perform well on real-world environments by evaluating on the Berkeley DeepDrive Video dataset, where an object-centric model outperforms object-agnostic models in the low-data regimes.
|
|
11:30-12:45, Paper WeBT1-23.3 | Add to My Program |
Neural Autonomous Navigation with Riemannian Motion Policy |
Meng, Xiangyun | University of Washington |
Ratliff, Nathan | Lula Robotics Inc |
Xiang, Yu | NVIDIA |
Fox, Dieter | University of Washington |
Keywords: Visual-Based Navigation, Autonomous Agents, Deep Learning in Robotics and Automation
Abstract: End-to-end learning for autonomous navigation has received substantial attention recently as a promising method for reducing modeling error. However, its data complexity is high, especially with regard to generalization to unseen environments. We introduce a novel image-based autonomous navigation technique that leverages policy structure through the Riemannian Motion Policy (RMP) framework for deep learning of vehicular control. We design a deep neural network to predict control-point RMPs of the vehicle from visual images, from which the optimal control commands can be computed analytically. We show that our network, trained in the Gibson environment, can be used for indoor obstacle avoidance and navigation on a real RC car, and that our RMP representation generalizes better to unseen environments than predicting local geometry or predicting control commands directly.
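The analytic step the abstract refers to, combining predicted control-point RMPs into a single command, is a metric-weighted least-squares resolve, as in the published RMP framework. A compact version is shown below; the Jacobians, accelerations and Riemannian metrics would come from the kinematics and the network.

```python
import numpy as np

def resolve_rmps(jacobians, accels, metrics):
    """jacobians: list of (k_i, n); accels: list of (k_i,); metrics: list of
    (k_i, k_i). Returns the combined control-space acceleration."""
    A = sum(J.T @ M @ J for J, M in zip(jacobians, metrics))
    b = sum(J.T @ M @ a for J, M, a in zip(jacobians, metrics, accels))
    return np.linalg.pinv(A) @ b
```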
|
|
11:30-12:45, Paper WeBT1-23.4 | Add to My Program |
The Oxford Multimotion Dataset: Multiple SE(3) Motions with Ground Truth |
Judd, Kevin Michael | University of Oxford |
Gammell, Jonathan | University of Oxford |
Keywords: Visual-Based Navigation, Visual Tracking, Computer Vision for Automation
Abstract: Datasets advance research by posing challenging new problems and providing standardized methods of algorithm comparison. High-quality datasets exist for many important problems in robotics and computer vision, including egomotion estimation and motion/scene segmentation, but not for techniques that estimate every motion in a scene. Metric evaluation of these multimotion estimation techniques requires datasets consisting of multiple, complex motions that also contain ground truth for every moving body. The Oxford Multimotion Dataset provides a number of multimotion estimation problems of varying complexity. It includes both complex problems that challenge existing algorithms as well as a number of simpler problems to support development. These include observations from both static and dynamic sensors, a varying number of moving bodies, and a variety of different 3D motions. It also provides a number of experiments designed to isolate specific challenges of the multimotion problem, including rotation about the optical axis and occlusion. In total, the Oxford Multimotion Dataset contains over 110 minutes of multimotion data consisting of stereo and RGB-D camera images, IMU data, and Vicon ground-truth trajectories. The dataset culminates in a complex toy car segment representative of many challenging real-world scenarios. This paper describes each experiment with a focus on its relevance to the multimotion estimation problem.
|
|
11:30-12:45, Paper WeBT1-23.5 | Add to My Program |
Safe Navigation with Human Instructions in Complex Scenes |
Hu, Zhe | City University of Hong Kong |
Pan, Jia | The City University of Hong Kong |
Fan, Tingxiang | Dorabot |
Yang, Ruigang | University of Kentucky |
Manocha, Dinesh | University of Maryland |
Keywords: Visual-Based Navigation, Motion and Path Planning, Collision Avoidance
Abstract: In this paper, we present a robotic navigation algorithm with natural language interfaces, which enables a robot to safely walk through a changing environment with moving persons by following human instructions such as "go to the restaurant and keep away from people". We first classify human instructions into three types: the goal, the constraints, and uninformative phrases. Next, we ground the extracted goal and constraint items dynamically during the navigation process, to deal with target objects that are too far away for sensor observation and with the appearance of moving obstacles such as humans. In particular, for a goal phrase (e.g., "go to the restaurant"), we ground it to a location in a predefined semantic map and treat it as a goal for a global motion planner, which plans a collision-free path in the workspace for the robot to follow. For a constraint phrase (e.g., "keep away from people"), we dynamically add the corresponding constraint into a local planner by adjusting the values of a local costmap according to the results returned by the object detection module. The updated costmap is then used to compute a local collision avoidance control for the safe navigation of the robot. By combining natural language processing, motion planning and computer vision, our developed system is demonstrated to be able to successfully follow natural language navigation instructions to achieve navigation tasks in both simulated and real-world scenarios.
|
|
11:30-12:45, Paper WeBT1-23.6 | Add to My Program |
Two-Stage Transfer Learning for Heterogeneous Robot Detection and 3D Joint Position Estimation in a 2D Camera Image Using CNN |
Miseikis, Justinas | University of Oslo |
Brijacak, Inka | Joanneum Research |
Yahyanejad, Saeed | Joanneum Research |
Glette, Kyrre | University of Oslo |
Elle, Ole Jakob | Oslo University Hospital |
Torresen, Jim | University of Oslo |
Keywords: Visual Learning, Recognition, Computer Vision for Automation
Abstract: Collaborative robots are becoming more common on factory floors as well as in everyday environments; however, their safety is still not a fully solved issue. Collision detection does not always perform as expected, and collision avoidance is still an active research area. Collision avoidance works well for fixed robot-camera setups; however, if they are shifted around, the Eye-to-Hand calibration becomes invalid, making it difficult to accurately run many of the existing collision avoidance algorithms. We approach the problem by presenting a stand-alone system capable of detecting the robot and estimating its position, including individual joints, using a simple 2D colour image as input, with no Eye-to-Hand calibration needed. As an extension of previous work, a two-stage transfer learning approach is used to re-train a multi-objective convolutional neural network (CNN) so that it can be used with heterogeneous robot arms. Our method is capable of detecting the robot in real time, and new robot types can be added with significantly smaller training datasets compared to the requirements of a fully trained network. We present the data collection approach, the structure of the multi-objective CNN, the two-stage transfer learning training, and test results using real robots from Universal Robots, Kuka, and Franka Emika. Finally, we analyse possible application areas of our method together with possible improvements.
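A two-stage transfer-learning recipe of this general shape is easy to express in PyTorch: train the full multi-objective network on the base robot, then freeze the shared feature extractor and fine-tune only the output heads on the much smaller dataset for a new robot type. The sketch below is hedged accordingly; `model` and the head attribute names are hypothetical, not the paper's architecture.

```python
import torch

def fine_tune_stage(model, head_names, lr=1e-4):
    """Stage two: freeze shared features, re-train task-specific heads only."""
    for p in model.parameters():
        p.requires_grad = False                  # freeze everything...
    params = []
    for name in head_names:                      # e.g. ["mask_head", "joint_head"]
        head = getattr(model, name)
        for p in head.parameters():
            p.requires_grad = True               # ...except the chosen heads
        params += list(head.parameters())
    return torch.optim.Adam(params, lr=lr)
```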
|
|
WeBT1-24 Interactive Session, 220 |
Add to My Program |
Medical Robotics VIII - 3.2.24 |
|
|
|
11:30-12:45, Paper WeBT1-24.1 | Add to My Program |
3D Control of Rotating Millimeter-Scale Swimmers through Obstacles |
Leclerc, Julien | University of Houston |
Zhao, Haoran | University of Houston |
Becker, Aaron | University of Houston |
Keywords: Medical Robots and Systems, Autonomous Vehicle Navigation, Surgical Robotics: Planning
Abstract: This study investigates the high-speed 3D navigation of magnetic rotating millimeter-scale swimmers. The swimmers have a spiral-shaped surface to ensure propulsion. The rotational movement is used for propulsion and, in future work, could provide the power needed to remove blood clots. For instance, an abrasive tip could be used to progressively grind a blood clot. An algorithm to perform 3D control of rotating millimeter-scale swimmers was implemented and tested experimentally. The swimmers can follow a trajectory and can navigate without touching the walls inside a tube with a diameter of 15 mm. This diameter is smaller than the average diameter of the distal descending aorta, which is the smallest section of the aorta. Several swimmer designs were built and tested. The maximum velocity recorded for our best swimmer was 103.6 mm/s at a rotational speed of 477.5 rotations per second.
|
|
11:30-12:45, Paper WeBT1-24.2 | Add to My Program |
Automatic Optical Coherence Tomography Imaging of Stationary and Moving Eyes with a Robotically-Aligned Scanner |
Draelos, Mark | Duke University |
Ortiz, Pablo | Duke University |
Qian, Ruobing | Duke University |
Keller, Brenton | Duke University |
Hauser, Kris | Duke University |
Kuo, Anthony | Duke University |
Izatt, Joseph | Duke University |
Keywords: Medical Robots and Systems
Abstract: Optical coherence tomography (OCT) has found great success in ophthalmology where it plays a key role in screening and diagnostics. Clinical ophthalmic OCT systems are typically deployed as tabletop instruments that require chinrest stabilization and trained ophthalmic photographers to operate. These requirements preclude OCT diagnostics in bedbound or unconscious patients who cannot use a chinrest, and restrict OCT screening to ophthalmology offices. We present a robotically-aligned OCT scanner capable of automatic eye imaging without chinrests. The scanner features eye tracking from fixed-base RGB-D cameras for coarse and stereo pupil cameras for fine alignment, as well as galvanometer aiming for fast lateral tracking, reference arm adjustment for fast axial tracking, and a commercial robot arm for slow lateral and axial tracking. We demonstrate the system's performance autonomously aligning with stationary eyes, pursuing moving eyes, and tracking eyes undergoing physiologic motion. The system demonstrates sub-millimeter eye tracking accuracy, 12 um lateral pupil tracking accuracy, 83.2 ms stabilization time following step disturbance, and 9.7 Hz tracking bandwidth.
|
|
11:30-12:45, Paper WeBT1-24.3 | Add to My Program |
Dense-ArthroSLAM: Dense Intra-Articular 3D Reconstruction with Robust Localization Prior for Arthroscopy |
Marmol, Andres | Queensland University of Technology |
Banach, Artur | Imperial College London |
Peynot, Thierry | Queensland University of Technology (QUT) |
Keywords: Medical Robots and Systems, Computer Vision for Medical Robotics
Abstract: Arthroscopy is a minimally invasive surgery that imposes great physical and mental challenges on surgeons. Extensive experience is required to safely navigate the camera and instruments in the narrow spaces of the human joints. Robust camera localization as well as a detailed reconstruction of the anatomy can benefit surgeons and would be essential for future robotic assistants. Our existing Simultaneous Localization and Mapping (SLAM) system provides robust, at-scale camera localization and a sparse map. However, a denser map is required to be of clinical relevance. In this paper we propose a new system that combines the robust localizer with a keyframe selection strategy and batch multi-view stereo (MVS) for 3D reconstruction. Tissues are reconstructed at scale, accurately and densely, even under challenging arthroscopic conditions. The consistency of our system is verified in tests with synthetic noise and several keyframing strategies. Nine experiments were performed in a phantom and three cadavers, covering various imaging conditions, camera settings and scope motions. Our system reconstructed surfaces of more than 12 cm² with a Root Mean Square Error (RMSE) of no more than 0.5 mm. In comparison, state-of-the-art monocular feature-based (ORB-SLAM) and direct (LSD-SLAM) methods commonly failed to track more than 20% of any camera motion and, in the few successful cases, yielded much larger estimation errors.
|
|
11:30-12:45, Paper WeBT1-24.4 | Add to My Program |
3D Image Reconstruction of Biological Organelles with a Robot-Aided Microscopy System for Intracellular Surgery |
Gao, Wendi | CityU of Hong Kong |
Shakoor, Adnan | City University of Hong Kong |
Zhao, Libo | Xi'an Jiaotong University |
Jiang, Zhuangde | Xi'an Jiaotong University |
Sun, Dong | City University of Hong Kong |
Keywords: Medical Robots and Systems, Biological Cell Manipulation
Abstract: Intracellular surgery suffers from a low success rate due to the lack of 3D position feedback of the selected organelles. In this study, we developed a novel robot-aided microscopy system and 3D reconstruction algorithm to conduct intracellular surgery with 3D information. A series of optical sections along the vertical direction was obtained by microscopy lens movement. A 3D reconstruction model of the specimen was realized after several deconvolution, segmentation, and reconstruction processes. Simulations and experiments were performed to verify the accuracy of the proposed algorithm. Furthermore, 3D reconstruction position feedback was applied to extract mitochondria via a robot-aided wide-field fluorescence system. The proposed approach facilitates intracellular surgeries, such as organelle biopsy and cell injection.
|
|
11:30-12:45, Paper WeBT1-24.5 | Add to My Program |
Autonomous Data-Driven Manipulation of Unknown Anisotropic Deformable Tissues Using Unmodelled Continuum Manipulators |
Alambeigi, Farshid | Johns Hopkins University |
Wang, Zerui | The Chinese University of Hong Kong |
Hegeman, Rachel | Johns Hopkins University |
Liu, Yunhui | Chinese University of Hong Kong |
Armand, Mehran | Johns Hopkins University Applied Physics Laboratory |
Keywords: Medical Robots and Systems, Dexterous Manipulation, Learning and Adaptive Systems
Abstract: We present an autonomous manipulation approach for tissues with anisotropic deformation behavior using a continuum manipulator. The key feature of our vision-based study is an online learning and estimation method, which makes its implementation independent of any prior knowledge about (i) the deformation behavior of the tissue, (ii) the continuum manipulator, and (iii) the calibration of the vision system with respect to the robot. This important feature addresses the difficulty of using model-based control approaches for deformation control when a continuum manipulator manipulates an unknown deformable tissue. We evaluated the performance and robustness of our method in three different experiments using the da Vinci Research Kit coupled with a 5 mm instrument that has a 4-degree-of-freedom snake-like wrist. These experiments simulated situations that occur in various surgical schemes and verified the adaptability, learning capability, and accuracy of the proposed method.
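One well-known way to realize model-free visual control of this general kind (not necessarily the authors' estimator) is to maintain an online estimate of the Jacobian relating actuator motion to observed feature motion with a Broyden rank-one update, and to servo on that estimate:

```python
import numpy as np

def broyden_update(J, dq, dy, lam=1.0):
    """J: (m, n) current Jacobian estimate; dq: actuator step (n,);
    dy: observed feature displacement (m,); lam: update gain in (0, 1]."""
    dq = dq.reshape(-1, 1)
    dy = dy.reshape(-1, 1)
    return J + lam * (dy - J @ dq) @ dq.T / (float(dq.T @ dq) + 1e-9)
```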
|
|
11:30-12:45, Paper WeBT1-24.6 | Add to My Program |
Magnetic Levitation for Soft-Tethered Capsule Colonoscopy Actuated with a Single Permanent Magnet: A Dynamic Control Approach |
Pittiglio, Giovanni | University of Leeds |
Barducci, Lavinia | University of Leeds |
Martin, James William | University of Leeds |
Norton, Joseph | University of Leeds |
Avizzano, Carlo Alberto | Scuola Superiore Sant'Anna |
Obstein, Keith | Vanderbilt University |
Valdastri, Pietro | University of Leeds |
Keywords: Medical Robots and Systems, Force Control, Motion Control
Abstract: The present paper investigates a novel control approach for magnetically driven soft-tethered capsules for colonoscopy - a potentially painless approach for colon inspection. The focus of this work is a class of devices composed of a magnetic capsule endoscope actuated by a single external permanent magnet. Actuation is achieved by manipulating the external magnet with a serial manipulator, which in turn produces forces and torques on the internal magnetic capsule. We propose a control strategy which, by counteracting gravity, achieves levitation of the capsule. This technique, based on a nonlinear backstepping approach, is able to limit contact with the colon walls, reducing friction, avoiding contact with internal folds and facilitating the inspection of non-planar cavities. The approach is validated on an experimental setup which embodies a general scenario faced in colonoscopy. The experiments show that we can limit contact with the colon wall to 19.5%, compared to almost 100% for previously proposed approaches. Moreover, we show that the controller can be used to navigate the capsule through a more realistic environment - a colon phantom - with reasonable completion time.
|
|
WeKN1 Keynote Session, 517ab |
Add to My Program |
Keynote Session V |
|
|
Chair: Desai, Jaydev P. | Georgia Institute of Technology |
|
14:45-15:30, Paper WeKN1.1 | Add to My Program |
Robotic Technologies and Targeted Therapy: Challenges and Opportunities |
Menciassi, Arianna | Scuola Superiore Sant'Anna - SSSA |
Keywords: Medical Robots and Systems
Abstract: Arianna Menciassi is Full Professor of Biomedical Robotics at Scuola Superiore Sant'Anna (SSSA, Pisa, Italy) and team leader of the "Surgical Robotics & Allied Technologies" Area at The BioRobotics Institute. She obtained her Master's degree in Physics (summa cum laude, 1995) from the University of Pisa and her PhD in Bioengineering from SSSA (1999). She was Visiting Professor at the Ecole Nationale Superieure de Mecaniques et des Microtechniques of Besancon (France), and at the ISIR Institute at the Université Pierre et Marie Curie in Paris. Her main research interests involve surgical robotics, biomedical robotics, smart solutions for biomedical devices, biomechatronic artificial organs, microsystem technology and micromechatronics, with special attention to the synergy between robot-assisted therapy and micro-nano-biotechnology-related solutions. She also focuses on magnetically driven microrobots and microdevices, as well as on biomedical integrated platforms for magnetic navigation and ultrasound-based treatments. She is co-author of more than 400 scientific publications and 7 book chapters on biomedical robots/devices and microtechnology, and co-inventor of 81 national and international patents. In 2007, she received the Well-tech Award (Milan, Italy) for her research on endoscopic capsules, and the Tuscany Region awarded her the Gonfalone d'Argento as one of the 10 best young talents of the region.
|
|
WeKN2 Keynote Session, 517cd |
Add to My Program |
Keynote Session VI |
|
|
Chair: Dudek, Gregory | McGill University |
|
14:45-15:30, Paper WeKN2.1 | Add to My Program |
Mocap As a Service |
Nakamura, Yoshihiko | University of Tokyo |
Keywords: Humanoid Robots
Abstract: Yoshihiko Nakamura is Professor at the Department of Mechano-Informatics, University of Tokyo. He received his Ph.D. from Kyoto University. His fields of research are humanoid robotics, cognitive robotics, neuro-musculoskeletal human modeling, and their computational algorithms. Dr. Nakamura served as President of IFToMM (2012-2015). He is a Foreign Member of the Academy of Engineering Science of Serbia, TUM Distinguished Affiliated Professor of Technische Universitat Munchen, Executive Member of the International Foundation of Robotics Research, and Fellow of JSME, RSJ, IEEE, and the World Academy of Art and Science.
|
|
WeCT1 |
220 |
PODS: Wednesday Session III |
Interactive Session |
|
16:00-17:15, Subsession WeCT1-01, 220 | |
Award Finalists I - 3.3.01 Interactive Session, 6 papers |
|
16:00-17:15, Subsession WeCT1-02, 220 | |
Award Finalists II - 3.3.02 Interactive Session, 6 papers |
|
16:00-17:15, Subsession WeCT1-03, 220 | |
Award Finalists III - 3.3.03 Interactive Session, 5 papers |
|
16:00-17:15, Subsession WeCT1-04, 220 | |
Award Finalists IV - 3.3.04 Interactive Session, 5 papers |
|
16:00-17:15, Subsession WeCT1-05, 220 | |
Award Finalists V - 3.3.05 Interactive Session, 5 papers |
|
16:00-17:15, Subsession WeCT1-06, 220 | |
Award Finalists VI - 3.3.06 Interactive Session, 5 papers |
|
16:00-17:15, Subsession WeCT1-07, 220 | |
Medical Robotics IX - 3.3.07 Interactive Session, 6 papers |
|
16:00-17:15, Subsession WeCT1-08, 220 | |
Aerial Robotics - 3.3.08 Interactive Session, 4 papers |
|
16:00-17:15, Subsession WeCT1-09, 220 | |
Vision and Control - 3.3.09 Interactive Session, 6 papers |
|
16:00-17:15, Subsession WeCT1-10, 220 | |
Mobile Robotics - 3.3.10 Interactive Session, 5 papers |
|
16:00-17:15, Subsession WeCT1-11, 220 | |
Control - 3.3.11 Interactive Session, 6 papers |
|
16:00-17:15, Subsession WeCT1-12, 220 | |
Compliant Actuators II - 3.3.12 Interactive Session, 6 papers |
|
16:00-17:15, Subsession WeCT1-13, 220 | |
Soft Robots VII - 3.3.13 Interactive Session, 6 papers |
|
16:00-17:15, Subsession WeCT1-14, 220 | |
Legged Robots V - 3.3.14 Interactive Session, 6 papers |
|
16:00-17:15, Subsession WeCT1-15, 220 | |
Compliance - 3.3.15 Interactive Session, 6 papers |
|
16:00-17:15, Subsession WeCT1-16, 220 | |
Object Recognition & Segmentation V - 3.3.16 Interactive Session, 6 papers |
|
16:00-17:15, Subsession WeCT1-17, 220 | |
Autonomous Vehicles III - 3.3.17 Interactive Session, 6 papers |
|
16:00-17:15, Subsession WeCT1-18, 220 | |
Neurorobotics - 3.3.18 Interactive Session, 5 papers |
|
16:00-17:15, Subsession WeCT1-19, 220 | |
Cooperative and Distributed Robot Systems II - 3.3.19 Interactive Session, 6 papers |
|
16:00-17:15, Subsession WeCT1-20, 220 | |
Machine Learning for Transportation - 3.3.20 Interactive Session, 6 papers |
|
16:00-17:15, Subsession WeCT1-21, 220 | |
Legged Robots VI - 3.3.21 Interactive Session, 4 papers |
|
16:00-17:15, Subsession WeCT1-22, 220 | |
Robot Learning II - 3.3.22 Interactive Session, 6 papers |
|
16:00-17:15, Subsession WeCT1-23, 220 | |
Medical Robotics X - 3.3.23 Interactive Session, 6 papers |
|
WeCT1-01 Interactive Session, 220 |
Add to My Program |
Award Finalists I - 3.3.01 |
|
|
|
16:00-17:15, Paper WeCT1-01.1 | Add to My Program |
Online Multilayered Motion Planning with Dynamic Constraints for Autonomous Underwater Vehicles |
Vidal Garcia, Eduard | Universitat De Girona |
Moll, Mark | Rice University |
Palomeras, Narcis | Universitat De Girona |
Hernández, Juan David | Rice University |
Carreras, Marc | Universitat De Girona |
Kavraki, Lydia | Rice University |
Keywords: Motion and Path Planning, Marine Robotics
Abstract: Underwater robots are subject to complex hydrodynamic forces. These forces define how the vehicle moves, so it is important to consider them when planning trajectories. However, performing motion planning that considers the dynamics on the robot's onboard computer is challenging due to the limited computational resources available. In this paper an efficient motion planning framework for AUVs is presented. By introducing a loosely coupled multilayered planning design, our framework is able to generate dynamically feasible trajectories while keeping the planning time low enough for online planning. First, a fast path planner operating in a lower-dimensional projected space computes a lead path from the start to the goal configuration. Then, the lead path is used to bias the sampling of a second motion planner, which takes into account all the dynamic constraints. Furthermore, we propose a strategy for online planning that saves computational resources by generating the final trajectory only up to a finite horizon. By using the finite-horizon strategy together with the multilayered approach, the sampling of the second planner focuses on regions where good-quality solutions are more likely to be found, significantly reducing the planning time. To provide strong safety guarantees, our framework also incorporates conservative approximations of inevitable collision states (ICSs). Finally, we present simulations and experiments using a real underwater robot to demonstrate the capabilities of our framework.
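The lead-path bias in the second planner can be illustrated in a few lines: with some probability, states are sampled near a point on the lead path instead of uniformly. The bias probability, noise scale and names below are illustrative placeholders, not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_state(lead_path, sample_uniform, bias=0.8, sigma=0.5):
    """lead_path: list of waypoint arrays in the projected space."""
    if rng.random() < bias:
        anchor = lead_path[rng.integers(len(lead_path))]
        return anchor + rng.normal(0.0, sigma, size=anchor.shape)
    return sample_uniform()   # occasional global samples retain completeness
```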
|
|
16:00-17:15, Paper WeCT1-01.2 | Add to My Program |
Making Sense of Vision and Touch: Self-Supervised Learning of Multimodal Representations for Contact-Rich Tasks |
Lee, Michelle | Stanford University |
Zhu, Yuke | Stanford University |
Srinivasan, Krishnan | Stanford University |
Shah, Parth | Stanford University |
Savarese, Silvio | Stanford University |
Fei-Fei, Li | Stanford University |
Garg, Animesh | Stanford University |
Bohg, Jeannette | Stanford University |
Keywords: Deep Learning in Robotics and Automation, Perception for Grasping and Manipulation, Sensor-based Control
Abstract: Contact-rich manipulation tasks in unstructured environments often require both haptic and visual feedback. However, it is non-trivial to manually design a robot controller that combines modalities with very different characteristics. While deep reinforcement learning has shown success in learning control policies for high-dimensional inputs, these algorithms are generally intractable to deploy on real robots due to sample complexity. We use self-supervision to learn a compact and multimodal representation of our sensory inputs, which can then be used to improve the sample efficiency of our policy learning. We evaluate our method on a peg insertion task, generalizing over different geometry, configurations, and clearances, while being robust to external perturbations. Results for simulated and real robot experiments are presented.
|
|
16:00-17:15, Paper WeCT1-01.3 | Add to My Program |
Deep Visuo-Tactile Learning: Estimation of Tactile Properties from Images |
Takahashi, Kuniyuki | Preferred Networks |
Tan, Jethro | Preferred Networks, Inc |
Keywords: Force and Tactile Sensing, Deep Learning in Robotics and Automation
Abstract: Estimation of tactile properties from vision, such as slipperiness or roughness, is important to effectively interact with the environment. These tactile properties help us decide which actions we should choose and how to perform them. E.g., we can drive slower if we see that we have bad traction or grasp tighter if an item looks slippery. We believe that this ability also helps robots to enhance their understanding of the environment, and thus enables them to tailor their actions to the situation at hand. We therefore propose a model to estimate the degree of tactile properties from visual perception alone (e.g., the level of slipperiness or roughness). Our method extends an encoder-decoder network, in which the latent variables are visual and tactile features. In contrast to previous works, our method does not require manual labeling, but only RGB images and the corresponding tactile sensor data. All our data is collected with a webcam and uSkin tactile sensor mounted on the end-effector of a Sawyer robot, which strokes the surfaces of 25 different materials. We show that our model generalizes to materials not included in the training data by evaluating the feature space, indicating that it has learned to associate important tactile properties with images.
|
|
16:00-17:15, Paper WeCT1-01.4 | Add to My Program |
Variational End-To-End Navigation and Localization |
Amini, Alexander | Massachusetts Institute of Technology |
Rosman, Guy | Massachusetts Institute of Technology |
Karaman, Sertac | Massachusetts Institute of Technology |
Rus, Daniela | MIT |
Keywords: Deep Learning in Robotics and Automation, Computer Vision for Transportation, Autonomous Vehicle Navigation
Abstract: Deep learning has revolutionized the ability to learn "end-to-end" autonomous vehicle control directly from raw sensory data. While there have been recent extensions to handle forms of navigation instruction, these works are unable to capture the full distribution of possible actions that could be taken and to reason about localization of the robot within the environment. In this paper, we extend end-to-end driving networks with the ability to perform point-to-point navigation as well as probabilistic localization using only noisy GPS data. We define a novel variational network capable of learning from raw camera data of the environment as well as higher level roadmaps to predict (1) a full probability distribution over the possible control commands; and (2) a deterministic control command capable of navigating on the route specified within the map. Additionally, we formulate how our model can be used to localize the robot according to correspondences between the map and the observed visual road topology, inspired by the rough localization that human drivers can perform. We test our algorithms on real-world driving data that the vehicle has never driven through before, and integrate our point-to-point navigation algorithms onboard a full-scale autonomous vehicle for real-time performance. Our localization algorithm is also evaluated over a new set of roads and intersections to demonstrate rough pose localization even in situations without any GPS prior.
|
|
16:00-17:15, Paper WeCT1-01.5 | Add to My Program |
Geo-Supervised Visual Depth Prediction |
Fei, Xiaohan | University of California, Los Angeles |
Wong, Alex | University of California Los Angeles |
Soatto, Stefano | University of California, Los Angeles |
Keywords: Visual Learning, Sensor Fusion
Abstract: We propose using global orientation from inertial measurements, and the bias it induces on the shape of objects populating the scene, to inform visual 3D reconstruction. We test the effect of using the resulting prior in depth prediction from a single image, where the normal vectors to surfaces of objects of certain classes tend to align with gravity or be orthogonal to it. Adding such a prior to baseline methods for monocular depth prediction yields improvements beyond the state-of-the-art and illustrates the power of gravity as a supervisory signal.
|
|
16:00-17:15, Paper WeCT1-01.6 | Add to My Program |
Closing the Sim-To-Real Loop: Adapting Simulation Randomization with Real World Experience |
Chebotar, Yevgen | University of Southern California |
Handa, Ankur | IIIT Hyderabad |
Makoviichuk, Viktor | NVIDIA |
Macklin, Miles | University of Copenhagen, NVIDIA |
Issac, Jan | Max Planck Institute for Intelligent Systems |
Ratliff, Nathan | Lula Robotics Inc |
Fox, Dieter | University of Washington |
Keywords: Learning and Adaptive Systems, Model Learning for Control, Deep Learning in Robotics and Automation
Abstract: We consider the problem of transferring policies to the real world by training on a distribution of simulated scenarios. Rather than manually tuning the randomization of simulations, we adapt the simulation parameter distribution using a few real world roll-outs interleaved with policy training. In doing so, we are able to change the distribution of simulations to improve the policy transfer by matching the policy behavior in simulation and the real world. We show that policies trained with our method are able to reliably transfer to different robots in two real world tasks: swing-peg-in-hole and opening a cabinet drawer.
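A cross-entropy-style updater conveys the flavor of adapting a simulation parameter distribution from a few real rollouts, though the paper's objective and optimizer differ. In the sketch below, `simulate` and `discrepancy` are hypothetical callables comparing simulated and real trajectories under the current policy.

```python
import numpy as np

def adapt_sim_params(mu, sigma, simulate, real_traj, discrepancy,
                     n_samples=32, elite_frac=0.25,
                     rng=np.random.default_rng(0)):
    """One update of a Gaussian over simulation parameters."""
    thetas = rng.normal(mu, sigma, size=(n_samples, mu.shape[0]))
    costs = np.array([discrepancy(simulate(t), real_traj) for t in thetas])
    elite = thetas[np.argsort(costs)[:max(1, int(elite_frac * n_samples))]]
    return elite.mean(axis=0), elite.std(axis=0) + 1e-3   # updated mu, sigma
```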
|
|
WeCT1-02 Interactive Session, 220 |
Add to My Program |
Award Finalists II - 3.3.02 |
|
|
|
16:00-17:15, Paper WeCT1-02.1 | Add to My Program |
Robotic Orientation Control of Deformable Cells |
Dai, Changsheng | University of Toronto |
Zhang, Zhuoran | University of Toronto |
Lu, Yuchen | University of Toronto |
Shan, Guanqiao | University of Toronto |
Wang, Xian | University of Toronto |
Zhao, Qili | University of Toronto |
Sun, Yu | University of Toronto |
Keywords: Biological Cell Manipulation, Automation at Micro-Nano Scales
Abstract: Robotic manipulation of deformable objects (vs. rigid objects) has been a classic topic in robotics. Compared to deformable synthetic objects such as rubber balls and clothes, biological cells are highly deformable and more prone to damage. This paper presents robotic manipulation of deformable cells for orientation control (both out-of-plane and in-plane), which is required in both clinical (e.g., in vitro fertilization) and biomedical (e.g., clone) applications. Compared to manual cell rotation based on empirical experience, the robotic approach, based on mathematical modeling and path planning, effectively rotates a cell while consistently maintaining minimal cell deformation to avoid cell damage. A force model is established to determine the minimal force applied by the micropipette to rotate a spherical or more generally, an ellipsoidal mouse oocyte. The force information is translated into indentation through a contact mechanics model, and the manipulation path of the micropipette is formed by connecting the indentation positions on the oocyte. A compensation controller is designed to compensate for the variations of mechanical properties across cells. The polar body of an oocyte is detected by deep neural networks with robustness to shape and size differences. Experimental results demonstrate that the system achieved an accuracy of 97.6% in polar body detection and an accuracy of 0.7 degree in oocyte orientation control with maximum oocyte deformation of 2.69 um.
|
|
16:00-17:15, Paper WeCT1-02.2 | Add to My Program |
Drift-Free Roll and Pitch Estimation for High-Acceleration Hopping |
Yim, Justin K. | University of California, Berkeley |
Wang, Eric K. | University of California, Berkeley |
Fearing, Ronald | University of California at Berkeley |
Keywords: Legged Robots
Abstract: We develop a drift-free roll and pitch attitude estimation scheme for monopedal jumping robots. The estimator uses only onboard rate gyroscopes and encoders and does not rely on external sensing or processing. It is capable of recovering from attitude estimate disturbances and, together with onboard velocity estimation, enables fully autonomous stable hopping control. The estimator performs well on a small untethered robot capable of large jumps and extreme stance accelerations. We demonstrate that the robot can follow a rectangular path using onboard dead-reckoning with less than 2 meters of drift over 200 seconds and 300 jumps covering 60 m. We also demonstrate that the robot can operate untethered outdoors under human wireless joystick direction.
|
|
16:00-17:15, Paper WeCT1-02.3 | Add to My Program |
Efficient Symbolic Reactive Synthesis for Finite-Horizon Tasks |
He, Keliang | Rice University |
Wells, Andrew | Rice University |
Kavraki, Lydia | Rice University |
Vardi, Moshe | Rice University |
Keywords: Formal Methods in Robotics and Automation, Manipulation Planning
Abstract: When humans and robots perform complex tasks together, the robot must have a strategy to choose its actions based on observed human behavior. One well-studied approach for finding such strategies is reactive synthesis. Existing approaches for finite-horizon tasks have used an explicit state approach, which incurs high runtime. In this work, we present a compositional approach to perform synthesis for finite-horizon tasks based on binary decision diagrams. We show that for pick-and-place tasks, the compositional approach achieves orders-of-magnitude speed-ups compared to previous approaches. We demonstrate the synthesized strategy on a UR5 robot.
|
|
16:00-17:15, Paper WeCT1-02.4 | Add to My Program |
Combined Task and Motion Planning under Partial Observability: An Optimization-Based Approach |
Phiquepal, Camille | University of Stuttgart |
Toussaint, Marc | University of Stuttgart |
Keywords: Task Planning, Motion and Path Planning, Manipulation Planning
Abstract: We propose a novel approach to Combined Task and Motion Planning (TAMP) under partial observability. Previous optimization-based TAMP methods compute optimal plans and paths assuming full observability. However, partial observability requires the solution to be a policy that reacts to the observations that the agent receives. We consider a formulation where observations introduce additional branching in the symbolic decision tree. The solution is now given by a reactive policy on the symbolic level together with a path tree that describes the branchings of optimal motion depending on the observations. Our method works in two stages: First, the symbolic policy is optimized using approximate path costs estimated from independent optimizations of trajectory pieces. Second, we fix the best symbolic policy and optimize a joint trajectory tree. We test our approach on object manipulation and autonomous driving examples. We also compare the algorithm’s performance to a state-of-the-art TAMP planner in fully observable cases.
|
|
16:00-17:15, Paper WeCT1-02.5 | Add to My Program |
Towards Robust Product Packing with a Minimalistic End-Effector |
Shome, Rahul | Rutgers University |
Tang, Wei Neo | Rutgers University |
Song, Changkyu | Rutgers University |
Mitash, Chaitanya | Rutgers University |
Kourtev, Hristiyan | Rutgers University |
Yu, Jingjin | Rutgers University |
Boularias, Abdeslam | Carnegie Mellon University |
Bekris, Kostas E. | Rutgers, the State University of New Jersey |
Keywords: Manipulation Planning, Factory Automation, Perception for Grasping and Manipulation
Abstract: Advances in sensor technologies, object detection algorithms, planning frameworks and hardware designs have motivated the deployment of robots in warehouse automation. A variety of such applications, like order fulfillment or packing tasks, require picking objects from unstructured piles and carefully arranging them in bins or containers. Desirable solutions need to be low-cost, easily deployable and controllable, making minimalistic hardware choices desirable. The challenge in designing an effective solution to this problem relates to appropriately integrating multiple components, so as to achieve a robust pipeline that minimizes failure conditions. The current work proposes a complete pipeline for solving such packing tasks, given access only to RGB-D data and a single robot arm with a minimalistic, vacuum-based end-effector. To achieve the desired level of robustness, three key manipulation primitives are identified, which take advantage of the environment and simple operations to successfully pack multiple cubic objects. The overall approach is demonstrated to be robust to execution and perception errors. The impact of each manipulation primitive is evaluated by considering different versions of the proposed pipeline that incrementally introduce reasoning about object poses and corrective manipulation actions.
|
|
16:00-17:15, Paper WeCT1-02.6 | Add to My Program |
Contactless Robotic Micromanipulation in Air Using a Magneto-Acoustic System |
Youssefi, Omid | University of Toronto |
Diller, Eric D. | University of Toronto |
Keywords: Micro/Nano Robots, Dexterous Manipulation, Automation at Micro-Nano Scales
Abstract: Precision and dexterity in handling micrometer- to millimeter-scale objects are the two key challenges in micromanipulation, especially in the fields of biotechnology where delicate microcomponents can be easily damaged by contact during handling. Many complex microrobotic techniques, ranging from fully autonomous to teleoperated, have been developed to address these limitations individually. However, a scalable, reliable, and versatile method that can be applied to a wide range of applications has been lacking. This work uniquely combines the advantages of magnetic and acoustic micromanipulation methods to achieve three-dimensional, contactless, and semi-autonomous micromanipulation, with potential for full automation, for use in microassembly applications. Solid and liquid materials, with sizes less than 3 mm (down to 300 μm), are handled in a cylindrical workspace of 30 mm in height and 4 mm in diameter using acoustic levitation, while an externally applied magnetic field controls the orientation of magnetically active components. A maximum vertical positioning RMSE of 1.5% of part length was observed. This paper presents the concept, design, characterization, and modeling of the new method, along with a demonstration of a typical assembly process.
|
|
WeCT1-03 Interactive Session, 220 |
Add to My Program |
Award Finalists III - 3.3.03 |
|
|
|
16:00-17:15, Paper WeCT1-03.1 | Add to My Program |
Pre-Grasp Sliding Manipulation of Thin Objects Using Soft, Compliant, or Underactuated Hands |
Hang, Kaiyu | Yale University |
Morgan, Andrew | Yale University |
Dollar, Aaron | Yale University |
Keywords: Manipulation Planning, Grasping, Motion and Path Planning
Abstract: We address the problem of pre-grasp sliding manipulation, which is an essential skill when a thin object cannot be directly grasped from a flat surface. Leveraging the passive reconfigurability of soft, compliant, or underactuated robotic hands, we formulate this problem as an integrated motion and grasp planning problem, and plan the manipulation directly in the robot configuration space. Rather than explicitly pre-computing a pair of valid start and goal configurations, and then in a separate step planning a path to connect them, our planner actively samples start and goal robot configurations from configuration sampleable regions modeled from the geometries of the object and support surface. While randomly connecting the sampled start and goal configurations in pairs, the planner verifies whether any connected pair can achieve the task, and confirms a solution when one does. The proposed planner is implemented and evaluated both in simulation and on a real robot. Given the inherent compliance of the employed Yale T42 hand, we relax the motion constraints and show that the planning performance is significantly boosted. Moreover, we show that our planner outperforms two baseline planners, and that it can deal with objects and support surfaces of arbitrary geometries and sizes.
|
|
16:00-17:15, Paper WeCT1-03.2 | Add to My Program |
Gesture Recognition Via Flexible Capacitive Touch Electrodes |
Dankovich IV, Louis | University of Maryland College Park |
Bergbreiter, Sarah | Carnegie Mellon University |
Keywords: Gesture, Posture and Facial Expressions, Perception for Grasping and Manipulation, Prosthetics and Exoskeletons
Abstract: A novel wearable device for gesture recognition was developed and tested on five subjects. The low-cost, wireless wearable device was engineered with a set of seven flexible capacitive touch electrodes sewn into an armband to be worn on the forearm between the wrist and elbow. These capacitive touch electrodes were interfaced with a microcontroller and Bluetooth transceiver for measurement and transmission. As different gestures are made, the flexing of muscles beneath the skin affects the capacitance measured on these seven electrodes. A set of 32 gestures was tested, including the 16 grasps in the Cutkosky Grasp Taxonomy and 16 basic finger and wrist motions. Several classification algorithms were tested on this data. Using a Random Forest (RF) algorithm to classify the training data, an average gesture recognition accuracy of 95.6 ± 0.06% was achieved across all five subjects individually.
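The classification step lends itself to an off-the-shelf sketch; the hypothetical example below trains a scikit-learn Random Forest on synthetic 7-channel features. Since the features here are random noise, the resulting accuracy is necessarily chance-level (roughly 1/32), unlike the real capacitance data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_electrodes, n_gestures = 3200, 7, 32

# Synthetic stand-in for per-window capacitance features (one value per electrode)
X = rng.normal(size=(n_samples, n_electrodes))
y = rng.integers(0, n_gestures, size=n_samples)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"accuracy on synthetic data: {clf.score(X_te, y_te):.3f}")
```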
|
|
16:00-17:15, Paper WeCT1-03.3 | Add to My Program |
Robust Learning of Tactile Force Estimation through Robot Interaction |
Sundaralingam, Balakumar | University of Utah |
Lambert, Alex | Georgia Institute of Technology |
Handa, Ankur | IIIT Hyderabad |
Boots, Byron | Georgia Institute of Technology |
Hermans, Tucker | University of Utah |
Birchfield, Stan | NVIDIA |
Ratliff, Nathan | Lula Robotics Inc |
Fox, Dieter | University of Washington |
Keywords: Force and Tactile Sensing, Deep Learning in Robotics and Automation, Dexterous Manipulation
Abstract: Current methods for estimating force from tactile sensor signals are either inaccurate analytic models or task-specific learned models. In this paper, we explore learning a robust model that maps tactile sensor signals to force. We specifically explore learning a mapping for the SynTouch BioTac sensor via neural networks. We propose a voxelized input feature layer for spatial signals and leverage information about the sensor surface to regularize the loss function. To learn a robust tactile force model that transfers across tasks, we generate ground truth data from three different sources: (1) the BioTac rigidly mounted to a force-torque (FT) sensor, (2) a robot interacting with a ball rigidly attached to the same FT sensor, and (3) force inference on a planar pushing task, formalizing the mechanics as a system of particles and optimizing over the object motion. A total of 140k samples were collected from the three sources. We achieve a median angular accuracy of 3.5 degrees in predicting force direction (66% improvement over the current state of the art) and a median magnitude accuracy of 0.06 N (93% improvement) on a test dataset. Additionally, we evaluate the learned force model in a force feedback grasp controller performing object lifting and gentle placement. Our results can be found at https://sites.google.com/view/tactile-force.
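The reported error metrics are straightforward to compute; a small sketch of median angular and magnitude errors between predicted and ground-truth force vectors follows (the arrays hold toy data, not the paper's).

```python
import numpy as np

def force_errors(f_pred, f_true):
    """Median angular error (degrees) and magnitude error (N) between force vectors."""
    dot = np.sum(f_pred * f_true, axis=1)
    norms = np.linalg.norm(f_pred, axis=1) * np.linalg.norm(f_true, axis=1)
    cos = np.clip(dot / norms, -1.0, 1.0)
    ang_deg = np.degrees(np.arccos(cos))
    mag_err = np.abs(np.linalg.norm(f_pred, axis=1) - np.linalg.norm(f_true, axis=1))
    return np.median(ang_deg), np.median(mag_err)

# Toy example with perturbed random vectors, not the paper's data
rng = np.random.default_rng(0)
f_true = rng.normal(size=(100, 3))
f_pred = f_true + 0.05 * rng.normal(size=(100, 3))
print(force_errors(f_pred, f_true))
```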
|
|
16:00-17:15, Paper WeCT1-03.4 | Add to My Program |
Deconfliction of Motion Paths with Traffic Inspired Rules in Robot–Robot and Human–Robot Interactions |
Celi, Federico | University of Pisa |
Wang, Li | Georgia Institute of Technology |
Pallottino, Lucia | Università Di Pisa |
Egerstedt, Magnus | Georgia Institute of Technology |
Keywords: Collision Avoidance, Multi-Robot Systems, Physical Human-Robot Interaction
Abstract: In this paper we investigate how to resolve conflicting motions in mixed robot-robot and human-robot multiagent systems. This work is motivated by atypical driving conditions, such as parking lots, where driving rules are not as strictly enforced as on standard roads. As a result, navigation algorithms should take into account the human drivers' behaviors, especially if they prove to be in conflict with the common rules of the road. In this work we make use of safety barrier certificates with a direction bias to deconflict agents' behavior in a near-to-collision scenario, in compliance with local traffic rules. We also propose a tool to identify the driving direction bias, both for human and autonomous agents.
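A barrier-certificate correction with a single linear constraint admits a closed-form minimal correction; the heavily simplified sketch below is a hypothetical stand-in for the paper's method, with the direction bias folded into the constraint normal rather than the full QP formulation.

```python
import numpy as np

def safe_velocity(u_des, p_self, p_other, d_safe=0.5, gamma=1.0, bias=0.2):
    """Project a desired planar velocity onto a barrier constraint a.u <= b.

    h(p) = ||p_self - p_other||^2 - d_safe^2 is the barrier function; the
    constraint enforces dh/dt >= -gamma * h (other agent assumed static).
    A small tangential bias term nudges the agent to pass on a preferred
    side, a crude stand-in for a traffic-rule direction bias.
    """
    d = p_self - p_other
    h = d @ d - d_safe**2
    a = -2.0 * d                          # from -dh/dp_self . u <= gamma * h
    tangent = np.array([-d[1], d[0]]) / (np.linalg.norm(d) + 1e-9)
    a = a + bias * tangent                # direction-biased constraint normal
    b = gamma * h
    if a @ u_des <= b:                    # desired input already safe
        return u_des
    # Closed-form solution of min ||u - u_des||^2 s.t. a.u <= b
    return u_des - ((a @ u_des - b) / (a @ a)) * a

print(safe_velocity(np.array([1.0, 0.0]), np.array([0.0, 0.0]), np.array([0.6, 0.0])))
```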
|
|
16:00-17:15, Paper WeCT1-03.5 | Add to My Program |
The Role of Closed-Loop Hand Control in Handshaking Interactions |
Vigni, Francesco | University of Siena |
Knoop, Espen | The Walt Disney Company |
Prattichizzo, Domenico | University of Siena |
Malvezzi, Monica | University of Siena |
|