|
SuFR1 |
Room T1 |
Forum on Robotic Food Manipulation Challenge |
Workshop |
Chair: Yamaguchi, Akihiko | Tohoku University |
Co-Chair: Bhattacharjee, Tapomayukh | University of Washington |
Organizer: Yamaguchi, Akihiko | Tohoku University |
Organizer: Kroemer, Oliver | Carnegie Mellon University |
Organizer: Bhattacharjee, Tapomayukh | University of Washington |
Organizer: Hirai, Shinichi | Ritsumeikan Univ |
|
09:00-17:00, Paper SuFR1.1 | |
>Introduction to Forum on Robotic Food Manipulation Challenge (I) |
|
Yamaguchi, Akihiko | Tohoku University |
Kroemer, Oliver | Carnegie Mellon University |
Bhattacharjee, Tapomayukh | University of Washington |
Hirai, Shinichi | Ritsumeikan Univ |
Keywords:
Abstract: This forum focuses on robotic technologies for food manipulation, especially cooking. Robotics and AI solutions for food are becoming a trend: companies ranging from industry leaders to startups are working in this field, and social demand is growing as aging societies will need assistive robots to support people's quality of life. However, food manipulation remains a challenging problem in robotics, since it requires unifying a number of different technologies. It involves many difficult subproblems, for example manipulation of non-rigid objects (vegetables, fruits, meats, liquids, powders, etc.), tool use, control of food state (raw/overcooked, shape, viscosity, salt/sugar/acidity content, etc.), and adaptation to personal taste, and it draws on a wide range of technologies such as motion planning, machine learning, computer vision, robot hands, and non-visual sensing. In this forum, we will discuss the state of the art, how to unify these technologies for applications such as cooking, industrial food manipulation, and assistive robots, and the design of a food manipulation challenge, i.e., a competition in robotic food manipulation. Since unifying many different technologies is central to robotic food manipulation, we believe competitions are a promising way to gather researchers from different fields, share ideas for solving problems, and collaborate toward better robotic food manipulation; this forum is for brainstorming such competitions. Website of the forum: https://sites.google.com/view/robotcook20/
|
|
09:00-17:00, Paper SuFR1.2 | |
>Learning Food-Arrangement Policies from Raw Images with Generative Adversarial Imitation Learning (I) |
|
Yamaguchi, Akihiko | Tohoku University |
Matsubara, Takamitsu | Nara Institute of Science and Technology |
Keywords:
Abstract: Food arrangement on a plate is one of the most challenging kitchen tasks to automate with robots. In particular, the food-arrangement planning problem has, to our knowledge, not been widely studied, perhaps due to the difficulty of evaluating it quantitatively. In this talk, I introduce our attempt at the food-arrangement planning problem via imitation learning from expert demonstrations. In particular, our approach employs a Generative Adversarial Imitation Learning framework, which allows an agent to learn near-optimal behaviors from a few expert demonstrations and self-exploration, without an explicit reward function. For evaluation, we developed a food-arrangement simulator for the Japanese cuisine “Tempura” with 3D-scanned tempura ingredients and conducted experiments for performance evaluation. The experimental results demonstrate that our method can learn expert-like arrangement policies from bird's-eye-view raw images of plates without manually designing a reward function or requiring massive amounts of expert demonstration data. Website of the forum: https://sites.google.com/view/robotcook20/
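The core GAIL signal described above can be sketched as follows. This is an illustrative numpy toy, not the authors' implementation: the "expert" and "policy" state-action features, the logistic discriminator, and all constants are hypothetical stand-ins. The discriminator is trained to tell expert pairs from policy pairs, and the policy's surrogate reward is the standard GAIL term -log(1 - D(s, a)), which a policy optimizer (e.g. TRPO/PPO in the full method) would then maximize.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical 2-D state-action features for expert demos and policy rollouts.
expert_sa = rng.normal(loc=1.0, size=(200, 2))
policy_sa = rng.normal(loc=-1.0, size=(200, 2))

# Discriminator D(s, a) = sigmoid(w . x + b), trained to output 1 on expert data
# by gradient ascent on E_expert[log D] + E_policy[log(1 - D)].
w, b = np.zeros(2), 0.0
for _ in range(500):
    d_exp = sigmoid(expert_sa @ w + b)
    d_pol = sigmoid(policy_sa @ w + b)
    grad_w = expert_sa.T @ (1 - d_exp) / len(expert_sa) - policy_sa.T @ d_pol / len(policy_sa)
    grad_b = np.mean(1 - d_exp) - np.mean(d_pol)
    w += 0.1 * grad_w
    b += 0.1 * grad_b

def gail_reward(sa):
    """Surrogate reward handed to the policy optimizer: high where D thinks 'expert'."""
    return -np.log(1.0 - sigmoid(sa @ w + b) + 1e-8)
```

After training, expert-like state-action pairs receive a higher surrogate reward than the policy's own, which is the learning signal that replaces a hand-designed reward function.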
|
|
09:00-17:00, Paper SuFR1.3 | |
>Tactile Image Sensor for Food Inspection Tasks (I) |
|
Yamaguchi, Akihiko | Tohoku University |
Shimonomura, Kazuhiro | Ritsumeikan University |
Keywords:
Abstract: Detection of foreign substances mixed in food is one of the important issues in the manufacturing process of food factories. In this talk, we will discuss the use of tactile image sensors in food inspection tasks. A tactile image sensor consisting of a camera can acquire tactile information with high spatial resolution and can be used to detect small pieces of hard objects mixed in soft foods. In this study, we attempted to detect shrimp shells. Website of the forum: https://sites.google.com/view/robotcook20/
|
|
09:00-17:00, Paper SuFR1.4 | |
>Food Manipulation Competition Trials (I) |
|
Yamaguchi, Akihiko | Tohoku University |
Hirai, Shinichi | Ritsumeikan Univ |
Keywords:
Abstract: This video introduces two trials of food manipulation competitions that the presenter organized in his laboratory: a food-arrangement competition and a wine-pouring competition. In the food-arrangement competition, each team drove a manipulator to pick food samples from the table and place them into a neighboring container; each team designed a robotic hand attached to the manipulator and programmed the manipulator's motion. The winner succeeded in picking seven samples out of ten. In the wine-pouring competition, each team drove a manipulator to pour beads, substituting for wine, into a glass fixed on the table, again designing the hand and programming the motion. These trials surfaced many issues to be tackled in future competitions. Website of the forum: https://sites.google.com/view/robotcook20/
|
|
09:00-17:00, Paper SuFR1.5 | |
>Robotic Cutting: Mechanics and Knife Control (I) |
|
Yamaguchi, Akihiko | Tohoku University |
Jia, Yan-Bin | Iowa State University |
Keywords:
Abstract: Home robots have long fascinated the public. They are at the core of quality-of-life technology, carrying high promise for relieving people from daily chores and providing cost-effective health care to the growing elderly population and to people with disabilities. Automation of kitchen skills is an important part of home robotics, and also one of the ultimate tests of whether robots can achieve human-like dexterity. Despite its significance and appeal, robotic kitchen assistance has until today been limited to dish washing and sorting, and to cooking food items prepared by humans. In this work, a robotic arm equipped with a force/torque sensor is used to cut through an object in a sequence of three moves: pressing, touching, and slicing. For each move, a separate control strategy in Cartesian space is designed to incorporate contact and/or force constraints while following a prescribed trajectory. Website of the forum: https://sites.google.com/view/robotcook20/
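A force-triggered phase sequence of this kind can be sketched as a small state machine. This is only an illustration of the press/touch/slice idea, not the speaker's controller: the thresholds, velocities, and function names are all invented for the example.

```python
# Hypothetical phase switcher for a press / touch / slice knife sequence.
# All thresholds and velocity commands are illustrative placeholders.
PRESS, TOUCH, SLICE, DONE = "press", "touch", "slice", "done"

def knife_command(phase, normal_force, depth, *,
                  contact_f=0.5, press_f=5.0, cut_depth=0.03):
    """Return (next_phase, Cartesian velocity [vx, vz]) from force/depth feedback."""
    if phase == PRESS:
        # Descend until the blade firmly engages the material.
        if normal_force >= press_f:
            return TOUCH, [0.0, 0.0]
        return PRESS, [0.0, -0.01]
    if phase == TOUCH:
        # Maintain light contact before starting the slicing motion.
        if normal_force >= contact_f:
            return SLICE, [0.02, -0.005]
        return TOUCH, [0.0, -0.002]
    if phase == SLICE:
        # Saw forward while descending until the target depth is reached.
        if depth >= cut_depth:
            return DONE, [0.0, 0.0]
        return SLICE, [0.02, -0.005]
    return DONE, [0.0, 0.0]
```

In a real controller each branch would be a separate Cartesian-space force/position law, as the abstract describes; here the branches only return constant velocity commands to show the switching logic.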
|
|
09:00-17:00, Paper SuFR1.6 | |
>Human-Robot-Food Interaction (I) |
|
Yamaguchi, Akihiko | Tohoku University |
Admoni, Henny | Carnegie Mellon University |
Keywords:
Abstract: Cooking is a uniquely human activity. Thus, robot cooking competitions should include challenges that integrate humans. In this talk, I divide the process of cooking (and eating) into four phases, and introduce a challenge in each phase that involves human-robot interaction. By integrating human interaction into cooking competitions, I posit that we can drive forward research, expand real-world applicability, and increase public interest in our work.
|
|
SuWS1 |
Room T1 |
3rd Workshop on Proximity Perception in Robotics: Towards Multi-Modal
Cognition |
Workshop |
Chair: Escaida Navarro, Stefan | Inria |
Co-Chair: Mühlbacher-Karrer, Stephan | JOANNEUM RESEARCH Forschungsgesellschaft mbH - ROBOTICS |
Organizer: Escaida Navarro, Stefan | Inria |
Organizer: Mühlbacher-Karrer, Stephan | JOANNEUM RESEARCH Forschungsgesellschaft mbH - ROBOTICS |
Organizer: Zangl, Hubert | Alpen-Adria-Universitaet Klagenfurt |
Organizer: Koyama, Keisuke | Osaka University |
Organizer: Hein, Björn | Karlsruhe Institute of Technology |
Organizer: Thomas, Ulrike | Chemnitz University of Technology |
Organizer: Alagi, Hosam | Karlsruhe Institute of Technology |
Organizer: Ding, Yitao | Chemnitz University of Technology |
Organizer: Schöffmann, Christian | Alpen-Adria Universität Klagenfurt |
|
09:00-17:00, Paper SuWS1.1 | |
Intro Video to the 3rd Workshop on Proximity Perception in Robotics: Towards Multi-Modal Cognition (I) |
|
Escaida Navarro, Stefan | Inria |
|
09:00-17:00, Paper SuWS1.2 | |
Invited Talk Wenzhen Yuan: What Can a Robot Learn from High-Resolution Tactile Sensing? (I) |
|
Escaida Navarro, Stefan | Inria |
|
09:00-17:00, Paper SuWS1.3 | |
Invited Talk Joshua R. Smith: A Computationally Efficient Model of Octopus Sensing & Neuro-Muscular Control (I) |
|
Escaida Navarro, Stefan | Inria |
|
09:00-17:00, Paper SuWS1.4 | |
Invited Talk Edward Cheung: A Look at Sensor-Based Arm Manipulator Motion Planning in 1980s (I) |
|
Escaida Navarro, Stefan | Inria |
|
09:00-17:00, Paper SuWS1.5 | |
Invited Talk Kazuhiro Shimonomura: Vision-Based Tactile Image Sensor for Manipulation and Inspection Tasks (I) |
|
Escaida Navarro, Stefan | Inria |
|
09:00-17:00, Paper SuWS1.6 | |
Invited Talk Michael Zillich (Blue Danube Robotics): Combined Proximity and Tactile Sensing for Fast Fenceless Automation (I) |
|
Escaida Navarro, Stefan | Inria |
|
09:00-17:00, Paper SuWS1.7 | |
Invited Talk Christian Duriez: Model-Based Sensing for Soft Robots (I) |
|
Escaida Navarro, Stefan | Inria |
|
09:00-17:00, Paper SuWS1.8 | |
Invited Talk Andrea Cherubini: Multi-Modal Perception for Physical Human-Robot Interaction (I) |
|
Escaida Navarro, Stefan | Inria |
|
09:00-17:00, Paper SuWS1.9 | |
Invited Talk Jan Steckel: Advanced 3D Sonar Sensing for Heavy Industry Applications (I) |
|
Escaida Navarro, Stefan | Inria |
|
09:00-17:00, Paper SuWS1.10 | |
Invited Talk Tamim Asfour: Multi-Modal Sensing for Semi-Autonomous Grasping in Prosthetics Hands (I) |
|
Escaida Navarro, Stefan | Inria |
|
09:00-17:00, Paper SuWS1.11 | |
Invited Talk Genesis Laboy (Toyoda Gosei): E-Rubber and Its Applications (I) |
|
Escaida Navarro, Stefan | Inria |
|
09:00-17:00, Paper SuWS1.12 | |
Panel Discussion + Q&A Invited Talks, October 28th, Part 1 (I) |
|
Escaida Navarro, Stefan | Inria |
|
09:00-17:00, Paper SuWS1.13 | |
Panel Discussion + Q&A Invited Talks, October 28th, Part 2 (I) |
|
Escaida Navarro, Stefan | Inria |
|
09:00-17:00, Paper SuWS1.14 | |
PhD-Forum + Q&A Demos, October 29th, Part 1 (I) |
|
Escaida Navarro, Stefan | Inria |
|
09:00-17:00, Paper SuWS1.15 | |
PhD-Forum + Q&A Demos, October 29th, Part 2 (I) |
|
Escaida Navarro, Stefan | Inria |
|
SuWS2 |
Room T2 |
Bringing Geometric Methods to Robot Learning, Optimization and Control |
Workshop |
Chair: Jaquier, Noémie | Idiap Research Institute |
Co-Chair: Rozo, Leonel | Bosch Center for Artificial Intelligence |
Organizer: Jaquier, Noémie | Karlsruhe Institute of Technology |
Organizer: Rozo, Leonel | Bosch Center for Artificial Intelligence |
Organizer: Hauberg, Søren | Technical University of Denmark |
Organizer: Schröcker, Hans-Peter | University of Innsbruck |
Organizer: Sra, Suvrit | Massachusetts Institute of Technology |
|
09:00-17:00, Paper SuWS2.1 | |
>Workshop on Bringing Geometric Methods to Robot Learning, Optimization and Control (I) |
|
Jaquier, Noémie | Idiap Research Institute |
Keywords:
Abstract: In many robotic applications, robots are required to react to new situations, act in unstructured and dynamic environments, and overcome uncertainty. This demands outstanding adaptation capabilities so that robot actions still lead to successful performance. A key component in both data-driven learning and adaptation is how robots may exploit explicit (e.g. domain knowledge) or implicit (e.g. learned) structures arising in the collected data. Domain knowledge and data structures in robotics can be viewed from a geometric perspective, as different variables and problems have specific geometric characteristics. Rigid body orientations, controller gains, inertia matrices, manipulability ellipsoids and end-effector poses are examples of variables with predefined geometric structure. These diverse types of variables do not belong to a vector space, so the use of classical Euclidean methods for treating and analyzing them is inadequate. In this context, differential geometry, or more specifically Lie group and Riemannian manifold theory, provides appropriate tools and methods to cope with or to learn the geometry of non-Euclidean parameter spaces. The main objective of this workshop is to draw the robotics community's interest to geometric methods, which have often been overlooked in robot learning, control and optimization. Moreover, we expect this workshop will raise awareness of the importance of geometry in the different research branches of robotics. We aim to bring together researchers from various robotics fields to discuss the benefits and explore the challenges of bringing geometry-awareness to robotic problems. Finally, we aim to build bridges between the robotics community and mathematicians, as well as machine learning researchers, in order to efficiently tackle the upcoming challenges involving differential geometry and robotics.
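The basic tools such methods rest on are the Riemannian exponential and logarithm maps, which move between a manifold and its tangent spaces. A minimal sketch on the unit sphere S^2 (one of the simplest non-Euclidean spaces, e.g. for unit-norm directions; not any particular speaker's code):

```python
import numpy as np

def sphere_exp(p, v):
    """Map tangent vector v at p onto the sphere along the geodesic."""
    nv = np.linalg.norm(v)
    if nv < 1e-12:
        return p.copy()
    return np.cos(nv) * p + np.sin(nv) * v / nv

def sphere_log(p, q):
    """Tangent vector at p pointing along the geodesic towards q; its norm is the geodesic distance."""
    cos_t = np.clip(p @ q, -1.0, 1.0)
    theta = np.arccos(cos_t)
    if theta < 1e-12:
        return np.zeros_like(p)
    u = q - cos_t * p
    return theta * u / np.linalg.norm(u)

p = np.array([0.0, 0.0, 1.0])   # north pole
q = np.array([1.0, 0.0, 0.0])   # point on the equator
v = sphere_log(p, q)            # tangent at p, length pi/2
assert np.allclose(sphere_exp(p, v), q)  # exp inverts log
```

Statistics and learning algorithms become geometry-aware by replacing vector addition/subtraction with these two maps; the same pattern carries over to rotation matrices, SPD matrices and the other spaces listed above.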
|
|
09:00-17:00, Paper SuWS2.2 | |
>Certifiable 3D Perception: From Geometry to Global Optimization and Back (Luca Carlone) (I) |
|
Jaquier, Noémie | Idiap Research Institute |
Keywords:
Abstract: 3D Perception problems in robotics and computer vision are concerned with the estimation of a world model from data. As such, they include a broad set of inverse problems, ranging from object pose estimation to robot localization and mapping. These inverse problems are typically formulated as a nonconvex or combinatorial optimization, and are solved using local solvers or heuristics. The resulting techniques are brittle, due to the non-convexity of the problem. While many applications can afford occasional failures (e.g., AR/VR for entertainment), safety-critical applications of robotics in the wild (e.g., self-driving vehicles) demand a new generation of algorithms. In this talk, I present recent advances in the design of certifiable perception algorithms that find globally optimal estimates in the face of extreme noise and outliers. The key insight behind these algorithms is the design of tight semidefinite and sum-of-squares relaxations, combined with fast verification methods based on Lagrangian duality. These algorithms are “hard to break” and work in regimes where all related techniques fail, while providing performance guarantees. I discuss applications to a variety of perception problems, including mesh registration, image-based object localization, and robot pose estimation. For instance, I show that our algorithms can solve registration problems where 99% of the measurements are outliers and succeed in localizing objects where an average human would fail.
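For context, the outlier-free core of rotation registration already admits a closed-form global optimum via SVD (the orthogonal Procrustes / Kabsch solution); the certifiable relaxations discussed in the talk extend global-optimality guarantees from this clean case to the outlier-heavy regime. A sketch of that base case (not the talk's algorithm):

```python
import numpy as np

def best_rotation(src, dst):
    """Globally optimal R minimizing sum ||R src_i - dst_i||^2 (points pre-centered)."""
    H = src.T @ dst                       # 3x3 correlation matrix
    U, _, Vt = np.linalg.svd(H)
    # Sign correction guarantees a proper rotation (det = +1), not a reflection.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    return Vt.T @ D @ U.T

rng = np.random.default_rng(1)
R_true = np.array([[0.0, -1.0, 0.0],      # ground truth: 90 degrees about z
                   [1.0,  0.0, 0.0],
                   [0.0,  0.0, 1.0]])
pts = rng.normal(size=(50, 3))
R_est = best_rotation(pts, pts @ R_true.T)
assert np.allclose(R_est, R_true)
```

With even a single gross outlier this least-squares estimate can be arbitrarily wrong, which is precisely the failure mode the semidefinite relaxations above are designed to certify against.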
|
|
09:00-17:00, Paper SuWS2.3 | |
>Lie Theory for the Roboticist (Joan Sola) (I) |
|
Jaquier, Noémie | Idiap Research Institute |
Keywords:
Abstract: A Lie group is an abstract mathematical object dating back to the 19th century, when the mathematician Sophus Lie laid the foundations of the theory of continuous transformation groups. Many years later, its influence has spread over diverse areas of science and technology. In robotics, its use has recently become an important trend, at least in estimation, and particularly in motion estimation for navigation. Yet for the vast majority of roboticists, Lie groups are highly abstract constructions and therefore difficult to understand and use. In many fields of robotics it is often unnecessary to exploit the full capacity of the theory, so an effort to select materials is required. In this presentation, we will walk through the most basic principles of Lie theory, with the aim of conveying clear and useful ideas, and leave a significant corpus of the theory behind. Even with this mutilation, the material included here has proven to be extremely useful in modern estimation and control algorithms for robotics, especially in the fields of SLAM, visual odometry, nMPC, and the like.
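The two workhorse operations of this selected material are the exponential and logarithm maps. For SO(3) they have closed forms (Rodrigues' formula and its inverse); a self-contained numpy sketch, for illustration only:

```python
import numpy as np

def hat(w):
    """Skew-symmetric matrix of w, so that hat(w) @ x = w x x."""
    return np.array([[0, -w[2], w[1]],
                     [w[2], 0, -w[0]],
                     [-w[1], w[0], 0]])

def so3_exp(w):
    """Rodrigues' formula: axis-angle vector -> rotation matrix."""
    t = np.linalg.norm(w)
    if t < 1e-12:
        return np.eye(3)
    K = hat(w / t)
    return np.eye(3) + np.sin(t) * K + (1 - np.cos(t)) * K @ K

def so3_log(R):
    """Inverse map: rotation matrix -> axis-angle vector (angle < pi)."""
    t = np.arccos(np.clip((np.trace(R) - 1) / 2, -1.0, 1.0))
    if t < 1e-12:
        return np.zeros(3)
    W = (R - R.T) * t / (2 * np.sin(t))
    return np.array([W[2, 1], W[0, 2], W[1, 0]])

w = np.array([0.1, -0.2, 0.3])
R = so3_exp(w)
assert np.allclose(R @ R.T, np.eye(3))   # R is a valid rotation
assert np.allclose(so3_log(R), w)        # log inverts exp
```

Estimation algorithms such as those mentioned (SLAM, visual odometry, nMPC) use exactly this pair to parametrize rotation errors in the tangent space while keeping states on the group.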
|
|
09:00-17:00, Paper SuWS2.4 | |
>Geometric Methods for Dynamic Model-Based Robotics (Taeyoon Lee) (I) |
|
Jaquier, Noémie | Idiap Research Institute |
Keywords:
Abstract: This talk will cover geometric methods for dynamic model-based robotics, specifically on topics such as identification, adaptive control and excitation trajectory optimization.
|
|
09:00-17:00, Paper SuWS2.5 | |
>Complex Robotic Systems: Modeling, Control, and Planning Using Dual Quaternion Algebra (Bruno Adorno) (I) |
|
Jaquier, Noémie | Idiap Research Institute |
Keywords:
Abstract: According to the United Nations, more than two billion elderly people will live in the world by 2050, while there will be only four working-age people per elderly person. As population aging increases and the proportional workforce decreases, there is strong motivation for robotic assistants that will work closely with humans. As a result of more than fifty years of research, we are seeing increasingly more robots working in human environments and/or alongside humans, and we expect that they will actively interact with people and other robots in complex tasks, both in the homes and factories of the future. However, many theoretical and practical challenges must be solved to guarantee the reliability and proper functionality of such complex systems. To manage that complexity, robot modeling, control, planning, and high-level task description are usually treated separately in different layers. Although that strategy may provide useful abstractions and make the complexity more manageable, it invariably leads to different mathematical representations and techniques that demand intermediate mappings between those layers, resulting in a theoretical patchwork that usually introduces unnecessary singularities and discontinuities into the complete robotic system. Furthermore, due to those different layers, local guarantees (i.e., the ones in specific layers) may not hold when all layers are integrated. In this talk, I will present our efforts to unify robot modeling, control, and planning using a single mathematical language, namely dual quaternion algebra, and the application of our techniques to surgical robots, mobile manipulators, humanoids, and cooperative robotic systems.
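A minimal sketch of the algebra in question (illustrative only, not the speaker's library): a rigid motion is a dual quaternion q = r + eps*d with rotation quaternion r and dual part d = (1/2) t r encoding the translation t, so that composing motions is a single algebra multiplication.

```python
import numpy as np

def qmul(a, b):
    """Hamilton product of quaternions stored as [w, x, y, z]."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return np.array([
        aw*bw - ax*bx - ay*by - az*bz,
        aw*bx + ax*bw + ay*bz - az*by,
        aw*by - ax*bz + ay*bw + az*bx,
        aw*bz + ax*by - ay*bx + az*bw,
    ])

def dq_from_rt(r, t):
    """Unit dual quaternion (r, d) for rotation quaternion r and translation t."""
    d = 0.5 * qmul(np.array([0.0, *t]), r)
    return r, d

def dq_mul(q1, q2):
    """Dual-quaternion product: (r1 + eps d1)(r2 + eps d2), using eps^2 = 0."""
    r1, d1 = q1
    r2, d2 = q2
    return qmul(r1, r2), qmul(r1, d2) + qmul(d1, r2)

def dq_translation(q):
    """Recover the translation: t = vector part of 2 d r*."""
    r, d = q
    r_conj = r * np.array([1.0, -1.0, -1.0, -1.0])
    return 2.0 * qmul(d, r_conj)[1:]

# Two pure translations compose by vector addition, as expected.
ident = np.array([1.0, 0.0, 0.0, 0.0])
a = dq_from_rt(ident, [1.0, 0.0, 0.0])
b = dq_from_rt(ident, [0.0, 2.0, 0.0])
assert np.allclose(dq_translation(dq_mul(a, b)), [1.0, 2.0, 0.0])
```

The appeal for the unification described above is that rotation and translation live in one singularity-free eight-parameter object, so kinematics, control laws and task descriptions can all be written in the same algebra.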
|
|
09:00-17:00, Paper SuWS2.6 | |
>Planning and Control on Riemannian Manifolds with Boundaries (Subhrajit Bhattacharya) (I) |
|
Jaquier, Noémie | Idiap Research Institute |
Keywords:
Abstract: This talk will cover planning and control on Riemannian manifolds in robotics. Specifically, the talk will introduce discrete search-based planning on simplicial complex representation of manifolds and control algorithms based on path metric and its gradient.
|
|
09:00-17:00, Paper SuWS2.7 | |
>Planning for High-Dimensional Robotic Systems by Solving Problems in Low-Dimensional Manifolds (Maxim Likhachev) (I) |
|
Jaquier, Noémie | Idiap Research Institute |
Keywords:
Abstract: This talk will cover different planning methods for high-dimensional robotic systems which are built on solutions exploiting low-dimensional manifolds. Two main application problems will be explained, namely, motion planning for a 10-link robot with deformable objects and footstep/motion planning for a 30 DoF robot.
|
|
09:00-17:00, Paper SuWS2.8 | |
>State Space Representations for Complex Manipulation (Anastasiia Varava) (I) |
|
Jaquier, Noémie | Idiap Research Institute |
Keywords:
Abstract: Modeling states of the system and transitions between them is fundamental for manipulation planning. Analytically devised representations of scenes and objects based on tools from computational geometry and topology are designed to capture the most essential information about the task. Unlike data-driven state representations, they also have the advantage of being naturally interpretable. However, the resulting algorithms can be computationally expensive, and thus not suitable for real-time applications. Recently, the idea of learning low-dimensional state representations from high-dimensional observations (such as images) has gained significant attention in the robotics community. For this, latent space models, such as variational autoencoders (VAE), are commonly used. However, this approach poses its own challenges. First, when no prior information about the problem is given, learning representations requires large amounts of data. Furthermore, standard metrics in the space of images or point clouds do not reflect similarities between the underlying states. Finally, most latent space models, and, in particular, VAEs, do not preserve the underlying geometric and topological structure of the input space, which is crucial for planning optimal paths, analyzing connectivity of the space, homotopy equivalence of paths, etc. In my work, I aim to bridge the gap between analytical and data-driven approaches by using the available information about the structure of the task when possible, and complementing it with machine learning tools when necessary. In my talk, I will present some geometric and topological tools for state representation in robotics and possible ways of integrating them with machine learning approaches in order to achieve robustness, high computational performance, and data efficiency.
|
|
09:00-17:00, Paper SuWS2.9 | |
>Structured Policies for Reactive Motion Generation (Mustafa Mukadam) (I) |
|
Jaquier, Noémie | Idiap Research Institute |
Keywords:
Abstract: This talk will explain how structure and geometry can be exploited to learn and generate reactive motions, building on Riemannian motion policy flows (RMPflow).
|
|
09:00-17:00, Paper SuWS2.10 | |
>Lagrangian and Hamiltonian Mechanics for Probabilities on the Statistical Manifold (Luigi Malago) (I) |
|
Jaquier, Noémie | Idiap Research Institute |
Keywords:
Abstract: We provide an Information-Geometric formulation of Classical Mechanics on the Riemannian manifold of probability distributions, which is an affine manifold endowed with a dually-flat connection. In a non-parametric formalism, we consider the full set of positive probability functions on a finite sample space, and we provide a specific expression for the tangent and cotangent spaces over the statistical manifold, in terms of a Hilbert bundle structure that we call the Statistical Bundle. In this setting, we compute velocities and accelerations of a one-dimensional statistical model using the canonical dual pair of parallel transports and define a coherent formalism for Lagrangian and Hamiltonian mechanics on the bundle. Finally, in a series of examples, we show how our formalism provides a consistent framework for accelerated natural gradient dynamics on the probability simplex, paving the way for direct applications in optimization, game theory and neural networks.
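Loosely, one common discretization of the natural-gradient flow on the probability simplex described above is the exponentiated-gradient (mirror-descent) step, which follows the exponential (e-)connection rather than taking Euclidean steps that would leave the simplex. A toy sketch, with a made-up payoff vector, maximizing the expectation E_p[f] over categorical distributions p:

```python
import numpy as np

def natural_gradient_step(p, grad, lr=0.5):
    """One exponentiated-gradient ascent step for a categorical distribution p.

    In continuous time this matches the Fisher natural-gradient flow on the
    simplex; here it is used as a simple discretization of that flow.
    """
    q = p * np.exp(lr * grad)   # move along the e-connection
    return q / q.sum()          # renormalize back onto the simplex

# Toy objective: E_p[f] with fixed payoffs f; its gradient w.r.t. p is f itself.
f = np.array([1.0, 3.0, 2.0])
p = np.full(3, 1.0 / 3.0)
for _ in range(100):
    p = natural_gradient_step(p, f)

# The dynamics concentrate mass on the highest-payoff outcome.
assert np.argmax(p) == 1 and p[1] > 0.99
```

Note this first-order scheme is the un-accelerated baseline; the Lagrangian/Hamiltonian formalism in the talk is what makes principled *accelerated* versions of such dynamics possible.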
|
|
SuWS3 |
Room T3 |
12th IROS Workshop on Planning, Perception, Navigation for Intelligent
Vehicle |
Workshop |
Chair: Martinet, Philippe | INRIA |
Co-Chair: Laugier, Christian | INRIA |
Organizer: Martinet, Philippe | INRIA |
Organizer: Laugier, Christian | INRIA |
Organizer: Ang Jr, Marcelo H | National University of Singapore |
Organizer: Wolf, Denis Fernando | University of Sao Paulo |
|
09:00-17:00, Paper SuWS3.1 | |
>WS-2374 Workshop Introduction - 12th IROS Workshop on Planning, Perception, Navigation for Intelligent Vehicle (I) |
|
Martinet, Philippe | INRIA |
Laugier, Christian | INRIA |
Ang Jr, Marcelo H | National University of Singapore |
Wolf, Denis Fernando | University of Sao Paulo |
Keywords:
Abstract: The purpose of this workshop is to discuss topics related to the challenging problems of autonomous navigation and driving assistance in open and dynamic environments. Technologies related to application fields such as unmanned outdoor vehicles and intelligent road vehicles will be considered from both the theoretical and technological points of view. Several research questions at the cutting edge of the state of the art will be addressed. Among the many application areas that robotics is addressing, the transportation of people and goods seems to be a domain that will dramatically benefit from intelligent automation. Fully automatic driving is emerging as the approach to dramatically improve efficiency while at the same time leading to the goal of zero fatalities. This workshop will address the robotics technologies at the very core of this major shift in the automobile paradigm. Achievements, challenges and open questions in areas such as autonomous outdoor vehicles will be presented, including the following topics: road scene understanding; lane detection and lane keeping; pedestrian and vehicle detection; detection, tracking and classification; feature extraction and feature selection; cooperative techniques; collision prediction and avoidance; advanced driver assistance systems; environment perception, vehicle localization and autonomous navigation; real-time perception and sensor fusion; SLAM in dynamic environments; mapping and maps for navigation; real-time motion planning in dynamic environments; human-robot interaction; behavior modeling and learning; robust sensor-based 3D reconstruction; modeling and control of mobile robots; deep learning applied in autonomous driving; and deep reinforcement learning applied in intelligent vehicles.
|
|
09:00-17:00, Paper SuWS3.2 | |
>WS-2374 Keynote 1 - Self-Supervised Learning for Perception Tasks in Automated Driving (I) |
|
Burgard, Wolfram | Toyota Research Institute |
Martinet, Philippe | INRIA |
Keywords:
Abstract: At the Toyota Research Institute we are following the one-system-two-modes approach to building truly automated cars. More precisely, we simultaneously aim for the L4/L5 chauffeur application and the guardian system, which can be considered a highly advanced driver assistance system of the future that prevents the driver from making mistakes. TRI aims to equip more and more consumer vehicles with guardian technology and in this way to turn the entire Toyota fleet into a giant data collection system. To leverage the resulting data advantage, TRI performs substantial research in machine learning and, in addition to supervised methods, particularly focuses on unsupervised and self-supervised approaches. In this presentation, I will present three recent results on self-supervised methods for perception problems in the context of automated driving, including novel approaches to inferring depth from monocular images and a new approach to panoptic segmentation.
|
|
09:00-17:00, Paper SuWS3.3 | |
>WS-2374 Keynote 2 - Decision Making Architectures for Safe Planning and Control of Agile Autonomous Vehicles (I) |
|
Theodorou, Evangelos | Georgia Institute of Technology |
Martinet, Philippe | INRIA |
Keywords:
Abstract: In this talk I will present novel algorithms and decision-making architectures for safe planning and control of terrestrial and aerial vehicles operating in dynamic environments. These algorithms incorporate different representations of robustness for high speed navigation and bring together concepts from stochastic contraction theory, robust adaptive control, and dynamic stochastic optimization using augmented importance sampling techniques. I will present demonstrations on simulated and real robotic systems and discuss future research directions.
|
|
09:00-17:00, Paper SuWS3.4 | |
>WS-2374 Keynote 3 - Understanding Risk and Social Behavior Improves Decision Making for Autonomous Vehicles (I) |
|
Rus, Daniela | MIT |
Martinet, Philippe | INRIA |
Keywords:
Abstract: Deployment of autonomous vehicles on public roads promises increases in efficiency and safety, and requires evaluating risk, understanding the intent of human drivers, and adapting to different driving styles. Autonomous vehicles must also behave in safe and predictable ways without requiring explicit communication. This talk describes how to integrate risk and behavior analysis in the control loop of an autonomous car. I will describe how Social Value Orientation (SVO), which captures how an agent's social preferences and cooperation affect their interactions with others by quantifying the degree of selfishness or altruism, can be integrated into decision making, and provide recent examples of developing and deploying self-driving vehicles with adaptation capabilities.
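The SVO weighting mentioned above is commonly written as an angle that trades off an agent's own reward against another agent's reward; a minimal sketch (the reward values are made up for illustration):

```python
import math

def svo_utility(reward_self, reward_other, svo_angle_rad):
    """SVO-weighted utility: egoistic at 0 rad, prosocial near pi/4,
    fully altruistic at pi/2."""
    return (math.cos(svo_angle_rad) * reward_self
            + math.sin(svo_angle_rad) * reward_other)

# An egoistic driver (angle 0) ignores the other agent's outcome entirely...
assert svo_utility(1.0, 5.0, 0.0) == 1.0
# ...while a prosocial (45 degree) driver weighs both rewards equally.
u = svo_utility(1.0, 5.0, math.pi / 4)
assert abs(u - (1.0 + 5.0) * math.cos(math.pi / 4)) < 1e-9
```

In a planner, each vehicle's predicted behavior can then be modeled as (approximately) maximizing its own SVO-weighted utility, and the ego vehicle estimates the other drivers' SVO angles online.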
|
|
09:00-17:00, Paper SuWS3.5 | |
>WS-2374 Keynote 4 - Safe Autonomous Driving and Humans: Perception and Transitions (I) |
|
Trivedi, Mohan | University of California San Diego (UCSD) |
Martinet, Philippe | INRIA |
Keywords:
Abstract: These are truly exciting times especially for researchers and scholars active in robotics and intelligent systems fields. Fruits of their labor are enabling transformative changes in daily lives of general public. In this presentation we will focus on changes affecting our mobility on roads with highly automated intelligent vehicles. We specifically discuss issues related to the understanding of human agents interacting with the automated vehicle, either as occupants of such vehicles, or who are in the near vicinity of the vehicles, as pedestrians, cyclists, or inside surrounding vehicles. These issues require deeper examination and careful resolution to assure safety, reliability and robustness of these highly complex systems for operation on public roads. The presentation will highlight recent research dealing with understanding of activities, behavior, intentions of humans specifically in the context of autonomous driving and transition controls.
|
|
09:00-17:00, Paper SuWS3.6 | |
>WS-2374 Talk 1 Marker-Based Mapping and Localization for Autonomous Valet Parking (I) |
|
Fang, Zheng | Northeastern University |
Chen, Yongnan | Northeastern University(China) |
Zhou, Ming | Northeastern University(China) |
Lu, Chao | Northeastern University |
Martinet, Philippe | INRIA |
Keywords:
Abstract: Autonomous valet parking (AVP) is one of the most important research topics of autonomous driving in low-speed scenes, with accurate mapping and localization being its key technologies. Traditional visual methods easily suffer localization failure in long-term applications due to changes in illumination and scene appearance. To solve this problem, we introduce visual fiducial markers as artificial landmarks for robust mapping and localization in parking lots. First, absolute scale information is acquired from the fiducial markers, and a robust and accurate monocular mapping method is proposed by fusing wheel odometry. Second, on the basis of a map of fiducial markers sparsely placed in the parking lot, we propose a robust and efficient filtering-based localization method that achieves accurate real-time localization of vehicles. Compared with traditional visual localization methods, our artificial landmarks are highly stable and robust to illumination and viewpoint changes. Moreover, because the fiducial markers can be placed on the columns and walls of the parking lot, they are less easily occluded than ground information, ensuring the reliability of the system. We have verified the effectiveness of our methods in real scenes. The experimental results show that the average localization error is about 0.3 m in a typical autonomous parking operation at a speed of 10 km/h.
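The geometric core of marker-based localization can be sketched in two dimensions: given a fiducial marker's known pose in the map and the vehicle-frame measurement of that marker, the vehicle pose follows from one transform composition. This is an illustrative toy (SE(2), made-up poses), not the paper's filtering pipeline:

```python
import numpy as np

def se2(x, y, theta):
    """Homogeneous 3x3 matrix for a planar pose (x, y, heading theta)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x], [s, c, y], [0.0, 0.0, 1.0]])

def vehicle_pose(marker_in_map, marker_in_vehicle):
    """T_map_vehicle = T_map_marker @ inv(T_vehicle_marker)."""
    return marker_in_map @ np.linalg.inv(marker_in_vehicle)

# Marker known to sit at (5, 2) in the map; the vehicle observes it 2 m ahead.
T_map_marker = se2(5.0, 2.0, 0.0)
T_veh_marker = se2(2.0, 0.0, 0.0)
T_map_veh = vehicle_pose(T_map_marker, T_veh_marker)
assert np.allclose(T_map_veh[:2, 2], [3.0, 2.0])  # vehicle is at (3, 2)
```

In the actual system such per-marker pose observations are fused with wheel odometry in a filter rather than used directly, which smooths noise and bridges gaps between sparsely placed markers.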
|
|
09:00-17:00, Paper SuWS3.7 | |
>WS-2374 Talk 2 - Parameter Optimization for Loop Closure Detection in Closed Environments (I) |
|
Rottmann, Nils | University of Luebeck |
Bruder, Ralf | University of Lübeck |
Xue, Honghu | University of Luebeck |
Schweikard, Achim | University of Luebeck |
Rueckert, Elmar | University of Luebeck |
Martinet, Philippe | INRIA |
Keywords:
Abstract: Tuning parameters is crucial for the performance of localization and mapping algorithms. In general, tuning the parameters requires expert knowledge and is sensitive to information about the structure of the environment. To design truly autonomous systems, the robot has to learn the parameters automatically. Therefore, we propose a parameter optimization approach for loop closure detection in closed environments which requires neither prior information (e.g., robot model parameters) nor expert knowledge. It relies on several path traversals along the boundary line of the closed environment. We demonstrate the performance of our method in challenging real-world scenarios with limited sensing capabilities. These scenarios are exemplary for a wide range of practical applications, including lawn mowers and household robots.
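Prior-free parameter tuning of the kind described above can be sketched generically as derivative-free search over the parameter space. The optimizer below is a minimal random-search sketch; the score function is a hypothetical stand-in for a loop-closure quality measure computed from boundary traversals, not the paper's actual objective.

```python
import random

def random_search(score, bounds, iters=2000, seed=0):
    """Derivative-free random search: sample parameter vectors uniformly
    within `bounds` and keep the best-scoring one.

    score  -- callable mapping a parameter list to a float (higher is better)
    bounds -- list of (low, high) ranges, one per parameter
    """
    rng = random.Random(seed)
    best_p, best_s = None, float("-inf")
    for _ in range(iters):
        p = [rng.uniform(lo, hi) for lo, hi in bounds]
        s = score(p)
        if s > best_s:
            best_p, best_s = p, s
    return best_p, best_s
```

In practice the score would be evaluated by replaying recorded boundary traversals, so each evaluation is expensive and a sample-efficient black-box optimizer (e.g. Bayesian optimization) would likely replace the uniform sampler; the interface stays the same.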
|
|
09:00-17:00, Paper SuWS3.8 | |
>WS-2374 Talk 3 - Radar-Camera Sensor Fusion for Joint Object Detection and Distance Estimation in Autonomous Vehicles (I) |
|
Nabati, Ramin | The University of Tennessee Knoxville |
Qi, Hairong | University of Tennessee |
Martinet, Philippe | INRIA |
Keywords:
Abstract: In this paper, we present a novel radar-camera sensor fusion framework for accurate object detection and distance estimation in autonomous driving scenarios. The proposed architecture uses a middle-fusion approach to fuse the radar point clouds and RGB images. Our radar object proposal network uses radar point clouds to generate 3D proposals from a set of 3D prior boxes. These proposals are mapped to the image and fed into a Radar Proposal Refinement (RPR) network for objectness score prediction and box refinement. The RPR network utilizes both radar information and image feature maps to generate accurate object proposals and distance estimations. The radar-based proposals are combined with image-based proposals generated by a modified Region Proposal Network (RPN). The RPN has a distance regression layer for estimating distance for every generated proposal. The radar-based and image-based proposals are merged and used in the next stage for object classification. Experiments on the challenging nuScenes dataset show our method outperforms other existing radar-camera fusion methods in the 2D object detection task while at the same time accurately estimating objects’ distances.
|
|
09:00-17:00, Paper SuWS3.9 | |
>WS-2374 Talk 4 - SalsaNext Fast Uncertainty-Aware Semantic Segmentation of LiDAR Point Clouds for Autonomous Driving (I) |
|
Cortinhal, Tiago | Halmstad University |
Tzelepis, George | Institut De Robòtica I Informàtica Industrial |
Aksoy, Eren Erdal | Halmstad University |
Martinet, Philippe | INRIA |
Keywords:
Abstract: In this paper, we introduce SalsaNext for the uncertainty-aware semantic segmentation of a full 3D LiDAR point cloud in real-time. SalsaNext is the next version of SalsaNet [1] which has an encoder-decoder architecture consisting of a set of ResNet blocks. In contrast to SalsaNet, we introduce a new context module, replace the ResNet encoder blocks with a new residual dilated convolution stack with gradually increasing receptive fields and add the pixel-shuffle layer in the decoder. Additionally, we switch from stride convolution to average pooling and also apply central dropout treatment. To directly optimize the Jaccard index, we further combine the weighted cross entropy loss with Lovász-Softmax loss [2]. We finally inject a Bayesian treatment to compute the epistemic and aleatoric uncertainties for each LiDAR point. We provide a thorough quantitative evaluation on the Semantic-KITTI dataset [3], which demonstrates that SalsaNext outperforms the previous networks and ranks first on the Semantic-KITTI leaderboard.
|
|
09:00-17:00, Paper SuWS3.10 | |
>WS-2374 Talk 5 - SDVTracker Real-Time Multi-Sensor Association and Tracking for Self-Driving Vehicles (I) |
|
Gautam, Shivam | Carnegie Mellon University |
Meyer, Gregory P. | Uber Advanced Technologies Group |
Vallespi-Gonzalez, Carlos | CMU |
Becker, Brian C. | Carnegie Mellon University |
Martinet, Philippe | INRIA |
Keywords:
Abstract: Accurate motion state estimation of Vulnerable Road Users (VRUs) is a critical requirement for autonomous vehicles that navigate in urban environments. Due to their computational efficiency, many traditional autonomy systems perform multi-object tracking using Kalman Filters which frequently rely on hand-engineered association. However, such methods fail to generalize to crowded scenes and multi-sensor modalities, often resulting in poor state estimates which cascade to inaccurate predictions. We present a practical and lightweight tracking system, SDVTracker, that uses a deep learned model for association and state estimation in conjunction with an Interacting Multiple Model (IMM) filter. The proposed tracking method is fast, robust and generalizes across multiple sensor modalities and different VRU classes. In this paper, we detail a model that jointly optimizes both association and state estimation with a novel loss, an algorithm for determining ground-truth supervision, and a training procedure. We show this system significantly outperforms hand-engineered methods on a real-world urban driving dataset while running in less than 2.5 ms on CPU for a scene with 100 actors, making it suitable for self-driving applications where low latency and high accuracy are critical.
|
|
09:00-17:00, Paper SuWS3.11 | |
>WS-2374 Talk 6 - Situation Awareness at Autonomous Vehicle Handover - Preliminary Results of a Quantitative Analysis (I) |
|
Nagy, Tamas | Obuda University |
Drexler, Dániel András | Óbuda University |
Ukhrenkov, Nikita | Antal Bejczy Center for Intelligent Robotics, Óbuda University |
Takács, Árpád | Óbuda University |
Haidegger, Tamas | Obuda University (OU) |
Martinet, Philippe | INRIA |
Keywords:
Abstract: Enforcing system level safety is a key research domain within self-driving technology. Current general development efforts aim for Level 3+ autonomy, where the vehicle controls both lateral and longitudinal motion of the dynamic driving task, while the driver is permitted to divert their attention, as long as she/he is able to react properly to a handover request initiated by the vehicle. Consequently, situation awareness of the human driver has become one of the most important metrics of handover safety. In this paper, the preliminary results of a user study are presented to quantitatively evaluate emergency handover performance, using a custom-designed experimental setup built upon the Master Console of the da Vinci Surgical System and the CARLA driving simulator. The measured control signals and the questionnaire filled out by participants were analyzed to gain further knowledge on the situation awareness of drivers during handover at Level 3 autonomy. The supporting custom open-source platform is available at https://github.com/ABC-iRobotics/dvrk_carla.
|
|
09:00-17:00, Paper SuWS3.12 | |
>WS-2374 Talk 7 - Towards Context-Aware Navigation for Long-Term Autonomy in Agricultural Environments (I) |
|
Höllmann, Mark | DFKI |
Kisliuk, Benjamin | DFKI |
Krause, Jan Christoph | DFKI GmbH |
Tieben, Christoph | DFKI |
Mock, Alexander | University of Osnabrück |
Pütz, Sebastian | Osnabrueck University |
Igelbrink, Felix | Osnabrueck University |
Wiemann, Thomas | Osnabrueck University |
Focke Martínez, Santiago | University of Bremen |
Stiene, Stefan | University of Osnabrueck |
Hertzberg, Joachim | University of Osnabrueck |
Martinet, Philippe | INRIA |
Keywords:
Abstract: Autonomous surveying systems for agricultural applications are becoming increasingly important. Currently, most systems are remote-controlled or rely on a single global map representation. Over the last years, several use-case-specific representations for path and action planning in different contexts have been proposed. However, solely relying on fixed representations and action schemes limits the flexibility of autonomous systems. Especially in agriculture, the surroundings in which autonomous systems are deployed may change rapidly during vegetation periods, and the complexity of the environment may vary depending on farm size and season. In this paper, we propose a context-aware system implemented in ROS that allows changing the representation, planning strategy, and execution logic based on a spatially grounded semantic context. Our vision is to build up an autonomous system called Autonomous Robotic Experimental Platform (AROX) that is able to generate crop maps over a whole vegetation period without any user intervention. To this end, we built up the hardware infrastructure for storing and charging the robot as well as the software needed to realize context-awareness using available ROS packages.
|
|
09:00-17:00, Paper SuWS3.13 | |
>WS-2374 Talk 8 - Exploiting Continuity of Rewards - Efficient Sampling in POMDPs with Lipschitz Bandits (I) |
|
Tas, Omer Sahin | FZI Research Center for Information Technology at the Karlsruhe |
Hauser, Felix | Karlsruhe Institute of Technology |
Lauer, Martin | Karlsruhe Institute of Technology |
Martinet, Philippe | INRIA |
Keywords:
Abstract: Decision making under uncertainty can be framed as a partially observable Markov decision process (POMDP). Finding exact solutions of POMDPs is generally computationally intractable, but the solution can be approximated by sampling-based approaches. These approaches rely on multi-armed bandit (MAB) heuristics, which assume the outcomes of different actions to be uncorrelated. In some applications, like motion planning in continuous spaces, similar actions yield similar outcomes. In this paper, we use variants of MAB heuristics that make Lipschitz continuity assumptions on the outcomes of actions to improve the efficiency of sampling-based planning approaches. We demonstrate the effectiveness of this approach in the context of motion planning for automated driving.
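The Lipschitz idea can be sketched in isolation from the POMDP machinery: in a UCB-style bandit over a discretized continuous action set, continuity with constant L lets each arm's upper confidence bound be tightened using every other arm's statistics, so nearby arms share information. The reward function and constants below are illustrative assumptions, not the paper's planner.

```python
import math
import random

def lipschitz_ucb(actions, reward, L, rounds=1500, seed=0):
    """UCB over a discretized continuous action set with Lipschitz tightening.

    Each arm's bound is the minimum over all arms j of
    mean_j + exploration_bonus_j + L * |a - a_j|, which is a valid upper
    confidence bound whenever the mean reward is L-Lipschitz in the action.
    """
    rng = random.Random(seed)
    n = [0] * len(actions)     # pull counts
    s = [0.0] * len(actions)   # reward sums
    for t in range(1, rounds + 1):
        bounds = []
        for a in actions:
            b = min(
                (s[j] / n[j] + math.sqrt(2.0 * math.log(t) / n[j])
                 + L * abs(a - actions[j]))
                if n[j] > 0 else float("inf")
                for j in range(len(actions))
            )
            bounds.append(b)
        i = max(range(len(actions)), key=lambda k: bounds[k])
        n[i] += 1
        s[i] += reward(actions[i], rng)
    # Return the most-pulled action: UCB concentrates pulls near the optimum.
    return actions[max(range(len(actions)), key=lambda k: n[k])]
```

In a sampling-based POMDP planner the "reward" of an arm would be the value estimate of a simulated rollout for that action, and the same Lipschitz-tightened bound would steer which action gets the next rollout.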
|
|
09:00-17:00, Paper SuWS3.14 | |
>WS-2374 Talk 9 - Impact of Traffic Lights on Trajectory Forecasting of Human-driven Vehicles Near Signalized Intersections (I) |
|
Oh, Geunseob | University of Michigan |
Peng, Huei | University of MIchigan |
Martinet, Philippe | INRIA |
Keywords:
Abstract: Forecasting trajectories of human-driven vehicles is a crucial problem in autonomous driving. Trajectory forecasting in urban areas is particularly hard due to complex interactions with cars and pedestrians, and traffic lights (TLs). Unlike the former, which have been widely studied, the impact of TLs on trajectory prediction has rarely been discussed. Our contribution is twofold. First, we identify the potential impact qualitatively and quantitatively. Second, we present a novel resolution that is mindful of the impact, inspired by the fact that humans drive differently depending on signal phase and timing. Central to the proposed approach are Human Policy Models, which model how drivers react to various states of TLs by mapping a sequence of states of vehicles and TLs to a subsequent action of the vehicle. We then combine the Human Policy Models with a known transition function (system dynamics) to conduct a sequential prediction; thus our approach can be viewed as Behavior Cloning. One novelty of our approach is the use of vehicle-to-infrastructure communications to obtain the future states of TLs. We demonstrate the impact of TLs and the proposed approach using an ablation study for longitudinal trajectory forecasting tasks on real-world driving data recorded near a signalized intersection. Finally, we propose probabilistic (generative) Human Policy Models which provide probabilistic contexts and capture competing policies, e.g., pass or stop in the yellow-light dilemma zone.
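The policy-plus-transition rollout structure can be sketched compactly. The `rollout` loop mirrors the sequential-prediction idea (policy acts on vehicle state plus the V2I-provided TL state; a known transition advances the dynamics); the rule-based policy and point-mass dynamics below are illustrative stand-ins for the paper's learned Human Policy Models, with all constants assumed.

```python
def rollout(policy, transition, x0, tl_future):
    """Sequential prediction: at each step the policy maps the current vehicle
    state plus the traffic-light state (obtainable ahead of time via V2I) to an
    action, and a known transition function advances the dynamics."""
    traj = [x0]
    x = x0
    for tl in tl_future:
        x = transition(x, policy(x, tl))
        traj.append(x)
    return traj

# Illustrative stand-ins (NOT the paper's learned models): assumed constants.
DT, STOP_LINE, V_LIMIT, A_MAX = 0.1, 50.0, 12.0, 4.0

def rule_policy(x, tl):
    """Hand-coded longitudinal policy: brake to stop short of the line on red,
    otherwise accelerate gently toward the speed limit."""
    pos, vel = x
    if tl == "red" and pos < STOP_LINE:
        gap = max(STOP_LINE - pos, 0.1)
        return -min(A_MAX, vel * vel / (2.0 * gap) + 0.5)  # margin ensures a full stop
    return 1.0 if vel < V_LIMIT else 0.0

def point_mass(x, a):
    """Simple forward-Euler point-mass dynamics; velocity clipped at zero."""
    pos, vel = x
    return (pos + vel * DT, max(0.0, vel + a * DT))
```

Replacing `rule_policy` with a learned model trained on recorded (vehicle state, TL state) → action pairs turns this rollout into behavior cloning in the sense described above.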
|
|
09:00-17:00, Paper SuWS3.15 | |
>WS-2374 Talk 10 - Semantic Grid Map Based LiDAR Localization in Highly Dynamic Urban Scenarios (I) |
|
Yang, Chenxi | Shanghai Jiao Tong University |
He, Lei | Shanghai Jiao Tong University |
Zhuang, Hanyang | Shanghai Jiao Tong University |
Wang, Chunxiang | Shanghai Jiaotong University |
Yang, Ming | Shanghai Jiao Tong University |
Martinet, Philippe | INRIA |
Keywords:
Abstract: Objects that change over time, such as pedestrians and vehicles, remain challenging for scan-to-map pose estimation using 3D LiDAR in the field of autonomous driving, because they lead to incorrect data association and structural occlusion. This paper proposes a novel semantic grid map (SGM) and corresponding algorithms to estimate the pose of observed scans in such scenarios to improve robustness and accuracy. The algorithms consist of a Gaussian mixture model (GMM) to initialize the pose, and a grid probability model to keep estimating the pose in real-time. We evaluate our algorithm thoroughly in two scenarios. The first scenario is an express road with heavy traffic, to demonstrate performance under dynamic interference. The second scenario is a factory, to confirm compatibility with static environments. Experimental results show that the proposed method achieves higher accuracy and smoothness than mainstream methods.
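The grid-probability idea can be sketched in 2D: score a candidate pose by summing the map's occupancy probabilities at the scan points transformed by that pose, then pick the best-scoring candidate. The exhaustive candidate search below is a simplified stand-in for the paper's GMM initialization and real-time filtering; the grid, resolution, and scan are illustrative assumptions.

```python
import math

def score_pose(grid, res, pose, scan):
    """Sum map occupancy probabilities at scan points transformed by `pose`.

    grid -- 2D list of P(occupied), indexed [row][col] = [y][x]
    res  -- metres per cell
    pose -- (x, y, theta) of the sensor in the map frame
    scan -- list of (x, y) points in the sensor frame
    """
    px, py, th = pose
    c, s = math.cos(th), math.sin(th)
    total = 0.0
    for sx, sy in scan:
        wx = px + c * sx - s * sy   # rigid transform into the map frame
        wy = py + s * sx + c * sy
        i, j = int(round(wy / res)), int(round(wx / res))
        if 0 <= i < len(grid) and 0 <= j < len(grid[0]):
            total += grid[i][j]
    return total

def best_pose(grid, res, candidates, scan):
    """Exhaustive search over candidate poses (a toy stand-in for the paper's
    GMM-based initialization followed by grid-probability tracking)."""
    return max(candidates, key=lambda p: score_pose(grid, res, p, scan))
```

A semantic grid map extends this by keeping one probability layer per semantic class and excluding layers for dynamic classes, which is what makes the scoring robust to moving vehicles and pedestrians.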
|
|
SuWS4 |
Room T4 |
Robot-Assisted Training for Primary Care: How Can Robots Help Train Doctors
in Medical Examinations? |
Workshop |
Chair: Nanayakkara, Thrishantha | Imperial College London |
Co-Chair: Leong, Florence Ching Ying | Imperial College London |
Organizer: Nanayakkara, Thrishantha | Imperial College London |
Organizer: Leong, Florence Ching Ying | Imperial College London |
Organizer: Lalitharatne, Thilina | Imperial College London |
Organizer: He, Liang | Imperial College London |
Organizer: Iida, Fumiya | University of Cambridge |
Organizer: Scimeca, Luca | University of Cambridge |
Organizer: Hauser, Simon | École Polytechnique Fédérale De Lausanne (EPFL) |
Organizer: Hughes, Josie | MIT |
Organizer: Maiolino, Perla | University of Oxford |
|
09:00-17:00, Paper SuWS4.1 | |
WS-2375 Workshop Intro Video (I) |
|
Nanayakkara, Thrishantha | Imperial College London |
|
09:00-17:00, Paper SuWS4.2 | |
WS-2375 Video 1 (I) |
|
Nanayakkara, Thrishantha | Imperial College London |
|
09:00-17:00, Paper SuWS4.3 | |
WS-2375 Video 2 (I) |
|
Nanayakkara, Thrishantha | Imperial College London |
|
09:00-17:00, Paper SuWS4.4 | |
WS-2375 Video 3 (I) |
|
Nanayakkara, Thrishantha | Imperial College London |
|
09:00-17:00, Paper SuWS4.5 | |
WS-2375 Video 4 (I) |
|
Nanayakkara, Thrishantha | Imperial College London |
|
09:00-17:00, Paper SuWS4.6 | |
WS-2375 Video 5 (I) |
|
Nanayakkara, Thrishantha | Imperial College London |
|
09:00-17:00, Paper SuWS4.7 | |
WS-2375 Video 6 (I) |
|
Nanayakkara, Thrishantha | Imperial College London |
|
09:00-17:00, Paper SuWS4.8 | |
WS-2375 Video 7 (I) |
|
Nanayakkara, Thrishantha | Imperial College London |
|
09:00-17:00, Paper SuWS4.9 | |
WS-2375 Video 8 (I) |
|
Nanayakkara, Thrishantha | Imperial College London |
|
09:00-17:00, Paper SuWS4.10 | |
WS-2375 Video 9 (I) |
|
Nanayakkara, Thrishantha | Imperial College London |
|
09:00-17:00, Paper SuWS4.11 | |
>WS-2375 Video 10 (I) |
|
Nanayakkara, Thrishantha | Imperial College London |
Keywords:
Abstract: This video contains 2-minute video presentations of all accepted papers for the RoPat 2020 Workshop at IEEE/RSJ IROS 2020. Full papers and posters are available at thrish.org.
• Yongxuan Tan, “Real-Time 3D Soft-Tissue Deformation Simulation Using Blender and Unity”
• Athanasios Martsopoulos, Rajendra Persad, Stefanos Bolomytis, Thomas L. Hill and Antonia Tzemanaki, “Spatial Rigid/Flexible Dynamic Model of Biopsy and Brachytherapy Needles Under a General Force Field”
• Pilar Zhang Qiu, Oliver Thompson, Yongxuan Tan and Bennet Cobley, “Acoustic Response Analysis of Medical Percussion using Wavelet Transform and Neural Networks”
• Mihan Perera, Sanju Uyanahewa, Pasindu Ranathunga, Thilina Dulantha Lalitharatne, Kanishka Madusanka and Thrishantha Nanayakkara, “Feasibility of using Cartoon Faces for Expressing Pain to be used in a Robotic Patient: A Preliminary Study”
• Yongxuan Tan, Pilar Zhang Qiu, Oliver Thompson and Bennet Cobley, “Design and Implementation of a Robotic Device for Medical Percussion”
• Ashan T. Wanasinghe, W. V. I. Awantha, Pasindu Kavindya, Asitha L. Kulasekera, Damith S. Chathuranga, Bimsara Senanayake, “Towards a Soft Hand Tremor Suppression Device for Primary Care”
|
|
09:00-17:00, Paper SuWS4.12 | |
WS-2375 Video 11 (I) |
|
Nanayakkara, Thrishantha | Imperial College London |
|
09:00-17:00, Paper SuWS4.13 | |
WS-2375 Video 12 (I) |
|
Nanayakkara, Thrishantha | Imperial College London |
|
09:00-17:00, Paper SuWS4.14 | |
WS-2375 Video 13 (I) |
|
Nanayakkara, Thrishantha | Imperial College London |
|
09:00-17:00, Paper SuWS4.15 | |
WS-2375 Video 14 (I) |
|
Nanayakkara, Thrishantha | Imperial College London |
|
SuWS5 |
Room T5 |
Workshop on Animal–robot Interaction |
Workshop |
Chair: Romano, Donato | Scuola Superiore Sant’Anna |
Co-Chair: Stefanini, Cesare | Scuola Superiore Sant'Anna |
Organizer: Stefanini, Cesare | Scuola Superiore Sant'Anna |
Organizer: Romano, Donato | Scuola Superiore Sant’Anna |
|
09:00-17:00, Paper SuWS5.1 | |
>Introduction to the Workshop on Animal–Robot Interaction (I) |
|
Stefanini, Cesare | Scuola Superiore Sant'Anna |
Romano, Donato | Scuola Superiore Sant’Anna |
Keywords:
Abstract: The Workshop on Animal-Robot Interaction is a world first, and aims at introducing a broad range of methodologies and results based on a novel, highly multidisciplinary biomimetic approach, and at providing a well-substantiated vision of future strategic research lines in this field. Within biorobotics and bionics, animal-robot interactive systems represent a fascinating and unique research field, opening up new opportunities for multiple scientific and technological purposes, including investigations of biological structure and function, as well as the design of bioinspired algorithms and artifacts. In these biohybrid dynamic systems, artificial agents are no longer simple dummies: they are accepted as natural agents (i.e. heterospecific or conspecific) by animals. Robots are able to perceive, communicate, and interact/adapt with the animals, activating in the latter selected neuro-behavioral responses, and adjusting their behavior according to that of the animal. Cognitive traits, including perception, learning, memory, and decision making, play an important role in the biological adaptation and conservation of an animal species. Robots can represent advanced allies in studying these behavioral adaptations, since they are fully controllable and their position in the environment can be adjusted, allowing highly standardized and reproducible ethorobotic experimental interactions. This research field represents a paradigm shift in the study of animal behavior, with potential applications in the control of animal populations in agriculture, the improvement of animal farming conditions, and the preservation of wildlife. The aim of this workshop is to introduce and promote the field of animal-robot interaction to a wide and multidisciplinary audience, and in particular to the robotics community. 
It will facilitate communication and exchange of information among roboticists and biologists who want to learn innovative approaches for establishing animal-robot interactions to successfully investigate and control natural-artificial systems, exploiting the synergic contribution of multiple scientific and technological fields. It is also an attempt to help biologists, and in particular zoologists, to shift from traditional ethological methods to the highly advanced approach offered by robotic systems, in order to improve the reliability and reproducibility of their studies from an Open Science perspective. Given the outstanding worldwide reputation of IROS in the field of robotics, IROS will be a key vector contributing to the spread of this novel and promising field of science, providing significant added value to it. This Workshop on Animal-Robot Interaction will represent a significant landmark in introducing the potential of this field to the whole robotics community and in triggering future developments.
|
|
09:00-17:00, Paper SuWS5.2 | |
>Animal-Robot Interaction: Relevant Works at the Organizing Institution (I) |
|
Romano, Donato | Scuola Superiore Sant’Anna |
Keywords:
Abstract: The strong resonance that robotics has had in society over the last decades has also impacted research contributions in animal behavioural ecology. Animal-robot interaction represents a fascinating field of biorobotics and bionics, proposing the use of robotic animal replicas as an advanced method for investigating and controlling animal behaviour. Herein, different case studies carried out at The BioRobotics Institute of Scuola Superiore Sant’Anna (Pisa, Italy) are reported. Innovative approaches to establishing animal-robot interactions successfully enabled the investigation and control of natural-artificial systems, exploiting the synergic contribution of engineering and biology. Several behaviours that play a key role in the energetics and physiology of a species (e.g. aggressive behaviours, courtship displays, and the coalescence of animal aggregations and their location in space) have been modulated, thus potentially affecting the fitness of a species. These results can greatly contribute to the management of natural systems and to the control of animals used as biosensors in the environment, pushing beyond the current state of the art in animal-robot mixed societies as well as in multi-agent systems. We also provided a new paradigm of neuro-robotics by introducing biorobotic artifacts into neuroethological studies, in particular investigations focusing on the laterality of several arthropod species. In addition, the new scientific knowledge provided here can be exploited to design optimized control strategies in artificial systems.
|
|
09:00-17:00, Paper SuWS5.3 | |
>Introducing the Invited Speakers (I) |
|
Romano, Donato | Scuola Superiore Sant’Anna |
Keywords:
Abstract: Herein, we introduce the esteemed Invited Speakers who contributed to the Workshop on Animal-Robot Interaction. We also acknowledge the Endorsers and the Sponsor that supported this Workshop.
|
|
09:00-17:00, Paper SuWS5.4 | |
>Zebrafish-Robot Interactions for Hypothesis-Driven Experiments in Behavioral Neuroscience (I) |
|
Porfiri, Maurizio | New York University Polytechnic School of Engineering |
Romano, Donato | Scuola Superiore Sant’Anna |
Keywords:
Abstract: Zebrafish are gaining momentum as the third millennium laboratory species for the investigation of several functional and dysfunctional biological processes in humans, including the fundamental mechanisms modulating emotional patterns, learning processes, and individual and social response to alcohol and drugs of abuse. Robotics offer a powerful range of theoretical and experimental approaches that can advance our understanding of this animal model. In this talk, we report recent advances on the design of robotics-based platforms to elicit highly-controllable and customizable stimuli for laboratory experiments on zebrafish behavior. We summarize research on model-based control of the behavior of live animals, social learning, and behavioral teleporting.
|
|
09:00-17:00, Paper SuWS5.5 | |
>From Proactive Monitoring to Ecosystem Hacking: The Role of Robots in the Ongoing Ecosystem Crisis (I) |
|
Schmickl, Thomas | University of Graz |
Romano, Donato | Scuola Superiore Sant’Anna |
Keywords:
Abstract: Ecosystems are currently breaking down world-wide; insect species in particular are dramatically disappearing. In order to protect our society, which depends on the ecosystems it is a part of, we have to prevent further decline of bio-diversity. However, damage is already present and crucial “keystone species” are threatened. I have developed a three-step contingency strategy based on biological studies, mathematical modelling and autonomous robotics to react to this crisis. In various collaborative projects, my lab and international partner labs developed technology to monitor ecosystems with robot swarms at large scale and over the long term. We also developed autonomous robots to directly interact with organisms (animals and plants) in order to be capable of rapid intervention, if necessary. We studied the collective swarm behavior of two highly threatened animal groups (honeybees and fish) and, as a proof-of-principle demonstration, we have designed two “robot species” that can infiltrate those swarm systems and coordinate these very different animals with respect to each other. This way we have created, for the first time in history, a novel ecological link between two species by embedding autonomous robots in a small artificially created ecosystem. This shows that robots might offer a viable option to externally stabilize fragile, or even already broken, ecosystems. In my recent research, I try to use robotic devices to turn whole honeybee colonies into bio-hybrid robotic super-organisms, to use them as a novel ecological agent.
|
|
09:00-17:00, Paper SuWS5.6 | |
>Using Biorobots to Investigate Extant and Extinct Animal Locomotion (I) |
|
Ijspeert, Auke | EPFL |
Romano, Donato | Scuola Superiore Sant’Anna |
Keywords:
Abstract: The ability to efficiently move in complex environments is a fundamental property both for animals and for robots, and the problem of locomotion and movement control is an area in which neuroscience, biomechanics, and robotics can fruitfully interact. In this talk, I will present how biorobots and numerical models can be used to explore the interplay of the four main components underlying animal locomotion, namely central pattern generators (CPGs), reflexes, descending modulation, and the musculoskeletal system. Going from lamprey to human locomotion, I will present a series of models that tend to show that the respective roles of these components have changed during evolution with a dominant role of CPGs in lamprey and salamander locomotion, and a more important role for sensory feedback and descending modulation in human locomotion. I will also present a recent project showing how robotics can provide scientific tools for palaeontology.
|
|
09:00-17:00, Paper SuWS5.7 | |
>A Bio-Hybrid System to Shape Natural Plants (I) |
|
Hamann, Heiko | University of Luebeck |
Romano, Donato | Scuola Superiore Sant’Anna |
Keywords:
Abstract: Bio-hybrid systems can be formed with animals but also natural plants. In the EU-funded project flora robotica, we have developed novel methods to guide the growth and motion of plants. I present experiments on three scales: one robot controlling one plant with high precision, eight robots forming a simple pattern of eight plants, and many robots steering the biomass of many plants on room-scale. This new methodology may help to innovate architecture and how we build our future cities.
|
|
09:00-17:00, Paper SuWS5.8 | |
>Animal‐Robot Interactions: Introducing Robots into Groups of Weakly Electric Fish (I) |
|
von der Emde, Gerhard | University of Bonn |
Romano, Donato | Scuola Superiore Sant’Anna |
Keywords:
Abstract: In this presentation, I will talk about how to integrate a biomimetic fish robot into groups of weakly electric fish. First, I will introduce the fish, their biology, and their way of communicating with electric signals. Then I will describe our experiments on the integration of robots into groups of electric fish, first using very simple robots and then increasingly sophisticated robots that can interact with the fish. I will point out that for integration into a group, the robot needs to engage in electro-communication, while its visual appearance can be neglected. For full acceptance as a conspecific by the live fish, the robot has to be able to interact with the animals. Integration of the robot into the group is best if it interacts both electrically and through locomotion.
|
|
09:00-17:00, Paper SuWS5.9 | |
>Behavioural and Life-History Responses of Mosquitofish to Biologically Inspired and Interactive Robotic Predators (I) |
|
Polverino, Giovanni | The University of Western Australia |
Romano, Donato | Scuola Superiore Sant’Anna |
Keywords:
Abstract: Human activities have resulted in the rapid redistribution of the world’s biota, assisting some species to colonize regions far beyond their natural range. Consequences of these biological introductions have been severe, with invasive species disrupting ecological communities, driving population declines and species extinctions, and costing billions of dollars every year globally. In freshwater ecosystems, the invasive mosquitofish is one of the major threats to biodiversity, and how to eradicate it remains an urgent environmental challenge. Robotics is emerging as a promising tool to study animal behaviour and animal invasions, but whether and how robots can manipulate mosquitofish behaviour and mitigate its ecological success remain unknown. In this study, we tested whether behavioural and life-history responses of invasive mosquitofish can be modulated through a robotic predator whose visual appearance and locomotion were inspired by native mosquitofish predators. We varied the degree of biomimicry of the robot’s motion, and observed that real-time interactions at varying swimming speeds triggered stronger antipredator responses in mosquitofish than simpler movement patterns by the robot—swimming on predetermined trajectories and/or at constant speed. Remarkably, we found that non-lethal costs of predation threat extend far beyond behaviour; a 15-min-per-week exposure to a robotic predator elicited stress-related physiological changes in mosquitofish associated with loss of energy reserves and compromised body condition, which are likely to impair reproduction and lifespan. This evidence represents a paradigm shift for uncovering non-lethal consequences of predation threat with the use of state-of-the-art robotic tools, and opens the door for future endeavours to control mosquitofish in the wild.
|
|
09:00-17:00, Paper SuWS5.10 | |
>Socially Competent Robots: Real-Time Adaptation Improves Leadership Performance in Groups of Live Fish (I) |
|
Landgraf, Tim | Free University Berlin |
Romano, Donato | Scuola Superiore Sant’Anna |
Keywords:
Abstract: Collective motion is commonly modeled with simple interaction rules between agents. Yet in nature, numerous observables vary within and between individuals and it remains largely unknown how animals respond to this variability, and how much of it may be the result of social responses. Here, we hypothesize that Guppies (Poecilia reticulata) respond to avoidance behaviors of their shoal mates and that "socially competent" responses allow them to be more effective leaders. We test this hypothesis in an experimental setting in which a robotic Guppy, called RoboFish, is programmed to adapt to avoidance reactions of its live interaction partner. We compare the leadership performance between socially competent robots and two non-competent control behaviors and find that 1) behavioral variability itself appears attractive and that socially competent robots are better leaders that 2) require fewer approach attempts to 3) elicit longer average following behavior than non-competent agents. This work provides evidence that social responsiveness to avoidance reactions plays a role in the social dynamics of guppies. We showcase how social responsiveness can be modeled and tested directly embedded in a living animal model using adaptive, interactive robots.
|
|
09:00-17:00, Paper SuWS5.11 | |
>Hybrid Cognitive Agents: Dynamical Modeling of the Intelligent Behavior of Animals and Robots (I) |
|
Long, John | Vassar College |
Aaron, Eric | Colby College |
Romano, Donato | Scuola Superiore Sant’Anna |
Keywords:
Abstract: To facilitate the construction and analysis of animal-robot systems, we propose to model animals and robots on equal footing: as hybrid cognitive agents. This modeling uses Aaron’s framework for intelligent behavior modeling, where goal-directed behavior of embodied agents emerges from closed-loop interactions with the environment, including other agents. The relevant physical and cognitive components of the agents’ systems are represented and integrated in a unifying dynamical system model, in order to underwrite reactive processes, deliberative processes, and learning. By treating both fish and robots as hybrid cognitive agents, we can model their behavioral interactions explicitly as the on-going consequence of internal states such as goals and intentions.
|
|
09:00-17:00, Paper SuWS5.12 | |
>Biomimetic Fish Robots As a Tool in Animal Behavior Research (I) |
|
Bierbach, David | Humboldt Universität Zu Berlin |
Romano, Donato | Scuola Superiore Sant’Anna |
Keywords:
Abstract: Biomimetic robots are an extremely effective tool for studying animal social behavior, as they can help elucidate the characteristics necessary to engage and interact with their live animal counterparts. I will outline how differentially controlled biomimetic robots can be used to answer questions about how animals differ in social responsiveness, what characteristics define leadership, and how individual differences in movement speed affect collective behavior. Further, I will show some of the prerequisites that robots need in order to serve as a tool for the study of animal social behavior, and how biomimetic robots help reduce the number of live animals used in experiments.
|
|
09:00-17:00, Paper SuWS5.13 | |
WS-2379 Presentation 12 (I) |
|
Romano, Donato | Scuola Superiore Sant’Anna |
|
09:00-17:00, Paper SuWS5.14 | |
>Using an Open-Source Robotic Platform to Investigate Social and Hunting Behavior in Banded Archerfish (I) |
|
Brown, Alexander | Lafayette College |
Romano, Donato | Scuola Superiore Sant’Anna |
Keywords:
Abstract: Banded archerfish (Toxotes jaculatrix) often hunt by spitting at food items above the surface of the water, which requires high visual acuity in addition to precise control of body position and orientation. Hunting behavior in archerfish has been shown to have a social component: kleptoparasitism is common, and “shooting” behavior is modulated in the presence of an audience. At least one study has reported that young archerfish learn to hunt by watching conspecifics. In this presentation, we explore the possibility of using a robotic lure to learn more about the social cues that govern archerfish hunting behavior. We summarize the development of an extensible, open-source robotic platform that can be used for a host of fish-robot interaction experiments, including those that require the type of motion needed to replicate archerfish hunting behavior. We summarize the results of a pilot study showing that archerfish swimming behavior can be influenced by a swimming robotic lure, demonstrate the platform’s flexibility in being adapted for an experiment that includes replication of archerfish hunting, and show preliminary results from an experiment-in-progress that seeks to determine what influence a “hunting” robotic lure has on archerfish social behavior.
|
|
09:00-17:00, Paper SuWS5.15 | |
>FishSim Animation Toolchain: An Innovative Tool Pushing the Boundaries of Studies on Fish Behavior (I) |
|
Gierszewski, Stefanie | University of Siegen |
Romano, Donato | Scuola Superiore Sant’Anna |
Keywords:
Abstract: Here I present the results of an interdisciplinary collaboration between biologists and computer scientists at the University of Siegen (Germany). Using an innovative approach combining 3D computer animation and a robotic system, we developed the free and open-source FishSim Animation Toolchain (short: FishSim). FishSim combines different tools for the design and animation of virtual fish stimuli, as well as their presentation on screen during behavioral experiments. Even though FishSim was specifically tailored for the study of mate-choice copying in sailfin mollies (Poecilia latipinna), its framework can be adapted to other fish species and research questions as well. Using examples from our own research, we show how visual information (e.g., morphology and behavior) may be manipulated during experiments, and we demonstrate the high degree of control and standardization that can be achieved. Further, FishSim enables closed-loop interactions between virtual and live fish. By implementing a 3D tracking system, we show that a virtual male can follow the position of a live focal female on screen and perform courtship behavior according to predefined criteria. Overall, closed-loop computer animation provides a more natural stimulus experience and paves the way for the study of social communication, in which a virtual animal may respond in real-time to the behavior expressed by a live counterpart.
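The closed-loop behaviour described above, in which a virtual male follows a tracked live female on screen, can be sketched as a simple proportional follow rule. Everything here (the gain, the preferred distance, the 2D screen coordinates) is an illustrative assumption, not a FishSim parameter:

```python
import math

def follow_step(male, female, preferred_dist=60.0, gain=0.2):
    """One closed-loop update: move the virtual male toward a point
    preferred_dist units short of the tracked female position.
    Gain and distance are illustrative, not FishSim defaults."""
    dx, dy = female[0] - male[0], female[1] - male[1]
    dist = math.hypot(dx, dy)
    if dist < 1e-9:
        return male  # already at the female's position; hold
    # Target point on the male-female line, preferred_dist short of the female.
    tx = female[0] - preferred_dist * dx / dist
    ty = female[1] - preferred_dist * dy / dist
    # Proportional step toward the target (a simple P controller).
    return (male[0] + gain * (tx - male[0]), male[1] + gain * (ty - male[1]))

# Simulate the virtual male converging behind a stationary tracked female.
male = (0.0, 0.0)
female = (300.0, 100.0)
for _ in range(200):
    male = follow_step(male, female)
```

In the real toolchain the female position would come from the 3D tracking system each frame, and the courtship criteria would gate when the follow behaviour is active.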
|
|
09:00-17:00, Paper SuWS5.16 | |
>Computational and Robotic Modelling Reveal Parsimonious Combinations of Interactions between Individuals in Schooling Fish (I) |
|
Liu, Lei | University of Shanghai for Science and Technology |
Romano, Donato | Scuola Superiore Sant’Anna |
Keywords:
Abstract: How do fish integrate and combine information from multiple neighbors when swimming in a school? What is the minimum amount of information about their environment needed to coordinate their motion? To answer these questions, we combine experiments with computational and robotic modeling to test several hypotheses about how individual fish could integrate and combine information on the behavior of their neighbors when swimming in groups. Our research shows that, in robot simulations, using information from two neighbors is sufficient to qualitatively reproduce the collective motion patterns observed in groups of fish. Remarkably, our results also show that it is possible to obtain group cohesion and coherent collective motion over long periods of time even when individuals interact only with their most influential neighbor, that is, the one exerting the largest effect on their heading variation.
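The "most influential neighbor" rule described above can be sketched as follows; the influence score used here (heading difference weighted by inverse distance) is a stand-in assumption, not the measure used in the study:

```python
import math

def _heading_error(a, b):
    """Signed smallest angle from heading a to heading b (radians)."""
    return math.atan2(math.sin(b - a), math.cos(b - a))

def most_influential_neighbor(focal, neighbors):
    """Pick the neighbor with the largest hypothetical influence on the
    focal fish's heading. The score (heading difference / distance) is an
    illustrative proxy for 'effect on heading variation'."""
    def influence(nb):
        dist = math.hypot(nb["x"] - focal["x"], nb["y"] - focal["y"])
        return abs(_heading_error(focal["heading"], nb["heading"])) / max(dist, 1e-9)
    return max(neighbors, key=influence)

def update_heading(focal, neighbors, align_rate=0.3):
    """Relax the focal heading toward its most influential neighbor's heading."""
    nb = most_influential_neighbor(focal, neighbors)
    return focal["heading"] + align_rate * _heading_error(focal["heading"], nb["heading"])

focal = {"x": 0.0, "y": 0.0, "heading": 0.0}
school = [{"x": 1.0, "y": 0.0, "heading": 0.5},   # close, mildly misaligned
          {"x": 5.0, "y": 0.0, "heading": 1.5}]   # far, strongly misaligned
```

Iterating such an update over all agents is what lets one test, in simulation, whether interacting with a single well-chosen neighbor suffices for cohesion.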
|
|
09:00-17:00, Paper SuWS5.17 | |
WS-2379 Presentation 16 (I) |
|
Romano, Donato | Scuola Superiore Sant’Anna |
|
SuWS6 |
Room T6 |
New Advances in Soft Robots Control |
Workshop |
Chair: Monje, Concepción A. | University Carlos III of Madrid |
Co-Chair: Falotico, Egidio | Scuola Superiore Sant'Anna |
Organizer: Monje, Concepción A. | University Carlos III of Madrid |
Organizer: Falotico, Egidio | Scuola Superiore Sant'Anna |
|
09:00-17:00, Paper SuWS6.1 | |
>IROS 2020 Workshop on New Advances in Soft Robots Control (I) |
|
Monje, Concepción A. | University Carlos III of Madrid |
Keywords:
Abstract: The emerging field of soft robotics is looking at innovative ways to create and apply robotic technology in our lives. It is a relatively new domain within robotics, but one with great potential to change how we relate to robots and how they are used. The term "soft robot" describes a system that is inherently soft, yielding complex dynamics and a passive compliance similar to its biological counterpart. As this is a new design paradigm for the hardware, the methods and algorithms used to prescribe a desired dynamics to the robotic system have changed as well. Classical control approaches in robotics are nonlinear and model-based. However, the highly complex and nonlinear models required for a soft robotic system make this approach difficult, and classical methods therefore seem to reach their limits when faced with a soft robot. Other methods that appear more useful in this context have consequently been applied, such as learning-based control algorithms, model-free approaches like bang-bang control, control algorithms motivated by neuroscience, or morphological computation. These methods add new perspectives to the well-known model-based approach. We want to provide an inter- and cross-disciplinary platform to discuss techniques, conventional as well as novel, that are currently applied and developed, and to discuss limitations, potentials and future directions.
|
|
09:00-17:00, Paper SuWS6.2 | |
>Soft Robotics and Morphological Computation (I) |
|
Monje, Concepción A. | University Carlos III of Madrid |
Laschi, Cecilia | Scuola Superiore Sant'Anna |
Keywords:
Abstract: Soft robotics is the use of soft materials or compliant structures in robotics. Soft robots pose interesting challenges for control, which can use model-based techniques or take model-free, learning-based approaches. Does a soft body only represent a problem for control? Or does it instead help simplify control? According to the embodied intelligence view of robot control and behaviour, a soft body allows behaviour to emerge from the interaction with the environment, thus simplifying control. Morphological computation refers to the part of control performed by the body itself, according to its physical characteristics. Locomotion is a good example of embodied intelligence and an ideal case for implementing morphological computation. Underwater legged locomotion is shown by marine animals with soft or compliant limbs. The U-SLIP model describes these animal locomotion patterns well and helps in designing robots with self-stabilizing underwater locomotion patterns. SILVER and SILVER2 are underwater robots that can walk on the seabed, in lab and real settings. In conclusion, morphological computation can help simplify control, and learning-based approaches to control can better encode morphological computation in soft robots.
|
|
09:00-17:00, Paper SuWS6.3 | |
>Fractional Order Control of a Soft Robotic Neck (I) |
|
Monje, Concepción A. | University Carlos III of Madrid |
Keywords:
Abstract: In this work we describe the fractional order control approaches tested on a soft robotic neck with two Degrees of Freedom (DOF), able to achieve flexion, extension, and lateral bending movements similar to those of a human neck. The design is based on a cable-driven mechanism consisting of a NinjaFlex link acting as a cervical spine and three servomotor-actuated tendons that allow the neck to reach all desired inclinations and orientations. The prototype was manufactured using a 3D printer. Two control approaches are proposed and tested experimentally: a motor position approach using encoder feedback and a tip position approach using Inertial Measurement Unit (IMU) feedback, both applying fractional order controllers. The platform's operation is tested under different load configurations so that the robustness of the system can be checked. Besides, results from the integration of the neck into the real humanoid robot TEO are presented, together with those from the replacement of the servomotors by Shape Memory Alloy (SMA) actuators.
|
|
09:00-17:00, Paper SuWS6.4 | |
>A Gait Pattern Generator for Closed-Loop Position Control of a Soft Walking Robot (I) |
|
Monje, Concepción A. | University Carlos III of Madrid |
Schiller, Lars | Hamburg University of Technology |
Keywords:
Abstract: This presentation describes an approach to control the position in Cartesian space of a gecko-inspired soft robot. By formulating constraints under the assumption of constant curvature, the dimension of the robot's joint space is reduced from nine to two. The remaining two generalized coordinates describe, respectively, the walking speed and the rotational speed of the robot, and define the so-called velocity space. By means of simulations and experimental validation, the direct kinematics of the entire velocity space (its mapping into the Cartesian task space) is approximated by a bivariate polynomial. Based on this, an optimization problem is formulated that recursively generates the optimal references to reach a given target position in task space. Finally, we show in simulation and experiment that the robot can master arbitrary obstacle courses by making use of this gait pattern generator.
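The core of the approach, fitting a bivariate polynomial that maps the two-dimensional velocity space to per-cycle task-space displacement, can be sketched as below. The synthetic data-generating mapping, the quadratic basis, and all coefficients are illustrative assumptions standing in for the simulated and measured gait data:

```python
import numpy as np

def design_matrix(v, w):
    """Bivariate quadratic basis for the velocity-space -> task-space fit."""
    return np.stack([np.ones_like(v), v, w, v * w, v**2, w**2], axis=-1)

# Synthetic stand-in for measured per-gait-cycle displacement (dx, dtheta);
# in the talk these samples come from simulations and experiments.
rng = np.random.default_rng(0)
v = rng.uniform(0.0, 1.0, 200)    # walking-speed coordinate
w = rng.uniform(-1.0, 1.0, 200)   # rotational-speed coordinate
dx = 0.08 * v - 0.01 * v * w**2 + rng.normal(0.0, 1e-3, 200)
dth = 0.5 * w + 0.05 * v * w + rng.normal(0.0, 1e-3, 200)

# Least-squares fit of both task-space components over the velocity space.
A = design_matrix(v, w)
coef_dx = np.linalg.lstsq(A, dx, rcond=None)[0]
coef_dth = np.linalg.lstsq(A, dth, rcond=None)[0]

def predict(vq, wq):
    """Approximate per-cycle displacement for a velocity-space command."""
    a = design_matrix(np.array([vq]), np.array([wq]))
    return (a @ coef_dx)[0], (a @ coef_dth)[0]
```

A reference generator could then search this fitted map for the velocity command whose predicted displacement best steers the robot toward its target, which is the role of the recursive optimization step mentioned above.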
|
|
09:00-17:00, Paper SuWS6.5 | |
>Toward Understanding Design Principle Underlying Versatile Animal Behaviors (I) |
|
Monje, Concepción A. | University Carlos III of Madrid |
Fukuhara, Akira | Tohoku University |
Keywords:
Abstract: To explore the next challenge of soft robotics, this presentation looks at soft-bodied systems in nature, that is, animals. Animals frequently exploit their flexible bodies and exhibit adaptive and versatile behaviors in real-time under real-world constraints. To understand the mechanism underlying this versatility, our research group recently identified a new perspective, i.e., the functional polysemy of animal body parts. For example, animals use their limbs as locomotor organs in some situations and as manipulators in others. Understanding the design principle of functional polysemy underlying versatile animal behaviors could offer a new perspective on the design of versatile soft robot systems. This presentation shows two case studies of functional polysemy conducted through fruitful interactions between robotics and anatomy.
|
|
09:00-17:00, Paper SuWS6.6 | |
>Self-Healing Soft Robots (I) |
|
Monje, Concepción A. | University Carlos III of Madrid |
Terryn, Seppe | Vrije Universiteit Brussel (VUB) |
Keywords:
Abstract: The need for robots that can safely interact with humans has led to the development of the novel field of “soft robotics”. In soft robots, compliance is integrated through flexible elements, which are in many cases elastomeric membranes. Because of their intrinsic flexibility, these robots are suitable for applications in uncertain, dynamic task environments, including safe human-robot interactions. However, the soft polymers used are highly susceptible to damage, such as cuts and perforations caused by sharp objects present in the uncontrolled and unpredictable environments these soft robots operate in. In contrast with stiff robots, in soft robotics a large part of the robot’s body experiences dynamic strains. As a result, fatigue occurs throughout the entire soft robotic body. In many soft robotic designs, weak interfaces that rely mainly on secondary interactions are created by multi-material designs or multistage molding. After a limited number of actuation cycles, interfacial de-bonding leads to delamination and eventually failure. These damaging conditions limit the lifetime of soft robotic components. Most flexible polymers currently used in soft robots are irreversible elastomeric networks, which cannot be recycled; damaged parts are therefore disposed of after a limited life cycle as non-recyclable waste. In our research we propose to increase the lifetime of soft robotic components by constructing them out of self-healing polymers, more specifically out of reversible Diels-Alder (DA) networks. Based on healing capacities found in nature, these polymers are given the ability to heal damage. As an additional benefit, these polymers are completely recyclable and can pave the way towards sustainable, ecological robotics. A variety of DA networks, varying in concentration, functionality and (non-)stoichiometric ratio of the maleimide and furan reactive components, was synthesized and characterized. Knowledge of the direct effect of these three network design parameters on the material properties allows the design and preparation of DA networks with customized mechanical properties for dedicated applications. A new manufacturing technique, “folding & covalently bonding”, that exploits the healing ability was invented. In addition, fused filament fabrication was developed, which allows DA networks to be 3D printed into objects with isotropic mechanical properties. These novel manufacturing techniques were used to develop the first healable soft robotic components, including soft grippers, soft robotic hands and artificial muscles. These components, some of which consist of multiple DA materials, were designed through finite element modeling, and their mechanical performance was characterized using customized dedicated test benches. It was experimentally validated that the healing ability of these components allows microscopic and macroscopic damage to be healed with near-complete recovery of the initial characteristics.
|
|
09:00-17:00, Paper SuWS6.7 | |
>Soft Upper Limb Exoskeletons (I) |
|
Monje, Concepción A. | University Carlos III of Madrid |
Copaci, Dorin Sabin | Universidad Carlos III De Madrid |
Keywords:
Abstract: Robotic exoskeletons have attracted enormous research interest in recent years for rehabilitation therapies. However, despite the extensive research in this field, there are still open issues, especially in the development of upper-limb devices, that limit their implementation in real clinical practice. One of the limitations in the development of wearable soft robotic devices lies in the development of lightweight actuators. Thanks to their flexibility, high force-to-weight ratio and small volume, SMA-based actuators can be considered a good actuation solution for soft robotic applications, and especially for rehabilitation devices. The devices shown in this presentation, and the developments carried out by our research group, demonstrate the real possibilities of applying soft robotics to rehabilitation and the advantages these devices offer patients and therapists.
|
|
09:00-17:00, Paper SuWS6.8 | |
>Closing the Loop with Embedded Soft Sensors (I) |
|
Monje, Concepción A. | University Carlos III of Madrid |
George Thuruthel, Thomas | Bio-Inspired Robotics Lab, University of Cambridge |
Keywords:
Abstract: Providing robots with a sense of touch is one of the biggest challenges hindering the progress of robots into our everyday lives. Recent developments in soft robotic technologies and artificial intelligence are a promising solution to this problem. This talk introduces the technologies behind the development of soft robotic sensors, their applications and challenges. A brief insight into current algorithms for their design optimization and data processing is provided, concluding with future challenges and opportunities in the development of closed-loop controllers with embedded strain sensors.
|
|
09:00-17:00, Paper SuWS6.9 | |
>Trajectory Tracking of a One-Link Flexible Arm Via Iterative Learning Control (I) |
|
Monje, Concepción A. | University Carlos III of Madrid |
Pierallini, Michele | Centro Di Ricerca E. Piaggio - Università Di Pisa |
Keywords:
Abstract: The perception of robot flexibility has undergone a radical change over the last twenty years. Historically, link elasticity was considered an element jeopardizing the correct execution of the desired task. For this reason, controllers aimed at removing the system compliance by means of high-gain feedback loops, with the result of stiffening the robot. However, with the development of soft robots, physical elasticity became one of the main solutions for obtaining safe interactions with unstructured environments, human beings, and other robots. Past approaches to classical control problems therefore need to be re-designed. Nowadays, the goal of controllers has shifted toward approaches that provide good tracking performance while preserving and exploiting the robot's elasticity. Following this idea, we present an iterative learning control algorithm for trajectory tracking with a flexible arm. The proposed method preserves the robot's compliant behavior while achieving good tracking performance. We show how the proposed solution can be applied to systems modeled with a generic number of passive joints driven by a single actuated joint, and we provide conditions, based on the system dynamics, that ensure the applicability of the iterative algorithm. Finally, we validate the theoretical results with simulations and experimental tests.
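A minimal sketch of iterative learning control for trajectory tracking, assuming a toy first-order plant rather than the flexible-arm model from the talk (plant parameters, learning gain, and horizon are all illustrative): a P-type update u_{k+1}[t] = u_k[t] + L e_k[t+1] is repeated over trials until the tracking error vanishes.

```python
import numpy as np

# Toy first-order plant y[t+1] = a*y[t] + b*u[t]; a and b are illustrative
# stand-ins for the flexible-arm dynamics.
a, b = 0.5, 0.5
T = 50
ref = np.sin(np.linspace(0.0, np.pi, T))  # desired trajectory, ref[0] = 0

def rollout(u):
    """Simulate one trial of the plant under the input sequence u."""
    y = np.zeros(T)
    for t in range(T - 1):
        y[t + 1] = a * y[t] + b * u[t]
    return y

# P-type ILC: after each trial, correct the input with the measured error.
L = 1.0
u = np.zeros(T)
errors = []
for _ in range(30):
    e = ref - rollout(u)
    errors.append(np.abs(e).max())
    u[:-1] += L * e[1:]   # shift by one step to match the input-output delay
```

Because the correction acts only between trials, no stiffening feedback is applied during motion, which is the sense in which ILC can preserve compliant behavior while the tracking error shrinks trial after trial.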
|
|
09:00-17:00, Paper SuWS6.10 | |
>Machine Learning Controllers for Continuum and Soft Manipulators (I) |
|
Monje, Concepción A. | University Carlos III of Madrid |
Falotico, Egidio | Scuola Superiore Sant'Anna |
Keywords:
Abstract: A novel subdomain of continuum manipulators, referred to as soft robotic manipulators, has grown rapidly in the past decade, since roboticists found inspiration in biological organisms such as elephant trunks and octopus arms. This led to a new range of continuum manipulators made of soft materials with the ability to undergo large deformations. The deformability of the soft material offers compliance, which facilitates safe human-robot interaction compared to rigid counterparts. These desirable characteristics are the fundamental reason behind the rapidly increasing demand in industrial, surgical, and assistive applications. However, long-term success in the practical application of these systems depends on the development of real-time kinematic and/or dynamic controllers that facilitate fast, reliable, accurate, and energy-efficient control. This is nontrivial because, unlike rigid manipulators, whose movement can be specified by three translations and three rotations, elastic deformation of soft robotic manipulators results in virtually infinite degrees-of-freedom motions (bending, extension, contraction, torsion, buckling, etc.). In addition, the material properties exhibit nonlinear characteristics such as compliance and hysteresis that make the model of these manipulators very complex. Machine learning techniques represent a valid approach for the control of these artifacts, since the robot model can be derived from data. The focus is on kinematic and dynamic controllers relying on supervised or reinforcement learning networks, developed and tested on soft and braided-structure continuum manipulators. The results demonstrate the effectiveness of this approach, which needs to be explored further to extend its applicability to real-world settings and human-robot interaction tasks.
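The idea of deriving the robot model from data can be sketched with a toy planar two-segment constant-curvature arm: sample the forward model, then use the samples as a nearest-neighbour "learned" inverse kinematics. The arm geometry, sampling ranges, and lookup scheme are illustrative assumptions, far simpler than the supervised or reinforcement learning networks discussed in the talk:

```python
import math
import numpy as np

def forward(k1, k2, seg_len=0.1):
    """Tip position of a planar two-segment constant-curvature arm; each
    segment bends with curvature k_i over arc length seg_len. A toy
    stand-in for a soft-manipulator model."""
    x, y = 0.0, 0.0
    theta = math.pi / 2  # the arm initially points straight up
    for k in (k1, k2):
        if abs(k) < 1e-9:  # straight segment
            x += seg_len * math.cos(theta)
            y += seg_len * math.sin(theta)
        else:              # circular arc of angle k*seg_len
            phi = k * seg_len
            x += (math.sin(theta + phi) - math.sin(theta)) / k
            y += (math.cos(theta) - math.cos(theta + phi)) / k
            theta += phi
    return np.array([x, y])

# "Learn" the kinematics from data: sample the forward model, then answer
# inverse queries by nearest-neighbour lookup over the samples.
rng = np.random.default_rng(1)
samples = rng.uniform(-10.0, 10.0, size=(2000, 2))   # curvature commands
tips = np.array([forward(k1, k2) for k1, k2 in samples])

def inverse(target):
    """Return the sampled curvature pair whose tip lies closest to target."""
    return samples[np.argmin(np.linalg.norm(tips - target, axis=1))]
```

On a real soft arm the samples would be measured rather than simulated, and a regression or policy network would replace the lookup, but the principle of replacing an intractable analytic model with data is the same.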
|
|
SuWS7 |
Room T7 |
Autonomous System in Medicine: Current Challenges in Design, Modeling, Perception, Control and Applications |
Workshop |
Chair: Chen, Yue | University of Arkansas |
Co-Chair: Su, Hang | Politecnico Di Milano |
Organizer: Su, Hang | Politecnico Di Milano |
Organizer: Chen, Yue | University of Arkansas |
Organizer: Guo, Jing | Guangdong University of Technology |
Organizer: Faragasso, Angela | The University of Tokyo |
Organizer: Yu, Haoyong | National University of Singapore |
Organizer: De Momi, Elena | Politecnico Di Milano |
|
09:00-17:00, Paper SuWS7.1 | |
>WS-2399 Workshop Intro Video (I) |
|
Chen, Yue | University of Arkansas |
Su, Hang | Politecnico Di Milano |
Keywords:
Abstract: Autonomous System in Medicine: Current Challenges in Design, Modeling, Perception, Control and Applications IROS 2020 Full-day Workshop October 25, 2020 Introduction Video
|
|
09:00-17:00, Paper SuWS7.2 | |
>Novel Body-Mounted MR-Compatible Robot for Shoulder Arthrography and Back Pain (I) |
|
Cleary, Kevin | Children's National Medical Center |
Chen, Yue | University of Arkansas |
Keywords:
Abstract: Title: Novel Body-Mounted MR-compatible Robot for Shoulder Arthrography and Back Pain Speaker Name: Kevin Cleary
|
|
09:00-17:00, Paper SuWS7.3 | |
>Flexible, Patient-Specific Robotic Systems for Surgical Interventions (I) |
|
Chen, Yue | University of Arkansas |
Keywords:
Abstract: Title: Flexible, Patient-Specific Robotic Systems for Surgical Interventions Speaker Name: Jaydev P. Desai
|
|
09:00-17:00, Paper SuWS7.4 | |
>Safe Robot-Assisted Retinal Surgery (I) |
|
Chen, Yue | University of Arkansas |
Keywords:
Abstract: Title: Safe Robot-assisted Retinal Surgery Speaker Name: Iulian Iordachita
|
|
09:00-17:00, Paper SuWS7.5 | |
>Diagnosis System for Post-Stroke Patients (I) |
|
Chen, Yue | University of Arkansas |
Keywords:
Abstract: Title: Diagnosis System for Post-stroke Patients Speaker Name: Qi An
|
|
09:00-17:00, Paper SuWS7.6 | |
>Autonomy in Robotic Colonoscopy (I) |
|
Chen, Yue | University of Arkansas |
Keywords:
Abstract: Title: Autonomy in Robotic Colonoscopy Speaker Name: Pietro Valdastri
|
|
09:00-17:00, Paper SuWS7.7 | |
>Robot-Clinician Collaboration for Semi-Autonomous Computer-Integrated Medicine (I) |
|
Chen, Yue | University of Arkansas |
Keywords:
Abstract: Title: Robot-Clinician Collaboration for Semi-Autonomous Computer-Integrated Medicine Speaker Name: Mahdi Tavakoli
|
|
09:00-17:00, Paper SuWS7.8 | |
>Visual and Tactile Sensing for Robotic-Assisted Surgery (I) |
|
Chen, Yue | University of Arkansas |
Keywords:
Abstract: Title: Visual and Tactile Sensing for Robotic-assisted Surgery Speaker Name: Shan Luo
|
|
09:00-17:00, Paper SuWS7.9 | |
>To Be Confirmed (I) |
|
Chen, Yue | University of Arkansas |
Keywords:
Abstract: Title: To be confirmed Speaker Name: Robert Webster
|
|
09:00-17:00, Paper SuWS7.10 | |
>To Be Confirmed (I) |
|
Chen, Yue | University of Arkansas |
Keywords:
Abstract: Title: To be confirmed Speaker Name: Caleb Rucker
|
|
09:00-17:00, Paper SuWS7.11 | |
>To Be Confirmed (I) |
|
Chen, Yue | University of Arkansas |
Keywords:
Abstract: Title: To be confirmed Speaker Name: Nobuhiko Hata
|
|
09:00-17:00, Paper SuWS7.12 | |
>To Be Confirmed (I) |
|
Chen, Yue | University of Arkansas |
Keywords:
Abstract: Title: To be confirmed Speaker Name: Paolo Fiorini
|
|
09:00-17:00, Paper SuWS7.13 | |
>High-Precision Direct Cell Injection Robot in MRI (I) |
|
Chen, Yue | University of Arkansas |
Keywords:
Abstract: Title: High-Precision Direct Cell Injection Robot in MRI Speaker Name: Jun Ueda
|
|
09:00-17:00, Paper SuWS7.14 | |
>To Be Confirmed (I) |
|
Chen, Yue | University of Arkansas |
Keywords:
Abstract: Title: To be confirmed Speaker Name: Sarthak Misra
|
|
09:00-17:00, Paper SuWS7.15 | |
>To Be Confirmed (I) |
|
Chen, Yue | University of Arkansas |
Keywords:
Abstract: Title: To be confirmed Speaker Name: Yue Chen
|
|
SuWS8 |
Room T8 |
MIT MiniCheetah Workshop |
Workshop |
Chair: Kim, Sangbae | Massachusetts Institute of Technology |
Co-Chair: Wensing, Patrick M. | University of Notre Dame |
Organizer: Kim, Sangbae | Massachusetts Institute of Technology |
Organizer: Wensing, Patrick M. | University of Notre Dame |
Organizer: Kim, Inhyeok | NAVER LABS Corp |
|
09:00-17:00, Paper SuWS8.1 | |
>WS-2429 Workshop Intro Video (I) |
|
Kim, Sangbae | Massachusetts Institute of Technology |
Keywords:
Abstract: The MIT Mini-Cheetah workshop aims to address critical challenges and opportunities for legged robot control architectures by sharing extremely capable platforms. Through tight collaboration with MIT Mini-Cheetah teams, the workshop will compile existing legged robot controllers and collectively identify common challenges, missing tools, and architectural insights that have the broadest promise to move the field forward. The workshop will start with the introduction of a collaborative research program using the shared Mini-Cheetah hardware and open-source software for a quadruped simulator/controller. Although the contents of the workshop will be centered around controllers developed for quadrupedal robots, we believe the topics are highly relevant for any future or current collaboration using a platform that integrates joint controllers, whole-body control, model-based optimization for predictive control, and vision-based planning. The Mini-Cheetah software is an open-source package and is applicable to many legged robot research programs. In addition, we aim to discuss the MIT Mini-Cheetah Loan Program to further grow the community centered around the MIT Mini-Cheetah. Application processes for the loan program will be announced at the workshop.
|
|
09:00-17:00, Paper SuWS8.2 | |
>WS-2429 Video 1 (I) |
|
Kim, Sangbae | Massachusetts Institute of Technology |
Keywords:
Abstract: ABSTRACT
|
|
09:00-17:00, Paper SuWS8.3 | |
>WS-2429 Video 2 (I) |
|
Kim, Sangbae | Massachusetts Institute of Technology |
Keywords:
Abstract: ABSTRACT
|
|
09:00-17:00, Paper SuWS8.4 | |
>WS-2429 Video 3 (I) |
|
Kim, Sangbae | Massachusetts Institute of Technology |
Keywords:
Abstract: ABSTRACT
|
|
09:00-17:00, Paper SuWS8.5 | |
>WS-2429 Video 4 (I) |
|
Kim, Sangbae | Massachusetts Institute of Technology |
Keywords:
Abstract: ABSTRACT
|
|
09:00-17:00, Paper SuWS8.6 | |
>WS-2429 Video 5 (I) |
|
Kim, Sangbae | Massachusetts Institute of Technology |
Keywords:
Abstract: ABSTRACT
|
|
09:00-17:00, Paper SuWS8.7 | |
>WS-2429 Video 6 (I) |
|
Kim, Sangbae | Massachusetts Institute of Technology |
Keywords:
Abstract: ABSTRACT
|
|
09:00-17:00, Paper SuWS8.8 | |
>WS-2429 Video 7 (I) |
|
Kim, Sangbae | Massachusetts Institute of Technology |
Keywords:
Abstract: ABSTRACT
|
|
09:00-17:00, Paper SuWS8.9 | |
>WS-2429 Video 8 (I) |
|
Kim, Sangbae | Massachusetts Institute of Technology |
Keywords:
Abstract: ABSTRACT
|
|
09:00-17:00, Paper SuWS8.10 | |
>WS-2429 Video 9 (I) |
|
Kim, Sangbae | Massachusetts Institute of Technology |
Keywords:
Abstract: ABSTRACT
|
|
09:00-17:00, Paper SuWS8.11 | |
>WS-2429 Video 10 (I) |
|
Kim, Sangbae | Massachusetts Institute of Technology |
Keywords:
Abstract: ABSTRACT
|
|
09:00-17:00, Paper SuWS8.12 | |
WS-2429 Video 11 (I) |
|
Kim, Sangbae | Massachusetts Institute of Technology |
|
09:00-17:00, Paper SuWS8.13 | |
WS-2429 Video 12 (I) |
|
Kim, Sangbae | Massachusetts Institute of Technology |
|
09:00-17:00, Paper SuWS8.14 | |
WS-2429 Video 13 (I) |
|
Kim, Sangbae | Massachusetts Institute of Technology |
|
09:00-17:00, Paper SuWS8.15 | |
WS-2429 Video 14 (I) |
|
Kim, Sangbae | Massachusetts Institute of Technology |
|
SuWS10 |
Room T10 |
Robotics-Inspired Biology |
Workshop |
Chair: Gravish, Nick | UC San Diego |
Co-Chair: Jayaram, Kaushik | University of Colorado Boulder |
Organizer: Gravish, Nick | UC San Diego |
Organizer: Jayaram, Kaushik | University of Colorado Boulder |
Organizer: Li, Chen | Johns Hopkins University |
Organizer: Clifton, Glenna | University of California San Diego |
Organizer: van Breugel, Floris | University of Nevada, Reno |
|
09:00-17:00, Paper SuWS10.1 | |
>WS-2433 Robotics Inspired Biology Welcome Video (I) |
|
Gravish, Nick | UC San Diego |
Keywords:
Abstract: Robotics Inspired Biology Welcome Video
|
|
09:00-17:00, Paper SuWS10.2 | |
>WS-2433 Prof. Thomas Daniel, University of Washington (I) |
|
Gravish, Nick | UC San Diego |
Keywords:
Abstract: Prof. Thomas Daniel, University of Washington
|
|
09:00-17:00, Paper SuWS10.3 | |
>WS-2433 Prof. Bob Full, University of California Berkeley (I) |
|
Gravish, Nick | UC San Diego |
Keywords:
Abstract: Prof. Bob Full, University of California Berkeley
|
|
09:00-17:00, Paper SuWS10.4 | |
>WS-2433 Prof. George Lauder, Harvard University (I) |
|
Gravish, Nick | UC San Diego |
Keywords:
Abstract: Prof. George Lauder, Harvard University
|
|
09:00-17:00, Paper SuWS10.5 | |
>WS-2433 Prof. Brad Dickerson, University of North Carolina Chapel-Hill (I) |
|
Gravish, Nick | UC San Diego |
Keywords:
Abstract: Prof. Brad Dickerson, University of North Carolina Chapel-Hill
|
|
09:00-17:00, Paper SuWS10.6 | |
>WS-2433 Vani Sundaram, Grad Student, University of Colorado Boulder (I) |
|
Gravish, Nick | UC San Diego |
Keywords:
Abstract: Vani Sundaram, Grad Student, University of Colorado Boulder
|
|
09:00-17:00, Paper SuWS10.7 | |
>WS-2433 Eugene Rush, Grad Student, University of Colorado Boulder (I) |
|
Gravish, Nick | UC San Diego |
Keywords:
Abstract: Eugene Rush, Grad Student, University of Colorado Boulder
|
|
09:00-17:00, Paper SuWS10.8 | |
>WS-2433 Prof. Rob Wood, Harvard University (I) |
|
Gravish, Nick | UC San Diego |
Keywords:
Abstract: Prof. Rob Wood, Harvard University
|
|
09:00-17:00, Paper SuWS10.9 | |
>WS-2433 Panel Discussion (I) |
|
Gravish, Nick | UC San Diego |
Keywords:
Abstract: Panel Discussion
|
|
09:00-17:00, Paper SuWS10.10 | |
>WS-2433 Plenary Talk 1 (I) |
|
Gravish, Nick | UC San Diego |
Keywords:
Abstract: Plenary Talk 1
|
|
09:00-17:00, Paper SuWS10.11 | |
>WS-2433 Plenary Talk 2 (I) |
|
Gravish, Nick | UC San Diego |
Keywords:
Abstract: Plenary Talk 2
|
|
09:00-17:00, Paper SuWS10.12 | |
>WS-2433 Plenary Talk 3 (I) |
|
Gravish, Nick | UC San Diego |
Keywords:
Abstract: Plenary Talk 3
|
|
09:00-17:00, Paper SuWS10.13 | |
>WS-2433 Plenary Talk 4 (I) |
|
Gravish, Nick | UC San Diego |
Keywords:
Abstract: Plenary Talk 4
|
|
09:00-17:00, Paper SuWS10.14 | |
WS-2433 Video 13 (I) |
|
Gravish, Nick | UC San Diego |
|
09:00-17:00, Paper SuWS10.15 | |
WS-2433 Video 14 (I) |
|
Gravish, Nick | UC San Diego |
|
SuWS11 |
Room T11 |
Robots Building Robots. Digital Manufacturing and Human-Centered Automation
for Building Consumer Robots |
Workshop |
Chair: Dario, Paolo | Scuola Superiore Sant'Anna |
Co-Chair: Huang, George Q. | The University of Hong Kong |
Organizer: Dario, Paolo | Scuola Superiore Sant'Anna |
Organizer: Huang, George Q. | The University of Hong Kong |
Organizer: Luh, Peter | University of Connecticut |
Organizer: Zhou, MengChu | New Jersey Institute of Technology |
|
09:00-17:00, Paper SuWS11.1 | |
>Paolo Dario - Presentation of the Workshop (I) |
|
Dario, Paolo | Scuola Superiore Sant'Anna |
Keywords:
Abstract: Presentation of the on-demand content of the Workshop
|
|
09:00-17:00, Paper SuWS11.2 | |
>Paolo Dario - Introduction to the Workshop (I) |
|
Dario, Paolo | Scuola Superiore Sant'Anna |
Keywords:
Abstract: The demand for Industrial Robots (IR) has risen considerably due to the ongoing trend toward automation and continued technical innovation in Smart Factories. According to World Robotics 2019, global robot installations in 2018 reached 422,271 units, worth USD 16.5 billion, and the operational stock of robots was computed at 2,439,543 units. The massive use of robots is key to the development of highly automated and productive factories, which are used worldwide to manufacture virtually all products, mainly mass-market consumer goods but also customized products manufactured under a Lot Size One logic. The forefront technology that IR brought to manufacturing is likely ready to leave the industrial environment and enter directly into our homes in the coming years. The robotic market, indeed, faces the challenge of opening toward a new frontier, in which robots once utilized exclusively in production plants enter unprecedented fields of application. In fact, IR leaders worldwide have already started to take an interest in Consumer Robots (CR): robots that can be bought to support domestic tasks or for entertainment. Today, autonomously navigating vacuums, pool cleaners, automated kitchen tools, pets, and educational toys are becoming family domestic robots. CR have been fueling visions of robots living in our homes to assist humans with daily tasks. However, the promise of CR remains largely unfulfilled, even though CR could address an incredibly large market. Indeed, almost 2 billion households in the Western world could potentially be interested in purchasing at least one CR model (within different price ranges). Assuming full coverage of this market over 20 years, across different CR price ranges (from 1K to 30K euro), possible revenues could reach almost 2-60 trillion €, with almost 100 million CR sold per year.
The workshop will (i) analyse how the current paradigm of industrial automation for manufacturing consumer products could evolve into the emerging paradigm of industrial automation for manufacturing Consumer Robots and (ii) discuss the technical characteristics of new “Robots producing Robots” and of automation cells intended to manufacture CR with high precision, accuracy and at low cost. Four representative classes of CR will be considered as case studies of Digital Manufacturing and Human-centered Automation: vacuum cleaners; drones and aerial robots; educational robots and home robots.
|
|
09:00-17:00, Paper SuWS11.3 | |
>Marco Controzzi - Human Collaborating with Robots As a Natural Act (I) |
|
Dario, Paolo | Scuola Superiore Sant'Anna |
Keywords:
Abstract: Roboticists must face different challenges in the context of the new wave of industrialization, i.e., the Fourth Industrial Revolution (Industry 4.0). In fact, robots are envisioned to share their working space with human workers and to cooperate actively with them, taking their needs into account. Behavioural studies can be very informative for robotics, providing human-inspired principles that allow robots to achieve successful actions in the real world. Thus, human-inspired behaviours may be an asset if robots are meant to interact with humans in a collaborative task, producing fluent and efficient actions. Among the different actions, we decided to focus our research on the object handover, since it is a very common joint action performed multiple times in many cooperative scenarios. In the object handover, an object is passed between two agents, the passer and the receiver. Despite its apparent simplicity, the object handover is a complex action that requires sensory-motor coordination between the two agents, a specific grasping strategy, and regulation of the grip force to allow a seamless interaction. In this talk, we will review our recent studies concerning grasp choice in humans and its implications for the performance and perception of human-robot collaboration.
|
|
09:00-17:00, Paper SuWS11.4 | |
>Francesco Ferro - When the User Builds Their Own Robot (I) |
|
Dario, Paolo | Scuola Superiore Sant'Anna |
Keywords:
Abstract: When it comes to building robots, different aspects need to be taken into consideration. From design to manufacturing, the development of a robot goes through different processes and phases. It is therefore crucial to create modular, flexible platforms that can also adapt to the needs the user may have in the future, once the robot is built. PAL Robotics’ platform TIAGo is a versatile robot that allows different configurations: it is “the robot that adapts to the user’s research needs, not the other way around”. The TIAGo robot combines perception, navigation, manipulation and Human-Robot Interaction skills out of the box. TIAGo's abilities open up many possibilities for applications in complex industrial scenarios, and the robot is a standard research platform for manipulation, perception and autonomous navigation. The robot is well suited for applications in the healthcare sector and light industry, specifically for its manipulation capabilities and modular design, as demonstrated by the several European projects that chose it as a research platform.
|
|
09:00-17:00, Paper SuWS11.5 | |
>Espen Knoop - Computational Design Tools for Expressive Robot Characters (I) |
|
Dario, Paolo | Scuola Superiore Sant'Anna |
Keywords:
Abstract: Most robots today are still designed manually, by engineers. This talk will discuss some of our recent projects on developing design tools that leverage physics-based simulation and optimization to solve design problems computationally. I will showcase different projects that highlight various aspects of such tools, illustrating how computational tools can significantly speed up design iterations and enable robot designs that would otherwise not be feasible. Our pipelines take as input a creative intent, in the form of an animated digital character, thus leveraging standard workflows and tools from digital animation that are familiar to digital artists. We then computationally solve a design problem, enabling the fabrication of a physical robot that matches the original creative intent.
|
|
09:00-17:00, Paper SuWS11.6 | |
>Jeremy Ma - Teaching Robots in the Home (I) |
|
Dario, Paolo | Scuola Superiore Sant'Anna |
Keywords:
Abstract: In this talk, we present the latest research from the Toyota Research Institute on robotic assistance in the home. Motivated by the growing challenges of an ageing society, we demonstrate a mobile manipulator robot that can easily be taught complex tasks by human demonstration in virtual reality. We show our latest results on real tasks executed in various homes around the Bay Area and discuss where Toyota is headed with this research.
|
|
09:00-17:00, Paper SuWS11.7 | |
>Giorgio Metta - Physical and Social Human-Robot Interaction (I) |
|
Dario, Paolo | Scuola Superiore Sant'Anna |
Keywords:
Abstract: In this talk I will summarize results from research on two important components of collaborative robots: the ability to control physical interaction with humans and, as the dual problem, the ability to interact in a socially meaningful way. Our team at the Italian Institute of Technology developed the iCub robot to study exactly these two aspects of human-robot interaction (HRI). The iCub resembles, in size, a child of about five years of age. One special feature of the iCub is that it is completely covered by tactile sensors and can therefore feel precisely when interaction occurs and measure the interaction forces with the environment.
|
|
09:00-17:00, Paper SuWS11.8 | |
>Calogero Maria Oddo - Large-Area Sensorized Skins for Collaborative Robotics (I) |
|
Dario, Paolo | Scuola Superiore Sant'Anna |
Keywords:
Abstract: The talk will discuss selected case studies of technologies for endowing robots with artificial tactile sensors distributed over large areas. In the presented scientific approach, robotic systems are developed by capitalizing on a fertile interaction between robotics and neuroscience, so that advances in neuroscientific research lead to the development of better technologies, which in turn contribute to the fundamental understanding of physiological processes. The first case study uses piezoresistive MEMS sensors, applied to bionic hand prostheses to restore rich tactile skills, such as texture discrimination, in upper-limb amputees. The developed biorobotic technologies and artificial intelligence methods, based on information encoding with neuromorphic spikes emulating physiological tactile representation, can be applied to a variety of scenarios; however, these piezoresistive MEMS sensors cannot be integrated in a straightforward manner to cover large areas of robot bodies. Hence, additional technologies were explored, including sensors based on cultured biological cells such as MDCK, piezoelectric ZnO nanowires grown with a seedless hydrothermal method, and Fiber Bragg Grating (FBG) sensors. FBG technology, in particular, is considered very promising, and selected achievements are shown in the talk, including the sensorization of a gripper able to manipulate fragile and deformable objects, and the covering of the full area of an anthropomorphic robotic arm. In particular, covering a robotic arm with a large sensorized skin allows the implementation of smart collaborative policies, such as safe interaction and programming by demonstration, that can be deployed in factories of the future.
|
|
09:00-17:00, Paper SuWS11.9 | |
>Cesare Stefanini - Robot Aided Design: Using Robots to Design Robots (I) |
|
Dario, Paolo | Scuola Superiore Sant'Anna |
Keywords:
Abstract: Advanced computational tools (e.g. CAD, FEM, VR) are today ubiquitously adopted to design machines, robots and more. Their capabilities are impressive, as recently demonstrated by AI platforms able to evolve the behaviour of a 3D bipedal virtual agent toward effective locomotion. This is made possible by unprecedented computational power, eight orders of magnitude larger than what was available 50 years ago, which is transforming engineering, science and technology. A simple reality check, however, shows a striking mismatch between the simulated and real-world operation of engineered systems, with particular reference to robotics. In the talk, Prof. Stefanini will elaborate on this discrepancy, on the need to rediscover the importance of a deep intellectual approach in the design phase, and on the use of robots during the design itself. Two main case studies will be presented, representative of two paradigmatic scenarios: multi-coupled systems and bio-hybrid systems. The aim of the talk, far from being dogmatic or definitive, is to start a discussion on the use of robots at the design stage and to be provocative against some current trends in the extensive use of computational tools in all design steps, including the early ones where creativity and intellectual representation play a vital role.
|
|
09:00-17:00, Paper SuWS11.10 | |
>Gentiane Venture - My Robot Is Not Your Robot: The Craftmanship behind Designing Personalized Robots (I) |
|
Dario, Paolo | Scuola Superiore Sant'Anna |
Keywords:
Abstract: As robots start to be mass-produced and become consumer goods, the question of adequacy to the task and the user becomes more and more relevant. In this short presentation I briefly survey why robots need to be personalized and how this is done today. Robots are not exactly like other consumer goods, particularly robots that have a social relationship with the user. For these, the creation of robot “contents” is crucial, and that is where engineers are failing: this part should be done by end-users. I then present a few of the solutions we have developed, based on our experience of creating contents for research with children and with the elderly for different purposes. The work presented here is divided into three parts: (1) the NEP framework, to easily connect all kinds of robots and sensors together and create the necessary foundation for interactive robots and systems; (2) RIZE, an interface that enables users with zero knowledge of programming or computers to create interactive content for robots; (3) our control framework to generate expressive movements given a desired task, using priority task control. I will conclude with some perspectives on what we are still missing to move from the total craftsmanship of engineer-programmed robots to robots programmed by end-users, which will allow for a personalized experience.
|
|
09:00-17:00, Paper SuWS11.11 | |
>Live Panel Session-WS-2437 Video 10 (I) |
|
Dario, Paolo | Scuola Superiore Sant'Anna |
Keywords:
Abstract: Live panel session of the workshop (see SuWS11.2 for the full workshop abstract).
|
|
09:00-17:00, Paper SuWS11.12 | |
WS-2437 Video 11 (I) |
|
Dario, Paolo | Scuola Superiore Sant'Anna |
|
09:00-17:00, Paper SuWS11.13 | |
WS-2437 Video 12 (I) |
|
Dario, Paolo | Scuola Superiore Sant'Anna |
|
09:00-17:00, Paper SuWS11.14 | |
WS-2437 Video 13 (I) |
|
Dario, Paolo | Scuola Superiore Sant'Anna |
|
09:00-17:00, Paper SuWS11.15 | |
WS-2437 Video 14 (I) |
|
Dario, Paolo | Scuola Superiore Sant'Anna |
|
SuWS12 |
Room T12 |
Cognitive Robotic Surgery |
Workshop |
Chair: Richter, Florian | University of California, San Diego |
Co-Chair: Yip, Michael C. | University of California, San Diego |
Organizer: Yip, Michael C. | University of California, San Diego |
Organizer: Richter, Florian | University of California, San Diego |
Organizer: Stoyanov, Danail | University College London |
Organizer: Vasconcelos, Francisco | University College London |
Organizer: Ficuciello, Fanny | Università Di Napoli Federico II |
Organizer: Vander Poorten, Emmanuel B | KU Leuven |
Organizer: Kazanzides, Peter | Johns Hopkins University |
Organizer: Hannaford, Blake | University of Washington |
Organizer: Fischer, Gregory Scott | Worcester Polytechnic Institute, WPI |
|
09:00-17:00, Paper SuWS12.1 | |
>WS-2434 Workshop Intro Video (I) |
|
Richter, Florian | University of California, San Diego |
Keywords:
Abstract: Introduction video for our workshop on cognitive robotic surgery. Please visit our website for more information: https://sites.google.com/eng.ucsd.edu/iros-2020-workshop-crs
|
|
09:00-17:00, Paper SuWS12.2 | |
>Interactive Session (I) |
|
Richter, Florian | University of California, San Diego |
Keywords:
Abstract: This video contains a recording of the interactive session held on November 6th, 8-9AM PST. The spotlight presentations are from the submitted extended abstracts with an optional short video. The submissions are original research or late-breaking results that fall under the scope of this workshop. An award sponsored by Intuitive Surgical Inc. will be given to the best submitted abstract.
|
|
09:00-17:00, Paper SuWS12.3 | |
>Learning Action Rules in Surgery (I) |
|
Richter, Florian | University of California, San Diego |
Keywords:
Abstract: This presentation focuses on the Autonomous Robotic Surgery (ARS) project, funded by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 742671). The project aims at introducing autonomy into the surgical robotic scenario, to improve the quality of interventions and, above all, the outcome for patients. The presentation is held by Paolo Fiorini, full professor at the Department of Computer Science, University of Verona, Italy, and Daniele Meli, PhD student at the same institution. After a general introduction to the project and its research challenges, the presentation summarizes recent results related to autonomous task planning and the learning of task knowledge. First, a framework for the autonomous execution of surgical tasks is presented, employing answer set programming (ASP, a logic programming paradigm) to encode expert surgeons’ knowledge in terms of rules and constraints. ASP addresses a number of issues not completely solved by state-of-the-art approaches to autonomous robotic surgery, including explainable plan generation for monitoring and reliability, real-time knowledge retrieval and plan refinement as new evidence is acquired from sensors, and the guarantee of constraint-safe plan generation thanks to the logic formalism. The framework is validated on the benchmark surgical training task of ring transfer, exhibiting real-time performance even in unconventional scenarios. The second part of the presentation shows advances in inductive logic programming (ILP) as applied to the problem of learning new surgical ASP knowledge, using the state-of-the-art tool ILASP by Mark Law. Given some background ASP knowledge and a very limited set of example executions, ILASP tries to learn logical rules which guarantee the satisfaction of the examples. Moreover, negative examples can be shown to the learner for constraint inference. 
ILASP is tested on learning pre-conditions for actions in the ring transfer scenario. The tool is able to learn, in a very short time, the minimal set of rules representing full task knowledge, using only four incomplete examples of execution. Current research is investigating the unsupervised automatic generation of examples from surgical datasets and the problem of learning the effects of actions with temporal delay.
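The idea of learning an action's pre-condition from a handful of positive and negative example executions can be illustrated with a toy brute-force sketch. This is not ILASP (which learns full ASP rules under answer set semantics); the predicate names and example states below are invented here purely for illustration: it searches for the smallest conjunction of state predicates that holds in every successful execution and fails in every unsuccessful one.

```python
from itertools import combinations

def learn_precondition(positives, negatives):
    """Toy ILP-style search: smallest conjunction of predicates that
    is true in all positive example states and false in all negatives."""
    # Only predicates true in every positive example can appear.
    candidates = set.intersection(*(set(p) for p in positives))
    # Try conjunctions of increasing size until every negative is excluded.
    for size in range(1, len(candidates) + 1):
        for conj in combinations(sorted(candidates), size):
            if all(not set(conj) <= set(n) for n in negatives):
                return set(conj)
    return None  # no separating conjunction exists

# Invented ring-transfer-flavored states (hypothetical predicate names):
positives = [
    {"ring_grasped", "arm_free", "peg_visible"},
    {"ring_grasped", "peg_visible", "arm_free", "ring_red"},
]
negatives = [
    {"peg_visible", "arm_free"},       # failed: no ring grasped
    {"ring_grasped", "peg_visible"},   # failed: arm not free
]

pre = learn_precondition(positives, negatives)
print(sorted(pre))  # ['arm_free', 'ring_grasped']
```

Real ILP systems like ILASP generalize far beyond this sketch: they learn first-order rules with variables, handle background knowledge, and use negative examples to infer constraints rather than just filter conjunctions.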
|
|
09:00-17:00, Paper SuWS12.4 | |
>Clinical, Practical, Social, and Ethical Implications in Surgical Robotics (I) |
|
Richter, Florian | University of California, San Diego |
Keywords:
Abstract: Presentation by Professor Alícia Casals from the Polytechnic University of Catalonia on the Clinical, Practical, Social, and Ethical implications in Surgical Robotics.
|
|
09:00-17:00, Paper SuWS12.5 | |
>Utilizing Clinically Relevant Performance Metrics (CROMs) in 3D Printing Procedural Models (I) |
|
Richter, Florian | University of California, San Diego |
Keywords:
Abstract: Presentation by Professor Ahmed Ghazi, MD, M.Sc, from the University of Rochester on Clinically Relevant Performance Metrics (CROMs) in 3D Printing Procedural Models.
|
|
09:00-17:00, Paper SuWS12.6 | |
>Novel Sensing Techniques As Foundation for Autonomous Robotic Surgery (I) |
|
Richter, Florian | University of California, San Diego |
Keywords:
Abstract: Autonomous robotic surgery has been receiving increased attention lately. Even surgeons and interventionalists are intrigued by these developments, as many realize that this technology may remove some of the complexity they increasingly face. This talk introduces a number of recent and new sensing technologies that help build up the local awareness of the robot. Understanding its own pose/shape and/or contact state helps build fast local computational models that, as shown in this talk, can be employed to encode local autonomous behavior. Thanks to this, the surgeon can focus on higher-level cognitive tasks. The navigation of a robotic catheter is taken as an example to sketch the potential benefit for the interventionalist.
|
|
09:00-17:00, Paper SuWS12.7 | |
>The SuPer and SuPer-Deep Framework (I) |
|
Richter, Florian | University of California, San Diego |
Keywords:
Abstract: Professor Michael Yip from University of California at San Diego presents his recent work towards a complete end-to-end framework for 3D surgical robotic perception called SuPer and SuPer Deep. These Perception Frameworks combine 3D deformable reconstruction, 3D instrument tracking, and temporal mapping (SLAM) for surgical robotic scenes, using stochastic optimal estimation and deep learning.
|
|
09:00-17:00, Paper SuWS12.8 | |
>Computer Vision Developments for Surgical Perception at Wellcome/EPSRC Centre for Interventional and Surgical Sciences (I) |
|
Richter, Florian | University of California, San Diego |
Keywords:
Abstract: Digital cameras have dramatically changed interventional and surgical procedures. Modern operating rooms utilize a range of cameras to minimise invasiveness or provide vision beyond human capabilities in magnification, spectra or sensitivity. Such surgical cameras provide the most informative and rich signal from the surgical site containing information about activity and events as well as physiology and tissue function. This talk will highlight some of the opportunities for computer vision in surgical applications and the challenges in translation to clinically usable systems.
|
|
09:00-17:00, Paper SuWS12.9 | |
WS-2434 Video 8 (I) |
|
Richter, Florian | University of California, San Diego |
|
09:00-17:00, Paper SuWS12.10 | |
>Practical Issues in Machine Learning for Robotic Surgery (I) |
|
Richter, Florian | University of California, San Diego |
Keywords:
Abstract: Artificial Intelligence (AI) has the potential to improve robotic surgery by creating systems that provide context-aware assistance, prevent surgical errors, and autonomously perform tedious or difficult tasks. However, machine learning typically requires large amounts of training data, which are difficult to obtain from clinical systems, leaving simulators and research platforms as possible data sources. The existence of shared research platforms, such as the da Vinci Research Kit (dVRK) and Raven II, raises the possibility of a community effort to share phantoms, protocols and data for machine learning. We present our experience using a dVRK to perform a hysterectomy procedure on a realistic hydrogel phantom. In addition, we present preliminary efforts to improve the realism and performance of soft-tissue simulators, which have the potential to significantly add to the corpus of training data. We are currently organizing a simulation-based surgical robotics challenge, with plans to host future challenges using actual dVRK and Raven systems.
|
|
09:00-17:00, Paper SuWS12.11 | |
WS-2434 Video 10 (I) |
|
Richter, Florian | University of California, San Diego |
|
09:00-17:00, Paper SuWS12.12 | |
WS-2434 Video 11 (I) |
|
Richter, Florian | University of California, San Diego |
|
09:00-17:00, Paper SuWS12.13 | |
WS-2434 Video 12 (I) |
|
Richter, Florian | University of California, San Diego |
|
09:00-17:00, Paper SuWS12.14 | |
WS-2434 Video 13 (I) |
|
Richter, Florian | University of California, San Diego |
|
09:00-17:00, Paper SuWS12.15 | |
WS-2434 Video 14 (I) |
|
Richter, Florian | University of California, San Diego |
|
SuWS13 |
Room T13 |
Application-Driven Soft Robotic Systems: Translational Challenges |
Workshop |
Chair: Stilli, Agostino | University College London |
Co-Chair: Abad Guaman, Sara Adela | University College London |
Organizer: Abad Guaman, Sara Adela | University College London |
Organizer: Lindenroth, Lukas | University College London |
Organizer: Maiolino, Perla | University of Oxford |
Organizer: Stilli, Agostino | University College London |
Organizer: Althoefer, Kaspar | Queen Mary University of London |
Organizer: Liu, Hongbin | King's College London |
Organizer: Menciassi, Arianna | Scuola Superiore Sant'Anna - SSSA |
Organizer: Nanayakkara, Thrishantha | Imperial College London |
Organizer: Paik, Jamie | Ecole Polytechnique Federale De Lausanne |
Organizer: Wurdemann, Helge Arne | University College London |
|
09:00-17:00, Paper SuWS13.1 | |
>WS-2441 IROS 2020 Workshop "Application-Driven Soft Robotic Systems: Translational Challenges" - Introduction Video and Poster Presentations (I) |
|
Abad Guaman, Sara Adela | University College London |
Lindenroth, Lukas | University College London |
Maiolino, Perla | University of Oxford |
Stilli, Agostino | University College London |
Althoefer, Kaspar | Queen Mary University of London |
Liu, Hongbin | King's College London |
Menciassi, Arianna | Scuola Superiore Sant'Anna - SSSA |
Nanayakkara, Thrishantha | Imperial College London |
Paik, Jamie | Ecole Polytechnique Federale De Lausanne |
Wurdemann, Helge Arne | University College London |
Keywords:
Abstract: With the increased interest in the use of soft materials for the creation of highly dexterous robots, soft robotics has established itself as an important research field, as evidenced by the surge of publications in recently established monothematic journals such as Soft Robotics, conferences such as RoboSoft (the IEEE International Conference on Soft Robotics), and dedicated sessions at the major robotics conferences ICRA and IROS. Researchers have successfully demonstrated advantages of soft robotics over traditional robots made of rigid links and joints in several application areas, including manufacturing, healthcare and surgical interventions. However, only a few of these potential soft robotic solutions have resulted in certified products contributing to a continuously growing robotics market. This workshop will focus on application-driven, holistic soft robotic systems and discuss the translational outcomes achieved to date. We will discuss obstacles along the path from designing a soft robot to its commercialisation, and we will learn from the successes and failures of experienced soft roboticists in overcoming translational barriers. We will provide young academics and industrial researchers interested in exploring the potential of soft robotics with a platform to discuss these translational challenges with senior academics, industrial experts and end users. Applications will be presented in which soft robotic solutions have greatly surpassed their rigid counterparts, or in which traditional robotic solutions have failed. We will explore and identify sectors in which soft robots show great potential. 
The objectives of this workshop are: - to discuss and identify the barriers preventing fundamental soft robotic research from being translated into commercial products - to discuss and identify the design paradigms with the highest translational potential for specific applications - to provide young researchers in industry and academia with a platform to meet, to learn from experienced soft roboticists and to discuss the future of soft robotics research - to contribute to the creation of a white paper on soft robotics as proposed by the UK-RAS Strategic Task Group on Soft Robotics
|
|
09:00-17:00, Paper SuWS13.2 | |
>WS-2441 IROS 2020 Workshop "Application-Driven Soft Robotic Systems: Translational Challenges" Video 1 - Professor Jamie Paik - EPFL - Ecole Polytechnique Fédérale De Lausanne (I) |
|
Paik, Jamie | Ecole Polytechnique Federale De Lausanne |
Stilli, Agostino | University College London |
Keywords:
Abstract: Video Presentation of Professor Jamie Paik, Reconfigurable Robotics Lab (Director), School of Engineering, EPFL - Ecole Polytechnique Fédérale de Lausanne for the IROS 2020 Workshop "Application-Driven Soft Robotic Systems: Translational Challenges".
|
|
09:00-17:00, Paper SuWS13.3 | |
>WS-2441 IROS 2020 Workshop "Application-Driven Soft Robotic Systems: Translational Challenges" Video 2 - Dr. Jelizaveta Konstantinova, Ocado Technology (I) |
|
Konstantinova, Jelizaveta | Ocado Technology |
Stilli, Agostino | University College London |
Keywords:
Abstract: Video Presentation of Dr. Jelizaveta Konstantinova Research Coordinator | OCTO, Ocado Technology, for the IROS 2020 Workshop "Application-Driven Soft Robotic Systems: Translational Challenges"
|
|
09:00-17:00, Paper SuWS13.4 | |
Empty (I) |
|
Stilli, Agostino | University College London |
|
09:00-17:00, Paper SuWS13.5 | |
>WS-2441 IROS 2020 Workshop "Application-Driven Soft Robotic Systems: Translational Challenges" Video 4 - Professor Arianna Menciassi Sant’Anna - School of Advanced Studies – Pisa (I) |
|
Menciassi, Arianna | Scuola Superiore Sant'Anna - SSSA |
Stilli, Agostino | University College London |
Keywords:
Abstract: Video Presentation of Professor Arianna Menciassi The BioRobotics Institute Sant’Anna - School of Advanced Studies – Pisa for the IROS 2020 Workshop "Application-Driven Soft Robotic Systems: Translational Challenges"
|
|
09:00-17:00, Paper SuWS13.6 | |
>WS-2441 IROS 2020 Workshop "Application-Driven Soft Robotic Systems: Translational Challenges" Video 5 - Professor Bradley Nelson, ETH – Zurich (I) |
|
Nelson, Bradley J. | ETH Zurich |
Stilli, Agostino | University College London |
Keywords:
Abstract: Video Presentation of Professor Bradley Nelson, Director Multi-Scale Robotics Lab, ETH – Zurich, for the IROS 2020 Workshop "Application-Driven Soft Robotic Systems: Translational Challenges"
|
|
09:00-17:00, Paper SuWS13.7 | |
>WS-2441 IROS 2020 Workshop "Application-Driven Soft Robotic Systems: Translational Challenges" Video 6 - Professor Conor Walsh, Harvard University (I) |
|
Walsh, Conor James | Harvard University |
Stilli, Agostino | University College London |
Keywords:
Abstract: Video Presentation of Professor Conor Walsh, Director Harvard Biodesign Lab – WYSS Institute, Harvard John A. Paulson School of Engineering and Applied Sciences, Harvard University for the IROS 2020 Workshop "Application-Driven Soft Robotic Systems: Translational Challenges"
|
|
09:00-17:00, Paper SuWS13.8 | |
>WS-2441 IROS 2020 Workshop "Application-Driven Soft Robotic Systems: Translational Challenges" Video 7 - Dr. David Noonan, Auris Health, Inc (I) |
|
Noonan, David | Auris Health Inc |
Stilli, Agostino | University College London |
Keywords:
Abstract: Video Presentation of Dr. David Noonan, Senior Director - Systems, Algorithms and Robotics, Auris Health, Inc for the IROS 2020 Workshop "Application-Driven Soft Robotic Systems: Translational Challenges"
|
|
09:00-17:00, Paper SuWS13.9 | |
>WS-2441 IROS 2020 Workshop "Application-Driven Soft Robotic Systems: Translational Challenges" Video 8 - Dr. Fumiya Iida, Cambridge University (I) |
|
Iida, Fumiya | University of Cambridge |
Stilli, Agostino | University College London |
Keywords:
Abstract: Video Presentation of Dr. Fumiya Iida, Director Bio-Inspired Robotics Lab, Cambridge University for the IROS 2020 Workshop "Application-Driven Soft Robotic Systems: Translational Challenges"
|
|
09:00-17:00, Paper SuWS13.10 | |
>WS-2441 IROS 2020 Workshop "Application-Driven Soft Robotic Systems: Translational Challenges" Video 9 - Professor Oliver Brock - Technische Universität Berlin (I) |
|
Brock, Oliver | Technische Universität Berlin |
Stilli, Agostino | University College London |
Keywords:
Abstract: Video Presentation of Professor Oliver Brock, Alexander von Humboldt Professor, Technische Universität Berlin for the IROS 2020 Workshop "Application-Driven Soft Robotic Systems: Translational Challenges"
|
|
09:00-17:00, Paper SuWS13.11 | |
>WS-2441 IROS 2020 Workshop "Application-Driven Soft Robotic Systems: Translational Challenges" Video 10 - Dr. Girish Chowdhary, University of Illinois Urbana-Champaign (I) |
|
Chowdhary, Girish | University of Illinois at Urbana Champaign |
Stilli, Agostino | University College London |
Keywords:
Abstract: Video Presentation of Dr. Girish Chowdhary, Assistant Professor, Director of the Distributed Autonomous Systems Laboratory, University of Illinois Urbana-Champaign, for the IROS 2020 Workshop "Application-Driven Soft Robotic Systems: Translational Challenges"
|
|
09:00-17:00, Paper SuWS13.12 | |
>WS-2441 IROS 2020 Workshop "Application-Driven Soft Robotic Systems: Translational Challenges" Video 11 - Margaret M. Coad, Stanford University (I) |
|
Coad, Margaret M. | Stanford University |
Stilli, Agostino | University College London |
Keywords:
Abstract: Video Presentation of Margaret M. Coad, CHARM Lab, Stanford University for the IROS 2020 Workshop "Application-Driven Soft Robotic Systems: Translational Challenges"
|
|
09:00-17:00, Paper SuWS13.13 | |
>WS-2441 IROS 2020 Workshop "Application-Driven Soft Robotic Systems: Translational Challenges" Video 12 - Professor Mirko Kovac, Imperial College London (I) |
|
Kovac, Mirko | Imperial College London |
Stilli, Agostino | University College London |
Keywords:
Abstract: Video Presentation of Professor Mirko Kovac, Director of the Aerial Robotics Laboratory, Imperial College London for the IROS 2020 Workshop "Application-Driven Soft Robotic Systems: Translational Challenges"
|
|
09:00-17:00, Paper SuWS13.14 | |
>WS-2441 IROS 2020 Workshop "Application-Driven Soft Robotic Systems: Translational Challenges" Video 13 - Dr. Thrishantha Nanayakkara, Imperial College London (I) |
|
Nanayakkara, Thrishantha | Imperial College London |
Stilli, Agostino | University College London |
Keywords:
Abstract: Video Presentation of Dr. Thrishantha Nanayakkara, Dyson School of Design Engineering, Reader in Design Engineering and Robotics, Imperial College London for the IROS 2020 Workshop "Application-Driven Soft Robotic Systems: Translational Challenges"
|
|
09:00-17:00, Paper SuWS13.15 | |
>WS-2441 IROS 2020 Workshop "Application-Driven Soft Robotic Systems: Translational Challenges" Video 14 - Panel Talks (I) |
|
Abad Guaman, Sara Adela | University College London |
Lindenroth, Lukas | University College London |
Maiolino, Perla | University of Oxford |
Stilli, Agostino | University College London |
Althoefer, Kaspar | Queen Mary University of London |
Liu, Hongbin | King's College London |
Menciassi, Arianna | Scuola Superiore Sant'Anna - SSSA |
Nanayakkara, Thrishantha | Imperial College London |
Paik, Jamie | Ecole Polytechnique Federale De Lausanne |
Wurdemann, Helge Arne | University College London |
Keywords:
Abstract: Video recordings of the panels held at the IROS 2020 Workshop "Application-Driven Soft Robotic Systems: Translational Challenges"
|
|
SuWS14 |
Room T14 |
Reliable Deployment of Machine Learning for Long-Term Autonomy |
Workshop |
Chair: Dayoub, Feras | Queensland University of Technology |
Co-Chair: Krajník, Tomáš | Czech Technical University |
Organizer: Dayoub, Feras | Queensland University of Technology |
Organizer: Krajník, Tomáš | Czech Technical University |
Organizer: Sünderhauf, Niko | Queensland University of Technology |
Organizer: Kim, Ayoung | Korea Advanced Institute of Science Technology |
|
09:00-17:00, Paper SuWS14.1 | |
>WS-3143 Workshop Intro Video (I) |
|
Dayoub, Feras | Queensland University of Technology |
Krajník, Tomáš | Czech Technical University |
Kim, Ayoung | Korea Advanced Institute of Science Technology |
Sünderhauf, Niko | Queensland University of Technology |
Keywords:
Abstract: Achieving long-term autonomy for mobile robots means the ability to operate autonomously, with no or minimal supervision, for days, weeks, months or even years. During these long periods, the environment where the robot operates can undergo unpredictable gradual and/or radical changes. This fact adds an extra dimension to fundamental problems in robotics such as perception, planning, navigation, SLAM and manipulation, and makes them more challenging. One of the keys to achieving long-term autonomy is having reliable sub-components in the robotic operating system, including the machine learning-based ones. In this context, reliability means that the components can identify and recover from failures and prevent or reduce the likelihood of failures in general, which otherwise could terminate the robot's mission and/or cause severe danger. This workshop focuses on the problem of long-term autonomy for mobile robots and the challenge of building reliable machine learning components in the robotic system that can handle bad sensory data, shifts to abnormal operational conditions, misclassifications and misdetections. We invite several renowned experts in the field who will highlight the main challenges these robots face and talk about their own experiences and the lessons they learned during long-term deployments of their robots.
|
|
09:00-17:00, Paper SuWS14.2 | |
>WS-3143 Video 1: Talk by Ben Upcroft: Oxbotica - Universal Autonomy (I) |
|
Upcroft, Ben | Queensland University of Technology |
Dayoub, Feras | Queensland University of Technology |
Keywords:
Abstract: Talk by Ben Upcroft: Oxbotica - Universal Autonomy
|
|
09:00-17:00, Paper SuWS14.3 | |
>WS-3143 Video 2: Talk by Michael Milford: Algorithmic, Machine Learning and Bio-Inspired Approach to Long Term Autonomy (I) |
|
Milford, Michael J | Queensland University of Technology |
Dayoub, Feras | Queensland University of Technology |
Keywords:
Abstract: Talk by Michael Milford: Algorithmic, Machine Learning and Bio-Inspired Approach to Long Term Autonomy
|
|
09:00-17:00, Paper SuWS14.4 | |
>WS-3143 Video 3: Talk by Tim Barfoot: Long-Term Navigation, (Where) Can Machine Learning Help? (I) |
|
Barfoot, Timothy | University of Toronto |
Dayoub, Feras | Queensland University of Technology |
Keywords:
Abstract: Talk by Tim Barfoot: Long-Term Navigation, (Where) can Machine Learning Help?
|
|
09:00-17:00, Paper SuWS14.5 | |
>WS-3143 Video 4: Talk by Nick Hawes: Mission Planning with Learned Models for Long-Term Autonomy (I) |
|
Hawes, Nick | University of Oxford |
Dayoub, Feras | Queensland University of Technology |
Keywords:
Abstract: This is a trailer for a talk by Nick Hawes: Mission Planning with Learned Models for Long-term Autonomy. The full video will be available on the workshop website: https://sites.google.com/view/icra2020ltaws/home
|
|
09:00-17:00, Paper SuWS14.6 | |
>WS-3143 Video 5: Talk by Zhi Yan: Towards All-Weather Autonomous Driving (I) |
|
Yan, Zhi | University of Technology of Belfort-Montbéliard (UTBM) |
Dayoub, Feras | Queensland University of Technology |
Keywords:
Abstract: Talk by Zhi Yan: Towards All-weather Autonomous Driving
|
|
09:00-17:00, Paper SuWS14.7 | |
WS-3143 Video 6 (I) |
|
Dayoub, Feras | Queensland University of Technology |
|
09:00-17:00, Paper SuWS14.8 | |
WS-3143 Video 7 (I) |
|
Dayoub, Feras | Queensland University of Technology |
|
09:00-17:00, Paper SuWS14.9 | |
WS-3143 Video 8 (I) |
|
Dayoub, Feras | Queensland University of Technology |
|
09:00-17:00, Paper SuWS14.10 | |
WS-3143 Video 9 (I) |
|
Dayoub, Feras | Queensland University of Technology |
|
09:00-17:00, Paper SuWS14.11 | |
WS-3143 Video 10 (I) |
|
Dayoub, Feras | Queensland University of Technology |
|
09:00-17:00, Paper SuWS14.12 | |
WS-3143 Video 11 (I) |
|
Dayoub, Feras | Queensland University of Technology |
|
09:00-17:00, Paper SuWS14.13 | |
WS-3143 Video 12 (I) |
|
Dayoub, Feras | Queensland University of Technology |
|
09:00-17:00, Paper SuWS14.14 | |
WS-3143 Video 13 (I) |
|
Dayoub, Feras | Queensland University of Technology |
|
09:00-17:00, Paper SuWS14.15 | |
WS-3143 Video 14 (I) |
|
Dayoub, Feras | Queensland University of Technology |
|
SuWS15 |
Room T15 |
Robotic In-Situ Servicing, Assembly, and Manufacturing |
Workshop |
Chair: Carignan, Craig | University of Maryland |
Co-Chair: Saaj, Chakravarthini Mini | University of Lincoln |
Organizer: Carignan, Craig | University of Maryland |
Organizer: Saaj, Chakravarthini Mini | University of Lincoln |
Organizer: Detry, Renaud | Jet Propulsion Laboratory |
Organizer: Vander Hook, Joshua | NASA Jet Propulsion Laboratory |
Organizer: Marani, Giacomo | West Virginia University |
|
09:00-17:00, Paper SuWS15.1 | |
>Introduction to the Workshop on Robotic In-Situ Servicing, Assembly and Manufacturing (I) |
|
Carignan, Craig | University of Maryland |
Keywords:
Abstract: Robotic manufacturing, assembly, and servicing utilizing in-situ resources will enable the construction and maintenance of large-scale assets at significantly reduced cost. Examples include large, ground-based infrastructure (buildings, particle accelerators, solar farms), large in-space assemblies (space telescopes, commercial platforms, transportation hubs), and undersea structures (oil platforms, laboratories, habitats). Achieving in-situ fabrication using cooperative, mobile robots is also a major goal of several terrestrial and orbital construction projects. Major challenges include mobility, precise manipulation, localization/mapping, and adaptability to uncertainties in the environment. The capacity to process raw materials in remote environments is also key for on-orbit, lunar, and planetary applications. In general, large ground-based facilities built in inaccessible locales could benefit greatly from in-situ robotics for inspection and servicing, and likely would necessitate a multi-robot scalable solution. The workshop will be highlighted by invited speakers covering a spectrum of topics as well as posters/videos submitted by researchers. The organizers are also collaborating with “Frontiers in Robotics and AI” to develop a special issue devoted to this research topic and will be soliciting future contributions from participants of the workshop.
|
|
09:00-17:00, Paper SuWS15.2 | |
>Robotic Assembly Activities at NASA Langley Research Center (I) |
|
Carignan, Craig | University of Maryland |
Cooper, John | NASA |
Keywords:
Abstract: Over the past several decades, NASA Langley Research Center (LaRC) has developed a suite of hardware and software capabilities for robotic in-space assembly. Specific robots include the Lightweight Surface Manipulation System (LSMS), Tendon-Actuated Lightweight In-Space Manipulator (TALISMAN), NASA Intelligent Jigging and Assembly Robot (NINJAR), Strut Assembly, Manufacturing, Utility & Robotic Aid (SAMURAI), and most recently the Assemblers modular robots. Alongside the hardware, software tools such as the Autonomous Entity Operations Network (AEON) and the Baseline Environment for Autonomous Modeling (BEAM) have been developed to enable communication and simulation respectively. These tools have supported foundational research in single and multi-agent control, sensing and perception, trajectory generation, task allocation, and human-machine teaming. This talk will provide a broad overview of these capabilities and go into detail on recent developments made by the Assemblers project to create modular, reconfigurable robots for autonomous in-space assembly.
|
|
09:00-17:00, Paper SuWS15.3 | |
>Autonomous Vision-Driven Robotic Manipulation in Space: On-Orbit Assembly with a CubeSat Arm, Sample Tube Pickup for Mars Sample Return (I) |
|
Carignan, Craig | University of Maryland |
Detry, Renaud | Jet Propulsion Laboratory |
Keywords:
Abstract: I will discuss the experimental validation of autonomous robot manipulation behaviors that support on-orbit assembly, and the exploration of Mars' surface, lava tubes on Mars and the Moon, icy bodies and ocean worlds. I will frame the presentation with the following questions: What new insights or limitations arise when applying algorithms to real-world data as opposed to benchmark datasets or simulations? How can we address the limitations of real-world environments—e.g., noisy or sparse data, non-i.i.d. sampling, etc.? What challenges exist at the frontiers of robotic exploration of unstructured and extreme environments? I will discuss our approach to validating autonomous machine-vision capabilities for the notional Mars Sample Return campaign, for autonomously navigating lava tubes, and for autonomously assembling modular structures on orbit. The talk will highlight the thought process that drove the decomposition of a validation need into a collection of tests conducted on off-the-shelf datasets, custom/application-specific datasets, and simulated or physical robot hardware, where each test addressed a different range of experimental parameters for sensing/actuation fidelity, breadth of environmental conditions, and breadth of jointly-tested robot functions.
|
|
09:00-17:00, Paper SuWS15.4 | |
>Comparison between Stationary and Crawling Multi-Arm Robotics for In-Space Assembly (I) |
|
Carignan, Craig | University of Maryland |
McBryan, Katherine | US Naval Research Laboratory |
Keywords:
Abstract: In-space assembly (ISA) is the next step toward building larger and more permanent structures in orbit. The use of a robotic in-space assembler will save on costly and potentially risky EVAs. Determining the best robot for ISA is difficult, as it will depend on the structure being assembled. A comparison between two categories of robots is presented: a stationary robot and a robot that crawls along the truss. The estimated mass, energy, and time are presented for each system as it builds a desired truss system in simulation. There are trade-offs to every robot design, and understanding those trade-offs is essential to building a system that is not only efficient but also cost-effective.
|
|
09:00-17:00, Paper SuWS15.5 | |
>Towards In-Situ 3D Printing and Assembly with Mobile Robots (I) |
|
Carignan, Craig | University of Maryland |
Cuong, Pham Quang | Nanyang Technological University |
Keywords:
Abstract: Construction is a promising but challenging application area for robotics. The main challenges stem from the messy and unpredictable nature of construction sites. In this talk, I will present two recent studies from our research group that highlight those challenges: 3D printing of large concrete structures by mobile robots, and human-robot collaboration for mobile assembly.
|
|
09:00-17:00, Paper SuWS15.6 | |
>Scaling the Payload W/ OSAM-2 – On-Orbit Manufacturing and Robotic Assembly Drives the Future Economies in Space (Part 2) (I) |
|
Carignan, Craig | University of Maryland |
McCarthy, Tom | Motiv Space Systems |
Keywords:
Abstract: A regular cadence of low-cost launch opportunities, coupled with government and commercial investment in on-orbit manufacturing and robotic operational capabilities, is enabling high-bandwidth communication networks, increased imagery fidelity, and point-to-point space tug and transport. The OSAM-2 mission, primed by Made In Space (MIS), will demonstrate on-orbit manufacturing capabilities from a free-flying spacecraft by 3D printing and extruding primary structures for a large solar array system. MIS has been developing on-orbit 3D printing technologies for nearly a decade and has successfully demonstrated additive manufacturing aboard the ISS. MIS selected Motiv Space Systems as its space robotics partner to enable the assembly operations of the OSAM-2 system on orbit. Motiv is utilizing its modular, 7-DOF robotic manipulation architecture, xLink, as a disruptive solution for this revolutionary mission, with an eye towards future orbital construction mission needs. This presentation will address material selection, capabilities, and challenges associated with the on-orbit manufacturing process. In addition, it will discuss the intrinsic SWaP benefits of introducing a modular and distributed robotic manipulator architecture and how these are deployed for the OSAM-2 mission.
|
|
09:00-17:00, Paper SuWS15.7 | |
>Live Interaction Session #1: Interactive Paper/Poster Session (I) |
|
Carignan, Craig | University of Maryland |
Detry, Renaud | Jet Propulsion Laboratory |
Keywords:
Abstract: Authors of contributed papers/posters will open the session with "spotlight" pre-recorded video presentations of their work. Paper/poster authors as well as the invited speakers will then disperse to spaces in "gather.town" for live interaction with workshop participants. This interactive session is an excellent opportunity to ask questions and exchange ideas with the presenters!
|
|
09:00-17:00, Paper SuWS15.8 | |
>Live Interaction Session #2: Town Hall Discussion (I) |
|
Carignan, Craig | University of Maryland |
Saaj, Chakravarthini Mini | University of Lincoln |
Vander Hook, Joshua | NASA Jet Propulsion Laboratory |
Keywords:
Abstract: Join other workshop participants for a live Q & A discussion moderated by the workshop organizers. Share your ideas regarding topics covered by the presenters and also what you would like to see in future workshops.
|
|
09:00-17:00, Paper SuWS15.9 | |
>Scaling the Payload W/ OSAM-2 – On-Orbit Manufacturing and Robotic Assembly Drives the Future Economies in Space (Part 1) (I) |
|
Carignan, Craig | University of Maryland |
Riley, Deejay | Made in Space, Inc |
Keywords:
Abstract: A regular cadence of low-cost launch opportunities, coupled with government and commercial investment in on-orbit manufacturing and robotic operational capabilities, is enabling high-bandwidth communication networks, increased imagery fidelity, and point-to-point space tug and transport. The OSAM-2 mission, primed by Made In Space (MIS), will demonstrate on-orbit manufacturing capabilities from a free-flying spacecraft by 3D printing and extruding primary structures for a large solar array system. MIS has been developing on-orbit 3D printing technologies for nearly a decade and has successfully demonstrated additive manufacturing aboard the ISS. MIS selected Motiv Space Systems as its space robotics partner to enable the assembly operations of the OSAM-2 system on orbit. Motiv is utilizing its modular, 7-DOF robotic manipulation architecture, xLink, as a disruptive solution for this revolutionary mission, with an eye towards future orbital construction mission needs. This presentation will address material selection, capabilities, and challenges associated with the on-orbit manufacturing process. In addition, it will discuss the intrinsic SWaP benefits of introducing a modular and distributed robotic manipulator architecture and how these are deployed for the OSAM-2 mission.
|
|
09:00-17:00, Paper SuWS15.10 | |
>KRAKEN - Compliant Manipulator for Small Spacecraft (I) |
|
Carignan, Craig | University of Maryland |
Britton, Nathan | Tethers Unlimited, Inc |
Keywords:
Abstract: Tethers Unlimited, Inc. (TUI) develops transformative technologies to enable new capabilities for the next generation of space missions. The KRAKEN robotic manipulator brings TUI's high performance-density philosophy to robotics by enabling precision force control and active compliance on small spacecraft for on-orbit servicing, in-space assembly, and human spacecraft automation. KRAKEN is a modularly configurable joint system whose baseline configuration consists of seven series elastic actuators with precision co-located torque sensing, on-board impedance control, and a form factor allowing stowage of two 1 m arms in a 42 cm x 25 cm x 18 cm volume. TUI is currently developing a two-arm servicing payload for refueling and assembly operations on small (<500 kg) spacecraft, including multi-arm control strategies for power-limited, rad-hard avionics.
|
|
09:00-17:00, Paper SuWS15.11 | |
>Modular Robotic Assets and Technologies for Future On-Orbit Applications (I) |
|
Carignan, Craig | University of Maryland |
Deremetz, Mathieu | Space Application Services NV/SA |
Keywords:
Abstract: Existing commercial satellites and space platforms are traditionally the result of a highly customized monolithic design, with very limited or no capability for servicing and maintenance. To offer those capabilities while remaining cost-effective, high-performing, reliable, scalable and flexible, key technologies need to be developed. Within the EU MOSAR and ESA MIRROR-SA projects, Space Applications Services is developing such technologies for demonstration, in particular modular robotic systems involving manipulators and HOTDOCK standard interfaces, which are required to enable this fundamental paradigm shift in designing and deploying satellites and spacecraft. This talk deals with the development of such systems for future on-orbit applications, especially servicing and large assemblies.
|
|
09:00-17:00, Paper SuWS15.12 | |
>Compliance Control for Robotic In-Space Assembly and Maintenance (I) |
|
Carignan, Craig | University of Maryland |
Newman, Wyatt | Case Western Reserve University |
Keywords:
Abstract: NASA is developing in-space, satellite-servicing capabilities that will include capture and refueling by robots. The necessary operations will require responsiveness to contact forces and moments. With communications delays, force-reflecting haptics for teleoperation is impractical. Instead, the robot must be responsible for compliant-motion behaviors essential to success, while the Earth-based human operator provides supervisory control. Particular challenges for space-based, compliant-motion controlled robots include: servo controller sample rate limitations, servo controller feedback latency, and non-collocated force feedback. Subject to these non-idealities, it is essential to guarantee contact stability under compliant-motion control. Additionally, given compliant-motion capability, one must identify and develop higher-level robot behaviors that are useful for achieving manipulation goals and which are intuitive to a remote supervisory human controller. This presentation will review techniques for assuring contact stability as well as behaviors appropriate for supervisory control of manipulation.
|
|
SuWS17 |
Room T17 |
Bringing Constraint-Based Robot Programming to Real-World Applications |
Workshop |
Chair: Decré, Wilm | Katholieke Universiteit Leuven |
Co-Chair: Bruyninckx, Herman | University of Leuven |
Organizer: Decré, Wilm | Katholieke Universiteit Leuven |
Organizer: Bruyninckx, Herman | University of Leuven |
Organizer: Borghesan, Gianni | KU Leuven |
Organizer: Aertbelien, Erwin | KU Leuven |
Organizer: Tingelstad, Lars | Norwegian University of Science and Technology |
Organizer: Caldwell, Darwin G. | Istituto Italiano Di Tecnologia |
Organizer: Mingo, Enrico | Istituto Italiano Di Tecnologia |
Organizer: Kheddar, Abderrahmane | CNRS-AIST |
Organizer: Gergondet, Pierre | CNRS |
|
09:00-17:00, Paper SuWS17.1 | |
>Bringing Constraint-Based Robot Programming to Real-World Applications - Introduction (I) |
|
Decré, Wilm | Katholieke Universiteit Leuven |
Keywords:
Abstract: In constraint-based robot programming (CobaRoP), a robot task is described (either explicitly or implicitly) as an optimization problem subject to a number of constraints. CobaRoP is a highly promising approach to deploy robot applications not only in conventional highly conditioned production lines, but also in more human-like production lines with a large variability and uncertainty in the task. This can even involve robots that physically interact with humans to jointly perform tasks. Defining and implementing how to react to disturbances is key in these aspects, and this goes beyond specifying a nominal prescribed trajectory execution. At the same time, we see that these new robot applications typically consist of smaller production series. Hence, development costs need to be written off against a smaller batch. The central objective of this workshop is to identify how CobaRoP can resolve these seemingly conflicting goals: on the one hand, we want to have complex sensor-based applications; on the other hand, we need to decrease development costs. We aim to bring together leading researchers and industrial/real-world early adopters in constraint-based programming theory, software development and applications. Questions addressed in the workshop include: What can we learn from early adopters? What are key hurdles to bring CobaRoP from a lab to the real world? How should we organize the different stakeholder roles?
|
|
09:00-17:00, Paper SuWS17.2 | |
>Constraint-Based Robot Programming for Advanced Sensor-Based Applications and Human-Robot Interaction (I) |
|
Aertbelien, Erwin | KU Leuven |
Decré, Wilm | Katholieke Universiteit Leuven |
Keywords:
Abstract: Constraint-based robot programming for advanced sensor-based applications and human-robot interaction
|
|
09:00-17:00, Paper SuWS17.3 | |
>Multi-Objectives and Multi-Sensory Task Space Quadratic Programming Control (I) |
|
Gergondet, Pierre | CNRS |
Kheddar, Abderrahmane | CNRS-AIST |
Decré, Wilm | Katholieke Universiteit Leuven |
Keywords:
Abstract: Multi-objectives and multi-sensory task space quadratic programming control
|
|
09:00-17:00, Paper SuWS17.4 | |
>Robot Programming at Your Fingertips with OpenSoT and CartesI/O (I) |
|
Mingo Hoffman, Enrico | Fondazione Istituto Italiano Di Tecnologia |
Decré, Wilm | Katholieke Universiteit Leuven |
Keywords:
Abstract: Robot Programming at your Fingertips with OpenSoT and CartesI/O
|
|
09:00-17:00, Paper SuWS17.5 | |
>Welding Automation Using Constraint-Based Robot Programming (I) |
|
Tingelstad, Lars | Norwegian University of Science and Technology |
Decré, Wilm | Katholieke Universiteit Leuven |
Keywords:
Abstract: Welding Automation using Constraint-based Robot Programming
|
|
09:00-17:00, Paper SuWS17.6 | |
>The First Industrial Application of eTaSL: Ultrasound Inspection of Fold-Glue Joints of Car Parts in a Human-Robot Collaborative Setting at Audi Brussels (I) |
|
De Schutter, Joris | KU Leuven |
Decré, Wilm | Katholieke Universiteit Leuven |
Keywords:
Abstract: The first industrial application of eTaSL: ultrasound inspection of fold-glue joints of car parts in a human-robot collaborative setting at Audi Brussels
|
|
09:00-17:00, Paper SuWS17.7 | |
>Constraint Identification from STEP AP242 Files for Automated Robotic Welding (I) |
|
Mohammed, Shafi Khurieshi | Norwegian University of Science and Technology |
Arbo, Mathias | NTNU |
Tingelstad, Lars | Norwegian University of Science and Technology |
Decré, Wilm | Katholieke Universiteit Leuven |
Keywords:
Abstract: Constraint Identification from STEP AP242 files for Automated Robotic Welding
|
|
09:00-17:00, Paper SuWS17.8 | |
>Constraint-Based Plan Transformation in a Safe and Usable GOLOG Language (I) |
|
Mataré, Victor | FH Aachen University |
Schiffer, Stefan | RWTH Aachen University |
Ferrein, Alexander | FH Aachen University of Applied Sciences |
Viehmann, Tarik | RWTH Aachen University |
Hofmann, Till | RWTH Aachen University |
Lakemeyer, Gerhard | Computer Science Department, RWTH Aachen University |
Decré, Wilm | Katholieke Universiteit Leuven |
Keywords:
Abstract: Constraint-based Plan Transformation in a Safe and Usable GOLOG Language
|
|
09:00-17:00, Paper SuWS17.9 | |
>Bridging the Gap between the Open-Source Task-Space Constraint-Based Control Framework and Real-World Human-Robot Interaction Applications (I) |
|
Bolotnikova, Anastasia | SoftBank Robotics Europe, Univerisy of Montpellier |
Decré, Wilm | Katholieke Universiteit Leuven |
Keywords:
Abstract: Bridging the Gap between the Open-source Task-Space Constraint-Based Control Framework and Real-World Human-Robot Interaction Applications
|
|
09:00-17:00, Paper SuWS17.10 | |
>Constraint-Based Dual Arm Control for Automated Wiring of Electrical Cabinets (I) |
|
Halt, Lorenz | Fraunhofer Institute for Manufacturing Engineering and Automatio |
Tenbrock, Philipp | Fraunhofer Institute for Manufacturing Engineering and Automatio |
Decré, Wilm | Katholieke Universiteit Leuven |
Keywords:
Abstract: Constraint-based Dual Arm Control for Automated Wiring of Electrical Cabinets
|
|
09:00-17:00, Paper SuWS17.11 | |
>The Explicit Reference Governor for Real-Time Safe Control of a Robotic Manipulator (I) |
|
Merckaert, Kelly | Vrije Universiteit Brussel (VUB) |
Convens, Bryan | Vrije Universiteit Brussel |
El Makrini, Ilias | Vrije Universiteit Brussel |
Van de Perre, Greet | Vrije Universiteit Brussel |
Nicotra, Marco | University of Colorado Boulder |
Vanderborght, Bram | Vrije Universiteit Brussel |
Decré, Wilm | Katholieke Universiteit Leuven |
Keywords:
Abstract: The Explicit Reference Governor for Real-time Safe Control of a Robotic Manipulator
|
|
09:00-17:00, Paper SuWS17.12 | |
>Bringing Constraint-Based Robot Programming to Real-World Applications - Discussion (I) |
|
Decré, Wilm | Katholieke Universiteit Leuven |
Keywords:
Abstract: In constraint-based robot programming (CobaRoP), a robot task is described (either explicitly or implicitly) as an optimization problem subject to a number of constraints. CobaRoP is a highly promising approach to deploy robot applications not only in conventional highly conditioned production lines, but also in more human-like production lines with a large variability and uncertainty in the task. This can even involve robots that physically interact with humans to jointly perform tasks. Defining and implementing how to react to disturbances is key in these aspects, and this goes beyond specifying a nominal prescribed trajectory execution. At the same time, we see that these new robot applications typically consist of smaller production series. Hence, development costs need to be written off against a smaller batch. The central objective of this workshop is to identify how CobaRoP can resolve these seemingly conflicting goals: on the one hand, we want to have complex sensor-based applications; on the other hand, we need to decrease development costs. We aim to bring together leading researchers and industrial/real-world early adopters in constraint-based programming theory, software development and applications. Questions addressed in the workshop include: What can we learn from early adopters? What are key hurdles to bring CobaRoP from a lab to the real world? How should we organize the different stakeholder roles?
|
|
09:00-17:00, Paper SuWS17.13 | |
WS-2392 Video 12 (I) |
|
Decré, Wilm | Katholieke Universiteit Leuven |
|
09:00-17:00, Paper SuWS17.14 | |
WS-2392 Video 13 (I) |
|
Decré, Wilm | Katholieke Universiteit Leuven |
|
09:00-17:00, Paper SuWS17.15 | |
WS-2392 Video 14 (I) |
|
Decré, Wilm | Katholieke Universiteit Leuven |
|
SuWS18 |
Room T18 |
Managing Deformation: A Step towards Higher Robot Autonomy |
Workshop |
Chair: Zhu, Jihong | TU Delft |
Co-Chair: Cherubini, Andrea | LIRMM - Universite De Montpellier CNRS |
Organizer: Zhu, Jihong | TU Delft |
Organizer: Cherubini, Andrea | LIRMM - Universite De Montpellier CNRS |
Organizer: Dune, Claire | Université De Toulon |
Organizer: Navarro-Alarcon, David | The Hong Kong Polytechnic University |
|
09:00-17:00, Paper SuWS18.1 | |
>Intro Video for IROS 2020 Workshop on Managing Deformation: A Step towards Higher Robot Autonomy (I) |
|
Zhu, Jihong | TU Delft |
Keywords:
Abstract: This is an intro video to our workshop.
|
|
09:00-17:00, Paper SuWS18.2 | |
>Theme 1: Safe Interaction with Delicate Deformable Objects (I) |
|
Cherubini, Andrea | LIRMM - Universite De Montpellier CNRS |
Navarro-Alarcon, David | The Hong Kong Polytechnic University |
Alambeigi, Farshid | University of Texas at Austin |
Ficuciello, Fanny | Università Di Napoli Federico II |
Yuan, Wenzhen | Carnegie Mellon University |
Zhu, Jihong | TU Delft |
Keywords:
Abstract: This is an interactive session which consists of three parts: 1. each speaker introduces his/her research in robots working with deformation; 2. an interactive discussion; 3. a short wrap-up.
|
|
09:00-17:00, Paper SuWS18.3 | |
>Theme 2: Modeling and Characterizing Deformation (I) |
|
Navarro-Alarcon, David | The Hong Kong Polytechnic University |
Cherubini, Andrea | LIRMM - Universite De Montpellier CNRS |
Dune, Claire | Université De Toulon |
Berenson, Dmitry | University of Michigan |
Pan, Jia | University of Hong Kong |
Bohg, Jeannette | Stanford University |
Zhu, Jihong | TU Delft |
Keywords:
Abstract: This is an interactive session which consists of three parts: 1. each speaker introduces his/her research in robots working with deformation; 2. an interactive discussion; 3. a short wrap-up.
|
|
09:00-17:00, Paper SuWS18.4 | |
>Theme 3: Produce, Assemble and Build with Deformable Objects (I) |
|
Dune, Claire | Université De Toulon |
Cherubini, Andrea | LIRMM - Universite De Montpellier CNRS |
Navarro-Alarcon, David | The Hong Kong Polytechnic University |
Harada, Kensuke | Osaka University |
Schwartz, Mathew | Advanced Institutes of Convergence Technology |
Zhu, Jihong | TU Delft |
Keywords:
Abstract: This is an interactive session which consists of three parts: 1. each speaker introduces his/her research in robots working with deformation; 2. an interactive discussion; 3. a short wrap-up.
|
|
09:00-17:00, Paper SuWS18.5 | |
>Accepted Workshop Paper/Presentation "RGB-D Sensing of Challenging Deformable Objects" (I) |
|
Lopez-Nicolas, Gonzalo | Universidad De Zaragoza |
Cuiral-Zueco, Ignacio | Universidad De Zaragoza |
Zhu, Jihong | TU Delft |
Keywords:
Abstract: The problem of deformable object tracking is prominent in recent robot shape-manipulation research. Additionally, texture-less objects that undergo large deformations and movements lead to difficult scenarios. Three RGB-D sequences of different challenging scenarios are processed in order to evaluate the robustness and versatility of a deformable object tracking method. Everyday objects of different complex characteristics are manipulated and tracked. The tracking system, pushed out of its comfort zone, performs satisfactorily.
|
|
09:00-17:00, Paper SuWS18.6 | |
>Accepted Workshop Paper/Presentation "Building 3D Deformable Object Models in Partially Observable Robotic Environments" (I) |
|
Payeur, Pierre | University of Ottawa |
Cretu, Ana-Maria | Université Du Québec En Outaouais |
Zhu, Jihong | TU Delft |
Keywords:
Abstract: Building 3D models is an important step to perform more autonomous and dexterous manipulation on deformable objects. Current techniques for modeling deformable objects are inflexible and suffer from discrepancies with actual behaviors when assumptions on their material are not properly fulfilled, or when objects can only be observed from limited viewpoints. Deformable object models require close integration between computer vision, deep learning and robotics. In this work, a framework for modeling 3D deformable objects from multi-view images is presented to support robotic manipulation.
|
|
09:00-17:00, Paper SuWS18.7 | |
>Accepted Workshop Paper/presentation "SOMA: A Data-Driven Representation Framework for Semantic Soft Object Manipulation" (I) |
|
Zhou, Peng | The Hong Kong Polytechnic University |
Zhu, Jihong | TU Delft |
Navarro-Alarcon, David | The Hong Kong Polytechnic University |
Keywords:
Abstract: Soft object manipulation has recently received a lot of attention from the robotics community due to its vast potential applications. Most existing vision-based methods are case-specific, as their representation algorithms typically rely on “hard-coded” features to characterize the object’s shape. In this paper, we present SOMA, a new feedback representation framework for Semantic Soft Object MAnipulation. We introduce internal automatic representation layers between low-level geometric feature extraction and high-level semantic shape analysis, which allows identifying the semantic function of each compressed feature and forming a valid shape classifier. The high-level semantic layer shows how to perform (quasi) motion-planning shaping tasks for soft objects. In this way, the decomposed framework makes soft object representation more generic and scalable. To validate the proposed methodology, we report a detailed experimental study with bimanual manipulation tasks.
|
|
09:00-17:00, Paper SuWS18.8 | |
>Accepted Workshop Paper/presentation "Task-Oriented Contact Adjustment in Deformable Objects Manipulation with Non-Fixed Contact" (I) |
|
Huang, Jing | The Chinese University of Hong Kong |
Cai, Yuanpei | CUHK |
Chu, Xiangyu | The Chinese University of Hong Kong |
Au, K. W. Samuel | The Chinese University of Hong Kong |
Zhu, Jihong | TU Delft |
Keywords:
Abstract: The assumption of fixed contact between robots and deformable objects (DOs) is widely used by previous DO manipulation (DOM) studies. However, in many real-life applications, fixed contact is inapplicable due to various factors, such as the end-effector’s size limitation and the DO’s intrinsic material properties. In such cases, non-fixed contact (NFC), which has not been well investigated so far, usually demonstrates better applicability. For NFC-DOM problems, the contact condition description and adjustment are essential for task execution. In this work, we propose a novel visual characterization method at both the physical and task levels to describe the real-time contact status between the end-effector and the manipulated DO, which enables the robot to perform contact adjustment for the specific ongoing manipulative task. Simulation and experimental results are also presented to validate the effectiveness of the method.
|
|
09:00-17:00, Paper SuWS18.9 | |
>Accepted Workshop Paper/presentation "Adaptive Shape Servoing of Elastic Rods Using Parameterized Regression Features and Auto-Tuning Motion Controls" (I) |
|
Qi, Jiaming | Harbin Institute of Technology |
Ma, Wanyu | The Hong Kong Polytechnic University |
Navarro-Alarcon, David | The Hong Kong Polytechnic University |
Zhu, Jihong | TU Delft |
Keywords:
Abstract: We present a new vision-based method to control the shape of elastic rods with robot manipulators. Our method computes parameterized regression features from online sensor measurements that make it possible to automatically quantify the object’s configuration and establish an explicit shape servo loop. To automatically deform the rod into a desired shape, our adaptive controller iteratively estimates the differential transformation between the robot’s motion and the relative shape changes. This valuable capability makes it possible to effectively manipulate objects with unknown mechanical models. An auto-tuning algorithm is introduced to adjust the robot’s shaping motion in real time based on optimal performance criteria. To validate the proposed theory, we present a detailed numerical and experimental study with vision-guided robotic manipulators.
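The regression-feature idea in this abstract can be illustrated in a few lines: sample points along the rod's centerline and compress them into the coefficients of a low-order polynomial fit, which then serve as the compact feedback signal for a shape servo loop. The following Python sketch is illustrative only — the point data, the quadratic basis, and the function name are assumptions, not the paper's actual parameterization:

```python
# Illustrative "parameterized regression feature": fit y = c0 + c1*x + c2*x^2
# to sampled rod centerline points by least squares; (c0, c1, c2) is the feature.

def quadratic_feature(points):
    """Least-squares quadratic fit via the 3x3 normal equations."""
    # Build A^T A and A^T y for the basis [1, x, x^2].
    sx = [sum(x ** k for x, _ in points) for k in range(5)]   # sums of x^0..x^4
    b = [sum(y * x ** k for x, y in points) for k in range(3)]
    A = [[sx[i + j] for j in range(3)] for i in range(3)]

    # Solve A c = b by Gaussian elimination with partial pivoting.
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for c in range(col, 3):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]

    # Back substitution.
    c = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        c[r] = (b[r] - sum(A[r][k] * c[k] for k in range(r + 1, 3))) / A[r][r]
    return tuple(c)
```

A servo loop would then drive the error between the current and desired coefficient vectors to zero via the estimated motion-to-feature Jacobian.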
|
|
09:00-17:00, Paper SuWS18.10 | |
>Accepted Workshop Paper/presentation "Automatic Shape Control of Deformable Rods Based on Data-Driven Implicit Sensorimotor Models" (I) |
|
Ma, Wanyu | The Hong Kong Polytechnic University |
Qi, Jiaming | Harbin Institute of Technology |
Navarro-Alarcon, David | The Hong Kong Polytechnic University |
Zhu, Jihong | TU Delft |
Keywords:
Abstract: In this work, we propose a general approach to designing a shape-servoing controller that manipulates a deformable object into a desired shape. The raw visual feedback is processed with a regression method to identify the parameters of a continuous geometric model, defined as a task-specific shape feature that globally represents the object. We derive an analytical pose-shape Jacobian matrix based on implicit functions, since an explicit mapping between object deformation and robot pose is often hard to obtain. A velocity-based shape-servoing controller is then designed using the derived pose-shape Jacobian matrix to enable the robot to manipulate the deformable object into the desired shape.
|
|
09:00-17:00, Paper SuWS18.11 | |
>Accepted Workshop Paper/presentation "Assembly Strategy for Deformable Ring-Shaped Objects" (I) |
|
Kim, Yitaek | Hanyang University |
Sloth, Christoffer | University of Southern Denmark |
Zhu, Jihong | TU Delft |
Keywords:
Abstract: This paper presents a method for assembly of deformable belts onto pulleys using two robots. The idea is to let the robots work in a master-slave relation where the master robot performs a desired motion of the belt, while the slave robot tightens the belt to ensure certainty in the configuration of the belt. The method is demonstrated for the assembly of a rubber belt onto two pulleys.
|
|
09:00-17:00, Paper SuWS18.12 | |
>Accepted Workshop Paper/presentation: "MGSD: Multi-Modal Gaussian Shape Descriptors for Correspondence Matching of Linear and Planar Deformable Objects" (I) |
|
Ganapathi, Aditya | University of California, Berkeley |
Sundaresan, Priya | University of California, Berkeley |
Thananjeyan, Brijen | UC Berkeley |
Balakrishna, Ashwin | University of California, Berkeley |
Seita, Daniel | University of California, Berkeley |
Hoque, Ryan | University of California, Berkeley |
Gonzalez, Joseph E. | UC Berkeley |
Goldberg, Ken | UC Berkeley |
Zhu, Jihong | TU Delft |
Keywords:
Abstract: Acquiring useful visual representations is a central challenge in deformable manipulation due to the infinite dimensional state space, tendency to self-occlude, and often textureless and symmetric nature of deformable objects. In this work, we explore learning pixelwise correspondences between images of deformable objects in different configurations as traditional correspondence matching approaches such as SIFT, SURF, and ORB can fail to provide sufficient contextual information for fine-grained manipulation. We propose Multi-Modal Gaussian Shape Descriptor (MGSD), a new visual representation of deformable objects which leverages ideas from dense object descriptors to learn correspondences between different object configurations and reasons about uncertainty and symmetries in the learned correspondences, all in a self-supervised manner. In simulation, experiments suggest that MGSD can achieve an RMSE of 32.4 and 31.3 for cloth and rope respectively, an average of 47.7% improvement over a provided baseline.
|
|
09:00-17:00, Paper SuWS18.13 | |
>Placeholder for a Live Session (I) |
|
Zhu, Jihong | TU Delft |
Keywords:
Abstract: This live interactive session will be held on Zoom with all invited speakers.
|
|
09:00-17:00, Paper SuWS18.14 | |
>Accepted Workshop Paper/presentation "Impact-Aware Control for Deformable Contacts" (I) |
|
Dehio, Niels | Karlsruhe Institute of Technology |
Kheddar, Abderrahmane | CNRS-AIST |
Zhu, Jihong | TU Delft |
Keywords:
Abstract: We envision modern robots performing impulsive tasks like kicking a ball. However, safely generating impacts with robots is challenging due to discontinuous velocity and high impact forces. If not accounted for, they can make the controller unstable or damage the robot. In our recent work, we showed how to control impacts with rigid bodies safely; however, not all objects are rigid. We here generate impacts with deformable contacts, incorporating constraints imposed by the hardware. Therefore, we learn the shock-absorbing soft dynamics. Real-robot experiments with a Panda robot validate our approach.
|
|
SuTU1 |
Room T1 |
From Perception to Planning and Intelligence: A Hands-On Course on Robotics Design and Development using MATLAB and Simulink |
Tutorial |
Chair: Valenti, Roberto | MathWorks |
Co-Chair: Mavrommati, Anastasia | MathWorks |
Organizer: Valenti, Roberto | MathWorks |
Organizer: Mavrommati, Anastasia | MathWorks |
|
09:00-17:00, Paper SuTU1.1 | |
>TS-3186 Tutorial Introduction (I) |
|
Valenti, Roberto | MathWorks |
Mavrommati, Anastasia | MathWorks |
Keywords:
Abstract: The objective of this tutorial is twofold: providing an introductory course on robotics system design by exploring the key technologies that enable autonomy of a robotics system, and presenting solutions for practical design, implementation, and hardware deployment using MathWorks tools. MATLAB® and Simulink® have long been used in many science and engineering disciplines and form the basis of coursework, tutorials, and laboratory experiments for many robotics curricula around the world. Recently, robotics problems have grown beyond classical arm-manipulator kinematics into a broad spectrum of applications involving low-cost hardware such as Arduino, STM32, Raspberry Pi, and education kits (LEGO), all the way to high-performance computing hardware such as Graphics Processing Units (GPUs) and Field Programmable Gate Arrays (FPGAs) for fast, parallel, onboard processing. With new tools such as ROS Toolbox™, Navigation Toolbox™, and Reinforcement Learning Toolbox™, as well as a variety of hardware support packages, MATLAB and Simulink provide an integrated software environment for developing robotic systems both in desktop simulation and on physical robots. Over the course of the tutorial, accomplished professors will give lectures on different components of an advanced robotics system. Moreover, MathWorks engineers will provide hands-on solutions using MATLAB and Simulink and practical examples. The audience will have access to the tools through a temporary license to run and modify the provided examples.
|
|
09:00-17:00, Paper SuTU1.2 | |
>TS-3186 (Perception Session) Lecture: Past, Present, and Future of Simultaneous Localization and Mapping (I) |
|
Valenti, Roberto | MathWorks |
Keywords:
Abstract: Simultaneous Localization And Mapping (SLAM) consists of the concurrent construction of a model of the environment (the map) and the estimation of the state of the robot moving within it. The SLAM community has made astonishing progress over the last 30 years, enabling large-scale real-world applications and witnessing a steady transition of this technology to industry. While a number of problems in SLAM can be considered solved, there is still a huge gap between humans and robots when it comes to world understanding: robot perception can be easily fooled by adversarial instances, requires a large amount of computational resources, and provides a very sparse and fragmented view of the environment in which the robot moves. In this tutorial, I briefly review the algorithmic foundations of SLAM, and I outline a number of open problems that need to be solved in order to bridge the gap between robot and human perception. In particular, I discuss key questions such as: How can we make SLAM algorithms more robust and reliable? Is it possible to run SLAM on a palm-sized drone? What is the role of (deep) learning in the future of SLAM?
|
|
09:00-17:00, Paper SuTU1.3 | |
>TS-3186 (Perception Session) Tutorial: Lidar Data Processing for Object Detection and Tracking (I) |
|
Valenti, Roberto | MathWorks |
Keywords:
Abstract: Lidar is one of the most reliable sensors used for robotics perception and autonomous navigation. Its output can be used for both self-awareness and situational awareness. For example, a lidar point cloud can be processed to estimate the ego vehicle’s motion, and to detect and track nearby objects. In this video the presenter will walk you through two examples that show how to detect, classify, and track vehicles by using point cloud data captured by a lidar sensor mounted on an ego vehicle. The lidar data is recorded from a highway-driving scenario. The examples illustrate the workflow in MATLAB® for processing the point cloud and tracking the objects. The point cloud data is segmented to determine the class of objects using the PointSeg network. A joint probabilistic data association (JPDA) tracker with an interactive multiple model (IMM) filter is used to track the detected vehicles.
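For a flavor of the estimation machinery underneath such a tracker, here is a toy 1D constant-velocity Kalman filter in Python. This is illustrative only — the tutorial itself uses MATLAB, and a JPDA/IMM tracker layers data association and model switching on top of exactly this predict/update cycle:

```python
# Toy 1D constant-velocity Kalman filter. State: (position, velocity).
# All tuning values (q, r) are illustrative assumptions.

def kalman_step(x, P, z, dt=1.0, q=0.01, r=0.25):
    # Predict with the constant-velocity model x' = F x, F = [[1, dt], [0, 1]].
    pos = x[0] + dt * x[1]
    vel = x[1]
    # Covariance predict: P' = F P F^T + Q (Q = q * I for simplicity).
    p00 = P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q
    p01 = P[0][1] + dt * P[1][1]
    p10 = P[1][0] + dt * P[1][1]
    p11 = P[1][1] + q
    # Update with a position measurement z (H = [1, 0]).
    s = p00 + r                    # innovation covariance
    k0, k1 = p00 / s, p10 / s      # Kalman gain
    y = z - pos                    # innovation
    x_new = (pos + k0 * y, vel + k1 * y)
    # P_new = (I - K H) P'
    P_new = [[(1 - k0) * p00, (1 - k0) * p01],
             [p10 - k1 * p00, p11 - k1 * p01]]
    return x_new, P_new
```

Running one step pulls the predicted position toward the measurement and shrinks the position uncertainty; a multi-object lidar tracker runs one such filter per track, with JPDA deciding which detection feeds which filter.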
|
|
09:00-17:00, Paper SuTU1.4 | |
>TS-3186 (Motion Planning Session) Lecture: Motion Planning for Robotics (I) |
|
Valenti, Roberto | MathWorks |
Keywords:
Abstract: This lecture will review task and motion planning topics including but not limited to optimization-based methods, contact modeling, and contact-implicit trajectory optimization.
|
|
09:00-17:00, Paper SuTU1.5 | |
>TS-3186 (Motion Planning Session) Tutorial: Motion Planning for Mobile Robots & Manipulators with MATLAB (I) |
|
Valenti, Roberto | MathWorks |
Keywords:
Abstract: Adding autonomy to any system, such as a self-driving car, includes three main building blocks: perception, planning, and control. In this session, we will see how some of the popular motion planning algorithms work and how you can use them with MATLAB to simulate and deploy for mobile robot navigation as well as robot manipulators.
|
|
09:00-17:00, Paper SuTU1.6 | |
>TS-3186 (Machine Learning for Controls Session) Lecture: Using Reinforcement Learning to Solve Control Problems for Robotics (I) |
|
Valenti, Roberto | MathWorks |
Keywords:
Abstract: The use of reinforcement learning to solve control problems is an area that has attracted a significant amount of research attention in recent years and will continue to grow, as autonomy becomes a necessity. This talk will highlight some new methods for the design of optimal policies that do not require full information of the physics of the systems. Optimization-based design has been responsible for much of the successful performance of engineered systems in aerospace, industrial processes, vehicles, ships, robotics, and elsewhere since the 1960s. We will present novel data-driven methods for nonlinear control-theoretic problems evolving in a continuous-time sense. The dynamics do not need to be known for these online solution techniques. These methods implicitly solve the required design equations without ever explicitly solving them. Different aspects of the control inputs (decision makers) in terms of cooperation, collaboration, altruistic versus selfish behavior, antagonism, competition, incentives, cheating, and other concepts of multiplayer team play will be explored.
|
|
09:00-17:00, Paper SuTU1.7 | |
>TS-3186 (Machine Learning for Controls Session) Tutorial: Reinforcement Learning for Ball Balancing Using a Robot Manipulator (I) |
|
Valenti, Roberto | MathWorks |
Keywords:
Abstract: In this tutorial, we will demonstrate how to use reinforcement learning to solve control tasks in complex dynamic systems such as a redundant robot manipulator. The goal of the task is to design a controller that can balance a ping-pong ball on a flat surface attached to the end effector of the manipulator. Model-based control theories like Model Predictive Control (MPC) or other methods can solve such tasks by creating mathematical models of the plant, but it may become difficult to design such controllers when the plant model becomes complex. Model-free reinforcement learning is an alternative in such situations. We will go through how to use the Reinforcement Learning Toolbox™ to create and train agents that can perform the ball balancing task while being robust to variabilities in the environment. At the end of the tutorial, you will have learned how to create environments, represent agents through neural networks, and train the networks to satisfactory performance.
|
|
09:00-17:00, Paper SuTU1.8 | |
>TS-3186 (Implementation and Deployment Session) Tutorial: Complete Pick-And-Place Workflow for Robot Manipulators (I) |
|
Valenti, Roberto | MathWorks |
Keywords:
Abstract: Many pick-and-place robotic applications require a lengthy workflow: from computer vision and deep learning for object detection to motion planning and control. This workflow may require multiple programming languages. In MATLAB® and Simulink®, this process is integrated. In this presentation, we will show you how to label images and produce training data sets, train neural networks, deploy computer vision algorithms, perform path planning, and tune a motion controller for Kinova robot arm hardware, without leaving MATLAB and Simulink.
|
|
09:00-17:00, Paper SuTU1.9 | |
>TS-3186 (Implementation and Deployment Session) Tutorial: Using ROS and ROS 2 with MATLAB and Simulink (I) |
|
Valenti, Roberto | MathWorks |
Keywords:
Abstract: ROS is a commonly used framework for designing complex robotic systems. It is popular for building distributed robot software systems, as well as for its integration with packages for simulation, visualization, robotics algorithms, and more. ROS has become increasingly popular in industry, especially in the development of autonomous vehicles. The ROS interface provided by MathWorks ROS Toolbox lets you: 1) connect to a ROS network from any operating system supported by MATLAB® and Simulink®, 2) leverage built-in functionality in MathWorks toolboxes – for example, control systems, computer vision, machine learning, signal processing, and state machine design; and 3) automatically generate stand-alone C++ ROS nodes from algorithms designed in MATLAB and Simulink. MATLAB and Simulink can coexist with your ROS based workflow via desktop prototyping, deployment of standalone ROS nodes, or both. This tutorial will introduce how to design the ROS-based applications in MATLAB and Simulink. Through several examples, we will cover: • Data analysis for ROS in MATLAB and Simulink • Algorithm prototyping and development using ROS and ROS 2 network connection to external simulator and hardware • ROS node generation and deployment.
|
|
09:00-17:00, Paper SuTU1.10 | |
TS-3186 Video 9 (I) |
|
Valenti, Roberto | MathWorks |
|
09:00-17:00, Paper SuTU1.11 | |
TS-3186 Video 10 (I) |
|
Valenti, Roberto | MathWorks |
|
09:00-17:00, Paper SuTU1.12 | |
TS-3186 Video 11 (I) |
|
Valenti, Roberto | MathWorks |
|
09:00-17:00, Paper SuTU1.13 | |
TS-3186 Video 12 (I) |
|
Valenti, Roberto | MathWorks |
|
09:00-17:00, Paper SuTU1.14 | |
TS-3186 Video 13 (I) |
|
Valenti, Roberto | MathWorks |
|
09:00-17:00, Paper SuTU1.15 | |
TS-3186 Video 14 (I) |
|
Valenti, Roberto | MathWorks |
|
SuTU2 |
Room T2 |
Introduction to Space Robotics |
Tutorial |
Chair: Cloud, Joseph | University of Texas at Arlington, NASA Kennedy Space Center |
Co-Chair: Beksi, William | University of Texas at Arlington |
Organizer: Cloud, Joseph | University of Texas at Arlington, NASA Kennedy Space Center |
Organizer: Beksi, William | University of Texas at Arlington |
|
09:00-17:00, Paper SuTU2.1 | |
>Welcome and Introduction to the Session (I) |
|
Cloud, Joseph | University of Texas at Arlington, NASA Kennedy Space Center |
Beksi, William | University of Texas at Arlington |
Keywords:
Abstract: In this video, Joe Cloud introduces the session, speakers, and instructors. For more information about the session, please visit the tutorial webpage: www.space-robotics.org
|
|
09:00-17:00, Paper SuTU2.2 | |
>Introduction to In-Situ Resource Utilization (I) |
|
Cloud, Joseph | University of Texas at Arlington, NASA Kennedy Space Center |
Keywords:
Abstract: Space Robotics for In-Situ Resource Utilization: Introduction to ISRU Speaker: Dr. Paul van Susante Please visit the website for more information: www.space-robotics.org
|
|
09:00-17:00, Paper SuTU2.3 | |
>Space Mobility and Autonomy for In-Situ Resources Utilization (I) |
|
Cloud, Joseph | University of Texas at Arlington, NASA Kennedy Space Center |
Keywords:
Abstract: Space Robotics for In-Situ Resource Utilization: Space Mobility and Autonomy for in-situ resources utilization Speaker: Dr. Issa Nesnas Placeholder video. Please visit the website for more information: www.space-robotics.org
|
|
09:00-17:00, Paper SuTU2.4 | |
>Tutorial 0: Setup (I) |
|
Cloud, Joseph | University of Texas at Arlington, NASA Kennedy Space Center |
Keywords:
Abstract: Space Robotics for In-Situ Resource Utilization: Tutorial 0: Setup Speaker: Tiger Sachse Please visit the website for more information: www.space-robotics.org
|
|
09:00-17:00, Paper SuTU2.5 | |
>Tutorial 1: Direct Operation (I) |
|
Cloud, Joseph | University of Texas at Arlington, NASA Kennedy Space Center |
Keywords:
Abstract: Space Robotics for In-Situ Resource Utilization: Tutorial 1: Direct Operation Speaker: Tiger Sachse Please visit the website for more information: www.space-robotics.org
|
|
09:00-17:00, Paper SuTU2.6 | |
>Tutorial 2: Delayed Teleoperation (I) |
|
Cloud, Joseph | University of Texas at Arlington, NASA Kennedy Space Center |
Keywords:
Abstract: Space Robotics for In-Situ Resource Utilization: Tutorial 2: Delayed Teleoperation Speaker: Ronald Marrero Please visit the website for more information: www.space-robotics.org
|
|
09:00-17:00, Paper SuTU2.7 | |
>Tutorial 3: Autonomous Operation (I) |
|
Cloud, Joseph | University of Texas at Arlington, NASA Kennedy Space Center |
Keywords:
Abstract: Space Robotics for In-Situ Resource Utilization: Tutorial 3: Autonomous Operations Speaker: Tiger Sachse Please visit the website for more information: www.space-robotics.org
|
|
SuTU3 |
Room T3 |
F1/10 Competition at IROS2020 |
Tutorial |
Chair: Krovi, Venkat | Clemson University |
Co-Chair: Abbas, Houssam | Oregon State University |
Organizer: Krovi, Venkat | Clemson University |
Organizer: Mangharam, Rahul | University of Pennsylvania |
Organizer: Abbas, Houssam | Oregon State University |
Organizer: O'Kelly, Matthew | University of Pennsylvania |
Organizer: Behl, Madhur | University of Virginia |
|
09:00-17:00, Paper SuTU3.1 | |
>TS-77 F1TENTH Tutorial at IROS2020: An Introduction (I) |
|
O'Kelly, Matthew | University of Pennsylvania |
Zheng, Hongrui | University of Pennsylvania |
Luong, Kim | University of Pennsylvania |
Abbas, Houssam | Oregon State University |
Behl, Madhur | University of Virginia |
Mangharam, Rahul | University of Pennsylvania |
Krovi, Venkat | Clemson University |
Keywords:
Abstract: F1TENTH is a complete, ready-to-race autonomous race car that is 1/10th-scale, 10 times the fun and 1/100th the cost of a real self-driving car. In this talk we will demonstrate how F1TENTH is an easy-to-use high-performance platform for machine learning engineering for perception, planning, control and coordination for future safe and connected autonomous systems. F1TENTH has a growing community of over 60 universities, 7 international autonomous racing competitions and hands-on course offerings in over a dozen institutions. We'll detail the platform’s hardware, autonomous vehicle software stack, simulators, and systems infrastructure. We highlight three specific capabilities for streamlined algorithm development, testing and validation: a set of simulators, control and verification, and efficient machine-learning algorithm development. Members of the F1TENTH community have published new research at ICML, ICRA, NeurIPS, and HSCC to demonstrate autonomous driving at the limits of performance and to accelerate the development of safe autonomous vehicles. Together with Nvidia and the US Department of Transportation CARMA 1Tenth community, we would like to invite you to join the community for experimentation, standardization and certification of Cooperative Driving Autonomy.
|
|
09:00-17:00, Paper SuTU3.2 | |
>TS-77 Introduction to the F1Tenth Car (New Build) and Simulator: Video 01 (I) |
|
O'Kelly, Matthew | University of Pennsylvania |
Zheng, Hongrui | University of Pennsylvania |
Luong, Kim | University of Pennsylvania |
Abbas, Houssam | Oregon State University |
Behl, Madhur | University of Virginia |
Mangharam, Rahul | University of Pennsylvania |
Krovi, Venkat | Clemson University |
Keywords:
Abstract: F1TENTH is a complete, ready-to-race autonomous race car that is 1/10th-scale, 1/100th the cost but 10 times the fun of a real self-driving car. In this tutorial we will demonstrate how F1TENTH has provided an easy-to-use high-performance platform for machine learning engineering for perception, planning, control and coordination for future safe and connected autonomous systems. Here is where you will find all the information needed to get started and join the F1TENTH community! This build documentation is divided into four sequential sections, each one building on the other: 1. Building the Car - Start here if you are building the car from scratch. 2. System Configuration - Start here if you have completed building the car. 3. Installing Firmware - Start here if you’ve already done 1 and 2 above. 4. Driving the Car - Start here if you have everything set up from the previous three sections and are ready to learn how to set up a workspace on the vehicle and start driving! Web URL: https://f1tenth.org/build.html and https://f1tenth.readthedocs.io/en/stable/going_forward/simulator/index.html# Difficulty Level: Intermediate-Advanced Approximate Time Investment: 10-15 hours You might want to plan on saving an entire day to work on this project if you are going through this Getting Started section from beginning to end.
|
|
09:00-17:00, Paper SuTU3.3 | |
>TS-77 Introduction to the F1Tenth Car (New Build) and Simulator: Video 02 (I) |
|
O'Kelly, Matthew | University of Pennsylvania |
Luong, Kim | University of Pennsylvania |
Zheng, Hongrui | University of Pennsylvania |
Abbas, Houssam | Oregon State University |
Behl, Madhur | University of Virginia |
Mangharam, Rahul | University of Pennsylvania |
Krovi, Venkat | Clemson University |
Keywords:
Abstract: TS-77 Introduction to the F1Tenth Car (New Build) and Simulator: Video 02.
|
|
09:00-17:00, Paper SuTU3.4 | |
>TS-77 F1TENTH Tutorial: Unused (I) |
|
Behl, Madhur | University of Virginia |
Luong, Kim | University of Pennsylvania |
Zheng, Hongrui | University of Pennsylvania |
O'Kelly, Matthew | University of Pennsylvania |
Abbas, Houssam | Oregon State University |
Mangharam, Rahul | University of Pennsylvania |
Krovi, Venkat | Clemson University |
Keywords:
Abstract: Unused
|
|
09:00-17:00, Paper SuTU3.5 | |
>TS-77 F1TENTH Tutorial: Reactive Methods - Follow the Gap (I) |
|
Behl, Madhur | University of Virginia |
O'Kelly, Matthew | University of Pennsylvania |
Zheng, Hongrui | University of Pennsylvania |
Luong, Kim | University of Pennsylvania |
Abbas, Houssam | Oregon State University |
Mangharam, Rahul | University of Pennsylvania |
Krovi, Venkat | Clemson University |
Keywords:
Abstract: The simplest autonomous algorithm for the F1tenth platform is a wall-following algorithm which keeps the car parallel to the track boundaries. It involves using the sensor data from the LIDAR and implementing a PID controller for tracking the wall. This is a reactive method for racing, since no prior knowledge of the track is required. In this tutorial session, we will cover an advanced reactive method for autonomous racing: Follow the Gap. We will explain the foundations and intuition behind the FTG method and talk about its pros and cons for use on the F1tenth platform. Many of our race winners have successfully used the FTG method in the past due to its simplicity, especially for single-agent situations.
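The core of the Follow the Gap method fits in a short sketch: mask a safety "bubble" around the nearest lidar return, find the widest run of free ranges above a threshold, and steer toward the deepest point in that gap. This Python sketch uses made-up beam values and a simplified index-based steering output; the real F1TENTH implementation works on full ROS LaserScan messages:

```python
# Minimal "Follow the Gap" sketch on a 1D list of lidar ranges (meters).

def follow_the_gap(ranges, bubble=1, threshold=1.5):
    """Return the index of the beam to steer toward."""
    r = list(ranges)
    # 1. Find the closest obstacle and zero out a safety bubble around it.
    closest = min(range(len(r)), key=lambda i: r[i])
    for i in range(max(0, closest - bubble), min(len(r), closest + bubble + 1)):
        r[i] = 0.0

    # 2. Find the largest contiguous run of ranges above the free-space threshold.
    best_start = best_len = cur_start = cur_len = 0
    for i, d in enumerate(r):
        if d > threshold:
            if cur_len == 0:
                cur_start = i
            cur_len += 1
            if cur_len > best_len:
                best_start, best_len = cur_start, cur_len
        else:
            cur_len = 0

    if best_len == 0:
        # No gap above threshold: fall back to the overall deepest beam.
        return max(range(len(r)), key=lambda i: r[i])

    # 3. Aim at the furthest point inside the gap.
    gap = range(best_start, best_start + best_len)
    return max(gap, key=lambda i: r[i])
```

With an obstacle at beam 2, the car picks the deepest beam in the free run to its right; mapping the chosen beam index to a steering angle is left out of the sketch.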
|
|
09:00-17:00, Paper SuTU3.6 | |
>TS-77 F1TENTH Tutorial: Mapping and Localization (I) |
|
Behl, Madhur | University of Virginia |
O'Kelly, Matthew | University of Pennsylvania |
Zheng, Hongrui | University of Pennsylvania |
Luong, Kim | University of Pennsylvania |
Abbas, Houssam | Oregon State University |
Mangharam, Rahul | University of Pennsylvania |
Krovi, Venkat | Clemson University |
Keywords:
Abstract: We learn how to determine the state (position and orientation) of a robot with respect to its environment. The F1TENTH vehicle uses range measurements from its 2D LIDAR to build a map of the environment and to localize itself within it. The tutorial introduces the concept of an occupancy grid map, followed by an overview of two Simultaneous Localization and Mapping (SLAM) algorithms: Hector SLAM and Cartographer. Following this, the concept and implementation of a GPU-accelerated particle filter for localization are presented, with details of the Adaptive Monte Carlo Localization (AMCL) implementation on the F1tenth platform.
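The particle filter localization mentioned above follows a predict-weight-resample cycle. The sketch below is a deliberately simplified, CPU-only illustration (not the tutorial's GPU implementation): `measure_fn` stands in for ray-casting a LIDAR beam against the occupancy grid map, and the scalar measurement, noise levels, and function names are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def mcl_step(particles, control, measurement, measure_fn,
             motion_noise=0.05, meas_noise=0.1):
    """One predict-weight-resample cycle of Monte Carlo Localization.

    particles: (N, D) array of pose hypotheses.
    control: displacement applied to every particle, shape (D,).
    measurement: observed sensor value (a single scalar here, for simplicity).
    measure_fn: maps a pose to the expected measurement (stands in for
        ray-casting against the occupancy grid map).
    """
    # Predict: apply the control input with added motion noise.
    particles = particles + control + rng.normal(0.0, motion_noise, particles.shape)
    # Weight: Gaussian likelihood of the measurement given each pose.
    expected = np.apply_along_axis(measure_fn, 1, particles)
    w = np.exp(-0.5 * ((measurement - expected) / meas_noise) ** 2)
    w /= w.sum()
    # Resample: draw a new particle set proportional to the weights.
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]
```

As a toy usage, a robot in a 1-D corridor with a wall at x = 5 that measures a distance of 3 m should see its particle cloud collapse around x = 2 after a few cycles. The GPU version in the tutorial parallelizes the weighting step, which dominates the cost when each particle requires many ray-casts.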
|
|
09:00-17:00, Paper SuTU3.7 | |
TS-77 F1TENTH Tutorial: Planning (I) |
|
O'Kelly, Matthew | University of Pennsylvania |
Zheng, Hongrui | University of Pennsylvania |
Luong, Kim | University of Pennsylvania |
Abbas, Houssam | Oregon State University |
Behl, Madhur | University of Virginia |
Mangharam, Rahul | University of Pennsylvania |
Krovi, Venkat | Clemson University |
|
09:00-17:00, Paper SuTU3.8 | |
TS-77 F1TENTH Tutorial: Learning & Vision (I) |
|
O'Kelly, Matthew | University of Pennsylvania |
Zheng, Hongrui | University of Pennsylvania |
Luong, Kim | University of Pennsylvania |
Behl, Madhur | University of Virginia |
Abbas, Houssam | Oregon State University |
Mangharam, Rahul | University of Pennsylvania |
Krovi, Venkat | Clemson University |
|
09:00-17:00, Paper SuTU3.12 | |
>TS-77 F1TENTH Tutorial: Research and Educational Directions (OSU) (I) |
|
O'Kelly, Matthew | University of Pennsylvania |
Zheng, Hongrui | University of Pennsylvania |
Luong, Kim | University of Pennsylvania |
Abbas, Houssam | Oregon State University |
Behl, Madhur | University of Virginia |
Mangharam, Rahul | University of Pennsylvania |
Krovi, Venkat | Clemson University |
Keywords:
Abstract: This part of the tutorial gives examples of how students got started on their research by leveraging the F1/10 platform to learn concepts in verification and control. The students were senior undergraduates and M.Sc. students.
|
|
09:00-17:00, Paper SuTU3.13 | |
>TS-77 F1TENTH: Research and Educational Directions (Clemson) (I) |
|
Krovi, Venkat | Clemson University |
O'Kelly, Matthew | University of Pennsylvania |
Zheng, Hongrui | University of Pennsylvania |
Luong, Kim | University of Pennsylvania |
Abbas, Houssam | Oregon State University |
Behl, Madhur | University of Virginia |
Mangharam, Rahul | University of Pennsylvania |
Keywords:
Abstract: F1TENTH is a complete, ready-to-race autonomous race car that is 1/10th the scale and 1/100th the cost of a real self-driving car. In this talk we demonstrate how F1TENTH serves as an easy-to-use, high-performance platform for machine learning engineering in perception, planning, control, and coordination for future safe and connected autonomous systems. We describe the goals, development, and offering of a scaffolded series of courses used with incoming M.S. student cohorts at Clemson University: (A) "Autonomy: Science and Systems", followed by (B) an independent study course in "Scaled Autonomous Vehicles". The course series is geared towards graduate mechanical and automotive engineering students with limited prior exposure to coding, autonomy, and systems-level tradeoffs. URLs: https://sites.google.com/g.clemson.edu/theautonomyclass/scaled-autonomous-vehicles?authuser=0 and https://sites.google.com/g.clemson.edu/theautonomyclass/meet-the-students?authuser=0
|
|
09:00-17:00, Paper SuTU3.14 | |
>TS-77 F1TENTH Tutorial: Research and Educational Directions (UVA) (I) |
|
Behl, Madhur | University of Virginia |
O'Kelly, Matthew | University of Pennsylvania |
Luong, Kim | University of Pennsylvania |
Zheng, Hongrui | University of Pennsylvania |
Abbas, Houssam | Oregon State University |
Mangharam, Rahul | University of Pennsylvania |
Krovi, Venkat | Clemson University |
Keywords:
Abstract: Prof. Madhur Behl of the University of Virginia shares his experience teaching the F1/10 Autonomous Racing course to undergraduates since 2017, and discusses how the research platform enables research and experimentation in his lab, bringing compelling research ideas to reality. A video montage of F1tenth research from his lab is presented.
|
|
09:00-17:00, Paper SuTU3.15 | |
TS-77 F1TENTH Tutorial: (Optional) Research and Educational Directions (PENN) (I) |
|
O'Kelly, Matthew | University of Pennsylvania |
Zheng, Hongrui | University of Pennsylvania |
Luong, Kim | University of Pennsylvania |
Abbas, Houssam | Oregon State University |
Behl, Madhur | University of Virginia |
Mangharam, Rahul | University of Pennsylvania |
Krovi, Venkat | Clemson University |