|
MoAT10 |
Room T10 |
Aerial Systems: Applications I |
Regular session |
Chair: Mueller, Mark Wilfried | University of California, Berkeley |
Co-Chair: Bezzo, Nicola | University of Virginia |
|
10:00-10:15, Paper MoAT10.1 | |
>Staging Energy Sources to Extend Flight Time of a Multirotor UAV |
> Video Attachment
|
|
Jain, Karan | UC Berkeley |
Tang, Haoyun (Jerry) | UC Berkeley
Sreenath, Koushil | University of California, Berkeley |
Mueller, Mark Wilfried | University of California, Berkeley |
Keywords: Aerial Systems: Applications, Mechanism Design, Cellular and Modular Robots
Abstract: Energy sources such as batteries do not decrease in mass after consumption, unlike combustion-based fuels. We present the concept of staging energy sources, i.e., consuming energy in stages and ejecting used stages, to progressively reduce the mass of aerial vehicles in flight, which reduces power consumption and consequently increases flight time. A flight time vs. energy storage mass analysis is presented to show the endurance benefit of staging to multirotors. We consider two specific problems in discrete staging -- the optimal order of staging given a certain number of energy sources, and the optimal partitioning of a given energy storage mass budget into a given number of stages. We then derive results for a continuously staged case of an internal combustion engine driving propellers. Notably, we show that a multirotor powered by internal combustion has an upper limit on achievable flight time independent of the available fuel mass. Lastly, we validate the analysis with flight experiments on a custom two-stage battery-powered quadcopter. This quadcopter can eject a battery stage after consumption in flight using a custom-designed mechanism, and continue hovering using the next stage. The experimental flight times match well with those predicted from the analysis for our vehicle. We achieve a 19% increase in flight time using the batteries in two stages as compared to a single stage.
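As an illustration of the endurance benefit of staging, the following back-of-the-envelope Python sketch compares hover time for a single battery pack against the same battery mass split into two ejectable stages. It is not the paper's model: the m^1.5 hover-power law is a generic actuator-disk approximation, and the mass, energy-density, and power constants are hypothetical.

def hover_power(mass_kg, k=50.0):
    # Hover power in watts; k lumps rotor area, air density, and efficiency (assumed value).
    return k * mass_kg ** 1.5

def flight_time_single(dry_mass, batt_mass, energy_per_kg=200.0 * 3600):
    # One battery pack carried for the whole flight.
    return batt_mass * energy_per_kg / hover_power(dry_mass + batt_mass)

def flight_time_two_stage(dry_mass, batt_mass, energy_per_kg=200.0 * 3600):
    # Same battery mass split into two equal stages; stage 1 is ejected when empty.
    half = batt_mass / 2.0
    t1 = half * energy_per_kg / hover_power(dry_mass + batt_mass)  # both packs on board
    t2 = half * energy_per_kg / hover_power(dry_mass + half)       # after ejecting stage 1
    return t1 + t2

print("single-stage hover:", round(flight_time_single(1.0, 0.5) / 60, 1), "min")
print("two-stage hover:   ", round(flight_time_two_stage(1.0, 0.5) / 60, 1), "min")

Even with these made-up numbers, the two-stage split yields a double-digit percentage gain in hover time, qualitatively in line with the 19% improvement reported above.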
|
|
10:15-10:30, Paper MoAT10.2 | |
>Target Search on Road Networks with Range-Constrained UAVs and Ground-Based Mobile Recharging Vehicles |
|
Booth, Kyle E. C. | University of Toronto |
Piacentini, Chiara | University of Toronto |
Bernardini, Sara | Royal Holloway University of London |
Beck, J. Christopher | University of Toronto |
Keywords: Aerial Systems: Applications, Surveillance Systems, Planning, Scheduling and Coordination
Abstract: We study a range-constrained variant of the multi-UAV target search problem where commercially available UAVs are used for target search in tandem with ground-based mobile recharging vehicles (MRVs) that can travel, via the road network, to meet up with and recharge a UAV. We propose a pipeline for representing the problem on real-world road networks, starting with a map of the road network and yielding a final routing graph that permits UAVs to recharge via rendezvous with MRVs. The problem is then solved using mixed-integer linear programming (MILP) and constraint programming (CP). We conduct a comprehensive simulation of our methods using real-world road network data from Scotland. The assessment investigates accumulated search reward compared to ideal and worst-case scenarios and briefly explores the impact of UAV speeds. Our empirical results indicate that CP is able to provide better solutions than MILP, overall, and that the use of a fleet of MRVs can improve the accumulated reward of the UAV fleet, supporting their inclusion for surveillance tasks.
|
|
10:30-10:45, Paper MoAT10.3 | |
>Assured Runtime Monitoring and Planning: Towards Verification of Neural Networks for Safe Autonomous Operations (I) |
> Video Attachment
|
|
Yel, Esen | University of Virginia |
Carpenter, Taylor | University of Pennsylvania |
Di Franco, Carmelo | University of Virginia |
Ivanov, Radoslav | University of Pennsylvania |
Kantaros, Yiannis | University of Pennsylvania |
Lee, Insup | University of Pennsylvania |
Weimer, James | University of Pennsylvania |
Bezzo, Nicola | University of Virginia |
Keywords: Aerial Systems: Applications, Novel Deep Learning Methods, Hybrid Logical/Dynamical Planning and Verification
Abstract: Autonomous systems operating in uncertain environments under the effects of disturbances and noise can reach unsafe states even while using fine-tuned controllers and precise sensors and actuators. To provide safety guarantees on such systems during motion planning operations, reachability analysis (RA) has been demonstrated to be a powerful tool. RA, however, suffers from computational complexity, especially when dealing with complex systems characterized by high-order dynamics, making it hard to deploy for runtime monitoring. To deal with this issue, in this work, a neural network (NN)-based framework is proposed to perform fast online monitoring for safety, and an approach for verification of NNs is presented. Training is performed offline using precise RA tools, while the trained NN is used online as a fast safety checker for motion planning. In this way, at runtime, a planned trajectory can be quickly predicted to be safe or unsafe: when unsafe, a replanning procedure is triggered until a safe trajectory is obtained. The results of the trained network are tested for verification using our recent tool Verisig, in which the NN is transformed into a hybrid system in order to provide guarantees before deployment. If the NN cannot be verified, the outputs of the verification are used to retrain the network until verification is achieved. Two illustrative case studies on a quadrotor aerial vehicle - a pick-up/drop-off operation and navigation in a cluttered environment - are presented to validate the proposed framework both in simulations and experiments.
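The offline-training / online-checking split described above can be pictured with a toy stand-in. The sketch below is hypothetical and is not the authors' pipeline, network, or the Verisig tool: it trains a small scikit-learn classifier on made-up "reachability" labels over two invented plan features, then uses it as a fast runtime safety check for a candidate plan.

import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
# Hypothetical features describing a candidate plan: initial speed, obstacle clearance.
X = rng.uniform([0.0, 0.5], [5.0, 10.0], size=(2000, 2))
# Stand-in for offline reachability-analysis labels: unsafe when fast and close.
y = (X[:, 0] / X[:, 1] > 0.8).astype(int)

# Offline: train the fast surrogate safety checker.
clf = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0).fit(X, y)

# Online: query the trained network for a candidate plan; if unsafe, replan.
candidate = np.array([[3.5, 3.0]])
print("unsafe" if clf.predict(candidate)[0] else "safe")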
|
|
10:45-11:00, Paper MoAT10.4 | |
>UAV-AdNet: Unsupervised Anomaly Detection Using Deep Neural Networks for Aerial Surveillance |
> Video Attachment
|
|
Bozcan, Ilker | Aarhus University |
Kayacan, Erdal | Aarhus University |
Keywords: Aerial Systems: Applications, Aerial Systems: Mechanics and Control, Aerial Systems: Perception and Autonomy
Abstract: Anomaly detection is a key goal of autonomous surveillance systems, which should be able to raise alerts on unusual observations. In this paper, we propose a holistic anomaly detection system using deep neural networks for surveillance of critical infrastructures (e.g., airports, harbors, warehouses) using an unmanned aerial vehicle (UAV). First, we present a heuristic method for the explicit representation of spatial layouts of objects in bird-view images. Then, we propose a deep neural network architecture for unsupervised anomaly detection (UAV-AdNet), which is trained jointly on environment representations and GPS labels of bird-view images. Unlike studies in the literature, we combine GPS and image data to predict abnormal observations. We evaluate our model against several baselines on our aerial surveillance dataset and show that it performs better in scene reconstruction and several anomaly detection tasks. The code, trained models, dataset, and video will be available at https://bozcani.github.io/uavadnet.
|
|
11:00-11:15, Paper MoAT10.5 | |
>A Morphing Cargo Drone for Safe Flight in Proximity of Humans |
> Video Attachment
|
|
Kornatowski, Przemyslaw Mariusz | Ecole Polytechnique Federale De Lausanne (EPFL) |
Feroskhan, Mir | Nanyang Technological University |
Stewart, William | Ecole Polytechnique Federale De Lausanne |
Floreano, Dario | Ecole Polytechnique Federale De Lausanne
Keywords: Aerial Systems: Applications, Intelligent Transportation Systems, Field Robots
Abstract: Delivery drones used by logistics companies today are equipped with unshielded propellers, which represent a major hurdle for in-hand parcel delivery. The exposed propeller blades are hazardous to unsuspecting bystanders, pets, and untrained users. One solution to provide safety is to enclose a drone with an all-encompassing protective cage. However, the structures of existing cage designs have low density in order to minimize obstruction of propeller airflow, so as not to decrease efficiency. The relatively large openings in the cage do not protect hands and fingers from fast-rotating propellers. Here we describe a novel approach to safety and aerodynamic efficiency by means of a high-density cage and morphing arms loosely inspired by the box turtle. The drone cage is made of a dense and lightweight grid. When flying in proximity of humans, the arms and propellers are retracted and fully sealed within the cage, thus making the drone safe and also reducing the total footprint. When flying at cruising altitude far from people and objects, the arms and propellers extend out of the protective grid, thus increasing aerodynamic efficiency by more than 20%.
|
|
MoAT11 |
Room T11 |
Aerial Systems: Applications II |
Regular session |
Chair: Minor, Mark | University of Utah |
Co-Chair: Yu, Kee-Ho | Chonbuk National University |
|
10:00-10:15, Paper MoAT11.1 | |
>ROSflight: A Lean Open-Source Research Autopilot |
|
Jackson, James | Brigham Young University |
Koch, Daniel | Brigham Young University |
Henrichsen, Trey | Brigham Young University |
McLain, T.W. | Brigham Young University |
Keywords: Aerial Systems: Applications
Abstract: ROSflight is a lean, open-source autopilot system developed with the primary goal of supporting the needs of researchers working with micro aerial vehicle systems. The project consists of firmware designed to run on low-cost, readily available flight controller boards, as well as ROS packages for interfacing between the flight controller and application code and for simulation. The core objectives of the project are as follows: maintain a small, easy-to-understand code base; provide high-bandwidth, low-latency communication between the flight controller and application code; provide a straightforward interface to research application code; allow for robust safety pilot integration; and enable true software-in-the-loop simulation capability.
|
|
10:15-10:30, Paper MoAT11.2 | |
>Online Weight-Adaptive Nonlinear Model Predictive Control |
|
Kostadinov, Dimche | University of Zurich, Robotics and Perception Group |
Scaramuzza, Davide | University of Zurich |
Keywords: Aerial Systems: Applications
Abstract: Nonlinear Model Predictive Control (NMPC) is a powerful and widely used technique for nonlinear dynamic process control under constraints. In NMPC, the state and control weights of the corresponding state and control costs are commonly selected based on human-expert knowledge, which usually reflects the acceptable stability in practice. Although broadly used, this approach might not be optimal for the execution of a trajectory with the lowest positional error and sufficiently "smooth" changes in the predicted controls. Furthermore, NMPC with an online weight update strategy for fast, agile, and precise unmanned aerial vehicle navigation has not been studied extensively. To this end, we propose a novel control problem formulation that allows online updates of the state and control weights. As a solution, we present an algorithm that consists of two alternating stages: (i) state and command variable prediction and (ii) weight updates. We present a numerical evaluation with a comparison and analysis of different trade-offs for the problem of quadrotor navigation. Our computer simulation results show improvements of up to 70% in the accuracy of the executed trajectory compared to the standard solution of NMPC with fixed weights.
|
|
10:30-10:45, Paper MoAT11.3 | |
>CinemAirSim: A Camera-Realistic Robotics Simulator for Cinematographic Purposes |
> Video Attachment
|
|
Pueyo, Pablo | Universidad De Zaragoza |
Cristofalo, Eric | Stanford University |
Montijano, Eduardo | Universidad De Zaragoza |
Schwager, Mac | Stanford University |
Keywords: Software, Middleware and Programming Environments, Simulation and Animation, Aerial Systems: Applications
Abstract: Unmanned Aerial Vehicles (UAVs) are becoming increasingly popular in the film and entertainment industries, in part because of their maneuverability and the perspectives they enable. While there exist methods for controlling the position and orientation of the drones for visibility, other artistic elements of the filming process, such as focal blur, remain unexplored in the robotics community. The lack of cinematographic robotics solutions is partly due to the cost associated with the cameras and devices used in the filming industry, but also because state-of-the-art photo-realistic robotics simulators only utilize a full in-focus pinhole camera model, which does not incorporate these desired artistic attributes. To overcome this, the main contribution of this work is to endow the well-known drone simulator, AirSim, with a cinematic camera as well as extend its API to control all of its parameters in real time, including various filming lenses and common cinematographic properties. In this paper, we detail the implementation of our AirSim modification, CinemAirSim, present examples that illustrate the potential of the new tool, and highlight the new research opportunities that the use of cinematic cameras can bring to research in robotics and control.
|
|
10:45-11:00, Paper MoAT11.4 | |
>Design and Evaluation of a Perching Hexacopter Drone for Energy Harvesting from Power Lines |
|
Kitchen, Ryan | University of Utah |
Bierwolf, Nick | University of Utah |
Harbertson, Sean | University of Utah |
Platt, Brage | University of Utah |
Owen, Dean | University of Utah |
Griesman, Klaus | University of Utah |
Minor, Mark | University of Utah |
Keywords: Aerial Systems: Applications
Abstract: With a growing number of applications for UAVs in the world, there is a clear limitation regarding the need for extended battery life. With current flight times, many users would benefit greatly from an innovative option for field charging these devices. The objective of this project is to investigate the feasibility of inductively harvesting energy from a power line cable for applications such as charging a UAV drone. The research investigates a dual-hook perching device that securely attaches to a power cable and aligns an inductive core with the cable for harvesting energy from its electromagnetic field. Modeling and analysis of the core highlights critical design parameters, leading to the evaluation of circular, semi-cylindrical, and u-shaped prototypes designed to interface with a 1” power cable. Underactuated two-jaw manipulators at each end of the coil are proposed for grasping the cable and aligning it with the charging coil, ultimately providing a firm grasp and perch. An open-source hexacopter drone was used in this study to integrate the novel charging system. The results provided can be used as a starting point to study the reliability of this method of charging and to further investigate the perching abilities of UAVs.
|
|
11:00-11:15, Paper MoAT11.5 | |
>Flight Path Planning of Solar-Powered UAV for Sustainable Communication Relay |
|
Guerra Padilla, Giancarlo Eder | Chonbuk National University |
Kim, Kun-Jung | Chonbuk National University |
Park, Seok-Hwan | Jeonbuk National University |
Yu, Kee-Ho | Chonbuk National University |
Keywords: Aerial Systems: Applications, Energy and Environment-Aware Automation, Motion and Path Planning
Abstract: Communication is a key aspect of modern life. Unfortunately, when natural disasters occur, the communication system and infrastructure of a city can be partially lost and, in the worst case, completely destroyed. In this case, communication is a crucial part of search-and-rescue missions. This paper focuses on developing an aerial communication relay platform as an effective solution for communication loss in a natural disaster. The model used considers the aircraft altitude and attitude, which affect the energy acquisition and consumption, as well as signal fading effects. Flight path planning is performed using the Hermite-Simpson collocation method, a nonlinear optimization technique. For a realistic communication model regarding urban signal loss and path propagation, the building layout of a 2 km radius circular area in two cities in South Korea (Seoul and Jeonju) was obtained. Simulation experiments for the different urban environments are performed to test the communication reliability, focusing on the relation between the Unmanned Aerial Vehicle (UAV) and the Ground Users (GU). As a result of the simulation, an optimal flight path in a high-rise urban and urban microcell environment is obtained. The flight path indicates the feasibility of endurance flights for low-altitude communication-aid aircraft when the signal fading model and solar energy acquisition are included in the case study.
|
|
MoAT12 |
Room T12 |
Aerial Systems: Cooperating Robots |
Regular session |
Chair: Tadakuma, Kenjiro | Tohoku University |
Co-Chair: Chirarattananon, Pakpong | City University of Hong Kong |
|
10:00-10:15, Paper MoAT12.1 | |
>SplitFlyer: A Modular Quadcopter That Disassembles into Two Flying Robots
> Video Attachment
|
|
Bai, Songnan | City University of Hong Kong |
Tan, Shixin | City University of Hong Kong |
Chirarattananon, Pakpong | City University of Hong Kong |
Keywords: Aerial Systems: Mechanics and Control, Aerial Systems: Applications, Cellular and Modular Robots
Abstract: We introduce SplitFlyer--a novel quadcopter with an ability to disassemble into two self-contained bicopters through human assistance. As a subunit, the bicopter is a severely underactuated aerial vehicle equipped with only two propellers. Still, each bicopter is capable of independent flight. To achieve this, we provide an analysis of the system dynamics by relaxing the control over the yaw rotation, allowing the bicopter to maintain its large spinning rate in flight. Taking into account the gyroscopic motion, the dynamics are described and a cascaded control strategy is developed. We constructed a transformable prototype to demonstrate consecutive flights in both configurations. The results verify the proposed control strategy and show the potential of the platform for future research in modular aerial swarm robotics.
|
|
10:15-10:30, Paper MoAT12.2 | |
>Towards Cooperative Transport of a Suspended Payload Via Two Aerial Robots with Inertial Sensing |
> Video Attachment
|
|
Xie, Heng | City University of Hong Kong |
Cai, Xinyu | City University of Hong Kong
Chirarattananon, Pakpong | City University of Hong Kong |
Keywords: Aerial Systems: Mechanics and Control, Aerial Systems: Applications, Cooperating Robots
Abstract: This paper addresses the problem of cooperative transport of a point mass hoisted by two aerial robots. Treating the robots as a leader and a follower, the follower stabilizes the system with respect to the leader using only feedback from its Inertial Measurement Units (IMU). This is accomplished by neglecting the acceleration of the leader, analyzing the system through the generalized coordinates or the cables' angles, and employing an observation model based on the IMU measurements. A lightweight estimator based on an Extended Kalman Filter (EKF) and a controller are derived to stabilize the robot-payload-robot system. The proposed methods are verified with extensive flight experiments, first with a single robot and then with two robots. The results show that the follower is capable of realizing the desired quasi-static trajectory using only its IMU measurements. The outcomes demonstrate promising progress towards the goal of autonomous cooperative transport of a suspended payload via small flying robots with minimal sensing and computational requirements.
|
|
10:30-10:45, Paper MoAT12.3 | |
>Active Vertical Takeoff of an Aquatic UAV |
> Video Attachment
|
|
Tétreault, Étienne | Université De Sherbrooke |
Rancourt, David | Université De Sherbrooke |
Lussier Desbiens, Alexis | Université De Sherbrooke |
Keywords: Aerial Systems: Mechanics and Control, Marine Robotics
Abstract: To extend the mission duration of smaller unmanned aerial vehicles, this paper presents a solar recharge approach that uses lakes as landing, charging, and standby areas. The Sherbrooke University Water-Air VEhicle (SUWAVE) is a small aircraft capable of vertical takeoff and landing on water. A second-generation prototype has been developed with new capabilities: solar recharging, autonomous flight, and a larger takeoff envelope using an actuated takeoff strategy. A 3D dynamic model of the new takeoff maneuver is conceived to understand the major forces present during this critical phase. Numerical simulations are validated with experimental results from real takeoffs made in the laboratory and on lakes. The final prototype is shown to have accomplished repeated cycles of autonomous takeoff, followed by assisted flight and landing, without any human physical intervention between cycles.
|
|
10:45-11:00, Paper MoAT12.4 | |
>Energy-Based Cooperative Control for Landing Fixed-Wing UAVs on Mobile Platforms under Communication Delays |
|
Muskardin, Tin | German Aerospace Center (DLR) |
Coelho, Andre | German Aerospace Center (DLR) |
Rodrigues Della Noce, Eduardo | German Aerospace Center (DLR) |
Ollero, Anibal | University of Seville |
Kondak, Konstantin | German Aerospace Center |
Keywords: Aerial Systems: Applications, Cooperating Robots, Telerobotics and Teleoperation
Abstract: The landing of a fixed-wing UAV on top of a mobile landing platform requires a cooperative control strategy, which is based on relative motion estimates. These estimates typically suffer from communication or processing time delays, which can render an otherwise stable control system unstable. Such effects must therefore be considered during the design process of the cooperative landing controller. In this letter, the application of a model-free, passivity-based stabilizing controller is proposed, which is based on monitoring the energy flows in the system and actively dissipating any generated active energy by means of adaptive damping elements. In doing so, overall system passivity, and consequently stability, is enforced in a straightforward and easy-to-implement way. The proposed control system is validated in numerical simulations for round-trip delays of up to 4 seconds.
|
|
11:00-11:15, Paper MoAT12.5 | |
>Toward Enabling a Hundred Drones to Land in a Minute |
> Video Attachment
|
|
Fujikura, Daiki | Tohoku University
Tadakuma, Kenjiro | Tohoku University |
Watanabe, Masahiro | Tohoku University |
Okada, Yoshito | Tohoku University |
Ohno, Kazunori | Tohoku University |
Tadokoro, Satoshi | Tohoku University |
Keywords: Aerial Systems: Applications, Aerial Systems: Mechanics and Control
Abstract: Currently, drone research and development receives significant attention worldwide. In particular, delivery services employ drones as a viable method to improve delivery efficiency by using several unmanned drones. Research has been conducted to realize complete automation of drone control for such services. However, regarding the takeoff and landing port of the drones, conventional methods have focused on the landing operation of a single drone, and the continuous landing of multiple drones has not been realized. To address this issue, we propose a completely novel port system, “EAGLES Port,” that allows several drones to continuously land and take off in a short time. Experiments verified that the landing time efficiency of the proposed port is ideally 7.5 times higher than that of conventional vertical landing systems. Moreover, the system can tolerate 270 mm of horizontal positional error, ±30 degrees of angular error in the drone’s approach (±40 degrees with the proposed gate mechanism), and up to 1.9 m/s of approach speed. This technology significantly contributes to the scalability of drone usage. Therefore, it is critical for the development of a future drone port for the landing of automated drone swarms.
|
|
11:15-11:30, Paper MoAT12.6 | |
>Adaptive Aerial Grasping and Perching with Dual Elasticity Combined Suction Cup |
> Video Attachment
|
|
Liu, Sensen | Shanghai Jiao Tong University
Dong, Wei | Shanghai Jiao Tong University |
Ma, Zhao | Shanghai Jiao Tong University
Sheng, Xinjun | Shanghai Jiao Tong University |
Keywords: Aerial Systems: Mechanics and Control, Grippers and Other End-Effectors, Mobile Manipulation
Abstract: To perch on or grasp a target surface using a suction-cup-based manipulator, precise contact control is commonly required. An improper contact angle or insufficient contact force may cause failure. To enhance the tolerance to flight control insufficiency, a suction cup that comprises an inner soft cup and an outer firm cup to facilitate its engagement without reducing the adhesion stiffness is investigated. The soft cup adapts to the angular error induced by the multicopter, and the resulting adhesion force can draw in the firm cup and correct the angular error between the firm cup and the surface. These effects increase the engagement rate and reduce the dependence on precise control. The outer firm cup is devoted to providing a large adhesion force and a stiff base for subsequent tasks. To reduce the air evacuation time in the firm cup, a novel self-sealing structure is designed. Based on the combined cup, we build a multifunctional aerial manipulation system which can execute perching or lateral aerial grasping tasks. With the proposed prototype, comparative flight experiments involving perching on a wall under disturbance and grasping an object are conducted. The results demonstrate that our proposed suction cup outperforms the conventional cup.
|
|
MoAT13 |
Room T13 |
Aerial Systems: Environmental Monitoring |
Regular session |
Chair: Rivas-Davila, Juan | Stanford University |
Co-Chair: Das, Jnaneshwar | Arizona State University |
|
10:00-10:15, Paper MoAT13.1 | |
>Wind and the City: Utilizing UAV-Based In-Situ Measurements for Estimating Urban Wind Fields |
> Video Attachment
|
|
Patrikar, Jay | Carnegie Mellon University |
Moon, Brady | Carnegie Mellon University |
Scherer, Sebastian | Carnegie Mellon University |
Keywords: Aerial Systems: Applications, Environment Monitoring and Management, Field Robots
Abstract: A high-quality estimate of wind fields can potentially improve the safety and performance of Unmanned Aerial Vehicles (UAVs) operating in dense urban areas. Computational Fluid Dynamics (CFD) simulations can help provide a wind field estimate, but their accuracy depends on the knowledge of the distribution of the inlet boundary conditions. This paper provides a real-time methodology using a Particle Filter (PF) that utilizes wind measurements from a UAV to solve the inverse problem of predicting the inlet conditions as the UAV traverses the flow field. A Gaussian Process Regression (GPR) approach is used as a surrogate function to maintain the real-time nature of the proposed methodology. Real-world experiments with a UAV at an urban test-site prove the efficacy of the proposed method. The flight test shows that the 95% confidence interval for the difference between the mean estimated inlet conditions and mean ground truth measurements closely bound zero, with the difference in mean angles being between -3.67 degrees and 1.2 degrees and the difference in mean magnitudes being between -0.206 m/s and 0.020 m/s.
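The inverse problem described above (inferring inlet conditions from on-board wind measurements) can be illustrated with a toy particle filter. The sketch below is hypothetical and is not the authors' code: a made-up scalar function stands in for the GPR surrogate of the CFD solution, and the inlet condition is reduced to a single wind speed.

import numpy as np

rng = np.random.default_rng(0)

def surrogate(inlet_speed, position):
    # Stand-in for the GPR surrogate mapping inlet speed to local wind at the UAV position.
    return inlet_speed * (1.0 - 0.05 * position)

true_inlet, noise_std, n = 6.0, 0.3, 500
particles = rng.uniform(0.0, 15.0, size=n)        # candidate inlet wind speeds
weights = np.full(n, 1.0 / n)

for step in range(30):
    position = 0.5 * step                          # UAV traverses the flow field
    z = surrogate(true_inlet, position) + rng.normal(0.0, noise_std)   # noisy in-situ measurement
    # Reweight particles by the measurement likelihood (Gaussian noise model).
    weights *= np.exp(-0.5 * ((surrogate(particles, position) - z) / noise_std) ** 2)
    weights /= weights.sum()
    # Resample and add small jitter to avoid particle degeneracy.
    particles = particles[rng.choice(n, size=n, p=weights)] + rng.normal(0.0, 0.05, n)
    weights = np.full(n, 1.0 / n)

print("estimated inlet wind speed:", round(float(particles.mean()), 2))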
|
|
10:15-10:30, Paper MoAT13.2 | |
>Microdrone-Equipped Mobile Crawler Robot System, DIR-3, for High-Step Climbing and High-Place Inspection |
> Video Attachment
|
|
Ogusu, Yuji | AIST |
Tomita, Kohji | National Institute of Advanced Industrial Science AndTechnology |
Kamimura, Akiya | National Institute of Advanced Industrial Science and Technology |
Keywords: Multi-Robot Systems, Aerial Systems: Applications, Search and Rescue Robots
Abstract: Mobile robots of various types have been proposed for infrastructure inspection and disaster investigation. For such mobile robot applications, access to the target areas is of primary importance for missions. Therefore, various locomotive mechanisms have been studied. We introduce a novel mobile robot system, named DIR-3, combining a crawler robot and a microdrone. By rotating its arm back and forth, DIR-3, a very simple, lightweight crawler robot with a single 360-degree rotatable U-shaped arm, can climb up/down an 18 cm high step, 1.5 times its height. Furthermore, to inspect high places, which is considered difficult for conventional mobile robots, a drone mooring system for mobile robots is presented. The tethered microdrone of DIR-3 can be controlled freely as a flying camera by switching operating modes on the graphical user interface. The drone mooring system has a unique tension-controlled winding mechanism that enables stable landing on DIR-3 from any location in the air, in addition to measurement and estimation of the drone's relative position. We evaluated the landing capability, position estimation accuracy, and following control of the drone using the winding mechanism. Results show the feasibility of the proposed system for inspection of cracks in a 5 m high concrete wall.
|
|
10:30-10:45, Paper MoAT13.3 | |
>MHYRO: Modular HYbrid RObot for Contact Inspection and Maintenance in Oil & Gas Plants
> Video Attachment
|
|
López, Abraham | University of Seville, GRVC |
Sanchez-Cuevas, Pedro J | University of Seville |
Suarez, Alejandro | University of Seville |
Soldado, Ámbar | University of Seville |
Ollero, Anibal | University of Seville |
Heredia, Guillermo | University of Seville |
Keywords: Aerial Systems: Applications
Abstract: In this paper, we propose a new robot concept that is hybrid, combining aerial and crawling subsystems with an arm, and also modular, with interchangeable crawling subsystems for different pipe configurations, since it has been designed to cover most industrial oil & gas end-users’ requirements. The robot has the same ability as aerial robots to reach otherwise inaccessible locations, but makes the inspection more efficient, increasing operation time since crawling requires less energy than flying, and achieving better accuracy in the inspection. It also integrates safety-related characteristics for operating in the potentially explosive atmosphere of a refinery, being able to immediately interrupt the inspection if a hazardous situation is detected and carry sensitive parts such as batteries and electronic devices away as soon as possible. The paper presents the design of this platform in detail and shows the feasibility of the whole system through indoor experiments.
|
|
10:45-11:00, Paper MoAT13.4 | |
>Geomorphological Analysis Using Unpiloted Aircraft Systems, Structure from Motion, and Deep Learning |
|
Chen, Zhiang | Arizona State University |
Scott, Tyler | Arizona State University |
Bearman, Sarah | Arizona State University |
Anand, Harish | Arizona State University |
Keating, Devin | Arizona State University |
Scott, Chelsea | Arizona State University |
Arrowsmith, Ramon | Arizona State University |
Das, Jnaneshwar | Arizona State University |
Keywords: Aerial Systems: Applications, Field Robots, Environment Monitoring and Management
Abstract: We present a pipeline for geomorphological analysis that uses structure from motion (SfM) and deep learning on close-range aerial imagery to estimate spatial distributions of rock traits (size, roundness, and orientation) along a tectonic fault scarp. The properties of the rocks on the fault scarp derive from the combination of initial volcanic fracturing and subsequent tectonic and geomorphic fracturing, and our pipeline allows scientists to leverage UAS-based imagery to gain a better understanding of such surface processes. We start by using SfM on aerial imagery to produce georeferenced orthomosaics and digital elevation models (DEM). A human expert then annotates rocks on a set of image tiles sampled from the orthomosaics, and these annotations are used to train a deep neural network to detect and segment individual rocks in the entire site. The extracted semantic information (rock masks) on large volumes of unlabeled, high-resolution SfM products allows subsequent structural analysis and shape descriptors to estimate rock size, roundness, and orientation. We present results of two experiments conducted along a fault scarp in the Volcanic Tablelands near Bishop, California. We conducted the first, proof-of-concept experiment with a DJI Phantom 4 Pro equipped with an RGB camera and examined whether elevation information assisted instance segmentation from RGB channels. Rock-trait histograms along and across the fault scarp were obtained with the neural network inference. In the second experiment, we deployed a hexarotor and a multispectral camera to produce a DEM and five spectral orthomosaics in red, green, blue, red edge, and near infrared. We focused on examining the effectiveness of different combinations of input channels in instance segmentation.
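The trait-extraction step that follows segmentation can be sketched with standard region analysis. The example below is hypothetical and uses scikit-image rather than the authors' tooling: given a binary rock mask predicted by the network, it estimates per-rock size, orientation, and a simple roundness proxy.

import numpy as np
from skimage import measure

mask = np.zeros((200, 200), dtype=np.uint8)   # stand-in for a predicted rock mask
mask[50:90, 60:140] = 1                        # one synthetic "rock"

for rock in measure.regionprops(measure.label(mask)):
    area_px = rock.area                                      # size in pixels (scale via GSD)
    orientation_deg = float(np.degrees(rock.orientation))    # long-axis orientation
    roundness = 4.0 * np.pi * rock.area / (rock.perimeter ** 2 + 1e-9)  # 1.0 for a circle
    print(area_px, round(orientation_deg, 1), round(roundness, 3))

Aggregating these per-rock values over the orthomosaic would yield trait histograms of the kind reported along and across the fault scarp.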
|
|
11:00-11:15, Paper MoAT13.5 | |
>Lightweight High Voltage Generator for Untethered Electroadhesive Perching of Micro Air Vehicles |
> Video Attachment
|
|
Park, Sanghyeon | Stanford University |
Drew, Daniel S. | Stanford University |
Follmer, Sean | Stanford University |
Rivas-Davila, Juan | Stanford University |
Keywords: Aerial Systems: Applications, Surveillance Systems
Abstract: The limited in-flight battery lifetime of centimeter-scale flying robots is a major barrier to their deployment, especially in applications which take advantage of their ability to reach high vantage points. Perching, where flyers remain fixed in space without use of flight actuators by attachment to a surface, is a potential mechanism to overcome this barrier. Electroadhesion, a phenomenon where an electrostatic force normal to a surface is generated by induced charge, has been shown to be an increasingly viable perching mechanism as robot size decreases due to the increased surface-area-to-volume ratio. Typically electroadhesion requires high (> 1 kV) voltages to generate useful forces, leading to relatively large power supplies that cannot be carried on-board a micro air vehicle. In this paper, we motivate the need for application-specific power electronics solutions for electroadhesive perching, develop a useful figure of merit (the "specific voltage") for comparing and guiding efforts, and walk through the design methodology of a system implementation. We conclude by showing that this high voltage power supply enables, for the first time in the literature, tetherless electroadhesive perching of a commercial micro quadrotor.
|
|
11:15-11:30, Paper MoAT13.6 | |
>Unmanned Aerial Sensor Placement for Cluttered Environments |
|
Farinha, Andre | Imperial College |
Zufferey, Raphael | Imperial College London
Zheng, Peter | Imperial College London |
Armanini, Sophie Franziska | Imperial College London |
Kovac, Mirko | Imperial College London |
Keywords: Aerial Systems: Applications, Robotics in Hazardous Fields, Sensor Networks
Abstract: Unmanned aerial vehicles (UAVs) have been shown to be useful for the installation of wireless sensor networks (WSNs). More notably, the accurate placement of sensor nodes using UAVs opens opportunities for many industrial and scientific uses, in particular in hazardous environments or inaccessible locations. This publication proposes and demonstrates a new aerial sensor placement method based on impulsive launching. Since direct physical interaction is not required, sensor deployment can be achieved in cluttered environments where the target location cannot be safely approached by the UAV, such as under the forest canopy. The proposed method is based on mechanical energy storage and an ultralight shape memory alloy (SMA) trigger. The developed aerial system weighs a total of 650 grams and can execute up to 17 deployments on a single battery charge. The system deploys sensors of 30 grams up to 4 meters from a target with an accuracy of ±10 cm. The aerial deployment method is validated through more than 80 successful deployments in indoor and outdoor environments. The proposed approach can be integrated into field operations and complement other robotic or manual sensor placement procedures. This would bring benefits for demanding industrial applications, scientific field work, smart cities and hazardous environments [Video attachment: https://youtu.be/duPRXCyo6cY].
|
|
MoAT14 |
Room T14 |
Aerial Systems: Mechanics & Control I |
Regular session |
Chair: Bergbreiter, Sarah | Carnegie Mellon University |
Co-Chair: Szafir, Daniel J. | University of Colorado Boulder |
|
10:00-10:15, Paper MoAT14.1 | |
>In-Flight Efficient Controller Auto-Tuning Using a Pair of UAVs |
|
Giernacki, Wojciech | Poznan University of Technology |
Horla, Dariusz | Poznan University of Technology |
Saska, Martin | Czech Technical University in Prague |
Keywords: Multi-Robot Systems, Aerial Systems: Perception and Autonomy, Optimization and Optimal Control
Abstract: In the paper, a pair of auto-tuning methods for fixed-parameter controllers is presented, in application to multirotor unmanned aerial vehicle (UAV) control. In both cases, the automatized process of searching for the best altitude controller parameters is carried out with the use of a modified golden-section search method, for a selected cost function, during the flight of a pair of UAVs. All the calculations are performed in real time in an iterative manner using only the basic sensory information available, namely the current altitude of each of the two UAVs. The auto-tuning process of the controller is characterized by negligibly low computational demand, and the parameters are obtained rapidly with no dynamic model of a UAV needed. In both methods, by using a pair of UAVs in the tuning process, the level of control performance can be increased, which has been proved by means of multiple outdoor experiments. The first method increases the precision of the obtained controller parameters by averaging sensory information over a pair of UAVs, whereas in the second, by exchanging measurement information between the units, the search space is explored faster. The latter is of special importance when seeking the best controller parameters under the limited experiment duration of multirotor UAVs.
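For reference, the core interval-shrinking idea is that of plain golden-section search. The sketch below is a minimal, standard implementation with a placeholder cost function; it does not reproduce the paper's modifications, its flight-measured cost, or the two-UAV averaging and information exchange.

import math

PHI = (math.sqrt(5) - 1) / 2   # golden ratio conjugate, ~0.618

def flight_cost(gain):
    # Placeholder for a cost evaluated from altitude-tracking data in flight.
    return (gain - 2.3) ** 2 + 0.1

def golden_section_tune(a, b, tol=1e-2):
    # Shrink the gain interval [a, b] until it is narrower than tol.
    c, d = b - PHI * (b - a), a + PHI * (b - a)
    while b - a > tol:
        if flight_cost(c) < flight_cost(d):
            b, d = d, c
            c = b - PHI * (b - a)
        else:
            a, c = c, d
            d = a + PHI * (b - a)
    return (a + b) / 2

print("tuned gain ≈", round(golden_section_tune(0.0, 10.0), 3))

Each cost evaluation here stands in for one flight-test measurement, which is why reducing the number of evaluations (e.g., by sharing measurements between two UAVs) matters for limited flight time.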
|
|
10:15-10:30, Paper MoAT14.2 | |
>A Novel Trajectory Optimization for Affine Systems: Beyond Convex-Concave Procedure |
> Video Attachment
|
|
Rastgar, Fatemeh | University of Tartu |
Singh, Arun Kumar | Tampere University of Technology, Finland |
Masnavi, Houman | Institute of Technology, University of Tartu |
Kruusamäe, Karl | University of Tartu |
Aabloo, Alvo | University of Tartu, IMS Lab |
Keywords: Optimization and Optimal Control, Motion and Path Planning, Autonomous Vehicle Navigation
Abstract: Trajectory optimization under affine motion models and convex cost functions is often solved through the convex-concave procedure (CCP), wherein the non-convex collision avoidance constraints are replaced with their affine approximation. Although mathematically rigorous, CCP has some critical limitations. First, it requires a collision-free initial guess of the solution trajectory, which is difficult to obtain, especially in dynamic environments. Second, at each iteration, CCP involves solving a convex constrained optimization problem, which becomes prohibitive for real-time computation even with a moderate number of obstacles if long planning horizons are used. In this paper, we propose a novel trajectory optimization algorithm which, like CCP, involves solving convex optimization problems but can work with an arbitrary initial guess. Moreover, our proposed optimizer can be computationally up to a few orders of magnitude faster than CCP while achieving similar or better optimal cost. The reduced computation time, in turn, stems from some interesting mathematical structures in our optimizer which allow for distributed computation and obtaining solutions in symbolic form. We validate the proposed optimizer on several benchmarks with static and dynamic obstacles.
|
|
10:30-10:45, Paper MoAT14.3 | |
>Development of a Passive Skid for Multicopter Landing on Rough Terrain |
> Video Attachment
|
|
Xu, Maozheng | Hiroshima University |
Sumida, Naoto | Hiroshima University |
Takaki, Takeshi | Hiroshima University |
Keywords: Underactuated Robots
Abstract: Landing is an essential part of multicopter task operations. A multicopter has relatively stringent requirements for landing, particularly regarding surface flatness. Currently, landing on rough terrain with normal skids is difficult. Therefore, research is being conducted to obtain skids capable of landing on rough terrain. In this paper, a passive skid for multicopter landing on rough terrain is proposed. The proposed device builds on a previous study of a multicopter carrying an electric robot arm used only for object manipulation. The idea stems from the aim of giving such a multicopter the ability to land in various situations, and the passive skid is designed accordingly. By using a slope to simulate rough terrain, the range of landing conditions in which a multicopter can maintain its pose and the frictional torque of the passive joint are analyzed. Further, experiments are conducted to demonstrate that landing can be achieved using the skid proposed in our study.
|
|
10:45-11:00, Paper MoAT14.4 | |
>Template-Based Optimal Robot Design with Application to Passive-Dynamic Underactuated Flapping |
|
De, Avik | Harvard University |
Wood, Robert | Harvard University |
Keywords: Optimization and Optimal Control, Aerial Systems: Mechanics and Control
Abstract: We present a novel paradigm and algorithm for optimal design of underactuated robot platforms in highly-constrained nonconvex parameter spaces. We apply this algorithm to two variants of the mature RoboBee platform, numerically demonstrating predicted performance improvements of over 10% in some cases by algorithmically reasoning about variable effective-mechanical-advantage (EMA) transmissions, higher aspect ratio (AR) wing designs, and force-power tradeoffs. The algorithm can currently be applied to any underactuated mechanical system with one actuated degree of freedom (DOF), and can be easily extended to arbitrary configuration spaces and dynamics.
|
|
11:00-11:15, Paper MoAT14.5 | |
>A Whisker-Inspired Fin Sensor for Multi-Directional Airflow Sensing |
|
Kim, Suhan | Carnegie Mellon University |
Kubicek, Regan | Carnegie Mellon University |
Paris, Aleix | Massachusetts Institute of Technology |
Tagliabue, Andrea | Massachusetts Institute of Technology |
How, Jonathan Patrick | Massachusetts Institute of Technology |
Bergbreiter, Sarah | Carnegie Mellon University |
Keywords: Mechanism Design, Micro/Nano Robots, Aerial Systems: Perception and Autonomy
Abstract: This work presents the design, fabrication, and characterization of an airflow sensor inspired by the whiskers of animals. The body of the whisker was replaced with a fin structure in order to increase the air resistance. The fin was suspended by a micro-fabricated spring system at the bottom. A permanent magnet was attached beneath the spring, and the motion of the fin was captured by a readily accessible and low-cost 3D magnetic sensor located below the magnet. The sensor system was modeled in terms of the dimensional parameters of the fin and the spring stiffness, which were optimized to improve the performance of the sensor. The system response was then characterized using a commercial wind tunnel and the results were used for sensor calibration. The sensor was integrated into a micro aerial vehicle (MAV) and demonstrated the capability of capturing the velocity of the MAV by sensing the relative airflow during flight.
|
|
11:15-11:30, Paper MoAT14.6 | |
>PufferBot: Actuated Expandable Structures for Aerial Robots |
> Video Attachment
|
|
Hedayati, Hooman | University of Colorado Boulder
Suzuki, Ryo | University of Colorado Boulder |
Leithinger, Daniel | MIT |
Szafir, Daniel J. | University of Colorado Boulder |
Keywords: Mechanism Design, Aerial Systems: Mechanics and Control, Human-Centered Robotics
Abstract: We present PufferBot, an aerial robot with an expandable structure that may expand to protect a drone's propellers when the robot is close to obstacles or collocated humans. PufferBot is made of a custom 3D-printed expandable scissor structure, which utilizes a one-degree-of-freedom actuator with a rack-and-pinion mechanism. We propose four designs for the expandable structure, each with unique characteristics that may be useful in different situations. Finally, we present three motivating scenarios in which PufferBot might be useful beyond existing static propeller guard structures.
|
|
MoAT15 |
Room T15 |
Aerial Systems: Mechanics & Control II |
Regular session |
Chair: Zheng, Minghui | University at Buffalo |
Co-Chair: Kumar, Manish | University of Cincinnati |
|
10:00-10:15, Paper MoAT15.1 | |
>Optimal-Power Configurations for Hover Solutions in Mono-Spinners |
|
Hedayatpour, Mojtaba | University of Regina |
Mehrandezh, Mehran | University of Regina |
Janabi-Sharifi, Farrokh | Ryerson University |
Keywords: Aerial Systems: Mechanics and Control, Dynamics
Abstract: Rotary-wing flying machines draw attention within the UAV community for their in-place hovering capability and, recently, for their holonomic motion compared with fixed-wing aircraft. In this paper, we investigate power optimality in a mono-spinner, i.e., a class of rotary-wing UAVs with only one rotor, whose main body has a streamlined shape for producing additional lift when counter-spinning the rotor. We provide a detailed dynamic model of our mono-spinner. Two configurations are studied: (1) a symmetric configuration, in which the rotor is aligned with the fuselage’s COM, and (2) an asymmetric configuration, in which the rotor is located with an offset from the fuselage’s COM. While the former can generate an in-place hovering flight condition, the latter can achieve trajectory tracking in 3D space by resolving the yaw and precession rates. Furthermore, it is shown that by introducing a tilting angle between the rotor and the fuselage, within the asymmetric design, one can further minimize the power consumption without compromising the overall stability. It is shown, for the first time, that an energy-optimal solution can be achieved through proper aerodynamic design of the mono-spinner.
|
|
10:15-10:30, Paper MoAT15.2 | |
>Knowledge Transfer between Different UAVs for Trajectory Tracking |
|
Chen, Zhu | University at Buffalo |
Liang, Xiao | University at Buffalo |
Zheng, Minghui | University at Buffalo |
Keywords: Aerial Systems: Mechanics and Control, Optimization and Optimal Control, Motion Control
Abstract: Robots are usually programmed for particular tasks with a considerable amount of hand-crafted tuning work. Whenever a new robot with different dynamics is presented, the well-designed control algorithms for the robot usually have to be re-tuned to guarantee good performance. It remains challenging to directly program a robot to automatically learn from the experiences gathered by other dynamically different robots. With such a motivation, this paper proposes a learning algorithm that enables a quadrotor unmanned aerial vehicle (UAV) to automatically improve its tracking performance by learning from the tracking errors made by other UAVs with different dynamics. This learning algorithm utilizes the relationship between the dynamics of different UAVs, named the target and training UAVs, respectively. The learning signal is generated by the learning algorithm and then added to the feedforward loop of the target UAV, which does not affect the closed-loop stability. The learning convergence can be guaranteed by the design of a learning filter. With the proposed learning algorithm, the target UAV can improve its tracking performance by learning from the training UAV without baseline controller modifications. Both numerical studies and experimental tests are conducted to validate the effectiveness of the proposed learning algorithm.
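The general flavor of adding a learned feedforward signal, trained on one vehicle and applied to another, can be illustrated with a toy iterative update. The sketch below is hypothetical and is not the paper's algorithm or learning filter: both "UAVs" are reduced to first-order models with invented parameters, and a constant gain stands in for the learning filter.

import numpy as np

# Toy first-order "UAV" models with different dynamics (assumed values).
a_train, b_train = 0.90, 0.10    # training UAV
a_target, b_target = 0.85, 0.12  # target UAV
ref = np.sin(np.linspace(0, 2 * np.pi, 200))

def track(a, b, feedforward):
    # Proportional feedback plus a feedforward signal; returns the tracked output.
    x, out = 0.0, []
    for r, ff in zip(ref, feedforward):
        x = a * x + b * (2.0 * (r - x) + ff)
        out.append(x)
    return np.array(out)

ff = np.zeros_like(ref)
for _ in range(20):                                   # learn from the training UAV's errors
    ff = ff + 0.5 * (ref - track(a_train, b_train, ff))

rms = lambda e: float(np.sqrt(np.mean(e ** 2)))
print("target RMS error, no feedforward:        ", rms(ref - track(a_target, b_target, np.zeros_like(ref))))
print("target RMS error, transferred feedforward:", rms(ref - track(a_target, b_target, ff)))

Because the feedforward signal is added outside the feedback loop, the target vehicle's baseline controller and its closed-loop stability are left untouched, which mirrors the design choice described in the abstract.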
|
|
10:30-10:45, Paper MoAT15.3 | |
>Flight Control of Sliding Arm Quadcopter with Dynamic Structural Parameters |
> Video Attachment
|
|
Kumar, Rumit | University of Cincinnati |
Deshpande, Aditya M. | University of Cincinnati |
Wells, James Z. | University of Cincinnati |
Kumar, Manish | University of Cincinnati |
Keywords: Aerial Systems: Mechanics and Control, Robust/Adaptive Control of Robotic Systems, Motion Control
Abstract: The conceptual design and flight controller of a novel kind of quadcopter are presented. This design is capable of morphing the shape of the UAV during flight to achieve position and attitude control. We consider a dynamic center of gravity (CoG), which causes continuous variation in the moment of inertia (MoI) parameters of the UAV. These dynamic structural parameters play a vital role in the stability and control of the system. The length of the quadcopter arms is a variable parameter, and it is actuated using an attitude-feedback-based control law. The MoI parameters are computed in real time and incorporated in the equations of motion of the system. The UAV utilizes the angular motion of propellers and variable quadcopter arm lengths for position and navigation control. The movement space of the CoG is a design parameter, and it is bounded by actuator limitations and stability requirements of the system. Detailed information on the equations of motion, flight controller design, and possible applications of this system is provided. Further, the proposed shape-changing UAV system is evaluated by comparative numerical simulations for a waypoint navigation mission and complex trajectory tracking.
|
|
10:45-11:00, Paper MoAT15.4 | |
>Design and Control of SQUEEZE: A Spring-Augmented QUadrotor for intEractions with the Environment to SqueeZE-And-Fly |
> Video Attachment
|
|
Patnaik, Karishma | Arizona State University |
Mishra, Shatadal | ASU |
Rezayat Sorkhabadi, Seyed Mostafa | Arizona State University |
Zhang, Wenlong | Arizona State University |
Keywords: Aerial Systems: Applications, Aerial Systems: Mechanics and Control
Abstract: This paper presents the design and control of a novel quadrotor with a variable geometry to physically interact with cluttered environments and fly through relatively narrow gaps and passageways. This compliant quadrotor with passive morphing capabilities is designed using torsional springs at every arm hinge to allow for rotation with external forces. We derive the dynamic model of this variable geometry quadrotor (SQUEEZE), and develop a low-level adaptive controller for trajectory tracking. The corresponding Lyapunov stability proof of attitude tracking is also presented. Further, an admittance controller is designed to account for change in yaw due to physical interactions with the environment. Finally, the proposed design is validated in real-time flight tests in two setups: a relatively small gap and a passageway. The experimental results demonstrate unique capability of SQUEEZE in navigating through constrained narrow spaces.
|
|
11:00-11:15, Paper MoAT15.5 | |
>Hybrid Aerial-Ground Locomotion with a Single Passive Wheel |
> Video Attachment
|
|
Qin, Youming | The University of Hong Kong |
Li, Yihang | University of Hong Kong |
Xu, Wei | University of Hong Kong |
Zhang, Fu | University of Hong Kong |
Keywords: Aerial Systems: Mechanics and Control, Underactuated Robots, Mechanism Design
Abstract: Exploiting contacts with environment structures provides extra force support to a UAV, often reducing the power consumption and hence extending the mission time. This paper investigates one such way to exploit flat surfaces in the environment through a novel aerial-ground hybrid locomotion mode. Our design is a single passive wheel integrated at the bottom of the UAV, serving as a minimal design to date. We present the principle and implementation of this simple design as well as its control. Flight experiments are conducted to verify the feasibility and the power saving brought by the ground locomotion. Results show that our minimal design allows successful aerial-ground hybrid locomotion even with a less-controllable bi-copter UAV. The ground locomotion saves up to 77% of the battery without much tuning effort.
|
|
11:15-11:30, Paper MoAT15.6 | |
>TiltDrone: A Fully-Actuated Tilting Quadrotor Platform |
|
Zheng, Peter | Imperial College London |
Tan, XinKai | Imperial College London |
Koçer, Başaran Bahadır | Nanyang Technological University |
Yang, Erdeng | Imperial College London |
Kovac, Mirko | Imperial College London |
Keywords: Aerial Systems: Mechanics and Control, Mechanism Design, Aerial Systems: Applications
Abstract: Multi-directional aerial platforms can fly in almost any orientation and direction, often maneuvering in ways their underactuated counterparts cannot match. A subset of multi-directional platforms is fully-actuated multirotors, where all six degrees of freedom are independently controlled without redundancies. Fully-actuated multirotors possess much greater freedom of movement than conventional multirotor drones, allowing them to perform complex sensing and manipulation tasks. While there has been comprehensive research on multi-directional multirotor control systems, the spectrum of hardware designs remains fragmented. This paper sets out the hardware design architecture of a fully-actuated quadrotor and its associated control framework. Following the novel platform design, a prototype was built to validate the control scheme and characterize the flight performance. The resulting quadrotor was shown in operation to be capable of holding a stationary hover at a 30-degree incline and tracking position commands by thrust vectoring [Video attachment: https://youtu.be/8HOQl_77CVg].
|
|
MoAT16 |
Room T16 |
Aerial Systems: Mechanics & Control III |
Regular session |
Chair: Kim, H. Jin | Seoul National University |
Co-Chair: Bass, John | Université De Sherbrooke |
|
10:00-10:15, Paper MoAT16.1 | |
>Adaptive Nonlinear Control for Perching of a Bioinspired Ornithopter |
> Video Attachment
|
|
Maldonado Fernández, Francisco Javier | University of Seville |
Acosta, Jose Angel | University of Seville |
Tormo Barbero, Jesus | Universidad De Sevilla |
Grau, Pedro | University of Seville |
Guzmán García, María del Mar | University of Seville
Ollero, Anibal | University of Seville |
Keywords: Aerial Systems: Mechanics and Control, Biologically-Inspired Robots
Abstract: This work presents a model-free nonlinear controller for an ornithopter prototype with bioinspired wings and tail. The size and power requirements were chosen so that a customized autopilot can be carried onboard. To assess the functionality and performance of the full mechatronic design, a controller has been designed and implemented to execute a prescribed 2D perching trajectory. Although functional, the prototype's handmade nature introduces many imperfections that cause uncertainty and hinder its control. Therefore, the controller is based on adaptive backstepping and does not require any knowledge of the aerodynamics. The controller is able to follow a given reference in flight path angle by actuating only on the tail deflection. A novel space-dependent nonlinear guidance law is also provided to prescribe the perching trajectory. The performance of the mechatronics, guidance, and control systems is validated by conducting indoor flight tests.
|
|
10:15-10:30, Paper MoAT16.2 | |
>Improving Multirotor Landing Performance on Inclined Surfaces Using Reverse Thrust |
|
Bass, John | Université De Sherbrooke |
Lussier Desbiens, Alexis | Université De Sherbrooke |
Keywords: Aerial Systems: Mechanics and Control, Contact Modeling, Flexible Robots
Abstract: Conventional multirotors are unable to land on inclined surfaces without specialized suspensions and adhesion devices. With the development of a bidirectional rotor, landing maneuvers could benefit from rapid thrust reversal, which would increase the landing envelope without involving the addition of heavy and complex landing gears or reduction of payload capacity. This article presents a model designed to accurately simulate quadrotor landings, the behavior of their stiff landing gear, and the limitations of bidirectional rotors. The model was validated using experimental results on both low-friction and high-friction surfaces, and was then used to test multiple landing algorithms over a wide range of touchdown velocities and slope inclinations to explore the benefits of reverse thrust. It is shown that thrust reversal can nearly double the maximum inclination on which a quadrotor can land and can also allow high vertical velocity landings.
|
|
10:30-10:45, Paper MoAT16.3 | |
>Evolved Neuromorphic Control for High Speed Divergence-Based Landings of MAVs |
|
Hagenaars, Jesse Jan | Delft University of Technology |
Paredes-Valles, Federico | Delft University of Technology |
Bohte, Sander | Centrum Wiskunde & Informatica |
de Croon, Guido | TU Delft / ESA |
Keywords: Aerial Systems: Perception and Autonomy, Autonomous Vehicle Navigation, Neurorobotics
Abstract: Flying insects are capable of vision-based navigation in cluttered environments, reliably avoiding obstacles through fast and agile maneuvers, while being very efficient in the processing of visual stimuli. Meanwhile, autonomous micro air vehicles still lag far behind their biological counterparts, displaying inferior performance at a much higher energy consumption. In light of this, we want to mimic flying insects in terms of their processing capabilities, and consequently show the efficiency of this approach in the real world. This letter does so through evolving spiking neural networks for controlling landings of micro air vehicles using optical flow divergence from a downward-looking camera. We demonstrate that the resulting neuromorphic controllers transfer robustly from a highly abstracted simulation to the real world, performing fast and safe landings while keeping network spike rate minimal. Furthermore, we provide insight into the resources required for successfully solving the problem of divergence-based landing, showing that high-resolution control can be learned with only a single spiking neuron. To the best of our knowledge, this work is the first to integrate spiking neural networks in the control loop of a real-world flying robot. Videos of the experiments can be found at https://bit.ly/neuro-controller.
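As background on the spiking building block, the sketch below encodes a synthetic divergence-error signal with a single leaky integrate-and-fire neuron and decodes the spike train into a thrust adjustment; it is not the evolved controller or a closed control loop, and every constant is hypothetical.

# Minimal sketch of the spiking building block only: a single leaky
# integrate-and-fire (LIF) neuron encodes a divergence error into spikes,
# and a leaky decoder turns the spike train back into a thrust adjustment.
import numpy as np

dt, tau_m, v_th = 0.001, 0.020, 1.0       # time step, membrane time constant, threshold
tau_dec, w_in, k_thrust = 0.100, 200.0, 0.01

t = np.arange(0.0, 2.0, dt)
div_error = 0.5 * np.sin(2 * np.pi * 1.0 * t)   # synthetic divergence error D - D_ref

v_mem, decoded = 0.0, 0.0
spikes, thrust_adj = [], []
for err in div_error:
    # membrane dynamics driven by the rectified divergence error
    v_mem += dt * (-v_mem / tau_m + w_in * max(err, 0.0))
    s = 1.0 if v_mem >= v_th else 0.0
    if s:
        v_mem = 0.0                              # reset after a spike
    decoded += dt * (-decoded + s / dt) / tau_dec  # leaky decoding ~ instantaneous spike rate
    spikes.append(s)
    thrust_adj.append(k_thrust * decoded)

print("total spikes:", int(sum(spikes)),
      "| peak thrust adjustment:", round(max(thrust_adj), 3))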
|
|
10:45-11:00, Paper MoAT16.4 | |
>A Collision-Resilient Aerial Vehicle with Icosahedron Tensegrity Structure |
> Video Attachment
|
|
Zha, Jiaming | University of California, Berkeley |
Wu, Xiangyu | University of California, Berkeley |
Kroeger, Joseph | University of California Berkeley |
Perez, Natalia | University of California, Berkeley |
Mueller, Mark Wilfried | University of California, Berkeley |
Keywords: Aerial Systems: Mechanics and Control, Search and Rescue Robots, Robotics in Hazardous Fields
Abstract: Aerial vehicles with collision resilience can operate with more confidence in environments with obstacles that are hard to detect and avoid. This paper presents the methodology used to design a collision-resilient aerial vehicle with an icosahedron tensegrity structure. A simplified stress analysis of the tensegrity frame under impact forces is performed to guide the selection of its components. In addition, an autonomous controller is presented to reorient the vehicle from an arbitrary orientation on the ground to help it take off. Experiments show that the vehicle can successfully reorient itself after landing upside-down and can survive collisions at speeds of up to 6.5 m/s.
|
|
11:00-11:15, Paper MoAT16.5 | |
>Fail-Safe Flight of a Fully-Actuated Quadcopter in a Single Motor Failure |
> Video Attachment
|
|
Lee, Seung Jae | Seoul National University |
Jang, Inkyu | Seoul National University |
Kim, H. Jin | Seoul National University |
Keywords: Aerial Systems: Mechanics and Control, Robot Safety, Aerial Systems: Applications
Abstract: In this paper, we introduce a new quadrotor fail-safe flight solution that can perform the same four controllable degrees-of-freedom flight as a standard multirotor even when a single thruster fails. The new solution employs a novel multirotor platform known as the T3-Multirotor and utilizes a distinctive strategy of actively controlling the center of gravity position to restore the controllable degrees of freedom. A dedicated control structure is introduced, along with a detailed analysis of the dynamic characteristics of the platform that change during emergency flights. Experimental results are provided to validate the feasibility of the proposed solution.
|
|
11:15-11:30, Paper MoAT16.6 | |
>Development of Hiryu-II: A Long-Reach Articulated Modular Manipulator Driven by Thrusters |
> Video Attachment
|
|
Ueno, Yusuke | Tokyo Institute of Technology |
Hagiwara, Tetsuo | KinderHeim |
Nabae, Hiroyuki | Tokyo Institute of Technology |
Suzumori, Koichi | Tokyo Institute of Technology |
Endo, Gen | Tokyo Institute of Technology |
Keywords: Aerial Systems: Mechanics and Control, Redundant Robots, Cellular and Modular Robots
Abstract: Robotic manipulators that use thrusters for weight compensation are an active research topic because they can exceed conventional limits on maximum length. However, existing thruster-driven manipulators are still limited in length because their hardware design is not sufficiently refined. This paper focuses on overcoming these limitations and realizing an articulated manipulator more than twice the length of conventional ones. To cancel the moment acting on each link, we performed a static analysis that accounts for torsional deformation around the link axis to derive the thruster positions. Numerical simulation shows that weight compensation and joint-angle control of the manipulator can be realized with simple proportional-integral-derivative control for each link. We then demonstrated the feasibility of the proposed manipulator by lifting a 0.6 kg payload at the arm end with a 6.6 m prototype. In theory, each thrust-force control input is almost constant regardless of link attitude, suggesting modular properties that make the proposed manipulator practical for various tasks.
|
|
MoAT17 |
Room T17 |
Aerial Systems: Path Planning |
Regular session |
Chair: Gao, Fei | Zhejiang University |
|
10:00-10:15, Paper MoAT17.1 | |
>Experimental Flights of Adaptive Patterns for Cloud Exploration with UAVs |
> Video Attachment
|
|
Verdu, Titouan | ENAC, University of Toulouse |
Maury, Nicolas | Météo France Toulouse |
Narvor, Pierre | LAAS-CNRS, Université De Toulouse |
Seguin, Florian | LAAS-CNRS, Université De Toulouse |
Roberts, Gregory | METEO-FRANCE Toulouse |
Couvreux, Fleur | CNRM, Université Toulouse, Météo France and CNRS |
Cayez, Grégoire | METEO-FRANCE Toulouse |
Bronz, Murat | ENAC, Université De Toulouse |
Hattenberger, Gautier | ENAC, French Civil Aviation University |
Lacroix, Simon | LAAS/CNRS |
Keywords: Aerial Systems: Applications, Reactive and Sensor-Based Planning
Abstract: This work presents the deployment of UAVs for the exploration of clouds, from the system architecture and simulation tests to a real-flight campaign and trajectory analyses. Thanks to their small size and low-altitude operation, light UAVs have proven well suited for in-situ cloud data collection. The short lifetime of clouds and the limited endurance of the aircraft require focusing on the area of maximum interest to gather relevant data. Building on previous work on adaptive cloud sampling, the article presents the overall system architecture, the improvements made to the system based on preliminary tests and simulations, and finally the results of a field campaign. The Barbados experimental flight campaign confirmed the capacity of the system to map clouds and to collect relevant data in a dynamic environment, and highlighted areas for improvement.
|
|
10:15-10:30, Paper MoAT17.2 | |
>Navigation-Assistant Path Planning within a MAV Team |
> Video Attachment
|
|
Jang, Youngseok | Seoul National University |
Lee, Yunwoo | Seoul National University |
Kim, H. Jin | Seoul National University |
Keywords: Aerial Systems: Perception and Autonomy, Reactive and Sensor-Based Planning, Motion and Path Planning
Abstract: In micro aerial vehicle (MAV) operations, the success of a mission is highly dependent on navigation performance, which has raised recent interest in navigation-aware path planning. One of the challenges is that the optimal motions for successful navigation and for a designated mission are often different in unknown, unstructured environments, and only sub-optimality may be obtained in each aspect. We aim to organize a two-MAV team that can effectively execute a mission while simultaneously guaranteeing navigation quality, consisting of a main agent responsible for the mission and a sub-agent responsible for the team's navigation. In particular, this paper focuses on path planning for the sub-agent to provide navigational assistance to the main agent using a monocular camera. We adopt a graph-based receding-horizon planner to find a dynamically feasible path that helps the main agent's navigation. In this process, we present a metric for evaluating localization performance that utilizes the distribution of features projected onto the image plane. We also design a map management strategy and a pose-estimation support mechanism for the monocular camera setup, and validate their effectiveness in two scenarios.
|
|
10:30-10:45, Paper MoAT17.3 | |
>UAV Coverage Path Planning under Varying Power Constraints Using Deep Reinforcement Learning |
> Video Attachment
|
|
Theile, Mirco | Technical University of Munich |
Bayerlein, Harald | EURECOM |
Nai, Richard | Technical University of Munich |
Gesbert, David | EURECOM |
Caccamo, Marco | Technical University of Munich |
Keywords: Aerial Systems: Perception and Autonomy, Motion and Path Planning, Autonomous Agents
Abstract: Coverage path planning (CPP) is the task of designing a trajectory that enables a mobile agent to travel over every point of an area of interest. We propose a new method to control an unmanned aerial vehicle (UAV) carrying a camera on a CPP mission with random start positions and multiple options for landing positions in an environment containing no-fly zones. While numerous approaches have been proposed to solve similar CPP problems, we leverage end-to-end reinforcement learning (RL) to learn a control policy that generalizes over varying power constraints for the UAV. Despite recent improvements in battery technology, the maximum flying range of small UAVs is still a severe constraint, which is exacerbated by variations in the UAV’s power consumption that are hard to predict. By using map-like input channels to feed spatial information through convolutional network layers to the agent, we are able to train a double deep Q-network (DDQN) to make control decisions for the UAV, balancing limited power budget and coverage goal. The proposed method can be applied to a wide variety of environments and harmonizes complex goal structures with system constraints.
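For reference, the double DQN target that distinguishes DDQN from plain DQN can be written in a few lines of NumPy; the toy Q-values, batch, and discount below are illustrative only, not the paper's network or reward design.

# Sketch of the double deep Q-network (DDQN) target, with plain NumPy arrays
# standing in for the outputs of the online and target networks.
import numpy as np

def ddqn_targets(q_online_next, q_target_next, rewards, dones, gamma=0.99):
    """q_*_next: (batch, n_actions) Q-values evaluated at the next states."""
    # 1) action selection with the online network ...
    best_actions = np.argmax(q_online_next, axis=1)
    # 2) ... action evaluation with the target network (decoupling reduces overestimation)
    next_values = q_target_next[np.arange(len(rewards)), best_actions]
    return rewards + gamma * (1.0 - dones) * next_values

# toy batch of 3 transitions with 4 discrete UAV actions
q_online_next = np.array([[0.1, 0.5, 0.2, 0.0],
                          [1.0, 0.3, 0.8, 0.2],
                          [0.0, 0.0, 0.4, 0.1]])
q_target_next = np.array([[0.2, 0.4, 0.1, 0.0],
                          [0.9, 0.2, 1.1, 0.3],
                          [0.1, 0.0, 0.5, 0.2]])
rewards = np.array([1.0, 0.0, -0.1])
dones   = np.array([0.0, 0.0, 1.0])   # third transition ends the episode (e.g., landing)
print(ddqn_targets(q_online_next, q_target_next, rewards, dones))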
|
|
10:45-11:00, Paper MoAT17.4 | |
>Detection-Aware Trajectory Generation for a Drone Cinematographer |
> Video Attachment
|
|
Jeon, Boseong | Seoul National University |
Shim, Dongseok | Seoul National University |
Kim, H. Jin | Seoul National University |
Keywords: Aerial Systems: Perception and Autonomy, Motion and Path Planning, Reactive and Sensor-Based Planning
Abstract: This work investigates efficient trajectory generation for chasing a dynamic target that incorporates a detectability objective. The proposed method actively guides the motion of a cinematographer drone so that the color of the target is well-distinguished against the colors of the background in the drone's view. For this objective, we define a measure of color detectability given a chasing path. After computing a discrete path optimized for the metric, we generate a dynamically feasible trajectory. The whole pipeline can be updated on-the-fly to respond to the motion of the target. For efficient discrete path generation, we construct a directed acyclic graph (DAG) for which a topological ordering can be determined analytically, without depth-first search. The smooth path is obtained in a quadratic programming (QP) framework. We validate the enhanced performance of state-of-the-art object detection and tracking algorithms when the camera drone executes the trajectory obtained from the proposed method.
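As a rough illustration of why a topological order can come for free in such a construction, the sketch below arranges candidate viewpoints in one layer per time step and finds the best-scoring discrete path by a single dynamic-programming sweep; the scores, positions, and cost terms are toy values, not the paper's detectability metric.

# Layered-DAG dynamic programming sketch: the layer (time-step) index is
# already a topological order, so one sweep suffices.
import numpy as np

rng = np.random.default_rng(0)
T, K = 6, 5                                   # time steps, candidate viewpoints per step
score = rng.uniform(0.0, 1.0, size=(T, K))    # per-candidate "detectability" score (toy)
pos = rng.uniform(-1.0, 1.0, size=(T, K, 2))  # candidate 2D positions (toy)

def move_cost(a, b, w=0.3):
    return w * np.linalg.norm(a - b)          # penalize long hops between consecutive picks

value = np.full((T, K), -np.inf)
parent = np.zeros((T, K), dtype=int)
value[0] = score[0]
for t in range(1, T):                          # sweep layers in topological (time) order
    for k in range(K):
        gains = value[t - 1] + score[t, k] - np.array(
            [move_cost(pos[t - 1, j], pos[t, k]) for j in range(K)])
        parent[t, k] = int(np.argmax(gains))
        value[t, k] = gains[parent[t, k]]

# backtrack the best discrete viewpoint sequence
k = int(np.argmax(value[-1]))
path = [k]
for t in range(T - 1, 0, -1):
    k = parent[t, k]
    path.append(k)
print("best viewpoint indices per step:", path[::-1])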
|
|
11:00-11:15, Paper MoAT17.5 | |
>Autonomous and Cooperative Design of the Monitor Positions for a Team of UAVs to Maximize the Quantity and Quality of Detected Objects |
> Video Attachment
|
|
Koutras, Dimitrios | Center for Research and Technology Hellas, Democritus University |
Kapoutsis, Athanasios | Democritus University of Thrace, Xanthi, Greece & Centre for Res |
Kosmatopoulos, Elias | Democritus Univ. Thrace & ITI/CERTH |
Keywords: Aerial Systems: Perception and Autonomy, Surveillance Systems, Motion and Path Planning
Abstract: This paper tackles the problem of positioning a swarm of UAVs inside a completely unknown terrain with the objective of maximizing overall situational awareness. Situational awareness is expressed by the number and quality of unique objects of interest inside the UAVs' fields of view. YOLOv3 and a system to identify duplicate objects of interest are employed to assign a single score to each UAV configuration. A novel navigation algorithm is then proposed that optimizes this score without requiring knowledge of the dynamics of either the UAVs or the environment. A cornerstone of the proposed approach is that it shares the same convergence characteristics as the block coordinate descent (BCD) family of approaches. The effectiveness and performance of the proposed navigation scheme were evaluated in a series of experiments inside the AirSim simulator. The experimental evaluation indicates that the proposed navigation algorithm consistently navigates the swarm of UAVs to "strategic" monitoring positions and adapts to different swarm sizes, exploiting the dynamics of the UAVs to the full extent. The source code and a video demonstration are available at https://github.com/dimikout3/ConvCAO_AirSim.
|
|
11:15-11:30, Paper MoAT17.6 | |
>Alternating Minimization Based Trajectory Generation for Quadrotor Aggressive Flight |
> Video Attachment
|
|
Wang, Zhepei | Zhejiang University |
Zhou, Xin | ZHEJIANG UNIVERSITY |
Xu, Chao | Zhejiang University |
Chu, Jian | Zhejiang University |
Gao, Fei | Zhejiang University |
Keywords: Aerial Systems: Applications, Motion and Path Planning, Autonomous Vehicle Navigation
Abstract: Although much research has been conducted into trajectory planning for quadrotors, planning spatially and temporally optimal trajectories in real time is still challenging. In this paper, we propose a framework for large-scale waypoint-based polynomial trajectory generation, highlighting its superior computational efficiency and simultaneous spatial-temporal optimality. Exploiting the implicitly decoupled structure of the problem, we conduct alternating minimization between the boundary conditions and the time durations of the trajectory pieces. The algebraic convenience of both sub-problems is leveraged to escape poor local minima and achieve the lowest time consumption. Theoretical analysis of the global/local convergence rate of our method is provided. Moreover, based on polynomial theory, an extremely fast feasibility checker is designed for various kinds of constraints. By incorporating it into our alternating structure, a constrained minimization algorithm is constructed to optimize trajectories subject to feasibility. Benchmark evaluation shows that our algorithm outperforms state-of-the-art waypoint-based methods regarding efficiency, optimality, and scalability. The algorithm can be incorporated into a high-level waypoint planner, which can rapidly search over a three-dimensional space for aggressive autonomous flights. The capability of our algorithm is experimentally demonstrated by fast quadrotor flights in a limited space with dense obstacles.
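The alternating structure itself can be illustrated on a toy biconvex cost with closed-form block updates; this is only the generic skeleton of the idea, not the paper's spatial-temporal trajectory solver.

# Generic alternating-minimization skeleton on a toy biconvex objective:
# each block has a closed-form minimizer when the other block is fixed.
import numpy as np

def cost(x, y):
    # quadratic in x for fixed y, and in y for fixed x; joint minimum at (0, 1)
    return x**2 + y**2 + x * y - x - 2.0 * y

x, y = 5.0, -5.0
for it in range(20):
    x = (1.0 - y) / 2.0        # argmin_x cost(x, y):  2x + y - 1 = 0
    y = (2.0 - x) / 2.0        # argmin_y cost(x, y):  2y + x - 2 = 0
    if it % 5 == 0:
        print(f"iter {it:2d}  cost = {cost(x, y):.6f}")
print("converged block values:", round(x, 4), round(y, 4))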
|
|
MoAT18 |
Room T18 |
UAV Planning |
Regular session |
Chair: Mueller, Mark Wilfried | University of California, Berkeley |
Co-Chair: Jing, Wei | A*STAR |
|
10:00-10:15, Paper MoAT18.1 | |
>Motion Planning for Heterogeneous Unmanned Systems under Partial Observation from UAV |
> Video Attachment
|
|
Chen, Ci | Zhejiang University |
Wan, Yuanfang | Southern University of Science and Technology |
Li, Baowei | Peking University |
Wang, Chen | Peking University |
Xie, Guangming | Peking University |
Jiang, Huanyu | Zhejiang University |
Keywords: Motion and Path Planning, Multi-Robot Systems, Autonomous Vehicle Navigation
Abstract: For heterogeneous unmanned systems composed of unmanned aerial vehicles (UAVs) and unmanned ground vehicles (UGVs), using UAVs as eyes to assist UGVs in motion planning is a promising research direction due to the UAVs' vast view scope. However, limitations on flight altitude prevent the UAVs from observing the global map, so motion planning on the local map becomes a Partially Observable Markov Decision Process (POMDP) problem. This paper proposes a motion planning algorithm for heterogeneous unmanned systems under partial observation from a UAV, without reconstructing global maps. Our algorithm consists of two parts, designed for perception and decision-making, respectively. For the perception part, we propose the Grid Map Generation Network (GMGN), which perceives scenes from the UAV's perspective and classifies pathways and obstacles. For the decision-making part, we propose the Motion Command Generation Network (MCGN). Thanks to its memory mechanism, MCGN has planning and reasoning abilities under partial observation from UAVs. We evaluate our proposed algorithm by comparing it with baseline algorithms. The results show that our method effectively plans the motion of heterogeneous unmanned systems and achieves a relatively high success rate.
|
|
10:15-10:30, Paper MoAT18.2 | |
>Multi-UAV Coverage Path Planning for the Inspection of Large and Complex Structures |
|
Jing, Wei | A*STAR |
Deng, Di | Carnegie Mellon University |
Wu, Yan | A*STAR Institute for Infocomm Research |
Shimada, Kenji | Carnegie Mellon University |
Keywords: Motion and Path Planning, Task Planning
Abstract: We present a multi-UAV Coverage Path Planning (CPP) framework for the inspection of large-scale, complex 3D structures. In the proposed sampling-based coverage path planning method, we formulate multi-UAV inspection applications as a multi-agent coverage path planning problem. By combining two NP-hard problems, the Set Covering Problem (SCP) and the Vehicle Routing Problem (VRP), a Set-Covering Vehicle Routing Problem (SC-VRP) is formulated and subsequently solved by a modified Biased Random Key Genetic Algorithm (BRKGA) with novel, efficient encoding strategies and local improvement heuristics. We test our proposed method on several complex 3D structures with 3D models extracted from OpenStreetMap. The proposed method outperforms previous methods, reducing the length of the planned inspection path by up to 48%.
|
|
10:30-10:45, Paper MoAT18.3 | |
>Generating Minimum-Snap Quadrotor Trajectories Really Fast |
> Video Attachment
|
|
Burke, Declan | The University of Melbourne |
Chapman, Airlie | University of Washington |
Shames, Iman | The University of Melbourne |
Keywords: Motion and Path Planning, Optimization and Optimal Control, Nonholonomic Motion Planning
Abstract: We propose an algorithm for generating minimum-snap trajectories for quadrotors whose computational complexity is linear in the number of segments in the spline trajectory. Our algorithm is numerically stable for large numbers of segments and is able to generate trajectories of more than 500,000 segments. The computational speed and numerical stability of our algorithm make it suitable for real-time generation of very large-scale trajectories. We demonstrate the performance of our algorithm and compare it to existing methods, showing that it is both faster and able to compute larger trajectories than the state of the art. We also demonstrate the feasibility of the generated trajectories experimentally with a long quadrotor flight.
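For context, a closed-form special case is sketched below: the single-segment, rest-to-rest minimum-snap polynomial (zero velocity, acceleration, and jerk at both ends). It is not the paper's linear-time multi-segment solver; the displacement and duration are arbitrary example values.

# Single-segment, rest-to-rest minimum-snap polynomial: the unique 7th-order
# polynomial with zero velocity/acceleration/jerk at both ends, scaled to a
# chosen displacement and duration.
import numpy as np

def min_snap_rest_to_rest(p0, p1, T, t):
    """Position at time t along the minimum-snap rest-to-rest segment."""
    s = np.clip(t / T, 0.0, 1.0)
    # normalized minimum-snap profile: 35 s^4 - 84 s^5 + 70 s^6 - 20 s^7
    shape = 35 * s**4 - 84 * s**5 + 70 * s**6 - 20 * s**7
    return p0 + (p1 - p0) * shape

T = 2.0                                 # segment duration [s] (example value)
ts = np.linspace(0.0, T, 9)
print(np.round(min_snap_rest_to_rest(0.0, 1.0, T, ts), 4))
# boundary check: start/end positions are hit exactly; since the derivative is
# proportional to s^3 (1 - s)^3, velocity, acceleration and jerk vanish at both ends.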
|
|
10:45-11:00, Paper MoAT18.4 | |
>Rectangular Pyramid Partitioning Using Integrated Depth Sensors (RAPPIDS): A Fast Planner for Multicopter Navigation |
> Video Attachment
|
|
Bucki, Nathan | University of California, Berkeley |
Lee, Junseok | University of California, Berkeley |
Mueller, Mark Wilfried | University of California, Berkeley |
Keywords: Reactive and Sensor-Based Planning, Collision Avoidance, Aerial Systems: Perception and Autonomy
Abstract: We present RAPPIDS: a novel collision checking and planning algorithm for multicopters that is capable of quickly finding local collision-free trajectories given a single depth image from an onboard camera. The primary contribution of this work is a new pyramid-based spatial partitioning method that enables rapid collision detection between candidate trajectories and the environment. By leveraging the efficiency of our collision checking method, we show how a local planning algorithm can be run at high rates on computationally constrained hardware, evaluating thousands of candidate trajectories in milliseconds. The performance of the algorithm is compared to existing collision checking methods in simulation, showing our method to be capable of evaluating orders of magnitude more trajectories per second. Experimental results are presented showing a quadcopter quickly navigating a previously unseen cluttered environment by running the algorithm on an ODROID-XU4 at 30 Hz.
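For contrast with the pyramid-based partitioning, the sketch below shows a naive per-sample baseline for the same sub-problem: projecting a candidate trajectory point into a single depth image and comparing depths. The intrinsics, margin, and toy depth image are hypothetical.

# Naive per-sample free-space check against one depth image (a baseline for
# comparison only, not the RAPPIDS method).
import numpy as np

def sample_is_free(p_cam, depth, fx, fy, cx, cy, margin=0.3):
    """p_cam: 3D point in the camera frame (z forward). Returns True only if the
    depth image shows free space at least `margin` beyond the point."""
    x, y, z = p_cam
    if z <= 0.0:
        return False                        # behind the camera: unknown, treat as occupied
    u, v = int(round(fx * x / z + cx)), int(round(fy * y / z + cy))
    if not (0 <= v < depth.shape[0] and 0 <= u < depth.shape[1]):
        return False                        # outside the field of view: unknown
    return z + margin < depth[v, u]         # known-free only in front of the observed surface

# toy 4x4 depth image (meters): a wall at 5 m with a nearer obstacle patch at 1.5 m
depth = np.full((4, 4), 5.0)
depth[1:3, 1:3] = 1.5
fx = fy = 2.0
cx = cy = 2.0
print(sample_is_free(np.array([0.0, 0.0, 1.0]), depth, fx, fy, cx, cy))  # True (in front)
print(sample_is_free(np.array([0.0, 0.0, 2.0]), depth, fx, fy, cx, cy))  # False (behind obstacle)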
|
|
MoAT19 |
Room T19 |
Planning for Aerial Systems |
Regular session |
Chair: Chli, Margarita | ETH Zurich |
Co-Chair: Torres-González, Arturo | University of Seville |
|
10:00-10:15, Paper MoAT19.1 | |
>Persistent Connected Power Constrained Surveillance with Unmanned Aerial Vehicles |
|
Ghosh, Pradipta | University of Southern California |
Tabuada, Paulo | UCLA |
Govindan, Ramesh | University of Southern California |
Sukhatme, Gaurav | University of Southern California |
Keywords: Path Planning for Multiple Mobile Robots or Agents, Networked Robots, Motion and Path Planning
Abstract: Persistent surveillance with aerial vehicles (drones) subject to connectivity and power constraints is a relatively uncharted domain of research. To reduce the complexity of multi-drone motion planning, most state-of-the-art solutions ignore network connectivity and assume unlimited battery power. Motivated by this and advances in optimization and constraint satisfaction techniques, we introduce a new persistent surveillance motion planning problem for multiple drones that incorporates connectivity and power consumption constraints. We use a recently developed constrained optimization tool (Satisfiability Modulo Convex Optimization (SMC)) that has the expressivity needed for this problem. We show how to express the new persistent surveillance problem in the SMC framework. Our analysis of the formulation based on a set of simulation experiments illustrates that we can generate the desired motion planning solution within a couple of minutes for small teams of drones (up to 5) confined to a 7 x 7 x 1 grid-space.
|
|
10:15-10:30, Paper MoAT19.2 | |
>Autonomous Planning for Multiple Aerial Cinematographers |
|
Caraballo de la Cruz, Luis Evaristo | Universidad De Sevilla |
Montes-Romero, Angel-Manuel | University of Seville; GRVC Team |
Díaz-Báńez, José-Miguel | Universidad Sevilla |
Capitan, Jesus | University of Seville |
Torres-González, Arturo | University of Seville |
Ollero, Anibal | University of Seville |
Keywords: Multi-Robot Systems, Planning, Scheduling and Coordination, Aerial Systems: Applications
Abstract: This paper proposes a planning algorithm for autonomous media production with multiple Unmanned Aerial Vehicles (UAVs) in outdoor events. Given filming tasks specified by a media Director, we formulate an optimization problem to maximize the filming time considering battery constraints. As we conjecture that the problem is NP-hard, we consider a discretized version and propose a graph-based algorithm that can find an optimal solution of the discrete problem for a single UAV in polynomial time. A greedy strategy is then applied to solve the problem sequentially for multiple UAVs. We demonstrate that our algorithm is efficient for small teams (3-5 UAVs) and that its performance is close to the optimum. We showcase our system in field experiments carrying out actual media production in an outdoor scenario with multiple UAVs.
|
|
10:30-10:45, Paper MoAT19.3 | |
>Multi-Robot Coordination with Agent-Server Architecture for Autonomous Navigation in Partially Unknown Environments |
> Video Attachment
|
|
Bartolomei, Luca | ETH Zurich |
Karrer, Marco | ETH Zurich |
Chli, Margarita | ETH Zurich |
Keywords: Path Planning for Multiple Mobile Robots or Agents, Aerial Systems: Perception and Autonomy, Multi-Robot Systems
Abstract: In this work, we present a system architecture to enable autonomous navigation of multiple agents across user-selected global interest points in a partially unknown environment. The system is composed of a server and a team of agents, here small aircrafts. Leveraging this architecture, computationally demanding tasks, such as global dense mapping and global path planning can be outsourced to a potentially powerful central server, limiting the onboard computation for each agent to local pose estimation using Visual-Inertial Odometry (VIO) and local path planning for obstacle avoidance. By assigning priorities to the agents, we propose a hierarchical multi-robot global planning pipeline, which avoids collisions amongst the agents and computes their paths towards the respective goals. The resulting global paths are communicated to the agents and serve as reference input to the local planner running onboard each agent. In contrast to previous works, here we relax the common assumption of a previously mapped environment and perfect knowledge about the state, and we show the effectiveness of the proposed approach in photo-realistic simulations with up to four agents operating in an industrial environment.
|
|
10:45-11:00, Paper MoAT19.4 | |
>A Distributed Pipeline for Scalable, Deconflicted Formation Flying |
|
Lusk, Parker C. | Massachusetts Institute of Technology |
Cai, Xiaoyi | Massachusetts Institute of Technology |
Wadhwania, Samir | Massachusetts Institute of Technology |
Paris, Aleix | Massachusetts Institute of Technology |
Fathian, Kaveh | MIT |
How, Jonathan Patrick | Massachusetts Institute of Technology |
Keywords: Swarms, Distributed Robot Systems, Multi-Robot Systems
Abstract: Reliance on external localization infrastructure and centralized coordination are main limiting factors for formation flying of vehicles in large numbers and in unprepared environments. While solutions using onboard localization address the dependency on external infrastructure, the associated coordination strategies typically lack collision avoidance and scalability. To address these shortcomings, we present a unified pipeline with onboard localization and a distributed, collision-free motion planning strategy that scales to a large number of vehicles. Since distributed collision avoidance strategies are known to result in gridlock, we also present a decentralized task assignment solution to deconflict vehicles. We experimentally validate our pipeline in simulation and hardware. The results show that our approach for solving the optimization problem associated with motion planning gives solutions within seconds in cases where general purpose solvers fail due to high complexity. In addition, our lightweight assignment strategy leads to successful and quicker formation convergence in 96-100% of all trials, whereas indefinite gridlocks occur without it for 33-50% of trials. By enabling large-scale, deconflicted coordination, this pipeline should help pave the way for anytime, anywhere deployment of aerial swarms.
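The deconfliction-by-assignment idea can be illustrated with a standard optimal assignment (this is not the paper's decentralized algorithm): each vehicle receives a distinct formation goal minimizing total squared travel distance, which removes the crossings that tend to cause gridlock. Positions below are toy values.

# Optimal one-to-one assignment of vehicles to formation goals (Hungarian-style),
# as a centralized stand-in for the paper's decentralized assignment.
import numpy as np
from scipy.optimize import linear_sum_assignment

vehicles = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [3.0, 0.0]])
goals    = np.array([[3.0, 3.0], [0.0, 3.0], [2.0, 3.0], [1.0, 3.0]])

cost = np.linalg.norm(vehicles[:, None, :] - goals[None, :, :], axis=-1) ** 2
rows, cols = linear_sum_assignment(cost)        # minimizes the total assignment cost
for r, c in zip(rows, cols):
    print(f"vehicle {r} -> goal {c}   (cost {cost[r, c]:.1f})")
print("total assignment cost:", cost[rows, cols].sum())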
|
|
11:00-11:15, Paper MoAT19.5 | |
>Decentralized Nonlinear MPC for Robust Cooperative Manipulation by Heterogeneous Aerial-Ground Robots |
> Video Attachment
|
|
Lissandrini, Nicola | University of Padova |
Verginis, Christos | Electrical Engineering, KTH Royal Institute of Technology |
Roque, Pedro | KTH Royal Institute of Technology, Stockholm, Sweden |
Cenedese, Angelo | University of Padova |
Dimarogonas, Dimos V. | KTH Royal Institute of Technology |
Keywords: Multi-Robot Systems, Cooperating Robots, Aerial Systems: Applications
Abstract: Cooperative robotics is a trending topic nowadays, as it makes possible a number of tasks that cannot be performed by individual robots, such as heavy payload transportation and agile manipulation. In this work, we address the problem of cooperative transportation by heterogeneous, manipulator-endowed robots. Specifically, we consider a generic number of robotic agents simultaneously grasping an object, which is to be transported to a prescribed set point while avoiding obstacles. The procedure is based on a decentralized leader-follower Model Predictive Control scheme, where a designated leader agent is responsible for generating a trajectory compatible with its dynamics, and the followers must compute a trajectory for their own manipulators that aims at minimizing the internal forces and torques that might be applied to the object by the different grippers. The Model Predictive Control approach is well suited to this problem because it provides both a control law and a technique to generate trajectories, which can be shared among the agents. The proposed algorithm is implemented on a system comprising a ground and an aerial robot, both in the Gazebo robotic simulator and in experiments with real robots, where the methodological approach is assessed and the controller design is shown to be effective for the cooperative transportation task.
|
|
11:15-11:30, Paper MoAT19.6 | |
>A Unified NMPC Scheme for MAVs Navigation with 3D Collision Avoidance under Position Uncertainty |
|
Sharif Mansouri, Sina | Luleå University of Technology |
Kanellakis, Christoforos | LTU |
Lindqvist, Björn | Luleå University of Technology |
Pourkamali-Anaraki, Farhad | Assistant Professor |
Agha-mohammadi, Ali-akbar | NASA-JPL, Caltech |
Burdick, Joel | California Institute of Technology |
Nikolakopoulos, George | Luleå University of Technology |
Keywords: Collision Avoidance, Aerial Systems: Applications, Object Detection, Segmentation and Categorization
Abstract: This article proposes a novel Nonlinear Model Predictive Control (NMPC) framework for Micro Aerial Vehicle (MAV) autonomous navigation in indoor enclosed environments. The introduced framework accounts for the nonlinear dynamics of MAVs and nonlinear geometric constraints, while guaranteeing real-time performance. Our first contribution is to reveal underlying planes within a 3D point cloud, obtained from a 3D lidar scanner, by designing an efficient subspace clustering method. The second contribution is to incorporate the extracted information into the nonlinear constraints of the NMPC for avoiding collisions. Our third contribution focuses on making the controller robust by considering the uncertainty of localization in the NMPC, using Shannon's entropy to define the weights involved in the optimization process. This strategy enables the controller to track position references, velocity references, or neither, in the event of losing position or velocity estimates. As a result, the collision avoidance constraints are defined in the local coordinates of the MAV and remain active, guaranteeing collision avoidance despite localization uncertainties, e.g., position estimation drift. The efficacy of the suggested framework has been evaluated using various simulations in the Gazebo environment.
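One ingredient can be sketched in isolation: the Shannon (differential) entropy of a Gaussian localization estimate and a possible mapping from entropy to a position-tracking weight. The covariances, bounds, and weight mapping below are hypothetical, not the paper's tuning.

# Differential entropy of a Gaussian localization estimate and a simple
# entropy-to-weight mapping (high uncertainty -> low position-tracking weight).
import numpy as np

def gaussian_entropy(cov):
    n = cov.shape[0]
    # H = 0.5 * ln( (2*pi*e)^n * det(cov) )
    return 0.5 * np.log(((2.0 * np.pi * np.e) ** n) * np.linalg.det(cov))

def position_weight(cov, h_min=-5.0, h_max=5.0, w_max=10.0):
    """Map the (clipped) entropy linearly to a tracking weight in [0, w_max]."""
    h = np.clip(gaussian_entropy(cov), h_min, h_max)
    return w_max * (h_max - h) / (h_max - h_min)

good_fix = 0.01 * np.eye(3)     # confident localization
drifting = 1.00 * np.eye(3)     # degraded localization (e.g., position drift)
for name, cov in [("good_fix", good_fix), ("drifting", drifting)]:
    print(name, "entropy =", round(gaussian_entropy(cov), 2),
          "weight =", round(position_weight(cov), 2))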
|
|
MoAT20 |
Room T20 |
Aerial Systems: Perception |
Regular session |
Chair: Roy, Nicholas | Massachusetts Institute of Technology |
Co-Chair: Pan, Jia | University of Hong Kong |
|
10:00-10:15, Paper MoAT20.1 | |
>In-Flight Range Optimization of Multicopters Using Multivariable Extremum Seeking with Adaptive Step Size |
> Video Attachment
|
|
Wu, Xiangyu | University of California, Berkeley |
Mueller, Mark Wilfried | University of California, Berkeley |
Keywords: Energy and Environment-Aware Automation, Robust/Adaptive Control of Robotic Systems, Aerial Systems: Perception and Autonomy
Abstract: Limited flight range is a common problem for multicopters. To alleviate this problem, we propose a method for finding the optimal speed and heading of a multicopter when flying a given path to achieve the longest flight range. Based on a novel multivariable extremum seeking controller with adaptive step size, the method (a) does not require any power consumption model of the vehicle, (b) can adapt to unknown disturbances, (c) can be executed online, and (d) converges faster than the standard extremum seeking controller with constant step size. We conducted indoor experiments to validate the effectiveness of this method under different payloads and initial conditions, and showed that it is able to converge more than 30% faster than the standard extremum seeking controller. This method is especially useful for applications such as package delivery, where the size and weight of the payload differ for different deliveries and the power consumption of the vehicle is hard to model.
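For background, a toy multivariable extremum-seeking loop with a constant step size (the baseline the adaptive-step method improves upon) is sketched below on a made-up quadratic power-vs-(speed, heading) map; all gains, amplitudes, and frequencies are illustrative.

# Constant-step multivariable extremum seeking on a toy power map: sinusoidal
# dithers at distinct frequencies are demodulated (after a washout filter) to
# estimate the gradient, which is descended toward minimum power.
import numpy as np

def power(u):                                    # unknown to the controller (toy model)
    u_star = np.array([8.0, 0.3])                # optimal speed [m/s] and relative heading [rad]
    return 100.0 + 2.0 * (u[0] - u_star[0])**2 + 50.0 * (u[1] - u_star[1])**2

dt = 0.01
a     = np.array([0.2, 0.05])                    # dither amplitudes
omega = np.array([6.0, 4.0])                     # distinct dither frequencies [rad/s]
k     = np.array([1.0, 0.2])                     # constant step sizes (the baseline case)
u_hat = np.array([5.0, 0.0])                     # initial (speed, heading) estimate
J_avg, tau_hp = power(u_hat), 0.5                # washout (high-pass) filter state / time constant
tail = []

for i in range(30000):                           # 300 s of simulated flight
    t = i * dt
    J = power(u_hat + a * np.sin(omega * t))     # measured power at the dithered input
    J_avg += dt * (J - J_avg) / tau_hp           # low-pass; (J - J_avg) is the high-passed signal
    grad_est = (J - J_avg) * np.sin(omega * t)   # demodulation ~ scaled gradient estimate
    u_hat = u_hat - dt * k * grad_est            # descend toward minimum power
    if i >= 25000:
        tail.append(u_hat.copy())

# averaged estimate over the final 50 s; should be close to (8.0, 0.3)
print("estimated optimum (speed, heading):", np.round(np.mean(tail, axis=0), 2))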
|
|
10:15-10:30, Paper MoAT20.2 | |
>Semantic Trajectory Planning for Long-Distant Unmanned Aerial Vehicle Navigation in Urban Environments |
> Video Attachment
|
|
Ryll, Markus | Massachusetts Institute of Technology |
Ware, John | Massachusetts Institute of Technology |
Carter, John | MIT |
Roy, Nicholas | Massachusetts Institute of Technology |
Keywords: Aerial Systems: Perception and Autonomy, Autonomous Vehicle Navigation, Aerial Systems: Applications
Abstract: There has been a considerable amount of recent work on high-speed micro-aerial vehicle flight in unknown and unstructured environments. Generally these approaches either use active sensing or fly slowly enough to ensure a safe braking distance with the relatively short sensing range of passive sensors. The former generally requires carrying large and heavy LIDARs, and the latter only allows flight far from the dynamic limits of the vehicle. One of the significant challenges for high-speed flight is the computational demand of trajectory planning at the sufficiently high rates and length scales required in outdoor environments. We tackle both problems in this work by leveraging semantic information derived from an RGB camera on board the vehicle. We first describe how to use semantic information to increase the effective range of perception on certain environment classes. Second, we present a sparse representation of the environment that is sufficiently lightweight for long-distance path planning. We show how our approach outperforms more traditional metric planners that seek the shortest path, and demonstrate the semantic planner's capabilities in a set of simulated and extensive real-world autonomous quadrotor flights in an urban environment.
|
|
10:30-10:45, Paper MoAT20.3 | |
>Augmented Memory for Correlation Filters in Real-Time UAV Tracking |
|
Li, Yiming | Tongji University |
Fu, Changhong | Tongji University |
Ding, Fangqiang | Tongji University |
Huang, Ziyuan | National University of Singapore |
Pan, Jia | University of Hong Kong |
Keywords: Aerial Systems: Perception and Autonomy, Computer Vision for Automation, Computer Vision for Other Robotic Applications
Abstract: The outstanding computational efficiency of the discriminative correlation filter (DCF) fades away with various complicated improvements. Previous appearances are also gradually forgotten due to the exponential decay of historical views in the traditional appearance-updating scheme of the DCF framework, reducing the model's robustness. In this work, a novel tracker based on the DCF framework is proposed to augment the memory of previously appeared views while running at real-time speed. Several historical views and the current view are introduced simultaneously in training to allow the tracker to adapt to new appearances as well as memorize previous ones. A novel rapid compressed context learning is proposed to increase the discriminative ability of the filter efficiently. Substantial experiments on the UAVDT and UAV123 datasets have validated that the proposed tracker performs competitively against 26 other top DCF and deep-based trackers at over 40 fps on a CPU.
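As background on the machinery involved, the sketch below trains a basic MOSSE-style correlation filter in the Fourier domain, accumulating the numerator and denominator over several stored views rather than decaying them exponentially, which is the flavor of "memorizing" previous appearances. It is not the proposed tracker; data are synthetic.

# Basic correlation filter trained jointly on several stored views, then used
# to localize a shifted version of the target in a new frame.
import numpy as np

def gaussian_label(h, w, sigma=2.0):
    ys, xs = np.mgrid[0:h, 0:w]
    return np.exp(-((ys - h // 2) ** 2 + (xs - w // 2) ** 2) / (2 * sigma ** 2))

def train_filter(views, label, lam=1e-2):
    """Ridge solution in the Fourier domain: H* = sum(G F*) / (sum(F F*) + lam)."""
    G = np.fft.fft2(label)
    num = np.zeros_like(G)
    den = np.zeros(label.shape, dtype=complex)
    for v in views:                       # every stored view contributes equally (no decay)
        F = np.fft.fft2(v)
        num += G * np.conj(F)
        den += F * np.conj(F)
    return num / (den + lam)

def respond(H_conj, patch):
    return np.real(np.fft.ifft2(H_conj * np.fft.fft2(patch)))

rng = np.random.default_rng(1)
base = rng.standard_normal((32, 32))                                   # target appearance
views = [base + 0.1 * rng.standard_normal((32, 32)) for _ in range(5)]  # stored appearances
H_conj = train_filter(views, gaussian_label(32, 32))

probe = np.roll(base, shift=(3, 5), axis=(0, 1))   # target shifted in a new frame
resp = respond(H_conj, probe)
print("response peak:", np.unravel_index(np.argmax(resp), resp.shape),
      "(expected near (19, 21), i.e. the label center shifted by (3, 5))")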
|
|
10:45-11:00, Paper MoAT20.4 | |
>Next-Best-View Planning for Surface Reconstruction of Large-Scale 3D Environments with Multiple UAVs |
> Video Attachment
|
|
Hardouin, Guillaume | ONERA |
Moras, Julien | ONERA |
Morbidi, Fabio | Université De Picardie Jules Verne |
Marzat, Julien | ONERA, Université Paris Saclay |
Mouaddib, El Mustapha | Universite De Picardie Jules Verne |
Keywords: Aerial Systems: Perception and Autonomy, Reactive and Sensor-Based Planning, Path Planning for Multiple Mobile Robots or Agents
Abstract: In this paper, we propose a novel cluster-based Next-Best-View path planning algorithm to simultaneously explore and inspect large-scale unknown environments with multiple Unmanned Aerial Vehicles (UAVs). In the majority of existing informative path-planning methods, a volumetric criterion is used for the exploration of unknown areas, and the presence of surfaces is only taken into account indirectly. Unfortunately, this approach may lead to inaccurate 3D models, with no guarantee of global surface coverage. To perform accurate 3D reconstructions and minimize runtime, we extend our previous online planner based on TSDF (Truncated Signed Distance Function) mapping, to a fleet of UAVs. Sensor configurations to be visited are directly extracted from the map and assigned greedily to the aerial vehicles, in order to maximize the global utility at the fleet level. The performances of the proposed TSGA (TSP-Greedy Allocation) planner and of a nearest neighbor planner have been compared via realistic numerical experiments in two challenging environments (a power plant and the Statue of Liberty) with up to five quadrotor UAVs equipped with stereo cameras.
|
|
11:00-11:15, Paper MoAT20.5 | |
>Towards Robust Visual Tracking for Unmanned Aerial Vehicle with Tri-Attentional Correlation Filters |
|
He, Yujie | Tongji University |
Fu, Changhong | Tongji University |
Lin, Fuling | Tongji University |
Li, Yiming | Tongji University |
Lu, Peng | The Hong Kong Polytechnic University |
Keywords: Aerial Systems: Perception and Autonomy, Aerial Systems: Applications, Surveillance Systems
Abstract: Object tracking has been broadly applied in unmanned aerial vehicle (UAV) tasks in recent years. However, existing algorithms still face difficulties such as partial occlusion, cluttered backgrounds, and other challenging visual factors. Inspired by cutting-edge attention mechanisms, a novel object tracking framework is proposed to leverage multi-level visual attention. Three primary attention mechanisms, i.e., contextual attention, dimensional attention, and spatiotemporal attention, are integrated into the training and detection stages of a correlation filter-based tracking pipeline. The proposed tracker is therefore equipped with robust discriminative power against challenging factors while maintaining high operational efficiency in UAV scenarios. Quantitative and qualitative experiments on two well-known benchmarks with 173 challenging UAV video sequences demonstrate the effectiveness of the proposed framework. The proposed tracking algorithm favorably outperforms 12 state-of-the-art methods, yielding a 4.8% relative gain on UAVDT and an 8.2% relative gain on UAV123@10fps against the baseline tracker while operating at ~28 frames per second.
|
|
11:15-11:30, Paper MoAT20.6 | |
>Inspection-On-The-Fly Using Hybrid Physical Interaction Control for Aerial Manipulators |
> Video Attachment
|
|
Abbaraju, Praveen | Purdue University |
Ma, Xin | Chinese Univerisity of HongKong |
Manoj Krishnan, Harikrishnan | Purdue University |
Venkatesh, L.N Vishnunandan | Purdue University |
Rastgaar, Mo | Purdue University |
Voyles, Richard | Purdue University |
Keywords: Aerial Systems: Perception and Autonomy
Abstract: Inspection for structural properties (surface stiffness and coefficient of restitution) is crucial for understanding and performing aerial manipulation in unknown environments, with little to no prior knowledge of their state. Inspection-on-the-fly is the uncanny ability of humans to infer states during manipulation, reducing the need to perform inspection and manipulation separately. This paper presents an infrastructure for the inspection-on-the-fly method for aerial manipulators using hybrid physical interaction control. With the proposed method, structural properties (surface stiffness and coefficient of restitution) can be estimated during physical interactions. A three-stage hybrid physical interaction control paradigm is presented to robustly approach, acquire and impart a desired force signature onto a surface. This is achieved by combining a hybrid force/motion controller with a model-based feed-forward impact control as an intermediate phase. The proposed controller ensures a steady transition from unconstrained motion control to constrained force control, while reducing the lag associated with the force control phase, and an underlying Operational Space dynamic configuration manager permits complex, redundant vehicle/arm combinations. Experiments were carried out in a mock-up of a Dept. of Energy exhaust shaft to show the effectiveness of the inspection-on-the-fly method in determining the structural properties of the target surface and the performance of the hybrid physical interaction controller in reducing the lag associated with the force control phase.
|
|
11:30-11:45, Paper MoAT20.7 | |
>AirCapRL: Autonomous Aerial Human Motion Capture Using Deep Reinforcement Learning |
|
Tallamraju, Rahul | International Institute of Information Technology, Hyderabad |
Saini, Nitin | Max Planck Institute for Intelligent Systems |
Bonetto, Elia | Max Planck Institute for Intelligent Systems, Tuebingen |
Pabst, Michael | Max Planck Institute for Intelligent Systems |
Liu, Yu Tang | Max Planck Institute Intelligent System |
Black, Michael | Max Planck Institute for Intelligent Systems in Tübingen |
Ahmad, Aamir | Max Planck Institute for Intelligent Systems |
Keywords: Reinforcement Learning, Aerial Systems: Perception and Autonomy, Multi-Robot Systems
Abstract: In this letter, we introduce a deep reinforcement learning (DRL) based multi-robot formation controller for the task of autonomous aerial human motion capture (MoCap). We focus on vision-based MoCap, where the objective is to estimate the trajectory of body pose and shape of a single moving person using multiple micro aerial vehicles. State-of-the-art solutions to this problem are based on classical control methods, which depend on hand-crafted system and observation models. Such models are difficult to derive and generalize across different systems. Moreover, the non-linearities and non-convexities of these models lead to sub-optimal controls. In our work, we formulate this problem as a sequential decision making task to achieve the vision-based motion capture objectives, and solve it using a deep neural network-based RL method. We leverage proximal policy optimization (PPO) to train a stochastic decentralized control policy for formation control. The neural network is trained in a parallelized setup in synthetic environments. We performed extensive simulation experiments to validate our approach. Finally, real-robot experiments demonstrate that our policies generalize to real world conditions.
|
|
MoAT21 |
Room T21 |
Perception for Aerial Systems |
Regular session |
Chair: Scherer, Sebastian | Carnegie Mellon University |
Co-Chair: Albl, Cenek | ETH Zurich |
|
10:00-10:15, Paper MoAT21.1 | |
>DR^2Track: Towards Real-Time Visual Tracking for UAV Via Distractor Repressed Dynamic Regression |
|
Fu, Changhong | Tongji University |
Ding, Fangqiang | Tongji University |
Li, Yiming | Tongji University |
Jin, Jin | Tongji University |
Feng, Chen | New York University |
Keywords: Aerial Systems: Applications, Computer Vision for Automation, Aerial Systems: Perception and Autonomy
Abstract: Visual tracking has yielded promising applications with unmanned aerial vehicles (UAVs). In the literature, advanced discriminative correlation filter (DCF) trackers generally distinguish the foreground from the background with a learned regressor that regresses the implicitly circulated samples onto a fixed target label. However, the predefined and unchanged regression target results in low robustness and adaptivity in uncertain aerial tracking scenarios. In this work, we exploit the local maximum points of the response map generated in the detection phase to automatically locate current distractors. By repressing the response of distractors in the regressor learning, we can dynamically and adaptively alter our regression target to improve tracking robustness as well as adaptivity. Substantial experiments conducted on three challenging UAV benchmarks demonstrate both the excellent performance and extraordinary speed (~50 fps on a cheap CPU) of our tracker.
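The distractor-localization step can be sketched on its own (the repression of these peaks inside the regressor training is not shown): local maxima of a response map are found with a maximum filter, and every strong peak other than the global one is flagged as a distractor. The synthetic response map and thresholds are toy values.

# Locate local maxima of a DCF-style response map and flag all strong
# non-global peaks as distractors.
import numpy as np
from scipy.ndimage import maximum_filter

def find_distractors(response, size=5, rel_thresh=0.3):
    is_local_max = (maximum_filter(response, size=size) == response)
    strong = response > rel_thresh * response.max()
    peaks = np.argwhere(is_local_max & strong)
    main = np.unravel_index(np.argmax(response), response.shape)
    return [tuple(p) for p in peaks if tuple(p) != main], main

# toy response map: one main peak plus two weaker distractor peaks
yy, xx = np.mgrid[0:64, 0:64]
blob = lambda cy, cx, amp: amp * np.exp(-((yy - cy)**2 + (xx - cx)**2) / 18.0)
response = blob(32, 32, 1.0) + blob(10, 50, 0.6) + blob(50, 12, 0.5)

distractors, main_peak = find_distractors(response)
print("main peak:", main_peak, "| distractors:", distractors)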
|
|
10:15-10:30, Paper MoAT21.2 | |
>Towards Vision-Based Impedance Control for the Contact Inspection of Unknown Generically-Shaped Surfaces with a Fully-Actuated UAV |
> Video Attachment
|
|
Rashad, Ramy | University of Twente |
Bicego, Davide | University of Twente |
Jiao, Ran | Beihang University |
Sanchez-Escalonilla, Santiago | University of Twente |
Stramigioli, Stefano | University of Twente |
Keywords: Aerial Systems: Perception and Autonomy, Aerial Systems: Applications, Compliance and Impedance Control
Abstract: The integration of computer vision techniques for the accomplishment of autonomous interaction tasks represents a challenging research direction in the context of aerial robotics. In this paper, we consider the problem of contact-based inspection of a textured target of unknown geometry and pose. Exploiting state-of-the-art techniques in computer graphics, tuned and improved for the task at hand, we designed a framework for projecting a desired end-effector trajectory onto a generically shaped surface to be inspected. Combining these results with previous work on energy-based interaction control, we lay the basis of what we call the vision-based impedance control paradigm. To demonstrate the feasibility and effectiveness of our methodology, we present the results of both realistic ROS/Gazebo simulations and preliminary experiments with a fully-actuated hexarotor interacting with heterogeneous curved surfaces whose geometric description is not available a priori, provided that enough visual features on the target are naturally or artificially available to allow the integration of localization and mapping algorithms.
|
|
10:30-10:45, Paper MoAT21.3 | |
>Towards Deep Learning Assisted Autonomous UAVs for Manipulation Tasks in GPS-Denied Environments |
> Video Attachment
|
|
Kumar, Ashish | Indian Institute of Technology, Kanpur |
Vohra, Mohit | Indian Institute of Technology, Kanpur |
Prakash, Ravi | Indian Institute of Technology, Kanpur |
Behera, Laxmidhar | IIT Kanpur |
Keywords: Aerial Systems: Perception and Autonomy, Deep Learning for Visual Perception, Computer Vision for Automation
Abstract: In this work, we present a pragmatic approach to enable unmanned aerial vehicles (UAVs) to autonomously perform the highly complicated tasks of object pick and place. This paper is largely inspired by Challenge 2 of MBZIRC 2020 and is primarily focused on the task of assembling large 3D structures in outdoor, GPS-denied environments. The primary contributions of this system are: (i) a novel, computationally efficient, deep learning-based unified multi-task visual perception system for target localization, part segmentation, and tracking; (ii) a novel deep learning-based grasp state estimation; (iii) a retracting electromagnetic gripper design; (iv) a remote computing approach that exploits state-of-the-art MIMO-based high-speed (5000 Mb/s) wireless links to allow the UAVs to execute compute-intensive tasks on remote high-end compute servers; and (v) system integration, in which several system components are woven together to develop an optimized software stack. We use a DJI Matrice-600 Pro, a hex-rotor UAV, and interface it with the custom-designed gripper. Our framework is deployed on the specified UAV in order to report the performance analysis of the individual modules. Apart from the manipulation system, we also highlight several hidden challenges associated with UAVs in this context.
|
|
10:45-11:00, Paper MoAT21.4 | |
>Reconstruction of 3D Flight Trajectories from Ad-Hoc Camera Networks |
|
Li, Jingtong | ETH Zurich |
Murray, Jesse | ETH Zurich |
Ismaili, Dorina | Technical University Munich |
Schindler, Konrad | ETH Zurich |
Albl, Cenek | ETH Zurich |
Keywords: Aerial Systems: Applications, Computer Vision for Automation, Visual Tracking
Abstract: We present a method to reconstruct the 3D trajectory of an airborne robotic system only from videos recorded with cameras that are unsynchronized, may feature rolling shutter distortion, and whose viewpoints are unknown. Our approach enables robust and accurate outside-in tracking of dynamically flying targets, with cheap and easy-to-deploy equipment. We show that, in spite of the weakly constrained setting, recent developments in computer vision make it possible to reconstruct trajectories in 3D from unsynchronized, uncalibrated networks of consumer cameras, and validate the proposed method in a realistic field experiment. We make our code available along with the data, including cm-accurate ground-truth from differential GNSS navigation.
|
|
11:00-11:15, Paper MoAT21.5 | |
>Bayesian Fusion of Unlabeled Vision and RF Data for Aerial Tracking of Ground Targets |
> Video Attachment
|
|
Kanlapuli Rajasekaran, Ramya | University of Colorado Boulder |
Ahmed, Nisar | University of Colorado Boulder |
Frew, Eric W. | University of Colorado |
Keywords: Aerial Systems: Perception and Autonomy, Sensor Fusion, Visual Tracking
Abstract: This paper presents a method for target localization and tracking in clutter using Bayesian fusion of vision and Radio Frequency (RF) sensors aboard a small Unmanned Aircraft System (sUAS). Sensor fusion is used to ensure tracking robustness and reliability in case of camera occlusion or RF signal interference. Camera data are processed using an off-the-shelf algorithm that detects possible objects of interest in a given image frame, and the true RF-emitting target must be identified from among these if it is present. These data sources, as well as the unknown motion of the target, lead to heavily non-linear, non-Gaussian target state uncertainties, which are not amenable to typical data association methods for tracking. A probabilistic model is thus first rigorously developed to relate the conditional dependencies between target movements, RF data, and visual object detections. A modified particle filter is then developed to simultaneously reason over target states and RF-emitter association hypothesis labels for the visual object detections. Truth-model simulations are presented to compare and validate the effectiveness of the RF + visual data fusion filter.
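A heavily simplified sketch of the filtering idea follows (not the paper's filter or models): a particle filter tracks a 2D ground target, weighting each particle by an RF range likelihood and by a detection likelihood that marginalizes over which visual detection, if any, is the true target, the rest being clutter. All models and numbers are toy values.

# Minimal particle filter fusing an RF range measurement with visual detections
# of unknown association (one true detection plus clutter).
import numpy as np

rng = np.random.default_rng(2)
N = 2000
particles = rng.uniform(-10, 10, size=(N, 2))      # initial belief over target position
weights = np.full(N, 1.0 / N)

uav_pos = np.array([0.0, 0.0])
true_target = np.array([4.0, -3.0])

def rf_likelihood(parts, measured_range, sigma=1.0):
    d = np.linalg.norm(parts - uav_pos, axis=1)
    return np.exp(-0.5 * ((d - measured_range) / sigma) ** 2)

def detection_likelihood(parts, detections, sigma=0.7, p_clutter=0.1):
    lik = np.full(len(parts), p_clutter)            # "all detections are clutter" hypothesis
    for det in detections:                          # marginalize the association label
        d2 = np.sum((parts - det) ** 2, axis=1)
        lik += np.exp(-0.5 * d2 / sigma ** 2)
    return lik

for step in range(20):
    particles += rng.normal(0.0, 0.3, size=particles.shape)      # random-walk motion model
    measured_range = np.linalg.norm(true_target - uav_pos) + rng.normal(0, 1.0)
    detections = [true_target + rng.normal(0, 0.5, 2),           # true detection
                  rng.uniform(-10, 10, 2)]                       # clutter detection
    weights *= rf_likelihood(particles, measured_range) * \
               detection_likelihood(particles, detections)
    weights /= weights.sum()
    idx = rng.choice(N, size=N, p=weights)                       # multinomial resampling
    particles, weights = particles[idx], np.full(N, 1.0 / N)

print("estimated target position:", np.round(particles.mean(axis=0), 2),
      "| true:", true_target)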
|
|
11:15-11:30, Paper MoAT21.6 | |
>Learning Visuomotor Policies for Aerial Navigation Using Cross-Modal Representations |
> Video Attachment
|
|
Bonatti, Rogerio | Carnegie Mellon University |
Madaan, Ratnesh | Carnegie Mellon University |
Vineet, Vibhav | Stanford University |
Scherer, Sebastian | Carnegie Mellon University |
Kapoor, Ashish | MicroSoft |
Keywords: Aerial Systems: Perception and Autonomy, Visual-Based Navigation, Representation Learning
Abstract: Machines are a long way from robustly solving open-world perception-control tasks, such as first-person view (FPV) aerial navigation. While recent advances in end-to-end Machine Learning, especially Imitation Learning and Reinforcement Learning, appear promising, they are constrained by the need for large amounts of difficult-to-collect labeled real-world data. Simulated data, on the other hand, is easy to generate, but generally does not render safe behaviors in diverse real-life scenarios. In this work we propose a novel method for learning robust visuomotor policies for real-world deployment which can be trained purely with simulated data. We develop rich state representations that combine supervised and unsupervised environment data. Our approach takes a cross-modal perspective, where separate modalities correspond to the raw camera data and the system states relevant to the task, such as the relative pose of gates to the drone in the case of drone racing. We feed both data modalities into a novel factored architecture, which learns a joint low-dimensional embedding via Variational Auto Encoders. This compact representation is then fed into a control policy, which we train using imitation learning with expert trajectories in a simulator. We analyze the rich latent spaces learned with our proposed representations, and show that the use of our cross-modal architecture significantly improves control policy performance as compared to end-to-end learning or purely unsupervised feature extractors. We also present real-world results for drone navigation through gates in different track configurations and environmental conditions. Our proposed method, which runs fully onboard, can successfully generalize the learned representations and policies across simulation and reality, significantly outperforming baseline approaches. Supplementary video available at: https://youtu.be/AxE7qGKJWaw and open-sourced code available at: https://github.com/microsoft/AirSim-Drone-Racing-VAE-Imitation
|
|
MoAT22 |
Room T22 |
Sensor Fusion for Aerial, Autonomous, and Marine Robotics |
Regular session |
Chair: Englot, Brendan | Stevens Institute of Technology |
Co-Chair: Atkins, Ella | University of Michigan |
|
10:00-10:15, Paper MoAT22.1 | |
>Touch the Wind: Simultaneous Airflow, Drag and Interaction Sensing on a Multirotor |
> Video Attachment
|
|
Tagliabue, Andrea | ETH Zurich |
Paris, Aleix | Massachusetts Institute of Technology |
Kim, Suhan | Carnegie Mellon University |
Kubicek, Regan | Carnegie Mellon University |
Bergbreiter, Sarah | Carnegie Mellon University |
How, Jonathan Patrick | Massachusetts Institute of Technology |
Keywords: Sensor Fusion, Aerial Systems: Perception and Autonomy, Aerial Systems: Applications
Abstract: Disturbance estimation for Micro Aerial Vehicles (MAVs) is crucial for robustness and safety. In this paper, we use novel, bio-inspired airflow sensors to measure the airflow acting on a MAV, and we fuse this information in an Unscented Kalman Filter (UKF) to simultaneously estimate the three-dimensional wind vector, the drag force, and other interaction forces (e.g. due to collisions, interaction with a human) acting on the robot. To this end, we present and compare a fully model-based and a deep learning-based strategy. The model-based approach considers the MAV and airflow sensor dynamics and its interaction with the wind, while the deep learning-based strategy uses a Long Short-Term Memory (LSTM) to obtain an estimate of the relative airflow, which is then fused in the proposed filter. We validate our methods in hardware experiments, showing that we can accurately estimate relative airflow of up to 4 m/s, and we can differentiate drag and interaction force.
|
|
10:15-10:30, Paper MoAT22.2 | |
>Fusing Concurrent Orthogonal Wide-Aperture Sonar Images for Dense Underwater 3D Reconstruction |
> Video Attachment
|
|
McConnell, John | Stevens Institute of Technology |
Martin, John D. | Stevens Institute of Technology |
Englot, Brendan | Stevens Institute of Technology |
Keywords: Marine Robotics, Range Sensing, Sensor Fusion
Abstract: We propose a novel approach to handling the ambiguity in elevation angle associated with the observations of a forward looking multi-beam imaging sonar, and the challenges it poses for performing an accurate 3D reconstruction. We utilize a pair of sonars with orthogonal axes of uncertainty to independently observe the same points in the environment from two different perspectives, and associate these observations. Using these concurrent observations, we can create a dense, fully defined point cloud at every time-step to aid in reconstructing the 3D geometry of underwater scenes. We will evaluate our method in the context of the current state of the art, for which strong assumptions on object geometry limit applicability to generalized 3D scenes. We will discuss results from laboratory tests that quantitatively benchmark our algorithm's reconstruction capabilities, and results from a real-world, tidal river basin which qualitatively demonstrate our ability to reconstruct a cluttered field of underwater objects.
|
|
10:30-10:45, Paper MoAT22.3 | |
>A Scalable Framework for Robust Vehicle State Estimation with a Fusion of a Low-Cost IMU, the GNSS, Radar, a Camera and Lidar |
|
Liang, Yuran | Technical University of Berlin |
Müller, Steffen | Technical University of Berlin |
Schwendner, Daniel | BMW Group |
Rolle, Daniel | BMW Group |
Ganesch, Dieter | BMW Group |
Schaffer, Immanuel | BMW Group |
Keywords: Sensor Fusion, Autonomous Vehicle Navigation, Computer Vision for Transportation
Abstract: Automated driving requires highly precise and robust vehicle state estimation for its environmental perception, motion planning and control functions. Using GPS and environmental sensors can compensate for the deficits of estimation based on traditional vehicle dynamics sensors. However, each type of sensor has specific strengths and limitations in accuracy and robustness due to their different properties regarding the quality of detection and robustness in diverse environmental conditions. For these reasons, we present a scalable concept for vehicle state estimation using an error-state extended Kalman filter (ESEKF) to fuse classical vehicle sensors with environmental sensors. The state variables, i.e., position, velocity and orientation, are predicted by a 6-degree-of-freedom (DoF) vehicle kinematic model that uses a low-cost inertial measurement unit (IMU) on a customer vehicle. The error of the 6-DoF rigid-body motion model is estimated from observations of the global position, provided by the global navigation satellite system (GNSS), and of the environment, provided by radar, a camera and low-cost lidar. Our concept is scalable such that it is compatible with different sensor setups on different vehicle configurations. The experimental results compare various sensor combinations with measurement data in scenarios such as dynamic driving maneuvers on a test field. The results show that our approach ensures accuracy and robustness with redundant sensor data under regular and dynamic driving conditions.
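A minimal Python sketch of a single error-state correction step is shown below, with generic placeholders for the measurement model h and Jacobian H; the paper's full filter, including error injection and reset for the orientation, is not reproduced:

import numpy as np

def esekf_correction(x_nom, P, z, h, H, R):
    # x_nom: nominal state propagated by the IMU-driven 6-DoF kinematic model
    # P    : covariance of the error state
    # z    : measurement, e.g. a GNSS position or a pose from radar/camera/lidar
    # h    : function mapping the nominal state to the predicted measurement
    # H    : Jacobian of the measurement w.r.t. the error state
    # R    : measurement noise covariance
    y = z - h(x_nom)                       # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    dx = K @ y                             # estimated error state
    P_new = (np.eye(P.shape[0]) - K @ H) @ P
    # dx is then folded back into the nominal state and reset to zero;
    # orientation errors would be injected multiplicatively in a full filter.
    return dx, P_new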
|
|
10:45-11:00, Paper MoAT22.4 | |
>Probabilistic End-To-End Vehicle Navigation in Complex Dynamic Environments with Multimodal Sensor Fusion |
> Video Attachment
|
|
Cai, Peide | Hong Kong University of Science and Technology |
Wang, Sukai | Robotics and Multi-Perception Lab (RAM-LAB), Robotics Institute, |
Sun, Yuxiang | Hong Kong University of Science and Technology |
Liu, Ming | Hong Kong University of Science and Technology |
Keywords: Automation Technologies for Smart Cities, Service Robots, Field Robots
Abstract: All-day and all-weather navigation is a critical capability for autonomous driving, which requires proper reaction to varied environmental conditions and complex agent behaviors. Recently, with the rise of deep learning, end-to-end control for autonomous vehicles has been well studied. However, most works are solely based on visual information, which can be degraded by challenging illumination conditions such as dim light or total darkness. In addition, they usually generate and apply deterministic control commands without considering the uncertainties in the future. In this paper, based on imitation learning, we propose a probabilistic driving model with multi-perception capability utilizing the information from the camera, lidar and radar. We further evaluate its driving performance online on our new driving benchmark, which includes various environmental conditions (e.g., urban and rural areas, traffic densities, weather and times of the day) and dynamic obstacles (e.g., vehicles, pedestrians, motorcyclists and bicyclists). The results suggest that our proposed model outperforms baselines and achieves excellent generalization performance in unseen environments with heavy traffic and extreme weather.
|
|
11:00-11:15, Paper MoAT22.5 | |
>Vision Only 3-D Shape Estimation for Autonomous Driving |
|
Monica, Josephine | Cornell University |
Campbell, Mark | Cornell University |
Keywords: Sensor Fusion, Computer Vision for Automation, Autonomous Vehicle Navigation
Abstract: We present a probabilistic framework for detailed 3-D shape estimation and tracking using only vision measurements. Vision detections are processed via a bird’s eye view representation, creating accurate detections at far ranges. A probabilistic model of the vision based point cloud measurements is learned and used in the framework. A 3-D shape model is developed by fusing a set of point cloud detections via a recursive Best Linear Unbiased Estimator (BLUE). The point cloud fusion accounts for noisy and inaccurate measurements, as well as minimizing growth of points in the 3-D shape. The use of a tracking algorithm and sensor pose enables 3-D shape estimation of dynamic objects from a moving car. Results are analyzed on experimental data, demonstrating the ability of our approach to produce more accurate and cleaner shape estimates.
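A minimal Python sketch of the recursive BLUE update for a single shape point, assuming the learned measurement covariance R is given; the shape model, tracking, and point-growth management described above are not shown:

import numpy as np

def blue_update(x_hat, P, z, R):
    # x_hat: current fused estimate of a 3D shape point
    # P    : covariance of that estimate
    # z    : new vision-derived 3D measurement of the same point
    # R    : learned covariance of the vision point-cloud measurement
    K = P @ np.linalg.inv(P + R)              # gain weighting the new measurement
    x_new = x_hat + K @ (z - x_hat)
    P_new = (np.eye(len(x_hat)) - K) @ P
    return x_new, P_new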
|
|
11:15-11:30, Paper MoAT22.6 | |
>Polylidar - Polygons from Triangular Meshes |
|
Castagno, Jeremy | University of Michigan |
Atkins, Ella | University of Michigan |
Keywords: Aerial Systems: Perception and Autonomy, Reactive and Sensor-Based Planning, Computational Geometry
Abstract: This paper presents Polylidar, an efficient algorithm to extract non-convex polygons, including interior holes, from 2D point sets. Plane-segmented point clouds can be input into Polylidar to extract their polygonal counterpart, thereby reducing map size and improving visualization. The algorithm begins by triangulating the point set and filtering triangles by user-configurable parameters such as triangle edge length. Next, connected triangles are extracted into triangular mesh regions representing the shape of the point set. Finally, each region is converted to a polygon through a novel boundary-following method which accounts for holes. Real-world and synthetic benchmarks are presented to comparatively evaluate Polylidar speed and accuracy. Results show comparable accuracy and a more than four-times speedup compared to other concave polygon extraction methods.
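A minimal Python sketch of the first two stages (triangulation and edge-length filtering) using scipy's Delaunay triangulation; region extraction and the hole-aware boundary following are not reproduced, and the threshold value is illustrative:

import numpy as np
from scipy.spatial import Delaunay

def filter_triangles(points, max_edge=0.15):
    # Triangulate the 2D point set and keep only triangles whose longest
    # edge is below a user-configurable threshold.
    tri = Delaunay(points)
    kept = []
    for simplex in tri.simplices:
        p = points[simplex]
        edges = np.linalg.norm(p - np.roll(p, -1, axis=0), axis=1)
        if edges.max() <= max_edge:
            kept.append(simplex)
    return np.array(kept)

pts = np.random.rand(200, 2)
triangles = filter_triangles(pts)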
|
|
MoBT1 |
Room T1 |
Marine Robotics |
Regular session |
Chair: Hollinger, Geoffrey | Oregon State University |
|
11:45-12:00, Paper MoBT1.1 | |
>Active Alignment Control-Based LED Communication for Underwater Robots |
> Video Attachment
|
|
Solanki, Pratap Bhanu | Michigan State University |
Bopardikar, Shaunak D. | Michigan State University |
Tan, Xiaobo | Michigan State University |
Keywords: Marine Robotics, Optimization and Optimal Control
Abstract: Achieving and maintaining line-of-sight (LOS) is challenging for underwater optical communication systems, especially when the underlying platforms are mobile. In this work, we propose and demonstrate an active alignment control-based LED-communication system that uses the DC value of the communication signal as feedback for LOS maintenance. Utilizing the uni-modal nature of the dependence of the light signal strength on local angles, we propose a novel triangular exploration algorithm, which does not require knowledge of the underlying light intensity model, to maximize the signal strength and thereby achieve and maintain LOS. The method maintains an equilateral triangle shape in the angle space for any three consecutive exploration points, while ensuring the consistency of the exploration direction with the local gradient of signal strength. The effectiveness of the approach is first evaluated in simulation by comparison with extremum-seeking control, where the proposed approach shows a significant advantage in convergence speed. The efficacy is further demonstrated experimentally, where an underwater robot is controlled by a joystick via LED communication.
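A generic, simplified Python sketch of an equilateral-triangle (simplex-style) search in pan/tilt angle space is given below; it is not the authors' exact protocol, only an illustration of how reflecting the weakest vertex keeps the triangle equilateral while following the local signal gradient:

import numpy as np

def reflect_worst_vertex(vertices, strengths):
    # vertices : (3, 2) array, the three most recent exploration points in angle space
    # strengths: (3,) received signal strengths measured at those points
    worst = int(np.argmin(strengths))
    others = [i for i in range(3) if i != worst]
    midpoint = vertices[others].mean(axis=0)
    new_vertices = vertices.copy()
    new_vertices[worst] = 2.0 * midpoint - vertices[worst]   # reflection
    return new_vertices, worst   # measure the signal at new_vertices[worst] next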
|
|
12:00-12:15, Paper MoBT1.2 | |
>An Electrocommunication System Using FSK Modulation and Deep Learning Based Demodulation for Underwater Robots |
|
Qinghao, Wang | Peking University |
Ruijun, Liu | Guangxi University of Science and Technology |
Wang, Wei | Massachusetts Institute of Technology |
Xie, Guangming | Peking University |
Keywords: Biologically-Inspired Robots, Biomimetics, Marine Robotics
Abstract: Underwater communication is extremely challenging for small underwater robots that have stringent power and size constraints. In our previous work, we have demonstrated that electrocommunication is an alternative method for small underwater robot communication. This paper presents a new electrocommunication system which utilizes Binary Frequency Shift Keying (2FSK) modulation and deep-learning-based demodulation for underwater robots. We first derive an underwater electrocommunication model which covers both the near-field area and a large transition area outside of the near-field area. The 2FSK modulation is adopted to improve the anti-interference ability of the signal. A deep learning algorithm is used to demodulate the signal at the receiver. Simulations and experiments show that under the same testing conditions, the new communication system has a lower bit error rate and higher data rate than the previous electrocommunication system. The communication system achieves stable communication within a distance of 10 m at a data transfer rate of 5 kbps with a power consumption of less than 0.1 W. The large improvement in communication distance in this study further advances the application of electrocommunication.
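A minimal numpy sketch of non-coherent binary FSK modulation; the carrier frequencies, sample rate and bit rate are illustrative placeholders, and neither the electrocommunication hardware nor the learned demodulator is reproduced:

import numpy as np

def fsk2_modulate(bits, f0=20e3, f1=30e3, fs=200e3, bit_rate=5e3):
    # Each bit is transmitted as a short tone at one of two carrier frequencies.
    samples_per_bit = int(fs / bit_rate)
    t = np.arange(samples_per_bit) / fs
    tones = [np.sin(2.0 * np.pi * (f1 if b else f0) * t) for b in bits]
    return np.concatenate(tones)

signal = fsk2_modulate([1, 0, 1, 1, 0])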
|
|
12:15-12:30, Paper MoBT1.3 | |
>Demonstration of a Novel Phase Lag Controlled Roll Rotation Mechanism Using a Two-DOF Soft Swimming Robot |
> Video Attachment
|
|
Liu, Bangyuan | Georgia Institute of Technology |
Hammond III, Frank L. | Georgia Institute of Technology |
Keywords: Marine Robotics, Underactuated Robots, Biologically-Inspired Robots
Abstract: Underwater roll rotation is a basic but essential maneuver that allows many biological swimmers to achieve high maneuverability and complex locomotion patterns. In particular, sea mammals (e.g., the sea otter) with flexible vertebral structures have a unique mechanism to efficiently achieve roll rotation, propelled not mainly by inter-digital webbing or fins, but by bending and twisting their body. In this work, we attempt to implement and effectively control roll rotation by mimicking this kind of efficient biomorphic roll mechanism on our two-degree-of-freedom (DOF) soft modular swimming robot. The robot also allows the achievement of other common maneuvers, such as pitch/yaw rotation and linear swimming patterns. The proposed 2-DOF soft swimming robot platform includes an underactuated, cable-driven design that mimics the flexible cascaded skeletal structure of soft spine tissue and hard spine bone seen in many fish species. The cable-driven actuation mechanism is oriented laterally for forward motion and steering in 3D. The robot can perform a steady and controllable roll rotation with a maximum angular speed of 41.6 deg/s. A hypothesis explaining this novel roll rotation mechanism is set forth, and the phenomenon is systematically studied at different frequencies and phase-lag gait conditions. Preliminary results show a linear relationship between roll angular velocity and frequency within a specific range. Additionally, the roll rotation can be controlled independently in some special conditions. These abilities form the foundation for future research on 3D underwater locomotion with adaptive, controllable maneuvering capabilities.
|
|
12:30-12:45, Paper MoBT1.4 | |
>Pauses Provide Effective Control for an Underactuated Oscillating Swimming Robot |
|
Knizhnik, Gedaliah | University of Pennsylvania |
deZonia, Philip | University of Pennsylvania |
Yim, Mark | University of Pennsylvania |
Keywords: Underactuated Robots, Marine Robotics
Abstract: We describe motion primitives and closed-loop control for a unique low-cost single-motor oscillating aquatic system: the Modboat. The Modboat is driven by the conservation of angular momentum, which is used to actuate two passive flippers in a sequential paddling motion for propulsion and steering. We propose a discrete description of the motion of the system, which oscillates around desired trajectories, and propose two motion primitives - one frequency-based and one pause-based - with associated closed-loop controllers. Testing is performed to evaluate each motion primitive, the merits of each are presented, and the pause-based primitive is shown to be significantly superior. Finally, waypoint following is implemented using both primitives and shown to be significantly more successful using the pause-based motion primitive.
|
|
12:45-13:00, Paper MoBT1.5 | |
>Topology-Aware Self-Organizing Maps for Robotic Information Gathering |
|
McCammon, Seth | Oregon State University |
Jones, Dylan | Oregon State University |
Hollinger, Geoffrey | Oregon State University |
Keywords: Marine Robotics, Motion and Path Planning, Computational Geometry
Abstract: In this paper, we present a novel algorithm for constructing a maximally informative path for a robot in an information gathering task. We use a Self-Organizing Map (SOM) framework to discover important topological features in the information function. Using these features, we identify a set of distinct classes of trajectories, each of which has improved convexity compared with the original function. We then leverage a Stochastic Gradient Ascent (SGA) optimization algorithm within each of these classes to optimize promising representative paths. The increased convexity leads to an improved chance of SGA finding the globally optimal path across all homotopy classes. We demonstrate our approach in three different simulated experiments. First, we show that our SOM is able to correctly learn the topological features of a gyre environment with a well-defined topology. Then, in the second set of experiments, we compare the effectiveness of our algorithm in an information gathering task across the gyre world, a set of randomly generated worlds, and a set of worlds drawn from real-world ocean model data. In these experiments our algorithm performs competitively with, or better than, a state-of-the-art Branch and Bound method while requiring significantly less computation time. Lastly, the final set of experiments shows that our method scales better than the comparison methods across different planning mission sizes in real-world environments.
|
|
13:00-13:15, Paper MoBT1.6 | |
>The SPIR: An Autonomous Underwater Robot for Bridge Pile Cleaning and Condition Assessment |
|
Le, Duy Khoa | University of Technology Sydney |
To, Andrew | University of Technology, Sydney |
Leighton, Brenton | University of Technology Sydney |
Hassan, Mahdi | University of Technology, Sydney |
Liu, Dikai | University of Technology, Sydney |
Keywords: Marine Robotics, Robotics in Hazardous Fields, Autonomous Agents
Abstract: The SPIR, Submersible Pylon Inspection Robot, is developed to provide an innovative and practical solution for keeping workers safe during maintenance of underwater structures in shallow waters, which involves working in dangerous water currents and high-pressure water-jet cleaning. More advanced than work-class Remotely Operated Vehicle technology, the SPIR is automated and requires minimal human involvement in the working process, effectively lowering the learning curve required to conduct the work. To make the SPIR operate effectively in poor visibility and highly disturbed environments, multiple new technologies are developed and implemented in the system, including SBL-SONAR-based navigation, 6-DOF stabilisation, and vision-based 3D mapping. Extensive testing and field trials at various bridges are conducted to verify the robotic system. The results demonstrate the suitability of the SPIR for substituting for humans in hazardous underwater tasks such as autonomous cleaning and inspection of bridge and wharf piles.
|
|
13:15-13:30, Paper MoBT1.7 | |
>Vehicle-In-The-Loop Framework for Testing Long-Term Autonomy in a Heterogeneous Marine Robot Swarm |
> Video Attachment
|
|
Babic, Anja | University of Zagreb, Faculty of Electrical Engineering and Comp |
Vasiljevic, Goran | Faculty of Electrical Engineering and Computing, Zagreb, Croatia |
Miskovic, Nikola | University of Zagreb, Faculty of Electrical Engineering And |
Keywords: Marine Robotics, Task Planning, Cooperating Robots
Abstract: A heterogeneous swarm of marine robots was developed with the goal of autonomous long-term monitoring of environmental phenomena in the highly relevant ecosystem of Venice, Italy. As logistics are a continuing challenge in the field of marine robotics, especially when dealing with a large number of agents to be collected and redeployed per experimental run, an approach is needed that provides the benefits of simulation while also reflecting the complexity of the real world. This paper focuses on the development of a vehicle-in-the-loop test environment in which a surface station simulates and transmits the data of any number of simulated agents, while a real marine platform operates based on the received information. Several experimental runs of a specific use-case test scenario using the developed framework and carried out in the field are described and their results are examined.
|
|
MoBT2 |
Room T2 |
Marine Robotics: Mechanisms |
Regular session |
Chair: Sattar, Junaed | University of Minnesota |
Co-Chair: Qian, Huihuan (Alex) | The Chinese University of Hong Kong, Shenzhen |
|
11:45-12:00, Paper MoBT2.1 | |
>Roboat II: A Novel Autonomous Surface Vessel for Urban Environments |
> Video Attachment
|
|
Wang, Wei | Massachusetts Institute of Technology |
Shan, Tixiao | Massachusetts Institute of Technology |
Leoni, Pietro | Massachusetts Institute of Technology |
Meyers, Drew | MIT |
Ratti, Carlo | Massachusetts Institute of Technology |
Rus, Daniela | MIT |
Keywords: Marine Robotics, Autonomous Vehicle Navigation, Automation Technologies for Smart Cities
Abstract: This paper presents a novel autonomous surface vessel (ASV), called Roboat II, for urban transportation. Roboat II is capable of accurate simultaneous localization and mapping (SLAM), receding horizon tracking control and estimation, and path planning. Roboat II is designed to maximize the internal space for transport, and can carry payloads several times its own weight. Moreover, it is capable of holonomic motions to facilitate transporting, docking, and inter-connectivity between boats. The proposed SLAM system receives sensor data from a 3D LiDAR, an IMU, and a GPS, and utilizes a factor graph to tackle the multi-sensor fusion problem. To cope with the complex dynamics in the water, Roboat II employs an online nonlinear model predictive controller (NMPC), where we experimentally estimated the dynamical model of the vessel in order to achieve superior performance for tracking control. The states of Roboat II are simultaneously estimated using a nonlinear moving horizon estimation (NMHE) algorithm. Experiments demonstrate that Roboat II is able to successfully perform online mapping and localization, plan its path and robustly track the planned trajectory in the confined river, implying that this autonomous vessel holds promise for applications in transporting humans and goods on many of today's waterways.
|
|
12:00-12:15, Paper MoBT2.2 | |
>A Two-Stage Automatic Latching System for the USVs Charging in Disturbed Berth |
> Video Attachment
|
|
Xue, Kaiwen | The Chinese University of Hong Kong, Shenzhen |
Liu, Chongfeng | The Chinese University of Hong Kong, Shenzhen |
Liu, Hengli | Peng Cheng Laboratory, Shenzhen |
Xu, Ruoyu | The Chinese University of Hong Kong, Shenzhen |
Sun, Zhenglong | Chinese University of Hong Kong, Shenzhen |
Lam, Tin Lun | The Chinese University of Hong Kong, Shenzhen |
Qian, Huihuan | Shenzhen Institute of Artificial Intelligence and Robotics for S |
Keywords: Marine Robotics, Field Robots, Intelligent Transportation Systems
Abstract: Automatic latching for charging of Unmanned Surface Vehicles (USVs) in a disturbed environment is always a challenging problem. In this paper, we propose a two-stage automatic latching system for USVs charging in berth. In Stage I, a vision-guided algorithm is developed to calculate an optimal latching position for charging. In Stage II, a novel latching mechanism is designed to compensate for movement misalignments caused by water disturbance. A set of experiments has been conducted in real-world environments. The results show the latching success rate is improved from 40% to 73.3% in the best cases with our proposed system. Furthermore, the vision-guided algorithm provides a methodology to optimize the design radius of the latching mechanism with respect to different disturbance levels. Outdoor experiments have validated the efficiency of our proposed automatic latching system. The proposed system improves the autonomous intelligence of USVs and provides great benefits for practical applications.
|
|
12:15-12:30, Paper MoBT2.3 | |
>Variable Pitch System for the Underwater Explorer Robot UX-1 |
|
Suarez Fernandez, Ramon A. | Universidad Politecnica De Madrid |
Grande, Davide | Politecnico Di Milano |
Martins, Alfredo | INESC TEC |
Bascetta, Luca | Politecnico Di Milano |
Dominguez, Sergio | Technical University of Madrid |
Rossi, Claudio | Universidad Politecnica De Madrid |
Keywords: Marine Robotics, Field Robots, Mining Robotics
Abstract: This paper presents the results of the experimental tests performed to validate the functionality of a variable pitch system (VPS), designed for pitch attitude control of the novel underwater robotic vehicle explorer UX-1. The VPS is composed of a mass suspended from a central rod mounted across the hull. This mass is rotated around the transverse axis of the vehicle in order to perform a change in the inclination angle for navigation in vertical mine shafts. In this work, the equations of motion are first derived with a quaternion attitude representation, and are then extended to include the dynamics of the VPS. The performance of the VPS is demonstrated in real underwater experimental tests that validate the pitch angle control independently, and coupled with the heave motion control system.
|
|
12:30-12:45, Paper MoBT2.4 | |
>Design and Experiments with LoCO AUV: A Low Cost Open-Source Autonomous Underwater Vehicle |
> Video Attachment
|
|
Edge, Chelsey | University of Minnesota |
Enan, Sadman Sakib | University of Minnesota, Twin Cities |
Fulton, Michael | University of Minnesota |
Hong, Jungseok | University of Minnesota |
Mo, Jiawei | University of Minnesota, Twin Cities |
Barthelemy, Kimberly | University of Minnesota |
Bashaw, Hunter | Clarkson University |
Kallevig, Berik | University of Minnesota Twin Cities |
Knutson, Corey | University of Minnesota - Duluth |
Orpen, Kevin | University of Minnesota |
Sattar, Junaed | University of Minnesota |
Keywords: Marine Robotics, Field Robots
Abstract: In this paper we present the LoCO AUV, a Low-Cost, Open Autonomous Underwater Vehicle. LoCO is a general-purpose, single-person-deployable, vision-guided AUV, rated to a depth of 100 meters. We discuss the open and expandable design of this underwater robot, as well as the design of a simulator in Gazebo. Additionally, we explore the platform’s preliminary local motion control and state estimation abilities, which enable it to perform maneuvers autonomously. In order to demonstrate its usefulness for a variety of tasks, we implement several of our previously presented human-robot interaction capabilities on LoCO, including gestural control, diver following, and robot communication via motion. Finally, we discuss the practical concerns of deployment and our experiences in using this robot in pools, lakes, and the ocean. All design details, instructions on assembly, and code will be released under a permissive, open-source license.
|
|
MoBT3 |
Room T3 |
Marine Robotics: Perception |
Regular session |
Chair: Drews-Jr, Paulo | Federal University of Rio Grande (FURG) |
Co-Chair: Rekleitis, Ioannis | University of South Carolina |
|
11:45-12:00, Paper MoBT3.1 | |
>Semantic Segmentation of Underwater Imagery: Dataset and Benchmark |
|
Islam, Md Jahidul | University of Minnesota-Twin Cities |
Edge, Chelsey | University of Minnesota |
Xiao, Yuyang | University of Minnesota |
Luo, Peigen | University of Minnesota, Twin Cities |
Mehtaz, Muntaqim | University of Minnesota (IRV Lab) |
Morse, Christopher | University of Minnesota - Twin Cities |
Enan, Sadman Sakib | University of Minnesota, Twin Cities |
Sattar, Junaed | University of Minnesota |
Keywords: Marine Robotics, Field Robots, Object Detection, Segmentation and Categorization
Abstract: In this paper, we present the first large-scale dataset for semantic Segmentation of Underwater IMagery (SUIM). It contains over 1500 images with pixel annotations for eight object categories, including fish (vertebrates), reefs (invertebrates), aquatic plants, wrecks/ruins, human divers, robots, and the sea-floor. The images have been rigorously collected during oceanic explorations and human-robot collaborative experiments, and annotated by human participants. We also present a comprehensive benchmark evaluation of several state-of-the-art semantic segmentation approaches based on standard performance metrics. Additionally, we present SUIM-Net, a fully-convolutional deep residual model that balances the trade-off between performance and computational efficiency. It offers competitive performance while ensuring fast end-to-end inference, which is essential for its use in the autonomy pipeline by visually-guided underwater robots. In particular, we demonstrate its usability benefits for visual servoing, saliency prediction, and detailed scene understanding. With a variety of use cases, the proposed model and benchmark dataset open up promising opportunities for future research in underwater robot vision.
|
|
12:00-12:15, Paper MoBT3.2 | |
>DeepURL: Deep Pose Estimation Framework for Underwater Relative Localization |
> Video Attachment
|
|
Joshi, Bharat | University of South Carolina |
Modasshir, Md | University of South Carolina |
Manderson, Travis | McGill University |
Damron, Hunter | University of South Carolina |
Xanthidis, Marios | University of South Carolina |
Quattrini Li, Alberto | Dartmouth College |
Rekleitis, Ioannis | University of South Carolina |
Dudek, Gregory | McGill University |
Keywords: Field Robots, Deep Learning for Visual Perception, Localization
Abstract: In this paper, we propose a real-time deep-learning approach for determining the 6D relative pose of Autonomous Underwater Vehicles (AUVs) from a single image. A team of autonomous robots localizing themselves in a communication-constrained underwater environment is essential for many applications such as underwater exploration, mapping, multi-robot convoying, and other multi-robot tasks. Due to the profound difficulty of collecting ground truth images with accurate 6D poses underwater, this work utilizes rendered images from the Unreal Game Engine simulation for training. An image-to-image translation network is employed to bridge the gap between the rendered and the real images, producing synthetic images for training. The proposed method predicts the 6D pose of an AUV from a single image as 2D image keypoints representing the 8 corners of the 3D model of the AUV, and then the 6D pose in camera coordinates is determined using RANSAC-based PnP. Experimental results in real-world underwater environments (swimming pool and ocean) with different cameras demonstrate the robustness and accuracy of the proposed technique in terms of translation error and orientation error over the state-of-the-art methods. The code is publicly available.
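A minimal OpenCV sketch of the final pose-recovery step, assuming the network has already predicted the eight 2D corner keypoints and that the corresponding 3D model corners and camera intrinsics are available; the variable names are illustrative and lens distortion is ignored:

import numpy as np
import cv2

def estimate_pose(corners_2d, corners_3d, K):
    # corners_2d: (8, 2) predicted image keypoints (corners of the AUV's 3D box)
    # corners_3d: (8, 3) the same corners expressed in the AUV model frame
    # K         : (3, 3) camera intrinsic matrix
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        corners_3d.astype(np.float32),
        corners_2d.astype(np.float32),
        K.astype(np.float32), None)        # None: no lens distortion assumed
    R, _ = cv2.Rodrigues(rvec)             # rotation matrix from axis-angle vector
    return R, tvec, inliers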
|
|
12:15-12:30, Paper MoBT3.3 | |
>Underwater Monocular Image Depth Estimation Using Single-Beam Echosounder |
> Video Attachment
|
|
Roznere, Monika | Dartmouth College |
Quattrini Li, Alberto | Dartmouth College |
Keywords: Marine Robotics, SLAM, Sensor Fusion
Abstract: This paper proposes a methodology for real-time depth estimation of underwater monocular camera images, fusing measurements from a single-beam echosounder. Our system exploits the echosounder's detection cone to match its measurements with the detected feature points from a monocular SLAM system. Such measurements are integrated into the monocular SLAM system to adjust the visible map points and the scale. We also provide a novel calibration process to determine the extrinsics between the camera and the echosounder for reliable matching. Our proposed approach is implemented within ORB-SLAM2 and evaluated in a swimming pool and in the ocean to validate image depth estimation improvement. In addition, we demonstrate its applicability for improved underwater color correction. Overall, the proposed sensor fusion system enables inexpensive underwater robots with a monocular camera and echosounder to correct the depth estimation and scale in visual SLAM, leading to interesting future applications, such as underwater exploration and mapping.
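A schematic Python sketch of the scale-correction idea, assuming the map points falling inside the echosounder's cone have already been matched; in the paper the measurement is integrated inside the SLAM optimization rather than applied as a one-shot ratio:

import numpy as np

def estimate_scale(sonar_range, matched_point_depths):
    # sonar_range         : metric range returned by the single-beam echosounder [m]
    # matched_point_depths: ranges (along the sonar axis, in SLAM units) of the
    #                       map points that fall inside the echosounder's cone
    depths = np.asarray(matched_point_depths)
    return sonar_range / np.median(depths)   # robust metric-to-SLAM scale ratio

scale = estimate_scale(3.2, [1.05, 0.98, 1.10])
# map points and the camera trajectory are then multiplied by 'scale'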
|
|
12:30-12:45, Paper MoBT3.4 | |
>Matching Color Aerial Images and Underwater Sonar Images Using Deep Learning for Underwater Localization |
|
Machado dos Santos, Matheus | FURG |
Giacomo, Giovanni | FURG |
Drews-Jr, Paulo | Federal University of Rio Grande (FURG) |
Botelho, Silvia | University Federal of Rio Grande (FURG) |
Keywords: Marine Robotics, Deep Learning for Visual Perception, Aerial Systems: Perception and Autonomy
Abstract: Underwater localization is a challenging task due to the lack of a Global Positioning System (GPS). However, the capability to match georeferenced aerial images and acoustic data can help with this task. Autonomous hybrid aerial and underwater vehicles also demand a new localization method capable of combining the perception from both environments. This study proposes a cross-domain and cross-view image matching method, using a color aerial image and an underwater acoustic image to identify whether these images are captured in the same place. The method is designed to match images acquired in partially structured environments with shared features, such as harbors and marinas. Our pipeline combines traditional image processing methods and deep neural network techniques. Real-world datasets from multiple regions are used to validate our work, obtaining a matching precision of up to 80%.
|
|
12:45-13:00, Paper MoBT3.5 | |
>ACMarker: Acoustic Camera-Based Fiducial Marker System in Underwater Environment |
> Video Attachment
|
|
Wang, Yusheng | The University of Tokyo |
Ji, Yonghoon | JAIST |
Liu, Dingyu | The University of Tokyo |
Tamura, Yusuke | Tohoku University |
Tsuchiya, Hiroshi | Wakachiku Construction Co., Ltd |
Yamashita, Atsushi | The University of Tokyo |
Asama, Hajime | The University of Tokyo |
Keywords: Marine Robotics, Computer Vision for Other Robotic Applications
Abstract: ACMarker is an acoustic camera-based fiducial marker system designed for underwater environments. Optical camera-based fiducial marker systems have been widely used in computer vision and robotics applications such as augmented reality (AR), camera calibration, and robot navigation. However, in underwater environments, the performance of optical cameras is limited owing to water turbidity and illumination conditions. Acoustic cameras, which are forward-looking sonars, have been gradually applied in underwater situations. They can acquire high-resolution images even in turbid water with poor illumination. We propose methods to recognize a simply designed marker and to estimate the relative pose between the acoustic camera and the marker. The proposed system can be applied to various underwater tasks such as object tracking and localization of unmanned underwater vehicles. Simulation and real experiments were conducted to test the recognition of such markers and pose estimation based on the markers.
|
|
MoBT4 |
Room T4 |
Marine Robotics: Planning and Control |
Regular session |
Chair: Arbanas, Barbara | University of Zagreb, Faculty of Electrical Engineering and Computing |
Co-Chair: Kaess, Michael | Carnegie Mellon University |
|
12:00-12:15, Paper MoBT4.2 | |
>Risk Vector-Based Near Miss Obstacle Avoidance for Autonomous Surface Vehicles |
> Video Attachment
|
|
Jeong, Mingi | Dartmouth College |
Quattrini Li, Alberto | Dartmouth College |
Keywords: Marine Robotics, Collision Avoidance, Autonomous Vehicle Navigation
Abstract: This paper presents a novel risk vector-based near miss prediction and obstacle avoidance that can be used for computing an efficient, dynamic, and robust action in real-time. Simulation experiments with parameters inferred from experiments in the ocean with our custom-made robotic boat show flexibility and adaptability to many obstacles present in the environment.
|
|
12:15-12:30, Paper MoBT4.3 | |
>Model Identification of a Small Omnidirectional Aquatic Surface Vehicle: A Practical Implementation |
|
Groves, Keir | The University of Manchester |
Dimitrov, Marin | University of Manchester |
Peel, Harriet | University of Manchester |
Marjanovic, Ognjen | University of Manchester |
Lennox, Barry | The University of Manchester |
Keywords: Marine Robotics, Calibration and Identification, Dynamics
Abstract: This work presents a practical method of obtaining a dynamic system model for small omnidirectional aquatic vehicles. The models produced can be used to improve vehicle localisation, aid in the design or tuning of control systems and facilitate the development of simulated environments. The use of a dynamic model for onboard real-time velocity prediction is of particular importance for aquatic vehicles because, unlike for ground vehicles, fast and direct measurement of velocity using encoders is not possible. Previous work on model identification of aquatic vehicles has focused on large vessels that are typically underactuated and have low controllability in the sway direction. In this paper it is demonstrated that the procedure for identifying the model coefficients can be performed quickly, without specialist equipment and using only onboard sensors. This is of key importance because the dynamic model coefficients will change with the payload. Two different thrust allocation schemes are tested, one of which is a known method and another is proposed here. Validation tests are performed and the models generated are shown to be suitable for their intended applications. A significant reduction in model error is demonstrated using the novel thrust allocation method, which is designed to avoid deadbands in the thruster responses.
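A minimal Python sketch of how such coefficients could be fitted by least squares from onboard logs, assuming an illustrative decoupled per-axis model with linear-plus-quadratic damping (not necessarily the model structure or procedure used in the paper):

import numpy as np

def identify_axis_coefficients(v, v_dot, tau):
    # Fit m * v_dot + d1 * v + d2 * |v| * v = tau for one axis (surge, sway or yaw)
    # from logged velocity, acceleration and commanded thrust/torque arrays.
    A = np.column_stack([v_dot, v, np.abs(v) * v])
    coeffs, *_ = np.linalg.lstsq(A, tau, rcond=None)
    m, d1, d2 = coeffs
    return m, d1, d2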
|
|
12:30-12:45, Paper MoBT4.4 | |
>Towards Micro Robot Hydrobatics: Vision-Based Guidance, Navigation, and Control for Agile Underwater Vehicles in Confined Environments |
> Video Attachment
|
|
Duecker, Daniel Andre | Hamburg University of Technology |
Bauschmann, Nathalie | Hamburg University of Technology |
Hansen, Tim | Technical University of Hamburg |
Kreuzer, Edwin | Hamburg University of Technology |
Seifried, Robert | Hamburg University of Technology |
Keywords: Marine Robotics, Field Robots, Robotics in Hazardous Fields
Abstract: Despite recent progress, guidance, navigation, and control (GNC) remain largely unsolved for agile micro autonomous underwater vehicles (micro AUVs). Robust and accurate self-localization systems that fit micro AUVs play a key role, and their lack is thus a severe bottleneck in micro underwater robotics research. In this work we present, first, a small-size, low-cost, high-performance vision-based self-localization module which solves this bottleneck even for the requirements of highly agile robot platforms. Second, we present its integration into a powerful GNC framework which allows the deployment of micro AUVs in fully autonomous missions. Finally, we critically evaluate the performance of the localization system and the GNC framework in two experimental scenarios.
|
|
12:45-13:00, Paper MoBT4.5 | |
>Coverage Path Planning with Track Spacing Adaptation for Autonomous Underwater Vehicles |
> Video Attachment
|
|
Yordanova, Veronika | CMRE |
Gips, Bart | Nato Sto Cmre |
Keywords: Marine Robotics, Motion and Path Planning, Robotics in Hazardous Fields
Abstract: In this paper we address the mine countermeasures (MCM) search problem for an autonomous underwater vehicle (AUV) surveying the seabed using a side-looking sonar. We propose a coverage path planning method that adapts the AUV track spacing with the objective of collecting better data. We achieve this by shifting the coverage overlap at the tail of the sensor range where the lowest data quality is expected. To assess the algorithm, we collected data from three at-sea experiments. The adaptive survey allowed the AUV to recover from a situation where the sensor range was overestimated and resulted in reducing area coverage gaps. In another experiment, the adaptive survey showed a 4.2% improvement in data quality for nearly 30% of the 'worst' data.
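A minimal Python sketch of the spacing idea, assuming a fixed tail fraction of the sonar swath is re-covered by the neighbouring track; the paper instead adapts the spacing online from the observed data quality:

def adaptive_track_spacing(sensor_range, tail_fraction=0.2):
    # sensor_range : usable one-sided swath of the side-looking sonar [m]
    # tail_fraction: fraction of the swath, at the far end, expected to give the
    #                poorest data and therefore overlapped with the next track
    # Without overlap the spacing would be 2 * sensor_range; shifting the overlap
    # to the range tail shrinks it by one tail width on each side.
    return 2.0 * sensor_range * (1.0 - tail_fraction)

spacing = adaptive_track_spacing(50.0)   # 80.0 m between parallel tracks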
|
|
13:00-13:15, Paper MoBT4.6 | |
>Dynamic Median Consensus for Marine Multi-Robot Systems Using Acoustic Communication |
|
Vasiljevic, Goran | Faculty of Electrical Engineering and Computing, Zagreb, Croatia |
Petrovic, Tamara | Univ. of Zagreb |
Arbanas, Barbara | University of Zagreb, Faculty of Electrical Engineering and Comp |
Bogdan, Stjepan | University of Zagreb |
Keywords: Marine Robotics, Multi-Robot Systems, Autonomous Agents
Abstract: In this paper, we present a dynamic median consensus protocol for multi-agent systems using acoustic communication. The motivating target scenario is a multi-agent system consisting of underwater robots acting as intelligent sensors, applied to continuous monitoring of the state of a marine environment. The proposed protocol allows each agent to track the median value of the individual measurements of all agents through local communication with neighbouring agents. The median is chosen as a measure robust to outliers, as opposed to the average value, which is usually used. In contrast to existing consensus protocols, the proposed protocol is dynamic, uses a switching communication topology and converges to the median of the measured signals. Stability and correctness of the protocol are theoretically proven. The protocol is tested in simulation, and the accuracy and the influence of protocol parameters on the system output are analyzed. The protocol is implemented and validated in a set of experiments on an underwater group of robots comprising aMussel units. This experimental setup is one of the first deployments of any type of consensus protocol in an underwater setting. Both simulation and experimental results confirm the correctness of the presented approach.
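A generic Python sketch of a signum-based median-consensus update with illustrative gains and a toy three-agent line topology; it is not the exact protocol proven in the paper, only an illustration of why sign coupling pulls the network toward the median rather than the mean:

import numpy as np

def median_consensus_step(x, measurements, neighbors, alpha=0.05, beta=0.02):
    # x           : (N,) current consensus estimates held by the N agents
    # measurements: (N,) each agent's own current sensor reading
    # neighbors   : dict mapping agent index -> list of neighbouring agent indices
    x_new = x.copy()
    for i in range(len(x)):
        coupling = sum(np.sign(x[j] - x[i]) for j in neighbors[i])
        x_new[i] = x[i] + alpha * coupling + beta * np.sign(measurements[i] - x[i])
    return x_new

x = np.zeros(3)
m = np.array([1.0, 2.0, 10.0])
nbrs = {0: [1], 1: [0, 2], 2: [1]}
for _ in range(300):
    x = median_consensus_step(x, m, nbrs)
# the estimates settle near the median of m (2.0), chattering at roughly the step size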
|
|
MoBT5 |
Room T5 |
Space Robotics: Control |
Regular session |
Chair: McBryan, Katherine | US Naval Research Laboratory |
Co-Chair: Papadopoulos, Evangelos | National Technical University of Athens |
|
11:45-12:00, Paper MoBT5.1 | |
>On Parameter Estimation of Flexible Space Manipulator Systems |
|
Christidi-Loumpasefski, Olga-Orsalia | National Technical University of Athens |
Nanos, Kostas | National Technical University of Athens |
Papadopoulos, Evangelos | National Technical University of Athens |
Keywords: Space Robotics and Automation, Flexible Robots, Calibration and Identification
Abstract: Space manipulator systems on orbit are subject to link flexibilities since they are designed to be lightweight and long reaching. Often, their joints are driven by harmonic gear-motor units, which introduce joint flexibility. Both of these types of flexibility may cause structural vibrations. To improve endpoint tracking, advanced control strategies that benefit from the knowledge of system parameters, including those describing link and joint flexibilities, are required. In this paper, first, the equations of motion of space manipulator systems whose manipulators are subject to both link and joint flexibilities are derived. Then, a parameter estimation method is developed, based on the energy balance during the motion of a flexible space manipulator. The method estimates all system parameters including those that describe both link and joint flexibilities and can reconstruct the system full dynamics required for the application of advanced control strategies. The method, developed for spatial systems, is illustrated by a planar example.
|
|
12:00-12:15, Paper MoBT5.2 | |
>Comparison between Stationary and Crawling Multi-Arm Robotics for In-Space Assembly |
|
McBryan, Katherine | US Naval Research Laboratory |
Keywords: Space Robotics and Automation, Assembly, Dual Arm Manipulation
Abstract: In-space assembly (ISA) is the next step toward building larger and more permanent structures in orbit. The use of a robotic in-space assembler will save on costly and potentially risky EVAs. Determining the best robot for ISA is difficult as it will depend on the structure being assembled. A comparison between two categories of robots is presented: a stationary robot and a robot that crawls along the truss. The estimated mass, energy, and time are presented for each system as it, in simulation, builds a desired truss system. There are trade-offs to every robot design, and understanding those trade-offs is essential to building a system that is not only efficient but also cost-effective.
|
|
12:15-12:30, Paper MoBT5.3 | |
>Interactive Planning and Supervised Execution for High-Risk, High-Latency Teleoperation |
> Video Attachment
|
|
Pryor, Will | Johns Hopkins University |
Vagvolgyi, Balazs | Johns Hopkins University |
Deguet, Anton | Johns Hopkins University |
Leonard, Simon | The Johns Hopkins University |
Whitcomb, Louis | The Johns Hopkins University |
Kazanzides, Peter | Johns Hopkins University |
Keywords: Telerobotics and Teleoperation, Virtual Reality and Interfaces, Space Robotics and Automation
Abstract: Ground-based teleoperation of robot manipulators for on-orbit servicing of spacecraft represents an example of high-payoff, high-risk operations that are challenging to perform due to high latency communications, with telemetry time delays of several seconds. In these scenarios, confidence of operating without failure is paramount. We report the development of an Interactive Planning and Supervised Execution (IPSE) system that takes advantage of accurate 3D reconstruction of the remote environment to enable operators to plan motions in the virtual world, evaluate and adjust the plan, and then supervise execution with the ability to pause and return to the planning environment at any time. We report the results of an experimental evaluation of a representative on-orbit telerobotic servicing task from NASA's upcoming OSAM-1 mission to refuel a satellite in low earth orbit; specifically, to change the robot tool to acquire the fuel supply line and then to insert it into the satellite fill/drain valve. Results of a pilot study show that the operators preferred, and were more successful with, the IPSE system when compared to a conventional teleoperation implementation.
|
|
12:30-12:45, Paper MoBT5.4 | |
>Parameter Identification for an Uncooperative Captured Satellite with Spinning Reaction Wheels |
|
Christidi-Loumpasefski, Olga-Orsalia | National Technical University of Athens |
Papadopoulos, Evangelos | National Technical University of Athens |
Keywords: Space Robotics and Automation, Calibration and Identification
Abstract: A novel identification method is developed which identifies the accumulated angular momentum (AAM) of spinning reaction wheels (RWs) of an uncooperative satellite captured by a robotic servicer. In contrast to other methods that treat captured satellite’s RWs as non-spinning, the developed method provides simultaneously accurate estimates of the AAM of the captured satellite’s RWs and of the inertial parameters of the entire system consisting of the robotic servicer and of the captured satellite. These estimates render the system free-floating dynamics fully identified and available to model-based control. Three-dimensional simulations demonstrate the method’s validity. To show its usefulness, the performance of a model-based controller is evaluated with and without knowledge of the captured satellite’s RWs AAM.
|
|
12:45-13:00, Paper MoBT5.5 | |
>Tumbling and Hopping Locomotion Control for a Minor Body Exploration Robot |
> Video Attachment
|
|
Kobashi, Keita | Tohoku University |
Bando, Ayumu | Tohoku University |
Nagaoka, Kenji | Tohoku University |
Yoshida, Kazuya | Tohoku University |
Keywords: Space Robotics and Automation, Contact Modeling, Simulation and Animation
Abstract: This paper presents the modeling and analysis of a novel moving mechanism, "tumbling", for asteroid exploration. The system actuation is provided by an internal motor and torque wheel; elastic spring-mounted spikes are attached to the perimeter of a circular-shaped robot, protruding normal to the surface and distributed uniformly. Compared with the conventional motion mechanisms, this simple layout enhances the capability of the robot to traverse a diverse microgravity environment. Technical challenges involved in conventional moving mechanisms, such as uncertainty of moving direction and inability to traverse uneven asteroid surfaces, can now be solved. A tumbling locomotion approach demonstrates two beneficial characteristics in this environment. First, tumbling locomotion maintains contact between the rover spikes and the ground. This enables the robot to continually apply control adjustments to realize precise and controlled motion. Second, owing to the nature of the mechanical interaction of the spikes and potential uneven surface protrusions, the robot can traverse uneven surfaces. In this paper, we present the dynamics modeling of the robot and analyze the motion of the robot experimentally and via numerical simulations. The results of this study help establish a moving strategy to approach the desired locations on asteroid surfaces.
|
|
13:00-13:15, Paper MoBT5.6 | |
>Inertia-Decoupled Equations for Hardware-In-The-Loop Simulation of an Orbital Robot with External Forces |
> Video Attachment
|
|
Mishra, Hrishik | German Aerospace Center (DLR) |
Giordano, Alessandro Massimo | DLR (German Aerospace Center) |
De Stefano, Marco | German Aerospace Center (DLR) |
Lampariello, Roberto | German Aerospace Center (DLR) |
Ott, Christian | German Aerospace Center (DLR) |
Keywords: Space Robotics and Automation, Simulation and Animation, Compliance and Impedance Control
Abstract: In this paper, we propose three novel Hardware-in-the-loop simulation (HLS) methods for a fully-actuated orbital robot in the presence of external interactions using On-Ground Facility Manipulators (OGFM). In particular, a fixed-base and a vehicle-driven manipulator are considered in the analyses. The key idea is to describe the orbital robot's dynamics using the Lagrange-Poincare (LP) equations, which reveal a block-diagonalized inertia. The resulting advantage is that noisy joint acceleration/torque measurements are avoided in the computation of the spacecraft motion due to manipulator interaction even while considering external forces. The proposed methods are a consequence of two facilitating theorems, which are proved herein. These theorems result in two actuation maps between the simulated orbital robot and the physical OGFM. The chief advantage of the proposed methods is physical consistency without level-set assumptions on the momentum map. We validate this through experiments on both types of OGFM in the presence of external forces. Finally, the effectiveness of our approach is validated through a HLS of a fully-actuated orbital robot while interacting with the environment.
|
|
MoBT6 |
Room T6 |
Space Robotics: Perception |
Regular session |
Chair: Triebel, Rudolph | German Aerospace Center (DLR) |
Co-Chair: Leonard, Simon | The Johns Hopkins University |
|
11:45-12:00, Paper MoBT6.1 | |
>A Target Tracking and Positioning Framework for Video Satellites Based on SLAM |
|
Zhao, Xuhui | Wuhan University |
Gao, Zhi | Temasek Laboratories @ NUS |
Zhang, Yongjun | Wuhan University |
Chen, Ben M. | Chinese University of Hong Kong |
Keywords: Space Robotics and Automation, SLAM, Visual Tracking
Abstract: With the booming development of aerospace technology, the video satellite has gradually emerged as a new Earth observation method, which observes live phenomena on the ground by video shooting and opens a “dynamic” era of remote sensing. Thus, some new techniques are needed, especially near-real-time tracking and positioning algorithms for ground moving targets. However, many existing studies only extract pixel-level trajectories from the post-processed video product, resulting in fairly limited applications. We regard the video satellite as a robot flying in space and adopt the SLAM framework for the positioning of ground moving targets. We design our framework based on the representative ORB-SLAM and make improvements mainly in feature extraction, satellite pose estimation, moving target tracking, and positioning. We install GPS-RTK (Real-time Kinematic) devices on a fishing boat to measure its ground truth and use the Zhuhai-1 video satellite to observe it simultaneously. We conduct experiments on this video and demonstrate that our framework can provide the geolocation of the moving target in satellite videos.
|
|
12:00-12:15, Paper MoBT6.2 | |
>Gaussian Process Gradient Maps for Loop-Closure Detection in Unstructured Planetary Environments |
|
Le Gentil, Cedric | University of Technology Sydney |
Vayugundla, Mallikarjuna | DLR (German Aerospace Center) |
Giubilato, Riccardo | German Aerospace Center (DLR) |
Stuerzl, Wolfgang | DLR, Institute of Robotics and Mechantronics |
Vidal-Calleja, Teresa A. | University of Technology Sydney |
Triebel, Rudolph | German Aerospace Center (DLR) |
Keywords: Space Robotics and Automation, Mapping, SLAM
Abstract: The ability to recognize previously mapped locations is an essential feature for autonomous systems. Unstructured planetary-like environments pose a major challenge to these systems due to the similarity of the terrain. As a result, the ambiguity of the visual appearance makes state-of-the-art visual place recognition approaches less effective than in urban or man-made environments. This paper presents a method to solve the loop closure problem using only spatial information. The key idea is to use a novel continuous and probabilistic representation of terrain elevation maps. Given 3D point clouds of the environment, the proposed approach exploits Gaussian Process (GP) regression with linear operators to generate continuous gradient maps of the terrain elevation information. Traditional image registration techniques are then used to search for potential matches. Loop closures are verified by leveraging both the spatial characteristic of the elevation maps (SE(2) registration) and the probabilistic nature of the GP representation. A submap-based localization and mapping framework is used to demonstrate the validity of the proposed approach. The performance of this pipeline is evaluated and benchmarked using real data from a rover that is equipped with a stereo camera and navigates in challenging, unstructured planetary-like environments in Morocco and on Mt. Etna.
|
|
12:15-12:30, Paper MoBT6.3 | |
>Visual Monitoring and Servoing of a Cutting Blade During Telerobotic Satellite Servicing |
> Video Attachment
|
|
Mahmood, Amama | Johns Hopkins University |
Vagvolgyi, Balazs | Johns Hopkins University |
Pryor, Will | Johns Hopkins University |
Whitcomb, Louis | The Johns Hopkins University |
Kazanzides, Peter | Johns Hopkins University |
Leonard, Simon | The Johns Hopkins University |
Keywords: Space Robotics and Automation, Force Control, Visual Servoing
Abstract: We propose a system for visually monitoring and servoing the cutting of a multi-layer insulation (MLI) blanket that covers the envelope of satellites and spacecraft. The main contributions of this paper are: 1) to propose a model for relating visual features describing the engagement depth of the blade to the force exerted on the MLI blanket by the cutting tool, 2) a blade design and algorithm to reliably detect the engagement depth of the blade inside the MLI, and 3) a servoing mechanism to achieve the desired applied force by monitoring the engagement depth. We present results that validate these contributions by comparing forces estimated from visual feedback to measured forces at the blade. We also demonstrate the robustness of the blade design and vision processing under challenging conditions.
|
|
12:30-12:45, Paper MoBT6.4 | |
>Terrain-Aware Path Planning and Map Update for Mars Sample Return Mission |
> Video Attachment
|
|
Hedrick, Gabrielle | West Virginia University |
Ohi, Nicholas | West Virginia University |
Gu, Yu | West Virginia University |
Keywords: Space Robotics and Automation, Robotics in Hazardous Fields, Autonomous Vehicle Navigation
Abstract: This work aims at developing an efficient path planning algorithm for the driving objective of a Martian day (sol) that can take into account terrain information for application to the proposed Mars Sample Return (MSR) mission. To prepare the planning process for one sol (i.e., with a limited time allocated to driving), a map of expected rover velocity over a chosen area is constructed, obtained by combining terrain classes, rock abundance and slope at that location. The planning phase starts offline by computing several paths that can be traversed in one sol (i.e., a few hours), which will later provide suitable options to the rover if replanning is necessary due to unexpected mobility difficulties. Online, the rover gains information about its environment as it drives (via slip monitoring and/or instrument deployment) and updates the map if major discrepancies are found. If an update is made, the remaining driving time along the different options is recalculated and the most efficient path is chosen. The online process is repeated until the rover has reached its daily goal. When simulated on different maps of expected rover speed at Gusev Crater, Mars, the algorithm correctly captured changes of terrain initially not mapped, and rerouted the rover to a more efficient path only when necessary, in which case it effectively complied with the time constraint to reach the goal.
|
|
12:45-13:00, Paper MoBT6.5 | |
>Virtual IR Sensing for Planetary Rovers: Improved Terrain Classification and Thermal Inertia Estimation |
|
Iwashita, Yumi | NASA / Caltech Jet Propulsion Laboratory |
Nakashima, Kazuto | Kyushu University |
Gatto, Joseph | Columbia University |
Higa, Shoya | Jet Propulsion Laboratory |
Stoica, Adrian | NASA/JPL |
Khoo, Norris | NASA Jet Propulsion Laboratory |
Kurazume, Ryo | Kyushu University |
Keywords: Space Robotics and Automation, Multi-Modal Perception
Abstract: Terrain classification is critically important for Mars rovers, which rely on it for planning and autonomous navigation. On-board terrain classification using visual information has limitations, and is sensitive to illumination conditions. Classification can be improved if one fuses visual imagery with additional infrared (IR) imagery of the scene, yet unfortunately there are no IR image sensors on the current Mars rovers. A virtual IR sensor, estimating IR from RGB imagery using deep learning, was proposed in the context of a MU-Net architecture. However, virtual IR estimation was limited by the fact that slope angle variations induce temperature differences within the same terrain. This paper removes this limitation, giving good IR estimates and, as a consequence, improving terrain classification by including the additional angle from the surface normal to the Sun and the measurement of solar radiation. The estimates are also useful when estimating thermal inertia, which can enhance slip prediction and small rock density estimation. Our approach is demonstrated in two applications. We collected a new data set to verify the effectiveness of the proposed approach and show its benefit by applying it to the two applications.
|
|
MoBT7 |
Room T7 |
Space Robotics: Systems |
Regular session |
Chair: Komendera, Erik | Virginia Polytechnic Institute and State University |
Co-Chair: Kubota, Takashi | JAXA ISAS |
|
11:45-12:00, Paper MoBT7.1 | |
>Subsurface Sampling Robot for Time-Limited Asteroid Exploration |
> Video Attachment
|
|
Kato, Hiroki | Japan Aerospace Exploration Agency |
Satou, Yasutaka | JAXA |
Yoshikawa, Kent | JAXA |
Otsuki, Masatsugu | Japan Aerospace Exploration Agency |
Sawada, Hirotaka | JAXA |
Kuratoi, Takeshi | WEL Research |
Hidaka, Nana | WEL Research |
Keywords: Space Robotics and Automation, Field Robots
Abstract: This paper presents a novel approach to sampling subsurface asteroidal regolith under severe time constraints. Sampling operations that must be completed within a few hours require techniques that can manage subsurface obstructions that may be encountered. The large uncertainties due to our lack of knowledge of regolith properties also make sampling difficult. To aid in managing these challenges, machine learning-based detection methods using tactile feedback can detect the presence of rocks deeper than the length of the probe, ensuring reliable sampling in unobstructed areas. In addition, given the variability of soil hardness and the short time available, a corer shooting mechanism has been developed that uses a special shape-memory alloy to collect regolith in about a minute. Experiments on subsurface obstacle detection and shooting-corer ejection tests were conducted to demonstrate the functionality of this approach.
|
|
12:00-12:15, Paper MoBT7.2 | |
>Robots Made from Ice: An Analysis of Manufacturing Techniques |
> Video Attachment
|
|
Carroll, Devin | University of Pennsylvania |
Yim, Mark | University of Pennsylvania |
Keywords: Space Robotics and Automation, Product Design, Development and Prototyping, Wheeled Robots
Abstract: Modular robotic systems with self-repair or self-replication capabilities have been presented as a robust, low-cost solution for extraterrestrial or Arctic exploration. This paper explores using ice as the sole structural element from which to build robots. Ice allows for increased flexibility in system design, enabling the robotic structure to be designed and built post-deployment, after tasks and terrain obstacles have been better identified and analyzed. However, ice presents many manufacturing difficulties. The authors explore a structure-driven approach to examine compatible manufacturing processes, with an emphasis on conserving process energy. The energy analysis shows that the optimal manufacturing technique depends on the volume of the final part relative to the volume of material that must be removed. Based on the experiments, three general design principles are presented. A mobile robotic platform made from ice is presented as a proof of concept and first demonstration.
|
|
12:15-12:30, Paper MoBT7.3 | |
>Autonomous Navigation Over Europa Analogue Terrain for an Actively Articulated Wheel-On-Limb Rover |
> Video Attachment
|
|
Reid, William | Jet Propulsion Laboratory |
Paton, Michael | Jet Propulsion Laboratory |
Karumanchi, Sisir | Jet Propulsion Lab, Caltech |
Emanuel, Blair | Jet Propulsion Laboratory |
Chamberlain-Simon, Brendan | Jet Propulsion Laboratory |
Meirion-Griffith, Gareth | Jet Propulsion Laboratory |
Keywords: Space Robotics and Automation, Field Robots, Whole-Body Motion Planning and Control
Abstract: The ocean world Europa is a prime target for exploration given its potential habitability. We propose a mobile platform that is capable of autonomously traversing tens of meters to visit multiple sites of interest on a Europan analogue surface. Because the topology of Europan terrain is largely unknown, it is desirable that the mobility system traverse a large variety of terrain types. The mobility system should also be capable of crossing unstructured terrain autonomously, given the communication limitations between Earth and Europa. A wheel-on-limb robotic rover is presented that can actively conform to terrain features up to 1.5 wheel diameters tall while driving. The robot uses a sampling-based motion planner to generate paths that leverage its unique locomotive capabilities. The planner treats terrain hazards and wheel workspace limits as obstacles. It may also select a mobility mode based on predicted energy usage and the need for limb articulation on the terrain being traversed. This autonomous mobility was evaluated on chaotic salt-evaporite terrain in Death Valley, CA, an analogue to the Europan surface. Over the course of 38 trials, the rover autonomously traversed 435 m of extreme terrain while maintaining a rate of 0.64 traverse-ending failures per 10 m driven.
|
|
12:30-12:45, Paper MoBT7.4 | |
>Autonomous Multi-Robot Assembly of Solar Array Modules: Experimental Analysis and Insights |
|
Everson, Holly | Virginia Polytechnic Institute and State University |
Moser, Joshua | Virginia Polytechnic Institute and State University |
Quartaro, Amy | Virginia Polytechnic Institute and State University |
Glassner, Samantha | Virginia Tech |
Komendera, Erik | Virginia Polytechnic Institute and State University |
Keywords: Space Robotics and Automation, Cooperating Robots, Robotics in Construction
Abstract: To allow for the construction of large space structures to support future space endeavors, autonomous robotic solutions would serve to reduce the cost and risk of human extravehicular activity (EVA). Practical autonomous assembly requires theoretical and algorithmic advances as well as hardware experimentation across a spectrum of technology readiness levels. Analysis of hardware experiments provides novel insights not readily apparent in simulations alone, which serves to inform future developments. This paper describes the analysis and insights gained from an autonomous assembly experiment consisting of a dexterous manipulator, a gross-positioning serial arm, and a 1 degree of freedom (DOF) turntable used to facilitate the assembly and deployment of a solar array mockup. This experiment combined state estimation in an uncertain environment with contact-heavy robot operations such as grasping, self-reconfiguring, joining, and deploying. The insights gained are presented here due to their applicability to other field-based manipulation tasks performed by teams of robots.
|
|
12:45-13:00, Paper MoBT7.5 | |
>The ARCHES Space-Analogue Demonstration Mission: Towards Heterogeneous Teams of Autonomous Robots for Collaborative Scientific Sampling in Planetary Exploration |
> Video Attachment
|
|
Schuster, Martin J. | German Aerospace Center (DLR) |
Müller, Marcus Gerhard | German Aerospace Center |
Brunner, Sebastian Georg | DLR German Aerospace Center, Robotics and Mechatronics Center |
Lehner, Hannah | German Aerospace Center (DLR) |
Lehner, Peter | German Aerospace Center (DLR) |
Sakagami, Ryo | German Aerospace Center (DLR) |
Dömel, Andreas | German Aerospace Center (DLR) |
Meyer, Lukas | German Aerospace Center (DLR) |
Vodermayer, Bernhard | German Aerospace Center (DLR) |
Giubilato, Riccardo | German Aerospace Center (DLR) |
Vayugundla, Mallikarjuna | DLR (German Aerospace Center) |
Reill, Joseph | German Aerospace Center (DLR) |
Steidle, Florian | German Aerospace Center |
von Bargen, Ingo | German Aerospace Center (DLR) |
Bussmann, Kristin | German Aerospace Center (DLR) |
Belder, Rico | German Aerospace Center |
Lutz, Philipp | German Aerospace Center (DLR) |
Stuerzl, Wolfgang | DLR, Institute of Robotics and Mechatronics |
Smisek, Michal | German Aerospace Center (DLR) |
Maier, Moritz | German Aerospace Center (DLR) |
Stoneman, Samantha | DLR (German Space Center) |
Fonseca Prince, Andre | German Aerospace Center (DLR) |
Rebele, Bernhard | German Aerospace Center (DLR) |
Durner, Maximilian | German Aerospace Center DLR |
Staudinger, Emanuel | DLR |
Zhang, Siwei | German Aerospace Center (DLR) |
Pöhlmann, Robert | German Aerospace Center (DLR) |
Bischoff, Esther | Karlsruhe Institute of Technology (KIT) |
Braun, Christian | Karlsruhe Institute of Technology (KIT) |
Schröder, Susanne | German Aerospace Center (DLR) |
Dietz, Enrico | German Aerospace Center (DLR) |
Frohmann, Sven | German Aerospace Center (DLR) |
Börner, Anko | DLR |
Hübers, Heinz-Wilhelm | German Aerospace Center (DLR) |
Foing, Bernard | European Space Agency (ESA) |
Triebel, Rudolph | German Aerospace Center (DLR) |
Albu-Schäffer, Alin | DLR - German Aerospace Center |
Wedler, Armin | DLR - German Aerospace Center |
Keywords: Space Robotics and Automation, Multi-Robot Systems, Autonomous Agents
Abstract: Teams of mobile robots will play a crucial role in future missions to explore the surfaces of extraterrestrial bodies. Setting up infrastructure and taking scientific samples are expensive tasks when operating in distant, challenging, and unknown environments. In contrast to current single-robot space missions, future heterogeneous robotic teams will increase efficiency via enhanced autonomy and parallelization, improve robustness via functional redundancy, as well as benefit from complementary capabilities of the individual robots. In this article, we present our heterogeneous robotic team, consisting of flying and driving robots that we plan to deploy on scientific sampling demonstration missions at a Moon-analogue site on Mt. Etna, Sicily, Italy in 2021 as part of the ARCHES project. We describe the robots' individual capabilities and their roles in two mission scenarios. We then present components and experiments on important tasks therein: automated task planning, high-level mission control, spectral rock analysis, radio-based localization, collaborative multi-robot 6D SLAM in Moon-analogue and Mars-like scenarios, and demonstrations of autonomous sample return.
|
|
13:00-13:15, Paper MoBT7.6 | |
>A Routing Framework for Heterogeneous Multi-Robot Teams in Exploration Tasks |
> Video Attachment
|
|
Sakamoto, Takuma | The University of Tokyo |
Bonardi, Stephane | Institute of Space and Astronautical Science (ISAS), Japan Aeros |
Kubota, Takashi | JAXA ISAS |
Keywords: Space Robotics and Automation, Path Planning for Multiple Mobile Robots or Agents, Motion and Path Planning
Abstract: This paper proposes a routing framework for heterogeneous multi-robot teams in exploration tasks. The proposed framework deals with a combinatorial optimization problem and provides a new solving algorithm for the Generalized Team Orienteering Problem (GTOP). In this paper, a route optimization problem is formulated for a heterogeneous multi-robot system, and a novel problem solver based on a self-organizing map is proposed. The proposed framework has a strong advantage in scalability because its processing time is independent of the number of robots and the heterogeneity of the team. The validity of the proposed framework is evaluated on exploration and mapping tasks performed by a heterogeneous robot team with overlapping abilities. The simulation results show the effectiveness of the proposed framework and how it outperforms a conventional greedy exploration scheme.
|
|
MoBT8 |
Room T8 |
AI and Learning for Autonomous Driving Applications |
Regular session |
Chair: Pillai, Sudeep | Toyota Research Institute |
|
11:45-12:00, Paper MoBT8.1 | |
>Accurate, Low-Latency Visual Perception for Autonomous Racing: Challenges, Mechanisms, and Practical Solutions |
|
Strobel, Kieran | MIT |
Zhu, Sibo | Brandeis University |
Chang, Raphael | Massachusetts Institute of Technology |
Koppula, Skanda | Google DeepMind |
Keywords: Deep Learning for Visual Perception, Computer Vision for Automation, Autonomous Vehicle Navigation
Abstract: Autonomous racing provides the opportunity to test safety-critical perception pipelines at their limit. This paper describes the practical challenges and solutions involved in applying state-of-the-art computer vision algorithms to build a low-latency, high-accuracy perception system for DUT18 Driverless (DUT18D), a 4WD electric race car with podium finishes at all Formula Driverless competitions for which it raced. The key components of DUT18D include YOLOv3-based object detection, pose estimation, and time synchronization on its dual stereovision/monovision camera setup. We highlight the modifications required to adapt perception CNNs to racing domains, improvements to the loss functions used for pose estimation, and methodologies for sub-microsecond camera synchronization, among other improvements. We perform a thorough experimental evaluation of the system, demonstrating its accuracy and low latency in real-world racing scenarios.
|
|
12:00-12:15, Paper MoBT8.2 | |
>Spatio-Temporal Ultrasonic Dataset: Learning Driving from Spatial and Temporal Ultrasonic Cues |
|
Wang, Shuai | University of Science and Technology of China |
Qin, Jiahu | University of Science and Technology of China |
Zhang, Zhanpeng | University of Science and Technology of China |
Keywords: Autonomous Vehicle Navigation, Big Data in Robotics and Automation, Model Learning for Control
Abstract: Recent works have shown that combining spatial and temporal visual cues can significantly improve the performance of various vision-based robotic systems. However, for the ultrasonic sensors used in many robotic tasks (e.g., collision avoidance, localization, and navigation), there is a lack of benchmark datasets consisting of spatial and temporal data with which to verify the usefulness of spatial and temporal ultrasonic cues. In this paper, we propose the first Spatio-Temporal Ultrasonic Dataset (STUD), which aims to extend the capability of ultrasonic sensors by mining spatial and temporal information from multiple ultrasonic measurements. In particular, we first propose a novel spatio-temporal (ST) ultrasonic data gathering scheme, in which a new type of data instance is designed. In addition, part of the data in the STUD is collected in a robot simulator, where a well-designed corridor map is used to increase data diversity. A selection algorithm is then proposed to find the data sequence length that best describes the navigation environments. Finally, we present an end-to-end learning benchmark model that learns driving policies by extracting spatial and temporal ultrasonic cues from the STUD. With the STUD and this benchmark model, more powerful deep neural networks can be trained for indoor navigation or motion planning of mobile robots, which is not achievable with existing ultrasonic datasets. Comparison experiments verify the effectiveness of spatial and temporal ultrasonic cues for driving policy learning.
|
|
12:15-12:30, Paper MoBT8.3 | |
>A POMDP Treatment of Vehicle-Pedestrian Interaction: Implicit Coordination Via Uncertainty-Aware Planning |
|
Hsu, Ya-Chuan | Texas A&M University |
Gopalswamy, Swaminathan | Texas A&M University |
Saripalli, Srikanth | Texas A&M |
Shell, Dylan | Texas A&M University |
Keywords: AI-Based Methods, Autonomous Vehicle Navigation, Social Human-Robot Interaction
Abstract: Drivers and other road users often encounter situations (e.g., arriving at an intersection simultaneously) where priority is ambiguous or unclear but must be resolved via communication to reach agreement. This poses a challenge for autonomous vehicles, for which no direct means of expressing intent and acknowledgment has yet been established. This paper contributes a minimal model to manage ambiguity and produce actions that are expressive and encode aspects of intent. Specifically, intent is treated as a latent variable, communicated implicitly through a partially observable Markov decision process (POMDP). We validate the model in a simple setting: a simulation of a prototypical crossing between a vehicle and one pedestrian at an unsignalized intersection. We further report the use of our self-driving Ford Lincoln MKZ platform, on which we conducted experimental trials of the method involving real-time interaction. The experiments show that the method achieves safe and efficient navigation.
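A minimal illustration of the latent-intent idea: pedestrian intent is a hidden variable whose belief is updated from observed motion cues via Bayes' rule. The intent labels, observations, and likelihood values below are invented for illustration; the paper's POMDP additionally couples this belief to the vehicle's action selection.

```python
# Toy belief update over a latent pedestrian intent ("cross" vs. "yield").
# The observation likelihoods are made-up numbers for illustration only.
def update_belief(belief, observation, likelihood):
    """belief: dict intent -> prob; likelihood[intent][observation] -> prob."""
    posterior = {i: belief[i] * likelihood[i][observation] for i in belief}
    z = sum(posterior.values())
    return {i: p / z for i, p in posterior.items()}

likelihood = {
    "cross": {"steps_forward": 0.8, "stops": 0.2},
    "yield": {"steps_forward": 0.3, "stops": 0.7},
}
belief = {"cross": 0.5, "yield": 0.5}
for obs in ["steps_forward", "steps_forward", "stops"]:
    belief = update_belief(belief, obs, likelihood)
print(belief)  # the belief shifts with each observed pedestrian motion cue
```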
|
|
12:30-12:45, Paper MoBT8.4 | |
>Multiple Trajectory Prediction with Deep Temporal and Spatial Convolutional Neural Networks |
|
Strohbeck, Jan | Ulm University |
Belagiannis, Vasileios | Universität Ulm |
Müller, Johannes | Ulm University |
Schreiber, Marcel | Ulm University |
Herrmann, Martin | Ulm University |
Wolf, Daniel | Ulm University |
Buchholz, Michael | University of Ulm |
Keywords: Autonomous Vehicle Navigation, Novel Deep Learning Methods, AI-Based Methods
Abstract: Automated vehicles need to not only perceive their environment but also predict the possible future behavior of all detected traffic participants in order to safely navigate complex scenarios and avoid critical situations, ranging from merging on highways to crossing urban intersections. Thanks to the availability of datasets with large numbers of recorded trajectories of traffic participants, deep-learning-based approaches can be used to model the behavior of road users. This paper proposes a convolutional network that operates on rasterized actor-centric images which encode the static and dynamic environment around each actor. We predict multiple possible future trajectories for each traffic actor, including position, velocity, acceleration, orientation, yaw rate, and position uncertainty estimates. To make better use of the actor's past movement, we employ temporal convolutional networks (TCNs) and rely on uncertainties estimated by the preceding object tracking stage. We evaluate our approach on the public "Argoverse Motion Forecasting" dataset, on which it won first prize at the Argoverse Motion Forecasting Challenge, as presented at the NeurIPS 2019 workshop on "Machine Learning for Autonomous Driving".
|
|
12:45-13:00, Paper MoBT8.5 | |
>End-To-End Autonomous Driving Perception with Sequential Latent Representation Learning |
|
Chen, Jianyu | UC Berkeley |
Xu, Zhuo | UC Berkeley |
Tomizuka, Masayoshi | University of California |
Keywords: Representation Learning, Deep Learning for Visual Perception, Semantic Scene Understanding
Abstract: Current autonomous driving systems are composed of a perception system and a decision system. Both of them are divided into multiple subsystems built up with lots of human heuristics. An end-to-end approach might clean up the system and avoid huge efforts of human engineering, as well as obtain better performance with increasing data and computation resources. Compared to the decision system, the perception system is more suitable to be designed in an end-to-end framework, since it does not require online driving exploration. In this paper, we propose a novel end-to-end approach for autonomous driving perception. A latent space is introduced to capture all relevant features useful for perception, which is learned through sequential latent representation learning. The learned end-to-end perception model is able to solve the detection, tracking, localization and mapping problems altogether with only minimum human engineering efforts and without storing any maps online. The proposed method is evaluated in a realistic urban driving simulator, with both camera image and lidar point cloud as sensor inputs.
|
|
13:00-13:15, Paper MoBT8.6 | |
>PillarFlow: End-To-End Birds-Eye-View Flow Estimation for Autonomous Driving |
> Video Attachment
|
|
Lee, Kuan-Hui | Toyota Research Institute |
Kliemann, Matthew | Toyota Research Institute |
Gaidon, Adrien | Toyota Research Institute |
Li, Jie | University of Michigan |
Fang, Chao | Toyota Research Institute |
Pillai, Sudeep | Toyota Research Institute |
Burgard, Wolfram | Toyota Research Institute |
Keywords: Deep Learning for Visual Perception, Computer Vision for Automation, Visual Learning
Abstract: In autonomous driving, accurately estimating the state of surrounding obstacles is critical for safe and robust path planning. However, this perception task is difficult, particularly for generic obstacles/objects, due to appearance and occlusion changes. To tackle this problem, we propose an end-to-end deep learning framework for LIDAR-based flow estimation in bird's eye view (BeV). Our method takes consecutive point cloud pairs as input and produces a 2-D BeV "flow" grid describing the dynamic state of each cell. The experimental results show that the proposed method not only estimates 2-D BeV flow accurately but also improves tracking performance of both dynamic and static objects.
|
|
MoBT9 |
Room T9 |
Autonomous Vehicles: Behavior |
Regular session |
Chair: Borges, Paulo Vinicius Koerich | CSIRO |
|
11:45-12:00, Paper MoBT9.1 | |
>Real-Time Detection of Distracted Driving Using Dual Cameras |
|
Tran, Duy | Oklahoma State University |
Do, Ha Manh | Oklahoma State University |
Lu, Jiaxing | Oklahoma State University |
Sheng, Weihua | Oklahoma State University |
Keywords: Intelligent Transportation Systems, Robot Safety
Abstract: Distracted driving is one of the main contributors to traffic accidents. This paper proposes a deep learning approach to detecting multiple distracted driving behaviors. To obtain more accurate detection results, a synchronized image recognition system based on two cameras is designed, which monitors the driver's body movements and face, respectively. The images captured from the driver's body and face areas are fed to two Convolutional Neural Networks (CNNs) simultaneously to ensure classification performance. The data collection and validation for the proposed distraction detection approach were conducted on a laboratory-based assisted-driving testbed to provide a near-realistic driving experience. Our dataset includes images of drivers engaged in distracted and safe driving. Furthermore, we developed a practical voice-alert application that prompts the distracted driver to focus on the driving task. We evaluated the VGG-16, ResNet, and MobileNet-v2 networks for the proposed approach. Experimental results show that by using two cameras and VGG-16 networks, we achieve a recognition accuracy of 96.7% at a computation speed of 8 fps.
|
|
12:00-12:15, Paper MoBT9.2 | |
>Expressing Diverse Human Driving Behavior with ProbabilisticRewards and Online Inference |
|
Sun, Liting | University of California, Berkeley |
Wu, Zheng | University of California, Berkeley |
Ma, Hengbo | University of California, Berkeley |
Tomizuka, Masayoshi | University of California |
Keywords: Intelligent Transportation Systems, Learning from Demonstration
Abstract: In human-robot interaction (HRI) systems such as autonomous vehicles, understanding and representing human behavior are important. Human behavior is naturally rich and diverse. Cost/reward learning, as an efficient way to learn and represent human behavior, has been successfully applied in many domains. Most traditional inverse reinforcement learning (IRL) algorithms, however, cannot adequately capture the diversity of human behavior, since they assume that all behavior in a given dataset is generated by a single cost function. In this paper, we propose a probabilistic IRL framework that directly learns a distribution over cost functions in the continuous domain. Evaluations on both synthetic data and real human driving data are conducted. Both the quantitative and subjective results show that our proposed framework can better express diverse human driving behaviors, as well as extract different driving styles that match what human participants identified in our user study.
|
|
12:15-12:30, Paper MoBT9.3 | |
>Identification of Effective Motion Primitives for Ground Vehicles |
|
Löw, Tobias | ETH Zürich |
Bandyopadhyay, Tirthankar | CSIRO |
Borges, Paulo Vinicius Koerich | CSIRO |
Keywords: Autonomous Vehicle Navigation, Field Robots, Motion and Path Planning
Abstract: Understanding the kinematics of a ground robot is essential for efficient navigation. Based on the kinematic model of a robot, its full motion capabilities can be represented by theoretical motion primitives. However, depending on the environment and/or human preferences, not all of those theoretical motion primitives are desirable and/or achievable. This work presents a method to identify effective motion primitives (eMP) from continuous trajectories for autonomous ground robots. The pipeline efficiently performs segmentation, representation, and reconstruction of the motion primitives, using initial human driving behaviour as a guide to create a motion primitive library. Hence, this strategy incorporates how the environment affects robot operation in terms of acceleration, speed, braking, and steering behaviour. The method is thoroughly tested on an autonomous car-like electric vehicle, and the results show excellent generalisation of the theoretical motion primitive distribution to the real vehicle. The experiments are carried out on a large site with very diverse characteristics, illustrating the applicability of the method.
|
|
12:30-12:45, Paper MoBT9.4 | |
>CMetric: A Driving Behavior Measure Using Centrality Functions |
> Video Attachment
|
|
Chandra, Rohan | University of Maryland |
Bhattacharya, Uttaran | UMD College Park |
Mittal, Trisha | University of Maryland, College Park |
Bera, Aniket | University of Maryland |
Manocha, Dinesh | University of Maryland |
Keywords: Intelligent Transportation Systems
Abstract: We present a new measure, CMetric, to classify driver behaviors using centrality functions. Our formulation combines concepts from computational graph theory and social traffic psychology to quantify and classify the behavior of human drivers. CMetric is used to compute the probability of a vehicle executing a driving style, as well as the intensity with which the style is executed. Our approach is designed for real-time autonomous driving applications, where the trajectory of each vehicle or road-agent is extracted from a video. We compute a dynamic geometric graph (DGG) based on the positions and proximity of the road-agents, together with centrality functions corresponding to closeness and degree. These functions are used to compute the CMetric via style likelihood and style intensity estimates. Our approach is general and makes no assumption about traffic density, heterogeneity, or how driving behaviors change over time. We present an algorithm to compute CMetric and demonstrate its performance on real-world traffic datasets. To test the accuracy of CMetric, we introduce a new evaluation protocol (called "Time Deviation Error") that measures the difference between human prediction and the prediction made by CMetric.
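As a rough sketch of the graph-theoretic ingredients, the snippet below builds a proximity graph (a dynamic geometric graph for a single time step) from road-agent positions and computes the degree and closeness centralities on which CMetric-style measures are built. The positions, the 10 m proximity threshold, and the use of networkx are assumptions for illustration only.

```python
# Sketch: proximity graph over road-agent positions plus the degree and
# closeness centralities used by centrality-based driving-style measures.
import networkx as nx
import numpy as np

positions = {0: (0.0, 0.0), 1: (4.0, 1.0), 2: (8.0, -2.0), 3: (30.0, 5.0)}

G = nx.Graph()
G.add_nodes_from(positions)
for i in positions:
    for j in positions:
        if i < j:
            d = np.linalg.norm(np.subtract(positions[i], positions[j]))
            if d < 10.0:                      # proximity threshold (assumed)
                G.add_edge(i, j, weight=d)

degree = nx.degree_centrality(G)              # how many nearby agents
closeness = nx.closeness_centrality(G, distance="weight")
print(degree, closeness)
```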
|
|
12:45-13:00, Paper MoBT9.5 | |
>Forecasting Trajectory and Behavior of Road-Agents Using Spectral Clustering in Graph-LSTMs |
|
Chandra, Rohan | University of Maryland |
Guan, Tianrui | University of Maryland |
Panuganti, Srujan | University of Maryland, College Park |
Mittal, Trisha | University of Maryland, College Park |
Bhattacharya, Uttaran | UMD College Park |
Bera, Aniket | University of Maryland |
Manocha, Dinesh | University of Maryland |
Keywords: Intelligent Transportation Systems, Autonomous Agents
Abstract: We present a novel approach for traffic forecasting in urban traffic scenarios using a combination of spectral graph analysis and deep learning. We predict both the low-level information (future trajectories) as well as the high-level information (road-agent behavior) from the extracted trajectory of each road-agent. Our formulation represents the proximity between the road agents using a weighted dynamic geometric graph (DGG). We use a two-stream graph-LSTM network to perform traffic forecasting using these weighted DGGs. The first stream predicts the spatial coordinates of road-agents, while the second stream predicts whether a road-agent is going to exhibit overspeeding, underspeeding, or neutral behavior by modeling spatial interactions between road-agents. Additionally, we propose a new regularization algorithm based on spectral clustering to reduce the error margin in long-term prediction (3-5 seconds) and improve the accuracy of the predicted trajectories. Moreover, we prove a theoretical upper bound on the regularized prediction error. We evaluate our approach on the Argoverse, Lyft, Apolloscape, and NGSIM datasets and highlight the benefits over prior trajectory prediction methods. In practice, our approach reduces the average prediction error by approximately 75% over prior algorithms and achieves a weighted average accuracy of 91.2% for behavior prediction. Additionally, our spectral regularization improves long-term prediction by up to 70%.
|
|
MoBT10 |
Room T10 |
Autonomous Vehicles: Mapping |
Regular session |
Chair: Tombari, Federico | Technische Universität München |
Co-Chair: Liu, Lantao | Indiana University |
|
11:45-12:00, Paper MoBT10.1 | |
>Frontier Detection and Reachability Analysis for Efficient 2D Graph-SLAM Based Active Exploration |
> Video Attachment
|
|
Sun, Zezhou | Nanjing University of Science and Technology |
Wu, Banghe | Nanjing University of Science and Technology |
Xu, Cheng-Zhong | University of Macau |
Sarma, Sanjay E. | MIT |
Yang, Jian | Nanjing University of Science & Technology |
Kong, Hui | Nanjing University of Science and Technology |
Keywords: Autonomous Vehicle Navigation, Path Planning for Multiple Mobile Robots or Agents, Mapping
Abstract: We propose an integrated approach to active exploration that exploits the Cartographer method as the base SLAM module for submap creation and performs efficient frontier detection in the geometrically co-aligned submaps induced by graph optimization. We also analyze the reachability of frontiers and their clusters to ensure that a detected frontier can actually be reached by the robot. Our method is tested on a mobile robot in a real indoor scene to demonstrate its effectiveness and efficiency.
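A minimal frontier detector of the kind the abstract refers to: free cells of an occupancy grid that border unknown cells are flagged as frontiers. The grid encoding and the toy map are assumptions; the paper's submap alignment, clustering, and reachability analysis are not reproduced here.

```python
# Minimal frontier detection: free cells with at least one unknown
# 4-neighbour are frontier cells. Encoding: 0 = free, 1 = occupied, -1 = unknown.
import numpy as np

def frontier_cells(grid):
    frontiers = []
    h, w = grid.shape
    for r in range(h):
        for c in range(w):
            if grid[r, c] != 0:
                continue
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < h and 0 <= nc < w and grid[nr, nc] == -1:
                    frontiers.append((r, c))
                    break
    return frontiers

grid = np.array([[0, 0, -1],
                 [0, 1, -1],
                 [0, 0,  0]])
print(frontier_cells(grid))   # [(0, 1), (2, 2)] -> free cells bordering unknown space
```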
|
|
12:00-12:15, Paper MoBT10.2 | |
>Probabilistic Semantic Mapping for Urban Autonomous Driving Applications |
> Video Attachment
|
|
Paz, David | University of California, San Diego |
Zhang, Hengyuan | University of California, San Diego |
Li, Qinru | University of California San Diego |
Xiang, Hao | University of California, San Diego |
Christensen, Henrik Iskov | UC San Diego |
Keywords: Autonomous Vehicle Navigation, Semantic Scene Understanding, Mapping
Abstract: Recent advancements in statistical learning and computational capability have enabled autonomous vehicle technology to develop at a much faster rate. While many of the architectures previously introduced are capable of operating in highly dynamic environments, many of them are constrained to smaller-scale deployments, require constant maintenance due to the scalability cost associated with high-definition (HD) maps, and involve tedious manual labeling. To tackle this problem, we propose to fuse image and pre-built point cloud map information to perform automatic and accurate labeling of static landmarks such as roads, sidewalks, crosswalks and lanes. The method performs semantic segmentation on 2D images, associates the semantic labels with point cloud maps to accurately localize them in the world, and leverages a confusion-matrix formulation to construct a probabilistic semantic map in bird's-eye view from the semantic point clouds. Experiments on data collected in an urban environment show that this model is able to predict most road features and can be extended to automatically incorporate road features into HD maps, with potential directions for future work.
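The confusion-matrix formulation can be illustrated with a per-cell Bayesian label update, sketched below. The class set, confusion-matrix values, and observation sequence are made up for illustration; the paper additionally projects image labels through the point cloud map before this per-cell fusion.

```python
# Sketch: fuse repeated semantic observations of one map cell using a
# classifier confusion matrix P(observed | true). All numbers are illustrative.
import numpy as np

classes = ["road", "sidewalk", "crosswalk"]
# confusion[i, j] = P(classifier outputs class j | true class i)
confusion = np.array([[0.8, 0.15, 0.05],
                      [0.2, 0.70, 0.10],
                      [0.1, 0.20, 0.70]])

def update_cell(prior, observed_idx):
    """Bayesian update of the per-cell class distribution."""
    posterior = prior * confusion[:, observed_idx]
    return posterior / posterior.sum()

belief = np.full(len(classes), 1.0 / len(classes))
for obs in [0, 0, 2]:                 # three frames observing this cell
    belief = update_cell(belief, obs)
print(dict(zip(classes, belief.round(3))))
```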
|
|
12:15-12:30, Paper MoBT10.3 | |
>City-Scale Grid-Topological Hybrid Maps for Autonomous Mobile Robot Navigation in Urban Area |
> Video Attachment
|
|
Niijima, Shun | Tokyo University of Science, National Institute of Advanced Indu |
Umeyama, Ryusuke | Tokyo University of Science |
Sasaki, Yoko | National Inst. of Advanced Industrial Science and Technology |
Mizoguchi, Hiroshi | Tokyo University of Science |
Keywords: Wheeled Robots, Autonomous Vehicle Navigation
Abstract: Extensive city navigation remains an unresolved problem for autonomous mobile robots that share space with pedestrians. This paper proposes a configuration for a navigation map that expresses urban structures and an autonomous navigation scheme that uses this configuration. The proposed map is a hybrid structure of multiple 2D grid maps and a topological graph. The occupancy grids for path planning are automatically converted from a given 3D point cloud and publicly available maps. The topological graph manages the connections between the subdivisions of the occupancy grids and is used for route planning. This hybrid configuration can embed various urban structures automatically and is applicable to a wide range of autonomous navigation tasks. We evaluated the map by generating the proposed navigation map for a real city and performing path planning on the hybrid map. Experimental results demonstrate that the hybrid map can reduce planning time and memory usage compared to path planning based on a conventional single 2D grid map.
|
|
12:30-12:45, Paper MoBT10.4 | |
>State-Continuity Approximation of Markov Decision Processes Via Finite Element Methods for Autonomous System Planning |
|
Xu, Junhong | Indiana University |
Yin, Kai | HomeAway |
Liu, Lantao | Indiana University |
Keywords: Autonomous Vehicle Navigation, Motion and Path Planning, Marine Robotics
Abstract: Motion planning under uncertainty for an autonomous system can be formulated as a Markov Decision Process with a continuous state space. In this paper, we propose a novel solution to this decision-theoretic planning problem that directly obtains the continuous value function with only the first and second moments of the transition probabilities, alleviating the assumption in the literature that an explicit transition model is required. We achieve this by taking advantage of the linear span of basis functions for the value function and a partial differential equation that approximates the Bellman equation, so that the value function can be naturally constructed using a finite element method. We have validated our approach via extensive simulations, and the evaluations reveal that, compared to baseline methods, our solution leads to the best paths in terms of path smoothness, travel distance, and time cost.
|
|
12:45-13:00, Paper MoBT10.5 | |
>APPLD: Adaptive Planner Parameter Learning from Demonstration |
> Video Attachment
|
|
Xiao, Xuesu | The University of Texas at Austin |
Liu, Bo | University of Texas at Austin |
Warnell, Garrett | U.S. Army Research Laboratory |
Fink, Jonathan | US Army Research Laborator |
Stone, Peter | University of Texas at Austin |
Keywords: Autonomous Vehicle Navigation, Learning from Demonstration, Motion and Path Planning
Abstract: Existing autonomous robot navigation systems allow robots to move from one point to another in a collision-free manner. However, when facing new environments, these systems generally require re-tuning by expert roboticists with a good understanding of the inner workings of the navigation system. In contrast, even users who are unversed in the details of robot navigation algorithms can generate desirable navigation behavior in new environments via teleoperation. In this paper, we introduce APPLD, Adaptive Planner Parameter Learning from Demonstration, that allows existing navigation systems to be successfully applied to new complex environments, given only a human-teleoperated demonstration of desirable navigation. APPLD is verified on two robots running different navigation systems in different environments. Experimental results show that APPLD can outperform navigation systems with the default and expert-tuned parameters, and even the human demonstrator themselves.
|
|
13:00-13:15, Paper MoBT10.6 | |
>Explicit Domain Adaptation with Loosely Coupled Samples |
> Video Attachment
|
|
Scheel, Oliver | BMW Group |
Schwarz, Loren | BMW Group |
Navab, Nassir | TU Munich |
Tombari, Federico | Technische Universität München |
Keywords: Autonomous Vehicle Navigation, AI-Based Methods, Novel Deep Learning Methods
Abstract: Transfer learning is an important field of machine learning in general, and particularly in the context of fully autonomous driving, which needs to be solved simultaneously for many different domains, such as changing weather conditions and country-specific driving behaviors. Traditional transfer learning methods often focus on image data and are black-box models. In this work we propose a transfer learning framework, the core of which is learning an explicit mapping between domains. Due to its interpretability, this is beneficial for safety-critical applications like autonomous driving. We show its general applicability by first considering image classification problems and then moving on to time-series data, particularly the prediction of lane changes. In our evaluation we adapt a pre-trained model to a dataset exhibiting different driving and sensory characteristics.
|
|
MoBT11 |
Room T11 |
Autonomous Vehicles: Navigation I |
Regular session |
Chair: Zhang, Shiqi | SUNY Binghamton |
Co-Chair: Johnson-Roberson, Matthew | University of Michigan |
|
11:45-12:00, Paper MoBT11.1 | |
>SCALE-Net: Scalable Vehicle Trajectory Prediction Network under Random Number of Interacting Vehicles Via Edge-Enhanced Graph Convolutional Neural Network |
> Video Attachment
|
|
Jeon, Hyeongseok | Korea Advanced Institute of Science and Technology (KAIST) |
Choi, Jun-Won | Hanyang University |
Kum, Dongsuk | KAIST |
Keywords: Intelligent Transportation Systems, Autonomous Agents, Novel Deep Learning Methods
Abstract: Predicting the future trajectories of surrounding vehicles under randomly varying traffic levels is one of the most challenging problems in developing an autonomous vehicle. Since the number of interacting vehicles is not pre-defined, the prediction network has to be scalable with respect to the number of vehicles in order to guarantee consistent performance in terms of both accuracy and computational load. In this paper, the first fully scalable trajectory prediction network, SCALE-Net, is proposed, which ensures high prediction performance while keeping the computational load low regardless of the number of surrounding vehicles. The SCALE-Net employs an Edge-enhanced Graph Convolutional Neural Network (EGCN) for the inter-vehicle interaction embedding. Since the proposed EGCN is inherently scalable with respect to the graph nodes (the agents in this study), the model can operate independently of the total number of vehicles considered. We evaluated the scalability of the SCALE-Net on the publicly available NGSIM dataset by comparing the variation in computation time and prediction accuracy per driving scene as the number of vehicles varies. The experiments show that both the computation time and the prediction performance of the SCALE-Net consistently outperform those of previous models regardless of the level of traffic complexity.
|
|
12:00-12:15, Paper MoBT11.2 | |
>Behaviorally Diverse Traffic Simulation Via Reinforcement Learning |
> Video Attachment
|
|
Shiroshita, Shinya | Preferred Networks, Inc |
Maruyama, Shirou | Preferred Networks, Inc |
Nishiyama, Daisuke | Preferred Networks, Inc |
Ynocente Castro, Mario | Preferred Networks, Inc |
Hamzaoui, Karim | Preferred Networks Inc |
Rosman, Guy | Massachusetts Institute of Technology |
DeCastro, Jonathan | Cornell University |
Lee, Kuan-Hui | Toyota Research Institute |
Gaidon, Adrien | Toyota Research Institute |
Keywords: Intelligent Transportation Systems, Reinforcement Learning, Autonomous Agents
Abstract: Traffic simulators are important tools in autonomous driving development. While continuous progress has been made to provide developers more options for modeling various traffic participants, tuning these models to increase their behavioral diversity while maintaining quality is often very challenging. This paper introduces an easily-tunable policy generation algorithm for autonomous driving agents. The proposed algorithm balances diversity and driving skills by leveraging the representation and exploration abilities of deep reinforcement learning via a distinct policy set selector. Moreover, we present an algorithm utilizing intrinsic rewards to widen behavioral differences in the training. To provide quantitative assessments, we develop two trajectory-based evaluation metrics which measure the differences among policies and behavioral coverage. We experimentally show the effectiveness of our methods on several challenging intersection scenes.
|
|
12:15-12:30, Paper MoBT11.3 | |
>Predictive Runtime Monitoring of Vehicle Models Using Bayesian Estimation and Reachability Analysis |
> Video Attachment
|
|
Chou, Yi | University of Colorado, Boulder |
Yoon, Hansol | University of Colorado Boulder |
Sankaranarayanan, Sriram | University of Colorado, Boulder |
Keywords: Autonomous Vehicle Navigation, Formal Methods in Robotics and Automation, Collision Avoidance
Abstract: We present a predictive runtime monitoring technique for estimating future vehicle positions and the probability of collisions with obstacles. Vehicle dynamics model how the position and velocity change over time as a function of external inputs, and are commonly described by discrete-time stochastic models. Whereas positions and velocities can be measured, the inputs (steering and throttle) are not directly measurable in these models. In our paper, we apply Bayesian inference techniques for real-time estimation, given a prior distribution over the unknowns and noisy state measurements. Next, we pre-compute a set-valued reachability analysis to approximate future positions of the vehicle. The pre-computed reachability sets are combined with the posterior probabilities computed through Bayesian estimation to provide a predictive verification framework that can be used to detect impending collisions with obstacles. Our approach is evaluated using the coordinated-turn vehicle model for a UAV, with on-board measurement data obtained from a flight test of a Talon UAV. We also compare the results with sampling-based approaches. We find that the precomputed reachability analysis can provide accurate warnings up to 6 seconds in advance, and that the accuracy of the warnings improves as the time horizon is narrowed from 6 to 2 seconds. The approach also outperforms sampling in terms of on-board computation cost and accuracy.
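A one-dimensional toy version of the two ingredients, written under strong simplifying assumptions (scalar dynamics, a discretised constant input, Gaussian noise): a Bayesian posterior over the unmeasured input is estimated from noisy positions, and a crude interval bound on future positions is derived from the high-probability inputs. The paper itself uses coordinated-turn dynamics and precomputed set-valued reachability, neither of which is reproduced here.

```python
# Toy: (1) Bayesian estimation of an unmeasured input from noisy positions,
# (2) a crude interval "reachability" bound on future positions.
import numpy as np

dt, sigma = 0.1, 0.05
candidate_inputs = np.linspace(-2.0, 2.0, 41)   # discretised unknown input u
log_post = np.zeros_like(candidate_inputs)      # uniform prior over u

rng = np.random.default_rng(0)
x_prev, true_u = 0.0, 0.7
for _ in range(50):                             # simulate noisy measurements
    x_meas = x_prev + true_u * dt + rng.normal(0.0, sigma)
    pred = x_prev + candidate_inputs * dt
    log_post += -0.5 * ((x_meas - pred) / sigma) ** 2
    x_prev = x_meas

post = np.exp(log_post - log_post.max())
post /= post.sum()

# Position interval over a 2 s horizon using inputs carrying 99% posterior mass.
order = np.argsort(post)[::-1]
csum = np.cumsum(post[order])
keep = order[:int(np.searchsorted(csum, 0.99)) + 1]
u_lo, u_hi = candidate_inputs[keep].min(), candidate_inputs[keep].max()
horizon = 2.0
print("position in", (x_prev + u_lo * horizon, x_prev + u_hi * horizon))
```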
|
|
12:30-12:45, Paper MoBT11.4 | |
>Task-Motion Planning for Safe and Efficient Urban Driving |
> Video Attachment
|
|
Ding, Yan | SUNY Binghamton |
Zhang, Xiaohan | SUNY Binghamton |
Zhan, Xingyue | Binghamton University |
Zhang, Shiqi | SUNY Binghamton |
Keywords: Autonomous Vehicle Navigation, Task Planning, Motion and Path Planning
Abstract: Autonomous vehicles need to plan at the task level to compute a sequence of symbolic actions, such as merging left and turning right, to fulfill people's service requests, where efficiency is the main concern. At the same time, the vehicles must compute continuous trajectories to perform actions at the motion level, where safety is the most important concern. Task-motion planning in autonomous driving faces the problem of maximizing task-level efficiency while ensuring motion-level safety. To this end, we develop the algorithm Task-Motion Planning for Urban Driving (TMPUD) which, for the first time, enables the task and motion planners to communicate about the safety level of driving behaviors. TMPUD has been evaluated on a realistic urban driving simulation platform. Results suggest that TMPUD performs significantly better in efficiency than competitive baselines from the literature, while ensuring the safety of driving behaviors.
|
|
12:45-13:00, Paper MoBT11.5 | |
>Feedback Enhanced Motion Planning for Autonomous Vehicles |
> Video Attachment
|
|
Sun, Ke | University of Pennsylvania |
Schlotfeldt, Brent | University of Pennsylvania |
Chaves, Stephen | Qualcomm Research Philadelphia |
Martin, Paul | Qualcomm |
Mandhyan, Gulshan | Qualcomm |
Kumar, Vijay | University of Pennsylvania, School of Engineering and Applied Sc |
Keywords: Autonomous Vehicle Navigation, Motion and Path Planning
Abstract: In this work, we address the motion planning problem for autonomous vehicles through a new lattice planning approach, called the Feedback Enhanced Lattice Planner (FELP). Existing lattice planners have two major limitations, namely the high dimensionality of the lattice and the lack of modeling of agent vehicle behaviors. We propose to apply the Intelligent Driver Model (IDM) of Treiber and Kesting as a speed feedback policy to address both of these limitations. The IDM both enables responsive behavior of the agents and uniquely determines the acceleration and speed profile of the ego vehicle on a given path. Therefore, only a spatial lattice is needed, and discretization of higher-order dimensions is no longer required. Additionally, we propose a directed-graph map representation to support the implementation and execution of lattice planners. The map can reflect local geometric structure, embed the traffic rules adhering to the road, and is efficient to construct and update. We show that FELP is more efficient than other existing lattice planners through a runtime complexity analysis, and we propose two variants of FELP to further reduce the complexity to polynomial time. We demonstrate the improvement by comparing FELP with an existing spatiotemporal lattice planner using simulations of a merging scenario and continuous highway traffic. We also study the performance of FELP under different traffic densities.
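The IDM used as the speed feedback policy has a standard closed form, reproduced below; the parameter values are common textbook defaults, not necessarily those used in FELP.

```python
# Intelligent Driver Model (IDM) acceleration. Parameters are typical
# textbook defaults, not the values used in the paper.
import math

def idm_acceleration(v, gap, dv, v0=30.0, T=1.5, a_max=1.5, b=2.0,
                     s0=2.0, delta=4.0):
    """v: ego speed [m/s], gap: bumper-to-bumper distance to the leader [m],
    dv: approach rate v_ego - v_leader [m/s]."""
    s_star = s0 + max(0.0, v * T + v * dv / (2.0 * math.sqrt(a_max * b)))
    return a_max * (1.0 - (v / v0) ** delta - (s_star / gap) ** 2)

# Example: closing in on a slower leader 25 m ahead.
print(idm_acceleration(v=20.0, gap=25.0, dv=5.0))   # negative -> braking
```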
|
|
13:00-13:15, Paper MoBT11.6 | |
>Low Latency Trajectory Predictions for Interaction Aware Highway Driving |
|
Anderson, Cyrus | University of Michigan |
Vasudevan, Ram | University of Michigan |
Johnson-Roberson, Matthew | University of Michigan |
Keywords: Autonomous Vehicle Navigation, Autonomous Agents
Abstract: Highway driving places significant demands on human drivers and autonomous vehicles (AVs) alike due to high speeds and the complex interactions in dense traffic. Merging onto the highway poses additional challenges by limiting the amount of time available for decision-making. Predicting others' trajectories accurately and quickly is crucial to safely execute maneuvers. Many existing prediction methods based on neural networks have focused on modeling interactions to achieve better accuracy while assuming the existence of observation windows over 3s long. This paper proposes a novel probabilistic model for trajectory prediction that performs competitively with as little as 400ms of observations. The proposed model extends a deterministic car-following model to the probabilistic setting by treating model parameters as unknown random variables and introducing regularization terms. A realtime inference procedure is derived to estimate the parameters from observations in this new model. Experiments on dense traffic in the NGSIM dataset demonstrate that the proposed method achieves state-of-the-art performance with both highly constrained and more traditional observation windows.
|
|
13:00-13:15, Paper MoBT11.7 | |
>Stable Autonomous Spiral Stair Climbing of Tracked Vehicles Using Wall Reaction Force |
> Video Attachment
|
|
Kojima, Shotaro | Tohoku University |
Ohno, Kazunori | Tohoku University |
Suzuki, Takahiro | Tohoku University |
Okada, Yoshito | Tohoku University |
Westfechtel, Thomas | Tohoku University |
Tadokoro, Satoshi | Tohoku University |
Keywords: Autonomous Vehicle Navigation, Motion Control, Kinematics
Abstract: In this paper, an autonomous spiral stair climbing method for tracked vehicles using the reaction force from side walls is proposed. Spiral stairs are among the most difficult terrains for tracked vehicles because of their asymmetrical ground shape and small turning radius. Tracked vehicles are expected to be used in industrial plant inspection tasks, where robots must navigate multiple floors by ascending stairs. Spiral or curved stairs are installed as part of inspection passages for cylindrical facilities such as boilers, chimneys, and large tanks. The authors previously demonstrated experimentally that a wall-following motion is effective for stabilizing and accelerating spiral stair climbing. However, the complete automation of the climbing motion, and an analysis of why the same motion is generated even when the initial entry angle to the wall is disturbed, remained to be investigated. In this study, the authors developed an autonomous spiral stair climbing method using the wall reaction force and clarified its applicable limitations using a geometrical model. Autonomous spiral stair climbing is realized by attaching passive wheels at the collision point and automating the motions of the main tracks and sub-tracks. The geometrical model gives the expected trajectory of the robot on the spiral stairs and suggests that the robot's rotation radius converges to a specific value; this is confirmed experimentally by measuring the robot's motion. The wall-following motion of the robot is analogous to human inspectors grasping handrails while climbing stairs. Through contact with surrounding objects, the motion is stabilized and its reliability is ensured.
|
|
MoBT12 |
Room T12 |
Autonomous Vehicles: Navigation II |
Regular session |
Chair: Bezzo, Nicola | University of Virginia |
Co-Chair: Miao, Fei | University of Connecticut |
|
11:45-12:00, Paper MoBT12.1 | |
>GndNet: Fast Ground Plane Estimation and Point Cloud Segmentation for Autonomous Vehicles |
> Video Attachment
|
|
Paigwar, Anshul | Institut National De Recherche En Informatique Et En Automatique |
Erkent, Ozgur | Inria |
Sierra-Gonzalez, David | Inria Grenoble Rhône-Alpes |
Laugier, Christian | INRIA |
Keywords: Intelligent Transportation Systems, Autonomous Vehicle Navigation, Novel Deep Learning Methods
Abstract: Ground plane estimation and ground point segmentation are crucial precursors for many applications in robotics and intelligent vehicles, such as navigable space detection and occupancy grid generation, 3D object detection, point cloud matching for localization, and registration for mapping. In this paper, we present GndNet, a novel end-to-end approach that estimates ground plane elevation in a grid-based representation and segments ground points simultaneously in real time. GndNet uses PointNet and a Pillar Feature Encoding network to extract features and regresses the ground height for each cell of the grid. We augment the SemanticKITTI dataset to train our network. We present qualitative and quantitative evaluations of our results for ground elevation estimation and semantic segmentation of the point cloud. GndNet establishes a new state of the art and achieves a run-time of 55 Hz for ground plane estimation and ground point segmentation.
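For contrast with the learned approach, a classical stand-in for the same task is a least-squares plane fit followed by a height threshold, sketched below. The synthetic point cloud, the threshold, and the plane model are illustrative assumptions; this is a simple baseline, not GndNet.

```python
# Classical stand-in for the task GndNet learns: fit a plane z = a*x + b*y + c
# to candidate ground points by least squares, then segment points within a
# height threshold of that plane.
import numpy as np

def fit_ground_plane(points):
    """points: (N, 3) array; returns (a, b, c) of z = a*x + b*y + c."""
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return coeffs

def segment_ground(points, coeffs, threshold=0.2):
    a, b, c = coeffs
    plane_z = a * points[:, 0] + b * points[:, 1] + c
    return np.abs(points[:, 2] - plane_z) < threshold   # boolean ground mask

rng = np.random.default_rng(1)
ground = np.c_[rng.uniform(0, 50, 500), rng.uniform(-10, 10, 500),
               rng.normal(0.0, 0.05, 500)]
obstacle = np.array([[10.0, 0.0, 1.2], [10.2, 0.1, 1.5]])
cloud = np.vstack([ground, obstacle])
mask = segment_ground(cloud, fit_ground_plane(ground))
print(mask.sum(), "of", len(cloud), "points labelled ground")
```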
|
|
12:00-12:15, Paper MoBT12.2 | |
>Intelligent Exploration and Autonomous Navigation in Confined Spaces |
> Video Attachment
|
|
Akbari, Aliakbar | Royal Holloway University of London |
Chhabra, Puneet Singh | Headlight AI Limited |
Bhandari, Ujjar | Headlight AI Limited |
Bernardini, Sara | Royal Holloway University of London |
Keywords: Autonomous Vehicle Navigation, Semantic Scene Understanding, Motion and Path Planning
Abstract: Autonomous navigation and exploration in confined spaces currently set new challenges for robots. The presence of narrow passages, flammable atmospheres, dust, smoke, and other hazards makes mapping and navigation tasks extremely difficult. To tackle these challenges, robots need to make intelligent decisions, maximising information gain while maintaining the safety of the system and its surroundings. In this paper, we present a suite of reasoning mechanisms, along with a software architecture for exploration tasks, that can underpin the behavior of a broad range of robots operating in confined spaces. We present an autonomous navigation module that allows the robot to safely traverse known areas of the environment and extract features of the unknown frontier regions. An exploration component, by reasoning about these frontiers, provides the robot with the ability to venture into new spaces. From low-level sensory input and contextual information, the robot incrementally builds a semantic network that represents known and unknown parts of the environment, and then uses a logic-based, high-level reasoner to interrogate this network and decide the best course of action. We evaluate our approach in several challenging mine-like scenarios with different characteristics using a small drone. The experimental results indicate that our method allows the robot to make informed decisions on how best to explore the environment while preserving safety.
|
|
12:15-12:30, Paper MoBT12.3 | |
>Data-Driven Distributionally Robust Electric Vehicle Balancing for Mobility-On-Demand Systems under Demand and Supply Uncertainties |
|
He, Sihong | University of Connecticut |
Pepin, Lynn | University of Connecticut |
Guang, Wang | Rutgers University |
Zhang, Desheng | Rutgers University |
Miao, Fei | University of Connecticut |
Keywords: Intelligent Transportation Systems, Optimization and Optimal Control, Robust/Adaptive Control of Robotic Systems
Abstract: As electric vehicle (EV) technologies mature, EVs have been rapidly adopted in modern transportation systems and are expected to provide future autonomous mobility-on-demand (AMoD) services with economic and societal benefits. However, EVs require frequent recharging due to their limited and unpredictable cruising ranges, and they have to be managed efficiently given the dynamic charging process. It is urgent and challenging to develop a computationally efficient algorithm that provides EV AMoD system performance guarantees under model uncertainties, instead of relying on heuristic demand or charging models. To accomplish this goal, this work designs a data-driven distributionally robust optimization approach for balancing the vehicle supply-demand ratio and charging station utilization, while minimizing the worst-case expected cost considering both passenger mobility demand uncertainties and EV supply uncertainties. We then derive an equivalent, computationally tractable form for solving the distributionally robust problem efficiently under ellipsoid uncertainty sets constructed from data. Based on E-taxi system data from the city of Shenzhen, we show that the average total balancing cost is reduced by 14.49%, and the average unfairness of the supply-demand ratio and utilization is reduced by 15.78% and 34.51%, respectively, with the distributionally robust vehicle balancing method, compared with solutions that do not consider model uncertainties.
|
|
12:30-12:45, Paper MoBT12.4 | |
>GP-Based Runtime Planning, Learning, and Recovery for Safe UAV Operations under Unforeseen Disturbances |
> Video Attachment
|
|
Yel, Esen | University of Virginia |
Bezzo, Nicola | University of Virginia |
Keywords: Autonomous Vehicle Navigation, Aerial Systems: Applications, Motion and Path Planning
Abstract: Autonomous vehicles are typically developed and trained to work under certain system and environmental conditions defined at design time and can fail or perform poorly if unforeseen conditions such as disturbances or changes in model dynamics appear at runtime. In this work, we present a fast online planning, learning, and recovery approach for safe autonomous operations under unknown runtime disturbances. Our approach estimates the behavior of the system with an unknown model and provides safe plans at runtime under previously unseen disturbances by leveraging Gaussian Process regression theory in which a model is continuously trained and adapted using data collected during the autonomous operation. A recovery procedure is event-triggered any time a safety constraint is violated to guarantee safety and enable learning and replanning. The proposed framework is applied and validated both in simulation and experiment on an unmanned aerial vehicle (UAV) delivery case study in which the UAV is tasked to carry an a priori unknown payload to a goal location in a cluttered/constrained environment.
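A minimal sketch of the learning ingredient, under assumed inputs and synthetic data: a Gaussian Process is fit to the discrepancy between commanded and observed behavior so that its mean can correct the plan and its variance can trigger recovery or replanning. The feature choice, kernel, and scikit-learn usage are assumptions, not the paper's configuration.

```python
# Sketch: GP regression on the residual caused by an unknown payload.
# Data is synthetic; the planner would use the mean as a correction and the
# predictive std as an uncertainty signal for replanning/recovery.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
commanded_thrust = rng.uniform(0.4, 0.9, size=(40, 1))      # normalised input
# observed acceleration error caused by the unknown payload (synthetic)
accel_error = -2.0 * commanded_thrust[:, 0] + rng.normal(0, 0.05, 40)

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(commanded_thrust, accel_error)

query = np.array([[0.75]])
mean, std = gp.predict(query, return_std=True)
print(mean[0], std[0])   # correction estimate and its uncertainty
```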
|
|
12:45-13:00, Paper MoBT12.5 | |
>DiversityGAN: Diversity-Aware Vehicle Motion Prediction Via Latent Semantic Sampling |
> Video Attachment
|
|
Huang, Xin | MIT |
McGill, Stephen | Toyota Research Institute |
DeCastro, Jonathan | Cornell University |
Fletcher, Luke | Toyota Research Institute |
Leonard, John | MIT |
Williams, Brian | MIT |
Rosman, Guy | Massachusetts Institute of Technology |
Keywords: Intelligent Transportation Systems, Representation Learning, Computer Vision for Transportation
Abstract: Vehicle trajectory prediction is crucial for autonomous driving and advanced driver assistant systems. While existing approaches may sample from a predicted distribution of vehicle trajectories, they lack the ability to explore it -- a key ability for evaluating safety from a planning and verification perspective. In this work, we devise a novel approach for generating realistic and diverse vehicle trajectories. We extend the generative adversarial network (GAN) framework with a low-dimensional approximate semantic space, and shape that space to capture semantics such as merging and turning. We sample from this space in a way that mimics the predicted distribution, but allows us to control coverage of semantically distinct outcomes. We validate our approach on a publicly available dataset and show results that achieve state-of-the-art prediction performance, while providing improved coverage of the space of predicted trajectory semantics.
|
|
13:00-13:15, Paper MoBT12.6 | |
>Efficient Sampling-Based Maximum Entropy Inverse Reinforcement Learning with Application to Autonomous Driving |
|
Wu, Zheng | University of California, Berkeley |
Sun, Liting | University of California, Berkeley |
Zhan, Wei | Univeristy of California, Berkeley |
Yang, Chenyu | Shanghai Jiao Tong University(SJTU) |
Tomizuka, Masayoshi | University of California |
Keywords: Intelligent Transportation Systems, Autonomous Agents, Behavior-Based Systems
Abstract: In the past decades, we have witnessed significant progress in the domain of autonomous driving. Advanced techniques based on optimization and reinforcement learning have become increasingly powerful at solving the forward problem: given designed reward/cost functions, how should we optimize them to obtain driving policies that interact with the environment safely and efficiently? Such progress has raised another, equally important question: what should we optimize? Instead of manually specifying the reward functions, it is desirable to extract what human drivers try to optimize from real traffic data and assign that to autonomous vehicles, enabling more naturalistic and transparent interaction between humans and intelligent agents. To address this issue, we present an efficient sampling-based maximum-entropy inverse reinforcement learning (IRL) algorithm in this paper. Different from existing IRL algorithms, by introducing an efficient continuous-domain trajectory sampler, the proposed algorithm can directly learn the reward functions in the continuous domain while considering the uncertainties in demonstrated trajectories from human drivers. We evaluate the proposed algorithm on real-world driving data, including both non-interactive and interactive scenarios. The experimental results show that the proposed algorithm achieves more accurate prediction performance with faster convergence and better generalization compared to other baseline IRL algorithms.
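For orientation, the core maximum-entropy IRL update with a sampled trajectory set looks roughly as follows: the reward is linear in trajectory features, and the gradient is the demonstrated feature expectation minus the expectation under the current reward-weighted sample distribution. The features and samples below are synthetic, and the continuous-domain trajectory sampler that is the paper's main contribution is not shown.

```python
# Minimal sampling-based max-ent IRL step with a linear reward r = theta . f.
import numpy as np

rng = np.random.default_rng(0)
demo_features = np.array([0.8, 0.2])                 # e.g. [progress, comfort]
sampled_features = rng.uniform(0, 1, size=(200, 2))  # features of sampled trajectories

theta = np.zeros(2)
lr = 0.1
for _ in range(200):
    logits = sampled_features @ theta
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()                         # p(traj) proportional to exp(theta . f)
    expected = weights @ sampled_features
    theta += lr * (demo_features - expected)         # max-ent IRL gradient ascent
print(theta)
```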
|
|
MoBT13 |
Room T13 |
Autonomous Vehicles: Planning & Environment |
Regular session |
Chair: Kong, Yu | Rochester Institute of Technology |
Co-Chair: Azaria, Amos | Computer Science Department, Ariel University |
|
11:45-12:00, Paper MoBT13.1 | |
>Object-Aware Centroid Voting for Monocular 3D Object Detection |
> Video Attachment
|
|
Bao, Wentao | Rochester Institute of Technology |
Yu, Qi | Rochester Institute of Technology |
Kong, Yu | Rochester Institute of Technology |
Keywords: Autonomous Vehicle Navigation, Computer Vision for Automation, Deep Learning for Visual Perception
Abstract: Monocular 3D object detection aims to detect objects in a 3D physical world from a single image. However, recent approaches either rely on expensive LiDAR devices, or resort to dense pixel-wise depth estimation that causes prohibitive computational cost. In this paper, we propose an end-to-end trainable monocular 3D object detector without learning the dense depth. Specifically, the grid coordinates of a 2D box are first projected back to 3D space with the pinhole model as 3D centroid proposals. Then, a novel object-aware voting approach is introduced, which considers both the region-wise appearance attention and the geometric projection distribution, to vote the 3D centroid proposals for 3D object localization. With the late fusion and the predicted 3D orientation and dimension, the 3D bounding boxes of objects can be detected from a single RGB image. The method is straightforward yet significantly superior to other monocular-based and even recent LiDAR-based methods in localizing faraway objects. Extensive experimental results on the challenging KITTI benchmark validate the effectiveness of the proposed method.
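A small sketch of the pinhole back-projection used for such 3D centroid proposals, assuming known intrinsics K and hypothesised candidate depths; the voting stage is not shown and this is not the authors' code:

# Minimal sketch (assumptions: known intrinsics K, hypothesised depths z):
# back-project 2D grid coordinates to camera-frame 3D points via the pinhole model.
import numpy as np

def backproject(pixels_uv, depths, K):
    """pixels_uv: (N, 2) pixel coordinates, depths: (N,) candidate depths,
       K: (3, 3) camera intrinsics. Returns (N, 3) camera-frame points."""
    uv1 = np.hstack([pixels_uv, np.ones((len(pixels_uv), 1))])   # homogeneous pixels
    rays = uv1 @ np.linalg.inv(K).T                              # normalised rays (z = 1)
    return rays * depths[:, None]                                # scale rays by depth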
|
|
12:00-12:15, Paper MoBT13.2 | |
>Estimating Pedestrian Crossing States Based on Single 2D Body Pose |
|
Wang, Zixing | University of Minnesota |
Papanikolopoulos, Nikos | University of Minnesota |
Keywords: Intelligent Transportation Systems, Computer Vision for Transportation
Abstract: The Crossing or Not-Crossing (C/NC) problem is important to autonomous vehicles (AVs) for safe vehicle/pedestrian interactions. However, this problem setup often ignores pedestrians walking along the direction of the vehicles’ movement (LONG). To enhance the AVs’ awareness of pedestrian behavior, we take the first step towards extending C/NC to the C/NC/LONG problem and recognize these states based on a single body pose. In contrast, previous C/NC state classifiers depend on multiple poses or contextual information. Our proposed shallow neural network classifier aims to recognize these three states swiftly. We tested it on the JAAD dataset and report an average accuracy of 81.23%. Furthermore, this model can be integrated with different sensors and algorithms that provide 2D pedestrian body pose, so that it is able to function across multiple lighting and weather conditions.
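A hedged sketch of what such a shallow pose-based classifier could look like in PyTorch; the layer sizes and 17-keypoint input are illustrative assumptions, not the authors' architecture:

# Minimal sketch (not the paper's network): a shallow classifier mapping one
# 2D body pose (e.g. 17 keypoints -> 34 values) to Crossing / Not-Crossing / LONG.
import torch
import torch.nn as nn

class PoseStateClassifier(nn.Module):
    def __init__(self, n_keypoints=17, n_states=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * n_keypoints, 64),
            nn.ReLU(),
            nn.Linear(64, n_states),   # logits for C / NC / LONG
        )

    def forward(self, pose):           # pose: (batch, 2 * n_keypoints)
        return self.net(pose)

# Example: probs = torch.softmax(PoseStateClassifier()(torch.randn(1, 34)), dim=-1)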
|
|
12:15-12:30, Paper MoBT13.3 | |
>SSP: Single Shot Future Trajectory Prediction |
|
Dwivedi, Isht | Honda Research Institute USA |
Malla, Srikanth | Honda Research Institute |
Dariush, Behzad | Honda Research Institute USA |
Choi, Chiho | Honda Research Institute |
Keywords: Intelligent Transportation Systems, Deep Learning for Visual Perception, Computer Vision for Transportation
Abstract: We propose a robust solution to future trajectory forecast, which is practically applicable to autonomous agents in highly crowded environments. For this, three aspects are particularly addressed in this paper. First, we use composite fields to predict future locations of all road agents in a single shot, which results in a constant time complexity, regardless of the number of agents in the scene. Second, interactions between agents are modeled as a non-local response, enabling spatial relationships between different locations to be captured temporally as well (i.e., in spatio-temporal interactions). Third, the semantic context of the scene is modeled and taken into account to capture the environmental constraints that potentially influence future motion. Finally, we validate the robustness of the proposed approach using the ETH, UCY, and SDD datasets and highlight its practical functionality compared to the current state-of-the-art methods.
|
|
12:30-12:45, Paper MoBT13.4 | |
>Probabilistic Crowd GAN: Multimodal Pedestrian Trajectory Prediction Using a Graph Vehicle-Pedestrian Attention Network |
> Video Attachment
|
|
Eiffert, Stuart | The University of Sydney: The Australian Centre for Field Robotics |
Li, Kunming | University of Sydney |
Shan, Mao | The University of Sydney |
Worrall, Stewart | University of Sydney |
Sukkarieh, Salah | The University of Sydney: The Australian Centre for Field Robotics |
Nebot, Eduardo | University of Sydney |
Keywords: Intelligent Transportation Systems, Social Human-Robot Interaction, Autonomous Vehicle Navigation
Abstract: Understanding and predicting the intention of pedestrians is essential to enable autonomous vehicles and mobile robots to navigate crowds. This problem becomes increasingly complex when we consider the uncertainty and multimodality of pedestrian motion, as well as the implicit interactions between members of a crowd, including any response to a vehicle. Our approach, Probabilistic Crowd GAN, extends recent work in trajectory prediction, combining Recurrent Neural Networks (RNNs) with Mixture Density Networks (MDNs) to output probabilistic multimodal predictions, from which likely modal paths are found and used for adversarial training. We also propose the use of Graph Vehicle-Pedestrian Attention Network (GVAT), which models social interactions and allows input of a shared vehicle feature, showing that inclusion of this module leads to improved trajectory prediction both with and without the presence of a vehicle. Through evaluation on various datasets we demonstrate improvements on existing state of the art methods for trajectory prediction and illustrate how the true multimodal and uncertain nature of crowd interactions can be directly modelled.
|
|
12:45-13:00, Paper MoBT13.5 | |
>Model-Based Reinforcement Learning for Time-Optimal Velocity Control |
|
Hartmann, Gabriel | Ariel University |
Shiller, Zvi | Ariel University |
Azaria, Amos | Computer Science Department, Ariel University |
Keywords: Autonomous Vehicle Navigation, Reinforcement Learning, Motion and Path Planning
Abstract: Autonomous navigation has recently gained great interest in the field of reinforcement learning. However, little attention was given to the time-optimal velocity control problem, i.e. controlling a vehicle such that it travels at the maximal speed without becoming dynamically unstable (roll-over or sliding). Time optimal velocity control can be solved numerically using existing methods that are based on optimal control and vehicle dynamics. In this paper, we develop a model-based deep reinforcement learning to generate the time-optimal velocity control. Moreover, we introduce a method that uses a numerical solution that predicts whether the vehicle may become unstable and intervenes if needed. We show that our combined model outperforms several baselines as it achieves higher velocities (with only one minute of training) and does not encounter any failures during the training process.
|
|
13:00-13:15, Paper MoBT13.6 | |
>Learning Hierarchical Behavior and Motion Planning for Autonomous Driving |
> Video Attachment
|
|
Wang, Jingke | Zhejiang University |
Wang, Yue | Zhejiang University |
Zhang, Dongkun | Zhejiang University |
Yang, Yezhou | Arizona State University |
Xiong, Rong | Zhejiang University |
Keywords: Autonomous Vehicle Navigation, Reinforcement Learning
Abstract: Learning-based driving solution, a new branch for autonomous driving, is expected to simplify the modeling of driving by learning the underlying mechanisms from data. To improve the tactical decision-making for learning-based driving solution, we introduce hierarchical behavior and motion planning (HBMP) to explicitly model the behavior in learning-based solution. Due to the coupled action space of behavior and motion, it is challenging to solve HBMP problem using reinforcement learning (RL) for long-horizon driving tasks. We transform HBMP problem by integrating a classical sampling-based motion planner, of which the optimal cost is regarded as the rewards for high-level behavior learning. As a result, this formulation reduces action space and diversifies the rewards without losing the optimality of HBMP. In addition, we propose a sharable representation for input sensory data across simulation platforms and real-world environment, so that models trained in a fast event-based simulator, SUMO, can be used to initialize and accelerate the RL training in a dynamics based simulator, CARLA. Experimental results demonstrate the effectiveness of the method. Besides, the model is successfully transferred to the real-world, validating the generalization capability.
|
|
MoBT14 |
Room T14 |
Autonomous Vehicles: Safety & Systems |
Regular session |
Chair: Berman, Spring | Arizona State University |
Co-Chair: Zhao, Ding | Carnegie Mellon University |
|
11:45-12:00, Paper MoBT14.1 | |
>Learning to Collide: An Adaptive Safety-Critical Scenarios Generating Method |
> Video Attachment
|
|
Ding, Wenhao | Carnegie Mellon University |
Chen, Baiming | Tsinghua University |
Xu, Minjun | Carnegie Mellon University |
Zhao, Ding | Carnegie Mellon University |
Keywords: Autonomous Vehicle Navigation, Reinforcement Learning, Semantic Scene Understanding
Abstract: Long-tail and rare event problems become crucial when autonomous driving algorithms are applied in the real world. For the purpose of evaluating systems in challenging settings, we propose a generative framework to create safety-critical scenarios for evaluating specific task algorithms. We first represent the traffic scenarios with a series of autoregressive building blocks and generate diverse scenarios by sampling from the joint distribution of these blocks. We then train the generative model as an agent (or a generator) to search the risky scenario parameters for a given driving algorithm. We treat the driving algorithm as an environment that returns a high reward to the agent when a risky scenario is generated. The whole process is optimized by a policy gradient reinforcement learning method. Through experiments conducted on several scenarios in simulation, we demonstrate that the proposed framework generates safety-critical scenarios more efficiently than grid search or human design methods. Another advantage of this method is its adaptiveness to the routes and parameters.
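A simplified sketch of a policy-gradient search over scenario parameters, assuming a Gaussian policy and a user-supplied evaluate_scenario() function that runs the driving algorithm and returns a risk reward; this is an illustration of the REINFORCE idea, not the paper's generator:

# Minimal sketch (our simplification): REINFORCE on a Gaussian policy over
# scenario parameters; the "environment" rewards risky outcomes.
import torch

mu = torch.zeros(4, requires_grad=True)          # mean of scenario parameters
log_std = torch.zeros(4, requires_grad=True)
opt = torch.optim.Adam([mu, log_std], lr=1e-2)

def reinforce_step(evaluate_scenario):
    """evaluate_scenario(params) -> scalar risk reward (assumed provided)."""
    dist = torch.distributions.Normal(mu, log_std.exp())
    params = dist.sample()
    reward = evaluate_scenario(params)            # run the driving algorithm
    loss = -dist.log_prob(params).sum() * reward  # policy-gradient surrogate loss
    opt.zero_grad()
    loss.backward()
    opt.step()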
|
|
12:00-12:15, Paper MoBT14.2 | |
>Synchrono: A Scalable, Physics-Based Simulation Platform for Testing Groups of Autonomous Vehicles And/or Robots |
|
Taves, Jay | University of Wisconsin–Madison |
Elmquist, Asher | University of Wisconsin-Madison |
Young, Aaron | University of Wisconsin–Madison |
Serban, Radu | University of Wisconsin - Madison |
Negrut, Dan | University of Wisconsin |
Keywords: Autonomous Vehicle Navigation, Autonomous Agents, Automation Technologies for Smart Cities
Abstract: This contribution is concerned with the topic of using simulation to understand the behavior of groups of mutually interacting autonomous vehicles (AVs) or robots engaged in traffic/maneuvers that involve coordinated operation. We outline the structure of a multi-agent simulator called SynChrono and provide results pertaining to its scalability and ability to run real-time scenarios with humans in the loop. SynChrono is a scalable multi-agent, high-fidelity environment whose purpose is that of testing AV and robot control strategies. Four main components make up the core of the simulation platform: a physics-based dynamics engine that can simulate rigid and compliant systems, fluid-solid interactions, and deformable terrains; a module that provides sensing simulation; an agent-to-agent communication server; dynamic virtual worlds, which host the interacting agents operating in a coordinated scenario. The platform provides a virtual proving ground that can be used to answer questions such as ``what will an AV do when it skids on a patch of ice and moves one way while facing the other way?''; ``is a new agent-control strategy robust enough to handle unforeseen circumstances?''; and ``what is the effect of a loss of communication between agents engaged in a coordinated maneuver?''. Full videos based on work in the paper are available at https://tinyurl.com/ChronoIROS2020 and additional descriptions on the particular version of software used is available at https://github.com/uwsbel/publications-data/tree/master/2020/IROS.
|
|
12:15-12:30, Paper MoBT14.3 | |
>Output Only Fault Detection and Mitigation of Networks of Autonomous Vehicles |
> Video Attachment
|
|
Khalil, Abdelrahman | Memorial University of Newfoundland |
Al Janaideh, Mohammad | Memorial University & University of Toronto |
Aljanaideh, Khaled | Jordan University of Science and Technology |
Kundur, Deepa | University of Toronto |
Keywords: Autonomous Vehicle Navigation
Abstract: An autonomous vehicle platoon is a network of autonomous vehicles that communicate together to move in a desired way. One of the greatest threats to the operation of an autonomous vehicle platoon is the failure of either a physical component of a vehicle or a communication link between two vehicles. This failure affects the safety and stability of the autonomous vehicle platoon. Transmissibility-based health monitoring uses available sensor measurements for fault detection under unknown excitation and unknown dynamics of the network. After a fault is detected, a sliding mode controller is used to mitigate the fault. Different fault scenarios are considered including vehicle internal disturbances, cyber attacks, and communication delays. We apply the proposed approach to a bond graph model of the platoon and an experimental setup consisting of three autonomous robots.
|
|
12:30-12:45, Paper MoBT14.4 | |
>Go-CHART: A Miniature Remotely Accessible Self-Driving Car Robot |
> Video Attachment
|
|
Kannapiran, Shenbagaraj | Arizona State University |
Berman, Spring | Arizona State University |
Keywords: Intelligent Transportation Systems, Distributed Robot Systems, Education Robotics
Abstract: The Go-CHART is a four-wheel, skid-steer robot that resembles a 1:28 scale standard commercial sedan. It is equipped with an onboard sensor suite and both onboard and external computers that replicate many of the sensing and computation capabilities of a full-size autonomous vehicle. The Go-CHART can autonomously navigate a small-scale traffic testbed, responding to its sensor input with programmed controllers. Alternatively, it can be remotely driven by a user who views the testbed through the robot's four camera feeds, which facilitates safe, controlled experiments on driver interactions with driverless vehicles. We demonstrate the Go-CHART's ability to perform lane tracking and detection of traffic signs, traffic signals, and other Go-CHARTs in real-time, utilizing an external GPU that runs computationally intensive computer vision and deep learning algorithms.
|
|
MoBT15 |
Room T15 |
Autonomous Vehicles: Sensors |
Regular session |
Chair: Urtasun, Raquel | University of Toronto |
Co-Chair: Bonnabel, Silvere | Mines ParisTech |
|
11:45-12:00, Paper MoBT15.1 | |
>An RLS-Based Instantaneous Velocity Estimator for Extended Radar Tracking |
> Video Attachment
|
|
Gosala, Nikhil Bharadwaj | ETH Zürich |
Meng, Xiaoli | APTIV AM |
Keywords: Intelligent Transportation Systems, Autonomous Vehicle Navigation, Range Sensing
Abstract: Radar sensors have become an important part of the perception sensor suite due to their long range and their ability to work in adverse weather conditions. However, several shortcomings such as large amounts of noise and extreme sparsity of the point cloud result in them not being used to their full potential. In this paper, we present a novel Recursive Least Squares (RLS) based approach to estimate the instantaneous velocity of dynamic objects in real-time that is capable of handling large amounts of noise in the input data stream. We also present an end-to-end pipeline to track extended objects in real-time that uses the computed velocity estimates for data association and track initialisation. The approaches are evaluated using several real-world inspired driving scenarios that test the limits of these algorithms. It is also experimentally proven that our approaches run in real-time with frame execution time not exceeding 30 ms even in dense traffic scenarios, thus allowing for their direct implementation on autonomous vehicles.
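A minimal recursive least squares sketch in the spirit of the estimator described above, assuming each radar detection supplies an azimuth and a radial Doppler speed of a rigid object; the forgetting factor and initialisation are illustrative, not the paper's tuning:

# Minimal sketch (assumption: per-detection azimuth and radial Doppler speed):
# RLS estimate of an object's instantaneous 2D velocity, updated per detection.
import numpy as np

class RLSVelocity:
    def __init__(self, forgetting=0.98):
        self.w = np.zeros(2)            # estimated (vx, vy)
        self.P = np.eye(2) * 1e3        # inverse correlation matrix
        self.lam = forgetting

    def update(self, azimuth, radial_speed):
        h = np.array([np.cos(azimuth), np.sin(azimuth)])   # measurement row
        k = self.P @ h / (self.lam + h @ self.P @ h)        # RLS gain
        self.w += k * (radial_speed - h @ self.w)           # innovation update
        self.P = (self.P - np.outer(k, h) @ self.P) / self.lam
        return self.w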
|
|
12:00-12:15, Paper MoBT15.2 | |
>Lidar Essential Beam Model for Accurate Width Estimation of Thin Poles |
> Video Attachment
|
|
Long, Yunfei | Michigan State University |
Morris, Daniel | Michigan State University |
Keywords: Computer Vision for Transportation, Computer Vision for Automation, Range Sensing
Abstract: While Lidar beams are often represented as rays, they actually have finite beam width and this width impacts the measured shape and size of objects in the scene. Here we investigate the effects of beam width on measurements of thin objects such as vertical poles. We propose a model for beam divergence and show how this can explain both object dilation and erosion. We develop a calibration method to estimate beam divergence angle. This calibration method uses one or more vertical poles observed from a Lidar on a moving platform. In addition, we derive an incremental method for using the calibrated beam angle to obtain accurate estimates of thin object diameters, observed from a Lidar on a moving platform. Our method achieves significantly more accurate diameter estimates than is obtained when beam divergence is ignored.
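A first-order illustration of the dilation effect, under our own simplifying assumption that a full divergence angle beta widens a thin pole's apparent angular extent by roughly beta; the paper's calibrated incremental method is more involved:

# First-order illustration (our simplification, not the paper's derived model):
# subtract the beam-divergence dilation from the apparent angular width.
def corrected_diameter(angular_width, rng, beam_divergence):
    """angular_width: apparent angular extent of the pole [rad]
       rng: range to the pole [m]
       beam_divergence: calibrated full divergence angle [rad]"""
    return max(0.0, angular_width - beam_divergence) * rng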
|
|
12:15-12:30, Paper MoBT15.3 | |
>MVLidarNet: Real-Time Multi-Class Scene Understanding for Autonomous Driving Using Multiple Views |
> Video Attachment
|
|
Chen, Ke | Nvidia |
Smolyanskiy, Nikolai | NVIDIA |
Oldja, Ryan | NVIDIA |
Birchfield, Stan | NVIDIA Corporation |
Popov, Alexander (Sasha) | CSE, UMN |
Wehr, David | NVIDIA |
Eden, Ibrahim | NVIDIA |
Pehserl, Joachim | Microsoft |
Keywords: Autonomous Vehicle Navigation, Computer Vision for Transportation, Intelligent Transportation Systems
Abstract: Autonomous driving requires the inference of actionable information such as detecting and classifying objects, and determining the drivable space. To this end, we present Multi-View LidarNet (MVLidarNet), a two-stage deep neural network for multi-class object detection and drivable space segmentation using multiple views of a single LiDAR point cloud. The first stage processes the point cloud projected onto a perspective view in order to semantically segment the scene. The second stage then processes the point cloud (along with semantic labels from the first stage) projected onto a bird's eye view, to detect and classify objects. Both stages use an encoder-decoder architecture. We show that our multi-view, multi-stage, multi-class approach is able to detect and classify objects while simultaneously determining the drivable space using a single LiDAR scan as input, in challenging scenes with more than one hundred vehicles and pedestrians at a time. The system operates efficiently at 150 fps on an embedded GPU designed for a self-driving car, including a postprocessing step to maintain identities over time. We show results on both KITTI and a much larger internal dataset, thus demonstrating the method's ability to scale by an order of magnitude.
|
|
12:30-12:45, Paper MoBT15.4 | |
>The Importance of Prior Knowledge in Precise Multimodal Prediction |
> Video Attachment
|
|
Casas Romero, Sergio | Uber ATG, University of Toronto |
Gulino, Cole | Uber ATG |
Suo, Simon | University of Toronto |
Urtasun, Raquel | University of Toronto |
Keywords: Autonomous Vehicle Navigation, Deep Learning for Visual Perception, Robot Safety
Abstract: Roads have well defined geometries, topologies, and traffic rules. While this has been widely exploited in motion planning methods to produce maneuvers that obey the law, little work has been devoted to utilize these priors in perception and motion forecasting methods. In this paper we propose to incorporate these structured priors as a loss function. In contrast to imposing hard constraints, this approach allows the model to handle non-compliant maneuvers when those happen in the real world. Safe motion planning is the end goal, and thus a probabilistic characterization of the possible future developments of the scene is key to choose the plan with the lowest expected cost. Towards this goal, we design a framework that leverages REINFORCE to incorporate non-differentiable priors over sample trajectories from a probabilistic model, thus optimizing the whole distribution. We demonstrate the effectiveness of our approach on real-world self-driving datasets containing complex road topologies and multi-agent interactions. Our motion forecasts not only exhibit better precision and map understanding, but most importantly result in safer motion plans taken by our self-driving vehicle. We emphasize that despite the importance of this evaluation, it has been often overlooked by previous perception and motion forecasting works.
|
|
12:45-13:00, Paper MoBT15.5 | |
>Simultaneous Estimation of Vehicle Position and Data Delays Using Gaussian Process Based Moving Horizon Estimation |
|
Mori, Daiki | Toyota Central R&D Labs. Inc |
Hattori, Yoshikazu | Toyota Central Research and Development Laboratories, Inc |
Keywords: Autonomous Vehicle Navigation, Localization, Sensor Fusion
Abstract: Automobiles or robots with recent advanced autonomous systems are equipped with multiple types of sensors to overcome different weather and geographical conditions. These sensors generally have various data delays and sampling rates. Additionally, the communication delays or time synchronization errors between the onboard computers significantly affect the robustness and accuracy of localization for autonomous vehicles. In this paper, the simultaneous estimation of vehicle position and sensor delays using a Gaussian process based moving horizon estimation (GP-MHE) is presented. The GP-MHE can estimate the unknown delays of multiple sensors with the resolution less than that of GP-MHE sampling rate. The localization performance of GP-MHE was confirmed using full-vehicle simulator, then evaluated in a real vehicle experiment on a highway scenario. Experimental result verified the sufficient localization accuracy of sub 0.3m using data that had irregular sampling rate and delay of more than 150ms. The proposed algorithm extends the capability of integrating various data with large unknown delays for vehicles, robots, drones and remote autonomy.
|
|
13:00-13:15, Paper MoBT15.6 | |
>A Real-Time Unscented Kalman Filter on Manifolds for Challenging AUV Navigation |
|
Cantelobre, Theophile | Mines ParisTech |
Chahbazian, Clément | Schlumberger-Doll Research |
Croux, Arnaud | Schlumberger-Doll Research |
Bonnabel, Silvere | Mines ParisTech |
Keywords: Autonomous Vehicle Navigation, Marine Robotics, Sensor Fusion
Abstract: We consider the problem of localization and navigation of Autonomous Underwater Vehicles (AUV) in the context of high performance subsea asset inspection missions in deep water. We propose a solution based on the recently introduced Unscented Kalman Filter on Manifolds (UKF-M) for onboard navigation to estimate the robot’s location, attitude and velocity, using a precise round and rotating Earth navigation model. Our algorithm has the merit of seamlessly handling the nonlinearity of attitude, and is far simpler to implement than the extended Kalman filter (EKF), which is the state of the art in the navigation industry. The unscented transform notably spares the user the computation of Jacobians and lends itself well to fast prototyping in the context of multi-sensor data fusion. We also provide the community with feedback about the implementation, and the execution time is shown to be compatible with real-time operation. Realistic extensive Monte-Carlo simulations prove that uncertainty is estimated accurately by the filter, and illustrate its convergence ability. Real experiments in the context of a 900 m deep dive near Marseille (France) illustrate the relevance of the method.
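For reference, a generic (Euclidean) unscented transform of the kind the filter builds on; the UKF-M itself operates on manifolds, so this is only an illustrative sketch with conventional sigma-point weights:

# Minimal sketch of the unscented transform: propagate sigma points through a
# nonlinear function f and recover the transformed mean and covariance.
import numpy as np

def unscented_transform(mean, cov, f, kappa=1.0):
    n = len(mean)
    S = np.linalg.cholesky((n + kappa) * cov)       # square root of scaled covariance
    sigma = [mean] + [mean + S[:, i] for i in range(n)] + [mean - S[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 0.5 / (n + kappa))
    w[0] = kappa / (n + kappa)
    Y = np.array([f(s) for s in sigma])             # propagate each sigma point
    y_mean = w @ Y
    y_cov = sum(wi * np.outer(y - y_mean, y - y_mean) for wi, y in zip(w, Y))
    return y_mean, y_cov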
|
|
MoBT16 |
Room T16 |
Perception for Autonomous Driving |
Regular session |
Chair: Xiang, Zhiyu | Zhejiang University |
|
11:45-12:00, Paper MoBT16.1 | |
>DSSF-Net: Dual-Task Segmentation and Self-Supervised Fitting Network for End-To-End Lane Mark Detection |
|
Du, Wentao | Zhejiang University |
Xiang, Zhiyu | Zhejiang University |
Chen, Yiman | Zhejiang University |
Chen, Shuya | Zhejiang University |
Keywords: Computer Vision for Transportation, Deep Learning for Visual Perception, AI-Based Methods
Abstract: Lane mark detection is one of the key tasks for autonomous driving systems. Accurate detection of lane marks under complex urban environments remains a challenge. In this paper, an end-to-end lane mark detection network named DSSF-net, which is capable of directly outputting the accurate fitted lane curves, is proposed. First, a dual-task segmentation framework for jointing lane category prediction and spatial partition is presented. An IoU-based loss function is put forward to tackle the severely imbalanced category distribution problem. Then a fully self-supervised curve fitting network is proposed to directly output the parameters of lane line upon the probability map. To achieve better accuracy, the fitting network is trained with two sub-stages: coarse regression and confidence-based optimization. Finally the entire DSSF-net is implemented end-to-end. Comprehensive experiments conducted on challenging CULane dataset show that our model achieves 74.9% in F1-score and outperforms the state-of-the-art models.
|
|
12:00-12:15, Paper MoBT16.2 | |
>Lane Marking Verification for High Definition Map Maintenance Using Crowdsourced Images |
|
Li, Binbin | Texas A&M University |
Song, Dezhen | Texas A&M University |
Kingery, Aaron | Texas A&M University |
Zheng, Dongfang | Tencent |
Xu, Yiliang | Tencent America |
Guo, Huiwen | Tencent America |
Keywords: Computer Vision for Transportation, Mapping, Visual-Based Navigation
Abstract: Autonomous vehicles often rely on high-definition (HD) maps to navigate around. However, lane markings (LMs) are not necessarily static objects due to wear & tear from usage and road reconstruction & maintenance. Therefore, the wrong matching between LMs in the HD map and sensor readings may lead to erroneous localization or even cause traffic accidents. It is imperative to keep LMs up-to-date. However, frequently recollecting data to update HD maps is cost-prohibitive. Here we propose to utilize crowdsourced images from multiple vehicles at different times to help verify LMs for HD map maintenance. We obtain the LM distribution in the image space by considering the camera pose uncertainty in perspective projection. Both LMs in HD map and LMs in the image are treated as observations of LM distributions which allow us to construct posterior conditional distribution (a.k.a Bayesian belief functions) of LMs from either sources. An LM is consistent if belief functions from the map and the image satisfy statistical hypothesis testing. We further extend the Bayesian belief model into a sequential belief update using crowdsourced images. LMs with a higher probability of existence are kept in the HD map whereas those with a lower probability of existence are removed from the HD map. We verify our approach using real data. Experimental results show that our method is capable of verifying and updating LMs in the HD map.
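A minimal sketch of a sequential existence update in log-odds form, with hypothetical detector hit and false-alarm rates; the paper's belief functions additionally model camera pose uncertainty and hypothesis testing, which are not shown here:

# Minimal sketch (our simplification): log-odds belief that a lane marking
# still exists, updated with each crowdsourced image observation.
import math

def update_existence(log_odds, detected, p_hit=0.9, p_false=0.2):
    """detected: whether the lane marking was observed in this image.
       p_hit / p_false are hypothetical detector rates."""
    if detected:
        log_odds += math.log(p_hit / p_false)
    else:
        log_odds += math.log((1.0 - p_hit) / (1.0 - p_false))
    return log_odds

def existence_probability(log_odds):
    return 1.0 / (1.0 + math.exp(-log_odds))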
|
|
12:15-12:30, Paper MoBT16.3 | |
>Toward Hierarchical Self-Supervised Monocular Absolute Depth Estimation for Autonomous Driving Applications |
> Video Attachment
|
|
Xue, Feng | Tongji University, Shanghai |
Zhuo, Guirong | Tongji University, Shanghai |
Huang, Ziyuan | National University of Singapore |
Fu, Wufei | Tongji University |
Wu, Zhuoyue | Tongji University |
Ang Jr, Marcelo H | National University of Singapore |
Keywords: Computer Vision for Transportation, Deep Learning for Visual Perception
Abstract: In recent years, self-supervised methods for monocular depth estimation have rapidly become a significant branch of the depth estimation task, especially for autonomous driving applications. Despite the high overall precision achieved, current methods still suffer from a) imprecise object-level depth inference and b) an uncertain scale factor. The former problem causes texture copy or inaccurate object boundaries, and the latter requires current methods to have an additional sensor like LiDAR to provide depth ground-truth or a stereo camera as additional training input, which makes them difficult to implement. In this work, we propose to address these two problems together by introducing DNet. Our contributions are twofold: a) a novel dense connected prediction (DCP) layer is proposed to provide better object-level depth estimation, and b) specifically for autonomous driving scenarios, dense geometrical constraints (DGC) are introduced so that a precise scale factor can be recovered without additional cost for autonomous vehicles. Extensive experiments have been conducted, and both the DCP layer and the DGC module are shown to effectively solve the aforementioned problems. Thanks to the DCP layer, object boundaries can now be better distinguished in the depth map and the depth is more continuous at the object level. It is also demonstrated that the performance of using DGC for scale recovery is comparable to that of using ground-truth information, when the camera height is given and ground points take up more than 1.03% of the pixels. Code is available at https://github.com/TJ-IPLab/DNet.
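A hedged sketch of scale recovery from a known camera height, under a flat-ground, zero-pitch assumption with hypothetical inputs (predicted relative depths of ground pixels and their image rows); this is our reading of the idea, not the released implementation:

# Minimal sketch (flat ground, zero pitch assumed): estimate the un-scaled
# camera height from ground-pixel depths, then recover the metric scale.
import numpy as np

def recover_scale(ground_depths, ground_rows_v, K, real_cam_height):
    """ground_depths: relative depths of pixels classified as ground,
       ground_rows_v: their image row coordinates, K: intrinsics,
       real_cam_height: known camera height in metres."""
    fy, cy = K[1, 1], K[1, 2]
    # Camera height above each ground point in relative units: h = z * (v - cy) / fy.
    rel_heights = ground_depths * (ground_rows_v - cy) / fy
    return real_cam_height / np.median(rel_heights)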
|
|
12:30-12:45, Paper MoBT16.4 | |
>Label Efficient Visual Abstractions for Autonomous Driving |
> Video Attachment
|
|
Behl, Aseem | MPI Tübingen |
Chitta, Kashyap | Max Planck Institute for Intelligent Systems |
Prakash, Aditya | Max Planck Institute for Intelligent Systems |
Ohn-Bar, Eshed | Max Planck Institute |
Geiger, Andreas | Max Planck Institute for Intelligent Systems, Tübingen |
Keywords: Computer Vision for Transportation, Autonomous Vehicle Navigation, Imitation Learning
Abstract: It is well known that semantic segmentation can be used as an effective intermediate representation for learning driving policies. However, the task of street scene semantic segmentation requires expensive annotations. Furthermore, segmentation algorithms are often trained irrespective of the actual driving task, using auxiliary image-space loss functions which are not guaranteed to maximize driving metrics such as distance traveled per intervention or safety. In this work, we seek to quantify the impact of reducing segmentation annotation costs on learned behavior cloning agents. We analyze several segmentation-based intermediate representations. We use these visual abstractions to systematically study the trade-off between annotation efficiency and driving performance, i.e., the types of classes labeled, the number of image samples used to learn the visual abstraction model, and their granularity (e.g., object masks vs. 2D bounding boxes). Our analysis uncovers several practical insights into how segmentation-based abstractions can be exploited in a more label efficient manner. Surprisingly, we find that state-of-the-art driving performance can be achieved with orders of magnitude reduction in annotation cost. Beyond label efficiency, we find several additional training benefits when leveraging visual abstractions, such as a significant reduction in the variance of the learned policy when compared to state-of-the-art end-to-end driving models.
|
|
12:45-13:00, Paper MoBT16.5 | |
>Learning Accurate and Human-Like Driving Using Semantic Maps and Attention |
|
Hecker, Simon | ETH Zurich |
Dai, Dengxin | ETH Zurich |
Liniger, Alexander | ETH Zurich |
Hahner, Martin | ETH Zurich |
Van Gool, Luc | ETH Zurich |
Keywords: Computer Vision for Transportation, Big Data in Robotics and Automation, Learning from Demonstration
Abstract: This paper investigates how end-to-end driving models can be improved to drive more accurately and human-like. To tackle the first issue we exploit semantic and visual maps from HERE Technologies and augment the existing Drive360 dataset with such. The maps are used in an attention mechanism that promotes segmentation confidence masks, thus focusing the network on semantic classes in the image that are important for the current driving situation. Human-like driving is achieved using adversarial learning, by not only minimizing the imitation loss with respect to the human driver but by further defining a discriminator, that forces the driving model to produce action sequences that are human-like. Our models are trained and evaluated on the Drive360 + HERE dataset, which features 60 hours and 3000 km of real-world driving data. Extensive experiments show that our driving models are more accurate and behave more human-like than previous methods.
|
|
13:00-13:15, Paper MoBT16.6 | |
>IDDA: A Large-Scale Multi-Domain Dataset for Autonomous Driving |
|
Alberti, Emanuele | Politecnico Di Torino |
Tavera, Antonio | Politecnico Di Torino |
Masone, Carlo | Max Planck Institute for Biological Cybernetics |
Caputo, Barbara | Sapienza University |
Keywords: Semantic Scene Understanding, Deep Learning for Visual Perception, Computer Vision for Transportation
Abstract: Semantic Segmentation is key in autonomous driving. Using deep visual learning architectures is not trivial in this context, because of the challenges in creating suitable large scale annotated datasets. This issue has been traditionally circumvented through the use of synthetic datasets, that have become a popular resource in this field. They have been released with the need to develop semantic segmentation algorithms able to close the visual domain shift between the training and test data. Although exacerbated by the use of artificial data, the problem is extremely relevant in this field even when training on real data. Indeed, weather conditions, viewpoint changes and variations in the city appearances can vary considerably from car to car, and even at test time for a single, specific vehicle. How to deal with domain adaptation in semantic segmentation, and how to leverage effectively several different data distributions (source domains) are important research questions in this field. To support work in this direction, this paper contributes a new large scale, synthetic dataset for semantic segmentation with more than 100 different source visual domains. The dataset has been created to explicitly address the challenges of domain shift between training and test data in various weather and view point conditions, in seven different city types. Extensive benchmark experiments assess the dataset, showcasing open challenges for the current state of the art. The dataset will be available at: https://idda-dataset.github.io/home/.
|
|
MoBT17 |
Room T17 |
Planning for Autonomous Vehicles I |
Regular session |
Chair: Haddon, David | CSIRO |
Co-Chair: Jiang, Jingjing | Loughborough University |
|
11:45-12:00, Paper MoBT17.1 | |
>PaintPath: Defining Path Directionality in Maps for Autonomous Ground Vehicles |
|
Bowyer, Riley | CSIRO |
Lowe, Tom | CSIRO |
Borges, Paulo Vinicius Koerich | CSIRO |
Bandyopadhyay, Tirthankar | CSIRO |
Löw, Tobias | ETH Zürich |
Haddon, David | CSIRO |
Keywords: Field Robots, Autonomous Vehicle Navigation, Motion and Path Planning
Abstract: Directionality in path planning is essential for efficient autonomous navigation in a number of real-world environments. In many map-based navigation scenarios, the viable path from a given point A to point B is not the same as the viable path from B to A. We present a method that automatically incorporates preferred navigation directionality into a path planning costmap. This ‘preference’ is represented by coloured paths in the costmap. The colourisation is obtained based on an analysis of the driving trajectory generated by the robot as it navigates through the environment. Hence, our method augments this driving trajectory by intelligently colouring it according to the orientation of the robot during the run. Creating an analogy between the vehicle orientation angle and the hue angle in the Hue-Saturation-Value colour space, the method uses the hue, saturation and value components to encode the direction, directionality and scalar cost, respectively, into a costmap image. We describe how we modify the A* algorithm to incorporate this information to plan direction-aware vehicle paths. Our experiments with LiDAR-based localisation and autonomous driving in real environments illustrate the applicability of the method.
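A small sketch of the orientation-to-colour encoding idea, with our own illustrative choices for how directionality and cost map to saturation and value; the paper's exact mapping may differ:

# Minimal sketch (parameter choices are ours): map yaw to hue, a directionality
# flag to saturation, and the normalised traversal cost to value for one pixel.
import colorsys
import math

def encode_costmap_pixel(yaw_rad, directional, cost):
    """yaw_rad in [-pi, pi); cost normalised to [0, 1]."""
    hue = (yaw_rad % (2.0 * math.pi)) / (2.0 * math.pi)   # angle -> hue in [0, 1)
    saturation = 1.0 if directional else 0.0              # directionality flag
    value = max(0.0, min(1.0, 1.0 - cost))                # cheap cells are bright
    r, g, b = colorsys.hsv_to_rgb(hue, saturation, value)
    return int(r * 255), int(g * 255), int(b * 255)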
|
|
12:00-12:15, Paper MoBT17.2 | |
>Probabilistic Multi-Modal Trajectory Prediction with Lane Attention for Autonomous Vehicles |
|
Luo, Chenxu | Johns Hopkins University |
Sun, Lin | HKUST, Stanford, Samsung |
Dabiri, Dariush | Samsung Electronics |
Yuille, Alan | Johns Hopkins University |
Keywords: Autonomous Vehicle Navigation, Autonomous Agents, Intelligent Transportation Systems
Abstract: Trajectory prediction is crucial for autonomous vehicles. The planning system not only needs to know the current state of the surrounding objects but also their possible states in the future. As for vehicles, their trajectories are significantly influenced by the lane geometry and how to effectively use the lane information is of active interest. Most of the existing works use rasterized maps to explore road information, which does not distinguish different lanes. In this paper, we propose a novel instance-aware representation for lane representation. By integrating the lane features and trajectory features, a goal-oriented lane attention module is proposed to predict the future locations of the vehicle. We show that the proposed lane representation together with the lane attention module can be integrated into the widely used encoder-decoder framework to generate diverse predictions. Most importantly, each generated trajectory is associated with a probability to handle the uncertainty. Our method does not suffer from collapsing to one behavior modal and can cover diverse possibilities. Extensive experiments and ablation studies on the benchmark datasets corroborate the effectiveness of our proposed method. Notably, our proposed method ranks third place in the Argoverse motion forecasting competition at NeurIPS 2019.
|
|
12:15-12:30, Paper MoBT17.3 | |
>Safe Planning for Self-Driving Via Adaptive Constrained ILQR |
> Video Attachment
|
|
Pan, Yanjun | Carnegie Mellon University |
Lin, Qin | Carnegie Mellon University |
Shah, Het | Indian Institute of Technology Kharagpur |
Dolan, John M. | Carnegie Mellon University |
Keywords: Motion and Path Planning, Collision Avoidance
Abstract: Constrained Iterative Linear Quadratic Regulator (CILQR), a variant of ILQR, has been recently proposed for motion planning problems of autonomous vehicles to deal with constraints such as obstacle avoidance and reference tracking. However, the previous work considers either deterministic trajectories or persistent prediction for target dynamical obstacles. The other drawback is lack of generality - it requires manual weight tuning for different scenarios. In this paper, two significant improvements are achieved. Firstly, a two-stage uncertainty-aware prediction is proposed. The short-term prediction with safety guarantee based on reachability analysis is responsible for dealing with extreme maneuvers conducted by target vehicles. The long-term prediction leveraging an adaptive least square filter preserves the long-term optimality of the planned trajectory since using reachability only for long-term prediction is too pessimistic and makes the planner over-conservative. Secondly, to allow a wider coverage over different scenarios and to avoid tedious parameter tuning case by case, this paper designs a scenario-based analytical function taking the states from the ego vehicle and the target vehicle as input, and carrying weights of a cost function as output. It allows the ego vehicle to execute multiple behaviors (such as lane-keeping and overtaking) under a single planner. We demonstrate safety, effectiveness, and real-time performance of the proposed planner in simulations.
|
|
12:30-12:45, Paper MoBT17.4 | |
>Automatic Lane Change Maneuver in Dynamic Environment Using Model Predictive Control Method |
|
Li, Zhaolun | Loughborough University |
Jiang, Jingjing | Loughborough University |
Chen, Wen-Hua | Loughborough University |
Keywords: Motion and Path Planning, Autonomous Vehicle Navigation
Abstract: The lane change maneuver is one of the typical maneuvers in various driving situations, and the automatic lane change function is therefore one of the key functions for autonomous vehicles. Much research has been conducted in this field, but most existing work focuses on solutions for static environments and assumes that the surrounding vehicles travel at constant speeds. In reality, however, if not all vehicles on the road are fully autonomous, the situation can be much more complicated and the ego vehicle has to deal with a dynamic environment. This paper proposes a Model Predictive Control (MPC)-based method to achieve automatic lane changes in a dynamic environment. The algorithm uses a two-wheel dynamic bicycle model, which combines the longitudinal and lateral motion of the ego vehicle, together with a utility function that automatically determines the target lane. The simulation results demonstrate the capability of the proposed algorithm in a dynamic environment.
|
|
12:45-13:00, Paper MoBT17.5 | |
>Real-Time Optimal Control of an Autonomous RC Car with Minimum-Time Maneuvers and a Novel Kineto-Dynamical Model |
> Video Attachment
|
|
Pagot, Edoardo | University of Trento |
Piccinini, Mattia | University of Trento |
Biral, Francesco | University of Trento |
Keywords: Motion and Path Planning, Optimization and Optimal Control, Autonomous Vehicle Navigation
Abstract: In this paper, we present a real-time non-linear model-predictive control (NMPC) framework to perform minimum-time motion planning for autonomous racing cars. We introduce an innovative kineto-dynamical vehicle model, able to accurately predict non-linear longitudinal and lateral vehicle dynamics. The main parameters of this vehicle model can be tuned with only experimental or simulated maneuvers, aimed to identify the handling diagram and the maximum performance G-G envelope. The kineto-dynamical model is adopted to generate on-line minimum time trajectories with an indirect optimal control method. The motion planning framework is applied to control an autonomous 1:8 RC vehicle near the limits of handling along a test circuit. Finally, the effectiveness of the proposed algorithms is illustrated by comparing the experimental results with the solution of an off-line minimum-time optimal control problem.
|
|
MoBT18 |
Room T18 |
Planning for Autonomous Vehicles II |
Regular session |
Chair: Liu, Lantao | Indiana University |
Co-Chair: Bopardikar, Shaunak D. | Michigan State University |
|
11:45-12:00, Paper MoBT18.1 | |
>Optimization-Based Hierarchical Motion Planning for Autonomous Racing |
> Video Attachment
|
|
Vazquez, Jose | ETH Zürich |
Bruehlmeier, Marius | ETH Zürich |
Liniger, Alexander | ETH Zurich |
Rupenyan, Alisa | ETH Zürich |
Lygeros, John | ETH Zurich |
Keywords: Motion and Path Planning, Optimization and Optimal Control
Abstract: In this paper we propose a hierarchical controller for autonomous racing where the same vehicle model is used in a two level optimization framework for motion planning. The high-level controller computes a trajectory that minimizes the lap time, and the low-level nonlinear model predictive path following controller tracks the computed trajectory online. Following a computed optimal trajectory avoids online planning and enables fast computational times. The efficiency is further enhanced by the coupling of the two levels through a terminal constraint, computed in the high-level controller. Including this constraint in the real-time optimization level ensures that the prediction horizon can be shortened, while safety is guaranteed. This proves crucial for the experimental validation of the approach on a full size driverless race car. The vehicle in question won two international student racing competitions using the proposed framework; moreover, our hierarchical controller achieved an improvement of 20% in the lap time compared to the state of the art result achieved using a very similar car and track.
|
|
12:00-12:15, Paper MoBT18.2 | |
>Secure Route Planning Using Dynamic Games with Stopping States |
> Video Attachment
|
|
Banik, Sandeep | Michigan State University |
Bopardikar, Shaunak D. | Michigan State University |
Keywords: Motion and Path Planning, Autonomous Vehicle Navigation, Intelligent Transportation Systems
Abstract: This paper studies a motion planning problem over a roadmap in which a vehicle aims to travel from a start to a destination in presence of an attacker who can launch a cyber-attack on the vehicle over any one edge of the roadmap. The vehicle (defender) has the capability to switch on/off a countermeasure that can detect and permanently disable the attack if it occurs concurrently. We first model the problem of traversing an edge as a zero-sum dynamic game with a stopping state, termed as an edge-game played between an attacker and defender. We characterize Nash equilibria of the edge-game and provide closed form expressions for the case of two actions per player. We further provide an analytic and approximate expression on the value of an edge-game and characterize conditions under which it grows sub-linearly with the length of the edge. We study the sensitivity of Nash equilibrium to the (i) cost of using the countermeasure, (ii) cost of motion and (iii) benefit of disabling the attack. The solution of the edge-game is used to formulate and solve the secure route planning problem. We design an efficient heuristic by converting the problem to a shortest path problem using the edge cost as the solution of corresponding edge-games. We illustrate our findings through several insightful simulations.
|
|
12:15-12:30, Paper MoBT18.3 | |
>Online Planning in Uncertain and Dynamic Environment in the Presence of Multiple Mobile Vehicles |
|
Xu, Junhong | Indiana University |
Yin, Kai | HomeAway |
Liu, Lantao | Indiana University |
Keywords: Motion and Path Planning, Autonomous Vehicle Navigation
Abstract: We investigate the autonomous navigation of a mobile robot in the presence of other moving vehicles under time-varying uncertain environmental disturbances. We first predict the future state distributions of other vehicles to account for their uncertain behaviors affected by the time-varying disturbances. We then construct a dynamic-obstacle-aware reachable space that contains states with high probabilities to be reached by the robot, within which the optimal policy is searched. Since, in general, the dynamics of both the vehicle and the environmental disturbances are nonlinear, we utilize a nonlinear Gaussian filter -- the unscented transform -- to approximate the future state distributions. Finally, the forward reachable space computation and backward policy search are iterated until convergence. Our simulation evaluations have revealed significant advantages of this proposed method in terms of computation time, decision accuracy, and planning reliability.
|
|
12:30-12:45, Paper MoBT18.4 | |
>Minimum Time - Minimum Jerk Optimal Traffic Management for AGVs |
> Video Attachment
|
|
Frego, Marco | University of Trento |
Bevilacqua, Paolo | University of Trento |
Divan, Stefano | University of Trento |
Zenatti, Fabiano | University of Trento |
Palopoli, Luigi | University of Trento |
Biral, Francesco | University of Trento |
Fontanelli, Daniele | University of Trento |
Keywords: Optimization and Optimal Control, Collision Avoidance, Motion and Path Planning
Abstract: A combined minimum time - minimum jerk traffic management system for the vehicle coordination in an automated warehouse is presented. The algorithm is organised in two steps: in the first, a simple minimum time optimisation problem is solved, in the second step, this time-optimal solution is refined into a smooth minimum jerk plan for the autonomous forklifts in order to avoid impulsive forces that may unbalance the vehicle. For the first step, we propose a novel approach based on Linear Programming, which guarantees convergence to the optimal solution starting from a feasible point, and a low computational overhead, which makes it suitable for real-time applications. The output of this step is a piecewise constant velocity profile for all the moving robots that ensures collision avoidance. The second step takes such speed profile and generates its smoothed version, which minimises the jerk while respecting the same levels of safety of the solution generated by the first step. We discuss the different solutions with simulation and experimental data.
|
|
12:45-13:00, Paper MoBT18.5 | |
>Non-Gaussian Chance-Constrained Trajectory Planning for Autonomous Vehicles under Agent Uncertainty |
|
Wang, Allen | Massachusetts Institute of Technology |
M. Jasour, Ashkan | MIT |
Williams, Brian | MIT |
Keywords: Motion and Path Planning, Probability and Statistical Methods, Intelligent Transportation Systems
Abstract: Agent behavior is arguably the greatest source of uncertainty in trajectory planning for autonomous vehicles. This problem has motivated significant amounts of work in the behavior prediction community on learning rich distributions of the future states and actions of agents. However, most current works on chance-constrained trajectory planning under agent or obstacle uncertainty either assume Gaussian uncertainty or linear constraints, which is limiting, or requires sampling, which can be computationally intractable to encode in an optimization problem. In this paper, we extend the state-of-the-art by presenting a methodology to upper-bound chance-constraints defined by polynomials and mixture models with potentially non-Gaussian components. Our method achieves its generality by using statistical moments of the distributions in concentration inequalities to upper-bound the probability of constraint violation. With this method, optimization-based trajectory planners can plan trajectories that are chance-constrained with respect to a wide range of distributions representing predictions of agent future positions. In experiments, we show that the resulting optimization problem can be solved with state-of-the-art nonlinear program solvers to plan trajectories fast enough for use online.
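As a concrete instance of a moment-based bound of the kind described above, Cantelli's inequality implies P(g > 0) <= delta whenever mean(g) + sqrt((1-delta)/delta) * std(g) <= 0 for a constraint value g with known first two moments; a small helper illustrating this (the paper's machinery is more general):

# Worked example (one specific concentration inequality, not the paper's full
# method): check whether a moment-based Cantelli bound certifies P(g > 0) <= delta.
import math

def cantelli_feasible(mean_g, std_g, delta):
    """True if mean_g + sqrt((1 - delta) / delta) * std_g <= 0, which by
       Cantelli's inequality guarantees P(g > 0) <= delta."""
    return mean_g + math.sqrt((1.0 - delta) / delta) * std_g <= 0.0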
|
|
MoCT1 |
Room T1 |
Agricultural Automation |
Regular session |
Chair: Williams, Ryan | Virginia Polytechnic Institute and State University |
Co-Chair: Stachniss, Cyrill | University of Bonn |
|
14:00-14:15, Paper MoCT1.1 | |
>Segmentation-Based 4D Registration of Plants Point Clouds for Phenotyping |
|
Magistri, Federico | University of Bonn |
Chebrolu, Nived | University of Bonn |
Stachniss, Cyrill | University of Bonn |
Keywords: Robotics in Agriculture and Forestry, Computer Vision for Other Robotic Applications, Mapping
Abstract: Plant phenotyping, i.e., the task of measuring plant traits to describe the anatomy and physiology of plants, is a central task in crop science and plant breeding. Standard methods require intrusive and time-consuming operations involving a lot of manual labor. Cameras and range sensors paired with 3D reconstruction methods can support phenotyping, but the task poses several challenges. In this paper, we address the problem of finding correspondences between plants recorded at different points in time in order to track phenotypic traits in an autonomous fashion. Our approach makes use of successive learning stages to compute a minimal representation of plant point clouds encoding both topology and semantic information. In this way, we are able to tackle the data association problem for 4D point cloud data of plants. We tested our approach on 3D+time sequences of point clouds from different plant species. The experiments presented in this paper suggest that our 4D matching approach allows for non-rigid registration of the plants. Moreover, we show that our method allows for tracking different phenotypic traits at the organ level, forming a basis for automated temporal phenotyping.
|
|
14:15-14:30, Paper MoCT1.2 | |
>Incorporating Spatial Constraints into a Bayesian Tracking Framework for Improved Localisation in Agricultural Environments |
|
Khan, Muhammad Waqas | University of Lincoln |
Das, Gautham | University of Lincoln |
Hanheide, Marc | University of Lincoln |
Cielniak, Grzegorz | University of Lincoln |
Keywords: Robotics in Agriculture and Forestry, Localization, Probability and Statistical Methods
Abstract: Global navigation satellite systems (GNSS) have been considered a panacea for positioning and tracking over the last decade. However, they suffer from severe limitations in terms of accuracy, particularly in highly cluttered and indoor environments. Though real-time kinematic (RTK) supported GNSS promises extremely accurate localisation, employing such services is expensive, fails in occluded environments, and is unavailable in areas where cellular base stations are not accessible. It is therefore necessary that the GNSS data be filtered if high accuracy is required. Thus, this article presents a GNSS-based particle filter that exploits the spatial constraints imposed by the environment. In the proposed setup, the state prediction of the sample set follows a restricted motion according to the topological map of the environment. This results in the transition of the samples being confined between specific discrete points, called the topological nodes, defined by a topological map. This is followed by a refinement stage where the full set of predicted samples goes through weighting and resampling, where the weight is proportional to the predicted particle's proximity to the GNSS measurement. Thus, a discrete-space continuous-time Bayesian filter is proposed, called the Topological Particle Filter (TPF). The proposed TPF is put to the test by localising and tracking fruit pickers inside polytunnels. Fruit pickers inside polytunnels can only follow specific paths according to the topology of the tunnel. These paths are defined in the topological map of the polytunnels and are fed to the TPF to track fruit pickers. Extensive datasets are collected to demonstrate the improved discrete tracking of strawberry pickers inside polytunnels thanks to the exploitation of the environmental constraints.
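A simplified sketch of a particle filter restricted to topological nodes, with hypothetical inputs (node positions, an adjacency list, and a raw GNSS fix); the real TPF also handles timing and refinement details not shown here:

# Minimal sketch (our simplification): particles live on discrete topological
# nodes; prediction follows the map's edges, weighting uses the GNSS fix.
import numpy as np

def tpf_step(particles, neighbours, node_xy, gnss_xy, sigma=3.0):
    """particles: array of node indices; neighbours: dict node -> list of nodes;
       node_xy: (N, 2) node positions; gnss_xy: (2,) GNSS measurement."""
    # Predict: restricted motion along the topological map (stay or move to a neighbour).
    particles = np.array([np.random.choice(neighbours[p] + [p]) for p in particles])
    # Weight: Gaussian likelihood of the GNSS fix given each particle's node.
    d2 = np.sum((node_xy[particles] - gnss_xy) ** 2, axis=1)
    w = np.exp(-0.5 * d2 / sigma ** 2)
    w /= w.sum()
    # Resample according to the weights.
    return np.random.choice(particles, size=len(particles), p=w)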
|
|
14:30-14:45, Paper MoCT1.3 | |
>Learning Continuous Object Representations from Point Cloud Data |
|
Nelson, Henry | University of Minnesota |
Papanikolopoulos, Nikos | University of Minnesota |
Keywords: Agricultural Automation, Object Detection, Segmentation and Categorization
Abstract: Continuous representations of objects have always been used in robotics in the form of geometric primitives and surface models. Recently, learning techniques have emerged which allow more complex continuous representations to be learned from data, but these learning techniques require training data in the form of watertight meshes which restricts their application as meshes of this form are difficult to obtain from real data. This paper proposes a modification to existing methods that allows real world point cloud data to be used for training these surface representations allowing the techniques to be used in broader applications. The modification is evaluated on ModelNet10 to quantify the difference between the existing and the proposed methods as well as on a novel precision agriculture dataset that has been released publicly to show the modification’s applicability to new areas. The proposed method enables obtaining training data from real world sensors that produce point clouds rather than requiring an expensive meshing step which may not be possible for some applications. This opens the possibility of using techniques like this for complex shapes in areas like grasping and agricultural data collection.
|
|
14:45-15:00, Paper MoCT1.4 | |
>Solving Large-Scale Stochastic Orienteering Problems with Aggregation |
|
Thayer, Thomas C. | University of California, Merced |
Carpin, Stefano | University of California, Merced |
Keywords: Planning, Scheduling and Coordination, Agricultural Automation
Abstract: In this paper we consider the stochastic cost orienteering problem, i.e., a version of the classic orienteering problem where the cost associated with each edge is a random variable with known distribution. Such a model is relevant when travel costs are variable, e.g., when a robot moves over uncertain terrain. We model this problem using a composite state space tracking both how much progress the robot has made towards the goal and how much time it has left. On top of this state space, we compute a time-aware policy that allows the robot to dynamically adjust its path and avoid missing the temporal deadline. This policy is determined using a Constrained Markov Decision Process that allows the accepted failure probability to be tuned upfront. The approach suffers from significant growth of the composite state space, and to mitigate this problem we introduce an aggregation technique in which nearby vertices are compounded together, effectively reducing the original routing problem to an instance with a smaller state space. We then analyze this approach on large-scale problem instances associated with robotic irrigation on a commercial-grade vineyard.
|
|
15:00-15:15, Paper MoCT1.5 | |
>DIAT (Depth-Infrared Image Annotation Transfer) for Training a Depth-Based Pig-Pose Detector |
> Video Attachment
|
|
Yik, Steven | Michigan State University |
Benjamin, Madonna | Michigan State University |
Lavagnino, Michael | Michigan State University |
Morris, Daniel | Michigan State University |
Keywords: Agricultural Automation, Novel Deep Learning Methods, Computer Vision for Automation
Abstract: Precision livestock farming uses artificial intelligence to individually monitor livestock activity and health. Tracking individuals over time can reveal health indicators that correlate with productivity and longevity. For instance, locomotion patterns observed in lame pigs have been shown to correlate with poor animal welfare and productivity. Kinematic analysis of pigs using pose estimates provides a means of assessing locomotion. New dense depth sensors have the potential to achieve full 3D pose estimation and tracking. However, the lack of annotated dense depth datasets has limited the use of these sensors for detecting animal pose. Current annotation methods rely on human labeling, but identifying hip and shoulder locations is difficult for pigs with few prominent features, and is especially difficult in depth images, which lack albedo texture. This work proposes a solution to quickly generate high-accuracy pig landmark annotations for depth-based pose estimation. We propose Depth-Infrared Annotation Transfer (DIAT), an approach that semi-automatically finds, identifies, and tracks marks visible in infrared, and transfers these labels to depth images. As a result, we are able to train a precise pig pose detector that operates on depth images.
|
|
15:15-15:30, Paper MoCT1.6 | |
>Data-Driven Models with Expert Influence: A Hybrid Approach to Spatiotemporal Process Estimation |
|
Liu, Jun | Virginia Tech |
Williams, Ryan | Virginia Polytechnic Institute and State University |
Keywords: Agricultural Automation, Robotics in Agriculture and Forestry, Optimization and Optimal Control
Abstract: In this paper, our motivating application lies in precision agriculture, where accurate modeling of forage is essential for informing rotational grazing strategies. Unfortunately, a major difficulty arises in modeling forage processes as they evolve on large scales according to complex ecological influences. Because robots can collect data over large scales in a forage environment, they are a promising resource for the forage modeling problem when combined with data-driven Gaussian processes (GPs). However, GPs are non-parametric in nature and may be blind to certain nuances of a process that a parameterized expert model may predict well. Indeed, for the forage modeling problem specifically, there exist several highly parameterized models from agricultural experts that exhibit powerful predictive capabilities. Expert models, however, often come with two shortcomings: (1) parameters may be difficult to determine in general; and (2) the model may not make complete spatiotemporal predictions. For example, a stochastic differential equation (SDE) that models the dynamics of the average output of an environment may be available from experts (a typical case). In such cases, we propose to take advantage of both data-driven (GP) and expert (SDE) models by fusing data collected by robots, which often yields spatial insight, with models from experienced professionals, which often yield temporal insight. Specifically, we propose to leverage Bayesian inference to combine these two methods, resulting in a posterior prediction that is a hybrid of the data-driven and expert models. Finally, we provide simulations to demonstrate the effectiveness of the proposed method.
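The abstract does not spell out the fusion rule; as a hypothetical illustration of combining a data-driven and an expert prediction via Bayesian inference, the sketch below uses a precision-weighted product of two Gaussian opinions, which is one standard way to do so and not necessarily the paper's exact scheme.

    def fuse_gaussian(mu_gp, var_gp, mu_expert, var_expert):
        """Posterior over a latent forage value given two independent Gaussian
        'opinions': a GP prediction and an expert-model prediction.
        Standard product-of-Gaussians update (illustrative only)."""
        prec = 1.0 / var_gp + 1.0 / var_expert
        mu = (mu_gp / var_gp + mu_expert / var_expert) / prec
        return mu, 1.0 / prec

    # Example: the GP (from robot samples) is spatially confident, the expert SDE is not,
    # so the fused estimate is pulled strongly toward the GP mean.
    mu, var = fuse_gaussian(mu_gp=3.2, var_gp=0.1, mu_expert=2.5, var_expert=1.0)
    print(mu, var)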
|
|
MoCT2 |
Room T2 |
Environment Monitoring |
Regular session |
Chair: Triebel, Rudolph | German Aerospace Center (DLR) |
Co-Chair: Kovac, Mirko | Imperial College London |
|
14:00-14:15, Paper MoCT2.1 | |
>Robust MUSIC-Based Sound Source Localization in Reverberant and Echoic Environments |
> Video Attachment
|
|
Sewtz, Marco | Deutsches Zentrum Für Luft Und Raumfahrt E.V |
Bodenmueller, Tim | German Aerospace Center (DLR) |
Triebel, Rudolph | German Aerospace Center (DLR) |
Keywords: Robot Audition, Service Robots, Environment Monitoring and Management
Abstract: Intuitive human-robot interfaces such as speech or gesture recognition are essential for gaining acceptance of robots in daily life. However, such interaction requires that the robot detects the human's intention to interact, tracks their position, and keeps its sensor systems in an optimal configuration. Audio is a suitable modality for this task as it allows a speaker to be detected at arbitrary positions around the robot. In this paper, we present a novel approach for localization of sound sources by analyzing the frequency spectrum of the received signal and applying a motion model to the estimation process. We use an improved version of the Generalized Singular Value Decomposition (GSVD) based MUltiple SIgnal Classification (MUSIC) algorithm as a direction-of-arrival (DoA) estimator. Further, we introduce a motion model to enable robust localization in reverberant and echoic environments. We evaluate the system under real conditions in an experimental setup. Our experiments show that our approach outperforms a current state-of-the-art algorithm and demonstrate its robustness against the aforementioned disruptive factors.
|
|
14:15-14:30, Paper MoCT2.2 | |
>OceanVoy: A Hybrid Energy Planning System for Autonomous Sailboat |
> Video Attachment
|
|
Sun, Qinbo | The Chinese Univeristy of Hong Kong, Shenzhen |
Qi, Weimin | The Chinese University of Hong Kong, Shenzhen |
Liu, Hengli | Peng Cheng Laboratory, Shenzhen |
Sun, Zhenglong | Chinese University of Hong Kong, Shenzhen |
Lam, Tin Lun | The Chinese University of Hong Kong, Shenzhen |
Qian, Huihuan | Shenzhen Institute of Artificial Intelligence and Robotics for S |
Keywords: Energy and Environment-Aware Automation, Field Robots, Marine Robotics
Abstract: For long-range, high-endurance sailing, energy is of utmost importance. Moreover, because a sailboat is primarily wind-propelled, it is inherently energy-saving and environment-friendly, which makes energy planning for sailboats a meaningful problem. Until now, however, the sailboat energy optimization problem has rarely been considered. In this paper, we focus on the energy consumption optimization of an autonomous sailboat. We formulate it as a nonlinear programming (NLP) problem and address it with a hybrid control scheme, in which a pseudo-spectral (PS) optimal control method is used for heading control, and a model-free framework guided by Extremum Seeking Control (ESC) is used for sail control. The optimal path is generated together with the optimal motor torque inputs as a time series. Both simulations and experiments validate the motion planning and energy planning performance. Notably, about 7% of energy is saved on average. The proposed method enables sailboats to sail longer and more sustainably.
|
|
14:30-14:45, Paper MoCT2.3 | |
>LAVAPilot: Lightweight UAV Trajectory Planner with Situational Awareness for Embedded Autonomy to Track and Locate Radio-Tags
> Video Attachment
|
|
Nguyen, Hoa Van | The University of Adelaide |
Chen, Fei | The University of Adelaide |
Chesser, Joshua | The University of Adelaide |
Rezatofighi, S. Hamid | The University of Adelaide |
Ranasinghe, Damith | The University of Adelaide |
Keywords: Field Robots, Range Sensing, Environment Monitoring and Management
Abstract: Tracking and locating radio-tagged wildlife is a labor-intensive and time-consuming task necessary in wildlife conservation. In this article, we focus on achieving embedded autonomy for a resource-limited aerial robot performing this task while avoiding undesirable disturbances to wildlife. We employ a lightweight sensor system capable of simultaneous (noisy) measurements of radio signal strength information from multiple tags for estimating object locations. We formulate a new lightweight task-based trajectory planning method, LAVAPilot, with a greedy evaluation strategy and a void functional formulation to achieve situational awareness and maintain a safe distance from objects of interest. Conceptually, we embed our intuition of moving closer to reduce measurement uncertainty into LAVAPilot instead of employing a computationally intensive information-gain-based planning strategy. We use LAVAPilot and the sensor to build a lightweight aerial robot platform with fully embedded autonomy for joint planning and tracking to locate multiple VHF radio collar tags used by conservation biologists. Using extensive Monte Carlo simulation-based experiments, implementations on a single-board compute module, and field experiments with an aerial robot platform and multiple VHF radio collar tags, we evaluate our joint planning and tracking algorithms. Further, we compare our method with other information-based planning methods with and without situational awareness to demonstrate the effectiveness of our robot executing LAVAPilot. Our experiments demonstrate that LAVAPilot significantly reduces (by 98.5%) the computational cost of planning, enabling real-time planning decisions, whilst achieving localization accuracy similar to information-gain-based planning methods, albeit taking slightly longer to complete a mission. To support research in the field and in conservation biology, we also open source the complete project. To the best of our knowledge, this is the first demonstration of a fully autonomous aerial robot system in which trajectory planning and tracking to survey and locate multiple radio-tagged objects are achieved onboard.
|
|
14:45-15:00, Paper MoCT2.4 | |
>Coordinate-Free Isoline Tracking in Unknown 2-D Scalar Fields |
|
Dong, Fei | Tsinghua University |
You, Keyou | Tsinghua University |
Wang, Jian | Tsinghua Univ |
Keywords: Autonomous Vehicle Navigation, Whole-Body Motion Planning and Control, Environment Monitoring and Management
Abstract: This work addresses isoline tracking: designing a controller for a sensing robot to track a given isoline of an unknown 2-D scalar field. To this end, we propose a coordinate-free controller with a simple PI-like form for a Dubins robot using only concentration feedback, which is particularly useful in GPS-denied environments. The key idea lies in the novel design of a sliding-surface-based error term in the standard PI controller. Interestingly, we also prove that the tracking error can be reduced by increasing the proportional gain, and eliminated for circular fields with a non-zero integral gain. The effectiveness of our controller is validated via simulations of a fixed-wing UAV on a real dataset of the PM2.5 concentration distribution over an area of China.
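The paper defines the exact sliding-surface error term; the sketch below only illustrates the general structure of a coordinate-free PI controller that maps concentration feedback to a Dubins turning rate. The gains, the sliding-surface combination of error and error rate, and the reference value are all hypothetical.

    class IsolineTrackingPI:
        """PI-like turning-rate controller for a Dubins robot tracking the level
        set c(x, y) = c_ref of an unknown scalar field, using only concentration
        readings (no position or gradient information). The error term here is
        one plausible reading of the paper's design, not its exact form."""

        def __init__(self, c_ref, kp=1.0, ki=0.2, lam=0.5, dt=0.1):
            self.c_ref, self.kp, self.ki, self.lam, self.dt = c_ref, kp, ki, lam, dt
            self.integral = 0.0
            self.prev_err = None

        def turning_rate(self, c_measured):
            err = c_measured - self.c_ref
            derr = 0.0 if self.prev_err is None else (err - self.prev_err) / self.dt
            self.prev_err = err
            s = derr + self.lam * err          # sliding-surface style error term
            self.integral += s * self.dt
            return self.kp * s + self.ki * self.integral   # commanded yaw rate [rad/s]

    # usage: at each control step, feed the onboard sensor reading
    ctrl = IsolineTrackingPI(c_ref=35.0)       # e.g. a target PM2.5 level
    omega = ctrl.turning_rate(c_measured=41.2)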
|
|
15:15-15:30, Paper MoCT2.7 | |
>MEDUSA: A Multi-Environment Dual-Robot for Underwater Sample Acquisition |
> Video Attachment
|
|
Debruyn, Diego | Imperial College London |
Zufferey, Raphael | Imperial College of London |
Armanini, Sophie Franziska | Imperial College London |
Winston, Crystal | Imperial College London |
Farinha, Andre | Imperial College |
Jin, Yufei | Imperial College London |
Kovac, Mirko | Imperial College London |
Keywords: Environment Monitoring and Management, Aerial Systems: Applications, Marine Robotics
Abstract: Aerial-aquatic robots possess the unique ability of operating in both air and water. However, this capability comes with tremendous challenges, such as communication incompatibility, increased airborne mass, potentially inefficient operation in each of the environments and manufacturing difficulties. Such robots, therefore, typically have small payloads and a limited operational envelope, often making their field usage impractical. We propose a novel robotic water sampling approach that combines the robust technologies of multirotors and underwater micro-vehicles into a single integrated tool usable for field operations. The proposed solution encompasses a multirotor capable of landing and floating on the water, and a tethered mobile underwater pod that can be deployed to depths of several meters. The pod is controlled remotely in three dimensions and transmits video feed and sensor data via the floating multirotor back to the user. The 'dual-robot' approach considerably simplifies robotic underwater monitoring, while also taking advantage of the fact that multirotors can travel long distances, fly over obstacles, carry payloads and manoeuvre through difficult terrain, while submersible robots are ideal for underwater sampling or manipulation. The presented system can perform challenging tasks which would otherwise require boats or submarines. The ability to collect aquatic images, samples and metrics will be invaluable for ecology and aquatic research, supporting our understanding of local climate in difficult-to-access environments.
|
|
MoCT3 |
Room T3 |
Field Robots |
Regular session |
Chair: Agha-mohammadi, Ali-akbar | NASA-JPL, Caltech |
Co-Chair: Detweiler, Carrick | University of Nebraska-Lincoln |
|
14:00-14:15, Paper MoCT3.1 | |
>Efficient Trajectory Library Filtering for Quadrotor Flight in Unknown Environments |
> Video Attachment
|
|
Viswanathan, Vaibhav | Carnegie Mellon University |
Dexheimer, Eric | Carnegie Mellon University |
Li, Guanrui | New York University |
Loianno, Giuseppe | New York University |
Kaess, Michael | Carnegie Mellon University |
Scherer, Sebastian | Carnegie Mellon University |
Keywords: Field Robots, Aerial Systems: Perception and Autonomy, Perception-Action Coupling
Abstract: Quadrotor flight in unknown environments is challenging due to the limited range of perception sensors, state estimation drift, and limited onboard computation. In this work, we tackle these challenges by proposing an efficient, reactive planning approach. We introduce the Bitwise Trajectory Elimination (BiTE) algorithm for efficiently filtering in-collision trajectories out of a trajectory library using bitwise operations. We then outline a full planning approach for quadrotor flight in unknown environments. This approach is evaluated extensively in simulation and shown to require up to 90% less computation than comparable approaches. Finally, we validate our planner in over 120 minutes of flights in forest-like and urban subterranean environments.
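The core idea of checking an entire trajectory library against an occupancy map with bitwise AND operations can be sketched as follows; the grid size, cell encoding, and the tiny example library are hypothetical and not taken from the paper.

    GRID = (32, 32, 16)   # hypothetical local occupancy grid (x, y, z)

    def cell_bit(ix, iy, iz):
        return 1 << (ix * GRID[1] * GRID[2] + iy * GRID[2] + iz)

    def trajectory_mask(cells):
        """Precompute (offline) which grid cells a library trajectory sweeps through."""
        m = 0
        for c in cells:
            m |= cell_bit(*c)
        return m

    def occupancy_mask(occupied_cells):
        """Build the mask of currently occupied cells from the latest sensor data."""
        m = 0
        for c in occupied_cells:
            m |= cell_bit(*c)
        return m

    def filter_library(traj_masks, occ_mask):
        """A trajectory survives iff it shares no cell with an obstacle:
        one bitwise AND per trajectory instead of per-cell collision checks."""
        return [i for i, m in enumerate(traj_masks) if m & occ_mask == 0]

    # Tiny example: two trajectories, one passing through an occupied cell.
    lib = [trajectory_mask([(5, 5, 2), (6, 5, 2)]), trajectory_mask([(5, 9, 2)])]
    occ = occupancy_mask([(6, 5, 2)])
    print(filter_library(lib, occ))   # -> [1]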
|
|
14:15-14:30, Paper MoCT3.2 | |
>Autonomous Spot: Long-Range Autonomous Exploration of Extreme Environments with Legged Locomotion |
> Video Attachment
|
|
Bouman, Amanda | Caltech |
Ginting, Muhammad Fadhil | Jet Propulsion Laboratory |
Alatur, Nikhilesh | ETH Zurich |
Palieri, Matteo | Polytechnic University of Bari |
Fan, David D | Georgia Institute of Technology |
Kim, Sung-Kyun | NASA Jet Propulsion Laboratory, Caltech |
Touma, Thomas | Caltech |
Pailevanian, Torkom | Jet Propulsion Laboratory |
Otsu, Kyohei | California Institute of Technology |
Burdick, Joel | California Institute of Technology |
Agha-mohammadi, Ali-akbar | NASA-JPL, Caltech |
Keywords: Field Robots, Autonomous Vehicle Navigation, Robotics in Hazardous Fields
Abstract: This paper serves as one of the first efforts to enable large-scale and long-duration autonomy using the Boston Dynamics Spot robot. Motivated by exploring extreme environments, particularly those involved in the DARPA Subterranean Challenge, this paper pushes the boundaries of the state-of-practice in enabling legged robotic systems to accomplish real-world complex missions in relevant scenarios. In particular, we discuss the behaviors and capabilities which emerge from the integration of the autonomy architecture NeBula (Networked Belief-aware Perceptual Autonomy) with next-generation mobility systems. We will discuss the hardware and software challenges, and solutions in mobility, perception, autonomy, and very briefly, wireless networking, as well as lessons learned and future directions. We demonstrate the performance of the proposed solutions on physical systems in real-world scenarios. The proposed solution contributed to winning 1st-place in the 2020 DARPA Subterranean Challenge, Urban Circuit.
|
|
14:30-14:45, Paper MoCT3.3 | |
>Towards In-Flight Transfer of Payloads between Multirotors |
> Video Attachment
|
|
Shankar, Ajay | University of Nebraska-Lincoln |
Elbaum, Sebastian | University of Virginia |
Detweiler, Carrick | University of Nebraska-Lincoln |
Keywords: Field Robots, Aerial Systems: Applications, Visual Servoing
Abstract: Multirotor unmanned aerial systems (UASs) are often used to transport a variety of payloads. However, the maximum time that the cargo can remain airborne is limited by the flight endurance of the UAS. In this paper, we present a novel approach for two multirotors to transfer a payload between them in-air, while keeping the payload aloft and stationary. Our framework is built on a visual-feedback and grasping pipeline that enables one UAS to grasp the payload held by another, thereby allowing the UASs to act as swappable carriers. By connecting the payload outwards along a single rigid link, and allowing the UASs to maneuver about it, we let the payload remain online while it is transferred to a different carrier. Furthermore, building entirely on monocular vision, the approach does not rely on precise extrinsic localization systems. We demonstrate our proposed strategy in a variety of indoor and GPS-free outdoor experiments, and explore the range of operating limits for our system.
|
|
14:45-15:00, Paper MoCT3.4 | |
>Improvement in Measurement Area of 3D LiDAR for a Mobile Robot Using a Mirror Mounted on a Manipulator |
|
Matsubara, Kazuki | Tohoku University |
Nagatani, Keiji | The University of Tokyo |
Hirata, Yasuhisa | Tohoku University |
Keywords: Field Robots
Abstract: Light Detection and Ranging (LiDAR) is widely employed on mobile robots to acquire environmental information. However, its laser irradiation directions are limited and it cannot measure the back side of an object. In this study, we develop a method that expands the LiDAR measurement range in various directions using a mirror installed on a manipulator mounted on a mobile robot. Because mirrors can easily be mounted on robots, this method is expected to have a wide range of applications. This paper also proposes a method for determining the mirror position and attitude that expand the measurement area to obtain target data. In addition, we conducted an accuracy evaluation test of the reflected acquisition points. Using the proposed method, we demonstrate measurement of the shape of a descending staircase as an example of a potential application.
|
|
15:00-15:15, Paper MoCT3.5 | |
>Wide Area Exploration System Using Passive-Follower Robots Towed by Multiple Winches |
|
Salazar Luces, Jose Victorio | Tohoku University |
Hoshi, Manami | Tohoku University |
Hirata, Yasuhisa | Tohoku University |
Keywords: Field Robots, Motion Control, Multi-Robot Systems
Abstract: In this study, we propose a wide-area exploration system that consists of passive wheeled robots equipped with exploration sensors and pulled from a high position by wires fed out from two winches. The robots are driven by the pulling force from the winches and can steer by controlling brakes attached to their wheels. By adjusting the wire lengths, a passive-follower robot is pulled within the exploration area and controls the braking torque of its wheels to follow a desired trajectory based on its current position. This system has the advantages that it is effective for ground exploration, does not require advanced calibration, and can be installed quickly. In this paper, we first explain the outline of the proposed system. Then, we introduce the hardware design of the developed winches and passive-follower robot. Next, the control methods for the winch unit and the passive-follower robots are described; here, we introduce the feasible braking control region for motion analysis and control of the passive-follower robot. Finally, we apply these control methods to the proposed system and report the results of verification experiments. We describe the feasible range of a follower robot, which changes depending on the positions of the winches. We conducted an outdoor experiment and confirmed the effectiveness of the system by evaluating the trajectories of the passive-follower robot.
|
|
15:15-15:30, Paper MoCT3.6 | |
>End-To-End Velocity Estimation for Autonomous Racing |
|
Srinivasan, Sirish | ETH Zürich |
Sa, Inkyu | CSIRO |
Zyner, Alex | The University of Sydney |
Reijgwart, Victor | ETH Zurich |
de la Iglesia Valls, Miguel | ETH Zürich |
Siegwart, Roland | ETH Zurich |
Keywords: Field Robots, Autonomous Vehicle Navigation, Sensor Fusion
Abstract: Velocity estimation plays a central role in driverless vehicles, but standard and affordable methods struggle to cope with extreme scenarios like aggressive maneuvers due to the presence of high sideslip. To solve this, autonomous race cars are usually equipped with expensive external velocity sensors. In this paper, we present an end-to-end recurrent neural network that takes available raw sensors as input (IMU, wheel odometry, and motor currents) and outputs velocity estimates. The results are compared to two state-of-the-art Kalman filters, which respectively include and exclude expensive velocity sensors. All methods have been extensively tested on a formula student driverless race car with very high sideslip (10° at the rear axle) and slip ratio (≈ 20%), operating close to the limits of handling. The proposed network is able to estimate lateral velocity up to 15x better than the Kalman filter with the equivalent sensor input and matches (0.06 m/s RMSE) the Kalman filter with the expensive velocity sensor setup.
|
|
MoCT4 |
Room T4 |
Wheeled Robots |
Regular session |
Chair: La, Hung | University of Nevada at Reno |
Co-Chair: Yamaguchi, Tomoyuki | University of Tsukuba |
|
14:00-14:15, Paper MoCT4.1 | |
>RoVaLL: Design and Development of a Multi-Terrain Towed Robot with Variable Lug-Length Wheels |
> Video Attachment
|
|
Salazar Luces, Jose Victorio | Tohoku University |
Matsuzaki, Shin | Tohoku University |
Hirata, Yasuhisa | Tohoku University |
Keywords: Multi-Robot Systems, Mechanism Design, Wheeled Robots
Abstract: Robotic systems play a very important role in exploration, allowing us to reach places that would otherwise be unsafe or unreachable for humans, such as volcanic areas, disaster sites, or unknown areas on other planets. As the area to be explored increases, so does the time it takes for robots to explore it. One approach to reducing the required time is to use multiple autonomous robots for distributed exploration. However, this significantly increases the associated cost and the complexity of the exploration process. To address these issues, we previously proposed a leader-follower architecture in which multiple two-wheeled passive robots, capable of steering only with brakes, are pulled by a leader robot. By controlling their relative angle with respect to the leader, the followers can move in arbitrary formations. The previously proposed follower robots used rubber tires, which performed well on rigid ground but poorly in soft soil. One alternative is to use lugged wheels, which increase traction in soft soils. In this paper we propose a robot with shape-shifting wheels that allow it to steer on both rigid and soft soils. The wheels use a cam mechanism to push out and retract lugs stored inside them. The shape of the wheel can be manipulated by controlling the driving torque exerted on the cam mechanism. Through experiments we verified that the developed mechanism allows the follower robots to control their relative angle with respect to the leader on both rigid and soft soils.
|
|
14:15-14:30, Paper MoCT4.2 | |
>Modeling and Control of a Hybrid Wheeled Jumping Robot |
> Video Attachment
|
|
Dinev, Traiko | The University of Edinburgh |
Xin, Songyan | The University of Edinburgh |
Merkt, Wolfgang Xaver | University of Oxford |
Ivan, Vladimir | University of Edinburgh |
Vijayakumar, Sethu | University of Edinburgh |
Keywords: Wheeled Robots, Motion Control, Optimization and Optimal Control
Abstract: In this paper, we study a wheeled robot with a prismatic extension joint. This allows the robot to build up momentum to perform jumps over obstacles and to swing up to the upright position after the loss of balance. We propose a template model for the class of such two-wheeled jumping robots. This model can be considered as the simplest wheeled-legged system. We provide an analytical derivation of the system dynamics which we use inside a model predictive controller (MPC). We study the behavior of the model and demonstrate highly dynamic motions such as swing-up and jumping. Furthermore, these motions are discovered through optimization from first principles. We evaluate the controller on a variety of tasks and uneven terrains in a simulator.
|
|
14:30-14:45, Paper MoCT4.3 | |
>Ospheel: Design of an Omnidirectional Spherical-Sectioned Wheel |
> Video Attachment
|
|
Hayat, Abdullah Aamir | Singapore University of Technology and Design |
Shi, Yuyao | SUTD |
Elangovan, Karthikeyan | Singapore University of Technology and Design |
Elara, Mohan Rajesh | Singapore University of Technology and Design |
Abdulkader, Raihan Enjikalayil | Singapore University of Technology and Design |
Keywords: Wheeled Robots, Mechanism Design
Abstract: The holonomic and omnidirectional capabilities of a mobile platform depend on the wheel design and its arrangement in the platform chassis. This paper reports on the development of an omnidirectional spherical-sectioned wheel named Ospheel. It is modular, and the spherical-sectioned geometry of the wheel is driven by two actuators placed inside the housing above the wheel, which rotate it independently about two perpendicular axes. The mechanical drive system of Ospheel consists of two gear trains, namely an internal spur gear and a crown gear, spatially assembled in orthogonal planes and driven by two pinions. The omnidirectional movement is achieved by combining the two rotations, and its kinematics is presented. Two wheels at a fixed inclination were assembled with a base, and experiments were carried out to illustrate its holonomic motion. The robustness of the wheel design is evaluated with different trajectories and on different terrains.
|
|
14:45-15:00, Paper MoCT4.4 | |
>Dynamics and Aerial Attitude Control for Rapid Emergency Deployment of the Agile Ground Robot AGRO |
> Video Attachment
|
|
Gonzalez, Daniel | United States Military Academy at West Point |
Lesak, Mark C. | United States Military Academy |
Rodriguez, Andres | United States Military Academy |
Cymerman, Joseph | Department of Civil and Mechanical Engineering, United States Mi |
Korpela, Christopher M. | United States Military Academy at West Point |
Keywords: Wheeled Robots, Dynamics, Motion Control
Abstract: In this work we present a Four-Wheeled Independent Drive and Steering (4WIDS) robot named AGRO and a method of controlling its orientation while airborne using wheel reaction torques. This is the first documented use of independently steerable wheels to both drive on the ground and achieve aerial attitude control when thrown. Inspired by a cat's self-righting reflex, this capability was developed to allow emergency response personnel to rapidly deploy AGRO by throwing it over walls and fences or through windows without the risk of it landing upside down. It also allows AGRO to drive off of ledges and ensure it lands on all four wheels. We have demonstrated a successful thrown deployment of AGRO. A novel parametrization and singularity analysis of 4WIDS kinematics reveals independent yaw authority with simultaneous adjustment of the ratio between roll and pitch authority. Simple PD controllers allow for stabilization of roll, pitch, and yaw. These controllers were tested in a simulation using derived dynamic equations of motion, then implemented on the AGRO prototype. An experiment comparing a controlled and non-controlled fall was conducted in which AGRO was dropped from a height of 0.85 m with an initial roll and pitch angle of 16 degrees and -23 degrees respectively. With the controller enabled, AGRO can use the reaction torque from its wheels to stabilize its orientation within 402 milliseconds.
|
|
15:00-15:15, Paper MoCT4.5 | |
>Control Framework for a Hybrid-Steel Bridge Inspection Robot |
> Video Attachment
|
|
Bui, Hoang-Dung | University of Nevada Reno |
Nguyen, Son | University of Nevada, Reno |
Billah, Umme-Hafsa | University of Nevada, Reno |
Le, Chuong | University of Oklahoma |
Tavakkoli, Alireza | University of Nevada, Reno |
La, Hung | University of Nevada at Reno |
Keywords: Field Robots, Search and Rescue Robots, Wheeled Robots
Abstract: Autonomous navigation of steel bridge inspection robots is essential for proper maintenance. The majority of existing robotic solutions for bridge inspection require human intervention to assist in control and navigation. In this paper, a control system framework is proposed for the previously designed ARA robot, which facilitates autonomous real-time navigation and minimizes human involvement. The mechanical design and control framework of the ARA robot enable two different configurations, namely the mobile and inch-worm transformations. In addition, a switching control was developed, with 3D point clouds of steel surfaces as input, which allows the robot to switch between the mobile and inch-worm transformations. The surface availability algorithm of the switching control, which considers the plane, area, and height of a surface, enables the robot to perform inch-worm jumps autonomously. The mobile transformation allows the robot to move on continuous steel surfaces and perform visual inspection of steel bridge structures. Practical experiments on actual steel bridge structures highlight the effective performance of the ARA robot with the proposed control framework for autonomous navigation during visual inspection of steel bridges.
|
|
15:15-15:30, Paper MoCT4.6 | |
>Development of a Steep Slope Mobile Robot with Propulsion Adhesion |
|
Nishimura, Yuki | University of Tsukuba |
Yamaguchi, Tomoyuki | University of Tsukuba |
Keywords: Wheeled Robots
Abstract: A mobile robot that can maintain a stable attitude and locomotion on steep slopes is needed to overcome the problems of slipping and falling when automating work on steep slopes. Conventional approaches to achieving a stable attitude and locomotion have adopted tracked wheels and multi-legged mechanisms instead of wheel mechanisms. However, these robots are limited in the slope angles to which they can be applied, and a systematic theory for stable attitude and locomotion on steep slopes has not been established. Research on control strategies for wheeled mobile robots on steep slopes is therefore essential. In this paper, we propose a method to realize a stable attitude and locomotion on steep slopes for a wheeled mobile robot by using propellers for propulsion adhesion. The proposed robot generates a large frictional force by pushing its body against the slope with a thrust force, which prevents it from slipping while maneuvering on the slope. The magnitude and direction of the thrust force are optimized by an appropriate control mechanism that influences the moment of force acting on the robot, to avoid falling and side slipping during locomotion on steep slopes. A simulation experiment was conducted from the perspectives of mechanics and dynamics to arrive at an optimal design of the mobile robot. The developed robot has four propellers to generate thrust forces and a rotation axis to control the direction of the generated thrust forces. Evaluation experiments were performed to validate the stability of the robot at rest and during lateral locomotion, and its ability to climb a slope. The experimental results confirmed that the proposed robot with propellers realized a steady attitude and locomotion on slopes of up to 90° by controlling the magnitude and direction of the thrust force.
|
|
15:15-15:30, Paper MoCT4.7 | |
>Definition and Application of Variable Resistance Coefficient for Wheeled Mobile Robots on Deformable Terrain (I) |
|
Ding, Liang | Harbin Institute of Technology |
Huang, Lan | Harbin Institute of Technology |
Li, Shu | Harbin Institute of Technology |
Gao, Haibo | Harbin Institute of Technology |
Deng, Huichao | Beihang university |
Li, Yuankai | Department of Aerospace Engineering, Ryerson University |
Liu, Guangjun | Ryerson University |
|
|
MoCT5 |
Room T5 |
Robotics in Agriculture and Forestry |
Regular session |
Chair: Tokekar, Pratap | University of Maryland |
Co-Chair: Isler, Volkan | University of Minnesota |
|
14:00-14:15, Paper MoCT5.1 | |
>Interactive Movement Primitives: Planning to Push Occluding Pieces for Fruit Picking |
> Video Attachment
|
|
Mghames, Sariah | University of Lincoln |
Hanheide, Marc | University of Lincoln |
Ghalamzan Esfahani, Amir Masoud | University of Lincoln |
Keywords: Agricultural Automation, Robotics in Agriculture and Forestry, Motion and Path Planning
Abstract: Robotic technology is increasingly considered the major means for fruit picking. However, picking fruits in a dense cluster poses a challenging motion/path planning problem, as conventional planning approaches may not find collision-free movements for the robot to reach and pick a ripe fruit within a dense cluster. In such cases, the robot needs to safely push unripe fruits aside to reach a ripe one. Nonetheless, existing approaches to planning pushing movements in cluttered environments either are computationally expensive or only deal with 2-D cases, making them unsuitable for fruit picking, where 3-D pushing movements must be computed in a short time. In this work, we present a path planning algorithm for pushing occluding fruits in order to reach and pick a ripe one. Our proposed approach, called Interactive Probabilistic Movement Primitives (I-ProMP), is not computationally expensive (its computation time is on the order of 100 milliseconds) and is readily applicable to 3-D problems. We demonstrate the efficiency of our approach by pushing unripe strawberries in a simulated polytunnel. Our experimental results confirm that I-ProMP successfully pushes table-top grown strawberries and reaches a ripe one.
|
|
14:15-14:30, Paper MoCT5.2 | |
>Robotic Untangling of Herbs and Salads with Parallel Grippers |
> Video Attachment
|
|
Ray, Prabhakar | King's College London |
Howard, Matthew | King's College London |
Keywords: Robotics in Agriculture and Forestry, Agricultural Automation, Computer Vision for Automation
Abstract: Robotic packaging of fresh leafy produce such as herbs and salads generally involves picking out a target mass from a pile or crate of plant material. Typically, for low-complexity parallel grippers, the weight picked can be controlled by varying the opening aperture. However, individual strands of plant material often become entangled with each other, causing more to be picked out than desired. This paper presents a simple spread-and-pick approach that significantly reduces the degree of entanglement in a herb pile when picking. Compared to the traditional approach of picking from an entanglement-free point in the pile, the proposed approach yields a decrease of up to 29.06% in picked-weight variance for separate homogeneous piles of fresh herbs. Moreover, it generalises well, with up to a 55.53% decrease in picked-weight variance for herbs previously unseen by the system.
|
|
14:30-14:45, Paper MoCT5.3 | |
>Choosing Classification Thresholds for Mobile Robot Coverage |
|
Maini, Parikshit | University of Minnesota |
Isler, Volkan | University of Minnesota |
Keywords: Field Robots, Robotics in Agriculture and Forestry
Abstract: Many robotic coverage applications involve the detection of spatially distributed targets, followed by path planning to visit them for service. In these applications, the performance of the detection algorithm can have a profound effect on planning decisions and costs. A robot's range of operation, in both space and time, is typically finite over a single mission and is a common constraint that needs to be accounted for in decision making. Misclassification may waste resources and can even jeopardize the completion of a mission if the length of a path extends beyond the range of the robot. In this work, we develop techniques for computing planning-aware classification thresholds. We discuss two versions that compute binary classification thresholds as a function of planning budget and detection accuracy. We present an implementation of our methods in path planning applications for an autonomous mower and show results on real and simulated data. Our method yields up to a 25% improvement in coverage compared to standard thresholding methods.
|
|
14:45-15:00, Paper MoCT5.4 | |
>Unsupervised Domain Adaptation for Transferring Plant Classification Systems to New Field Environments, Crops, and Robots |
|
Gogoll, Dario | University of Bonn |
Lottes, Philipp | University of Bonn |
Weyler, Jan | University of Bonn |
Petrinic, Nik | University of Oxford |
Stachniss, Cyrill | University of Bonn |
Keywords: Robotics in Agriculture and Forestry, Agricultural Automation
Abstract: Crops are an important source of food and other products. In conventional farming, tractors apply large amounts of agrochemicals uniformly across fields for weed control and plant protection. Autonomous farming robots have the potential to provide environment-friendly weed control on a per plant basis. A system that reliably distinguishes crops, weeds, and soil under varying environment conditions is the basis for plant-specific interventions such as spot applications. Such semantic segmentation systems, however, often show a performance decay when applied under new field conditions. In this paper, we therefore propose an effective approach to unsupervised domain adaptation for plant segmentation systems in agriculture and thus to adapt existing systems to new environments, different value crops, and other farm robots. Our system yields a high segmentation performance in the target domain by exploiting labels only from the source domain. It is based on CycleGANs and enforces a semantic consistency domain transfer by constraining the images to be pixel-wise classified in the same way before and after translation. We perform an extensive evaluation, which indicates that we can substantially improve the transfer of our semantic segmentation system to new field environments, different crops, and different sensors or robots.
|
|
15:00-15:15, Paper MoCT5.5 | |
>Crop Height and Plot Estimation for Phenotyping from Unmanned Aerial Vehicles Using 3D LiDAR |
> Video Attachment
|
|
Dhami, Harnaik | University of Maryland |
Yu, Kevin | Virginia Tech |
Xu, Tianshu | University of Maryland |
Zhu, Qian | Virginia Tech |
Dhakal, Kshitiz | Virginia Tech |
Friel, James | Virginia Tech |
Li, Song | Virginia Tech |
Tokekar, Pratap | University of Maryland |
Keywords: Robotics in Agriculture and Forestry, Computer Vision for Other Robotic Applications, Agricultural Automation
Abstract: We present techniques to measure crop heights using a 3D Light Detection and Ranging (LiDAR) sensor mounted on an Unmanned Aerial Vehicle (UAV). Knowing the height of plants is crucial to monitor their overall health and growth cycles, especially for high-throughput plant phenotyping. We present a methodology for extracting plant heights from 3D LiDAR point clouds, specifically focusing on plot-based phenotyping environments. We also present a toolchain that can be used to create phenotyping farms for use in Gazebo simulations. The tool creates a randomized farm with realistic 3D plant and terrain models. We conducted a series of simulations and hardware experiments in controlled and natural settings. Our algorithm was able to estimate the plant heights in a field with 112 plots with a root mean square error (RMSE) of 6.1 cm. This is the first such dataset for 3D LiDAR from an airborne robot over a wheat field. The developed simulation toolchain, algorithmic implementation, and datasets can be found on our GitHub repository. https://github.com/hsd1121/PointCloudProcessing
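A common way to turn per-plot LiDAR returns into a height estimate is to take the spread between a low "ground" percentile and a high "canopy" percentile of the z-values in each plot. The sketch below illustrates that heuristic only; the percentile choices and the plot-labeling step are assumptions, not the paper's exact pipeline.

    import numpy as np

    def plot_heights(points, plot_ids, ground_pct=2, canopy_pct=98):
        """Per-plot crop height from an aerial LiDAR cloud.
        points:   (N, 3) array of x, y, z returns (z up, meters)
        plot_ids: (N,) integer plot label for each return (from a plot grid/mask)"""
        heights = {}
        for pid in np.unique(plot_ids):
            z = points[plot_ids == pid, 2]
            heights[pid] = np.percentile(z, canopy_pct) - np.percentile(z, ground_pct)
        return heights

    # e.g. heights = plot_heights(cloud_xyz, plot_labels)
    # and then compare against manual ground-truth measurements per plot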
|
|
MoCT6 |
Room T6 |
Robotics in Construction I |
Regular session |
Chair: Lee, Dongjun | Seoul National University |
Co-Chair: Liu, Yunhui | Chinese University of Hong Kong |
|
14:00-14:15, Paper MoCT6.1 | |
>A Robotic Gripper Design and Integrated Solution towards Tunnel Boring Construction Equipment |
> Video Attachment
|
|
Yuan, Jianjun | Shanghai University, China |
Guan, Renming | Shanghai Jiao Tong University |
Du, Liang | Ritsumeikan University |
Ma, Shugen | Ritsumeikan University |
Keywords: Mechanism Design, Robotics in Construction
Abstract: Creative gripper design, in terms of configuration, mechatronic control system, and multi-component collaborative algorithms, is often required to realize complex operations in industrial applications, owing to environmental constraints or specific task requirements. This paper first introduces the background problem: the shield machine, the main piece of automated equipment in tunnel boring construction, requires frequent tool (cutter) replacement during underground operation, yet no practical automatic method exists due to the heavy payload, complex environment, and intricate work procedure. We therefore propose an integrated solution in which a purpose-built gripper and a snake-like manipulator cooperatively accomplish tool replacement. Through simple and unique design of the relevant components, the solution realizes a fully automatic and precise approach covering heavy-load tool grasping and regrasping, posture adjustment, unlocking and disassembly, and installation and locking. Finally, the paper describes the experimental process of tool replacement by the prototype under real working conditions and discusses, through comparison, the feasibility of putting the scheme into practical application.
|
|
14:15-14:30, Paper MoCT6.2 | |
>Expert-Emulating Excavation Trajectory Planning for Autonomous Robotic Industrial Excavator |
> Video Attachment
|
|
Son, Bukun | Seoul National University |
Kim, ChangU | Seoul National University |
Kim, ChangMuk | Seoul National University, Doosan |
Lee, Dongjun | Seoul National University |
Keywords: Robotics in Construction, Motion and Path Planning, Imitation Learning
Abstract: We propose a novel excavation (i.e., digging) trajectory planning framework for industrial autonomous robotic excavators, which emulates the strategies of human expert operators to optimize the excavation of (complex/unmodellable) soils while also upholding robustness and safety in practice. First, we encode the trajectory with dynamic movement primitives (DMPs), which robustly preserve the qualitative shape of the trajectory and the attraction to (variable) end-points (i.e., start-points of swing/dumping), while also being data-efficient due to their structure, and are thus suitable for our purpose, where expert data collection is expensive. We further shape this DMP-based trajectory to be expert-emulating by learning the shaping force of the DMP dynamics from real expert excavation data via a neural network (i.e., an MLP (multi-layer perceptron)). To cope with (possibly dangerous) underground uncertainties (e.g., pipes, rocks), we also modulate the expert-emulating (nominal) trajectory in real time to prevent an excessive build-up of excavation force, using feedback from its online estimate. The proposed framework is then validated/demonstrated on an industrial-scale autonomous robotic excavator, with the associated data also presented here.
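For readers unfamiliar with DMPs, the sketch below integrates a standard one-dimensional discrete DMP whose forcing term is an arbitrary callable; in the paper that role is played by an MLP trained on expert excavation data, whereas here all gains and the zero forcing term are just placeholder assumptions.

    import numpy as np

    def dmp_rollout(x0, g, forcing, tau=1.0, alpha=25.0, beta=25.0 / 4.0,
                    alpha_s=4.0, dt=0.01, steps=500):
        """Integrate a 1-D discrete DMP
            tau * dv = alpha * (beta * (g - x) - v) + f(s)
            tau * dx = v,   tau * ds = -alpha_s * s
        where f(s) is the shaping force applied along the canonical phase s.
        Any callable can stand in for the learned forcing term."""
        x, v, s = x0, 0.0, 1.0
        path = []
        for _ in range(steps):
            f = forcing(s)
            dv = (alpha * (beta * (g - x) - v) + f) / tau
            dx = v / tau
            ds = -alpha_s * s / tau
            x, v, s = x + dx * dt, v + dv * dt, s + ds * dt
            path.append(x)
        return np.array(path)

    # With zero forcing the DMP converges smoothly from start to goal; a learned
    # forcing term reshapes this nominal path, e.g. to resemble an expert's dig.
    bucket_depth = dmp_rollout(x0=0.0, g=-1.2, forcing=lambda s: 0.0)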
|
|
14:30-14:45, Paper MoCT6.3 | |
>Prediction of Backhoe Loading Motion Via the Beta-Process Hidden Markov Model |
> Video Attachment
|
|
Yamada, Kento | Tohoku Univ |
Ohno, Kazunori | Tohoku University |
Hamada, Ryunosuke | Tohoku University |
Westfechtel, Thomas | Tohoku University |
Bezerra, Ranulfo | Tohoku University |
Miyamoto, Naoto | Tohoku Univ |
Suzuki, Taro | Chiba Institute of Technology |
Suzuki, Takahiro | Tohoku University |
Nagatani, Keiji | The University of Tokyo |
Shibata Yukinori, Shibata | Sato Komuten Co |
Asano, Kimitaka | Sanyo-Technics Co |
Komatsu, Tomohiro | KOWATECH Co |
Tadokoro, Satoshi | Tohoku University |
Keywords: Behavior-Based Systems, Robotics in Construction, Human-Centered Automation
Abstract: At a construction site, a backhoe loads sediment onto the bed of a dump truck for earthmoving work. For the backhoe and dump truck to cooperate, the dump truck must move to the loading spot at the instant the backhoe completes its preparation for loading, such as gathering sediment. To automate sediment transport by a dump truck, this instant must be predicted promptly. However, it is difficult to predict the instant at which the backhoe is ready to load sediment, owing to the similarity of the motions observed during preparation for loading. Moreover, the skill with which a backhoe is operated differs between operators; thus, predicting the instant requires a model unique to each operator. In this study, we attempt to predict the instant at which the backhoe is in the ideal position to load sediment into the dump truck. We employ the beta-process hidden Markov model (BP-HMM) to develop a motion model of a backhoe used for earthmoving work and operated by a specific operator, in order to predict the instant at which the backhoe is ready to load sediment into the dump truck. The BP-HMM classifies the backhoe motion into several primitive motions. Furthermore, within a series of primitive motions such as loading sediment, we identify a specific sequence of actions that is unique to waiting for the dump truck to drive into the loading spot. As input for the model, we gathered 6-axis inertial data from the cab, boom, and arm of the backhoe using attachable sensor boxes that include inertial measurement units (IMUs); this measurement methodology can also be used on older backhoes without built-in sensors. As a result, using the motion data of a specific operator, we identified three kinds of primitive motions that help predict the instant at which the backhoe is ready to load sediment into the dump truck. At best, the instant could be predicted with probabilities of 67% and 100% at 6 s and 0.7 s before the loading process began, respectively. This phased prediction could be used to reduce the idle time and the risk to dump trucks during earthmoving work with the backhoe.
|
|
14:45-15:00, Paper MoCT6.4 | |
>Robust Dynamic State Estimation for Lateral Control of an Industrial Tractor Towing Multiple Passive Trailers |
|
Zhou, Shunbo | The Chinese University of Hong Kong |
Zhao, Hongchao | The Chinese University of Hong Kong |
Chen, Wen | The Chinese University of Hong Kong |
Liu, Zhe | University of Cambridge |
Wang, Hesheng | Shanghai Jiao Tong University |
Liu, Yunhui | Chinese University of Hong Kong |
Keywords: Industrial Robots, Logistics, Robotics in Construction
Abstract: In this paper, we propose a dynamic state estimation framework for lateral control of a heavy tractor-trailers system using only mass-produced low-cost sensors. This issue is challenging since the lateral velocity of the lead tractor is difficult to measure directly. The performance of existing dynamic model-based estimation methods will also be degraded, as different trailers and payloads cause the tractor model parameters to change. We address this issue by incorporating a kinematic estimator into a dynamic model-based estimation scheme. Accurate and reliable tire cornering stiffness and dynamics-informed lateral velocity of the lead tractor can be output in real-time by using our method. The stability and robustness of the proposed method are theoretically proved. The feasibility of our method is verified by full-scale experiments. It is also verified that the estimated model parameters and lateral states do improve the control performance by integrating the estimator into a lateral control system.
|
|
MoCT7 |
Room T7 |
Robotics in Construction II |
Regular session |
Chair: Liu, Zhe | University of Cambridge |
Co-Chair: Hutter, Marco | ETH Zurich |
|
14:00-14:15, Paper MoCT7.1 | |
>End-To-End 3D Point Cloud Learning for Registration Task Using Virtual Correspondences |
|
Wei, Huanshu | Chinese University of Hong Kong |
Qiao, Zhijian | Shanghai Jiao Tong University |
Liu, Zhe | University of Cambridge |
Suo, Chuanzhe | The Chinese University of Hong Kong |
Yin, Peng | Carnegie Mellon University |
Shen, Yueling | Shanghai Jiao Tong University |
Li, Haoang | The Chinese University of Hong Kong |
Wang, Hesheng | Shanghai Jiao Tong University |
Keywords: Robotics in Construction
Abstract: 3D point cloud registration remains a very challenging topic due to the difficulty of finding the rigid transformation between two point clouds with partial correspondences, and it is even harder in the absence of any initial estimate. In this paper, we present an end-to-end deep-learning-based approach to the point cloud registration problem. First, the revised LPD-Net is introduced to extract features and aggregate them with a graph network. Second, a self-attention mechanism is utilized to enhance the structural information within each point cloud, and a cross-attention mechanism is designed to enhance the corresponding information between the two input point clouds. Based on these, virtual corresponding points are generated by a voting-based method, and finally the point cloud registration problem is solved using SVD. Comparison results on the ModelNet40 dataset show that the proposed approach reaches the state of the art in point cloud registration tasks, and experimental results on the KITTI dataset validate its effectiveness in real applications.
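The final step of recovering a rigid transform from point correspondences via SVD is the standard Kabsch solution and can be sketched independently of the learned network; here the network's virtual correspondences would simply supply the paired points, and the random test data below is for illustration only.

    import numpy as np

    def rigid_transform_from_correspondences(P, Q):
        """Least-squares R, t such that R @ P[i] + t ~= Q[i], via SVD (Kabsch).
        P, Q: (N, 3) arrays of corresponding points."""
        p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
        H = (P - p_mean).T @ (Q - q_mean)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
        R = Vt.T @ D @ U.T
        t = q_mean - R @ p_mean
        return R, t

    # sanity check with a known rotation about z and a translation
    theta = 0.3
    R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                       [np.sin(theta),  np.cos(theta), 0.0],
                       [0.0, 0.0, 1.0]])
    P = np.random.rand(100, 3)
    Q = P @ R_true.T + np.array([0.5, -0.2, 1.0])
    R, t = rigid_transform_from_correspondences(P, Q)
    print(np.allclose(R, R_true, atol=1e-6), np.allclose(t, [0.5, -0.2, 1.0], atol=1e-6))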
|
|
14:15-14:30, Paper MoCT7.2 | |
>Terrain-Adaptive Planning and Control of Complex Motions for Walking Excavators |
> Video Attachment
|
|
Jelavic, Edo | Swiss Federal Institute of Technology Zurich |
Berdou, Yannick | ETH Zurich |
Jud, Dominic | ETH Zurich |
Kerscher, Simon | Eth Zurich |
Hutter, Marco | ETH Zurich |
Keywords: Robotics in Construction, Whole-Body Motion Planning and Control
Abstract: This article presents a planning and control pipeline for legged-wheeled (hybrid) machines. It consists of a Trajectory Optimization based planner that computes references for end-effectors and joints. The references are tracked using a whole-body controller based on a hierarchical optimization approach. Our controller is capable of performing terrain adaptive whole-body control. Furthermore, it computes both torque and position/velocity references, depending on the actuator capabilities. We perform experiments on a Menzi Muck M545, a full size 31 Degree of Freedom (DoF) walking excavator with five limbs: four wheeled legs and an arm. We show motions that require full-body coordination executed in realistic conditions. To the best of our knowledge, this is the first work that shows the execution of whole-body motions on a full size walking excavator, using all DoFs for locomotion.
|
|
14:30-14:45, Paper MoCT7.3 | |
>Towards RL-Based Hydraulic Excavator Automation |
> Video Attachment
|
|
Egli, Pascal Arturo | RSL, ETHZ |
Hutter, Marco | ETH Zurich |
Keywords: Robotics in Construction, Reinforcement Learning
Abstract: In this article we present a data-driven approach for automated arm control of a hydraulic excavator. Except for the link lengths of the excavator, our method does not require machine-specific knowledge nor gain tuning. Using data collected during operation of the excavator, we train a general purpose model to effectively represent the highly non-linear dynamics of the hydraulic actuation and joint linkage. Together with the link lengths a simulation is set up to train a neural network control policy for end-effector position tracking using reinforcement learning (RL). The control policy directly outputs the actuator commands that can be applied to the machine without unfounded filtering or modification. The proposed method is implemented and tested on a 12t hydraulic excavator, controlling its 4 main arm joints to track desired positions of the shovel in free-space. The results demonstrate the feasibility of directly applying control policies trained in simulation to the physical excavator for accurate and stable position tracking.
|
|
14:45-15:00, Paper MoCT7.4 | |
>Multimodal Teleoperation of Heterogeneous Robots within a Construction Environment |
|
Wallace, Dylan | University of Nevada, Las Vegas |
He, Yu Hang | University of Nevada, Las Vegas |
Chagas Vaz, Jean M. | University of Nevada Las Vegas |
Georgescu, Leonardo | University of Nevada, Las Vegas |
Oh, Paul Y. | University of Nevada, Las Vegas (UNLV) |
Keywords: Robotics in Construction, Telerobotics and Teleoperation, Virtual Reality and Interfaces
Abstract: Automation in construction continues to be a topic of interest for many in industry and academia. However, the dynamic environments presented by construction sites make such tasks difficult to automate reliably. This paper proposes a novel method of teleoperation for multiple heterogeneous robots within a construction environment. The system is achieved by creating a virtual reality interface that allows an operator to control multiple robots both synchronously and asynchronously. Feedback is provided by an array of RGBD cameras, force sensors, and precise odometry data. The DRC-Hubo and Spot robot platforms are used for implementation and experimentation. Experiments include useful construction tasks such as item manipulation and the delivery of tools and components. Results demonstrate the feasibility of implementing the system in a construction environment, including trajectory comparisons, task learning curves, and successful multi-robot collaboration.
|
|
MoCT8 |
Room T8 |
Service Robots |
Regular session |
Chair: Liu, Ming | Hong Kong University of Science and Technology |
Co-Chair: Fernandez-Carmona, Manuel | University of Lincoln |
|
14:00-14:15, Paper MoCT8.1 | |
>Applying Surface Normal Information in Drivable Area and Road Anomaly Detection for Ground Mobile Robots |
|
Wang, Hengli | The Hong Kong University of Science and Technology |
Fan, Rui Ranger | UC San Diego |
Sun, Yuxiang | Hong Kong University of Science and Technology |
Liu, Ming | Hong Kong University of Science and Technology |
Keywords: Automation Technologies for Smart Cities, Service Robotics, Logistics
Abstract: The joint detection of drivable areas and road anomalies is a crucial task for ground mobile robots. In recent years, many impressive semantic segmentation networks, which can be used for pixel-level drivable area and road anomaly detection, have been developed. However, the detection accuracy still needs improvement. Therefore, we develop a novel module named the Normal Inference Module (NIM), which can generate surface normal information from dense depth images with high accuracy and efficiency. Our NIM can be deployed in existing convolutional neural networks (CNNs) to refine the segmentation performance. To evaluate the effectiveness and robustness of our NIM, we embed it in twelve state-of-the-art CNNs. The experimental results illustrate that our NIM can greatly improve the performance of the CNNs for drivable area and road anomaly detection. Furthermore, our proposed NIM-RTFNet ranks 8th on the KITTI road benchmark and exhibits a real-time inference speed.
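To make the role of surface normal information concrete, the sketch below computes per-pixel normals from a dense depth image given pinhole intrinsics by back-projecting to 3-D and crossing the image-space gradients. This is a generic baseline for illustration, not the paper's NIM module, and the intrinsics in the example are invented.

    import numpy as np

    def normals_from_depth(depth, fx, fy, cx, cy):
        """Estimate per-pixel surface normals from a dense depth image (meters)."""
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        x = (u - cx) * depth / fx
        y = (v - cy) * depth / fy
        pts = np.stack([x, y, depth], axis=-1)      # (h, w, 3) back-projected points
        dpdu = np.gradient(pts, axis=1)             # change along image x
        dpdv = np.gradient(pts, axis=0)             # change along image y
        n = np.cross(dpdv, dpdu)
        n /= np.linalg.norm(n, axis=-1, keepdims=True) + 1e-9
        return n                                    # (h, w, 3) unit normals

    # toy example: a fronto-parallel plane at 2 m gives normals ~ (0, 0, -1),
    # i.e. pointing back toward the camera
    n = normals_from_depth(np.full((48, 64), 2.0), fx=525.0, fy=525.0, cx=32.0, cy=24.0)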
|
|
14:15-14:30, Paper MoCT8.2 | |
>Performance Characterization of an Algorithm to Estimate the Search Skill of a Human or Robot Agent |
|
Balaska, Audrey | Tufts University |
Rife, Jason | Tufts University |
Keywords: Search and Rescue Robots, Performance Evaluation and Benchmarking, Object Detection, Segmentation and Categorization
Abstract: This paper characterizes an algorithm that estimates searcher skill level to support planning for search activities involving heterogeneous robot and human/robot teams. Specifically, we use Monte-Carlo simulations to determine the empirical accuracy of the estimator, to assess the quality of its nonparametric predicted distribution of agent skill levels, and to characterize the convergence rate of the estimate. The simulation study suggests that a single challenging search task can be used to estimate searcher skill within about 10%; however, the quality of the estimate is higher when searcher skill is high.
|
|
14:30-14:45, Paper MoCT8.3 | |
>The Marathon 2: A Navigation System |
> Video Attachment
|
|
Macenski, Steven | Samsung Research America |
Martin Rico, Francisco | Carnegie Mellon University |
White, Ruffin | University of California San Diego |
Gines Clavero, Jonatan | King Juan Carlos University |
Keywords: Service Robots, Behavior-Based Systems, Software, Middleware and Programming Environments
Abstract: Developments in mobile robot navigation have enabled robots to operate in warehouses, retail stores, and on sidewalks around pedestrians. Various navigation solutions have been proposed, though few as widely adopted as ROS Navigation. 10 years on, it is still one of the most popular navigation solutions. Yet, ROS Navigation has failed to keep up with modern trends. We propose the new navigation solution, Navigation2, which builds on the successful legacy of ROS Navigation. Navigation2 uses a behavior tree for navigator task orchestration and employs new methods designed for dynamic environments applicable to a wider variety of modern sensors. It is built on top of ROS2, a secure message passing framework suitable for safety critical applications and program lifecycle management. We present experiments in a campus setting utilizing Navigation2 to operate safely alongside students over a marathon as an extension of the experiment proposed in Eppstein et al. The Navigation2 system is freely available at https://github.com/ros-planning/navigation2 with a rich community and instructions.
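Navigation2 orchestrates its navigation subtasks with a behavior tree. The snippet below is not Navigation2 code (its trees are specified in XML and executed by ROS 2 servers); it is a generic, self-contained Python illustration of how sequence and fallback nodes tick subtasks such as path computation, path following, and recovery. All class and task names are invented for the example.

```python
from enum import Enum

class Status(Enum):
    SUCCESS = 1
    FAILURE = 2
    RUNNING = 3

class Sequence:
    """Tick children in order; fail fast, succeed only if all succeed."""
    def __init__(self, children):
        self.children = children
    def tick(self):
        for child in self.children:
            status = child.tick()
            if status != Status.SUCCESS:
                return status
        return Status.SUCCESS

class Fallback:
    """Try children in order until one does not fail (used for recoveries)."""
    def __init__(self, children):
        self.children = children
    def tick(self):
        for child in self.children:
            status = child.tick()
            if status != Status.FAILURE:
                return status
        return Status.FAILURE

class Action:
    def __init__(self, name, fn):
        self.name, self.fn = name, fn
    def tick(self):
        print(f"tick: {self.name}")
        return self.fn()

# Hypothetical navigation tasks standing in for planner/controller/recovery servers.
compute_path = Action("ComputePath", lambda: Status.SUCCESS)
follow_path  = Action("FollowPath",  lambda: Status.SUCCESS)
recover      = Action("Recovery",    lambda: Status.SUCCESS)

navigate = Fallback([Sequence([compute_path, follow_path]), recover])
print(navigate.tick())
```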
|
|
14:45-15:00, Paper MoCT8.4 | |
>Path Planning for Nonholonomic Multiple Mobile Robot System with Applications to Robotic Autonomous Luggage Trolley Collection at Airports |
|
Wang, Jiankun | The Chinese University of Hong Kong |
Meng, Max Q.-H. | The Chinese University of Hong Kong |
Keywords: Service Robots, Service Robotics, Motion and Path Planning
Abstract: In this paper, we propose a novel path planning algorithm for a nonholonomic multiple mobile robot system with applications to a robotic autonomous luggage trolley collection system at airports. We cast this path planning problem as a Multiple Traveling Salesman Problem (MTSP). Our path planning algorithm consists of three parts. First, we use the Minimum Spanning Tree (MST) algorithm to divide the MTSP into a number of independent TSPs, which achieves the task assignment for each mobile robot. Second, we implement a closed-loop forward control policy based on the kinematic model of the mobile robot to obtain a feasible and smooth path. The control cost of the path is used as the new metric in solving the TSPs. Finally, in order to adapt to our case, we modify the TSP into an Open Dynamic Traveling Salesman Problem with Fixed Start (ODTSP-FS) and implement an ant colony algorithm to achieve the path planning for each mobile robot. We evaluate our algorithm with simulation experiments, and the experimental results demonstrate that our algorithm can quickly generate feasible and smooth paths for each robot while satisfying the nonholonomic constraints.
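One common way to realize the MST-based task assignment described above is to build a minimum spanning tree over the target locations and cut its longest edges so that each connected component becomes one robot's set of targets. The sketch below shows that idea with scipy on placeholder coordinates; the paper's actual assignment and its control-cost metric for the resulting TSPs are more involved.

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components

def partition_targets(points, n_robots):
    """Cluster targets into n_robots groups by cutting the longest MST edges."""
    dist = cdist(points, points)
    mst = minimum_spanning_tree(csr_matrix(dist)).tocoo()
    # Sort MST edges by length and drop the n_robots-1 longest ones.
    order = np.argsort(mst.data)[::-1]
    keep = np.ones(len(mst.data), dtype=bool)
    keep[order[:n_robots - 1]] = False
    pruned = csr_matrix((mst.data[keep], (mst.row[keep], mst.col[keep])),
                        shape=mst.shape)
    n_comp, labels = connected_components(pruned, directed=False)
    return labels

rng = np.random.default_rng(1)
trolleys = rng.uniform(0, 50, size=(12, 2))    # placeholder trolley positions (m)
print(partition_targets(trolleys, n_robots=3)) # cluster index per trolley
```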
|
|
15:00-15:15, Paper MoCT8.5 | |
>Affordance-Based Mobile Robot Navigation among Movable Obstacles |
> Video Attachment
|
|
Wang, Maozhen | Northeastern University |
Luo, Rui | Northeastern University |
Onol, Aykut Ozgun | Northeastern University |
Padir, Taskin | Northeastern University |
Keywords: Service Robotics, Motion and Path Planning, Visual-Based Navigation
Abstract: Avoiding obstacles in the perceived world has been the classical approach to autonomous mobile robot navigation. However, this usually leads to unnatural and inefficient motions that significantly differ from the way humans move in tight and dynamic spaces, as we do not refrain from interacting with the environment around us when necessary. Inspired by this observation, we propose a framework for autonomous robot navigation among movable obstacles (NAMO) that is based on the theory of affordances and contact-implicit motion planning. We consider a realistic scenario in which a mobile service robot negotiates unknown obstacles in the environment while navigating to a goal state. An affordance extraction procedure is performed for novel obstacles to detect their movability, and a contact-implicit trajectory optimization method is used to enable the robot to interact with movable obstacles to improve the task performance or to complete an otherwise infeasible task. We demonstrate the performance of the proposed framework by hardware experiments with Toyota's Human Support Robot.
|
|
15:15-15:30, Paper MoCT8.6 | |
>Next-Best-Sense: A Multi-Criteria Robotic Exploration Strategy for RFID Tags Discovery |
|
Polvara, Riccardo | University of Lincoln |
Fernandez-Carmona, Manuel | University of Lincoln |
Hanheide, Marc | University of Lincoln |
Neumann, Gerhard | Karlsruhe Institute of Technology |
Keywords: Service Robotics, Inventory Management, Environment Monitoring and Management
Abstract: Automated exploration is one of the most relevant applications for autonomous robots. In this paper, we propose a novel online coverage algorithm called Next-Best-Sense (NBS), an extension of the Next-Best-View class of exploration algorithms which optimizes the exploration task by balancing multiple criteria. NBS is applied to the problem of localizing all Radio Frequency Identification (RFID) tags with a mobile robot. We cast this problem as a coverage planning problem by defining a basic sensing operation – a scan with the RFID reader – as the field of “view” of the sensor. NBS evaluates candidate locations with a global utility function which combines utility values for travel distance, information gain, sensing time, battery status and RFID information gain, generalizing the use of Multi-Criteria Decision Making. We developed an RFID reader and tag model in the Gazebo simulator for validation. Experiments performed both in simulation and with a robot suggest that our NBS approach can successfully localize all the RFID tags while minimizing navigation metrics such as the number of sensing operations, total traveling distance and battery consumption. The code developed is publicly available on the authors’ repository.
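The global utility that NBS maximizes combines several normalized criteria into a single score per candidate sensing location. The snippet below is a minimal, hedged illustration of such a weighted multi-criteria score; the criterion names mirror those listed in the abstract, but the normalization, weights, and combination rule are placeholders rather than the paper's exact formulation.

```python
def nbs_utility(candidate, weights):
    """Score a candidate location from normalized criteria in [0, 1].
    Criteria where smaller is better (distance, sensing time) are inverted."""
    score = (
        weights["distance"]   * (1.0 - candidate["norm_distance"]) +
        weights["info_gain"]  * candidate["norm_info_gain"] +
        weights["sense_time"] * (1.0 - candidate["norm_sense_time"]) +
        weights["battery"]    * candidate["norm_battery_left"] +
        weights["rfid_gain"]  * candidate["norm_rfid_gain"]
    )
    return score / sum(weights.values())

weights = dict(distance=1.0, info_gain=2.0, sense_time=0.5, battery=1.0, rfid_gain=2.0)
candidates = [
    dict(norm_distance=0.2, norm_info_gain=0.7, norm_sense_time=0.3,
         norm_battery_left=0.9, norm_rfid_gain=0.6),
    dict(norm_distance=0.8, norm_info_gain=0.9, norm_sense_time=0.5,
         norm_battery_left=0.9, norm_rfid_gain=0.8),
]
best = max(candidates, key=lambda c: nbs_utility(c, weights))
print(nbs_utility(best, weights))
```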
|
|
MoCT9 |
Room T9 |
Automation at Micro-Nano Scales |
Regular session |
Chair: Gauthier, Michael | FEMTO-ST Institute |
Co-Chair: Cappelleri, David | Purdue University |
|
14:00-14:15, Paper MoCT9.1 | |
>Magnetically Actuated Pick-And-Place Operations of Cellular Micro-Rings for High-Speed Assembly of Micro-Scale Biological Tube |
> Video Attachment
|
|
Wu, Yang | Beijing Institute of Technology |
Sun, Tao | Beijing Institute of Technology |
Shi, Qing | Beijing Institute of Technology |
Wang, Huaping | Beijing Institute of Technology |
Huang, Qiang | Beijing Institute of Technology |
Fukuda, Toshio | Meijo University |
Keywords: Micro/Nano Robots, Automation at Micro-Nano Scales, Automation in Life Sciences: Biotechnology, Pharmaceutical and Health Care
Abstract: Tissue engineering seeks to use modular tissue micro-rings to construct artificial biological microtubes as substitutes for autologous tissue tubes, alleviating the shortage of donor sources. However, because of the lack of effective assembly strategies, it is still challenging to achieve high-speed fabrication of biological microtubes with high cell density. In this paper, we propose a robot-based magnetic assembly strategy to handle this challenge. We first encapsulated magnetic alginate microfibers into micro-rings formed by cell self-assembly to enhance controllability. Afterwards, a 3D long-stroke manipulator with a visual servoing system was designed to achieve magnetic pick-and-place operations of micro-rings for 3D assembly. Moreover, we developed a mathematical model of the motion of a micro-ring in solution environments. Based on visual feedback, we analyzed the feasibility of automatic assembly and the response of micro-rings following the moving magnets, which shows that our proposed method has great potential to achieve high-speed bio-assembly. Finally, we successfully assembled multiple micro-rings into a biological microtube with high cell density.
|
|
14:15-14:30, Paper MoCT9.2 | |
>Design of the uMAZE Platform and Microrobots for Independent Control and Micromanipulation Tasks |
> Video Attachment
|
|
Johnson, Benjamin | Purdue University |
Esantsi, Nathan | Purdue University |
Cappelleri, David | Purdue University |
Keywords: Micro/Nano Robots, Automation at Micro-Nano Scales
Abstract: We present the uMAZE (u(Micro) Magnetic Actuation Zone control Environment) platform for independent control of multiple magnetic microrobots for performing individual and collaborative micromanipulation tasks. We present a new local magnetic field generating coil system design, microrobot design, actuation scheme, and orientation control for actuating multiple magnetic microrobots independently. The new designs are validated and experiments showcasing their abilities are presented. The demonstrations include closed-loop independent and simultaneous control of four microrobots and a sample micromanipulation task involving two microrobots pushing micro-parts into a prescribed formation.
|
|
14:30-14:45, Paper MoCT9.3 | |
>Dielectrophoretic Introduction of the Membrane Proteins into the BLM Platforms for the Electrophysiological Analysis Systems
|
Sugiura, Hirotaka | Nagoya University |
Osaki, Toshihisa | Kanagawa Institute of Industrial Science and Technology |
Mimura, Hisatoshi | Kanagawa Institute of Industrial Science and Technology (KISTEC) |
Yamada, Tetsuya | Kanagawa Institute of Industrial Science and Technology |
Takeuchi, Shoji | UTokyo |
Keywords: Micro/Nano Robots, Automation at Micro-Nano Scales, Medical Robots and Systems
Abstract: This paper proposes a technique to introduce membrane proteins into a lab-on-chip analysis system having a planar lipid bilayer. The proposed technique utilizes a dielectrophoretic force generated by the asymmetric configuration of the solid electrodes on the aqueous buffer separator. By applying an alternating current to the separator and the counter electrode, we manipulated liposomes that could host the membrane proteins on their surface. The key point for the dielectrophoretic manipulation in this system was to fabricate an effective configuration of the droplet separator having a taper edge on the contour of the micropore. This configuration created a strong interpenetrating DEP force at the lipid bilayer and promoted the fusion of liposomes into the lipid bilayer. The separator was fabricated by micromachining techniques. Using the separator, we formed the lipid bilayer without evading the solid electrode on the surface. Finally, we confirmed the introduction of the liposome by monitoring it with optical microscopy.
|
|
14:45-15:00, Paper MoCT9.4 | |
>Miniaturized Robotics: The Smallest Camera Operator Bot Pays Tribute to David Bowie (I) |
|
Lehmann, Olivier | Universite de Franche-Comté |
Rauch, Jean-Yves | FEMTO-ST institute |
Vitry, Youen | ULB |
Pinsard, Tibo | Darrowan Prod |
Lambert, Pierre | Université libre de Bruxelles |
Gauthier, Michael | FEMTO-ST Institute |
|
|
15:00-15:15, Paper MoCT9.5 | |
>Electromagnetic Actuation of Microrobots in a Simulated Vascular Structure with a Position Estimator Based Motion Controller |
> Video Attachment
|
|
Dong, Dingran | City University of Hong Kong |
Lam, Wah Shing | City University of Hong Kong |
Sun, Dong | City University of Hong Kong |
Keywords: Motion Control, Automation at Micro-Nano Scales, Micro/Nano Robots
Abstract: The use of microrobots to achieve micromanipulation in vivo has attracted considerable attention in recent years to meet the requirements of non-invasiveness, precision and high efficiency in medical treatment. This paper reports the use of a home-designed electromagnetic manipulation system to control the movements of microrobots in a simulated vascular structure. After dynamic modeling, the moving trajectory of the microrobot is designed on the basis of an artificial potential field. A position estimator is then designed, with stability analysis via a Lyapunov approach. A super-twisting algorithm is further applied to control the microrobot to move along the desired trajectory. Simulations and experiments are finally performed to demonstrate the effectiveness of the proposed control approach.
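The super-twisting algorithm mentioned above is a standard second-order sliding-mode controller with a continuous term plus an integral of the sign of the sliding variable. The sketch below simulates it on a toy first-order system with a bounded disturbance to show the structure of the control law; the gains, dynamics, and disturbance are illustrative and are not the microrobot model from the paper.

```python
import numpy as np

def simulate_super_twisting(k1=2.0, k2=1.5, dt=1e-3, T=5.0):
    """Super-twisting control of x_dot = u + d(t), driving x to x_ref."""
    n = int(T / dt)
    x, v = 0.0, 0.0
    x_ref = 1.0
    log = np.zeros(n)
    for i in range(n):
        s = x - x_ref                               # sliding variable (error)
        u = -k1 * np.sqrt(abs(s)) * np.sign(s) + v  # continuous part + integral term
        v += -k2 * np.sign(s) * dt
        d = 0.2 * np.sin(2 * np.pi * 0.2 * i * dt)  # bounded matched disturbance
        x += (u + d) * dt
        log[i] = x
    return log

traj = simulate_super_twisting()
print("final tracking error:", abs(traj[-1] - 1.0))
```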
|
|
MoCT10 |
Room T10 |
Biological Cell Manipulation |
Regular session |
Chair: Hayakawa, Takeshi | Chuo University |
Co-Chair: Yaxiaer, Yalikun | Nara Institute of Science and Technology |
|
14:00-14:15, Paper MoCT10.1 | |
>On-Chip Integration of Ultra-Thin Glass Cantilever for Physical Property Measurement Activated by Femtosecond Laser Impulse |
|
Tang, Tao | Nara Institute of Science and Technology |
Hao, Yansheng | Nara Institute of Science and Technology |
Shen, Yigang | Osaka University |
Tanaka, Yo | Riken |
Huang, Ming | Nara Institute of Science and Technology |
Hosokawa, Yoichiroh | NAIST |
Li, Ming | Macquarie University |
Yaxiaer, Yalikun | Nara Institute of Science and Technology |
Keywords: Biological Cell Manipulation, Soft Sensors and Actuators
Abstract: Under the excitation of acoustic radiation, the amount of energy absorbed and rebounded by cells is related to their mechanical properties, e.g. stiffness, shape, and weight. In this paper, a femtosecond laser-activated micro-detector is designed to convert this relationship into an electrical signal. First, the acoustic radiation is generated by a femtosecond laser pulse in a microchannel and acts on neighboring cells/beads. Then, an ultra-thin glass sheet (UTGS)-based pressure sensor (cantilever) is fabricated at the bottom of the microfluidic chip to monitor changes in acoustic pressure during the detection process. In this detection system, the pressure sensor is fabricated from a 10 µm UTGS in the shape of a rectangular cantilever and functions as a detector that converts acoustic waves into a displacement response. Based on the amplitude of the detected pulses, we can directly analyze the acoustic energy, coming either from the femtosecond laser pulse itself or from what remains after it penetrates the target cells. We performed experiments on 10 µm beads and verified the applicability of this micro-detector; the proposed method has great potential to be applied as a detection mechanism in label-free cell manipulation (i.e., sorting).
|
|
14:15-14:30, Paper MoCT10.2 | |
>A Novel Portable Cell Sonoporation Device Based on Open-Source Acoustofluidics |
|
Song, Bin | BEIHANG UNIVERSITY |
Zhang, Wei | Beihang University |
Bai, Xue | School of Mechanical Engineering & Automation, Beihang University |
Feng, Lin | Beihang University |
Zhang, Deyuan | Beihang University |
Arai, Fumihito | Nagoya University |
Keywords: Biological Cell Manipulation, Micro/Nano Robots
Abstract: Sonoporation, which typically employs acoustic cavitation microbubbles, can enhance the permeability of the cell membrane, allowing foreign matter to enter cells across the natural barriers. However, the diameter nonuniformity and random distribution of microbubbles make it difficult to achieve controllable and high-efficiency sonoporation, while the complex external acoustic driving system also limits its applicability. Herein, we demonstrate a low-cost, expandable, and portable acoustofluidic device for cell sonoporation using acoustic streaming generated by oscillating sharp edges. The streaming-induced high shear forces can (i) quickly trap target cells at the tip of sharp edges and (ii) transiently modulate the permeability of the cell membrane, which is utilized to perform cell sonoporation events. Using our device, sonoporation is successfully achieved in a microbubble-free manner, with a sonoporation efficiency of more than 90%. Furthermore, our acoustic driving system is designed around the open-source Arduino prototyping platform due to its extendibility and portability. In addition to these benefits, our acoustofluidic device is simple to fabricate and operate, and it can work at a relatively low frequency (4.6 kHz). All these advantages make our novel cell sonoporation device invaluable for many biological and biomedical applications such as drug delivery and gene transfection.
|
|
14:30-14:45, Paper MoCT10.3 | |
>Robotic Micromanipulation of Biological Cells with Friction Force-Based Rotation Control |
> Video Attachment
|
|
Cui, Shuai | Nanyang Technological University |
Ang, Wei Tech | Nanyang Technological University |
Keywords: Biological Cell Manipulation, Automation at Micro-Nano Scales
Abstract: Cell manipulation is a critical procedure in related biological applications such as embryo biopsy and intracytoplasmic sperm injection (ICSI), where the biological cell is required to be oriented to the desired position. To bridge the gap between the techniques and the clinical applications, a robotic micromanipulation method, which utilizes friction forces to rotate the cell with standard micropipettes, is presented in this paper. Force models for both in-plane and out-of-plane rotations are well established and analyzed for the rotation control. For better controllability, calibration steps are also designed to adjust the orientation of the micropipette in a more efficient way. A cell orientation recognition algorithm based on superpixel segmentation and spectral clustering is reported, achieving high validation accuracy (96%) for estimating the orientation of the oocyte. The extracted visual information further facilitates the feedback control of cell rotation. Experimental results show that the overall success rate for the cell rotation control was about 95% with an orientation precision of ±1°.
|
|
14:45-15:00, Paper MoCT10.4 | |
>Construction of Multiple Hepatic Lobule Like 3D Vascular Networks by Manipulating Magnetic Tweezers Toward Tissue Engineering |
|
Kim, Eunhye | Meijo University |
Takeuchi, Masaru | Nagoya University |
Kozuka, Taro | Meijo University |
Nomura, Takuto | Meijo University |
Ichikawa, Akihiko | Meijo University |
Hasegawa, Yasuhisa | Nagoya University |
Huang, Qiang | Beijing Institute of Technology |
Fukuda, Toshio | Meijo University |
Keywords: Biological Cell Manipulation, Micro/Nano Robots, Medical Robots and Systems
Abstract: In this paper, we construct actively perfusable, multiple hepatic lobule-like vascular networks in a 3D cellular structure by using magnetic tweezers. Without well-organized channel networks, cells in a large 3D tissue cannot receive nutrients and oxygen from the channels, and therefore the cells die after a few days. To construct well-organized channel networks, we fabricated hepatic lobule-like vascular networks using magnetic fields in our previous work. However, the size of that hepatic lobule-like vascular network was more than five times larger than real hepatic tissue. To improve on the previous research, we make several contributions. First, we construct a vascular network with a size similar to that of the real tissue. Second, we culture the constructed structure for a long time (more than two weeks) to verify biocompatible conditions. Third, we assemble the constructed hepatic tissues to form a larger organ-scale structure, a liver. Finally, an actively perfusable system is adopted to implement a bioreactor by adding a micro pump.
|
|
15:00-15:15, Paper MoCT10.5 | |
>Evaluations of Response Characteristics of On-Chip Gel Actuators for Various Single Cell Manipulations |
> Video Attachment
|
|
Wada, Hiroki | Chuo University |
Koike, Yuha | Chuo University |
Yokoyama, Yoshiyuki | Toyama Industrial Technology Research and Development Center |
Hayakawa, Takeshi | Chuo University |
Keywords: Micro/Nano Robots, Biological Cell Manipulation
Abstract: On-chip gel actuators are potential candidates for single cell manipulation because they can realize low-invasive manipulation of various cells. We propose an on-chip gel actuator driven by light irradiation. By patterning the gel actuator with a light absorber, we can control the temperature of the actuator and drive it. The proposed drive method can realize highly localized temperature control of the gel actuator and can be applied to the mass integration of on-chip gel actuators. In this study, we evaluate the heat conduction of the actuator during driving and its response characteristics as a function of various design parameters. We theoretically and experimentally evaluate the response characteristics and confirm that they can be changed by altering the size of the light absorber. Furthermore, we show examples of cell manipulation including trapping, transport, and sorting with various sizes of the light absorber. Finally, we show a proof of concept for the application of the proposed drive method to the massive integration of on-chip gel actuators.
|
|
15:15-15:30, Paper MoCT10.6 | |
>Detection and Control of Air Liquid Interface with an Open-Channel Microfluidic Chip for Circulating Tumor Cells Isolation from Human Whole Blood |
> Video Attachment
|
|
Turan, Bilal | Nagoya University |
Tomori, Yusuke | Nagoya University |
Masuda, Taisuke | Nagoya University |
Weng, Ruixuan | Nagoya University |
Shen, Larina Tzu-Wei | Tsukuba University |
Matsusaka, Satoshi | Tsukuba University |
Arai, Fumihito | Nagoya University |
Keywords: Automation in Life Sciences: Biotechnology, Pharmaceutical and Health Care, Biological Cell Manipulation, Micro/Nano Robots
Abstract: We have proposed a bio-automation system to isolate and recover circulating tumor cells (CTCs) individually from whole blood. An open-channel microfluidic chip-based approach is used to isolate the CTCs. The proposed microfluidic chip design can form a stable air-liquid interface. CTCs are trapped by the gaps between the pillars of the microfluidic chip due to the capillary force associated with the meniscus of the air-liquid interface. We propose a chip design to stabilize the air-liquid interface and the sample flow speed. We introduce an image analysis algorithm to detect the position of the air-liquid interface. Using the visual feedback from the image analysis algorithm, a control system is proposed to control the air-liquid interface position. We succeeded in stabilizing the flow speed, making it feasible for the isolation of 5 mL of whole blood to be completed within 30 min. We achieved an average air-liquid interface position error of 4 µm with a standard deviation of 7 µm. We have confirmed that the air-liquid interface position is a deciding factor for the trapping area of CTCs. By controlling the air-liquid interface position, we have achieved trapping of CTCs in a narrow band at a high concentration.
|
|
MoCT11 |
Room T11 |
Micro/Nano Robotics |
Regular session |
Chair: Petruska, Andrew J. | Colorado School of Mines |
Co-Chair: Jayaram, Kaushik | University of Colorado Boulder |
|
14:00-14:15, Paper MoCT11.1 | |
>Piezoelectric Grippers for Mobile Micromanipulation |
> Video Attachment
|
|
Abondance, Tristan | Harvard University |
Jayaram, Kaushik | University of Colorado Boulder |
Jafferis, Noah T. | Harvard University |
Shum, Jennifer | Harvard University |
Wood, Robert | Harvard University |
Keywords: Grippers and Other End-Effectors, Micro/Nano Robots, Mobile Manipulation
Abstract: The ability to efficiently and precisely manipulate objects in inaccessible environments is becoming an essential requirement for many applications of mobile robots, particularly at small sizes. Here, we propose and implement a mobile micromanipulation solution using a piezoelectric microgripper integrated into a dexterous robot, HAMR (the Harvard Ambulatory MicroRobot), that has a size of approximately 4.5cm by 4cm by 2.3cm and a maximum payload of approximately 3g. Our 100mg miniature gripper is composed of recurve piezoelectric actuators that produce parallel jaw motions (stroke of 205µm at 200V) while providing high gripping forces (blocked force of 0.575N at 200V), making it effective for micromanipulation applications with tiny objects. Using this gripper, we successfully demonstrated a grasping and lifting task with an object of 1.3g and thickness of 250µm at an operating voltage of 100V. Finally, by taking advantage of the locomotion capabilities of HAMR, we demonstrate mobile manipulation by changing the position and orientation of small objects weighing up to 2.8g controlled by the movement of the robot. We expect that the addition of this novel manipulation capability will increase the effectiveness of such miniature robots for accomplishing real-world tasks.
|
|
14:15-14:30, Paper MoCT11.2 | |
>A Novel and Controllable Cell-Robot in Real Vascular Network for Target Tumor Therapy |
|
Feng, Yanmin | Beihang University |
Feng, Lin | Beihang University |
Dai, Yuguo | Beihang University |
Bai, Xue | School of Mechanical Engineering & Automation, Beihang University |
Zhang, Chaonan | Beihang University |
Chen, Yuanyuan | Beihang University |
Arai, Fumihito | Nagoya University |
Keywords: Micro/Nano Robots
Abstract: Magnetic microrobots can be propelled precisely and wirelessly in vivo using magnetic fields for targeted drug delivery and early detection. They are promising for clinical trials since magnetic fields are capable of penetrating most materials with minimal interaction and are nearly harmless to human beings. However, challenges such as the biocompatibility, biodegradation and therapeutic effects of these robots must be resolved before this technique is allowed into preclinical development. In this study, we propose a cell-robot based on macrophages for carrying drugs to kill tumors, propelled by magnetic gradient-based pulling. A custom-designed system with a strong gradient magnetic field in three-dimensional (3D) space, using the minimum number of coils, is used for precise control of the cell-robot. The cell-robots were fabricated by assembling magnetic nanoparticles (Fe3O4) and anti-cancer drugs (DOX) into macrophages for magnetic actuation and therapeutic effects. In vitro experiments show that cell-robots can be accurately transported to a destination or toward a targeted cancer cell. The magnetic nanoparticles have negligible effects on the cell-robot and the organism, which makes the cell-robot safe for in vivo experiments. The drugs carried in the cell-robot can be released by near-infrared irradiation and kill the cancer cells. Further in vivo experiments prove that the cell-robot can be transported to the tumor area and release drugs to kill cancer effectively. This research provides biocompatible and biodegradable cell-robots for early tumor prevention and targeted precision therapy.
|
|
14:30-14:45, Paper MoCT11.3 | |
>Magnetized Cell-Robot Propelled by Magnetic Field for Cancer Killing |
|
Dai, Yuguo | Beihang University |
Feng, Yanmin | Beihang University |
Feng, Lin | Beihang University |
Chen, Yuanyuan | Beihang University |
Bai, Xue | School of Mechanical Engineering & Automation, Beihang University |
Liang, Shuzhang | Beihang University |
Song, Li | Beihang University |
Arai, Fumihito | Nagoya University |
Keywords: Micro/Nano Robots, Medical Robots and Systems, Automation at Micro-Nano Scales
Abstract: In this paper, we present a magnetized cell-robot using macrophages as templates, which can be controlled under a strong gradient magnetic field to approach and kill cancer cells in both in vitro and in vivo environments. First, we establish a magnetic control system using only four coils, which can generate a gradient field of up to 4.14 T/m by utilizing the coupled field contributed by multiple electromagnets acting in concert. Most importantly, the cell-robot, which is based on the macrophage, is proposed and can be transported to the vicinity of cancer cells precisely using the strong gradient magnetic field. The cell-robot then actively phagocytoses the cancer cells and eventually kills them, achieving cancer treatment at the cellular level. This has important significance for guiding accurate targeted therapy in vivo in the future, under the premise of zero harm to the human body.
|
|
14:45-15:00, Paper MoCT11.4 | |
>Control of Magnetically-Driven Screws in a Viscoelastic Medium |
> Video Attachment
|
|
Zhang, Zhengya | University Medical Center Groningen |
Klingner, Anke | German University in Cairo |
Misra, Sarthak | University of Twente |
Khalil, Islam S.M. | University of Twente |
Keywords: Micro/Nano Robots
Abstract: Magnetically-driven screws operating in soft-tissue environments could be used to deploy localized therapy or achieve minimally invasive interventions. In this work, we characterize the closed-loop behavior of magnetic screws in an agar gel tissue phantom using a permanent magnet-based robotic system with an open configuration. Our closed-loop control strategy capitalizes on an analytical calculation of the swimming speed of the screw in viscoelastic fluids and the magnetic point-dipole approximation of magnetic fields. The analytical solution is based on the Stokes/Oldroyd-B equations, and its predictions are compared to experimental results at different actuation frequencies of the screw. Our measurements match the theoretical predictions of the analytical model below the step-out frequency of the screw, owing to the linearity of the analytical model. We demonstrate open-loop control in two-dimensional space, and point-to-point closed-loop motion control of the screw (length and diameter of 6 mm and 2 mm, respectively) with a maximum positioning error of 1.8 mm.
|
|
15:00-15:15, Paper MoCT11.5 | |
>Open-Loop Orientation Control Using Dynamic Magnetic Fields |
> Video Attachment
|
|
Petruska, Andrew J. | Colorado School of Mines |
Keywords: Micro/Nano Robots, Motion Control, Dynamics
Abstract: Remote magnetic control of soft magnetic objects has been limited to 2D orientation and 3D position. In this paper, we extend the five degree-of-freedom (5-DoF) control approach to full 6-DoF. We prove that 6-DoF control is possible for objects that have an apparent magnetic susceptibility tensor with unique eigenvalues. We further show that the object's orientation can be specified with a dynamic magnetic field and can be controlled without orientation feedback. The theory is demonstrated by rotating a soft magnetic object about each of its principal axes using a metronome-like dynamic field.
|
|
15:15-15:30, Paper MoCT11.6 | |
>A Manipulability Criterion for Magnetic Actuation of Miniature Swimmers with Flexible Flagellum |
> Video Attachment
|
|
Begey, Jérémy | University of Strasbourg |
Etievant, Maxime | FEMTO-ST Institute |
Quispe, Johan Edilberto | Sorbonne University, CNRS Institut Des Systèmes Intelligents Et |
Bolopion, Aude | Femto-St Institute |
Vedrines, Marc | ICube - INSA De Strasbourg |
Abadie, Joel | UFC ENSMM |
Régnier, Stéphane | Sorbonne University |
Andreff, Nicolas | Université De Franche Comté |
Renaud, Pierre | ICube AVR |
Keywords: Micro/Nano Robots, Kinematics, Automation at Micro-Nano Scales
Abstract: The use of untethered miniature swimmers is a promising trend, especially in biomedical applications. These swimmers are often operated remotely using a magnetic field, commonly generated using fixed coils that can suffer from a lack of compactness and heating issues. The analysis of swimming capabilities is still an ongoing topic of research. In this paper, we focus on the ability of a magnetic actuation system to operate the propulsion of miniature swimmers with a flexible flagellum. As a first contribution, we present a new manipulability criterion to assess the ability of a magnetic actuation system to operate a swimming robot, i.e. to ensure a displacement in any desired direction with a fixed minimum speed. This criterion is developed thanks to an analogy with cable-driven parallel robots. As a second contribution, this manipulability criterion is exploited to identify the dexterous swimming workspace, which can be used to design new coil configurations as well as to highlight the possibilities of moving-coil systems. A case study for a planar workspace surrounded by three coils is carried out in particular. The accompanying video illustrates the application of the proposed criterion in 3D, for a large number of coils.
|
|
MoCT12 |
Room T12 |
Micro-Scale Perception and Manipulation |
Regular session |
Chair: Liu, Ming | Hong Kong University of Science and Technology |
Co-Chair: Liu, Xinyu | University of Toronto |
|
14:00-14:15, Paper MoCT12.1 | |
>Smart-Inspect: Micro Scale Localization and Classification of Smartphone Glass Defects for Industrial Automation |
|
Bhutta, M Usman Maqbool | The Hong Kong University of Science and Technology (HKUST) |
Aslam, Shoaib | The Hong Kong University of Science and Technology (HKUST), Clea |
Yun, Peng | The Hong Kong University of Science and Technology |
Jiao, Jianhao | The Hong Kong University of Science and Technology |
Liu, Ming | Hong Kong University of Science and Technology |
Keywords: Localization, Automation Technologies for Smart Cities, Manufacturing, Maintenance and Supply Chains
Abstract: The presence of any type of defect on the glass screen of smart devices has a great impact on their quality. We present a robust semi-supervised learning framework for intelligent micro-scaled localization and classification of defects on a 16K pixel image of smartphone glass. Our model features the efficient recognition and labeling of three types of defects: scratches, light leakage due to cracks, and pits. Our method also differentiates between the defects and light reflections due to dust particles and sensor regions, which are classified as non-defect areas. We use a partially labeled dataset to achieve high robustness and excellent classification of defect and non-defect areas as compared to principal components analysis (PCA), multi-resolution and information-fusion-based algorithms. In addition, we incorporated two classifiers at different stages of our inspection framework for labeling and refining the unlabeled defects. We successfully enhanced the inspection depth-limit up to 5 microns. The experimental results show that our method outperforms manual inspection in testing the quality of glass screen samples by identifying defects on samples that have been marked as good by human inspection.
|
|
14:15-14:30, Paper MoCT12.2 | |
>An SEM-Based Nanomanipulation System for Multi-Physical Characterization of Single InGaN/GaN Nanowires |
|
Qu, Juntian | McGill University |
Wang, Renjie | McGill University |
Pan, Peng | McGill University |
Du, Linghao | University of Toronto |
Mi, Zetian | University of Michigan |
Sun, Yu | University of Toronto |
Liu, Xinyu | University of Toronto |
Keywords: Automation at Micro-Nano Scales, Micro/Nano Robots
Abstract: Functional nanomaterials possess exceptional multi-physical (e.g., mechanical, electrical and optical) properties compared with their bulk counterparts. To facilitate both synthesis and device applications of these nanomaterials, it is highly desired to characterize their multi-physical properties with high accuracy and efficiency. Nanomanipulation techniques under scanning electron microscopy (SEM) have enabled the testing of mechanical and electrical properties of various nanomaterials. However, the seamless integration of mechanical, electrical, and optical testing techniques into an SEM for triple-field-coupled characterization of single nanostructures is still unexplored. In this work, we report the first SEM-based nanomanipulation system for high-resolution mechano-optoelectronic testing of single semiconductor InGaN/GaN nanowires (NWs). A custom-made optical measurement setup was integrated onto a four-probe nanomanipulator inside an SEM, with two optical microfibers actuated by the nanomanipulator for NW excitation and emission measurement. A conductive tungsten nanoprobe and a conductive atomic force microscopy (AFM) cantilever probe were integrated onto the nanomanipulator for electrical nanoprobing of single NWs for electroluminescence (EL) measurement. The AFM probe also served as a force sensor for quantifying the contact force applied to the NW during nanoprobing. Using this unique system, we examined, for the first time, the effect of mechanical compression applied to an InGaN/GaN NW on its optoelectronic properties.
|
|
14:30-14:45, Paper MoCT12.3 | |
>Observer-Based Disturbance Control for Small-Scale Collaborative Robotics |
|
Awde, Ahmad | Université Bourgogne Franche-Comté - Sorbonne Université |
Boudaoud, Mokrane | Sorbonne Université |
Régnier, Stéphane | Sorbonne University |
Clévy, Cédric | Franche-Comté University |
Keywords: Automation at Micro-Nano Scales, Haptics and Haptic Interfaces, Micro/Nano Robots
Abstract: Collaborative robotics allows merging the best capabilities of humans and robots to perform complex tasks. This allows the user to interact with remote and directly inaccessible environments such as the micro-scale world. This interaction is made possible by the bidirectional exchange of information (displacement - force) between the user and the environment through a haptic interface. The effectiveness of the human/robot interaction is highly dependent on how the human feels the forces. This is a key point to enable humans to make the right decisions in a collaborative task. This paper discusses the design of a dynamic observer to estimate the forces applied by a human operator on a class of parallel pantograph-type haptic interfaces used to control small-scale robotic systems. The objective is to reject disturbances in order to improve the human force perception capability over a wide frequency range. A dynamic pantograph model is proposed and experimentally validated. The observer is designed on the basis of the proposed dynamic model and its efficiency in estimating the applied human force is demonstrated for the first time with pantograph-type interfaces. Experimental validation first shows the effectiveness of the perturbation observer for external human force estimation with a response time of less than 0.2 s and a mean error of less than 7 mN and then the effectiveness of the controller in improving the quality of human sensation of forces down to 10 mN.
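One minimal way to see how an observer can recover the external (human) force from a nominal device model: for a simplified 1-DoF mass-damper interface, the force implied by the nominal dynamics minus the known motor command is low-pass filtered to give the estimate. The sketch below is a generic discrete-time version of that idea, not the pantograph observer from the paper, and all parameters are placeholders.

```python
import numpy as np

# Nominal 1-DoF model of the interface: m*a + b*v = u + f_ext
m, b = 0.05, 0.2           # kg, N*s/m (placeholder nominal parameters)
dt, tau = 1e-3, 0.02       # sample time and observer filter time constant
alpha = dt / (tau + dt)    # first-order low-pass coefficient

def run(T=2.0):
    n = int(T / dt)
    v, v_prev, f_hat = 0.0, 0.0, 0.0
    est = np.zeros(n)
    for i in range(n):
        t = i * dt
        u = 0.0                                  # motor command (idle here)
        f_ext = 0.5 if t > 0.5 else 0.0          # human pushes with 0.5 N after 0.5 s
        a = (u + f_ext - b * v) / m              # true dynamics (unknown to observer)
        v += a * dt
        # Observer: force implied by the nominal model minus the known command,
        # passed through a low-pass filter to reject measurement noise.
        a_meas = (v - v_prev) / dt
        f_raw = m * a_meas + b * v - u
        f_hat += alpha * (f_raw - f_hat)
        v_prev = v
        est[i] = f_hat
    return est

print("estimated human force at end of run (N):", run()[-1])
```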
|
|
14:45-15:00, Paper MoCT12.4 | |
>Robust Micro-Particle Manipulation in a Microfluidic Channel Network Using Gravity-Induced Pressure Actuators |
> Video Attachment
|
|
Lee, Donghyeon | Pohang University of Science and Technology(POSTECH) |
Lee, Woongyong | POSTECH |
Chung, Wan Kyun | POSTECH |
Kim, Keehoon | POSTECH, Pohang University of Science and Technology |
Keywords: Automation at Micro-Nano Scales, Biological Cell Manipulation, Mechanism Design
Abstract: Robust particle manipulation is a challenging but essential technique for single-cell analysis and the processing of microfluidic devices. This paper proposes a micro-particle manipulation system with a microfluidic channel network. We built gravity-induced pressure actuators, which can generate high-resolution output pressure over a wide range so that multiple particles can be delivered from the inlet of the chip. In this paper, we study how to model the proposed multi-input-single-output system and its sources of disturbances, and design a robust controller using a disturbance observer technique. The performance of the proposed system was verified through experiments.
|
|
15:00-15:15, Paper MoCT12.5 | |
>Deep Learning-Based Autonomous Scanning Electron Microscope |
> Video Attachment
|
|
Jang, Jonggyu | Ulsan National Institute of Science and Technology (UNIST) |
Lyu, Hyeonsu | Ulsan National Institute of Science and Technology (UNIST) |
Yang, Hyun Jong | Pohang University of Science and Technology (POSTECH) |
Oh, Moohyun | Egovid Inc |
Lee, Junhee | Coxem Co. Ltd |
Keywords: Autonomous Agents, Reinforcement Learning, Computer Vision for Automation
Abstract: By virtue of their ultra-high resolution, scanning electron microscopes (SEMs) are essential for studying the topography, morphology, composition, and crystallography of materials, and thus are widely used for advanced research in physics, chemistry, pharmacy, geology, etc. The major hindrance to using SEMs is that obtaining high-quality images requires professional control of many parameters. Therefore, it is not an easy task even for an experienced researcher to get high-quality sample images without any help from SEM experts. In this paper, we propose and implement a deep learning-based autonomous SEM machine, which assesses image quality and controls parameters autonomously to get high-quality sample images just as human experts do. This world's first autonomous SEM machine may be the first step to bring SEMs, previously used only for advanced research due to their difficulty of use, into much broader applications such as education, manufacturing, and mechanical diagnosis, which were previously the domain of optical microscopes.
|
|
MoCT13 |
Room T13 |
Computer Vision for Medical Robotics |
Regular session |
Chair: Yin, Hu | Beihang University |
Co-Chair: Hannaford, Blake | University of Washington |
|
14:00-14:15, Paper MoCT13.1 | |
>The Application of Navigation Technology for the Medical Assistive Devices Based on Aruco Recognition Technology |
|
Tian, Weihan | Beihang University |
Chen, Diansheng | Beihang University |
Yang, Zihao | Beihang University |
Yin, Hu | Beihang University |
Keywords: Visual Servoing, Service Robots, Visual-Based Navigation
Abstract: In order to improve the convenience of operation of medical assistive devices and reduce use and maintenance costs, Aruco recognition technology is applied to the navigation and positioning of vision-guided electric assistive devices. First, the differential-drive kinematic model of the electric wheelchair is analyzed, and we discuss the feasibility of Aruco recognition technology for medical assistive devices. The camera on the wheelchair captures the Aruco marker data and transmits it to the controller. The controller calculates the position and posture of the electric wheelchair, which provides a reference for its next movement. Combined with the kinematic model of the electric wheelchair, this method realizes navigation and positioning of the wheelchair. Experiments show that the vision guidance of the electric wheelchair based on Aruco recognition is accurate, stable, and low cost, and can be flexibly applied to the assistive equipment of medical institutions.
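ArUco-based localization of this kind typically uses OpenCV's aruco module: detect the markers in the camera image, then estimate the camera-to-marker pose from the known marker size and camera intrinsics. The snippet below is a generic sketch of that flow rather than the authors' wheelchair code; note that the aruco API names differ across OpenCV versions (the calls shown follow the classic contrib-module interface), and the intrinsics and image path are placeholders.

```python
import cv2
import numpy as np

# Placeholder camera intrinsics and distortion (use your own calibration).
K = np.array([[600.0, 0.0, 320.0],
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)
marker_length = 0.10   # marker side length in meters

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
parameters = cv2.aruco.DetectorParameters_create()   # pre-4.7 OpenCV aruco API

frame = cv2.imread("frame.png")                      # placeholder camera image
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
corners, ids, _ = cv2.aruco.detectMarkers(gray, dictionary, parameters=parameters)

if ids is not None:
    # One rotation/translation vector per detected marker, in the camera frame.
    poses = cv2.aruco.estimatePoseSingleMarkers(corners, marker_length, K, dist)
    rvecs, tvecs = poses[0], poses[1]
    for marker_id, rvec, tvec in zip(ids.flatten(), rvecs, tvecs):
        R, _ = cv2.Rodrigues(rvec)                   # 3x3 rotation matrix
        print(marker_id, tvec.ravel())               # marker position used as a navigation reference
```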
|
|
14:15-14:30, Paper MoCT13.2 | |
>Endoscopic Navigation Based on Three-Dimensional Structure Registration |
|
Han, Minghui | Nankai University |
Dai, Yu | Nankai University |
Zhang, Jianxun | Nankai University |
Keywords: Visual-Based Navigation, Computer Vision for Medical Robotics, Computer Vision for Automation
Abstract: Surgical navigation is challenging in complicated multi-branch structures such as intrarenal collecting systems or bronchi. The objective of this work is to help surgeons quickly establish the correspondence between intraoperative endoscopic images and preoperative CT data. An endoscopic navigation method is proposed based on three-dimensional structure registration. It mainly includes three parts. First, a reconstruction method is presented to obtain three-dimensional information of porous structures from endoscopic images. It combines image enhancement, structure-from-motion and template matching. Second, a hole search strategy based on slicing is given for detecting and extracting three-dimensional porous structures from CT data. Third, a similarity measurement algorithm is developed for registering endoscopic images to CT data. The performance of this work is evaluated on data from ureteroscopic holmium laser lithotripsy, and the results show its accuracy, robustness and time cost.
|
|
14:30-14:45, Paper MoCT13.3 | |
>Z-Net: An Anisotropic 3D DCNN for Medical CT Volume Segmentation |
|
Li, Peichao | Imperial College London |
Zhou, Xiao-Yun | Imperial College London |
Wang, Zhaoyang | Imperial College London |
Yang, Guang-Zhong | Shanghai Jiao Tong University |
Keywords: Computer Vision for Medical Robotics, Object Detection, Segmentation and Categorization, Novel Deep Learning Methods
Abstract: Accurate volume segmentation from the Computed Tomography (CT) scan is a common prerequisite for pre-operative planning, intra-operative guidance and quantitative assessment of therapeutic outcomes in robot-assisted Minimally Invasive Surgery (MIS). 3D Deep Convolutional Neural Network (DCNN) is a viable solution for this task, but is memory intensive. Small isotropic patches are cropped from the original and large CT volume to mitigate this issue in practice, but it may cause discontinuities between the adjacent patches and severe class-imbalances within individual sub-volumes. This paper presents a new 3D DCNN framework, namely Z-Net, to tackle the discontinuity and class-imbalance issue by preserving a full field-of-view of the objects in the XY planes using anisotropic spatial separable convolutions. The proposed Z-Net can be seamlessly integrated into existing 3D DCNNs with isotropic convolutions such as 3D U-Net and V-Net, with improved volume segmentation Intersection over Union (IoU) - up to 12.6%. Detailed validation of Z-Net is provided for CT aortic, liver and lung segmentation, demonstrating the effectiveness and practical value of Z-Net for intra-operative 3D navigation in robot-assisted MIS.
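The anisotropic spatial separable convolution described above can be built by splitting a full 3x3x3 convolution into an in-plane 1x3x3 step followed by a through-plane 3x1x1 step, preserving the full XY field of view while keeping the Z extent small. The PyTorch sketch below shows one plausible building block of this kind; it illustrates the idea and is not the released Z-Net layer, and the channel sizes are arbitrary.

```python
import torch
import torch.nn as nn

class AnisotropicBlock(nn.Module):
    """3D conv split into an in-plane (1x3x3) and a through-plane (3x1x1) step."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.inplane = nn.Conv3d(in_ch, out_ch, kernel_size=(1, 3, 3),
                                 padding=(0, 1, 1))
        self.through = nn.Conv3d(out_ch, out_ch, kernel_size=(3, 1, 1),
                                 padding=(1, 0, 0))
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):            # x: (N, C, Z, Y, X)
        x = self.act(self.inplane(x))
        return self.act(self.through(x))

# A CT sub-volume that keeps the full in-plane field of view but only a few slices.
volume = torch.randn(1, 1, 8, 256, 256)
block = AnisotropicBlock(1, 16)
print(block(volume).shape)           # torch.Size([1, 16, 8, 256, 256])
```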
|
|
14:45-15:00, Paper MoCT13.4 | |
>LC-GAN: Image-To-Image Translation Based on Generative Adversarial Network for Endoscopic Images |
|
Lin, Shan | University of Washington |
Qin, Fangbo | Institute of Automation, Chinese Academy of Sciences |
Li, Yangming | Rochester Institute of Technology |
Bly, Randall | University of Washington |
Moe, Kris | University of Washington |
Hannaford, Blake | University of Washington |
Keywords: Computer Vision for Medical Robotics, Medical Robots and Systems
Abstract: The intelligent perception of endoscopic vision is appealing in many computer-assisted and robotic surgeries. Achieving good vision-based analysis with deep learning techniques requires large labeled datasets, but manual data labeling is expensive and time-consuming in medical problems. When applying a trained model to a different but relevant dataset, a new labeled dataset may be required for training to avoid performance degradation. In this work, we investigate a novel cross-domain strategy to reduce the need for manual data labeling by proposing an image-to-image translation model called live-cadaver GAN (LC-GAN) based on generative adversarial networks (GANs). More specifically, we consider a situation when a labeled cadaveric surgery dataset is available while the task is instrument segmentation on a live surgery dataset. We train LC-GAN to learn the mappings between the cadaveric and live datasets. To achieve instrument segmentation on live images, we can first translate the live images to fake-cadaveric images with LC-GAN, and then perform segmentation on the fake-cadaveric images with models trained on the real cadaveric dataset. With this cross-domain strategy, we fully leverage the labeled cadaveric dataset for segmentation on live images without the need to label the live dataset again. Two generators with different architectures are designed for LC-GAN to make use of the deep feature representation learned from the cadaveric image based instrument segmentation task. Moreover, we propose structural similarity loss and segmentation consistency loss to improve the semantic consistency during translation. The results demonstrate that LC-GAN achieves better image-to-image translation results, and leads to improved segmentation performance in the proposed cross-domain segmentation task.
|
|
MoCT14 |
Room T14 |
Surgical Robotics: Control |
Regular session |
Chair: Krieger, Axel | University of Maryland |
Co-Chair: Eagleson, Roy | University of Western Ontario |
|
14:00-14:15, Paper MoCT14.1 | |
>DaVinciNet: Joint Prediction of Motion and Surgical State in Robot-Assisted Surgery |
> Video Attachment
|
|
Qin, Yidan | Intuitive Surgical |
Feyzabadi, Seyedshams | UC Merced |
Allan, Max | Intuitive Surgical |
Burdick, Joel | California Institute of Technology |
Azizian, Mahdi | Intuitive Surgical |
Keywords: Surgical Robotics: Laparoscopy, Deep Learning for Visual Perception, Medical Robots and Systems
Abstract: This paper presents a technique to concurrently and jointly predict the future trajectories of surgical instruments and the future state(s) of surgical subtasks in robot-assisted surgeries (RAS) using multiple input sources. Such predictions are a necessary first step towards shared control and supervised autonomy of surgical subtasks. Minute-long surgical subtasks, such as suturing or ultrasound scanning, often have distinguishable tool kinematics and visual features, and can be described as a series of fine-grained states with transition schematics. We propose daVinciNet - an end-to-end dual-task model for robot motion and surgical state predictions. daVinciNet performs concurrent end-effector trajectory and surgical state predictions using features extracted from multiple data streams, including robot kinematics, endoscopic vision, and system events. We evaluate our proposed model on an extended Robotic Intra-Operative Ultrasound (RIOUS+) imaging dataset collected on a da Vinci Xi surgical system and the JHU-ISI Gesture and Skill Assessment Working Set (JIGSAWS). Our model achieves up to 93.85% short-term (0.5s) and 82.11% long-term (2s) state prediction accuracy, as well as 1.07mm short-term and 5.62mm long term trajectory prediction error.
|
|
14:15-14:30, Paper MoCT14.2 | |
>Hierarchical Optimization Control of Redundant Manipulator for Robot-Assisted Minimally Invasive Surgery |
> Video Attachment
|
|
Hu, Yingbai | Technische Universität München |
Su, Hang | Politecnico Di Milano |
Chen, Guang | Technical University of Munich |
Ferrigno, Giancarlo | Politecnico Di Milano |
De Momi, Elena | Politecnico Di Milano |
Knoll, Alois | Tech. Univ. Muenchen TUM |
Keywords: Medical Robots and Systems, Surgical Robotics: Laparoscopy, Motion and Path Planning
Abstract: For time-varying optimization problems, the tracking error cannot converge to zero in finite time because the optimal solution changes over time. This paper proposes a novel varying-parameter recurrent neural network (VPRNN)-based hierarchical optimization of a 7-DoF surgical manipulator for Robot-Assisted Minimally Invasive Surgery (RAMIS), which guarantees task tracking, Remote Center of Motion (RCM), and manipulability index optimization. A theoretically grounded hierarchical optimization framework is introduced to control multiple tasks based on their priority. Finally, the effectiveness of the proposed control strategy is demonstrated with both simulation and experimental results. The results show that the proposed VPRNN-based method can optimize the three tasks at the same time and achieves better performance than previous work.
|
|
14:30-14:45, Paper MoCT14.3 | |
>Towards Autonomous Control of Magnetic Suture Needles |
> Video Attachment
|
|
Fan, Matthew | University of Maryland, College Park |
Liu, Xiaolong | University of Maryland College Park |
Jain, Kamakshi | University of Maryland College Park |
Lerner, Daniel | University of Maryland, College Park |
Mair, Lamar | Weinberg Medical Physics, Inc |
Irving, Weinberg | Weinberg Medical Physics, Inc |
Diaz-Mercado, Yancy | University of Maryland |
Krieger, Axel | University of Maryland |
Keywords: Medical Robots and Systems, Motion and Path Planning, Surgical Robotics: Planning
Abstract: This paper proposes a magnetic needle steering controller to manipulate mesoscale magnetic suture needles for executing planned suturing motions. This is an initial step towards our research objective: enabling autonomous control of magnetic suture needles for suturing tasks in minimally invasive surgery. To demonstrate the feasibility of accurate motion control, we employ a cardinally-arranged four-coil electromagnetic system and control magnetic suture needles in a 2-dimensional environment, i.e., a Petri dish filled with viscous liquid. Different from only using magnetic field gradients to control small magnetic agents under high damping conditions, the dynamics of a magnetic suture needle are investigated and encoded in the controller. Based on mathematical formulations of the magnetic force and torque applied on the needle, we develop a kinematically constrained dynamic model that controls the needle to rotate and translate only along its central axis, mimicking the behavior of surgical sutures. A current controller for the electromagnetic system, combined with closed-loop control schemes, is designed to command the magnetic suture needles to achieve desired linear and angular velocities. To evaluate the control performance, we conduct experiments including needle rotation control, needle position control using discretized trajectories, and velocity control using a time-varying circular trajectory. The experimental results demonstrate that our proposed needle steering controller can perform accurate motion control of mesoscale magnetic suture needles.
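For reference, the point-dipole relations that such force/torque formulations typically build on are, for a needle with magnetic moment m in a field B (textbook expressions, not necessarily the paper's exact model):

```latex
\mathbf{F} \;=\; \nabla\!\left(\mathbf{m}\cdot\mathbf{B}\right),
\qquad
\boldsymbol{\tau} \;=\; \mathbf{m}\times\mathbf{B}
```

In a current-free workspace the force can equivalently be written as (m·∇)B, the form usually used when mapping desired forces to coil currents from field and gradient models.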
|
|
14:45-15:00, Paper MoCT14.4 | |
>Supervised Semi-Autonomous Control for Surgical Robot Based on Bayesian Optimization |
|
Chen, Junhong | Imperial College London |
Zhang, Dandan | Imperial College London |
Munawar, Adnan | Johns Hopkins University |
Zhu, Ruiqi | Imperial College London |
Lo, Benny Ping Lai | Imperial College London |
Fischer, Gregory Scott | Worcester Polytechnic Institute, WPI |
Yang, Guang-Zhong | Shanghai Jiao Tong University |
Keywords: Medical Robots and Systems, Surgical Robotics: Laparoscopy
Abstract: The recent development of Robot-Assisted Minimally Invasive Surgery (RAMIS) has brought much benefit by easing the performance of complex Minimally Invasive Surgery (MIS) tasks and leading to better clinical outcomes. Compared to direct master-slave manipulation, semi-autonomous control of the surgical robot can enhance the efficiency of the operation, particularly for repetitive tasks. However, operating in a highly dynamic in-vivo environment is complex. Supervisory control functions should be included to ensure flexibility and safety during the autonomous control phase. This paper presents a haptic rendering interface to enable supervised semi-autonomous control for a surgical robot. Bayesian optimization is used to tune user-specific parameters during the surgical training process. User studies were conducted on a customized simulator for validation. Detailed comparisons are made between operation with and without the supervised semi-autonomous control mode in terms of the number of clutching events, task completion time, master robot end-effector trajectory and average control speed of the slave robot. The effectiveness of the Bayesian optimization is also evaluated, demonstrating that the optimized parameters can significantly improve users' performance. Results indicate that the proposed control method can reduce the operator's workload and enhance operation efficiency.
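Tuning user-specific parameters with Bayesian optimization amounts to treating a user-performance metric as an expensive black-box objective and letting a Gaussian-process surrogate choose the next parameter setting to evaluate. The snippet below is a minimal illustration with scikit-optimize; the objective, parameter names, and ranges are placeholders rather than the paper's actual metric or parameters.

```python
from skopt import gp_minimize

def trial_cost(params):
    """Hypothetical cost for one training trial: lower is better.
    In practice this would be measured from the user's task performance
    (e.g., completion time plus a clutching penalty) for the given settings."""
    haptic_gain, motion_scale = params
    return (haptic_gain - 0.6) ** 2 + (motion_scale - 0.3) ** 2

result = gp_minimize(
    trial_cost,
    dimensions=[(0.1, 1.0), (0.1, 1.0)],   # search ranges for the two parameters
    n_calls=20,                            # number of trials to run
    random_state=0,
)
print("best parameters:", result.x, "best cost:", result.fun)
```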
|
|
15:00-15:15, Paper MoCT14.5 | |
>Parallel Haptic Rendering for Orthopedic Surgery Simulators |
|
Faieghi, Reza | Ryerson University |
Atashzar, S. Farokh | New York University (NYU), US |
Tutunea-Fatan, O. Remus | Western University |
Eagleson, Roy | University of Western Ontario |
Keywords: Haptics and Haptic Interfaces, Virtual Reality and Interfaces, Computational Geometry
Abstract: This study introduces a haptic rendering algorithm for simulating surgical bone machining operations. The proposed algorithm is a new variant of the voxmap pointshell method, where the bone and surgical tool geometries are represented by voxels and points, respectively. The algorithm encompasses computationally efficient methods in a data parallel framework to rapidly query intersecting voxel-point pairs, remove intersected bone voxels to replicate bone removal and compute elemental cutting forces. A new force model is adopted from the composite machining literature to calculate the elemental forces with higher accuracy. The integration of the algorithm with graphics rendering for visuo-haptic simulations is also outlined. The algorithm is benchmarked against state-of-the-art methods and is validated against prior experimental data collected during bone drilling and glenoid reaming trials. The results indicate improvements in computational efficiency and the force/torque prediction accuracy compared to the existing methods, which can be ultimately translated into higher realism in simulating orthopedic procedures.
|
|
MoCT15 |
Room T15 |
Surgical Robotics: Image-Guided I |
Regular session |
Chair: Cavusoglu, M. Cenk | Case Western Reserve University |
Co-Chair: Nageotte, Florent | University of Strasbourg |
|
14:00-14:15, Paper MoCT15.1 | |
>Differential Image Based Robot to MRI Scanner Registration with Active Fiducial Markers for an MRI-Guided Robotic Catheter System |
|
Tuna, Eser Erdem | Case Western Reserve University |
Poirot, Nate Lombard | Case Western Reserve University |
Barrera Bayona, Juana | Case Western Reserve University |
Franson, Dominique | Case Western Reserve University |
Huang, Sherry | Case Western Reserve University |
Narvaez, Julian | Case Western Reserve University |
Seiberlich, Nicole | Case Western Reserve University |
Griswold, Mark | Case Western Reserve University |
Cavusoglu, M. Cenk | Case Western Reserve University |
Keywords: Calibration and Identification, Surgical Robotics: Steerable Catheters/Needles, Medical Robots and Systems
Abstract: In magnetic resonance imaging (MRI) guided robotic catheter ablation procedures, reliable tracking of the catheter within the MRI scanner is needed to safely navigate the catheter. This requires accurate registration of the catheter to the scanner. This paper presents a differential, multi-slice image-based registration approach utilizing active fiducial coils. The proposed method would be used to preoperatively register the MRI image space with the physical catheter space. In the proposed scheme, the registration is performed with the help of a registration frame, which has a set of embedded electromagnetic coils designed to actively create MRI image artifacts. These coils are detected in the MRI scanner's coordinate system by background subtraction. The detected coil locations in each slice are weighted by the artifact size and then registered to known ground-truth coil locations in the catheter's coordinate system via least-squares fitting. The proposed approach is validated using a set of target coils placed within the workspace, employing the multiplanar capabilities of the MRI scanner. The average registration and validation errors are computed as 1.97 mm and 2.49 mm, respectively. The multi-slice approach is also compared to the single-slice method and shown to improve the registration and validation errors by 0.45 mm and 0.66 mm, respectively.
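As an illustration of the weighted least-squares fitting step, the sketch below solves an artifact-size-weighted rigid registration with a standard Kabsch/SVD construction; variable names and the error summary are illustrative, not the authors' implementation.

```python
import numpy as np

def weighted_rigid_registration(P, Q, w):
    """Weighted least-squares rigid registration (Kabsch/Umeyama style).

    P : (N, 3) detected coil centroids in scanner (MRI) coordinates
    Q : (N, 3) known ground-truth coil locations in catheter coordinates
    w : (N,)   weights, e.g. proportional to artifact size

    Returns (R, t) such that R @ P[i] + t approximates Q[i] in the
    weighted least-squares sense.
    """
    w = np.asarray(w, dtype=float)
    w = w / w.sum()
    p_bar = (w[:, None] * P).sum(axis=0)
    q_bar = (w[:, None] * Q).sum(axis=0)
    X = (P - p_bar) * w[:, None]
    Y = Q - q_bar
    U, _, Vt = np.linalg.svd(X.T @ Y)                  # cross-covariance SVD
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflection
    R = Vt.T @ D @ U.T
    t = q_bar - R @ p_bar
    return R, t

# Registration error could then be summarised as the RMS of the residuals:
# residuals = Q - (P @ R.T + t); rms = np.sqrt(np.mean(np.sum(residuals**2, axis=1)))
```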
|
|
14:15-14:30, Paper MoCT15.2 | |
>Robot-Assisted Ultrasound-Guided Biopsy on MR-Detected Breast Lesions |
|
Welleweerd, Marcel Klaas | University of Twente |
Pantelis, Dimitrios | University of Twente |
De Groot, Antonius Gerardus | University of Twente |
Siepel, Françoise J | University of Twente |
Stramigioli, Stefano | University of Twente |
Keywords: Medical Robots and Systems, Computer Vision for Medical Robotics
Abstract: One out of eight women will get breast cancer during their lifetime. A biopsy, a procedure in which a tissue sample is acquired from the lesion, is required to confirm the diagnosis. A biopsy is preferably executed under ultrasound (US) guidance because it is simple, fast, and cheap, gives real-time image feedback and causes little patient discomfort. However, Magnetic Resonance (MR)-detected lesions may be barely or not visible on US and difficult to find due to deformations of the breast. This paper presents a robotic setup and workflow that assists the radiologist in targeting MR-detected breast lesions under US guidance, taking into account deformations and giving the radiologist robotic accuracy. The setup consists of a seven degree-of-freedom robotic serial manipulator equipped with an end-effector carrying a US transducer and a three degree-of-freedom actuated needle guide. During probe positioning, the US probe is positioned on the patient's skin while the system tracks skin contact and tissue deformation. During the intervention phase, the radiologist inserts the needle through the actuated guide. During insertion, the tissue deformation is tracked and the needle path is adjusted accordingly. The workflow is demonstrated on a breast phantom. It is shown that lesions with a radius down to 2.9 mm can be targeted. While MRI is becoming more important in breast cancer detection, the presented robot-assisted approach helps the radiologist to effectively and accurately confirm the diagnosis utilizing the preferred US-guided method.
|
|
14:30-14:45, Paper MoCT15.3 | |
>Towards in Situ Backlash Estimation of Continuum Robots Using an Endoscopic Camera |
> Video Attachment
|
|
Poignonec, Thibault | University of Strasbourg, Icube Laboratory |
Zanne, Philippe | University of Strasbourg |
Rosa, Benoît | CNRS, France |
Nageotte, Florent | University of Strasbourg |
Keywords: Flexible Robots, Computer Vision for Medical Robotics, Medical Robots and Systems
Abstract: Accurate control of continuum robots requires handling non-linear behaviors between actuators and distal effectors. In this paper, we develop a method for estimating the non-linearities of tendon-driven degrees of freedom of flexible endoscopic systems by using a distal endoscopic camera and encoders at the proximal side. The proposed approach separates the non-linearities into two parts, namely a pure non-uniform backlash and a smooth non-linear function. The backlash is estimated without relying on any model, while the non-linear function is obtained by a pose estimation process. Experiments performed on a robotic flexible endoscopy platform (STRAS) show the validity of the approach for estimating in situ the quasi-static behavior of the robot and for compensating the non-linearities of the motion transmission.
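For intuition, the sketch below simulates the classical play (backlash) operator that such a model-free backlash estimate targets; the dead-band width and input signal are illustrative.

```python
import numpy as np

def backlash(u, width, y0=0.0):
    """Classical play (backlash) operator applied to a motor-side trajectory.

    u     : (T,) proximal (motor/encoder) positions
    width : total dead-band width of the backlash
    y0    : initial distal output

    Returns the distal-side positions: the output only moves once the input
    has crossed the dead band, which is the model-free part estimated before
    fitting the remaining smooth non-linearity.
    """
    y = np.empty_like(u, dtype=float)
    prev = y0
    half = width / 2.0
    for k, uk in enumerate(u):
        prev = min(max(prev, uk - half), uk + half)  # clamp output into the play band
        y[k] = prev
    return y

# Example: a sinusoidal proximal motion through a 0.4-unit dead band.
t = np.linspace(0, 4 * np.pi, 400)
distal = backlash(np.sin(t), width=0.4)
```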
|
|
14:45-15:00, Paper MoCT15.4 | |
>Towards Better Surgical Instrument Segmentation in Endoscopic Vision: Multi-Angle Feature Aggregation and Contour Supervision |
> Video Attachment
|
|
Qin, Fangbo | Institute of Automation, Chinese Academy of Sciences |
Lin, Shan | University of Washington |
Li, Yangming | Rochester Institute of Technology |
Bly, Randall | University of Washington |
Moe, Kris | University of Washington |
Hannaford, Blake | University of Washington |
Keywords: Computer Vision for Medical Robotics, Medical Robots and Systems
Abstract: Accurate and real-time surgical instrument segmentation is important in the endoscopic vision of robot-assisted surgery, and significant challenges are posed by frequent instrument-tissue contacts and continuous changes of observation perspective. Many deep neural network (DNN) models have been designed for these challenging tasks in recent years. We propose a general, embeddable approach that improves current DNN segmentation models without increasing the number of model parameters. Firstly, observing the limited rotation-invariance of DNNs, we propose the Multi-Angle Feature Aggregation (MAFA) method, which leverages active image rotation to gain richer visual cues and make the prediction more robust to instrument orientation changes. Secondly, in the end-to-end training stage, auxiliary contour supervision is utilized to guide the model to learn boundary awareness, so that the contour shape of the segmentation mask is more precise. The effectiveness of the proposed methods is validated with ablation experiments conducted on novel Sinus-Surgery datasets.
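A simplified, test-time rendition of the rotation-aggregation idea is sketched below using exact 90-degree rotations in PyTorch; the original MAFA operates on features during training, so this is only an analogous illustration.

```python
import torch

@torch.no_grad()
def rotation_aggregated_mask(model, image):
    """Aggregate segmentation logits over 90-degree rotations of the input.

    A simplified, test-time version of multi-angle aggregation: rotate the
    image, run the segmentation network, rotate each prediction back to the
    original frame, and average. `model` maps (B, C, H, W) -> (B, K, H, W).
    """
    logits = []
    for k in range(4):                                       # 0, 90, 180, 270 degrees
        rotated = torch.rot90(image, k, dims=(-2, -1))
        pred = model(rotated)
        logits.append(torch.rot90(pred, -k, dims=(-2, -1)))  # undo the rotation
    return torch.stack(logits).mean(dim=0)

# Example usage (model and frame are assumed to exist):
# mask = rotation_aggregated_mask(net, frame).argmax(dim=1)  # per-pixel labels
```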
|
|
15:00-15:15, Paper MoCT15.5 | |
>Deep Learning Based Real-Time OCT Image Segmentation and Correction for Robotic Needle Insertion Systems |
|
Park, Ikjong | POSTECH |
Kim, Hong-Kyun | Kyungpook National University School of Medicine |
Chung, Wan Kyun | POSTECH |
Kim, Keehoon | POSTECH, Pohang University of Science and Technology |
Keywords: Automation in Life Sciences: Biotechnology, Pharmaceutical and Health Care
Abstract: This paper proposes a deep-learning-based real-time optical coherence tomography (OCT) image segmentation and correction algorithm for vision-based robotic needle insertion systems that can be used in DALK (deep anterior lamellar keratoplasty) surgery. The proposed algorithm provides important information, such as the position of the needle tip, the lower boundary of the tissue, and the marginal insertion depth, while addressing traditional issues of OCT imaging such as refractive error, optical noise from surgical tools, and the slow speed of volumetric scanning. The performance of the proposed algorithm with a robotic system was verified through ex vivo experiments using 10 porcine corneas. The segmentation errors were 7.4 µm for the upper boundary, 10.5 µm for the lower boundary, and 3.6 µm for the needle tip. The difference in needle slope between the outside and inside of the cornea was reduced from 5.87 degrees to 0.78 degrees. The frame rate of the OCT images was 9.7 Hz, and the time delay of the image processing algorithm was 542.6 ms for 10 images of 512x512 pixels. The results of the proposed algorithm were compared with those of previous studies.
|
|
MoCT16 |
Room T16 |
Surgical Robotics: Image-Guided II |
Regular session |
Chair: De Momi, Elena | Politecnico Di Milano |
Co-Chair: Valdastri, Pietro | University of Leeds |
|
14:00-14:15, Paper MoCT16.1 | |
>SCAN: System for Camera Autonomous Navigation in Robotic-Assisted Surgery |
> Video Attachment
|
|
Da Col, Tommaso | Politecnico Di Milano |
Mariani, Andrea | Scuola Superiore Sant'Anna |
Deguet, Anton | Johns Hopkins University |
Menciassi, Arianna | Scuola Superiore Sant'Anna - SSSA |
Kazanzides, Peter | Johns Hopkins University |
De Momi, Elena | Politecnico Di Milano |
Keywords: Medical Robots and Systems, Human-Centered Automation, Telerobotics and Teleoperation
Abstract: Robot-assisted systems for Minimally Invasive Surgery enhance the surgeon's capabilities; however, direct control over both the surgical tools and the endoscope increases the workload and leads to longer operation times. This work investigates the introduction of SCAN (System for Camera Autonomous Navigation) to overcome this limitation. An experimental study involving 12 participants was carried out with the da Vinci Research Kit. Each user tested two novel camera control modalities, autonomous and semi-autonomous, as well as the current manual control of the camera, while carrying out a dry-lab task. Among the camera control modalities, the autonomous navigation achieved better objective performance and the highest user confidence. Moreover, the autonomous control (along with the semi-autonomous one) was able to optimize some metrics related to the robotic surgery workflow.
|
|
14:15-14:30, Paper MoCT16.2 | |
>Autonomous Tissue Retraction in Robotic Assisted Minimally Invasive Surgery - a Feasibility Study |
|
Attanasio, Aleks | University of Leeds |
Scaglioni, Bruno | University of Leeds |
Leonetti, Matteo | University of Leeds |
Frangi, Alejandro | University of Leeds |
Cross, William | Department of Urology, St James University Hospital |
Biyani, Chandra Shekhar | Department of Urology, St James University Hospital |
Valdastri, Pietro | University of Leeds |
Keywords: Medical Robots and Systems, Surgical Robotics: Laparoscopy, Computer Vision for Medical Robotics
Abstract: In this work, we describe a novel framework for planning and execution of semi-autonomous tissue retraction in robotic minimally invasive surgery. The approach is aimed at autonomously removing flaps of organs or connective tissue from the surgical area, thus exposing the underlying anatomy. Initially, a deep neural network is used to analyse the endoscopic image and detect candidate flaps of tissue that obstruct the scene. Subsequently, a procedural algorithm, aimed at planning and executing the retraction gesture, is developed from extended discussions with clinicians. Experimental validation, carried out on a DaVinci Research Kit, shows an enhancement of the visible background ranging from 151.9% to 235.2%. Another significant contribution of this paper is a dataset, containing 1080 labelled surgical stereo images and the associated depth maps, representing tissue flaps in different scenarios. The work described in this paper is a fundamental step towards the autonomous execution of tissue retraction and the first example based on the simultaneous use of deep learning and procedural algorithms. The same framework could be applied to a wide range of autonomous tasks, such as debridement and placement of laparoscopic clips.
|
|
14:30-14:45, Paper MoCT16.3 | |
>Fully Actuated Body-Mounted Robotic System for MRI-Guided Lower Back Pain Injections: Initial Phantom and Cadaver Studies |
> Video Attachment
|
|
Li, Gang | Johns Hopkins University |
Patel, Niravkumar | Johns Hopkins University |
Wang, Yanzhou | Johns Hopkins University |
Dumoulin, Charles | Cincinnati Children's Hospital Medical Center |
Loew, Wolfgang | Cincinnati Children's Hospital Medical Center |
Loparo, Olivia | University of Cincinnati |
Schneider, Katherine | University of Cincinnati |
Sharma, Karun | Sheikh Zayed Institute for Pediatric Surgical Innovation, Childr |
Cleary, Kevin | Children's National Medical Center |
Fritz, Jan | Johns Hopkins |
Iordachita, Ioan Iulian | Johns Hopkins University |
Keywords: Medical Robots and Systems
Abstract: This paper reports the improved design, system integration, and initial experimental evaluation of a fully actuated body-mounted robotic system for real-time MRI-guided lower back pain injections. The 6-DOF robot is composed of a 4-DOF needle alignment module and a 2-DOF remotely actuated needle driver module, which together provide a fully actuated manipulator that can operate inside the scanner bore during imaging. The system minimizes the need to move the patient in and out of the scanner during a procedure, and thus may shorten the procedure time and streamline the clinical workflow. The robot is devised with a compact and lightweight structure that can be attached directly to the patient's lower back via straps. This approach minimizes the effect of patient motion by allowing the robot to move with the patient. The robot is integrated with an image-based surgical planning module. A dedicated clinical workflow is proposed for robot-assisted lower back pain injections under real-time MRI guidance. Targeting accuracy of the system was evaluated with a real-time MRI-guided phantom study, demonstrating mean absolute errors (MAE) of 1.50±0.68 mm for the tip position and 1.56±0.93° for the needle angle. An initial cadaver study was performed to validate the feasibility of the clinical workflow, indicating a maximum position error of less than 1.90 mm and a maximum angle error of less than 3.14°.
|
|
14:45-15:00, Paper MoCT16.4 | |
>Force-Ultrasound Fusion: Bringing Spine Robotic-US to the Next “Level” |
> Video Attachment
|
|
Tirindelli, Maria | Computer Aided Medical Procedures, Technical University of Munich |
Victorova, Maria | PolyU |
Esteban, Javier | Technische Universität München |
Kim, Seong Tae | TU Munich |
Navarro-Alarcon, David | The Hong Kong Polytechnic University |
Zheng, Yong Ping | The Hong Kong Polytechnic University |
Navab, Nassir | TU Munich |
Keywords: Medical Robots and Systems, Computer Vision for Medical Robotics
Abstract: Spine injections are commonly performed in several clinical procedures. The localization of the target vertebral level (i.e. the position of a vertebra in the spine) is typically done by back palpation or under X-ray guidance, yielding either higher chances of procedure failure or exposure to ionizing radiation. Preliminary studies in the literature suggest that ultrasound imaging may be a precise and safe alternative to X-ray for spine level detection. However, ultrasound data are noisy and complicated to interpret. In this study, a robotic-ultrasound approach for automatic vertebral level detection is introduced. The method relies on the fusion of ultrasound and force data, thus providing both "tactile" and visual feedback during the procedure, which results in higher performance in the presence of data corruption. A robotic arm automatically scans the volunteer's back along the spine, using force-ultrasound data to locate vertebral levels. The occurrences of vertebral levels are visible on the force trace as peaks, which are enhanced by properly controlling the force applied by the robot on the patient's back. Ultrasound data are processed with a deep learning method to extract a 1D signal modelling the probability of having a vertebra at each location along the spine. Processed force and ultrasound data are fused using both a non-deep-learning method and a Temporal Convolutional Network to compute the locations of the vertebral levels. The benefits of fusing force and image signals for the identification of vertebra locations are showcased through extensive evaluation.
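The sketch below illustrates a simple non-deep-learning fusion of the two 1D signals: the force trace and the per-location vertebra probability are multiplied and peaks are extracted with a minimum anatomical spacing. Thresholds, spacing, and signal names are assumptions, not the paper's parameters.

```python
import numpy as np
from scipy.signal import find_peaks

def fuse_force_and_ultrasound(force, p_vertebra, spacing_mm=5.0, min_sep_mm=25.0):
    """Non-deep-learning fusion of the two 1D signals along the spine.

    force      : (T,) filtered contact-force trace sampled along the sweep
    p_vertebra : (T,) per-location vertebra probability from the US network
    spacing_mm : distance between consecutive samples along the spine
    min_sep_mm : minimum anatomical distance between vertebral levels

    Returns the indices of the estimated vertebral levels. The score is the
    product of a normalized force signal and the image-based probability,
    so a peak must be supported by both modalities.
    """
    f = (force - force.min()) / (np.ptp(force) + 1e-9)
    score = f * p_vertebra
    peaks, _ = find_peaks(score,
                          distance=int(min_sep_mm / spacing_mm),
                          prominence=0.1)
    return peaks

# levels = fuse_force_and_ultrasound(force_trace, prob_trace)
```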
|
|
15:00-15:15, Paper MoCT16.5 | |
>Ultrasound-Guided Wireless Tubular Robotic Anchoring System |
> Video Attachment
|
|
Wang, Tianlu | ETH Zurich |
Hu, Wenqi | Max Planck Institute for Intelligent Systems |
Ren, Ziyu | Max Planck Institute for Intelligent Systems |
Sitti, Metin | Max-Planck Institute for Intelligent Systems |
Keywords: Medical Robots and Systems, Mechanism Design, Micro/Nano Robots
Abstract: Untethered miniature robots have significant potential and promise in diverse minimally invasive medical applications inside the human body. For drug delivery and physical contraception applications inside tubular structures, it is desirable to have a miniature anchoring robot with a self-locking mechanism at a target tubular region. Moreover, the behavior of this robot should be tracked and feedback-controlled by a medical imaging-based system. Since such a system is not yet available, we report a reversible untethered anchoring robot design based on remote magnetic actuation. The current prototype is 7.5 mm in diameter and 17.8 mm in length, and is made of soft polyurethane elastomer, photopolymer, and two small permanent magnets. Its relaxation and anchoring states can be maintained in a stable manner without supplying any control or actuation input. To control the robot's locomotion, we implement a two-dimensional (2D) ultrasound imaging-based tracking and control system, which automatically sweeps locally and updates the robot's position. With this system, we demonstrate that the robot can be controlled to follow a pre-defined 1D path with a maximal position error of 0.53 ± 0.05 mm inside a tubular phantom, where reversible anchoring can be achieved under the monitoring of ultrasound imaging.
|
|
15:15-15:30, Paper MoCT16.6 | |
>Tracking Strategy Based on Magnetic Sensors for Microrobot Navigation in the Cochlea |
|
Kroubi, Tarik | University Mouloud Mammeri of Tizi-Ouzou, Algeria & HEI Campus Centre |
Belharet, Karim | Hautes Etudes d'Ingénieur - HEI Campus Centre |
Bennamane, Kamal | University Mouloud Mammeri, Tizi-Ouzou |
Keywords: Medical Robots and Systems, Micro/Nano Robots, Soft Sensors and Actuators
Abstract: One approach to controlled drug delivery in the cochlea is to use a magnetic microrobot powered by externally applied magnetic fields. However, a localization system must be integrated to ensure precise navigation of the microrobot in the cochlear canal. To avoid integrating a clinical imaging modality for the navigation of microrobots in the cochlea, we propose in this work the use of magnetic sensors to localize the magnetic microrobot. Our method is a real-time localization system based on only two sensors that maintains precise localization of the spherical magnetic microrobot. The first sensor (localization sensor) measures both the magnetic field of the environment and the magnetic field generated by the microrobot. The second sensor (surrounding sensor) is placed away from the localization sensor and measures only the ambient magnetic field, which is subtracted from the localization sensor's signal to recover the magnetic field of the microrobot. We propose a new magnetic sensor calibration method and a robust localization algorithm for precise localization of the microrobot. The experiments demonstrate the effectiveness of the designed system and show the precision of the proposed localization strategy.
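The sketch below illustrates the background-subtraction idea with a toy on-axis dipole inversion; the dipole moment, field values, and the on-axis assumption are illustrative and far simpler than the paper's calibration and localization algorithm.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability [T*m/A]

def microrobot_field(B_loc, B_env):
    """Isolate the microrobot's field by subtracting the ambient field
    measured by the distant 'surrounding' sensor."""
    return np.asarray(B_loc) - np.asarray(B_env)

def axial_distance(B_robot_norm, m):
    """Distance of a spherical magnet along the sensor axis, assuming the
    on-axis dipole relation |B| = mu0 * m / (2 * pi * r^3)."""
    return (MU0 * m / (2.0 * np.pi * B_robot_norm)) ** (1.0 / 3.0)

# Example (illustrative values): a 1e-3 A*m^2 microrobot seen through a
# roughly 50 uT ambient field.
B_loc = np.array([12e-6, 3e-6, 52e-6])
B_env = np.array([10e-6, 2e-6, 50e-6])
B_rob = microrobot_field(B_loc, B_env)
print("distance [m]:", axial_distance(np.linalg.norm(B_rob), m=1e-3))
```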
|
|
MoCT17 |
Room T17 |
Surgical Robotics: Laparoscopy |
Regular session |
Chair: Kim, Keri | Korea Institute of Science and Technology |
Co-Chair: Drake, James | Hospital for Sick Children, University of Toronto |
|
14:00-14:15, Paper MoCT17.1 | |
>Developing Thermal Endoscope for Endoscopic Photothermal Therapy for Peritoneal Dissemination |
|
Ohara, Mutsuki | Waseda University |
Sanpei, Sohta | Waseda University |
Seo, Chanjin | Waseda University |
Ohya, Jun | Waseda University |
Masamune, Ken | Tokyo Women's Medical University |
Nagahashi, Hiroshi | Tokyo Institute of Technology |
Morimoto, Yuji | National Defense Medical College |
Harada, Manabu | National Defense Medical College |
Keywords: Surgical Robotics: Laparoscopy, Medical Robots and Systems, Computer Vision for Medical Robotics
Abstract: As a novel therapy for peritoneal dissemination, an endoscopic photothermal therapy that is minimally invasive and highly therapeutically effective is desired. However, since endoscopic tumor temperature control has not yet been realized, conventional therapies can damage healthy tissue by overheating. In this paper, we develop a thermal endoscope system that controls the tumor temperature so that the heated tumor becomes necrotic. Our thermal endoscope integrates a thermal image sensor, a visible-light endoscope, and a laser fiber. Conventional thermal endoscopes have too large a diameter because they carry a large, high-resolution thermal image sensor. This paper therefore uses a small, low-resolution thermal image sensor, because the diameter of the thermal endoscope must be smaller than 15 mm to be inserted through the trocar. However, this small sensor produces very noisy images. Thus, we develop a tumor temperature control system that combines feedback control with Gaussian-function-based tumor temperature estimation, so that the small, noisy thermal image sensor can still be used. Experiments applying the proposed endoscopic photothermal therapy to a rat hepatophyma carcinoma model show that the tumor temperature at which the heated tumor becomes necrotic can be kept stable. Our endoscopic photothermal therapy thus achieves a certain degree of therapeutic effect.
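As a rough illustration of Gaussian-based temperature estimation with feedback, the sketch below fits a 2D Gaussian to a noisy thermal frame and updates a normalized laser power with a PI law; the model form, gains, and limits are assumptions, not the paper's controller.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(xy, A, x0, y0, s, T_amb):
    """Isotropic 2D Gaussian temperature profile on top of an ambient level."""
    x, y = xy
    return T_amb + A * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * s ** 2))

def estimate_peak_temperature(thermal_img):
    """Fit a 2D Gaussian to a noisy, low-resolution thermal frame and return
    the estimated peak (tumor) temperature, which is far less noisy than the
    hottest single pixel."""
    h, w = thermal_img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    p0 = [10.0, w / 2, h / 2, 3.0, float(np.median(thermal_img))]  # rough initial guess
    popt, _ = curve_fit(gauss2d, (xs.ravel(), ys.ravel()),
                        thermal_img.ravel(), p0=p0, maxfev=5000)
    A, _, _, _, T_amb = popt
    return T_amb + A

def laser_power_update(T_est, T_target, integral, dt, kp=0.05, ki=0.01):
    """One step of a PI controller on laser power to hold the tumor at T_target."""
    err = T_target - T_est
    integral += err * dt
    power = np.clip(kp * err + ki * integral, 0.0, 1.0)   # normalized power command
    return power, integral
```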
|
|
14:15-14:30, Paper MoCT17.2 | |
>Development of Deployable Bending Wrist for Minimally Invasive Laparoscopic Endoscope |
> Video Attachment
|
|
Kim, Jongwoo | The Hospital for Sick Children, University of Toronto |
Looi, Thomas | Hospital for Sick Children |
Newman, Allen | University of Toronto |
Drake, James | Hospital for Sick Children, University of Toronto |
Keywords: Surgical Robotics: Laparoscopy, Medical Robots and Systems, Soft Robot Materials and Design
Abstract: During the last two decades, minimally invasive surgery (MIS) has become popular because it offers advantages such as less pain, faster recovery, improved cosmesis, and reduced complications. Single-port laparoscopic surgery is a form of MIS where surgeons operate exclusively through a single entry. However, the view from the rigid endoscope is often obscured by the instruments that pass through the same single entry. To remove the need for a secondary viewing port and the blind spots during operation, we propose a deployable wrist mechanism for minimally invasive laparoscopic surgery. It utilizes an S-shaped nitinol tube with a curvature of 15 mm and a diameter of 1.83 mm. When retracted, the S-shaped wrist is straightened into the main shaft of the laparoscopic tool. As the wrist translates outward, the S-shaped nitinol wrist emerges from an opening on the tool shaft and bends to point at the tooltip. The wrist has two degrees of freedom: translational displacement for controlling the bending and rotational movement of the wrist. The bending mechanism was analyzed by finite element method simulation and validated by experiments. In future work, we will widen the scope of its applications, including laser ablation tools, triangulation, and other microsurgical procedures.
|
|
14:30-14:45, Paper MoCT17.3 | |
>Accurate Estimation of the Position and Shape of the Rolling Joint in Hyper-Redundant Manipulators |
|
Kim, Jeongryul | Korea Institute of Science and Technology |
Moon, Yonghwan | Korea Institute of Science and Technology |
Kwon, Seong-il | Korea Institute of Science and Technology |
Kim, Keri | Korea Institute of Science and Technology |
Keywords: Surgical Robotics: Laparoscopy, Redundant Robots, Underactuated Robots
Abstract: Hyper-redundant manipulators driven by cables are used in minimally invasive surgery because of their flexibility and small diameters. In particular, manipulators composed of many rigid links and joints have the advantages of high stiffness and payload. However, the position and shape of such manipulators are difficult to estimate with calculations based only on a kinematic model that assumes all joint angles are equal. In this paper, we present a method for estimating the position and shape of the rolling joint in hyper-redundant manipulators by minimizing the joint moments. This allows the equilibrium position of all segments of the rolling joint to be determined and, therefore, its shape to be estimated. We experimentally determine the position and shape of a prototype of the rolling joint and compare them to a simulation of our method. The maximum error between the simulation and the experimental results is 4.13 mm, which is a 77.22% improvement over the kinematic model that assumes equal joint angles. This verifies that our method accurately estimates the position and shape of the rolling joint.
|
|
14:45-15:00, Paper MoCT17.4 | |
>Joints-Space Metrics for Automatic Robotic Surgical Gestures Classification |
|
Bombieri, Marco | University of Verona |
Dall'Alba, Diego | University of Verona |
Ramesh, Sanat | University of Verona |
Menegozzo, Giovanni | University of Verona |
Schneider, Caitlin | University of British Columbia |
Fiorini, Paolo | University of Verona |
Keywords: Surgical Robotics: Steerable Catheters/Needles, Kinematics, Medical Robots and Systems
Abstract: Automated surgical gesture classification and recognition are important precursors to achieving the goal of objective evaluation of surgical skills. Much work has been done to discover and validate instrument-motion metrics that can be used as features for automatic classification of surgical gestures. In this work, we present a series of angular metrics that can be used together with Cartesian-based metrics to better describe different surgical gestures. These metrics can be calculated in both Cartesian and joint space and are used here as features for automatic classification of surgical gestures. To evaluate the proposed metrics, we introduce a novel surgical dataset that contains both Cartesian- and joint-space data acquired with the da Vinci Research Kit (dVRK) while a single expert operator performs 40 consecutive suturing exercises. The obtained results confirm that applying metrics in the joint space improves the accuracy of automatic gesture classification.
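The sketch below computes a few simple joint-space (angular) metrics of the kind that can complement Cartesian features; the specific metrics and the downstream classifier are illustrative choices, not the paper's exact feature set.

```python
import numpy as np

def angular_metrics(q, dt):
    """Simple joint-space metrics for one gesture segment.

    q  : (T, J) joint angles over time [rad]
    dt : sampling period [s]

    Returns angular path length, mean angular speed, and RMS angular jerk per
    joint, concatenated into one feature vector for a classifier.
    """
    dq = np.diff(q, axis=0) / dt                  # joint velocities
    ddq = np.diff(dq, axis=0) / dt                # accelerations
    dddq = np.diff(ddq, axis=0) / dt              # jerks
    path_length = np.abs(np.diff(q, axis=0)).sum(axis=0)
    mean_speed = np.abs(dq).mean(axis=0)
    rms_jerk = np.sqrt((dddq ** 2).mean(axis=0))
    return np.concatenate([path_length, mean_speed, rms_jerk])

# features = np.stack([angular_metrics(seg, dt=0.01) for seg in gesture_segments])
# then e.g. sklearn.ensemble.RandomForestClassifier().fit(features, labels)
```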
|
|
15:00-15:15, Paper MoCT17.5 | |
>Augmented Reality and Robotic-Assistance for Percutaneous Nephrolithotomy |
> Video Attachment
|
|
Ferraguti, Federica | Università Degli Studi Di Modena E Reggio Emilia |
Minelli, Marco | University of Modena and Reggio Emilia |
Farsoni, Saverio | University of Ferrara |
Bazzani, Stefano | University of Modena and Reggio Emilia |
Bonfe, Marcello | University of Ferrara |
Vandanjon, Alexandre | Icam - Institut Catholique d'Arts Et Métiers, France |
Puliatti, Stefano | University of Modena and Reggio Emilia |
Bianchi, Giampaolo | University of Modena and Reggio Emilia |
Secchi, Cristian | Univ. of Modena & Reggio Emilia |
Keywords: Surgical Robotics: Laparoscopy, Medical Robots and Systems, Human Performance Augmentation
Abstract: Percutaneous nephrolithotomy (PCNL) is considered the gold standard for the treatment of patients with renal stones larger than 20 mm in diameter. The success and outcomes of the treatment are well known to depend strongly on the precision and accuracy of the puncture step, since it must reach the stone along a precise and direct path. Thus, gaining renal access is the most crucial and challenging step of PCNL and the one with the steepest learning curve. In this paper, we propose an innovative solution, based on an AR application combined with a robotic system, that can assist an expert surgeon in improving the performance of the surgical operation and a novice surgeon in greatly shortening the learning curve. The proposed system is validated on a setup including a KUKA LWR 4+ robot and the Microsoft HoloLens as the augmented reality headset, through experiments performed by a sample of 11 users.
|
|
MoCT18 |
Room T18 |
Surgical Robotics: Manipulation |
Regular session |
Chair: Becker, Aaron | University of Houston |
Co-Chair: Arai, Fumihito | The University of Tokyo |
|
14:00-14:15, Paper MoCT18.1 | |
>A Learning-Driven Framework with Spatial Optimization for Surgical Suture Thread Reconstruction and Autonomous Grasping under Multiple Topologies and Environmental Noises |
> Video Attachment
|
|
Lu, Bo | The Chinese University of Hong Kong |
Chen, Wei | The Chinese University of Hong Kong |
Jin, Yueming | The Chinese University of Hong Kong |
Zhang, Dandan | Imperial College London |
Dou, Qi | The Chinese University of Hong Kong |
Chu, Henry | The Hong Kong Polytechnic University |
Heng, Pheng Ann | The Chinese University of Hong Kong |
Liu, Yunhui | Chinese University of Hong Kong |
Keywords: Medical Robots and Systems, Computer Vision for Medical Robotics
Abstract: Surgical knot tying is one of the most fundamental and important procedures in surgery, and a high-quality knot can significantly benefit the postoperative recovery of the patient. However, a long operation can easily cause fatigue to surgeons, especially during the tedious wound closure task. In this paper, we present a vision-based method to automate suture thread grasping, which is a sub-task in surgical knot tying and an intermediate step between the stitching and looping manipulations. To achieve this goal, the acquisition of a suture's three-dimensional (3D) information is critical. We first adopt a transfer-learning strategy to fine-tune a pre-trained model using large legacy surgical datasets and images obtained by the on-site equipment, so that robust suture segmentation can be achieved regardless of inherent environmental noise. We further leverage a searching strategy with termination policies to infer a suture's sequence based on the analysis of multiple topologies. Exact pixel-level sequences along a suture can be obtained and further applied to 3D shape reconstruction using our optimized shortest-path approach. The grasping point satisfying the suturing criterion can ultimately be acquired. Experiments on suture 2D segmentation and ordering-sequence inference under environmental noise were extensively evaluated. Results for the automated grasping operation were demonstrated by simulations in V-REP and by robot experiments using a Universal Robot (UR) together with the da Vinci Research Kit (dVRK) under our learning-driven framework.
|
|
14:15-14:30, Paper MoCT18.2 | |
>Resonating Magnetic Manipulation for 3D Path-Following and Blood Clot Removal Using a Rotating Swimmer |
> Video Attachment
|
|
Leclerc, Julien | University of Houston |
Lu, Yitong | University of Houston |
Becker, Aaron | University of Houston |
Ghosn, Mohamad | Houston Methodist DeBakey Heart and Vascular Center |
Shah, Dipan J. | Houston Methodist DeBakey Heart & Vascular Center |
Keywords: Medical Robots and Systems, Motion Control, Visual-Based Navigation
Abstract: There are many design trade-offs when building a magnetic manipulator to control millimeter-scale rotating magnetic swimmers for surgical applications. For example, increasing the magnitude of the flux density generated by the magnetic manipulator increases the torque applied to the swimmer, which could enable a wider variety of surgical tasks in the future. However, producing stronger magnetic fields has drawbacks, such as increased active power usage. To produce a quickly rotating field, the electromagnets (EMs) must be quickly charged and discharged, which results in a low power factor (a high reactive power compared with the active power). Adding capacitors in series with the electromagnets improves the power factor because the capacitors can provide the reactive power. With this method, larger flux densities can be produced without increasing the apparent power delivered by the power supplies. This paper highlights the benefits of using capacitors for the magnetic manipulation of rotating swimmers. Rotating swimmers can be used to remove blood clots, and the clot removal rate of resonating magnetic manipulators is measured using a realistic blood clot model. This paper also presents a control method for the currents inside the electromagnets that enables 3D navigation without current sensing.
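The series-resonance idea can be illustrated with a short calculation: choose C = 1/((2*pi*f)^2 * L) and compare the apparent power with and without the capacitor. The coil values, current, and frequency below are illustrative, not those of the actual manipulator.

```python
import numpy as np

def series_capacitance(L, f):
    """Capacitance that resonates with coil inductance L [H] at frequency f [Hz]:
    C = 1 / ((2*pi*f)^2 * L)."""
    return 1.0 / ((2 * np.pi * f) ** 2 * L)

def apparent_power(I_rms, R, L, f, C=None):
    """Apparent power |S| the supply must deliver for a sinusoidal current I_rms
    through an R-L coil, optionally with a series capacitor C."""
    X = 2 * np.pi * f * L - (0.0 if C is None else 1.0 / (2 * np.pi * f * C))
    return I_rms ** 2 * np.sqrt(R ** 2 + X ** 2)

# Illustrative coil: R = 2 ohm, L = 10 mH, driven at 10 A rms with a 50 Hz rotating field.
L_coil, R_coil, f_rot, I = 10e-3, 2.0, 50.0, 10.0
C = series_capacitance(L_coil, f_rot)
print("resonating C [uF]:", C * 1e6)
print("|S| without C [VA]:", apparent_power(I, R_coil, L_coil, f_rot))
print("|S| with C    [VA]:", apparent_power(I, R_coil, L_coil, f_rot, C))
```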
|
|
14:30-14:45, Paper MoCT18.3 | |
>Anticipating Tumor Metastasis by Circulating Tumor Cells Captured by Acoustic Microstreaming |
|
Bai, Xue | School of Mechanical Engineering & Automation, Beihang University |
Song, Bin | BEIHANG UNIVERSITY |
Chen, Dixiao | Beihang University |
Dai, Yuguo | Beihang University |
Feng, Lin | Beihang University |
Arai, Fumihito | Nagoya University |
Keywords: Force Control, Biological Cell Manipulation
Abstract: Circulating tumor cells (CTCs) are the primary cause of tumor metastasis after surgery, and metastatic tumor recurrence is the leading cause of cancer death. Developing a platform for CTC separation is a prerequisite for predicting cancer-cell spread to vital organs. Herein, a novel acoustic microfluidic device was designed to capture “true” CTCs from whole blood samples. The blood was obtained from mice whose breast tumors had been removed; these samples contain CTCs that escaped from the solid tumor, rather than individual tumor cells artificially mixed into normal blood. In addition, tumor prognosis is predicted based on the number of CTCs captured by the acoustofluidic device. Finally, the prediction was confirmed through long-term observation of the mice after tumor excision. The acoustofluidic device can efficiently capture CTCs and predict tumor metastasis, which can help clinicians plan follow-up treatment for patients whose tumors have been surgically removed.
|
|
14:45-15:00, Paper MoCT18.4 | |
>In Vitro Design Investigation of a Rotating Helical Magnetic Swimmer for Combined 3-D Navigation and Blood Clot Removal (I) |
> Video Attachment
|
|
Leclerc, Julien | University of Houston |
Zhao, Haoran | University of Houston |
Bao, Daniel | University of Houston |
Becker, Aaron | University of Houston |
Keywords: Medical Robots and Systems, Autonomous Vehicle Navigation, Motion Control
Abstract: This article presents a miniature magnetic swimmer and a control apparatus able to perform both 3-D path following and blood clot removal. The robots are 2.5 mm in diameter, 6 mm in length, contain an internal permanent magnet, and have cutting tips coated in diamond powder. The robots are magnetically propelled by an external magnetic system using three coil pairs arranged orthogonally. A range of robot tip designs were tested for abrading human blood clots in vitro. The best design removed a blood clot at a maximum rate of 20.13 mm³/min. A controller for 3-D navigation is presented and tested. The best prototype was used in an experiment that combined both 3-D path following and blood clot removal.
|
|
MoCT19 |
Room T19 |
Surgical Robotics: Mechanisms |
Regular session |
Chair: Iordachita, Ioan Iulian | Johns Hopkins University |
Co-Chair: Hashizume, Makoto | Kyushu University |
|
14:00-14:15, Paper MoCT19.1 | |
>An Optimized Tilt Mechanism for a New Steady-Hand Eye Robot |
> Video Attachment
|
|
Wu, Jiahao | The Chinese University of Hong Kong |
Li, Gang | Johns Hopkins University |
Urias, Muller | Wilmer Eye Institute |
Patel, Niravkumar | Johns Hopkins University |
Liu, Yunhui | Chinese University of Hong Kong |
Gehlbach, Peter | Johns Hopkins Medical Institute |
Taylor, Russell H. | The Johns Hopkins University |
Iordachita, Ioan Iulian | Johns Hopkins University |
Keywords: Medical Robots and Systems, Mechanism Design
Abstract: Robot-assisted vitreoretinal surgery can filter surgeons' hand tremors and provide safe, accurate tool manipulation. In this paper, we report the design, optimization, and evaluation of a novel tilt mechanism for a new Steady-Hand Eye Robot (SHER). The new tilt mechanism features a four-bar linkage design and has a compact structure. Its kinematic configuration is optimized to minimize the required linear range of motion (LRM) for implementing a virtual remote center-of-motion (V-RCM) while tilting a surgical tool. Due to the different optimization constraints for the robots at the left and right sides of the human head, two configurations of this tilt mechanism are proposed. Experimental results show that the optimized tilt mechanism requires a significantly smaller LRM (e.g. 5.08 mm along the Z direction and 8.77 mm along the Y direction for the left-side robot) as compared to the slider-crank tilt mechanism used in the previous SHER (32.39 mm along the Z direction and 21.10 mm along the Y direction). The feasibility of the proposed tilt mechanism is verified in a mock bilateral robot-assisted vitreoretinal surgery. The ergonomically acceptable robot postures needed to access the surgical field are also determined.
|
|
14:15-14:30, Paper MoCT19.2 | |
>Automated Design and Construction of a Single Incision Laparoscopic System Adapted to the Required Workspace |
> Video Attachment
|
|
Brecht, Sandra V. | Technical University of Munich |
Voegerl, Johannes S. A. | Technical University of Munich |
Lueth, Tim C. | Technical University of Munich |
Keywords: Surgical Robotics: Laparoscopy, Medical Robots and Systems, Product Design, Development and Prototyping
Abstract: Currently, laparoscopic surgery systems are adapted for a large number of indications and patients and are therefore not optimized for one specific case. The challenge in creating systems whose kinematic structure is optimized for a specific patient, in terms of reachability and manipulability in the needed workspace, lies in the automated design and construction process. We have developed an automated design and construction process for a patient-specific Single Incision Laparoscopic System that is optimized for a specific indication, procedure, patient, and surgeon. The kinematic structure is adapted to the required workspace, the needed instrumentation, and manufacturing parameters. First results show that, with respect to the combination of reachability, manipulability, and system size in the required workspace, the patient-specific Single Incision Laparoscopic System is better suited for the specific application than the standard Single Incision Laparoscopic System in different standard sizes or one simple standard size.
|
|
14:30-14:45, Paper MoCT19.3 | |
>A Novel Endoscope Design Using Spiral Technique for Robotic-Assisted Endoscopy Insertion |
> Video Attachment
|
|
Li, Wei | Imperial College London |
Tsai, Ya-Yen | Imperial College London |
Yang, Guang-Zhong | Shanghai Jiao Tong University |
Lo, Benny Ping Lai | Imperial College London |
Keywords: Medical Robots and Systems
Abstract: Gastrointestinal (GI) endoscopy is a conventional and prevalent procedure used to diagnose and treat diseases in the digestive tract. This procedure requires inserting an endoscope equipped with a camera and instruments inside a patient to the target of interest. To manoeuvre the endoscope, an endoscopist rotates the knob at the handle to change the direction of the distal tip and applies a feeding force to advance the endoscope. However, due to the nature of this design, insertion often causes a looping problem, making it difficult to advance further into the deeper sections of the tract such as the transverse and ascending colon. To this end, in this paper, we propose a novel robotic endoscope that is covered by a rotating screw-like sheath and uses a spiral insertion technique to generate 'pull' forces at the distal tip of the endoscope to facilitate insertion. The whole shaft of the endoscope can be actively rotated, providing a crawling ability through the attached spiral sheath. With the redundant control of a spring-like continuum joint, the bending tip is capable of maintaining its orientation to assist endoscope navigation. To test its function and its ability to address the looping problem, three experiments were carried out. The first two experiments analysed the kinematics of the device and tested its ability to hold the distal tip at different orientation angles during spiral insertion. In the third experiment, we inserted the device into a bent colon phantom to evaluate the effectiveness of the proposed design against looping when advancing through a curved section of the colon. Results demonstrate locomotion using the spiral technique and verify the design's potential for clinical application.
|
|
14:45-15:00, Paper MoCT19.4 | |
>Development of Selective Driving Joint Forceps Using Shape Memory Polymer |
> Video Attachment
|
|
Fukukshima, Katsuhiko | Tokyo Medical and Dental University |
Kanno, Takahiro | Riverfield Inc |
Miyazaki, Tetsuro | The University of Tokyo |
Kawase, Toshihiro | Tokyo Medical and Dental University |
Kawashima, Kenji | The University of Tokyo |
Keywords: Medical Robots and Systems, Hydraulic/Pneumatic Actuators, Redundant Robots
Abstract: In this study, we developed selective driving joint forceps (SDJF) for laparoscopic surgery. The SDJF has a mechanism by which the driving joints can be selected arbitrarily, so each joint does not require an individual actuator. The developed SDJF has six joints that can be operated using only four actuators. Each joint has two degrees of freedom (DOF) of flexion. Therefore, the SDJF has the same working area as forceps with six driving joints (each joint can bend ±30° around the X and Y axes). The mechanism of the SDJF is realized by fixing each joint with a collar made of shape memory polymer. The proposed mechanism not only reduces the number of actuators required for joint operation, but also preserves the rigidity of the forceps, which is important in surgery. In addition, the driving section of the forceps is actuated by pneumatic cylinders; therefore, the forceps joints offer high backdrivability, light weight, and high output. We measured the heating and cooling times required to change the driving joint, as well as the dynamic response and rigidity of the prototype SDJF.
|
|
15:00-15:15, Paper MoCT19.5 | |
>Payload Optimization of Surgical Instruments with Rolling Joint Mechanisms |
> Video Attachment
|
|
Lee, Dong-Ho | Korea Advanced Institute of Science and Technology |
Hwang, Minho | University of California Berkeley |
Kim, Joonhwan | Korea Advanced Institute of Science and Technology(KAIST) |
Kwon, Dong-Soo | KAIST |
Keywords: Medical Robots and Systems, Optimization and Optimal Control
Abstract: Many surgical robots with steerable surgical instruments have been proposed for endoscopic surgery. Surgical instruments should be small in size for insertion into the body and be able to handle large payloads such as tissue. Because the overall diameter and payload parameters are a trade-off, it is difficult to design an instrument with a large payload while reducing its diameter. In this paper, we optimize the payload of a rolling joint mechanism by deriving the moment equilibrium equation and constraints for endoscopic surgery. A scaled-up prototype was fabricated with the design variables obtained from the optimization, and the validity of the method for calculating the payload was confirmed by the experimentally measured payload. By plotting the distribution of payloads obtained from the moment equilibrium equation, we also confirmed that the payload obtained from the optimization is the maximum. In addition, optimizations with different numbers of joints confirm that the payload tends to decrease as the number of joints increases. This payload optimization method could also be extended to minimizing the deflection of the bending section against external forces and minimizing the diameter of the surgical instrument given the minimum required payload.
|
|
15:15-15:30, Paper MoCT19.6 | |
>Self-Propelled Colonoscopy Robot Using Flexible Paddles |
> Video Attachment
|
|
Osawa, Keisuke | Kyushu University |
Nakadate, Ryu | Kyushu University |
Arata, Jumpei | Kyushu University |
Nagao, Yoshihiro | Kyushu University |
Akahoshi, Tomohiko | Kyushu University |
Eto, Masatoshi | Kyushu University |
Hashizume, Makoto | Kyushu University |
Keywords: Modeling, Control, and Learning for Soft Robots, Medical Robots and Systems, Flexible Robots
Abstract: The number of patients suffering from colorectal cancer (CRC) has been increasing. CRC is known to be curable if detected and treated early. Colonoscopy is currently one of the best screening methods for CRC because it can observe and treat disorders in the large intestine. However, operating the colonoscope is technically demanding for doctors because the insertion of the instrument into the large intestine requires considerable training and skill. To address this issue, we propose a novel self-propelled robot with flexible paddles for the intestinal tract. In this device, the torque is transmitted from a motor outside the patient body to a worm gear at the tip of the colonoscope by a flexible shaft. The worm gear is engaged with two spur gears, and flexible paddles fixed to these spur gears contact the wall of the large intestine to provide the propulsive force. We constructed a force transmission model of the robot to confirm the suitability of the design. The prototype of the self-propelled robot was fabricated by a 3D printer, and its locomotion in a simulated rubber intestine was evaluated. The velocity of the robot was faster than the required speed of 6.5 mm/s. The propulsive force was approximately 1 N; thus, the effectiveness of the robotic principle was confirmed. The mechanical locomotion design, its fabrication, and analysis results are reported in this paper.
|
|
MoCT20 |
Room T20 |
Surgical Robotics: Motion Planning |
Regular session |
Chair: Hammond III, Frank L. | Georgia Institute of Technology |
Co-Chair: De Momi, Elena | Politecnico Di Milano |
|
14:00-14:15, Paper MoCT20.1 | |
>Autonomous Task Planning and Situation Awareness in Robotic Surgery |
> Video Attachment
|
|
Ginesi, Michele | University of Verona |
Meli, Daniele | University of Verona |
Roberti, Andrea | University of Verona |
Sansonetto, Nicola | University of Verona |
Fiorini, Paolo | University of Verona |
Keywords: Surgical Robotics: Planning, Planning, Scheduling and Coordination, Semantic Scene Understanding
Abstract: The use of robots in minimally invasive surgery has improved the quality of standard surgical procedures. So far, only the automation of simple surgical actions has been investigated by researchers, while the execution of structured tasks requiring reasoning on the environment and the choice among multiple actions is still managed by human surgeons. In this paper, we propose a framework to implement surgical task automation. The framework consists of a task-level reasoning module based on answer set programming, a low-level motion planning module based on dynamic movement primitives, and a situation awareness module. The logic-based reasoning module generates explainable plans and is able to recover from failure conditions, which are identified and explained by the situation awareness module interfacing with a human supervisor, for enhanced safety. Dynamic movement primitives make it possible to replicate the dexterity of surgeons and to adapt to obstacles and changes in the environment. The framework is validated on different versions of the standard surgical training peg-and-ring task.
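For reference, a minimal one-dimensional discrete Dynamic Movement Primitive is sketched below (unit-duration phase, Gaussian basis forcing term); the gains and basis settings are common textbook defaults, not the framework's actual implementation.

```python
import numpy as np

class DMP1D:
    """Minimal discrete Dynamic Movement Primitive (one DOF, unit duration).

    Time is normalized to [0, 1]; scale the rollout externally if a different
    duration is needed.
    """

    def __init__(self, n_basis=20, alpha=25.0, beta=6.25, alpha_x=3.0):
        self.n, self.alpha, self.beta, self.alpha_x = n_basis, alpha, beta, alpha_x
        self.c = np.exp(-alpha_x * np.linspace(0, 1, n_basis))  # basis centres in phase x
        self.h = n_basis ** 1.5 / self.c                        # basis widths (heuristic)
        self.w = np.zeros(n_basis)

    def _forcing(self, x):
        psi = np.exp(-self.h * (x - self.c) ** 2)
        return x * (psi @ self.w) / (psi.sum() + 1e-10)

    def fit(self, y_demo):
        """Learn the forcing term from one demonstrated trajectory (array of positions)."""
        T = len(y_demo)
        dt = 1.0 / (T - 1)
        self.y0, self.g = y_demo[0], y_demo[-1]
        yd = np.gradient(y_demo, dt)
        ydd = np.gradient(yd, dt)
        x = np.exp(-self.alpha_x * np.linspace(0, 1, T))
        f_target = ydd - self.alpha * (self.beta * (self.g - y_demo) - yd)
        psi = np.exp(-self.h * (x[:, None] - self.c) ** 2)      # (T, n_basis)
        # Locally weighted regression, one weight per basis function.
        self.w = (psi * x[:, None] * f_target[:, None]).sum(0) / \
                 ((psi * x[:, None] ** 2).sum(0) + 1e-10)

    def rollout(self, n_steps=200, goal=None):
        """Reproduce the motion, optionally toward a new goal."""
        g = self.g if goal is None else goal
        dt = 1.0 / (n_steps - 1)
        y, yd, x = self.y0, 0.0, 1.0
        traj = [y]
        for _ in range(n_steps - 1):
            ydd = self.alpha * (self.beta * (g - y) - yd) + self._forcing(x)
            yd += ydd * dt
            y += yd * dt
            x += -self.alpha_x * x * dt
            traj.append(y)
        return np.array(traj)

# dmp = DMP1D(); dmp.fit(demonstrated_angle); reproduced = dmp.rollout(goal=new_target)
```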
|
|
14:15-14:30, Paper MoCT20.2 | |
>Improving Motion Planning for Surgical Robot with Active Constraints |
> Video Attachment
|
|
Su, Hang | Politecnico Di Milano |
Hu, Yingbai | Technische Universität München |
Li, Jiehao | Beijing Institute of Technology |
Guo, Jing | Guangdong University of Technology |
Liu, Yuan | Guangdong University of Technology |
Li, Mengyao | Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences |
Knoll, Alois | Tech. Univ. Muenchen TUM |
Ferrigno, Giancarlo | Politecnico Di Milano |
De Momi, Elena | Politecnico Di Milano |
Keywords: Medical Robots and Systems, Surgical Robotics: Laparoscopy, Motion and Path Planning
Abstract: In this paper, an improved motion planning scheme is proposed for surgical robot control with multiple active constraints, including joint constraints, joint velocity constraints, and remote center of motion constraints. It introduces an improved recurrent neural network (RNN) to optimize the online motion planning with respect to the multiple constraints. The surgical operation trajectory is obtained through teaching by demonstration. An improved motion planning scheme using the novel recurrent neural network is then designed to achieve accurate task tracking under the multiple constraints. A general quadratic performance index is adopted to represent the constraints. Finally, the effectiveness of the proposed algorithm is demonstrated using a KUKA LWR4+ robot in a laboratory setup.
|
|
14:30-14:45, Paper MoCT20.3 | |
>Integrating Model Predictive Control and Dynamic Waypoints Generation for Motion Planning in Surgical Scenario |
> Video Attachment
|
|
Minelli, Marco | University of Modena and Reggio Emilia |
Sozzi, Alessio | University of Ferrara |
De Rossi, Giacomo | University of Verona |
Ferraguti, Federica | Università Degli Studi Di Modena E Reggio Emilia |
Setti, Francesco | University of Verona |
Muradore, Riccardo | University of Verona |
Bonfe, Marcello | University of Ferrara |
Secchi, Cristian | Univ. of Modena & Reggio Emilia |
Keywords: Surgical Robotics: Planning, Medical Robots and Systems, Optimization and Optimal Control
Abstract: In this paper we present a novel strategy for motion planning of autonomous robotic arms in Robotic Minimally Invasive Surgery (R-MIS). We consider a scenario where several laparoscopic tools must move and coordinate in a shared environment. The motion planner is based on a Model Predictive Controller (MPC) that predicts the future behavior of the robots and moves them while avoiding collisions between the tools and satisfying the velocity limits. In order to avoid the local minima that could affect the MPC, we propose a strategy for driving it through a sequence of waypoints. The proposed control strategy is validated on a realistic surgical scenario.
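A minimal convex MPC step in the spirit described above is sketched below, with the tool tip modelled as a 3D single integrator, a waypoint-tracking cost, velocity limits, and obstacles handled by a linearized half-space constraint; the model, horizon, and weights are illustrative assumptions, not the paper's formulation.

```python
import numpy as np
import cvxpy as cp

def mpc_step(p0, waypoint, horizon=10, dt=0.1, v_max=0.05, obstacles=()):
    """One MPC iteration for a laparoscopic tool tip modelled as a 3D integrator.

    p0       : (3,) current tip position
    waypoint : (3,) current waypoint supplied by the higher-level generator
    obstacles: iterable of (centre, radius) to stay away from (linearized
               around the current position to keep the problem convex)

    Returns the first velocity command of the optimal plan.
    """
    p = cp.Variable((horizon + 1, 3))
    v = cp.Variable((horizon, 3))
    cost = 0
    cons = [p[0] == p0]
    for k in range(horizon):
        cost += cp.sum_squares(p[k + 1] - waypoint) + 0.1 * cp.sum_squares(v[k])
        cons += [p[k + 1] == p[k] + dt * v[k],
                 cp.norm(v[k], 2) <= v_max]
        for c, r in obstacles:
            n = (p0 - c) / (np.linalg.norm(p0 - c) + 1e-9)   # supporting hyperplane
            cons += [n @ (p[k + 1] - c) >= r]
    cp.Problem(cp.Minimize(cost), cons).solve()
    return v.value[0]

# v_cmd = mpc_step(tip_pos, next_waypoint, obstacles=[(other_tool_tip, 0.01)])
```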
|
|
14:45-15:00, Paper MoCT20.4 | |
>Simultaneous Trajectory Optimization and Force Control with Soft Contact Mechanics |
> Video Attachment
|
|
Wijayarathne, Lasitha | Georgia Institute of Technology |
Sima, Qie | Georgia Institute of Technology |
Zhou, Ziyi | Georgia Institute of Technology |
Zhao, Ye | Georgia Institute of Technology |
Hammond III, Frank L. | Georgia Institute of Technology |
Keywords: Optimization and Optimal Control, Manipulation Planning, Surgical Robotics: Planning
Abstract: Force modulation of robotic manipulators has been extensively studied for several decades but is not yet commonly used in safety-critical applications, due to a lack of accurate interaction contact models and weak performance guarantees, particularly with regard to the modulation of interaction forces. This study presents a high-level framework for simultaneous trajectory optimization and force control of the interaction between a manipulator and soft environments. Sliding friction and the normal contact force are taken into account. The dynamics of the soft contact model and the manipulator dynamics are simultaneously incorporated in a trajectory optimizer to generate desired motion and force profiles. A constrained optimization framework based on Differential Dynamic Programming and the Alternating Direction Method of Multipliers is employed to generate optimal control inputs and high-dimensional state trajectories. Experimental validation of the model performance is conducted on a soft substrate with known material properties using a Cartesian-space force control mode. Results compare ground-truth and model-predicted contact force states for multiple Cartesian motions and establish the validity range of the friction model. The proposed high-level planning has the potential to be leveraged for medical tasks involving manipulation of compliant, delicate, and deformable tissues.
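As an example of a soft normal-contact law of the kind such planners embed, the sketch below uses a Hunt-Crossley force with Coulomb sliding friction; the parameters and the specific force form are assumptions, not the paper's identified model.

```python
import numpy as np

def hunt_crossley_force(depth, depth_rate, k=800.0, n=1.5, damping=0.6):
    """Normal contact force for a soft substrate (Hunt-Crossley form):

        f_n = k * d^n * (1 + damping * d_dot)   for d > 0, else 0

    depth      : penetration d into the substrate [m]
    depth_rate : penetration velocity d_dot [m/s]
    k, n, damping are material parameters (values here are illustrative).
    """
    if depth <= 0.0:
        return 0.0
    return k * depth ** n * (1.0 + damping * depth_rate)

def contact_wrench(depth, depth_rate, v_tangential, mu=0.3):
    """Normal force plus a simple Coulomb sliding-friction term opposing the
    tangential tool velocity, the two interaction terms mentioned above."""
    fn = hunt_crossley_force(depth, depth_rate)
    vt_norm = np.linalg.norm(v_tangential)
    ft = -mu * fn * v_tangential / vt_norm if vt_norm > 1e-9 else np.zeros(3)
    return fn, ft
```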
|
|
MoCT21 |
Room T21 |
Surgical Robotics: Steerable Catheters I |
Regular session |
Chair: Swensen, John | Washington State University |
Co-Chair: Liu, Hao | Chinese Academy of Sciences |
|
14:15-14:30, Paper MoCT21.2 | |
>Towards the Development of a Robotic Transcatheter Delivery System for Mitral Valve Implant |
> Video Attachment
|
|
Nayar, Namrata Unnikrishnan | Georgia Institute of Technology, RoboMed Lab |
Jeong, Seokhwan | Georgia Institute of Technology |
Desai, Jaydev P. | Georgia Institute of Technology |
Keywords: Surgical Robotics: Steerable Catheters/Needles, Medical Robots and Systems, Mechanism Design
Abstract: Mitral regurgitation is one of the most common heart diseases, caused by ventricular dysfunction or anatomic abnormality of the mitral valve. The fundamental treatment for mitral regurgitation is to repair or replace the mitral valve, either through open-heart surgery, which is risky and requires a long recovery, or through minimally invasive approaches, which have significant challenges and limitations. In the transcatheter approach, the mitral valve implant is delivered minimally invasively directly to the mitral valve and is clamped onto the leaflet to mitigate or prevent regurgitation. However, this procedure requires delicate manipulation of the catheter in a constrained space and remains a challenging problem. In this work, we present a robotically steerable catheter design for the transcatheter procedure to address mitral regurgitation. The proposed catheter consists of two bending joints, one torsion joint, and an implant delivery module at the distal end of the robot. Kinematic models for each joint design are derived and compared with experimental results. Finally, we experimentally demonstrate the feasibility of the proposed catheter to navigate in a phantom heart model. In this demonstration, the bending joint was actuated by 75 degrees, the torsion joint was actuated by 90 degrees, and the implant was pushed out by 1.8 mm for delivery.
|
|
14:30-14:45, Paper MoCT21.3 | |
>Design and Modeling of a Parallel Shifted-Routing Cable-Driven Continuum Manipulator for Endometrial Regeneration Surgery |
> Video Attachment
|
|
Li, Jianhua | Shenyang Institute of Automation, Chinese Academy of Sciences |
Zhou, Yuanyuan | Shenyang Institute of Automation |
Tan, Jichun | Shengjing Hospital Affiliated to China Medical University |
Wang, Zhidong | Chiba Institute of Technology |
Liu, Hao | Chinese Academy of Sciences |
Keywords: Surgical Robotics: Steerable Catheters/Needles
Abstract: Endometrial regeneration surgery is a new therapy for intrauterine adhesion (IUA). However, existing instruments, which lack dexterity and compliance, struggle to perform the tasks of generating transplant wounds and transplanting stem cells during endometrial regeneration surgery. This paper presents a novel shifted-routing continuum manipulator that is driven by only two cables yet offers high dexterity, a simple structure, and a small size. The design of the continuum manipulator with its novel actuation strategy is introduced, and the manipulator's kinematic model is derived. The analysis and simulation show that the shifted-routing strategy improves the dexterity of manipulators with a limited number of actuators and enhances the ability to reach targets on the fundus and corpus of the uterus. Finally, the shifted-routing continuum manipulator is used to reach targets in a planar endometrium model. The experimental results show that the tip of the manipulator can reach the entire endometrium area from appropriate directions.
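A toy constant-curvature sketch of a planar cable-driven segment is given below to illustrate the kinematic quantities involved; the cable-to-bending relation and all dimensions are simplified assumptions, not the paper's model.

```python
import numpy as np

def planar_cc_tip(theta, L):
    """Tip position and orientation of a planar constant-curvature segment.

    theta : total bending angle of the segment [rad]
    L     : arc length of the segment [m]

    For very small |theta| the expressions reduce to a straight segment.
    """
    if abs(theta) < 1e-9:
        return np.array([L, 0.0]), 0.0
    r = L / theta                                   # bending radius
    tip = np.array([r * np.sin(theta), r * (1.0 - np.cos(theta))])
    return tip, theta

def bending_from_cable(dl, d):
    """Bending angle of a segment when the cable on the inner side shortens by
    `dl` and is routed at offset `d` from the neutral axis: theta ~= dl / d
    (constant-curvature assumption)."""
    return dl / d

# Shifted routing changes which segment a cable pair acts on along the arc,
# so the same two actuators can address different bending sections in turn.
theta = bending_from_cable(dl=2e-3, d=1.5e-3)
print(planar_cc_tip(theta, L=30e-3))
```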
|
|
14:45-15:00, Paper MoCT21.4 | |
>Design, Modeling, and Control of a Coaxially Aligned Steerable (COAST) Guidewire Robot |
> Video Attachment
|
|
Jeong, Seokhwan | Georgia Institute of Technology |
Chitalia, Yash | Georgia Institute of Technology |
Desai, Jaydev P. | Georgia Institute of Technology |
Keywords: Surgical Robotics: Steerable Catheters/Needles, Medical Robots and Systems, Mechanism Design
Abstract: Manual navigation of a guidewire is the first step in endovascular interventions. However, this procedure is time-consuming, with uncertain results, due to tortuous vascular anatomy. This paper introduces the design of a novel COaxially Aligned STeerable (COAST) guidewire robot that is 0.40 mm in diameter and demonstrates variable curvature and independently controlled bending length of the distal end. The COAST design involves three coaxially aligned tubes with a single tendon running centrally through the length of the robot. The outer tubes are made from micromachined nitinol, allowing for tendon-driven bending at various segments of the robot and thereby enabling variable bending curvatures, while an inner stainless steel tube controls the bending length of the robot. By varying the relative positions of the tubes and the tendon through insertion and retraction of the entire assembly, various joint lengths and curvatures can be achieved, which enables a follow-the-leader motion. We model the kinematics, statics, and inter-tube coupling of the COAST robot and develop a simple controller for the distal tip of the robot. Finally, we experimentally demonstrate the ability of the COAST guidewire to accurately navigate through phantom anatomical bifurcations and tortuous anatomy.
|
|
15:00-15:15, Paper MoCT21.5 | |
>Intermittent Insertion Control Method with Fine Needle for Adapting Lung Deformation Due to Breathing Motion |
> Video Attachment
|
|
Tsumura, Ryosuke | Waseda University |
Kakima, Kaoru | Waseda University |
Iwata, Hiroyasu | Waseda University |
Keywords: Surgical Robotics: Steerable Catheters/Needles, Medical Robots and Systems
Abstract: Fine needle insertion into the lung is challenging because of needle deflection due to breathing motion. Previous related works avoided breathing-induced needle deflection by having patients hold their breath during insertion, which causes discomfort. This paper proposes an intermittent insertion control method that decreases needle deflection by adapting to the lung deformation caused by breathing motion. The novelty of this method is that it allows accurate needle insertion without breath-holding, which will contribute to decreasing discomfort and the amount of radiation exposure for patients. The intermittent strategy advances the fine needle only during the time frames in which the lung is not deformed by diaphragm motion, so that needle deflection barely occurs. The feasibility of the proposed method was validated through PVC phantom and ex vivo experiments. The results showed that the deflection can be suppressed to 1.3 mm and 3.9 mm in the PVC phantom and ex vivo experiments, respectively.
|
|
15:15-15:30, Paper MoCT21.6 | |
>Resultant Radius of Curvature of Stylet-And-Tube Steerable Needles Based on the Mechanical Properties of the Soft Tissue, and the Needle |
|
Yang, Fan | Washington State University |
Babaiasl, Mahdieh | Washington State University |
Chen, Yao | Washington State University |
Ding, Jowlian | Washington State University |
Swensen, John | Washington State University |
Keywords: Surgical Robotics: Steerable Catheters/Needles, Medical Robots and Systems
Abstract: Steerable needles have been widely researched in recent years and have multiple potential roles in medicine. Their flexibility and ability to avoid obstacles allow steerable needles to be applied in biopsy, drug delivery, and other medical applications that require a high degree of freedom and control accuracy. The radius of curvature (ROC) of the needle while it is inserted into soft tissue is an important parameter for evaluating the efficacy and steerability of these flexible needles. For our Fracture-directed Stylet-and-Tube Steerable Needles, it is important to find a relationship among the resultant insertion ROC, the pre-set wire shape, and the Young's modulus of the soft tissue in order to characterize this class of steerable needles. In this paper, an approach is provided for obtaining the resultant ROC from the mechanical properties of the stylet and the tissue. A finite element analysis is also conducted to support the reliability of the model. This work sets the foundation for other researchers to predict the insertion ROC based on the mechanical properties of the needle and of the soft tissue into which it is inserted.
|
|
MoCT22 |
Room T22 |
Surgical Robotics: Steerable Catheters II |
Regular session |
Chair: Misra, Sarthak | University of Twente |
Co-Chair: Vander Poorten, Emmanuel B | KU Leuven |
|
14:00-14:15, Paper MoCT22.1 | |
>Design of a New Electroactive Polymer Based Continuum Actuator for Endoscopic Surgical Robots |
|
Jacquemin, Quentin | AMVALOR |
Sun, Quan | ENSAM |
Damiens, Thuau | IMS |
Monteiro, Eric | Arts Et Metiers Paristech, PIMM |
Tence-Girault, Sylvie | ARKEMA |
Mechbal, Nazih | Arts Et Métiers ParisTech, Paris |
Keywords: Surgical Robotics: Steerable Catheters/Needles, Soft Sensors and Actuators, Medical Robots and Systems
Abstract: This paper presents a smart continuum actuator based on a promising class of materials: ElectroActive polymers (EAPs). These polymers undergo a dimensional change in response to an applied electric field and could be integrated directly into an endoscopic robot structure. We focus on one such material, an electrostrictive polymer, for its valuable strain performance. The subject of this article is an analytical model, and the experimental analysis developed from it, of such a material, in an attempt to overcome the technical gap of integrating it into a multilayer composite sheet that performs robotic actuation.
|
|
14:15-14:30, Paper MoCT22.2 | |
>Analysis of Contact Stability and Contact Safety of a Robotic Intravascular Cardiac Catheter under Blood Flow Disturbances |
|
Hao, Ran | Case Western Reserve University |
Lombard Poirot, Nathaniel | Case Western Reserve University |
Cavusoglu, M. Cenk | Case Western Reserve University |
Keywords: Medical Robots and Systems
Abstract: This paper studies the contact stability and contact safety of a robotic intravascular cardiac catheter under blood flow disturbances while in contact with the tissue surface. A probabilistic blood flow disturbance model, where the blood flow drag forces on the catheter body are approximated using a quasi-static model, is introduced. Using this blood flow disturbance model, probabilistic contact stability and contact safety metrics, employing a sample-based representation of the blood flow velocity distribution, are proposed. Finally, the contact stability and contact safety of an MRI-actuated robotic catheter are analyzed using these models in a specific example scenario under left pulmonary inferior vein (LIV) blood flow disturbances.
|
|
14:30-14:45, Paper MoCT22.3 | |
>Improved FBG-Based Shape Sensing Methods for Vascular Catheterization Treatment |
|
Al-Ahmad, Omar | Katholieke Universiteit Leuven |
Ourak, Mouloud | University of Leuven |
Van Roosbroeck, Jan | FBGS International NV |
Vlekken, Johan | FBGS International NV |
Vander Poorten, Emmanuel B | KU Leuven |
Keywords: Surgical Robotics: Steerable Catheters/Needles, Calibration and Identification, Performance Evaluation and Benchmarking
Abstract: Fiber optic shape sensing is gaining popularity within areas such as medical catheterization where catheters and guidewires are used to navigate through tortuous vascular paths. Shape sensing can aid medical interventionalists by reducing damaging radiation and providing a more detailed real-time understanding of the 3-dimensional shape of the catheter/guidewire. However, despite the technology existing for several years, there is still room for improvement and steps to follow to reach the accuracy and robustness needed for these safety-critical applications. This paper discusses and provides methods for fiber integration within catheters to improve shape estimation accuracy and repeatability. A two-step calibration process is introduced for intrinsic twist compensation, which results in significant improvements in estimation accuracy. Additionally, a practical method for fiber parameter identification is introduced. The importance of estimating these parameters was found to be paramount for reaching adequate shape estimation. Further improvements to the reconstruction algorithm are proposed. Experimental validations with ground truth shapes are performed to assess the overall accuracy for static and dynamic configurations. For complex geometrical shapes and a fiber length of 170 mm, experiments show a mean spatial error of 0.70 mm (0.41%), a maximum of 2.52 mm (1.48%), and repeatability of ± 0.82 mm.
|
|
14:45-15:00, Paper MoCT22.4 | |
>Optimal Pose Estimation Method for a Multi-Segment, Programmable Bevel-Tip Steerable Needle |
|
Favaro, Alberto | Politecnico Di Milano |
Secoli, Riccardo | Imperial College London |
Rodriguez y Baena, Ferdinando | Imperial College, London, UK |
De Momi, Elena | Politecnico Di Milano |
Keywords: Surgical Robotics: Steerable Catheters/Needles, Biologically-Inspired Robots, Medical Robots and Systems
Abstract: Needle pose tracking is fundamental to achieving a precise and safe insertion in minimally invasive percutaneous interventions. In this work, a method for estimating the full pose of steerable needles is presented, considering a four-segment Programmable Bevel-Tip Needle (PBN) as a case study. The method also estimates the torsion of the needle that can arise during insertion because of the interaction forces exerted between the needle and the insertion medium. A novel 3D kinematic model of the PBN is developed and used to predict the full needle pose during insertion through an Extended Kalman Filter. The filter uses the position measurements provided by electromagnetic sensors located at the tips of the PBN segments as measurement data. The feasibility of the proposed solution is verified through in-gelatin experiments, demonstrating remarkable performance, with small errors in position (RMSE < 1 mm) and orientation (RMSE < 3°) estimation, as well as good accuracy compared to a bespoke geometric pose reconstruction method.
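The filter structure referenced in this abstract can be illustrated with a generic predict/update cycle. The sketch below is not the paper's PBN model: it uses a placeholder constant-velocity process model and position-only measurements standing in for the electromagnetic tip sensors.

```python
# Minimal Extended Kalman Filter sketch fusing tip-position measurements.
# The kinematic model here is a placeholder constant-velocity model, NOT the PBN model.
import numpy as np

def ekf_step(x, P, z, f, F, h, H, Q, R):
    """One EKF predict/update cycle.
    x, P : state estimate and covariance
    z    : measurement (e.g., EM sensor positions)
    f, F : process model and its Jacobian
    h, H : measurement model and its Jacobian
    Q, R : process and measurement noise covariances
    """
    # Predict
    x_pred = f(x)
    F_k = F(x)
    P_pred = F_k @ P @ F_k.T + Q
    # Update
    H_k = H(x_pred)
    y = z - h(x_pred)                       # innovation
    S = H_k @ P_pred @ H_k.T + R            # innovation covariance
    K = P_pred @ H_k.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H_k) @ P_pred
    return x_new, P_new

# Placeholder models: 6-state [position, velocity], position-only measurements.
dt = 0.01
A = np.block([[np.eye(3), dt * np.eye(3)], [np.zeros((3, 3)), np.eye(3)]])
f = lambda x: A @ x
F = lambda x: A
h = lambda x: x[:3]
H = lambda x: np.hstack([np.eye(3), np.zeros((3, 3))])
x, P = np.zeros(6), np.eye(6)
Q, R = 1e-4 * np.eye(6), 1e-2 * np.eye(3)
z = np.array([0.01, 0.0, 0.02])             # one simulated EM measurement [m]
x, P = ekf_step(x, P, z, f, F, h, H, Q, R)
```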
|
|
15:00-15:15, Paper MoCT22.5 | |
>MILiMAC: Flexible Catheter with Miniaturized Electromagnets As a Small-Footprint System for Microrobotic Tasks |
> Video Attachment
|
|
Sikorski, Jakub | University of Twente |
Mohanty, Sumit | University of Twente |
Misra, Sarthak | University of Twente |
Keywords: Medical Robots and Systems, Micro/Nano Robots
Abstract: Advancements in medical microrobotics have given rise to an abundance of agents capable of localised interaction with the human body at small scales. Nevertheless, clinically relevant applications of this technology are still limited by the auxiliary infrastructure required for actuation of micro-agents. In this paper, we approach this challenge. Using finite-element analysis, we show that miniaturized electromagnets can be used to create systems capable of providing magnetic forces adequate for micro-agent steering, while retaining a small footprint and low power consumption. We use these observations to create MILiMAC (Microrobotic Infrastructure Loaded into Magnetically-Actuated Catheter). MILiMAC is a flexible catheter employing three miniaturized electromagnets to provide localized magnetic actuation at a deeply seated microsurgery site. We test our approach in a proof-of-concept study deploying MILiMAC inside a test platform to deliver and steer a 600 um ferromagnetic microbead. The bead is steered along a set of user-defined trajectories using closed-loop position control. Across all trajectories, the best performance metrics are a mean error of 0.41 mm and a steady-state error of 0.27 mm.
|
|
15:15-15:30, Paper MoCT22.6 | |
>Optic Nerve Sheath Fenestration with a Multi-Arm Continuum Robot |
> Video Attachment
|
|
Mitros, Zisos | University College London |
Sadati, Seyedmohammadhadi | King's College London |
Seneci, Carlo Alberto | King's College London |
Bloch, Edward | University College London, Moorfields Eye Hospital |
Leibrandt, Konrad | Imperial College London |
Khadem, Mohsen | University of Edinburgh |
Da Cruz, Lyndon | Moorfields Eye Hospital |
Bergeles, Christos | King's College London |
Keywords: Surgical Robotics: Steerable Catheters/Needles, Mechanism Design, Medical Robots and Systems
Abstract: This paper presents a medical robotic system for deep orbital interventions, with a focus on Optic Nerve Sheath Fenestration (ONSF). ONSF is a currently invasive ophthalmic surgical approach that can reduce potentially blinding elevated hydrostatic intracranial pressure on the optic disc via an incision on the optic nerve. The prototype is a multi-arm system capable of dexterous manipulation and visualization of the optic nerve area, allowing for a minimally invasive approach. Each arm is an independently controlled concentric tube robot collimated by a bespoke guide that is secured on the eye sclera via sutures. In this paper, we consider the robot’s end-effector design in order to reach/navigate the optic nerve according to the clinical requirements of ONSF. A prototype of the robot was engineered, and its ability to penetrate the optic nerve was analysed by conducting ex vivo experiments on porcine optic nerves and comparing their stiffness to human ones. The robot was successfully deployed in a custom-made realistic eye phantom. Our simulation studies and experimental results demonstrate that the robot can successfully navigate to the operation site and carry out the intervention.
|
|
MoCT23 |
Room T23 |
Surgical Robotics: Virtual Training |
Regular session |
Chair: Kazanzides, Peter | Johns Hopkins University |
Co-Chair: Tagliabue, Eleonora | University of Verona |
|
14:00-14:15, Paper MoCT23.1 | |
>Enhanced Tracking Wall: A Real-Time Computing Method for Needle Injection on Haptic Simulators |
|
Alamilla-Daniel, Ma de los Angeles | INSA De Lyon |
Moreau, Richard | INSA-Lyon |
Redarce, Tanneguy | INSA De Lyon (Institut National Des Sciences Appliquees) |
Keywords: Haptics and Haptic Interfaces, Virtual Reality and Interfaces, Medical Robots and Systems
Abstract: Haptic simulators can help medical students train and improve their skills before practicing on a real patient. However, the vast majority of needle insertion haptic simulators are based on sophisticated models that are accurate but highly demanding of computing resources. Most of them do not provide haptic feedback and/or are not suitable for haptic control due to their computing time. In this paper, we present a new low-computational-cost method that aims to provide a realistic needle insertion experience to the student. A description of the proposed solution is provided, and it is illustrated by experimental results to highlight its performance.
|
|
14:15-14:30, Paper MoCT23.2 | |
>Soft Tissue Simulation Environment to Learn Manipulation Tasks in Autonomous Robotic Surgery |
> Video Attachment
|
|
Tagliabue, Eleonora | University of Verona |
Pore, Ameya | University of Verona |
Dall'Alba, Diego | University of Verona |
Magnabosco, Enrico | University of Verona |
Piccinelli, Marco | University of Verona |
Fiorini, Paolo | University of Verona |
Keywords: Simulation and Animation, Surgical Robotics: Laparoscopy, Reinforcement Learning
Abstract: Reinforcement Learning (RL) methods have demonstrated promising results for the automation of subtasks in surgical robotic systems. Since many trial-and-error attempts are required to learn the optimal control policy, RL agent training can be performed in simulation, and the learned behavior can then be deployed in real environments. In this work, we introduce an open-source simulation environment providing support for position-based dynamics soft-body simulation and state-of-the-art RL methods. We demonstrate the capabilities of the proposed framework by training an RL agent based on Proximal Policy Optimization in fat tissue manipulation for tumor exposure during a nephrectomy procedure. Leveraging a preliminary optimization of the simulation parameters, we show that our agent is able to learn the task on a virtual replica of the anatomical environment. The learned behavior is robust to changes in the initial end-effector position. Furthermore, we show that the learned policy can be directly deployed on the da Vinci Research Kit, which is able to execute the trajectories generated by the RL agent. The proposed simulation environment represents an essential component for the development of next-generation robotic systems, where interaction with the deformable anatomical environment is involved.
|
|
14:30-14:45, Paper MoCT23.3 | |
>Anatomical Mesh-Based Virtual Fixtures for Surgical Robots |
> Video Attachment
|
|
Li, Zhaoshuo | Johns Hopkins University |
Gordon, Alex | University of Toronto |
Looi, Thomas | Hospital for Sick Children |
Drake, James | Hospital for Sick Children, University of Toronto |
Forrest, Christopher R. | The Hospital for Sick Children, University of Toronto |
Taylor, Russell H. | The Johns Hopkins University |
Keywords: Medical Robots and Systems, Telerobotics and Teleoperation, Surgical Robotics: Planning
Abstract: This paper presents a dynamic constraint formulation to provide protective virtual fixtures of 3D anatomical structures from polygon mesh representations. The proposed approach can anisotropically limit the tool motion of surgical robots without any assumption about the local anatomical shape close to the tool. Using a bounded search strategy and a Principal Directed tree, the proposed system can run efficiently at 180 Hz for a mesh object containing 989,376 triangles and 493,460 vertices. The proposed algorithm has been validated in both simulation and skull cutting experiments. The skull cutting experiment setup uses a novel piezoelectric bone cutting tool designed for the da Vinci Research Kit. The results show that the virtual fixture assisted teleoperation yields statistically significant improvements in cutting path accuracy and penetration depth control. The code has been made publicly available at https://github.com/mli0603/PolygonMeshVirtualFixture
|
|
14:45-15:00, Paper MoCT23.4 | |
>Auditory Feedback Effectiveness for Enabling Safe Sclera Force in Robot-Assisted Vitreoretinal Surgery: A Multi-User Study |
|
Ebrahimi, Ali | Johns Hopkins University |
Roizenblatt, Marina | Johns Hopkins University |
Patel, Niravkumar | Johns Hopkins University |
Gehlbach, Peter | Johns Hopkins Medical Institute |
Iordachita, Ioan Iulian | Johns Hopkins University |
Keywords: Medical Robots and Systems, Robot Safety
Abstract: Robot-assisted retinal surgery has become increasingly prevalent in recent years, in part due to the potential for robots to help surgeons improve the safety of an immensely delicate and difficult set of tasks. The integration of robots into retinal surgery has resulted in diminished surgeon perception of tool-to-tissue interaction forces due to the robot's stiffness. The tactile perception of these interaction forces (sclera force) has long been a crucial source of feedback for surgeons, who rely on it to guide surgical maneuvers and to prevent damaging forces from being applied to the eye. This problem is exacerbated when unfavorable sclera forces originate from patient movements (dynamic eyeball manipulation) during surgery, which may cause the sclera forces to increase drastically. In this study we aim to evaluate the efficacy of providing warning auditory feedback based on the level of sclera force measured by force-sensing instruments. The intent is to enhance safety during dynamic eye manipulation in robot-assisted retinal surgery. The disturbances caused by lateral movement of the patient's head are simulated using a piezo-actuated linear stage. The Johns Hopkins Steady-Hand Eye Robot (SHER) is then used in a multi-user experiment. Twelve participants are asked to perform a mock retinal surgery by following painted vessels inside an eye phantom using a force-sensing instrument while auditory feedback is provided. The results indicate that the users are able to handle the eye motion disturbances while maintaining the sclera forces within safe boundaries when audio feedback is provided.
|
|
15:00-15:15, Paper MoCT23.5 | |
>FlexiVision: Teleporting the Surgeon's Eyes Via Robotic Flexible Endoscope and Head-Mounted Display |
> Video Attachment
|
|
Qian, Long | Johns Hopkins University |
Song, Chengzhi | Chinese University of Hong Kong, |
Jiang, Yiwei | Johns Hopkins University |
Luo, Qi | Pacific Lutheran University |
Ma, Xin | Chinese University of Hong Kong |
Chiu, Philip, Wai-yan | Chinese University of Hong Kong |
Li, Zheng | The Chinese University of Hong Kong |
Kazanzides, Peter | Johns Hopkins University |
Keywords: Virtual Reality and Interfaces, Medical Robots and Systems
Abstract: A flexible endoscope introduces more dexterity to image capture in endoscopic surgery. However, manual control, or automatic control based on instrument tracking, does not handle the misorientation between the endoscopic video and the surgeon. We propose an automatic flexible endoscope control method that tracks the surgeon's head with respect to the object in the surgical scene. The robotic flexible endoscope is actuated so that it captures the surgical scene from the same perspective as the surgeon. The surgeon wears a head-mounted display to observe the endoscopic video. The frustum of the flexible endoscope is rendered as an augmented reality overlay to provide surgical guidance. We developed the prototype, FlexiVision, integrating a 6-DOF robotic flexible endoscope based on the da Vinci Research Kit and Microsoft HoloLens. We evaluated the proposed automatic control method via a lesion observation task, and evaluated the AR surgical guidance in a lesion targeting task. The multi-user study results demonstrated that, for both tasks, FlexiVision significantly reduced the completion time (by 59% and 58%), the number of errors (by 75% and 95%), and the subjective task load level. With FlexiVision, the flexible endoscope could act as the surgeon's eyes teleported into the abdominal cavity of the patient.
|
|
15:15-15:30, Paper MoCT23.6 | |
>A Realistic Simulation Environment for MRI-Based Robust Control of Untethered Magnetic Robots with Intra-Operational Imaging |
> Video Attachment
|
|
Tiryaki, Mehmet Efe | Max Planck Institute for Intelligent Systems |
Erin, Onder | Carnegie Mellon University, Max Planck Institute |
Sitti, Metin | Max-Planck Institute for Intelligent Systems |
Keywords: Medical Robots and Systems, Simulation and Animation
Abstract: Dual use of magnetic resonance imaging (MRI) devices for robot tracking and actuation has transformed them into potential medical robotics platforms for targeted therapies and minimally invasive surgeries. In this paper, we present dynamic simulations of an MRI-based tracking and actuation scheme, which performs intra-operational imaging while controlling untethered magnetic robots. In our realistic rigid-body simulation, we show that the robot can be controlled with 1D projection-based position feedback while performing intra-operational echo-planar imaging (EPI). From the simulations, we observe that the velocity estimation error is the main source of controller instability at low MRI sequence frequencies. To minimize the velocity estimation errors, we constrain the controller gains according to the maximum closed-loop rates achievable for different sequence durations. Using the constrained controller in simulations, we confirm that EPI imaging can be introduced into the sequence as an intra-operational imaging method. Although the intra-operational imaging increases the position estimation error to 2.0 mm for simulated MRI-based position sensing with 0.6 mm Gaussian noise, it does not cause controller instability up to 128 k-space lines. With the presented approach, continuous physiological images could be acquired during medical operations while a magnetic robot is actuated and tracked inside an MRI device.
|
|
MoDT1 |
Room T1 |
Cellular and Modular Robots I |
Regular session |
Chair: Gerstmayr, Johannes | University Innsbruck, Institute of Mechatronics |
Co-Chair: Kim, MinJun | Southern Methodist University |
|
16:30-16:45, Paper MoDT1.1 | |
>An Obstacle-Crossing Strategy Based on the Fast Self-Reconfiguration for Modular Sphere Robots |
> Video Attachment
|
|
Luo, Haobo | The Chinese University of Hong Kong, Shenzhen |
Li, Ming | Chinese University of Hong Kong, Shenzhen |
Liang, Guanqi | The Chinese University of Hong Kong, Shenzhen |
Qian, Huihuan | Shenzhen Institute of Artificial Intelligence and Robotics for S |
Lam, Tin Lun | The Chinese University of Hong Kong, Shenzhen |
Keywords: Cellular and Modular Robots, Path Planning for Multiple Mobile Robots or Agents, Collision Avoidance
Abstract: This paper introduces an obstacle-crossing strategy and a self-reconfiguration algorithm for a new class of modular robots called rolling spheres, which can adapt to obstacles represented by cubes of different sizes thanks to the chain connection of multiple spheres. For the self-reconfiguration of the rolling spheres, a large gradient is obtained by classifying the action types and hierarchically minimizing the distance between the initial configuration and the final configuration. The most direct use of this large gradient is the fast crossing of various obstacles, by joining multiple self-reconfigurations according to an OctoMap of the obstacles. It is verified in simulation that the self-reconfiguration takes full advantage of the parallel movement of multiple modules to reduce the total number of time steps, and that the obstacle-crossing strategy can adapt to a variety of obstacles.
|
|
16:45-17:00, Paper MoDT1.2 | |
>A Unique Identifier Assignment Method for Distributed Modular Robots |
|
Assakr, Joseph | University of Franche-Comté |
Makhoul, Abdallah | University of Franche-Comté |
Bourgeois, Julien | Institut FEMTO-ST |
Jacques, Demerjian | Lebanese University |
Keywords: Distributed Robot Systems, Micro/Nano Robots, Software, Middleware and Programming Environments
Abstract: Modular robots are autonomous systems with variable morphology, composed of independent connected computational elements, called particles or modules. Due to critical resource constraints and limited capabilities, assigning a globally unique identifier (ID) to each particle is a very challenging task in modular robots. However, having a unique ID for each module remains essential for various operations and applications in this domain. For instance, it is required to establish communications between nodes and implement routing protocols, and it helps in saving energy and enhancing security mechanisms. In this paper, we propose a distributed unique ID assignment method for modular robots. It is a three-phase algorithm. The first phase consists of discovering the system while building a logical tree. The second phase finds the total number of particles in the system, which is needed for several operations in modular robots, and the third one is dedicated to the unique ID assignment. After fully optimizing the distributed algorithm, the effects of various system shapes and leader positions on the energy and time complexity are studied, and fitting solutions for different requirements are proposed.
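The three-phase idea (discovery tree, size aggregation, ID assignment) can be illustrated with a small centralized simulation. The sketch below is only a stand-in for the distributed message-passing algorithm described in the abstract; the graph, node names, and traversal order are illustrative assumptions.

```python
# Minimal centralized simulation of the three phases: (1) build a spanning tree
# from the leader, (2) aggregate subtree sizes, (3) assign contiguous ID ranges
# top-down. In the paper these steps run as distributed message exchanges.
from collections import defaultdict

def assign_ids(adjacency, leader):
    # Phase 1: discovery -- spanning tree via DFS from the leader.
    parent, children, order = {leader: None}, defaultdict(list), []
    stack = [leader]
    while stack:
        u = stack.pop()
        order.append(u)
        for v in adjacency[u]:
            if v not in parent:
                parent[v] = u
                children[u].append(v)
                stack.append(v)
    # Phase 2: subtree sizes, computed bottom-up (reverse DFS order).
    size = {u: 1 for u in order}
    for u in reversed(order):
        for c in children[u]:
            size[u] += size[c]
    # Phase 3: unique IDs -- each child receives a contiguous ID block.
    ids = {leader: 0}
    stack = [leader]
    while stack:
        u = stack.pop()
        next_id = ids[u] + 1
        for c in children[u]:
            ids[c] = next_id
            next_id += size[c]
            stack.append(c)
    return ids

# Example: a small module connectivity graph with placeholder node names.
adjacency = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A"], "D": ["B"]}
print(assign_ids(adjacency, "A"))   # 4 modules -> unique IDs 0..3
```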
|
|
17:00-17:15, Paper MoDT1.3 | |
>Self-Reconfiguration Planning of Adaptive Modular Robots with Triangular Structure Based on Extended Binary Trees |
> Video Attachment
|
|
Gerbl, Michael | University of Innsbruck |
Gerstmayr, Johannes | University Innsbruck, Institute of Mechatronics |
Keywords: Cellular and Modular Robots, Planning, Scheduling and Coordination
Abstract: In this paper, we present a novel description for the configuration space of adaptive modular robots with a triangular structure based on extended binary trees. In general, binary trees can serve as a representation of kinematic trees with a maximum of two immediate descendants per element. Kinematic loops are incorporated in the tree structure by an ingenious extension of the binary tree indices. The introduction of equivalence classes then allows a unique mathematical description of specific configurations of the robot system. Subsequently, we show how the extended binary tree can serve as a systematic tool for reconfiguration planning, allowing us to solve the self-reconfiguration problem for modular robots with a triangular structure, which as yet has no general solution. Reconfiguration is performed by populating the binary tree indices of a desired target configuration in an ascending manner, moving modules along the surface of the robot. We demonstrate the planning algorithm on a simple example and conclude by outlining a way to translate the individual reconfiguration steps to specific module movement commands.
|
|
17:15-17:30, Paper MoDT1.4 | |
>Linear Distributed Clustering Algorithm for Modular Robots Based Programmable Matter |
|
Bassil, Jad | FEMTO-ST Institute, Univ. Bourgogne Franche-Comte |
Moussa, Mohamad | FEMTO-ST Institute, Univ. Bourgogne Franche-Comte |
Makhoul, Abdallah | University of Franche-Comté |
Piranda, Benoît | Université De Franche-Comté / FEMTO-ST |
Bourgeois, Julien | Institut FEMTO-ST |
Keywords: Cellular and Modular Robots, Distributed Robot Systems, Control Architectures and Programming
Abstract: Modular robots are defined as autonomous kinematic machines with variable morphology. They are composed of several thousand or even millions of modules which are able to coordinate in order to behave intelligently. Clustering the modules in modular robots has many benefits, including scalability, energy efficiency, reduced communication delay, and improved self-reconfiguration processes, which focus on finding a sequence of reconfiguration actions to convert the robot from an initial configuration to a goal one. The main idea is to divide the nodes of an initial shape into clusters based on the final goal shape in order to reduce the time complexity and enhance the self-reconfiguration tasks. In this paper, we propose a robust clustering approach based on a distributed density-cut graph algorithm that divides the network into a pre-defined number of clusters based on the final goal shape. The result is an algorithm with linear complexity that scales to large modular robot systems. We implement and demonstrate our algorithm on a real Blinky Blocks system and evaluate it in simulation on networks of up to 30,000 modules.
|
|
17:30-17:45, Paper MoDT1.5 | |
>Magnetically Programmable Cuboids for 2D Locomotion and Collaborative Assembly |
> Video Attachment
|
|
Rogowski, Louis | Southern Methodist University |
Bhattacharjee, Anuruddha | Southern Methodist University |
Zhang, Xiao | Southern Methodist University |
Kararsiz, Gokhan | Southern Methodist University |
Fu, Henry | University of Utah |
Kim, MinJun | Southern Methodist University |
Keywords: Assembly, Cooperating Robots, Swarms
Abstract: The modular assembly and actuation of 3D-printed milliscale cuboid robots using a globally applied magnetic field are presented. Cuboids are composed of a rectangular resin shell embedded with two spherical permanent magnets that can independently align with any applied magnetic field. Placing cuboids within short distances of each other allows for modular assembly and disassembly by changing the magnetic field direction. Assembled cuboids are demonstrated to stably self-propel under sequential field inputs, allowing for both rolling and pivot-walking motion modes. Swarms of cuboids could be actuated within the working space and exhibited near-identical behavior. Specialized 'trap robots' were developed to capture objects, transport them within the working space, and subsequently release the payload at a new location. Cuboids with male and female connectors were developed to demonstrate selective mating between cuboids. The results show that cuboids are a diverse and adaptable platform with the potential to be scaled down to the sub-millimeter regime for use in medical or small-scale assembly applications.
|
|
MoDT2 |
Room T2 |
Cellular and Modular Robots II |
Regular session |
Chair: Hawkes, Elliot Wright | University of California, Santa Barbara |
Co-Chair: Rubenstein, Michael | Northwestern University |
|
16:30-16:45, Paper MoDT2.1 | |
>An Untethered Soft Cellular Robot with Variable Volume, Friction, and Unit-To-Unit Cohesion |
> Video Attachment
|
|
Devlin, Matthew | UC Santa Barbara |
Brad, Young | UC Santa Barbara |
Naclerio, Nicholas | University of California, Santa Barbara |
Haggerty, David Arthur | UC Santa Barbara |
Hawkes, Elliot Wright | University of California, Santa Barbara |
Keywords: Cellular and Modular Robots, Soft Robot Materials and Design, Soft Robot Applications
Abstract: A fundamental challenge in the field of modular and collective robots is balancing the trade-off between unit-level simplicity, which allows scalability, and unit-level functionality, which allows meaningful behaviors of the collective. At the same time, a challenge in the field of soft robotics is creating untethered systems, especially at a large scale with many controlled degrees of freedom (DOF). As a contribution toward addressing these challenges, here we present an untethered, soft cellular robot unit. A single unit is simple and has one DOF, yet it can increase its volume by 8x and apply substantial forces to the environment, can modulate its surface friction, and can switch its unit-to-unit cohesion while remaining agnostic to unit-to-unit orientation. As a soft robot, it is robust and can achieve untethered operation of its DOF. We present the design of the unit, a volumetric actuator with a perforated strain-limiting fabric skin embedded with magnets surrounding an elastomeric membrane, which in turn encompasses a low-cost micro-pump, battery, and control electronics. We model and test this unit and show simple demonstrations of three-unit configurations that lift, crawl, and perform plate manipulation. Our untethered, soft cellular robot unit lays the foundation for new robust soft robotic collectives that have the potential to apply human-scale forces to the world.
|
|
16:45-17:00, Paper MoDT2.2 | |
>FireAnt3D: A 3D Self-Climbing Robot towards Non-Latticed Robotic Self-Assembly |
> Video Attachment
|
|
Swissler, Petras | Northwestern University |
Rubenstein, Michael | Northwestern University |
Keywords: Cellular and Modular Robots, Grippers and Other End-Effectors, Swarms
Abstract: Robotic self-assembly allows robots to join to form useful, on-demand structures. Unfortunately, the methods employed by most self-assembling robotic swarms compromise this promise of adaptability through their use of fixed docking locations, which impair a swarm’s ability to handle imperfections in the structural lattice resulting from load deflection or imperfect robot manufacture; these concerns worsen as swarm size increases. Inspired by the amorphous structures built by cells and social insects, FireAnt3D uses a novel docking mechanism, the 3D continuous dock, to attach to like robots regardless of alignment. FireAnt3D demonstrates the use of the 3D continuous docks, as well as how a robot can use such docks to connect to like robots and locomote over arbitrary 3D arrangements of its peers. The research outlined in this paper presents a profoundly different approach to docking and locomotion during self-assembly and addresses longstanding challenges in the field of robotic self-assembly.
|
|
17:00-17:15, Paper MoDT2.3 | |
>Kubits: Solid-State Self-Reconfiguration with Programmable Magnets |
> Video Attachment
|
|
Hauser, Simon | École Polytechnique Fédérale De Lausanne (EPFL) |
Mutlu, Mehmet | École Polytechnique Fédérale De Lausanne (EPFL) |
Ijspeert, Auke | EPFL |
Keywords: Cellular and Modular Robots, Actuation and Joint Mechanisms, Mechanism Design
Abstract: Even though many prototypes of 3D self-reconfiguring modular robots (SRMRs) have been developed in recent years, a demonstration involving 1'000 modules remains a challenge. This is largely due to complex mechanics needed to achieve connection, disconnection and especially actuation in such a system. This work introduces "Kubits", which is, to the best of our knowledge, the first SRMR that achieves these functionalities without moving parts, i.e. in solid-state. Each module contains a kind of programmable magnet whose magnetization can be controlled. The simultaneous control of touching magnet pairs of two modules is used to create attraction (connection), neutrality (disconnection) and actuation (repulsion), which results in self-reconfiguration by a cube pivoting around an edge. We detail the design of the system and demonstrate a series of successful flips, including a jumping mode. The energy-efficient, lightweight and robust (both in terms of mechanics and control) method in Kubits is a promising path for scalable self-reconfiguration.
|
|
17:15-17:30, Paper MoDT2.4 | |
>ModMan: An Advanced Reconfigurable Manipulator System with Genderless Connector and Automatic Kinematic Modeling Algorithm |
> Video Attachment
|
|
Yun, Alchan | Korea Institute of Science and Technology |
Moon, Deaho | Korea Institute of Science and Technology |
Ha, Junhyoung | Korea Institute of Science and Technology |
Kang, Sung-Chul | Samsung Research, Samsung Electronics |
Lee, Woosub | Korea Institute of Science and Technology |
Keywords: Cellular and Modular Robots, Mechanism Design, Kinematics
Abstract: With the current trend of dwindling production life cycles, the necessity of highly adaptable systems is on the rise. With their high adaptability and easy maintenance, reconfigurable manipulators are strong candidates to replace conventional non-reconfigurable manipulators in this trend. However, most existing reconfigurable robots are designed for non-industrial use and have remained at the laboratory level because of their low accuracy and low mechanical/electrical capacity. In this paper, we present our newly developed manually reconfigurable manipulator, ModMan, equipped with genderless connectors that feature high mechanical/electrical capacity and with multi-DOF modules that increase the number of possible configurations while minimizing the loss of manipulator performance. An automatic kinematic modeling algorithm for reconfigurable manipulators is also presented to deal with the complexities due to genderless connections and multi-DOF modules. Evaluations of the repeatability of a 6-DOF configuration are performed to prove that the performance of ModMan is comparable to existing non-reconfigurable manipulators. Experiments on kinematic reconfiguration for arbitrary connections of modules are also demonstrated.
|
|
17:30-17:45, Paper MoDT2.5 | |
>Bayesian Particles on Cyclic Graphs |
> Video Attachment
|
|
Pervan, Ana | Northwestern University |
Murphey, Todd | Northwestern University |
Keywords: Cellular and Modular Robots, Distributed Robot Systems, Biologically-Inspired Robots
Abstract: We consider the problem of designing synthetic cells to achieve a complex goal (e.g., mimicking the immune system by seeking invaders) in a complex environment (e.g., the circulatory system), where they might have to change their control policy, communicate with each other, and deal with stochasticity including false positives and negatives---all with minimal capabilities and only a few bits of memory. We simulate the immune response in cyclic, maze-like environments and use targets at unknown locations to represent invading cells. Using only a few bits of memory, the synthetic cells are programmed to perform a physically-feasible algorithm with which they update their control policy based on randomized encounters with other cells. As the synthetic cells work together to find the target, their interactions as an ensemble function as a physical implementation of a Bayesian update. That is, the particles act as a particle filter. This result provides formal properties about the behavior of the synthetic cell ensemble that can be used to ensure robustness and safety. This method of self-organization is evaluated in simulations, and applied to an actual model of the human circulatory system.
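The ensemble-as-Bayes-filter idea can be illustrated with a discrete belief over candidate target locations updated from noisy binary detections. The sketch below is a centralized stand-in: the false-positive/false-negative rates and the environment are made up, and the paper's cells realize this kind of update implicitly through randomized encounters rather than a shared filter.

```python
# Minimal sketch: a discrete belief over candidate target sites is updated from
# noisy binary detections with false-positive/false-negative rates.
import numpy as np

rng = np.random.default_rng(0)
n_sites = 10
true_site = 7
p_fp, p_fn = 0.1, 0.2                      # false positive / negative rates

belief = np.full(n_sites, 1.0 / n_sites)   # uniform prior over sites
for _ in range(200):
    site = rng.integers(n_sites)           # a cell visits a random site
    present = (site == true_site)
    detect = rng.random() < (1 - p_fn if present else p_fp)
    # Likelihood of this observation under every hypothesis "target is at k"
    lik = np.where(np.arange(n_sites) == site,
                   (1 - p_fn) if detect else p_fn,
                   p_fp if detect else (1 - p_fp))
    belief = belief * lik
    belief /= belief.sum()

print(belief.argmax(), belief.max())       # belief concentrates on the true site
```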
|
|
MoDT3 |
Room T3 |
Exoskeleton and Prosthesis Design and Kinematics |
Regular session |
Chair: Onal, Cagdas | WPI |
Co-Chair: Condzal, Natalie | University of Maryland |
|
16:30-16:45, Paper MoDT3.1 | |
>Mechanical Design and Preliminary Performance Evaluation of a Passive Arm-Support Exoskeleton |
> Video Attachment
|
|
Du, Zihao | Huazhong University of Science and Technology |
Yan, Zefeng | Huazhong University of Science & Technology |
Huang, Tiantian | Huazhong University of Science and Technology |
Zhang, Zhengguang | Huazhong University of Science and Technology |
Zhang, Ziquan | Huazhong University of Science and Technology |
Bai, Ou | FIU |
Huang, Qin | Huazhong University of Science and Technology |
Han, Bin | Huazhong University of Science and Technology |
Keywords: Physically Assistive Devices, Prosthetics and Exoskeletons, Wearable Robots
Abstract: In this study, a passive arm-support exoskeleton was designed to provide assistive aid for manufacturing workers. The exoskeleton has two operating states, which are switched using a unique ratchet bar mechanism with two blocks fixed on the ratchet bar. When the upper arm is elevated to the highest point, the pawl module touches the lower block and disengages the pawl, so that the arm can move freely without any resistance. When the upper arm is lowered to the lowest point, the pawl module touches the upper block and re-engages the pawl, so that the upper arm can be locked at any vertical position. To improve the ergonomic properties, the structural parameters of the exoskeleton were determined by particle swarm optimization. The designed exoskeleton was simulated in an Adams model to investigate its actual performance. A preliminary experimental study was conducted to evaluate the effectiveness of the designed exoskeleton in alleviating users' physical loads when holding heavy tools; the muscular activity of the shoulder muscle groups involved in bearing the weight, measured by surface electromyography (EMG) over the shoulder, was significantly reduced for three healthy subjects carrying hand-held tools. The simulation and experiment results show that the designed exoskeleton can effectively relieve the shoulder burden by transferring the bearing load to the waist without obstructing the motion of the arm.
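Particle swarm optimization of design parameters, as mentioned in the abstract, follows a standard velocity/position update. The sketch below uses a placeholder objective (a sphere function) and arbitrary bounds rather than the exoskeleton's ergonomic cost and structural parameters.

```python
# Minimal particle swarm optimization sketch for tuning design parameters.
# Objective and bounds are illustrative placeholders.
import numpy as np

def pso(objective, bounds, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5):
    rng = np.random.default_rng(1)
    lo, hi = bounds
    dim = lo.size
    x = rng.uniform(lo, hi, size=(n_particles, dim))   # particle positions
    v = np.zeros_like(x)                               # particle velocities
    pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # Inertia + cognitive (personal best) + social (global best) terms
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

bounds = (np.array([-1.0, -1.0, -1.0]), np.array([1.0, 1.0, 1.0]))
best, best_f = pso(lambda p: float(np.sum(p**2)), bounds)
print(best, best_f)
```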
|
|
16:45-17:00, Paper MoDT3.2 | |
>Analysis, Development and Evaluation of Electro-Hydrostatic Technology for Lower Limb Prostheses Applications |
> Video Attachment
|
|
Tessari, Federico | Istituto Italiano Di Tecnologia |
Galluzzi, Renato | Politecnico Di Torino |
Tonoli, Andrea | Politecnico Di Torino |
Amati, Nicola | Politecnico Di Torino |
Laffranchi, Matteo | Istituto Italiano Di Tecnologia |
De Michieli, Lorenzo | Istituto Italiano Di Tecnologia |
Keywords: Prosthetics and Exoskeletons, Rehabilitation Robotics, Compliance and Impedance Control
Abstract: This paper presents electro-hydrostatic actuation as a valid substitute for electro-mechanical devices in powered knee prostheses. The work covers the design of a test rig exploiting linear electro-hydrostatic actuation. Typical control laws for prosthesis actuators are discussed, implemented and validated experimentally. Particularly, this work focuses on position and admittance control syntheses enhanced with feed-forward friction compensation. Finally, the efficiency of the test rig is characterized experimentally and compared to that of classical electro-mechanical designs. It is demonstrated that the electro-hydrostatic prototype is able to fulfill its targets from a control perspective, while also having the potential to outperform electro-mechanical actuation in efficiency.
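An admittance control loop with feed-forward friction compensation, of the general kind synthesized in this work, can be sketched for a single joint as below. Gains, the friction model, and the virtual dynamics are illustrative placeholders, not the identified values of the electro-hydrostatic test rig.

```python
# Minimal sketch of an admittance controller with feed-forward friction
# compensation for a single joint. All parameters are illustrative.
import numpy as np

class AdmittanceController:
    def __init__(self, M=0.1, B=2.0, K=20.0, dt=0.001,
                 kp=80.0, kd=2.0, f_coulomb=0.3, f_viscous=0.05):
        self.M, self.B, self.K, self.dt = M, B, K, dt      # virtual dynamics
        self.kp, self.kd = kp, kd                           # inner position loop
        self.f_c, self.f_v = f_coulomb, f_viscous           # friction model
        self.q_d, self.dq_d = 0.0, 0.0                      # admittance state

    def update(self, tau_ext, q, dq, q_ref=0.0):
        # Admittance: virtual mass-damper-spring driven by the interaction torque
        ddq_d = (tau_ext - self.B * self.dq_d
                 - self.K * (self.q_d - q_ref)) / self.M
        self.dq_d += ddq_d * self.dt
        self.q_d += self.dq_d * self.dt
        # Inner position loop plus feed-forward friction compensation
        tau_fb = self.kp * (self.q_d - q) + self.kd * (self.dq_d - dq)
        tau_ff = self.f_c * np.sign(self.dq_d) + self.f_v * self.dq_d
        return tau_fb + tau_ff

ctrl = AdmittanceController()
tau_cmd = ctrl.update(tau_ext=1.0, q=0.0, dq=0.0)   # one control tick
print(tau_cmd)
```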
|
|
17:00-17:15, Paper MoDT3.3 | |
>On the Use of (lockable) Parallel Elasticity in Active Prosthetic Ankles |
|
Geeroms, Joost | Vrije Universiteit Brussel |
Flynn, Louis | Vrije Universiteit Brussel |
Ducastel, Vincent | Vrije Universiteit Brussel |
Vanderborght, Bram | Vrije Universiteit Brussel |
Lefeber, Dirk | Vrije Universiteit Brussel |
Keywords: Prosthetics and Exoskeletons, Wearable Robots, Human-Centered Robotics
Abstract: New challenges arise when investigating the use of active prostheses for lower limb replacement, such as high motor power requirements, leading to increased weight and reduced autonomy. Series and parallel elasticity are often explored to reduce the necessary motor power, but the effect on the energy consumption of the prosthesis is often not directly investigated: the mechanical power properties are examined, yet the motor and gearbox dynamics and efficiencies are not considered. This paper presents an investigation of parallel elasticity compared to a series elastic actuation system used in an active ankle prosthesis. A matched electromechanical model of the actuator shows that the electrical efficiency can be influenced using parallel elasticity. The optimal configuration depends on the motor characteristics (dynamic behavior) and limitations, which should always be taken into account when designing optimal series and parallel springs. It is shown that adding parallel elasticity allows the required gear ratio, and thus the associated friction and inertial losses, to be reduced. Making the parallel elasticity lockable can further influence the behavior and allow for a more versatile actuator.
|
|
17:15-17:30, Paper MoDT3.4 | |
>Operational Space Formulation and Inverse Kinematics for an Arm Exoskeleton with Scapula Rotation |
> Video Attachment
|
|
Carignan, Craig | University of Maryland |
Gribok, Daniil | University of Maryland |
Rappaport, Tuvia | University of Maryland |
Condzal, Natalie | University of Maryland |
Keywords: Prosthetics and Exoskeletons, Kinematics, Telerobotics and Teleoperation
Abstract: The operational space of an 8-axis arm exoskeleton is partitioned into tasks based on the human arm motion, and a task priority approach is implemented to perform the inverse kinematics. The tasks are prioritized in the event that singularities or other constraints such as joint limits render the full desired operational space infeasible. The task reconstruction method is used to circumvent singularities in a deterministic manner so that the arm is never physically in a singular configuration. This is especially advantageous when the arm is fully extended because it allows the hand to move smoothly along the workspace boundary. The task priority inverse kinematics approach is also more computationally efficient than full Jacobian inverse methods and naturally manages the motion of the arm in a more anthropomorphic-friendly manner. The new methodology is demonstrated with four operational tasks on the MGA Exoskeleton.
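A two-level task-priority differential IK step, of the kind applied here, resolves a secondary task in the null space of the primary one. The sketch below uses random placeholder Jacobians instead of the MGA Exoskeleton's kinematics and a damped pseudoinverse for robustness near singularities; the paper's task reconstruction method is not reproduced.

```python
# Minimal two-level task-priority differential IK sketch: the secondary task is
# executed in the null space of the primary one. Jacobians are placeholders.
import numpy as np

def task_priority_step(J1, dx1, J2, dx2, damping=1e-3):
    """Damped task-priority resolution for two tasks (primary J1, secondary J2)."""
    def dpinv(J):
        # Damped pseudoinverse for numerical robustness near singularities
        return J.T @ np.linalg.inv(J @ J.T + damping * np.eye(J.shape[0]))
    dq1 = dpinv(J1) @ dx1
    N1 = np.eye(J1.shape[1]) - dpinv(J1) @ J1     # null-space projector of task 1
    J2_bar = J2 @ N1
    dq = dq1 + dpinv(J2_bar) @ (dx2 - J2 @ dq1)
    return dq

rng = np.random.default_rng(0)
J1 = rng.standard_normal((3, 8))    # e.g., hand-position task, 8 joints
J2 = rng.standard_normal((2, 8))    # e.g., lower-priority posture task
dq = task_priority_step(J1, np.array([0.01, 0.0, 0.0]), J2, np.array([0.0, 0.01]))
print(dq)
```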
|
|
17:30-17:45, Paper MoDT3.5 | |
>Kinematic Optimization of an Underactuated Anthropomorphic Prosthetic Hand |
> Video Attachment
|
|
Votta, Ann | Worcester Polytechnic Institute |
Gunay, Sezen Yagmur | Northeastern University |
Zylich, Brian | University of Massachusetts, Amherst |
Skorina, Erik | Worcester Polytechnic Institute |
Rameshwar, Raagini | Worcester Polytechnic Institute |
Erdogmus, Deniz | Northeastern University |
Onal, Cagdas | WPI |
Keywords: Prosthetics and Exoskeletons, Optimization and Optimal Control, Grasping
Abstract: The human hand serves as an inspiration for robotic grippers. However, the dimensions of the human hand evolved under a different set of constraints and requirements than that of robots today. This paper discusses a method of kinematically optimizing the design of an anthropomorphic robotic hand. We focus on maximizing the workspace intersection of the thumb and the other fingers as well as maximizing the size of the largest graspable object. We perform this optimization and use the resulting dimensions to construct a flexible, underactuated 3D printed prototype. We verify the results of the optimization through experimentation, demonstrating that the optimized hand is capable of grasping objects ranging from less than 1 mm to 12.8 cm in diameter with a high degree of reliability. The hand is lightweight and inexpensive, weighing 333 g and costing less than 175 USD, and strong enough to lift over 1.1 lb (500 g). We demonstrate that the optimized hand outperforms a well-known open-source 3D printed anthropomorphic hand on multiple tasks. Finally, we demonstrate the performance of our hand by employing a classification-based user intent decision system which predicts the grasp type using real-time electromyographic (EMG) activity patterns.
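The workspace-intersection objective described in the abstract can be approximated by Monte Carlo sampling of fingertip positions. The sketch below uses planar two-link finger models with made-up base positions, link lengths, and joint ranges; it only illustrates how such an objective could be evaluated inside an optimization loop.

```python
# Minimal sketch of estimating thumb/finger workspace intersection by Monte
# Carlo sampling. Finger models and dimensions are illustrative only.
import numpy as np

def sample_workspace(base, link_lengths, q_range, n=20000, seed=0):
    """Sample fingertip positions of a planar 2-link finger."""
    rng = np.random.default_rng(seed)
    l1, l2 = link_lengths
    q1 = rng.uniform(*q_range, n)
    q2 = rng.uniform(*q_range, n)
    x = base[0] + l1 * np.cos(q1) + l2 * np.cos(q1 + q2)
    y = base[1] + l1 * np.sin(q1) + l2 * np.sin(q1 + q2)
    return np.stack([x, y], axis=1)

def intersection_area(points_a, points_b, cell=0.002):
    """Approximate overlap area by counting grid cells hit by both point sets."""
    cells_a = {tuple(c) for c in np.floor(points_a / cell).astype(int)}
    cells_b = {tuple(c) for c in np.floor(points_b / cell).astype(int)}
    return len(cells_a & cells_b) * cell ** 2

thumb = sample_workspace((0.0, 0.0), (0.05, 0.03), (0.0, np.pi / 2))       # [m]
index = sample_workspace((0.0, 0.07), (0.04, 0.025), (-np.pi / 2, 0.0))    # [m]
print(intersection_area(thumb, index))   # objective to maximize over hand dimensions
```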
|
|
17:45-18:00, Paper MoDT3.6 | |
>A Novel Inverse Kinematics Method for Upper-Limb Exoskeleton under Joint Coordination Constraints |
> Video Attachment
|
|
Dalla Gasperina, Stefano | Politecnico Di Milano |
Ghonasgi, Keya | The University of Texas at Austin |
De Oliveira, Ana Christine | The University of Texas at Austin |
Gandolla, Marta | Università - Dipartimento Di Elettronica, Informazione E Bioinge |
Pedrocchi, Alessandra | Politecnico Di Milano |
Deshpande, Ashish | The University of Texas |
Keywords: Kinematics, Prosthetics and Exoskeletons, Rehabilitation Robotics
Abstract: In this study, we address the inverse kinematics problem for an upper-limb exoskeleton by presenting a novel method that guarantees the satisfaction of joint-space constraints and solves closed-chain mechanisms in a serial robot configuration. Starting from the conventional differential kinematics method based on the inversion of the Jacobian matrix, we describe and test two improved algorithms based on the Projected-Gradient method that take into account joint-space equality constraints. We use the Harmony exoskeleton as a platform to demonstrate the method. Specifically, we address the joint constraints that the robot maintains in order to match anatomical shoulder movement, and the closed-chain mechanisms used for the robot's joint control. Results show good performance of the proposed algorithms, which is confirmed by the ability of the robot to follow the desired task-space trajectory while ensuring the fulfilment of joint-space constraints, with a maximum error of about 0.05 degrees.
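A constraint-consistent differential IK step in the spirit of the projected-gradient approach can be sketched as follows: the task velocity is resolved in the null space of a joint-space equality constraint, with a restoration term that drives the constraint error to zero. The Jacobians and the coordination constraint below are illustrative placeholders, not Harmony's actual kinematics or closed-chain model.

```python
# Minimal sketch of a constraint-consistent differential IK step with a
# joint-space equality constraint A(q) dq = 0 (e.g., a coordination ratio).
import numpy as np

def constrained_ik_step(J, dx, A, g, k_restore=1.0, damping=1e-4):
    """J: task Jacobian, dx: desired task velocity,
    A: constraint Jacobian (A dq = 0 keeps g(q) = 0), g: current constraint error."""
    def dpinv(M):
        # Damped pseudoinverse
        return M.T @ np.linalg.inv(M @ M.T + damping * np.eye(M.shape[0]))
    N = np.eye(J.shape[1]) - dpinv(A) @ A        # projector onto constraint null space
    dq_task = N @ dpinv(J @ N) @ dx              # task motion that respects A dq = 0
    dq_restore = -k_restore * dpinv(A) @ g       # drive constraint error g(q) to zero
    return dq_task + dq_restore

rng = np.random.default_rng(0)
J = rng.standard_normal((6, 8))                  # 6D task, 8 joints (placeholder)
A = np.zeros((1, 8)); A[0, 0], A[0, 1] = 2.0, -1.0   # e.g., joint 1 = 2 * joint 0
dq = constrained_ik_step(J, rng.standard_normal(6) * 0.01, A, g=np.array([0.0]))
print(dq)
```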
|
|
MoDT4 |
Room T4 |
Exoskeletons: Control I |
Regular session |
Chair: Kim, Myunghee | University of Illinois at Chicago |
Co-Chair: Ames, Aaron | California Institute of Technology |
|
16:30-16:45, Paper MoDT4.1 | |
>Adaptive Gait Pattern Generation of a Powered Exoskeleton by Iterative Learning of Human Behavior |
> Video Attachment
|
|
Park, Kyeong-Won | KAIST |
Park, Jeongsu | KAIST |
Choi, Jungsu | Yeungnam University |
Kong, Kyoungchul | Korea Advanced Institute of Science and Technology |
Keywords: Wearable Robots, Human Performance Augmentation, Robust/Adaptive Control of Robotic Systems
Abstract: Several powered exoskeletons have been developed and commercialized to assist people with complete spinal cord injury. For motion control of a powered exoskeleton, a normal gait pattern is often applied as a reference. However, the physical abilities of paraplegics and the degrees of freedom of powered exoskeletons are totally different from those of people without disabilities. Therefore, this paper introduces a novel gait pattern that departs from the normal gait and is appropriate for paraplegics. Since a human is included, the powered exoskeleton system has many motion uncertainties that cannot be perfectly predicted, resulting from the different physical properties of paraplegics (SCI level, upper-body muscular strength, body parameters, inertia), crutch actions (placement position and timing), types of training (period, methodology), etc. Therefore, to find a stable and safe gait pattern adapted to the individual user, an iterative way to compensate the gait pattern is also required. In this paper, a human iterative learning algorithm, which utilizes the data accumulated during walking to adjust the gait trajectories, is proposed. Additionally, the effectiveness of the proposed gait pattern is verified by human walking experiments.
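The cycle-to-cycle adjustment described here follows the general pattern of iterative learning control. The sketch below applies a filtered learning update (u_{k+1} = u_k + L e_k) to a toy first-order joint model; it is not the exoskeleton-plus-user system or the paper's specific update law.

```python
# Minimal iterative learning sketch: the reference gait trajectory is adjusted
# from one gait cycle to the next using the previous cycle's tracking error.
# The plant is a toy first-order joint model with a constant disturbance.
import numpy as np

T, dt, L = 200, 0.01, 0.5                      # samples per cycle, step, learning gain
t = np.arange(T) * dt
target = 20 * np.sin(2 * np.pi * t / (T * dt)) # desired joint angle profile [deg]
reference = target.copy()                      # commanded trajectory, updated each cycle

def track_one_cycle(ref):
    """Toy plant: first-order lag plus a constant disturbance."""
    q, out = 0.0, []
    for r in ref:
        q += dt * (8.0 * (r - q) - 3.0)        # lag dynamics + disturbance
        out.append(q)
    return np.array(out)

for cycle in range(30):
    measured = track_one_cycle(reference)
    error = target - measured
    # Learning update with a simple moving-average filter for smoothness
    update = np.convolve(L * error, np.ones(5) / 5, mode="same")
    reference = reference + update

print(np.abs(target - track_one_cycle(reference)).max())  # residual error shrinks
```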
|
|
16:45-17:00, Paper MoDT4.2 | |
>Gait Training Robot with Intermittent Force Application Based on Prediction of Minimum Toe Clearance |
> Video Attachment
|
|
Miyake, Tamon | Waseda University |
Fujie, Masakatsu G. | Waseda University |
Sugano, Shigeki | Waseda University |
Keywords: Human Performance Augmentation, Physical Human-Robot Interaction, Human Factors and Human-in-the-Loop
Abstract: Adaptive assistance by gait training robots has been shown to improve gait performance through motion assistance. An important control objective during walking is to avoid tripping by controlling minimum toe clearance (MTC), an indicator of tripping risk, so that it does not decrease across gait cycles. No conventional gait training robot can adjust assistance timing based on MTC. In this paper, we propose a system that applies force intermittently based on an MTC prediction algorithm to encourage people to avoid lowering the MTC. The prediction algorithm is based on a radial basis function network whose input data include the angles, angular velocities, and angular accelerations of the hip, knee, and ankle joints in the sagittal and coronal planes at toe-off. The cable-driven system, which can switch between assistance and non-assistance modes, applies force when the predicted MTC is lower than the mean value. Nine participants were asked to walk on a treadmill, and we tested the effect of the system. The MTC data before, during, and after the assistance phase were analyzed for 120 s. The results showed that the minimum and first-quartile values of MTC could be increased after the assistance phase.
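A radial basis function network regressor of the kind used for MTC prediction can be sketched with Gaussian kernels and ridge-regularized output weights. Training data, feature dimensions, and centre selection below are synthetic assumptions, not the study's gait data.

```python
# Minimal radial-basis-function network sketch for regressing a scalar target
# (here standing in for MTC) from joint-kinematics features. Data are synthetic.
import numpy as np

class RBFNetwork:
    def __init__(self, centers, width, reg=1e-3):
        self.centers, self.width, self.reg = centers, width, reg
        self.weights = None

    def _phi(self, X):
        # Gaussian kernel activations of every sample against every centre
        d2 = ((X[:, None, :] - self.centers[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * self.width ** 2))

    def fit(self, X, y):
        Phi = self._phi(X)
        A = Phi.T @ Phi + self.reg * np.eye(Phi.shape[1])
        self.weights = np.linalg.solve(A, Phi.T @ y)   # ridge-regularized least squares
        return self

    def predict(self, X):
        return self._phi(X) @ self.weights

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 18))      # e.g., 3 joints x 2 planes x (angle, vel, acc)
y = X[:, 0] * 0.5 + 0.1 * rng.standard_normal(300)   # synthetic "MTC" target
centers = X[rng.choice(len(X), 25, replace=False)]
model = RBFNetwork(centers, width=3.0).fit(X, y)
print(model.predict(X[:5]))
```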
|
|
17:00-17:15, Paper MoDT4.3 | |
>Human Preference-Based Learning for High-Dimensional Optimization of Exoskeleton Walking Gaits |
> Video Attachment
|
|
Tucker, Maegan | California Institute of Technology |
Cheng, Myra | California Institute of Technology |
Novoseller, Ellen | California Institute of Technology |
Cheng, Richard | California Institute of Technology |
Yue, Yisong | California Institute of Technology |
Burdick, Joel | California Institute of Technology |
Ames, Aaron | California Institute of Technology |
Keywords: Human Factors and Human-in-the-Loop, Prosthetics and Exoskeletons, Humanoid and Bipedal Locomotion
Abstract: Optimizing lower-body exoskeleton walking gaits for user comfort requires understanding users' preferences over a high-dimensional gait parameter space. However, existing preference-based learning methods have only explored low-dimensional domains due to computational limitations. To learn user preferences in high dimensions, this work presents LineCoSpar, a human-in-the-loop preference-based framework that enables optimization over many parameters by iteratively exploring one-dimensional subspaces. Additionally, this work identifies gait attributes that characterize broader preferences across users. In simulations and human trials, we empirically verify that LineCoSpar is a sample-efficient approach for high-dimensional preference optimization. Our analysis of the experimental data reveals a correspondence between human preferences and objective measures of dynamicity, while also highlighting differences in the utility functions underlying individual users' gait preferences. This result has implications for exoskeleton gait synthesis, an active field with applications to clinical use and patient rehabilitation.
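A drastically simplified sketch of the iterate-over-one-dimensional-subspaces idea follows: each iteration searches along a random line through the current best gait parameters using only pairwise preference queries against a synthetic oracle. The actual framework maintains a Bayesian preference model and a principled query-selection rule, neither of which is reproduced here.

```python
# Drastically simplified sketch of preference-based search over random 1D
# subspaces of a high-dimensional gait-parameter space. The "user" is synthetic.
import numpy as np

rng = np.random.default_rng(0)
dim = 9                                      # e.g., 9 exoskeleton gait parameters
lo, hi = np.zeros(dim), np.ones(dim)
hidden_ideal = rng.uniform(lo, hi)           # stand-in for the user's true preference

def prefers(a, b):
    """Synthetic user: prefers the gait whose parameters are closer to the ideal."""
    return np.linalg.norm(a - hidden_ideal) < np.linalg.norm(b - hidden_ideal)

best = rng.uniform(lo, hi)
for it in range(50):
    direction = rng.standard_normal(dim)
    direction /= np.linalg.norm(direction)   # random 1D subspace through `best`
    for step in rng.uniform(-0.3, 0.3, size=5):      # candidate gaits on the line
        candidate = np.clip(best + step * direction, lo, hi)
        if prefers(candidate, best):                 # one pairwise preference query
            best = candidate

print(np.linalg.norm(best - hidden_ideal))           # shrinks over iterations
```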
|
|
17:15-17:30, Paper MoDT4.4 | |
>The Personalization of Stiffness for an Ankle-Foot Prosthesis Emulator Using Human-In-The-Loop Optimization |
> Video Attachment
|
|
Wen, Tin-Chun | University of Illinois at Chicago |
Jacobson, Michael | University of Illinois at Chicago |
Zhou, Xingyuan | University of Illinois at Chicago |
Chung, Hyun-Joon | Korea Institute of Robot and Convergence |
Kim, Myunghee | University of Illinois at Chicago |
Keywords: Rehabilitation Robotics, Wearable Robots, Model Learning for Control
Abstract: Evidence suggests that the metabolic cost associated with the locomotive activity of walking is dependent upon ankle stiffness. This stiffness can be a control parameter in an ankle-foot prosthesis. Considering the unique physical interaction between each individual with below-knee amputation and a robotic ankle-foot prosthesis, individually tuned stiffness may improve the assistance benefits. This personalization can be accomplished through human-in-the-loop (HIL) Bayesian optimization (BO). Here, we conducted a pilot study to identify personalized ankle-foot prosthesis stiffness using HIL BO to minimize the cost of walking, measured by metabolic cost. We used an improved, versatile ankle-foot prosthesis emulator, which enabled testing controllers over a wide range of stiffness conditions. Two participants with simulated amputation reduced their cost of walking under the personalized (optimized) stiffness condition by 6% and 5%, respectively. This result suggests that personalized stiffness may improve the assistance benefit.
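Human-in-the-loop Bayesian optimization over a single stiffness parameter can be sketched with a Gaussian process surrogate and an expected-improvement acquisition. The metabolic-cost function, stiffness range, and kernel choices below are synthetic stand-ins for the human experiment.

```python
# Minimal sketch of Bayesian optimization of one stiffness parameter: fit a GP
# to (stiffness, metabolic cost) data and pick the next stiffness by expected
# improvement. "measure_metabolic_cost" is a synthetic stand-in for the human trial.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def measure_metabolic_cost(stiffness):
    # Placeholder for a respirometry measurement at the given ankle stiffness
    return (stiffness - 4.2) ** 2 + 0.1 * np.random.default_rng().standard_normal()

grid = np.linspace(1.0, 8.0, 200).reshape(-1, 1)   # candidate stiffness values
X = [[2.0], [6.0]]                                  # initial conditions tested
y = [measure_metabolic_cost(x[0]) for x in X]

for trial in range(8):
    gp = GaussianProcessRegressor(kernel=RBF(1.0) + WhiteKernel(0.1),
                                  normalize_y=True).fit(X, y)
    mu, std = gp.predict(grid, return_std=True)
    best = min(y)
    z = (best - mu) / (std + 1e-9)
    ei = (best - mu) * norm.cdf(z) + std * norm.pdf(z)   # expected improvement
    x_next = float(grid[np.argmax(ei)])
    X.append([x_next])
    y.append(measure_metabolic_cost(x_next))

print(X[int(np.argmin(y))])    # stiffness with the lowest measured cost so far
```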
|
|
MoDT5 |
Room T5 |
Exoskeletons: Control II |
Regular session |
Chair: Wensing, Patrick M. | University of Notre Dame |
Co-Chair: Rose, Chad | Rice University |
|
16:30-16:45, Paper MoDT5.1 | |
>Improving Low-Level Control of the Exoskeleton Atalante in Single Support by Compensating Joint Flexibility |
|
Vigne, Matthieu | MINES ParisTech, PSL Research University |
El Khoury, Antonio | Wandercraft |
Di Meglio, Florent | CAS - Centre Automatique Et Systèmes - MINES ParisTech, PSL Research University |
Petit, Nicolas | Ecole Des Mines De Paris |
Keywords: Prosthetics and Exoskeletons, Humanoid Robot Systems
Abstract: This paper describes a novel low-level controller for the lower-limb exoskeleton Atalante. The controller implemented on the commercialized product Atalante works under the assumption of full rigidity, performing position control through decentralized joint PIDs. However, this controller is unable to tackle the presence of flexibilities in the system, which cause static errors and undesired oscillations. We modify this controller by leveraging estimations of the position and velocity of the flexibilities, readily available on Atalante through the use of strapdown IMUs. Instead of considering feedback on the motor position only, we perform feedback on both the joint position and the flexibility angle, keeping a decentralized approach. This enables compensation of both the static error present at rest, and rapid damping of the oscillations. To tune the gains of the proposed controller, we use a linearized model of an elastic joint to which we apply a steady-state LQR, which creates desirable robustness to the flexible model. The proposed controller is experimentally validated through various single support experiments on Atalante, either empty or with a user. In all cases, the proposed controller outperforms the state-of-the-art controller, providing improved trajectory tracking and disturbance rejection.
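A sketch of the gain-tuning step on a generic linearized elastic-joint (two-inertia) model with a steady-state LQR is shown below; the inertias, stiffness, damping, and weighting matrices are illustrative placeholders, not Atalante's identified parameters.

import numpy as np
from scipy.linalg import solve_continuous_are

# Two-inertia elastic joint: motor angle/velocity, link (flexibility) angle/velocity.
Jm, Jl, k, d = 0.5, 2.0, 400.0, 2.0        # assumed values
A = np.array([[0, 1, 0, 0],
              [-k/Jm, -d/Jm,  k/Jm,  d/Jm],
              [0, 0, 0, 1],
              [ k/Jl,  d/Jl, -k/Jl, -d/Jl]])
B = np.array([[0.0], [1.0/Jm], [0.0], [0.0]])   # torque acts on the motor side

# Weight both the joint error and the flexibility deflection/velocity.
Q = np.diag([50.0, 1.0, 200.0, 5.0])
R = np.array([[0.1]])

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)            # steady-state LQR gain, u = -K x
print("LQR gains:", K.round(2))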
|
|
16:45-17:00, Paper MoDT5.2 | |
>Extremum Seeking Control for Stiffness Auto-Tuning of a Quasi-Passive Ankle Exoskeleton |
|
Kumar, Saurav | University of Texas at Dallas |
Zwall, Matthew | University of Texas at Dallas |
Bolívar-Nieto, Edgar | The University of Michigan |
Gregg, Robert D. | University of Michigan |
Gans, Nicholas (Nick) | University Texas at Arlington |
Keywords: Prosthetics and Exoskeletons, Robust/Adaptive Control of Robotic Systems
Abstract: Recently, it has been shown that lightweight, passive ankle exoskeletons with spring-based energy store-and-release mechanisms can reduce the muscular effort of human walking. The stiffness of the spring in such a device must be properly tuned in order to minimize this muscular effort. However, the muscular effort changes for different locomotion conditions (e.g., walking speed), causing the optimal spring stiffness to vary as well. Existing passive exoskeletons have a fixed stiffness during operation, preventing them from responding to changes in walking conditions. Thus, there is a need for a device and auto-tuning algorithm that minimizes muscular effort across different walking conditions while preserving the advantages of passive exoskeletons. In this paper, we developed a quasi-passive ankle exoskeleton with a variable stiffness mechanism capable of self-tuning. As the relationship between the muscular effort and the optimal spring stiffness across different walking speeds is not known a priori, a model-free, discrete-time extremum seeking control (ESC) algorithm was implemented for real-time optimization of spring stiffness. Experiments with an able-bodied subject demonstrate that as the walking speed of the user changes, ESC automatically tunes the torsional stiffness about the ankle joint. The average RMS EMG readings of the tibialis anterior and soleus muscles at slow walking speed decreased by 26.48% and 7.42%, respectively.
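A bare-bones discrete-time extremum seeking loop of this general kind is sketched below, with the EMG-based effort measurement replaced by a placeholder function; the dither amplitude, demodulation filter, and step size are assumed values, not those of the paper.

import numpy as np

def muscular_effort(stiffness):
    # Placeholder for the filtered RMS EMG (or another effort proxy) measured
    # over one or more strides at the current torsional stiffness setting.
    return (stiffness - 5.0) ** 2 + 1.0

k_hat, a, omega, gain, lp = 8.0, 0.3, 0.8, 0.05, 0.0   # assumed ESC parameters
for n in range(200):
    dither = a * np.sin(omega * n)
    J = muscular_effort(k_hat + dither)        # perturbed effort measurement
    lp = 0.9 * lp + 0.1 * J                    # slow average; J - lp is the high-pass part
    grad_est = (J - lp) * np.sin(omega * n)    # demodulation -> gradient estimate
    k_hat -= gain * grad_est                   # gradient descent on the effort

print("stiffness after auto-tuning:", round(k_hat, 2))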
|
|
17:00-17:15, Paper MoDT5.3 | |
>Application of Interacting Models to Estimate the Gait Speed of an Exoskeleton User |
|
Karulkar, Roopak M. | University of Notre Dame |
Wensing, Patrick M. | University of Notre Dame |
Keywords: Prosthetics and Exoskeletons, Human and Humanoid Motion Analysis and Synthesis, Humanoid and Bipedal Locomotion
Abstract: This paper outlines steps toward a framework for model-based user intent detection to enable fluent human-robot interaction in assistive exoskeletons. An interacting multi-model (IMM) estimation scheme is presented to address state estimation for lower-extremity exoskeletons and to handle their hybrid dynamics. The proposed IMM scheme includes new approaches that enable it to estimate states of hybrid systems with dynamics that are unique to each phase. Traditional IMMs only consider the probabilistic likelihood of being in each phase, while the implementation in this work has been modified to consider physical likelihood as well. The IMM compares sensor readings from the exoskeleton to multiple candidate gaits of a template model of walking. Candidate gaits are generated using a numerical optimization procedure applied to a Bipedal Spring-Loaded Inverted Pendulum (B-SLIP) model. The framework was tested with sensor data acquired from walking trials in an Ekso GT exoskeleton, and was used to estimate gait phase and CoM velocity. It is shown that the standard IMM filtering approach results in incorrect estimates of gait phase, while the proposed addition to IMM using physical likelihood improves the estimates. Results with human subject data further show the ability to estimate gait phase and speed in experimental settings.
|
|
17:15-17:30, Paper MoDT5.4 | |
>A New Delayless Adaptive Oscillator for Gait Assistance |
|
Xue, Tao | Tsinghua University |
Wang, Ziwei | Tsinghua University |
Zhang, Tao | Tsinghua University |
Bai, Ou | FIU |
Zhang, Meng | Move Robotics, Co., Ltd |
Han, Bin | Huazhong University of Science and Technology |
Keywords: Wearable Robots, Rehabilitation Robotics, Robust/Adaptive Control of Robotic Systems
Abstract: To obtain synchronized gait assistance, this paper presents a new delayless adaptive dual-oscillator (ADO) scheme that addresses the inherent delay issue. In the ADO structure, a new oscillator is coupled with the primitive one, but its phase is adaptively compensated in a feed-forward manner. Notably, the compensated phase is determined by the proposed extended phase lag observer, in which both phase lag and phase lead can be properly estimated and eliminated during steady and non-steady gait. Moreover, a unified exoskeleton control scheme based on ADO is proposed to improve gait segmentation, velocity/acceleration estimation, intention estimation, and assistance generation, which further enhances assistance synergy and reduces safety risks. Experimental results demonstrate better assistance alignment and consequently reduced muscle effort with ADO-based assistance control.
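To illustrate the general flavor of oscillator-based gait synchronization (not the proposed ADO or its extended phase lag observer), the sketch below runs a common adaptive-frequency phase oscillator and adds a feed-forward phase offset on top; all gains, signals, and the offset source are assumed.

import numpy as np

dt, omega, phi = 0.001, 2.0 * np.pi, 0.0      # assumed time step and initial state
nu_phi, nu_omega = 20.0, 10.0                 # assumed adaptation gains
phase_comp = 0.0                              # would come from a phase-lag observer

def gait_signal(t):
    # Placeholder for the measured gait signal (e.g., hip angle), 1.2 Hz here.
    return np.sin(2.0 * np.pi * 1.2 * t)

for k in range(20000):
    t = k * dt
    e = gait_signal(t) - np.sin(phi)          # tracking error drives adaptation
    phi += dt * (omega + nu_phi * e * np.cos(phi))
    omega += dt * (nu_omega * e * np.cos(phi))

assist_phase = phi + phase_comp               # compensated phase used for assistance
print("adapted frequency [Hz]:", round(omega / (2.0 * np.pi), 2))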
|
|
MoDT6 |
Room T6 |
Humanoid and Bipedal Locomotion I |
Regular session |
Chair: Schultz, Joshua | University of Tulsa |
Co-Chair: Kim, Jung Hoon | Pohang University of Science and Technology |
|
16:30-16:45, Paper MoDT6.1 | |
>Walking Human Trajectory Models and Their Application to Humanoid Robot Locomotion |
> Video Attachment
|
|
Maroger, Isabelle | LAAS CNRS |
Stasse, Olivier | CNRS |
Watier, Bruno | LAAS, CNRS, Université Toulouse 3 |
Keywords: Humanoid and Bipedal Locomotion, Human and Humanoid Motion Analysis and Synthesis, Motion and Path Planning
Abstract: In order to fluidly perform complex tasks in collaboration with a human being, such as table handling, a humanoid robot has to recognize and adapt to human movements. To achieve such goals, a realistic model of human locomotion that is computable on a robot is needed. In this paper, we focus on making a humanoid robot follow a human-like locomotion path. We present two models of human walking that are used to compute an average trajectory of the body center of mass, from which a twist in the 2D plane can be deduced. The velocities generated by both models are then used by a walking pattern generator to drive a real TALOS robot. To determine which of these models is the most realistic for a humanoid robot, we measure human walking paths with motion capture and compare them to the computed trajectories.
|
|
16:45-17:00, Paper MoDT6.2 | |
>Robust Gait Synthesis Combining Constrained Optimization and Imitation Learning |
> Video Attachment
|
|
Ding, Jiatao | Wuhan University |
Xiao, Xiaohui | Wuhan University |
Tsagarakis, Nikos | Istituto Italiano Di Tecnologia |
Huang, Yanlong | University of Leeds |
Keywords: Humanoid and Bipedal Locomotion, Motion and Path Planning, Optimization and Optimal Control
Abstract: Although plenty of motion planning strategies have been proposed for bipedal locomotion, enhancing walking robustness in real-world environments is still an open question. This paper focuses on robust body and leg trajectory synthesis through integrating constrained optimization with imitation learning. Specifically, we first propose a Quadratically Constrained Quadratic Programming (QCQP) algorithm to make use of the ankle strategy and the stepping strategy. Based on the Linear Inverted Pendulum (LIP) model, body motion can be determined by the modulated Center of Pressure (CoP) position and step parameters (including step location and step duration). After that, we exploit an imitation learning approach, Kernelized Movement Primitives (KMP), to plan robot leg motions, which allows the learned motion patterns to be adapted to new situations (e.g., passing through various desired points) in a straightforward manner. Several LIP simulations and whole-body dynamic simulations demonstrate that higher walking robustness can be achieved using our framework.
|
|
17:00-17:15, Paper MoDT6.3 | |
>Core-Centered Actuation for Biped Locomotion of Humanoid Robots |
|
Fuller, Caleb | University of Tulsa |
Huzaifa, Umer | University of Illinois at Urbana-Champaign |
LaViers, Amy | University of Illinois at Urbana-Champaign |
Schultz, Joshua | University of Tulsa |
Keywords: Humanoid and Bipedal Locomotion, Legged Robots, Actuation and Joint Mechanisms
Abstract: In this paper we examine a novel method of core-located actuation that we believe can be used to vary gaits in a compass-gait walker, using critical analysis of a ball-in-tray mechanism to apply forces at the robot's ``pelvis''. The dynamic equations of motion of a tilting ball-tray system with several design parameters are developed and simulated for various tray designs. Results show that changes in tray design do indeed significantly affect the trajectory. When compared to a hardware ball-tray system, the results show good agreement with the simulation. The sagittal plane component of the ball's trajectory is applied to the motion of a corresponding mass at the ``pelvis'' of a compass-gait walker. Simulations of the compass-gait walker show that this trajectory generates a feasible gait.
|
|
17:15-17:30, Paper MoDT6.4 | |
>Design and Control of SLIDER: An Ultra-Lightweight, Knee-Less, Low-Cost Bipedal Walking Robot |
> Video Attachment
|
|
Wang, Ke | Imperial College London |
Marsh, David Michael | Imperial College London |
Saputra, Roni Permana | Imperial College London |
Chappell, Digby | Imperial College London |
Jiang, Zhonghe | Imperial College London |
Raut, Akshay | Imperial College London |
Kon, Bethany | Imperial College London |
Kormushev, Petar | Imperial College London |
Keywords: Humanoid and Bipedal Locomotion, Legged Robots
Abstract: Most state-of-the-art bipedal robots are designed to be highly anthropomorphic and therefore possess legs with knees. Whilst this facilitates more human-like locomotion, there are implementation issues that make walking with straight or near-straight legs difficult. Most bipedal robots have to move with a constant bend in the legs to avoid singularities at the knee joints, and to keep the center of mass at a constant height for control purposes. Furthermore, having a knee on the leg increases the design complexity as well as the weight of the leg, hindering the robot's performance in agile behaviors such as running and jumping. We present SLIDER, an ultra-lightweight, low-cost bipedal walking robot with a novel knee-less leg design. This non-anthropomorphic straight-legged design reduces the weight of the legs significantly whilst keeping the same functionality as anthropomorphic legs. Simulation results show that SLIDER's low-inertia legs contribute to less vertical motion in the center of mass (CoM) than anthropomorphic robots during walking, indicating that SLIDER's model is closer to the widely used Inverted Pendulum (IP) model. Finally, stable walking on flat terrain is demonstrated both in simulation and in the physical world, and feedback control is implemented to address challenges with the physical robot.
|
|
17:30-17:45, Paper MoDT6.5 | |
>Stable Crawling Policy for Wearable SuperLimbs Attached to a Human with Tuned Impedance |
> Video Attachment
|
|
Daniel, Phillip | MIT |
Asada, Harry | MIT |
Keywords: Humanoid and Bipedal Locomotion, Optimization and Optimal Control, Physical Human-Robot Interaction
Abstract: A control algorithm that allows a human model to crawl using a pair of supernumerary robotic limbs (SuperLimbs) is presented. The human model and SuperLimbs are coupled by a compliant harness. This work is inspired by the need for wearable robotic systems that can support workers engaged in fatiguing tasks. The crawling policy is developed based on Lyapunov analysis. The volume of the region of attraction (ROA) of the system is used to quantify robustness and identify the optimal harness compliance. Simulation experiments are used to verify the performance of the algorithm. The presented formulation allows us to guarantee stable locomotion under nominal conditions and define robustness against modeling error and perturbations. This study is also, to the authors' knowledge, the first to address cooperative crawling between a human and a wearable robotic system with state feedback.
|
|
17:45-18:00, Paper MoDT6.6 | |
>Lyapunov-Based Approach to Reactive Step Generation for Push Recovery of Biped Robots Via Hybrid Tracking Control of DCM |
|
Park, Gyunghoon | Korea Institute of Science and Technology |
Kim, Jung Hoon | Pohang University of Science and Technology |
Jo, Joonhee | KIST |
Oh, Yonghwan | Korea Institute of Science & Technology (KIST) |
Keywords: Humanoid and Bipedal Locomotion, Legged Robots, Reactive and Sensor-Based Planning
Abstract: This paper addresses the reactive generation of step time and location for biped robots to recover balance against a severe push. The key idea is to reformulate the balance recovery problem as a tracking problem for a "hybrid" inverted pendulum model of the biped, where taking a new step implicitly yields a discrete jump of the tracking error. This interpretation offers a Lyapunov-based approach to reactive step generation, which is possibly more intuitive and easier to analyze than large-scale or nonlinear optimization-based approaches. With a Lyapunov function associated with the continuous error dynamics of the divergent component of motion (DCM), our strategy for step generation is to decrease the "post-step" Lyapunov level of the DCM error at each walking cycle, until it eventually becomes smaller than a threshold so that no further footsteps need to be adjusted. We show that this idea can be implemented while obeying physical constraints by employing a hybrid tracking controller (together with a reference model) as our reactive step generator, which consists of a simple DCM-based continuous controller and a small quadratic programming-based discrete controller. The validity of the proposed scheme is verified by simulation results.
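A one-dimensional toy version of the DCM idea is sketched below: the divergent component of motion follows xi_dot = omega*(xi - p), and each new foot placement is chosen so that the post-step Lyapunov level V = e^2 of the DCM error shrinks. The simple proportional step rule and all numbers are illustrative, not the paper's hybrid controller.

import numpy as np

omega = np.sqrt(9.81 / 0.9)          # LIP natural frequency for 0.9 m CoM height
dt, T_step = 0.005, 0.4              # integration step and nominal step period

xi, xi_ref, p = 0.25, 0.0, 0.0       # DCM pushed away from its reference by a shove
for step in range(6):
    # Continuous phase: xi_dot = omega * (xi - p), with the stance foot at p.
    for _ in range(int(T_step / dt)):
        xi += dt * omega * (xi - p)
    # Discrete step update: place the next foot ahead of the DCM so that the
    # post-step error (and hence the Lyapunov level V = e**2) shrinks.
    e = xi - xi_ref
    p = xi + 0.3 * e                 # simple proportional step-placement rule
    print(f"step {step}: DCM error {e:+.3f}, Lyapunov level {e*e:.4f}")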
|
|
MoDT7 |
Room T7 |
Humanoid and Bipedal Locomotion II |
Regular session |
Chair: Inaba, Masayuki | The University of Tokyo |
Co-Chair: Ames, Aaron | Caltech |
|
16:30-16:45, Paper MoDT7.1 | |
>Sequential Motion Planning for Bipedal Somersault Via Flywheel SLIP and Momentum Transmission with Task Space Control |
> Video Attachment
|
|
Xiong, Xiaobin | California Institute of Technology |
Ames, Aaron | Caltech |
Keywords: Humanoid and Bipedal Locomotion, Whole-Body Motion Planning and Control, Legged Robots
Abstract: In this paper, we present a sequential motion planning and control method for generating somersaults on bipedal robots. The somersault (backflip or frontflip) is considered as a coupling between an axial hopping motion and a rotational motion about the center of mass of the robot; these are encoded by a hopping Spring-loaded Inverted Pendulum (SLIP) model and the rotation of a Flywheel, respectively. We thus present the Flywheel SLIP model for generating the desired motion on the ground phase. In the flight phase, we present a momentum transmission method to adjust the orientation of the lower body based on the conservation of the centroidal momentum. The generated motion plans are realized on the full-dimensional robot via momentum-included task space control. Finally, the proposed method is implemented on a modified version of the bipedal robot Cassie in simulation wherein multiple somersault motions are generated.
|
|
16:45-17:00, Paper MoDT7.2 | |
>A Compliance Control Method Based on Viscoelastic Model for Position-Controlled Humanoid Robots |
> Video Attachment
|
|
Li, Qingqing | Beijing Institute of Technology |
Yu, Zhangguo | Beijing Institute of Technology |
Chen, Xuechao | Beijing Insititute of Technology |
Meng, Libo | Beijing Institute of Technology |
Huang, Qiang | Beijing Institute of Technology |
Fu, Chenglong | Southern University of Science and Technology |
Chen, Ken | Tsinghua University |
Tao, Chunjing | National Research Center for Rehabilitation Technical Aids |
Keywords: Humanoid and Bipedal Locomotion, Reactive and Sensor-Based Planning
Abstract: Compliance is important for humanoid robots, especially position-controlled ones, to perform tasks in complicated environments, where unexpected or sudden contacts result in large impacts that may cause instability or damage the robot hardware. This paper presents a compliance control method based on a viscoelastic model that allows humanoid robots to survive under these conditions. The viscoelastic model is used to obtain the relationship between the differential of the contact force/torque and the linear/angular position. A state equation of this model can thus be established, and a state feedback controller that adjusts the position to adapt to the contact force/torque can be designed to realize compliant movement. The proposed compliance control method based on the viscoelastic model has been employed for ankle compliance, enabling stable walking on indefinite uneven terrain, and for arm compliance, enabling fall protection, on BHR-6P, a position-controlled humanoid robot, which validates its effectiveness.
|
|
17:00-17:15, Paper MoDT7.3 | |
>Impedance Control of Humanoid Walking on Uneven Terrain with Centroidal Momentum Dynamics Using Quadratic Programming |
> Video Attachment
|
|
Jo, Joonhee | KIST |
Oh, Yonghwan | Korea Institute of Science & Technology (KIST) |
Keywords: Humanoid and Bipedal Locomotion, Compliance and Impedance Control, Whole-Body Motion Planning and Control
Abstract: In this paper, we propose a stabilization strategy for soft landing in biped walking using impedance control and an optimization-based whole-body control framework. Even when proper contact forces and desired trajectories are given, the robot can easily become unstable if unexpected forces are applied to it or an impulsive contact force is produced at landing while walking. Therefore, an impedance control approach using contact forces is employed to obtain modified references that regulate the desired position, velocity, and acceleration of the swing foot, improving walking stability. Moreover, we perform whole-body control using quadratic programming (QP) that tracks the modified trajectories subject to the centroidal momentum dynamics. To validate the algorithm, a walking task on uneven terrain using a humanoid robot is shown.
|
|
17:30-17:45, Paper MoDT7.5 | |
>Footstep Modification Including Step Time and Angular Momentum under Disturbances on Sparse Footholds |
> Video Attachment
|
|
Kojio, Yuta | The University of Tokyo |
Omori, Yuki | The University of Tokyo |
Kojima, Kunio | The University of Tokyo |
Sugai, Fumihito | The University of Tokyo |
Kakiuchi, Yohei | The University of Tokyo |
Okada, Kei | The University of Tokyo |
Inaba, Masayuki | The University of Tokyo |
Keywords: Humanoid and Bipedal Locomotion, Body Balancing
Abstract: Maintaining dynamic balance is an important requirement for bipedal robots. To deal with large disturbances, the footsteps need to be modified depending on the disturbance. Currently, there are few methods that determine footsteps by considering foothold constraints and the balance of the robot. In this paper, we propose a footstep modification method that considers the steppable region. In certain situations, robots cannot maintain balance due to the limitations of the landing position on sparse footholds, such as stepping stones. Therefore, our proposed method modifies not only the step position, but also the step timing and the angular momentum, and balance can be maintained even on the footholds where the steppable region is strictly limited. These walking parameters are analytically calculated by representing the steppable region as convex hulls and applying our previously utilized method. We verified the effectiveness of the proposed method in an experiment where a life-sized humanoid robot walked on stepping stones consisting of unsteady blocks and was able to recover when pushed.
|
|
17:45-18:00, Paper MoDT7.6 | |
>Dynamic and Versatile Humanoid Walking Via Embedding 3D Actuated SLIP Model with Hybrid LIP Based Stepping |
> Video Attachment
|
|
Xiong, Xiaobin | California Institute of Technology |
Ames, Aaron | Caltech |
Keywords: Humanoid and Bipedal Locomotion, Legged Robots, Whole-Body Motion Planning and Control
Abstract: In this paper, we propose an efficient approach to generate dynamic and versatile humanoid walking with non-constant center of mass (COM) height. We exploit the benefits of using reduced order models (ROMs) and stepping control to generate dynamic and versatile walking motion. Specifically, we apply the stepping controller based on the Hybrid Linear Inverted Pendulum Model (H-LIP) to perturb a periodic walking motion of a 3D actuated Spring Loaded Inverted Pendulum (3D-aSLIP), which yields versatile walking behaviors of the 3D-aSLIP, including various 3D periodic walking, fixed location tracking, and global trajectory tracking. The 3D-aSLIP walking is then embedded on the fully-actuated humanoid via the task space control on the COM dynamics and ground reaction forces. The proposed approach is realized on the robot model of Atlas in simulation, wherein versatile dynamic motions are generated.
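For readers unfamiliar with H-LIP stepping, the sketch below shows a single-axis step-to-step update: the pre-impact LIP state is propagated in closed form through single support, and the step size is adjusted by feedback on the deviation from a desired period-1 orbit. The gains, timing, and orbit are assumed, and the 3D-aSLIP embedding and task-space controller are not shown.

import numpy as np

g, z0, T = 9.81, 0.9, 0.35
lam = np.sqrt(g / z0)
c, s = np.cosh(lam * T), np.sinh(lam * T)
A = np.array([[c, s / lam], [lam * s, c]])      # LIP single-support flow over T

# Desired symmetric period-1 orbit: nominal step length u_nom, pre-impact state x_des.
u_nom = 0.20
p_des = u_nom / 2.0
v_des = lam * p_des * (1.0 + c) / s
x_des = np.array([p_des, v_des])

K = np.array([1.0, 1.0 / lam])                  # stabilizing step-size feedback gain

x = np.array([p_des, v_des + 0.3])              # pre-impact state perturbed by a push
for k in range(6):
    u = u_nom + K @ (x - x_des)                 # adjusted step size for this step
    x = A @ (x - np.array([u, 0.0]))            # impact reset + next single-support phase
    print(f"step {k}: pre-impact velocity {x[1]:.3f} m/s (orbit velocity {v_des:.3f})")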
|
|
MoDT8 |
Room T8 |
Humanoid Robot Systems I |
Regular session |
Chair: Yoshida, Eiichi | National Inst. of AIST |
Co-Chair: Righetti, Ludovic | New York University |
|
16:30-16:45, Paper MoDT8.1 | |
>Vision-Based Belt Manipulation by Humanoid Robot |
> Video Attachment
|
|
Qin, Yili | University of Tsukuba |
Escande, Adrien | AIST |
Tanguy, Arnaud | CNRS-UM LIRMM |
Yoshida, Eiichi | National Inst. of AIST |
Keywords: Humanoid Robot Systems, Whole-Body Motion Planning and Control, Multi-Contact Whole-Body Motion Planning and Control
Abstract: Deformable objects are very common in our daily life. Because they have infinitely many degrees of freedom, they present a challenging problem in robotics. Inspired by practical industrial applications, we present in this paper our research on using a humanoid robot to take a long, thin, and flexible belt out of a bobbin and pick up the bending part of the belt from the ground. By proposing a novel non-prehensile manipulation strategy, "scraping", which utilizes the friction between the gripper and the surface of the belt, efficient manipulation can be achieved. In addition, a 3D shape detection algorithm for deformable objects is used during the manipulation process. By integrating the novel "scraping" motion and the shape detection algorithm into our multi-objective QP-based controller, we show experimentally that humanoid robots can complete this complex task.
|
|
16:45-17:00, Paper MoDT8.2 | |
>Enabling Remote Whole-Body Control with 5G Edge Computing |
|
Zhu, Huaijiang | New York University |
Sharma, Manali | New York University |
Pfeiffer, Kai | CNRS-AIST JRL (Joint Robotic Laboratory) UMI3218/RL, Tsukuba, Japan |
Mezzavilla, Marco | New York University |
Shen, Jia | OPPO |
Rangan, Sundeep | New York University |
Righetti, Ludovic | New York University |
Keywords: Humanoid Robot Systems, Whole-Body Motion Planning and Control, Humanoid and Bipedal Locomotion
Abstract: Real-world applications require light-weight, energy-efficient, fully autonomous robots. Yet, increasing autonomy is oftentimes synonymous with escalating computational requirements. It might thus be desirable to offload intensive computation, not only sensing and planning, but also low-level whole-body control, to remote servers in order to reduce on-board computational needs. Fifth generation (5G) wireless cellular technology, with its low latency and high bandwidth capabilities, has the potential to unlock cloud-based high performance control of complex robots. However, state-of-the-art control algorithms for legged robots can only tolerate very low control delays, which even ultra-low latency 5G edge computing can sometimes fail to achieve. In this work, we investigate the problem of cloud-based whole-body control of legged robots over a 5G link. We propose a novel approach that consists of a standard optimization-based controller on the network edge and a local linear, approximately optimal controller that significantly reduces on-board computational needs while increasing robustness to delay and possible loss of communication. Simulation experiments on humanoid balancing and walking tasks that include a realistic 5G communication model demonstrate a significant improvement in the reliability of robot locomotion under the jitter and delays likely to be experienced in 5G wireless links.
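A schematic of the fallback idea, under stated assumptions: the edge server is taken to return a reference state, a reference torque, and a local feedback gain, and when the packet for the current tick is late or lost the robot applies the last received linear policy instead. All names, dimensions, and the transport layer are invented for illustration.

import numpy as np

n_joints, n_states = 12, 24                   # assumed humanoid dimensions
last = None                                   # last packet received from the edge

def control_tick(x_measured, packet):
    """Runs at the on-board control rate; `packet` is None when the 5G link is late."""
    global last
    if packet is not None:
        last = packet                         # (x_ref, tau_ref, K_local) from the edge QP
    if last is None:
        return np.zeros(n_joints)             # nothing received yet: command zero torque
    x_ref, tau_ref, K_local = last
    # Local, approximately optimal linear policy around the last edge solution.
    return tau_ref - K_local @ (x_measured - x_ref)

# Demo: one fresh packet, then two delayed ticks handled by the local policy.
pkt = (np.zeros(n_states), np.ones(n_joints), 0.1 * np.ones((n_joints, n_states)))
for packet in (pkt, None, None):
    tau = control_tick(np.full(n_states, 0.01), packet)
print("torque on a delayed tick:", tau[:3])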
|
|
17:15-17:30, Paper MoDT8.4 | |
>Estimation and Control of Motor Core Temperature with Online Learning of Thermal Model Parameters: Application to Musculoskeletal Humanoids |
|
Kawaharazuka, Kento | The University of Tokyo |
Hiraoka, Naoki | The University of Tokyo |
Tsuzuki, Kei | University of Tokyo |
Onitsuka, Moritaka | The University of Tokyo |
Asano, Yuki | The University of Tokyo |
Okada, Kei | The University of Tokyo |
Kawasaki, Koji | The University of Tokyo |
Inaba, Masayuki | The University of Tokyo |
Keywords: Humanoid Robot Systems, Biomimetics, Robust/Adaptive Control of Robotic Systems
Abstract: The estimation and management of motor temperature are important for the continuous movements of robots. In this study, we propose an online learning method for the thermal model parameters of motors that enables accurate estimation of motor core temperature. We also propose a method for managing motor core temperature using the updated model, together with an anomaly detection method for motors. Finally, we apply these methods to the muscles of a musculoskeletal humanoid and verify its ability to perform continuous movements.
|
|
17:30-17:45, Paper MoDT8.5 | |
>Bilateral Humanoid Teleoperation System Using Whole-Body Exoskeleton Cockpit TABLIS |
> Video Attachment
|
|
Ishiguro, Yasuhiro | The University of Tokyo |
Makabe, Tasuku | The University of Tokyo |
Nagamatsu, Yuya | The University of Tokyo |
Kojio, Yuta | The University of Tokyo |
Kojima, Kunio | The University of Tokyo |
Sugai, Fumihito | The University of Tokyo |
Kakiuchi, Yohei | The University of Tokyo |
Okada, Kei | The University of Tokyo |
Inaba, Masayuki | The University of Tokyo |
Keywords: Humanoid Robot Systems, Telerobotics and Teleoperation, Virtual Reality and Interfaces
Abstract: We describe a system design approach for the bilateral teleoperation of a humanoid robot. We have focused on bipedal stability and on the problem of 2D/3D locomotion space. Our proposed system consists of two hardware platforms and two control software components. The master-side hardware is a newly developed seat-type whole-body exoskeleton cockpit called ``TABLIS'', and the master-side software reproduces a remote 2D/3D ground surface and overcomes the space limitation of locomotion (e.g., the need for treadmills). The slave-side software prevents the humanoid from falling down in cases where the operator provides inaccurate input. We used the humanoid robot ``JAXON'' as the slave-side hardware. We have demonstrated bilateral quasi-3D step traversal and the validity of our system design.
|
|
17:45-18:00, Paper MoDT8.6 | |
>Lyapunov-Stable Orientation Estimator for Humanoid Robots |
|
Benallegue, Mehdi | AIST Japan |
Cisneros Limon, Rafael | National Institute of Advanced Industrial Science and Technology |
Benallegue, Abdelaziz | University of Versailles St Quentin En Yvelines |
Chitour, Yacine | University of Paris Sud |
Morisawa, Mitsuharu | National Inst. of AIST |
Kanehiro, Fumio | National Inst. of AIST |
Keywords: Sensor Fusion, Humanoid and Bipedal Locomotion, Body Balancing
Abstract: In this paper, we present an observation scheme, with proven Lyapunov stability, for estimating a humanoid's floating-base orientation. The idea is to use velocity-aided attitude estimation, which requires knowing the velocity of the system. This velocity can be obtained from the kinematic data provided by contact information with the environment, together with the IMU and joint encoders. We demonstrate how this operation can be used in the case of a fixed or a moving contact, allowing it to be employed for locomotion. We show how to use this velocity estimate within a selected two-stage tilt estimator: (i) the first stage has global and quick convergence, and (ii) the second has smooth and robust dynamics. We provide new specific proofs of almost-global Lyapunov asymptotic stability and local exponential convergence for this observer. Finally, we assess its performance through a comparative simulation and by using it within a closed-loop stabilization scheme for the HRP-5P and HRP-2KAI robots performing whole-body kinematic tasks and locomotion.
|
|
MoDT9 |
Room T9 |
Humanoid Robot Systems II |
Regular session |
Chair: Kheddar, Abderrahmane | CNRS-AIST |
|
16:30-16:45, Paper MoDT9.1 | |
>Exceeding the Maximum Speed Limit of the Joint Angle for the Redundant Tendon-Driven Structures of Musculoskeletal Humanoids |
> Video Attachment
|
|
Kawaharazuka, Kento | The University of Tokyo |
Koga, Yuya | The University of Tokyo |
Tsuzuki, Kei | University of Tokyo |
Onitsuka, Moritaka | The University of Tokyo |
Asano, Yuki | The University of Tokyo |
Okada, Kei | The University of Tokyo |
Kawasaki, Koji | The University of Tokyo |
Inaba, Masayuki | The University of Tokyo |
Keywords: Redundant Robots, Biomimetics, Humanoid Robot Systems
Abstract: The musculoskeletal humanoid has various biomimetic benefits, and the redundant muscle arrangement is one of its most important characteristics. This redundancy can achieve fail-safe redundant actuation and variable stiffness control. However, the maximum joint angular velocity is limited by the slowest muscle among the redundant muscles, which move at various velocities around the joints. In this study, we propose two methods that can exceed this limit on the maximum joint angular velocity, and we verify their effectiveness with experiments on the actual robot.
|
|
16:45-17:00, Paper MoDT9.2 | |
>Three-Dimensional Posture Optimization for Biped Robot Stepping Over Large Ditch Based on a Ducted-Fan Propulsion System |
> Video Attachment
|
|
Huang, Zhifeng | Guangdong University of Technology |
Wang, Zijun | Guangdong University of Technology |
Wei, Jiapeng | Guangdong University of Technology |
Yu, JinTao | Guangdong University of Technology |
Zhou, Yuhao | Guangdong University of Technology |
Lao, Pihao | Guangdong University of Technology |
Xiaoliang, Huang | Chalmers University of Technology |
Zhang, Xuexi | Guangdong University of Technology |
Zhang, Yun | Guangdong University of Technology |
Keywords: Humanoid and Bipedal Locomotion, Motion and Path Planning, Humanoid Robot Systems
Abstract: The recent progress of an ongoing project that utilizes a ducted-fan propulsion system to improve a humanoid robot's ability to step over large ditches is reported. A novel method (GAS), based on a genetic algorithm with a smoothness constraint, can effectively minimize the thrust by optimizing the robot's posture during 3D stepping. The significant advantage of the method is that it realizes continuity and smoothness of the thrust and pelvis trajectories. The method enables the landing point of the robot's swing foot to lie not only in the forward direction but also to the side. The method was evaluated in simulation and by application to a prototype robot, Jet-HR1. By maintaining quasi-static balance, the robot could step over a ditch with a span of 450 mm (as much as 97% of the length of the robot's leg) in 3D stepping.
|
|
17:00-17:15, Paper MoDT9.3 | |
>Applications of Stretch Reflex for the Upper Limb of Musculoskeletal Humanoids: Protective Behavior, Postural Stability, and Active Induction |
> Video Attachment
|
|
Kawaharazuka, Kento | The University of Tokyo |
Koga, Yuya | The University of Tokyo |
Tsuzuki, Kei | University of Tokyo |
Onitsuka, Moritaka | The University of Tokyo |
Asano, Yuki | The University of Tokyo |
Okada, Kei | The University of Tokyo |
Kawasaki, Koji | The University of Tokyo |
Inaba, Masayuki | The University of Tokyo |
Keywords: Biomimetics, Modeling and Simulating Human, Humanoid Robot Systems
Abstract: The musculoskeletal humanoid has various biomimetic benefits, and it is important to be able to embed and evaluate human reflexes in the actual robot. Although the stretch reflex has been implemented in the lower limbs of musculoskeletal humanoids, we apply it to the upper limb to discover useful applications. We consider the implementation of the stretch reflex in the actual robot, its active and passive applications, and the change in behavior according to differences in its parameters.
|
|
17:15-17:30, Paper MoDT9.4 | |
> Adaptive-Gains Enforcing Constraints in Closed-Loop QP Control |
|
Djeha, Mohamed | Université De Montpellier |
Tanguy, Arnaud | CNRS-UM LIRMM |
Kheddar, Abderrahmane | CNRS-AIST |
Keywords: Humanoid Robot Systems, Motion Control, Optimization and Optimal Control
Abstract: In this letter, we revisit an open problem of constraint formulation in the context of task-space control frameworks formulated as quadratic programs. In most inverse dynamics implementations, the decision variables are the robot joint accelerations, the interaction forces (mostly physical contacts), and the robot torques. Nevertheless, many constraints, like distance and velocity bounds, are not originally written in terms of these decision variables. Previous work proposed solutions to formulate and enforce joint limit constraints. Yet, none of them worked properly in closed loop, specifically when bounds are reached or when they are time-varying. First, we show that constraints such as collision avoidance, bounds on the center of mass, field-of-view constraints, Cartesian position and velocity bounds on a given link, etc., can be written as a generic class. Then, we formulate this class of constraints as a gain-parameterized ordinary differential inequality. An adaptive-gain method systematically enforces this class of constraints and results in stable behavior when their bounds (even time-varying ones) are reached in closed loop. Experimental results performed on a humanoid robot validate our solution on a large panel of constraints.
|
|
17:30-17:45, Paper MoDT9.5 | |
>Fast Tennis Swing Motion by Ball Trajectory Prediction and Joint Trajectory Modification in Standalone Humanoid Robot Real-Time System |
> Video Attachment
|
|
Hattori, Mirai | The University of Tokyo |
Kojima, Kunio | The University of Tokyo |
Noda, Shintaro | The University of Tokyo |
Sugai, Fumihito | The University of Tokyo |
Kakiuchi, Yohei | The University of Tokyo |
Okada, Kei | The University of Tokyo |
Inaba, Masayuki | The University of Tokyo |
Keywords: Humanoid Robot Systems, Whole-Body Motion Planning and Control, Reactive and Sensor-Based Planning
Abstract: In this work, we propose a system for fast humanoid robot motions. When a humanoid robot performs a motion such as a tennis forehand stroke, fast whole-body motion in reaction to visual information is required. There are three problems to tackle: (1) the motion must be quick; (2) real-time visual processing that accounts for visual noise is needed; and (3) real-time joint angle modification while keeping balance is needed. To solve problem (1), we use an offline optimization system to increase the motion speed. To solve problem (2), we implement a ball trajectory prediction algorithm using the Extended Kalman Filter (EKF). To solve the trade-off between (1) and (3), we propose an offline optimization condition with an estimated balance margin. Using these methods, we achieved a non-step tennis forehand stroke motion with a humanoid robot by predicting the ball's trajectory with stereo cameras on the robot's head.
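A minimal ballistic Kalman predictor in one vertical axis, with position-only measurements, is sketched below; because this reduced model is linear it collapses to a plain Kalman filter rather than the paper's EKF with stereo measurements, and all noise levels and the demo trajectory are assumed.

import numpy as np

dt, g = 0.01, 9.81
F = np.array([[1, dt], [0, 1]])               # state: [height, vertical velocity]
B = np.array([0.5 * dt**2, dt])               # gravity enters as a known input
H = np.array([[1.0, 0.0]])                    # vision provides a position estimate
Q, R = np.diag([1e-5, 1e-3]), np.array([[4e-4]])

x, P = np.array([1.0, 3.0]), np.eye(2) * 0.1  # initial guess once the ball is detected
rng = np.random.default_rng(3)
for k in range(30):
    # Predict with ballistic dynamics, then correct with the noisy measurement.
    x = F @ x + B * (-g)
    P = F @ P @ F.T + Q
    t = (k + 1) * dt
    z = 1.0 + 3.0 * t - 0.5 * g * t**2 + rng.normal(0, 0.02)   # simulated stereo height
    y = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P

# Roll the model forward without measurements to predict the interception point.
x_pred = x.copy()
for _ in range(int(0.3 / dt)):
    x_pred = F @ x_pred + B * (-g)
print("predicted ball height in 0.3 s:", round(float(x_pred[0]), 3))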
|
|
MoDT10 |
Room T10 |
Legged Robots I |
Regular session |
Chair: Hubicki, Christian | Florida State University |
Co-Chair: Bhounsule, Pranav | University of Illinois at Chicago |
|
16:30-16:45, Paper MoDT10.1 | |
>A Momentum-Based Foot Placement Strategy for Stable Postural Control of Robotic Spring-Mass Running with Point Feet |
|
Secer, Gorkem | Johns Hopkins University |
Çınar, Ali Levent | Middle East Technical University |
Keywords: Legged Robots, Humanoid and Bipedal Locomotion, Motion Control
Abstract: A long-standing argument in model-based control of locomotion concerns the level of complexity that a model should have to define a behavior such as running. Even though a Goldilocks model based on biomechanical evidence is often sought, it is unclear what level of complexity qualifies a model as such. This dilemma deepens further for bipedal robotic running with point feet, since these robots are underactuated. When center-of-mass (COM) trajectories defined by the spring-loaded inverted pendulum (SLIP) model are fully tracked, the angular coordinates of the robot's trunk become uncontrolled. Existing work in the literature approaches this problem either by trading off COM trajectory tracking against upright trunk posture during stance or by adopting more detailed models that include the effects of trunk angular dynamics. In this paper, we present a new approach based on modifying the foot placement targets of the SLIP model. Theoretical analysis and numerical results show that the proposed approach can be an alternative to existing strategies.
|
|
16:45-17:00, Paper MoDT10.2 | |
>Nonlinear Model Predictive Control of Hopping Model Using Approximate Step-To-Step Models for Navigation on Complex Terrain |
|
Zamani, Ali | University of Illinois at Chicago |
Bhounsule, Pranav | University of Illinois at Chicago |
Keywords: Legged Robots, Humanoid and Bipedal Locomotion, Motion and Path Planning
Abstract: We consider the motion planning problem of a hopper navigating terrain comprising stepping stones while optimizing an energy metric. The most widely used approach, discrete search (e.g., A-star), cannot handle boundary conditions (e.g., end-path constraints on position and velocity). Continuous optimization, on the other hand, can easily deal with the boundary value problem but is not widely used in motion planning because it is computationally intensive and possibly non-convex once the terrain is considered. Here we use a continuous optimization approach within a model predictive control framework. First, we generate a library comprising initial states at an instant in the locomotion cycle (e.g., apex), the controls (e.g., foot placement, amplitude of force), and the states at the same instant at the next step. Next, we fit these step-to-step models with low-order polynomials (typically 2nd or 3rd order). Finally, the planner uses these low-order step-to-step models to preview a fixed distance ahead and plans the optimal steps and controls. Thereafter, we implement the plan for the first step, followed by replanning. This process continues until the hopper reaches the end of the terrain. The main contributions are low-order polynomial models for fast computation and the incorporation of the complex terrain as a cost function.
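A toy version of the fit-then-preview idea is sketched below: a fake step-to-step library is fitted with a 2nd-order polynomial by least squares, and a single control is chosen by brute-force preview; the library, cost, and one-step horizon stand in for the paper's data and MPC formulation.

import numpy as np

rng = np.random.default_rng(1)

# Fake library: apex velocity v and foot-placement control u -> next apex velocity.
v_lib = rng.uniform(0.5, 2.5, 500)
u_lib = rng.uniform(-0.2, 0.2, 500)
v_next_lib = 0.9 * v_lib + 1.5 * u_lib + 0.05 * v_lib**2 + rng.normal(0, 0.01, 500)

# Fit a 2nd-order polynomial step-to-step model v_next = f(v, u) by least squares.
features = np.column_stack([np.ones_like(v_lib), v_lib, u_lib,
                            v_lib**2, v_lib * u_lib, u_lib**2])
coef, *_ = np.linalg.lstsq(features, v_next_lib, rcond=None)

def predict_next_apex(v, u):
    f = np.array([1.0, v, u, v**2, v * u, u**2])
    return float(f @ coef)

# One-step preview: pick the control whose predicted next apex velocity is
# closest to a target while penalizing control effort (a stand-in for energy).
v, v_target = 1.0, 1.8
candidates = np.linspace(-0.2, 0.2, 41)
costs = [(predict_next_apex(v, u) - v_target) ** 2 + 0.1 * u**2 for u in candidates]
print("chosen foot placement:", candidates[int(np.argmin(costs))])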
|
|
17:00-17:15, Paper MoDT10.3 | |
>Risk-Constrained Motion Planning for Robot Locomotion: Formulation and Running Robot Demonstration |
> Video Attachment
|
|
Hackett, Jacob | Florida State University |
Gao, Wei | Florida State University |
Daley, Monica | Royal Veterinary College, Structure and Motion Laboratory |
Clark, Jonathan | Florida State University |
Hubicki, Christian | Florida State University |
Keywords: Legged Robots, Humanoid and Bipedal Locomotion, Optimization and Optimal Control
Abstract: Robots encounter many risks that threaten the success of practical locomotion tasks. Legs break, electrical components overheat, and feet can unexpectedly slip. When all risks cannot be completely avoided, how does a robot decide its best action? We present a method for planning robot motions by reasoning about risk-of-failure probabilities instead of applying cost-penalty functions or inflexible path constraints. This work develops a risk-constrained formulation that can be straightforwardly included in existing motion planning optimizations. The risk constraints scale tractably with many risk sources, and in some cases, only add linear constraints to the optimization problem and are therefore compatible with model-predictive control techniques. We present a toy “Puck World” proof-of-concept example and a practical implementation on a planar monopod robot that runs at 3.2 m/s when permitted to take high risk maneuvers. We believe this risk approach can be used to optimize robot behaviors under numerous conflicting task pressures and model risk-conscious behaviors in animals.
|
|
17:15-17:30, Paper MoDT10.4 | |
>Evaluating the Efficacy of Parallel Elastic Actuators on High-Speed, Variable Stiffness Running |
> Video Attachment
|
|
Nicholson, John | Florida State University |
Gart, Sean | US Army Research Lab |
Pusey, Jason | U.S. Army Research Laboratory (ARL) |
Clark, Jonathan | Florida State University |
Keywords: Legged Robots, Compliance and Impedance Control, Performance Evaluation and Benchmarking
Abstract: Although they take many forms, legged robots rely upon springs to achieve high-speed, dynamic locomotion. In this paper we examine the effect of adding parallel springs to robots that rely on virtual compliance. Specifically, we consider the trade-off between energetic efficiency and leg versatility that comes with using Parallel Elastic Actuators (PEAs). To do this, we vary the ratio of physical to virtual compliance for legged systems using a) a modified SLIP model, b) a single-legged hopping robot, and c) a multibody simulation of the quadruped robot LLAMA. In each case we show that having a small physical compliance significantly improves efficiency while also maintaining the robot's versatility.
|
|
17:30-17:45, Paper MoDT10.5 | |
>Line Walking and Balancing for Legged Robots with Point Feet |
> Video Attachment
|
|
Gonzalez Bolivar, Carlos Isaac | Istituto Italiano Di Tecnologia |
Barasuol, Victor | Istituto Italiano Di Tecnologia |
Frigerio, Marco | KU Leuven |
Featherstone, Roy | Istituto Italiano Di Tecnologia |
Caldwell, Darwin G. | Istituto Italiano Di Tecnologia |
Semini, Claudio | Istituto Italiano Di Tecnologia |
Keywords: Legged Robots, Body Balancing, Kinematics
Abstract: The ability of legged systems to traverse highly-constrained environments depends by and large on the performance of their motion and balance controllers. This paper presents a controller that excels in a scenario that most state-of-the-art balance controllers have not yet addressed: line walking, or walking on nearly null support regions. Our approach uses a low-dimensional virtual model (2-DoF) to generate balancing actions through a previously derived four-term balance controller and transforms them to the robot through a derived kinematic mapping. The capabilities of this controller are tested in simulation, where we show the 90 kg quadruped robot HyQ crossing a bridge only 6 cm wide (compared to its 4 cm diameter spherical foot) by balancing on two feet at any time while moving along a line. Additional simulations are carried out to test the performance of the controller and the effect of external disturbances. Lastly, we present our preliminary experimental results showing HyQ balancing on two legs while being disturbed.
|
|
17:45-18:00, Paper MoDT10.6 | |
>Haptic Sequential Monte Carlo Localization for Quadrupedal Locomotion in Vision-Denied Scenarios |
> Video Attachment
|
|
Buchanan, Russell | University of Oxford |
Camurri, Marco | University of Oxford |
Fallon, Maurice | University of Oxford |
Keywords: Legged Robots, Localization, Kinematics
Abstract: Continuous robot operation in extreme scenarios such as underground mines or sewers is difficult because exteroceptive sensors may fail due to fog, darkness, dirt or malfunction. To enable autonomous navigation in these kinds of situations, we have developed a type of proprioceptive localization which exploits the foot contacts made by a quadruped robot to localize against a prior map of an environment, without the help of any camera or LIDAR sensor. The proposed method enables the robot to accurately re-localize itself after making a sequence of contact events over a terrain feature. The method is based on Sequential Monte Carlo and can support both 2.5D and 3D prior map representations. We have tested the approach online and onboard the ANYmal quadruped robot in two different scenarios: the traversal of a custom-built wooden terrain course and a wall probing and following task. In both scenarios, the robot is able to effectively achieve a localization match and to execute a desired preplanned path. The method keeps the localization error down to 10 cm on feature-rich terrain using only its feet and kinematic and inertial sensing.
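A stripped-down Sequential Monte Carlo update of this flavor, for a 1D position against a 2.5D prior map, is sketched below: each particle is weighted by how well the measured contact height matches the map at that particle's location. The map, noise levels, and motion model are invented for illustration.

import numpy as np

rng = np.random.default_rng(2)

def prior_map_height(x):
    # Placeholder 2.5D prior map: a 15 cm step in the terrain at x = 2.0 m.
    return np.where(x > 2.0, 0.15, 0.0)

n = 500
particles = rng.uniform(0.0, 4.0, n)          # position hypotheses along the path
weights = np.full(n, 1.0 / n)

true_x, sigma_z, sigma_odom = 1.7, 0.02, 0.05
for contact in range(5):                       # a sequence of foot contact events
    true_x += 0.1                              # robot walks forward between contacts
    particles += 0.1 + rng.normal(0, sigma_odom, n)          # propagate with kinematic odometry
    z = prior_map_height(true_x) + rng.normal(0, sigma_z)    # measured contact height
    weights *= np.exp(-0.5 * ((z - prior_map_height(particles)) / sigma_z) ** 2)
    weights /= weights.sum()
    idx = rng.choice(n, size=n, p=weights)     # multinomial resampling
    particles, weights = particles[idx], np.full(n, 1.0 / n)

print("estimated position:", round(particles.mean(), 2), "true:", round(true_x, 2))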
|
|
17:45-18:00, Paper MoDT10.7 | |
>Robust Autonomous Navigation of a Small-Scale Quadruped Robot in Real-World Environments |
> Video Attachment
|
|
Dudzik, Thomas | Massachusetts Institute of Technology |
Chignoli, Matthew | Massachusetts Institute of Technology |
Bledt, Gerardo | Massachusetts Institute of Technology (MIT) |
Lim, Bryan Wei Tern | Massachusetts Institute of Technology |
Miller, Adam | Massachusetts Institute of Technology |
Kim, Donghyun | Massachusetts Institute of Technology |
Kim, Sangbae | Massachusetts Institute of Technology |
Keywords: Legged Robots
Abstract: Animal-level agility and robustness in robots cannot be accomplished by solely relying on blind locomotion controllers. A significant portion of a robot's ability to traverse terrain comes from reacting to the external world through visual sensing. However, embedding the sensors and compute that provide sufficient accuracy at high speeds is challenging, especially if the robot has significant space limitations. In this paper, we propose a system integration of a small-scale quadruped robot, the MIT Mini-Cheetah Vision, that exteroceptively senses the terrain and dynamically explores the world around it at high velocities. Through extensive hardware and software development, we demonstrate a fully untethered robot with all hardware onboard running a locomotion controller that combines state-of-the-art Regularized Predictive Control (RPC) with Whole-Body Impulse Control (WBIC). We devise a hierarchical state estimator that integrates kinematic, IMU, and localization sensor data to provide state estimates specific to path planning and locomotion tasks. Our integrated system has demonstrated robust autonomous waypoint tracking in dynamic real-world environments at speeds of over 1 m/s with high rates of success.
|
|
MoDT11 |
Room T11 |
Legged Robots II |
Regular session |
Chair: Wensing, Patrick M. | University of Notre Dame |
Co-Chair: Cho, Kyu-Jin | Seoul National University, Biorobotics Laboratory |
|
16:30-16:45, Paper MoDT11.1 | |
>Rapid Bipedal Gait Optimization in CasADi |
|
Fevre, Martin | University of Notre Dame |
Wensing, Patrick M. | University of Notre Dame |
Schmiedeler, James | University of Notre Dame |
Keywords: Legged Robots, Optimization and Optimal Control
Abstract: This paper shows how CasADi's state-of-the-art implementation of algorithmic differentiation can be leveraged to formulate and efficiently solve gait optimization problems, enabling rapid gait design for high-dimensional biped robots. Comparative studies on a 7-DOF planar biped show that CasADi generates optimal gaits 4 times faster than another existing advanced optimization package. The framework is also applied to simultaneously generate a gait and a feedback controller for 2 spatial bipeds: a 12-DOF model and a 20-DOF model. Results suggest that CasADi's unprecedented efficiency could provide a practical path toward real-time gait optimization for high-dimensional biped robots.
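For readers new to the toolchain, a minimal CasADi/IPOPT transcription of a trivial single-DOF trajectory problem is sketched below; it only illustrates the symbolic-variable and nlpsol workflow the paper builds on, not the hybrid-dynamics gait optimization itself.

import casadi as ca

N, dt = 20, 0.05
q = ca.MX.sym('q', N + 1)          # joint angle trajectory (single DOF for brevity)
u = ca.MX.sym('u', N)              # joint torque/velocity input trajectory

# Simple integrator dynamics q[k+1] = q[k] + dt*u[k] as equality constraints,
# starting from q = 0; cost is input effort plus a terminal error penalty.
g = [q[0]]
for k in range(N):
    g.append(q[k + 1] - q[k] - dt * u[k])
cost = ca.sumsqr(u) + 100 * (q[N] - 1.0) ** 2

nlp = {'x': ca.vertcat(q, u), 'f': cost, 'g': ca.vertcat(*g)}
solver = ca.nlpsol('solver', 'ipopt', nlp)   # algorithmic differentiation is automatic
sol = solver(x0=0, lbg=0, ubg=0)
print('optimal terminal angle:', float(sol['x'][N]))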
|
|
16:45-17:00, Paper MoDT11.2 | |
>Perceptive Locomotion in Rough Terrain -- Online Foothold Optimization |
> Video Attachment
|
|
Jenelten, Fabian | ETH Zurich |
Miki, Takahiro | University of Tokyo |
Elanjimattathil Vijayan, Aravind | KTH Royal Institute of Technology |
Hutter, Marco | ETH Zurich |
Keywords: Legged Robots, Motion Control, Optimization and Optimal Control
Abstract: Compared to wheeled vehicles, legged systems have a vast potential to traverse challenging terrain. To exploit the full potential, it is crucial to tightly integrate terrain perception for foothold planning. We present a hierarchical locomotion planner together with a foothold optimizer that finds locally optimal footholds within an elevation map. The map is generated in real-time from on-board depth sensors. We further propose a terrain-aware contact schedule to deal with actuator velocity limits. We validate the combined locomotion pipeline on our quadrupedal robot ANYmal with a variety of simulated and real-world experiments. We show that our method can cope with stairs and obstacles of heights up to 33% of the robot’s leg length.
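A brute-force toy version of local foothold selection is sketched below: candidate cells in an elevation-map window around the nominal foothold are scored for local roughness and deviation from the nominal target; the map and cost weights are placeholders for the paper's optimizer.

import numpy as np

rng = np.random.default_rng(4)
elevation = rng.normal(0.0, 0.01, (40, 40))    # toy elevation map (4 cm grid assumed)
elevation[15:20, 22:30] += 0.30                # an obstacle / step edge in the map

def foothold_cost(i, j, nominal):
    window = elevation[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2]
    roughness = float(window.max() - window.min())        # local slope/edge proxy
    deviation = float(np.hypot(i - nominal[0], j - nominal[1]))
    return 5.0 * roughness + 0.1 * deviation

nominal = (17, 24)                             # nominal foothold from the gait plan
cells = [(i, j) for i in range(12, 23) for j in range(19, 30)]
best = min(cells, key=lambda c: foothold_cost(*c, nominal))
print("nominal foothold:", nominal, "-> optimized foothold:", best)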
|
|
17:00-17:15, Paper MoDT11.3 | |
>Risk-Aware Motion Planning for a Limbed Robot with Stochastic Gripping Forces Using Nonlinear Programming |
> Video Attachment
|
|
Shirai, Yuki | University of California, Los Angeles |
Lin, Xuan | UCLA |
Tanaka, Yusuke | University of California, Los Angeles |
Mehta, Ankur | UCLA |
Hong, Dennis | UCLA |
Keywords: Legged Robots, Motion and Path Planning, Optimization and Optimal Control
Abstract: We present a motion planning algorithm with probabilistic guarantees for limbed robots with stochastic gripping forces. Planners based on deterministic models with worst-case uncertainty can be conservative and too inflexible to account for the stochastic behavior of the contact, especially when a gripper is installed. Our proposed planner enables the robot to simultaneously plan its pose and contact force trajectories while considering the risk associated with the gripping forces. Our planner is formulated as a nonlinear programming problem with chance constraints, which allows the robot to generate a variety of motions based on different risk bounds. To model the gripping forces as random variables, we employ Gaussian Process regression. We validate our proposed motion planning algorithm on an 11.5 kg six-limbed robot for two-wall climbing. Our results show that our proposed planner generates various trajectories (e.g., avoiding low friction terrain under the low risk bound, choosing an unstable but faster gait under the high risk bound) by changing the probability of risk based on various specifications.
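When a predicted gripping force is modeled as Gaussian (as a GP posterior is at a query point), an individual chance constraint P(F <= F_lim) >= 1 - epsilon has the standard deterministic surrogate mu + Phi^{-1}(1 - epsilon) * sigma <= F_lim. A small numeric sketch with made-up posterior values is given below to show how tightening the risk bound epsilon shrinks the feasible set; it is not the paper's full nonlinear program.

from scipy.stats import norm

def chance_constraint_ok(mu, sigma, f_limit, epsilon):
    """Deterministic surrogate of P(force <= f_limit) >= 1 - epsilon for a
    Gaussian force prediction (e.g., a GP posterior at the planned grasp)."""
    return mu + norm.ppf(1.0 - epsilon) * sigma <= f_limit

mu, sigma, f_limit = 18.0, 4.0, 25.0          # [N], assumed GP posterior and force limit
for epsilon in (0.2, 0.05, 0.01):             # looser -> tighter risk bounds
    print(f"epsilon={epsilon}: feasible = {chance_constraint_ok(mu, sigma, f_limit, epsilon)}")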
|
|
17:15-17:30, Paper MoDT11.4 | |
>Optimisation of Body-Ground Contact for Augmenting the Whole-Body Loco-Manipulation of Quadruped Robots |
> Video Attachment
|
|
Wolfslag, Wouter | University of Edinburgh |
McGreavy, Christopher | University of Edinburgh |
Xin, Guiyang | The University of Edinburgh |
Tiseo, Carlo | University of Edinburgh |
Vijayakumar, Sethu | University of Edinburgh |
Li, Zhibin | University of Edinburgh |
Keywords: Legged Robots, Multi-Contact Whole-Body Motion Planning and Control, Optimization and Optimal Control
Abstract: Legged robots have great potential to perform complex loco-manipulation tasks, yet it is challenging to keep the robot balanced while it interacts with the environment. In this paper we investigated the use of additional contact points for maximising the robustness of loco-manipulation motions. Specifically, body-ground contact was studied for its ability to enhance robustness and manipulation capabilities of quadrupedal robots. We proposed equipping the robot with prongs: small legs rigidly attached to the body which create body-ground contact at controllable point-contacts. The effect of these prongs on robustness was quantified by computing the Smallest Unrejectable Force (SUF), a measure of robustness related to Feasible Wrench Polytopes. We applied the SUF to evaluate the robustness of the system, and proposed an effective approximation of the SUF that can be computed at near-real-time speed. We developed a hierarchical quadratic programming based whole-body controller that can control stable interaction when the prongs are in contact with the ground. This novel prong concept and complementary control framework were implemented on hardware to validate their effectiveness by showing increased robustness and newly enabled loco-manipulation tasks, such as obstacle clearance and manipulation of a large object.
|
|
17:30-17:45, Paper MoDT11.5 | |
>Achieving Versatile Energy Efficiency with the WANDERER Biped Robot (I) |
> Video Attachment
|
|
Hobart, Clinton | Sandia National Laboratories |
Mazumdar, Anirban | Georgia Institute of Technology |
Spencer, Steven J. | Sandia National Laboratories |
Quigley, Morgan | Open Source Robotics Foundation |
Smith, Jesper | Halodi Robotics AS |
Bertrand, Sylvain | Institute for Human and Machine Cognition |
Pratt, Jerry | Inst. for Human and Machine Cognition |
Kuehl, Michael | Sandia National Labs |
Buerger, Stephen P. | Sandia National Laboratories |
Keywords: Legged Robots, Actuation and Joint Mechanisms, Humanoid and Bipedal Locomotion
Abstract: Legged humanoid robots promise revolutionary mobility and effectiveness in environments built for humans. However, inefficient use of energy significantly limits their practical adoption. The humanoid biped walking anthropomorphic novelly-driven efficient robot for emergency response (WANDERER) achieves versatile, efficient mobility, and high endurance via novel drive-trains and passive joint mechanisms. Results of a test in which WANDERER walked for more than 4 h and covered 2.8 km on a treadmill are presented. Results of laboratory experiments showing even more efficient walking are also presented and analyzed in this article. WANDERER's energetic performance and endurance are believed to exceed the prior literature in human-scale humanoid robots. This article describes WANDERER, the analytical methods and innovations that enable its design, and system-level energy efficiency results.
|
|
17:45-18:00, Paper MoDT11.6 | |
>Development of a Running Hexapod Robot with Differentiated Front and Hind Leg Morphology and Functionality |
|
Chiu, Jia-Ruei | National Taiwan University |
Huang, Yu-Chih | Delft University of Technology |
Chen, Huiching | University of Michigan |
Tseng, Kuan-Yu | National Taiwan University |
Lin, Pei-Chun | National Taiwan University |
Keywords: Legged Robots, Mechanism Design, Dynamics
Abstract: This article introduces an innovative model-based strategy for designing a legged robot to generate animal-like running dynamics with differentiated leg braking and thrusting force patterns. Linear springs were utilized as legs, but instead of having one end of each spring connected directly to the hip joint, one extra bar was added to offset the spring’s direction. The robot’s front and hind legs were offset with the same magnitudes but in different directions. Therefore, the legs produced different ground braking and thrusting force patterns. The robot’s running motion was planned based on its reduced-order model. The model’s fixed-point and passive-dynamics motion served as the robot’s reference motion. The proposed strategy was experimentally validated, and the results confirmed that the robot could successfully perform stable running in a differentiated leg force pattern.
|
|
17:45-18:00, Paper MoDT11.7 | |
>CaseCrawler: A Lightweight and Low-Profile Crawling Phone Case Robot |
> Video Attachment
|
|
Lee, Jongeun | Seoul National University |
Jung, Gwang-Pil | SeoulTech |
Baek, Sang-Min | Seoul National University |
Chae, Soo-Hwan | Seoul National University Biorobotics Lab |
Yim, Sojung | Seoul National University |
Kim, Woongbae | Seoul National University |
Cho, Kyu-Jin | Seoul National University, Biorobotics Laboratory |
Keywords: Legged Robots, Mechanism Design, Social Human-Robot Interaction
Abstract: The CaseCrawler is a lightweight and low-profile mobile platform with a high payload capacity; it is capable of crawling around carrying a cell phone. The body of the robot resembles a phone case, but it has crawling legs stored in its back. It is designed with a deployable, in-plane transmission that is capable of crawling locomotion. The CaseCrawler’s leg structure has a knee joint that can passively bend only in one direction; this allows it to sustain a load in the other direction. This anisotropic leg allows a crank slider to be used as the main transmission for generating the crawling motion, which generates a motion only within a 2D plane. The crank slider deploys the leg when the slider is pushed and retracts it when pulled; this enables a low-profile case that can fully retract the legs flat. Furthermore, by being restricted to swinging within a plane, the hip joint is highly resistant to off-axis deformation, which results in high payload capacity. As a result, the CaseCrawler has a body thickness of 1.5 mm and a total weight of 22.7 g; however, it can carry a load over 300 g, which is 13 times its own weight. To show the feasibility of the robot for use in real-world applications, in this study, the CaseCrawler was employed as a mobile platform that carries a 190 g mass, including a cell phone and its cover. This robot can crawl around with the cell phone to charge itself on a wireless charging station, to collect data, or to find its way back to its owner when needed.
|
|
MoDT12 |
Room T12 |
Legged Robots III |
Regular session |
Chair: Webster-Wood, Victoria | Carnegie Mellon University |
Co-Chair: Sreenath, Koushil | University of California, Berkeley |
|
16:30-16:45, Paper MoDT12.1 | |
>Ultra Low-Cost Printable Folding Robots |
|
Schaffer, Saul | Carnegie Mellon University |
Wang, Emily | Carnegie Mellon University |
Cooper, Nathan | Carnegie Mellon University |
Li, Bo | Case Western Reserve University |
Temel, Zeynep | Carnegie Mellon University |
Akkus, Ozan | Case Western Reserve University |
Webster-Wood, Victoria | Carnegie Mellon University |
Keywords: Additive Manufacturing, Mechanism Design, Compliant Assembly
Abstract: Current techniques in robot design and fabrication are time consuming and costly. Robot designs are needed that facilitate low-cost fabrication techniques and reduce the design to production timeline. Here we present an axial-rotational coupled metastructure that can serve as the functional core of a low-cost 3D printed walking robot. Using an origami-inspired assembly technique, the axial-rotational coupled metastructure robot can be 3D printed flat and then folded into a final configuration. This print-then-fold approach allows for the facile integration of critical subcomponents during the printing process. The axial-rotational metastructures eliminate the need for joints and linkages by enabling locomotion through a single compliant structure. Finite element models of the axial-rotational metastructures were developed and validated against experimental deformation of 3D printed units under tensile loading. As a proof-of-concept, an ultra low-cost 3D-printed metabot was designed and fabricated using the proposed axial-rotational coupled metastructure and its walking performance was characterized. A top speed of 4.30 mm/s was achieved with an alternating stepping gait at a frequency of 0.8 Hz.
|
|
16:45-17:00, Paper MoDT12.2 | |
>Knuckles That Buckle: Compliant Underactuated Limbs with Joint Hysteresis Enable Minimalist Terrestrial Robots |
> Video Attachment
|
|
Jiang, Mingsong | UCSD |
Song, Rongzichen | University of California-San Diego |
Gravish, Nick | UC San Diego |
Keywords: Underactuated Robots, Actuation and Joint Mechanisms, Flexible Robots
Abstract: Underactuated designs of robot limbs can enable these systems to passively adapt their joint configuration in response to external forces. Passive adaptation and reconfiguration can be extremely beneficial in situations where manipulation or locomotion with complex substrates is required. A common design for underactuated systems often involves a single tendon that actuates multiple rotational joints, each with a torsional elastic spring resisting bending. However, a challenge of using those joints for legged locomotion is that limbs typically need to follow a cyclical trajectory so that feet can alternately be engaged in stance and swing phases. Such trajectories present challenges for linearly elastic underactuated limbs. In this paper, we present a new method of underactuated limb design which incorporates hysteretic joints that change their torque response during loading and unloading. A double-jointed underactuated limb with both linear and hysteretic joints can thus be tuned to create a variety of looped trajectories. We fabricate these joints inside a flexible legged robot using a modified laminate based 3D printing method, and the result shows that with passive compliance and a mechanically determined joint sequence, a 2-legged minimalist robot can successfully walk through a confined channel over uneven substrates.
|
|
17:00-17:15, Paper MoDT12.3 | |
>Animated Cassie: A Dynamic Relatable Robotic Character |
> Video Attachment
|
|
Li, Zhongyu | University of California, Berkeley |
Cummings, Christine | UC Berkeley |
Sreenath, Koushil | University of California, Berkeley |
Keywords: Simulation and Animation
Abstract: Creating robots with emotional personalities will transform the usability of robots in the real-world. As previous emotive social robots are mostly based on statically stable robots whose mobility is limited, this paper develops an animation to real-world pipeline that enables dynamic bipedal robots that can twist, wiggle, and walk to behave with emotions. First, an animation method is introduced to design emotive motions for the virtual robot's character. Second, a dynamics optimizer is used to convert the animated motion to dynamically feasible motion. Third, real-time standing and walking controllers and an automaton are developed to bring the virtual character to life. This framework is deployed on a bipedal robot Cassie and validated in experiments. To the best of our knowledge, this paper is one of the first to present an animatronic dynamic legged robot that is able to perform motions with desired emotional attributes. We term robots that use dynamic motions to convey emotions as Dynamic Relatable Robotic Characters.
|
|
17:15-17:30, Paper MoDT12.4 | |
>Drive-Train Design in JAXON3-P and Realization of Jump Motions: Impact Mitigation and Force Control Performance for Dynamic Motions |
> Video Attachment
|
|
Kojima, Kunio | The University of Tokyo |
Kojio, Yuta | The University of Tokyo |
Ishikawa, Tatsuya | University of Tokyo |
Sugai, Fumihito | The University of Tokyo |
Kakiuchi, Yohei | The University of Tokyo |
Okada, Kei | The University of Tokyo |
Inaba, Masayuki | The University of Tokyo |
Keywords: Actuation and Joint Mechanisms, Force Control, Humanoid and Bipedal Locomotion
Abstract: For mitigating joint impact torques, researchers have reduced joint stiffness by series elastic actuators, reflected inertia by low gear ratios, and friction torque from drive-trains. However, these impact mitigation methods may impair the control performance of contact forces or may increase motor and robot mass. This paper proposes a design method for achieving a balance between impact mitigation performance and force control fidelity. We introduce an inertia-to-square-torque ratio as a new index for integrating the parameters of torque generation (motor continuous torque limits, gear ratios, etc.) and the parameters of impact mitigation (joint stiffness, reflected inertia, etc.). In the process, we hypothesize that motor mass is negatively correlated with this ratio. Based on the hypothesis, we calculate a joint breakdown region of impact torques, joint stiffnesses, and motor masses. Finally, we determine the drive-train specifications of JAXON3-P and demonstrate that the proposed method provides high impact mitigation and force control capabilities through several experiments, including a jumping motion with 0.3 m COG height.
|
|
17:30-17:45, Paper MoDT12.5 | |
>A Model for Optimising the Size of Climbing Robots for Navigating Truss Structures |
|
Au, Wesley | Monash University |
Sakaue, Tomoki | Tokyo Electric Power Company Holdings, Inc |
Liu, Dikai | University of Technology, Sydney |
Keywords: Mechanism Design, Legged Robots
Abstract: Truss structures can be found in many buildings and civil infrastructure such as bridges and towers. As these structures age, maintenance is required to keep them structurally sound. A legged robotic solution capable of climbing these structures for maintenance is sought, but determining the size and shape of such a robot to maximise structure coverage is a challenging task. This paper proposes a model in which the size of a multi-legged robot is optimised for coverage in a truss structure. A detailed representation of a truss structure is presented, which forms the novel framework for constraint modelling. With this framework, the overall truss structure coverage is modelled, given a robot's size and its climbing performance constraints. This is set up as an optimisation problem, such that its solution represents the optimum size of the robot that satisfies all constraints. Three case studies of practical climbing applications are conducted to verify the model. By intuitive analysis of the model's output data, the results show that the model accurately applies these constraints in a variety of truss structures.
|
|
17:45-18:00, Paper MoDT12.6 | |
>Vitruvio: An Open-Source Leg Design Optimization Toolbox for Walking Robots |
> Video Attachment
|
|
Chadwick, Michael | ETH Zürich |
Kolvenbach, Hendrik | ETHZ |
Dubois, Fabio | Eidgenössische Technische Hochschule |
Lau, Hong Fai | ETH Zurich |
Hutter, Marco | ETH Zurich |
Keywords: Mechanism Design, Legged Robots
Abstract: We present an open-source framework for developing optimal leg designs for walking robots. The leg design parameters (e.g. link lengths, transmission ratios, and spring parameters) are optimized for a user-defined metric such as the minimization of energy consumption or actuator peak torque, enabling the user to better navigate through the high-dimensional and unintuitive design space. Our approach uses the single rigid body dynamics trajectory optimization tool TOWR to generate realistic motion plans. The planned forces and motions are then used to identify actuator velocities and torques. Next, the leg design parameters are optimized using a genetic algorithm. The framework was validated by comparison with measured data on the ANYmal quadruped robot for a trotting motion, with errors in cumulative joint torque and mechanical energy each below 8% per gait cycle. Optimization of the ANYmal link lengths demonstrates that reductions in joint torque, mechanical energy, and mechanical cost of transport in the range of 5-10% are attainable.
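Vitruvio's actual pipeline (TOWR motion plans, actuator models, ANYmal data) is not reproduced here; the following is only a minimal sketch of the genetic-algorithm step the abstract mentions, applied to a toy two-parameter leg design with a made-up cost function standing in for the user-defined metric.
```python
import numpy as np

rng = np.random.default_rng(0)

def cost(params):
    """Hypothetical stand-in for the toolbox's metric (e.g. peak torque):
    penalise legs too short to reach a nominal 0.5 m hip height and, beyond
    that, prefer shorter (lighter) links."""
    l_thigh, l_shank = params
    reach = l_thigh + l_shank
    return reach + 10.0 * max(0.0, 0.5 - reach) ** 2

def genetic_search(pop_size=40, generations=60, bounds=(0.15, 0.45)):
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(pop_size, 2))        # initial random designs
    for _ in range(generations):
        fitness = np.array([cost(p) for p in pop])
        elite = pop[np.argsort(fitness)[: pop_size // 4]]  # keep best quarter
        # Crossover: average two random elite parents; mutate with Gaussian noise.
        parents = elite[rng.integers(0, len(elite), size=(pop_size, 2))]
        children = parents.mean(axis=1) + rng.normal(0.0, 0.01, size=(pop_size, 2))
        pop = np.clip(children, lo, hi)
    return pop[np.argmin([cost(p) for p in pop])]

print("optimised [thigh, shank] lengths:", genetic_search())
```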
|
|
MoDT13 |
Room T13 |
Legged and Humanoid Systems: Learning |
Regular session |
Chair: Finn, Chelsea | Stanford University |
|
16:30-16:45, Paper MoDT13.1 | |
>Rapidly Adaptable Legged Robots Via Evolutionary Meta-Learning |
> Video Attachment
|
|
Song, Xingyou | Google Brain |
Yang, Yuxiang | Robotics at Google |
Choromanski, Krzysztof | Google Brain Robotics |
Caluwaerts, Ken | Google |
Gao, Wenbo | Columbia University |
Finn, Chelsea | Stanford University |
Tan, Jie | Google |
Keywords: Reinforcement Learning, Dynamics, Legged Robots
Abstract: Learning adaptable policies is crucial for robots to operate autonomously in our complex and quickly changing world. In this work, we present a new meta-learning method that allows robots to quickly adapt to changes in dynamics. In contrast to gradient-based meta-learning algorithms that rely on second-order gradient estimation, we introduce a more noise-tolerant Batch Hill-Climbing adaptation operator and combine it with meta-learning based on evolutionary strategies. Our method significantly improves adaptation to changes in dynamics in high noise settings, which are common in robotics applications. We validate our approach on a quadruped robot that learns to walk while subject to changes in dynamics. We observe that our method significantly outperforms prior gradient-based approaches, enabling the robot to adapt its policy to changes in dynamics based on less than 3 minutes of real data.
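The Batch Hill-Climbing adaptation operator mentioned above is, at its core, a population-style local search over policy parameters. The sketch below is a minimal, hedged interpretation, not the authors' implementation; episode_return is a hypothetical stand-in for a noisy rollout of the walking policy after the dynamics change.
```python
import numpy as np

rng = np.random.default_rng(1)

def episode_return(theta):
    """Hypothetical stand-in for a (noisy) rollout of the walking policy."""
    target = np.array([0.3, -0.1, 0.7])          # pretend post-change optimum
    return -np.sum((theta - target) ** 2) + rng.normal(0.0, 0.05)

def batch_hill_climb(theta, iters=20, batch=16, sigma=0.05):
    """Each iteration samples a batch of perturbed parameter vectors, evaluates
    them, and moves to the best perturbation only if it beats the incumbent,
    which makes the operator tolerant to noisy return estimates."""
    best_theta, best_ret = theta.copy(), episode_return(theta)
    for _ in range(iters):
        candidates = best_theta + sigma * rng.standard_normal((batch, theta.size))
        returns = np.array([episode_return(c) for c in candidates])
        i = int(np.argmax(returns))
        if returns[i] > best_ret:
            best_theta, best_ret = candidates[i], returns[i]
    return best_theta

theta_meta = np.zeros(3)                          # parameters from meta-training
print("adapted parameters:", batch_hill_climb(theta_meta))
```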
|
|
16:45-17:00, Paper MoDT13.2 | |
>Slope Handling for Quadruped Robots Using Deep Reinforcement Learning and Toe Trajectory Planning |
> Video Attachment
|
|
Mastrogeorgiou, Athanasios | National Technical University of Athens |
Elbahrawy, Yehia | Faculty of Engineering, University of Duisburg-Essen |
Kecskemethy, Andrés | University Duisburg-Essen |
Papadopoulos, Evangelos | National Technical University of Athens |
Keywords: Legged Robots, Reinforcement Learning, Motion Control
Abstract: Quadrupedal locomotion skills are challenging to develop. In recent years, deep Reinforcement Learning has promised to automate the development of locomotion controllers by mapping sensory observations to low-level actions. Moreover, the full robot dynamics model can be exploited without making model-based simplifications. In this work, a method for developing controllers for the Laelaps II robot is presented and applied to motions on slopes up to 15°. Combining deep reinforcement learning with trajectory planning at the toe level reduces complexity and training time. The proposed control scheme is extensively tested in a Gazebo environment similar to the treadmill-robot environment at the Control Systems Lab of NTUA. The learned policies produced promising results.
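The abstract combines learning with toe-level trajectory planning but does not give the planner's form; a common choice for a swing-toe profile, used here purely as an assumed example, is a cycloid that starts and ends at zero height with smooth velocity. The timing and geometry parameters below are illustrative, not Laelaps II values.
```python
import numpy as np

def cycloid_toe_trajectory(t, T_swing=0.3, step_length=0.15, step_height=0.05):
    """Toe position (x, z) during swing, relative to the hip, using a cycloid:
    smooth velocity at lift-off and touch-down, peak height at mid-swing."""
    phase = np.clip(t / T_swing, 0.0, 1.0)
    x = step_length * (phase - np.sin(2 * np.pi * phase) / (2 * np.pi))
    z = step_height * 0.5 * (1 - np.cos(2 * np.pi * phase))
    return x, z

for t in np.linspace(0.0, 0.3, 7):
    x, z = cycloid_toe_trajectory(t)
    print(f"t={t:.2f}s  toe=({x:.3f}, {z:.3f})")
```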
|
|
17:00-17:15, Paper MoDT13.3 | |
>A Neural Primitive Model with Sensorimotor Coordination for Dynamic Quadruped Locomotion with Malfunction Compensation |
> Video Attachment
|
|
Saputra, Azhar Aulia | Tokyo Metropolitan University |
Ijspeert, Auke | EPFL |
Kubota, Naoyuki | Tokyo Metropolitan University |
Keywords: Neurorobotics, Sensor Networks, Biologically-Inspired Robots
Abstract: In the field of quadruped locomotion, dynamic locomotion behavior and rich integration with sensory feedback represent significant developments. In this paper, we present an efficient neural model, which includes a CPG and its sensorimotor coordination, and demonstrate its implementation on a quadruped robot to show how efficient integration of motor and sensory feedback can generate dynamic behavior and how sensorimotor coordination reconstructs the sensory network for leg malfunction compensation. Additionally, we delineate a network optimization strategy and suggest sensorimotor coordination as a strategy for controlling speed and regulating internal and external adaptation. When the rhythm generator of the injured leg was inactivated, the sensorimotor system was stimulated to reconstruct the network between the CPG and foot force afferents without any commanding parameter. The performance of the simulated and real cat-like robot on both flat and rough terrains, together with the leg malfunction tests, demonstrated the effectiveness of the proposed model, indicating that a smooth gait-pattern transition could be generated during sudden leg malfunction.
|
|
17:15-17:30, Paper MoDT13.4 | |
>Spiking Neurons Ensemble for Movement Generation in Dynamically Changing Environments |
> Video Attachment
|
|
Favier, Kaname | The University of Tokyo, Intelligent Systems and Informatics Lab |
Yonekura, Shogo | The University of Tokyo |
Kuniyoshi, Yasuo | The University of Tokyo |
Keywords: Neurorobotics, Optimization and Optimal Control, Neural and Fuzzy Control
Abstract: Spiking neurons might play a larger role than that of efficient signal transmitters. Several studies have demonstrated how movements can be generated using networks of spiking neurons. However, the complexity of spiking neural networks makes their implementation difficult, and the use of spiking neurons in robotics has remained largely impractical. In this paper, we show that the addition of a single layer of spiking neurons can help improve performance on stabilization tasks in dynamically changing environments. In a one-dimensional inverted pendulum stabilization task, the spiking neurons seem to expand the space of usable controller parameters. Using a robot arm in 3-D space, the additional layer of spiking neurons suffices to improve performance by up to 30% on an inverted pendulum stabilization task. We expect this technique to enhance performance not only in most stabilization tasks but also in essentially similar tasks such as reaching and posture control. We also expect the effects of this layer to be greatest when the optimal tuning of control parameters is difficult, such as when the environment is unpredictable and dynamic.
|
|
17:30-17:45, Paper MoDT13.5 | |
>Learning of Tool Force Adjustment Skills by a Life-Sized Humanoid Using Deep Reinforcement Learning and Active Teaching Request |
> Video Attachment
|
|
Kawamura, Yoichiro | The University of Tokyo |
Murooka, Masaki | The University of Tokyo |
Hiraoka, Naoki | The University of Tokyo |
Ito, Hideaki | The University of Tokyo |
Okada, Kei | The University of Tokyo |
Inaba, Masayuki | The University of Tokyo |
Keywords: Humanoid Robot Systems, Reinforcement Learning, Learning from Demonstration
Abstract: The purpose of this study is to enable life-sized humanoid robots to acquire tool manipulation skills that require complicated force adjustment. The difficulty in acquiring tool manipulation skills comes from the hardship of physical modeling. Recent research has revealed that deep reinforcement learning (DRL), a model-free approach, is superior in such tasks. However, DRL in general suffers from poor sample efficiency, and this becomes critical in robot learning, especially for life-sized humanoid robots. In this study, we propose an integrated system incorporating a DRL method and active learning. Our method also leverages a variety of previous studies on life-sized humanoid robots to overcome the sample efficiency issue. We demonstrated the effectiveness of our proposed system through a hacksaw skill acquisition and a Japanese planer (Kanna) skill acquisition by a life-sized humanoid robot.
|
|
17:45-18:00, Paper MoDT13.6 | |
>Decoding Motor Skills of AI and Human Policies: A Study on Humanoid and Human Balance Control (I) |
|
Yuan, Kai | University of Edinburgh |
McGreavy, Christopher | University of Edinburgh |
Yang, Chuanyu | University of Edinburgh |
Wolfslag, Wouter | University of Edinburgh |
Li, Zhibin | University of Edinburgh |
Keywords: Humanoid and Bipedal Locomotion, AI-Based Methods, Humanoid Robot Systems
Abstract: In this study, we propose a new paradigm of using a machine learning approach to facilitate quicker, more efficient, and more effective control development, as a different way of utilising the power of machine learning in addition to other options that intend to use learning directly in real-world applications. We first develop a DRL-based control framework to learn rich motor skills of push recovery for humanoid robots that exhibit human-like push recovery behaviour. Next, we propose to take advantage of DRL to quickly discover solutions for very difficult problems, and then extract the principles of those policies as guidelines for developing engineered controllers. Furthermore, a comparison between humanoid and human balancing is conducted to show the characteristics of the learned humanoid behaviour. This comparison shows that DRL algorithms can learn, with short development and training time, a good policy that may require humans years to learn. We analyse input-output data collected from humanoid and human policies and postulate a Minimum-Jerk Model-Predictive Control (MJMPC) framework that quantitatively reflects both AI and human push recovery policies.
|
|
MoDT14 |
Room T14 |
Whole-Body Motion Planning and Control: Legged Robots |
Regular session |
Chair: Patel, Amir | University of Cape Town |
Co-Chair: Johnson, Aaron | Carnegie Mellon University |
|
16:30-16:45, Paper MoDT14.1 | |
>A Model-Free Solution for Stable Balancing and Locomotion of Floating-Base Legged Systems |
> Video Attachment
|
|
Spyrakos-Papastavridis, Emmanouil | King's College London |
Dai, Jian | School of Natural and Mathematical Sciences, King's College Lond |
Keywords: Legged Robots, Whole-Body Motion Planning and Control
Abstract: This paper presents novel control techniques for passivation and stabilisation of floating-base systems with contacts, whose dynamical models comprise both joint-space and Cartesian floating-base coordinates. The aforementioned results are achieved using both minimally model-based and completely model-free controllers that employ power-shaping signals. Model-free control is permitted through usage of a decoupled dynamical model, procured via coordinate transformation operations. It is demonstrated that even though passive closed-loop systems are attainable without utilisation of exteroceptive feedback, global stabilisation of a floating-base robot necessitates direct usage of either measured or estimated external forces. The presented asymptotic stabilisation results pertain to both the set-point regulation and trajectory-tracking cases, thereby ensuring suitability for static balancing and dynamic locomotion tasks. To ensure practicability and production of feasible input signals, a variable impedance control power-shaping term is appended to the original design, wherein it circumstantially serves as either a power-dissipating or power-injecting element. This enhancement provably preserves closed-loop stability by appositely shaping the system’s power. Experiments involving a metamorphic, quadrupedal walking robot corroborate the theoretical analysis, as they attest to the system’s ability to stably execute locomotory tasks using a single, unified, model-free control scheme.
|
|
16:45-17:00, Paper MoDT14.2 | |
>Jumping Motion Generation for Humanoid Robot Using Arm Swing Effectively and Changing in Foot Contact Status |
> Video Attachment
|
|
Mineshita, Hiroki | Waseda University |
Otani, Takuya | Waseda University |
Sakaguchi, Masanori | Waseda University |
Kawakami, Yasuo | Waseda University |
Lim, Hun-ok | Kanagawa University |
Takanishi, Atsuo | Waseda University |
Keywords: Whole-Body Motion Planning and Control, Legged Robots, Multi-legged Robots
Abstract: Human jumping involves not only the lower limbs but also whole-body coordination. During jumping, the effects of sinking the center of mass for recoil and of arm swing are significant, and they can change the jump height. However, upper body movements during the jumping motions of humanoid robots have not been studied adequately. When jumping involves only the lower limbs, the burden on the lower limbs increases and it is difficult to jump as high as humans do. Also, if the sole stays in contact with the ground during jumping movements, we cannot make good use of the ankle joint. Humans raise their heels during jumping movements, but there are few cases where humanoid robots achieve these movements. Therefore, we expected that jumping with recoil from sinking, arm swing, and changes in foot contact status could result in a jump height higher than that possible with only lower limb movements. Hence, in this study, we generated jumping motion using sinking, arm swing, and changing foot posture. First, a center of mass trajectory was generated by planning the entire jumping motion, and at the same time, the angular momentum was determined for stability. Next, the joint trajectory was calculated using these two parameters. At that time, the arm trajectory and foot posture were specified in the null space. This generated a jumping motion that takes arm swing into account. During simulations, this method provided a jump height almost four times that obtained without arm swing.
|
|
17:00-17:15, Paper MoDT14.3 | |
>Fast Global Motion Planning for Dynamic Legged Robots |
> Video Attachment
|
|
Norby, Joseph | Carnegie Mellon University |
Johnson, Aaron | Carnegie Mellon University |
Keywords: Motion and Path Planning, Legged Robots
Abstract: This work presents a motion planning algorithm for legged robots capable of constructing long-horizon dynamic plans in real-time. Many existing methods use models that prohibit flight phases or even require static stability, while those that permit these dynamics often plan over short horizons or take minutes to compute. The algorithm presented here resolves these issues through a reduced-order dynamical model that handles motion primitives with stance and flight phases and supports an RRT-Connect framework for rapid exploration. Kinematic and dynamic constraint approximations are computed efficiently and validated with a whole-body trajectory optimization. The algorithm is tested over challenging terrain requiring long planning horizons and dynamic motions, producing plans in seconds -- an order of magnitude faster than existing methods. The speed and global nature of the planner offer a new level of autonomy for legged robot applications.
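The planner above builds on an RRT-Connect framework over reduced-order motion primitives; the sketch below shows only the generic RRT-Connect grow-and-connect loop on a 2D point model with a single hypothetical disc obstacle, not the paper's primitive set, constraint approximations, or whole-body validation. Path reconstruction from the stored parent indices is omitted for brevity.
```python
import numpy as np

rng = np.random.default_rng(2)
STEP = 0.2
BOUNDS = np.array([[0.0, 10.0], [0.0, 10.0]])     # planar "terrain" workspace

def collision_free(p):
    """Hypothetical obstacle: a disc of radius 2 centred at (5, 5)."""
    return np.linalg.norm(p - np.array([5.0, 5.0])) > 2.0

def sample():
    return rng.uniform(BOUNDS[:, 0], BOUNDS[:, 1])

def extend(tree, parents, target):
    """Grow the tree one step toward `target`; return the new node index or None.
    Only the new node is collision-checked (edge checking omitted for brevity)."""
    nodes = np.array(tree)
    i = int(np.argmin(np.linalg.norm(nodes - target, axis=1)))
    direction = target - nodes[i]
    dist = np.linalg.norm(direction)
    new = target if dist <= STEP else nodes[i] + STEP * direction / dist
    if collision_free(new):
        tree.append(new)
        parents.append(i)                         # parent indices allow path recovery
        return len(tree) - 1
    return None

def rrt_connect(start, goal, max_iters=5000):
    ta, pa = [np.asarray(start, float)], [-1]     # tree rooted at the start
    tb, pb = [np.asarray(goal, float)], [-1]      # tree rooted at the goal
    for _ in range(max_iters):
        q = sample()
        if extend(ta, pa, q) is None:
            ta, pa, tb, pb = tb, pb, ta, pa       # swap trees and try again
            continue
        new = ta[-1]
        while True:                               # greedily connect the other tree
            idx = extend(tb, pb, new)
            if idx is None:
                break
            if np.linalg.norm(tb[idx] - new) < 1e-9:
                return True                       # trees met: a path exists
        ta, pa, tb, pb = tb, pb, ta, pa
    return False

print("path found:", rrt_connect([1.0, 1.0], [9.0, 9.0]))
```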
|
|
17:15-17:30, Paper MoDT14.4 | |
>Minor Change, Major Gains: The Effect of Orientation Formulation on Solving Time for Multi-Body Trajectory Optimization |
> Video Attachment
|
|
Knemeyer, Alexander | University of Cape Town |
Shield, Stacey Leigh | University of Cape Town |
Patel, Amir | University of Cape Town |
Keywords: Whole-Body Motion Planning and Control, Optimization and Optimal Control, Legged Robots
Abstract: Many different coordinate formulations have been established to describe the position of multi-body robot models, but the impact of this choice on the tractability of trajectory optimization problems has yet to be investigated. Relative formulations, which reference the position of each link to its predecessor, reduce the number of variables and constraints in the problem, but lead to cumbersome expressions for the equations of motion. By contrast, referencing the positions to an absolute frame simplifies these equations, but necessitates more coordinate variables and connection constraints. In this paper, we investigate whether changing the orientation coordinates of a multi-body system model from relative to absolute angles can reduce the time required to solve the problem. The two approaches are tested on a variety of two- and three-dimensional models, with and without unscheduled unilateral contacts. Across all cases, the absolute formulation was found to be the more successful option. The performance improvements increased with the complexity of the system and task, culminating in the challenging example of a 90-degree turn on a 3D quadruped model, which was only able to converge in the allotted time when absolute angles were used.
|
|
17:30-17:45, Paper MoDT14.5 | |
>Hybrid Systems Differential Dynamic Programming for Whole-Body Motion Planning of Legged Robots |
|
Li, He | University of Notre Dame |
Wensing, Patrick M. | University of Notre Dame |
Keywords: Optimization and Optimal Control, Whole-Body Motion Planning and Control, Legged Robots
Abstract: This paper presents a Differential Dynamic Programming (DDP) framework for trajectory optimization of hybrid systems with state-based switching. The proposed Hybrid-Systems DDP (HS-DDP) approach is considered for application to whole-body motion planning with legged robots. To address state-based switching constraints in these problems, the hybrid dynamics are reformulated in a time-based switching fashion with additional constraints. The proposed approach includes three coordinated algorithmic advances. First, it extends DDP to address the discontinuous impact event that occurs when legs come into contact. Second, it combines DDP with an Augmented Lagrangian (AL) method to tackle the state-based switching constraints. Third, it incorporates a switching time optimization (STO) algorithm that efficiently finds the optimal switching time by leveraging the efficient computational structure of DDP. The performance of the proposed HS-DDP method is benchmarked on a simulation model of the MIT Mini Cheetah executing a bounding gait. We demonstrate in simulation that the AL method combined with DDP efficiently reduces the constraint violation to below 1e-04 within three iterations. Compared to previous solutions, the STO algorithm achieves a 2.3-fold greater reduction in total switching time, demonstrating that the proposed method is more efficient at optimizing switching times.
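HS-DDP uses DDP as the inner solver; the sketch below only illustrates the outer Augmented Lagrangian loop that drives the constraint violation down across iterations, on a toy equality-constrained problem with scipy's BFGS standing in for the inner DDP solve. The functions f and c are arbitrary stand-ins, not the Mini Cheetah bounding problem.
```python
import numpy as np
from scipy.optimize import minimize

def f(x):            # toy running cost standing in for the DDP objective
    return (x[0] - 2.0) ** 2 + (x[1] + 1.0) ** 2

def c(x):            # toy switching-style equality constraint c(x) = 0
    return np.array([x[0] + x[1] - 1.0])

def augmented_lagrangian(f, c, x0, iters=6, rho=10.0):
    x = np.asarray(x0, float)
    lam = np.zeros(c(x).shape)                    # Lagrange multiplier estimate
    for k in range(iters):
        def L(x):    # L(x) = f(x) + lam^T c(x) + (rho/2) ||c(x)||^2
            cv = c(x)
            return f(x) + lam @ cv + 0.5 * rho * cv @ cv
        x = minimize(L, x, method="BFGS").x       # inner unconstrained solve
        lam = lam + rho * c(x)                    # multiplier update
        print(f"outer iter {k}: ||c(x)|| = {np.linalg.norm(c(x)):.2e}")
    return x

print("solution:", augmented_lagrangian(f, c, x0=[0.0, 0.0]))
```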
|
|
17:45-18:00, Paper MoDT14.6 | |
>Contact-Implicit Trajectory Optimization Using an Analytically Solvable Contact Model for Locomotion on Variable Ground |
> Video Attachment
|
|
Chatzinikolaidis, Iordanis | The University of Edinburgh |
You, Yangwei | Institute for Infocomm Research |
Li, Zhibin | University of Edinburgh |
Keywords: Multi-Contact Whole-Body Motion Planning and Control, Contact Modeling, Optimization and Optimal Control
Abstract: This paper presents a novel contact-implicit trajectory optimization method using an analytically solvable contact model to enable planning of interactions with hard, soft, and slippery environments. Specifically, we propose a novel contact model that can be computed in closed-form, satisfies friction cone constraints and can be embedded into direct trajectory optimization frameworks without complementarity constraints. The closed-form solution decouples the computation of the contact forces from other actuation forces and this property is used to formulate a minimal direct optimization problem expressed with configuration variables only. Our simulation study demonstrates the advantages over the rigid contact model and a trajectory optimization approach based on complementarity constraints. The proposed model enables physics-based optimization for a wide range of interactions with hard, slippery, and soft grounds in a unified manner expressed by two parameters only. By computing trotting and jumping motions for a quadruped robot, the proposed optimization demonstrates the versatility for multi-contact motion planning on surfaces with different physical properties.
|
|
MoDT15 |
Room T15 |
Whole-Body Motion Planning and Control: Humanoids and Bipeds |
Regular session |
Chair: Benallegue, Mehdi | AIST Japan |
Co-Chair: Pucci, Daniele | Italian Institute of Technology |
|
16:30-16:45, Paper MoDT15.1 | |
>Multi-Contact Locomotion Planning for Humanoid Robot Based on Sustainable Contact Graph with Local Contact Modification |
> Video Attachment
|
|
Kumagai, Iori | National Inst. of AIST |
Morisawa, Mitsuharu | National Inst. of AIST |
Hattori, Shizuko | National Institute of Advanced Industrial Science And |
Benallegue, Mehdi | AIST Japan |
Kanehiro, Fumio | National Inst. of AIST |
Keywords: Humanoid Robot Systems, Multi-Contact Whole-Body Motion Planning and Control, Humanoid and Bipedal Locomotion
Abstract: In this paper, we propose a graph-search based multi-contact locomotion planning method for humanoid robots, focusing on the sustainability of contacts as its key feature. We introduce the idea of a sustainable contact area, which represents the area on which contacts can be maintained during contact transitions. This enables us to select feasible contact candidates along a given root path. Then, we compute all the possible combinations of these candidate contacts with every limb appearing at most once, which we call contact sets. The list of these contact sets can be regarded as a list of nodes in a graph structure representing transitions between sustainable contacts, which we name the sustainable contact graph. We apply A* search on this graph, and evaluate the connectability of nodes by planning quasi-static motion sequences for their contact transitions. In this process, we locally modify the candidate contacts to satisfy the kinematic constraints and static equilibrium of the robot. The proposed method enables us to plan feasible contact transition motions without random sampling or manually designed contact transition models, and solves the problem of ignoring possible contact transitions, which is caused by the discretization in existing graph-search based planners. We evaluate our proposed method in simulation and on a real robot, and confirm that it contributes to improving the multi-contact locomotion abilities of a humanoid robot.
|
|
16:45-17:00, Paper MoDT15.2 | |
>A Multi-Contact Motion Planning and Control Strategy for Physical Interaction Tasks Using a Humanoid Robot |
> Video Attachment
|
|
Ruscelli, Francesco | Istituto Italiano Di Tecnologia |
Parigi Polverini, Matteo | Istituto Italiano Di Tecnologia (IIT) |
Laurenzi, Arturo | Istituto Italiano Di Tecnologia |
Mingo Hoffman, Enrico | Fondazione Istituto Italiano Di Tecnologia |
Tsagarakis, Nikos | Istituto Italiano Di Tecnologia |
Keywords: Multi-Contact Whole-Body Motion Planning and Control, Motion Control, Body Balancing
Abstract: This paper presents a framework providing a full pipeline to execute a complex physical interaction behaviour of a humanoid bipedal robot, both from a theoretical and a practical standpoint. Building from a multi-contact control architecture that combines contact planning and reactive force distribution capabilities, the main contribution of this work consists in the integration of a sample-based motion planning layer conceived for transitioning movements where obstacle and self-collision avoidance is involved. To plan these motions we use a Rapidly-exploring Random Tree (RRT) projected onto the contact manifold and validated through the Centroidal Statics (CS) model to ensure static balance on non-coplanar surfaces. Finally, we successfully validate the presented planning and control architecture on the humanoid robot COMAN+ performing a wall-plank task.
|
|
17:00-17:15, Paper MoDT15.3 | |
>Can I Lift It? Humanoid Robot Reasoning about the Feasibility of Lifting a Heavy Box with Unknown Physical Properties |
|
Han, Yuanfeng | Johns Hopkins University |
Li, Ruixin | Johns Hopkins University |
Chirikjian, Gregory | Johns Hopkins University |
Keywords: Whole-Body Motion Planning and Control, Humanoid and Bipedal Locomotion, Reactive and Sensor-Based Planning
Abstract: A robot cannot lift up an object if it is not feasible to do so. However, in most research on robot lifting, “feasibility” is usually presumed to exist a priori. This paper proposes a three-step method for a humanoid robot to reason about the feasibility of lifting a heavy box with physical properties that are unknown to the robot. Since feasibility of lifting is directly related to the physical properties of the box, we first discretize a range for the unknown values of parameters describing these properties and tabulate all valid optimal quasi-static lifting trajectories generated by simulations over all combinations of indices. Second, a physical-interaction-based algorithm is introduced to identify the robust gripping position and physical parameters corresponding to the box. During this process, the stability and safety of the robot are ensured. On the basis of the above two steps, a third step of mapping operation is carried out to best match the estimated parameters to the indices in the table. The matched indices are then queried to determine whether a valid trajectory exists. If so, the lifting motion is feasible; otherwise, the robot decides that the task is beyond its capability. Our method efficiently evaluates the feasibility of a lifting task through simple interactions between the robot and the box, while simultaneously obtaining the desired safe and stable trajectory. We successfully demonstrated the proposed method using a NAO humanoid robot.
|
|
17:15-17:30, Paper MoDT15.4 | |
>Non-Linear Trajectory Optimization for Large Step-Ups: Application to the Humanoid Robot Atlas |
> Video Attachment
|
|
Dafarra, Stefano | Istituto Italiano Di Tecnologia |
Bertrand, Sylvain | Institute for Human and Machine Cognition |
Griffin, Robert J. | Institute for Human and Machine Cognition (IHMC) |
Metta, Giorgio | Istituto Italiano Di Tecnologia (IIT) |
Pucci, Daniele | Italian Institute of Technology |
Pratt, Jerry | Inst. for Human and Machine Cognition |
Keywords: Humanoid and Bipedal Locomotion, Whole-Body Motion Planning and Control, Optimization and Optimal Control
Abstract: Performing large step-ups is a challenging task for a humanoid robot. It requires the robot to perform motions at the limit of its reachable workspace while straining to move its body up onto the obstacle. This paper presents a non-linear trajectory optimization method for generating step-up motions. We adopt a simplified model of the centroidal dynamics to generate feasible Center of Mass trajectories aimed at reducing the torques required for the step-up motion. The activation and deactivation of contacts at both feet are considered explicitly. The output of the planner is a Center of Mass trajectory plus an optimal duration for each walking phase. These desired values are stabilized by a whole-body controller that determines a set of desired joint torques. We experimentally demonstrate that, by using trajectory optimization techniques, the maximum torque required by the full-size humanoid robot Atlas can be reduced by up to 20% when performing a step-up motion.
|
|
17:30-17:45, Paper MoDT15.5 | |
>Online Dynamic Motion Planning and Control for Wheeled Biped Robots |
> Video Attachment
|
|
Xin, Songyan | The University of Edinburgh |
Vijayakumar, Sethu | University of Edinburgh |
Keywords: Humanoid and Bipedal Locomotion, Wheeled Robots, Whole-Body Motion Planning and Control
Abstract: Wheeled-legged robots combine the efficiency of wheeled robots when driving on suitably flat surfaces with the versatility of legged robots when stepping over or around obstacles. This paper introduces a planning and control framework to realise dynamic locomotion for wheeled biped robots. We propose the Cart-Linear Inverted Pendulum Model (Cart-LIPM) as a template model for the rolling motion and the under-actuated LIPM for contact changes while walking. The generated motion is then tracked by an inverse dynamics whole-body controller which coordinates all joints, including the wheels. The framework has a hierarchical structure and is implemented in a model predictive control (MPC) fashion. To validate the proposed approach for hybrid motion generation, two scenarios involving different types of obstacles are designed in simulation. To the best of our knowledge, this is the first time that such online dynamic hybrid locomotion has been demonstrated on wheeled biped robots.
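The Cart-LIPM itself is not specified in the abstract; as a hedged illustration of the template-model idea, the sketch below integrates the standard LIPM that it extends, with a simple velocity-regulating placement law for the contact point (an assumption, not the paper's MPC). Numbers such as the CoM height and desired speed are illustrative.
```python
import numpy as np

g, z_com = 9.81, 0.60                      # gravity and assumed CoM height [m]
omega = np.sqrt(g / z_com)

def lipm_step(x, xdot, p, dt):
    """One Euler step of the linear inverted pendulum:  xddot = omega^2 (x - p),
    where p is the contact (wheel/foot) position under the body."""
    xddot = omega ** 2 * (x - p)
    return x + dt * xdot, xdot + dt * xddot

v_des = 0.5                                # desired forward speed [m/s]
x, xdot, dt = 0.0, 0.0, 0.002
for _ in range(2000):                      # simulate 4 s
    # Simple placement law (an assumption): p = x + (xdot - v_des)/omega gives
    # xddot = -omega (xdot - v_des), so the forward speed converges to v_des.
    p = x + (xdot - v_des) / omega
    x, xdot = lipm_step(x, xdot, p, dt)
print(f"after 4 s: xdot = {xdot:.3f} m/s (target {v_des})")
```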
|
|
MoDT16 |
Room T16 |
Passive Walking |
Regular session |
Chair: Akbari Hamed, Kaveh | Virginia Tech |
Co-Chair: Travers, Matthew | Carnegie Mellon University |
|
16:30-16:45, Paper MoDT16.1 | |
>Robust Gait Design Insights from Studying a Compass Gait Biped with Foot Slipping |
> Video Attachment
|
|
Chen, Tan | University of Notre Dame |
Goodwine, Bill | University of Notre Dame |
Keywords: Humanoid and Bipedal Locomotion, Underactuated Robots, Legged Robots
Abstract: Most current bipedal robots are modeled with the assumption that there is no slip between the stance foot and the ground. This paper relaxes that assumption and undertakes a comprehensive study of a compass gait biped with foot slipping. It is found that slips are most likely to happen near impact for a broad range of gaits. Among these gaits, ones with a backward swing-foot velocity relative to the ground just before touch down generally require less friction to maintain stable walking than ones with a forward relative foot velocity. Moreover, a larger percentage of gaits with a "swinging backward" foot can tolerate some slipping without falling than those with a swinging-forward foot at touch down. Thus, a gait with the swing foot moving backward just before touch down should be more robust in the sense of preventing slipping and falling. It is further shown that only one parameter in gait design determines the swing-backward feature, which can help design robust gaits. Models with varying physical parameters such as mass, leg length, and position of center of mass (CoM) are also studied to validate the generality of the results.
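A minimal illustration of the friction reasoning in the abstract: a contact does not slip when |F_t| <= mu F_n, so the friction coefficient required by a gait can be read off the ground-reaction force near touch down. The force values below are invented for illustration; only the qualitative trend (backward-swinging feet needing less friction) mirrors the abstract's finding.
```python
def required_friction(ground_reaction):
    """Minimum Coulomb friction coefficient needed to prevent slipping,
    mu_req = |F_tangential| / F_normal, for a planar contact force (Ft, Fn)."""
    Ft, Fn = ground_reaction
    return abs(Ft) / Fn

def will_slip(ground_reaction, mu):
    return required_friction(ground_reaction) > mu

# Hypothetical ground-reaction forces near touch down for two gait types:
swing_backward = (12.0, 80.0)   # foot moving backward relative to the ground
swing_forward  = (35.0, 80.0)   # foot moving forward relative to the ground
for name, F in [("swing-backward", swing_backward), ("swing-forward", swing_forward)]:
    print(f"{name}: mu_required = {required_friction(F):.2f}, "
          f"slips on mu=0.3 ground: {will_slip(F, 0.3)}")
```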
|
|
16:45-17:00, Paper MoDT16.2 | |
>Disappearance of Chaotic Attractor of Passive Dynamic Walking by Stretch-Bending Deformation in Basin of Attraction |
|
Okamoto, Kota | Kyoto University |
Aoi, Shinya | Kyoto University |
Obayashi, Ippei | RIKEN |
Kokubu, Hiroshi | Kyoto University |
Senda, Kei | Kyoto University |
Tsuchiya, Kazuo | Kyoto University |
Keywords: Passive Walking, Humanoid and Bipedal Locomotion
Abstract: Passive dynamic walking is a model that walks down a shallow slope without any control or input. This model has been widely used to investigate how stable walking is generated from a dynamic viewpoint, which is useful for providing design principles for developing energy-efficient biped robots. However, the basin of attraction is very small and thin, and it has a complicated, fractal-like shape. This makes it difficult to produce stable walking. Furthermore, passive dynamic walking exhibits a chaotic attractor through a period-doubling cascade as the slope angle increases, and the chaotic attractor suddenly disappears at a critical slope angle. These properties make it even more difficult to produce stable walking. In our previous work, we used the simplest walking model and investigated the fractal-like basin of attraction based on dynamical systems theory by focusing on the hybrid dynamics of the model, composed of continuous dynamics with saddle hyperbolicity and discontinuous dynamics due to the impact at foot contact. We elucidated that the fractal-like basin of attraction is generated through iterative stretch and bending deformations of the domain of the Poincaré map by sequential inverse images of the Poincaré map. In this study, we investigated the mechanism for the disappearance of the chaotic attractor by improving our previous analysis. In particular, we focused on the range of the Poincaré map to specify the regions to be stretched and bent by the inverse image of the Poincaré map. We clarified the condition for the chaotic attractor to disappear and the mechanism by which it disappears, based on the stretch-bending deformation in the basin of attraction.
|
|
17:00-17:15, Paper MoDT16.3 | |
>Exponentially Stabilizing and Time-Varying Virtual Constraint Controllers for Dynamic Quadrupedal Bounding |
> Video Attachment
|
|
Martin, Joseph | Virginia Polytechnic Institute and State University |
Kamidi, Vinay | Virginia Tech |
Pandala, Abhishek | Virginia Polytechnic Institute and State University |
Fawcett, Randall | Virginia Polytechnic Institute and State University |
Akbari Hamed, Kaveh | Virginia Tech |
Keywords: Legged Robots, Motion Control, Underactuated Robots
Abstract: This paper aims to develop time-varying virtual constraint controllers that allow stable and agile bounding gaits for full-order hybrid dynamical models of quadrupedal locomotion. As opposed to state-based nonlinear controllers, time-varying controllers can initiate locomotion from zero velocity. Motivated by this property, we investigate the stability guarantees that can be provided by the time-varying approach. In particular, we systematically establish necessary and sufficient conditions that guarantee exponential stability of periodic orbits for time-varying hybrid dynamical systems utilizing the Poincaré return map. Leveraging the results of the presented proof, we develop time-varying virtual constraint controllers to stabilize bounding gaits of a 14 degree-of-freedom planar quadrupedal robot, Minitaur. A framework for choosing the parameters of virtual constraint controllers to achieve exponential stability is shown, and the feasibility of the analytical results is numerically validated in full-order simulation models of Minitaur.
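The paper's necessary and sufficient conditions are established analytically; a standard numerical counterpart, shown here only as a sketch, is to linearize the stride-to-stride Poincaré map around its fixed point by finite differences and check that all eigenvalues lie strictly inside the unit circle. The return map below is a made-up two-dimensional stand-in, not Minitaur's.
```python
import numpy as np

def poincare_map(x):
    """Hypothetical return map with a fixed point at the origin (stand-in for
    the stride-to-stride map of a bounding gait)."""
    A = np.array([[0.6, 0.2],
                  [-0.1, 0.5]])
    return A @ x + 0.05 * np.array([x[0] ** 2, x[0] * x[1]])

def is_exponentially_stable(P, x_star, eps=1e-6):
    """Finite-difference Jacobian of P at the fixed point x_star; the orbit is
    locally exponentially stable iff all eigenvalues lie inside the unit circle."""
    n = len(x_star)
    J = np.zeros((n, n))
    for i in range(n):
        e = np.zeros(n)
        e[i] = eps
        J[:, i] = (P(x_star + e) - P(x_star - e)) / (2 * eps)
    radius = max(abs(np.linalg.eigvals(J)))
    return radius < 1.0, radius

stable, rho = is_exponentially_stable(poincare_map, np.zeros(2))
print(f"spectral radius = {rho:.3f}, exponentially stable: {stable}")
```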
|
|
17:15-17:30, Paper MoDT16.4 | |
>Experimental Verification of Vibratory Conveyor System Based on Frequency Entrainment of Limit Cycle Walker |
> Video Attachment
|
|
Mitsuhashi, Kento | Japan Advanced Institute of Science and Technology |
Nishihara, Masatsugu | JAIST |
Asano, Fumihiko | Japan Advanced Institute of Science and Technology |
Keywords: Underactuated Robots, Passive Walking, Dynamics
Abstract: The authors have previously investigated underactuated locomotion robots with an inner wobbling mass and discovered that the wobbling mass controls the gait speed by entrainment. When the wobbling is supplied externally, it entrains load objects and controls their transfer speed. In this paper, we propose a vibratory conveyor system based on the frequency entrainment of a limit cycle walker. The conveyance plate is vibrated by an active rimless wheel, and the system conveys a passive rimless wheel, which is defined as the load object. The vibration entrains the transfer of the passive rimless wheel and controls the conveyance speed. First, we introduce the prototype experimental system and its mathematical model. Second, we report the basic behavior of the passive rimless wheel with respect to the external vibration and the results of a frequency analysis through numerical simulation. Third, we experimentally verify the results of the numerical simulation.
|
|
17:30-17:45, Paper MoDT16.5 | |
>Energy-Efficient Locomotion Generation and Theoretical Analysis of a Quasi-Passive Dynamic Walker |
> Video Attachment
|
|
Li, Longchuan | Ritsumeikan University |
Tokuda, Isao | Ritsumeikan University |
Asano, Fumihiko | Japan Advanced Institute of Science and Technology |
Keywords: Passive Walking, Underactuated Robots, Dynamics
Abstract: This paper presents a robot walking control method that we call quasi-passive dynamic walking. The method is targeted at underactuated legged robots and applied to obtain an energy-efficient limit cycle gait on level ground. To achieve efficient locomotion, as well as to overcome the underactuation of the system, there are two key points of this method that positively utilize the passive dynamics of the system. The first is to initialize the walker at the fixed point on the Poincaré section obtained from passive dynamic walking on a gentle downhill slope. The second is to indirectly excite the hip angle by periodically oscillating a wobbling mass, which is attached to the body frame. The walker is, therefore, able to step forward on level ground without any torque actuation. Moreover, the phase diagram of the generated gait is entrained to a limit cycle by the periodic oscillation of the wobbling mass. Numerical simulations and theoretical analysis are conducted to evaluate the efficiency and the local stability of the gait. Our control method enables underactuated legged robots to walk extremely efficiently on level ground with only one actuator, which eases implementation on real machines.
|
|
17:45-18:00, Paper MoDT16.6 | |
>Energy Management through Footstep Selection for Bipedal Robots |
> Video Attachment
|
|
Crews, Steven | Carnegie Mellon University |
Travers, Matthew | Carnegie Mellon University |
Keywords: Humanoid and Bipedal Locomotion, Motion Control, Passive Walking
Abstract: This work proposes a method of footstep placement that controls system energy to enable a dynamically-safe walking behavior. In contrast to many other works that treat rough terrain as a series of disturbances to be mitigated with control, we provide insight into how energy-targeted foot placement alone is enough to allow a passive system to transit over rough terrain. This work explores the underlying complexities of one of the simplest walking models, the inverted pendulum, which, in its various forms, is the skeleton behind all bipedal robots, from Asimo to Atlas. Troubling all of these humanoids is the foot placement problem, especially when the terrain is not flat. First, this work uses analysis of the system energy to divide the feasible stepping area into regions that would either enable dynamic walking or cause a fall. Second, we subdivide the walking region into sectors that promote the accumulation or dissipation of energy, stimulating or inhibiting future steps. Third, we introduce a method of global energy management using a moving reference point over rough terrain. We present results on how these concepts can be used to prevent falls, accumulate energy to cross gaps, and even enable a passive system to walk uphill.
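The paper's energy-region construction is not reproduced here; as a hedged sketch of the underlying idea, the orbital energy of the linear inverted pendulum about a candidate foothold, E = (1/2) xdot^2 - (g/2z)(x - p)^2, indicates whether the CoM will pass over that foothold (E > 0) or fall back (E < 0), which is one way to split candidate footsteps into energy-accumulating and energy-dissipating choices. The CoM height, state, and foothold values below are illustrative only.
```python
g, z = 9.81, 0.9                       # gravity, assumed constant CoM height [m]

def orbital_energy(x, xdot, p):
    """Orbital energy of the linear inverted pendulum about foothold p:
    E > 0  -> the CoM passes over the foothold (motion carries on),
    E < 0  -> the CoM falls back toward the previous stance (step inhibited)."""
    return 0.5 * xdot ** 2 - 0.5 * (g / z) * (x - p) ** 2

x, xdot = 0.0, 0.8                     # current CoM state (hypothetical)
for p in [0.15, 0.25, 0.35]:           # candidate footholds ahead of the CoM
    E = orbital_energy(x, xdot, p)
    verdict = "keeps/accumulates energy" if E > 0 else "dissipates/blocks motion"
    print(f"foothold at {p:.2f} m: E = {E:+.3f}  ({verdict})")
```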
|
|
MoDT17 |
Room T17 |
Multi-Legged Robots I |
Regular session |
Chair: Zhang, Guoteng | Shandong University |
Co-Chair: Clark, Jonathan | Florida State University |
|
16:30-16:45, Paper MoDT17.1 | |
>Multi-Task Control for a Quadruped Robot with Changeable Leg Configuration |
> Video Attachment
|
|
Ye, Linqi | Tsinghua University Graduate School at Shenzhen |
Liu, Houde | Shenzhen Graduate School, Tsinghua University |
Wang, Xueqian | Center for Artificial Intelligence and Robotics, Graduate School |
Liang, Bin | Center for Artificial Intelligence and Robotics, Graduate School |
Yuan, Bo | Tsinghua University |
Keywords: Multi-legged Robots, Task Planning, Legged Robots
Abstract: This paper proposes a multi-task control strategy for a quadruped robot named THU-QUAD II. The mechanical design of the robot ensures a wide range of motion for all joints, which allows it to stand and walk like a mammal as well as sprawl to the ground and crawl like a reptile. Five basic leg configurations are defined for the robot, including four mammal-type configurations with bidirectional knees and one sprawling-type configuration. A multi-task control framework is developed by combining configuration selection and gait planning. Depending on the locomotion environment, the robot can nimbly switch between different configurations, which gives it more flexibility when facing different tasks. For the mammal-type configurations, a parametric climbing gait is designed to traverse structured terrain. For the sprawling-type configuration, a crawling gait is designed to achieve robust locomotion on uneven terrain. Simulations and experiments show that the robot is capable of moving on multiple challenging terrains, including doorsills, stairs, slopes, sand, and stones. This paper demonstrates that even some challenging locomotion tasks can be achieved in a rather simple way without using complicated control algorithms, which suggests rethinking leg configurations in the design of quadruped robots.
|
|
16:45-17:00, Paper MoDT17.2 | |
>LLAMA: Design and Control of an Omnidirectional Human Mission Scale Quadrupedal Robot |
> Video Attachment
|
|
Nicholson, John | Florida State University |
Jasper, Jay | NASA-JPL |
Kourchians, Ara | NASA-JPL |
McCutcheon, Greg | Florida State University |
Austin, Max | Florida State University |
Gonzalez, Mark | General Dynamics Land Systems |
Pusey, Jason | U.S. Army Research Laboratory (ARL) |
Karumanchi, Sisir | Jet Propulsion Lab, Caltech |
Hubicki, Christian | Florida State University |
Clark, Jonathan | Florida State University |
Keywords: Legged Robots, Field Robots, Multi-legged Robots
Abstract: This paper describes the design, control, and initial experimental results of the quadruped robot LLAMA. Designed to operate in a human-scale world, this 67 kg-class, all-electric robot is capable of rapid motion over a variety of terrains. Thanks to a unique leg configuration and custom high-torque, low gear-ratio motors, it is able to move omnidirectionally at speeds over 1 m/s. A hierarchical reactive control scheme allows for robust and efficient motion even under variable payloads. This paper describes the structure of the controller and outlines simulation results that probe the performance envelope of the robot, suggesting payload capacities up to one third of its body weight. Initial testing shows robust motion over loose debris and a variety of ground slopes. Videos of the robot may be seen at https://tinyurl.com/llama-robot.
|
|
17:00-17:15, Paper MoDT17.3 | |
>ALPHRED: A Multi-Modal Operations Quadruped Robot for Package Delivery Applications |
> Video Attachment
|
|
Hooks, Joshua | UCLA |
Ahn, Min Sung | University of California, Los Angeles |
Yu, Jeffrey | UCLA |
Zhang, Xiaoguang | University of California, Los Angeles |
Zhu, Taoyuanmin | University of California, Los Angeles |
Chae, Hosik | University of California at Los Angeles |
Hong, Dennis | UCLA |
Keywords: Legged Robots, Multi-legged Robots, Mobile Manipulation
Abstract: Modern quadruped robots are more capable than ever before at performing robust, dynamic locomotion over a variety of terrains, but are still mostly used as mobile inspection platforms. This paper presents ALPHRED version 2, a multi-modal operations quadruped robot designed for both locomotion and manipulation. ALPHRED is equipped with high force bandwidth proprioceptive actuators and simple one degree of freedom end-effectors. Additionally, ALPHRED has a unique radially symmetric kinematic design that provides a superior end-effector workspace and allows the robot to reconfigure itself into different modes to accomplish different tasks. For locomotion tasks, ALPHRED is capable of fast dynamic trotting, continuous hopping and jumping, efficient rolling on passive caster wheels, and even has the potential for bipedal walking. For manipulation tasks, ALPHRED has a tripod mode that provides single arm manipulation capabilities that is strong enough to punch through a wooden board. Additionally, ALPHRED can go into a bipedal mode to allow for dual arm manipulation capable of picking up a box off a one meter tall table and placing it on the ground.
|
|
17:15-17:30, Paper MoDT17.4 | |
>Contact Force Estimation and Regulation of a Position-Controlled Floating Base System without Joint Torque Information |
> Video Attachment
|
|
Zhang, Guoteng | Shandong University |
Ma, Shugen | Ritsumeikan University |
Li, Yibin | Shandong University |
Keywords: Legged Robots, Multi-legged Robots, Force and Tactile Sensing
Abstract: A floating base system inevitably contacts the environment while it is moving. This paper explores a contact force estimation and regulation algorithm for a position-controlled floating base system without joint torque information. First, the joint space dynamic model of the system is presented and transformed into the contact space. Then, an inverse dynamics method is employed to estimate the contact forces. After that, a proportional-integral (PI) regulator is designed to drive the contact forces to track the desired values. Finally, the feasibility of this algorithm is demonstrated on a simulated bipedal platform.
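A hedged sketch of the two ingredients the abstract names: estimating contact forces from the unactuated floating-base rows of the dynamics (which involve no joint torques), and a PI regulator that turns the force error into a position-reference offset. The matrices and gains below are invented toy values, not the paper's bipedal model.
```python
import numpy as np

def estimate_contact_forces(M_base, h_base, Jc_base, qddot):
    """Least-squares contact-force estimate from the unactuated (floating-base)
    rows of the dynamics, M_b qddot + h_b = Jc_b^T F: only the model and
    measured/estimated base accelerations are needed, no joint torques."""
    rhs = M_base @ qddot + h_base
    F, *_ = np.linalg.lstsq(Jc_base.T, rhs, rcond=None)
    return F

class PIForceRegulator:
    """Drives the estimated contact force toward a desired value by adjusting
    a position reference offset (suitable for position-controlled joints)."""
    def __init__(self, kp=1e-4, ki=5e-4):
        self.kp, self.ki, self.integral = kp, ki, 0.0
    def update(self, F_des, F_est, dt):
        err = F_des - F_est
        self.integral += err * dt
        return self.kp * err + self.ki * self.integral   # position offset

# Hypothetical single-contact planar example (numbers are illustrative only):
M_b = np.diag([30.0, 30.0, 2.5])            # base mass/inertia
h_b = np.array([0.0, 30.0 * 9.81, 0.0])     # gravity/bias terms on the base
Jc_b = np.array([[1.0, 0.0, 0.0],           # contact Jacobian (base rows)
                 [0.0, 1.0, 0.0]])
qddot_b = np.array([0.1, -0.05, 0.0])       # measured base acceleration
F = estimate_contact_forces(M_b, h_b, Jc_b, qddot_b)
print("estimated contact force (Fx, Fz):", F)

reg = PIForceRegulator()
print("position offset toward 250 N normal force:", reg.update(250.0, F[1], 0.002))
```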
|
|
17:30-17:45, Paper MoDT17.5 | |
>Decentralized Control Schemes for Stable Quadrupedal Locomotion: A Decomposition Approach from Centralized Controllers |
> Video Attachment
|
|
Pandala, Abhishek | Virginia Polytechnic Institute and State University |
Kamidi, Vinay | Virginia Tech |
Akbari Hamed, Kaveh | Virginia Tech |
Keywords: Legged Robots, Motion Control
Abstract: Although legged robots are becoming more nonlinear with higher degrees of freedom (DOFs), the centralized nonlinear control methods required to achieve stable locomotion cannot scale with the dimensionality of these robots. This paper investigates time-varying decentralized feedback control architectures based on hybrid zero dynamics (HZD) that stabilize dynamic legged locomotion with high degrees of freedom. By conforming to the natural symmetries present in the robot's full-order model, three decentralization schemes are proposed for control synthesis, namely left-right, front-hind, and diagonal. Our approach considers the strong nonlinear interactions between the subsystems and relies only on the intrinsic communication of the body's translational and rotational data, which is readily available. Further, a quadratic programming (QP) based feedback linearization is employed to compute the control inputs for each subsystem. The effectiveness of the HZD-based decentralization scheme is demonstrated numerically for the stabilization of forward and in-place walking gaits on an 18-DOF robot.
|
|
17:45-18:00, Paper MoDT17.6 | |
>Real-Time Constrained Nonlinear Model Predictive Control on SO(3) for Dynamic Legged Locomotion |
> Video Attachment
|
|
Hong, Seungwoo | Korea Advanced Institute of Science and Technology |
Kim, Joon-Ha | Korea Advanced Institute of Science and Technology(KAIST) |
Park, Hae-Won | Korea Advanced Institute of Science and Technology |
Keywords: Legged Robots, Multi-legged Robots, Optimization and Optimal Control
Abstract: This paper presents a constrained nonlinear model predictive control (NMPC) framework for legged locomotion. The framework models a legged robot as a floating-base single rigid body with contact forces applied to the body as external forces. With consideration of the orientation dynamics evolving on the rotation manifold SO(3), analytic Jacobians, which are necessary for constructing the gradient and the Gauss-Newton Hessian approximation of the objective function, are derived. This procedure also includes the reparameterization of the robot orientation on SO(3) as an orientation error in the tangent space of that manifold. The obtained gradient and Gauss-Newton Hessian approximation are utilized to solve the nonlinear least-squares problems formulated from NMPC in a computationally efficient manner. The proposed algorithm is verified on various types of legged robots and gaits in a simulation environment.
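The tangent-space reparameterization of orientation error mentioned above can be illustrated with the SO(3) logarithm map. The sketch below is a generic implementation of that map, not the paper's code; the small-angle guard and the error convention (desired-to-current) are assumptions.

```python
# Generic sketch: orientation error on SO(3) expressed in the tangent space
# via the matrix logarithm, as used when linearizing rotational dynamics.
import numpy as np

def so3_log(R):
    """Logarithm map of a rotation matrix, returning a 3-vector (axis * angle)."""
    cos_theta = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    theta = np.arccos(cos_theta)
    if theta < 1e-8:          # small-angle guard (crude; near-pi case not handled)
        return np.zeros(3)
    w_hat = (R - R.T) * (theta / (2.0 * np.sin(theta)))
    return np.array([w_hat[2, 1], w_hat[0, 2], w_hat[1, 0]])

def orientation_error(R_current, R_desired):
    """Tangent-space error between current and desired body orientation."""
    return so3_log(R_desired.T @ R_current)
```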
|
|
MoDT18 |
Room T18 |
Multi-Legged Robots II |
Regular session |
Chair: Kim, Joohyung | University of Illinois at Urbana-Champaign |
Co-Chair: Park, Hae-Won | Korea Advanced Institute of Science and Technology |
|
16:30-16:45, Paper MoDT18.1 | |
>Automatic Gait Pattern Selection for Legged Robots |
> Video Attachment
|
|
Wang, Jiayi | The University of Edinburgh |
Chatzinikolaidis, Iordanis | The University of Edinburgh |
Mastalli, Carlos | University of Edinburgh |
Wolfslag, Wouter | University of Edinburgh |
Xin, Guiyang | The University of Edinburgh |
Tonneau, Steve | LAAS |
Vijayakumar, Sethu | University of Edinburgh |
Keywords: Multi-legged Robots, Motion and Path Planning, Legged Robots
Abstract: An important issue when synthesizing legged locomotion plans is the combinatorial complexity that arises from gait pattern selection. Though it can be defined manually, the gait pattern plays an important role in the feasibility and optimality of a motion with respect to a task. Replacing human intuition with an automatic and efficient approach for gait pattern selection would allow for more autonomous robots, responsive to task and environment changes. To this end, we propose the idea of building a map from task to gait pattern selection for a given environment and performance objective. Indeed, we show that for a 2D half-cheetah model and a quadruped robot, a direct mapping between a given task and an optimal gait pattern can be established. We use supervised learning to capture the structure of this map in the form of gait regions. Furthermore, we propose to construct a warm-starting trajectory for each gait region. We empirically show that these warm-starting trajectories improve the convergence speed of our trajectory optimization problem by up to 60 times compared with random initial guesses. Finally, we conduct experimental trials on the ANYmal robot to validate our method.
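As a hedged illustration of the task-to-gait map, the sketch below fits an off-the-shelf classifier on synthetic (task, gait-label) pairs; the task features, gait labels, and training data are invented placeholders, and the paper's actual learning pipeline may differ.

```python
# Illustrative sketch: learn a map from task parameters to an optimal gait
# pattern label, in the spirit of gait regions. Data here is synthetic.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
tasks = rng.uniform(low=[0.0, 0.0], high=[2.0, 0.3], size=(200, 2))  # e.g. (speed, step height)
gait_labels = (tasks[:, 0] > 1.0).astype(int)  # placeholder rule: 0 = walk, 1 = bound

classifier = SVC(kernel="rbf").fit(tasks, gait_labels)
# Pick a gait (and, in the paper's pipeline, its warm-start trajectory) for a new task
predicted_gait = classifier.predict([[1.4, 0.1]])
```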
|
|
16:45-17:00, Paper MoDT18.2 | |
>Kinodynamic Motion Planning for Multi-Legged Robot Jumping Via Mixed-Integer Convex Program |
> Video Attachment
|
|
Ding, Yanran | University of Illinois at Urbana-Champaign |
Li, Chuanzheng | University of Illinois, Urbana-Champaign |
Park, Hae-Won | Korea Advanced Institute of Science and Technology |
Keywords: Multi-legged Robots, Multi-Contact Whole-Body Motion Planning and Control, Legged Robots
Abstract: This paper proposes a kinodynamic motion planning framework for multi-legged robot jumping based on the mixed-integer convex program (MICP), which simultaneously reasons about centroidal motion, contact points, wrench, and gait sequences. This method uniquely combines configuration space discretization and the construction of feasible wrench polytope (FWP) to encode kinematic constraints, actuator limit, friction cone constraint, and gait sequencing into a single MICP. The MICP could be efficiently solved to the global optimum by off-the-shelf numerical solvers and provide highly dynamic jumping motions without requiring initial guesses. Simulation and experimental results demonstrate that the proposed method could find novel and dexterous maneuvers that are directly deployable on the two-legged robot platform to traverse through challenging terrains.
|
|
17:00-17:15, Paper MoDT18.3 | |
>Quadrupedal Robotic Walking on Sloped Terrains Via Exact Decomposition into Coupled Bipedal Robots |
> Video Attachment
|
|
Ma, Wenlong | Caltech |
Csomay-Shanklin, Noel | California Institute of Technology |
Ames, Aaron | Caltech |
Keywords: Multi-Robot Systems, Optimization and Optimal Control, Legged Robots
Abstract: Can we design motion primitives for complex legged systems uniformly for different terrain types without neglecting modeling details? This paper presents a method for rapidly generating quadrupedal locomotion on sloped terrains---from modeling to gait generation, to hardware demonstration. At the core of this approach is the observation that a quadrupedal robot can be exactly decomposed into coupled bipedal robots. Formally, this is represented through the framework of coupled control systems, wherein isolated subsystems interact through coupling constraints. We demonstrate this concept in the context of quadrupeds and use it to reduce the gait planning problem for uneven terrains to bipedal walking generation via hybrid zero dynamics. This reduction method allows for the formulation of a nonlinear optimization problem that leverages low-dimensional bipedal representations to generate dynamic walking gaits on slopes for the full-order quadrupedal robot dynamics. The result is the ability to rapidly generate quadrupedal walking gaits on a variety of slopes. We demonstrate these walking behaviors on the Vision 60 quadrupedal robot: in simulation, via walking on a range of sloped terrains of 13°, 15°, 20°, and 25°, and, experimentally, through successful locomotion on 13° and 20°–25° sloped outdoor grasslands.
|
|
17:15-17:30, Paper MoDT18.4 | |
>Waste Not, Want Not: Lessons in Rapid Quadrupedal Gait Termination from Thousands of Suboptimal Solutions |
> Video Attachment
|
|
Shield, Stacey Leigh | University of Cape Town |
Patel, Amir | University of Cape Town |
Keywords: Multi-legged Robots, Multi-Contact Whole-Body Motion Planning and Control, Whole-Body Motion Planning and Control
Abstract: Elaborate trajectory optimization models with many degrees of freedom can be a useful locomotion-planning tool, as they provide rich solutions that take advantage of the robot's specific morphology. They are, however, prone to falling into local minima. Depending on the seed that initializes the solver, the trajectories themselves and the extent to which they minimize the cost function can vary widely, making it impossible to judge the quality of any solution without generating many more. In this paper, we argue that this perceived drawback can actually be a powerful advantage in exploratory studies, since the resulting set of diverse motions can reveal which features tend to be associated with good performance, and therefore aid in the formulation of strategies for executing challenging maneuvers. We selected rapid gait termination from a high-speed gallop as our case study - a dangerous and scarcely-researched movement. By analyzing a set of over 3000 monopedal and quadrupedal trajectories, we were able to extract conclusions about how braking and sliding should be performed to reduce the stopping distance, and identify a hindlimb action that creates large braking forces.
|
|
17:30-17:45, Paper MoDT18.5 | |
>Brainless Running: A Quasi-Quadruped Robot with Decentralized Spinal Reflexes by Solely Mechanical Devices |
> Video Attachment
|
|
Masuda, Yoichi | Osaka University |
Miyashita, Kazuhiro | Osaka University |
Yamagishi, Kaisei | Osaka University |
Ishikawa, Masato | Osaka University |
Hosoda, Koh | Osaka University |
Keywords: Multi-legged Robots, Hydraulic/Pneumatic Actuators, Biologically-Inspired Robots
Abstract: As a strategy to address the difficulties encountered when modeling and controlling a musculoskeletal system, we present a straightforward implementation of an autonomous decentralized motion control system in this paper; the system is inspired by the spinal reflex system of animals. We developed an artificial receptor, muscle, and neuron to mechanically implement the reflex mechanisms of animals. Among these reflex mechanisms, this paper presents a reflex system with reciprocal innervation for a musculoskeletal quasi-quadruped robot with antagonist muscles. In the experiments, the robot autonomously generated a leg trajectory and a gait pattern with smooth alternating motions of the antagonist muscles through the interaction between the body, the ground, and the artificial reflex systems. To evaluate the reciprocal innervation, we compared the developed robot with one that does not include antagonist muscles. The reciprocal innervation allows for twice as many muscle implementations as those offered by the robot without antagonist muscles. Moreover, it improves the running speed by 5% on average and the flexion and extension velocities of all joints by 28% on average around the touchdowns and liftoffs of the foot. This successful result paves the way for implementing more advanced nervous systems using solely mechanical devices.
|
|
17:45-18:00, Paper MoDT18.6 | |
>Snapbot V2: A Reconfigurable Legged Robot with a Camera for Self Configuration Recognition |
> Video Attachment
|
|
Gim, Kevin | University of Illinois, Urbana-Champaign |
Kim, Joohyung | University of Illinois at Urbana-Champaign |
Keywords: Multi-legged Robots, Legged Robots, Cellular and Modular Robots
Abstract: In this paper, we present the second version of a reconfigurable modular legged robot, Snapbot V2. The mechanical design of Snapbot V2 is enhanced for better dynamic performance and a more robust connection with the modular legs. A motion generator for locomotion is developed to achieve various locomotion skills in one- to six-leg configurations. The locomotion is tested on a multi-body dynamic simulation model and implemented on the physical robot as well. Visual detection is implemented with a camera module to recognize the robot's configuration. By detecting the particular color of the parts on each leg module, the robot can recognize the number and location of the connected legs. Based on the recognized configuration, Snapbot V2 automatically selects the proper locomotion style.
|
|
MoDT19 |
Room T19 |
Human Motion Analysis |
Regular session |
Chair: Meger, David Paul | McGill University |
Co-Chair: Kim, Keehoon | POSTECH, Pohang University of Science and Technology |
|
16:30-16:45, Paper MoDT19.1 | |
>PresSense: Passive Respiration Sensing Via Ambient WiFi Signals in Noisy Environments |
|
Xu, Yi Tian | Samsung Electronics Canada |
Chen, Xi | Samsung Electronics Canada |
Liu, Xue | McGill University |
Meger, David Paul | McGill University |
Dudek, Gregory | McGill University |
Keywords: Human-Centered Robotics, Sensor-based Control, Cognitive Human-Robot Interaction
Abstract: Passive sensing with ambient WiFi signals is a promising technique that will enable new types of human-robot interactions while preserving users' privacy. Here, we present PresSense, a system for human respiration sensing in noisy environments. Unlike existing WiFi-based respiration sensors, we employ a human presence detector, improving robustness in scenarios where no human is present in the Area Of Interest (AOI). We also integrate our novel feature, the Peak Distance Histogram (PDH), with other classic WiFi features to achieve better accuracy when someone is present in the AOI. We tested our system using commodity WiFi devices in an office room. PresSense outperforms the state of the art in both respiration rate estimation and presence detection.
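A peak-distance-histogram style feature could be computed roughly as sketched below; this is an assumption-based illustration of the general idea, not the PDH definition used in PresSense.

```python
# Illustrative sketch: a normalized histogram of inter-peak intervals of a 1-D
# channel measurement stream, usable as a respiration-related feature vector.
import numpy as np
from scipy.signal import find_peaks

def peak_distance_histogram(signal, fs, bins=20, max_period_s=10.0):
    peaks, _ = find_peaks(signal)
    if len(peaks) < 2:
        return np.zeros(bins)
    distances = np.diff(peaks) / fs  # inter-peak intervals in seconds
    hist, _ = np.histogram(distances, bins=bins, range=(0.0, max_period_s))
    return hist / max(hist.sum(), 1)  # normalized histogram as a feature
```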
|
|
16:45-17:00, Paper MoDT19.2 | |
>Automatic Synthesis of Human Motion from Temporal Logic Specifications |
|
Althoff, Matthias | Technische Universität München |
Mayer, Matthias | Technical University of Munich |
Müller, Robert | Technical University of Munich |
Keywords: Human and Humanoid Motion Analysis and Synthesis, Formal Methods in Robotics and Automation, Simulation and Animation
Abstract: Humans and robots are increasingly sharing their workspaces to benefit from the precision, endurance, and strength of machines and the universal capabilities of humans. Instead of performing time-consuming real experiments, computer simulations of humans could help to optimally orchestrate human and robotic tasks---either for setting up new production cells or for optimizing the motion planning of already installed robots. Especially when human-robot coexistence is optimized using machine learning, being able to synthesize a huge number of human motions is indispensable. However, no solution exists that automatically creates a range of human motions from a high-level specification of tasks. We propose a novel method that automatically generates human motions from linear temporal logic specifications and demonstrate our approach with numerical examples.
|
|
17:00-17:15, Paper MoDT19.3 | |
>Drift-Free and Self-Aligned IMU-Based Human Gait Tracking System with Augmented Precision and Robustness |
|
Chen, Yawen | The Hong Kong University of Science and Technology |
Fu, Chenglong | Southern University of Science and Technology |
Leung, Suk Wai Winnie | Hong Kong University of Science and Technology |
Shi, Ling | The Hong Kong University of Science and Technology |
Keywords: Human and Humanoid Motion Analysis and Synthesis, Sensor Fusion
Abstract: IMU-based human joint motion acquisition systems are attractive for real-time control and monitoring in emerging wearable technology due to their portability. However, in practical applications, they heavily suffer from long-term drift, magnetic interference, and inconsistency of rotational reference frames, which causes precision degradation. In this paper, a novel on-line IMU-based human gait estimation framework is proposed to obtain the joint rotational angles directly under the kinematic constraints between multiple body segments, whereas traditional methods need to estimate the orientation of each individual segment. The framework consists of an on-line algorithm to align IMU frames with human joints and motion estimation algorithms for the hip and knee without the aid of a magnetometer. Both a 2-DoF robot test and human gait tests were performed to validate the proposed method against predictions from commercial IMUs, joint encoders, and an optical tracking system. The outcome demonstrated its advantages of adaptive alignment, drift rejection, and low computational cost, which alleviate the practical barriers faced by human motion data collection in wearable devices.
|
|
17:15-17:30, Paper MoDT19.4 | |
>Shift-Adaptive Estimation of Joint Angle Using Instrumented Brace with Two Stretch Sensors Based on Gaussian Mixture Models |
|
Eguchi, Ryo | Keio University |
Michael, Brendan | King's College London |
Howard, Matthew | King's College London |
Takahashi, Masaki | Keio University |
Keywords: Automation in Life Sciences: Biotechnology, Pharmaceutical and Health Care, Medical Robots and Systems, Sensor Fusion
Abstract: Wearable motion sensing in daily life has attracted attention in various disciplines. In particular, stretchable strain sensors have been integrated into garments (e.g., braces). To estimate joint motions from such sensors, previous studies have modelled relationships between the sensor strains and motion parameters via supervised/semi-supervised learning. However, typically these only model a single relationship, assuming the sensor to be located at a specific point on the body. Consequently, they exhibit reduced performance when the strain-parameter relationship varies due to sensor shifts caused by long-term wearing or donning/doffing of braces. This letter presents a shift-adaptive estimation of knee joint angle. First, a brace is instrumented with two stretch sensors placed at different heights. Next, the different strain-angle relationships at varying brace shift positions are learned using Gaussian mixture models (GMMs). The system then estimates the joint angle from the sensor strains through Gaussian mixture regression using the maximum-likelihood shift GMM, which is identified by referring to the two strains over the previous 1 s period. Experimental results indicated that the proposed method estimates the joint angle at multiple shift positions (0--20 mm) with higher accuracy than methods using a single model, a single sensor, or only the present sensor strains.
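Gaussian mixture regression of the joint angle from the two strains can be sketched as below. This is a generic GMR illustration with placeholder synthetic data, not the authors' shift-identification pipeline.

```python
# Generic sketch: Gaussian mixture regression of knee angle from two strain
# readings. Training data and mixture size are placeholders.
import numpy as np
from sklearn.mixture import GaussianMixture

# columns: [strain_1, strain_2, knee_angle]; synthetic placeholder data
data = np.random.default_rng(1).normal(size=(500, 3))
gmm = GaussianMixture(n_components=3, covariance_type="full").fit(data)

def gmr_predict(strains):
    """Condition the joint mixture on the two strains (first 2 dims) and
    return the expected knee angle (last dim)."""
    x = np.asarray(strains)
    means, covs, weights = gmm.means_, gmm.covariances_, gmm.weights_
    resp = np.zeros(len(weights))
    cond_means = np.zeros(len(weights))
    for k in range(len(weights)):
        mu_x, mu_y = means[k, :2], means[k, 2]
        Sxx, Sxy = covs[k][:2, :2], covs[k][:2, 2]
        diff = x - mu_x
        # responsibility of component k for the observed strains
        resp[k] = weights[k] * np.exp(-0.5 * diff @ np.linalg.solve(Sxx, diff)) \
                  / np.sqrt(np.linalg.det(2 * np.pi * Sxx))
        cond_means[k] = mu_y + Sxy @ np.linalg.solve(Sxx, diff)
    resp /= resp.sum()
    return float(resp @ cond_means)

angle_estimate = gmr_predict([0.1, -0.2])  # example query
```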
|
|
17:30-17:45, Paper MoDT19.5 | |
>Subject-Independent sEMG Pattern Recognition by Using a Muscle Source Activation Model |
|
Kim, Minjae | KIST |
Chung, Wan Kyun | POSTECH |
Kim, Keehoon | POSTECH, Pohang University of Science and Technology |
Keywords: Rehabilitation Robotics, Prosthetics and Exoskeletons
Abstract: The interpretation of surface electromyographic (sEMG) signals facilitates intuitive gesture recognition. However, sEMG signals are highly dependent on measurement conditions. The relationship between sEMG signals and gestures identified from a specific subject cannot be applied to other subjects owing to anatomical differences between the subjects. Furthermore, an sEMG signal varies even with the electrode placement on the same subject. These limitations reduce the practicability of sEMG signal applications. This paper proposes a subject-independent gesture recognition method based on a muscle source activation model; a reference source model facilitates parameter transfer from a specific subject (the donor) to any other subject (the donee). The proposed method can compensate for the angular difference of the interface between subjects. A donee only needs to perform ulnar deviation for approximately 2 s for the overall process. Ten subjects participated in the experiment, and the results show that, in the best configuration, the subject-independent classifier achieved a reasonable accuracy of 78.3% compared with the subject-specific classifier (88.7%) for four wrist/hand motions.
|
|
17:45-18:00, Paper MoDT19.6 | |
>Learning Gait Models with Varying Walking Speeds |
> Video Attachment
|
|
Zou, Chaobin | University of Electronic Science and Technology of China |
Huang, Rui | University of Electronic Science and Technology of China |
Cheng, Hong | University of Electronic Science and Technology |
Qiu, Jing | University of Electronic Science and Technology of China |
Keywords: Rehabilitation Robotics, Motion Control, Learning from Demonstration
Abstract: Lower-limb exoskeletons can reduce the therapist's burden and quantify repetitive gait training for patients with gait disorders. For a patient's gait training, different walking speeds are required at different rehabilitation stages. However, due to the uniqueness of gait patterns, it is challenging for lower-limb exoskeletons to generate individualized gait patterns for patients with different anthropometric parameters. This paper proposes learning-based gait models to learn and reconstruct gait patterns from a healthy-subject gait database, comprising the Gait Parameters Model (GPM) and the Gait Trajectory Model (GTM). The GPM employs neural networks to predict gait parameters from a given desired walking speed and the anthropometric parameters of the subject. The GTM utilizes Kernelized Movement Primitives (KMP) to reconstruct gait patterns from the predicted gait parameters. The proposed approach has been tested on a lower-limb exoskeleton named AIDER. Experimental results indicate that the reconstructed gait patterns are very similar to the subjects' actual gait patterns across varying walking speeds.
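The GPM idea, a regressor from desired speed and anthropometrics to gait parameters, could look roughly like the sketch below; the network size, input choices, and synthetic training data are placeholder assumptions, not the paper's trained model.

```python
# Illustrative sketch: a small neural-network regressor mapping
# (desired speed, height, weight) to a vector of gait parameters.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
X = rng.uniform(low=[0.5, 1.50, 50.0], high=[1.5, 1.95, 100.0], size=(300, 3))  # speed, height, weight
Y = rng.normal(size=(300, 4))  # placeholder gait parameters (e.g. step length, cadence, ...)

gpm = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000).fit(X, Y)
gait_params = gpm.predict([[1.0, 1.75, 70.0]])  # query for a new subject and speed
```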
|
|
MoDT20 |
Room T20 |
Wearable and Assistive Devices |
Regular session |
Chair: Rouse, Elliott | University of Michigan / (Google) X |
Co-Chair: Plante, Jean-Sebastien | Université De Sherbrooke |
|
16:30-16:45, Paper MoDT20.1 | |
>Dynamic Assistance for Human Balancing with Inertia of a Wearable Robotic Appendage |
> Video Attachment
|
|
Maekawa, Azumi | The University of Tokyo |
Kawamura, Kei | The University of Tokyo |
Inami, Masahiko | The University of Tokyo |
Keywords: Physical Human-Robot Interaction, Human Performance Augmentation, Wearable Robots
Abstract: A reduced balance ability can lead to falls and critical injuries. To prevent falls, humans use reaction forces and torques generated by swinging their arms. Animals adopt a similar strategy using their tails. Inspired by these strategies, we propose an approach that utilizes a robotic appendage as a human balance supporter without assistance from environmental contact. As a proof of concept, we developed a wearable robotic appendage that has one actuated degree of freedom and rotates around the sagittal axis of the wearer. To validate the feasibility of our proposed approach, we conducted an evaluation experiment with human subjects. Controlling the robotic appendage improved the subjects' balance ability and enabled them to withstand, on average, up to 22.8% larger impulse disturbances than in the fixed-appendage condition.
|
|
16:45-17:00, Paper MoDT20.2 | |
>A Supernumerary Robotic Leg Powered by Magnetorheological Actuators to Assist Human Locomotion |
> Video Attachment
|
|
Khazoom, Charles | Université De Sherbrooke |
Caillouette, Pierre | Université De Sherbrooke |
Girard, Alexandre | Université De Sherbrooke |
Plante, Jean-Sebastien | Université De Sherbrooke |
Keywords: Wearable Robots, Physically Assistive Devices, Actuation and Joint Mechanisms
Abstract: Supernumerary robotic limbs are emerging to augment human function. Unlike exoskeletons, these robots provide additional kinematic structures to the user that enable novel human-robot interactions. To assist walking, a supernumerary leg should be compliant to impacts, minimize efforts on the user, move quickly when swinging, and exert large assistive forces on the ground. Here, we study the potential of a supernumerary leg powered by delocalized magnetorheological clutches (MR leg) to assist walking with three different gaits. Simulations show that the MR leg's low actuation inertia reduces the impact impulse by a factor of 4 compared to geared motors and that delocalizing the clutches halves the inertial forces transmitted to the user during swing. An impedance controller receives a reference trajectory based on each ankle's position to move the MR leg in synchrony with the gait cycle. Experiments show that the MR leg can comfortably contact the ground and swing at 3.9 m/s for a 1.4 m/s walk. The MR leg tracks the ankle within 5% of the gait cycle for the leader-follower gait, alternately tracks both ankles for the double gait, and contacts the ground in between each step for the three-legged gait. A theoretical upper limit suggests that the average transmitted power in a gait cycle could be 84 W for the leader-follower gait, which is 4 times higher than that of autonomous ankle exoskeletons.
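The phase-indexed impedance law mentioned above can be illustrated with a generic joint-space spring-damper sketch; the gains and the way the reference is indexed by the wearer's ankle position are assumptions, not the authors' controller.

```python
# Generic sketch: joint-space impedance torque around a phase-indexed reference.
import numpy as np

def impedance_torque(q, dq, q_ref, dq_ref, K, D):
    """Spring-damper law: K, D are stiffness and damping matrices; q_ref and
    dq_ref come from a gait-phase-indexed reference trajectory."""
    return K @ (q_ref - q) + D @ (dq_ref - dq)
```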
|
|
17:00-17:15, Paper MoDT20.3 | |
>A Deep Learning Based End-To-End Locomotion Mode Detection Method for Lower Limb Wearable Robot Control |
|
Lu, Zeyu | National University of Singapore |
Narayan, Ashwin | National University of Singapore |
Yu, Haoyong | National University of Singapore |
Keywords: Wearable Robots, Recognition, Prosthetics and Exoskeletons
Abstract: To function effectively in real-world environments, powered wearable robots such as exoskeletons and robotic prostheses must recognize the user's motion intent by detecting the user's locomotion modes, such as walking, stair ascent and descent, or ramp ascent and descent. Traditionally, intent detection is achieved using rule-based methods such as state machines or fuzzy logic with data from wearable sensors. Due to the difficulty of manual rule design, these methods are limited to detecting certain simple locomotion modes. Machine learning (ML) based methods can perform classification over a large number of classes without manual rule design, and recent research has explored several ML methods for locomotion mode classification. However, current ML-based methods for locomotion mode detection use classical techniques that require feature engineering to achieve acceptable accuracies. Additionally, current ML strategies only classify when certain motion events are detected. This strategy, while computationally efficient, could result in misclassifications affecting large sections of motion recognition. To overcome these limitations, this paper proposes an end-to-end deep learning based method for locomotion mode detection that eliminates the need for feature engineering and classifies at a fixed sample rate. The paper introduces a new metric called the confidence index and proposes a strategy for tuning confidence index thresholds to achieve stable intent recognition and an overall accuracy of greater than 95% on a publicly available benchmark dataset.
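A confidence-index threshold of the kind described could be implemented roughly as below; the threshold value, the use of softmax outputs, and the hold-previous-mode policy are assumptions rather than the paper's exact definition.

```python
# Illustrative sketch: switch locomotion mode only when the classifier is
# confident enough; otherwise hold the previously recognized mode.
import numpy as np

def update_mode(current_mode, class_probs, threshold=0.9):
    """class_probs: per-class softmax output of the network for one sample."""
    confidence = float(np.max(class_probs))   # "confidence index" proxy
    predicted = int(np.argmax(class_probs))
    return predicted if confidence >= threshold else current_mode
```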
|
|
17:15-17:30, Paper MoDT20.4 | |
>Image Transformation and CNNs: A Strategy for Encoding Human Locomotor Intent for Autonomous Wearable Robots |
|
Lee, Ung Hee | University of Michigan |
Bi, Justin | University of Michigan |
Patel, Rishi | University of Michigan |
Fouhey, David | University of Michigan |
Rouse, Elliott | University of Michigan / (Google) X |
Keywords: Wearable Robots, Prosthetics and Exoskeletons, Sensor Fusion
Abstract: Wearable robots have the potential to improve the lives of countless individuals; however, challenges associated with controlling these systems must be addressed before they can reach their full potential. Modern control strategies for wearable robots are predicated on activity-specific implementations, and testing is usually limited to a single, fixed activity within the laboratory (e.g., level ground walking). To accommodate various activities in real-world scenarios, control strategies must include the ability to safely and seamlessly transition between activity-specific controllers. One potential solution to this challenge is to infer the wearer's intent using pattern recognition of locomotion sensor data. To this end, we developed an intent recognition framework implementing convolutional neural networks with image encoding (i.e., spectrograms) that enables prediction of the wearer's upcoming locomotor activity for the next step. In this paper, we describe our intent recognition system, comprising a mel-spectrogram front end and a subsequent neural network architecture. In addition, we analyzed the effect of sensor locations and modalities on the recognition system, and compared our proposed system to state-of-the-art locomotor intent recognition strategies. We were able to attain high classification performance (error rate: 1.1%), which was comparable to or better than previous systems.
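Encoding a sensor channel as a mel-spectrogram image for a CNN could look like the sketch below, which leans on librosa for the mel transform; the sample rate, window sizes, and mel resolution are placeholder assumptions, not the paper's configuration.

```python
# Illustrative sketch: turn a 1-D sensor channel into a log-scaled
# mel-spectrogram "image" suitable as CNN input.
import numpy as np
import librosa

def sensor_to_melspectrogram(signal, fs=100, n_mels=16, n_fft=128, hop_length=16):
    mel = librosa.feature.melspectrogram(
        y=np.asarray(signal, dtype=np.float32), sr=fs,
        n_fft=n_fft, hop_length=hop_length, n_mels=n_mels)
    return librosa.power_to_db(mel)  # shape (n_mels, frames)

spec = sensor_to_melspectrogram(np.random.randn(400))  # example call on dummy data
```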
|
|
17:30-17:45, Paper MoDT20.5 | |
>Development of Exo-Glove for Measuring 3-Axis Forces Acting on the Human Finger without Obstructing Natural Human-Object Interaction |
> Video Attachment
|
|
Sathe, Prathamesh | Waseda University |
Schmitz, Alexander | Waseda University |
Kristanto, Harris | Waseda University |
Hsu, Chincheng | Waseda University |
Tomo, Tito Pradhono | Waseda University |
Somlor, Sophon | Waseda University |
Sugano, Shigeki | Waseda University |
Keywords: Haptics and Haptic Interfaces, Force and Tactile Sensing, Wearable Robots
Abstract: Measuring the forces that humans exert with their fingers could have many potential applications, such as skill transfer from human experts to robots or monitoring humans. In this paper we introduce the ``Exo-Glove'' system, which can measure the joint angles and forces acting on the human finger without covering the skin that is in contact with the manipulated object. In particular, 3-axis sensors measure the deformation of the human skin on the sides of the finger to indirectly measure the 3-axis forces acting on the finger. To provide a frame of reference for the sensors, and to measure the joint angles of the human finger, an exoskeleton with remote center of motion (RCM) joints is used. Experiments showed that with the RCM joints the quality of the force measurements can be improved.
|
|
17:45-18:00, Paper MoDT20.6 | |
>Dynamic Stability Control of Inverted-Pendulum-Type Robotic Wheelchair for Going up and down Stairs |
> Video Attachment
|
|
Onozuka, Yuya | The University of Tokyo |
Tomokuni, Nobuyasu | Kinki University |
Murata, Genki | R&D Center, JTEKT Corporation |
Shino, Motoki | The University of Tokyo |
Keywords: Wheeled Robots, Physically Assistive Devices, Underactuated Robots
Abstract: The wheelchair is the major means of transport for elderly and physically disabled people in their daily lives. However, it cannot overcome architectural barriers such as curbs and stairs. In this study, we developed an inverted-pendulum-type robotic wheelchair for climbing stairs. The wheelchair has a seat slider and two rotary links between the front and rear wheels on each side. When climbing stairs, the wheelchair rotates the rotary links while maintaining an inverted state of the movable body by controlling the position of the center of gravity with the seat slider. In previous research, we proposed a control method for climbing up stairs using the rotary links and seat slider, and confirmed that rotating the rotary links takes approximately 5 s. In this paper, we propose a control method for going down stairs, and experimentally verify its effectiveness and stability.
|
|
MoDT21 |
Room T21 |
Prosthesis Control |
Regular session |
Chair: Young, Aaron | Georgia Tech |
Co-Chair: Carloni, Raffaella | University of Groningen |
|
16:30-16:45, Paper MoDT21.1 | |
>Mapping Thigh Motion to Knee Motion: Implications for Motion Planning of Active Prosthetic Knees |
|
Eslamy, Mahdy | Medical University Göttingen |
Oswald, Felix | Applied Rehabilitation Technology Lab - Göttingen |
Schilling, Arndt | UMG Göttingen |
Keywords: Prosthetics and Exoskeletons, Rehabilitation Robotics
Abstract: One of the main challenges of active assistive devices is how to estimate the motion of the missing or impaired limbs and joints in line with the remaining limbs. To do so, a motion planner is required. This study proposes a motion planner that can be used for active prosthetic/orthotic knees. The aim is to continuously estimate knee joint positions based on the thigh motion, using as few inputs as possible. Data from a thigh-mounted IMU (thigh acceleration and angle) are used as inputs to estimate knee joint positions as outputs. The outputs are estimated continuously, as opposed to state-machine approaches, which divide the gait cycle into different sections and require switching rules. The performance of the motion planner is investigated for five walking speeds (0.6, 0.9, 1.2, 1.4 and 1.6 m/s). The strengths and limitations of the motion planner are investigated in different scenarios.
|
|
16:45-17:00, Paper MoDT21.2 | |
>Data-Driven Characterization of Human Interaction for Model-Based Control of Powered Prostheses |
> Video Attachment
|
|
Gehlhar, Rachel | California Institute of Technology |
Chen, Yuxiao | California Institute of Technology |
Ames, Aaron | California Institute of Technology |
Keywords: Prosthetics and Exoskeletons, Wearable Robots
Abstract: This paper proposes a data-driven method for powered prosthesis control that achieves stable walking without the need for additional sensors on the human. The key idea is to extract the nominal gait and the human interaction information from motion capture data, and reconstruct the walking behavior with a dynamic model of the human-prosthesis system. The walking behavior of a human wearing a powered prosthesis is obtained through motion capture, which yields the limb and joint trajectories. Then a nominal trajectory is obtained by solving a gait optimization problem designed to reconstruct the walking behavior observed by motion capture. Moreover, the interaction force profiles between the human and the prosthesis are recovered by simulating the model following the recorded gaits, which are then used to construct a force tube that covers all the interaction force profiles. Finally, a robust Control Lyapunov Function (CLF) Quadratic Programming (QP) controller is designed to guarantee the convergence to the nominal trajectory under all possible interaction forces within the tube. Simulation results show this controller's improved tracking performance with a perturbed force profile compared to other control methods with less model information.
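A CLF-QP of the general form described can be sketched with an off-the-shelf convex solver (here cvxpy, as an assumption); the Lie-derivative terms, decay rate, and input dimension are placeholders, and the paper's robust formulation over the interaction force tube is not reproduced.

```python
# Illustrative sketch: minimum-effort input satisfying an exponential
# CLF decrease condition, solved as a small QP.
import numpy as np
import cvxpy as cp

def clf_qp(LfV, LgV, V, gamma=5.0, n_inputs=2):
    """LfV, V: scalars; LgV: length-n_inputs array (placeholder model terms)."""
    u = cp.Variable(n_inputs)
    objective = cp.Minimize(cp.sum_squares(u))
    constraints = [LfV + LgV @ u <= -gamma * V]  # CLF condition: Vdot <= -gamma*V
    cp.Problem(objective, constraints).solve()
    return u.value

u_star = clf_qp(LfV=0.5, LgV=np.array([1.0, 0.2]), V=1.0)  # example call
```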
|
|
17:00-17:15, Paper MoDT21.3 | |
>IMU-Based Locomotor Intention Prediction for Real-Time Use in Transfemoral Prostheses |
|
Lu, Huaitian | University of Groningen |
Schomaker, Lambert R.B. | University of Groningen |
Carloni, Raffaella | University of Groningen |
Keywords: Prosthetics and Exoskeletons, Sensorimotor Learning
Abstract: This paper focuses on the design and comparison of different deep neural networks for the real-time prediction of locomotor intentions using data from inertial measurement units. The deep neural network architectures are convolutional neural networks, recurrent neural networks, and convolutional recurrent neural networks. The inputs to the architectures are time-domain features derived either from one inertial measurement unit placed on the upper right leg of ten healthy subjects, or from two inertial measurement units placed on both the upper and lower right leg of ten healthy subjects. The study shows that a WaveNet, i.e., a fully convolutional neural network, achieves a peak F1-score of 87.17% in the case of one IMU, and a peak of 97.88% in the case of two IMUs, with 5-fold cross-validation.
|
|
17:15-17:30, Paper MoDT21.4 | |
>Machine Learning Model Comparisons of User Independent & Dependent Intent Recognition Systems for Powered Prostheses |
> Video Attachment
|
|
Bhakta, Krishan | Georgia Institute of Technology |
Camargo, Jonathan | Universidad De Los Andes |
Donovan, Luke | Georgia Institute of Technology |
Herrin, Kinsey | Georgia Institute of Technology |
Young, Aaron | Georgia Tech |
Keywords: Prosthetics and Exoskeletons, Wearable Robots, Human Performance Augmentation
Abstract: Developing intelligent prosthetic controllers to recognize user intent across users is a challenge. Machine learning algorithms present an opportunity to develop methods for predicting a user's locomotion mode. Currently, linear discriminant analysis (LDA) is the standard solution in the state of the art for subject-dependent models and has been used in the development of subject-independent applications. However, the performance of subject-independent models differs radically from that of their subject-dependent counterparts. Furthermore, most studies limit the evaluation to a fixed terrain with a single stair height and ramp inclination. In this study, we investigated the use of the XGBoost algorithm for developing a subject-independent model across 8 individuals with transfemoral amputation. We evaluated the performance of XGBoost across different stair heights and inclination angles and found that it generalizes well across preset conditions. Our findings suggest that XGBoost offers a potential benefit for both subject-independent and subject-dependent algorithms, outperforming LDA and NN (DEP SS Error: 2.93% ± 0.49%, DEP TS Error: 7.03% ± 0.74%, IND SS Error: 10.12% ± 3.16%, and IND TS Error: 15.78% ± 2.39%) (p < 0.05). We also show that with the inclusion of extra sensors the model performance can be continually improved in both user-dependent and user-independent models (p < 0.05). Our study provides valuable information for future intent recognition systems to make them more reliable across different users and common community ambulation modes.
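A subject-independent comparison of XGBoost against LDA with a leave-one-subject-out split can be sketched as below; the features, labels, and subject assignments are synthetic placeholders, not the study's dataset or tuned hyperparameters.

```python
# Illustrative sketch: leave-one-subject-out comparison of XGBoost vs. LDA
# for locomotion-mode classification on placeholder data.
import numpy as np
from xgboost import XGBClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(3)
X = rng.normal(size=(1000, 40))           # windowed sensor features (placeholder)
y = rng.integers(0, 5, size=1000)         # locomotion modes (placeholder)
subjects = rng.integers(0, 8, size=1000)  # subject id per sample (placeholder)

test_subject = 0
train, test = subjects != test_subject, subjects == test_subject

xgb = XGBClassifier(n_estimators=200, max_depth=4).fit(X[train], y[train])
lda = LinearDiscriminantAnalysis().fit(X[train], y[train])
print("XGBoost error:", 1 - xgb.score(X[test], y[test]))
print("LDA error:    ", 1 - lda.score(X[test], y[test]))
```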
|
|
MoDT22 |
Room T22 |
Rehabilitation Robotics I |
Regular session |
Chair: Dubey, Rajiv | University of South Florida |
Co-Chair: Meattini, Roberto | University of Bologna |
|
16:30-16:45, Paper MoDT22.1 | |
>Development of Dementia Care Training System Based on Augmented Reality and Whole Body Wearable Tactile Sensor |
> Video Attachment
|
|
Hiramatsu, Tomoki | Kyushu University |
Kamei, Masaya | Kyushu University |
Inoue, Daiji | Kyushu University |
Kawamura, Akihiro | Kyushu University |
An, Qi | Kyushu University |
Kurazume, Ryo | Kyushu University |
Keywords: Medical Robots and Systems, Rehabilitation Robotics, Haptics and Haptic Interfaces
Abstract: This study develops a training system for a multimodal comprehensive care methodology for dementia patients called Humanitude. Humanitude has attracted much attention as a gentle and effective care technique. It consists of four main techniques, namely eye contact, verbal communication, touch, and standing up, and more than 150 care elements; learning Humanitude thus requires considerable time. To provide effective training for Humanitude, we develop a training system that realizes sensing and interaction simultaneously by combining a real entity and augmented reality technology. To imitate the interaction between a patient and a caregiver, we superimpose a three-dimensional CG model of a patient's face onto the head of a soft doll using augmented reality technology. Touch information such as position and force is sensed using a whole-body wearable tactile sensor developed to quantify touch skills. This training system enables the evaluation of eye contact and touch skills simultaneously. We build a prototype of the proposed training system and evaluate its usefulness in public lectures.
|
|
16:45-17:00, Paper MoDT22.2 | |
>Examination of Screen-Indicated Methods of Gait Training System with Real-Time Audiovisual Feedback Function of Ground Reaction Force |
|
Fukuyama, Kei | Graduate School of Oita University |
Kurose, Ichiro | Beppu Rehabilitation Center |
Ikeuchi, Hidetaka | Oita University |
Keywords: Rehabilitation Robotics, Automation in Life Sciences: Biotechnology, Pharmaceutical and Health Care, Force and Tactile Sensing
Abstract: In gait training for the walking rehabilitation of patients with stroke hemiplegia or bone and joint conditions such as fractures, it is important to recognize the load on the affected lower limbs in order to improve gait ability and avoid risks such as re-fractures. A weight scale is used at the actual rehabilitation site to recognize the load. However, in this situation, the trainee must look down to verify that both the load reading and their walking posture are correct. In addition, the trainee generally cannot read the load value accurately. Therefore, we have developed a system that shows the load in real time on an eye-level display. With this system, we expect patients to be able to perform gait training smoothly while recognizing the state of their walking. In this paper, we report the results of a clinical trial held at a rehabilitation hospital and an examination of the screen-indicated methods.
|
|
17:00-17:15, Paper MoDT22.3 | |
>A Mixed-Integer Model Predictive Control Approach to Motion Cueing in Immersive Wheelchair Simulator |
|
Dao, Le Anh | National Research Council of Italy |
Prini, Alessio | National Research Council of Italy |
Malosio, Matteo | National Research Council of Italy |
Davalli, Angelo | INAIL Prosthesis Center |
Sacco, Marco | Sistemi E Tecnologie Industriali Intelligenti Per Il Manifatturi |
Keywords: Optimization and Optimal Control, Rehabilitation Robotics, Human-Centered Robotics
Abstract: To allow wheelchair (electric or manual) users to practice driving in safe, repeatable, and controlled scenarios, the use of a simulator as a training tool is considered here. In this context, the capability of providing high-fidelity motion cues to simulator users is highlighted as one of the most important aspects for the effectiveness of the tool. For this purpose, a motion cueing algorithm (MCA) is studied in our work to regenerate wheelchair motion cues by transforming motions of the real or simulated wheelchair into simulator motion. The algorithm is developed based on a Model Predictive Control (MPC) approach to efficiently optimize the motions of the platform. The overall problem is formulated as a mixed-integer quadratic program (MIQP) which incorporates not only the vestibular model and the strict constraints of the platform but also the perception threshold in the optimization cost function. Finally, the performance of the system under different control techniques is analyzed, showing the effectiveness of the proposed approach in the simulation environment.
|
|
17:15-17:30, Paper MoDT22.4 | |
>Development of Smartphone-Based Human-Robot Interfaces for Individuals with Disabilities |
> Video Attachment
|
|
Wu, Lei | University of South Florida |
Alqasemi, Redwan | University of South Florida |
Dubey, Rajiv | University of South Florida |
Keywords: Telerobotics and Teleoperation
Abstract: Persons with disabilities often rely on caregivers or family members to assist in their activities of daily living. Robotic assistants can provide an alternative solution if intuitive user interfaces are designed for simple operation. Current human-robot interfaces are still far from being able to operate in an intuitive way when used for complex activities of daily living (ADL). In this era of smartphones packed with sensors, such as accelerometers, gyroscopes, and a precise touch screen, robot controls can be interfaced with smartphones to capture the user's intended operation of the robot assistant. In this paper, we review current popular human-robot interfaces, and we present three novel smartphone-based interfaces to operate a robotic arm for assisting persons with disabilities in their ADL tasks. Useful smartphone data, including three-dimensional orientation and two-dimensional touchscreen positions, are used as control variables for the robot motion in Cartesian teleoperation. We present the three control interfaces, their implementation on a smartphone to control a robotic arm, and a comparison of the results of using the three interfaces for three different ADL tasks. The developed interfaces provide intuitiveness, low cost, and environmental adaptability.
|
|
17:30-17:45, Paper MoDT22.5 | |
>SEMG-Based Human-In-The-Loop Control of Elbow Assistive Robots for Physical Tasks and Muscle Strength Training |
|
Meattini, Roberto | University of Bologna |
Chiaravalli, Davide | Alma Mater Studiorum, University of Bologna |
Palli, Gianluca | University of Bologna |
Melchiorri, Claudio | University of Bologna |
Keywords: Human Factors and Human-in-the-Loop, Cognitive Human-Robot Interaction, Physically Assistive Devices
Abstract: In this article we present an sEMG-driven human-in-the-loop (HITL) control designed to allow an assistive robot to produce proper support forces both for muscular effort compensation, i.e., assistance in physical tasks, and for muscular effort generation, i.e., application in muscle strength training exercises related to the elbow joint. By employing our control strategy based on a Double Threshold Strategy (DTS) with a standard PID regulator, we report that our approach can be successfully used to achieve a target, quantifiable level of muscle activity assistance. In this regard, an experimental concept validation was carried out involving four healthy subjects in physical and muscle strength training tasks, reporting with single-subject and global results that the proposed sEMG-driven control strategy was able to limit the elbow muscular activity to an arbitrary level for effort compensation objectives, and to impose a lower bound on the sEMG signals during effort generation goals. In addition, a subjective qualitative evaluation of the robotic assistance was carried out by means of a questionnaire. The obtained results open future possibilities for a simplified use of sEMG measurements to obtain a target, quantitatively defined robot assistance for human joints and muscles.
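A double-threshold strategy feeding a PI-style correction could be structured roughly as in the sketch below; the thresholds, gains, envelope processing, and torque mapping are assumptions, not the authors' calibrated DTS-PID controller.

```python
# Illustrative sketch: hold the sEMG envelope inside a [low, high] band by
# adjusting an assistive torque command with a PI-like update.
class DoubleThresholdAssist:
    def __init__(self, low, high, kp, ki, dt):
        self.low, self.high = low, high
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def update(self, emg_envelope):
        if emg_envelope > self.high:      # effort too high -> compensate more
            error = emg_envelope - self.high
        elif emg_envelope < self.low:     # effort too low -> make the user work
            error = emg_envelope - self.low
        else:                             # inside the band -> no correction
            error = 0.0
        self.integral += error * self.dt
        return self.kp * error + self.ki * self.integral  # assistive torque delta
```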
|
|
MoDT23 |
Room T23 |
Rehabilitation Robotics II |
Regular session |
Chair: Díaz, Iñaki | CEIT
Co-Chair: Agrawal, Sunil | Columbia University |
|
16:30-16:45, Paper MoDT23.1 | |
>EDAN - an EMG-Controlled Daily Assistant to Help People with Physical Disabilities |
> Video Attachment
|
|
Vogel, Jörn | German Aerospace Center |
Hagengruber, Annette | German Aerospace Center |
Iskandar, Maged | German Aerospace Center - DLR |
Quere, Gabriel | DLR |
Leipscher, Ulrike | DLR - German Aerospace Center |
Bustamante, Samuel | German Aerospace Center (DLR), Robotics and Mechatronics Center |
Dietrich, Alexander | German Aerospace Center (DLR) |
Hoeppner, Hannes | Beuth University of Applied Sciences, Berlin |
Leidner, Daniel | German Aerospace Center (DLR) |
Albu-Schäffer, Alin | DLR - German Aerospace Center |
Keywords: Rehabilitation Robotics, Physically Assistive Devices, Service Robots
Abstract: Injuries, accidents, strokes, and other diseases can significantly degrade the ability to perform even the simplest activities of daily life. A large share of these cases involves neuromuscular diseases, which lead to severely reduced muscle function. However, even when affected people are no longer able to move their limbs, residual muscle function can still be present. Previous work has shown that this residual muscular activity can suffice to operate an EMG-based user interface. In this paper, we introduce DLR's robotic wheelchair EDAN (EMG-controlled Daily Assistant), which is equipped with a torque-controlled, eight degree-of-freedom light-weight arm and a dexterous, five-fingered robotic hand. Using electromyography, the muscular activity of the user is measured, processed, and utilized to control both the wheelchair and the robotic manipulator. This EMG-based interface is enhanced with shared control functionality to allow for efficient and safe physical interaction with the environment.
|
|
16:45-17:00, Paper MoDT23.2 | |
>Real-Time Virtual Coach Using LSTM for Assisting Physical Therapists with End-Effector-Based Robot-Assisted Gait Training |
> Video Attachment
|
|
Seo, Yeongsik | National Rehabilitation Center in the Republic of Korea |
Lee, Eunkyeong | National Rehabilitation Center |
Kwon, Suncheol | National Rehabilitation Center |
Song, Won-Kyung | National Rehabilitation Center |
Keywords: Rehabilitation Robotics, Human-Centered Robotics, Physical Human-Robot Interaction
Abstract: With the development of robotic technology, the demand for state-of-the-art technology in the field of rehabilitation is rapidly increasing for the elderly and people with disabilities. In this paper, we propose a real-time virtual coach based on Long Short-Term Memory (LSTM) networks to assist physical therapists with end-effector-based robot-assisted gait training for stroke survivors. Our proposed virtual coach consists of a sensor module for data gathering and dataset generation, real-time classification of the patient's pathologic gait during training using LSTM networks, and delivery of coaching recommendations in audiovisual form. A preliminary study determined the selection of coaching recommendations, and the LSTM networks are trained to provide the selected recommendations. The performance of the proposed virtual coach is verified through a classification simulation with an able-bodied person on the rehabilitation robot G-EO System. Usability was verified through a satisfaction survey of five professional physical therapists.
|
|
17:00-17:15, Paper MoDT23.3 | |
>Applying Force Perturbations Using a Wearable Robotic Neck Brace |
> Video Attachment
|
|
Zhang, Haohan | Columbia University |
Santamaria Gonzalez, Victor | Columbia University |
Agrawal, Sunil | Columbia University |
Keywords: Physical Human-Robot Interaction, Rehabilitation Robotics, Force Control
Abstract: Force perturbation is used in this paper to study cervical neuromuscular responses, which can be used in the future to assess impairments in patients with neurological diseases. The current literature on this topic is limited to applying forces on the head in the anterior-posterior direction, perhaps due to technological limitations. In this paper, we propose to use a robotic neck brace to address these shortcomings, owing to its lightweight portable design and its ability to control forces. A controller is implemented to apply direction-specific perturbations to the head. To demonstrate the effectiveness of this capability, a human study was carried out with able-bodied subjects. We used this robotic brace to apply forces to the head of the subjects and observed their movement and muscle responses both when their eyes were open and closed. Our results suggest that the robotic brace is capable of perturbing the head and tracking the kinematic response. The study revealed that able-bodied subjects reacted to the perturbations differently when their eyes were closed, showing longer head trajectories and more muscle activation. We also show that the direction-specific perturbation feature enables us to analyze kinematic and muscle variables with respect to the direction of perturbation. This helps better understand the neuromuscular response of the head-neck system.
|
|
17:15-17:30, Paper MoDT23.4 | |
>Energetic Passivity Decoding of Human Hip Joint for Physical Human-Robot Interaction |
|
Atashzar, S. Farokh | New York University (NYU), US |
Huang, Hsien-Yung | Imperial College London |
Del Duca, Fulvia | Technical University of Munich |
Burdet, Etienne | Imperial College London |
Farina, Dario | Imperial College London |
Keywords: Physical Human-Robot Interaction, Rehabilitation Robotics, Compliance and Impedance Control
Abstract: The capacity of the biomechanics of human limbs to absorb energy during physical human-robot interaction (pHRI) can play an important role in controlling the performance of human-centered robotic systems. Using the concept of ``excess of passivity'', we have recently designed passivity signature maps for the elbow and wrist joints. We have also shown that this knowledge can be exploited and extrapolated during interaction with a robotic system by transparency-maximized algorithms. A major application is in robotic rehabilitation systems and assistive technologies. Here, for the first time, the nonlinear energy capacitance of the hip joint and the factors affecting it are decoded. This can be critical for maximizing the performance of wearable exoskeletons. Knowledge regarding energy absorption behavior can significantly help to reduce the conservatism of control algorithms. In this work, the energetic behavior is studied for three different hip angles, while perturbations are provided at three different interaction speeds. The results show that an increase in agonist and antagonist muscle contractions consistently expands the margins of the passivity map. Additionally, by separating the effects of agonist and antagonist contractions, it was identified that the passivity margins correlate with the subject's posture during interaction with the robot and that this correlation depends on the type of muscle contraction. A preliminary design of a stabilizer is also formulated that takes into account the variable passivity behavior of the joint, in the energy domain, to enhance performance while guaranteeing pHRI stability.
|
|
17:30-17:45, Paper MoDT23.5 | |
>Machine Learning for Active Gravity Compensation in Robotics: Application to Neurological Rehabilitation Systems (I) |
|
Ugartemendia, Axier | Ceit-IK4 |
Rosquete, Daniel | Ceit-IK4 |
Gil, Jorge Juan | University of Navarra |
Díaz, Iñaki | CEIT |
Borro, Diego | CEIT |
|