MoAT10
Room T10
Aerial Systems: Applications I
Regular session
Chair: Mueller, Mark Wilfried | University of California, Berkeley
Co-Chair: Bezzo, Nicola | University of Virginia

10:00-10:15, Paper MoAT10.1
Staging Energy Sources to Extend Flight Time of a Multirotor UAV
Video Attachment

Jain, Karan | UC Berkeley |
Tang, Haoyun(Jerry) | UC Berkeley |
Sreenath, Koushil | University of California, Berkeley |
Mueller, Mark Wilfried | University of California, Berkeley |
Keywords: Aerial Systems: Applications, Mechanism Design, Cellular and Modular Robots
Abstract: Energy sources such as batteries do not decrease in mass after consumption, unlike combustion-based fuels. We present the concept of staging energy sources, i.e., consuming energy in stages and ejecting spent stages, to progressively reduce the mass of aerial vehicles in flight, which reduces power consumption and consequently increases flight time. A flight time vs. energy storage mass analysis is presented to show the endurance benefit of staging for multirotors. We consider two specific problems in discrete staging: the optimal order of staging given a certain number of energy sources, and the optimal partitioning of a given energy storage mass budget into a given number of stages. We then derive results for a continuously staged case of an internal combustion engine driving propellers. Notably, we show that a multirotor powered by internal combustion has an upper limit on achievable flight time independent of the available fuel mass. Lastly, we validate the analysis with flight experiments on a custom two-stage battery-powered quadcopter. This quadcopter can eject a battery stage after consumption in flight using a custom-designed mechanism and continue hovering on the next stage. The experimental flight times match well with those predicted by the analysis for our vehicle. We achieve a 19% increase in flight time using the batteries in two stages compared to a single stage.
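
The staging argument above lends itself to a quick numerical check. The sketch below assumes hover power scales as m^1.5 (actuator-disk theory) and a fixed battery specific energy; the `hover_time` helper and all numbers are illustrative, not taken from the paper.

```python
def hover_time(frame_mass, stage_masses, specific_energy=200.0, k=1.0):
    """Total hover time when battery stages are consumed in order and
    ejected after use. Hover power at all-up mass m is k * m**1.5
    (arbitrary units); a battery's mass stays constant while discharging."""
    total = 0.0
    remaining = list(stage_masses)
    while remaining:
        m = frame_mass + sum(remaining)   # current all-up mass
        stage = remaining.pop(0)          # consume the first stage...
        total += specific_energy * stage / (k * m ** 1.5)
        # ...then eject it, so the next stage flies at reduced mass
    return total

# same 0.5 kg battery budget, split into one vs. two stages
single = hover_time(1.0, [0.5])
staged = hover_time(1.0, [0.25, 0.25])
print(staged > single)  # → True: staging extends flight time
```

The second stage is flown at lower all-up mass, so the same energy buys more hover time, which is the core of the endurance benefit the abstract describes.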

10:15-10:30, Paper MoAT10.2
Target Search on Road Networks with Range-Constrained UAVs and Ground-Based Mobile Recharging Vehicles

Booth, Kyle E. C. | University of Toronto |
Piacentini, Chiara | University of Toronto |
Bernardini, Sara | Royal Holloway University of London |
Beck, J. Christopher | University of Toronto |
Keywords: Aerial Systems: Applications, Surveillance Systems, Planning, Scheduling and Coordination
Abstract: We study a range-constrained variant of the multi-UAV target search problem where commercially available UAVs are used for target search in tandem with ground-based mobile recharging vehicles (MRVs) that can travel, via the road network, to meet up with and recharge a UAV. We propose a pipeline for representing the problem on real-world road networks, starting with a map of the road network and yielding a final routing graph that permits UAVs to recharge via rendezvous with MRVs. The problem is then solved using mixed-integer linear programming (MILP) and constraint programming (CP). We conduct a comprehensive simulation of our methods using real-world road network data from Scotland. The assessment investigates accumulated search reward compared to ideal and worst-case scenarios and briefly explores the impact of UAV speeds. Our empirical results indicate that CP is able to provide better solutions than MILP, overall, and that the use of a fleet of MRVs can improve the accumulated reward of the UAV fleet, supporting their inclusion for surveillance tasks.
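
As a toy illustration of the reward-maximization core of this problem, the brute-force sketch below chooses which targets a range-limited UAV visits; the paper's MILP and CP models solve a far richer variant with MRV rendezvous and recharging. The graph, rewards, and budget here are hypothetical.

```python
from itertools import permutations

def best_route(dist, reward, start, budget):
    """Brute force over visit orders: maximize accumulated reward
    subject to a total travel-distance (range) budget."""
    nodes = list(reward)
    best_gain, best_order = 0.0, ()
    for r in range(1, len(nodes) + 1):
        for order in permutations(nodes, r):
            path = (start,) + order
            cost = sum(dist[a][b] for a, b in zip(path, path[1:]))
            if cost <= budget:
                gain = sum(reward[n] for n in order)
                if gain > best_gain:
                    best_gain, best_order = gain, order
    return best_gain, best_order

# hypothetical road-network distances from depot D to targets A, B, C
dist = {
    "D": {"A": 4, "B": 6, "C": 9},
    "A": {"B": 3, "C": 7, "D": 4},
    "B": {"A": 3, "C": 4, "D": 6},
    "C": {"A": 7, "B": 4, "D": 9},
}
reward = {"A": 5, "B": 4, "C": 8}
print(best_route(dist, reward, "D", budget=12))  # → (17, ('A', 'B', 'C'))
```

Enumeration is exponential in the number of targets, which is exactly why the authors turn to MILP and CP solvers for realistic instances.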

10:30-10:45, Paper MoAT10.3
Assured Runtime Monitoring and Planning: Towards Verification of Neural Networks for Safe Autonomous Operations (I)
Video Attachment

Yel, Esen | University of Virginia |
Carpenter, Taylor | University of Pennsylvania |
Di Franco, Carmelo | University of Virginia |
Ivanov, Radoslav | University of Pennsylvania |
Kantaros, Yiannis | University of Pennsylvania |
Lee, Insup | University of Pennsylvania |
Weimer, James | University of Pennsylvania |
Bezzo, Nicola | University of Virginia |
Keywords: Aerial Systems: Applications, Novel Deep Learning Methods, Hybrid Logical/Dynamical Planning and Verification
Abstract: Autonomous systems operating in uncertain environments under the effects of disturbances and noise can reach unsafe states even while using fine-tuned controllers and precise sensors and actuators. To provide safety guarantees on such systems during motion planning, reachability analysis (RA) has been demonstrated to be a powerful tool. RA, however, suffers from high computational complexity, especially when dealing with complex systems characterized by high-order dynamics, making it hard to deploy for runtime monitoring. To deal with this issue, in this work, a neural network (NN)-based framework is proposed to perform fast online monitoring for safety, and an approach for verification of NNs is presented. Training is performed offline using precise RA tools, while the trained NN is used online as a fast safety checker for motion planning. In this way, at runtime, a planned trajectory can be quickly predicted to be safe or unsafe: when unsafe, a replanning procedure is triggered until a safe trajectory is obtained. The trained network is then verified using our recent tool Verisig, in which the NN is transformed into a hybrid system in order to provide guarantees before deployment. If the NN is not verified, the outputs of the verification are used to retrain the network until verification is achieved. Two illustrative case studies on a quadrotor aerial vehicle, a pick-up/drop-off operation and navigation in a cluttered environment, are presented to validate the proposed framework in both simulations and experiments.
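
Verisig verifies the NN by converting it into a hybrid system. For intuition about what NN verification computes, a much simpler flavor is interval bound propagation, sketched below on a tiny ReLU network with made-up weights: if the upper bound on an "unsafe score" stays below a threshold over an entire input region, that region is certified.

```python
def interval_affine(lo, hi, W, b):
    """Propagate an elementwise input interval [lo, hi] through x -> Wx + b.
    Positive weights pull from the matching bound, negative from the opposite."""
    out_lo, out_hi = [], []
    for row, bias in zip(W, b):
        l = bias + sum(w * (lo[i] if w >= 0 else hi[i]) for i, w in enumerate(row))
        h = bias + sum(w * (hi[i] if w >= 0 else lo[i]) for i, w in enumerate(row))
        out_lo.append(l)
        out_hi.append(h)
    return out_lo, out_hi

def relu(v):
    # ReLU is monotone, so applying it to each bound is sound
    return [max(0.0, x) for x in v]

# hypothetical 2-input network: one hidden ReLU layer, one "unsafe score" output
W1, b1 = [[1.0, -2.0], [0.5, 1.0]], [0.0, -1.0]
W2, b2 = [[1.0, 1.0]], [0.0]

lo, hi = interval_affine([0.0, 0.0], [1.0, 1.0], W1, b1)  # input region [0,1]^2
lo, hi = relu(lo), relu(hi)
lo, hi = interval_affine(lo, hi, W2, b2)
print(hi[0])  # → 1.5; if this bound is below the safety threshold, the region is certified
```

Interval bounds are cheap but conservative; tools like Verisig trade more computation for tighter guarantees.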

10:45-11:00, Paper MoAT10.4
UAV-AdNet: Unsupervised Anomaly Detection Using Deep Neural Networks for Aerial Surveillance
Video Attachment

Bozcan, Ilker | Aarhus University |
Kayacan, Erdal | Aarhus University |
Keywords: Aerial Systems: Applications, Aerial Systems: Mechanics and Control, Aerial Systems: Perception and Autonomy
Abstract: Anomaly detection is a key goal of autonomous surveillance systems, which should be able to flag unusual observations. In this paper, we propose a holistic anomaly detection system using deep neural networks for surveillance of critical infrastructures (e.g., airports, harbors, warehouses) using an unmanned aerial vehicle (UAV). First, we present a heuristic method for the explicit representation of spatial layouts of objects in bird-view images. Then, we propose a deep neural network architecture for unsupervised anomaly detection (UAV-AdNet), which is trained jointly on environment representations and GPS labels of bird-view images. Unlike previous studies, we combine GPS and image data to predict abnormal observations. We evaluate our model against several baselines on our aerial surveillance dataset and show that it performs better in scene reconstruction and several anomaly detection tasks. The code, trained models, dataset, and video will be available at https://bozcani.github.io/uavadnet.

11:00-11:15, Paper MoAT10.5
A Morphing Cargo Drone for Safe Flight in Proximity of Humans
Video Attachment

Kornatowski, Przemyslaw Mariusz | Ecole Polytechnique Federale De Lausanne (EPFL) |
Feroskhan, Mir | Nanyang Technological University |
Stewart, William | Ecole Polytechnique Federale De Lausanne |
Floreano, Dario | Ecole Polytechnique Federale De Lausanne (EPFL)
Keywords: Aerial Systems: Applications, Intelligent Transportation Systems, Field Robots
Abstract: Delivery drones used by logistics companies today are equipped with unshielded propellers, which represent a major hurdle for in-hand parcel delivery. The exposed propeller blades are hazardous to unsuspecting bystanders, pets, and untrained users. One solution to provide safety is to enclose a drone with an all-encompassing protective cage. However, the structures of existing cage designs have low density in order to minimize obstruction of propeller airflow, so as to not decrease efficiency. The relatively large openings in the cage do not protect hands and fingers from fast rotating propellers. Here we describe a novel approach to safety and aerodynamic efficiency by means of a high-density cage and morphing arms loosely inspired by the box turtle. The drone cage is made of a dense and lightweight grid. When flying in proximity of humans, the arms and propellers are retracted and fully sealed within the cage, thus making the drone safe and also reducing the total footprint. When flying at cruising altitude far from people and objects, the arms and propellers extend out of the protective grid, thus increasing aerodynamic efficiency by more than 20%.

MoAT11
Room T11
Aerial Systems: Applications II
Regular session
Chair: Minor, Mark | University of Utah
Co-Chair: Yu, Kee-Ho | Chonbuk National University

10:00-10:15, Paper MoAT11.1
ROSflight: A Lean Open-Source Research Autopilot

Jackson, James | Brigham Young University |
Koch, Daniel | Brigham Young University |
Henrichsen, Trey | Brigham Young University |
McLain, T.W. | Brigham Young University |
Keywords: Aerial Systems: Applications
Abstract: ROSflight is a lean, open-source autopilot system developed with the primary goal of supporting the needs of researchers working with micro aerial vehicle systems. The project consists of firmware designed to run on low-cost, readily available flight controller boards, as well as ROS packages for interfacing between the flight controller and application code and for simulation. The core objectives of the project are as follows: maintain a small, easy-to-understand code base; provide high-bandwidth, low-latency communication between the flight controller and application code; provide a straightforward interface to research application code; allow for robust safety pilot integration; and enable true software-in-the-loop simulation capability.

10:15-10:30, Paper MoAT11.2
Online Weight-Adaptive Nonlinear Model Predictive Control

Kostadinov, Dimche | University of Zurich, Robotics and Perception Group |
Scaramuzza, Davide | University of Zurich |
Keywords: Aerial Systems: Applications
Abstract: Nonlinear Model Predictive Control (NMPC) is a powerful and widely used technique for nonlinear dynamic process control under constraints. In NMPC, the state and control weights of the corresponding state and control costs are commonly selected based on human-expert knowledge, which usually reflects the acceptable stability in practice. Although broadly used, this approach might not be optimal for the execution of a trajectory with the lowest positional error and sufficiently "smooth" changes in the predicted controls. Furthermore, NMPC with an online weight update strategy for fast, agile, and precise unmanned aerial vehicle navigation, has not been studied extensively. To this end, we propose a novel control problem formulation that allows online updates of the state and control weights. As a solution, we present an algorithm that consists of two alternating stages: (i) state and command variable prediction and (ii) weights update. We present a numerical evaluation with a comparison and analysis of different trade-offs for the problem of quadrotor navigation. Our computer simulation results show improvements of up to 70% in the accuracy of the executed trajectory compared to the standard solution of NMPC with fixed weights.
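
The two alternating stages described above can be sketched on a deliberately tiny example. Below, the "plant" is a scalar single integrator, the one-step MPC has a closed-form solution, and the weight-update rule is an illustrative heuristic, not the paper's algorithm.

```python
def mpc_step(x, x_ref, q, r):
    """Stage (i): one-step-horizon MPC. Minimizing
    q*(x + u - x_ref)**2 + r*u**2 over u has the closed form below."""
    return q * (x_ref - x) / (q + r)

def run(x0, x_ref, steps=20, q=1.0, r=1.0, lr=0.5):
    x = x0
    for _ in range(steps):
        u = mpc_step(x, x_ref, q, r)   # stage (i): predict the control
        x = x + u                      # apply it to the integrator plant
        err = abs(x_ref - x)
        q = q + lr * err               # stage (ii): raise the state weight
        # while tracking error persists, tightening tracking over time
    return x

final = run(0.0, 1.0)
print(abs(final - 1.0) < 1e-3)  # → True: the adapted weights drive x to the reference
```

With fixed weights the error decays geometrically at rate r/(q+r); letting q grow while error persists accelerates convergence, which mirrors the accuracy gains the abstract reports for online weight adaptation.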

10:30-10:45, Paper MoAT11.3
CinemAirSim: A Camera-Realistic Robotics Simulator for Cinematographic Purposes
Video Attachment

Pueyo, Pablo | Universidad De Zaragoza |
Cristofalo, Eric | Stanford University |
Montijano, Eduardo | Universidad De Zaragoza |
Schwager, Mac | Stanford University |
Keywords: Software, Middleware and Programming Environments, Simulation and Animation, Aerial Systems: Applications
Abstract: Unmanned Aerial Vehicles (UAVs) are becoming increasingly popular in the film and entertainment industries, in part because of their maneuverability and the perspectives they enable. While there exist methods for controlling the position and orientation of drones for visibility, other artistic elements of the filming process, such as focal blur, remain unexplored in the robotics community. The lack of cinematographic robotics solutions is partly due to the cost associated with the cameras and devices used in the filming industry, but also because state-of-the-art photo-realistic robotics simulators only utilize a fully in-focus pinhole camera model, which does not incorporate these desired artistic attributes. To overcome this, the main contribution of this work is to endow the well-known drone simulator AirSim with a cinematic camera and to extend its API to control all of its parameters in real time, including various filming lenses and common cinematographic properties. In this paper, we detail the implementation of our AirSim modification, CinemAirSim, present examples that illustrate the potential of the new tool, and highlight the new research opportunities that the use of cinematic cameras can bring to research in robotics and control.

10:45-11:00, Paper MoAT11.4
Design and Evaluation of a Perching Hexacopter Drone for Energy Harvesting from Power Lines

Kitchen, Ryan | University of Utah |
Bierwolf, Nick | University of Utah |
Harbertson, Sean | University of Utah |
Platt, Brage | University of Utah |
Owen, Dean | University of Utah |
Griesman, Klaus | University of Utah |
Minor, Mark | University of Utah |
Keywords: Aerial Systems: Applications
Abstract: With a growing number of applications for UAVs worldwide, the need for extended battery life is a clear limitation, and with current flight times many users would benefit greatly from an option for charging these devices in the field. The objective of this project is to investigate the feasibility of inductively harvesting energy from a power line cable for applications such as charging a UAV. The research investigates a dual-hook perching device that securely attaches to a power cable and aligns an inductive core with the cable for harvesting energy from its electromagnetic field. Modeling and analysis of the core highlight critical design parameters, leading to evaluation of circular, semi-cylindrical, and u-shaped prototypes designed to interface with a 1” power cable. Underactuated two-jaw manipulators at each end of the coil are proposed for grasping the cable and aligning it with the charging coil, ultimately providing a firm grasp and perch. An open-source hexacopter drone was used in this study for integration with the charging system. The results can be used as a starting point to study the reliability of this method of charging and to further investigate the perching abilities of UAVs.

11:00-11:15, Paper MoAT11.5
Flight Path Planning of Solar-Powered UAV for Sustainable Communication Relay

Guerra Padilla, Giancarlo Eder | Chonbuk National University |
Kim, Kun-Jung | Chonbuk National University |
Park, Seok-Hwan | Jeonbuk National University |
Yu, Kee-Ho | Chonbuk National University |
Keywords: Aerial Systems: Applications, Energy and Environment-Aware Automation, Motion and Path Planning
Abstract: Communication is a key aspect of modern life. Unfortunately, when natural disasters occur, the communication system and infrastructure of a city can be partially lost and, in the worst case, completely destroyed. In such cases, communication is a crucial part of search-and-rescue missions. This paper focuses on developing an aerial communication relay platform as an effective solution for communication loss in a natural disaster. The model used considers the aircraft altitude and attitude, which affect the energy acquisition and consumption, and the signal fading effects. The flight path planning is performed using a nonlinear optimization technique, the Hermite-Simpson collocation method. For a realistic communication model of urban signal loss and path propagation, the building layout of a 2 km radius circular area in each of two cities in South Korea (Seoul and Jeonju) was obtained. Simulation experiments for the different urban environments are performed to test the communication reliability, focusing on the relation between the Unmanned Aerial Vehicle (UAV) and the Ground Users (GU). As a result of the simulation, an optimal flight path in high-rise urban and urban microcell environments is obtained. The flight path indicates the feasibility of endurance flights for low-altitude communication-relay aircraft, incorporating the signal fading model alongside solar energy acquisition into the case study.
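
The Hermite-Simpson collocation method used above turns the dynamics into algebraic "defect" constraints on each trajectory segment: x_{k+1} - x_k = (h/6)(f_k + 4 f_mid + f_{k+1}), with the midpoint state taken from the Hermite interpolant. The sketch below checks that defect on a system with a known solution; it is a generic numerical illustration, not the paper's planner.

```python
import math

def simpson_defect(xk, xk1, fk, fk1, f, h):
    """Hermite-Simpson defect for one segment of dx/dt = f(x).
    A feasible collocated trajectory drives this defect to zero."""
    x_mid = 0.5 * (xk + xk1) + (h / 8.0) * (fk - fk1)  # Hermite midpoint state
    f_mid = f(x_mid)
    return xk1 - xk - (h / 6.0) * (fk + 4.0 * f_mid + fk1)

# check on dx/dt = x, whose exact solution is e^t: the defect is tiny
f = lambda x: x
h = 0.1
xk, xk1 = 1.0, math.exp(h)
d = simpson_defect(xk, xk1, f(xk), f(xk1), f, h)
print(abs(d) < 1e-6)  # → True: exact states nearly satisfy the collocation constraint
```

An NLP solver stacks one such constraint per segment, plus energy and communication costs, and optimizes the states and controls jointly.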

MoAT12
Room T12
Aerial Systems: Cooperating Robots
Regular session
Chair: Tadakuma, Kenjiro | Tohoku University
Co-Chair: Chirarattananon, Pakpong | City University of Hong Kong

10:00-10:15, Paper MoAT12.1
SplitFlyer: A Modular Quadcopter That Disassembles into Two Flying Robots
Video Attachment

Bai, Songnan | City University of Hong Kong |
Tan, Shixin | City University of Hong Kong |
Chirarattananon, Pakpong | City University of Hong Kong |
Keywords: Aerial Systems: Mechanics and Control, Aerial Systems: Applications, Cellular and Modular Robots
Abstract: We introduce SplitFlyer--a novel quadcopter with an ability to disassemble into two self-contained bicopters through human assistance. As a subunit, the bicopter is a severely underactuated aerial vehicle equipped with only two propellers. Still, each bicopter is capable of independent flight. To achieve this, we provide an analysis of the system dynamics by relaxing the control over the yaw rotation, allowing the bicopter to maintain its large spinning rate in flight. Taking into account the gyroscopic motion, the dynamics are described and a cascaded control strategy is developed. We constructed a transformable prototype to demonstrate consecutive flights in both configurations. The results verify the proposed control strategy and show the potential of the platform for future research in modular aerial swarm robotics.

10:15-10:30, Paper MoAT12.2
Towards Cooperative Transport of a Suspended Payload Via Two Aerial Robots with Inertial Sensing
Video Attachment

Xie, Heng | City University of Hong Kong |
Cai, Xinyu | City University of Hong Kong
Chirarattananon, Pakpong | City University of Hong Kong |
Keywords: Aerial Systems: Mechanics and Control, Aerial Systems: Applications, Cooperating Robots
Abstract: This paper addresses the problem of cooperative transport of a point mass hoisted by two aerial robots. Treating the robots as a leader and a follower, the follower stabilizes the system with respect to the leader using only feedback from its Inertial Measurement Units (IMU). This is accomplished by neglecting the acceleration of the leader, analyzing the system through the generalized coordinates or the cables' angles, and employing an observation model based on the IMU measurements. A lightweight estimator based on an Extended Kalman Filter (EKF) and a controller are derived to stabilize the robot-payload-robot system. The proposed methods are verified with extensive flight experiments, first with a single robot and then with two robots. The results show that the follower is capable of realizing the desired quasi-static trajectory using only its IMU measurements. The outcomes demonstrate promising progress towards the goal of autonomous cooperative transport of a suspended payload via small flying robots with minimal sensing and computational requirements.
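
The lightweight estimator above is built around the standard EKF predict/update cycle. The scalar sketch below shows that cycle in isolation; the state, models, and noise values are hypothetical stand-ins, not the paper's cable-angle model.

```python
def ekf_step(x, P, u, z, f, F, h, H, Q, R):
    """One scalar EKF cycle: propagate through motion model f with
    Jacobian F, then correct with measurement z via model h, Jacobian H."""
    # predict
    x_pred = f(x, u)
    P_pred = F * P * F + Q
    # update
    y = z - h(x_pred)        # innovation
    S = H * P_pred * H + R   # innovation covariance
    K = P_pred * H / S       # Kalman gain
    return x_pred + K * y, (1.0 - K * H) * P_pred

# toy constant-state example: f(x, u) = x, h(x) = x, noisy readings near 1.0
x, P = 0.0, 1.0
for z in [1.2, 0.8, 1.1, 0.9]:
    x, P = ekf_step(x, P, 0.0, z, lambda x, u: x, 1.0, lambda x: x, 1.0, 0.01, 0.5)
print(round(x, 2), round(P, 2))
```

Each update shrinks the covariance P, so the filter weights new measurements less as its estimate settles; the paper runs a vector version of this loop on IMU-derived observations of the cables' angles.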

10:30-10:45, Paper MoAT12.3
Active Vertical Takeoff of an Aquatic UAV
Video Attachment

Tétreault, Étienne | Université De Sherbrooke |
Rancourt, David | Université De Sherbrooke |
Lussier Desbiens, Alexis | Université De Sherbrooke |
Keywords: Aerial Systems: Mechanics and Control, Marine Robotics
Abstract: To extend the mission duration of smaller unmanned aerial vehicles, this paper presents a solar recharge approach that uses lakes as landing, charging, and standby areas. The Sherbrooke University Water-Air VEhicle (SUWAVE) is a small aircraft capable of vertical takeoff and landing on water. A second-generation prototype has been developed with new capabilities: solar recharging, autonomous flight, and a larger takeoff envelope using an actuated takeoff strategy. A 3D dynamic model of the new takeoff maneuver is conceived to understand the major forces present during this critical phase. Numerical simulations are validated with experimental results from real takeoffs made in the laboratory and on lakes. The final prototype is shown to have accomplished repeated cycles of autonomous takeoff, followed by assisted flight and landing, without any human physical intervention between cycles.

10:45-11:00, Paper MoAT12.4
Energy-Based Cooperative Control for Landing Fixed-Wing UAVs on Mobile Platforms under Communication Delays

Muskardin, Tin | German Aerospace Center (DLR) |
Coelho, Andre | German Aerospace Center (DLR) |
Rodrigues Della Noce, Eduardo | German Aerospace Center (DLR) |
Ollero, Anibal | University of Seville |
Kondak, Konstantin | German Aerospace Center |
Keywords: Aerial Systems: Applications, Cooperating Robots, Telerobotics and Teleoperation
Abstract: The landing of a fixed-wing UAV on top of a mobile landing platform requires a cooperative control strategy based on relative motion estimates. These estimates typically suffer from communication or processing time delays, which can render an otherwise stable control system unstable. Such effects must therefore be considered during the design of the cooperative landing controller. In this letter, the application of a model-free passivity-based stabilizing controller is proposed, which monitors the energy flows in the system and actively dissipates excess active energy by means of adaptive damping elements. In doing so, overall system passivity, and consequently stability, is enforced in a straightforward and easy-to-implement way. The proposed control system is validated in numerical simulations for round-trip delays of up to 4 seconds.

11:00-11:15, Paper MoAT12.5
Toward Enabling a Hundred Drones to Land in a Minute
Video Attachment

Fujikura, Daiki | Tohoku University
Tadakuma, Kenjiro | Tohoku University |
Watanabe, Masahiro | Tohoku University |
Okada, Yoshito | Tohoku University |
Ohno, Kazunori | Tohoku University |
Tadokoro, Satoshi | Tohoku University |
Keywords: Aerial Systems: Applications, Aerial Systems: Mechanics and Control
Abstract: Drone research and development currently receives significant attention worldwide. In particular, delivery services employ drones as a viable method to improve delivery efficiency by using several unmanned drones. Research has been conducted to realize complete automation of drone control for such services. However, regarding takeoff and landing ports, conventional methods have focused on the landing operation of a single drone, and the continuous landing of multiple drones has not been realized. To address this issue, we propose a completely novel port system, “EAGLES Port,” that allows several drones to land and take off continuously in a short time. Experiments verified that the landing time efficiency of the proposed port is ideally 7.5 times higher than that of conventional vertical landing systems. Moreover, the system can tolerate 270 mm of horizontal positional error, ±30 degrees of angular error in the drone’s approach (±40 degrees with the proposed gate mechanism), and an approach speed of up to 1.9 m/s. This technology significantly contributes to the scalability of drone usage and is therefore critical for the development of future drone ports for the landing of automated drone swarms.

11:15-11:30, Paper MoAT12.6
Adaptive Aerial Grasping and Perching with Dual Elasticity Combined Suction Cup
Video Attachment

Liu, Sensen | Shanghai Jiao Tong University
Dong, Wei | Shanghai Jiao Tong University
Ma, Zhao | Shanghai Jiao Tong University
Sheng, Xinjun | Shanghai Jiao Tong University |
Keywords: Aerial Systems: Mechanics and Control, Grippers and Other End-Effectors, Mobile Manipulation
Abstract: To perch on or grasp a target surface using a suction-cup-based manipulator, precise contact control is commonly required. An improper contact angle or insufficient contact force may cause failure. To enhance tolerance to flight control insufficiency, a suction cup that comprises an inner soft cup and an outer firm cup is investigated, facilitating engagement without reducing the adhesion stiffness. The soft cup adapts to the angular error induced by the multicopter, and the resulting adhesion force can draw in the firm cup and correct the angular error between the firm cup and the surface. These effects increase the engagement rate and reduce the dependence on precise control. The outer firm cup provides a large adhesion force and a stiff base for subsequent tasks. To reduce the air evacuation time in the firm cup, a novel self-sealing structure is designed. Based on the combined cup, we build a multifunctional aerial manipulation system that can execute perching or lateral aerial grasping tasks. With the proposed prototype, comparative flight experiments involving perching on a wall under disturbance and grasping an object are conducted. The results demonstrate that our proposed suction cup outperforms the conventional cup.

MoAT13
Room T13
Aerial Systems: Environmental Monitoring
Regular session
Chair: Rivas-Davila, Juan | Stanford University
Co-Chair: Das, Jnaneshwar | Arizona State University

10:00-10:15, Paper MoAT13.1
Wind and the City: Utilizing UAV-Based In-Situ Measurements for Estimating Urban Wind Fields
Video Attachment

Patrikar, Jay | Carnegie Mellon University |
Moon, Brady | Carnegie Mellon University |
Scherer, Sebastian | Carnegie Mellon University |
Keywords: Aerial Systems: Applications, Environment Monitoring and Management, Field Robots
Abstract: A high-quality estimate of wind fields can potentially improve the safety and performance of Unmanned Aerial Vehicles (UAVs) operating in dense urban areas. Computational Fluid Dynamics (CFD) simulations can help provide a wind field estimate, but their accuracy depends on the knowledge of the distribution of the inlet boundary conditions. This paper provides a real-time methodology using a Particle Filter (PF) that utilizes wind measurements from a UAV to solve the inverse problem of predicting the inlet conditions as the UAV traverses the flow field. A Gaussian Process Regression (GPR) approach is used as a surrogate function to maintain the real-time nature of the proposed methodology. Real-world experiments with a UAV at an urban test-site prove the efficacy of the proposed method. The flight test shows that the 95% confidence interval for the difference between the mean estimated inlet conditions and mean ground truth measurements closely bound zero, with the difference in mean angles being between -3.67 degrees and 1.2 degrees and the difference in mean magnitudes being between -0.206 m/s and 0.020 m/s.
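
The particle-filter core of the method above can be sketched compactly. Here the unknown is a single inlet wind magnitude and the forward model mapping inlet to local measurement is the identity (the paper uses a GPR surrogate of CFD instead); all values are illustrative.

```python
import math
import random

random.seed(0)

def pf_estimate(measurements, n=500, meas_std=0.5):
    """Particle filter over one unknown inlet condition: weight particles
    by a Gaussian measurement likelihood, resample, jitter, repeat."""
    particles = [random.uniform(0.0, 10.0) for _ in range(n)]  # uniform prior
    for z in measurements:
        w = [math.exp(-0.5 * ((z - p) / meas_std) ** 2) for p in particles]
        particles = random.choices(particles, weights=w, k=n)   # resample
        particles = [p + random.gauss(0.0, 0.05) for p in particles]  # jitter
    return sum(particles) / n  # posterior mean estimate

true_wind = 4.0  # hypothetical ground-truth inlet magnitude, m/s
zs = [true_wind + random.gauss(0.0, 0.5) for _ in range(30)]
est = pf_estimate(zs)
print(round(est, 1))
```

The jitter step guards against particle depletion after resampling; in the paper, evaluating the forward model is the expensive part, which is why a GPR surrogate replaces the CFD solver to keep the filter real-time.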

10:15-10:30, Paper MoAT13.2
Microdrone-Equipped Mobile Crawler Robot System, DIR-3, for High-Step Climbing and High-Place Inspection
Video Attachment

Ogusu, Yuji | AIST |
Tomita, Kohji | National Institute of Advanced Industrial Science and Technology
Kamimura, Akiya | National Institute of Advanced Industrial Science and Technology |
Keywords: Multi-Robot Systems, Aerial Systems: Applications, Search and Rescue Robots
Abstract: Mobile robots of various types have been proposed for infrastructure inspection and disaster investigation. For such mobile robot applications, accessing the areas is of primary importance for missions. Therefore, various locomotive mechanisms have been studied. We introduce a novel mobile robot system, named DIR-3, combining a crawler robot and a microdrone. By rotating its arm back and forth, DIR-3, a very simple, lightweight crawler robot with a single 360-degree rotatable U-shaped arm, can climb up/down an 18 cm high step, 1.5 times its height. Furthermore, to inspect high places, which is considered difficult for conventional mobile robots, a drone mooring system for mobile robots is presented. The tethered microdrone of DIR-3 can be controlled freely as a flying camera by switching operating modes on the graphic user interface. The drone mooring system has a unique tension-controlled winding mechanism that enables stable landing on DIR-3 from any location in the air, in addition to measurement and estimation of relative positions of the drone. We evaluated the landing capability, position estimation accuracy, and following control of the drone using the winding mechanism. Results show the feasibility of the proposed system for inspection of cracks in a 5 m high concrete wall.

10:30-10:45, Paper MoAT13.3
MHYRO: Modular HYbrid RObot for Contact Inspection and Maintenance in Oil & Gas Plants
Video Attachment

López, Abraham | University of Seville, GRVC |
Sanchez-Cuevas, Pedro J | University of Seville |
Suarez, Alejandro | University of Seville |
Soldado, Ámbar | University of Seville |
Ollero, Anibal | University of Seville |
Heredia, Guillermo | University of Seville |
Keywords: Aerial Systems: Applications
Abstract: In this paper, we propose a new robot concept that is hybrid, including aerial and crawling subsystems and an arm, and also modular, with interchangeable crawling subsystems for different pipe configurations, since it has been designed to cover most industrial oil & gas end-users’ requirements. The robot has the same ability as aerial robots to reach otherwise inaccessible locations, but makes the inspection more efficient, increasing operation time since crawling requires less energy than flying, and achieving better accuracy in the inspection. It also integrates safety-related features for operating in the potentially explosive atmosphere of a refinery, being able to immediately interrupt the inspection if a hazardous situation is detected and carry sensitive parts such as batteries and electronic devices away as soon as possible. The paper presents the design of this platform in detail and shows the feasibility of the whole system through indoor experiments.

10:45-11:00, Paper MoAT13.4
Geomorphological Analysis Using Unpiloted Aircraft Systems, Structure from Motion, and Deep Learning

Chen, Zhiang | Arizona State University |
Scott, Tyler | Arizona State University |
Bearman, Sarah | Arizona State University |
Anand, Harish | Arizona State University |
Keating, Devin | Arizona State University |
Scott, Chelsea | Arizona State University |
Arrowsmith, Ramon | Arizona State University |
Das, Jnaneshwar | Arizona State University |
Keywords: Aerial Systems: Applications, Field Robots, Environment Monitoring and Management
Abstract: We present a pipeline for geomorphological analysis that uses structure from motion (SfM) and deep learning on close-range aerial imagery to estimate spatial distributions of rock traits (size, roundness, and orientation) along a tectonic fault scarp. The properties of the rocks on the fault scarp derive from the combination of initial volcanic fracturing and subsequent tectonic and geomorphic fracturing, and our pipeline allows scientists to leverage UAS-based imagery to gain a better understanding of such surface processes. We start by using SfM on aerial imagery to produce georeferenced orthomosaics and digital elevation models (DEM). A human expert then annotates rocks on a set of image tiles sampled from the orthomosaics, and these annotations are used to train a deep neural network to detect and segment individual rocks in the entire site. The extracted semantic information (rock masks) on large volumes of unlabeled, high-resolution SfM products allows subsequent structural analysis and shape descriptors to estimate rock size, roundness, and orientation. We present results of two experiments conducted along a fault scarp in the Volcanic Tablelands near Bishop, California. We conducted the first, proof-of-concept experiment with a DJI Phantom 4 Pro equipped with an RGB camera and examined whether elevation information assisted instance segmentation from RGB channels. Rock-trait histograms along and across the fault scarp were obtained via neural network inference. In the second experiment, we deployed a hexrotor and a multispectral camera to produce a DEM and five spectral orthomosaics in red, green, blue, red edge, and near infrared. We focused on examining the effectiveness of different combinations of input channels in instance segmentation.
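
Once rocks are segmented, shape descriptors turn masks into traits. A common roundness proxy (our choice here for illustration, not necessarily the paper's descriptor) is circularity = 4·pi·A / P², which equals 1 for a disc. The sketch below computes it on a binary mask using pixel counts for area and boundary-edge counts for perimeter.

```python
import math

def circularity(mask):
    """Circularity of the foreground region in a binary mask (list of
    0/1 rows): 4*pi*area / perimeter**2, perimeter counted as exposed
    pixel edges. Values near 1 indicate round, lower values angular."""
    rows, cols = len(mask), len(mask[0])
    area, perim = 0, 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c]:
                area += 1
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    rr, cc = r + dr, c + dc
                    if not (0 <= rr < rows and 0 <= cc < cols) or not mask[rr][cc]:
                        perim += 1  # edge exposed to background or border
    return 4.0 * math.pi * area / perim ** 2

square = [[1] * 4 for _ in range(4)]  # 4x4 square: area 16, perimeter 16
print(round(circularity(square), 3))  # → 0.785, i.e. pi/4 for a square
```

Applied per detected rock mask and binned along the scarp, such descriptors yield the rock-trait histograms the abstract describes.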
|
|
11:00-11:15, Paper MoAT13.5 | |
>Lightweight High Voltage Generator for Untethered Electroadhesive Perching of Micro Air Vehicles |
> Video Attachment
|
|
Park, Sanghyeon | Stanford University |
Drew, Daniel S. | Stanford University |
Follmer, Sean | Stanford University |
Rivas-Davila, Juan | Stanford University |
Keywords: Aerial Systems: Applications, Surveillance Systems
Abstract: The limited in-flight battery lifetime of centimeter-scale flying robots is a major barrier to their deployment, especially in applications which take advantage of their ability to reach high vantage points. Perching, where flyers remain fixed in space without use of flight actuators by attachment to a surface, is a potential mechanism to overcome this barrier. Electroadhesion, a phenomenon where an electrostatic force normal to a surface is generated by induced charge, has been shown to be an increasingly viable perching mechanism as robot size decreases due to the increased surface-area-to-volume ratio. Typically electroadhesion requires high (> 1 kV) voltages to generate useful forces, leading to relatively large power supplies that cannot be carried on-board a micro air vehicle. In this paper, we motivate the need for application-specific power electronics solutions for electroadhesive perching, develop a useful figure of merit (the "specific voltage") for comparing and guiding efforts, and walk through the design methodology of a system implementation. We conclude by showing that this high voltage power supply enables, for the first time in the literature, tetherless electroadhesive perching of a commercial micro quadrotor.
|
|
11:15-11:30, Paper MoAT13.6 | |
>Unmanned Aerial Sensor Placement for Cluttered Environments |
|
Farinha, Andre | Imperial College |
Zufferey, Raphael | Imperial College London |
Zheng, Peter | Imperial College London |
Armanini, Sophie Franziska | Imperial College London |
Kovac, Mirko | Imperial College London |
Keywords: Aerial Systems: Applications, Robotics in Hazardous Fields, Sensor Networks
Abstract: Unmanned aerial vehicles (UAVs) have been shown to be useful for the installation of wireless sensor networks (WSNs). Notably, the accurate placement of sensor nodes using UAVs opens opportunities for many industrial and scientific uses, particularly in hazardous environments or inaccessible locations. This publication proposes and demonstrates a new aerial sensor placement method based on impulsive launching. Since direct physical interaction is not required, sensor deployment can be achieved in cluttered environments where the target location cannot be safely approached by the UAV, such as under the forest canopy. The proposed method is based on mechanical energy storage and an ultralight shape memory alloy (SMA) trigger. The developed aerial system weighs a total of 650 grams and can execute up to 17 deployments on a single battery charge. The system deploys 30-gram sensors from up to 4 meters away from a target with an accuracy of ±10 cm. The aerial deployment method is validated through more than 80 successful deployments in indoor and outdoor environments. The proposed approach can be integrated into field operations and complement other robotic or manual sensor placement procedures. This would bring benefits for demanding industrial applications, scientific field work, smart cities, and hazardous environments [Video attachment: https://youtu.be/duPRXCyo6cY].
|
|
MoAT14 |
Room T14 |
Aerial Systems: Mechanics & Control I |
Regular session |
Chair: Bergbreiter, Sarah | Carnegie Mellon University |
Co-Chair: Szafir, Daniel J. | University of Colorado Boulder |
|
10:00-10:15, Paper MoAT14.1 | |
>In-Flight Efficient Controller Auto-Tuning Using a Pair of UAVs |
|
Giernacki, Wojciech | Poznan University of Technology |
Horla, Dariusz | Poznan University of Technology |
Saska, Martin | Czech Technical University in Prague |
Keywords: Multi-Robot Systems, Aerial Systems: Perception and Autonomy, Optimization and Optimal Control
Abstract: In the paper, a pair of auto-tuning methods for fixed-parameter controllers is presented, applied to multirotor unmanned aerial vehicle (UAV) control. In both cases, the automated search for the best altitude controller parameters is carried out with a modified golden-section search method, for a selected cost function, during the flight of a pair of UAVs. All calculations are performed in real time in an iterative manner, using only basic sensory information, namely the current altitude of each UAV. The auto-tuning process places a negligibly low computational demand, and the parameters are obtained rapidly with no dynamic model of the UAV needed. In both methods, using a pair of UAVs in the tuning process increases the level of control performance, which has been proven by means of multiple outdoor experiments. The first method increases the precision of the obtained controller parameters by averaging sensory information over the pair of UAVs, whereas in the second, the search space is explored faster by exchanging measurement information between the units. The latter is of special importance when seeking the best controller parameters under the limited experiment duration imposed by multirotor UAV flight times.
|
|
10:15-10:30, Paper MoAT14.2 | |
>A Novel Trajectory Optimization for Affine Systems: Beyond Convex-Concave Procedure |
> Video Attachment
|
|
Rastgar, Fatemeh | University of Tartu |
Singh, Arun Kumar | Tampere University of Technology, Finland |
Masnavi, Houman | Institute of Technology, University of Tartu |
Kruusamäe, Karl | University of Tartu |
Aabloo, Alvo | University of Tartu, IMS Lab |
Keywords: Optimization and Optimal Control, Motion and Path Planning, Autonomous Vehicle Navigation
Abstract: Trajectory optimization under affine motion models and convex cost functions is often solved through the convex-concave procedure (CCP), wherein the non-convex collision avoidance constraints are replaced with their affine approximations. Although mathematically rigorous, CCP has some critical limitations. First, it requires a collision-free initial guess for the solution trajectory, which is difficult to obtain, especially in dynamic environments. Second, at each iteration, CCP involves solving a convex constrained optimization problem, which becomes prohibitive for real-time computation even with a moderate number of obstacles if long planning horizons are used. In this paper, we propose a novel trajectory optimization algorithm which, like CCP, involves solving convex optimization problems but can work with an arbitrary initial guess. Moreover, our proposed optimizer can be computationally up to a few orders of magnitude faster than CCP while achieving similar or better optimal cost. The reduced computation time, in turn, stems from some interesting mathematical structures in our optimizer which allow for distributed computation and obtaining solutions in symbolic form. We validate the proposed optimizer on several benchmarks with static and dynamic obstacles.
|
|
10:30-10:45, Paper MoAT14.3 | |
>Development of a Passive Skid for Multicopter Landing on Rough Terrain |
> Video Attachment
|
|
Xu, Maozheng | Hiroshima University |
Sumida, Naoto | Hiroshima University |
Takaki, Takeshi | Hiroshima University |
Keywords: Underactuated Robots
Abstract: Landing is an essential part of multicopter task operations. A multicopter has relatively stringent requirements for landing, particularly regarding surface flatness, and landing on rough terrain with normal skids is currently difficult. Therefore, research is being conducted on skids capable of landing on rough terrain. In this paper, a passive skid for multicopter landing on rough terrain is proposed. The proposed device builds on a previous study of a multicopter carrying an electric robot arm used only for object manipulation; the idea is to give such a multicopter the ability to land in a variety of situations, and the passive skid is designed accordingly. By using a slope to simulate rough terrain, the range of landing conditions in which the multicopter can maintain its pose and the frictional torque of the passive joint are analyzed. Further, experiments are conducted to demonstrate that landing can be achieved using the proposed skid.
|
|
10:45-11:00, Paper MoAT14.4 | |
>Template-Based Optimal Robot Design with Application to Passive-Dynamic Underactuated Flapping |
|
De, Avik | Harvard University |
Wood, Robert | Harvard University |
Keywords: Optimization and Optimal Control, Aerial Systems: Mechanics and Control
Abstract: We present a novel paradigm and algorithm for optimal design of underactuated robot platforms in highly-constrained nonconvex parameter spaces. We apply this algorithm to two variants of the mature RoboBee platform, numerically demonstrating predicted performance improvements of over 10% in some cases by algorithmically reasoning about variable effective-mechanical-advantage (EMA) transmissions, higher aspect ratio (AR) wing designs, and force-power tradeoffs. The algorithm can currently be applied to any underactuated mechanical system with one actuated degree of freedom (DOF), and can be easily extended to arbitrary configuration spaces and dynamics.
|
|
11:00-11:15, Paper MoAT14.5 | |
>A Whisker-Inspired Fin Sensor for Multi-Directional Airflow Sensing |
|
Kim, Suhan | Carnegie Mellon University |
Kubicek, Regan | Carnegie Mellon University |
Paris, Aleix | Massachusetts Institute of Technology |
Tagliabue, Andrea | Massachusetts Institute of Technology |
How, Jonathan Patrick | Massachusetts Institute of Technology |
Bergbreiter, Sarah | Carnegie Mellon University |
Keywords: Mechanism Design, Micro/Nano Robots, Aerial Systems: Perception and Autonomy
Abstract: This work presents the design, fabrication, and characterization of an airflow sensor inspired by the whiskers of animals. The body of the whisker was replaced with a fin structure in order to increase the air resistance. The fin was suspended by a micro-fabricated spring system at the bottom. A permanent magnet was attached beneath the spring, and the motion of the fin was captured by a readily accessible, low-cost 3D magnetic sensor located below the magnet. The sensor system was modeled in terms of the dimensional parameters of the fin and the spring stiffness, which were optimized to improve the performance of the sensor. The system response was then characterized using a commercial wind tunnel, and the results were used for sensor calibration. The sensor was integrated into a micro aerial vehicle (MAV) and demonstrated the capability of capturing the velocity of the MAV by sensing the relative airflow during flight.
|
|
11:15-11:30, Paper MoAT14.6 | |
>PufferBot: Actuated Expandable Structures for Aerial Robots |
> Video Attachment
|
|
Hedayati, Hooman | Colorado University Boulder |
Suzuki, Ryo | University of Colorado Boulder |
Leithinger, Daniel | MIT |
Szafir, Daniel J. | University of Colorado Boulder |
Keywords: Mechanism Design, Aerial Systems: Mechanics and Control, Human-Centered Robotics
Abstract: We present PufferBot, an aerial robot with an expandable structure that may expand to protect a drone's propellers when the robot is close to obstacles or collocated humans. PufferBot is made of a custom 3D-printed expandable scissor structure, which utilizes a one-degree-of-freedom actuator with a rack-and-pinion mechanism. We propose four designs for the expandable structure, each with unique characteristics that may be useful in different situations. Finally, we present three motivating scenarios in which PufferBot might be useful beyond existing static propeller guard structures.
|
|
MoAT15 |
Room T15 |
Aerial Systems: Mechanics & Control II |
Regular session |
Chair: Zheng, Minghui | University at Buffalo |
Co-Chair: Kumar, Manish | University of Cincinnati |
|
10:00-10:15, Paper MoAT15.1 | |
>Optimal-Power Configurations for Hover Solutions in Mono-Spinners |
|
Hedayatpour, Mojtaba | University of Regina |
Mehrandezh, Mehran | University of Regina |
Janabi-Sharifi, Farrokh | Ryerson University |
Keywords: Aerial Systems: Mechanics and Control, Dynamics
Abstract: Rotary-wing flying machines draw attention within the UAV community for their in-place hovering capability and, more recently, for holonomic motion not achievable with fixed-wing aircraft. In this paper, we investigate power optimality in a mono-spinner, i.e., a class of rotary-wing UAV with only one rotor, whose main body has a streamlined shape for producing additional lift when counter-spinning relative to the rotor. We provide a detailed dynamic model of our mono-spinner. Two configurations are studied: (1) a symmetric configuration, in which the rotor is aligned with the fuselage's center of mass (COM), and (2) an asymmetric configuration, in which the rotor is located with an offset from the fuselage's COM. While the former can generate an in-place hovering flight condition, the latter can achieve trajectory tracking in 3D space by resolving the yaw and precession rates. Furthermore, it is shown that by introducing a tilting angle between the rotor and the fuselage within the asymmetric design, one can further minimize power consumption without compromising overall stability. We show, for the first time, that an energy-optimal solution can be achieved through proper aerodynamic design of the mono-spinner.
|
|
10:15-10:30, Paper MoAT15.2 | |
>Knowledge Transfer between Different UAVs for Trajectory Tracking |
|
Chen, Zhu | University at Buffalo |
Liang, Xiao | University at Buffalo |
Zheng, Minghui | University at Buffalo |
Keywords: Aerial Systems: Mechanics and Control, Optimization and Optimal Control, Motion Control
Abstract: Robots are usually programmed for particular tasks with a considerable amount of hand-crafted tuning work. Whenever a new robot with different dynamics is presented, the well-designed control algorithms for the robot usually have to be re-tuned to guarantee good performance. It remains challenging to directly program a robot to automatically learn from the experiences gathered by other dynamically different robots. With such a motivation, this paper proposes a learning algorithm that enables a quadrotor unmanned aerial vehicle (UAV) to automatically improve its tracking performance by learning from the tracking errors made by other UAVs with different dynamics. This learning algorithm utilizes the relationship between the dynamics of different UAVs, named the target and training UAVs, respectively. The learning signal is generated by the learning algorithm and then added to the feedforward loop of the target UAV, which does not affect the closed-loop stability. The learning convergence can be guaranteed by the design of a learning filter. With the proposed learning algorithm, the target UAV can improve its tracking performance by learning from the training UAV without baseline controller modifications. Both numerical studies and experimental tests are conducted to validate the effectiveness of the proposed learning algorithm.
|
|
10:30-10:45, Paper MoAT15.3 | |
>Flight Control of Sliding Arm Quadcopter with Dynamic Structural Parameters |
> Video Attachment
|
|
Kumar, Rumit | University of Cincinnati |
Deshpande, Aditya M. | University of Cincinnati |
Wells, James Z. | University of Cincinnati |
Kumar, Manish | University of Cincinnati |
Keywords: Aerial Systems: Mechanics and Control, Robust/Adaptive Control of Robotic Systems, Motion Control
Abstract: The conceptual design and flight controller of a novel kind of quadcopter are presented. This design is capable of morphing the shape of the UAV during flight to achieve position and attitude control. We consider a dynamic center of gravity (CoG), which causes continuous variation in the moment of inertia (MoI) parameters of the UAV. These dynamic structural parameters play a vital role in the stability and control of the system. The length of the quadcopter arms is a variable parameter, actuated using an attitude feedback-based control law. The MoI parameters are computed in real time and incorporated into the equations of motion of the system. The UAV utilizes the angular motion of the propellers and the variable arm lengths for position and navigation control. The movement space of the CoG is a design parameter bounded by actuator limitations and the stability requirements of the system. Detailed information on the equations of motion, flight controller design, and possible applications of this system is provided. Further, the proposed shape-changing UAV system is evaluated through comparative numerical simulations for a waypoint navigation mission and complex trajectory tracking.
|
|
10:45-11:00, Paper MoAT15.4 | |
>Design and Control of SQUEEZE: A Spring-Augmented QUadrotor for intEractions with the Environment to SqueeZE-And-Fly |
> Video Attachment
|
|
Patnaik, Karishma | Arizona State University |
Mishra, Shatadal | ASU |
Rezayat Sorkhabadi, Seyed Mostafa | Arizona State University |
Zhang, Wenlong | Arizona State University |
Keywords: Aerial Systems: Applications, Aerial Systems: Mechanics and Control
Abstract: This paper presents the design and control of a novel quadrotor with a variable geometry to physically interact with cluttered environments and fly through relatively narrow gaps and passageways. This compliant quadrotor with passive morphing capabilities is designed using torsional springs at every arm hinge to allow rotation under external forces. We derive the dynamic model of this variable-geometry quadrotor (SQUEEZE) and develop a low-level adaptive controller for trajectory tracking. The corresponding Lyapunov stability proof of attitude tracking is also presented. Further, an admittance controller is designed to account for changes in yaw due to physical interactions with the environment. Finally, the proposed design is validated in real-time flight tests in two setups: a relatively small gap and a passageway. The experimental results demonstrate the unique capability of the SQUEEZE in navigating through constrained narrow spaces.
|
|
11:00-11:15, Paper MoAT15.5 | |
>Hybrid Aerial-Ground Locomotion with a Single Passive Wheel |
> Video Attachment
|
|
Qin, Youming | The University of Hong Kong |
Li, Yihang | University of Hong Kong |
Xu, Wei | University of Hong Kong |
Zhang, Fu | University of Hong Kong |
Keywords: Aerial Systems: Mechanics and Control, Underactuated Robots, Mechanism Design
Abstract: Exploiting contact with environment structures provides extra force support to a UAV, often reducing power consumption and hence extending mission time. This paper investigates one such way to exploit flat surfaces in the environment through a novel aerial-ground hybrid locomotion. Our design is a single passive wheel integrated at the bottom of the UAV, making it a minimal design to date. We present the principle and implementation of this simple design as well as its control. Flight experiments are conducted to verify the feasibility and the power saving achieved by ground locomotion. Results show that our minimal design allows successful aerial-ground hybrid locomotion even with a less-controllable bi-copter UAV. Ground locomotion saves up to 77% of battery energy without much tuning effort.
|
|
11:15-11:30, Paper MoAT15.6 | |
>TiltDrone: A Fully-Actuated Tilting Quadrotor Platform |
|
Zheng, Peter | Imperial College London |
Tan, XinKai | Imperial College London |
Koçer, Başaran Bahadır | Nanyang Technological University |
Yang, Erdeng | Imperial College London |
Kovac, Mirko | Imperial College London |
Keywords: Aerial Systems: Mechanics and Control, Mechanism Design, Aerial Systems: Applications
Abstract: Multi-directional aerial platforms can fly in almost any orientation and direction, often maneuvering in ways their underactuated counterparts cannot match. A subset of multi-directional platforms is fully-actuated multirotors, where all six degrees of freedom are independently controlled without redundancies. Fully-actuated multirotors possess much greater freedom of movement than conventional multirotor drones, allowing them to perform complex sensing and manipulation tasks. While there has been comprehensive research on multi-directional multirotor control systems, the spectrum of hardware designs remains fragmented. This paper sets out the hardware design architecture of a fully-actuated quadrotor and its associated control framework. Following the novel platform design, a prototype was built to validate the control scheme and characterize the flight performance. The resulting quadrotor was shown in operation to be capable of holding a stationary hover at a 30-degree incline and of tracking position commands by thrust vectoring [Video attachment: https://youtu.be/8HOQl_77CVg].
|
|
MoAT16 |
Room T16 |
Aerial Systems: Mechanics & Control III |
Regular session |
Chair: Kim, H. Jin | Seoul National University |
Co-Chair: Bass, John | Université De Sherbrooke |
|
10:00-10:15, Paper MoAT16.1 | |
>Adaptive Nonlinear Control for Perching of a Bioinspired Ornithopter |
> Video Attachment
|
|
Maldonado Fernández, Francisco Javier | University of Seville |
Acosta, Jose Angel | University of Seville |
Tormo Barbero, Jesus | Universidad De Sevilla |
Grau, Pedro | University of Seville |
Guzmán García, María Del Mar | University of Seville |
Ollero, Anibal | University of Seville |
Keywords: Aerial Systems: Mechanics and Control, Biologically-Inspired Robots
Abstract: This work presents a model-free nonlinear controller for an ornithopter prototype with bioinspired wings and tail. The size and power requirements were chosen to allow a customized autopilot to be carried onboard. To assess the functionality and performance of the full mechatronic design, a controller has been designed and implemented to execute a prescribed 2D perching trajectory. Although functional, the prototype's 'handmade' nature introduces many imperfections and resulting uncertainty that hinder its control. Therefore, the controller is based on adaptive backstepping and does not require any knowledge of the aerodynamics. The controller is able to follow a given reference in flight path angle by actuating only the tail deflection. A novel space-dependent nonlinear guidance law is also provided to prescribe the perching trajectory. The performance of the mechatronics, guidance, and control system is validated by conducting indoor flight tests.
|
|
10:15-10:30, Paper MoAT16.2 | |
>Improving Multirotor Landing Performance on Inclined Surfaces Using Reverse Thrust |
|
Bass, John | Université De Sherbrooke |
Lussier Desbiens, Alexis | Université De Sherbrooke |
Keywords: Aerial Systems: Mechanics and Control, Contact Modeling, Flexible Robots
Abstract: Conventional multirotors are unable to land on inclined surfaces without specialized suspensions and adhesion devices. With the development of a bidirectional rotor, landing maneuvers could benefit from rapid thrust reversal, which would increase the landing envelope without involving the addition of heavy and complex landing gears or reduction of payload capacity. This article presents a model designed to accurately simulate quadrotor landings, the behavior of their stiff landing gear, and the limitations of bidirectional rotors. The model was validated using experimental results on both low-friction and high-friction surfaces, and was then used to test multiple landing algorithms over a wide range of touchdown velocities and slope inclinations to explore the benefits of reverse thrust. It is shown that thrust reversal can nearly double the maximum inclination on which a quadrotor can land and can also allow high vertical velocity landings.
|
|
10:30-10:45, Paper MoAT16.3 | |
>Evolved Neuromorphic Control for High Speed Divergence-Based Landings of MAVs |
|
Hagenaars, Jesse Jan | Delft University of Technology |
Paredes-Valles, Federico | Delft University of Technology |
Bohte, Sander | Centrum Wiskunde & Informatica |
de Croon, Guido | TU Delft / ESA |
Keywords: Aerial Systems: Perception and Autonomy, Autonomous Vehicle Navigation, Neurorobotics
Abstract: Flying insects are capable of vision-based navigation in cluttered environments, reliably avoiding obstacles through fast and agile maneuvers, while being very efficient in the processing of visual stimuli. Meanwhile, autonomous micro air vehicles still lag far behind their biological counterparts, displaying inferior performance at a much higher energy consumption. In light of this, we want to mimic flying insects in terms of their processing capabilities, and consequently show the efficiency of this approach in the real world. This letter does so through evolving spiking neural networks for controlling landings of micro air vehicles using optical flow divergence from a downward-looking camera. We demonstrate that the resulting neuromorphic controllers transfer robustly from a highly abstracted simulation to the real world, performing fast and safe landings while keeping network spike rate minimal. Furthermore, we provide insight into the resources required for successfully solving the problem of divergence-based landing, showing that high-resolution control can be learned with only a single spiking neuron. To the best of our knowledge, this work is the first to integrate spiking neural networks in the control loop of a real-world flying robot. Videos of the experiments can be found at https://bit.ly/neuro-controller.
|
|
10:45-11:00, Paper MoAT16.4 | |
>A Collision-Resilient Aerial Vehicle with Icosahedron Tensegrity Structure |
> Video Attachment
|
|
Zha, Jiaming | University of California, Berkeley |
Wu, Xiangyu | University of California, Berkeley |
Kroeger, Joseph | University of California Berkeley |
Perez, Natalia | University of California, Berkeley |
Mueller, Mark Wilfried | University of California, Berkeley |
Keywords: Aerial Systems: Mechanics and Control, Search and Rescue Robots, Robotics in Hazardous Fields
Abstract: Aerial vehicles with collision resilience can operate with more confidence in environments with obstacles that are hard to detect and avoid. This paper presents the methodology used to design a collision-resilient aerial vehicle with an icosahedron tensegrity structure. A simplified stress analysis of the tensegrity frame under impact forces is performed to guide the selection of its components. In addition, an autonomous controller is presented to reorient the vehicle from an arbitrary orientation on the ground to help it take off. Experiments show that the vehicle can successfully reorient itself after landing upside-down and can survive collisions at speeds up to 6.5 m/s.
|
|
11:00-11:15, Paper MoAT16.5 | |
>Fail-Safe Flight of a Fully-Actuated Quadcopter in a Single Motor Failure |
> Video Attachment
|
|
Lee, Seung Jae | Seoul National University |
Jang, Inkyu | Seoul National University |
Kim, H. Jin | Seoul National University |
Keywords: Aerial Systems: Mechanics and Control, Robot Safety, Aerial Systems: Applications
Abstract: In this paper, we introduce a new quadrotor fail-safe flight solution that can maintain the same four controllable degrees of freedom in flight as a standard multirotor even when a single thruster fails. The new solution employs a novel multirotor platform known as the T3-Multirotor and utilizes a distinctive strategy of actively controlling the center of gravity position to restore the controllable degrees of freedom. A dedicated control structure is introduced, along with a detailed analysis of the dynamic characteristics of the platform that change during emergency flight. Experimental results are provided to validate the feasibility of the proposed solution.
|
|
11:15-11:30, Paper MoAT16.6 | |
>Development of Hiryu-II: A Long-Reach Articulated Modular Manipulator Driven by Thrusters |
> Video Attachment
|
|
Ueno, Yusuke | Tokyo Institute of Technology |
Hagiwara, Tetsuo | KinderHeim |
Nabae, Hiroyuki | Tokyo Institute of Technology |
Suzumori, Koichi | Tokyo Institute of Technology |
Endo, Gen | Tokyo Institute of Technology |
Keywords: Aerial Systems: Mechanics and Control, Redundant Robots, Cellular and Modular Robots
Abstract: Robotic manipulators using thrusters for weight compensation are an active research topic due to their potential to exceed conventional limits on maximum length. However, existing thruster-driven manipulators remain limited in length because their hardware designs are not sufficiently refined. This paper focuses on overcoming these limitations and realizing an articulated manipulator more than twice the length of conventional ones. To cancel the moment on each link, we performed a static analysis that considers torsional deformation around the link axis to derive the thruster positions. Numerical simulation shows that weight compensation and joint-angle control of the manipulator can be realized with simple proportional-integral-derivative control for each link. Consequently, we demonstrated the feasibility of the proposed manipulator by lifting a 0.6 kg payload at the arm end with a prototype of length 6.6 m. In theory, each thrust-force control input is almost constant regardless of link attitude. This suggests modular properties that contribute to the practicality of the proposed manipulator for various tasks.
|
|
MoAT17 |
Room T17 |
Aerial Systems: Path Planning |
Regular session |
Chair: Gao, Fei | Zhejiang University |
|
10:00-10:15, Paper MoAT17.1 | |
>Experimental Flights of Adaptive Patterns for Cloud Exploration with UAVs |
> Video Attachment
|
|
Verdu, Titouan | ENAC, University of Toulouse |
Maury Nicolas, Nicolas | Météo France Toulouse |
Narvor Pierre, Pierre | LAAS-CNRS, Université De Toulouse |
Seguin, Florian | LAAS-CNRS, Université De Toulouse |
Roberts Gregory, Gregory | METEO-FRANCE Toulouse |
Couvreux, Fleur | CNRM, Université Toulouse, Météo France and CNRS |
Cayez, Grégoire | METEO-FRANCE Toulouse |
Bronz, Murat | ENAC, Université De Toulouse |
Hattenberger, Gautier | ENAC, French Civil Aviation University |
Lacroix, Simon | LAAS/CNRS |
Keywords: Aerial Systems: Applications, Reactive and Sensor-Based Planning
Abstract: This work presents the deployment of UAVs for the exploration of clouds, from the system architecture and simulation tests to a real-flight campaign and trajectory analyses. Thanks to their small size and low-altitude operation, light UAVs have proven well suited to in-situ cloud data collection. The short lifetime of clouds and the limited endurance of the planes require focusing on the area of maximum interest to gather relevant data. Building on previous work on adaptive cloud sampling, the article covers the overall system architecture, the improvements made to the system based on preliminary tests and simulations, and finally the results of a field campaign. The Barbados experimental flight campaign confirmed the capacity of the system to map clouds and to collect relevant data in a dynamic environment, and highlighted areas for improvement.
|
|
10:15-10:30, Paper MoAT17.2 | |
>Navigation-Assistant Path Planning within a MAV Team |
> Video Attachment
|
|
Jang, Youngseok | Seoul National University |
Lee, Yunwoo | Seoul National University |
Kim, H. Jin | Seoul National University |
Keywords: Aerial Systems: Perception and Autonomy, Reactive and Sensor-Based Planning, Motion and Path Planning
Abstract: In micro aerial vehicle (MAV) operations, mission success is highly dependent on navigation performance, which has raised recent interest in navigation-aware path planning. One of the challenges lies in that the optimal motions for successful navigation and for a designated mission are often different in unknown, unstructured environments, and only sub-optimality may be obtained in each aspect. We aim to organize a two-MAV team that can effectively execute a mission while simultaneously guaranteeing navigation quality, consisting of a main agent responsible for the mission and a sub-agent responsible for the team's navigation. In particular, this paper focuses on path planning for the sub-agent to provide navigational assistance to the main agent using a monocular camera. We adopt a graph-based receding-horizon planner to find a dynamically feasible path that allows the sub-agent to help the main agent's navigation. In this process, we present a metric for evaluating localization performance based on the distribution of features projected onto the image plane. We also design a map management strategy and a pose-estimation support mechanism for the monocular camera setup, and validate their effectiveness in two scenarios.
|
|
10:30-10:45, Paper MoAT17.3 | |
>UAV Coverage Path Planning under Varying Power Constraints Using Deep Reinforcement Learning |
> Video Attachment
|
|
Theile, Mirco | Technical University of Munich |
Bayerlein, Harald | EURECOM |
Nai, Richard | Technical University of Munich |
Gesbert, David | EURECOM |
Caccamo, Marco | Technical University of Munich |
Keywords: Aerial Systems: Perception and Autonomy, Motion and Path Planning, Autonomous Agents
Abstract: Coverage path planning (CPP) is the task of designing a trajectory that enables a mobile agent to travel over every point of an area of interest. We propose a new method to control an unmanned aerial vehicle (UAV) carrying a camera on a CPP mission with random start positions and multiple options for landing positions in an environment containing no-fly zones. While numerous approaches have been proposed to solve similar CPP problems, we leverage end-to-end reinforcement learning (RL) to learn a control policy that generalizes over varying power constraints for the UAV. Despite recent improvements in battery technology, the maximum flying range of small UAVs is still a severe constraint, which is exacerbated by variations in the UAV’s power consumption that are hard to predict. By using map-like input channels to feed spatial information through convolutional network layers to the agent, we are able to train a double deep Q-network (DDQN) to make control decisions for the UAV, balancing limited power budget and coverage goal. The proposed method can be applied to a wide variety of environments and harmonizes complex goal structures with system constraints.
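The double deep Q-network at the core of the method decouples action selection from action evaluation when bootstrapping, which curbs value overestimation. A minimal sketch of that target computation (toy arrays, not the authors' network or map-like input channels):

```python
import numpy as np

def ddqn_targets(q_online_next, q_target_next, rewards, dones, gamma=0.99):
    """Double-DQN bootstrap targets: the online network selects the
    greedy next action, the target network evaluates it."""
    greedy = np.argmax(q_online_next, axis=1)                # select with online net
    evaluated = q_target_next[np.arange(len(greedy)), greedy]
    return rewards + gamma * (1.0 - dones) * evaluated       # no bootstrap at episode end

# toy batch of 2 transitions with 3 actions
q_online = np.array([[1.0, 2.0, 0.5], [0.1, 0.0, 0.3]])
q_target = np.array([[0.9, 1.5, 0.4], [0.2, 0.1, 0.6]])
targets = ddqn_targets(q_online, q_target,
                       rewards=np.array([1.0, 0.0]),
                       dones=np.array([0.0, 1.0]))
```

In a full agent these targets would regress the online network's Q-values for the taken actions.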
|
|
10:45-11:00, Paper MoAT17.4 | |
>Detection-Aware Trajectory Generation for a Drone Cinematographer |
> Video Attachment
|
|
Jeon, Boseong | Seoul National University |
Shim, Dongseok | Seoul National University |
Kim, H. Jin | Seoul National University |
Keywords: Aerial Systems: Perception and Autonomy, Motion and Path Planning, Reactive and Sensor-Based Planning
Abstract: This work investigates efficient trajectory generation for chasing a dynamic target while incorporating a detectability objective. The proposed method actively guides the motion of a cinematographer drone so that the color of the target is well distinguished against the colors of the background in the drone's view. To this end, we define a measure of color detectability along a chasing path. After computing a discrete path optimized for this metric, we generate a dynamically feasible trajectory. The whole pipeline can be updated on-the-fly to respond to the motion of the target. For efficient discrete path generation, we construct a directed acyclic graph (DAG) for which a topological sorting can be determined analytically, without a depth-first search. The smooth path is obtained in a quadratic programming (QP) framework. We validate the enhanced performance of state-of-the-art object detection and tracking algorithms when the camera drone executes the trajectory obtained from the proposed method.
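When the chasing horizon is discretized into time steps with a fixed set of candidate viewpoints per step, the graph is layered, so the layer index itself is a topological order and no DFS-based sort is needed. A hypothetical dynamic-programming sketch of shortest path over such a layered DAG (the costs stand in for the paper's detectability metric):

```python
def best_path(costs, trans):
    """Shortest path through a layered DAG by dynamic programming.
    costs[t][k]: cost of option k at time step t; trans[j][k]: cost
    of switching option j -> k between steps. Visiting layers in
    time order is already a topological order."""
    T, K = len(costs), len(costs[0])
    dp = [list(costs[0])]
    back = []
    for t in range(1, T):
        row, brow = [], []
        for k in range(K):
            j = min(range(K), key=lambda j: dp[-1][j] + trans[j][k])
            row.append(dp[-1][j] + trans[j][k] + costs[t][k])
            brow.append(j)
        dp.append(row)
        back.append(brow)
    k = min(range(K), key=lambda k: dp[-1][k])
    total, path = dp[-1][k], [k]
    for brow in reversed(back):          # backtrack the optimal options
        k = brow[k]
        path.append(k)
    return path[::-1], total

# 3 time steps, 2 candidate viewpoints, unit switching cost
path, total = best_path(costs=[[1, 5], [5, 1], [1, 5]],
                        trans=[[0, 1], [1, 0]])
```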
|
|
11:00-11:15, Paper MoAT17.5 | |
>Autonomous and Cooperative Design of the Monitor Positions for a Team of UAVs to Maximize the Quantity and Quality of Detected Objects |
> Video Attachment
|
|
Koutras, Dimitrios | Center for Research and Technology Hellas, Democritus University |
Kapoutsis, Athanasios | Democritus University of Thrace, Xanthi, Greece & Centre for Res |
Kosmatopoulos, Elias | Democritus Univ. Thrace & ITI/CERTH |
Keywords: Aerial Systems: Perception and Autonomy, Surveillance Systems, Motion and Path Planning
Abstract: This paper tackles the problem of positioning a swarm of UAVs inside a completely unknown terrain with the objective of maximizing overall situational awareness. Situational awareness is expressed by the number and quality of unique objects of interest inside the UAVs' fields of view. YOLOv3 and a system for identifying duplicate objects of interest are employed to assign a single score to each UAV configuration. A novel navigation algorithm, capable of optimizing this score without taking into consideration the dynamics of either the UAVs or the environment, is then proposed. A cornerstone of the proposed approach is that it shares the same convergence characteristics as the block coordinate descent (BCD) family of approaches. The effectiveness and performance of the proposed navigation scheme were evaluated in a series of experiments inside the AirSim simulator. The experimental evaluation indicates that the proposed navigation algorithm consistently navigated the swarm of UAVs to "strategic" monitoring positions and adapted to different swarm sizes, exploiting the dynamics of the UAVs to the full extent. The source code and a video demonstration are available at https://github.com/dimikout3/ConvCAO_AirSim.
|
|
11:15-11:30, Paper MoAT17.6 | |
>Alternating Minimization Based Trajectory Generation for Quadrotor Aggressive Flight |
> Video Attachment
|
|
Wang, Zhepei | Zhejiang University |
Zhou, Xin | ZHEJIANG UNIVERSITY |
Xu, Chao | Zhejiang University |
Chu, Jian | Zhejiang University |
Gao, Fei | Zhejiang University |
Keywords: Aerial Systems: Applications, Motion and Path Planning, Autonomous Vehicle Navigation
Abstract: Although much research has been conducted on trajectory planning for quadrotors, planning spatially and temporally optimal trajectories in real time remains challenging. In this paper, we propose a framework for large-scale waypoint-based polynomial trajectory generation, highlighting its superior computational efficiency and simultaneous spatial-temporal optimality. Exploiting the implicitly decoupled structure of the problem, we conduct alternating minimization between the boundary conditions and the time durations of the trajectory pieces. The algebraic convenience of both sub-problems is leveraged to escape poor local minima and minimize time consumption. A theoretical analysis of the global/local convergence rate of our method is provided. Moreover, based on polynomial theory, an extremely fast feasibility checker is designed for various kinds of constraints. By incorporating it into our alternating structure, a constrained minimization algorithm is constructed to optimize trajectories subject to feasibility. Benchmark evaluation shows that our algorithm outperforms state-of-the-art waypoint-based methods in efficiency, optimality, and scalability. The algorithm can be incorporated into a high-level waypoint planner, which can rapidly search a three-dimensional space for aggressive autonomous flights. The capability of our algorithm is demonstrated experimentally by fast quadrotor flights in a limited space with dense obstacles.
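Alternating minimization with closed-form sub-problems can be illustrated on a toy biconvex cost; this sketch (hypothetical cost, not the paper's spatial-temporal objective) alternates exact minimizations over each block of variables:

```python
def alternating_min(sweeps=200):
    """Alternate exact coordinate minimizations on the biconvex toy
    cost f(x, y) = (x - 2y)^2 + (y - 3)^2. Each sub-problem has a
    closed-form argmin, mirroring how algebraically convenient
    sub-problems keep the per-iteration cost low."""
    x, y = 0.0, 0.0
    for _ in range(sweeps):
        x = 2.0 * y                      # argmin over x with y fixed
        y = (4.0 * x + 6.0) / 10.0       # argmin over y with x fixed
    return x, y

x, y = alternating_min()   # converges to the global minimum (6, 3)
```

Here each sweep contracts the error geometrically; the paper's setting alternates between boundary conditions and segment durations instead of scalar blocks.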
|
|
MoAT18 |
Room T18 |
UAV Planning |
Regular session |
Chair: Mueller, Mark Wilfried | University of California, Berkeley |
Co-Chair: Jing, Wei | A*STAR |
|
10:00-10:15, Paper MoAT18.1 | |
>Motion Planning for Heterogeneous Unmanned Systems under Partial Observation from UAV |
> Video Attachment
|
|
Chen, Ci | Zhejiang University |
Wan, Yuanfang | Southern University of Science and Technology |
Li, Baowei | Peking University |
Wang, Chen | Peking University |
Xie, Guangming | Peking University |
Jiang, Huanyu | Zhejiang University |
Keywords: Motion and Path Planning, Multi-Robot Systems, Autonomous Vehicle Navigation
Abstract: For heterogeneous unmanned systems composed of unmanned aerial vehicles (UAVs) and unmanned ground vehicles (UGVs), using UAVs as eyes to assist UGVs in motion planning is a promising research direction thanks to the UAVs' wide field of view. However, limitations on flight altitude prevent the UAVs from observing the global map, so motion planning on the local map becomes a Partially Observable Markov Decision Process (POMDP) problem. This paper proposes a motion planning algorithm for heterogeneous unmanned systems under partial observation from a UAV, without reconstructing global maps. Our algorithm consists of two parts, designed for perception and decision-making, respectively. For the perception part, we propose the Grid Map Generation Network (GMGN), which perceives scenes from the UAV's perspective and classifies pathways and obstacles. For the decision-making part, we propose the Motion Command Generation Network (MCGN). Thanks to its memory mechanism, MCGN has planning and reasoning abilities under partial observation from UAVs. We evaluate our proposed algorithm by comparing it with baseline algorithms. The results show that our method effectively plans the motion of heterogeneous unmanned systems and achieves a relatively high success rate.
|
|
10:15-10:30, Paper MoAT18.2 | |
>Multi-UAV Coverage Path Planning for the Inspection of Large and Complex Structures |
|
Jing, Wei | A*STAR |
Deng, Di | Carnegie Mellon University |
Wu, Yan | A*STAR Institute for Infocomm Research |
Shimada, Kenji | Carnegie Mellon University |
Keywords: Motion and Path Planning, Task Planning
Abstract: We present a multi-UAV Coverage Path Planning (CPP) framework for the inspection of large-scale, complex 3D structures. In the proposed sampling-based coverage path planning method, we formulate multi-UAV inspection applications as a multi-agent coverage path planning problem. By combining two NP-hard problems, the Set Covering Problem (SCP) and the Vehicle Routing Problem (VRP), a Set-Covering Vehicle Routing Problem (SC-VRP) is formulated and subsequently solved by a modified Biased Random Key Genetic Algorithm (BRKGA) with novel, efficient encoding strategies and local improvement heuristics. We test our proposed method on several complex 3D structures, with 3D models extracted from OpenStreetMap. The proposed method outperforms previous methods, reducing the length of the planned inspection path by up to 48%.
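In a BRKGA, a chromosome is a vector of random keys in [0, 1) and a problem-specific decoder turns it into a solution; for routing, sorting nodes by key yields a visiting order. A toy sketch of the decode-and-evaluate step, using plain random search in place of the genetic operators (the distance matrix is illustrative, not the paper's encoding):

```python
import random

def decode_route(keys):
    """BRKGA-style decoder: viewpoint i is visited in the rank
    order of its random key."""
    return sorted(range(len(keys)), key=lambda i: keys[i])

def route_length(order, dist):
    return sum(dist[a][b] for a, b in zip(order, order[1:]))

# symmetric distances between 4 hypothetical inspection viewpoints
dist = [[0, 2, 9, 4],
        [2, 0, 3, 7],
        [9, 3, 0, 1],
        [4, 7, 1, 0]]

random.seed(0)
best = None
for _ in range(200):                 # random search stands in for the GA
    keys = [random.random() for _ in range(4)]
    order = decode_route(keys)
    cost = route_length(order, dist)
    if best is None or cost < best[0]:
        best = (cost, order)
```

The appeal of the random-key encoding is that crossover and mutation operate on real vectors and every chromosome decodes to a feasible route.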
|
|
10:30-10:45, Paper MoAT18.3 | |
>Generating Minimum-Snap Quadrotor Trajectories Really Fast |
> Video Attachment
|
|
Burke, Declan | The University of Melbourne |
Chapman, Airlie | University of Washington |
Shames, Iman | The University of Melbourne |
Keywords: Motion and Path Planning, Optimization and Optimal Control, Nonholonomic Motion Planning
Abstract: We propose an algorithm for generating minimum-snap trajectories for quadrotors with linear computational complexity with respect to the number of segments in the spline trajectory. Our algorithm is numerically stable for large numbers of segments and is able to generate trajectories of more than 500,000 segments. The computational speed and numerical stability of our algorithm makes it suitable for real-time generation of very large scale trajectories. We demonstrate the performance of our algorithm and compare it to existing methods, in which it is both faster and able to calculate larger trajectories than state-of-the-art. We also show the feasibility of the trajectories experimentally with a long quadrotor flight.
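The linear-time behavior comes from the banded structure of the spline continuity system: each segment couples only to its neighbors, so forward elimination plus back substitution replaces a dense factorization. A generic sketch on a tridiagonal system (illustrative only; minimum-snap systems have a wider band):

```python
import numpy as np

def solve_tridiagonal(a, b, c, d):
    """Thomas algorithm: O(n) solve of a tridiagonal system, the
    kind of banded structure that makes per-segment continuity
    constraints solvable in time linear in the number of segments.
    a: sub-diagonal (a[0] unused), b: diagonal, c: super-diagonal
    (c[-1] unused), d: right-hand side."""
    n = len(d)
    cp, dp = np.zeros(n), np.zeros(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# verify against the dense system on a small diagonally dominant case
n = 6
a = np.array([0.0] + [1.0] * (n - 1))
b = np.full(n, 4.0)
c = np.array([1.0] * (n - 1) + [0.0])
d = np.arange(1.0, n + 1)
x = solve_tridiagonal(a, b, c, d)
A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
```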
|
|
10:45-11:00, Paper MoAT18.4 | |
>Rectangular Pyramid Partitioning Using Integrated Depth Sensors (RAPPIDS): A Fast Planner for Multicopter Navigation |
> Video Attachment
|
|
Bucki, Nathan | University of California, Berkeley |
Lee, Junseok | University of California, Berkeley |
Mueller, Mark Wilfried | University of California, Berkeley |
Keywords: Reactive and Sensor-Based Planning, Collision Avoidance, Aerial Systems: Perception and Autonomy
Abstract: We present RAPPIDS: a novel collision checking and planning algorithm for multicopters that is capable of quickly finding local collision-free trajectories given a single depth image from an onboard camera. The primary contribution of this work is a new pyramid-based spatial partitioning method that enables rapid collision detection between candidate trajectories and the environment. By leveraging the efficiency of our collision checking method, we show how a local planning algorithm can be run at high rates on computationally constrained hardware, evaluating thousands of candidate trajectories in milliseconds. The performance of the algorithm is compared to existing collision checking methods in simulation, showing our method to be capable of evaluating orders of magnitude more trajectories per second. Experimental results are presented showing a quadcopter quickly navigating a previously unseen cluttered environment by running the algorithm on an ODROID-XU4 at 30 Hz.
|
|
MoAT19 |
Room T19 |
Planning for Aerial Systems |
Regular session |
Chair: Chli, Margarita | ETH Zurich |
Co-Chair: Torres-González, Arturo | University of Seville |
|
10:00-10:15, Paper MoAT19.1 | |
>Persistent Connected Power Constrained Surveillance with Unmanned Aerial Vehicles |
|
Ghosh, Pradipta | University of Southern California |
Tabuada, Paulo | UCLA |
Govindan, Ramesh | University of Southern California |
Sukhatme, Gaurav | University of Southern California |
Keywords: Path Planning for Multiple Mobile Robots or Agents, Networked Robots, Motion and Path Planning
Abstract: Persistent surveillance with aerial vehicles (drones) subject to connectivity and power constraints is a relatively uncharted domain of research. To reduce the complexity of multi-drone motion planning, most state-of-the-art solutions ignore network connectivity and assume unlimited battery power. Motivated by this, and by advances in optimization and constraint satisfaction techniques, we introduce a new persistent surveillance motion planning problem for multiple drones that incorporates connectivity and power consumption constraints. We use a recently developed constrained optimization tool, Satisfiability Modulo Convex Optimization (SMC), that has the expressivity needed for this problem, and show how to express the new persistent surveillance problem in the SMC framework. Our analysis of the formulation, based on a set of simulation experiments, illustrates that we can generate the desired motion planning solution within a couple of minutes for small teams of drones (up to 5) confined to a 7 x 7 x 1 grid-space.
|
|
10:15-10:30, Paper MoAT19.2 | |
>Autonomous Planning for Multiple Aerial Cinematographers |
|
Caraballo de la Cruz, Luis Evaristo | Universidad De Sevilla |
Montes-Romero, Angel-Manuel | University of Seville; GRVC Team |
Díaz-Báñez, José-Miguel | Universidad Sevilla |
Capitan, Jesus | University of Seville |
Torres-González, Arturo | University of Seville |
Ollero, Anibal | University of Seville |
Keywords: Multi-Robot Systems, Planning, Scheduling and Coordination, Aerial Systems: Applications
Abstract: This paper proposes a planning algorithm for autonomous media production with multiple Unmanned Aerial Vehicles (UAVs) in outdoor events. Given filming tasks specified by a media director, we formulate an optimization problem to maximize the filming time subject to battery constraints. As we conjecture that the problem is NP-hard, we consider a discretized version and propose a graph-based algorithm that can find an optimal solution of the discrete problem for a single UAV in polynomial time. A greedy strategy is then applied to solve the problem sequentially for multiple UAVs. We demonstrate that our algorithm is efficient for small teams (3-5 UAVs) and that its performance is close to the optimum. We showcase our system in field experiments carrying out actual media production in an outdoor scenario with multiple UAVs.
|
|
10:30-10:45, Paper MoAT19.3 | |
>Multi-Robot Coordination with Agent-Server Architecture for Autonomous Navigation in Partially Unknown Environments |
> Video Attachment
|
|
Bartolomei, Luca | ETH Zurich |
Karrer, Marco | ETH Zurich |
Chli, Margarita | ETH Zurich |
Keywords: Path Planning for Multiple Mobile Robots or Agents, Aerial Systems: Perception and Autonomy, Multi-Robot Systems
Abstract: In this work, we present a system architecture to enable autonomous navigation of multiple agents across user-selected global interest points in a partially unknown environment. The system is composed of a server and a team of agents, here small aircraft. Leveraging this architecture, computationally demanding tasks, such as global dense mapping and global path planning, can be outsourced to a potentially powerful central server, limiting the onboard computation of each agent to local pose estimation using Visual-Inertial Odometry (VIO) and local path planning for obstacle avoidance. By assigning priorities to the agents, we propose a hierarchical multi-robot global planning pipeline, which avoids collisions amongst the agents and computes their paths towards the respective goals. The resulting global paths are communicated to the agents and serve as reference input to the local planner running onboard each agent. In contrast to previous works, we relax the common assumptions of a previously mapped environment and perfect knowledge of the state, and we show the effectiveness of the proposed approach in photo-realistic simulations with up to four agents operating in an industrial environment.
|
|
10:45-11:00, Paper MoAT19.4 | |
>A Distributed Pipeline for Scalable, Deconflicted Formation Flying |
|
Lusk, Parker C. | Massachusetts Institute of Technology |
Cai, Xiaoyi | Massachusetts Institute of Technology |
Wadhwania, Samir | Massachusetts Institute of Technology |
Paris, Aleix | Massachusetts Institute of Technology |
Fathian, Kaveh | MIT |
How, Jonathan Patrick | Massachusetts Institute of Technology |
Keywords: Swarms, Distributed Robot Systems, Multi-Robot Systems
Abstract: Reliance on external localization infrastructure and centralized coordination are main limiting factors for formation flying of vehicles in large numbers and in unprepared environments. While solutions using onboard localization address the dependency on external infrastructure, the associated coordination strategies typically lack collision avoidance and scalability. To address these shortcomings, we present a unified pipeline with onboard localization and a distributed, collision-free motion planning strategy that scales to a large number of vehicles. Since distributed collision avoidance strategies are known to result in gridlock, we also present a decentralized task assignment solution to deconflict vehicles. We experimentally validate our pipeline in simulation and hardware. The results show that our approach for solving the optimization problem associated with motion planning gives solutions within seconds in cases where general purpose solvers fail due to high complexity. In addition, our lightweight assignment strategy leads to successful and quicker formation convergence in 96-100% of all trials, whereas indefinite gridlocks occur without it for 33-50% of trials. By enabling large-scale, deconflicted coordination, this pipeline should help pave the way for anytime, anywhere deployment of aerial swarms.
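Deconflicting a formation through task assignment can be illustrated with a minimum-total-distance matching; for Euclidean costs, the optimal matching yields non-crossing straight-line paths, which is one reason assignment helps avoid gridlock. A brute-force sketch for a tiny team (hypothetical positions; the paper's strategy is decentralized and scales far beyond this):

```python
from itertools import permutations
import math

def assign_goals(starts, goals):
    """Brute-force minimum-cost assignment of vehicles to formation
    goals, minimizing total Euclidean travel distance. Fine for very
    small teams; factorial cost rules it out at scale."""
    n = len(starts)
    best_cost, best_perm = math.inf, None
    for perm in permutations(range(n)):
        cost = sum(math.dist(starts[i], goals[perm[i]]) for i in range(n))
        if cost < best_cost:
            best_cost, best_perm = cost, perm
    return best_perm, best_cost

# 3 vehicles and 3 hypothetical formation goal positions (x, y)
starts = [(0.0, 0.0), (5.0, 0.0), (0.0, 5.0)]
goals = [(0.0, 4.0), (1.0, 0.0), (5.0, 1.0)]
perm, cost = assign_goals(starts, goals)
```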
|
|
11:00-11:15, Paper MoAT19.5 | |
>Decentralized Nonlinear MPC for Robust Cooperative Manipulation by Heterogeneous Aerial-Ground Robots |
> Video Attachment
|
|
Lissandrini, Nicola | University of Padova |
Verginis, Christos | Electrical Engineering, KTH Royal Institute of Technology |
Roque, Pedro | KTH Royal Institute of Technology, Stockholm, Sweden |
Cenedese, Angelo | University of Padova |
Dimarogonas, Dimos V. | KTH Royal Institute of Technology |
Keywords: Multi-Robot Systems, Cooperating Robots, Aerial Systems: Applications
Abstract: Cooperative robotics is a trending topic nowadays, as it makes possible a number of tasks that cannot be performed by individual robots, such as heavy payload transportation and agile manipulation. In this work, we address the problem of cooperative transportation by heterogeneous, manipulator-endowed robots. Specifically, we consider a generic number of robotic agents simultaneously grasping an object, which is to be transported to a prescribed set point while avoiding obstacles. The procedure is based on a decentralized leader-follower Model Predictive Control scheme, where a designated leader agent is responsible for generating a trajectory compatible with its dynamics, and the followers must compute trajectories for their own manipulators that minimize the internal forces and torques that might be applied to the object by the different grippers. The Model Predictive Control approach is well suited to this problem, because it provides both a control law and a technique for generating trajectories, which can be shared among the agents. The proposed algorithm is implemented on a system comprising a ground and an aerial robot, both in the Gazebo robotics simulator and in experiments with real robots, where the methodological approach is assessed and the controller design is shown to be effective for the cooperative transportation task.
|
|
11:15-11:30, Paper MoAT19.6 | |
>A Unified NMPC Scheme for MAVs Navigation with 3D Collision Avoidance under Position Uncertainty |
|
Sharif Mansouri, Sina | Luleå University of Technology |
Kanellakis, Christoforos | LTU |
Lindqvist, Björn | Luleå University of Technology |
Pourkamali-Anaraki, Farhad | Assistant Professor |
Agha-mohammadi, Ali-akbar | NASA-JPL, Caltech |
Burdick, Joel | California Institute of Technology |
Nikolakopoulos, George | Luleå University of Technology |
Keywords: Collision Avoidance, Aerial Systems: Applications, Object Detection, Segmentation and Categorization
Abstract: This article proposes a novel Nonlinear Model Predictive Control (NMPC) framework for Micro Aerial Vehicle (MAV) autonomous navigation in enclosed indoor environments. The introduced framework accounts for the nonlinear dynamics of MAVs and nonlinear geometric constraints, while guaranteeing real-time performance. Our first contribution is to reveal the underlying planes within a 3D point cloud, obtained from a 3D lidar scanner, by designing an efficient subspace clustering method. The second contribution is to incorporate the extracted information into the nonlinear constraints of the NMPC for collision avoidance. Our third contribution is to make the controller robust by accounting for localization uncertainty in the NMPC, using Shannon entropy to define the weights involved in the optimization process. This strategy enables tracking of position or velocity references, or none, in the event of losing position or velocity estimates. As a result, the collision avoidance constraints are defined in the local coordinates of the MAV, remain active, and guarantee collision avoidance despite localization uncertainties, e.g., position estimation drift. The efficacy of the suggested framework has been evaluated in various simulations in the Gazebo environment.
|
|
MoAT20 |
Room T20 |
Aerial Systems: Perception |
Regular session |
Chair: Roy, Nicholas | Massachusetts Institute of Technology |
Co-Chair: Pan, Jia | University of Hong Kong |
|
10:00-10:15, Paper MoAT20.1 | |
>In-Flight Range Optimization of Multicopters Using Multivariable Extremum Seeking with Adaptive Step Size |
> Video Attachment
|
|
Wu, Xiangyu | University of California, Berkeley |
Mueller, Mark Wilfried | University of California, Berkeley |
Keywords: Energy and Environment-Aware Automation, Robust/Adaptive Control of Robotic Systems, Aerial Systems: Perception and Autonomy
Abstract: Limited flight range is a common problem for multicopters. To alleviate this problem, we propose a method for finding the optimal speed and heading of a multicopter when flying a given path to achieve the longest flight range. Based on a novel multivariable extremum seeking controller with adaptive step size, the method (a) does not require any power consumption model of the vehicle, (b) can adapt to unknown disturbances, (c) can be executed online, and (d) converges faster than the standard extremum seeking controller with constant step size. We conducted indoor experiments to validate the effectiveness of this method under different payloads and initial conditions, and showed that it is able to converge more than 30% faster than the standard extremum seeking controller. This method is especially useful for applications such as package delivery, where the size and weight of the payload differ for different deliveries and the power consumption of the vehicle is hard to model.
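Extremum seeking descends a measured cost without any model of it; a minimal single-variable sketch with a fixed step (the paper's contribution is a multivariable scheme with adaptive step size, which this toy omits, and the power curve below is hypothetical):

```python
def extremum_seek(f, x0, steps=100, delta=0.1, k=0.5):
    """Two-point extremum seeking: probe the measured cost on either
    side of the operating point, estimate the gradient by a finite
    difference, and step downhill. No model of f is required."""
    x = x0
    for _ in range(steps):
        grad = (f(x + delta) - f(x - delta)) / (2 * delta)
        x -= k * grad
    return x

# hypothetical power-vs-speed curve with a minimum at 8 m/s
power = lambda v: 0.5 * (v - 8.0) ** 2 + 10.0
v_best = extremum_seek(power, x0=2.0)
```

In flight, the probes become small commanded perturbations of speed and heading, and the measured cost is the vehicle's power consumption per distance traveled.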
|
|
10:15-10:30, Paper MoAT20.2 | |
>Semantic Trajectory Planning for Long-Distant Unmanned Aerial Vehicle Navigation in Urban Environments |
> Video Attachment
|
|
Ryll, Markus | Massachusetts Institute of Technology |
Ware, John | Massachusetts Institute of Technology |
Carter, John | MIT |
Roy, Nicholas | Massachusetts Institute of Technology |
Keywords: Aerial Systems: Perception and Autonomy, Autonomous Vehicle Navigation, Aerial Systems: Applications
Abstract: There has been a considerable amount of recent work on high-speed micro-aerial-vehicle flight in unknown and unstructured environments. Generally, these approaches either use active sensing or fly slowly enough to ensure a safe braking distance given the relatively short sensing range of passive sensors. The former generally requires carrying large and heavy LIDARs, and the latter only allows flight far from the dynamic limits of the vehicle. One of the significant challenges for high-speed flight is the computational demand of trajectory planning at the sufficiently high rates and length scales required in outdoor environments. We tackle both problems in this work by leveraging semantic information derived from an RGB camera onboard the vehicle. We first describe how to use semantic information to increase the effective range of perception for certain environment classes. Second, we present a sparse representation of the environment that is sufficiently lightweight for long-distance path planning. We show how our approach outperforms more traditional metric planners that seek the shortest path, and demonstrate the semantic planner's capabilities in a set of simulated and extensive real-world autonomous quadrotor flights in an urban environment.
|
|
10:30-10:45, Paper MoAT20.3 | |
>Augmented Memory for Correlation Filters in Real-Time UAV Tracking |
|
Li, Yiming | Tongji University |
Fu, Changhong | Tongji University |
Ding, Fangqiang | Tongji University |
Huang, Ziyuan | National University of Singapore |
Pan, Jia | University of Hong Kong |
Keywords: Aerial Systems: Perception and Autonomy, Computer Vision for Automation, Computer Vision for Other Robotic Applications
Abstract: The outstanding computational efficiency of the discriminative correlation filter (DCF) fades away with various complicated improvements. Previous appearances are also gradually forgotten due to the exponential decay of historical views in the traditional appearance-updating scheme of the DCF framework, reducing the model's robustness. In this work, a novel tracker based on the DCF framework is proposed to augment the memory of previously appeared views while running at real-time speed. Several historical views and the current view are introduced simultaneously in training to allow the tracker to adapt to new appearances as well as memorize previous ones. A novel rapid compressed context learning method is proposed to efficiently increase the discriminative ability of the filter. Substantial experiments on the UAVDT and UAV123 datasets have validated that the proposed tracker performs competitively against 26 other top DCF and deep-learning-based trackers at over 40 fps on a CPU.
|
|
10:45-11:00, Paper MoAT20.4 | |
>Next-Best-View Planning for Surface Reconstruction of Large-Scale 3D Environments with Multiple UAVs |
> Video Attachment
|
|
Hardouin, Guillaume | ONERA |
Moras, Julien | ONERA |
Morbidi, Fabio | Université De Picardie Jules Verne |
Marzat, Julien | ONERA, Université Paris Saclay |
Mouaddib, El Mustapha | Universite De Picardie Jules Verne |
Keywords: Aerial Systems: Perception and Autonomy, Reactive and Sensor-Based Planning, Path Planning for Multiple Mobile Robots or Agents
Abstract: In this paper, we propose a novel cluster-based Next-Best-View path planning algorithm to simultaneously explore and inspect large-scale unknown environments with multiple Unmanned Aerial Vehicles (UAVs). In the majority of existing informative path-planning methods, a volumetric criterion is used for the exploration of unknown areas, and the presence of surfaces is only taken into account indirectly. Unfortunately, this approach may lead to inaccurate 3D models, with no guarantee of global surface coverage. To perform accurate 3D reconstructions and minimize runtime, we extend our previous online planner based on TSDF (Truncated Signed Distance Function) mapping to a fleet of UAVs. Sensor configurations to be visited are extracted directly from the map and assigned greedily to the aerial vehicles, in order to maximize the global utility at the fleet level. The performance of the proposed TSGA (TSP-Greedy Allocation) planner and of a nearest-neighbor planner has been compared via realistic numerical experiments in two challenging environments (a power plant and the Statue of Liberty) with up to five quadrotor UAVs equipped with stereo cameras.
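Greedy selection of sensor configurations by marginal coverage is the classic heuristic for such surface-coverage objectives; a toy sketch (the viewpoint-to-patch visibility sets are illustrative, and this omits the travel-cost allocation side of the TSGA planner):

```python
def greedy_views(view_cover, n_select):
    """Pick up to n_select viewpoints by largest marginal surface
    coverage; stop early once no viewpoint sees new surface."""
    covered, chosen = set(), []
    for _ in range(n_select):
        best = max(range(len(view_cover)),
                   key=lambda i: -1 if i in chosen
                                 else len(view_cover[i] - covered))
        if not (view_cover[best] - covered):
            break                        # nothing new to see
        chosen.append(best)
        covered |= view_cover[best]
    return chosen, covered

# hypothetical surface patches visible from 4 candidate viewpoints
views = [{1, 2, 3}, {3, 4}, {4, 5, 6, 7}, {2, 7}]
chosen, covered = greedy_views(views, 3)
```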
|
|
11:00-11:15, Paper MoAT20.5 | |
>Towards Robust Visual Tracking for Unmanned Aerial Vehicle with Tri-Attentional Correlation Filters |
|
He, Yujie | Tongji University |
Fu, Changhong | Tongji University |
Lin, Fuling | Tongji University |
Li, Yiming | Tongji University |
Lu, Peng | The Hong Kong Polytechnic University |
Keywords: Aerial Systems: Perception and Autonomy, Aerial Systems: Applications, Surveillance Systems
Abstract: Object tracking has been broadly applied in unmanned aerial vehicle (UAV) tasks in recent years. However, existing algorithms still face difficulties such as partial occlusion, cluttered backgrounds, and other challenging visual factors. Inspired by cutting-edge attention mechanisms, a novel object tracking framework is proposed to leverage multi-level visual attention. Three primary attention mechanisms, i.e., contextual attention, dimensional attention, and spatiotemporal attention, are integrated into the training and detection stages of a correlation filter-based tracking pipeline. The proposed tracker is thus equipped with robust discriminative power against challenging factors while maintaining high operational efficiency in UAV scenarios. Quantitative and qualitative experiments on two well-known benchmarks with 173 challenging UAV video sequences demonstrate the effectiveness of the proposed framework. The proposed tracking algorithm favorably outperforms 12 state-of-the-art methods, yielding a 4.8% relative gain on UAVDT and an 8.2% relative gain on UAV123@10fps against the baseline tracker, while operating at a speed of ~28 frames per second.
|
|
11:15-11:30, Paper MoAT20.6 | |
>Inspection-On-The-Fly Using Hybrid Physical Interaction Control for Aerial Manipulators |
> Video Attachment
|
|
Abbaraju, Praveen | Purdue University |
Ma, Xin | Chinese University of Hong Kong |
Manoj Krishnan, Harikrishnan | Purdue University |
Venkatesh, L.N Vishnunandan | Purdue University |
Rastgaar, Mo | Purdue University |
Voyles, Richard | Purdue University |
Keywords: Aerial Systems: Perception and Autonomy
Abstract: Inspection of structural properties (surface stiffness and coefficient of restitution) is crucial for understanding and performing aerial manipulation in unknown environments, with little to no prior knowledge of their state. Inspection-on-the-fly is the uncanny ability of humans to infer states during manipulation, reducing the need to perform inspection and manipulation separately. This paper presents an infrastructure for the inspection-on-the-fly method for aerial manipulators using hybrid physical interaction control. With the proposed method, structural properties can be estimated during physical interactions. A three-stage hybrid physical interaction control paradigm is presented to robustly approach, acquire, and impart a desired force signature onto a surface. This is achieved by combining a hybrid force/motion controller with model-based feed-forward impact control as an intermediate phase. The proposed controller ensures a steady transition from unconstrained motion control to constrained force control, while reducing the lag associated with the force control phase. An underlying operational-space dynamic configuration manager permits complex, redundant vehicle/arm combinations. Experiments were carried out in a mock-up of a Dept. of Energy exhaust shaft to show the effectiveness of the inspection-on-the-fly method in determining the structural properties of the target surface, and the performance of the hybrid physical interaction controller in reducing the lag associated with the force control phase.
|
|
11:15-11:30, Paper MoAT20.7 | |
>AirCapRL: Autonomous Aerial Human Motion Capture Using Deep Reinforcement Learning |
|
Tallamraju, Rahul | International Institute of Information Technology, Hyderabad |
Saini, Nitin | Max Planck Institute for Intelligent Systems |
Bonetto, Elia | Max Planck Institute for Intelligent Systems, Tuebingen |
Pabst, Michael | Max Planck Institute for Intelligent Systems |
Liu, Yu Tang | Max Planck Institute Intelligent System |
Black, Michael | Max Planck Institute for Intelligent Systems in Tübingen |
Ahmad, Aamir | Max Planck Institute for Intelligent Systems |
Keywords: Reinforcement Learning, Aerial Systems: Perception and Autonomy, Multi-Robot Systems
Abstract: In this letter, we introduce a deep reinforcement learning (DRL) based multi-robot formation controller for the task of autonomous aerial human motion capture (MoCap). We focus on vision-based MoCap, where the objective is to estimate the trajectory of body pose and shape of a single moving person using multiple micro aerial vehicles. State-of-the-art solutions to this problem are based on classical control methods, which depend on hand-crafted system and observation models. Such models are difficult to derive and generalize across different systems. Moreover, the non-linearities and non-convexities of these models lead to sub-optimal controls. In our work, we formulate this problem as a sequential decision making task to achieve the vision-based motion capture objectives, and solve it using a deep neural network-based RL method. We leverage proximal policy optimization (PPO) to train a stochastic decentralized control policy for formation control. The neural network is trained in a parallelized setup in synthetic environments. We performed extensive simulation experiments to validate our approach. Finally, real-robot experiments demonstrate that our policies generalize to real world conditions.
|
|
MoAT21 |
Room T21 |
Perception for Aerial Systems |
Regular session |
Chair: Scherer, Sebastian | Carnegie Mellon University |
Co-Chair: Albl, Cenek | ETH Zurich |
|
10:00-10:15, Paper MoAT21.1 | |
>DR^2Track: Towards Real-Time Visual Tracking for UAV Via Distractor Repressed Dynamic Regression |
|
Fu, Changhong | Tongji University |
Ding, Fangqiang | Tongji University |
Li, Yiming | Tongji University |
Jin, Jin | Tongji University |
Feng, Chen | New York University |
Keywords: Aerial Systems: Applications, Computer Vision for Automation, Aerial Systems: Perception and Autonomy
Abstract: Visual tracking has yielded promising applications with unmanned aerial vehicles (UAVs). In the literature, advanced discriminative correlation filter (DCF) trackers generally distinguish the foreground from the background with a learned regressor that regresses implicitly circulated samples to a fixed target label. However, the predefined and unchanging regression target results in low robustness and adaptivity in uncertain aerial tracking scenarios. In this work, we exploit the local maximum points of the response map generated in the detection phase to automatically locate current distractors. By repressing the response of distractors in regressor learning, we dynamically and adaptively alter the regression target to improve both tracking robustness and adaptivity. Substantial experiments conducted on three challenging UAV benchmarks demonstrate both the excellent performance and extraordinary speed (~50 fps on an inexpensive CPU) of our tracker.
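The distractor-location step, finding local maxima of the response map other than the main peak, can be sketched directly. This is our illustration, not the authors' code; the `find_distractors` helper and its `ratio` threshold are assumed names and parameters.

```python
# Locate distractors as secondary local maxima of a DCF response map,
# so their response can be repressed in the next regressor update.
# Threshold ratio and helper names are illustrative assumptions.

def local_maxima(resp):
    """All strict local maxima of a 2-D response map (list of lists)."""
    h, w = len(resp), len(resp[0])
    peaks = []
    for i in range(h):
        for j in range(w):
            v = resp[i][j]
            neigh = [resp[a][b]
                     for a in range(max(0, i - 1), min(h, i + 2))
                     for b in range(max(0, j - 1), min(w, j + 2))
                     if (a, b) != (i, j)]
            if all(v > n for n in neigh):
                peaks.append((v, i, j))
    return peaks

def find_distractors(resp, ratio=0.5):
    """Local maxima other than the global peak, above ratio * peak value."""
    peaks = sorted(local_maxima(resp), reverse=True)
    main_val = peaks[0][0]
    return [(i, j) for v, i, j in peaks[1:] if v > ratio * main_val]

resp = [[0.0] * 7 for _ in range(5)]
resp[2][1] = 1.0      # target peak
resp[2][5] = 0.7      # similar object -> distractor
resp[0][3] = 0.2      # noise, below threshold
distractors = find_distractors(resp)
```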
|
|
10:15-10:30, Paper MoAT21.2 | |
>Towards Vision-Based Impedance Control for the Contact Inspection of Unknown Generically-Shaped Surfaces with a Fully-Actuated UAV |
> Video Attachment
|
|
Rashad, Ramy | University of Twente |
Bicego, Davide | University of Twente |
Jiao, Ran | Beihang University |
Sanchez-Escalonilla, Santiago | University of Twente |
Stramigioli, Stefano | University of Twente |
Keywords: Aerial Systems: Perception and Autonomy, Aerial Systems: Applications, Compliance and Impedance Control
Abstract: The integration of computer vision techniques for the accomplishment of autonomous interaction tasks represents a challenging research direction in the context of aerial robotics. In this paper, we consider the problem of contact-based inspection of a textured target of unknown geometry and pose. Exploiting state-of-the-art techniques in computer graphics, tuned and improved for the task at hand, we designed a framework for projecting a desired end-effector trajectory onto a generically-shaped surface to be inspected. Combining these results with previous work on energy-based interaction control, we lay the basis of what we call the vision-based impedance control paradigm. To demonstrate the feasibility and effectiveness of our methodology, we present results from both realistic ROS/Gazebo simulations and preliminary experiments with a fully-actuated hexarotor interacting with heterogeneous curved surfaces whose geometric description is not available a priori, provided that enough visual features on the target are naturally or artificially available to support localization and mapping.
|
|
10:30-10:45, Paper MoAT21.3 | |
>Towards Deep Learning Assisted Autonomous UAVs for Manipulation Tasks in GPS-Denied Environments |
> Video Attachment
|
|
Kumar, Ashish | Indian Institute of Technology, Kanpur |
Vohra, Mohit | Indian Institute of Technology, Kanpur |
Prakash, Ravi | Indian Institute of Technology, Kanpur |
Behera, Laxmidhar | IIT Kanpur |
Keywords: Aerial Systems: Perception and Autonomy, Deep Learning for Visual Perception, Computer Vision for Automation
Abstract: In this work, we present a pragmatic approach to enable unmanned aerial vehicles (UAVs) to autonomously perform the highly complicated task of object pick-and-place. This paper is largely inspired by Challenge 2 of MBZIRC 2020 and is primarily focused on the task of assembling large 3D structures in outdoor, GPS-denied environments. The primary contributions of this system are: (i) a novel, computationally efficient, deep-learning-based unified multi-task visual perception system for target localization, part segmentation, and tracking, (ii) a novel deep-learning-based grasp state estimation, (iii) a retracting electromagnetic gripper design, (iv) a remote computing approach that exploits state-of-the-art MIMO-based high-speed (5000 Mb/s) wireless links to allow the UAVs to execute compute-intensive tasks on remote high-end compute servers, and (v) system integration, in which several system components are woven together to develop an optimized software stack. We use a DJI Matrice-600 Pro, a hex-rotor UAV, and interface it with the custom-designed gripper. Our framework is deployed on this UAV to report the performance analysis of the individual modules. Apart from the manipulation system, we also highlight several hidden challenges associated with UAVs in this context.
|
|
10:45-11:00, Paper MoAT21.4 | |
>Reconstruction of 3D Flight Trajectories from Ad-Hoc Camera Networks |
|
Li, Jingtong | ETH Zurich |
Murray, Jesse | ETH Zurich |
Ismaili, Dorina | Technical University Munich |
Schindler, Konrad | ETH Zurich |
Albl, Cenek | ETH Zurich |
Keywords: Aerial Systems: Applications, Computer Vision for Automation, Visual Tracking
Abstract: We present a method to reconstruct the 3D trajectory of an airborne robotic system only from videos recorded with cameras that are unsynchronized, may feature rolling shutter distortion, and whose viewpoints are unknown. Our approach enables robust and accurate outside-in tracking of dynamically flying targets, with cheap and easy-to-deploy equipment. We show that, in spite of the weakly constrained setting, recent developments in computer vision make it possible to reconstruct trajectories in 3D from unsynchronized, uncalibrated networks of consumer cameras, and validate the proposed method in a realistic field experiment. We make our code available along with the data, including cm-accurate ground-truth from differential GNSS navigation.
|
|
11:00-11:15, Paper MoAT21.5 | |
>Bayesian Fusion of Unlabeled Vision and RF Data for Aerial Tracking of Ground Targets |
> Video Attachment
|
|
Kanlapuli Rajasekaran, Ramya | University of Colorado Boulder |
Ahmed, Nisar | University of Colorado Boulder |
Frew, Eric W. | University of Colorado |
Keywords: Aerial Systems: Perception and Autonomy, Sensor Fusion, Visual Tracking
Abstract: This paper presents a method for target localization and tracking in clutter using Bayesian fusion of vision and Radio Frequency (RF) sensors carried aboard a small Unmanned Aircraft System (sUAS). Sensor fusion is used to ensure tracking robustness and reliability in case of camera occlusion or RF signal interference. Camera data are processed using an off-the-shelf algorithm that detects possible objects of interest in a given image frame, and the true RF-emitting target must be identified from among these if it is present. These data sources, as well as the unknown motion of the target, lead to heavily non-linear, non-Gaussian target state uncertainties, which are not amenable to typical data association methods for tracking. A probabilistic model is thus first rigorously developed to relate conditional dependencies between target movements, RF data, and visual object detections. A modified particle filter is then developed to simultaneously reason over target states and RF emitter association hypothesis labels for visual object detections. Truth model simulations are presented to compare and validate the effectiveness of the RF + visual data fusion filter.
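The idea of particles that jointly carry a target state and an association label can be shown in a toy 1-D filter. This is our simplified illustration, not the paper's filter: the random-walk motion model, noise levels, and resampling details are all assumed.

```python
# Toy 1-D particle filter where each particle carries (state, label):
# the label hypothesizes which visual detection is the RF emitter.
# Motion model and noise parameters are illustrative assumptions.
import math
import random

random.seed(0)

def pf_step(particles, detections, rf_meas, sigma_v=0.5, sigma_rf=1.0):
    new, weights = [], []
    for x, _ in particles:
        x = x + random.gauss(0.0, 0.3)             # random-walk motion model
        label = random.randrange(len(detections))  # sample association hypothesis
        w = (math.exp(-0.5 * ((detections[label] - x) / sigma_v) ** 2) *
             math.exp(-0.5 * ((rf_meas - x) / sigma_rf) ** 2))
        new.append((x, label))
        weights.append(w)
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # Systematic resampling over the normalized weights.
    resampled, step = [], 1.0 / len(new)
    u, cum, i = random.random() * step, weights[0], 0
    for _ in range(len(new)):
        while u > cum and i < len(new) - 1:
            i += 1
            cum += weights[i]
        resampled.append(new[i])
        u += step
    return resampled

particles = [(random.uniform(-5, 5), 0) for _ in range(500)]
for _ in range(20):
    # Detection 0 is the true target near 2.0; detection 1 is clutter at -3.0.
    particles = pf_step(particles,
                        [2.0 + random.gauss(0, 0.2), -3.0],
                        2.0 + random.gauss(0, 0.5))
mean_x = sum(x for x, _ in particles) / len(particles)
frac_correct = sum(1 for _, l in particles if l == 0) / len(particles)
```

Because particles carrying the clutter label receive near-zero weight, resampling concentrates both the state estimate and the association belief on the true emitter.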
|
|
11:15-11:30, Paper MoAT21.6 | |
>Learning Visuomotor Policies for Aerial Navigation Using Cross-Modal Representations |
> Video Attachment
|
|
Bonatti, Rogerio | Carnegie Mellon University |
Madaan, Ratnesh | Carnegie Mellon University |
Vineet, Vibhav | Stanford University |
Scherer, Sebastian | Carnegie Mellon University |
Kapoor, Ashish | MicroSoft |
Keywords: Aerial Systems: Perception and Autonomy, Visual-Based Navigation, Representation Learning
Abstract: Machines are a long way from robustly solving open-world perception-control tasks, such as first-person view (FPV) aerial navigation. While recent advances in end-to-end machine learning, especially imitation learning and reinforcement learning, appear promising, they are constrained by the need for large amounts of difficult-to-collect labeled real-world data. Simulated data, on the other hand, are easy to generate but generally do not render safe behaviors in diverse real-life scenarios. In this work we propose a novel method for learning robust visuomotor policies for real-world deployment that can be trained purely with simulated data. We develop rich state representations that combine supervised and unsupervised environment data. Our approach takes a cross-modal perspective, where separate modalities correspond to the raw camera data and the system states relevant to the task, such as the relative pose of gates to the drone in the case of drone racing. We feed both data modalities into a novel factored architecture, which learns a joint low-dimensional embedding via Variational Auto-Encoders. This compact representation is then fed into a control policy, which we train using imitation learning with expert trajectories in a simulator. We analyze the rich latent spaces learned with our proposed representations, and show that the use of our cross-modal architecture significantly improves control policy performance as compared to end-to-end learning or purely unsupervised feature extractors. We also present real-world results for drone navigation through gates in different track configurations and environmental conditions. Our proposed method, which runs fully onboard, can successfully generalize the learned representations and policies across simulation and reality, significantly outperforming baseline approaches.
Supplementary video available at: https://youtu.be/AxE7qGKJWaw and open-sourced code available at: https://github.com/microsoft/AirSim-Drone-Racing-VAE-Imitation
|
|
MoAT22 |
Room T22 |
Sensor Fusion for Aerial, Autonomous, and Marine Robotics |
Regular session |
Chair: Englot, Brendan | Stevens Institute of Technology |
Co-Chair: Atkins, Ella | University of Michigan |
|
10:00-10:15, Paper MoAT22.1 | |
>Touch the Wind: Simultaneous Airflow, Drag and Interaction Sensing on a Multirotor |
> Video Attachment
|
|
Tagliabue, Andrea | ETH Zurich |
Paris, Aleix | Massachusetts Institute of Technology |
Kim, Suhan | Carnegie Mellon University |
Kubicek, Regan | Carnegie Mellon University |
Bergbreiter, Sarah | Carnegie Mellon University |
How, Jonathan Patrick | Massachusetts Institute of Technology |
Keywords: Sensor Fusion, Aerial Systems: Perception and Autonomy, Aerial Systems: Applications
Abstract: Disturbance estimation for Micro Aerial Vehicles (MAVs) is crucial for robustness and safety. In this paper, we use novel, bio-inspired airflow sensors to measure the airflow acting on a MAV, and we fuse this information in an Unscented Kalman Filter (UKF) to simultaneously estimate the three-dimensional wind vector, the drag force, and other interaction forces (e.g. due to collisions, interaction with a human) acting on the robot. To this end, we present and compare a fully model-based and a deep learning-based strategy. The model-based approach considers the MAV and airflow sensor dynamics and its interaction with the wind, while the deep learning-based strategy uses a Long Short-Term Memory (LSTM) to obtain an estimate of the relative airflow, which is then fused in the proposed filter. We validate our methods in hardware experiments, showing that we can accurately estimate relative airflow of up to 4 m/s, and we can differentiate drag and interaction force.
|
|
10:15-10:30, Paper MoAT22.2 | |
>Fusing Concurrent Orthogonal Wide-Aperture Sonar Images for Dense Underwater 3D Reconstruction |
> Video Attachment
|
|
McConnell, John | Stevens Institute of Technology |
Martin, John D. | Stevens Institute of Technology |
Englot, Brendan | Stevens Institute of Technology |
Keywords: Marine Robotics, Range Sensing, Sensor Fusion
Abstract: We propose a novel approach to handling the ambiguity in elevation angle associated with the observations of a forward looking multi-beam imaging sonar, and the challenges it poses for performing an accurate 3D reconstruction. We utilize a pair of sonars with orthogonal axes of uncertainty to independently observe the same points in the environment from two different perspectives, and associate these observations. Using these concurrent observations, we can create a dense, fully defined point cloud at every time-step to aid in reconstructing the 3D geometry of underwater scenes. We will evaluate our method in the context of the current state of the art, for which strong assumptions on object geometry limit applicability to generalized 3D scenes. We will discuss results from laboratory tests that quantitatively benchmark our algorithm's reconstruction capabilities, and results from a real-world, tidal river basin which qualitatively demonstrate our ability to reconstruct a cluttered field of underwater objects.
|
|
10:30-10:45, Paper MoAT22.3 | |
>A Scalable Framework for Robust Vehicle State Estimation with a Fusion of a Low-Cost IMU, the GNSS, Radar, a Camera and Lidar |
|
Liang, Yuran | Technical University of Berlin |
Müller, Steffen | Technical University of Berlin |
Schwendner, Daniel | BMW Group |
Rolle, Daniel | BMW Group |
Ganesch, Dieter | BMW Group |
Schaffer, Immanuel | BMW Group |
Keywords: Sensor Fusion, Autonomous Vehicle Navigation, Computer Vision for Transportation
Abstract: Automated driving requires highly precise and robust vehicle state estimation for its environmental perception, motion planning and control functions. Using GPS and environmental sensors can compensate for the deficits of estimation based on traditional vehicle dynamics sensors. However, each type of sensor has specific strengths and limitations in accuracy and robustness, due to differing detection quality and robustness under diverse environmental conditions. For these reasons, we present a scalable concept for vehicle state estimation using an error-state extended Kalman filter (ESEKF) to fuse classical vehicle sensors with environmental sensors. The state variables, i.e., position, velocity and orientation, are predicted by a 6-degree-of-freedom (DoF) vehicle kinematic model that uses a low-cost inertial measurement unit (IMU) on a customer vehicle. The error of the 6-DoF rigid body motion model is estimated using observations of global position from the global navigation satellite system (GNSS) and of the environment from radar, a camera and low-cost lidar. Our concept is scalable such that it is compatible with different sensor setups on different vehicle configurations. The experimental results compare various sensor combinations with measurement data in scenarios such as dynamic driving maneuvers on a test field. The results show that our approach ensures accuracy and robustness with redundant sensor data under regular and dynamic driving conditions.
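The error-state pattern, propagating a nominal state with the IMU and correcting an estimated error from absolute fixes, can be shown in one dimension. This is a minimal sketch under assumed noise levels and rates, not the paper's 6-DoF ESEKF.

```python
# Minimal 1-D error-state filter sketch: the nominal state integrates
# noisy IMU acceleration at 100 Hz, and the error state [dp, dv] is
# estimated from 1 Hz GNSS-like position fixes and folded back in.
# All rates and noise magnitudes are illustrative assumptions.
import random

random.seed(1)

dt, q, r = 0.01, 0.05, 0.5
p = v = 0.0                          # nominal state (position, velocity)
P = [[1.0, 0.0], [0.0, 1.0]]        # error-state covariance

true_p = true_v = 0.0
for k in range(2000):
    a_true = 0.5                                  # constant true acceleration
    true_v += a_true * dt
    true_p += true_v * dt
    a_meas = a_true + random.gauss(0, 0.2)        # noisy IMU sample
    v += a_meas * dt                              # nominal propagation
    p += v * dt
    # Covariance propagation with F = [[1, dt], [0, 1]], Q = diag(0, q*dt).
    P = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1],
          P[0][1] + dt * P[1][1]],
         [P[1][0] + dt * P[1][1],
          P[1][1] + q * dt]]
    if k % 100 == 99:                             # 1 Hz GNSS position fix
        z = true_p + random.gauss(0, 0.3)
        y = z - p                                 # innovation observes the error
        S = P[0][0] + r
        K = [P[0][0] / S, P[1][0] / S]
        p += K[0] * y                             # inject estimated error
        v += K[1] * y
        P = [[(1 - K[0]) * P[0][0], (1 - K[0]) * P[0][1]],
             [P[1][0] - K[1] * P[0][0], P[1][1] - K[1] * P[0][1]]]
```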
|
|
10:45-11:00, Paper MoAT22.4 | |
>Probabilistic End-To-End Vehicle Navigation in Complex Dynamic Environments with Multimodal Sensor Fusion |
> Video Attachment
|
|
Cai, Peide | Hong Kong University of Science and Technology |
Wang, Sukai | Robotics and Multi-Perception Lab (RAM-LAB), Robotics Institute, |
Sun, Yuxiang | Hong Kong University of Science and Technology |
Liu, Ming | Hong Kong University of Science and Technology |
Keywords: Automation Technologies for Smart Cities, Service Robots, Field Robots
Abstract: All-day and all-weather navigation is a critical capability for autonomous driving, which requires proper reaction to varied environmental conditions and complex agent behaviors. Recently, with the rise of deep learning, end-to-end control for autonomous vehicles has been well studied. However, most works are solely based on visual information, which can be degraded by challenging illumination conditions such as dim light or total darkness. In addition, they usually generate and apply deterministic control commands without considering the uncertainties in the future. In this paper, based on imitation learning, we propose a probabilistic driving model with multi-perception capability utilizing the information from the camera, lidar and radar. We further evaluate its driving performance online on our new driving benchmark, which includes various environmental conditions (e.g., urban and rural areas, traffic densities, weather and times of the day) and dynamic obstacles (e.g., vehicles, pedestrians, motorcyclists and bicyclists). The results suggest that our proposed model outperforms baselines and achieves excellent generalization performance in unseen environments with heavy traffic and extreme weather.
|
|
11:00-11:15, Paper MoAT22.5 | |
>Vision Only 3-D Shape Estimation for Autonomous Driving |
|
Monica, Josephine | Cornell University |
Campbell, Mark | Cornell University |
Keywords: Sensor Fusion, Computer Vision for Automation, Autonomous Vehicle Navigation
Abstract: We present a probabilistic framework for detailed 3-D shape estimation and tracking using only vision measurements. Vision detections are processed via a bird’s eye view representation, creating accurate detections at far ranges. A probabilistic model of the vision based point cloud measurements is learned and used in the framework. A 3-D shape model is developed by fusing a set of point cloud detections via a recursive Best Linear Unbiased Estimator (BLUE). The point cloud fusion accounts for noisy and inaccurate measurements, as well as minimizing growth of points in the 3-D shape. The use of a tracking algorithm and sensor pose enables 3-D shape estimation of dynamic objects from a moving car. Results are analyzed on experimental data, demonstrating the ability of our approach to produce more accurate and cleaner shape estimates.
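The recursive BLUE fusion of noisy point observations reduces, in the scalar case, to a precision-weighted update. This is a simplified sketch of the idea under a diffuse prior, not the paper's full 3-D shape model.

```python
# Recursive BLUE sketch for fusing repeated noisy observations of one
# surface coordinate; variances are treated as scalars for illustration.

def blue_update(x, P, z, R):
    """Fuse estimate (x, P) with measurement (z, R). Returns new (x, P)."""
    K = P / (P + R)                 # gain weights by relative precision
    return x + K * (z - x), (1 - K) * P

x, P = 0.0, 1e6                     # diffuse prior
for z, R in [(2.1, 0.04), (1.9, 0.04), (2.05, 0.01)]:
    x, P = blue_update(x, P, z, R)
```

After the three updates the estimate equals the precision-weighted mean of the measurements, and the variance shrinks toward the inverse of the summed precisions, which is what keeps noisy, repeated detections from inflating the fused shape.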
|
|
11:15-11:30, Paper MoAT22.6 | |
>Polylidar - Polygons from Triangular Meshes |
|
Castagno, Jeremy | University of Michigan |
Atkins, Ella | University of Michigan |
Keywords: Aerial Systems: Perception and Autonomy, Reactive and Sensor-Based Planning, Computational Geometry
Abstract: This paper presents Polylidar, an efficient algorithm to extract non-convex polygons, including interior holes, from 2D point sets. Plane-segmented point clouds can be input into Polylidar to extract their polygonal counterparts, thereby reducing map size and improving visualization. The algorithm begins by triangulating the point set and filtering triangles by user-configurable parameters such as triangle edge length. Next, connected triangles are extracted into triangular mesh regions representing the shape of the point set. Finally, each region is converted to a polygon through a novel boundary-following method which accounts for holes. Real-world and synthetic benchmarks are presented to comparatively evaluate Polylidar's speed and accuracy. Results show comparable accuracy and a more than four-times speedup compared to other concave polygon extraction methods.
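The filter-then-group stages can be sketched on a pre-supplied triangulation. This is an assumed simplification of the pipeline, not the Polylidar implementation: the triangulation is given by hand rather than computed, and boundary following is omitted.

```python
# Sketch of two Polylidar-style stages: drop triangles with edges longer
# than max_edge, then group survivors into connected mesh regions via
# shared edges. Triangulation is supplied manually for illustration.
import math

def edge_len(pts, a, b):
    (x1, y1), (x2, y2) = pts[a], pts[b]
    return math.hypot(x2 - x1, y2 - y1)

def filter_and_group(pts, tris, max_edge):
    keep = [t for t in tris
            if all(edge_len(pts, t[i], t[(i + 1) % 3]) <= max_edge
                   for i in range(3))]
    edges = [{frozenset((t[i], t[(i + 1) % 3])) for i in range(3)}
             for t in keep]
    # Flood-fill surviving triangles into edge-connected regions.
    regions, unvisited = [], set(range(len(keep)))
    while unvisited:
        stack, region = [unvisited.pop()], []
        while stack:
            i = stack.pop()
            region.append(keep[i])
            for j in list(unvisited):
                if edges[i] & edges[j]:     # share an edge -> same region
                    unvisited.discard(j)
                    stack.append(j)
        regions.append(region)
    return regions

# Two point clusters; the long triangle bridging them gets filtered out.
pts = [(0, 0), (1, 0), (0, 1), (5, 0), (6, 0), (5, 1)]
tris = [(0, 1, 2), (1, 3, 2), (3, 4, 5)]    # (1, 3, 2) spans the gap
regions = filter_and_group(pts, tris, max_edge=2.0)
```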
|
|
MoBT1 |
Room T1 |
Marine Robotics |
Regular session |
Chair: Hollinger, Geoffrey | Oregon State University |
|
11:45-12:00, Paper MoBT1.1 | |
>Active Alignment Control-Based LED Communication for Underwater Robots |
> Video Attachment
|
|
Solanki, Pratap Bhanu | Michigan State University |
Bopardikar, Shaunak D. | Michigan State University |
Tan, Xiaobo | Michigan State University |
Keywords: Marine Robotics, Optimization and Optimal Control
Abstract: Achieving and maintaining line-of-sight (LOS) is challenging for underwater optical communication systems, especially when the underlying platforms are mobile. In this work, we propose and demonstrate an active alignment control-based LED communication system that uses the DC value of the communication signal as feedback for LOS maintenance. Utilizing the uni-modal dependence of the light signal strength on local angles, we propose a novel triangular exploration algorithm, which does not require knowledge of the underlying light intensity model, to maximize the signal strength and thereby achieve and maintain LOS. The method maintains an equilateral triangle in the angle space for any three consecutive exploration points, while ensuring the exploration direction is consistent with the local gradient of the signal strength. The effectiveness of the approach is first evaluated in simulation by comparison with extremum-seeking control, where the proposed approach shows a significant advantage in convergence speed. The efficacy is further demonstrated experimentally, where an underwater robot is controlled by a joystick via LED communication.
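One way to picture the equilateral-triangle exploration is as a reflection step: replace the worst of three probe points in angle space with its mirror across the opposite edge, which preserves the equilateral shape while climbing a uni-modal intensity function. This is our reading of the idea with a made-up quadratic intensity model, not the paper's algorithm.

```python
# Equilateral-triangle exploration sketch: reflect the worst probe point
# across the opposite edge and climb a uni-modal intensity function.
# The intensity model and optimum location are illustrative assumptions.

def intensity(p):
    """Assumed uni-modal signal model peaking at angles (3.0, 1.0)."""
    return -((p[0] - 3.0) ** 2 + (p[1] - 1.0) ** 2)

def reflect_worst(tri):
    tri = sorted(tri, key=intensity)        # tri[0] is the worst vertex
    w, a, b = tri
    mid = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)
    new = (2 * mid[0] - w[0], 2 * mid[1] - w[1])   # mirror across the edge
    return [a, b, new]                      # triangle stays equilateral

tri = [(0.0, 0.0), (0.5, 0.0), (0.25, 0.433)]      # equilateral, side 0.5
for _ in range(40):
    tri = reflect_worst(tri)
best = max(tri, key=intensity)
```

Because the reflection keeps the triangle rigid, the walk hones in on the peak without ever needing gradients or an intensity model, only three-point comparisons.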
|
|
12:00-12:15, Paper MoBT1.2 | |
>An Electrocommunication System Using FSK Modulation and Deep Learning Based Demodulation for Underwater Robots |
|
Wang, Qinghao | Peking University |
Liu, Ruijun | Guangxi University of Science and Technology |
Wang, Wei | Massachusetts Institute of Technology |
Xie, Guangming | Peking University |
Keywords: Biologically-Inspired Robots, Biomimetics, Marine Robotics
Abstract: Underwater communication is extremely challenging for small underwater robots that have stringent power and size constraints. In our previous work, we have demonstrated that electrocommunication is an alternative method for small underwater robot communication. This paper presents a new electrocommunication system which utilizes Binary Frequency Shift Keying (2FSK) modulation and deep-learning-based demodulation for underwater robots. We first derive an underwater electrocommunication model which covers both the near-field area and a large transition area outside of the near-field area. The 2FSK modulation is adopted to improve the anti-interference ability of the signal. A deep learning algorithm is used to demodulate the signal by the receiver. Simulations and experiments show that at the same testing condition, the new communication system has a lower bit error rate and higher data rate than the previous electrocommunication system. The communication system achieves stable communication within the distance of 10 m at a data transfer rate of 5 Kbps with a power consumption of less than 0.1 W. The large improvement of the communication distance in this study further advances the application of electrocommunication.
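2FSK maps each bit to one of two tones. The sketch below modulates and then demodulates by comparing per-symbol correlation energy at the two tones; the paper demodulates with a deep network, so this classical correlator is only a stand-in, and the sample rate, baud rate, and tone frequencies are assumed.

```python
# 2FSK sketch: modulate bits onto two tones, demodulate by per-symbol
# tone-energy comparison. Rates and frequencies are assumed values; a
# classical correlator stands in for the paper's deep-learning demodulator.
import math

FS, BAUD, F0, F1 = 8000, 100, 1000, 2000   # sample rate, baud, tone freqs
SPS = FS // BAUD                            # samples per symbol (80)

def modulate(bits):
    out = []
    for b in bits:
        f = F1 if b else F0
        out += [math.sin(2 * math.pi * f * n / FS) for n in range(SPS)]
    return out

def tone_energy(chunk, f):
    """Correlation energy of chunk against tone f (in-phase + quadrature)."""
    c = sum(s * math.cos(2 * math.pi * f * n / FS) for n, s in enumerate(chunk))
    q = sum(s * math.sin(2 * math.pi * f * n / FS) for n, s in enumerate(chunk))
    return c * c + q * q

def demodulate(sig):
    bits = []
    for k in range(0, len(sig), SPS):
        chunk = sig[k:k + SPS]
        bits.append(1 if tone_energy(chunk, F1) > tone_energy(chunk, F0) else 0)
    return bits

bits = [1, 0, 1, 1, 0, 0, 1]
decoded = demodulate(modulate(bits))
```

With an integer number of tone cycles per symbol the two tones are orthogonal over a symbol window, which is what makes the energy comparison reliable even before any learning is applied.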
|
|
12:15-12:30, Paper MoBT1.3 | |
>Demonstration of a Novel Phase Lag Controlled Roll Rotation Mechanism Using a Two-DOF Soft Swimming Robot |
> Video Attachment
|
|
Liu, Bangyuan | Georgia Institute of Technology |
Hammond III, Frank L. | Georgia Institute of Technology |
Keywords: Marine Robotics, Underactuated Robots, Biologically-Inspired Robots
Abstract: Underwater roll rotation is a basic but essential maneuver that allows many biological swimmers to achieve high maneuverability and complex locomotion patterns. In particular, sea mammals (e.g., the sea otter) with flexible vertebral structures have a unique mechanism for efficiently achieving roll rotation, propelled not mainly by inter-digital webbing or fins, but by bending and twisting the body. In this work, we implement and effectively control roll rotation by mimicking this kind of efficient biomorphic roll mechanism on our two-degree-of-freedom (2DOF) soft modular swimming robot. The robot also achieves other common maneuvers, such as pitch/yaw rotation and linear swimming patterns. The proposed 2DOF soft swimming robot platform includes an underactuated, cable-driven design that mimics the flexible cascaded skeletal structure of soft spine tissue and hard spine bone seen in many fish species. The cable-driven actuation mechanism is oriented laterally for forward motion and steering in 3D. The robot can perform a steady and controllable roll rotation with a maximum angular speed of 41.6 deg/s. A hypothesis explaining this novel roll rotation mechanism is set forth, and the phenomenon is systematically studied at different frequencies and phase lag gait conditions. Preliminary results show a linear relationship between roll angular velocity and frequency within a specific range. Additionally, the roll rotation can be controlled independently under some special conditions. These abilities form the foundation for future research on 3D underwater locomotion with adaptive, controllable maneuvering capabilities.
|
|
12:30-12:45, Paper MoBT1.4 | |
>Pauses Provide Effective Control for an Underactuated Oscillating Swimming Robot |
|
Knizhnik, Gedaliah | University of Pennsylvania |
deZonia, Philip | University of Pennsylvania |
Yim, Mark | University of Pennsylvania |
Keywords: Underactuated Robots, Marine Robotics
Abstract: We describe motion primitives and closed-loop control for a unique low-cost single-motor oscillating aquatic system: the Modboat. The Modboat is driven by the conservation of angular momentum, which is used to actuate two passive flippers in a sequential paddling motion for propulsion and steering. We propose a discrete description of the motion of the system, which oscillates around desired trajectories, and propose two motion primitives, one frequency-based and one pause-based, with associated closed-loop controllers. Testing is performed to evaluate each motion primitive, the merits of each are presented, and the pause-based primitive is shown to be significantly superior. Finally, waypoint following is implemented using both primitives and shown to be significantly more successful using the pause-based motion primitive.
|
|
12:45-13:00, Paper MoBT1.5 | |
>Topology-Aware Self-Organizing Maps for Robotic Information Gathering |
|
McCammon, Seth | Oregon State University |
Jones, Dylan | Oregon State University |
Hollinger, Geoffrey | Oregon State University |
Keywords: Marine Robotics, Motion and Path Planning, Computational Geometry
Abstract: In this paper, we present a novel algorithm for constructing a maximally informative path for a robot in an information gathering task. We use a Self-Organizing Map (SOM) framework to discover important topological features in the information function. Using these features, we identify a set of distinct classes of trajectories, each of which has improved convexity compared with the original function. We then leverage a Stochastic Gradient Ascent (SGA) optimization algorithm within each of these classes to optimize promising representative paths. The increased convexity leads to an improved chance of SGA finding the globally optimal path across all homotopy classes. We demonstrate our approach in three different simulated experiments. First, we show that our SOM is able to correctly learn the topological features of a gyre environment with a well-defined topology. Then, in the second set of experiments, we compare the effectiveness of our algorithm in an information gathering task across the gyre world, a set of randomly generated worlds, and a set of worlds drawn from real-world ocean model data. In these experiments our algorithm performs competitively or better than a state-of-the-art Branch and Bound method while requiring significantly less computation time. Lastly, the final set of experiments shows that our method scales better than the comparison methods across different planning mission sizes in real-world environments.
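The per-class SGA refinement can be sketched as stochastic gradient ascent over a path's interior waypoints. This is a toy version: the information field, the finite-difference gradient, and the step sizes are all assumed for illustration, and the SOM-based class discovery is omitted.

```python
# Stochastic gradient ascent over interior waypoints of a path, climbing
# an assumed smooth information field. Endpoints stay fixed, as a path's
# start and goal would. Field and step sizes are illustrative.
import random

random.seed(2)

def info(p):
    """Assumed uni-modal information field peaking at (1.0, 2.0)."""
    return 1.0 / (1.0 + (p[0] - 1.0) ** 2 + (p[1] - 2.0) ** 2)

def path_reward(path):
    return sum(info(p) for p in path)

def sga(path, iters=300, step=0.05, eps=0.1):
    path = [list(p) for p in path]              # work on a copy
    for _ in range(iters):
        i = random.randrange(1, len(path) - 1)  # pick a random interior point
        p = path[i]
        for d in (0, 1):                        # finite-difference gradient
            plus, minus = p[:], p[:]
            plus[d] += eps
            minus[d] -= eps
            g = (info(plus) - info(minus)) / (2 * eps)
            p[d] += step * g                    # ascend the info field
    return path

start = [(0.0, 0.0), (0.5, 0.5), (1.5, 1.0), (2.0, 4.0)]
opt = sga(start)
```

Within a single homotopy class and with the improved convexity the paper describes, each such update pushes the representative path toward the class's best-informed variant.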
|
|
13:00-13:15, Paper MoBT1.6 | |
>The SPIR: An Autonomous Underwater Robot for Bridge Pile Cleaning and Condition Assessment |
|
Le, Duy Khoa | University of Technology Sydney |
To, Andrew | University of Technology, Sydney |
Leighton, Brenton | University of Technology Sydney |
Hassan, Mahdi | University of Technology, Sydney |
Liu, Dikai | University of Technology, Sydney |
Keywords: Marine Robotics, Robotics in Hazardous Fields, Autonomous Agents
Abstract: The SPIR, Submersible Pylon Inspection Robot, is developed to provide an innovative and practical solution for keeping workers safe during the maintenance of underwater structures in shallow water, which involves working in dangerous currents and high-pressure water-jet cleaning. More advanced than work-class Remotely Operated Vehicle technology, the SPIR is automated and requires minimal human involvement in the working process, effectively lowering the learning curve required to conduct the work. To make the SPIR operate effectively in poor visibility and highly disturbed environments, multiple new technologies are developed and integrated into the system, including SBL-SONAR-based navigation, 6-DOF stabilisation, and vision-based 3D mapping. Extensive testing and field trials at various bridges were conducted to verify the robotic system. The results demonstrate the suitability of the SPIR for substituting for humans in hazardous underwater tasks such as autonomous cleaning and inspection of bridge and wharf piles.
|
|
13:00-13:15, Paper MoBT1.7 | |
>Vehicle-In-The-Loop Framework for Testing Long-Term Autonomy in a Heterogeneous Marine Robot Swarm |
> Video Attachment
|
|
Babic, Anja | University of Zagreb, Faculty of Electrical Engineering and Comp |
Vasiljevic, Goran | Faculty of Electrical Engineering and Computing, Zagreb, Croatia |
Miskovic, Nikola | University of Zagreb, Faculty of Electrical Engineering And |
Keywords: Marine Robotics, Task Planning, Cooperating Robots
Abstract: A heterogeneous swarm of marine robots was developed with the goal of autonomous long-term monitoring of environmental phenomena in the highly relevant ecosystem of Venice, Italy. As logistics are a continuing challenge in the field of marine robotics, especially when dealing with a large number of agents to be collected and redeployed per experimental run, an approach is needed that provides the benefits of simulation while also reflecting the complexity of the real world. This paper focuses on the development of a vehicle-in-the-loop test environment in which a surface station simulates and transmits the data of any number of simulated agents, while a real marine platform operates based on the received information. Several experimental runs of a specific use-case test scenario, carried out in the field using the developed framework, are described and their results examined.
|
|
MoBT2 |
Room T2 |
Marine Robotics: Mechanisms |
Regular session |
Chair: Sattar, Junaed | University of Minnesota |
Co-Chair: Qian, Huihuan (Alex) | The Chinese University of Hong Kong, Shenzhen |
|
11:45-12:00, Paper MoBT2.1 | |
>Roboat II: A Novel Autonomous Surface Vessel for Urban Environments |
> Video Attachment
|
|
Wang, Wei | Massachusetts Institute of Technology |
Shan, Tixiao | Massachusetts Institute of Technology |
Leoni, Pietro | Massachusetts Institute of Technology |
Meyers, Drew | MIT |
Ratti, Carlo | Massachusetts Institute of Technology |
Rus, Daniela | MIT |
Keywords: Marine Robotics, Autonomous Vehicle Navigation, Automation Technologies for Smart Cities
Abstract: This paper presents a novel autonomous surface vessel (ASV), called Roboat II, for urban transportation. Roboat II is capable of accurate simultaneous localization and mapping (SLAM), receding-horizon tracking control and estimation, and path planning. Roboat II is designed to maximize the internal space for transport and can carry payloads several times its own weight. Moreover, it is capable of holonomic motions to facilitate transporting, docking, and inter-connectivity between boats. The proposed SLAM system receives sensor data from a 3D LiDAR, an IMU, and a GPS, and utilizes a factor graph to tackle the multi-sensor fusion problem. To cope with the complex dynamics in the water, Roboat II employs an online nonlinear model predictive controller (NMPC), where we experimentally estimated the dynamical model of the vessel in order to achieve superior tracking performance. The states of Roboat II are simultaneously estimated using a nonlinear moving horizon estimation (NMHE) algorithm. Experiments demonstrate that Roboat II is able to successfully perform online mapping and localization, plan its path, and robustly track the planned trajectory in a confined river, implying that this autonomous vessel holds promise for transporting humans and goods on many of today's waterways.
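The abstract mentions an online NMPC for trajectory tracking. The receding-horizon idea can be sketched on a toy planar unicycle (this is a minimal illustration, not the vessel model, cost, or solver used in the paper; the dynamics, horizon length, and weights below are all assumed):

```python
import numpy as np
from scipy.optimize import minimize

DT, N = 0.1, 10          # step size and horizon length (illustrative values)

def rollout(x0, u_flat):
    """Simulate a planar unicycle (x, y, heading) over the horizon."""
    x = np.array(x0, dtype=float)
    traj = []
    for v, w in u_flat.reshape(N, 2):
        x = x + DT * np.array([v * np.cos(x[2]), v * np.sin(x[2]), w])
        traj.append(x.copy())
    return np.array(traj)

def cost(u_flat, x0, goal):
    traj = rollout(x0, u_flat)
    pos_err = np.sum((traj[:, :2] - goal) ** 2)   # tracking term
    effort = 0.01 * np.sum(u_flat ** 2)           # control penalty
    return pos_err + effort

def mpc_step(x0, goal):
    """Solve one open-loop problem; apply only the first input (receding horizon)."""
    res = minimize(cost, np.zeros(2 * N), args=(x0, goal))
    return res.x[:2], res.x

x0, goal = [0.0, 0.0, 0.0], np.array([1.0, 0.4])
u_first, u_plan = mpc_step(x0, goal)
final = rollout(x0, u_plan)[-1]
print(final)  # predicted terminal state near the goal
```

In a real controller, `mpc_step` runs at every control tick with the newest state estimate (here that would come from the NMHE), and only `u_first` is sent to the thrusters.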
|
|
12:00-12:15, Paper MoBT2.2 | |
>A Two-Stage Automatic Latching System for the USVs Charging in Disturbed Berth |
> Video Attachment
|
|
Xue, Kaiwen | The Chinese University of Hong Kong, Shenzhen |
Liu, Chongfeng | The Chinese University of Hong Kong, Shenzhen |
Liu, Hengli | Peng Cheng Laboratory, Shenzhen |
Xu, Ruoyu | The Chinese University of Hong Kong, Shenzhen |
Sun, Zhenglong | Chinese University of Hong Kong, Shenzhen |
Lam, Tin Lun | The Chinese University of Hong Kong, Shenzhen |
Qian, Huihuan | Shenzhen Institute of Artificial Intelligence and Robotics for S |
Keywords: Marine Robotics, Field Robots, Intelligent Transportation Systems
Abstract: Automatic latching for charging in a disturbed environment is a persistently challenging problem for Unmanned Surface Vehicles (USVs). In this paper, we propose a two-stage automatic latching system for USV charging in berth. In Stage I, a vision-guided algorithm is developed to calculate an optimal latching position for charging. In Stage II, a novel latching mechanism is designed to compensate for movement misalignments caused by water disturbance. A set of experiments has been conducted in real-world environments. The results show that, with our proposed system, the latching success rate improves from 40% to 73.3% in the best case. Furthermore, the vision-guided algorithm provides a methodology to optimize the design radius of the latching mechanism with respect to different disturbance levels. Outdoor experiments have validated the efficiency of our proposed automatic latching system. The proposed system improves the autonomy of USVs and provides great benefits for practical applications.
|
|
12:15-12:30, Paper MoBT2.3 | |
>Variable Pitch System for the Underwater Explorer Robot UX-1 |
|
Suarez Fernandez, Ramon A. | Universidad Politecnica De Madrid |
Grande, Davide | Politecnico Di Milano |
Martins, Alfredo | INESC TEC |
Bascetta, Luca | Politecnico Di Milano |
Dominguez, Sergio | Technical University of Madrid |
Rossi, Claudio | Universidad Politecnica De Madrid |
Keywords: Marine Robotics, Field Robots, Mining Robotics
Abstract: This paper presents the results of the experimental tests performed to validate the functionality of a variable pitch system (VPS), designed for pitch attitude control of the novel underwater robotic vehicle explorer UX-1. The VPS is composed of a mass suspended from a central rod mounted across the hull. This mass is rotated around the transverse axis of the vehicle in order to change the inclination angle for navigation in vertical mine shafts. In this work, the equations of motion are first derived with a quaternion attitude representation, and are then extended to include the dynamics of the VPS. The performance of the VPS is demonstrated in real underwater experimental tests that validate the pitch angle control both independently and coupled with the heave motion control system.
|
|
12:30-12:45, Paper MoBT2.4 | |
>Design and Experiments with LoCO AUV: A Low Cost Open-Source Autonomous Underwater Vehicle |
> Video Attachment
|
|
Edge, Chelsey | University of Minnesota |
Enan, Sadman Sakib | University of Minnesota, Twin Cities |
Fulton, Michael | University of Minnesota |
Hong, Jungseok | University of Minnesota |
Mo, Jiawei | University of Minnesota, Twin Cities |
Barthelemy, Kimberly | University of Minnesota |
Bashaw, Hunter | Clarkson University |
Kallevig, Berik | University of Minnesota Twin Cities |
Knutson, Corey | University of Minnesota - Duluth |
Orpen, Kevin | University of Minnesota |
Sattar, Junaed | University of Minnesota |
Keywords: Marine Robotics, Field Robots
Abstract: In this paper we present the LoCO AUV, a Low-Cost, Open Autonomous Underwater Vehicle. LoCO is a general-purpose, single-person-deployable, vision-guided AUV, rated to a depth of 100 meters. We discuss the open and expandable design of this underwater robot, as well as the design of a simulator in Gazebo. Additionally, we explore the platform’s preliminary local motion control and state estimation abilities, which enable it to perform maneuvers autonomously. In order to demonstrate its usefulness for a variety of tasks, we implement several of our previously presented human-robot interaction capabilities on LoCO, including gestural control, diver following, and robot communication via motion. Finally, we discuss the practical concerns of deployment and our experiences in using this robot in pools, lakes, and the ocean. All design details, instructions on assembly, and code will be released under a permissive, open-source license.
|
|
MoBT3 |
Room T3 |
Marine Robotics: Perception |
Regular session |
Chair: Drews-Jr, Paulo | Federal University of Rio Grande (FURG) |
Co-Chair: Rekleitis, Ioannis | University of South Carolina |
|
11:45-12:00, Paper MoBT3.1 | |
>Semantic Segmentation of Underwater Imagery: Dataset and Benchmark |
|
Islam, Md Jahidul | University of Minnesota-Twin Cities |
Edge, Chelsey | University of Minnesota |
Xiao, Yuyang | University of Minnesota |
Luo, Peigen | University of Minnesota, Twin Cities |
Mehtaz, Muntaqim | University of Minnesota (IRV Lab) |
Morse, Christopher | University of Minnesota - Twin Cities |
Enan, Sadman Sakib | University of Minnesota, Twin Cities |
Sattar, Junaed | University of Minnesota |
Keywords: Marine Robotics, Field Robots, Object Detection, Segmentation and Categorization
Abstract: In this paper, we present the first large-scale dataset for semantic Segmentation of Underwater IMagery (SUIM). It contains over 1500 images with pixel annotations for eight object categories: fish (vertebrates), reefs (invertebrates), aquatic plants, wrecks/ruins, human divers, robots, and sea-floor. The images have been rigorously collected during oceanic explorations and human-robot collaborative experiments, and annotated by human participants. We also present a comprehensive benchmark evaluation of several state-of-the-art semantic segmentation approaches based on standard performance metrics. Additionally, we present SUIM-Net, a fully-convolutional deep residual model that balances the trade-off between performance and computational efficiency. It offers competitive performance while ensuring fast end-to-end inference, which is essential for its use in the autonomy pipeline by visually-guided underwater robots. In particular, we demonstrate its usability benefits for visual servoing, saliency prediction, and detailed scene understanding. With a variety of use cases, the proposed model and benchmark dataset open up promising opportunities for future research in underwater robot vision.
|
|
12:00-12:15, Paper MoBT3.2 | |
>DeepURL: Deep Pose Estimation Framework for Underwater Relative Localization |
> Video Attachment
|
|
Joshi, Bharat | University of South Carolina |
Modasshir, Md | University of South Carolina |
Manderson, Travis | McGill University |
Damron, Hunter | University of South Carolina |
Xanthidis, Marios | University of South Carolina |
Quattrini Li, Alberto | Dartmouth College |
Rekleitis, Ioannis | University of South Carolina |
Dudek, Gregory | McGill University |
Keywords: Field Robots, Deep Learning for Visual Perception, Localization
Abstract: In this paper, we propose a real-time deep-learning approach for determining the 6D relative pose of Autonomous Underwater Vehicles (AUVs) from a single image. A team of autonomous robots localizing themselves in a communication-constrained underwater environment is essential for many applications such as underwater exploration, mapping, multi-robot convoying, and other multi-robot tasks. Due to the profound difficulty of collecting ground truth images with accurate 6D poses underwater, this work utilizes rendered images from the Unreal Game Engine simulation for training. An image-to-image translation network is employed to bridge the gap between the rendered and the real images, producing synthetic images for training. The proposed method predicts the 6D pose of an AUV from a single image as 2D image keypoints representing the 8 corners of the 3D model of the AUV, and then the 6D pose in the camera coordinates is determined using RANSAC-based PnP. Experimental results in real-world underwater environments (swimming pool and ocean) with different cameras demonstrate the robustness and accuracy of the proposed technique, in terms of translation and orientation error, over state-of-the-art methods. The code is publicly available.
|
|
12:15-12:30, Paper MoBT3.3 | |
>Underwater Monocular Image Depth Estimation Using Single-Beam Echosounder |
> Video Attachment
|
|
Roznere, Monika | Dartmouth College |
Quattrini Li, Alberto | Dartmouth College |
Keywords: Marine Robotics, SLAM, Sensor Fusion
Abstract: This paper proposes a methodology for real-time depth estimation of underwater monocular camera images, fusing measurements from a single-beam echosounder. Our system exploits the echosounder's detection cone to match its measurements with the detected feature points from a monocular SLAM system. Such measurements are integrated into the monocular SLAM system to adjust the visible map points and the scale. We also provide a novel calibration process to determine the extrinsics between the camera and the echosounder, to ensure reliable matching. Our proposed approach is implemented within ORB-SLAM2 and evaluated in a swimming pool and in the ocean to validate the image depth estimation improvement. In addition, we demonstrate its applicability for improved underwater color correction. Overall, the proposed sensor fusion system enables inexpensive underwater robots with a monocular camera and echosounder to correct the depth estimation and scale in visual SLAM, leading to interesting future applications, such as underwater exploration and mapping.
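The core of the fusion — resolving the unknown monocular scale from one metric sonar reading — can be sketched as follows. This is a simplified illustration, not the paper's full matching and map-adjustment scheme; the cone test and "nearest point in the cone" assumption are modeling choices made here for the sketch:

```python
import numpy as np

def estimate_scale(map_points, sonar_range, cone_axis, cone_half_angle):
    """Estimate the metric scale of up-to-scale monocular-SLAM map points
    from one single-beam echosounder range reading (illustrative sketch).

    map_points: (N, 3) feature points in the sensor frame, up-to-scale.
    sonar_range: metric range reported by the echosounder.
    cone_axis: unit vector of the sonar beam in the same frame.
    cone_half_angle: beam half-angle in radians.
    """
    pts = np.asarray(map_points, float)
    d = np.linalg.norm(pts, axis=1)
    cosang = pts @ cone_axis / np.maximum(d, 1e-9)   # angle to the beam axis
    inside = cosang > np.cos(cone_half_angle)        # points in the detection cone
    if not np.any(inside):
        return None
    # the echosounder reports the range to the nearest surface in its cone
    return sonar_range / d[inside].min()

# synthetic check: points generated at true scale 2.5, then de-scaled
rng = np.random.default_rng(0)
true_pts = rng.uniform([-1.0, -1.0, 2.0], [1.0, 1.0, 4.0], size=(200, 3))
axis = np.array([0.0, 0.0, 1.0])
d_true = np.linalg.norm(true_pts, axis=1)
in_cone = (true_pts @ axis) / d_true > np.cos(0.2)
sonar_range = d_true[in_cone].min()                  # simulated metric reading
s = estimate_scale(true_pts / 2.5, sonar_range, axis, 0.2)
print(s)  # recovers the true scale factor (approximately 2.5)
```

In practice a SLAM system would apply such a scale correction jointly over many sonar readings rather than from a single one.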
|
|
12:30-12:45, Paper MoBT3.4 | |
>Matching Color Aerial Images and Underwater Sonar Images Using Deep Learning for Underwater Localization |
|
Machado dos Santos, Matheus | FURG |
Giacomo, Giovanni | FURG |
Drews-Jr, Paulo | Federal University of Rio Grande (FURG) |
Botelho, Silvia | University Federal of Rio Grande (FURG) |
Keywords: Marine Robotics, Deep Learning for Visual Perception, Aerial Systems: Perception and Autonomy
Abstract: Underwater localization is a challenging task due to the lack of a Global Positioning System (GPS). However, the capability to match georeferenced aerial images and acoustic data can help with this task. Autonomous hybrid aerial and underwater vehicles also demand a new localization method capable of combining the perception from both environments. This study proposes a cross-domain and cross-view image matching method, using a color aerial image and an underwater acoustic image to identify whether the two images were captured in the same place. The method is designed to match images acquired in partially structured environments with shared features, such as harbors and marinas. Our pipeline combines traditional image processing methods and deep neural network techniques. Real-world datasets from multiple regions are used to validate our work, obtaining a matching precision of up to 80%.
|
|
12:45-13:00, Paper MoBT3.5 | |
>ACMarker: Acoustic Camera-Based Fiducial Marker System in Underwater Environment |
> Video Attachment
|
|
Wang, Yusheng | The University of Tokyo |
Ji, Yonghoon | JAIST |
Liu, Dingyu | The University of Tokyo |
Tamura, Yusuke | Tohoku University |
Tsuchiya, Hiroshi | Wakachiku Construction Co., Ltd |
Yamashita, Atsushi | The University of Tokyo |
Asama, Hajime | The University of Tokyo |
Keywords: Marine Robotics, Computer Vision for Other Robotic Applications
Abstract: ACMarker is an acoustic camera-based fiducial marker system designed for underwater environments. Optical camera-based fiducial marker systems have been widely used in computer vision and robotics applications such as augmented reality (AR), camera calibration, and robot navigation. However, in underwater environments, the performance of optical cameras is limited owing to water turbidity and illumination conditions. Acoustic cameras, which are forward-looking sonars, have been gradually applied in underwater situations. They can acquire high-resolution images even in turbid water with poor illumination. We propose methods to recognize a simply designed marker and to estimate the relative pose between the acoustic camera and the marker. The proposed system can be applied to various underwater tasks such as object tracking and localization of unmanned underwater vehicles. Simulation and real experiments were conducted to test the recognition of such markers and pose estimation based on the markers.
|
|
MoBT4 |
Room T4 |
Marine Robotics: Planning and Control |
Regular session |
Chair: Arbanas, Barbara | University of Zagreb, Faculty of Electrical Engineering and Computing |
Co-Chair: Kaess, Michael | Carnegie Mellon University |
|
12:00-12:15, Paper MoBT4.2 | |
>Risk Vector-Based Near Miss Obstacle Avoidance for Autonomous Surface Vehicles |
> Video Attachment
|
|
Jeong, Mingi | Dartmouth College |
Quattrini Li, Alberto | Dartmouth College |
Keywords: Marine Robotics, Collision Avoidance, Autonomous Vehicle Navigation
Abstract: This paper presents a novel risk-vector-based near-miss prediction and obstacle avoidance method that can compute an efficient, dynamic, and robust action in real time. Simulation experiments, with parameters inferred from ocean experiments with our custom-made robotic boat, show flexibility and adaptability to the many obstacles present in the environment.
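A minimal sketch of the risk-vector idea: each nearby obstacle contributes a vector pointing away from it, weighted by proximity, and the vehicle's heading is the normalized sum of goal attraction and these risk vectors. The weighting and gain below are assumptions for illustration, not the paper's formulation:

```python
import numpy as np

def avoidance_heading(pos, goal, obstacles, safe_dist=5.0):
    """Pick a unit heading from an attractive goal vector plus per-obstacle
    risk vectors (illustrative; safe_dist and the weight are assumed).
    """
    pos, goal = np.asarray(pos, float), np.asarray(goal, float)
    attract = goal - pos
    attract /= np.linalg.norm(attract)
    repulse = np.zeros(2)
    for ob in np.atleast_2d(np.asarray(obstacles, float)):
        offset = pos - ob
        d = np.linalg.norm(offset)
        if d < safe_dist:
            # risk grows linearly as the obstacle gets closer
            repulse += (offset / d) * (safe_dist - d) / safe_dist
    heading = attract + 2.0 * repulse
    return heading / np.linalg.norm(heading)

# goal straight ahead, one obstacle slightly above the direct path:
h = avoidance_heading([0, 0], [10, 0], [[3, 0.5]])
print(h)  # heading deflects below the obstacle (negative y component)
```

A real implementation would also account for obstacle velocity (closest point of approach) rather than position alone, which is what makes near-miss prediction possible.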
|
|
12:15-12:30, Paper MoBT4.3 | |
>Model Identification of a Small Omnidirectional Aquatic Surface Vehicle: A Practical Implementation |
|
Groves, Keir | The University of Manchester |
Dimitrov, Marin | University of Manchester |
Peel, Harriet | University of Manchester |
Marjanovic, Ognjen | University of Manchester |
Lennox, Barry | The University of Manchester |
Keywords: Marine Robotics, Calibration and Identification, Dynamics
Abstract: This work presents a practical method of obtaining a dynamic system model for small omnidirectional aquatic vehicles. The models produced can be used to improve vehicle localisation, aid in the design or tuning of control systems, and facilitate the development of simulated environments. The use of a dynamic model for onboard real-time velocity prediction is of particular importance for aquatic vehicles because, unlike for ground vehicles, fast and direct measurement of velocity using encoders is not possible. Previous work on model identification of aquatic vehicles has focused on large vessels that are typically underactuated and have low controllability in the sway direction. In this paper it is demonstrated that the procedure for identifying the model coefficients can be performed quickly, without specialist equipment and using only onboard sensors. This is of key importance because the dynamic model coefficients change with the payload. Two different thrust allocation schemes are tested: one a known method, the other proposed here. Validation tests are performed and the models generated are shown to be suitable for their intended applications. A significant reduction in model error is demonstrated using the novel thrust allocation method, which is designed to avoid deadbands in the thruster responses.
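For an omnidirectional vehicle, a baseline thrust allocation maps a commanded body wrench (surge, sway, yaw moment) to thruster forces through the pseudo-inverse of the configuration matrix. The X-layout geometry below is an assumption for illustration, not the vehicle or the deadband-avoiding scheme of the paper:

```python
import numpy as np

# Illustrative X-configuration: four fixed thrusters at the hull corners,
# oriented so that surge, sway and yaw are all controllable.
L = 0.3                                           # lever arm [m] (assumed)
angles = np.deg2rad([45.0, 135.0, 225.0, 315.0])  # thrust directions
xs = np.array([ L,  L, -L, -L])                   # thruster x positions
ys = np.array([-L,  L,  L, -L])                   # thruster y positions

# Rows of B map individual thruster forces to (surge, sway, yaw moment).
B = np.vstack([np.cos(angles),
               np.sin(angles),
               xs * np.sin(angles) - ys * np.cos(angles)])

def allocate(tau):
    """Minimum-norm thrust allocation u such that B @ u = tau."""
    return np.linalg.pinv(B) @ tau

tau = np.array([1.0, 0.5, 0.2])   # commanded surge [N], sway [N], yaw [N·m]
u = allocate(tau)
print(np.allclose(B @ u, tau))    # True: the commanded wrench is reproduced
```

The pseudo-inverse solution is minimum-norm but deadband-agnostic: it happily commands tiny thruster forces that a real thruster cannot produce, which is exactly the problem the paper's proposed allocation scheme is designed to avoid.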
|
|
12:30-12:45, Paper MoBT4.4 | |
>Towards Micro Robot Hydrobatics: Vision-Based Guidance, Navigation, and Control for Agile Underwater Vehicles in Confined Environments |
> Video Attachment
|
|
Duecker, Daniel Andre | Hamburg University of Technology |
Bauschmann, Nathalie | Hamburg University of Technology |
Hansen, Tim | Technical University of Hamburg |
Kreuzer, Edwin | Hamburg University of Technology |
Seifried, Robert | Hamburg University of Technology |
Keywords: Marine Robotics, Field Robots, Robotics in Hazardous Fields
Abstract: Despite recent progress, guidance, navigation, and control (GNC) remain largely unsolved for agile micro autonomous underwater vehicles (micro AUVs). Robust and accurate self-localization systems that fit micro AUVs play a key role here, and their absence is thus a severe bottleneck in micro underwater robotics research. In this work we present, first, a small-size, low-cost, high-performance vision-based self-localization module which removes this bottleneck even for the requirements of highly agile robot platforms. Second, we present its integration into a powerful GNC framework which allows the deployment of micro AUVs in fully autonomous missions. Finally, we critically evaluate the performance of the localization system and the GNC framework in two experimental scenarios.
|
|
12:45-13:00, Paper MoBT4.5 | |
>Coverage Path Planning with Track Spacing Adaptation for Autonomous Underwater Vehicles |
> Video Attachment
|
|
Yordanova, Veronika | CMRE |
Gips, Bart | Nato Sto Cmre |
Keywords: Marine Robotics, Motion and Path Planning, Robotics in Hazardous Fields
Abstract: In this paper we address the mine countermeasures (MCM) search problem for an autonomous underwater vehicle (AUV) surveying the seabed using a side-looking sonar. We propose a coverage path planning method that adapts the AUV track spacing with the objective of collecting better data. We achieve this by shifting the coverage overlap at the tail of the sensor range where the lowest data quality is expected. To assess the algorithm, we collected data from three at-sea experiments. The adaptive survey allowed the AUV to recover from a situation where the sensor range was overestimated and resulted in reducing area coverage gaps. In another experiment, the adaptive survey showed a 4.2% improvement in data quality for nearly 30% of the 'worst' data.
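The geometric core of coverage planning with a side-looking sonar can be sketched as track spacing chosen so that adjacent swaths overlap at the tail of the sensor range, where data quality is lowest. The values and the simple uniform-spacing rule below are illustrative, not the paper's adaptive algorithm:

```python
def track_lines(area_width, sensor_range, tail_overlap):
    """Place parallel survey tracks for a side-looking sonar (sketch).

    Each track images a swath out to `sensor_range` on either side.
    Spacing is chosen so that adjacent swaths overlap by `tail_overlap`
    at the far end of the range, shifting the redundancy to where the
    data quality is expected to be lowest.
    """
    spacing = 2.0 * sensor_range - tail_overlap
    tracks, y = [], sensor_range - tail_overlap / 2.0
    while y - sensor_range < area_width:   # stop once the strip is covered
        tracks.append(y)
        y += spacing
    return spacing, tracks

spacing, tracks = track_lines(area_width=100.0, sensor_range=25.0,
                              tail_overlap=5.0)
print(spacing, tracks)  # 45.0 [22.5, 67.5, 112.5]
```

The adaptive element of the paper enters when `sensor_range` is re-estimated in situ: if the range was overestimated, the spacing shrinks on subsequent tracks so that coverage gaps are recovered rather than propagated.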
|
|
13:00-13:15, Paper MoBT4.6 | |
>Dynamic Median Consensus for Marine Multi-Robot Systems Using Acoustic Communication |
|
Vasiljevic, Goran | Faculty of Electrical Engineering and Computing, Zagreb, Croatia |
Petrovic, Tamara | Univ. of Zagreb |
Arbanas, Barbara | University of Zagreb, Faculty of Electrical Engineering and Comp |
Bogdan, Stjepan | University of Zagreb |
Keywords: Marine Robotics, Multi-Robot Systems, Autonomous Agents
Abstract: In this paper, we present a dynamic median consensus protocol for multi-agent systems using acoustic communication. The motivating target scenario is a multi-agent system consisting of underwater robots acting as intelligent sensors, applied to continuous monitoring of the state of a marine environment. The proposed protocol allows each agent to track the median value of the individual measurements of all agents through local communication with neighbouring agents. The median is chosen as a measure robust to outliers, as opposed to the average value, which is usually used. In contrast to existing consensus protocols, the proposed protocol is dynamic, uses a switching communication topology, and converges to the median of the measured signals. Stability and correctness of the protocol are theoretically proven. The protocol is tested in simulation, and its accuracy and the influence of protocol parameters on the system output are analyzed. The protocol is implemented and validated by a set of experiments on an underwater group of robots comprising aMussel units. This experimental setup is one of the first deployments of any type of consensus protocol in an underwater setting. Both simulation and experimental results confirm the correctness of the presented approach.
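The motivation for tracking the median rather than the mean is outlier robustness, which a tiny numeric example makes concrete (this illustrates only the choice of statistic, not the distributed consensus dynamics of the paper):

```python
import numpy as np

# Measurements from five sensor nodes; one node is faulty.
readings = np.array([2.9, 3.0, 3.1, 3.0, 100.0])   # e.g. temperature [degC]

print(np.mean(readings))    # 22.4 -- dragged far off by the single outlier
print(np.median(readings))  # 3.0  -- unaffected by the faulty node
```

In the distributed setting each agent must reach this median value using only local exchanges with its neighbours, which is what the paper's protocol provides, along with a proof of convergence under switching topologies.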
|
|
MoBT5 |
Room T5 |
Space Robotics: Control |
Regular session |
Chair: McBryan, Katherine | US Naval Research Laboratory |
Co-Chair: Papadopoulos, Evangelos | National Technical University of Athens |
|
11:45-12:00, Paper MoBT5.1 | |
>On Parameter Estimation of Flexible Space Manipulator Systems |
|
Christidi-Loumpasefski, Olga-Orsalia | National Technical University of Athens |
Nanos, Kostas | National Technical University of Athens |
Papadopoulos, Evangelos | National Technical University of Athens |
Keywords: Space Robotics and Automation, Flexible Robots, Calibration and Identification
Abstract: Space manipulator systems on orbit are subject to link flexibilities since they are designed to be lightweight and long-reaching. Often, their joints are driven by harmonic gear-motor units, which introduce joint flexibility. Both of these types of flexibility may cause structural vibrations. To improve endpoint tracking, advanced control strategies that benefit from knowledge of the system parameters, including those describing link and joint flexibilities, are required. In this paper, first, the equations of motion of space manipulator systems whose manipulators are subject to both link and joint flexibilities are derived. Then, a parameter estimation method is developed, based on the energy balance during the motion of a flexible space manipulator. The method estimates all system parameters, including those that describe both link and joint flexibilities, and can reconstruct the system's full dynamics as required for the application of advanced control strategies. The method, developed for spatial systems, is illustrated by a planar example.
|
|
12:00-12:15, Paper MoBT5.2 | |
>Comparison between Stationary and Crawling Multi-Arm Robotics for In-Space Assembly |
|
McBryan, Katherine | US Naval Research Laboratory |
Keywords: Space Robotics and Automation, Assembly, Dual Arm Manipulation
Abstract: In-space assembly (ISA) is the next step to building larger and more permanent structures in orbit. The use of a robotic in-space assembler will save on costly and potentially risky EVAs. Determining the best robot for ISA is difficult, as it depends on the structure being assembled. A comparison between two categories of robots is presented: a stationary robot and a robot which crawls along the truss. The estimated mass, energy, and time are presented for each system as it builds, in simulation, a desired truss system. There are trade-offs to every robot design, and understanding those trade-offs is essential to building a system that is not only efficient but also cost-effective.
|
|
12:15-12:30, Paper MoBT5.3 | |
>Interactive Planning and Supervised Execution for High-Risk, High-Latency Teleoperation |
> Video Attachment
|
|
Pryor, Will | Johns Hopkins University |
Vagvolgyi, Balazs | Johns Hopkins University |
Deguet, Anton | Johns Hopkins University |
Leonard, Simon | The Johns Hopkins University |
Whitcomb, Louis | The Johns Hopkins University |
Kazanzides, Peter | Johns Hopkins University |
Keywords: Telerobotics and Teleoperation, Virtual Reality and Interfaces, Space Robotics and Automation
Abstract: Ground-based teleoperation of robot manipulators for on-orbit servicing of spacecraft represents an example of high-payoff, high-risk operations that are challenging to perform due to high latency communications, with telemetry time delays of several seconds. In these scenarios, confidence of operating without failure is paramount. We report the development of an Interactive Planning and Supervised Execution (IPSE) system that takes advantage of accurate 3D reconstruction of the remote environment to enable operators to plan motions in the virtual world, evaluate and adjust the plan, and then supervise execution with the ability to pause and return to the planning environment at any time. We report the results of an experimental evaluation of a representative on-orbit telerobotic servicing task from NASA's upcoming OSAM-1 mission to refuel a satellite in low earth orbit; specifically, to change the robot tool to acquire the fuel supply line and then to insert it into the satellite fill/drain valve. Results of a pilot study show that the operators preferred, and were more successful with, the IPSE system when compared to a conventional teleoperation implementation.
|
|
12:30-12:45, Paper MoBT5.4 | |
>Parameter Identification for an Uncooperative Captured Satellite with Spinning Reaction Wheels |
|
Christidi-Loumpasefski, Olga-Orsalia | National Technical University of Athens |
Papadopoulos, Evangelos | National Technical University of Athens |
Keywords: Space Robotics and Automation, Calibration and Identification
Abstract: A novel identification method is developed which estimates the accumulated angular momentum (AAM) of the spinning reaction wheels (RWs) of an uncooperative satellite captured by a robotic servicer. In contrast to other methods that treat the captured satellite's RWs as non-spinning, the developed method simultaneously provides accurate estimates of the AAM of the captured satellite's RWs and of the inertial parameters of the entire system consisting of the robotic servicer and the captured satellite. These estimates render the system's free-floating dynamics fully identified and available for model-based control. Three-dimensional simulations demonstrate the method's validity. To show its usefulness, the performance of a model-based controller is evaluated with and without knowledge of the AAM of the captured satellite's RWs.
|
|
12:45-13:00, Paper MoBT5.5 | |
>Tumbling and Hopping Locomotion Control for a Minor Body Exploration Robot |
> Video Attachment
|
|
Kobashi, Keita | Tohoku University |
Bando, Ayumu | Tohoku University |
Nagaoka, Kenji | Tohoku University |
Yoshida, Kazuya | Tohoku University |
Keywords: Space Robotics and Automation, Contact Modeling, Simulation and Animation
Abstract: This paper presents the modeling and analysis of a novel moving mechanism, "tumbling", for asteroid exploration. The system actuation is provided by an internal motor and torque wheel; elastic spring-mounted spikes are attached to the perimeter of a circular-shaped robot, protruding normal to the surface and distributed uniformly. Compared with conventional motion mechanisms, this simple layout enhances the capability of the robot to traverse a diverse microgravity environment. Technical challenges involved in conventional moving mechanisms, such as uncertainty of moving direction and inability to traverse uneven asteroid surfaces, can now be solved. A tumbling locomotion approach demonstrates two beneficial characteristics in this environment. First, tumbling locomotion maintains contact between the rover spikes and the ground. This enables the robot to continually apply control adjustments to realize precise and controlled motion. Second, owing to the nature of the mechanical interaction of the spikes and potential uneven surface protrusions, the robot can traverse uneven surfaces. In this paper, we present the dynamics modeling of the robot and analyze the motion of the robot experimentally and via numerical simulations. The results of this study help establish a moving strategy to approach the desired locations on asteroid surfaces.
|
|
13:00-13:15, Paper MoBT5.6 | |
>Inertia-Decoupled Equations for Hardware-In-The-Loop Simulation of an Orbital Robot with External Forces |
> Video Attachment
|
|
Mishra, Hrishik | German Aerospace Center (DLR) |
Giordano, Alessandro Massimo | DLR (German Aerospace Center) |
De Stefano, Marco | German Aerospace Center (DLR) |
Lampariello, Roberto | German Aerospace Center (DLR) |
Ott, Christian | German Aerospace Center (DLR) |
Keywords: Space Robotics and Automation, Simulation and Animation, Compliance and Impedance Control
Abstract: In this paper, we propose three novel Hardware-in-the-loop simulation (HLS) methods for a fully-actuated orbital robot in the presence of external interactions using On-Ground Facility Manipulators (OGFM). In particular, a fixed-base and a vehicle-driven manipulator are considered in the analyses. The key idea is to describe the orbital robot's dynamics using the Lagrange-Poincare (LP) equations, which reveal a block-diagonalized inertia. The resulting advantage is that noisy joint acceleration/torque measurements are avoided in the computation of the spacecraft motion due to manipulator interaction even while considering external forces. The proposed methods are a consequence of two facilitating theorems, which are proved herein. These theorems result in two actuation maps between the simulated orbital robot and the physical OGFM. The chief advantage of the proposed methods is physical consistency without level-set assumptions on the momentum map. We validate this through experiments on both types of OGFM in the presence of external forces. Finally, the effectiveness of our approach is validated through a HLS of a fully-actuated orbital robot while interacting with the environment.
|
|
MoBT6 |
Room T6 |
Space Robotics: Perception |
Regular session |
Chair: Triebel, Rudolph | German Aerospace Center (DLR) |
Co-Chair: Leonard, Simon | The Johns Hopkins University |
|
11:45-12:00, Paper MoBT6.1 | |
>A Target Tracking and Positioning Framework for Video Satellites Based on SLAM |
|
Zhao, Xuhui | Wuhan University |
Gao, Zhi | Temasek Laboratories @ NUS |
Zhang, Yongjun | Wuhan University |
Chen, Ben M. | Chinese University of Hong Kong |
Keywords: Space Robotics and Automation, SLAM, Visual Tracking
Abstract: With the booming development of aerospace technology, the video satellite has gradually emerged as a new Earth observation method, which observes live phenomena on the ground by video shooting and opens a “dynamic” era of remote sensing. Thus, some new techniques are needed, especially near-real-time tracking and positioning algorithms for ground moving targets. However, many studies only extract pixel-level trajectories in the post-processed video product, resulting in fairly limited applications. We regard the video satellite as a robot flying in space and adopt the SLAM framework for the positioning of ground moving targets. We design our framework based on the representative ORB-SLAM and make improvements mainly in feature extraction, satellite pose estimation, moving target tracking, and positioning. We install GPS-RTK (Real-Time Kinematic) devices on a fishing boat to measure its ground truth and use the Zhuhai-1 video satellite to observe it simultaneously. We conduct experiments on this video and demonstrate that our framework can provide the geolocation of the moving target in satellite videos.
|
|
12:00-12:15, Paper MoBT6.2 | |
>Gaussian Process Gradient Maps for Loop-Closure Detection in Unstructured Planetary Environments |
|
Le Gentil, Cedric | University of Technology Sydney |
Vayugundla, Mallikarjuna | DLR (German Aerospace Center) |
Giubilato, Riccardo | German Aerospace Center (DLR) |
Stuerzl, Wolfgang | DLR, Institute of Robotics and Mechatronics |
Vidal-Calleja, Teresa A. | University of Technology Sydney |
Triebel, Rudolph | German Aerospace Center (DLR) |
Keywords: Space Robotics and Automation, Mapping, SLAM
Abstract: The ability to recognize previously mapped locations is an essential feature for autonomous systems. Unstructured planetary-like environments pose a major challenge to these systems due to the similarity of the terrain. As a result, the ambiguity of the visual appearance makes state-of-the-art visual place recognition approaches less effective than in urban or man-made environments. This paper presents a method to solve the loop closure problem using only spatial information. The key idea is to use a novel continuous and probabilistic representation of terrain elevation maps. Given 3D point clouds of the environment, the proposed approach exploits Gaussian Process (GP) regression with linear operators to generate continuous gradient maps of the terrain elevation information. Traditional image registration techniques are then used to search for potential matches. Loop closures are verified by leveraging both the spatial characteristics of the elevation maps (SE(2) registration) and the probabilistic nature of the GP representation. A submap-based localization and mapping framework is used to demonstrate the validity of the proposed approach. The performance of this pipeline is evaluated and benchmarked using real data from a rover that is equipped with a stereo camera and navigates in challenging, unstructured planetary-like environments in Morocco and on Mt. Etna.
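Because differentiation is a linear operator, the GP posterior mean over elevation can be differentiated in closed form to yield a continuous gradient map, which is the core idea this abstract names. A minimal sketch (not the paper's implementation; the squared-exponential kernel, the hyperparameters, and the synthetic plane are assumptions):

```python
import numpy as np

def rbf(X1, X2, ell=0.5, sig2=1.0):
    """Squared-exponential kernel k(x, x') = sig2 * exp(-|x - x'|^2 / (2 ell^2))."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return sig2 * np.exp(-0.5 * d2 / ell ** 2)

def gp_gradient(X, z, Xq, ell=0.5, sig2=1.0, noise=1e-6):
    """Posterior mean of the elevation gradient at query points Xq.

    The GP posterior mean is m(x*) = k(x*, X) K^-1 z; differentiating
    w.r.t. x* gives grad m(x*) = dk/dx* K^-1 z, where for the SE kernel
    dk/dx* = -(x* - x) / ell^2 * k(x*, x).
    """
    K = rbf(X, X, ell, sig2) + noise * np.eye(len(X))
    alpha = np.linalg.solve(K, z)                  # K^-1 z
    Kq = rbf(Xq, X, ell, sig2)                     # (Q, N)
    diff = Xq[:, None, :] - X[None, :, :]          # (Q, N, 2)
    dK = -diff / ell ** 2 * Kq[:, :, None]         # (Q, N, 2)
    return np.einsum('qnd,n->qd', dK, alpha)       # (Q, 2) gradient vectors

# Toy check: elevation samples from the plane z = 2x + 3y; the recovered
# gradient at an interior point should be close to (2, 3).
X = np.array([[i * 0.25, j * 0.25] for i in range(9) for j in range(9)])
z = 2 * X[:, 0] + 3 * X[:, 1]
g = gp_gradient(X, z, np.array([[1.0, 1.0]]))
```

The SE(2) registration between two such gradient maps, and the probabilistic verification, are omitted here; this shows only the map construction step.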
|
|
12:15-12:30, Paper MoBT6.3 | |
>Visual Monitoring and Servoing of a Cutting Blade During Telerobotic Satellite Servicing |
> Video Attachment
|
|
Mahmood, Amama | Johns Hopkins University |
Vagvolgyi, Balazs | Johns Hopkins University |
Pryor, Will | Johns Hopkins University |
Whitcomb, Louis | The Johns Hopkins University |
Kazanzides, Peter | Johns Hopkins University |
Leonard, Simon | The Johns Hopkins University |
Keywords: Space Robotics and Automation, Force Control, Visual Servoing
Abstract: We propose a system for visually monitoring and servoing the cutting of a multi-layer insulation (MLI) blanket that covers the envelope of satellites and spacecraft. The main contributions of this paper are: 1) to propose a model for relating visual features describing the engagement depth of the blade to the force exerted on the MLI blanket by the cutting tool, 2) a blade design and algorithm to reliably detect the engagement depth of the blade inside the MLI, and 3) a servoing mechanism to achieve the desired applied force by monitoring the engagement depth. We present results that validate these contributions by comparing forces estimated from visual feedback to measured forces at the blade. We also demonstrate the robustness of the blade design and vision processing under challenging conditions.
|
|
12:30-12:45, Paper MoBT6.4 | |
>Terrain-Aware Path Planning and Map Update for Mars Sample Return Mission |
> Video Attachment
|
|
Hedrick, Gabrielle | West Virginia University |
Ohi, Nicholas | West Virginia University |
Gu, Yu | West Virginia University |
Keywords: Space Robotics and Automation, Robotics in Hazardous Fields, Autonomous Vehicle Navigation
Abstract: This work aims at developing an efficient path planning algorithm for the driving objective of a Martian day (sol) that can take terrain information into account, for application to the proposed Mars Sample Return (MSR) mission. To prepare the planning process for one sol (i.e., with a limited time allocated to driving), a map of expected rover velocity over a chosen area is constructed, obtained by combining terrain classes, rock abundance, and slope at each location. The planning phase starts offline by computing several paths that can be traversed in one sol (i.e., a few hours), which will later provide suitable options to the rover if replanning is necessary due to unexpected mobility difficulties. Online, the rover gains information about its environment as it drives (via slip monitoring and/or instrument deployment) and updates the map if major discrepancies are found. If an update is made, the remaining driving time along the different options is recalculated and the most efficient path is chosen. The online process is repeated until the rover has reached its daily goal. When simulated on different maps of expected rover speed at Gusev Crater, Mars, the algorithm correctly captured changes of terrain initially not mapped, and rerouted the rover to a more efficient path only when necessary, in which case it effectively complied with the time constraint to reach the goal.
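The online replanning loop described above (recompute remaining drive time over the precomputed candidate paths whenever the speed map is corrected) can be sketched on a toy grid. The cell size, speeds, and candidate paths below are invented for illustration, not the mission's values:

```python
CELL_LEN = 10.0  # metres per map cell (assumed value)

def drive_time(path, speed_map):
    """Time to traverse a path, given the expected speed (m/s) in each cell."""
    return sum(CELL_LEN / speed_map[cell] for cell in path)

def replan(paths, speed_map):
    """Among precomputed candidate paths, pick the fastest under the current map."""
    return min(paths, key=lambda p: drive_time(p, speed_map))

speed_map = {(0, 0): 0.5, (0, 1): 0.5, (1, 0): 0.3, (1, 1): 0.3}
paths = [[(0, 0), (0, 1)], [(1, 0), (1, 1)]]

best_before = replan(paths, speed_map)   # offline, the row-0 path looks faster

# Online: slip monitoring reveals the first row-0 cell is far slower than
# mapped, so the map is updated and the candidate options are re-evaluated.
speed_map[(0, 0)] = 0.05
best_after = replan(paths, speed_map)    # the rover reroutes via row 1
```

The real planner constructs the candidates from the velocity map itself and only reroutes when the time budget is threatened; this sketch shows just the map-update-then-recompare step.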
|
|
12:45-13:00, Paper MoBT6.5 | |
>Virtual IR Sensing for Planetary Rovers: Improved Terrain Classification and Thermal Inertia Estimation |
|
Iwashita, Yumi | NASA / Caltech Jet Propulsion Laboratory |
Nakashima, Kazuto | Kyushu University |
Gatto, Joseph | Columbia University |
Higa, Shoya | Jet Propulsion Laboratory |
Stoica, Adrian | NASA/JPL |
Khoo, Norris | NASA Jet Propulsion Laboratory |
Kurazume, Ryo | Kyushu University |
Keywords: Space Robotics and Automation, Multi-Modal Perception
Abstract: Terrain classification is critically important for Mars rovers, which rely on it for planning and autonomous navigation. On-board terrain classification using visual information has limitations and is sensitive to illumination conditions. Classification can be improved if one fuses visual imagery with additional infrared (IR) imagery of the scene, yet unfortunately there are no IR image sensors on the current Mars rovers. A virtual IR sensor, estimating IR from RGB imagery using deep learning, was previously proposed in the context of an MU-Net architecture. However, virtual IR estimation was limited by the fact that slope angle variations induce temperature differences within the same terrain. This paper removes this limitation by additionally including the angle from the surface normal to the Sun and a measurement of solar radiation, yielding good IR estimates and, as a consequence, improved terrain classification. The estimates are also useful for estimating thermal inertia, which can enhance slip prediction and small rock density estimation. Our approach is demonstrated in two applications: we collected a new dataset to verify the effectiveness of the proposed approach and show its benefit by applying it to both.
|
|
MoBT7 |
Room T7 |
Space Robotics: Systems |
Regular session |
Chair: Komendera, Erik | Virginia Polytechnic Institute and State University |
Co-Chair: Kubota, Takashi | JAXA ISAS |
|
11:45-12:00, Paper MoBT7.1 | |
>Subsurface Sampling Robot for Time-Limited Asteroid Exploration |
> Video Attachment
|
|
Kato, Hiroki | Japan Aerospace Exploration Agency |
Satou, Yasutaka | JAXA |
Yoshikawa, Kent | JAXA |
Otsuki, Masatsugu | Japan Aerospace Exploration Agency |
Sawada, Hirotaka | JAXA |
Kuratoi, Takeshi | WEL Research |
Hidaka, Nana | WEL Research |
Keywords: Space Robotics and Automation, Field Robots
Abstract: This paper presents a novel approach to sampling subsurface asteroidal regolith under severe time constraints. Sampling operations that must be completed within a few hours require techniques that can manage subsurface obstructions that may be encountered. The large uncertainties due to our lack of knowledge of regolith properties also make sampling difficult. To aid in managing these challenges, machine learning-based detection methods using tactile feedback can detect the presence of rocks deeper than the length of the probe, ensuring reliable sampling in unobstructed areas. In addition, given the variability of soil hardness and the short time available, a corer shooting mechanism has been developed that uses a special shape-memory alloy to collect regolith in about a minute. Experiments on subsurface obstacle detection and shooting-corer ejection tests were conducted to demonstrate the functionality of this approach.
|
|
12:00-12:15, Paper MoBT7.2 | |
>Robots Made from Ice: An Analysis of Manufacturing Techniques |
> Video Attachment
|
|
Carroll, Devin | University of Pennsylvania |
Yim, Mark | University of Pennsylvania |
Keywords: Space Robotics and Automation, Product Design, Development and Prototyping, Wheeled Robots
Abstract: Modular robotic systems with self-repair or self-replication capabilities have been presented as a robust, low-cost solution for extraterrestrial or Arctic exploration. This paper explores using ice as the sole structural element to build robots. Ice allows for increased flexibility in the system design, enabling the robotic structure to be designed and built post-deployment, after tasks and terrain obstacles have been better identified and analyzed. However, ice presents many difficulties in manufacturing. The authors explore a structure-driven approach to examine compatible manufacturing processes, with an emphasis on conserving process energies. The energy analysis shows that the optimal manufacturing technique depends on the volume of the final part relative to the volume of material that must be removed. Based on experiments, three general design principles are presented. A mobile robotic platform made from ice is presented as a proof of concept and first demonstration.
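The volume trade-off the abstract states can be captured by a toy energy model: an additive process spends energy roughly in proportion to the part volume it deposits, while a subtractive one spends it in proportion to the volume it removes. The per-volume energy constants below are placeholders, not the paper's measurements:

```python
E_ADD_PER_CM3 = 1.2  # assumed J/cm^3 to deposit and freeze material
E_SUB_PER_CM3 = 0.4  # assumed J/cm^3 to cut or melt material away

def best_process(part_volume, stock_volume):
    """Pick the lower-energy manufacturing technique for an ice part."""
    removed = stock_volume - part_volume
    e_add = E_ADD_PER_CM3 * part_volume   # build the part up from nothing
    e_sub = E_SUB_PER_CM3 * removed       # carve the part out of stock
    return ('additive', e_add) if e_add < e_sub else ('subtractive', e_sub)

# A small part carved from a large block favours additive manufacturing;
# a part nearly as large as its stock favours subtractive manufacturing.
```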
|
|
12:15-12:30, Paper MoBT7.3 | |
>Autonomous Navigation Over Europa Analogue Terrain for an Actively Articulated Wheel-On-Limb Rover |
> Video Attachment
|
|
Reid, William | Jet Propulsion Laboratory |
Paton, Michael | Jet Propulsion Laboratory |
Karumanchi, Sisir | Jet Propulsion Lab, Caltech |
Emanuel, Blair | Jet Propulsion Laboratory |
Chamberlain-Simon, Brendan | Jet Propulsion Laboratory |
Meirion-Griffith, Gareth | Jet Propulsion Laboratory |
Keywords: Space Robotics and Automation, Field Robots, Whole-Body Motion Planning and Control
Abstract: The ocean world Europa is a prime target for exploration given its potential habitability. We propose a mobile platform that is capable of autonomously traversing tens of meters to visit multiple sites of interest on a Europan analogue surface. Because the topology of Europan terrain is largely unknown, this mobility system must be able to traverse a large variety of terrain types. The mobility system should also be capable of crossing unstructured terrain autonomously, given the communication limitations between Earth and Europa. A wheel-on-limb robotic rover is presented that can actively conform to terrain features up to 1.5 wheel diameters tall while driving. The robot uses a sampling-based motion planner to generate paths that leverage its unique locomotive capabilities. The planner treats terrain hazards and wheel workspace limits as obstacles. It may also select a mobility mode based on predicted energy usage and the need for limb articulation on the terrain being traversed. This autonomous mobility was evaluated on chaotic salt-evaporite terrain found in Death Valley, CA, an analogue of the Europan surface. Over the course of 38 trials, the rover autonomously traversed 435 m of extreme terrain while maintaining a rate of 0.64 traverse-ending failures for every 10 m driven.
|
|
12:30-12:45, Paper MoBT7.4 | |
>Autonomous Multi-Robot Assembly of Solar Array Modules: Experimental Analysis and Insights |
|
Everson, Holly | Virginia Polytechnic Institute and State University |
Moser, Joshua | Virginia Polytechnic Institute and State University |
Quartaro, Amy | Virginia Polytechnic Institute and State University |
Glassner, Samantha | Virginia Tech |
Komendera, Erik | Virginia Polytechnic Institute and State University |
Keywords: Space Robotics and Automation, Cooperating Robots, Robotics in Construction
Abstract: To enable the construction of large space structures supporting future space endeavors, autonomous robotic solutions can reduce the cost and risk of human extravehicular activity (EVA). Practical autonomous assembly requires both theoretical and algorithmic advances and hardware experimentation across a spectrum of technology readiness levels. Analysis of hardware experiments provides novel insights not readily apparent in simulation alone, which serves to inform future developments. This paper describes the analysis and insights gained from an autonomous assembly experiment consisting of a dexterous manipulator, a gross-positioning serial arm, and a 1 degree-of-freedom (DOF) turntable that together facilitate the assembly and deployment of a solar array mockup. This experiment combined state estimation in an uncertain environment with contact-heavy robot operations such as grasping, self-reconfiguring, joining, and deploying. The insights gained are presented here due to their applicability to other field-based manipulation tasks performed by teams of robots.
|
|
12:45-13:00, Paper MoBT7.5 | |
>The ARCHES Space-Analogue Demonstration Mission: Towards Heterogeneous Teams of Autonomous Robots for Collaborative Scientific Sampling in Planetary Exploration |
> Video Attachment
|
|
Schuster, Martin J. | German Aerospace Center (DLR) |
Müller, Marcus Gerhard | German Aerospace Center |
Brunner, Sebastian Georg | DLR German Aerospace Center, Robotics and Mechatronics Center |
Lehner, Hannah | German Aerospace Center (DLR) |
Lehner, Peter | German Aerospace Center (DLR) |
Sakagami, Ryo | German Aerospace Center (DLR) |
Dömel, Andreas | German Aerospace Center (DLR) |
Meyer, Lukas | German Aerospace Center (DLR) |
Vodermayer, Bernhard | German Aerospace Center (DLR) |
Giubilato, Riccardo | German Aerospace Center (DLR) |
Vayugundla, Mallikarjuna | DLR (German Aerospace Center) |
Reill, Joseph | German Aerospace Center (DLR) |
Steidle, Florian | German Aerospace Center |
von Bargen, Ingo | German Aerospace Center (DLR) |
Bussmann, Kristin | German Aerospace Center (DLR) |
Belder, Rico | German Aerospace Center |
Lutz, Philipp | German Aerospace Center (DLR) |
Stuerzl, Wolfgang | DLR, Institute of Robotics and Mechatronics |
Smisek, Michal | German Aerospace Center (DLR) |
Maier, Moritz | German Aerospace Center (DLR) |
Stoneman, Samantha | DLR (German Space Center) |
Fonseca Prince, Andre | German Aerospace Center (DLR) |
Rebele, Bernhard | German Aerospace Center (DLR) |
Durner, Maximilian | German Aerospace Center DLR |
Staudinger, Emanuel | DLR |
Zhang, Siwei | German Aerospace Center (DLR) |
Pöhlmann, Robert | German Aerospace Center (DLR) |
Bischoff, Esther | Karlsruhe Institute of Technology (KIT) |
Braun, Christian | Karlsruhe Institute of Technology (KIT) |
Schröder, Susanne | German Aerospace Center (DLR) |
Dietz, Enrico | German Aerospace Center (DLR) |
Frohmann, Sven | German Aerospace Center (DLR) |
Börner, Anko | DLR |
Hübers, Heinz-Wilhelm | German Aerospace Center (DLR) |
Foing, Bernard | European Space Agency (ESA) |
Triebel, Rudolph | German Aerospace Center (DLR) |
Albu-Schäffer, Alin | DLR - German Aerospace Center |
Wedler, Armin | DLR - German Aerospace Center |
Keywords: Space Robotics and Automation, Multi-Robot Systems, Autonomous Agents
Abstract: Teams of mobile robots will play a crucial role in future missions to explore the surfaces of extraterrestrial bodies. Setting up infrastructure and taking scientific samples are expensive tasks when operating in distant, challenging, and unknown environments. In contrast to current single-robot space missions, future heterogeneous robotic teams will increase efficiency via enhanced autonomy and parallelization, improve robustness via functional redundancy, as well as benefit from complementary capabilities of the individual robots. In this article, we present our heterogeneous robotic team, consisting of flying and driving robots that we plan to deploy on scientific sampling demonstration missions at a Moon-analogue site on Mt. Etna, Sicily, Italy in 2021 as part of the ARCHES project. We describe the robots' individual capabilities and their roles in two mission scenarios. We then present components and experiments on important tasks therein: automated task planning, high-level mission control, spectral rock analysis, radio-based localization, collaborative multi-robot 6D SLAM in Moon-analogue and Mars-like scenarios, and demonstrations of autonomous sample return.
|
|
13:00-13:15, Paper MoBT7.6 | |
>A Routing Framework for Heterogeneous Multi-Robot Teams in Exploration Tasks |
> Video Attachment
|
|
Sakamoto, Takuma | The University of Tokyo |
Bonardi, Stephane | Institute of Space and Astronautical Science (ISAS), Japan Aeros |
Kubota, Takashi | JAXA ISAS |
Keywords: Space Robotics and Automation, Path Planning for Multiple Mobile Robots or Agents, Motion and Path Planning
Abstract: This paper proposes a routing framework for heterogeneous multi-robot teams in exploration tasks. The proposed framework deals with a combinatorial optimization problem and provides a new solving algorithm for the Generalized Team Orienteering Problem (GTOP). A route optimization problem is formulated for a heterogeneous multi-robot system, and a novel problem solver based on a self-organizing map is proposed. The proposed framework has a strong advantage in scalability because its processing time is independent of the number of robots and the heterogeneity of the team. The validity of the proposed framework is evaluated on exploration and mapping tasks performed by a heterogeneous robot team with overlapping abilities. The simulation results show the effectiveness of the proposed framework and how it outperforms a conventional greedy exploration scheme.
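The abstract names a self-organizing-map solver without detailing it. As a rough illustration of the underlying mechanism, the classic Kohonen-ring approach to routing pulls a ring of neurons toward target sites with a winner-plus-neighbourhood update, and the final ring ordering yields a visiting route. Everything below (targets, ring size, learning schedule) is an assumption, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
targets = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
ring = rng.random((20, 2))           # ring of neurons = candidate route

def train(ring, targets, iters=200, lr=0.8, radius=3.0):
    """Kohonen update: pull the winner and its ring neighbours toward each target."""
    ring = ring.copy()
    n = len(ring)
    for t in range(iters):
        city = targets[t % len(targets)]
        w = int(np.argmin(((ring - city) ** 2).sum(1)))   # winning neuron
        for j in range(n):
            d = min(abs(j - w), n - abs(j - w))           # distance along the ring
            h = np.exp(-d ** 2 / (2 * radius ** 2))       # neighbourhood strength
            ring[j] += lr * h * (city - ring[j])
        lr *= 0.99                                        # anneal learning rate
        radius *= 0.99                                    # shrink neighbourhood
    return ring

trained = train(ring, targets)
```

One iteration costs O(n) regardless of how many robots will share the resulting routes, which hints at why an SOM-style solver can scale with team size.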
|
|
MoBT8 |
Room T8 |
AI and Learning for Autonomous Driving Applications |
Regular session |
Chair: Pillai, Sudeep | Toyota Research Institute |
|
11:45-12:00, Paper MoBT8.1 | |
>Accurate, Low-Latency Visual Perception for Autonomous Racing: Challenges, Mechanisms, and Practical Solutions |
|
Strobel, Kieran | MIT |
Zhu, Sibo | Brandeis University |
Chang, Raphael | Massachusetts Institute of Technology |
Koppula, Skanda | Google DeepMind |
Keywords: Deep Learning for Visual Perception, Computer Vision for Automation, Autonomous Vehicle Navigation
Abstract: Autonomous racing provides the opportunity to test safety-critical perception pipelines at their limit. This paper describes the practical challenges and solutions in applying state-of-the-art computer vision algorithms to build a low-latency, high-accuracy perception system for DUT18 Driverless (DUT18D), a 4WD electric race car with podium finishes at all Formula Driverless competitions for which it raced. The key components of DUT18D include YOLOv3-based object detection, pose estimation, and time synchronization on its dual stereovision/monovision camera setup. We highlight modifications required to adapt perception CNNs to racing domains, improvements to the loss functions used for pose estimation, and methodologies for sub-microsecond camera synchronization, among other improvements. We perform a thorough experimental evaluation of the system, demonstrating its accuracy and low latency in real-world racing scenarios.
|
|
12:00-12:15, Paper MoBT8.2 | |
>Spatio-Temporal Ultrasonic Dataset: Learning Driving from Spatial and Temporal Ultrasonic Cues |
|
Wang, Shuai | University of Science and Technology of China |
Qin, Jiahu | University of Science and Technology of China |
Zhang, Zhanpeng | University of Science and Technology of China |
Keywords: Autonomous Vehicle Navigation, Big Data in Robotics and Automation, Model Learning for Control
Abstract: Recent works have shown that combining spatial and temporal visual cues can significantly improve the performance of various vision-based robotic systems. However, for the ultrasonic sensors used in many robotic tasks (e.g., collision avoidance, localization, and navigation), there is a lack of benchmark datasets consisting of spatial and temporal data with which to verify the usability of spatial and temporal ultrasonic cues. In this paper, we are the first to propose a Spatio-Temporal Ultrasonic Dataset (STUD), which aims to extend the capabilities of ultrasonic sensors by mining spatial and temporal information from multiple ultrasonic measurements. In particular, we first propose a novel spatio-temporal (ST) ultrasonic data gathering scheme, in which an innovative data instance is designed. In addition, part of the data in the STUD is collected in a robot simulator, in which a well-designed corridor map is utilized to increase data diversity. A selection algorithm is then proposed to find a proper length of data sequences to obtain the best description of the navigation environments. Finally, we present an end-to-end learning benchmark model that learns driving policies by extracting spatial and temporal ultrasonic cues from the STUD. With the help of the STUD and this benchmark model, more powerful deep neural networks can be trained to address indoor navigation or motion planning tasks for mobile robots, which is unachievable by simply using existing ultrasonic datasets. Comparison experiments verify the effectiveness of spatial and temporal ultrasonic cues for driving policy learning.
|
|
12:15-12:30, Paper MoBT8.3 | |
>A POMDP Treatment of Vehicle-Pedestrian Interaction: Implicit Coordination Via Uncertainty-Aware Planning |
|
Hsu, Ya-Chuan | Texas A&M University |
Gopalswamy, Swaminathan | Texas A&M University |
Saripalli, Srikanth | Texas A&M |
Shell, Dylan | Texas A&M University |
Keywords: AI-Based Methods, Autonomous Vehicle Navigation, Social Human-Robot Interaction
Abstract: Drivers and other road users often encounter situations (e.g., arriving at an intersection simultaneously) where priority is ambiguous or unclear but must be resolved via communication to reach agreement. This poses a challenge for autonomous vehicles, for which no direct means for expressing intent and acknowledgment has yet been established. This paper contributes a minimal model to manage ambiguity and produce actions that are expressive and encode aspects of intent. Specifically, intent is treated as a latent variable, communicated implicitly through a partially observable Markov decision process (POMDP). We validate the model in a simple setting: a simulation of a prototypical crossing with a vehicle and one pedestrian at an unsignalized intersection. We further report use of our self-driving Ford Lincoln MKZ platform, through which we conducted experimental trials of the method involving real-time interaction. The experiment shows the method achieves safe and efficient navigation.
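The latent-intent idea can be made concrete with the belief-update step at the heart of any POMDP policy: pedestrian intent is a hidden state, and the vehicle updates a belief over it with Bayes' rule from observed motion. The states, observations, and probabilities below are invented for illustration, not taken from the paper:

```python
# P(observation | intent): a pedestrian intending to cross is far more
# likely to step toward the road (hypothetical observation model).
OBS_MODEL = {
    'cross': {'step_forward': 0.8, 'stand_still': 0.2},
    'yield': {'step_forward': 0.1, 'stand_still': 0.9},
}

def bayes_update(belief, obs):
    """Posterior over intent after one observation (Bayes' rule + normalize)."""
    post = {s: belief[s] * OBS_MODEL[s][obs] for s in belief}
    z = sum(post.values())
    return {s: p / z for s, p in post.items()}

belief = {'cross': 0.5, 'yield': 0.5}
belief = bayes_update(belief, 'step_forward')
# belief['cross'] is now 0.4 / 0.45 = 8/9, roughly 0.89
```

A POMDP planner then chooses actions (e.g., a gentle slow-down) that both respond to this belief and, through their expressiveness, shape the pedestrian's belief in return; that planning step is omitted here.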
|
|
12:30-12:45, Paper MoBT8.4 | |
>Multiple Trajectory Prediction with Deep Temporal and Spatial Convolutional Neural Networks |
|
Strohbeck, Jan | Ulm University |
Belagiannis, Vasileios | Universität Ulm |
Müller, Johannes | Ulm University |
Schreiber, Marcel | Ulm University |
Herrmann, Martin | Ulm University |
Wolf, Daniel | Ulm University |
Buchholz, Michael | University of Ulm |
Keywords: Autonomous Vehicle Navigation, Novel Deep Learning Methods, AI-Based Methods
Abstract: Automated vehicles need to not only perceive their environment, but also predict the possible future behavior of all detected traffic participants in order to safely navigate in complex scenarios and avoid critical situations, ranging from merging on highways to crossing urban intersections. Due to the availability of datasets with large numbers of recorded trajectories of traffic participants, deep learning based approaches can be used to model the behavior of road users. This paper proposes a convolutional network that operates on rasterized actor-centric images which encode the static and dynamic actor-environment. We predict multiple possible future trajectories for each traffic actor, which include position, velocity, acceleration, orientation, yaw rate and position uncertainty estimates. To make better use of the past movement of the actor, we propose to employ temporal convolutional networks (TCNs) and rely on uncertainties estimated from the previous object tracking stage. We evaluate our approach on the public "Argoverse Motion Forecasting" dataset, on which it won the first prize at the Argoverse Motion Forecasting Challenge, as presented on the NeurIPS 2019 workshop on "Machine Learning for Autonomous Driving".
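The temporal convolutional networks (TCNs) the authors employ are built from causal 1-D convolutions: each output depends only on present and past inputs. A minimal NumPy sketch of that building block (illustrative, not the paper's network):

```python
import numpy as np

def causal_conv1d(x, w):
    """1-D causal convolution: y[t] depends only on x[t-k+1 .. t].

    Left-padding the input with k-1 zeros keeps the output the same
    length as the input while never looking into the future, which is
    the defining property of a TCN layer.
    """
    k = len(w)
    xp = np.concatenate([np.zeros(k - 1), x])
    return np.array([xp[t:t + k] @ w[::-1] for t in range(len(x))])

x = np.arange(1.0, 9.0)               # a toy scalar input sequence
w = np.array([0.5, 0.3, 0.2])         # filter taps: w[0] weights x[t]
y = causal_conv1d(x, w)               # y[0] sees only x[0] (past is zero)
```

In the paper's setting, stacks of such (dilated) layers summarize an actor's past motion before the spatial CNN consumes the rasterized scene.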
|
|
12:45-13:00, Paper MoBT8.5 | |
>End-To-End Autonomous Driving Perception with Sequential Latent Representation Learning |
|
Chen, Jianyu | UC Berkeley |
Xu, Zhuo | UC Berkeley |
Tomizuka, Masayoshi | University of California |
Keywords: Representation Learning, Deep Learning for Visual Perception, Semantic Scene Understanding
Abstract: Current autonomous driving systems are composed of a perception system and a decision system, both of which are divided into multiple subsystems built with many hand-engineered heuristics. An end-to-end approach could clean up the system and avoid huge human engineering efforts, as well as obtain better performance with increasing data and computation resources. Compared to the decision system, the perception system is more suitable for an end-to-end framework, since it does not require online driving exploration. In this paper, we propose a novel end-to-end approach for autonomous driving perception. A latent space is introduced to capture all relevant features useful for perception, which is learned through sequential latent representation learning. The learned end-to-end perception model is able to solve the detection, tracking, localization, and mapping problems altogether with only minimal human engineering effort and without storing any maps online. The proposed method is evaluated in a realistic urban driving simulator, with both camera images and lidar point clouds as sensor inputs.
|
|
13:00-13:15, Paper MoBT8.6 | |
>PillarFlow: End-To-End Birds-Eye-View Flow Estimation for Autonomous Driving |
> Video Attachment
|
|
Lee, Kuan-Hui | Toyota Research Institute |
Kliemann, Matthew | Toyota Research Institute |
Gaidon, Adrien | Toyota Research Institute |
Li, Jie | University of Michigan |
Fang, Chao | Toyota Research Institute |
Pillai, Sudeep | Toyota Research Institute |
Burgard, Wolfram | Toyota Research Institute |
Keywords: Deep Learning for Visual Perception, Computer Vision for Automation, Visual Learning
Abstract: In autonomous driving, accurately estimating the state of surrounding obstacles is critical for safe and robust path planning. However, this perception task is difficult, particularly for generic obstacles/objects, due to appearance and occlusion changes. To tackle this problem, we propose an end-to-end deep learning framework for LIDAR-based flow estimation in bird's eye view (BeV). Our method takes consecutive point cloud pairs as input and produces a 2-D BeV "flow" grid describing the dynamic state of each cell. The experimental results show that the proposed method not only estimates 2-D BeV flow accurately but also improves tracking performance of both dynamic and static objects.
|
|
MoBT9 |
Room T9 |
Autonomous Vehicles: Behavior |
Regular session |
Chair: Borges, Paulo Vinicius Koerich | CSIRO |
|
11:45-12:00, Paper MoBT9.1 | |
>Real-Time Detection of Distracted Driving Using Dual Cameras |
|
Tran, Duy | Oklahoma State University |
Do, Ha Manh | Oklahoma State University |
Lu, Jiaxing | Oklahoma State University |
Sheng, Weihua | Oklahoma State University |
Keywords: Intelligent Transportation Systems, Robot Safety
Abstract: Distracted driving is one of the main contributors to traffic accidents. This paper proposes a deep learning approach to detecting multiple distracted driving behaviors. To obtain more accurate detection results, a synchronized image recognition system based on two cameras is designed, by which the body movements and the face of the driver are monitored respectively. The images captured from the driver's body and face areas are fed to two Convolutional Neural Networks (CNNs) simultaneously to ensure classification performance. The data collection and validation processes of the proposed distraction detection approach were conducted on a laboratory-based assisted-driving testbed to provide near-realistic driving experiences. Our dataset includes distracted and safe driving images of the drivers. Furthermore, we developed a practical voice-alert application that prompts the distracted driver to focus on the driving task. We evaluated the VGG-16, ResNet, and MobileNet-v2 networks for the proposed approach. Experimental results show that by using two cameras and VGG-16 networks, we can achieve a recognition accuracy of 96.7% at a computation speed of 8 fps.
|
|
12:00-12:15, Paper MoBT9.2 | |
>Expressing Diverse Human Driving Behavior with Probabilistic Rewards and Online Inference
|
Sun, Liting | University of California, Berkeley |
Wu, Zheng | University of California, Berkeley |
Ma, Hengbo | University of California, Berkeley |
Tomizuka, Masayoshi | University of California |
Keywords: Intelligent Transportation Systems, Learning from Demonstration
Abstract: In human-robot interaction (HRI) systems, such as autonomous vehicles, understanding and representing human behavior are important. Human behavior is naturally rich and diverse. Cost/reward learning, as an efficient way to learn and represent human behavior, has been successfully applied in many domains. Most traditional inverse reinforcement learning (IRL) algorithms, however, cannot adequately capture the diversity of human behavior since they assume that all behavior in a given dataset is generated by a single cost function. In this paper, we propose a probabilistic IRL framework that directly learns a distribution of cost functions in the continuous domain. Evaluations on both synthetic data and real human driving data are conducted. Both the quantitative and subjective results show that our proposed framework can better express diverse human driving behaviors, as well as extract different driving styles that match what human participants interpret in our user study.
|
|
12:15-12:30, Paper MoBT9.3 | |
>Identification of Effective Motion Primitives for Ground Vehicles |
|
Löw, Tobias | ETH Zürich |
Bandyopadhyay, Tirthankar | CSIRO |
Borges, Paulo Vinicius Koerich | CSIRO |
Keywords: Autonomous Vehicle Navigation, Field Robots, Motion and Path Planning
Abstract: Understanding the kinematics of a ground robot is essential for efficient navigation. Based on the kinematic model of a robot, its full motion capabilities can be represented by theoretical motion primitives. However, depending on the environment and/or human preferences, not all of those theoretical motion primitives are desirable and/or achievable. This work presents a method to identify effective motion primitives (eMP) from continuous trajectories of autonomous ground robots. The pipeline efficiently performs segmentation, representation, and reconstruction of the motion primitives, using initial human-driving behaviour as a guide to create a motion primitive library. Hence, this strategy incorporates how the environment affects robot operation in terms of acceleration, speed, braking, and steering behaviours. The method is thoroughly tested on an autonomous car-like electric vehicle, and the results show excellent generalisation of the theoretical motion primitive distribution to the real vehicle. The experiments are carried out on a large site with very diverse characteristics, illustrating the applicability of the method.
|
|
12:30-12:45, Paper MoBT9.4 | |
>CMetric: A Driving Behavior Measure Using Centrality Functions |
> Video Attachment
|
|
Chandra, Rohan | University of Maryland |
Bhattacharya, Uttaran | UMD College Park |
Mittal, Trisha | University of Maryland, College Park |
Bera, Aniket | University of Maryland |
Manocha, Dinesh | University of Maryland |
Keywords: Intelligent Transportation Systems
Abstract: We present a new measure, CMetric, to classify driver behaviors using centrality functions. Our formulation combines concepts from computational graph theory and social traffic psychology to quantify and classify the behavior of human drivers. CMetric is used to compute the probability of a vehicle executing a driving style, as well as the intensity used to execute the style. Our approach is designed for real-time autonomous driving applications, where the trajectory of each vehicle or road-agent is extracted from a video. We compute a dynamic geometric graph (DGG) based on the positions and proximity of the road-agents, along with centrality functions corresponding to closeness and degree. These functions are used to compute the CMetric based on style likelihood and style intensity estimates. Our approach is general and makes no assumption about traffic density, heterogeneity, or how driving behaviors change over time. We present an algorithm to compute CMetric and demonstrate its performance on real-world traffic datasets. To test the accuracy of CMetric, we introduce a new evaluation protocol (called "Time Deviation Error") that measures the difference between human prediction and the prediction made by CMetric.
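As a rough illustration of the centrality functions involved (a minimal sketch, not the authors' CMetric implementation; the connectivity radius and agent positions below are invented):

```python
import math

def degree_centrality(positions, radius):
    """Degree centrality on a dynamic geometric graph (DGG): two
    road-agents are connected when closer than `radius`."""
    n = len(positions)
    deg = [0] * n
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(positions[i], positions[j]) < radius:
                deg[i] += 1
                deg[j] += 1
    # normalize by the maximum possible degree
    return [d / (n - 1) for d in deg]

def closeness_centrality(positions):
    """Closeness centrality with Euclidean distances as edge weights
    (complete graph, so the shortest path is the direct distance)."""
    n = len(positions)
    scores = []
    for i in range(n):
        total = sum(math.dist(positions[i], positions[j])
                    for j in range(n) if j != i)
        scores.append((n - 1) / total)
    return scores

# Three road-agents on a straight road segment; the middle one is "central".
agents = [(0.0, 0.0), (10.0, 0.0), (20.0, 0.0)]
print(degree_centrality(agents, radius=15.0))  # middle agent has the highest degree
print(closeness_centrality(agents))
```

In the paper, such functions would be tracked over time on the DGG extracted from video, and their evolution feeds the style-likelihood and style-intensity estimates.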
|
|
12:45-13:00, Paper MoBT9.5 | |
>Forecasting Trajectory and Behavior of Road-Agents Using Spectral Clustering in Graph-LSTMs |
|
Chandra, Rohan | University of Maryland |
Guan, Tianrui | University of Maryland |
Panuganti, Srujan | University of Maryland, College Park |
Mittal, Trisha | University of Maryland, College Park |
Bhattacharya, Uttaran | UMD College Park |
Bera, Aniket | University of Maryland |
Manocha, Dinesh | University of Maryland |
Keywords: Intelligent Transportation Systems, Autonomous Agents
Abstract: We present a novel approach for traffic forecasting in urban traffic scenarios using a combination of spectral graph analysis and deep learning. We predict both the low-level information (future trajectories) as well as the high-level information (road-agent behavior) from the extracted trajectory of each road-agent. Our formulation represents the proximity between the road agents using a weighted dynamic geometric graph (DGG). We use a two-stream graph-LSTM network to perform traffic forecasting using these weighted DGGs. The first stream predicts the spatial coordinates of road-agents, while the second stream predicts whether a road-agent is going to exhibit overspeeding, underspeeding, or neutral behavior by modeling spatial interactions between road-agents. Additionally, we propose a new regularization algorithm based on spectral clustering to reduce the error margin in long-term prediction (3-5 seconds) and improve the accuracy of the predicted trajectories. Moreover, we prove a theoretical upper bound on the regularized prediction error. We evaluate our approach on the Argoverse, Lyft, Apolloscape, and NGSIM datasets and highlight the benefits over prior trajectory prediction methods. In practice, our approach reduces the average prediction error by approximately 75% over prior algorithms and achieves a weighted average accuracy of 91.2% for behavior prediction. Additionally, our spectral regularization improves long-term prediction by up to 70%.
|
|
MoBT10 |
Room T10 |
Autonomous Vehicles: Mapping |
Regular session |
Chair: Tombari, Federico | Technische Universität München |
Co-Chair: Liu, Lantao | Indiana University |
|
11:45-12:00, Paper MoBT10.1 | |
>Frontier Detection and Reachability Analysis for Efficient 2D Graph-SLAM Based Active Exploration |
> Video Attachment
|
|
Sun, Zezhou | Nanjing University of Science and Technology |
Wu, Banghe | Nanjing University of Science and Technology |
Xu, Cheng-Zhong | University of Macau |
Sarma, Sanjay E. | MIT |
Yang, Jian | Nanjing University of Science & Technology |
Kong, Hui | Nanjing University of Science and Technology |
Keywords: Autonomous Vehicle Navigation, Path Planning for Multiple Mobile Robots or Agents, Mapping
Abstract: We propose an integrated approach to active exploration by exploiting the Cartographer method as the base SLAM module for submap creation and performing efficient frontier detection in the geometrically co-aligned submaps induced by graph optimization. We also carry out analysis on the reachability of frontiers and their clusters to ensure that the detected frontiers can be reached by the robot. Our method is tested on a mobile robot in a real indoor scene to demonstrate the effectiveness and efficiency of our approach.
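The two core steps named in the abstract, frontier detection and reachability analysis, can be sketched on a toy occupancy grid (a hedged illustration; the paper operates on Cartographer submaps, and the grid values and layout here are invented):

```python
from collections import deque

def detect_frontiers(grid):
    """Frontier cells: free cells (0) with at least one unknown (-1)
    4-neighbour. grid uses -1 = unknown, 0 = free, 1 = occupied."""
    rows, cols = len(grid), len(grid[0])
    frontiers = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != 0:
                continue
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == -1:
                    frontiers.append((r, c))
                    break
    return frontiers

def reachable(grid, start, targets):
    """BFS over free cells to keep only the frontiers the robot can reach."""
    rows, cols = len(grid), len(grid[0])
    seen, queue = {start}, deque([start])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append((nr, nc))
    return [t for t in targets if t in seen]

grid = [
    [0, 0, -1],
    [0, 1, -1],
    [0, 0,  0],
]
print(detect_frontiers(grid))                         # → [(0, 1), (2, 2)]
print(reachable(grid, (0, 0), detect_frontiers(grid)))
```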
|
|
12:00-12:15, Paper MoBT10.2 | |
>Probabilistic Semantic Mapping for Urban Autonomous Driving Applications |
> Video Attachment
|
|
Paz, David | University of California, San Diego |
Zhang, Hengyuan | University of California, San Diego |
Li, Qinru | University of California San Diego |
Xiang, Hao | University of California, San Diego |
Christensen, Henrik Iskov | UC San Diego |
Keywords: Autonomous Vehicle Navigation, Semantic Scene Understanding, Mapping
Abstract: Recent advancements in statistical learning and computational abilities have enabled autonomous vehicle technology to develop at a much faster rate. While many of the architectures previously introduced are capable of operating under highly dynamic environments, many of these are constrained to smaller-scale deployments, require constant maintenance due to the scalability cost associated with high-definition (HD) maps, and involve tedious manual labeling. As an attempt to tackle this problem, we propose to fuse image and pre-built point cloud map information to perform automatic and accurate labeling of static landmarks such as roads, sidewalks, crosswalks and lanes. The method performs semantic segmentation on 2D images, associates the semantic labels with point cloud maps to accurately localize them in the world, and leverages the confusion matrix formulation to construct a probabilistic semantic map in bird's eye view from semantic point clouds. Experiments on data collected in an urban environment show that this model is able to predict most road features and can be extended to automatically incorporate road features into HD maps in future work.
|
|
12:15-12:30, Paper MoBT10.3 | |
>City-Scale Grid-Topological Hybrid Maps for Autonomous Mobile Robot Navigation in Urban Area |
> Video Attachment
|
|
Niijima, Shun | Tokyo University of Science, National Institute of Advanced Indu |
Umeyama, Ryusuke | Tokyo University of Science |
Sasaki, Yoko | National Inst. of Advanced Industrial Science and Technology |
Mizoguchi, Hiroshi | Tokyo University of Science |
Keywords: Wheeled Robots, Autonomous Vehicle Navigation
Abstract: Extensive city navigation remains an unresolved problem for autonomous mobile robots that share space with pedestrians. This paper proposes a configuration for a navigation map that expresses urban structures and an autonomous navigation scheme that uses the configuration. The proposed map configuration is a hybrid structure of multiple 2D grid maps and a topological graph. The occupancy grids for path planning are automatically converted from a given 3D point cloud and publicly available maps. The topological graph manages the connections between the subdivisions of occupancy grids and is used for route planning. This hybrid configuration can embed various urban structures automatically and is applicable to a wide range of autonomous navigation tasks. We evaluated the map by generating the proposed navigation map in a real city and performing path planning on the hybrid map. Experimental results demonstrated that the hybrid map can reduce planning time and memory usage compared to conventional path planning based on a single 2D grid map.
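The route-planning role of the topological layer can be sketched as follows (an illustrative Dijkstra over invented submap nodes and edge costs; in the paper the maps are generated automatically from 3D point clouds):

```python
import heapq

# Topological graph: nodes are 2D-submap IDs, edges connect adjacent submaps
# with traversal-cost weights (all names and values invented).
topo = {
    "plaza":    {"street_a": 40.0, "street_b": 65.0},
    "street_a": {"plaza": 40.0, "park": 30.0},
    "street_b": {"plaza": 65.0, "park": 20.0},
    "park":     {"street_a": 30.0, "street_b": 20.0},
}

def route(graph, start, goal):
    """Dijkstra over the topological layer; each hop would then be refined
    by grid-based path planning inside the corresponding 2D submap."""
    pq = [(0.0, start, [start])]
    done = set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == goal:
            return cost, path
        if node in done:
            continue
        done.add(node)
        for nxt, w in graph[node].items():
            if nxt not in done:
                heapq.heappush(pq, (cost + w, nxt, path + [nxt]))
    return float("inf"), []

print(route(topo, "plaza", "park"))  # → (70.0, ['plaza', 'street_a', 'park'])
```

Planning over a handful of submap nodes, then only loading the grids along the chosen route, is what lets a hybrid map cut planning time and memory relative to one city-wide grid.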
|
|
12:30-12:45, Paper MoBT10.4 | |
>State-Continuity Approximation of Markov Decision Processes Via Finite Element Methods for Autonomous System Planning |
|
Xu, Junhong | Indiana University |
Yin, Kai | HomeAway |
Liu, Lantao | Indiana University |
Keywords: Autonomous Vehicle Navigation, Motion and Path Planning, Marine Robotics
Abstract: Motion planning under uncertainty for an autonomous system can be formulated as a Markov Decision Process with a continuous state space. In this paper, we propose a novel solution to this decision-theoretic planning problem that directly obtains the continuous value function with only the first and second moments of the transition probabilities, relaxing the common assumption in the literature of an explicit transition model. We achieve this by taking advantage of the linear span of basis functions for the value function and a partial differential equation that approximates the Bellman equation, so that the value function can be naturally constructed using a finite element method. We have validated our approach via extensive simulations, and the evaluations reveal that compared to baseline methods, our solution leads to the best path results in terms of path smoothness, travel distance, and time costs.
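The representational idea, a value function lying in the linear span of finite element basis functions, can be sketched in 1D with a hat basis (illustrative nodes and coefficients; the paper obtains the coefficients by solving a Galerkin approximation of the Bellman PDE, which is omitted here):

```python
def hat(x, nodes, i):
    """Piecewise-linear (hat) finite element basis function centred at nodes[i]."""
    if i > 0 and nodes[i - 1] <= x <= nodes[i]:
        return (x - nodes[i - 1]) / (nodes[i] - nodes[i - 1])
    if i < len(nodes) - 1 and nodes[i] <= x <= nodes[i + 1]:
        return (nodes[i + 1] - x) / (nodes[i + 1] - nodes[i])
    return 0.0

def value(x, nodes, coeffs):
    """Value function represented in the linear span of the hat basis:
    V(x) = sum_i c_i * phi_i(x)."""
    return sum(c * hat(x, nodes, i) for i, c in enumerate(coeffs))

# Illustrative 1D state space with three mesh nodes; with a nodal (hat)
# basis the coefficients are exactly the nodal values of V.
nodes = [0.0, 0.5, 1.0]
coeffs = [0.0, 1.0, 0.5]
print(value(0.25, nodes, coeffs))   # linear interpolation between the first two nodes
```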
|
|
12:45-13:00, Paper MoBT10.5 | |
>APPLD: Adaptive Planner Parameter Learning from Demonstration |
> Video Attachment
|
|
Xiao, Xuesu | The University of Texas at Austin |
Liu, Bo | University of Texas at Austin |
Warnell, Garrett | U.S. Army Research Laboratory |
Fink, Jonathan | US Army Research Laboratory |
Stone, Peter | University of Texas at Austin |
Keywords: Autonomous Vehicle Navigation, Learning from Demonstration, Motion and Path Planning
Abstract: Existing autonomous robot navigation systems allow robots to move from one point to another in a collision-free manner. However, when facing new environments, these systems generally require re-tuning by expert roboticists with a good understanding of the inner workings of the navigation system. In contrast, even users who are unversed in the details of robot navigation algorithms can generate desirable navigation behavior in new environments via teleoperation. In this paper, we introduce APPLD, Adaptive Planner Parameter Learning from Demonstration, that allows existing navigation systems to be successfully applied to new complex environments, given only a human-teleoperated demonstration of desirable navigation. APPLD is verified on two robots running different navigation systems in different environments. Experimental results show that APPLD can outperform navigation systems with the default and expert-tuned parameters, and even the human demonstrator themselves.
|
|
13:00-13:15, Paper MoBT10.6 | |
>Explicit Domain Adaptation with Loosely Coupled Samples |
> Video Attachment
|
|
Scheel, Oliver | BMW Group |
Schwarz, Loren | BMW Group |
Navab, Nassir | TU Munich |
Tombari, Federico | Technische Universität München |
Keywords: Autonomous Vehicle Navigation, AI-Based Methods, Novel Deep Learning Methods
Abstract: Transfer learning is an important field of machine learning in general, and particularly in the context of fully autonomous driving, which needs to be solved simultaneously for many different domains, such as changing weather conditions and country-specific driving behaviors. Traditional transfer learning methods often focus on image data and are black-box models. In this work we propose a transfer learning framework, core of which is learning an explicit mapping between domains. Due to its interpretability, this is beneficial for safety-critical applications, like autonomous driving. We show its general applicability by considering image classification problems and then move on to time-series data, particularly predicting lane changes. In our evaluation we adapt a pre-trained model to a dataset exhibiting different driving and sensory characteristics.
|
|
MoBT11 |
Room T11 |
Autonomous Vehicles: Navigation I |
Regular session |
Chair: Zhang, Shiqi | SUNY Binghamton |
Co-Chair: Johnson-Roberson, Matthew | University of Michigan |
|
11:45-12:00, Paper MoBT11.1 | |
>SCALE-Net: Scalable Vehicle Trajectory Prediction Network under Random Number of Interacting Vehicles Via Edge-Enhanced Graph Convolutional Neural Network |
> Video Attachment
|
|
Jeon, Hyeongseok | Korea Advanced Institute of Science and Technology (KAIST) |
Choi, Jun-Won | Hanyang University |
Kum, Dongsuk | KAIST |
Keywords: Intelligent Transportation Systems, Autonomous Agents, Novel Deep Learning Methods
Abstract: Predicting the future trajectories of surrounding vehicles under randomly varying traffic levels is one of the most challenging problems in developing an autonomous vehicle. Since the number of interacting vehicles is not pre-defined, the prediction network has to be scalable with respect to the number of vehicles in order to guarantee consistent performance in terms of both accuracy and computational load. In this paper, we propose SCALE-Net, the first fully scalable trajectory prediction network, which ensures high prediction performance while keeping the computational load low regardless of the number of surrounding vehicles. SCALE-Net employs an Edge-enhanced Graph Convolutional Neural Network (EGCN) as the inter-vehicular interaction embedding network. Since the proposed EGCN is inherently scalable with respect to the number of graph nodes (agents in this study), the model can be operated independently of the total number of vehicles considered. We evaluated the scalability of SCALE-Net on the publicly available NGSIM datasets by comparing variations in computation time and prediction accuracy per driving scene as the number of vehicles varies. The experiments show that both the computation time and prediction performance of SCALE-Net consistently outperform those of previous models regardless of the level of traffic complexity.
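A minimal message-passing step in the spirit of an edge-enhanced graph convolution might look like this (pure-Python sketch with invented features and edge weights; the actual EGCN is a trained deep network):

```python
def egcn_layer(node_feats, edge_feats, neighbors):
    """One message-passing step where each neighbour message is scaled by an
    edge feature (e.g. a distance-based weight). The node count is arbitrary,
    which is what makes the layer scalable in the number of vehicles."""
    out = []
    for i, h in enumerate(node_feats):
        agg = list(h)                       # self term
        for j in neighbors[i]:
            w = edge_feats[(i, j)]
            for k, v in enumerate(node_feats[j]):
                agg[k] += w * v
        out.append([max(0.0, v) for v in agg])   # ReLU nonlinearity
    return out

# Three vehicles; edge weights fall off with inter-vehicle distance (invented).
h = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
e = {(0, 1): 0.5, (1, 0): 0.5, (1, 2): 0.8, (2, 1): 0.8}
nbrs = {0: [1], 1: [0, 2], 2: [1]}
print(egcn_layer(h, e, nbrs))
```

Adding or removing a vehicle only changes the lengths of `h` and `nbrs`; no part of the layer is tied to a fixed number of agents.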
|
|
12:00-12:15, Paper MoBT11.2 | |
>Behaviorally Diverse Traffic Simulation Via Reinforcement Learning |
> Video Attachment
|
|
Shiroshita, Shinya | Preferred Networks, Inc |
Maruyama, Shirou | Preferred Networks, Inc |
Nishiyama, Daisuke | Preferred Networks, Inc |
Ynocente Castro, Mario | Preferred Networks, Inc |
Hamzaoui, Karim | Preferred Networks Inc |
Rosman, Guy | Massachusetts Institute of Technology |
DeCastro, Jonathan | Cornell University |
Lee, Kuan-Hui | Toyota Research Institute |
Gaidon, Adrien | Toyota Research Institute |
Keywords: Intelligent Transportation Systems, Reinforcement Learning, Autonomous Agents
Abstract: Traffic simulators are important tools in autonomous driving development. While continuous progress has been made to provide developers more options for modeling various traffic participants, tuning these models to increase their behavioral diversity while maintaining quality is often very challenging. This paper introduces an easily-tunable policy generation algorithm for autonomous driving agents. The proposed algorithm balances diversity and driving skills by leveraging the representation and exploration abilities of deep reinforcement learning via a distinct policy set selector. Moreover, we present an algorithm utilizing intrinsic rewards to widen behavioral differences in the training. To provide quantitative assessments, we develop two trajectory-based evaluation metrics which measure the differences among policies and behavioral coverage. We experimentally show the effectiveness of our methods on several challenging intersection scenes.
|
|
12:15-12:30, Paper MoBT11.3 | |
>Predictive Runtime Monitoring of Vehicle Models Using Bayesian Estimation and Reachability Analysis |
> Video Attachment
|
|
Chou, Yi | University of Colorado, Boulder |
Yoon, Hansol | University of Colorado Boulder |
Sankaranarayanan, Sriram | University of Colorado, Boulder |
Keywords: Autonomous Vehicle Navigation, Formal Methods in Robotics and Automation, Collision Avoidance
Abstract: We present a predictive runtime monitoring technique for estimating future vehicle positions and the probability of collisions with obstacles. Vehicle dynamics model how the position and velocity change over time as a function of external inputs. They are commonly described by discrete-time stochastic models. Whereas positions and velocities can be measured, the inputs (steering and throttle) are not directly measurable in these models. In our paper, we apply Bayesian inference techniques for real-time estimation, given a prior distribution over the unknowns and noisy state measurements. Next, we use pre-computed set-valued reachability analysis to approximate the future positions of a vehicle. The pre-computed reachability sets are combined with the posterior probabilities computed through Bayesian estimation to provide a predictive verification framework that can be used to detect impending collisions with obstacles. Our approach is evaluated using the coordinated-turn vehicle model for a UAV, with on-board measurement data obtained from a flight test of a Talon UAV. We also compare the results with sampling-based approaches. We find that precomputed reachability analysis can provide accurate warnings up to 6 seconds in advance, and the accuracy of the warnings improves as the time horizon is narrowed from 6 to 2 seconds. The approach also outperforms sampling in terms of on-board computation cost and accuracy measures.
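The combination of Bayesian input estimation with pre-computed reachable sets can be sketched with a discrete hypothesis set (all values invented; the paper uses richer dynamics and set-valued reachability):

```python
import math

def gaussian(x, mu, sigma):
    """Gaussian likelihood of a noisy measurement."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Hypotheses over the unmeasured input (e.g. turn rate), with a uniform prior.
turn_rates = [-0.2, 0.0, 0.2]          # rad/s (illustrative)
prior = [1 / 3] * 3

# Bayes update from one noisy heading-rate measurement.
measured, sigma = 0.17, 0.1
likelihood = [gaussian(measured, w, sigma) for w in turn_rates]
evidence = sum(p * l for p, l in zip(prior, likelihood))
posterior = [p * l / evidence for p, l in zip(prior, likelihood)]

# Pre-computed offline: whether each hypothesis' reachable set intersects
# the obstacle over the prediction horizon (illustrative flags).
hits_obstacle = [False, False, True]
p_collision = sum(p for p, hit in zip(posterior, hits_obstacle) if hit)
print(posterior, p_collision)
```

The reachability flags are computed once ahead of time, so at runtime the monitor only performs the cheap Bayes update and weighted sum, which is the source of the on-board cost advantage over sampling.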
|
|
12:30-12:45, Paper MoBT11.4 | |
>Task-Motion Planning for Safe and Efficient Urban Driving |
> Video Attachment
|
|
Ding, Yan | SUNY Binghamton |
Zhang, Xiaohan | SUNY Binghamton |
Zhan, Xingyue | Binghamton University |
Zhang, Shiqi | SUNY Binghamton |
Keywords: Autonomous Vehicle Navigation, Task Planning, Motion and Path Planning
Abstract: Autonomous vehicles need to plan at the task level to compute a sequence of symbolic actions, such as merging left and turning right, to fulfill people's service requests, where efficiency is the main concern. At the same time, the vehicles must compute continuous trajectories to perform actions at the motion level, where safety is most important. Task-motion planning in autonomous driving faces the problem of maximizing task-level efficiency while ensuring motion-level safety. To this end, we develop TMPUD, an algorithm for Task-Motion Planning in Urban Driving that, for the first time, enables the task and motion planners to communicate about the safety level of driving behaviors. TMPUD has been evaluated using a realistic urban driving simulation platform. Results suggest that TMPUD performs significantly better in efficiency than competitive baselines from the literature, while ensuring the safety of driving behaviors.
|
|
12:45-13:00, Paper MoBT11.5 | |
>Feedback Enhanced Motion Planning for Autonomous Vehicles |
> Video Attachment
|
|
Sun, Ke | University of Pennsylvania |
Schlotfeldt, Brent | University of Pennsylvania |
Chaves, Stephen | Qualcomm Research Philadelphia |
Martin, Paul | Qualcomm |
Mandhyan, Gulshan | Qualcomm |
Kumar, Vijay | University of Pennsylvania, School of Engineering and Applied Sc |
Keywords: Autonomous Vehicle Navigation, Motion and Path Planning
Abstract: In this work, we address the motion planning problem for autonomous vehicles through a new lattice planning approach, called Feedback Enhanced Lattice Planner (FELP). Existing lattice planners have two major limitations, namely the high dimensionality of the lattice and the lack of modeling of agent vehicle behaviors. We propose to apply the Intelligent Driver Model (IDM) [Treiber and Kesting, 2013] as a speed feedback policy to address both of these limitations. IDM both enables the responsive behavior of the agents and uniquely determines the acceleration and speed profile of the ego vehicle on a given path. Therefore, only a spatial lattice is needed, while discretization of higher-order dimensions is no longer required. Additionally, we propose a directed-graph map representation to support the implementation and execution of lattice planners. The map can reflect local geometric structure, embed the traffic rules adhering to the road, and is efficient to construct and update. We show that FELP is more efficient compared to other existing lattice planners through runtime complexity analysis, and we propose two variants of FELP to further reduce the complexity to polynomial time. We demonstrate the improvement by comparing FELP with an existing spatiotemporal lattice planner using simulations of a merging scenario and continuous highway traffic. We also study the performance of FELP under different traffic densities.
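For reference, the standard IDM acceleration law used here as a speed feedback policy is (typical textbook parameter values, not necessarily those of the paper):

```python
import math

def idm_accel(v, v_lead, gap,
              v0=30.0, T=1.5, a_max=1.0, b=2.0, s0=2.0, delta=4):
    """Intelligent Driver Model acceleration (standard formulation).
    v: ego speed, v_lead: leader speed, gap: bumper-to-bumper distance;
    v0: desired speed, T: time headway, a_max/b: accel/comfortable decel,
    s0: minimum gap, delta: acceleration exponent."""
    dv = v - v_lead                      # closing speed
    s_star = s0 + v * T + v * dv / (2 * math.sqrt(a_max * b))
    return a_max * (1 - (v / v0) ** delta - (s_star / gap) ** 2)

# Free road: huge gap -> accelerate towards the desired speed v0.
print(idm_accel(v=20.0, v_lead=20.0, gap=1000.0))
# Closing in on a slower leader -> brake.
print(idm_accel(v=20.0, v_lead=10.0, gap=15.0))
```

Because the speed profile along a candidate path follows deterministically from this feedback law, the planner's lattice only needs to discretize space, not speed or acceleration.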
|
|
13:00-13:15, Paper MoBT11.6 | |
>Low Latency Trajectory Predictions for Interaction Aware Highway Driving |
|
Anderson, Cyrus | University of Michigan |
Vasudevan, Ram | University of Michigan |
Johnson-Roberson, Matthew | University of Michigan |
Keywords: Autonomous Vehicle Navigation, Autonomous Agents
Abstract: Highway driving places significant demands on human drivers and autonomous vehicles (AVs) alike due to high speeds and the complex interactions in dense traffic. Merging onto the highway poses additional challenges by limiting the amount of time available for decision-making. Predicting others' trajectories accurately and quickly is crucial to safely execute maneuvers. Many existing prediction methods based on neural networks have focused on modeling interactions to achieve better accuracy while assuming the existence of observation windows over 3s long. This paper proposes a novel probabilistic model for trajectory prediction that performs competitively with as little as 400ms of observations. The proposed model extends a deterministic car-following model to the probabilistic setting by treating model parameters as unknown random variables and introducing regularization terms. A realtime inference procedure is derived to estimate the parameters from observations in this new model. Experiments on dense traffic in the NGSIM dataset demonstrate that the proposed method achieves state-of-the-art performance with both highly constrained and more traditional observation windows.
|
|
13:00-13:15, Paper MoBT11.7 | |
>Stable Autonomous Spiral Stair Climbing of Tracked Vehicles Using Wall Reaction Force |
> Video Attachment
|
|
Kojima, Shotaro | Tohoku University |
Ohno, Kazunori | Tohoku University |
Suzuki, Takahiro | Tohoku University |
Okada, Yoshito | Tohoku University |
Westfechtel, Thomas | Tohoku University |
Tadokoro, Satoshi | Tohoku University |
Keywords: Autonomous Vehicle Navigation, Motion Control, Kinematics
Abstract: In this paper, an autonomous spiral stair climbing method for tracked vehicles using the reaction force from side walls is proposed. Spiral stairs are one of the most difficult terrains for tracked vehicles because of their asymmetrical ground shape and small turning radius. Tracked vehicles are expected to be used in industrial plant inspection tasks, where robots must navigate multiple floors by ascending stairs. Spiral or curved stairs are installed as part of inspection passages for cylindrical facilities such as boilers, chimneys, or large tanks. Previously, the authors experimentally demonstrated that a wall-following motion is effective for stabilizing and accelerating spiral stair climbing. However, complete automation of the climbing motion, and analysis of why the same motion is generated even when the initial entry angle to the wall is disturbed, remained open. In this study, the authors developed an autonomous spiral stair climbing method using the wall reaction force and clarified its applicable limits using a geometrical model. Autonomous spiral stair climbing is realized by attaching passive wheels at the collision point and automating the motions of the main-tracks and sub-tracks. The geometrical model gives the expected trajectory of the robot on the spiral stairs and suggests that the robot's rotation radius converges to a specific value; this is experimentally confirmed by measuring the robot's motion. The wall-following motion of the robot is analogous to human inspectors grasping handrails while climbing stairs: through contact with surrounding objects, the motion is stabilized and its certainty is guaranteed.
|
|
MoBT12 |
Room T12 |
Autonomous Vehicles: Navigation II |
Regular session |
Chair: Bezzo, Nicola | University of Virginia |
Co-Chair: Miao, Fei | University of Connecticut |
|
11:45-12:00, Paper MoBT12.1 | |
>GndNet: Fast Ground Plane Estimation and Point Cloud Segmentation for Autonomous Vehicles |
> Video Attachment
|
|
Paigwar, Anshul | Institut National De Recherche En Informatique Et En Automatique |
Erkent, Ozgur | Inria |
Sierra-Gonzalez, David | Inria Grenoble Rhône-Alpes |
Laugier, Christian | INRIA |
Keywords: Intelligent Transportation Systems, Autonomous Vehicle Navigation, Novel Deep Learning Methods
Abstract: Ground plane estimation and ground point segmentation are crucial precursors for many applications in robotics and intelligent vehicles, such as navigable space detection, occupancy grid generation, 3D object detection, and point cloud matching for localization and mapping. In this paper, we present GndNet, a novel end-to-end approach that estimates ground plane elevation in a grid-based representation and segments ground points simultaneously in real time. GndNet uses PointNet and a Pillar Feature Encoding network to extract features and regresses the ground height for each cell of the grid. We augment the SemanticKITTI dataset to train our network. We present qualitative and quantitative evaluations of our results for ground elevation estimation and semantic segmentation of point clouds. GndNet establishes a new state of the art, achieving a run time of 55 Hz for ground plane estimation and ground point segmentation.
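GndNet itself is a learned model, but the task it solves can be illustrated with a naive grid-based baseline (a hedged sketch with invented points and thresholds, not the network):

```python
def segment_ground(points, cell=1.0, height_thresh=0.3):
    """Naive grid baseline: take the per-cell minimum z as the ground
    elevation, then label points within height_thresh of it as ground.
    points is an iterable of (x, y, z) tuples."""
    ground_z = {}
    for x, y, z in points:
        key = (int(x // cell), int(y // cell))
        ground_z[key] = min(z, ground_z.get(key, float("inf")))
    labels = []
    for x, y, z in points:
        key = (int(x // cell), int(y // cell))
        labels.append(z - ground_z[key] < height_thresh)
    return ground_z, labels

# Illustrative scan: three road returns plus one point on a car roof.
pts = [(0.2, 0.3, 0.05), (0.4, 0.1, 0.10), (0.5, 0.5, 1.60),
       (1.2, 0.2, 0.08)]
elev, is_ground = segment_ground(pts)
print(is_ground)  # → [True, True, False, True]
```

A learned model like GndNet replaces the brittle min-z heuristic with features regressed per grid cell, which is what lets it cope with slopes, curbs, and sparse returns.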
|
|
12:00-12:15, Paper MoBT12.2 | |
>Intelligent Exploration and Autonomous Navigation in Confined Spaces |
> Video Attachment
|
|
Akbari, Aliakbar | Royal Holloway University of London |
Chhabra, Puneet Singh | Headlight AI Limited |
Bhandari, Ujjar | Headlight AI Limited |
Bernardini, Sara | Royal Holloway University of London |
Keywords: Autonomous Vehicle Navigation, Semantic Scene Understanding, Motion and Path Planning
Abstract: Autonomous navigation and exploration in confined spaces are currently setting new challenges for robots. The presence of narrow passages, flammable atmosphere, dust, smoke, and other hazards makes the mapping and navigation tasks extremely difficult. To tackle these challenges, robots need to make intelligent decisions, maximising information while maintaining the safety of the system and their surroundings. In this paper, we present a suite of reasoning mechanisms along with a software architecture for exploration tasks that can be used to underpin the behavior of a broad range of robots operating in confined spaces. We present an autonomous navigation module that allows the robot to safely traverse known areas of the environment and extract features of the unknown frontier regions. An exploration component, by reasoning about these frontiers, provides the robot with the ability to venture into new spaces. From low-level sensory input and contextual information, the robot incrementally builds a semantic network that represents known and unknown parts of the environment and then uses a logic-based, high-level reasoner to interrogate such a network and decide the best course of actions. We evaluate our approach against several mine-like challenging scenarios with different characteristics using a small drone. The experimental results indicate that our method allows the robot to make informed decisions on how to best explore the environment while preserving safety.
|
|
12:15-12:30, Paper MoBT12.3 | |
>Data-Driven Distributionally Robust Electric Vehicle Balancing for Mobility-On-Demand Systems under Demand and Supply Uncertainties |
|
He, Sihong | University of Connecticut |
Pepin, Lynn | University of Connecticut |
Guang, Wang | Rutgers University |
Zhang, Desheng | Rutgers University |
Miao, Fei | University of Connecticut |
Keywords: Intelligent Transportation Systems, Optimization and Optimal Control, Robust/Adaptive Control of Robotic Systems
Abstract: As electric vehicle (EV) technologies become mature, EV has been rapidly adopted in modern transportation systems, and is expected to provide future autonomous mobility-on-demand (AMoD) service with economic and societal benefits. However, EVs require frequent recharges due to their limited and unpredictable cruising ranges, and they have to be managed efficiently given the dynamic charging process. It is urgent and challenging to investigate a computationally efficient algorithm that provides EV AMoD system performance guarantees under model uncertainties, instead of using heuristic demand or charging models. To accomplish this goal, this work designs a data-driven distributionally robust optimization approach for vehicle supply-demand ratio and charging station utilization balancing, while minimizing the worst-case expected cost considering both passenger mobility demand uncertainties and EV supply uncertainties. We then derive an equivalent computationally tractable form for solving the distributionally robust problem in a computationally efficient way under ellipsoid uncertainty sets constructed from data. Based on E-taxi system data of Shenzhen city, we show that the average total balancing cost is reduced by 14.49%, the average unfairness of supply-demand ratio and utilization is reduced by 15.78% and 34.51% respectively with the distributionally robust vehicle balancing method, compared with solutions which do not consider model uncertainties.
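The kind of tractable reformulation used for ellipsoidal uncertainty sets can be illustrated on a linear cost (a standard closed form; the decision vector, nominal costs, and uncertainty matrix below are invented and far simpler than the paper's model):

```python
import math

def worst_case_cost(x, c_nominal, A):
    """Worst-case linear cost over the ellipsoidal uncertainty set
    {c_nominal + A u : ||u||_2 <= 1}. The inner maximization has the
    closed form  c_nominal^T x + ||A^T x||_2,  so the robust problem
    stays a tractable (second-order cone) program."""
    nominal = sum(ci * xi for ci, xi in zip(c_nominal, x))
    at_x = [sum(A[i][j] * x[i] for i in range(len(x)))   # A^T x
            for j in range(len(A[0]))]
    return nominal + math.sqrt(sum(v * v for v in at_x))

x = [1.0, 2.0]              # dispatch decision (illustrative)
c = [3.0, 1.0]              # nominal per-region balancing cost
A = [[0.5, 0.0],
     [0.0, 0.5]]            # uncertainty geometry fitted from data (illustrative)
print(worst_case_cost(x, c, A))
```

Minimizing this expression over `x` protects against every cost realization inside the data-driven ellipsoid at once, which is the mechanism behind the tractable distributionally robust formulation.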
|
|
12:30-12:45, Paper MoBT12.4 | |
>GP-Based Runtime Planning, Learning, and Recovery for Safe UAV Operations under Unforeseen Disturbances |
> Video Attachment
|
|
Yel, Esen | University of Virginia |
Bezzo, Nicola | University of Virginia |
Keywords: Autonomous Vehicle Navigation, Aerial Systems: Applications, Motion and Path Planning
Abstract: Autonomous vehicles are typically developed and trained to work under certain system and environmental conditions defined at design time and can fail or perform poorly if unforeseen conditions such as disturbances or changes in model dynamics appear at runtime. In this work, we present a fast online planning, learning, and recovery approach for safe autonomous operations under unknown runtime disturbances. Our approach estimates the behavior of the system with an unknown model and provides safe plans at runtime under previously unseen disturbances by leveraging Gaussian Process regression theory in which a model is continuously trained and adapted using data collected during the autonomous operation. A recovery procedure is event-triggered any time a safety constraint is violated to guarantee safety and enable learning and replanning. The proposed framework is applied and validated both in simulation and experiment on an unmanned aerial vehicle (UAV) delivery case study in which the UAV is tasked to carry an a priori unknown payload to a goal location in a cluttered/constrained environment.
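The online disturbance-learning ingredient, GP regression on data collected during operation, can be sketched as follows (a tiny RBF-kernel GP posterior mean with invented data; the paper's inputs, kernel, and recovery logic differ):

```python
import math

def rbf(a, b, length=0.5):
    """Squared-exponential (RBF) kernel."""
    return math.exp(-0.5 * ((a - b) / length) ** 2)

def solve(A, b):
    """Small dense linear solve via Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def gp_predict(xs, ys, x_query, noise=1e-6):
    """GP posterior mean at x_query given disturbance samples (xs, ys):
    mean = k(x_query, X) @ (K + noise*I)^-1 @ y."""
    K = [[rbf(a, b) + (noise if i == j else 0.0)
          for j, b in enumerate(xs)] for i, a in enumerate(xs)]
    alpha = solve(K, ys)
    return sum(a * rbf(xi, x_query) for a, xi in zip(alpha, xs))

# Disturbance (e.g. payload-induced tracking error) observed in flight:
xs = [0.0, 0.5, 1.0]         # commanded speed (illustrative input)
ys = [0.1, 0.4, 0.2]         # measured model error
print(gp_predict(xs, ys, 0.5))   # reproduces the observation near a training point
```

In the paper's framework this model would be retrained continuously from data gathered during the autonomous operation, and its predictions used to keep plans within the safety constraints.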
|
|
12:45-13:00, Paper MoBT12.5 | |
>DiversityGAN: Diversity-Aware Vehicle Motion Prediction Via Latent Semantic Sampling |
> Video Attachment
|
|
Huang, Xin | MIT |
McGill, Stephen | Toyota Research Institute |
DeCastro, Jonathan | Cornell University |
Fletcher, Luke | Toyota Research Institute |
Leonard, John | MIT |
Williams, Brian | MIT |
Rosman, Guy | Massachusetts Institute of Technology |
Keywords: Intelligent Transportation Systems, Representation Learning, Computer Vision for Transportation
Abstract: Vehicle trajectory prediction is crucial for autonomous driving and advanced driver assistant systems. While existing approaches may sample from a predicted distribution of vehicle trajectories, they lack the ability to explore it -- a key ability for evaluating safety from a planning and verification perspective. In this work, we devise a novel approach for generating realistic and diverse vehicle trajectories. We extend the generative adversarial network (GAN) framework with a low-dimensional approximate semantic space, and shape that space to capture semantics such as merging and turning. We sample from this space in a way that mimics the predicted distribution, but allows us to control coverage of semantically distinct outcomes. We validate our approach on a publicly available dataset and show results that achieve state-of-the-art prediction performance, while providing improved coverage of the space of predicted trajectory semantics.
|
|
13:00-13:15, Paper MoBT12.6 | |
>Efficient Sampling-Based Maximum Entropy Inverse Reinforcement Learning with Application to Autonomous Driving |
|
Wu, Zheng | University of California, Berkeley |
Sun, Liting | University of California, Berkeley |
Zhan, Wei | University of California, Berkeley |
Yang, Chenyu | Shanghai Jiao Tong University(SJTU) |
Tomizuka, Masayoshi | University of California |
Keywords: Intelligent Transportation Systems, Autonomous Agents, Behavior-Based Systems
Abstract: In the past decades, we have witnessed significant progress in the domain of autonomous driving. Advanced techniques based on optimization and reinforcement learning become increasingly powerful when solving the forward problem: given designed reward/cost functions, how should we optimize them to obtain driving policies that interact with the environment safely and efficiently? Such progress has raised another equally important question: what should we optimize? Instead of manually specifying the reward functions, it is desired that we can extract what human drivers try to optimize from real traffic data and assign that to autonomous vehicles to enable more naturalistic and transparent interaction between humans and intelligent agents. To address this issue, we present an efficient sampling-based maximum-entropy inverse reinforcement learning (IRL) algorithm in this paper. Different from existing IRL algorithms, by introducing an efficient continuous-domain trajectory sampler, the proposed algorithm can directly learn the reward functions in the continuous domain while considering the uncertainties in demonstrated trajectories from human drivers. We evaluate the proposed algorithm via real-world driving data, including both non-interactive and interactive scenarios. The experimental results show that the proposed algorithm achieves more accurate prediction performance with faster convergence speed and better generalization compared to other baseline IRL algorithms.
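The core of a sampling-based maximum-entropy IRL update can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a linear reward in trajectory features, with the expectation over the trajectory distribution approximated by softmax-weighting a set of sampled candidate trajectories.

```python
import numpy as np

def maxent_irl_step(theta, demo_feats, sampled_feats, lr=0.1):
    """One gradient step of sampling-based MaxEnt IRL (illustrative).

    theta         -- reward weights, shape (d,)
    demo_feats    -- features of demonstrated trajectories, shape (n, d)
    sampled_feats -- features of sampled candidate trajectories, shape (m, d)
    """
    # P(tau) is proportional to exp(theta . f(tau)); approximate the
    # expectation over trajectories using the sampled candidate set.
    scores = sampled_feats @ theta
    w = np.exp(scores - scores.max())
    w /= w.sum()
    expected = w @ sampled_feats              # E_theta[f], importance-weighted
    # MaxEnt IRL gradient: demonstrated features minus expected features.
    grad = demo_feats.mean(axis=0) - expected
    return theta + lr * grad
```

With uninformative weights, the update pushes theta toward features that demonstrations exhibit more often than the sampled trajectories do.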
|
|
MoBT13 |
Room T13 |
Autonomous Vehicles: Planning & Environment |
Regular session |
Chair: Kong, Yu | Rochester Institute of Technology |
Co-Chair: Azaria, Amos | Computer Science Department, Ariel |
|
11:45-12:00, Paper MoBT13.1 | |
>Object-Aware Centroid Voting for Monocular 3D Object Detection |
> Video Attachment
|
|
Bao, Wentao | Rochester Institute of Technology |
Yu, Qi | Rochester Institute of Technology |
Kong, Yu | Rochester Institute of Technology |
Keywords: Autonomous Vehicle Navigation, Computer Vision for Automation, Deep Learning for Visual Perception
Abstract: Monocular 3D object detection aims to detect objects in a 3D physical world from a single image. However, recent approaches either rely on expensive LiDAR devices, or resort to dense pixel-wise depth estimation that causes prohibitive computational cost. In this paper, we propose an end-to-end trainable monocular 3D object detector without learning the dense depth. Specifically, the grid coordinates of a 2D box are first projected back to 3D space with the pinhole model as 3D centroid proposals. Then, a novel object-aware voting approach is introduced, which considers both the region-wise appearance attention and the geometric projection distribution, to vote the 3D centroid proposals for 3D object localization. With the late fusion and the predicted 3D orientation and dimension, the 3D bounding boxes of objects can be detected from a single RGB image. The method is straightforward yet significantly superior to other monocular-based, and even recent LiDAR-based, methods in localizing faraway objects. Extensive experimental results on the challenging KITTI benchmark validate the effectiveness of the proposed method.
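The pinhole back-projection step the abstract describes can be sketched as follows: a pixel together with a set of hypothesized depths yields a set of 3D centroid proposals. This is a minimal illustration with hypothetical intrinsics, not the paper's code.

```python
import numpy as np

def backproject_grid(u, v, depths, fx, fy, cx, cy):
    """Back-project a pixel (u, v) into 3D centroid proposals, one per
    hypothesized depth Z, using the pinhole camera model:
        X = (u - cx) * Z / fx,   Y = (v - cy) * Z / fy.
    Returns an array of shape (len(depths), 3)."""
    Z = np.asarray(depths, dtype=float)
    X = (u - cx) * Z / fx
    Y = (v - cy) * Z / fy
    return np.stack([X, Y, Z], axis=1)
```

A pixel at the principal point back-projects onto the optical axis; off-center pixels spread laterally in proportion to depth.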
|
|
12:00-12:15, Paper MoBT13.2 | |
>Estimating Pedestrian Crossing States Based on Single 2D Body Pose |
|
Wang, Zixing | University of Minnesota |
Papanikolopoulos, Nikos | University of Minnesota |
Keywords: Intelligent Transportation Systems, Computer Vision for Transportation
Abstract: The Crossing or Not-Crossing (C/NC) problem is important to autonomous vehicles (AVs) for safe vehicle/pedestrian interactions. However, this problem setup often ignores pedestrians walking along the direction of the vehicles' movement (LONG). To enhance the AVs' awareness of pedestrian behavior, we take the first step towards extending the C/NC problem to the C/NC/LONG problem and recognize these states from a single body pose. In contrast, previous C/NC state classifiers depend on multiple poses or contextual information. Our proposed shallow neural network classifier aims to recognize these three states swiftly. We tested it on the JAAD dataset and report an average 81.23% accuracy. Furthermore, this model can be integrated with different sensors and algorithms that provide 2D pedestrian body pose so that it is able to function across multiple lighting and weather conditions.
|
|
12:15-12:30, Paper MoBT13.3 | |
>SSP: Single Shot Future Trajectory Prediction |
|
Dwivedi, Isht | Honda Research Institute USA |
Malla, Srikanth | Honda Research Institute |
Dariush, Behzad | Honda Research Institute USA |
Choi, Chiho | Honda Research Institute |
Keywords: Intelligent Transportation Systems, Deep Learning for Visual Perception, Computer Vision for Transportation
Abstract: We propose a robust solution to future trajectory forecast, which can be practically applicable to autonomous agents in highly crowded environments. For this, three aspects are particularly addressed in this paper. First, we use composite fields to predict future locations of all road agents in a single shot, which results in a constant time complexity, regardless of the number of agents in the scene. Second, interactions between agents are modeled as a non-local response, enabling spatial relationships between different locations to be captured temporally as well (i.e., in spatio-temporal interactions). Third, the semantic context of the scene is modeled to take into account the environmental constraints that potentially influence future motion. Finally, we validate the robustness of the proposed approach using the ETH, UCY, and SDD datasets and highlight its practical functionality compared to the current state-of-the-art methods.
|
|
12:30-12:45, Paper MoBT13.4 | |
>Probabilistic Crowd GAN: Multimodal Pedestrian Trajectory Prediction Using a Graph Vehicle-Pedestrian Attention Network |
> Video Attachment
|
|
Eiffert, Stuart | The University of Sydney: The Australian Centre for Field Roboti |
Li, Kunming | University of Sydney |
Shan, Mao | The University of Sydney |
Worrall, Stewart | University of Sydney |
Sukkarieh, Salah | The University of Sydney: The Australian Centre for Field Roboti |
Nebot, Eduardo | University of Sydney |
Keywords: Intelligent Transportation Systems, Social Human-Robot Interaction, Autonomous Vehicle Navigation
Abstract: Understanding and predicting the intention of pedestrians is essential to enable autonomous vehicles and mobile robots to navigate crowds. This problem becomes increasingly complex when we consider the uncertainty and multimodality of pedestrian motion, as well as the implicit interactions between members of a crowd, including any response to a vehicle. Our approach, Probabilistic Crowd GAN, extends recent work in trajectory prediction, combining Recurrent Neural Networks (RNNs) with Mixture Density Networks (MDNs) to output probabilistic multimodal predictions, from which likely modal paths are found and used for adversarial training. We also propose the use of Graph Vehicle-Pedestrian Attention Network (GVAT), which models social interactions and allows input of a shared vehicle feature, showing that inclusion of this module leads to improved trajectory prediction both with and without the presence of a vehicle. Through evaluation on various datasets we demonstrate improvements on existing state of the art methods for trajectory prediction and illustrate how the true multimodal and uncertain nature of crowd interactions can be directly modelled.
|
|
12:45-13:00, Paper MoBT13.5 | |
>Model-Based Reinforcement Learning for Time-Optimal Velocity Control |
|
Hartmann, Gabriel | Ariel University |
Shiller, Zvi | Ariel University |
Azaria, Amos | Computer Science Department, Ariel |
Keywords: Autonomous Vehicle Navigation, Reinforcement Learning, Motion and Path Planning
Abstract: Autonomous navigation has recently gained great interest in the field of reinforcement learning. However, little attention has been given to the time-optimal velocity control problem, i.e. controlling a vehicle such that it travels at the maximal speed without becoming dynamically unstable (roll-over or sliding). Time-optimal velocity control can be solved numerically using existing methods that are based on optimal control and vehicle dynamics. In this paper, we develop a model-based deep reinforcement learning approach to generate the time-optimal velocity control. Moreover, we introduce a method that uses a numerical solution to predict whether the vehicle may become unstable and intervenes if needed. We show that our combined model outperforms several baselines as it achieves higher velocities (with only one minute of training) and does not encounter any failures during the training process.
|
|
13:00-13:15, Paper MoBT13.6 | |
>Learning Hierarchical Behavior and Motion Planning for Autonomous Driving |
> Video Attachment
|
|
Wang, Jingke | Zhejiang University |
Wang, Yue | Zhejiang University |
Zhang, Dongkun | Zhejiang University |
Yang, Yezhou | Arizona State University |
Xiong, Rong | Zhejiang University |
Keywords: Autonomous Vehicle Navigation, Reinforcement Learning
Abstract: Learning-based driving solutions, a new branch of autonomous driving, are expected to simplify the modeling of driving by learning the underlying mechanisms from data. To improve the tactical decision-making of learning-based driving solutions, we introduce hierarchical behavior and motion planning (HBMP) to explicitly model the behavior in the learning-based solution. Due to the coupled action space of behavior and motion, it is challenging to solve the HBMP problem using reinforcement learning (RL) for long-horizon driving tasks. We transform the HBMP problem by integrating a classical sampling-based motion planner, whose optimal cost is regarded as the reward for high-level behavior learning. As a result, this formulation reduces the action space and diversifies the rewards without losing the optimality of HBMP. In addition, we propose a sharable representation for input sensory data across simulation platforms and the real-world environment, so that models trained in a fast event-based simulator, SUMO, can be used to initialize and accelerate the RL training in a dynamics-based simulator, CARLA. Experimental results demonstrate the effectiveness of the method. Besides, the model is successfully transferred to the real world, validating the generalization capability.
|
|
MoBT14 |
Room T14 |
Autonomous Vehicles: Safety & Systems |
Regular session |
Chair: Berman, Spring | Arizona State University |
Co-Chair: Zhao, Ding | Carnegie Mellon University |
|
11:45-12:00, Paper MoBT14.1 | |
>Learning to Collide: An Adaptive Safety-Critical Scenarios Generating Method |
> Video Attachment
|
|
Ding, Wenhao | Carnegie Mellon University |
Chen, Baiming | Tsinghua University |
Xu, Minjun | Carnegie Mellon University |
Zhao, Ding | Carnegie Mellon University |
Keywords: Autonomous Vehicle Navigation, Reinforcement Learning, Semantic Scene Understanding
Abstract: Long-tail and rare event problems become crucial when autonomous driving algorithms are applied in the real world. For the purpose of evaluating systems in challenging settings, we propose a generative framework to create safety-critical scenarios for evaluating specific task algorithms. We first represent the traffic scenarios with a series of autoregressive building blocks and generate diverse scenarios by sampling from the joint distribution of these blocks. We then train the generative model as an agent (or a generator) to search the risky scenario parameters for a given driving algorithm. We treat the driving algorithm as an environment that returns high reward to the agent when a risky scenario is generated. The whole process is optimized by the policy gradient reinforcement learning method. Through the experiments conducted on several scenarios in the simulation, we demonstrate that the proposed framework generates safety-critical scenarios more efficiently than grid search or human design methods. Another advantage of this method is its adaptiveness to the routes and parameters.
|
|
12:00-12:15, Paper MoBT14.2 | |
>Synchrono: A Scalable, Physics-Based Simulation Platform for Testing Groups of Autonomous Vehicles And/or Robots |
|
Taves, Jay | University of Wisconsin–Madison |
Elmquist, Asher | University of Wisconsin-Madison |
Young, Aaron | University of Wisconsin–Madison |
Serban, Radu | University of Wisconsin - Madison |
Negrut, Dan | University of Wisconsin |
Keywords: Autonomous Vehicle Navigation, Autonomous Agents, Automation Technologies for Smart Cities
Abstract: This contribution is concerned with the topic of using simulation to understand the behavior of groups of mutually interacting autonomous vehicles (AVs) or robots engaged in traffic/maneuvers that involve coordinated operation. We outline the structure of a multi-agent simulator called SynChrono and provide results pertaining to its scalability and ability to run real-time scenarios with humans in the loop. SynChrono is a scalable multi-agent, high-fidelity environment whose purpose is that of testing AV and robot control strategies. Four main components make up the core of the simulation platform: a physics-based dynamics engine that can simulate rigid and compliant systems, fluid-solid interactions, and deformable terrains; a module that provides sensing simulation; an agent-to-agent communication server; dynamic virtual worlds, which host the interacting agents operating in a coordinated scenario. The platform provides a virtual proving ground that can be used to answer questions such as "what will an AV do when it skids on a patch of ice and moves one way while facing the other way?"; "is a new agent-control strategy robust enough to handle unforeseen circumstances?"; and "what is the effect of a loss of communication between agents engaged in a coordinated maneuver?". Full videos based on work in the paper are available at https://tinyurl.com/ChronoIROS2020 and a description of the particular software version used is available at https://github.com/uwsbel/publications-data/tree/master/2020/IROS.
|
|
12:15-12:30, Paper MoBT14.3 | |
>Output Only Fault Detection and Mitigation of Networks of Autonomous Vehicles |
> Video Attachment
|
|
Khalil, Abdelrahman | Memorial University of Newfoundland |
Al Janaideh, Mohammad | Memorial University &University of Toronto |
Aljanaideh, Khaled | Jordan University of Science and Technology |
Kundur, Deepa | University of Toronto |
Keywords: Autonomous Vehicle Navigation
Abstract: An autonomous vehicle platoon is a network of autonomous vehicles that communicate together to move in a desired way. One of the greatest threats to the operation of an autonomous vehicle platoon is the failure of either a physical component of a vehicle or a communication link between two vehicles. This failure affects the safety and stability of the autonomous vehicle platoon. Transmissibility-based health monitoring uses available sensor measurements for fault detection under unknown excitation and unknown dynamics of the network. After a fault is detected, a sliding mode controller is used to mitigate the fault. Different fault scenarios are considered including vehicle internal disturbances, cyber attacks, and communication delays. We apply the proposed approach to a bond graph model of the platoon and an experimental setup consisting of three autonomous robots.
|
|
12:30-12:45, Paper MoBT14.4 | |
>Go-CHART: A Miniature Remotely Accessible Self-Driving Car Robot |
> Video Attachment
|
|
Kannapiran, Shenbagaraj | Arizona State University |
Berman, Spring | Arizona State University |
Keywords: Intelligent Transportation Systems, Distributed Robot Systems, Education Robotics
Abstract: The Go-CHART is a four-wheel, skid-steer robot that resembles a 1:28 scale standard commercial sedan. It is equipped with an onboard sensor suite and both onboard and external computers that replicate many of the sensing and computation capabilities of a full-size autonomous vehicle. The Go-CHART can autonomously navigate a small-scale traffic testbed, responding to its sensor input with programmed controllers. Alternatively, it can be remotely driven by a user who views the testbed through the robot's four camera feeds, which facilitates safe, controlled experiments on driver interactions with driverless vehicles. We demonstrate the Go-CHART's ability to perform lane tracking and detection of traffic signs, traffic signals, and other Go-CHARTs in real-time, utilizing an external GPU that runs computationally intensive computer vision and deep learning algorithms.
|
|
MoBT15 |
Room T15 |
Autonomous Vehicles: Sensors |
Regular session |
Chair: Urtasun, Raquel | University of Toronto |
Co-Chair: Bonnabel, Silvere | Mines ParisTech |
|
11:45-12:00, Paper MoBT15.1 | |
>An RLS-Based Instantaneous Velocity Estimator for Extended Radar Tracking |
> Video Attachment
|
|
Gosala, Nikhil Bharadwaj | ETH Zürich |
Meng, Xiaoli | APTIV AM |
Keywords: Intelligent Transportation Systems, Autonomous Vehicle Navigation, Range Sensing
Abstract: Radar sensors have become an important part of the perception sensor suite due to their long range and their ability to work in adverse weather conditions. However, several shortcomings such as large amounts of noise and extreme sparsity of the point cloud result in them not being used to their full potential. In this paper, we present a novel Recursive Least Squares (RLS) based approach to estimate the instantaneous velocity of dynamic objects in real-time that is capable of handling large amounts of noise in the input data stream. We also present an end-to-end pipeline to track extended objects in real-time that uses the computed velocity estimates for data association and track initialisation. The approaches are evaluated using several real-world inspired driving scenarios that test the limits of these algorithms. It is also experimentally proven that our approaches run in real-time with frame execution time not exceeding 30 ms even in dense traffic scenarios, thus allowing for their direct implementation on autonomous vehicles.
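A recursive least squares velocity estimator of the kind the abstract describes can be sketched as follows. This is a minimal illustration under an assumed measurement model (each radar detection reports a radial Doppler velocity v_r = vx*cos(az) + vy*sin(az) at azimuth az), not the authors' implementation.

```python
import numpy as np

class RLSVelocity:
    """Recursive least squares estimate of a target's 2D velocity from
    radar Doppler returns (illustrative). lam is the forgetting factor
    that down-weights older measurements."""

    def __init__(self, lam=0.98):
        self.lam = lam
        self.v = np.zeros(2)          # current estimate [vx, vy]
        self.P = np.eye(2) * 1e3      # inverse-information (gain) matrix

    def update(self, azimuth, v_radial):
        # Measurement model: v_radial = h . v with h = [cos(az), sin(az)].
        h = np.array([np.cos(azimuth), np.sin(azimuth)])
        k = self.P @ h / (self.lam + h @ self.P @ h)   # RLS gain
        self.v = self.v + k * (v_radial - h @ self.v)  # innovation update
        self.P = (self.P - np.outer(k, h @ self.P)) / self.lam
        return self.v
```

Each detection on the same object contributes one scalar constraint; detections at two or more distinct azimuths make the full 2D velocity observable.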
|
|
12:00-12:15, Paper MoBT15.2 | |
>Lidar Essential Beam Model for Accurate Width Estimation of Thin Poles |
> Video Attachment
|
|
Long, Yunfei | Michigan State University |
Morris, Daniel | Michigan State University |
Keywords: Computer Vision for Transportation, Computer Vision for Automation, Range Sensing
Abstract: While Lidar beams are often represented as rays, they actually have finite beam width and this width impacts the measured shape and size of objects in the scene. Here we investigate the effects of beam width on measurements of thin objects such as vertical poles. We propose a model for beam divergence and show how this can explain both object dilation and erosion. We develop a calibration method to estimate beam divergence angle. This calibration method uses one or more vertical poles observed from a Lidar on a moving platform. In addition, we derive an incremental method for using the calibrated beam angle to obtain accurate estimates of thin object diameters, observed from a Lidar on a moving platform. Our method achieves significantly more accurate diameter estimates than is obtained when beam divergence is ignored.
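The dilation effect the abstract describes admits a simple first-order sketch: a beam with divergence angle phi illuminates roughly an extra R*phi of width at range R, so the measured extent of a thin pole overestimates its true diameter. The correction below is an illustrative model, not the paper's derivation.

```python
def corrected_diameter(measured_width, range_m, divergence_rad):
    """First-order diameter correction for a thin vertical pole
    (illustrative): subtract the beam-footprint dilation R * phi from
    the measured width, clamped at zero for very thin/far objects."""
    return max(measured_width - range_m * divergence_rad, 0.0)
```

For example, a 0.10 m pole at 10 m with 3 mrad divergence appears roughly 0.13 m wide before correction.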
|
|
12:15-12:30, Paper MoBT15.3 | |
>MVLidarNet: Real-Time Multi-Class Scene Understanding for Autonomous Driving Using Multiple Views |
> Video Attachment
|
|
Chen, Ke | Nvidia |
Smolyanskiy, Nikolai | NVIDIA |
Oldja, Ryan | NVIDIA |
Birchfield, Stan | NVIDIA Corporation |
Popov, Alexander (Sasha) | CSE, UMN |
Wehr, David | NVIDIA |
Eden, Ibrahim | NVIDIA |
Pehserl, Joachim | Microsoft |
Keywords: Autonomous Vehicle Navigation, Computer Vision for Transportation, Intelligent Transportation Systems
Abstract: Autonomous driving requires the inference of actionable information such as detecting and classifying objects, and determining the drivable space. To this end, we present Multi-View LidarNet (MVLidarNet), a two-stage deep neural network for multi-class object detection and drivable space segmentation using multiple views of a single LiDAR point cloud. The first stage processes the point cloud projected onto a perspective view in order to semantically segment the scene. The second stage then processes the point cloud (along with semantic labels from the first stage) projected onto a bird's eye view, to detect and classify objects. Both stages use an encoder-decoder architecture. We show that our multi-view, multi-stage, multi-class approach is able to detect and classify objects while simultaneously determining the drivable space using a single LiDAR scan as input, in challenging scenes with more than one hundred vehicles and pedestrians at a time. The system operates efficiently at 150 fps on an embedded GPU designed for a self-driving car, including a postprocessing step to maintain identities over time. We show results on both KITTI and a much larger internal dataset, thus demonstrating the method's ability to scale by an order of magnitude.
|
|
12:30-12:45, Paper MoBT15.4 | |
>The Importance of Prior Knowledge in Precise Multimodal Prediction |
> Video Attachment
|
|
Casas Romero, Sergio | Uber ATG, University of Toronto |
Gulino, Cole | Uber ATG |
Suo, Simon | University of Toronto |
Urtasun, Raquel | University of Toronto |
Keywords: Autonomous Vehicle Navigation, Deep Learning for Visual Perception, Robot Safety
Abstract: Roads have well defined geometries, topologies, and traffic rules. While this has been widely exploited in motion planning methods to produce maneuvers that obey the law, little work has been devoted to utilize these priors in perception and motion forecasting methods. In this paper we propose to incorporate these structured priors as a loss function. In contrast to imposing hard constraints, this approach allows the model to handle non-compliant maneuvers when those happen in the real world. Safe motion planning is the end goal, and thus a probabilistic characterization of the possible future developments of the scene is key to choose the plan with the lowest expected cost. Towards this goal, we design a framework that leverages REINFORCE to incorporate non-differentiable priors over sample trajectories from a probabilistic model, thus optimizing the whole distribution. We demonstrate the effectiveness of our approach on real-world self-driving datasets containing complex road topologies and multi-agent interactions. Our motion forecasts not only exhibit better precision and map understanding, but most importantly result in safer motion plans taken by our self-driving vehicle. We emphasize that despite the importance of this evaluation, it has been often overlooked by previous perception and motion forecasting works.
|
|
12:45-13:00, Paper MoBT15.5 | |
>Simultaneous Estimation of Vehicle Position and Data Delays Using Gaussian Process Based Moving Horizon Estimation |
|
Mori, Daiki | Toyota Central R&D Labs. Inc |
Hattori, Yoshikazu | Toyota Central Research and Development Laboratories, Inc |
Keywords: Autonomous Vehicle Navigation, Localization, Sensor Fusion
Abstract: Automobiles or robots with recent advanced autonomous systems are equipped with multiple types of sensors to overcome different weather and geographical conditions. These sensors generally have various data delays and sampling rates. Additionally, the communication delays or time synchronization errors between the onboard computers significantly affect the robustness and accuracy of localization for autonomous vehicles. In this paper, the simultaneous estimation of vehicle position and sensor delays using a Gaussian process based moving horizon estimation (GP-MHE) is presented. The GP-MHE can estimate the unknown delays of multiple sensors with a resolution finer than the GP-MHE sampling rate. The localization performance of GP-MHE was confirmed using a full-vehicle simulator, then evaluated in a real vehicle experiment on a highway scenario. Experimental results verified localization accuracy below 0.3 m using data with irregular sampling rates and delays of more than 150 ms. The proposed algorithm extends the capability of integrating various data with large unknown delays for vehicles, robots, drones and remote autonomy.
|
|
13:00-13:15, Paper MoBT15.6 | |
>A Real-Time Unscented Kalman Filter on Manifolds for Challenging AUV Navigation |
|
Cantelobre, Theophile | Mines ParisTech |
Chahbazian, Clément | Schlumberger-Doll Research |
Croux, Arnaud | Schlumberger-Doll Research |
Bonnabel, Silvere | Mines ParisTech |
Keywords: Autonomous Vehicle Navigation, Marine Robotics, Sensor Fusion
Abstract: We consider the problem of localization and navigation of Autonomous Underwater Vehicles (AUV) in the context of high performance subsea asset inspection missions in deep water. We propose a solution based on the recently introduced Unscented Kalman Filter on Manifolds (UKF-M) for onboard navigation to estimate the robot's location, attitude and velocity, using a precise round and rotating Earth navigation model. Our algorithm has the merit of seamlessly handling nonlinearity of attitude, and is far simpler to implement than the extended Kalman filter (EKF), which is state of the art in the navigation industry. The unscented transform notably spares the user the computation of Jacobians and lends itself well to fast prototyping in the context of multi-sensor data fusion. Besides, we provide the community with feedback about implementation, and execution time is shown to be compatible with real-time. Realistic extensive Monte-Carlo simulations show that the filter estimates uncertainty accurately and illustrate its convergence ability. Real experiments in the context of a 900 m deep dive near Marseille (France) illustrate the relevance of the method.
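The Jacobian-free propagation that the abstract highlights can be sketched with the standard (Euclidean) unscented transform; the manifold version adapts the same idea to rotation states. A minimal sketch, not the paper's UKF-M implementation:

```python
import numpy as np

def unscented_transform(f, mean, cov, kappa=0.0):
    """Propagate (mean, cov) through a nonlinear function f without
    Jacobians, via the standard sigma-point construction (illustrative
    Euclidean sketch of the transform behind UKF-type filters)."""
    n = mean.size
    # Columns of the scaled Cholesky factor give the sigma-point spread.
    S = np.linalg.cholesky((n + kappa) * cov)
    sigmas = [mean] + [mean + S[:, i] for i in range(n)] \
                    + [mean - S[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    ys = np.array([f(s) for s in sigmas])
    y_mean = w @ ys
    diff = ys - y_mean
    y_cov = (w[:, None] * diff).T @ diff
    return y_mean, y_cov
```

For a linear function the transform reproduces the exact propagated mean and covariance, which is a convenient sanity check when prototyping.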
|
|
MoBT16 |
Room T16 |
Perception for Autonomous Driving |
Regular session |
Chair: Xiang, Zhiyu | Zhejiang University |
|
11:45-12:00, Paper MoBT16.1 | |
>DSSF-Net: Dual-Task Segmentation and Self-Supervised Fitting Network for End-To-End Lane Mark Detection |
|
Du, Wentao | Zhejiang University |
Xiang, Zhiyu | Zhejiang University |
Chen, Yiman | Zhejiang University |
Chen, Shuya | Zhejiang University |
Keywords: Computer Vision for Transportation, Deep Learning for Visual Perception, AI-Based Methods
Abstract: Lane mark detection is one of the key tasks for autonomous driving systems. Accurate detection of lane marks under complex urban environments remains a challenge. In this paper, an end-to-end lane mark detection network named DSSF-net, which is capable of directly outputting the accurate fitted lane curves, is proposed. First, a dual-task segmentation framework for jointly predicting lane category and spatial partition is presented. An IoU-based loss function is put forward to tackle the severely imbalanced category distribution problem. Then a fully self-supervised curve fitting network is proposed to directly output the parameters of the lane line from the probability map. To achieve better accuracy, the fitting network is trained with two sub-stages: coarse regression and confidence-based optimization. Finally, the entire DSSF-net is implemented end-to-end. Comprehensive experiments conducted on the challenging CULane dataset show that our model achieves 74.9% in F1-score and outperforms the state-of-the-art models.
|
|
12:00-12:15, Paper MoBT16.2 | |
>Lane Marking Verification for High Definition Map Maintenance Using Crowdsourced Images |
|
Li, Binbin | Texas A&M University |
Song, Dezhen | Texas A&M University |
Kingery, Aaron | Texas A&M University |
Zheng, Dongfang | Tencent |
Xu, Yiliang | Tencent America |
Guo, Huiwen | Tencent America |
Keywords: Computer Vision for Transportation, Mapping, Visual-Based Navigation
Abstract: Autonomous vehicles often rely on high-definition (HD) maps to navigate around. However, lane markings (LMs) are not necessarily static objects due to wear & tear from usage and road reconstruction & maintenance. Therefore, the wrong matching between LMs in the HD map and sensor readings may lead to erroneous localization or even cause traffic accidents. It is imperative to keep LMs up-to-date. However, frequently recollecting data to update HD maps is cost-prohibitive. Here we propose to utilize crowdsourced images from multiple vehicles at different times to help verify LMs for HD map maintenance. We obtain the LM distribution in the image space by considering the camera pose uncertainty in perspective projection. Both LMs in HD map and LMs in the image are treated as observations of LM distributions which allow us to construct posterior conditional distribution (a.k.a Bayesian belief functions) of LMs from either sources. An LM is consistent if belief functions from the map and the image satisfy statistical hypothesis testing. We further extend the Bayesian belief model into a sequential belief update using crowdsourced images. LMs with a higher probability of existence are kept in the HD map whereas those with a lower probability of existence are removed from the HD map. We verify our approach using real data. Experimental results show that our method is capable of verifying and updating LMs in the HD map.
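The sequential belief update the abstract describes can be sketched as a recursive Bayes filter on a lane marking's existence probability. The detection and false-alarm probabilities below are illustrative assumptions, not values from the paper.

```python
def update_existence(prior, observed, p_det=0.9, p_fa=0.1):
    """One Bayesian update of a lane marking's existence probability
    from a single crowdsourced image (illustrative sensor model:
    p_det = P(detected | exists), p_fa = P(detected | absent))."""
    if observed:
        num = p_det * prior
        den = p_det * prior + p_fa * (1.0 - prior)
    else:
        num = (1.0 - p_det) * prior
        den = (1.0 - p_det) * prior + (1.0 - p_fa) * (1.0 - prior)
    return num / den
```

Repeated detections drive the probability toward one and repeated misses toward zero, matching the keep/remove decision rule described for HD map maintenance.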
|
|
12:15-12:30, Paper MoBT16.3 | |
>Toward Hierarchical Self-Supervised Monocular Absolute Depth Estimation for Autonomous Driving Applications |
> Video Attachment
|
|
Xue, Feng | Tongji University, Shanghai |
Zhuo, Guirong | Tongji University, Shanghai |
Huang, Ziyuan | National University of Singapore |
Fu, Wufei | Tongji University |
Wu, Zhuoyue | Tongji University |
Ang Jr, Marcelo H | National University of Singapore |
Keywords: Computer Vision for Transportation, Deep Learning for Visual Perception
Abstract: In recent years, self-supervised methods for monocular depth estimation have rapidly become a significant branch of the depth estimation task, especially for autonomous driving applications. Despite the high overall precision achieved, current methods still suffer from a) imprecise object-level depth inference and b) an uncertain scale factor. The former problem would cause texture copy or inaccurate object boundaries, and the latter would require current methods to have an additional sensor like LiDAR to provide depth ground-truth or a stereo camera as additional training input, which makes them difficult to implement. In this work, we propose to address these two problems together by introducing DNet. Our contributions are twofold: a) a novel dense connected prediction (DCP) layer is proposed to provide better object-level depth estimation and b) specifically for autonomous driving scenarios, dense geometrical constraints (DGC) are introduced so that a precise scale factor can be recovered without additional cost for autonomous vehicles. Extensive experiments have been conducted, and both the DCP layer and the DGC module are shown to effectively solve the aforementioned problems. Thanks to the DCP layer, object boundaries can now be better distinguished in the depth map and the depth is more continuous at the object level. It is also demonstrated that the performance of using DGC to perform scale recovery is comparable to that of using ground-truth information, when the camera height is given and the ground points take up more than 1.03% of the pixels. Code is available at https://github.com/TJ-IPLab/DNet.
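The scale-recovery idea behind using a known camera height can be sketched as follows: ground pixels back-project to a relative height Y = (v - cy) * Z / fy below the camera, and the ratio of the true camera height to the median relative height gives the metric scale. A minimal sketch with hypothetical intrinsics, not the DGC module itself.

```python
import numpy as np

def recover_scale(vs, rel_depths, fy, cy, cam_height_m):
    """Recover a metric scale factor from ground pixels (illustrative).

    vs          -- image rows of pixels classified as ground
    rel_depths  -- the network's (up-to-scale) depths at those pixels
    fy, cy      -- assumed camera intrinsics
    cam_height_m -- known camera height above the ground plane
    """
    # Relative height of each ground point below the camera.
    y_rel = (np.asarray(vs, dtype=float) - cy) * np.asarray(rel_depths) / fy
    # Median is robust to mislabeled ground pixels.
    return cam_height_m / np.median(y_rel)
```

Multiplying the network's relative depth map by the returned factor yields metric depth without any extra sensor at test time.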
|
|
12:30-12:45, Paper MoBT16.4 | |
>Label Efficient Visual Abstractions for Autonomous Driving |
> Video Attachment
|
|
Behl, Aseem | MPI Tübingen |
Chitta, Kashyap | Max Planck Institute for Intelligent Systems |
Prakash, Aditya | Max Planck Institute for Intelligent Systems |
Ohn-Bar, Eshed | Max Planck Institute |
Geiger, Andreas | Max Planck Institute for Intelligent Systems, Tübingen |
Keywords: Computer Vision for Transportation, Autonomous Vehicle Navigation, Imitation Learning
Abstract: It is well known that semantic segmentation can be used as an effective intermediate representation for learning driving policies. However, the task of street scene semantic segmentation requires expensive annotations. Furthermore, segmentation algorithms are often trained irrespective of the actual driving task, using auxiliary image-space loss functions which are not guaranteed to maximize driving metrics such as distance traveled per intervention or safety. In this work, we seek to quantify the impact of reducing segmentation annotation costs on learned behavior cloning agents. We analyze several segmentation-based intermediate representations. We use these visual abstractions to systematically study the trade-off between annotation efficiency and driving performance, i.e., the types of classes labeled, the number of image samples used to learn the visual abstraction model, and their granularity (e.g., object masks vs. 2D bounding boxes). Our analysis uncovers several practical insights into how segmentation-based abstractions can be exploited in a more label efficient manner. Surprisingly, we find that state-of-the-art driving performance can be achieved with orders of magnitude reduction in annotation cost. Beyond label efficiency, we find several additional training benefits when leveraging visual abstractions, such as a significant reduction in the variance of the learned policy when compared to state-of-the-art end-to-end driving models.
|
|
12:45-13:00, Paper MoBT16.5 | |
>Learning Accurate and Human-Like Driving Using Semantic Maps and Attention |
|
Hecker, Simon | ETH Zurich |
Dai, Dengxin | ETH Zurich |
Liniger, Alexander | ETH Zurich |
Hahner, Martin | ETH Zurich |
Van Gool, Luc | ETH Zurich |
Keywords: Computer Vision for Transportation, Big Data in Robotics and Automation, Learning from Demonstration
Abstract: This paper investigates how end-to-end driving models can be improved to drive more accurately and human-like. To tackle the first issue we exploit semantic and visual maps from HERE Technologies and augment the existing Drive360 dataset with such. The maps are used in an attention mechanism that promotes segmentation confidence masks, thus focusing the network on semantic classes in the image that are important for the current driving situation. Human-like driving is achieved using adversarial learning, by not only minimizing the imitation loss with respect to the human driver but by further defining a discriminator, that forces the driving model to produce action sequences that are human-like. Our models are trained and evaluated on the Drive360 + HERE dataset, which features 60 hours and 3000 km of real-world driving data. Extensive experiments show that our driving models are more accurate and behave more human-like than previous methods.
|
|
13:00-13:15, Paper MoBT16.6 | |
>IDDA: A Large-Scale Multi-Domain Dataset for Autonomous Driving |
|
Alberti, Emanuele | Politecnico Di Torino |
Tavera, Antonio | Politecnico Di Torino |
Masone, Carlo | Max Planck Institute for Biological Cybernetics |
Caputo, Barbara | Sapienza University |
Keywords: Semantic Scene Understanding, Deep Learning for Visual Perception, Computer Vision for Transportation
Abstract: Semantic segmentation is key in autonomous driving. Using deep visual learning architectures is not trivial in this context, because of the challenges in creating suitable large-scale annotated datasets. This issue has traditionally been circumvented through the use of synthetic datasets, which have become a popular resource in this field. Their use, however, requires semantic segmentation algorithms able to close the visual domain shift between the training and test data. Although exacerbated by the use of artificial data, the problem is extremely relevant in this field even when training on real data. Indeed, weather conditions, viewpoint changes and variations in city appearance can vary considerably from car to car, and even at test time for a single, specific vehicle. How to deal with domain adaptation in semantic segmentation, and how to leverage effectively several different data distributions (source domains), are important research questions in this field. To support work in this direction, this paper contributes a new large-scale, synthetic dataset for semantic segmentation with more than 100 different source visual domains. The dataset has been created to explicitly address the challenges of domain shift between training and test data in various weather and viewpoint conditions, in seven different city types. Extensive benchmark experiments assess the dataset, showcasing open challenges for the current state of the art. The dataset will be available at: https://idda-dataset.github.io/home/.
|
|
MoBT17 |
Room T17 |
Planning for Autonomous Vehicles I |
Regular session |
Chair: Haddon, David | CSIRO |
Co-Chair: Jiang, Jingjing | Loughborough University |
|
11:45-12:00, Paper MoBT17.1 | |
>PaintPath: Defining Path Directionality in Maps for Autonomous Ground Vehicles |
|
Bowyer, Riley | CSIRO |
Lowe, Tom | CSIRO |
Borges, Paulo Vinicius Koerich | CSIRO |
Bandyopadhyay, Tirthankar | CSIRO |
Löw, Tobias | ETH Zürich |
Haddon, David | CSIRO |
Keywords: Field Robots, Autonomous Vehicle Navigation, Motion and Path Planning
Abstract: Directionality in path planning is essential for efficient autonomous navigation in a number of real-world environments. In many map-based navigation scenarios, the viable path from a given point A to point B is not the same as the viable path from B to A. We present a method that automatically incorporates preferred navigation directionality into a path planning costmap. This ‘preference’ is represented by coloured paths in the costmap. The colourisation is obtained based on an analysis of the driving trajectory generated by the robot as it navigates through the environment. Hence, our method augments this driving trajectory by intelligently colouring it according to the orientation of the robot during the run. Creating an analogy between the vehicle orientation angle and the hue angle in the Hue-Saturation-Value colour space, the method uses the hue, saturation and value components to encode the direction, directionality and scalar cost, respectively, into a costmap image. We describe how we modify the A* algorithm to incorporate this information to plan direction-aware vehicle paths. Our experiments with LiDAR-based localisation and autonomous driving in real environments illustrate the applicability of the method.
|
|
12:00-12:15, Paper MoBT17.2 | |
>Probabilistic Multi-Modal Trajectory Prediction with Lane Attention for Autonomous Vehicles |
|
Luo, Chenxu | Johns Hopkins University |
Sun, Lin | HKUST, Stanford, Samsung |
Dabiri, Dariush | Samsung Electronics |
Yuille, Alan | Johns Hopkins University |
Keywords: Autonomous Vehicle Navigation, Autonomous Agents, Intelligent Transportation Systems
Abstract: Trajectory prediction is crucial for autonomous vehicles. The planning system not only needs to know the current state of the surrounding objects but also their possible states in the future. As for vehicles, their trajectories are significantly influenced by the lane geometry, and how to effectively use the lane information is of active interest. Most of the existing works use rasterized maps to explore road information, which does not distinguish different lanes. In this paper, we propose a novel instance-aware lane representation. By integrating the lane features and trajectory features, a goal-oriented lane attention module is proposed to predict the future locations of the vehicle. We show that the proposed lane representation together with the lane attention module can be integrated into the widely used encoder-decoder framework to generate diverse predictions. Most importantly, each generated trajectory is associated with a probability to handle the uncertainty. Our method does not suffer from collapsing to one behavioral mode and can cover diverse possibilities. Extensive experiments and ablation studies on the benchmark datasets corroborate the effectiveness of our proposed method. Notably, our proposed method ranks third place in the Argoverse motion forecasting competition at NeurIPS 2019.
|
|
12:15-12:30, Paper MoBT17.3 | |
>Safe Planning for Self-Driving Via Adaptive Constrained ILQR |
> Video Attachment
|
|
Pan, Yanjun | Carnegie Mellon University |
Lin, Qin | Carnegie Mellon University |
Shah, Het | Indian Institute of Technology Kharagpur |
Dolan, John M. | Carnegie Mellon University |
Keywords: Motion and Path Planning, Collision Avoidance
Abstract: Constrained Iterative Linear Quadratic Regulator (CILQR), a variant of ILQR, has been recently proposed for motion planning problems of autonomous vehicles to deal with constraints such as obstacle avoidance and reference tracking. However, the previous work considers either deterministic trajectories or persistent prediction for target dynamical obstacles. The other drawback is lack of generality - it requires manual weight tuning for different scenarios. In this paper, two significant improvements are achieved. Firstly, a two-stage uncertainty-aware prediction is proposed. The short-term prediction with safety guarantee based on reachability analysis is responsible for dealing with extreme maneuvers conducted by target vehicles. The long-term prediction leveraging an adaptive least square filter preserves the long-term optimality of the planned trajectory since using reachability only for long-term prediction is too pessimistic and makes the planner over-conservative. Secondly, to allow a wider coverage over different scenarios and to avoid tedious parameter tuning case by case, this paper designs a scenario-based analytical function taking the states from the ego vehicle and the target vehicle as input, and carrying weights of a cost function as output. It allows the ego vehicle to execute multiple behaviors (such as lane-keeping and overtaking) under a single planner. We demonstrate safety, effectiveness, and real-time performance of the proposed planner in simulations.
|
|
12:30-12:45, Paper MoBT17.4 | |
>Automatic Lane Change Maneuver in Dynamic Environment Using Model Predictive Control Method |
|
Li, Zhaolun | Loughborough University |
Jiang, Jingjing | Loughborough University |
Chen, Wen-Hua | Loughborough University |
Keywords: Motion and Path Planning, Autonomous Vehicle Navigation
Abstract: The lane change maneuver is one of the typical maneuvers in various driving situations; therefore the automatic lane change function is one of the key functions for autonomous vehicles. Much research has been conducted in this field. Most existing work focuses on solutions for a static environment and assumes that the surrounding vehicles are running at constant speeds. However, in reality, if not all the vehicles on the road are fully autonomous, the situation can be much more complicated and the ego vehicle has to deal with a dynamic environment. This paper proposes a Model Predictive Control (MPC)-based method to achieve automatic lane change in a dynamic environment. A two-wheel dynamic bicycle model, which combines the longitudinal and lateral motion of the ego vehicle, together with a utility function, which helps to automatically determine the target lane, is used in the algorithm. The simulation results demonstrate the capability of the proposed algorithm in a dynamic environment.
|
|
12:45-13:00, Paper MoBT17.5 | |
>Real-Time Optimal Control of an Autonomous RC Car with Minimum-Time Maneuvers and a Novel Kineto-Dynamical Model |
> Video Attachment
|
|
Pagot, Edoardo | University of Trento |
Piccinini, Mattia | University of Trento |
Biral, Francesco | University of Trento |
Keywords: Motion and Path Planning, Optimization and Optimal Control, Autonomous Vehicle Navigation
Abstract: In this paper, we present a real-time non-linear model-predictive control (NMPC) framework to perform minimum-time motion planning for autonomous racing cars. We introduce an innovative kineto-dynamical vehicle model, able to accurately predict non-linear longitudinal and lateral vehicle dynamics. The main parameters of this vehicle model can be tuned with only experimental or simulated maneuvers, aimed to identify the handling diagram and the maximum performance G-G envelope. The kineto-dynamical model is adopted to generate on-line minimum time trajectories with an indirect optimal control method. The motion planning framework is applied to control an autonomous 1:8 RC vehicle near the limits of handling along a test circuit. Finally, the effectiveness of the proposed algorithms is illustrated by comparing the experimental results with the solution of an off-line minimum-time optimal control problem.
|
|
MoBT18 |
Room T18 |
Planning for Autonomous Vehicles II |
Regular session |
Chair: Liu, Lantao | Indiana University |
Co-Chair: Bopardikar, Shaunak D. | Michigan State University |
|
11:45-12:00, Paper MoBT18.1 | |
>Optimization-Based Hierarchical Motion Planning for Autonomous Racing |
> Video Attachment
|
|
Vazquez, Jose | ETH Zürich |
Bruehlmeier, Marius | ETH Zürich |
Liniger, Alexander | ETH Zurich |
Rupenyan, Alisa | ETH Zürich |
Lygeros, John | ETH Zurich |
Keywords: Motion and Path Planning, Optimization and Optimal Control
Abstract: In this paper we propose a hierarchical controller for autonomous racing where the same vehicle model is used in a two level optimization framework for motion planning. The high-level controller computes a trajectory that minimizes the lap time, and the low-level nonlinear model predictive path following controller tracks the computed trajectory online. Following a computed optimal trajectory avoids online planning and enables fast computational times. The efficiency is further enhanced by the coupling of the two levels through a terminal constraint, computed in the high-level controller. Including this constraint in the real-time optimization level ensures that the prediction horizon can be shortened, while safety is guaranteed. This proves crucial for the experimental validation of the approach on a full size driverless race car. The vehicle in question won two international student racing competitions using the proposed framework; moreover, our hierarchical controller achieved an improvement of 20% in the lap time compared to the state of the art result achieved using a very similar car and track.
|
|
12:00-12:15, Paper MoBT18.2 | |
>Secure Route Planning Using Dynamic Games with Stopping States |
> Video Attachment
|
|
Banik, Sandeep | Michigan State University |
Bopardikar, Shaunak D. | Michigan State University |
Keywords: Motion and Path Planning, Autonomous Vehicle Navigation, Intelligent Transportation Systems
Abstract: This paper studies a motion planning problem over a roadmap in which a vehicle aims to travel from a start to a destination in presence of an attacker who can launch a cyber-attack on the vehicle over any one edge of the roadmap. The vehicle (defender) has the capability to switch on/off a countermeasure that can detect and permanently disable the attack if it occurs concurrently. We first model the problem of traversing an edge as a zero-sum dynamic game with a stopping state, termed as an edge-game played between an attacker and defender. We characterize Nash equilibria of the edge-game and provide closed form expressions for the case of two actions per player. We further provide an analytic and approximate expression on the value of an edge-game and characterize conditions under which it grows sub-linearly with the length of the edge. We study the sensitivity of Nash equilibrium to the (i) cost of using the countermeasure, (ii) cost of motion and (iii) benefit of disabling the attack. The solution of the edge-game is used to formulate and solve the secure route planning problem. We design an efficient heuristic by converting the problem to a shortest path problem using the edge cost as the solution of corresponding edge-games. We illustrate our findings through several insightful simulations.
|
|
12:15-12:30, Paper MoBT18.3 | |
>Online Planning in Uncertain and Dynamic Environment in the Presence of Multiple Mobile Vehicles
|
Xu, Junhong | Indiana University |
Yin, Kai | HomeAway |
Liu, Lantao | Indiana University |
Keywords: Motion and Path Planning, Autonomous Vehicle Navigation
Abstract: We investigate the autonomous navigation of a mobile robot in the presence of other moving vehicles under time-varying uncertain environmental disturbances. We first predict the future state distributions of other vehicles to account for their uncertain behaviors affected by the time-varying disturbances. We then construct a dynamic-obstacle-aware reachable space that contains states with high probabilities to be reached by the robot, within which the optimal policy is searched. Since, in general, the dynamics of both the vehicle and the environmental disturbances are nonlinear, we utilize a nonlinear Gaussian filter -- the unscented transform -- to approximate the future state distributions. Finally, the forward reachable space computation and backward policy search are iterated until convergence. Our simulation evaluations have revealed significant advantages of this proposed method in terms of computation time, decision accuracy, and planning reliability.
|
|
12:30-12:45, Paper MoBT18.4 | |
>Minimum Time - Minimum Jerk Optimal Traffic Management for AGVs |
> Video Attachment
|
|
Frego, Marco | University of Trento |
Bevilacqua, Paolo | University of Trento |
Divan, Stefano | University of Trento |
Zenatti, Fabiano | University of Trento |
Palopoli, Luigi | University of Trento |
Biral, Francesco | University of Trento |
Fontanelli, Daniele | University of Trento |
Keywords: Optimization and Optimal Control, Collision Avoidance, Motion and Path Planning
Abstract: A combined minimum time - minimum jerk traffic management system for vehicle coordination in an automated warehouse is presented. The algorithm is organised in two steps: in the first, a simple minimum time optimisation problem is solved; in the second, this time-optimal solution is refined into a smooth minimum jerk plan for the autonomous forklifts, in order to avoid impulsive forces that may unbalance the vehicle. For the first step, we propose a novel approach based on Linear Programming, which guarantees convergence to the optimal solution starting from a feasible point, and a low computational overhead, which makes it suitable for real-time applications. The output of this step is a piecewise constant velocity profile for all the moving robots that ensures collision avoidance. The second step takes such a speed profile and generates its smoothed version, which minimises the jerk while respecting the same levels of safety as the solution generated by the first step. We discuss the different solutions with simulation and experimental data.
|
|
12:45-13:00, Paper MoBT18.5 | |
>Non-Gaussian Chance-Constrained Trajectory Planning for Autonomous Vehicles under Agent Uncertainty |
|
Wang, Allen | Massachusetts Institute of Technology |
M. Jasour, Ashkan | MIT |
Williams, Brian | MIT |
Keywords: Motion and Path Planning, Probability and Statistical Methods, Intelligent Transportation Systems
Abstract: Agent behavior is arguably the greatest source of uncertainty in trajectory planning for autonomous vehicles. This problem has motivated significant amounts of work in the behavior prediction community on learning rich distributions of the future states and actions of agents. However, most current works on chance-constrained trajectory planning under agent or obstacle uncertainty either assume Gaussian uncertainty or linear constraints, which is limiting, or require sampling, which can be computationally intractable to encode in an optimization problem. In this paper, we extend the state of the art by presenting a methodology to upper-bound chance constraints defined by polynomials and mixture models with potentially non-Gaussian components. Our method achieves its generality by using statistical moments of the distributions in concentration inequalities to upper-bound the probability of constraint violation. With this method, optimization-based trajectory planners can plan trajectories that are chance-constrained with respect to a wide range of distributions representing predictions of agent future positions. In experiments, we show that the resulting optimization problem can be solved with state-of-the-art nonlinear program solvers to plan trajectories fast enough for use online.
|
|
MoCT1 |
Room T1 |
Agricultural Automation |
Regular session |
Chair: Williams, Ryan | Virginia Polytechnic Institute and State University |
Co-Chair: Stachniss, Cyrill | University of Bonn |
|
14:00-14:15, Paper MoCT1.1 | |
>Segmentation-Based 4D Registration of Plants Point Clouds for Phenotyping |
|
Magistri, Federico | University of Bonn |
Chebrolu, Nived | University of Bonn |
Stachniss, Cyrill | University of Bonn |
Keywords: Robotics in Agriculture and Forestry, Computer Vision for Other Robotic Applications, Mapping
Abstract: Plant phenotyping, i.e., the task of measuring plant traits to describe the anatomy and physiology of plants, is a central task in crop science and plant breeding. Standard methods require intrusive and time-consuming operations involving a lot of manual labor. Cameras and range sensors paired with 3D reconstruction methods can support phenotyping, but the task yields several challenges. In this paper, we address the problem of finding correspondences between plants recorded at different points in time in order to track phenotyping traits in an autonomous fashion. Our approach makes use of successive learning stages to compute a minimal representation of plant point clouds encoding both topology and semantic information. In this way, we are able to tackle the data association problem for 4D point cloud data of plants. We tested our approach on different 3D+time sequences of plant point clouds of different plant species. The experiments presented in this paper suggest that our 4D matching approach allows for non-rigid registration of the plants. Moreover, we show that our method allows for tracking different phenotyping traits at the organ level, forming a basis for automated temporal phenotyping.
|
|
14:15-14:30, Paper MoCT1.2 | |
>Incorporating Spatial Constraints into a Bayesian Tracking Framework for Improved Localisation in Agricultural Environments |
|
Khan, Muhammad Waqas | University of Lincoln |
Das, Gautham | University of Lincoln |
Hanheide, Marc | University of Lincoln |
Cielniak, Grzegorz | University of Lincoln |
Keywords: Robotics in Agriculture and Forestry, Localization, Probability and Statistical Methods
Abstract: Global navigation satellite system (GNSS) positioning has been considered a panacea for positioning and tracking over the last decade. However, it suffers from severe limitations in terms of accuracy, particularly in highly cluttered and indoor environments. Though real-time kinematics (RTK)-supported GNSS promises extremely accurate localisation, employing such services is expensive, fails in occluded environments and is unavailable in areas where cellular base stations are not accessible. It is, therefore, necessary that the GNSS data be filtered if high accuracy is required. Thus, this article presents a GNSS-based particle filter that exploits the spatial constraints imposed by the environment. In the proposed setup, the state prediction of the sample set follows a restricted motion according to the topological map of the environment. This results in the transition of the samples being confined between specific discrete points, called topological nodes, defined by a topological map. This is followed by a refinement stage where the full set of predicted samples goes through weighting and resampling, where the weight is proportional to the predicted particle's proximity to the GNSS measurement. Thus, a discrete-space, continuous-time Bayesian filter is proposed, called the Topological Particle Filter (TPF). The proposed TPF is put to the test by localising and tracking fruit pickers inside polytunnels. Fruit pickers inside polytunnels can only follow specific paths according to the topology of the tunnel. These paths are defined in the topological map of the polytunnels and are fed to the TPF to track fruit pickers. Extensive datasets are collected to demonstrate the improved discrete tracking of strawberry pickers inside polytunnels, thanks to the exploitation of the environmental constraints.
|
|
14:30-14:45, Paper MoCT1.3 | |
>Learning Continuous Object Representations from Point Cloud Data |
|
Nelson, Henry | CSE, UMN |
Papanikolopoulos, Nikos | University of Minnesota |
Keywords: Agricultural Automation, Object Detection, Segmentation and Categorization
Abstract: Continuous representations of objects have always been used in robotics in the form of geometric primitives and surface models. Recently, learning techniques have emerged which allow more complex continuous representations to be learned from data, but these learning techniques require training data in the form of watertight meshes which restricts their application as meshes of this form are difficult to obtain from real data. This paper proposes a modification to existing methods that allows real world point cloud data to be used for training these surface representations allowing the techniques to be used in broader applications. The modification is evaluated on ModelNet10 to quantify the difference between the existing and the proposed methods as well as on a novel precision agriculture dataset that has been released publicly to show the modification’s applicability to new areas. The proposed method enables obtaining training data from real world sensors that produce point clouds rather than requiring an expensive meshing step which may not be possible for some applications. This opens the possibility of using techniques like this for complex shapes in areas like grasping and agricultural data collection.
|
|
14:45-15:00, Paper MoCT1.4 | |
>Solving Large-Scale Stochastic Orienteering Problems with Aggregation |
|
Thayer, Thomas C. | University of California, Merced |
Carpin, Stefano | University of California, Merced |
Keywords: Planning, Scheduling and Coordination, Agricultural Automation
Abstract: In this paper we consider the stochastic cost orienteering problem, i.e., a version of the classic orienteering problem where the cost associated with each edge is a random variable with known distribution. Such a model is relevant when travel costs are variable, e.g., when a robot moves in uncertain terrain conditions. We model this problem using a composite state space tracking both how much progress the robot has made towards the goal and how much time it has left. On top of this state space, we compute a time-aware policy that allows the robot to dynamically adjust its path and avoid missing the temporal deadline. This policy is determined using a Constrained Markov Decision Process that allows tuning the accepted failure probability upfront. This approach suffers from a significant growth in the composite state space, and to mitigate this problem we introduce an aggregation technique where nearby vertices are compounded together, effectively reducing the original routing problem to an instance with a smaller state space. We then analyze this approach over large scale problem instances associated with robotic irrigation on a commercial grade vineyard.
|
|
15:00-15:15, Paper MoCT1.5 | |
>DIAT (Depth-Infrared Image Annotation Transfer) for Training a Depth-Based Pig-Pose Detector |
> Video Attachment
|
|
Yik, Steven | Michigan State University |
Benjamin, Madonna | Michigan State University |
Lavagnino, Michael | Michigan State University |
Morris, Daniel | Michigan State University |
Keywords: Agricultural Automation, Novel Deep Learning Methods, Computer Vision for Automation
Abstract: Precision livestock farming uses artificial intelligence to individually monitor livestock activity and health. Tracking individuals over time can reveal health indicators that correlate with productivity and longevity. For instance, locomotion patterns observed in lame pigs have been shown to correlate with poor animal welfare and productivity. Kinematic analysis of pigs using pose estimates provides a means of assessing locomotion. New dense depth sensors have the potential to achieve full 3D pose estimation and tracking. However, the lack of annotated dense depth datasets has limited the use of these sensors in detecting animal pose. Current annotation methods rely on human labeling, but identifying hip and shoulder locations is difficult for pigs with few prominent features, and is especially difficult in depth images as these lack albedo texture. This work proposes a solution to quickly generate high-accuracy pig landmark annotations for depth-based pose estimation. We propose Depth-Infrared Annotation Transfer (DIAT), an approach that semi-automatically finds, identifies, and tracks marks visible in infrared, and transfers these labels to depth images. As a result, we are able to train a precise pig pose detector that operates on depth images.
|
|
15:15-15:30, Paper MoCT1.6 | |
>Data-Driven Models with Expert Influence: A Hybrid Approach to Spatiotemporal Process Estimation |
|
Liu, Jun | Virginia Tech |
Williams, Ryan | Virginia Polytechnic Institute and State University |
Keywords: Agricultural Automation, Robotics in Agriculture and Forestry, Optimization and Optimal Control
Abstract: In this paper, our motivating application lies in precision agriculture where accurate modeling of forage is essential for informing rotational grazing strategies. Unfortunately, a major difficulty arises in modeling forage processes as they evolve on large scales according to complex ecological influences. As robots can collect data over large scales in a forage environment, they act as a promising resource for the forage modeling problem when combined with a data-driven Gaussian processes (GPs) technique. However, GPs are non-parametric in nature and may be blind to certain nuances of a process that a parameterized expert model may predict well. Indeed, for the forage modeling problem specifically, there exist several highly parameterized models from agricultural experts that exhibit powerful predictive capabilities. Expert models, however, often come with two shortcomings: (1) parameters may be difficult to determine in general; and (2) the model may not make complete spatiotemporal predictions. For example, a stochastic differential equation (SDE) that models the dynamics of the average output of an environment may be available from experts (a typical case). In such cases, we propose to take advantage of both data-driven (GPs) and expert (SDE) models, by fusing data collected by robots, which often yields spatial insight, with models from experienced professionals that often yield temporal insights. Specifically, we propose to leverage Bayesian inference to combine these two methods, resulting in a posterior prediction that is a hybrid of data-driven and expert models. Finally, we provide simulations to demonstrate the effectiveness of the proposed method.
|
|
MoCT2 |
Room T2 |
Environment Monitoring |
Regular session |
Chair: Triebel, Rudolph | German Aerospace Center (DLR) |
Co-Chair: Kovac, Mirko | Imperial College London |
|
14:00-14:15, Paper MoCT2.1 | |
>Robust MUSIC-Based Sound Source Localization in Reverberant and Echoic Environments |
> Video Attachment
|
|
Sewtz, Marco | Deutsches Zentrum Für Luft Und Raumfahrt E.V |
Bodenmueller, Tim | German Aerospace Center (DLR) |
Triebel, Rudolph | German Aerospace Center (DLR) |
Keywords: Robot Audition, Service Robots, Environment Monitoring and Management
Abstract: Intuitive human-robot interfaces like speech or gesture recognition are essential for gaining acceptance for robots in daily life. However, such interaction requires that the robot detects the human’s intention to interact, tracks their position, and keeps its sensor systems in an optimal configuration. Audio is a suitable modality for such a task, as it allows for detecting a speaker in arbitrary positions around the robot. In this paper, we present a novel approach for localization of sound sources by analyzing the frequency spectrum of the received signal and applying a motion model to the estimation process. We use an improved version of the Generalized Singular Value Decomposition (GSVD) based MUltiple SIgnal Classification (MUSIC) algorithm as a direction of arrival (DoA) estimator. Further, we introduce a motion model to enable robust localization in reverberant and echoic environments. We evaluate the system under real conditions in an experimental setup. Our experiments show that our approach outperforms a current state-of-the-art algorithm and demonstrates robustness against the previously mentioned disruptive factors.
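The DoA estimation core can be illustrated with plain narrowband MUSIC on a uniform linear array: estimate the spatial covariance, split off the noise subspace, and scan steering vectors for peaks in the pseudo-spectrum. This is the textbook algorithm, not the paper's GSVD-based variant or its motion model, and the array geometry (half-wavelength spacing) is a hypothetical choice.

```python
import numpy as np

def music_spectrum(X, n_sources, n_angles=181, d=0.5):
    """MUSIC pseudo-spectrum for a uniform linear array.

    X: (n_mics, n_snapshots) complex snapshots at one frequency bin.
    d: microphone spacing in wavelengths (hypothetical half-wavelength).
    """
    n_mics = X.shape[0]
    R = X @ X.conj().T / X.shape[1]          # spatial covariance estimate
    eigvals, eigvecs = np.linalg.eigh(R)     # eigenvalues in ascending order
    En = eigvecs[:, : n_mics - n_sources]    # noise subspace
    angles = np.linspace(-90.0, 90.0, n_angles)
    spectrum = np.empty(n_angles)
    for i, theta in enumerate(angles):
        a = np.exp(-2j * np.pi * d * np.arange(n_mics)
                   * np.sin(np.deg2rad(theta)))   # steering vector
        spectrum[i] = 1.0 / np.linalg.norm(En.conj().T @ a) ** 2
    return angles, spectrum
```

Peaks of the returned spectrum mark candidate source directions; the GSVD-based version replaces the plain eigendecomposition to improve robustness against correlated noise.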
|
|
14:15-14:30, Paper MoCT2.2 | |
>OceanVoy: A Hybrid Energy Planning System for Autonomous Sailboat |
> Video Attachment
|
|
Sun, Qinbo | The Chinese University of Hong Kong, Shenzhen |
Qi, Weimin | The Chinese University of Hong Kong, Shenzhen |
Liu, Hengli | Peng Cheng Laboratory, Shenzhen |
Sun, Zhenglong | Chinese University of Hong Kong, Shenzhen |
Lam, Tin Lun | The Chinese University of Hong Kong, Shenzhen |
Qian, Huihuan | Shenzhen Institute of Artificial Intelligence and Robotics for S |
Keywords: Energy and Environment-Aware Automation, Field Robots, Marine Robotics
Abstract: For long-range, high-endurance sailing, energy is of utmost importance. Sailboats, propelled mainly by wind, are inherently energy-saving and environment-friendly, which makes energy planning for them especially worthwhile. Until now, however, the sailboat energy optimization problem has rarely been considered. In this paper, we focus on the energy consumption optimization of an autonomous sailboat, formulated as a Non-linear Programming (NLP) problem. We address it with a hybrid control scheme, in which a pseudo-spectral (PS) optimal control method is used for heading control, and a model-free framework guided by Extremum Seeking Control (ESC) is used for sail control. The optimal path is generated with the optimal input motor torques in time series. Both simulations and experiments validate the motion planning and energy planning performance; notably, about 7% of energy is saved on average. The proposed method enables sailboats to sail longer and more sustainably.
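The extremum-seeking idea behind the sail controller, perturb an input, correlate the measured objective with the perturbation, and climb the estimated gradient, can be shown on a toy objective. This is generic perturbation-based ESC, not the paper's sail controller, and every constant below is hypothetical.

```python
import math

def extremum_seek(J, u0=0.0, a=0.2, w=5.0, k=1.0, dt=0.005, steps=20000):
    """Classic perturbation-based extremum seeking.

    The demodulated signal J(u_hat + a*sin(w*t)) * sin(w*t) is, averaged
    over a dither period, proportional to dJ/du, so integrating it
    performs gradient ascent on J without a model.
    """
    u_hat = u0
    for i in range(steps):
        t = i * dt
        s = math.sin(w * t)                       # dither signal
        u_hat += k * J(u_hat + a * s) * s * dt    # averaged gradient ascent
    return u_hat
```

For a quadratic objective with its maximum at u = 2 (think of an optimal sail angle for the current wind), the estimate converges to the optimum without ever differentiating J.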
|
|
14:30-14:45, Paper MoCT2.3 | |
>LAVAPilot: Lightweight UAV Trajectory Planner with Situational Awareness for Embedded Autonomy to Track and Locate Radio-Tags |
> Video Attachment
|
|
Nguyen, Hoa Van | The University of Adelaide |
Chen, Fei | The University of Adelaide |
Chesser, Joshua | The University of Adelaide |
Rezatofighi, S. Hamid | The University of Adelaide |
Ranasinghe, Damith | The University of Adelaide |
Keywords: Field Robots, Range Sensing, Environment Monitoring and Management
Abstract: Tracking and locating radio-tagged wildlife is a labor-intensive and time-consuming task necessary in wildlife conservation. In this article, we focus on the problem of achieving embedded autonomy on a resource-limited aerial robot for this task, while avoiding undesirable disturbances to wildlife. We employ a lightweight sensor system capable of simultaneous (noisy) measurements of radio signal strength information from multiple tags for estimating object locations. We formulate a new lightweight task-based trajectory planning method, LAVAPilot, with a greedy evaluation strategy and a void functional formulation to achieve situational awareness and maintain a safe distance from objects of interest. Conceptually, we embed our intuition of moving closer to reduce the uncertainty of measurements into LAVAPilot instead of employing a computationally intensive information-gain-based planning strategy. We employ LAVAPilot and the sensor to build a lightweight aerial robot platform with fully embedded autonomy for joint planning and tracking to locate multiple VHF radio collar tags used by conservation biologists. Using extensive Monte Carlo simulation-based experiments, implementations on a single-board compute module, and field experiments using an aerial robot platform with multiple VHF radio collar tags, we evaluate our joint planning and tracking algorithms. Further, we compare our method with other information-based planning methods with and without situational awareness to demonstrate the effectiveness of our robot executing LAVAPilot. Our experiments demonstrate that LAVAPilot significantly reduces (by 98.5%) the computational cost of planning, enabling real-time planning decisions, whilst achieving localization accuracy of objects similar to information-gain-based planning methods, albeit taking a slightly longer time to complete a mission.
To support research in the field and in conservation biology, we also open-source the complete project. In particular, to the best of our knowledge, this is the first demonstration of a fully autonomous aerial robot system in which trajectory planning and tracking to survey and locate multiple radio-tagged objects are achieved onboard.
|
|
14:45-15:00, Paper MoCT2.4 | |
>Coordinate-Free Isoline Tracking in Unknown 2-D Scalar Fields |
|
Dong, Fei | Tsinghua University |
You, Keyou | Tsinghua University |
Wang, Jian | Tsinghua Univ |
Keywords: Autonomous Vehicle Navigation, Whole-Body Motion Planning and Control, Environment Monitoring and Management
Abstract: This work is concerned with the control design for a sensing robot to track a given isoline of an unknown 2-D scalar field. To this end, we propose a coordinate-free controller with a simple PI-like form, using only concentration feedback, for a Dubins robot, which is particularly useful in GPS-denied environments. The key idea lies in the novel design of a sliding-surface-based error term in the standard PI controller. Interestingly, we also prove that the tracking error can be reduced by increasing the proportional gain, and eliminated for circular fields with a non-zero integral gain. The effectiveness of our controller is validated via simulations using a fixed-wing UAV on a real dataset of the concentration distribution of PM2.5 in an area of China.
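The flavor of such a controller can be sketched as a PI law acting on a sliding-surface error sigma = de/dt + lambda*e, built from concentration feedback alone, steering a constant-speed Dubins robot. The radial field, the surface definition, and all gains below are hypothetical; the paper's actual controller and analysis differ.

```python
import math

def track_isoline(c_ref=90.0, v=1.0, kp=0.2, ki=0.02, lam=0.2,
                  dt=0.01, steps=60000):
    """Dubins robot tracking the c_ref isoline of the hypothetical field
    c(x, y) = 100 - sqrt(x^2 + y^2), whose c_ref = 90 isoline is the
    circle of radius 10. Only concentration measurements are used."""
    x, y, th = 11.0, 0.0, math.pi / 2      # start just outside the isoline
    z, e_prev = 0.0, None
    for _ in range(steps):
        c = 100.0 - math.hypot(x, y)       # concentration measurement
        e = c_ref - c                      # tracking error (e > 0: outside)
        e_dot = 0.0 if e_prev is None else (e - e_prev) / dt
        sigma = e_dot + lam * e            # sliding-surface error term
        z += sigma * dt                    # integral state
        th += (kp * sigma + ki * z) * dt   # PI heading-rate command
        x += v * math.cos(th) * dt
        y += v * math.sin(th) * dt
        e_prev = e
    return abs(e)                          # final concentration error
```

The derivative term in sigma supplies the damping that a plain PI on e lacks, and the integral term settles at the constant turn rate v/r needed to circulate on the isoline, which is why the error can vanish for circular fields.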
|
|
15:15-15:30, Paper MoCT2.7 | |
>MEDUSA: A Multi-Environment Dual-Robot for Underwater Sample Acquisition |
> Video Attachment
|
|
Debruyn, Diego | Imperial College London |
Zufferey, Raphael | Imperial College of London |
Armanini, Sophie Franziska | Imperial College London |
Winston, Crystal | Imperial College London |
Farinha, Andre | Imperial College |
Jin, Yufei | Imperial College London |
Kovac, Mirko | Imperial College London |
Keywords: Environment Monitoring and Management, Aerial Systems: Applications, Marine Robotics
Abstract: Aerial-aquatic robots possess the unique ability of operating in both air and water. However, this capability comes with tremendous challenges, such as communication incompatibility, increased airborne mass, potentially inefficient operation in each of the environments and manufacturing difficulties. Such robots, therefore, typically have small payloads and a limited operational envelope, often making their field usage impractical. We propose a novel robotic water sampling approach that combines the robust technologies of multirotors and underwater micro-vehicles into a single integrated tool usable for field operations. The proposed solution encompasses a multirotor capable of landing and floating on the water, and a tethered mobile underwater pod that can be deployed to depths of several meters. The pod is controlled remotely in three dimensions and transmits video feed and sensor data via the floating multirotor back to the user. The 'dual-robot' approach considerably simplifies robotic underwater monitoring, while also taking advantage of the fact that multirotors can travel long distances, fly over obstacles, carry payloads and manoeuvre through difficult terrain, while submersible robots are ideal for underwater sampling or manipulation. The presented system can perform challenging tasks which would otherwise require boats or submarines. The ability to collect aquatic images, samples and metrics will be invaluable for ecology and aquatic research, supporting our understanding of local climate in difficult-to-access environments.
|
|
MoCT3 |
Room T3 |
Field Robots |
Regular session |
Chair: Agha-mohammadi, Ali-akbar | NASA-JPL, Caltech |
Co-Chair: Detweiler, Carrick | University of Nebraska-Lincoln |
|
14:00-14:15, Paper MoCT3.1 | |
>Efficient Trajectory Library Filtering for Quadrotor Flight in Unknown Environments |
> Video Attachment
|
|
Viswanathan, Vaibhav | Carnegie Mellon University |
Dexheimer, Eric | Carnegie Mellon University |
Li, Guanrui | New York University |
Loianno, Giuseppe | New York University |
Kaess, Michael | Carnegie Mellon University |
Scherer, Sebastian | Carnegie Mellon University |
Keywords: Field Robots, Aerial Systems: Perception and Autonomy, Perception-Action Coupling
Abstract: Quadrotor flight in unknown environments is challenging due to the limited range of perception sensors, state estimation drift, and limited onboard computation. In this work, we tackle these challenges by proposing an efficient, reactive planning approach. We introduce the Bitwise Trajectory Elimination (BiTE) algorithm for efficiently filtering out in-collision trajectories from a trajectory library by using bitwise operations. Then, we outline a full planning approach for quadrotor flight in unknown environments. This approach is evaluated extensively in simulation and shown to require up to 90% less computation than comparable approaches. Finally, we validate our planner in over 120 minutes of flights in forest-like and urban subterranean environments.
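The bitwise-filtering idea can be sketched with Python integers as bitmasks: each library trajectory's swept grid cells are precomputed into one mask, so checking the whole library against the current obstacle map costs a single AND per trajectory. This is a simplified 2-D stand-in for BiTE; the grid size, helper names, and trajectories are hypothetical.

```python
GRID = 16  # hypothetical 16x16 local occupancy grid

def mask_from_cells(cells):
    """Precompute a bitmask of the grid cells a trajectory sweeps through."""
    m = 0
    for x, y in cells:
        m |= 1 << (y * GRID + x)
    return m

def filter_library(traj_masks, obstacle_mask):
    """One AND per trajectory eliminates every in-collision candidate."""
    return [i for i, m in enumerate(traj_masks) if m & obstacle_mask == 0]
```

Because the masks are precomputed offline, the runtime cost is independent of trajectory length, which is where the large computational savings come from.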
|
|
14:15-14:30, Paper MoCT3.2 | |
>Autonomous Spot: Long-Range Autonomous Exploration of Extreme Environments with Legged Locomotion |
> Video Attachment
|
|
Bouman, Amanda | Caltech |
Ginting, Muhammad Fadhil | Jet Propulsion Laboratory |
Alatur, Nikhilesh | ETH Zurich |
Palieri, Matteo | Polytechnic University of Bari |
Fan, David D | Georgia Institute of Technology |
Kim, Sung-Kyun | NASA Jet Propulsion Laboratory, Caltech |
Touma, Thomas | Caltech |
Pailevanian, Torkom | Jet Propulsion Laboratory |
Otsu, Kyohei | California Institute of Technology |
Burdick, Joel | California Institute of Technology |
Agha-mohammadi, Ali-akbar | NASA-JPL, Caltech |
Keywords: Field Robots, Autonomous Vehicle Navigation, Robotics in Hazardous Fields
Abstract: This paper serves as one of the first efforts to enable large-scale and long-duration autonomy using the Boston Dynamics Spot robot. Motivated by exploring extreme environments, particularly those involved in the DARPA Subterranean Challenge, this paper pushes the boundaries of the state-of-practice in enabling legged robotic systems to accomplish real-world complex missions in relevant scenarios. In particular, we discuss the behaviors and capabilities which emerge from the integration of the autonomy architecture NeBula (Networked Belief-aware Perceptual Autonomy) with next-generation mobility systems. We will discuss the hardware and software challenges, and solutions in mobility, perception, autonomy, and very briefly, wireless networking, as well as lessons learned and future directions. We demonstrate the performance of the proposed solutions on physical systems in real-world scenarios. The proposed solution contributed to winning 1st-place in the 2020 DARPA Subterranean Challenge, Urban Circuit.
|
|
14:30-14:45, Paper MoCT3.3 | |
>Towards In-Flight Transfer of Payloads between Multirotors |
> Video Attachment
|
|
Shankar, Ajay | University of Nebraska-Lincoln |
Elbaum, Sebastian | University of Virginia |
Detweiler, Carrick | University of Nebraska-Lincoln |
Keywords: Field Robots, Aerial Systems: Applications, Visual Servoing
Abstract: Multirotor unmanned aerial systems (UASs) are often used to transport a variety of payloads. However, the maximum time that the cargo can remain airborne is limited by the flight endurance of the UAS. In this paper, we present a novel approach for two multirotors to transfer a payload between them in-air, while keeping the payload aloft and stationary. Our framework is built on a visual-feedback and grasping pipeline that enables one UAS to grasp the payload held by another, thereby allowing the UASs to act as swappable carriers. By connecting the payload outwards along a single rigid link, and allowing the UASs to maneuver about it, we let the payload remain online while it is transferred to a different carrier. Furthermore, building entirely on monocular vision, the approach does not rely on precise extrinsic localization systems. We demonstrate our proposed strategy in a variety of indoor and GPS-free outdoor experiments, and explore the range of operating limits for our system.
|
|
14:45-15:00, Paper MoCT3.4 | |
>Improvement in Measurement Area of 3D LiDAR for a Mobile Robot Using a Mirror Mounted on a Manipulator |
|
Matsubara, Kazuki | Tohoku University |
Nagatani, Keiji | The University of Tokyo |
Hirata, Yasuhisa | Tohoku University |
Keywords: Field Robots
Abstract: Light Detection and Ranging (LiDAR) is widely employed in mobile robots to acquire environmental information. However, it has a limited laser irradiation direction and cannot measure the backside of an object. In this study, a method is developed that expands the LiDAR measurement range in various directions using a mirror installed on a manipulator mounted on a mobile robot. As mirrors can easily be mounted on robots, this method is expected to have a wide range of applications. This paper also proposes a method for determining the mirror position and attitude to expand the measurement area and obtain target data. In addition, we conducted an accuracy evaluation test of the reflection acquisition point. Using the proposed method, we demonstrate the measurement of the shape of a descending staircase as an example of a potential application.
|
|
15:00-15:15, Paper MoCT3.5 | |
>Wide Area Exploration System Using Passive-Follower Robots Towed by Multiple Winches |
|
Salazar Luces, Jose Victorio | Tohoku University |
Hoshi, Manami | Tohoku University |
Hirata, Yasuhisa | Tohoku University |
Keywords: Field Robots, Motion Control, Multi-Robot Systems
Abstract: In this study, we propose a wide area exploration system that consists of passive wheeled robots equipped with exploration sensors that are pulled from a high position with wires fed out from two winches. The robots are driven by the pulling force from the winches, and they are able to steer by controlling brakes attached to their wheels. By adjusting the wire length, the passive-follower robot is pulled within the exploration area, and it controls the braking torque of the wheels to follow a desired trajectory based on its current position. This system has the advantage that it is effective for ground exploration, does not require advanced calibration, and can be installed quickly. In this paper, we first explain the outline of the proposed system. Then, we introduce the hardware design of the developed winches and passive-follower robot. Next, the control methods of the winch unit and the passive-follower robots are described. Here, we introduce the feasible braking control region for motion analysis and control of the passive-follower robot. Finally, we apply these control methods to the proposed system and report the results of verification experiments. We describe the feasible range of a follower robot, which changes depending on the position of the winches. We conducted an outdoor experiment and confirmed the effectiveness of this system by evaluating the trajectories of the passive-follower robot.
|
|
15:15-15:30, Paper MoCT3.6 | |
>End-To-End Velocity Estimation for Autonomous Racing |
|
Srinivasan, Sirish | ETH Zürich |
Sa, Inkyu | CSIRO |
Zyner, Alex | The University of Sydney |
Reijgwart, Victor | ETH Zurich |
de la Iglesia Valls, Miguel | ETH Zürich |
Siegwart, Roland | ETH Zurich |
Keywords: Field Robots, Autonomous Vehicle Navigation, Sensor Fusion
Abstract: Velocity estimation plays a central role in driverless vehicles, but standard and affordable methods struggle to cope with extreme scenarios like aggressive maneuvers due to the presence of high sideslip. To solve this, autonomous race cars are usually equipped with expensive external velocity sensors. In this paper, we present an end-to-end recurrent neural network that takes available raw sensors as input (IMU, wheel odometry, and motor currents) and outputs velocity estimates. The results are compared to two state-of-the-art Kalman filters, which respectively include and exclude expensive velocity sensors. All methods have been extensively tested on a formula student driverless race car with very high sideslip (10° at the rear axle) and slip ratio (≈ 20%), operating close to the limits of handling. The proposed network is able to estimate lateral velocity up to 15x better than the Kalman filter with the equivalent sensor input and matches (0.06 m/s RMSE) the Kalman filter with the expensive velocity sensor setup.
|
|
MoCT4 |
Room T4 |
Wheeled Robots |
Regular session |
Chair: La, Hung | University of Nevada at Reno |
Co-Chair: Yamaguchi, Tomoyuki | University of Tsukuba |
|
14:00-14:15, Paper MoCT4.1 | |
>RoVaLL: Design and Development of a Multi-Terrain Towed Robot with Variable Lug-Length Wheels |
> Video Attachment
|
|
Salazar Luces, Jose Victorio | Tohoku University |
Matsuzaki, Shin | Tohoku University |
Hirata, Yasuhisa | Tohoku University |
Keywords: Multi-Robot Systems, Mechanism Design, Wheeled Robots
Abstract: Robotic systems play a very important role in exploration, allowing us to reach places that would otherwise be unsafe or unreachable to humans, such as volcanic areas, disaster sites, or unknown areas on other planets. As the area to be explored increases, so does the time it takes for robots to explore it. One approach to reduce the required time is using multiple autonomous robots to perform distributed exploration. However, this significantly increases the associated cost and the complexity of the exploration process. To address these issues, in the past we proposed a leader-follower architecture where multiple two-wheeled passive robots capable of steering only using brakes are pulled by a leader robot. By controlling their relative angle with respect to the leader, the followers could move in arbitrary formations. The proposed follower robots used rubber tires, which allowed them to perform well on rigid ground, but poorly in soft soil. One alternative is to use lugged wheels, which increase the traction in soft soils. In this paper, we propose a robot with shape-shifting wheels that allow it to steer on both rigid and soft soils. The wheels use a cam mechanism to push out and retract lugs stored inside them. The shape of the wheel can be manipulated by controlling the driving torque exerted on the cam mechanism. Through experiments, we verified that the developed mechanism allowed the follower robots to control their relative angle with respect to the leader on both rigid and soft soils.
|
|
14:15-14:30, Paper MoCT4.2 | |
>Modeling and Control of a Hybrid Wheeled Jumping Robot |
> Video Attachment
|
|
Dinev, Traiko | The University of Edinburgh |
Xin, Songyan | The University of Edinburgh |
Merkt, Wolfgang Xaver | University of Oxford |
Ivan, Vladimir | University of Edinburgh |
Vijayakumar, Sethu | University of Edinburgh |
Keywords: Wheeled Robots, Motion Control, Optimization and Optimal Control
Abstract: In this paper, we study a wheeled robot with a prismatic extension joint. This allows the robot to build up momentum to perform jumps over obstacles and to swing up to the upright position after the loss of balance. We propose a template model for the class of such two-wheeled jumping robots. This model can be considered as the simplest wheeled-legged system. We provide an analytical derivation of the system dynamics which we use inside a model predictive controller (MPC). We study the behavior of the model and demonstrate highly dynamic motions such as swing-up and jumping. Furthermore, these motions are discovered through optimization from first principles. We evaluate the controller on a variety of tasks and uneven terrains in a simulator.
|
|
14:30-14:45, Paper MoCT4.3 | |
>Ospheel: Design of an Omnidirectional Spherical-Sectioned Wheel |
> Video Attachment
|
|
Hayat, Abdullah Aamir | Singapore University of Technology and Design |
Shi, Yuyao | SUTD |
Elangovan, Karthikeyan | Singapore University of Technology and Design |
Elara, Mohan Rajesh | Singapore University of Technology and Design |
Abdulkader, Raihan Enjikalayil | Singapore University of Technology and Design |
Keywords: Wheeled Robots, Mechanism Design
Abstract: The holonomic and omnidirectional capabilities of a mobile platform depend on the wheel design and its arrangement in the platform chassis. This paper reports on the development of an omnidirectional spherical-sectioned wheel named Ospheel. It is modular, and the spherical-sectioned geometry of the wheel is driven using two actuators placed inside the housing above the wheel, which rotate it independently about two perpendicular axes. The mechanical drive system of Ospheel consists of two gear trains, namely an internal spur gear and a crown gear, spatially assembled in orthogonal planes and driven by two pinions. The omnidirectional movement is achieved using the combination of the two rotations, and its kinematics is presented. Two wheels at a fixed inclination were assembled with a base, and experiments were carried out to illustrate its holonomic motion. The robustness of the wheel design was tested with different trajectories and on different terrains.
|
|
14:45-15:00, Paper MoCT4.4 | |
>Dynamics and Aerial Attitude Control for Rapid Emergency Deployment of the Agile Ground Robot AGRO |
> Video Attachment
|
|
Gonzalez, Daniel | United States Military Academy at West Point |
Lesak, Mark C. | United States Military Academy |
Rodriguez, Andres | United States Military Academy |
Cymerman, Joseph | Department of Civil and Mechanical Engineering, United States Mi |
Korpela, Christopher M. | United States Military Academy at West Point |
Keywords: Wheeled Robots, Dynamics, Motion Control
Abstract: In this work we present a Four-Wheeled Independent Drive and Steering (4WIDS) robot named AGRO and a method of controlling its orientation while airborne using wheel reaction torques. This is the first documented use of independently steerable wheels both to drive on the ground and to achieve aerial attitude control when thrown. Inspired by a cat's self-righting reflex, this capability was developed to allow emergency response personnel to rapidly deploy AGRO by throwing it over walls and fences or through windows without the risk of it landing upside down. It also allows AGRO to drive off ledges and ensure it lands on all four wheels. We have demonstrated a successful thrown deployment of AGRO. A novel parametrization and singularity analysis of 4WIDS kinematics reveals independent yaw authority with simultaneous adjustment of the ratio between roll and pitch authority. Simple PD controllers allow for stabilization of roll, pitch, and yaw. These controllers were tested in a simulation using derived dynamic equations of motion, then implemented on the AGRO prototype. An experiment comparing a controlled and an uncontrolled fall was conducted, in which AGRO was dropped from a height of 0.85 m with initial roll and pitch angles of 16 degrees and -23 degrees, respectively. With the controller enabled, AGRO can use the reaction torque from its wheels to stabilize its orientation within 402 milliseconds.
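The single-axis essence of such aerial attitude control, a PD law commanding wheel reaction torque on an airborne body, reduces to a double integrator. The inertia, gains, and duration below are hypothetical placeholders, not the paper's identified values.

```python
import math

def stabilize(theta0_deg, inertia=0.05, kp=1.2, kd=0.4,
              dt=0.001, steps=3000):
    """Single-axis sketch: while airborne, the wheels apply a reaction
    torque tau = -kp*theta - kd*omega to the body (inertia I), driving
    the attitude error theta to zero before touchdown."""
    theta = math.radians(theta0_deg)   # initial attitude error [rad]
    omega = 0.0                        # body angular rate [rad/s]
    for _ in range(steps):
        tau = -kp * theta - kd * omega     # PD reaction torque
        omega += (tau / inertia) * dt      # semi-implicit Euler
        theta += omega * dt
    return math.degrees(theta)
```

With these gains the closed loop is well damped (damping ratio about 0.8), so both the 16-degree and the -23-degree initial errors quoted in the abstract's experiment would decay essentially to zero within the simulated window.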
|
|
15:00-15:15, Paper MoCT4.5 | |
>Control Framework for a Hybrid-Steel Bridge Inspection Robot |
> Video Attachment
|
|
Bui, Hoang-Dung | University of Nevada Reno |
Nguyen, Son | University of Nevada, Reno |
Billah, Umme-Hafsa | University of Nevada, Reno |
Le, Chuong | University of Oklahoma |
Tavakkoli, Alireza | University of Nevada, Reno |
La, Hung | University of Nevada at Reno |
Keywords: Field Robots, Search and Rescue Robots, Wheeled Robots
Abstract: Autonomous navigation of steel bridge inspection robots is essential for proper maintenance. The majority of existing robotic solutions for bridge inspection require human intervention to assist in control and navigation. In this paper, a control system framework is proposed for the previously designed ARA robot, which facilitates autonomous real-time navigation and minimizes human involvement. The mechanical design and control framework of the ARA robot enable two different configurations, namely the mobile and inch-worm transformations. In addition, a switching control was developed, with 3D point clouds of steel surfaces as the input, which allows the robot to switch between the mobile and inch-worm transformations. The surface availability algorithm (considering plane, area, and height) of the switching control enables the robot to perform inch-worm jumps autonomously. The mobile transformation allows the robot to move on continuous steel surfaces and perform visual inspection of steel bridge structures. Practical experiments on actual steel bridge structures highlight the effective performance of the ARA robot with the proposed control framework for autonomous navigation during visual inspection of steel bridges.
|
|
15:15-15:30, Paper MoCT4.6 | |
>Development of a Steep Slope Mobile Robot with Propulsion Adhesion |
|
Nishimura, Yuki | University of Tsukuba |
Yamaguchi, Tomoyuki | University of Tsukuba |
Keywords: Wheeled Robots
Abstract: A mobile robot that can achieve a stable attitude and locomotion on steep slopes is needed to overcome the problems of slipping and falling in the automation of work on steep slopes. Conventional approaches to achieving a stable attitude and locomotion have adopted tracked wheels and multi-legged mechanisms instead of wheel mechanisms. However, these robots have limitations in terms of applicable slope angles, and a systematic theory for stable attitude and locomotion on steep slopes has not been established. Therefore, research on control strategies for wheeled mobile robots on steep slopes is essential. In this paper, a method is proposed to realize a stable attitude and locomotion on a steep slope for a wheeled mobile robot by using propellers for propulsion adhesion. The proposed robot can generate a large frictional force by pushing its body against the slope with a thrust force. This force prevents the robot from slipping while maneuvering on the slope. The magnitude and direction of the thrust force are optimized using an appropriate control mechanism influencing the moment of force acting on the robot, to avoid falling and side slipping during locomotion on steep slopes. A simulation experiment was conducted from the perspective of mechanics and dynamics to arrive at an optimal design of the mobile robot. The developed robot has four propellers to generate thrust forces and a rotation axis to control the direction of the generated thrust forces. Evaluation experiments were performed to validate the stability of the robot at a resting position and during lateral locomotion, and its ability to climb a slope. The experimental results confirmed that the proposed robot with propellers realized a steady attitude and locomotion on a slope of up to 90° by controlling the magnitude and the direction of the thrust force.
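The adhesion condition reduces to simple statics: on a slope of angle theta, a thrust T pressing the robot against the surface must make the available friction mu*(T + m*g*cos(theta)) exceed the downslope gravity m*g*sin(theta), giving T >= m*g*(sin(theta)/mu - cos(theta)). A back-of-the-envelope sketch (the mass and friction coefficient are hypothetical, not the robot's):

```python
import math

G = 9.81  # gravitational acceleration [m/s^2]

def min_adhesion_thrust(mass, slope_deg, mu):
    """Smallest slope-normal thrust [N] such that friction
    mu*(T + m*g*cos(theta)) balances the downslope component
    m*g*sin(theta); zero when gravity alone provides enough grip."""
    th = math.radians(slope_deg)
    return max(0.0, mass * G * (math.sin(th) / mu - math.cos(th)))
```

At theta = 90° the formula gives T = m*g/mu, i.e. on a vertical wall the thrust alone must supply all the friction, which is consistent with the 90° limit reported in the abstract.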
|
|
15:15-15:30, Paper MoCT4.7 | |
>Definition and Application of Variable Resistance Coefficient for Wheeled Mobile Robots on Deformable Terrain (I) |
|
Ding, Liang | Harbin Institute of Technology |
Huang, Lan | Harbin Institute of Technology |
Li, Shu | Harbin Institute of Technology |
Gao, Haibo | Harbin Institute of Technology |
Deng, Huichao | Beihang university |
Li, Yuankai | Department of Aerospace Engineering, Ryerson University |
Liu, Guangjun | Ryerson University |
|
|
MoCT5 |
Room T5 |
Robotics in Agriculture and Forestry |
Regular session |
Chair: Tokekar, Pratap | University of Maryland |
Co-Chair: Isler, Volkan | University of Minnesota |
|
14:00-14:15, Paper MoCT5.1 | |
>Interactive Movement Primitives: Planning to Push Occluding Pieces for Fruit Picking |
> Video Attachment
|
|
Mghames, Sariah | University of Lincoln |
Hanheide, Marc | University of Lincoln |
Ghalamzan Esfahani, Amir Masoud | University of Lincoln |
Keywords: Agricultural Automation, Robotics in Agriculture and Forestry, Motion and Path Planning
Abstract: Robotic technology is increasingly considered the major means for fruit picking. However, picking fruits in a dense cluster poses a challenging research question in terms of motion/path planning, as conventional planning approaches may not find collision-free movements for the robot to reach-and-pick a ripe fruit within a dense cluster. In such cases, the robot needs to safely push unripe fruits to reach a ripe one. Nonetheless, existing approaches to planning pushing movements in cluttered environments are either computationally expensive or only deal with 2-D cases, and are not suitable for fruit picking, which requires computing 3-D pushing movements in a short time. In this work, we present a path planning algorithm for pushing occluding fruits to reach-and-pick a ripe one. Our proposed approach, called Interactive Probabilistic Movement Primitives (I-ProMP), is not computationally expensive (its computation time is on the order of 100 milliseconds) and is readily used for 3-D problems. We demonstrate the efficiency of our approach by pushing unripe strawberries in a simulated polytunnel. Our experimental results confirm that I-ProMP successfully pushes table-top grown strawberries and reaches a ripe one.
|
|
14:15-14:30, Paper MoCT5.2 | |
>Robotic Untangling of Herbs and Salads with Parallel Grippers |
> Video Attachment
|
|
Ray, Prabhakar | King's College London |
Howard, Matthew | King's College London |
Keywords: Robotics in Agriculture and Forestry, Agricultural Automation, Computer Vision for Automation
Abstract: Robotic packaging of fresh leafy produce such as herbs and salads generally involves picking out a target mass from a pile or crate of plant material. Typically, for low-complexity parallel grippers, the weight picked can be controlled by varying the opening aperture. However, individual strands of plant material often get entangled with each other, causing more to be picked out than desired. This paper presents a simple spread-and-pick approach that significantly reduces the degree of entanglement in a herb pile when picking. Compared to the traditional approach of picking from an entanglement-free point in the pile, the proposed approach results in a decrease of up to 29.06% in picked weight variance for separate homogeneous piles of fresh herbs. Moreover, it shows good generalisation, with up to 55.53% decrease in picked weight variance for herbs previously unseen by the system.
|
|
14:30-14:45, Paper MoCT5.3 | |
>Choosing Classification Thresholds for Mobile Robot Coverage |
|
Maini, Parikshit | University of Minnesota |
Isler, Volkan | University of Minnesota |
Keywords: Field Robots, Robotics in Agriculture and Forestry
Abstract: Many robotic coverage applications involve detection of spatially distributed targets, followed by path planning to visit them for service. In these applications, the performance of the detection algorithm can have a profound effect on planning decisions and costs. A robot's range of operation, in both space and time, is typically finite over a single mission and is a common constraint that needs to be accounted for in decision making. Misclassification may result in wasted resources and can even jeopardize the completion of a mission if the length of a path extends beyond the range of the robot. In this work, we develop techniques for the computation of planning-aware classification thresholds. We discuss two versions that compute binary classification thresholds as a function of planning budget and detection accuracy. We present an implementation of our methods in path planning applications for an autonomous mower and show results on real and simulated data. Our method allows up to 25% improvement in coverage as compared to standard thresholding methods.
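As an illustration of the idea (a hedged sketch, not the authors' algorithm), a planning-aware threshold can be chosen greedily: admit detections in decreasing score order until visiting them would exceed the path budget, and set the threshold at the last admitted score:

```python
def budget_aware_threshold(scores, visit_costs, budget):
    """Pick a detection-score threshold so that the cost of visiting every
    admitted detection fits within the mission budget.
    Greedy: admit targets in decreasing score order until the budget is spent."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    spent, threshold = 0.0, 1.0  # threshold of 1.0 admits nothing if budget is zero
    for i in order:
        if spent + visit_costs[i] > budget:
            break
        spent += visit_costs[i]
        threshold = scores[i]
    return threshold
```

Lowering the threshold admits more (less certain) detections at the cost of a longer path, which is exactly the trade-off the abstract describes between detection accuracy and planning budget.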
|
|
14:45-15:00, Paper MoCT5.4 | |
>Unsupervised Domain Adaptation for Transferring Plant Classification Systems to New Field Environments, Crops, and Robots |
|
Gogoll, Dario | University of Bonn |
Lottes, Philipp | University of Bonn |
Weyler, Jan | University of Bonn |
Petrinic, Nik | University of Oxford |
Stachniss, Cyrill | University of Bonn |
Keywords: Robotics in Agriculture and Forestry, Agricultural Automation
Abstract: Crops are an important source of food and other products. In conventional farming, tractors apply large amounts of agrochemicals uniformly across fields for weed control and plant protection. Autonomous farming robots have the potential to provide environment-friendly weed control on a per plant basis. A system that reliably distinguishes crops, weeds, and soil under varying environment conditions is the basis for plant-specific interventions such as spot applications. Such semantic segmentation systems, however, often show a performance decay when applied under new field conditions. In this paper, we therefore propose an effective approach to unsupervised domain adaptation for plant segmentation systems in agriculture and thus to adapt existing systems to new environments, different value crops, and other farm robots. Our system yields a high segmentation performance in the target domain by exploiting labels only from the source domain. It is based on CycleGANs and enforces a semantic consistency domain transfer by constraining the images to be pixel-wise classified in the same way before and after translation. We perform an extensive evaluation, which indicates that we can substantially improve the transfer of our semantic segmentation system to new field environments, different crops, and different sensors or robots.
|
|
15:00-15:15, Paper MoCT5.5 | |
>Crop Height and Plot Estimation for Phenotyping from Unmanned Aerial Vehicles Using 3D LiDAR |
> Video Attachment
|
|
Dhami, Harnaik | University of Maryland |
Yu, Kevin | Virginia Tech |
Xu, Tianshu | University of Maryland |
Zhu, Qian | Virginia Tech |
Dhakal, Kshitiz | Virginia Tech |
Friel, James | Virginia Tech |
Li, Song | Virginia Tech |
Tokekar, Pratap | University of Maryland |
Keywords: Robotics in Agriculture and Forestry, Computer Vision for Other Robotic Applications, Agricultural Automation
Abstract: We present techniques to measure crop heights using a 3D Light Detection and Ranging (LiDAR) sensor mounted on an Unmanned Aerial Vehicle (UAV). Knowing the height of plants is crucial to monitor their overall health and growth cycles, especially for high-throughput plant phenotyping. We present a methodology for extracting plant heights from 3D LiDAR point clouds, specifically focusing on plot-based phenotyping environments. We also present a toolchain that can be used to create phenotyping farms for use in Gazebo simulations. The tool creates a randomized farm with realistic 3D plant and terrain models. We conducted a series of simulations and hardware experiments in controlled and natural settings. Our algorithm was able to estimate the plant heights in a field with 112 plots with a root mean square error (RMSE) of 6.1 cm. This is the first such dataset for 3D LiDAR from an airborne robot over a wheat field. The developed simulation toolchain, algorithmic implementation, and datasets can be found on our GitHub repository. https://github.com/hsd1121/PointCloudProcessing
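A common way to extract a per-plot height from LiDAR returns — shown here as a generic sketch, not necessarily the paper's pipeline — is to take the difference between a high canopy percentile and a low ground percentile of the point z-values, which softens the effect of outlier returns:

```python
import numpy as np

def plot_height(points_z, ground_pct=2.0, canopy_pct=98.0):
    """Estimate plant height in one plot from LiDAR point z-values:
    canopy level (high percentile) minus ground level (low percentile).
    Percentiles are used instead of min/max to reject outlier returns."""
    z = np.asarray(points_z, dtype=float)
    return np.percentile(z, canopy_pct) - np.percentile(z, ground_pct)
```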
|
|
MoCT6 |
Room T6 |
Robotics in Construction I |
Regular session |
Chair: Lee, Dongjun | Seoul National University |
Co-Chair: Liu, Yunhui | Chinese University of Hong Kong |
|
14:00-14:15, Paper MoCT6.1 | |
>A Robotic Gripper Design and Integrated Solution towards Tunnel Boring Construction Equipment |
> Video Attachment
|
|
Yuan, Jianjun | Shanghai University, China |
Guan, Renming | Shanghai Jiao Tong University |
Du, Liang | Ritsumeikan University |
Ma, Shugen | Ritsumeikan University |
Keywords: Mechanism Design, Robotics in Construction
Abstract: Creative design of grippers -- their configurations, mechatronic control systems, and multi-component collaborative algorithms -- is often needed to realize complex operations in industrial applications, due to environmental constraints or specific task requirements. This paper first introduces the background problem: the shield machine, the main automatic equipment in tunnel boring construction, requires frequent tool (cutter) replacement during underground operation, yet no practical automatic method exists due to heavy payloads, the complex environment, and the work procedure. Thus, an integrated solution is proposed: a specific gripper and a snake-like manipulator that accomplish tool replacement cooperatively. Through a simple and unique design of the related components, the solution realizes a fully automatic and precise approach including heavy-load tool grasping and regrasping, posture adjustment, unlocking and disassembly, and installation and locking. Finally, this paper describes the experimental process of tool replacement by the prototype under real working conditions, and discusses the feasibility of putting the scheme into practical application through comparison.
|
|
14:15-14:30, Paper MoCT6.2 | |
>Expert-Emulating Excavation Trajectory Planning for Autonomous Robotic Industrial Excavator |
> Video Attachment
|
|
Son, Bukun | Seoul National University |
Kim, ChangU | Seoul National University |
Kim, ChangMuk | Seoul National University, Doosan |
Lee, Dongjun | Seoul National University |
Keywords: Robotics in Construction, Motion and Path Planning, Imitation Learning
Abstract: We propose a novel excavation (i.e., digging) trajectory planning framework for industrial autonomous robotic excavators, which emulates the strategies of human expert operators to optimize the excavation of (complex/unmodellable) soils while also upholding robustness and safety in practice. First, we encode the trajectory with dynamic movement primitives (DMP), which are known to robustly preserve the qualitative shape of the trajectory and attraction to (variable) end-points (i.e., start-points of swing/dumping), while also being data-efficient due to their structure, and thus suitable for our purpose, where expert data collection is expensive. We further shape this DMP-based trajectory to be expert-emulating by learning the shaping force of the DMP dynamics from real expert excavation data via a neural network (i.e., an MLP (multi-layer perceptron)). To cope with (possibly dangerous) underground uncertainties (e.g., pipes, rocks), we also modulate the expert-emulating (nominal) trajectory in real time to prevent excessive build-up of excavation force by using the feedback of its online estimation. The proposed framework is then validated/demonstrated by using an industrial-scale autonomous robotic excavator, with the associated data also presented here.
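The DMP encoding referred to above can be sketched as a second-order attractor with a learned forcing term. The sketch below uses the standard discrete-DMP equations with illustrative gains and a radial-basis forcing term, in place of the authors' learned MLP forcing:

```python
import numpy as np

def dmp_rollout(y0, g, weights, centers, widths, tau=1.0, dt=0.001,
                alpha_z=25.0, beta_z=6.25, alpha_x=1.0, steps=2000):
    """Integrate a 1-D discrete DMP: a critically damped attractor to goal g,
    shaped by a basis-function forcing term that vanishes as the phase x decays."""
    x, y, v = 1.0, y0, 0.0
    traj = []
    for _ in range(steps):
        psi = np.exp(-widths * (x - centers) ** 2)          # RBF activations
        f = x * (g - y0) * (psi @ weights) / (psi.sum() + 1e-10)
        v += dt / tau * (alpha_z * (beta_z * (g - y) - v) + f)
        y += dt / tau * v
        x += dt / tau * (-alpha_x * x)                      # canonical system
        traj.append(y)
    return np.array(traj)
```

With zero weights the rollout reduces to the plain attractor and converges to the goal; learning the weights (or, as in the paper, a network in place of the forcing term) shapes the transient to match expert demonstrations while keeping the goal-attraction guarantee.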
|
|
14:30-14:45, Paper MoCT6.3 | |
>Prediction of Backhoe Loading Motion Via the Beta-Process Hidden Markov Model |
> Video Attachment
|
|
Yamada, Kento | Tohoku Univ |
Ohno, Kazunori | Tohoku University |
Hamada, Ryunosuke | Tohoku University |
Westfechtel, Thomas | Tohoku University |
Bezerra, Ranulfo | Tohoku University |
Miyamoto, Naoto | Tohoku Univ |
Suzuki, Taro | Chiba Institute of Technology |
Suzuki, Takahiro | Tohoku University |
Nagatani, Keiji | The University of Tokyo |
Shibata, Yukinori | Sato Komuten Co |
Asano, Kimitaka | Sanyo-Technics Co |
Komatsu, Tomohiro | KOWATECH Co |
Tadokoro, Satoshi | Tohoku University |
Keywords: Behavior-Based Systems, Robotics in Construction, Human-Centered Automation
Abstract: At a construction site, a backhoe loads sediment onto the bed of a dump truck for earthmoving work. In cooperation between the backhoe and the dump truck, the dump truck must move to the loading spot at the instant the backhoe completes preparation for loading, such as gathering sediment. To automate the transport of sediment by a dump truck, this instant must be predicted promptly. However, it is difficult to predict the instant at which the backhoe is ready to load sediment, owing to the similarity of the motions observed during preparation for loading. Moreover, the level of skill required to operate a backhoe differs between operators, so the prediction requires a unique model for each operator. In this study, we attempt to predict the instant at which the backhoe is in the ideal position to load sediment into the dump truck. We employ the beta-process hidden Markov model (BP-HMM) to develop a motion model of a backhoe used for earthmoving work and operated by a specific operator. The BP-HMM classifies the backhoe motion into several primitive motions. Furthermore, within a series of primitive motions, such as loading sediment, we identify a specific series of actions that is unique to waiting for the dump truck to drive into the loading spot. As input for the model, we gathered 6-axis inertial data along the cab, boom, and arm of the backhoe using attachable sensor boxes containing inertial measurement units (IMUs); our measurement methodology can therefore also be used for older backhoes without sensors. As a result, we were able to identify three kinds of primitive motions that help predict the instant at which the backhoe is ready to load sediment into the dump truck, using backhoe motion data from a specific operator. At best, the instant could be predicted with a probability of 67% and 100% at 6 s and 0.7 s before the loading process began, respectively. This phased prediction could be used to reduce the idle time and risk to dump trucks during earthmoving work with the backhoe.
|
|
14:45-15:00, Paper MoCT6.4 | |
>Robust Dynamic State Estimation for Lateral Control of an Industrial Tractor Towing Multiple Passive Trailers |
|
Zhou, Shunbo | The Chinese University of Hong Kong |
Zhao, Hongchao | The Chinese University of Hong Kong |
Chen, Wen | The Chinese University of Hong Kong |
Liu, Zhe | University of Cambridge |
Wang, Hesheng | Shanghai Jiao Tong University |
Liu, Yunhui | Chinese University of Hong Kong |
Keywords: Industrial Robots, Logistics, Robotics in Construction
Abstract: In this paper, we propose a dynamic state estimation framework for lateral control of a heavy tractor-trailers system using only mass-produced low-cost sensors. This issue is challenging since the lateral velocity of the lead tractor is difficult to measure directly. The performance of existing dynamic model-based estimation methods will also be degraded, as different trailers and payloads cause the tractor model parameters to change. We address this issue by incorporating a kinematic estimator into a dynamic model-based estimation scheme. Accurate and reliable tire cornering stiffness and dynamics-informed lateral velocity of the lead tractor can be output in real-time by using our method. The stability and robustness of the proposed method are theoretically proved. The feasibility of our method is verified by full-scale experiments. It is also verified that the estimated model parameters and lateral states do improve the control performance by integrating the estimator into a lateral control system.
|
|
MoCT7 |
Room T7 |
Robotics in Construction II |
Regular session |
Chair: Liu, Zhe | University of Cambridge |
Co-Chair: Hutter, Marco | ETH Zurich |
|
14:00-14:15, Paper MoCT7.1 | |
>End-To-End 3D Point Cloud Learning for Registration Task Using Virtual Correspondences |
|
Wei, Huanshu | Chinese University of Hong Kong |
Qiao, Zhijian | Shanghai Jiao Tong University |
Liu, Zhe | University of Cambridge |
Suo, Chuanzhe | The Chinese University of Hong Kong |
Yin, Peng | Carnegie Mellon University |
Shen, Yueling | Shanghai Jiao Tong University |
Li, Haoang | The Chinese University of Hong Kong |
Wang, Hesheng | Shanghai Jiao Tong University |
Keywords: Robotics in Construction
Abstract: 3D point cloud registration is still a very challenging topic due to the difficulty of finding the rigid transformation between two point clouds with partial correspondences, and it is even harder in the absence of any initial estimate. In this paper, we present an end-to-end deep-learning-based approach to the point cloud registration problem. First, a revised LPD-Net is introduced to extract features and aggregate them with a graph network. Second, a self-attention mechanism is utilized to enhance the structural information within each point cloud, and a cross-attention mechanism is designed to enhance the corresponding information between the two input point clouds. Based on these, virtual corresponding points are generated by a voting-based method, and finally the registration problem is solved with the SVD method. Comparison results on the ModelNet40 dataset show that the proposed approach reaches the state of the art in point cloud registration tasks, and experimental results on the KITTI dataset validate its effectiveness in real applications.
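The final SVD step mentioned in the abstract is the classical least-squares rigid alignment (Kabsch) given corresponding points; a self-contained sketch:

```python
import numpy as np

def rigid_transform_svd(src, dst):
    """Least-squares rigid transform (R, t) such that dst ~ src @ R.T + t,
    computed via SVD of the cross-covariance of the centred correspondences."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)              # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t
```

In the learning pipeline described above, the "virtual corresponding points" supply the `src`/`dst` pairs, so the differentiable closed-form solve replaces any iterative alignment.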
|
|
14:15-14:30, Paper MoCT7.2 | |
>Terrain-Adaptive Planning and Control of Complex Motions for Walking Excavators |
> Video Attachment
|
|
Jelavic, Edo | Swiss Federal Institute of Technology Zurich |
Berdou, Yannick | ETH Zurich |
Jud, Dominic | ETH Zurich |
Kerscher, Simon | ETH Zurich |
Hutter, Marco | ETH Zurich |
Keywords: Robotics in Construction, Whole-Body Motion Planning and Control
Abstract: This article presents a planning and control pipeline for legged-wheeled (hybrid) machines. It consists of a Trajectory Optimization based planner that computes references for end-effectors and joints. The references are tracked using a whole-body controller based on a hierarchical optimization approach. Our controller is capable of performing terrain-adaptive whole-body control. Furthermore, it computes both torque and position/velocity references, depending on the actuator capabilities. We perform experiments on a Menzi Muck M545, a full-size 31-Degree-of-Freedom (DoF) walking excavator with five limbs: four wheeled legs and an arm. We show motions that require full-body coordination executed in realistic conditions. To the best of our knowledge, this is the first work that shows the execution of whole-body motions on a full-size walking excavator, using all DoFs for locomotion.
|
|
14:30-14:45, Paper MoCT7.3 | |
>Towards RL-Based Hydraulic Excavator Automation |
> Video Attachment
|
|
Egli, Pascal Arturo | RSL, ETHZ |
Hutter, Marco | ETH Zurich |
Keywords: Robotics in Construction, Reinforcement Learning
Abstract: In this article, we present a data-driven approach for automated arm control of a hydraulic excavator. Except for the link lengths of the excavator, our method requires neither machine-specific knowledge nor gain tuning. Using data collected during operation of the excavator, we train a general-purpose model to effectively represent the highly non-linear dynamics of the hydraulic actuation and joint linkage. Together with the link lengths, a simulation is set up to train a neural network control policy for end-effector position tracking using reinforcement learning (RL). The control policy directly outputs the actuator commands, which can be applied to the machine without further filtering or modification. The proposed method is implemented and tested on a 12 t hydraulic excavator, controlling its 4 main arm joints to track desired positions of the shovel in free space. The results demonstrate the feasibility of directly applying control policies trained in simulation to the physical excavator for accurate and stable position tracking.
|
|
14:45-15:00, Paper MoCT7.4 | |
>Multimodal Teleoperation of Heterogeneous Robots within a Construction Environment |
|
Wallace, Dylan | University of Nevada, Las Vegas |
He, Yu Hang | University of Nevada, Las Vegas |
Chagas Vaz, Jean M. | University of Nevada Las Vegas |
Georgescu, Leonardo | University of Nevada, Las Vegas |
Oh, Paul Y. | University of Nevada, Las Vegas (UNLV) |
Keywords: Robotics in Construction, Telerobotics and Teleoperation, Virtual Reality and Interfaces
Abstract: Automation in construction continues to be a topic of interest for many in industry and academia. However, the dynamic environments presented in construction sites prove these tasks to be difficult to automate reliably. This paper proposes a novel method of teleoperation for multiple heterogeneous robots within a construction environment. The system is achieved by creating a virtual reality interface that allows an operator to control multiple robots both synchronously and asynchronously. Feedback is provided from an array of RGBD cameras, force sensors, and precise odometry data. The DRC-Hubo and Spot robot platforms are used for implementation and experimentation. Experiments include useful tasks for construction including item manipulation and item delivery of tools and components. Results demonstrate the feasibility of implementing the system in a construction environment, including trajectory comparisons, task learning curves, and successful multi-robot collaboration.
|
|
MoCT8 |
Room T8 |
Service Robots |
Regular session |
Chair: Liu, Ming | Hong Kong University of Science and Technology |
Co-Chair: Fernandez-Carmona, Manuel | University of Lincoln |
|
14:00-14:15, Paper MoCT8.1 | |
>Applying Surface Normal Information in Drivable Area and Road Anomaly Detection for Ground Mobile Robots |
|
Wang, Hengli | The Hong Kong University of Science and Technology |
Fan, Rui Ranger | UC San Diego |
Sun, Yuxiang | Hong Kong University of Science and Technology |
Liu, Ming | Hong Kong University of Science and Technology |
Keywords: Automation Technologies for Smart Cities, Service Robotics, Logistics
Abstract: The joint detection of drivable areas and road anomalies is a crucial task for ground mobile robots. In recent years, many impressive semantic segmentation networks, which can be used for pixel-level drivable area and road anomaly detection, have been developed. However, the detection accuracy still needs improvement. Therefore, we develop a novel module named the Normal Inference Module (NIM), which can generate surface normal information from dense depth images with high accuracy and efficiency. Our NIM can be deployed in existing convolutional neural networks (CNNs) to refine the segmentation performance. To evaluate the effectiveness and robustness of our NIM, we embed it in twelve state-of-the-art CNNs. The experimental results illustrate that our NIM can greatly improve the performance of the CNNs for drivable area and road anomaly detection. Furthermore, our proposed NIM-RTFNet ranks 8th on the KITTI road benchmark and exhibits a real-time inference speed.
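Surface normal inference from dense depth can be sketched by converting the depth map to a 3-D point map and taking the cross product of local image-axis tangents; this is a generic illustration of the principle, not the NIM itself:

```python
import numpy as np

def normals_from_pointmap(P):
    """Per-pixel surface normals for an HxWx3 point map: the cross product of
    the horizontal and vertical tangent vectors, normalised to unit length."""
    dx = np.gradient(P, axis=1)   # tangent along image columns
    dy = np.gradient(P, axis=0)   # tangent along image rows
    n = np.cross(dx, dy)
    norm = np.linalg.norm(n, axis=2, keepdims=True)
    return n / np.where(norm > 0, norm, 1.0)
```

Feeding such a normal map as an extra input channel is one way a module like NIM can refine segmentation: drivable areas are locally planar (consistent normals), while road anomalies disturb them.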
|
|
14:15-14:30, Paper MoCT8.2 | |
>Performance Characterization of an Algorithm to Estimate the Search Skill of a Human or Robot Agent |
|
Balaska, Audrey | Tufts University |
Rife, Jason | Tufts University |
Keywords: Search and Rescue Robots, Performance Evaluation and Benchmarking, Object Detection, Segmentation and Categorization
Abstract: This paper characterizes an algorithm that estimates searcher skill level to support planning for search activities involving heterogeneous robot and human/robot teams. Specifically, we use Monte-Carlo simulations to determine the empirical accuracy of the estimator, to assess the quality of its (nonparametric) predicted distribution of agent skill levels, and to assess the convergence rate of the estimate. The simulation study suggests that a single challenging search task can be used to estimate searcher skill to within about 10%; however, the quality of the estimate is higher when searcher skill is high.
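The Monte-Carlo characterization of estimator accuracy can be illustrated with a toy model (hypothetical, and much simpler than the paper's nonparametric estimator): treat skill as a detection probability, estimate it from one simulated task, and score the estimator by mean absolute error over many simulated tasks:

```python
import random

def estimate_skill(detections, opportunities):
    """Point estimate of searcher skill: the fraction of target
    opportunities that were detected during one search task."""
    return detections / opportunities

def monte_carlo_error(true_skill, opportunities, trials=2000, seed=0):
    """Empirical mean absolute error of the estimator over simulated tasks,
    each task modelled as independent Bernoulli detections."""
    rng = random.Random(seed)
    errs = []
    for _ in range(trials):
        hits = sum(rng.random() < true_skill for _ in range(opportunities))
        errs.append(abs(estimate_skill(hits, opportunities) - true_skill))
    return sum(errs) / trials
```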
|
|
14:30-14:45, Paper MoCT8.3 | |
>The Marathon 2: A Navigation System |
> Video Attachment
|
|
Macenski, Steven | Samsung Research America |
Martin Rico, Francisco | Carnegie Mellon University |
White, Ruffin | University of California San Diego |
Gines Clavero, Jonatan | King Juan Carlos University |
Keywords: Service Robots, Behavior-Based Systems, Software, Middleware and Programming Environments
Abstract: Developments in mobile robot navigation have enabled robots to operate in warehouses, retail stores, and on sidewalks around pedestrians. Various navigation solutions have been proposed, though few as widely adopted as ROS Navigation. Ten years on, it is still one of the most popular navigation solutions. Yet, ROS Navigation has failed to keep up with modern trends. We propose a new navigation solution, Navigation2, which builds on the successful legacy of ROS Navigation. Navigation2 uses a behavior tree for navigator task orchestration and employs new methods designed for dynamic environments applicable to a wider variety of modern sensors. It is built on top of ROS2, a secure message-passing framework suitable for safety-critical applications and program lifecycle management. We present experiments in a campus setting utilizing Navigation2 to operate safely alongside students over a marathon as an extension of the experiment proposed in Eppstein et al. The Navigation2 system is freely available at https://github.com/ros-planning/navigation2 with a rich community and instructions.
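Navigation2 orchestrates navigation tasks with a behavior tree; the control-flow semantics can be sketched with toy Sequence and Fallback nodes (a conceptual sketch only, not the BehaviorTree.CPP API the project actually uses):

```python
class Action:
    """Leaf node wrapping a callable that returns SUCCESS, FAILURE, or RUNNING."""
    def __init__(self, fn):
        self.fn = fn
    def tick(self):
        return self.fn()

class Sequence:
    """Ticks children in order; stops and returns as soon as one is not SUCCESS."""
    def __init__(self, *children):
        self.children = children
    def tick(self):
        for c in self.children:
            status = c.tick()
            if status != "SUCCESS":
                return status
        return "SUCCESS"

class Fallback:
    """Ticks children in order; stops and returns as soon as one is not FAILURE."""
    def __init__(self, *children):
        self.children = children
    def tick(self):
        for c in self.children:
            status = c.tick()
            if status != "FAILURE":
                return status
        return "FAILURE"
```

A navigator built this way can, for example, fall back to a recovery behavior when the main plan-and-follow sequence fails, which is the orchestration pattern the abstract refers to.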
|
|
14:45-15:00, Paper MoCT8.4 | |
>Path Planning for Nonholonomic Multiple Mobile Robot System with Applications to Robotic Autonomous Luggage Trolley Collection at Airports |
|
Wang, Jiankun | The Chinese University of Hong Kong |
Meng, Max Q.-H. | The Chinese University of Hong Kong |
Keywords: Service Robots, Service Robotics, Motion and Path Planning
Abstract: In this paper, we propose a novel path planning algorithm for nonholonomic multiple mobile robot systems, with application to a robotic autonomous luggage trolley collection system at airports. We formulate this path planning problem as a Multiple Traveling Salesman Problem (MTSP). Our path planning algorithm consists of three parts. First, we use the Minimum Spanning Tree (MST) algorithm to divide the MTSP into a number of independent TSPs, which achieves the task assignment for each mobile robot. Second, we implement a closed-loop forward control policy based on the kinematic model of the mobile robot to obtain a feasible and smooth path; the control cost of the path is used as the new metric in solving the TSPs. Finally, to adapt to our case, we modify the TSP into an Open Dynamic Traveling Salesman Problem with Fixed Start (ODTSP-FS) and implement an ant colony algorithm to achieve the path planning for each mobile robot. We evaluate our algorithm in simulation experiments, and the results demonstrate that it quickly generates feasible and smooth paths for each robot while satisfying the nonholonomic constraints.
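The MST-based task assignment described above can be sketched as: build a minimum spanning tree over the target locations and cut its k-1 longest edges, leaving one connected group of targets per robot (an illustrative reconstruction of the idea, not the authors' code):

```python
import numpy as np

def mst_partition(points, k):
    """Assign each target to one of k robots: build a minimum spanning tree
    over the targets (Prim's algorithm) and cut its k-1 longest edges,
    leaving k connected groups. Returns a group label per target."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    visited = np.zeros(n, dtype=bool)
    visited[0] = True
    best = dist[0].copy()              # cheapest connection of each node to the tree
    parent = np.zeros(n, dtype=int)
    edges = []
    for _ in range(n - 1):             # grow the MST one node at a time
        best[visited] = np.inf
        j = int(np.argmin(best))
        edges.append((int(parent[j]), j, float(best[j])))
        visited[j] = True
        upd = dist[j] < best
        parent[upd] = j
        best[upd] = dist[j][upd]
    edges.sort(key=lambda e: e[2])
    keep = edges[: n - k]              # drop the k-1 longest MST edges
    label = list(range(n))
    def find(a):                       # union-find with path compression
        while label[a] != a:
            label[a] = label[label[a]]
            a = label[a]
        return a
    for a, b, _ in keep:
        label[find(a)] = find(b)
    return [find(i) for i in range(n)]
```

Each resulting group then becomes an independent single-robot TSP, which matches the decomposition step in the abstract.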
|
|
15:00-15:15, Paper MoCT8.5 | |
>Affordance-Based Mobile Robot Navigation among Movable Obstacles |
> Video Attachment
|
|
Wang, Maozhen | Northeastern University |
Luo, Rui | Northeastern University |
Onol, Aykut Ozgun | Northeastern University |
Padir, Taskin | Northeastern University |
Keywords: Service Robotics, Motion and Path Planning, Visual-Based Navigation
Abstract: Avoiding obstacles in the perceived world has been the classical approach to autonomous mobile robot navigation. However, this usually leads to unnatural and inefficient motions that differ significantly from the way humans move in tight and dynamic spaces, as we do not refrain from interacting with the environment around us when necessary. Inspired by this observation, we propose a framework for autonomous robot navigation among movable obstacles (NAMO) that is based on the theory of affordances and contact-implicit motion planning. We consider a realistic scenario in which a mobile service robot negotiates unknown obstacles in the environment while navigating to a goal state. An affordance extraction procedure is performed for novel obstacles to detect their movability, and a contact-implicit trajectory optimization method is used to enable the robot to interact with movable obstacles to improve the task performance or to complete an otherwise infeasible task. We demonstrate the performance of the proposed framework in hardware experiments with Toyota's Human Support Robot.
|
|
15:15-15:30, Paper MoCT8.6 | |
>Next-Best-Sense: A Multi-Criteria Robotic Exploration Strategy for RFID Tags Discovery |
|
Polvara, Riccardo | University of Lincoln |
Fernandez-Carmona, Manuel | University of Lincoln |
Hanheide, Marc | University of Lincoln |
Neumann, Gerhard | Karlsruhe Institute of Technology |
Keywords: Service Robotics, Inventory Management, Environment Monitoring and Management
Abstract: Automated exploration is one of the most relevant applications for autonomous robots. In this paper, we propose a novel online coverage algorithm called Next-Best-Sense (NBS), an extension of the Next-Best-View class of exploration algorithms which optimizes the exploration task by balancing multiple criteria. NBS is applied to the problem of localizing all Radio Frequency Identification (RFID) tags with a mobile robot. We cast this problem as a coverage planning problem by defining a basic sensing operation – a scan with the RFID reader – as the field of “view” of the sensor. NBS evaluates candidate locations with a global utility function which combines utility values for travel distance, information gain, sensing time, battery status and RFID information gain, generalizing the use of Multi-Criteria Decision Making. We developed an RFID reader and tag model in the Gazebo simulator for validation. Experiments performed both in simulation and with a robot suggest that our NBS approach can successfully localize all the RFID tags while minimizing navigation metrics such as sensing operations, total traveling distance and battery consumption. The code developed is publicly available in the authors' repository.
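The global utility over candidate locations can be sketched as a weighted sum of min-max-normalised criteria, with negative weights for cost-like criteria such as travel distance. This is a generic Multi-Criteria Decision Making sketch with hypothetical criterion names, not the NBS implementation:

```python
def next_best_sense(candidates, weights):
    """Rank candidate scan locations by a weighted sum of min-max normalised
    criteria. candidates: list of dicts criterion -> raw value; weights:
    criterion -> signed weight (negative for cost-like criteria).
    Returns the index of the best candidate."""
    keys = weights.keys()
    lo = {k: min(c[k] for c in candidates) for k in keys}
    hi = {k: max(c[k] for c in candidates) for k in keys}
    def norm(k, v):  # map each criterion onto [0, 1] across the candidate set
        return 0.0 if hi[k] == lo[k] else (v - lo[k]) / (hi[k] - lo[k])
    scores = [sum(weights[k] * norm(k, c[k]) for k in keys) for c in candidates]
    return max(range(len(candidates)), key=scores.__getitem__)
```

Normalising per criterion keeps heterogeneous units (metres, seconds, bits, battery percent) commensurable before they are weighted, which is the point of the MCDM formulation.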
|
|
MoCT9 |
Room T9 |
Automation at Micro-Nano Scales |
Regular session |
Chair: Gauthier, Michael | FEMTO-ST Institute |
Co-Chair: Cappelleri, David | Purdue University |
|
14:00-14:15, Paper MoCT9.1 | |
>Magnetically Actuated Pick-And-Place Operations of Cellular Micro-Rings for High-Speed Assembly of Micro-Scale Biological Tube |
> Video Attachment
|
|
Wu, Yang | Beijing Institute of Technology |
Sun, Tao | Beijing Institute of Technology |
Shi, Qing | Beijing Institute of Technology |
Wang, Huaping | Beijing Institute of Technology |
Huang, Qiang | Beijing Institute of Technology |
Fukuda, Toshio | Meijo University |
Keywords: Micro/Nano Robots, Automation at Micro-Nano Scales, Automation in Life Sciences: Biotechnology, Pharmaceutical and Health Care
Abstract: Tissue engineering seeks to use modular tissue micro-rings to construct artificial biological microtubes as substitutes for autologous tissue tubes, to alleviate the shortage of donor sources. However, because of the lack of effective assembly strategies, it is still challenging to achieve high-speed fabrication of biological microtubes with high cell density. In this paper, we propose a robot-based magnetic assembly strategy to address this challenge. We first encapsulated magnetic alginate microfibers into micro-rings formed by cell self-assembly to enhance controllability. Afterwards, a 3D long-stroke manipulator with a visual servoing system was designed to achieve magnetic pick-and-place operations of micro-rings for 3D assembly. Moreover, we developed a mathematical model of the motion of a micro-ring in solution environments. Based on visual feedback, we analyzed the feasibility of automatic assembly and the response of micro-rings following the moving magnets, which shows that our proposed method has great potential to achieve high-speed bio-assembly. Finally, we successfully assembled multiple micro-rings into a biological microtube with high cell density.
|
|
14:15-14:30, Paper MoCT9.2 | |
>Design of the uMAZE Platform and Microrobots for Independent Control and Micromanipulation Tasks |
> Video Attachment
|
|
Johnson, Benjamin | Purdue University |
Esantsi, Nathan | Purdue University |
Cappelleri, David | Purdue University |
Keywords: Micro/Nano Robots, Automation at Micro-Nano Scales
Abstract: We present the uMAZE (u(Micro) Magnetic Actuation Zone control Environment) platform for independent control of multiple magnetic microrobots for performing individual and collaborative micromanipulation tasks. We present a new local magnetic field generating coil system design, microrobot design, actuation scheme, and orientation control for actuating multiple magnetic microrobots independently. The new designs are validated and experiments showcasing their abilities are presented. The demonstrations include closed-loop independent and simultaneous control of four microrobots and a sample micromanipulation task involving two microrobots pushing micro-parts into a prescribed formation.
|
|
14:30-14:45, Paper MoCT9.3 | |
>Dielectrophoretic Introduction of the Membrane Proteins into the BLM Platforms for the Electrophysiological Analysis Systems
|
Sugiura, Hirotaka | Nagoya University |
Osaki, Toshihisa | Kanagawa Institute of Industrial Science and Technology |
Mimura, Hisatoshi | Kanagawa Institute of Industrial Science and Technology (KISTEC) |
Yamada, Tetsuya | Kanagawa Institute of Industrial Science and Technology |
Takeuchi, Shoji | UTokyo |
Keywords: Micro/Nano Robots, Automation at Micro-Nano Scales, Medical Robots and Systems
Abstract: This paper proposes a technique to introduce membrane proteins into a lab-on-chip analysis system having a planar lipid bilayer. The proposed technique utilizes a dielectrophoretic force generated by the asymmetric configuration of the solid electrodes on the aqueous buffer separator. By applying an alternating current between the separator and the counter electrode, we manipulated liposomes that could host the membrane proteins on their surface. The key point for dielectrophoretic manipulation in this system was to fabricate an effective configuration of the droplet separator, having a tapered edge on the contour of the micropore. This configuration generated a strong interpenetrating DEP force at the lipid bilayer and prompted the fusion of liposomes into the lipid bilayer. The separator was fabricated by a micromachining technique. Using the separator, we formed the lipid bilayer without evading the solid electrode on the surface. Finally, we confirmed the introduction of the liposome by monitoring with optical microscopy.
|
|
14:45-15:00, Paper MoCT9.4 | |
>Miniaturized Robotics: The Smallest Camera Operator Bot Pays Tribute to David Bowie (I) |
|
Lehmann, Olivier | Universite de Franche-Comté |
Rauch, Jean-Yves | FEMTO-ST institute |
Vitry, Youen | ULB |
Pinsard, Tibo | Darrowan Prod |
Lambert, Pierre | Université libre de Bruxelles |
Gauthier, Michael | FEMTO-ST Institute |
|
|
15:00-15:15, Paper MoCT9.5 | |
>Electromagnetic Actuation of Microrobots in a Simulated Vascular Structure with a Position Estimator Based Motion Controller |
> Video Attachment
|
|
Dong, Dingran | City University of Hong Kong |
Lam, Wah Shing | City University of Hong Kong |
Sun, Dong | City University of Hong Kong |
Keywords: Motion Control, Automation at Micro-Nano Scales, Micro/Nano Robots
Abstract: The use of microrobots to achieve micromanipulation in vivo has attracted considerable attention in recent years to meet the requirements of non-invasiveness, precision, and high efficiency in medical treatment. This paper reports the use of a home-designed electromagnetic manipulation system to control the movements of microrobots in a simulated vascular structure. After dynamic modeling, the moving trajectory of the microrobot is designed on the basis of an artificial potential field. A position estimator is then designed, with stability analysis via a Lyapunov approach. A super-twisting algorithm is further applied to control the microrobot to move along the desired trajectory. Simulations and experiments are finally performed to demonstrate the effectiveness of the proposed control approach.
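The potential-field trajectory design described in this abstract can be sketched in a few lines; the gains, step size, and obstacle layout below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def apf_step(pos, goal, obstacles, k_att=1.0, k_rep=0.5, rho0=2.0, step=0.05):
    """One gradient-descent step on an attractive + repulsive potential.

    pos, goal: (2,) arrays; obstacles: list of (2,) arrays.
    Gains and the influence radius rho0 are illustrative assumptions.
    """
    # Attractive force: negative gradient of 0.5 * k_att * |pos - goal|^2.
    force = -k_att * (pos - goal)
    for obs in obstacles:
        d = np.linalg.norm(pos - obs)
        if 1e-9 < d < rho0:
            # Repulsive force acts only inside the influence radius rho0.
            force += k_rep * (1.0 / d - 1.0 / rho0) / d**2 * (pos - obs) / d
    return pos + step * force

# Drive a point toward the goal while skirting one obstacle.
pos = np.array([0.0, 0.0])
goal = np.array([5.0, 0.0])
obstacles = [np.array([2.5, 1.0])]
for _ in range(500):
    pos = apf_step(pos, goal, obstacles)
```

The resulting waypoint sequence would serve as the reference trajectory that a tracking controller (such as the super-twisting algorithm mentioned above) then follows.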
|
|
MoCT10 |
Room T10 |
Biological Cell Manipulation |
Regular session |
Chair: Hayakawa, Takeshi | Chuo University |
Co-Chair: Yaxiaer, Yalikun | Nara Institute of Science and Technology |
|
14:00-14:15, Paper MoCT10.1 | |
>On-Chip Integration of Ultra-Thin Glass Cantilever for Physical Property Measurement Activated by Femtosecond Laser Impulse |
|
Tang, Tao | Nara Institute of Science and Technology |
Hao, Yansheng | Nara Institute of Science and Technology |
Shen, Yigang | Osaka University |
Tanaka, Yo | Riken |
Huang, Ming | Nara Institute of Science and Technology |
Hosokawa, Yoichiroh | NAIST |
Li, Ming | Macquarie University |
Yaxiaer, Yalikun | Nara Institute of Science and Technology |
Keywords: Biological Cell Manipulation, Soft Sensors and Actuators
Abstract: Under the excitation of acoustic radiation, the amount of energy absorbed and rebounded by cells is related to their mechanical properties, e.g., stiffness, shape, and weight. In this paper, a femtosecond laser-activated micro-detector is designed to convert this relationship into an electrical signal. First, the acoustic radiation is generated by a femtosecond laser pulse in a microchannel and acts on neighboring cells/beads. Then, an ultra-thin glass sheet (UTGS)-based pressure sensor (cantilever) is fabricated at the bottom of the microfluidic chip to monitor changes in acoustic pressure during the detection process. In this detection system, the pressure sensor is fabricated from a 10 µm UTGS in the shape of a rectangular cantilever and functions as a detector that converts acoustic waves into a displacement response. Based on the amplitude of the detected pulses, we can directly analyze the acoustic energy, coming either from the femtosecond laser pulse or from what remains after penetrating target cells. We conducted experiments on 10 µm beads and verified the applicability of this micro-detector; the proposed method has great potential to be applied in label-free cell manipulation (i.e., sorting) as a detection mechanism.
|
|
14:15-14:30, Paper MoCT10.2 | |
>A Novel Portable Cell Sonoporation Device Based on Open-Source Acoustofluidics |
|
Song, Bin | BEIHANG UNIVERSITY |
Zhang, Wei | Beihang University |
Bai, Xue | School of Mechanical Engineering & Automation, Beihang Universit |
Feng, Lin | Beihang University |
Zhang, Deyuan | Beihang University |
Arai, Fumihito | Nagoya University |
Keywords: Biological Cell Manipulation, Micro/Nano Robots
Abstract: Sonoporation, which typically employs acoustic cavitation microbubbles, can enhance the permeability of the cell membrane, allowing foreign matter to enter cells across the natural barriers. However, the diameter nonuniformity and random distribution of microbubbles make it difficult to achieve controllable and high-efficiency sonoporation, while the complex external acoustic driving system also limits its applicability. Herein, we demonstrate a low-cost, expandable, and portable acoustofluidic device for cell sonoporation using acoustic streaming generated by oscillating sharp edges. The streaming-induced high shear forces can (i) quickly trap target cells at the tip of sharp edges and (ii) transiently modulate the permeability of the cell membrane, which is utilized to perform cell sonoporation events. Using our device, sonoporation is successfully achieved in a microbubble-free manner, with a sonoporation efficiency of more than 90%. Furthermore, our acoustic driving system is designed around the open-source Arduino prototyping platform due to its extendibility and portability. In addition to these benefits, our acoustofluidic device is simple to fabricate and operate, and it can work at a relatively low frequency (4.6 kHz). All these advantages make our novel cell sonoporation device invaluable for many biological and biomedical applications such as drug delivery and gene transfection.
|
|
14:30-14:45, Paper MoCT10.3 | |
>Robotic Micromanipulation of Biological Cells with Friction Force-Based Rotation Control |
> Video Attachment
|
|
Cui, Shuai | Nanyang Technological University |
Ang, Wei Tech | Nanyang Technological University |
Keywords: Biological Cell Manipulation, Automation at Micro-Nano Scales
Abstract: Cell manipulation is a critical procedure in related biological applications such as embryo biopsy and intracytoplasmic sperm injection (ICSI), where the biological cell is required to be oriented to the desired position. To bridge the gap between the techniques and the clinical applications, a robotic micromanipulation method, which utilizes friction forces to rotate the cell with standard micropipettes, is presented in this paper. Force models for both in-plane and out-of-plane rotations are well established and analyzed for the rotation control. For better controllability, calibration steps are also designed for adjusting the orientation of the micropipette in a more efficient way. A cell orientation recognition algorithm based on superpixel segmentation and spectral clustering is reported and achieves high validation accuracy (96%) for estimating the orientation of the oocyte. The extracted visual information further facilitates the feedback control of cell rotation. Experimental results show that the overall success rate for cell rotation control was about 95%, with an orientation precision of ±1°.
|
|
14:45-15:00, Paper MoCT10.4 | |
>Construction of Multiple Hepatic Lobule Like 3D Vascular Networks by Manipulating Magnetic Tweezers Toward Tissue Engineering |
|
Kim, Eunhye | Meijo University |
Takeuchi, Masaru | Nagoya University |
Kozuka, Taro | Meijo University |
Nomura, Takuto | Meijo University |
Ichikawa, Akihiko | Meijo University |
Hasegawa, Yasuhisa | Nagoya University |
Huang, Qiang | Beijing Institute of Technology |
Fukuda, Toshio | Meijo University |
Keywords: Biological Cell Manipulation, Micro/Nano Robots, Medical Robots and Systems
Abstract: In this paper, we construct actively perfusable, multiple hepatic lobule-like vascular networks in a 3D cellular structure by using magnetic tweezers. Without well-organized channel networks, cells in a large 3D tissue cannot receive nutrients and oxygen from the channels, and therefore die after a few days. To construct well-organized channel networks, we fabricated hepatic lobule-like vascular networks using magnetic fields in our previous work. However, the size of that hepatic lobule-like vascular network was more than five times larger than real hepatic tissue. We improve on the previous research in several ways. First, we construct a vascular network with a size similar to that of the real tissue. Second, we culture the constructed structure for a long time (more than two weeks) to verify biocompatible conditions. Third, we assemble the constructed hepatic tissues to build a large, organ-scale liver structure. Finally, an actively perfusable system has been adopted to implement a bioreactor system by adding a micro pump.
|
|
15:00-15:15, Paper MoCT10.5 | |
>Evaluations of Response Characteristics of On-Chip Gel Actuators for Various Single Cell Manipulations |
> Video Attachment
|
|
Wada, Hiroki | Chuo University |
Koike, Yuha | Chuo University |
Yokoyama, Yoshiyuki | Toyama Industrial Technology Research and Development Center |
Hayakawa, Takeshi | Chuo University |
Keywords: Micro/Nano Robots, Biological Cell Manipulation
Abstract: On-chip gel actuators are potential candidates for single cell manipulation because they can realize low-invasive manipulation of various cells. We propose an on-chip gel actuator driven by light irradiation. By patterning the gel actuator with light absorber, we can control the temperature of the actuator and drive it. The proposed drive method can realize highly localized temperature control of the gel actuator and can be applied to mass integration of on-chip gel actuators. In this study, we evaluate the heat conduction of the actuator during driving and its response characteristics as a function of various design parameters. We theoretically and experimentally evaluate the response characteristics and confirm that the response characteristics can be changed by altering the size of the light absorber. Furthermore, we show some examples of cell manipulation including trapping, transport, and sorting with various sizes of light absorber. Finally, we show proof of concept for the application of the proposed drive method for massive integration of on-chip gel actuators.
|
|
15:15-15:30, Paper MoCT10.6 | |
>Detection and Control of Air Liquid Interface with an Open-Channel Microfluidic Chip for Circulating Tumor Cells Isolation from Human Whole Blood |
> Video Attachment
|
|
Turan, Bilal | Nagoya University |
Tomori, Yusuke | Nagoya University |
Masuda, Taisuke | Nagoya University |
Weng, Ruixuan | Nagoya University |
Shen, Larina Tzu-Wei | Tsukuba University |
Matsusaka, Satoshi | Tsukuba University |
Arai, Fumihito | Nagoya University |
Keywords: Automation in Life Sciences: Biotechnology, Pharmaceutical and Health Care, Biological Cell Manipulation, Micro/Nano Robots
Abstract: We have proposed a bio-automation system to isolate and recover circulating tumor cells (CTCs) individually from whole blood. An open-channel microfluidic chip-based approach is used to isolate the CTCs. The proposed microfluidic chip design can form a stable air-liquid interface. CTCs are trapped by the gaps between the pillars of the microfluidic chip due to the capillary force associated with the meniscus of the air-liquid interface. We propose a chip design to stabilize the air-liquid interface and the sample flow speed. We introduce an image analysis algorithm to detect the position of the air-liquid interface. Using visual feedback from the image analysis algorithm, a control system is proposed to control the air-liquid interface position. We succeeded in stabilizing the flow speed, making it feasible to complete the isolation from 5 mL of whole blood within 30 min. We achieved an average air-liquid interface position error of 4 µm with a standard deviation of 7 µm. We have confirmed that the air-liquid interface position is a deciding factor for the trapping area of CTCs. By controlling the air-liquid interface position, we achieved trapping of CTCs in a narrow band at a high concentration.
|
|
MoCT11 |
Room T11 |
Micro/Nano Robotics |
Regular session |
Chair: Petruska, Andrew J. | Colorado School of Mines |
Co-Chair: Jayaram, Kaushik | University of Colorado Boulder |
|
14:00-14:15, Paper MoCT11.1 | |
>Piezoelectric Grippers for Mobile Micromanipulation |
> Video Attachment
|
|
Abondance, Tristan | Harvard University |
Jayaram, Kaushik | University of Colorado Boulder |
Jafferis, Noah T. | Harvard University |
Shum, Jennifer | Harvard University |
Wood, Robert | Harvard University |
Keywords: Grippers and Other End-Effectors, Micro/Nano Robots, Mobile Manipulation
Abstract: The ability to efficiently and precisely manipulate objects in inaccessible environments is becoming an essential requirement for many applications of mobile robots, particularly at small sizes. Here, we propose and implement a mobile micromanipulation solution using a piezoelectric microgripper integrated into a dexterous robot, HAMR (the Harvard Ambulatory MicroRobot), that has a size of approximately 4.5cm by 4cm by 2.3cm and a maximum payload of approximately 3g. Our 100mg miniature gripper is composed of recurve piezoelectric actuators that produce parallel jaw motions (stroke of 205µm at 200V) while providing high gripping forces (blocked force of 0.575N at 200V), making it effective for micromanipulation applications with tiny objects. Using this gripper, we successfully demonstrated a grasping and lifting task with an object of 1.3g and thickness of 250µm at an operating voltage of 100V. Finally, by taking advantage of the locomotion capabilities of HAMR, we demonstrate mobile manipulation by changing the position and orientation of small objects weighing up to 2.8g controlled by the movement of the robot. We expect that the addition of this novel manipulation capability will increase the effectiveness of such miniature robots for accomplishing real-world tasks.
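From the reported free stroke (205 µm) and blocked force (0.575 N) at 200 V, an effective jaw stiffness can be estimated under the standard linear piezo-actuator assumption; the stiffness figure itself is not stated in the abstract.

```python
# Linear actuator model: output force falls linearly from the blocked force
# at zero displacement to zero at the free stroke, so k = F_block / x_free.
F_block = 0.575          # N, blocked force at 200 V (from the abstract)
x_free = 205e-6          # m, free stroke at 200 V (from the abstract)
k = F_block / x_free     # effective stiffness, N/m
print(f"effective stiffness ~ {k:.0f} N/m ({k / 1000:.1f} N/mm)")
```

Under this model, the gripping force available at a given jaw opening falls roughly linearly between those two reported limits.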
|
|
14:15-14:30, Paper MoCT11.2 | |
>A Novel and Controllable Cell-Robot in Real Vascular Network for Target Tumor Therapy |
|
Feng, Yanmin | Beihang University |
Feng, Lin | Beihang University |
Dai, Yuguo | Beihang University |
Bai, Xue | School of Mechanical Engineering & Automation, Beihang Universit |
Zhang, Chaonan | Beihang University |
Chen, Yuanyuan | Beihang University |
Arai, Fumihito | Nagoya University |
Keywords: Micro/Nano Robots
Abstract: Magnetic microrobots can be propelled precisely and wirelessly in vivo using magnetic fields for targeted drug delivery and early detection. They are promising for clinical trials since magnetic fields are capable of penetrating most materials with minimal interaction and are nearly harmless to human beings. However, challenges such as the biocompatibility, biodegradation, and therapeutic effects of these robots must be resolved before this technique can advance to preclinical development. In this study, we propose a cell-robot based on macrophages for carrying drugs to kill tumors, propelled by magnetic gradient-based pulling. A custom-designed system with a strong gradient magnetic field in three-dimensional (3D) space, using the minimum number of coils, is used for precise control of the cell-robot. The cell-robots were fabricated by assembling magnetic nanoparticles (Fe3O4) and anti-cancer drugs (DOX) into macrophages for magnetic actuation and therapeutic effects. In vitro experiments show that cell-robots can be accurately transported to the destination or to a targeted cancer cell. The magnetic nanoparticles have negligible effects on the cell-robot and the organism, which makes the cell-robot safe for in vivo experiments. The drugs carried in the cell-robot can be released by near-infrared irradiation and kill the cancer cells. Further in vivo experiments prove that the cell-robot can be transported to the tumor area and release drugs to kill cancer effectively. This research provides biocompatible and biodegradable cell-robots for early tumor prevention and targeted precision therapy.
|
|
14:30-14:45, Paper MoCT11.3 | |
>Magnetized Cell-Robot Propelled by Magnetic Field for Cancer Killing |
|
Dai, Yuguo | Beihang University |
Feng, Yanmin | Beihang University |
Feng, Lin | Beihang University |
Chen, Yuanyuan | Beihang University |
Bai, Xue | School of Mechanical Engineering & Automation, Beihang Universit |
Liang, Shuzhang | Beihang University |
Song, Li | Beihang University |
Arai, Fumihito | Nagoya University |
Keywords: Micro/Nano Robots, Medical Robots and Systems, Automation at Micro-Nano Scales
Abstract: In this paper, we present a magnetized cell-robot using macrophages as templates, which can be controlled under a strong gradient magnetic field to approach and kill cancer cells in both in vitro and in vivo environments. First, we establish a magnetic control system using only four coils, which can generate a gradient field of up to 4.14 T/m by utilizing the coupled field contributed by multiple electromagnets acting in concert. Most importantly, we propose a cell-robot based on the macrophage that can be transported precisely to the vicinity of cancer cells using the strong gradient magnetic field. The cell-robot then actively phagocytoses the cancer cells and eventually kills them, achieving cancer treatment at the cellular level. This has important significance for guiding accurate in vivo targeted therapy in the future, on the premise of zero harm to the human body.
|
|
14:45-15:00, Paper MoCT11.4 | |
>Control of Magnetically-Driven Screws in a Viscoelastic Medium |
> Video Attachment
|
|
Zhang, Zhengya | University Medical Center Groningen |
Klingner, Anke | German University in Cairo |
Misra, Sarthak | University of Twente |
Khalil, Islam S.M. | University of Twente |
Keywords: Micro/Nano Robots
Abstract: Magnetically-driven screws operating in soft-tissue environments could be used to deploy localized therapy or achieve minimally invasive interventions. In this work, we characterize the closed-loop behavior of magnetic screws in an agar gel tissue phantom using a permanent magnet-based robotic system with an open configuration. Our closed-loop control strategy capitalizes on an analytical calculation of the swimming speed of the screw in viscoelastic fluids and the magnetic point-dipole approximation of magnetic fields. The analytical solution is based on the Stokes/Oldroyd-B equations, and its predictions are compared to experimental results at different actuation frequencies of the screw. Our measurements match the theoretical prediction of the analytical model below the step-out frequency of the screw, owing to the linearity of the analytical model. We demonstrate open-loop control in two-dimensional space, and point-to-point closed-loop motion control of the screw (length and diameter of 6 mm and 2 mm, respectively) with a maximum positioning error of 1.8 mm.
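The step-out behavior referenced in this abstract can be sketched as a simple speed-frequency model: below step-out the screw rotates synchronously with the field, so forward speed grows roughly linearly with frequency; above it the rotation desynchronizes and the average speed collapses. The step-out frequency, gain, and the asynchronous decay law below are illustrative assumptions, not the paper's model.

```python
def screw_speed(f, f_stepout=10.0, gain=0.5):
    """Forward speed (mm/s) of a magnetically driven screw vs. field
    rotation frequency f (Hz). Below the step-out frequency the screw
    follows the field synchronously (speed ~ gain * f); above it we
    model the average speed as decaying with the excess frequency.
    All numbers are illustrative."""
    if f <= f_stepout:
        return gain * f
    # Crude asynchronous regime: speed falls off as f_stepout / f.
    return gain * f_stepout * (f_stepout / f)

speeds = [screw_speed(f) for f in (2, 5, 10, 20)]
```

This is why the paper's linear analytical model is only expected to match the measurements below the step-out frequency.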
|
|
15:00-15:15, Paper MoCT11.5 | |
>Open-Loop Orientation Control Using Dynamic Magnetic Fields |
> Video Attachment
|
|
Petruska, Andrew J. | Colorado School of Mines |
Keywords: Micro/Nano Robots, Motion Control, Dynamics
Abstract: Remote magnetic control of soft magnetic objects has been limited to 2D orientation and 3D position. In this paper, we extend the five degree-of-freedom (5-DoF) control approach to full 6-DoF. We prove that 6-DoF control is possible for objects that have an apparent magnetic susceptibility tensor with unique eigenvalues. We further show that the object's orientation can be specified with a dynamic magnetic field and can be controlled without orientation feedback. The theory is demonstrated by rotating a soft magnetic object about each of its principal axes using a metronome-like dynamic field.
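The paper's stated condition, an apparent susceptibility tensor with unique eigenvalues, is straightforward to check numerically; the tensors below are made-up examples, not data from the paper.

```python
import numpy as np

def full_6dof_controllable(chi, tol=1e-9):
    """Return True if the symmetric apparent susceptibility tensor chi
    has three distinct eigenvalues -- the paper's stated condition for
    6-DoF control of a soft magnetic object."""
    w = np.linalg.eigvalsh(chi)          # eigenvalues, sorted ascending
    return bool(np.all(np.diff(w) > tol))

# A body with three different principal susceptibilities: 6-DoF controllable.
chi_tri = np.diag([1.0, 2.0, 3.0])
# An axially symmetric body (repeated eigenvalue): rotation about the
# symmetry axis cannot be distinguished, so full 6-DoF control fails.
chi_axi = np.diag([1.0, 1.0, 3.0])
```

The repeated-eigenvalue case corresponds geometrically to a body of revolution, whose orientation about its symmetry axis the field cannot grip.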
|
|
15:15-15:30, Paper MoCT11.6 | |
>A Manipulability Criterion for Magnetic Actuation of Miniature Swimmers with Flexible Flagellum |
> Video Attachment
|
|
Begey, Jérémy | University of Strasbourg |
Etievant, Maxime | FEMTO-ST Institute |
Quispe, Johan Edilberto | Sorbonne University, CNRS Institut Des Systèmes Intelligents Et |
Bolopion, Aude | Femto-St Institute |
Vedrines, Marc | ICube - INSA De Strasbourg |
Abadie, Joel | UFC ENSMM |
Régnier, Stéphane | Sorbonne University |
Andreff, Nicolas | Université De Franche Comté |
Renaud, Pierre | ICube AVR |
Keywords: Micro/Nano Robots, Kinematics, Automation at Micro-Nano Scales
Abstract: The use of untethered miniature swimmers is a promising trend, especially in biomedical applications. These swimmers are often operated remotely using a magnetic field, commonly generated by fixed coils that can suffer from a lack of compactness and from heating issues. The analysis of swimming capabilities is still an ongoing topic of research. In this paper, we focus on the ability of a magnetic actuation system to operate the propulsion of miniature swimmers with a flexible flagellum. As a first contribution, we present a new manipulability criterion to assess the ability of a magnetic actuation system to operate a swimming robot, i.e., to ensure a displacement in any desired direction with a fixed minimum speed. This criterion is developed thanks to an analogy with cable-driven parallel robots. As a second contribution, this manipulability criterion is exploited to identify the dexterous swimming workspace, which can be used to design new coil configurations as well as to highlight the possibilities of moving-coil systems. In particular, a case study for a planar workspace surrounded by three coils is carried out. The accompanying video illustrates the application of the proposed criterion in 3D, for a large number of coils.
|
|
MoCT12 |
Room T12 |
Micro-Scale Perception and Manipulation |
Regular session |
Chair: Liu, Ming | Hong Kong University of Science and Technology |
Co-Chair: Liu, Xinyu | University of Toronto |
|
14:00-14:15, Paper MoCT12.1 | |
>Smart-Inspect: Micro Scale Localization and Classification of Smartphone Glass Defects for Industrial Automation |
|
Bhutta, M Usman Maqbool | The Hong Kong University of Science and Technology (HKUST) |
Aslam, Shoaib | The Hong Kong University of Science and Technology (HKUST), Clea |
Yun, Peng | The Hong Kong University of Science and Technology |
Jiao, Jianhao | The Hong Kong University of Science and Technology |
Liu, Ming | Hong Kong University of Science and Technology |
Keywords: Localization, Automation Technologies for Smart Cities, Manufacturing, Maintenance and Supply Chains
Abstract: The presence of any type of defect on the glass screen of smart devices has a great impact on their quality. We present a robust semi-supervised learning framework for intelligent micro-scaled localization and classification of defects on a 16K pixel image of smartphone glass. Our model features the efficient recognition and labeling of three types of defects: scratches, light leakage due to cracks, and pits. Our method also differentiates between the defects and light reflections due to dust particles and sensor regions, which are classified as non-defect areas. We use a partially labeled dataset to achieve high robustness and excellent classification of defect and non-defect areas as compared to principal components analysis (PCA), multi-resolution and information-fusion-based algorithms. In addition, we incorporated two classifiers at different stages of our inspection framework for labeling and refining the unlabeled defects. We successfully enhanced the inspection depth-limit up to 5 microns. The experimental results show that our method outperforms manual inspection in testing the quality of glass screen samples by identifying defects on samples that have been marked as good by human inspection.
|
|
14:15-14:30, Paper MoCT12.2 | |
>An SEM-Based Nanomanipulation System for Multi-Physical Characterization of Single InGaN/GaN Nanowires |
|
Qu, Juntian | McGill University |
Wang, Renjie | McGill University |
Pan, Peng | McGill University |
Du, Linghao | University of Toronto |
Mi, Zetian | University of Michigan |
Sun, Yu | University of Toronto |
Liu, Xinyu | University of Toronto |
Keywords: Automation at Micro-Nano Scales, Micro/Nano Robots
Abstract: Functional nanomaterials possess exceptional multi-physical (e.g., mechanical, electrical and optical) properties compared with their bulk counterparts. To facilitate both synthesis and device applications of these nanomaterials, it is highly desired to characterize their multi-physical properties with high accuracy and efficiency. Nanomanipulation techniques under scanning electron microscopy (SEM) have enabled the testing of mechanical and electrical properties of various nanomaterials. However, the seamless integration of mechanical, electrical, and optical testing techniques into an SEM for triple-field-coupled characterization of single nanostructures is still unexplored. In this work, we report the first SEM-based nanomanipulation system for high-resolution mechano-optoelectronic testing of single semiconductor InGaN/GaN nanowires (NWs). A custom-made optical measurement setup was integrated onto a four-probe nanomanipulator inside an SEM, with two optical microfibers actuated by the nanomanipulator for NW excitation and emission measurement. A conductive tungsten nanoprobe and a conductive atomic force microscopy (AFM) cantilever probe were integrated onto the nanomanipulator for electrical nanoprobing of single NWs for electroluminescence (EL) measurement. The AFM probe also served as a force sensor for quantifying the contact force applied to the NW during nanoprobing. Using this unique system, we examined, for the first time, the effect of mechanical compression applied to an InGaN/GaN NW on its optoelectronic properties.
|
|
14:30-14:45, Paper MoCT12.3 | |
>Observer-Based Disturbance Control for Small-Scale Collaborative Robotics |
|
Awde, Ahmad | Université Bourgogne Franche-Comté - Sorbonne Université |
Boudaoud, Mokrane | Sorbonne Université |
Régnier, Stéphane | Sorbonne University |
Clévy, Cédric | Franche-Comté University |
Keywords: Automation at Micro-Nano Scales, Haptics and Haptic Interfaces, Micro/Nano Robots
Abstract: Collaborative robotics allows merging the best capabilities of humans and robots to perform complex tasks. This allows the user to interact with remote and directly inaccessible environments such as the micro-scale world. This interaction is made possible by the bidirectional exchange of information (displacement - force) between the user and the environment through a haptic interface. The effectiveness of the human/robot interaction is highly dependent on how the human feels the forces. This is a key point to enable humans to make the right decisions in a collaborative task. This paper discusses the design of a dynamic observer to estimate the forces applied by a human operator on a class of parallel pantograph-type haptic interfaces used to control small-scale robotic systems. The objective is to reject disturbances in order to improve the human force perception capability over a wide frequency range. A dynamic pantograph model is proposed and experimentally validated. The observer is designed on the basis of the proposed dynamic model and its efficiency in estimating the applied human force is demonstrated for the first time with pantograph-type interfaces. Experimental validation first shows the effectiveness of the perturbation observer for external human force estimation with a response time of less than 0.2 s and a mean error of less than 7 mN and then the effectiveness of the controller in improving the quality of human sensation of forces down to 10 mN.
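The idea of estimating an applied human force with a disturbance observer can be sketched on a 1-DoF mass-damper handle model: invert the nominal dynamics and low-pass filter the residual. The nominal parameters, filter constant, and test force below are assumptions for illustration, not the pantograph model from the paper.

```python
# 1-DoF handle model: m*v' + b*v = u + f_ext. The observer reconstructs
# f_ext from the measured velocity and the known command u.
m, b = 0.05, 0.2         # kg, N*s/m (assumed nominal parameters)
dt, tau = 1e-3, 0.02     # time step and observer filter time constant
alpha = dt / (tau + dt)  # first-order low-pass coefficient

def simulate(f_ext=0.1, u=0.0, steps=2000):
    """Simulate the plant and the observer; return the force estimate."""
    v, v_prev, f_hat = 0.0, 0.0, 0.0
    for _ in range(steps):
        # Plant (forward Euler integration).
        v += dt / m * (u + f_ext - b * v)
        # Observer: residual = m*dv/dt + b*v - u  ->  estimate of f_ext,
        # smoothed by a first-order low-pass filter.
        resid = m * (v - v_prev) / dt + b * v - u
        f_hat += alpha * (resid - f_hat)
        v_prev = v
    return f_hat

f_hat = simulate()   # converges toward the applied 0.1 N
```

The filter time constant trades estimation bandwidth against noise rejection, which is the same trade-off the paper addresses for force perception over a wide frequency range.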
|
|
14:45-15:00, Paper MoCT12.4 | |
>Robust Micro-Particle Manipulation in a Microfluidic Channel Network Using Gravity-Induced Pressure Actuators |
> Video Attachment
|
|
Lee, Donghyeon | Pohang University of Science and Technology(POSTECH) |
Lee, Woongyong | POSTECH |
Chung, Wan Kyun | POSTECH |
Kim, Keehoon | POSTECH, Pohang University of Science and Technology |
Keywords: Automation at Micro-Nano Scales, Biological Cell Manipulation, Mechanism Design
Abstract: Robust particle manipulation is a challenging but essential technique for single-cell analysis and processing of microfluidic devices. This paper proposes a micro-particle manipulation system with a microfluidic channel network. We built gravity-induced pressure actuators, which can generate high-resolution output pressure with a wide range so that the multiple particles can be delivered from the inlet of the chip. In this paper, we studied how to model the proposed multi-input-single-output system and sources of disturbances, and designed a robust controller using disturbance observer technique. The performance of the proposed system was verified through experiments.
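A gravity-induced pressure actuator sets its output through the hydrostatic head P = ρgh, which is what gives it a wide range with fine resolution: a small change in reservoir height yields a small, repeatable change in driving pressure. The fluid and height values below are illustrative assumptions, not the paper's design numbers.

```python
RHO = 1000.0   # kg/m^3, density of water (assumed working fluid)
G = 9.81       # m/s^2, gravitational acceleration

def head_pressure(h_m):
    """Hydrostatic pressure (Pa) from a liquid column of height h_m (m)."""
    return RHO * G * h_m

# A 1 mm change in reservoir height changes the driving pressure by ~9.8 Pa,
# illustrating why a gravity head works as a fine-grained pressure source.
dP_per_mm = head_pressure(1e-3)
```

Stacking a coarse height stage with a fine one would then cover a wide pressure range at high resolution, in the spirit of the actuators described above.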
|
|
15:00-15:15, Paper MoCT12.5 | |
>Deep Learning-Based Autonomous Scanning Electron Microscope |
> Video Attachment
|
|
Jang, Jonggyu | Ulsan National Institute of Science and Technology (UNIST) |
Lyu, Hyeonsu | Ulsan National Institute of Science and Technology (UNIST) |
Yang, Hyun Jong | Pohang University of Science and Technology (POSTECH) |
Oh, Moohyun | Egovid Inc |
Lee, Junhee | Coxem Co. Ltd |
Keywords: Autonomous Agents, Reinforcement Learning, Computer Vision for Automation
Abstract: By virtue of their ultra-high resolution, scanning electron microscopes (SEMs) are essential to study the topography, morphology, composition, and crystallography of materials, and thus are widely used for advanced research in physics, chemistry, pharmacy, geology, etc. The major hindrance to using SEMs is that obtaining high-quality images requires professional control of many parameters. Therefore, it is not an easy task even for an experienced researcher to get high-quality sample images without any help from SEM experts. In this paper, we propose and implement a deep learning-based autonomous SEM machine, which assesses image quality and controls parameters autonomously to obtain high-quality sample images just as human experts do. This world's first autonomous SEM machine may be the first step in bringing SEMs, previously used only for advanced research due to their difficulty of use, into much broader applications such as education, manufacturing, and mechanical diagnosis, which were previously served by optical microscopes.
|
|
MoCT13 |
Room T13 |
Computer Vision for Medical Robotics |
Regular session |
Chair: Yin, Hu | Beihang University |
Co-Chair: Hannaford, Blake | University of Washington |
|
14:00-14:15, Paper MoCT13.1 | |
>The Application of Navigation Technology for the Medical Assistive Devices Based on Aruco Recognition Technology |
|
Tian, Weihan | Beihang University |
Chen, Diansheng | Beihang University |
Yang, Zihao | Beihang University |
Yin, Hu | Beihang University |
Keywords: Visual Servoing, Service Robots, Visual-Based Navigation
Abstract: In order to improve the convenience of operation of medical assistive devices and reduce use and maintenance costs, ArUco recognition technology is applied to the navigation and positioning of vision-guided electric assistive devices. First, the differential-control kinematic model of the electric wheelchair is analyzed, and we discuss the feasibility of ArUco recognition technology for medical assistive devices. The camera on the wheelchair captures ArUco marker data and transmits it to the controller, which calculates the position and posture of the electric wheelchair and provides a reference for its next movement. Combined with the kinematic model of the electric wheelchair, this method realizes navigation and positioning of the wheelchair. Experiments show that vision guidance of the electric wheelchair based on ArUco recognition is accurate, stable, and low cost, and can be flexibly applied to auxiliary equipment in medical institutions.
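The differential-drive kinematic model mentioned in this abstract can be sketched as a standard unicycle update, propagated between ArUco pose fixes; the wheel radius and track width below are illustrative values, not the paper's.

```python
import math

def diff_drive_step(x, y, theta, wl, wr, r=0.15, L=0.55, dt=0.05):
    """One Euler step of the differential-drive kinematics.

    x, y, theta: pose in the world frame (m, m, rad);
    wl, wr: left/right wheel angular speeds (rad/s);
    r: wheel radius (m), L: track width (m) -- illustrative values.
    """
    v = r * (wr + wl) / 2.0   # forward speed of the chassis
    w = r * (wr - wl) / L     # yaw rate
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += w * dt
    return x, y, theta

# Equal wheel speeds -> straight-line motion along the current heading.
x, y, th = 0.0, 0.0, 0.0
for _ in range(20):
    x, y, th = diff_drive_step(x, y, th, 2.0, 2.0)
```

Each ArUco detection would reset (x, y, theta) to the camera-derived pose, with this model filling in the motion between fixes.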
|
|
14:15-14:30, Paper MoCT13.2 | |
>Endoscopic Navigation Based on Three-Dimensional Structure Registration |
|
Han, Minghui | Nankai University |
Dai, Yu | Nankai University |
Zhang, Jianxun | Nankai University |
Keywords: Visual-Based Navigation, Computer Vision for Medical Robotics, Computer Vision for Automation
Abstract: Surgical navigation is challenging in complicated multi-branch structures such as intrarenal collecting systems or bronchi. The objective of this work is to help surgeons quickly establish the correspondence between intraoperative endoscopic images and preoperative CT data. An endoscopic navigation method is proposed based on three-dimensional structure registration. It comprises three parts. First, a reconstruction method is presented to obtain three-dimensional information of porous structures from endoscopic images, combining image enhancement, structure-from-motion, and template matching. Second, a slicing-based hole search strategy is given for detecting and extracting three-dimensional porous structures from CT data. Third, a similarity measurement algorithm is developed for registering endoscopic images to CT data. The performance of this work is evaluated on data from ureteroscopic holmium laser lithotripsy, and the results demonstrate its accuracy, robustness, and low time cost.
|
|
14:30-14:45, Paper MoCT13.3 | |
>Z-Net: An Anisotropic 3D DCNN for Medical CT Volume Segmentation |
|
Li, Peichao | Imperial College London |
Zhou, Xiao-Yun | Imperial College London |
Wang, Zhaoyang | Imperial College London |
Yang, Guang-Zhong | Shanghai Jiao Tong University |
Keywords: Computer Vision for Medical Robotics, Object Detection, Segmentation and Categorization, Novel Deep Learning Methods
Abstract: Accurate volume segmentation from Computed Tomography (CT) scans is a common prerequisite for pre-operative planning, intra-operative guidance, and quantitative assessment of therapeutic outcomes in robot-assisted Minimally Invasive Surgery (MIS). 3D Deep Convolutional Neural Networks (DCNNs) are a viable solution for this task, but are memory intensive. In practice, small isotropic patches are cropped from the original large CT volume to mitigate this issue, but this may cause discontinuities between adjacent patches and severe class imbalance within individual sub-volumes. This paper presents a new 3D DCNN framework, namely Z-Net, to tackle the discontinuity and class-imbalance issues by preserving a full field-of-view of the objects in the XY planes using anisotropic spatial separable convolutions. The proposed Z-Net can be seamlessly integrated into existing 3D DCNNs with isotropic convolutions, such as 3D U-Net and V-Net, with improved volume segmentation Intersection over Union (IoU) of up to 12.6%. Detailed validation of Z-Net is provided for CT aortic, liver and lung segmentation, demonstrating the effectiveness and practical value of Z-Net for intra-operative 3D navigation in robot-assisted MIS.
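The anisotropic spatial separable convolution at the core of this idea can be illustrated numerically: a 3x3x1 in-plane pass followed by a 1x1x3 through-plane pass uses 9 + 3 = 12 weights instead of 27, and when the dense 3x3x3 kernel factorizes as their outer product the two are exactly equivalent. A minimal numpy sketch (Z-Net's actual layers, channels, and training are not reproduced here):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def conv3d(vol, kernel):
    """Naive 'valid' 3D correlation via sliding windows."""
    win = sliding_window_view(vol, kernel.shape)
    return np.einsum('xyzijk,ijk->xyz', win, kernel)

rng = np.random.default_rng(42)
vol = rng.normal(size=(8, 8, 8))

k2d = rng.normal(size=(3, 3))   # in-plane (XY) kernel
k1d = rng.normal(size=(3,))     # through-plane (Z) kernel

# Anisotropic separable pass: 3x3x1 then 1x1x3 (12 weights).
sep = conv3d(conv3d(vol, k2d[:, :, None]), k1d[None, None, :])

# Equivalent dense 3x3x3 kernel: the outer product (27 weights).
dense = conv3d(vol, k2d[:, :, None] * k1d[None, None, :])
```

In a network the two passes are separate learned layers (typically with a nonlinearity between them), so they are more expressive than this exact factorization while keeping the in-plane field of view full.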
|
|
14:45-15:00, Paper MoCT13.4 | |
>LC-GAN: Image-To-Image Translation Based on Generative Adversarial Network for Endoscopic Images |
|
Lin, Shan | University of Washington |
Qin, Fangbo | Institute of Automation, Chinese Academy of Sciences |
Li, Yangming | Rochester Institute of Technology |
Bly, Randall | University of Washington |
Moe, Kris | University of Washington |
Hannaford, Blake | University of Washington |
Keywords: Computer Vision for Medical Robotics, Medical Robots and Systems
Abstract: The intelligent perception of endoscopic vision is appealing in many computer-assisted and robotic surgeries. Achieving good vision-based analysis with deep learning techniques requires large labeled datasets, but manual data labeling is expensive and time-consuming in medical problems. When applying a trained model to a different but related dataset, a new labeled dataset may be required for training to avoid performance degradation. In this work, we investigate a novel cross-domain strategy to reduce the need for manual data labeling by proposing an image-to-image translation model called live-cadaver GAN (LC-GAN) based on generative adversarial networks (GANs). More specifically, we consider a situation in which a labeled cadaveric surgery dataset is available while the task is instrument segmentation on a live surgery dataset. We train LC-GAN to learn the mappings between the cadaveric and live datasets. To achieve instrument segmentation on live images, we first translate the live images to fake-cadaveric images with LC-GAN, and then perform segmentation on the fake-cadaveric images with models trained on the real cadaveric dataset. With this cross-domain strategy, we fully leverage the labeled cadaveric dataset for segmentation on live images without the need to label the live dataset again. Two generators with different architectures are designed for LC-GAN to make use of the deep feature representation learned from the cadaveric-image-based instrument segmentation task. Moreover, we propose a structural similarity loss and a segmentation consistency loss to improve semantic consistency during translation. The results demonstrate that LC-GAN achieves better image-to-image translation results and leads to improved segmentation performance in the proposed cross-domain segmentation task.
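The two consistency penalties can be illustrated with toy, global versions: a structural-similarity term comparing an image with its translation, and a term penalizing disagreement between segmentation predictions before and after translation. The paper's losses operate on network outputs during training with windows and weights not given in the abstract, so the forms below are simplified assumptions:

```python
import numpy as np

def ssim_loss(x, y, c1=0.01**2, c2=0.03**2):
    """1 - (global) SSIM between two images in [0, 1]; windowed SSIM
    would apply this per local patch and average."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    ssim = ((2 * mx * my + c1) * (2 * cov + c2) /
            ((mx**2 + my**2 + c1) * (vx + vy + c2)))
    return 1.0 - ssim

def seg_consistency_loss(mask_a, mask_b):
    """Mean absolute disagreement between two soft segmentation
    masks in [0, 1] (e.g. before vs. after translation)."""
    return np.abs(mask_a - mask_b).mean()
```

An identical pair gives zero loss in both terms, while a contrast-inverted image drives the SSIM term above 1 because the covariance turns negative.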
|
|
MoCT14 |
Room T14 |
Surgical Robotics: Control |
Regular session |
Chair: Krieger, Axel | University of Maryland |
Co-Chair: Eagleson, Roy | University of Western Ontario |
|
14:00-14:15, Paper MoCT14.1 | |
>DaVinciNet: Joint Prediction of Motion and Surgical State in Robot-Assisted Surgery |
> Video Attachment
|
|
Qin, Yidan | Intuitive Surgical |
Feyzabadi, Seyedshams | UC Merced |
Allan, Max | Intuitive Surgical |
Burdick, Joel | California Institute of Technology |
Azizian, Mahdi | Intuitive Surgical |
Keywords: Surgical Robotics: Laparoscopy, Deep Learning for Visual Perception, Medical Robots and Systems
Abstract: This paper presents a technique to concurrently and jointly predict the future trajectories of surgical instruments and the future state(s) of surgical subtasks in robot-assisted surgeries (RAS) using multiple input sources. Such predictions are a necessary first step towards shared control and supervised autonomy of surgical subtasks. Minute-long surgical subtasks, such as suturing or ultrasound scanning, often have distinguishable tool kinematics and visual features, and can be described as a series of fine-grained states with transition schematics. We propose daVinciNet - an end-to-end dual-task model for robot motion and surgical state predictions. daVinciNet performs concurrent end-effector trajectory and surgical state predictions using features extracted from multiple data streams, including robot kinematics, endoscopic vision, and system events. We evaluate our proposed model on an extended Robotic Intra-Operative Ultrasound (RIOUS+) imaging dataset collected on a da Vinci Xi surgical system and the JHU-ISI Gesture and Skill Assessment Working Set (JIGSAWS). Our model achieves up to 93.85% short-term (0.5s) and 82.11% long-term (2s) state prediction accuracy, as well as 1.07mm short-term and 5.62mm long-term trajectory prediction errors.
|
|
14:15-14:30, Paper MoCT14.2 | |
>Hierarchical Optimization Control of Redundant Manipulator for Robot-Assisted Minimally Invasive Surgery |
> Video Attachment
|
|
Hu, Yingbai | Technische Universität München |
Su, Hang | Politecnico Di Milano |
Chen, Guang | Technical University of Munich |
Ferrigno, Giancarlo | Politecnico Di Milano |
De Momi, Elena | Politecnico Di Milano |
Knoll, Alois | Tech. Univ. Muenchen TUM |
Keywords: Medical Robots and Systems, Surgical Robotics: Laparoscopy, Motion and Path Planning
Abstract: In time-varying optimization problems, the tracking error cannot converge to zero in finite time because the optimal solution changes over time. This paper proposes a novel varying-parameter recurrent neural network (VPRNN)-based hierarchical optimization of a 7-DoF surgical manipulator for Robot-Assisted Minimally Invasive Surgery (RAMIS), which guarantees task tracking, Remote Center of Motion (RCM) constraint satisfaction, and manipulability-index optimization. A theoretically grounded hierarchical optimization framework is introduced to control multiple tasks according to their priority. Finally, the effectiveness of the proposed control strategy is demonstrated with both simulation and experimental results, which show that the proposed VPRNN-based method can optimize the three tasks simultaneously and achieves better performance than previous work.
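The hierarchical (task-priority) idea for a redundant arm can be illustrated with the classic pseudoinverse null-space scheme, in which each lower-priority task acts only in the null space of the tasks above it. This is a generic textbook sketch, not the paper's VPRNN formulation:

```python
import numpy as np

def prioritized_qdot(J1, dx1, J2, dx2):
    """Two-level task-priority resolution: the secondary task is
    realized only through motion in the primary task's null space."""
    J1p = np.linalg.pinv(J1)
    N1 = np.eye(J1.shape[1]) - J1p @ J1            # null-space projector
    qdot = J1p @ dx1                               # primary task, exactly
    qdot = qdot + np.linalg.pinv(J2 @ N1) @ (dx2 - J2 @ qdot)
    return qdot

# Toy 3-DoF example: task 1 fixes the x-rate, task 2 the y-rate;
# the remaining degree of freedom is left free.
J1 = np.array([[1.0, 0.0, 0.0]]); dx1 = np.array([1.0])
J2 = np.array([[0.0, 1.0, 0.0]]); dx2 = np.array([2.0])
qdot = prioritized_qdot(J1, dx1, J2, dx2)
```

Because the secondary correction lies in the range of the null-space projector, it cannot disturb the primary task; in an RCM setting the trocar constraint would take the top priority and tool tracking the next.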
|
|
14:30-14:45, Paper MoCT14.3 | |
>Towards Autonomous Control of Magnetic Suture Needles |
> Video Attachment
|
|
Fan, Matthew | University of Maryland, College Park |
Liu, Xiaolong | University of Maryland College Park |
Jain, Kamakshi | University of Maryland College Park |
Lerner, Daniel | University of Maryland, College Park |
Mair, Lamar | Weinberg Medical Physics, Inc |
Irving, Weinberg | Weinberg Medical Physics, Inc |
Diaz-Mercado, Yancy | University of Maryland |
Krieger, Axel | University of Maryland |
Keywords: Medical Robots and Systems, Motion and Path Planning, Surgical Robotics: Planning
Abstract: This paper proposes a magnetic needle steering controller to manipulate mesoscale magnetic suture needles for executing planned suturing motions. This is an initial step towards our research objective: enabling autonomous control of magnetic suture needles for suturing tasks in minimally invasive surgery. To demonstrate the feasibility of accurate motion control, we employ a cardinally-arranged four-coil electromagnetic system and control magnetic suture needles in a 2-dimensional environment, i.e., a Petri dish filled with viscous liquid. Unlike approaches that use only magnetic field gradients to control small magnetic agents under high-damping conditions, the dynamics of the magnetic suture needle are investigated and encoded in the controller. Based on mathematical formulations of the magnetic force and torque applied to the needle, we develop a kinematically constrained dynamic model that controls the needle to rotate and to translate only along its central axis, mimicking the behavior of surgical sutures. A current controller for the electromagnetic system, combined with closed-loop control schemes, is designed to command the magnetic suture needles to achieve desired linear and angular velocities. To evaluate control performance, we conduct experiments including needle rotation control, needle position control using discretized trajectories, and velocity control using a time-varying circular trajectory. The experimental results demonstrate that our proposed needle steering controller can perform accurate motion control of mesoscale magnetic suture needles.
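The dipole force/torque model underlying such controllers is standard: tau = m x B and F = grad(m . B) for a needle with dipole moment m in field B. A toy sketch, including an overdamped planar rotation of the needle heading toward the applied field direction (the gain, time step, and first-order dynamics are illustrative assumptions, not the paper's dynamic model):

```python
import numpy as np

def dipole_torque(m, B):
    """Torque on a magnetic dipole: tau = m x B."""
    return np.cross(m, B)

def dipole_force(m, B_jac):
    """Force on a dipole in a field gradient, F = grad(m . B):
    B_jac is the 3x3 Jacobian dB_i/dx_j, so F_j = sum_i m_i dB_i/dx_j."""
    return np.asarray(B_jac).T @ np.asarray(m)

# Toy planar rotation: the needle heading relaxes toward the applied
# field direction under tau = m x B with overdamped dynamics.
theta, field_angle, dt, gain = 0.0, np.pi / 3, 0.01, 5.0
for _ in range(2000):
    tau_z = gain * np.sin(field_angle - theta)  # |m||B| sin(angle error)
    theta += tau_z * dt
```

The torque aligns the needle with the field while the gradient force translates it, which is why the paper constrains the model so translation occurs only along the needle's central axis.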
|
|
14:45-15:00, Paper MoCT14.4 | |
>Supervised Semi-Autonomous Control for Surgical Robot Based on Bayesian Optimization |
|
Chen, Junhong | Imperial College London |
Zhang, Dandan | Imperial College London |
Munawar, Adnan | Johns Hopkins University |
Zhu, Ruiqi | Imperial College London |
Lo, Benny Ping Lai | Imperial College London |
Fischer, Gregory Scott | Worcester Polytechnic Institute, WPI |
Yang, Guang-Zhong | Shanghai Jiao Tong University |
Keywords: Medical Robots and Systems, Surgical Robotics: Laparoscopy
Abstract: The recent development of Robot-Assisted Minimally Invasive Surgery (RAMIS) has brought much benefit, easing the performance of complex Minimally Invasive Surgery (MIS) tasks and leading to better clinical outcomes. Compared to direct master-slave manipulation, semi-autonomous control of the surgical robot can enhance the efficiency of the operation, particularly for repetitive tasks. However, operating in a highly dynamic in-vivo environment is complex, and supervisory control functions should be included to ensure flexibility and safety during the autonomous control phase. This paper presents a haptic rendering interface to enable supervised semi-autonomous control for a surgical robot. Bayesian optimization is used to tune user-specific parameters during the surgical training process. User studies were conducted on a customized simulator for validation. Detailed comparisons are made between operation with and without the supervised semi-autonomous control mode in terms of the number of clutching events, task completion time, master-robot end-effector trajectory, and average control speed of the slave robot. The effectiveness of the Bayesian optimization is also evaluated, demonstrating that the optimized parameters can significantly improve users' performance. Results indicate that the proposed control method can reduce the operator's workload and enhance operation efficiency.
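Bayesian optimization of a user-specific parameter can be sketched with a Gaussian-process surrogate and a lower-confidence-bound acquisition. The tuned parameter, the objective (a hypothetical noiseless "task completion time" as a function of a single assistance gain), the kernel, and all constants below are assumptions for illustration; the abstract does not specify them:

```python
import numpy as np

def rbf(a, b, ls=0.25):
    """Squared-exponential kernel on scalar inputs."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

def gp_posterior(X, y, Xs, noise=1e-6):
    """GP posterior mean/variance at query points Xs."""
    ym = y - y.mean()
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    mu = Ks.T @ np.linalg.solve(K, ym) + y.mean()
    var = np.clip(1.0 - np.sum(Ks * np.linalg.solve(K, Ks), axis=0),
                  1e-12, None)
    return mu, var

def bayes_opt(f, lo=0.0, hi=1.0, n_init=3, n_iter=12, kappa=2.0):
    """Minimize f on [lo, hi]: fit a GP surrogate, sample where the
    lower confidence bound mu - kappa*sigma is smallest."""
    rng = np.random.default_rng(1)
    X = rng.uniform(lo, hi, n_init)
    y = np.array([f(x) for x in X])
    grid = np.linspace(lo, hi, 201)
    for _ in range(n_iter):
        mu, var = gp_posterior(X, y, grid)
        x_next = grid[np.argmin(mu - kappa * np.sqrt(var))]
        X = np.append(X, x_next)
        y = np.append(y, f(x_next))
    return X[np.argmin(y)]

# Hypothetical objective with its best "gain" at 0.62.
best_gain = bayes_opt(lambda g: (g - 0.62) ** 2 + 1.0)
```

In a user study the objective would be a noisy measurement per trial, so a larger noise term and an acquisition robust to observation noise would be appropriate.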
|
|
15:00-15:15, Paper MoCT14.5 | |
>Parallel Haptic Rendering for Orthopedic Surgery Simulators |
|
Faieghi, Reza | Ryerson University |
Atashzar, S. Farokh | New York University (NYU), US |
Tutunea-Fatan, O. Remus | Western University |
Eagleson, Roy | University of Western Ontario |
Keywords: Haptics and Haptic Interfaces, Virtual Reality and Interfaces, Computational Geometry
Abstract: This study introduces a haptic rendering algorithm for simulating surgical bone machining operations. The proposed algorithm is a new variant of the voxmap pointshell method, where the bone and surgical tool geometries are represented by voxels and points, respectively. The algorithm encompasses computationally efficient methods in a data parallel framework to rapidly query intersecting voxel-point pairs, remove intersected bone voxels to replicate bone removal and compute elemental cutting forces. A new force model is adopted from the composite machining literature to calculate the elemental forces with higher accuracy. The integration of the algorithm with graphics rendering for visuo-haptic simulations is also outlined. The algorithm is benchmarked against state-of-the-art methods and is validated against prior experimental data collected during bone drilling and glenoid reaming trials. The results indicate improvements in computational efficiency and the force/torque prediction accuracy compared to the existing methods, which can be ultimately translated into higher realism in simulating orthopedic procedures.
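The core voxmap-pointshell query can be sketched serially: tool points falling in occupied bone voxels each contribute a penalty force, and the touched voxels are cleared to mimic material removal. The paper's contribution includes a data-parallel formulation and a machining-literature force model, neither of which is reproduced here; the stiffness constant and force scaling below are illustrative assumptions:

```python
import numpy as np

def haptic_step(voxmap, origin, voxel_size, points, normals, k=50.0):
    """One voxmap-pointshell step: find tool points inside occupied
    bone voxels, accumulate a penalty force along each point's
    normal, and clear touched voxels to replicate bone removal."""
    idx = np.floor((points - origin) / voxel_size).astype(int)
    in_grid = np.all((idx >= 0) & (idx < np.array(voxmap.shape)), axis=1)
    force = np.zeros(3)
    for p, cell in zip(np.where(in_grid)[0], idx[in_grid]):
        i, j, kk = cell
        if voxmap[i, j, kk]:
            force -= k * voxel_size * normals[p]  # elemental cutting force
            voxmap[i, j, kk] = False              # "drilled away"
    return force

# A 10^3 solid bone block with 1 mm voxels; one tool point pushing in +z.
bone = np.ones((10, 10, 10), dtype=bool)
pts = np.array([[5.5, 5.5, 5.5]])        # tool point inside the block
nrm = np.array([[0.0, 0.0, 1.0]])        # that point's outward normal
f = haptic_step(bone, origin=np.zeros(3), voxel_size=1.0,
                points=pts, normals=nrm)
```

The second call at the same pose returns zero force because the voxel has been removed; a parallel implementation would evaluate all point-voxel pairs concurrently to meet the ~1 kHz haptic update rate.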
|