Last updated on May 31, 2021. This conference program is tentative and subject to change.
Technical Program for Tuesday June 1, 2021
|
TuAT1 Award Session, Time zone: GMT+1 |
Automation Award Session |
|
|
Chair: Mahmoudian, Nina | Purdue University |
Co-Chair: Xiao, Xiao | Southern University of Science and Technology |
|
02:00-02:15, Paper TuAT1.1 |
Fabric Defect Detection Using Tactile Information |
|
Long, Xingming | Tsinghua University |
Zhang, Yifan | Tsinghua University |
Fang, Bin | Tsinghua University |
Luo, GuoYi | Tsinghua University |
Sun, Fuchun | Tsinghua University |
|
02:15-02:30, Paper TuAT1.2 |
A General-Purpose Anomalous Scenario Synthesizer for Rotary Equipment |
|
Yeung, Yip Fun | MIT |
Alshehri, Ali | Massachusetts Institute of Technology |
Wampler, Lois | Massachusetts Institute of Technology |
Furokawa, Mikio | MIT |
Hirano, Takayuki | Japan Steel Works |
Youcef-Toumi, Kamal | Massachusetts Institute of Technology |
|
02:30-02:45, Paper TuAT1.3 |
Robust Trajectory Optimization Over Uncertain Terrain with Stochastic Complementarity |
|
Drnach, Luke | Georgia Institute of Technology |
Zhao, Ye | Georgia Institute of Technology |
|
02:45-03:00, Paper TuAT1.4 |
Automated Fabrication of the High-Fidelity Cellular Micro-Scaffold through Proportion-Corrective Control of the Photocuring Process |
|
Li, Xin | Beijing Institute of Technology |
Wang, Huaping | Beijing Institute of Technology |
Shi, Qing | Beijing Institute of Technology |
Liu, JiaXin | Beijing Institute of Technology |
Xin, Zhanhua | Beijing Institute of Technology |
Dong, Xinyi | Beijing Institute of Technology |
Huang, Qiang | Beijing Institute of Technology |
Fukuda, Toshio | Meijo University |
|
TuAT2 Award Session, Time zone: GMT+1 |
Manipulation Award Session |
|
|
Chair: Diller, Eric D. | University of Toronto |
|
02:00-02:15, Paper TuAT2.1 |
StRETcH: A Soft to Resistive Elastic Tactile Hand |
|
Matl, Carolyn | University of California, Berkeley |
Koe, Josephine | University of California, Berkeley |
Bajcsy, Ruzena | University of California, Berkeley |
|
02:15-02:30, Paper TuAT2.2 |
A Parallelized Iterative Algorithm for Real-Time Simulation of Long Flexible Cable Manipulation |
|
Lee, Jeongmin | Seoul National University |
Lee, Minji | Seoul National University |
Yoon, Jaemin | Seoul National University |
Lee, Dongjun | Seoul National University |
|
02:30-02:45, Paper TuAT2.3 |
KPAM 2.0: Feedback Control for Category-Level Robotic Manipulation |
|
Gao, Wei | Massachusetts Institute of Technology |
Tedrake, Russ | Massachusetts Institute of Technology |
|
02:45-03:00, Paper TuAT2.4 |
Policy Blending and Recombination for Multimodal Contact-Rich Tasks |
|
Narita, Tetsuya | Sony Corporation |
Kroemer, Oliver | Carnegie Mellon University |
|
TuAT3 Award Session, Time zone: GMT+1 |
Robot Vision Award Session |
|
|
Chair: Li, Zhijun | University of Science and Technology of China |
|
02:00-02:15, Paper TuAT3.1 |
CodeVIO: Visual-Inertial Odometry with Learned Optimizable Dense Depth |
|
Zuo, Xingxing | Zhejiang University |
Merrill, Nathaniel | University of Delaware |
Li, Wei | Inceptio |
Liu, Yong | Zhejiang University |
Pollefeys, Marc | ETH Zurich |
Huang, Guoquan (Paul) | University of Delaware |
|
02:15-02:30, Paper TuAT3.2 |
Interval-Based Visual-LiDAR Sensor Fusion |
|
Voges, Raphael | Leibniz Universität Hannover |
Wagner, Bernardo | Leibniz Universität Hannover |
|
02:30-02:45, Paper TuAT3.3 |
OmniDet: Surround View Cameras Based Multi-Task Visual Perception Network for Autonomous Driving |
|
Ravi Kumar, Varun | Valeo |
Yogamani, Senthil | Valeo Vision Systems |
Rashed, Hazem | Valeo |
Sistu, Ganesh | Valeo |
Witt, Christian | Valeo |
Leang, Isabelle | Valeo |
Milz, Stefan | Valeo Schalter und Sensoren GmbH |
Mäder, Patrick | Technische Universität Ilmenau |
|
02:45-03:00, Paper TuAT3.4 |
VIODE: A Simulated Dataset to Address the Challenges of Visual-Inertial Odometry in Dynamic Environments |
|
Minoda, Koji | University of Tokyo |
Schilling, Fabian | EPFL |
Wüest, Valentin | EPFL |
Floreano, Dario | École Polytechnique Fédérale de Lausanne (EPFL) |
Yairi, Takehisa | University of Tokyo |
|
TuAT4 Award Session, Time zone: GMT+1 |
Best Paper Award Session |
|
|
Co-Chair: Chen, Weinan | Southern University of Science and Technology |
|
02:00-02:15, Paper TuAT4.1 |
An Artin Braid Group Representation of Knitting Machine State with Applications to Validation and Optimization of Fabrication Plans |
|
Lin, Jenny | Carnegie Mellon University |
McCann, James | Carnegie Mellon University |
|
02:15-02:30, Paper TuAT4.2 |
Extrinsic Contact Sensing with Relative-Motion Tracking from Distributed Tactile Measurements |
|
Ma, Daolin | Massachusetts Institute of Technology |
Dong, Siyuan | MIT |
Rodriguez, Alberto | Massachusetts Institute of Technology |
|
02:30-02:45, Paper TuAT4.3 |
Distributed Coordinated Path Following Using Guiding Vector Fields |
|
Yao, Weijia | University of Groningen |
Garcia de Marina, Hector | Universidad Complutense de Madrid |
Sun, Zhiyong | Eindhoven University of Technology (TU/e) |
Cao, Ming | University of Groningen |
|
02:45-03:00, Paper TuAT4.4 |
Sim-To-Real Learning of All Common Bipedal Gaits Via Periodic Reward Composition |
|
Siekmann, Jonah | Oregon State University |
Godse, Yesh | Oregon State University |
Fern, Alan | Oregon State University |
Hurst, Jonathan | Oregon State University |
|
TuAT5 Virtual-Asia, Time zone: GMT+1 |
Motion Planning: Learning-Based Prediction |
|
|
Chair: Liu, Changliu | Carnegie Mellon University |
|
02:00-02:15, Paper TuAT5.1 |
Uncertainty-Aware Non-Linear Model Predictive Control for Human-Following Companion Robot |
|
Sekiguchi, Shunichi | Keio University |
Yorozu, Ayanori | University of Tsukuba |
Kuno, Kazuhiro | Equos Research Co., Ltd |
Okada, Masaki | Equos Research Co., Ltd |
Watanabe, Yutaka | Equos Research Co., Ltd |
Takahashi, Masaki | Keio University |
Keywords: Optimization and Optimal Control, Machine Learning for Robot Control, Human-Aware Motion Planning
Abstract: For a companion robot that follows a person as an assistant, predicting human walking is important for producing proactive movement that helps maintain an appropriate following area determined by the person's personal space. However, fully trusting the prediction may obstruct human walking, because the prediction is not always accurate. Hence, we estimate the uncertainty (i.e., entropy) of the prediction to enable the robot to move without overconfident motion and without falling behind the person it follows. To incorporate this prediction uncertainty into the controller, we introduce a reliability value that changes based on the entropy of the prediction. This value expresses the extent to which the controller should trust the prediction result, and it affects the cost function of our controller. We propose an uncertainty-aware robot controller based on nonlinear model predictive control to realize natural human following. We found that our uncertainty-aware control system can produce appropriate robot movement, such as not obstructing human walking and avoiding delay, in both simulations using actual human walking data and real-robot experiments.
|
|
02:15-02:30, Paper TuAT5.2 |
Path Planning in Uncertain Ocean Currents Using Ensemble Forecasts |
|
Yoo, Chanyeol | University of Technology Sydney |
Lee, James Ju Heon | University of Technology Sydney |
Anstee, Stuart David | Defence Science and Technology Group |
Fitch, Robert | University of Technology Sydney |
Keywords: Marine Robotics, Motion and Path Planning
Abstract: We present a path planning framework for marine robots subject to uncertain ocean currents that exploits data from ensemble forecasting, which is a technique for current prediction used in oceanography. Ensemble forecasts represent a distribution of predicted currents as a set of flow fields that are considered to be equally likely. We show that the typical approach of computing the vector-wise mean and variance over this set can yield meaningless results, and propose an alternative approach that considers each flow field in the ensemble simultaneously. Our framework finds a sequence of vehicle controls that minimises the root-mean-square error distance (RMSE) over the full set of ensemble-induced trajectories. The key to achieving computational efficiency in this approach is our use of Monte Carlo tree search (MCTS) with a specialised heuristic that improves convergence rate while preserving asymptotic optimality and the anytime property. We demonstrate our results using real ensemble forecasts provided by the Australian Bureau of Meteorology, and provide comparisons with the deterministic mean-based approach where we observe RMSE reductions of 92% and 43% in two example scenarios. Further, we argue that the framework can be used in a plan-as-you-go manner where ensemble forecasts change over time. These results help to introduce ensemble forecasts as a viable source of data to improve path planning in marine robotics.
|
|
02:30-02:45, Paper TuAT5.3 |
Distributed Motion Coordination Using Convex Feasible Set Based Model Predictive Control |
|
Zhou, Hongyu | Norwegian University of Science and Technology |
Liu, Changliu | Carnegie Mellon University |
Keywords: Intelligent Transportation Systems, Distributed Robot Systems, Motion and Path Planning
Abstract: The implementation of optimization-based motion coordination approaches in real world multi-agent systems remains challenging due to their high computational complexity and potential deadlocks. This paper presents a distributed model predictive control (MPC) approach based on convex feasible set (CFS) algorithm for multi-vehicle motion coordination in autonomous driving. By using CFS to convexify the collision avoidance constraints, collision-free trajectories can be computed in real time. We analyze the potential deadlocks and show that a deadlock can be resolved by changing vehicles' desired speeds. The MPC structure ensures that our algorithm is robust to low-level tracking errors. The proposed distributed method has been tested in multiple challenging multi-vehicle environments, including unstructured road, intersection, crossing, platoon formation, merging, and overtaking scenarios. The numerical results and comparison with other approaches (including a centralized MPC and reciprocal velocity obstacles) show that the proposed method is computationally efficient and robust, and avoids deadlocks.
|
|
02:45-03:00, Paper TuAT5.4 |
Risk Conditioned Distributional Soft Actor-Critic for Risk-Sensitive Navigation |
|
Choi, Jinyoung | NAVER LABS |
Dance, Christopher | NAVER LABS Europe |
Kim, Jung-eun | NAVER LABS |
Hwang, Seulbin | NAVER LABS |
Park, Kyung-sik | NAVER LABS |
Keywords: Deep Learning Methods, Collision Avoidance, Motion and Path Planning
Abstract: Modern navigation algorithms based on deep reinforcement learning (RL) have proven to be efficient and robust. However, most deep RL algorithms operate in a risk-neutral manner, making no special attempt to shield users from 'outcomes that may hurt the most', even if such shielding might cause little loss of performance. Furthermore, such algorithms typically make no provisions to ensure safety in the presence of inaccuracies in the models on which they were trained, beyond adding a cost-of-collision and some domain randomization while training, in spite of the formidable complexity of the environments in which they operate. In this paper, we present a novel distributional RL algorithm that not only learns an uncertainty-aware policy, but can also change its risk measure without expensive fine-tuning or retraining. Our method shows superior performance and safety over baselines in partially-observed navigation tasks. We also demonstrate that agents trained using our method can adapt their policies to a wide range of risk measures in a zero-shot manner.
|
|
TuAT6 Virtual-Asia, Time zone: GMT+1 |
Motion Planning and Control II |
|
|
Chair: Cheng, Shing Shin | The Chinese University of Hong Kong |
Co-Chair: Lu, Guoyu | Rochester Institute of Technology |
|
02:00-02:15, Paper TuAT6.1 |
NEO: A Novel Expeditious Optimisation Algorithm for Reactive Motion Control of Manipulators |
|
Haviland, Jesse | Queensland University of Technology |
Corke, Peter | Queensland University of Technology |
Keywords: Motion Control, Collision Avoidance
Abstract: We present NEO, a fast and purely reactive motion controller for manipulators which can avoid static and dynamic obstacles while moving to the desired end-effector pose. Additionally, our controller maximises the manipulability of the robot during the trajectory, while avoiding joint position and velocity limits. NEO is wrapped into a strictly convex quadratic programme which, when considering obstacles, joint limits, and manipulability on a 7 degree-of-freedom robot, is generally solved in a few ms. While NEO is not intended to replace state-of-the-art motion planners, our experiments show that it is a viable alternative for scenes with moderate complexity while also being capable of reactive control. For more complex scenes, NEO is better suited as a reactive local controller, in conjunction with a global motion planner. We compare NEO to motion planners on a standard benchmark in simulation and additionally illustrate and verify its operation on a physical robot in a dynamic environment. We provide an open-source library which implements our controller.
|
|
02:15-02:30, Paper TuAT6.2 |
Optimized Method for Planning and Controlling the Somersault Motion of Quadruped Robot |
|
Chen, Teng | Shandong University |
Rong, Xuewen | Shandong University |
Li, Yibin | Shandong University |
Keywords: Legged Robots, Motion Control
Abstract: A method for planning and controlling the somersault motion of a quadruped robot is proposed in this paper. The method divides the somersault motion into five stages according to intuitive understanding. Based on a simplified dynamic model, linear programming is used to obtain the maximum ground reaction force under joint-torque and friction-cone constraints, and the optimal leg-thrusting trajectory is then obtained by double integration of the acceleration. To achieve a buffered landing after the somersault, a whole-body controller based on null-space projection is used to obtain the optimal joint torques under constraints on the robot's foot position, torso position, and torso posture. The proposed somersault motion control method has been verified using the dynamics simulation software Webots and the quadruped robot platform Yobogo. The results show that the robot can complete stable front flips and back flips under the constraints of joint output torque and foot motion space.
|
|
02:30-02:45, Paper TuAT6.3 |
Motion Coupling Analysis for the Decoupled Design of a Two-Segment Notched Continuum Robot |
|
Zeng, Wenhui | The Chinese University of Hong Kong |
Yan, Junyan | The Chinese University of Hong Kong |
Huang, Xu | The Chinese University of Hong Kong |
Cheng, Shing Shin | The Chinese University of Hong Kong |
Keywords: Surgical Robotics: Steerable Catheters/Needles, Surgical Robotics: Laparoscopy, Medical Robots and Systems
Abstract: Multi-segment continuum robots, which offer inherent compliance and dexterity, are suitable for deployment in minimally invasive surgical procedures. Cable-driven mechanisms are commonly used in continuum surgical robots, but they lead to motion coupling between different segments. In this paper, we present a coupled mechanics model for a two-segment notched continuum robot to analyze the coupled deflection in the proximal segment due to the distal cable force. The model has been developed for two conditions, in which the proximal segment is initially bent (general condition) or initially straight (special condition). It allows us to introduce a decoupled design methodology that systematically determines a stiffness parameter in each segment based on the desired coupled bending angle and other design requirements. Using this method, we fabricated a decoupled notched continuum robot and evaluated the model accuracy, obtaining mean errors of 0.57 degree and 0.61 degree for the general and special conditions, respectively, throughout the 90 degree distal segment bending angle. It was also demonstrated in a maxillary sinus phantom that the distal segment could independently perform omnidirectional steering without the proximal segment coming into contact with the surrounding nasal wall.
|
|
02:45-03:00, Paper TuAT6.4 |
VINS-Motion: Tightly-Coupled Fusion of VINS and Motion Constraint |
|
Yu, Zhelin | University of Electronic Science and Technology of China |
Zhu, Lidong | University of Electronic Science and Technology of China |
Lu, Guoyu | Rochester Institute of Technology |
Keywords: Visual-Inertial SLAM, SLAM, Localization
Abstract: In this paper, we develop a novel visual-inertial navigation system with motion constraint (VINS-Motion), which extends the visual-inertial navigation system (VINS) to incorporate vehicle motion constraints to improve the localization accuracy of autonomous vehicles. Besides the prior information, IMU measurement residual, and visual measurement residual utilized in VINS, a vehicle orientation/velocity constraint is exploited to constitute a motion residual. We minimize the sum of the prior and the Mahalanobis norms of the three kinds of residuals to obtain a maximum a posteriori estimation, thus increasing system consistency and accuracy. Stop detection is also added to help eliminate abnormal jitter of the estimated poses while the vehicle is stopped, thus ensuring a reasonable trajectory. The proposed approach is validated on public datasets and compared against state-of-the-art algorithms, demonstrating that VINS-Motion achieves significantly higher positioning accuracy.
|
|
TuAT7 Virtual-Asia, Time zone: GMT+1 |
Medical Imaging and Sensing I |
|
|
Chair: Kim, Chunwoo | Korea Institute of Science and Technology (KIST) |
Co-Chair: Luo, Xiongbiao | Xiamen University |
|
02:00-02:15, Paper TuAT7.1 |
Robust Three-Dimensional Shape Sensing for Flexible Endoscopic Surgery Using Multi-Core FBG Sensors |
|
Lu, Yiang | The Chinese University of Hong Kong |
Lu, Bo | The Chinese University of Hong Kong |
Li, Bin | The Chinese University of Hong Kong |
Guo, Huanhuan | The Chinese University of Hong Kong |
Liu, Yunhui | Chinese University of Hong Kong |
Keywords: Soft Sensors and Actuators, Surgical Robotics: Laparoscopy, Medical Robots and Systems
Abstract: In this letter, we propose a novel 3D shape sensing algorithm for flexible endoscopic surgery using multi-core fiber Bragg grating (FBG) sensors. Considering signal noise and environmental perturbations, the direct use of FBG measurements for real-time shape sensing and position estimation is far from accurate and stable, especially for long and flexible surgical instruments. To solve this problem, a novel and generic model-based filtering technique for iterative curvature/twist estimation, which takes advantage of the configuration of the multi-core FBGs in the optical fiber, is introduced to remedy the sensory noise. In addition, we introduce an enhanced moving-average approach to smooth the estimated curvatures and twists spatially along the fiber. We extensively validate our algorithm by conducting shape sensing tasks in simulations under varying conditions and in experiments using a robot-assisted colonoscope system integrated with a multi-core FBG fiber. The results show that our method substantially outperforms the conventional approach in estimation accuracy and robustness, demonstrating its superiority and application feasibility.
|
|
02:15-02:30, Paper TuAT7.2 |
Robot-To-Image Registration with Geometric Marker for CT-Guided Robotic Needle Insertion |
|
Ikeda, Iori | Waseda University |
Sekine, Kai | Waseda University |
Tsumura, Ryosuke | Worcester Polytechnic Institute |
Iwata, Hiroyasu | Waseda University |
Keywords: Medical Robots and Systems, Embedded Systems for Robotics and Automation
Abstract: A computed tomography (CT)-guided robotic needle requires registration to transfer coordinates between the robot and the CT image for accurate insertion. In our previous work, we proposed a geometric marker that allows direct registration between a CT image and a robot and demonstrated its proof of concept. In this paper, we present a registration algorithm that calculates the six-degrees-of-freedom error from one CT scan, and we demonstrate the registration with our needle insertion robot. We obtain the marker's geometric shape from differences between multiple images, which we convert into the coordinate axes of the marker to calculate its posture. Since the algorithm uses changes in the cross-sectional shapes of images, it can be adapted to markers of different sizes. In the evaluation, the registration error for insertion by the CT-guided robotic needle with the proposed algorithm was 1.8 mm and 0.35°. This accuracy is sufficient for lower abdominal insertion and shows the potential for clinical applications. When the marker is half its original size, the rotational error is unchanged but the positional error is roughly halved, suggesting the possibility of miniaturizing the marker.
|
|
02:30-02:45, Paper TuAT7.3 |
Shape Sensor Using Magnetic Induction with Frequency Sweeping for Medical Catheters |
|
Jeon, Jiyun | KIST |
Kim, Chunwoo | Korea Institute of Science and Technology (KIST) |
Keywords: Medical Robots and Systems, Surgical Robotics: Steerable Catheters/Needles, Soft Sensors and Actuators
Abstract: Shape sensors are important for safer and more dexterous manipulation of medical catheters. Among electromagnetic shape sensors, a voice coil shape sensor measures the variation of the mutual inductance between coils placed along the tube as the tube bends. Owing to the design flexibility of a voice coil, it offers a small size without an external magnetic field generator outside the patient's body. This paper presents a voice coil shape sensor improved in terms of modeling and measurement. An analytic model that incorporates the bending of the exciter coils is used to improve the accuracy of the sensor, and a band-pass filter is applied to simplify the measurements. The bending angles of sensors placed at multiple locations can be measured through a single channel using a frequency-sweep input. Simulation and experimental results verify the improvement and demonstrate that the sensor system can reconstruct the shape of a catheter placed through the larynx.
|
|
02:45-03:00, Paper TuAT7.4 |
Robotically Surgical Vessel Localization Using Robust Hybrid Video Motion Magnification |
|
Fan, Wenkang | Xiamen University |
Zheng, Zhuohui | Xiamen University |
Zeng, Wankang | Xiamen University |
Chen, Yinran | Xiamen University |
Zeng, Hui-Qing | Xiamen University |
Shi, Hong | Fujian Cancer Hospital & Fujian Medical University Cancer Hospital |
Luo, Xiongbiao | Xiamen University |
Keywords: Surgical Robotics: Laparoscopy, Surgical Robotics: Planning, Computer Vision for Medical Robotics
Abstract: Vessel and neurovascular bundle localization plays an essential role in endoscopic and robotic surgery. It remains challenging to spare vessels and neurovascular bundles and avoid inadvertent injury due to surgeons' limited visual and tactile perception. This work assumes that surgeons have great difficulty intuitively perceiving the small pulsatile motion of vessels and neurovascular bundles in the complex surgical field provided by endoscopic videos, and proposes a new surgical video pulsatile-motion magnification method that helps surgeons easily and precisely recognize vessels or neurovascular bundles with their own visual systems. The new method consists of robust hybrid temporal filtering and deeply learned spatial decomposition. The proposed hybrid temporal filtering can significantly magnify pulsatile motion in a manner consistent with reality while keeping non-pulsating regions in magnified videos almost identical to the original videos, and the learning-based spatial decomposition reduces noise and ringing artifacts in magnified videos. We evaluate our method on surgical videos acquired from robotic prostatectomy; the experimental results show that it essentially outperforms current motion magnification approaches. In particular, the visual quality and quantitative assessment of our method are clearly better than those of these approaches.
|
|
TuAT8 Virtual-Asia, Time zone: GMT+1 |
Mechanism Design IV |
|
|
Chair: Raghavendra Kulkarni, Suhas | Nanyang Technological University |
|
02:00-02:15, Paper TuAT8.1 |
Temperature Compensated 3D Printed Strain Sensor for Advanced Manufacturing Applications |
|
Munasinghe, Nuwan | University of Technology Sydney (UTS) |
Masangkay, John | University of Technology Sydney |
Paul, Gavin | University of Technology Sydney |
Keywords: Additive Manufacturing
Abstract: Additive manufacturing has evolved beyond prototyping to manufacturing end products. The authors are developing a large-scale extrusion-based 3D printer to print mining equipment (a gravity separation spiral) and embedding sensors to monitor operational conditions remotely. This paper presents a temperature-compensated strain sensor that can be 3D printed inline within large-scale 3D printed equipment. The sensor is printed using conductive carbon filament and embedded in a polylactic acid (PLA) base. A half-bridge setup is proposed to reduce the impact of temperature variations. Temperature-controlled tests have been conducted with the proposed half-bridge and compared with a non-temperature-compensated quarter-bridge setup. Results show that the half-bridge configuration significantly reduces the temperature impact on the strain measurement (by 68%) compared to the quarter-bridge, in the range of 25-40 °C. Deflection testing of the printed sensor shows a near-linear relationship between bending strain and voltage, and multiple bending cycles show no significant hysteresis. ANSYS simulations are used to estimate the internal temperature accurately, since embedding a temperature sensor would affect the structural integrity. Although carbon black material is naturally brittle, design steps have been taken to avoid undesirable cracking; laser microscopy analysis of the printed traces showed no crack defects.
|
|
02:15-02:30, Paper TuAT8.2 |
Design of a Deployable Underwater Robot for the Recovery of Autonomous Underwater Vehicles Based on Origami Technique |
|
Li, Jisen | Chinese University of Hong Kong (Shenzhen) |
Yang, Yuliang | Peng Cheng Laboratory |
Zhang, Yvmei | Peng Cheng Laboratory |
Zhu, Hua | Peng Cheng Laboratory |
Li, Yongqi | Peng Cheng Laboratory |
Huang, Qiujun | Peng Cheng Laboratory |
Lu, Haibo | Peng Cheng Laboratory |
He, Shan | Robotics Research Center, Peng Cheng Laboratory |
Li, Shengquan | Peng Cheng Laboratory |
Zhang, Wei | Southern University of Science and Technology |
Mei, Tao | Peng Cheng Laboratory |
Wu, Feng | University of Science and Technology of China |
Zhang, Aidong | The Chinese University of Hong Kong, Shenzhen |
Keywords: Mechanism Design, Marine Robotics, Simulation and Animation
Abstract: The recovery of autonomous underwater vehicles (AUVs) has been a challenging mission due to the limited localization accuracy and movement capability of AUVs. To overcome these limitations, we propose a novel design of a deployable underwater robot (DUR) for the recovery mission. Utilizing an origami structure, the DUR can transform between open and closed states to maximize performance at different recovery stages. At the approaching stage, the DUR remains in the closed state to reduce the drag force, while at the capturing stage it deploys to form a much larger opening and improve the success rate of docking. Meanwhile, the thrusters' configuration also changes with the transformation of the robot body. In the closed state, the DUR can achieve a high driving force in the forward direction, which leads to a fast approach speed; in the open state, it achieves more balanced force and torque maneuverability in preparation for agile position adjustment during docking. CFD simulation has been used to analyze the drag forces and identify the hydrodynamic coefficients. A prototype of the robot has been fabricated and tested in an indoor water pool. Both simulation and experimental results validate the feasibility of the proposed design.
|
|
02:30-02:45, Paper TuAT8.3 |
Modelling and Optimisation of a Mechanism-Based Metamaterial for a Wrist Flexion-Extension Assistive Device |
|
Raghavendra Kulkarni, Suhas | Nanyang Technological University |
Alexandre Pinto Sales de Noronha, Bernardo | Nanyang Technological University |
Campolo, Domenico | Nanyang Technological University |
Accoto, Dino | Nanyang Technological University |
Keywords: Wearable Robotics, Prosthetics and Exoskeletons, Physically Assistive Devices
Abstract: In this paper, we present a methodology for optimising the design of a metamaterial structure with one degree of freedom that is able to simultaneously bend and stretch. The structure is intended for assisting flexion-extension of the wrist joint. The metamaterial comprises serially connected, individually designed cells. The design parameters can be chosen to optimally fit a desired planar curve, such as the curvature of the skin on a plane normal to the flexion/extension axis of the wrist joint. A tool for the optimised design is described, and experimental validation of its output is conducted to show the ability of the mechanism to conform to a 2D curve while also exhibiting a change in length, which is desirable for reducing sliding and shear on the skin. The design tool allows the generation of metamaterials optimised for a multitude of other applications where actuated mechanisms must be worn by a user for rehabilitation or assistance purposes.
|
|
02:45-03:00, Paper TuAT8.4 |
Mechatronic Design of a Low-Noise Active Knee Prosthesis with High Backdrivability |
|
Fu, Guoxiang | Peking University |
Zhu, Jinying | Beijing Institute of Technology |
Wang, Zilu | Peking University |
Mai, Jingeng | Peking University |
Wang, Qining | Peking University |
Keywords: Human-Centered Robotics, Human-Centered Automation, Prosthetics and Exoskeletons
Abstract: In this paper, we present a low-damping active knee prosthesis with low noise and high backdrivability. The proposed prosthesis is driven by a motor through a four-stage synchronous-belt reduction. The high backdrivability afforded by this structure accelerates the response of the prosthesis. In addition, a control system containing several sensors is embedded in the prototype to distinguish different situations and provide corresponding strategies. Based on this mechatronic design, a three-layer control method is proposed. Preliminary experiments were carried out on a transfemoral amputee, demonstrating low noise, high backdrivability, and the ability to reproduce a natural walking gait.
|
|
TuAT9 Virtual-Asia, Time zone: GMT+1 |
Manipulation Control III |
|
|
Chair: Wu, Yan | A*STAR Institute for Infocomm Research |
Co-Chair: Gao, Fei | Zhejiang University |
|
02:00-02:15, Paper TuAT9.1 |
Introspective Visuomotor Control: Exploiting Uncertainty in Deep Visuomotor Control for Failure Recovery |
|
Hung, Chia-Man | University of Oxford |
Sun, Li | University of Sheffield |
Wu, Yizhe | University of Oxford |
Havoutis, Ioannis | University of Oxford |
Posner, Ingmar | Oxford University |
Keywords: Learning from Demonstration, Visual Servoing, Perception for Grasping and Manipulation
Abstract: End-to-end visuomotor control is emerging as a compelling solution for robot manipulation tasks. However, imitation learning-based visuomotor control approaches tend to suffer from a common limitation: they lack the ability to recover from an out-of-distribution state caused by compounding errors. In this paper, instead of using tactile feedback or explicitly detecting failures through vision, we investigate using the uncertainty of a policy neural network. We propose a novel uncertainty-based approach to detect and recover from failure cases. Our hypothesis is that policy uncertainties can implicitly indicate potential failures in the visuomotor control task, and that robot states with minimum uncertainty are more likely to lead to task success. To recover from high-uncertainty cases, the robot monitors its uncertainty along a trajectory and explores possible actions in the state-action space to bring itself to a more certain state. Our experiments verify this hypothesis and show a significant improvement in task success rate: 12% in pushing, 15% in pick-and-reach, and 22% in pick-and-place.
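The recovery rule the abstract describes — score candidate actions by policy uncertainty and move toward the most certain one — can be sketched with an ensemble as the uncertainty estimator. This is an illustrative reconstruction, not the authors' code; the function names and the toy ensemble are hypothetical:

```python
import numpy as np

def select_recovery_action(state, candidate_actions, ensemble):
    """Pick the candidate action whose predicted outcome the ensemble
    agrees on most: low disagreement ~ low epistemic uncertainty."""
    uncertainties = []
    for a in candidate_actions:
        preds = np.stack([m(state, a) for m in ensemble])
        uncertainties.append(preds.var(axis=0).sum())   # ensemble spread
    return candidate_actions[int(np.argmin(uncertainties))]

# Toy ensemble: members agree on the zero action, disagree on the other.
ensemble = [lambda s, a, k=k: s + a + (0.5 * k if a[0] else 0.0)
            for k in range(3)]
cands = [np.array([0.0, 0.0]), np.array([1.0, 1.0])]
best = select_recovery_action(np.zeros(2), cands, ensemble)
print(best)  # [0. 0.]  (the action the ensemble is most certain about)
```

In the paper the uncertainty comes from the policy network itself; an explicit model ensemble is just one common, easy-to-test stand-in for that signal.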
|
|
02:15-02:30, Paper TuAT9.2 |
Sim-To-Real Visual Grasping Via State Representation Learning Based on Combining Pixel-Level and Feature-Level Domain Adaptation |
|
Suh, Il Hong | Hanyang University |
Park, Young-Bin | Hanyang University |
Lee, Sang Hyoung | Korea Institute of Industrial Technology |
Keywords: Reinforcement Learning, Transfer Learning, Representation Learning
Abstract: In this study, we present a method to grasp diverse unseen real-world objects using off-policy actor-critic deep reinforcement learning (RL) with the help of a simulation and as little real-world data as possible. Actor-critic deep RL is unstable and difficult to tune when a raw image is given as an input. Therefore, we use state representation learning (SRL) to make actor-critic RL feasible for visual grasping tasks. Meanwhile, to reduce the visual reality gap between simulation and reality, we also employ a typical pixel-level domain adaptation that can map simulated images to realistic ones. In our method, as the SRL model is a common preprocessing module for simulated and real-world data, we perform SRL using real and adapted images. This pixel-level domain adaptation enables the robot to learn grasping skills in a real environment using small amounts of real-world data. However, the controller trained in the simulation should adapt to the real world efficiently. Hence, we propose a method combining a typical pixel-level domain adaptation and the proposed SRL model, where we perform SRL based on a feature-level domain adaptation. In evaluations of vision-based robotic grasping tasks, we show that the proposed method achieves a substantial improvement over a method that employs only a pixel-level domain adaptation.
|
|
02:30-02:45, Paper TuAT9.3 |
Dexterous Manoeuvre through Touch in a Cluttered Scene |
|
Liang, Wenyu | Institute for Infocomm Research, A*STAR |
Ren, Qinyuan | Zhejiang University |
Chen, Xiaoqiao | Zhejiang University |
Gao, Junli | Guangdong University of Technology |
Wu, Yan | A*STAR Institute for Infocomm Research |
Keywords: Perception-Action Coupling, Sensor-based Control
Abstract: Manipulation in a densely cluttered environment creates complex challenges in perception to close the control loop, many of which are due to the sophisticated physical interaction between the environment and the manipulator. Drawing on bio-inspiration, tactile sensing can be used in such a scenario to provide an additional dimension of rich contact information from the interaction, for decision making and action selection when manoeuvring towards a target. In this paper, a new tactile-based motion planning and control framework for a robot manipulator to manoeuvre in a cluttered environment is proposed and developed. An iterative two-stage machine learning approach is used in this framework: an autoencoder extracts important cues from tactile sensory readings, and a reinforcement learning technique generates an optimal motion sequence to efficiently reach the given target. The framework is implemented on a KUKA LBR iiwa robot mounted with a SynTouch BioTac tactile sensor and tested in real-life experiments. The results show that the system is able to manoeuvre in the cluttered environment and reach the target effectively.
|
|
02:45-03:00, Paper TuAT9.4 |
Mapless-Planner: A Robust and Fast Planning Framework for Aggressive Autonomous Flight without Map Fusion |
|
Ji, Jialin | Zhejiang University |
Wang, Zhepei | Zhejiang University |
Wang, Yingjian | Zhejiang University |
Xu, Chao | Zhejiang University |
Gao, Fei | Zhejiang University |
Keywords: Aerial Systems: Applications, Motion and Path Planning, Autonomous Vehicle Navigation
Abstract: Maintaining a map online is resource-consuming, while a robust navigation system usually needs environment abstraction via a well-fused map. In this paper, we propose a mapless planner which directly conducts such abstraction on the unfused sensor data. A limited-memory data structure with a reliable proximity query algorithm is proposed for maintaining raw historical information. A sampling-based scheme is designed to extract the free-space skeleton. A smart waypoint selection strategy enables the generation of high-quality trajectories within the resultant flight corridors. Our planner differs from other mapless ones in that it can abstract and exploit the environment information efficiently. The online replanning consistency and success rate are both significantly improved over conventional mapless methods.
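The idea of a "limited-memory data structure with a reliable proximity query" over unfused points can be illustrated with a fixed-capacity ring buffer and a brute-force nearest-point query. The paper's actual structure and query algorithm are more sophisticated; everything below is a hypothetical stand-in:

```python
import numpy as np

class PointRingBuffer:
    """Fixed-capacity store of raw scan points; the oldest points are
    overwritten first, bounding memory without any map fusion."""
    def __init__(self, capacity, dim=2):
        self.buf = np.zeros((capacity, dim))
        self.head = 0
        self.count = 0

    def insert(self, points):
        for p in np.atleast_2d(points):
            self.buf[self.head] = p
            self.head = (self.head + 1) % len(self.buf)
            self.count = min(self.count + 1, len(self.buf))

    def nearest_distance(self, query):
        """Brute-force proximity query over the stored raw points."""
        if self.count == 0:
            return np.inf
        return float(np.linalg.norm(self.buf[:self.count] - query, axis=1).min())

rb = PointRingBuffer(capacity=4)
rb.insert([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
print(rb.nearest_distance(np.array([1.2, 0.0])))  # ~0.2 (closest is [1, 0])
```

Inserting beyond capacity silently evicts the oldest points, which is the "limited-memory" property; a real planner would pair this with a faster spatial index than the brute-force scan shown here.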
|
|
TuAT10 Virtual-Asia, Time zone: GMT+1 |
Machine Learning: Applications |
|
|
Chair: Zhang, Ang | The Chinese University of Hong Kong |
Co-Chair: Ogata, Tetsuya | Waseda University |
|
02:00-02:15, Paper TuAT10.1 |
Robotic Indoor Scene Captioning from Streaming Video |
|
Li, Xinghang | Tsinghua University |
Guo, Di | Tsinghua University |
Liu, Huaping | Tsinghua University |
Sun, Fuchun | Tsinghua University |
Keywords: Semantic Scene Understanding, Deep Learning for Visual Perception
Abstract: Robots are usually equipped with cameras to explore indoor scenes, and it is expected that a robot can describe the scene well in natural language. Although great success has been achieved in image and video captioning technology, especially on many public datasets, the captions generated from indoor scene video are still not informative and coherent enough. In this paper, we propose the problem of Indoor Scene Captioning from Streaming Video, which aims at generating a more accurate and informative caption from streaming video. To solve this problem, we first design an algorithm to organize the visual information of the indoor scene into a scene graph, and then implement a scene-graph-guided captioning method, which takes the scene graph and video frames as input to generate the caption from the video stream. The proposed framework is evaluated both on the AI2THOR dataset and on a real-world robotic platform, demonstrating its effectiveness.
|
|
02:15-02:30, Paper TuAT10.2 |
Geometry-Aware Unsupervised Domain Adaptation for Stereo Matching |
|
Sakuma, Hiroki | SenseTime Japan Ltd |
Konishi, Yoshinori | SenseTime Japan Ltd |
Keywords: Deep Learning for Visual Perception, Visual Learning, Recognition
Abstract: Recently proposed DNN-based stereo matching methods that learn priors directly from data are known to suffer a drastic drop in accuracy in new environments. Although supervised approaches with ground truth disparity maps often work well, collecting them in each deployment environment is cumbersome and costly. For this reason, many unsupervised domain adaptation methods based on image-to-image translation have been proposed, but these methods do not preserve the geometric structure of a stereo image pair because the image-to-image translation is applied to each view separately. To address this problem, in this paper, we propose an attention mechanism that aggregates features in the left and right views, called Stereoscopic Cross Attention (SCA). Incorporating SCA into an image-to-image translation network makes it possible to preserve the geometric structure of a stereo image pair in the process of the image-to-image translation. We empirically demonstrate the effectiveness of the proposed unsupervised domain adaptation based on the image-to-image translation with SCA.
|
|
02:30-02:45, Paper TuAT10.3 |
Reasoning Operational Decisions for Robots Via Time Series Causal Inference |
|
Cao, Yu | University of Edinburgh |
Li, Boyang | The Hong Kong Polytechnic University |
Li, Qian | University of Edinburgh |
Stokes, Adam Andrew | University of Edinburgh |
Ingram, David | University of Edinburgh |
Kiprakis, Aristides | University of Edinburgh |
Keywords: Marine Robotics, Cognitive Modeling, Big Data in Robotics and Automation
Abstract: Justifying operational decisions for robots is a challenging task, as the operator or the robot itself has to understand the underlying physical interaction between the robot and the environment to predict the potential outcome. For explainable decision-making, it is desirable to understand how a decision influences operational performance in terms of causal relationships. Here we propose a novel causal inference framework for discovering and reasoning about the operational decisions of robots. It unifies domain knowledge integration and model-free causal inference, allowing data-driven learning of causal knowledge from time series data. The framework is evaluated in experiments on an underwater robot with complex environmental interactions. The results show that the framework can learn the causal structure and inference model to accurately explain and predict operational performance with integrated physics.
|
|
02:45-03:00, Paper TuAT10.4 |
Embodying Pre-Trained Word Embeddings through Robot Actions |
|
Toyoda, Minori | Waseda University |
Suzuki, Kanata | Fujitsu Laboratories LTD |
Mori, Hiroki | Waseda University |
Hayashi, Yoshihiko | Waseda University |
Ogata, Tetsuya | Waseda University |
Keywords: Embodied Cognitive Science, Learning from Experience, Multi-Modal Perception for HRI
Abstract: We propose a promising neural network model with which to acquire a grounded representation of robot actions and the linguistic descriptions thereof. Properly responding to various linguistic expressions, including polysemous words, is an important ability for robots that interact with people via linguistic dialogue. Previous studies have shown that robots can use words that are not included in the action-description paired datasets by using pre-trained word embeddings. However, the word embeddings trained under the distributional hypothesis are not grounded, as they are derived purely from a text corpus. In this paper, we transform the pre-trained word embeddings to embodied ones by using the robot's sensory-motor experiences. We extend a bidirectional translation model for actions and descriptions by incorporating non-linear layers that retrofit the word embeddings. By training the retrofit layer and the bidirectional translation model alternately, our proposed model is able to transform the pre-trained word embeddings to adapt to a paired action-description dataset. Our results demonstrate that the embeddings of synonyms form a semantic cluster by reflecting the experiences (actions and environments) of a robot. These embeddings allow the robot to properly generate actions from unseen words that are not paired with actions in a dataset.
|
|
TuAT11 Virtual-Asia, Time zone: GMT+1 |
Machine Learning for Pose Estimation |
|
|
Chair: Chen, Zexi | Zhejiang University |
Co-Chair: Wang, Yue | Zhejiang University |
|
02:00-02:15, Paper TuAT11.1 |
HueCode: A Meta-Marker Exposing Relative Pose and Additional Information in Different Colored Layers |
|
Okada, Yoshito | Tohoku University |
Fujikura, Daiki | Tohoku University |
Ozawa, Yu | Tohoku University |
Tadakuma, Kenjiro | Tohoku University |
Ohno, Kazunori | Tohoku University |
Tadokoro, Satoshi | Tohoku University |
Keywords: Localization, Recognition, Object Detection, Segmentation and Categorization
Abstract: In this paper, HueCode, a meta-marker that robustly and simultaneously exposes the relative pose between a marker and a camera along with additional information, is proposed. It occupies the area of a single marker by overlaying multiple types of markers in different colored layers. Using perspective information from the first (most recognizable) type of element marker, the second or higher marker can be recognized with a better success rate. An experiment using a HueCode made from an ArUco marker and a QR code showed that the QR code within the HueCode is recognizable at an elevation angle of up to 15°, compared to 25° for a normal QR code. In addition, the versatility of HueCode is demonstrated using two robotic applications. The first is a lightweight estimation of absolute 6-DoF poses for mobile robots in a GNSS-denied environment, and the other is object annotation in two-dimensional image or three-dimensional space without any prior knowledge. The former realized pose estimation with a position error of less than 0.05 m, and the latter enabled annotation of a mirror and a transparent object, which are difficult for other sensors and machine learning to recognize.
|
|
02:15-02:30, Paper TuAT11.2 |
REDE: End-To-End Object 6D Pose Robust Estimation Using Differentiable Outliers Elimination |
|
Hua, Weitong | Zhejiang University |
Zhou, Zhongxiang | Zhejiang University |
Wu, Jun | Zhejiang University |
Huang, Huang | Beijing Institute of Technology |
Wang, Yue | Zhejiang University |
Xiong, Rong | Zhejiang University |
Keywords: Deep Learning for Visual Perception, RGB-D Perception, Perception for Grasping and Manipulation
Abstract: Object 6D pose estimation is a fundamental task in many applications. Conventional methods solve the task by detecting and matching keypoints and then estimating the pose. Recent efforts bringing deep learning into the problem mainly overcome the vulnerability of conventional methods to environmental variation due to hand-crafted feature design. However, these methods cannot achieve end-to-end learning and good interpretability at the same time. In this paper, we propose REDE, a novel end-to-end object pose estimator using RGB-D data, which utilizes a network for keypoint regression and a differentiable geometric pose estimator for pose error back-propagation. In addition, to achieve robustness when outlier keypoint predictions occur, we further propose a differentiable outlier elimination method that regresses each candidate result and its confidence simultaneously. Via confidence-weighted aggregation of multiple candidates, we can reduce the effect of outliers on the final estimation. Finally, following conventional methods, we apply a learnable refinement process to further improve the estimation. The experimental results on three benchmark datasets show that REDE slightly outperforms the state-of-the-art approaches and is more robust to object occlusion. Our code is available at https://github.com/HuaWeitong/REDE.
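The outlier-elimination step the abstract describes — confidence-weighted aggregation of candidate predictions — can be sketched in a few lines. This is an illustrative sketch with hypothetical names, not the differentiable module from the paper:

```python
import numpy as np

def aggregate_candidates(candidates, confidences):
    """Confidence-weighted aggregation: low-confidence outlier
    candidates contribute little to the final estimate."""
    w = np.asarray(confidences, dtype=float)
    w = w / w.sum()                          # normalize into weights
    return w @ np.asarray(candidates, dtype=float)

# Three candidates agree near (1, 1); one outlier carries low confidence.
cands = [[1.0, 1.0], [1.1, 0.9], [0.9, 1.1], [5.0, 5.0]]
conf = [0.90, 0.80, 0.85, 0.05]
est = aggregate_candidates(cands, conf)
print(np.round(est, 2))  # stays near (1, 1) despite the outlier
```

Because the weighted average is differentiable in both the candidates and the confidences, a network producing them can be trained end-to-end, which is the property the paper exploits.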
|
|
02:30-02:45, Paper TuAT11.3 |
PREGAN: Pose Randomization and Estimation for Weakly Paired Image Style Translation |
|
Chen, Zexi | Zhejiang University |
Guo, Jiaxin | Zhejiang University |
Xu, Xuecheng | Zhejiang University |
Wang, Yunkai | Zhejiang University |
Huang, Huang | Beijing Institute of Technology |
Wang, Yue | Zhejiang University |
Xiong, Rong | Zhejiang University |
Keywords: Transfer Learning, Deep Learning for Visual Perception, Computer Vision for Automation
Abstract: Utilizing a trained model under different conditions without data annotation is attractive for robot applications. Towards this goal, one class of methods translates the image style from another environment to the one on which the models were trained. In this paper, we propose a weakly-paired setting for style translation, where the content in the two images is aligned with errors in poses. These images could be acquired by different sensors in different conditions that share an overlapping region, e.g. with LiDAR or stereo cameras, from sunny days or foggy nights. We consider this setting to be more practical with: (i) easier labeling than paired data; (ii) better interpretability and detail retrieval than unpaired data. To translate across such images, we propose PREGAN, which trains a style translator by intentionally transforming the two images with a random pose and estimating that pose with a differentiable, non-trainable pose estimator, exploiting the fact that the better the styles are aligned, the better the pose estimate. Such adversarial training enforces the network to learn the style translation while avoiding entanglement with other variations. Finally, PREGAN is validated on both simulated and real-world data to show its effectiveness. Results on downstream tasks (classification, road segmentation, object detection, and feature matching) show its potential for real applications.
|
|
02:45-03:00, Paper TuAT11.4 |
Deep Samplable Observation Model for Global Localization and Kidnapping |
|
Chen, Runjian | Zhejiang University |
Yin, Huan | Zhejiang University |
Jiao, Yanmei | Zhejiang University |
Dissanayake, Gamini | University of Technology Sydney |
Wang, Yue | Zhejiang University |
Xiong, Rong | Zhejiang University |
Keywords: Localization, Deep Learning Methods
Abstract: Global localization and kidnapping are two challenging problems in robot localization. The popular method, Monte Carlo Localization (MCL), addresses the problem by iteratively updating a set of particles with a “sampling-weighting” loop. Sampling is decisive to the performance of MCL. However, traditional MCL can only sample from a uniform distribution over the state space. Variants of MCL fail to provide an accurate distribution or to generalize across scenes. To better deal with these problems, we present a distribution proposal model named Deep Samplable Observation Model (DSOM). DSOM takes a map and a 2D laser scan as inputs and outputs a conditional multimodal probability distribution of the pose, concentrating the samples on regions with higher likelihood. With such samples, convergence is expected to be more effective and efficient. Considering that the learning-based sampling model may sometimes fail to capture the accurate pose, we further propose the Adaptive Mixture MCL (AdaM MCL), which deploys a trust mechanism to adaptively select an updating mode for each particle to tolerate this situation. Equipped with DSOM, AdaM MCL achieves more accurate estimation, faster convergence, and better scalability than previous methods in both synthetic and real scenes. Even in real environments with long-term changes, AdaM MCL is able to localize the robot using a DSOM trained only on simulated observations from a SLAM map or a blueprint map.
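For context, the "sampling-weighting" loop of standard MCL that DSOM plugs into can be sketched in one dimension. This is a generic textbook particle filter, not the paper's implementation; the toy world and all names are hypothetical:

```python
import numpy as np

def mcl_step(particles, control, measurement, likelihood, rng):
    """One 'sampling-weighting' iteration of Monte Carlo Localization."""
    # Sampling: propagate particles through a noisy motion model.
    particles = particles + control + rng.normal(0.0, 0.05, size=len(particles))
    # Weighting: score each particle by the observation likelihood.
    w = likelihood(particles, measurement)
    w = w / w.sum()
    # Resampling: draw a new particle set in proportion to the weights.
    return particles[rng.choice(len(particles), size=len(particles), p=w)]

# 1-D toy: robot at x = 2.0 ranges a wall at x = 5.0, so it measures z = 3.0.
rng = np.random.default_rng(0)
wall, true_x = 5.0, 2.0
lik = lambda ps, z: np.exp(-0.5 * ((wall - ps) - z) ** 2 / 0.1 ** 2) + 1e-12
particles = rng.uniform(0.0, 5.0, size=500)   # uniform initial belief
for _ in range(10):
    particles = mcl_step(particles, 0.0, wall - true_x, lik, rng)
print(particles.mean())  # converges near the true pose, x ≈ 2.0
```

The uniform initial draw above is exactly the limitation the abstract points out: DSOM replaces it with a learned, observation-conditioned proposal so that far fewer particles are wasted on low-likelihood regions.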
|
|
TuAT12 Virtual-Asia, Time zone: GMT+1 |
Localization and Mapping VII |
|
|
Chair: Kneip, Laurent | ShanghaiTech |
|
02:00-02:15, Paper TuAT12.1 |
B-Splines for Purely Vision-Based Localization and Mapping on Non-Holonomic Ground Vehicles |
|
Huang, Kun | ShanghaiTech University |
Wang, Yifu | Australian National University |
Kneip, Laurent | ShanghaiTech |
Keywords: SLAM, Nonholonomic Mechanisms and Systems, Kinematics
Abstract: Purely vision-based localization and mapping is a cost-effective and thus attractive solution for localization and mapping on smart ground vehicles. However, the accuracy and especially the robustness of vision-only solutions still lag behind more expensive, lidar-based multi-sensor alternatives. We show that a significant increase in robustness can be achieved by taking non-holonomic kinematic constraints on the vehicle motion into account. Rather than using approximate planar motion models or simple, pair-wise regularization terms, we demonstrate the use of B-splines for an exact imposition of smooth, non-holonomic trajectories inside the 6 DoF bundle adjustment. We introduce both hard and soft formulations and compare their computational efficiency and accuracy against traditional solutions. Through results on both simulated and real data, we demonstrate a significant improvement in robustness and accuracy under degraded visual conditions.
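A B-spline trajectory of the kind used here can be evaluated with De Boor's algorithm. The sketch below shows a clamped cubic spline over 2-D control points; it is illustrative only — the paper embeds such splines inside bundle adjustment, which is not reproduced here:

```python
import numpy as np

def de_boor(t, knots, ctrl, degree):
    """Evaluate a B-spline at parameter t with De Boor's algorithm."""
    # Locate the knot span k such that knots[k] <= t < knots[k+1].
    k = int(np.searchsorted(knots, t, side='right')) - 1
    k = min(max(k, degree), len(ctrl) - 1)
    d = [np.asarray(ctrl[j + k - degree], dtype=float)
         for j in range(degree + 1)]
    for r in range(1, degree + 1):
        for j in range(degree, r - 1, -1):
            i = j + k - degree
            denom = knots[i + degree - r + 1] - knots[i]
            alpha = 0.0 if denom == 0.0 else (t - knots[i]) / denom
            d[j] = (1.0 - alpha) * d[j - 1] + alpha * d[j]
    return d[degree]

# Clamped cubic B-spline over 2-D control points: hits both endpoints.
ctrl = np.array([[0, 0], [1, 2], [3, 3], [4, 1], [5, 0]], dtype=float)
deg = 3
knots = np.concatenate([np.zeros(deg),
                        np.linspace(0.0, 1.0, len(ctrl) - deg + 1),
                        np.ones(deg)])
print(de_boor(0.0, knots, ctrl, deg))  # [0. 0.]  (first control point)
print(de_boor(1.0, knots, ctrl, deg))  # [5. 0.]  (last control point)
```

The appeal for trajectory estimation is that the evaluated curve is smooth in the control points, so the control points themselves can serve as optimization variables inside bundle adjustment.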
|
|
02:15-02:30, Paper TuAT12.2 |
Robust SRIF-Based LiDAR-IMU Localization for Autonomous Vehicles |
|
Ouyang, Zhanpeng | ShanghaiTech University |
Li, Kun | Alibaba Group |
Hu, Lan | ShanghaiTech University |
Hao, Dayang | Alibaba Group |
Kneip, Laurent | ShanghaiTech |
Keywords: Localization, Sensor Fusion, Autonomous Vehicle Navigation
Abstract: We present a tightly-coupled multi-sensor fusion architecture for autonomous vehicle applications, which achieves centimetre-level accuracy and high robustness in various scenarios. In order to realize robust and accurate point-cloud feature matching we propose a novel method for extracting structural, highly discriminative features from LiDAR point clouds. For high frequency motion prediction and noise propagation, we use incremental on-manifold IMU preintegration. We also adopt a multi-frame sliding window square root inverse filter, so that the system maintains numerically stable results under the premise of limited power consumption. To verify our methodology, we test the fusion algorithm in multiple applications and platforms equipped with a LiDAR-IMU system. Our results demonstrate that our fusion framework attains state-of-the-art localization accuracy, high robustness and a good generalization ability.
|
|
02:30-02:45, Paper TuAT12.3 |
Structure Reconstruction Using Ray-Point-Ray Features: Representation and Camera Pose Estimation |
|
He, Yijia | Institute of Automation, Chinese Academy of Sciences |
Liu, Xiangyue | Beihang University |
Liu, Xiao | Megvii Technology Inc |
Zhao, Ji | TuSimple |
Keywords: SLAM, Mapping
Abstract: Straight line features have been increasingly utilized in visual SLAM and 3D reconstruction systems. The straight lines' parameterization, parallel constraint, and co-planar constraint are studied in many recent works. In this paper, we explore a novel intersection constraint of straight lines for structure reconstruction. First, a minimally parameterized representation of Ray-Point-Ray (RPR) structures is proposed to represent the intersection of two straight lines in 3D space. Second, an efficient solver is designed for camera pose estimation, which leverages the perpendicularity and intersection of straight lines. Third, we build a stereo visual odometry system based on RPR features and evaluate it on simulated and real datasets. The experimental results verify that the intersection constraints from RPRs can effectively improve the accuracy and efficiency of line-based SLAM and reconstruction systems.
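The point of an RPR structure is the intersection of its two rays. Under noise, two estimated 3-D lines rarely meet exactly, so a standard construction recovers the "intersection" as the midpoint of their closest approach. The sketch below shows that construction; it is not the paper's minimal parameterization or solver:

```python
import numpy as np

def ray_intersection_point(p1, d1, p2, d2):
    """Midpoint of the closest approach between two 3-D lines
    p1 + s*d1 and p2 + t*d2 (their 'intersection' under noise)."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    r = p2 - p1
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    e, f = r @ d1, r @ d2
    denom = a * c - b * b            # zero only for parallel lines
    s = (e * c - b * f) / denom
    t = (b * e - a * f) / denom
    return 0.5 * ((p1 + s * d1) + (p2 + t * d2))

# Two rays that truly intersect at (1, 1, 0).
pt = ray_intersection_point(np.array([0.0, 1.0, 0.0]), np.array([1.0, 0.0, 0.0]),
                            np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]))
print(pt)  # [1. 1. 0.]
```

The closed form comes from minimizing the squared distance between the two parameterized points, which yields a 2×2 linear system in s and t.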
|
|
02:45-03:00, Paper TuAT12.4 |
Lightweight 3-D Localization and Mapping for Solid-State LiDAR |
|
Wang, Han | Nanyang Technological University |
Wang, Chen | Carnegie Mellon University |
Xie, Lihua | Nanyang Technological University |
Keywords: SLAM, Mapping, Factory Automation
Abstract: The LIght Detection And Ranging (LiDAR) sensor has become one of the most important perceptual devices due to its important role in simultaneous localization and mapping (SLAM). Existing SLAM methods are mainly developed for mechanical LiDAR sensors, which are often adopted by large-scale robots. Recently, solid-state LiDAR has been introduced and has become popular since it provides a cost-effective and lightweight solution for small-scale robots. Compared to mechanical LiDAR, solid-state LiDAR sensors have higher update frequency and angular resolution, but also a smaller field of view (FoV), which is very challenging for existing LiDAR SLAM algorithms. Therefore, a more robust and computationally efficient SLAM method is necessary for this new sensing device. To this end, we propose a new SLAM framework for solid-state LiDAR sensors. It involves feature extraction, odometry estimation, and probability map building. In the experiments, we demonstrate both the accuracy and efficiency of our method using an Intel L515 solid-state LiDAR. The results show that our method is able to provide precise localization at up to 30 Hz on a warehouse robot and a handheld device. We have made the source code public at https://github.com/wh200720041/floam_ssl.
|
|
TuAT13 Virtual-Asia, Time zone: GMT+1 |
Learning-Based Manipulation VII |
|
|
Co-Chair: Tsuji, Toshiaki | Saitama University |
|
02:00-02:15, Paper TuAT13.1 |
Living Object Grasping Using Two-Stage Graph Reinforcement Learning |
|
Hu, Zhe | City University of Hong Kong |
Zheng, Yu | Tencent |
Pan, Jia | University of Hong Kong |
Keywords: Deep Learning in Grasping and Manipulation, Dexterous Manipulation, Reinforcement Learning
Abstract: Living objects are hard to grasp because they can actively dodge and struggle by writhing or deforming while, or even prior to, being contacted, and modeling or predicting their responses to grasping is extremely difficult. This paper presents an algorithm based on reinforcement learning (RL) to attack this challenging problem. Considering the complexity of living object grasping, we divide the whole task into pre-grasp and in-hand stages and let the algorithm switch stages automatically. The pre-grasp stage is aimed at finding a good pose for a robot hand approaching a living object to perform a grasp. Dense reward functions are proposed to facilitate the learning of correct hand actions based on the poses of both hand and object. Since an object held in hand may struggle to escape, the robot hand needs to adjust its configuration and respond correctly to the object's movement. Hence, the goal of the in-hand stage is to determine an appropriate adjustment of the finger configuration so that the robot hand keeps holding the object. At this stage, we treat the robot hand as a graph and use a graph convolutional network (GCN) to determine the hand action. We test our algorithm in both simulation and real experiments, which show its good performance in living object grasping. More results are available on our website: https://sites.google.com/view/graph-rl.
|
|
02:15-02:30, Paper TuAT13.2 |
Reinforcement Learning for Robotic Assembly Using Non-Diagonal Stiffness Matrix |
|
Oikawa, Masahide | Saitama University |
Kusakabe, Tsukasa | Saitama University |
Kutsuzawa, Kyo | Tohoku University |
Sakaino, Sho | University of Tsukuba |
Tsuji, Toshiaki | Saitama University |
Keywords: Assembly, Compliance and Impedance Control, Reinforcement Learning
Abstract: Contact-rich tasks, wherein multiple contact transitions occur in a series of operations, are extensively studied for task automation. Precision assembly, one of the typical examples of contact-rich tasks, requires high time constants to cope with changes in contact state. This paper therefore proposes a local trajectory optimization method for precision assembly with high time constants. Since the non-diagonal components of the stiffness matrix are effective for inducing motion at a high sampling frequency, we design a stiffness matrix to guide the motion and propose a method to control a peg through selection of the stiffness matrix. We introduce reinforcement learning (RL) for the selection of the stiffness matrix, because the relationship between the direction to be guided and the sensor response is difficult to model. The architecture, with different sampling rates for RL and admittance control, has the advantage of rapid response owing to the high time constant of the local trajectory optimization. The effectiveness of the method was verified via experiments involving two contact-rich tasks. The average total time of peg insertion was 1.64 s, which is less than half the time reported by the best of the existing state-of-the-art studies.
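The role of the non-diagonal stiffness terms can be seen in a minimal compliance-style update: with coupling terms, a contact force along one axis also induces corrective motion along another, which is what guides the peg. The gains and names below are hypothetical, not the paper's values:

```python
import numpy as np

def admittance_update(x, f_ext, K, dt=0.001):
    """Compliance-style position correction: dx proportional to K^{-1} f_ext."""
    return x + np.linalg.solve(K, f_ext) * dt

# Diagonal stiffness: a force along x produces motion along x only.
K_diag = np.diag([1000.0, 1000.0])
# Non-diagonal stiffness: couples an x-force into y-motion as well,
# which can be used to steer the peg toward the hole.
K_coupled = np.array([[1000.0, -400.0],
                      [-400.0, 1000.0]])

f = np.array([10.0, 0.0])            # contact force along x only
x_diag = admittance_update(np.zeros(2), f, K_diag)
x_coupled = admittance_update(np.zeros(2), f, K_coupled)
print(x_diag[1] == 0.0, x_coupled[1] != 0.0)  # True True
```

In the paper, an RL policy selects among candidate stiffness matrices online; the sketch only shows why the off-diagonal entries matter for inducing guided motion.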
|
|
02:30-02:45, Paper TuAT13.3 |
Uncertainty-Aware Contact-Safe Model-Based Reinforcement Learning |
|
Kuo, Cheng-Yu | Nara Institute of Science and Technology |
Schaarschmidt, Andreas | Karlsruhe University of Technology |
Cui, Yunduan | Shenzhen Institutes of Advanced Technology, Chinese Academy of Sc |
Asfour, Tamim | Karlsruhe Institute of Technology (KIT) |
Matsubara, Takamitsu | Nara Institute of Science and Technology |
Keywords: Machine Learning for Robot Control, Reinforcement Learning, Probabilistic Inference
Abstract: This paper presents a contact-safe Model-based Reinforcement Learning (MBRL) approach for robot applications that achieves contact-safe behaviors during the learning process. In typical MBRL, we cannot expect the data-driven model to generate accurate and reliable policies for the intended robotic tasks during the learning process due to data scarcity. Operating these unreliable policies in a contact-rich environment could cause damage to the robot and its surroundings. To alleviate the risk of causing damage through unexpected intensive physical contacts, we present a contact-safe MBRL method that associates the probabilistic Model Predictive Control (pMPC) control limits with the model uncertainty, so that the allowed acceleration of the controlled behavior is adjusted according to learning progress. Control planning with such uncertainty-aware control limits is formulated as a deterministic MPC problem using computationally efficient approximated GP dynamics and an approximated inference technique. Our approach's effectiveness is evaluated through bowl-mixing tasks with simulated and real robots and scooping tasks with a real robot, as examples of contact-rich manipulation skills.
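The core safety mechanism — control limits that shrink as model uncertainty grows — can be sketched as a simple scaling rule. The mapping and its constants below are hypothetical; the paper ties the limits to pMPC and GP predictive uncertainty:

```python
import numpy as np

def uncertainty_scaled_limit(u_max, sigma, sigma_max, floor=0.1):
    """Shrink the allowed control magnitude as model uncertainty grows:
    full authority when the model is confident, conservative when not."""
    scale = np.clip(1.0 - sigma / sigma_max, floor, 1.0)
    return u_max * scale

u_max = 2.0                      # nominal acceleration limit
print(uncertainty_scaled_limit(u_max, sigma=0.0, sigma_max=1.0))  # 2.0
print(uncertainty_scaled_limit(u_max, sigma=0.5, sigma_max=1.0))  # 1.0
print(uncertainty_scaled_limit(u_max, sigma=2.0, sigma_max=1.0))  # 0.2 (floor)
```

As training data accumulates and the learned dynamics become more certain, sigma drops and the limit relaxes back toward full authority, which is the "adjusted according to learning progress" behavior in the abstract.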
|
|
02:45-03:00, Paper TuAT13.4 |
Reducing the Deployment-Time Inference Control Costs of Deep Reinforcement Learning Agents Via an Asymmetric Architecture |
|
Chang, Chin-Jui | Academia Sinica |
Chu, Yu-Wei | National Tsing Hua University |
Ting, Chao-Hsien | National Tsing Hua University |
Liu, Hao Kang | National Tsing Hua University |
Hong, Zhang-Wei | National Tsing Hua University |
Lee, Chun-Yi | National Tsing Hua University |
Keywords: Machine Learning for Robot Control, Reinforcement Learning, AI-Based Methods
Abstract: Deep reinforcement learning (DRL) has been demonstrated to provide promising results in several challenging decision making and control tasks. However, the required inference costs of deep neural networks (DNNs) could prevent DRL from being applied to mobile robots that cannot afford high energy-consuming computations. To make DRL methods affordable on such energy-limited platforms, we propose an asymmetric architecture that reduces the overall inference costs via switching between a computationally expensive policy and an economic one. The experimental results evaluated on a number of representative benchmark suites for robotic control tasks demonstrate that our method is able to reduce the inference costs while retaining the agent's overall performance.
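The switching idea can be sketched as a lightweight gate that routes easy states to the economic policy and hard states to the expensive one. The gating criterion, the threshold, and all names are hypothetical; the paper's actual switching mechanism may differ:

```python
import numpy as np

def act(state, cheap_policy, expensive_policy, gate, threshold=0.5):
    """Route easy states to the economic policy and hard states to the
    computationally expensive one, reducing average inference cost."""
    if gate(state) < threshold:
        return cheap_policy(state), "cheap"
    return expensive_policy(state), "expensive"

# Toy setup: the gate flags states far from the origin as 'difficult'.
gate = lambda s: np.linalg.norm(s)
cheap = lambda s: -0.1 * s           # small linear controller
expensive = lambda s: -0.5 * s       # stand-in for a large DNN policy

print(act(np.array([0.1, 0.1]), cheap, expensive, gate)[1])  # cheap
print(act(np.array([2.0, 2.0]), cheap, expensive, gate)[1])  # expensive
```

The average inference cost then depends on how often the gate fires, so the threshold trades energy savings against the performance lost by running the small policy.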
|
|
TuAT14 Virtual-Asia, Time zone: GMT+1 |
Add to My Program |
Learning in Control |
|
|
Chair: Yang, Xing | Harbin Institute of Technology, Shenzhen |
|
02:00-02:15, Paper TuAT14.1 | Add to My Program |
Sample Efficient Reinforcement Learning Via Model-Ensemble Exploration and Exploitation |
|
Yao, Yao | Tsinghua-Berkeley Shenzhen Institute, Tsinghua Shenzhen Internat |
Xiao, Li | Tsinghua-Berkeley Shenzhen Institute, Tsinghua Shenzhen Internat |
An, Zhicheng | Tsinghua-Berkeley Shenzhen Institute, Tsinghua Shenzhen Internat |
Zhang, Wanpeng | Tsinghua University |
Luo, Dijun | Tencent |
Keywords: Reinforcement Learning, Deep Learning Methods, Probabilistic Inference
Abstract: Model-based deep reinforcement learning has achieved success in various domains that require high sample efficiency, such as Go and robotics. However, some issues remain, such as planning efficient exploration to learn more accurate dynamics models, evaluating the uncertainty of the learned models, and making more rational use of the models. To mitigate these issues, we present MEEE, a model-ensemble method that combines optimistic exploration with weighted exploitation. During exploration, unlike prior methods that directly select the optimal action maximising the expected cumulative return, our agent first generates a set of action candidates and then seeks out the optimal action taking both expected return and future observation novelty into account. During exploitation, different discounted weights are assigned to imagined transition tuples according to their model uncertainty, which prevents model prediction errors from propagating into agent training. Experiments on several challenging continuous control benchmark tasks demonstrate that our approach outperforms other state-of-the-art model-free and model-based methods, especially in sample complexity.
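The optimistic action-selection step can be illustrated generically. Everything below — the toy dynamics ensemble, the return estimate, and the bonus weight `lam` — is a hypothetical stand-in for the paper's learned models, used only to show the "expected return plus novelty bonus" scoring over a set of candidates:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy ensemble of learned dynamics models: each predicts the next state.
ensemble = [lambda s, a, w=w: s + w * a for w in (0.9, 1.0, 1.1)]

def expected_return(s, a):
    """Stand-in critic: prefer actions driving the mean predicted state to 0."""
    preds = [f(s, a) for f in ensemble]
    return -abs(np.mean(preds))

def novelty(s, a):
    """Ensemble disagreement used as an optimism / novelty bonus."""
    preds = [f(s, a) for f in ensemble]
    return float(np.std(preds))

def select_action(s, n_candidates=32, lam=0.1):
    """MEEE-style optimistic exploration (sketch): among sampled candidates,
    pick the action maximising expected return plus a novelty bonus."""
    candidates = rng.uniform(-1, 1, size=n_candidates)
    scores = [expected_return(s, a) + lam * novelty(s, a) for a in candidates]
    return candidates[int(np.argmax(scores))]
```

With `lam = 0` this reduces to the standard greedy choice; a positive `lam` biases the agent toward actions where the models disagree, i.e. where new data would be most informative.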
|
|
02:15-02:30, Paper TuAT14.2 | Add to My Program |
Dreaming: Model-Based Reinforcement Learning by Latent Imagination without Reconstruction |
|
Okada, Masashi | Panasonic Corporation |
Taniguchi, Tadahiro | Ritsumeikan University |
Keywords: Representation Learning, Reinforcement Learning, Machine Learning for Robot Control
Abstract: In the present paper, we propose a decoder-free extension of Dreamer, a leading model-based reinforcement learning (MBRL) method from pixels. Dreamer is a sample- and cost-efficient solution to robot learning, as it trains latent state-space models based on a variational autoencoder and conducts policy optimization by latent trajectory imagination. However, this autoencoding-based approach often causes object vanishing, in which the autoencoder fails to perceive key objects for solving control tasks, significantly limiting Dreamer's potential. This work aims to relieve this bottleneck and enhance Dreamer's performance by removing the decoder. For this purpose, we first derive a likelihood-free, InfoMax objective of contrastive learning from the evidence lower bound of Dreamer. Second, we incorporate two components, (i) independent linear dynamics and (ii) random-crop data augmentation, into the learning scheme to improve training performance. In comparison to Dreamer and other recent model-free reinforcement learning methods, our newly devised Dreamer with InfoMax and without a generative decoder (Dreaming) achieves the best scores on 5 difficult simulated robotics tasks in which Dreamer suffers from object vanishing.
|
|
02:30-02:45, Paper TuAT14.3 | Add to My Program |
A Variational Infinite Mixture for Probabilistic Inverse Dynamics Learning |
|
Abdulsamad, Hany | Technische Universität Darmstadt |
Nickl, Peter | Technical University of Darmstadt |
Klink, Pascal | Technische Universität Darmstadt |
Peters, Jan | Technische Universität Darmstadt |
Keywords: Model Learning for Control, Probabilistic Inference, Machine Learning for Robot Control
Abstract: Probabilistic regression techniques in control and robotics applications have to fulfill different criteria of data-driven adaptability, computational efficiency, scalability to high dimensions, and the capacity to deal with different modalities in the data. Classical regressors usually fulfill only a subset of these properties. In this work, we extend seminal work on Bayesian nonparametric mixtures and derive an efficient variational Bayes inference technique for infinite mixtures of probabilistic local polynomial models with well-calibrated certainty quantification. We highlight the model's power in combining data-driven complexity adaptation, fast prediction, and the ability to deal with discontinuous functions and heteroscedastic noise. We benchmark this technique on a range of large real-world inverse dynamics datasets, showing that the infinite mixture formulation is competitive with classical Local Learning methods and regularizes model complexity by adapting the number of components based on data and without relying on heuristics. Moreover, to showcase the practicality of the approach, we use the learned models for online inverse dynamics control of a Barrett-WAM manipulator, significantly improving the trajectory tracking performance.
|
|
02:45-03:00, Paper TuAT14.4 | Add to My Program |
Model-Based Domain Randomization of Dynamics System with Deep Bayesian Locally Linear Embedding |
|
Park, J. hyeon | Seoul National University |
Park, Sungyong | Seoul National University |
Kim, H. Jin | Seoul National University |
Keywords: Model Learning for Control, Probabilistic Inference, Deep Learning Methods
Abstract: Domain randomization (DR) is a powerful tool for making a policy robust to the uncertainty of dynamics caused by unobservable environmental parameters. Conventional DR has adopted model-free reinforcement learning as a policy optimizer. However, model-free methods in DR incur high time complexity due to the randomization process, in which the environment changes drastically. In this paper, we introduce model-based dynamics and policy learning for efficient DR. A Bayesian model of locally linear embedding is designed to fit the stochastic dynamics in DR. By virtue of the locally linear dynamics, model-based optimal control substitutes for policy optimization. Unlike previous works, our proposed Bayesian model with an MNIW prior allows the locally linear embedding to capture the dynamics in DR as a stochastic model. We show that a training method combining variational and adversarial approaches is adequate for Bayesian embedding. Finally, a model-based controller is designed on our Bayesian locally linear embedding, and it shows better performance in DR environments compared with a non-Bayesian locally linear embedding model.
|
|
TuAT15 Virtual-Asia, Time zone: GMT+1 |
Add to My Program |
Learning for Motion Planning |
|
|
Chair: Zhang, Zhengyan | Harbin Institute of Technology, Shenzhen |
|
02:00-02:15, Paper TuAT15.1 | Add to My Program |
Deep Imitation Learning for Autonomous Navigation in Dynamic Pedestrian Environments |
|
Qin, Lei | Singapore-MIT Alliance for Research and Technology |
Huang, Zefan | Singapore-MIT Alliance for Research and Technology |
Zhang, Chen | National University of Singapore |
Guo, Hongliang | University of Electronic Science and Technology of China |
Ang Jr, Marcelo H | National University of Singapore |
Rus, Daniela | MIT |
Keywords: Autonomous Vehicle Navigation, Imitation Learning
Abstract: Navigation through dynamic pedestrian environments in a socially compliant manner is still a challenging task for autonomous vehicles. Classical methods usually lead to unnatural vehicle behaviours in pedestrian navigation due to the difficulty of modeling social conventions mathematically. This paper presents an end-to-end path planning system that achieves autonomous navigation in dynamic environments through imitation learning. The proposed system is based on a fully convolutional neural network that maps raw sensory data to a confidence map for path extraction. Additionally, a classification network is introduced to reduce unnecessary re-planning and ensure that the vehicle returns to the global path when re-planning is not needed. The imitation-learning-based path planner is implemented on an autonomous wheelchair and tested in a new real-world dynamic pedestrian environment. Experimental results show that the proposed system generates paths for different driving tasks, such as pedestrian following and static and dynamic obstacle avoidance. In comparison to the state-of-the-art method, our system is superior at generating human-like trajectories.
|
|
02:15-02:30, Paper TuAT15.2 | Add to My Program |
Learning from Demonstration without Demonstrations |
|
Blau, Tom | University of Sydney |
Morere, Philippe | University of Sydney |
Francis, Gilad | The University of Sydney |
Keywords: Reinforcement Learning, Learning from Demonstration, Motion and Path Planning
Abstract: State-of-the-art reinforcement learning (RL) algorithms suffer from high sample complexity, particularly in the sparse reward case. A popular strategy for mitigating this problem is to learn control policies by imitating a set of expert demonstrations. The drawback of such approaches is that an expert needs to produce demonstrations, which may be costly in practice. To address this shortcoming, we propose Probabilistic Planning for Demonstration Discovery (P2D2), a technique for automatically discovering demonstrations without access to an expert. We formulate demonstration discovery as a search problem and leverage widely used planning algorithms such as Rapidly-exploring Random Trees to find demonstration trajectories. These demonstrations are used to initialize a policy, which is then refined by a generic RL algorithm. We provide theoretical guarantees that P2D2 finds successful trajectories, as well as bounds on its sampling complexity. We experimentally demonstrate that the method outperforms classic and intrinsic-exploration RL techniques in a range of classic control and robotics tasks, requiring only a fraction of the exploration samples and achieving better asymptotic performance.
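The demonstration-discovery step can be sketched with a minimal 2D RRT; the state space, step size, goal bias, and tolerance below are illustrative choices, not the paper's settings. The returned node path plays the role of a discovered demonstration:

```python
import math
import random

random.seed(0)

def rrt_demo(start, goal, step=0.2, goal_tol=0.25, iters=2000):
    """Minimal 2D RRT (sketch): grow a tree from `start` in an
    obstacle-free 5x5 workspace and return the node path to `goal`,
    to be replayed as a demonstration trajectory."""
    nodes = [start]
    parent = {0: None}
    for _ in range(iters):
        # 10% goal bias, otherwise sample the workspace uniformly.
        sample = goal if random.random() < 0.1 else (
            random.uniform(0, 5), random.uniform(0, 5))
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        nx, ny = nodes[i]
        d = math.dist((nx, ny), sample)
        if d == 0:
            continue
        # Extend one step from the nearest node toward the sample.
        new = (nx + step * (sample[0] - nx) / d,
               ny + step * (sample[1] - ny) / d)
        parent[len(nodes)] = i
        nodes.append(new)
        if math.dist(new, goal) < goal_tol:
            path, j = [], len(nodes) - 1
            while j is not None:            # backtrack to the root
                path.append(nodes[j])
                j = parent[j]
            return path[::-1]               # demonstration: start -> goal
    return None

demo = rrt_demo((0.0, 0.0), (4.0, 4.0))
```

In the P2D2 setting, such planner-found trajectories would seed policy learning before a generic RL algorithm takes over.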
|
|
02:30-02:45, Paper TuAT15.3 | Add to My Program |
Optimal Cooperative Maneuver Planning for Multiple Nonholonomic Robots in a Tiny Environment Via Adaptive-Scaling Constrained Optimization |
|
Li, Bai | Hunan University |
Zhang, Youmin | Concordia University |
Acarman, Tankut | Galatasaray University |
Ouyang, Yakun | Hunan University |
Kong, Qi | JDR&D Center of Automated Driving, JD Inc |
Shao, Zhijiang | Zhejiang University |
Keywords: Cooperating Robots
Abstract: This paper focuses on the time-optimal Multi-Vehicle Trajectory Planning (MVTP) problem for multiple car-like robots traveling in a tiny indoor scenario occupied by static obstacles. The complexity of the MVTP task at hand stems from i) the non-convexity and narrowness of the environment, ii) the nonholonomy and nonlinearity of the vehicle kinematics, iii) the pursuit of a time-optimal solution, and iv) the absence of predefined homotopic routes for the vehicles. Taken together, these factors are beyond the capability of the prevalent coupled or decoupled MVTP methods. This work proposes an adaptive-scaling constrained optimization (ASCO) approach that aims to find the optimum of the nominally intractable MVTP problem in a decoupled way. Concretely, an iterative computation framework is built in which each intermediate subproblem contains only the risky collision-avoidance constraints within a certain range and is thus tractable in scale. During the iteration, the constraint activation scale changes adaptively, which improves the convergence rate, enables recovery from an intermediate failure, and reduces dependence on a good initial guess. ASCO is compared against state-of-the-art MVTP methods and validated in real experiments conducted by a team of three car-like robots.
|
|
02:45-03:00, Paper TuAT15.4 | Add to My Program |
Optimization-Based Framework for Excavation Trajectory Generation |
|
Yang, Yajue | City University of Hong Kong |
Long, Pinxin | Baidu Inc |
Song, Xibin | Baidu Inc |
Pan, Jia | University of Hong Kong |
Zhang, Liangjun | Baidu USA |
Keywords: Robotics and Automation in Construction, Motion and Path Planning, Optimization and Optimal Control
Abstract: In this paper, we present a novel optimization-based framework for autonomous excavator trajectory generation under task-specific constraints. Traditional excavation trajectory generators over-simplify the geometric trajectory parameterization, thereby limiting the space for optimization. To expand the search space, we formulate a generic task specification for excavation by constraining the instantaneous motion of the bucket and adding a target-oriented constraint to control the amount of excavated soil. The trajectory is represented with a waypoint-interpolating spline. Time intervals between waypoints are relaxed as variables to facilitate generating the time-optimal trajectory in one stage. Experiments on a real robot platform demonstrate that our method adapts to different terrain shapes and outperforms other optimal path planners in terms of minimum joint length and minimum travel time.
|
|
TuAT16 Virtual-Asia, Time zone: GMT+1 |
Add to My Program |
Humanoids and Animaloids VII |
|
|
Chair: Zhang, Tianyi | Southern University of Science and Technology |
Co-Chair: Harada, Kensuke | Osaka University |
|
02:00-02:15, Paper TuAT16.1 | Add to My Program |
Reachability-Based Push Recovery for Humanoid Robots with Variable-Height Inverted Pendulum |
|
Yang, Shunpeng | Southern University of Science and Technology |
Chen, Hua | Southern University of Science and Technology |
Zhang, Luyao | Southern University of Science and Technology, China |
Cao, Zhefeng | Southern University of Science and Technology |
Wensing, Patrick M. | University of Notre Dame |
Liu, Yizhang | UBTECH |
Pang, Jianxin | UBTECH |
Zhang, Wei | Southern University of Science and Technology |
Keywords: Legged Robots, Humanoid and Bipedal Locomotion, Whole-Body Motion Planning and Control
Abstract: This paper studies push recovery for humanoid robots based on a variable-height inverted pendulum (VHIP) model. We first develop an approach for treating zero-step capturability of the VHIP with a novel methodology based on Hamilton-Jacobi (HJ) reachability analysis. This approach uses the sub-zero level set of a value function to encode capturability of the VHIP, where the value function is obtained by numerically solving an HJ variational inequality offline. Based on this analysis, a simple and effective method for adjusting foothold locations is then devised for cases where the VHIP state is not zero-step capturable. In addition, the HJ reachability analysis naturally induces an optimal control law that allows for rapid planning with the VHIP during push recovery online. To enable use of the strategy with a position-controlled humanoid robot, an associated differential-inverse-kinematics-based tracking controller is employed. The effectiveness of the overall framework is demonstrated with the UBTECH Walker robot in the MuJoCo simulator. Simulations show an approximately 20% improvement in push robustness compared to methods based on the classical linear inverted pendulum model.
|
|
02:15-02:30, Paper TuAT16.2 | Add to My Program |
Meaningful Centroidal Frame Orientation of Multi-Body Floating Locomotion Systems |
|
Du, Wenqian | Sorbonne University, ISIR, Paris 6 |
Wang, Ze | Sorbonne University - Université Pierre Et Marie CURIE |
Moullet, Etienne | Sorbonne Université |
Ben Amar, Faiz | Université Pierre Et Marie Curie, Paris 6 |
Keywords: Underactuated Robots, Dynamics, Kinematics
Abstract: In this paper, we propose a meaningful definition of the rotational centroidal orientation, which is largely missing from state-of-the-art centroidal momentum and dynamics theory for locomotion robots with one floating base. This centroidal instantaneous orientation rotates as the robot runs, and it is extracted from the total system angular inertia. The new centroidal frame is defined to be parallel to the principal axes of the centroidal angular inertia, which describes the whole-robot rotational motion. To avoid high fluctuations of the centroidal frame orientation parameters between adjacent control loops, we develop an algorithm that keeps the centroidal instantaneous frame smooth. The relationship between the centroidal angle rate and the centroidal angular velocity is derived, as well as the corresponding relationship at the acceleration level, which can be used for whole-body torque control. The new centroidal orientation, or Euler angle, is verified in two simulation scenarios, and another scenario is used to track and control the centroidal angular motion at the first-order kinematic level. The idea has considerable potential for system design, motion generation, and torque control across robotics communities with different research topics and theoretical backgrounds.
|
|
02:30-02:45, Paper TuAT16.3 | Add to My Program |
Online Object Searching by a Humanoid Robot in an Unknown Environment |
|
Tsuru, Masato | Osaka University |
Escande, Adrien | AIST |
Tanguy, Arnaud | CNRS-UM LIRMM |
Chappellet, Kevin | CNRS |
Harada, Kensuke | Osaka University |
Keywords: Humanoid Robot Systems, SLAM, Computer Vision for Automation
Abstract: This paper proposes a framework for an autonomous humanoid robot, aimed at searching for a target object in an unknown environment using 3D-simultaneous localization and mapping (SLAM). The robot determines the next viewpoint in real-time from an environment map and object recognition results, and automatically finds and grasps the target object. Whereas most robot exploration studies require a static map, hints regarding object position, or area size limitations, our system can globally find an occluded object in an unknown environment, based only on the 3D target model. Notably, our robot can predict an unobserved area, and actively reveal it while avoiding obstacles. Our 3D-SLAM approach also estimates a self-camera location in a world coordinate system, so that our robot can re-plan its footsteps while walking, without pause. We validated the efficacy of this method through real experiments with an "HRP2-KAI" in several environments, and achieved fully automated searching and grasping.
|
|
02:45-03:00, Paper TuAT16.4 | Add to My Program |
Origami-Inspired New Material Feeding Mechanism for Soft Growing Robots to Keep the Camera Stay at the Tip by Securing Its Path |
|
Kim, Ji-Hun | Korea University of Technology and Education |
Jang, JaeHyung | Korea Advanced Institute of Science and Technology |
Lee, Sang-min | KAIST |
Jeong, Sang-Goo | KAIST |
Kim, Yong-Jae | Korea University of Technology and Education |
Ryu, Jee-Hwan | Korea Advanced Institute of Science and Technology |
Keywords: Soft Robot Materials and Design, Soft Robot Applications, Mechanism Design
Abstract: Soft growing robots, which extend via tip eversion, have attracted a lot of interest due to their unique locomotion. Visual feedback from the tip would greatly enhance the usefulness of these robots for field exploration; however, because the material at the tip continually moves as the robot grows, mounting a camera at the tip and keeping it there during growth has been a major challenge. Previous designs seriously impeded the robots' intrinsic advantages, and it remains open how to keep the camera at the tip without encumbering the robot's natural ability to morph its shape and grow along or over obstacles. This paper, for the first time, proposes a method to keep a camera at the tip during growth while maintaining the compliant nature of the robot. An origami-based material feeding mechanism is designed to secure the camera's path from the base to the tip, which allows the camera's growing speed to be controlled independently of the robot. We experimentally investigate the required control parameters and present a vision-based control method for camera position adjustment. Finally, we demonstrate the feasibility of the proposed design in simulated experimental scenarios, including a narrow passage and a cluttered environment.
|
|
TuAT17 Virtual-Asia, Time zone: GMT+1 |
Add to My Program |
Human-Robot Interaction XI |
|
|
Co-Chair: Yang, Liangjing | Zhejiang University |
|
02:00-02:15, Paper TuAT17.1 | Add to My Program |
Exploiting Inherent Human Motor Behaviour in the Online Personalisation of Human-Prosthetic Interfaces |
|
Garcia-Rosas, Ricardo | The University of Melbourne |
Yu, Tianshi | The University of Melbourne |
Oetomo, Denny | The University of Melbourne |
Manzie, Chris | University of Melbourne |
Tan, Ying | The University of Melbourne |
Choong, Peter | The University of Melbourne |
Keywords: Prosthetics and Exoskeletons, Human-Robot Collaboration, Human Factors and Human-in-the-Loop
Abstract: Human-prosthetic interfaces require their settings to be tuned to individual users. This can potentially be done autonomously, while the prosthesis user performs a task, by online personalisation algorithms that adjust the interface parameters to optimise a given measure of performance. For convergence to be reached, both the human and the personalisation algorithm need to optimise towards the same objective. To date, task-oriented measures of performance have been used as the objective, requiring explicit feedback of the measure of performance to the prosthesis user, which is not practical. In this paper, the use of inherent human motor behaviour as the measure of performance for online personalisation algorithms is proposed and investigated. This allows the personalisation procedure to occur without the prosthesis user needing explicit knowledge of the measure of performance. The methodology for formulating inherent human motor behaviour within the framework of online personalisation of human-prosthetic interfaces is presented and validated through an experiment with nine able-bodied subjects. Experimental results demonstrate the efficacy of inherent human motor behaviour-based measures of performance in the design of an intuitive human-prosthetic interface specifically, and in human-robot interaction in general.
|
|
02:15-02:30, Paper TuAT17.2 | Add to My Program |
Design and Clinical Validation of a Robotic Ankle-Foot Simulator with Series Elastic Actuator for Ankle Clonus Assessment Training |
|
Pei, Yinan | University of Illinois at Urbana-Champaign |
Han, Tianyi | University of Illinois at Urbana-Champaign / Zhejiang University |
Zallek, Christopher | OSF Healthcare, Illinois Neurological Institute |
Liu, Tao | Zhejiang University |
Yang, Liangjing | Zhejiang University |
Hsiao-Wecksler, Elizabeth | University of Illinois at Urbana-Champaign |
Keywords: Medical Robots and Systems, Haptics and Haptic Interfaces, Education Robotics
Abstract: To fulfill the need for reliable and consistent medical training in the neurological examination technique for assessing ankle clonus, a series elastic actuator (SEA) based haptic training simulator was proposed and developed. The simulator's mechanism and controller were designed to render a quiet and safe training environment. Benchtop tests demonstrated that the prototype simulator was able to accurately estimate the interaction torque from the trainee and closely track a chirp torque command up to 10 Hz. The high-level impedance controller could switch between different clinically encountered states based on the trainee's assessment technique. The simulator was evaluated by a group of 17 experienced physicians and physical therapists. Subjects were instructed to induce sustained clonus using their normal technique. The simulator was assessed in two common clinical positions (seated and supine). Subjects scored the realism of the simulation on a variety of control features. To expedite controller design iteration, feedback from Day 1 was used to modify simulation parameters prior to testing on Day 2 with a new subject group. On average, subjects could successfully trigger a sustained clonus response within 4-5 attempts in the first position and 2-3 in the second. Feedback on the fidelity of the simulation improved between Day 1 and Day 2. Results suggest that this SEA-based simulator could be a viable training tool for healthcare trainees learning to assess ankle clonus.
|
|
02:30-02:45, Paper TuAT17.3 | Add to My Program |
A Hybrid Impedance Controller for Series Elastic Actuators to Render a Wide Range of Stable Stiffness in Uncertain Environments |
|
Lee, Yu-Shen | National Cheng Kung University |
Chiao, Kuan-Wei | National Cheng Kung University |
Lan, Chao-Chieh | National Cheng Kung University |
Keywords: Compliance and Impedance Control, Force Control, Compliant Joints and Mechanisms
Abstract: Accurate and wide-range stiffness control is important for safe human-robot interaction. Accurate stiffness control can be better achieved with series elastic actuators (SEAs) than with conventional rigid actuators. However, the stable range of virtual stiffness rendered by an SEA is limited by the stiffness of the physical spring, which cannot be too high if good force control accuracy is to be maintained. Adding a virtual damper or derivative gain can increase the stable range of virtual stiffness, but that range then depends strongly on the environmental stiffness. To relax this stiffness limitation in uncertain environments and explore further merits of SEAs, this paper proposes a hybrid impedance controller. This new controller linearly combines spring force feedback and inertia force feedback. The stable range of virtual stiffness can easily be increased to ten times the physical spring stiffness with minimal effect on force control accuracy. Unlike with typical impedance controllers, the environmental stiffness can be used to raise the stable range of stiffness, so the robustness of the controller can be ensured. Experiments are provided to verify the hybrid impedance controller. We expect that it can be used for SEAs in unstructured environments to provide a wide range of virtual stiffness.
|
|
02:45-03:00, Paper TuAT17.4 | Add to My Program |
Soft-Jig-Driven Assembly Operations |
|
Kiyokawa, Takuya | Nara Institute of Science and Technology |
Sakuma, Tatsuya | Nara Institute of Science and Technology |
Takamatsu, Jun | Nara Institute of Science and Technology |
Ogasawara, Tsukasa | Nara Institute of Science and Technology |
Keywords: Assembly, Soft Robot Applications, Industrial Robots
Abstract: To design a general-purpose assembly robot system that can handle objects of various shapes, we propose a soft jig that fits to the shapes of assembly parts. The functionality of the soft jig is based on a jamming gripper developed in the field of soft robotics. The soft jig has a bag covered with a malleable silicone membrane, which has high friction, elongation, and contraction rates for keeping parts fixed. The bag is filled with glass beads to achieve a jamming transition. We propose a method to configure parts-fixing on the soft jig based on contact relations, reachable directions, and the center of gravity of the parts that are fixed on the jig. The usability of the soft jig was evaluated in terms of the fixing performance and versatility for various shapes and postures of parts.
|
|
TuAT18 Virtual-Asia, Time zone: GMT+1 |
Add to My Program |
Human-Robot Interaction IV |
|
|
|
02:00-02:15, Paper TuAT18.1 | Add to My Program |
Comparison of Three Feedback Modalities for Haptics Sensation in Remote Machine Manipulation |
|
Haruna, Masaki | Mitsubishi Electric Corporation and Kansai University |
Kawaguchi, Noboru | Mitsubishi Electric Corporation |
Ogino, Masaki | Faculty of Informatics |
Koike-Akino, Toshiaki | Mitsubishi Electric Research Laboratories (MERL) |
Keywords: Perception for Grasping and Manipulation, Telerobotics and Teleoperation, Product Design, Development and Prototyping
Abstract: The use of haptic information is expected to improve the operability of remote machine systems. In this paper, for an object-grasping task with a remotely operated arm, we compare three feedback modalities, based on sound, vibration, and light, as pseudo-haptic information about contact with the object. An experimental evaluation was carried out by measuring the gripping force and brain waves of seven subjects. It was verified that feedback with light minimizes the gripping force and suppresses the processing load in the brain. This result supports the visual force-tactile method, which superimposes haptic information as an image on the fingertip contact point of a remote machine, improving the operability of a remote machine operation system without the need for a highly complex and expensive interface.
|
|
02:15-02:30, Paper TuAT18.2 | Add to My Program |
Prediction-Error Negativity to Assess Singularity Avoidance Strategies in Physical Human-Robot Collaboration |
|
Aldini, Stefano | University of Technology Sydney |
Singh, Avinash Kumar | University of Technology Sydney |
Carmichael, Marc | Centre for Autonomous Systems |
Wang, Yu-Kai | University of Technology Sydney |
Liu, Dikai | University of Technology, Sydney |
Lin, Chin-Teng | UTS |
Keywords: Physical Human-Robot Interaction, Human Factors and Human-in-the-Loop, Neurorobotics
Abstract: In physical human-robot collaboration (pHRC), singularity avoidance strategies are often critical to obtaining stable interaction dynamics. It is hypothesised that a predictable singularity avoidance strategy is preferred in pHRC, as humans tend to maximise predictability when using complex systems. Using an electroencephalogram (EEG), it is possible to assess the predictability of a task through a feature of event-related potentials (ERPs) called prediction-error negativity (PEN). In this paper, two research questions are addressed. Can a complex pHRC singularity avoidance strategy generate a detectable PEN? Are PEN and human preferences related when comparing different control settings in a singularity avoidance strategy? Fourteen participants compared two different sets of parameters (modes) in a singularity avoidance strategy based on the exponentially damped least-squares (EDLS) method. ERP results are presented in terms of power spectral density (PSD). The ERP results were then compared with human preferences to see whether they are related. Results show that the mode that causes PEN is also the one that participants did not like, suggesting that a lack of predictability may affect human preference.
|
|
02:30-02:45, Paper TuAT18.3 | Add to My Program |
A Large Area Robotic Skin with Sparsely Embedded Microphones for Human-Robot Tactile Communication |
|
Yang, Min Jin | Korea Advanced Institute of Science and Technology (KAIST) |
Park, Kyungseo | Korea Advanced Institute of Science and Technology |
Kim, Jung | KAIST |
Keywords: Physical Human-Robot Interaction, Force and Tactile Sensing
Abstract: Humans can socially interact in a non-verbal manner by understanding the intention behind a tactile stimulus. Patting someone on the back is one such tactile communication, considered a sign of encouragement in most cultures. The majority of such tactile communication consists of dynamic touch on large, passive body parts and is interpreted differently depending on how and where the body is touched. Thus, any robotic system that physically interacts with a human requires a dynamic tactile sensor for further social interaction. This paper presents a large dynamic tactile sensor that can cover a robot's passive body parts, using a few sparsely distributed microphones to cover a large area efficiently. A porous structured mesh, neoprene, and loop fabric form the sensor's skin, which generates and transfers a signal to the distributed microphones when a touch is introduced. TDOA source localisation algorithms are implemented to find the touch point located between the distributed microphones, and a simple convolutional neural network is trained to classify the type of touch. Localisation performance was qualitatively evaluated on a sensor testbed, and the sensor was applied to a mannequin's back to show its applicability, classifying touches into six classes with an accuracy of 88%.
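TDOA (time-difference-of-arrival) localisation of this kind can be sketched with a brute-force grid search; the microphone layout, wave speed, and grid resolution below are assumed values for illustration, not the sensor's actual parameters:

```python
import numpy as np

# Hypothetical layout: four microphones at the corners of a 0.3 m skin patch.
mics = np.array([[0.0, 0.0], [0.3, 0.0], [0.0, 0.3], [0.3, 0.3]])
c = 60.0  # assumed wave propagation speed in the fabric skin (m/s)

def tdoa(touch, mics=mics, c=c):
    """Arrival-time differences at each microphone, relative to mic 0."""
    t = np.linalg.norm(mics - touch, axis=1) / c
    return t - t[0]

def localise(measured_tdoa, res=0.005):
    """Brute-force grid search (sketch): return the grid point whose
    predicted TDOAs best match the measured ones in a least-squares sense."""
    xs = np.arange(0.0, 0.3 + res, res)
    best, best_err = None, np.inf
    for x in xs:
        for y in xs:
            err = np.sum((tdoa(np.array([x, y])) - measured_tdoa) ** 2)
            if err < best_err:
                best, best_err = (x, y), err
    return best

# Synthetic example: recover a touch at (0.10, 0.20) from its own TDOAs.
est = localise(tdoa(np.array([0.10, 0.20])))
```

A practical system would refine this with a closed-form or gradient-based solver and handle measurement noise, but the grid search shows the core inversion from time differences to position.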
|
|
02:45-03:00, Paper TuAT18.4 | Add to My Program |
Star Topology Based Interaction for Robust Trajectory Forecasting in Dynamic Scene |
|
Zhu, Yanliang | Meituan-Dianping |
Ren, Dongchun | Meituan-Dianping |
Qian, Deheng | MeiTuan |
Li, Xin | Meituan-Dianping Group |
Fan, Mingyu | Wenzhou University |
Xia, Huaxia | Meituan |
Keywords: Autonomous Agents, Multi-Robot Systems, Deep Learning Methods
Abstract: Motion prediction of multiple agents in a dynamic scene is a crucial component of many real applications, including intelligent monitoring and autonomous driving. Due to the complex interactions among the agents and with the surrounding scene, accurate trajectory prediction remains a great challenge. In this paper, we propose a new method for robust trajectory prediction of multiple intelligent agents in a dynamic scene. The input of the method includes the observed trajectories of all agents and, optionally, the planning of the ego-agent and the surrounding high-definition map at every time step. Given observed trajectories, an efficient approach with a star computational topology computes both the spatiotemporal interaction features and the current interaction features between the agents, with time complexity that scales linearly with the number of agents. Moreover, on an autonomous vehicle, the proposed prediction method can make use of the ego-agent's planning to improve the modeling of the interactions between surrounding agents. To increase robustness to upstream perception noise, at the training stage we randomly mask out the input data, i.e., points on the observed trajectories of agents and the lane sequence. Experiments on autonomous-driving and pedestrian-walking datasets demonstrate that the proposed method is not only effective when the planning of the ego-agent and the high-definition map are provided, but also achieves state-of-the-art performance.
|
|
TuAT19 Virtual-Asia, Time zone: GMT+1 |
Add to My Program |
Field Robotics V |
|
|
Co-Chair: Zhang, Liangjun | Baidu USA |
|
02:00-02:15, Paper TuAT19.1 | Add to My Program |
A Peg-In-Hole Task Strategy for Holes in Concrete |
|
Yasutomi, André Yuji | Hitachi Ltd |
Mori, Hiroki | Waseda University |
Ogata, Tetsuya | Waseda University |
Keywords: Robotics and Automation in Construction, Deep Learning in Grasping and Manipulation, Reinforcement Learning
Abstract: A method that enables an industrial robot to accomplish the peg-in-hole task for holes in concrete is proposed. The proposed method involves slightly detaching the peg from the wall when moving between search positions, to avoid the negative influence of concrete's high friction coefficient. It uses a deep neural network (DNN), trained via reinforcement learning, to effectively find holes with variable shape and surface finish (due to the brittle nature of concrete) without analytical modeling or control parameter tuning. The method uses the displacement of the peg toward the wall surface, in addition to force and torque, as one of the inputs to the DNN. Since the displacement increases as the peg gets closer to the hole (due to the chamfered shape of holes in concrete), it is a useful input to the DNN. The proposed method was evaluated by training the DNN on a hole 500 times and attempting to find 12 unknown holes. The results of the evaluation show that the DNN enabled a robot to find the unknown holes with an average success rate of 96.1% and an average execution time of 12.5 seconds. Additional evaluations with random initial positions and a different type of peg demonstrate that the trained DNN generalizes well to different conditions. Analyses of the influence of the peg displacement input showed that this parameter increases the success rate of the DNN. These results validate the effectiveness of the proposed method in the construction field.
|
|
02:15-02:30, Paper TuAT19.2 | Add to My Program |
Semantic Mapping of Construction Site from Multiple Daily Airborne LiDAR Data |
|
Westfechtel, Thomas | The University of Tokyo |
Ohno, Kazunori | Tohoku University |
Akegawa, Tetsu | Tohoku University |
Yamada, Kento | Tohoku Univ |
Bezerra, Ranulfo | Tohoku University |
Kojima, Shotaro | Tohoku University |
Suzuki, Taro | Chiba Institute of Technology |
Komatsu, Tomohiro | KOWATECH Co |
Shibata, Yukinori | Sato Komuten Co |
Asano, Kimitaka | Sanyo-Technics Co |
Nagatani, Keiji | The University of Tokyo |
Miyamoto, Naoto | Tohoku Univ |
Suzuki, Takahiro | Tohoku University |
Harada, Tatsuya | The University of Tokyo |
Tadokoro, Satoshi | Tohoku University |
Keywords: Field Robots, Semantic Scene Understanding, Robotics and Automation in Construction
Abstract: Semantic maps are an important tool for providing robots with high-level knowledge about the environment, enabling them to better react to and interact with their surroundings. However, as a single measurement of the environment is merely a snapshot of a specific time, it does not necessarily reflect the underlying semantics. In this work, we propose a method to create a semantic map of a construction site by fusing multiple days of data. The construction site is measured by an unmanned aerial vehicle (UAV) equipped with a LiDAR. We extract clusters above ground level from the measurements and classify them using either a random forest or a deep-learning-based classifier. Furthermore, we combine the classification results of several measurements to generalize the classification of the single measurements and create a general semantic map of the working site. We measured two construction fields for our evaluation. The classification models achieve an average intersection-over-union (IoU) score of 69.2% on the Sanbongi field, which is used for training, validation, and testing, and an IoU score of 49.16% on a hold-out testing field. In a final step, we show how the semantic map can be employed to suggest a parking spot for a dump truck and, in addition, show that the semantic map can be utilized to improve path planning inside the construction site.
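The intersection-over-union score reported in the abstract is the standard semantic-segmentation metric: per class, the overlap between predicted and ground-truth labels divided by their union, averaged over classes. A minimal illustrative computation (not the authors' evaluation code; labels below are toy data):

```python
import numpy as np

def mean_iou(pred, gt, num_classes):
    """Mean per-class IoU over flat label arrays."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:                  # skip classes absent from both
            ious.append(inter / union)
    return float(np.mean(ious))

pred = np.array([0, 0, 1, 1, 2, 2])    # toy cluster classifications
gt   = np.array([0, 1, 1, 1, 2, 0])    # toy ground truth
print(mean_iou(pred, gt, 3))           # -> 0.5
```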
|
|
02:30-02:45, Paper TuAT19.3 | Add to My Program |
TaskNet: A Neural Task Planner for Autonomous Excavator |
|
Zhao, Jinxin | Baidu |
Zhang, Liangjun | Baidu USA |
Keywords: Field Robots, AI-Based Methods, Task Planning
Abstract: We present TaskNet, a novel data-driven task planner for an autonomous excavator that plans feasible task-level sequences by learning from demonstration data. Given a high-level excavation objective, our TaskNet planner can decompose it into sub-tasks, each of which can be further decomposed into task primitives with specifications. We train TaskNet using an excavation trace generator and evaluate its performance in a 3D physically-based terrain and excavator simulator. Compared to imitation-learning-based methods, the experimental results show that TaskNet can effectively learn task decomposition strategies. The resulting sequences of task primitives can be used as inputs by any excavator motion planner for generating feasible joint-level trajectories. We further validate TaskNet on a state-of-the-art autonomous excavator hardware and software system: the 49-ton autonomous excavator can successfully perform material loading tasks.
|
|
02:45-03:00, Paper TuAT19.4 | Add to My Program |
Steering Induced Roll Quantification During Ship Turning Circle Manoeuvre |
|
Esnault, Nathanael | The University of Auckland |
Patel, Nitish | Univ of Auckland |
Tunnicliffe, Jon | The University of Auckland |
Keywords: Marine Robotics, Dynamics, Underactuated Robots
Abstract: A well-known and well-studied feature of boat dynamics is steering-induced roll. This property underlies a technique for stabilising ships in waves, called rudder roll stabilisation (RRS), which makes navigation safer and more pleasant; the technique is based on the generation of induced roll. Because of its specific application, studies have been limited to commercial vessels using a single propeller-rudder system (SPRS). This study not only broadens the technique to any propulsion and steering mechanism usable with RRS by introducing thrust asymmetry, but also incorporates the effect of the centrifugal forces that were previously left out. To prove the capabilities of the new concept, a test is conducted with an RC demonstrator fitted with a differential jet-pump system (DJPS), performing a turning-circle manoeuvre.
|
|
TuAT20 Virtual-Asia, Time zone: GMT+1 |
Add to My Program |
Grasping II |
|
|
Chair: Liu, Jianbang | The Chinese University of Hong Kong |
|
02:00-02:15, Paper TuAT20.1 | Add to My Program |
Dig-Grasping Via Direct Quasistatic Interaction Using Asymmetric Fingers: An Approach to Effective Bin Picking |
|
Tong, Zhekai | The Hong Kong University of Science and Technology |
Ng, Yu Hin | The Hong Kong University of Science and Technology |
Kim, Chung Hee | The Hong Kong University of Science and Technology |
He, Tierui | The Hong Kong University of Science and Technology |
Seo, Jungwon | The Hong Kong University of Science and Technology |
Keywords: Grasping, Grippers and Other End-Effectors
Abstract: This paper introduces a new method for simultaneously singulating and picking objects from clutter. The method can lead to effective robotic bin picking, which still remains elusive despite its importance in many industrial and domestic applications, especially for objects with a thin profile. We leverage planar quasistatic pushing manipulation as a standardized form of physical interaction between a robot and the object to pick. A gripper designed with digit asymmetry, realized as a two-fingered gripper with different finger lengths, is suggested as the key to successful singulation and picking through the controlled pushing maneuver. A detailed account of the manipulation process and design principles is presented. An extensive set of experiments validates the effectiveness of our approach in three-dimensional bin-picking tasks. Beyond picking, more complex manipulation capabilities such as autonomous pick-and-place/pack are also presented.
|
|
02:15-02:30, Paper TuAT20.2 | Add to My Program |
Uncertainty-Aware Self-Supervised Target-Mass Grasping of Granular Foods |
|
Takahashi, Kuniyuki | Preferred Networks |
Ko, Wilson Kien Ho | Suzume K.K |
Ummadisingu, Avinash | Preferred Networks, Inc |
Maeda, Shin-ichi | Preferred Networks |
Keywords: Deep Learning in Grasping and Manipulation, Grasping
Abstract: Food packing industry workers typically pick a target amount of food by hand from a food tray and place it in containers. Since menus are diverse and change frequently, robots must adapt and learn to handle new foods in a short time span. Learning to grasp a specific amount of granular food requires a large training dataset, which is challenging to collect reasonably quickly. In this study, we propose ways to reduce the necessary amount of training data by augmenting a deep neural network with models that estimate its uncertainty through self-supervised learning. To further reduce human effort, we devise a data collection system that automatically generates labels. We build on the idea that we can grasp sufficiently well if there is at least one low-uncertainty (high-confidence) grasp point among the various grasp point candidates. We evaluate the proposed methods on a variety of granular foods (coffee beans, rice, oatmeal, and peanuts), each of which has a different size, shape, and material properties such as volumetric mass density or friction. For these foods, we show significantly improved grasp accuracy for user-specified target masses using smaller datasets by incorporating uncertainty.
|
|
02:30-02:45, Paper TuAT20.3 | Add to My Program |
SCT-CNN: A Spatio-Channel-Temporal Attention CNN for Grasp Stability Prediction |
|
Yan, Gang | Waseda University |
Schmitz, Alexander | Waseda University |
Funabashi, Satoshi | Waseda University, Sugano Lab |
Somlor, Sophon | Waseda University |
Tomo, Tito Pradhono | Waseda University |
Sugano, Shigeki | Waseda University |
Keywords: Deep Learning in Grasping and Manipulation, Perception for Grasping and Manipulation
Abstract: Recently, tactile sensing has attracted great interest for robotic manipulation. Predicting whether a grasp will be stable, i.e., whether the grasped object will drop out of the gripper while being lifted, can aid robust robotic grasping. Previous methods paid equal attention to all regions of the tactile data matrix or all time steps in the tactile sequence, which may include irrelevant or redundant information. In this paper, we propose to equip convolutional neural networks with spatial-channel and temporal attention mechanisms (SCT attention CNN) to predict future grasp stability. To the best of our knowledge, this is the first use of attention mechanisms for predicting grasp stability relying only on tactile information. We conduct experiments with 52 daily objects. Moreover, we compare different spatio-temporal models and attention mechanisms in an empirical study. We found a significant accuracy improvement of up to 5% when using SCT attention. We believe that attention mechanisms can also improve the performance of other tactile learning tasks in the future, such as slip detection and hardness perception.
|
|
02:45-03:00, Paper TuAT20.4 | Add to My Program |
Tactile Velocity Estimation for Controlled In-Grasp Sliding |
|
Chen, Yuan | Samsung AI Center New York |
Prepscius, Colin | Samsung |
Lee, Daewon | Samsung AI Center New York |
Lee, Daniel | Cornell Tech |
Keywords: Force and Tactile Sensing, In-Hand Manipulation, Grippers and Other End-Effectors
Abstract: This paper studies the problem of controlling the sliding motion of an object held by a robot manipulator. We show how a parallel-jaw gripper can reliably control the motion of a rigid, prism-like object by 1) estimating the object's sliding velocity using measurements from tactile sensors at the gripper's fingertips and 2) controlling the grip strength to regulate the sliding velocity. We first train a neural network to estimate the sliding velocity from tactile signals alone, using tactile sensor measurements paired with sliding velocities determined by an external motion-capture system over repeated sliding trials of 28 objects varying in size, shape, and surface texture. The velocity estimates from the neural network are then used as feedback for a closed-loop grip controller that maintains the desired sliding velocity. Experimental results show that our neural network estimates the object's sliding velocity with mean squared error under 0.5 (cm/s)^2, generalizes well to objects of new shapes and surface textures, and enables our closed-loop grip controller to reliably slide objects at different target velocities.
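The closed-loop idea above, regulating sliding velocity through grip strength, can be sketched with a toy proportional controller: squeeze harder when the estimated slide is too fast, loosen when it is too slow. The object model, gain, and force limits below are entirely hypothetical and only illustrate the feedback structure, not the paper's controller.

```python
def grip_step(force, v_est, v_target, kp=0.8, f_min=0.5, f_max=10.0):
    """One control update: proportional adjustment of grip force."""
    force += kp * (v_est - v_target)      # sliding too fast -> squeeze harder
    return min(max(force, f_min), f_max)  # respect actuator limits

# Toy simulation: assume sliding speed decreases linearly with grip force.
force, v_target = 1.0, 2.0                # start loose, target 2.0 cm/s
for _ in range(50):
    v_est = max(0.0, 8.0 - 1.2 * force)   # hypothetical object model
    force = grip_step(force, v_est, v_target)
print(round(8.0 - 1.2 * force, 2))        # settles near the 2.0 cm/s target
```

In the paper's setup the velocity estimate comes from the tactile neural network rather than an analytic model, but the controller's role is the same.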
|
|
TuAT21 Virtual-Asia, Time zone: GMT+1 |
Add to My Program |
Biologically-Inspired Robots |
|
|
Chair: Ren, Hongliang | The Chinese University of Hong Kong (CUHK) |
Co-Chair: Ma, Nachuan | Southern University of Science and Technology |
|
02:00-02:15, Paper TuAT21.1 | Add to My Program |
Multiphysics Simulation of Magnetically Actuated Robotic Origami Worms |
|
Swaminathan, Ruphan | National Institute of Technology Tiruchirappalli |
Cai, Catherine | National University of Singapore |
Yuan, Sishen | Harbin Institute of Technology , Shenzhen |
Ren, Hongliang | The Chinese University of Hong Kong (CUHK) |
Keywords: Simulation and Animation, Biologically-Inspired Robots
Abstract: Multiphysics simulation of magnetically actuated origami robots promises a range of applications such as synthetic data generation, design parameter optimization, predicting the robot’s performance, and implementing various control algorithms, but has rarely been explored. This paper presents a realistic multiphysics simulation of magnetically actuated origami robots implemented in Grasshopper3D using the Kangaroo plug-in. Due to the interaction between multiple magnets, the complex motion dynamics of a worm-like robot are generated and analyzed. We further show the possibility of accurately simulating origami structures made of different materials and permanent magnets of different shapes, sizes, and magnetic strength. The simulation’s unknown parameters are determined by conducting similar practical and simulated experiments and comparing their characteristics. Further, we show the close resemblance between the real and simulated behavior of the origami robot.
|
|
02:15-02:30, Paper TuAT21.2 | Add to My Program |
Spherical Magnetic Joint for Inverted Locomotion of Multi-Legged Robot |
|
Sison, Harn | Osaka University |
Ratsamee, Photchara | Cyber Media Center, Osaka University |
Higashida, Manabu | Osaka University |
Mashita, Tomohiro | Osaka University |
Uranishi, Yuki | Osaka University |
Takemura, Haruo | Osaka University |
Keywords: Legged Robots, Mechanism Design, Robotics and Automation in Construction
Abstract: In this paper, we present a spherical magnetic joint for the inverted locomotion of a multi-legged robot. The permanent magnet's spherical shape allows the robot to attach its foot to a steel surface without energy consumption. However, inverted locomotion requires foot flexibility for placement and gait construction. Therefore, the spherical magnetic joint mechanism was designed and implemented for the robot's feet to deal with angular placement. To decouple the foot from the steel surface, the attractive force is adjusted by tilting the adjustable sleeve mechanism to an adequate angle between the surface and the foot tip. Experimental results show that the spherical magnetic joint can maintain the attractive force at any angle, and that the sleeve mechanism can reduce the reaction force for pulling the legs from the steel surface by 20%. Furthermore, the designed gait for inverted locomotion with the spherical magnetic joint was tested and compared to prove the concept of the spherical magnetic joint and sleeve mechanism.
|
|
02:30-02:45, Paper TuAT21.3 | Add to My Program |
An Open-Source Mechanical Design of ALARIS Hand: A 6-DOF Anthropomorphic Robotic Hand |
|
Nurpeissova, Ayaulym | Nazarbayev University |
Tursynbekov, Talgat | Nazarbayev University |
Shintemirov, Almas | Nazarbayev University |
Keywords: Multifingered Hands, Mechanism Design, Grasping
Abstract: This paper presents a new open-source mechanical design of the 6-DOF anthropomorphic ALARIS robotic hand, which can serve as a low-cost design platform for further customization and use in research and education. The presented hand design employs linkage-based three-phalange finger and two-phalange adaptive thumb designs with non-backdrivable worm-and-rack transmission mechanisms. A combination of design improvements and solutions, discussed in the paper, is implemented in a functional robotic hand prototype with powerful grasping capabilities, which utilizes inexpensive off-the-shelf components and 3D printing technology, ensuring the hand's low manufacturing cost and replicability. The open-source mechanical design of the ALARIS robotic hand is freely available for download from the authors' research lab website https://www.alaris.kz and https://github.com/alarisnu/alaris_hand.
|
|
02:45-03:00, Paper TuAT21.4 | Add to My Program |
Biomimetic Operational Space Control for Musculoskeletal Humanoid Optimizing across Muscle Activation and Joint Nullspace |
|
Toshimitsu, Yasunori | University of Tokyo |
Kawaharazuka, Kento | The University of Tokyo |
Nishiura, Manabu | University of Tokyo |
Koga, Yuya | The University of Tokyo |
Omura, Yusuke | The University of Tokyo |
Asano, Yuki | The University of Tokyo |
Okada, Kei | The University of Tokyo |
Kawasaki, Koji | The University of Tokyo |
Inaba, Masayuki | The University of Tokyo |
Keywords: Tendon/Wire Mechanism, Modeling, Control, and Learning for Soft Robots, Biomimetics
Abstract: We have implemented a force-based operational space controller on a physical musculoskeletal humanoid robot arm. The controller calculates muscle activations based on a biomimetic Hill-type muscle model. We propose a method to include the joint torque nullspace in the optimization process, which enables the robot to exploit the nullspace to gradually lower its overall muscle activation. We have verified in experiments that it can react compliantly to external disturbances while retaining its operational space task.
|
|
TuAT22 Virtual-Asia, Time zone: GMT+1 |
Add to My Program |
Applications of Micro and Nano Robotics I |
|
|
Chair: Zhang, Li | The Chinese University of Hong Kong |
Co-Chair: Shang, Wanfeng | Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences |
|
02:00-02:15, Paper TuAT22.1 | Add to My Program |
Parallel Actuation of Nanorod Swarm and Nanoparticle Swarm to Different Targets |
|
Du, Xingzhou | The Chinese University of Hong Kong |
Jin, Dongdong | The Chinese University of Hong Kong |
Wang, Qianqian | The Chinese University of Hong Kong |
Yang, Shihao | The Chinese University of Hong Kong |
Chiu, Philip Wai-yan | Chinese University of Hong Kong |
Zhang, Li | The Chinese University of Hong Kong |
Keywords: Micro/Nano Robots, Swarm Robotics, Biologically-Inspired Robots
Abstract: After years of development, various robot swarms have been proposed for many complicated tasks, such as forming patterns, cooperative locomotion, and adapting to different environments. However, controlling microrobotic swarms remains challenging owing to the lack of integrated devices on the small-scale agents, and actuating multiple microrobotic swarms to different targets under the same global input is even more difficult. In this work, we present a swarm of nickel nanorods whose locomotion velocity differs from that of Fe3O4 nanoparticle swarms, and exploit this difference to actuate the two swarms to different targets under the same customized oscillating magnetic field. The effects of the agents' magnetic anisotropy on the macroscopic swarm behaviour are analysed theoretically. To validate the strategy, the speeds of the two swarms were characterized through experiments, and demonstrations were conducted to show the capability of driving the two swarms to different locations in the same environment. Furthermore, parallel locomotion of the two swarms in opposite directions was achieved on a tilted substrate. This work proves the feasibility of simultaneously actuating two swarms to different targets and promotes fundamental understanding of microrobotic swarms.
|
|
02:15-02:30, Paper TuAT22.2 | Add to My Program |
Robotic Micromanipulation for Active Pin Alignment in Electronic Soldering Industry |
|
Ren, Hao | Shenzhen Institutes of Advanced Technology, Chinese Academy of S |
Wu, Xinyu | CAS |
Shang, Wanfeng | Shenzhen Institutes of Advanced Technology, Chinese Academy of S |
Keywords: Automation at Micro-Nano Scales, Visual Servoing, Industrial Robots
Abstract: In the context of robotic high-precision soldering, we propose an image-based pin alignment control method based on active plastic deformation. Plastic deformation is usually regarded as a failure mechanism, in which objects do not return to their original state. Here, in contrast to this convention, we utilize the plastic deformation of the metal pin to perform pin alignment and improve the quality of the solder joint. To this end, we embed springback compensation into the image-based pin alignment controller. Finally, the proposed strategy is successfully demonstrated and evaluated on a modified robotic manipulation system. The results show that the alignment error is less than 20 μm, far smaller than that of pin alignment performed without considering plastic deformation and elastic recovery. This work accounts for the active plastic deformation and spontaneous elastic recovery of soft objects, which could greatly promote the use of robotics in micromanufacturing and microfabrication in the lab and in industry, especially for soft objects.
|
|
02:30-02:45, Paper TuAT22.3 | Add to My Program |
In-Situ Bonding of Multilayer Microfluidic Devices Assisted by a Fully-Automated Aligning System |
|
Li, Pengyun | Beijing Institute of Technology |
Liu, Xiaoming | Beijing Institute of Technology |
Liu, Dan | Beijing Institute of Technology |
Tang, Xiaoqing | Beijing Institute of Technology |
Kojima, Masaru | Osaka University |
Huang, Qiang | Beijing Institute of Technology |
Arai, Tatsuo | University of Electro-Communications |
Keywords: Micro/Nano Robots, Automation at Micro-Nano Scales, Biological Cell Manipulation
Abstract: Three-dimensional multilayer microfluidic devices (MMDs) fabricated from polydimethylsiloxane (PDMS) are a solution for on-chip high-complexity serial or parallel processes. In this paper, we propose and set up an automated 8-DOF alignment system assisted by computer vision, which is capable of automatic leveling, aligning, and in-situ bonding of multiple PDMS layers. A microscope is motorized by a Z stage with a grating ruler to optically focus on the marks and record their Z positions. A 3-PRS mechanism with flexible hinges is proposed to level the top layer. An XYZR platform is utilized for in-plane alignment of the top layer and for moving it down to bond with the bottom layer. The Z positions of the marks and the translational and rotational offsets obtained by image processing are used for automated leveling and aligning of the PDMS layers. Experimental results showed that the translational and rotational alignment errors are less than 5 μm and 0.1°, respectively. The whole bonding procedure, including the plasma treatment, took less than 3 minutes. Finally, a fabricated 3-layer microfluidic device with a deformable microchannel was applied to cell squeezing, proving that the proposed alignment system has great potential for developing functional MMDs.
|
|
02:45-03:00, Paper TuAT22.4 | Add to My Program |
Robotic Handling of Micro-Objects Using Stochastic Optically-Actuated End-Effector |
|
Ta, Quang Minh | Nanyang Technological University |
Cheah, C. C. | Nanyang Technological University |
Keywords: Motion Control, Automation at Micro-Nano Scales, Grippers and Other End-Effectors
Abstract: Robotic handling of objects in the micro-world is a challenging problem. Due to the fundamental differences between micro-manipulation and robotic manipulators, it is difficult to produce a micro-hand that functions like robotic hands in the physical world. In this paper, we propose a robotic handling approach for micro-objects using a stochastic optically-actuated end-effector. An end-effector is first formed from several optically-actuated micro-particles to perform an autonomous grasping task on a micro-object. Once grasped by the end-effector, the object is maneuvered by a robotic stage that acts as a robotic mobile base. This paper therefore offers a robotic control approach able to perform a "pick and place" task in the micro-world. In the proposed approach, the Brownian effect on the optically-actuated end-effector is considered so as to reflect the real nature of optical tweezing. Moreover, by using the end-effector for stable grasping of the target object, the Brownian effect on the grasped object can also be reduced. A simple feedback control method is utilized for manipulation of the grasped object, further enhancing the robustness of the control system in the presence of Brownian perturbations. Rigorous mathematical formulation and stability analysis are carried out, and experimental validation is performed to illustrate the feasibility of the robotic handling approach.
|
|
TuAT23 Virtual-Asia, Time zone: GMT+1 |
Add to My Program |
Control and Optimization II |
|
|
Co-Chair: Ha, Sehoon | Georgia Institute of Technology |
|
02:00-02:15, Paper TuAT23.1 | Add to My Program |
A Class of Optimal Switching Mixed Data Injection Attack in Cyber-Physical Systems |
|
Gao, Sheng | Tongji University |
Zhang, Hao | Tongji University |
Wang, Zhuping | Tongji University |
Huang, Chao | Tongji University |
Keywords: Optimization and Optimal Control, Sensor Networks
Abstract: This paper considers a class of switching mixed data injection attacks using input derivatives in cyber-physical systems with a linear quadratic (LQ) cost, from the perspective of the attacker. The attacker injects data mixed with false data and its derivative into a healthy system, which degrades the performance of the original system. For this situation, an optimal mixed data injection attack strategy is designed to minimize the quadratic cost and maximally damage the performance of the system. After that, to increase the complexity, concealment, and energy efficiency of the attack, we design an optimal switching mixed data injection attack strategy. Finally, numerical results and comparative experiments are provided to illustrate the effectiveness of the proposed method.
|
|
02:15-02:30, Paper TuAT23.2 | Add to My Program |
Observation Space Matters: Benchmark and Optimization Algorithm |
|
Kim, Joanne Taery | Lawrence Livermore National Laboratory |
Ha, Sehoon | Georgia Institute of Technology |
Keywords: Reinforcement Learning, Performance Evaluation and Benchmarking, Simulation and Animation
Abstract: Recent advances in deep reinforcement learning (deep RL) enable researchers to solve challenging control problems, from simulated environments to real-world robotic tasks. However, deep RL algorithms are known to be sensitive to the problem formulation, including observation spaces, action spaces, and reward functions. There exist numerous choices for observation spaces but they are often designed solely based on prior knowledge due to the lack of established principles. In this work, we conduct benchmark experiments to verify common design choices for observation spaces, such as Cartesian transformation, binary contact flags, a short history, or global positions. Then we propose a search algorithm to find the optimal observation spaces, which examines various candidate observation spaces and removes unnecessary observation channels with a Dropout-Permutation test. We demonstrate that our algorithm significantly improves learning speed compared to manually designed observation spaces. We also analyze the proposed algorithm by evaluating different hyperparameters.
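The abstract's idea of removing unnecessary observation channels resembles classic permutation importance; as a hedged sketch (the paper's Dropout-Permutation test may differ), one can permute a single channel and measure how much a fitted model's error grows. Channels whose permutation barely hurts are candidates for removal. The linear "policy" and data below are toy stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                    # 3 observation channels
y = 2.0 * X[:, 0] + 0.1 * rng.normal(size=500)   # only channel 0 matters
w, *_ = np.linalg.lstsq(X, y, rcond=None)        # fit a simple linear model

def mse(Xm):
    return float(np.mean((Xm @ w - y) ** 2))

base = mse(X)
for ch in range(3):
    Xp = X.copy()
    Xp[:, ch] = rng.permutation(Xp[:, ch])       # destroy channel ch
    print(ch, round(mse(Xp) - base, 3))          # large increase -> channel needed
```

Here permuting channel 0 inflates the error dramatically while channels 1 and 2 barely move it, flagging them as removable.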
|
|
02:30-02:45, Paper TuAT23.3 | Add to My Program |
Interleaving Fast and Slow Decision Making |
|
Gulati, Aditya | International Institute of Information Technology, Bangalore |
Soni, Sarthak | International Institute of Information Technology Bangalore |
Rao, Shrisha | International Institute of Information Technology -Bangalore |
Keywords: AI-Based Methods, Agent-Based Systems
Abstract: The “Thinking, Fast and Slow” paradigm of Kahneman proposes that we use two different styles of thinking---a fast and intuitive System 1 for certain tasks, along with a slower but more analytical System 2 for others. While the idea of using this two-system style of thinking is gaining popularity in AI and robotics, our work considers how to interleave the two styles of decision-making, i.e., how System 1 and System 2 should be used together. For this, we propose a novel and general framework which includes a new System 0 to oversee Systems 1 and 2. At every point when a decision needs to be made, System 0 evaluates the situation and quickly hands over the decision-making process to either System 1 or System 2. We evaluate such a framework on a modified version of the classic Pac-Man game, with an already-trained RL algorithm for System 1, a Monte-Carlo tree search for System 2, and several different possible strategies for System 0. As expected, arbitrary switches between Systems 1 and 2 do not work, but certain strategies do well. With System 0, an agent is able to perform better than one that uses only System 1 or System 2.
|
|
02:45-03:00, Paper TuAT23.4 | Add to My Program |
Multi-Output Infinite Horizon Gaussian Processes |
|
Lim, Jaehyun | Yonsei University |
Park, Jehyun | Yonsei University |
Nah, Sungjae | Yonsei University |
Choi, Jongeun | Yonsei University |
Keywords: Probabilistic Inference, Probability and Statistical Methods, Model Learning for Control
Abstract: Learning uncertain dynamical environments for online learning and prediction from noisy sensory measurement streams is essential for various tasks in robotics. Recently, Gaussian process (GP) online learning, such as the infinite-horizon Gaussian process (IHGP), has shown effectiveness in coping with non-stationary dynamical random processes by learning hyperparameters online at reduced computational cost. However, the IHGP was originally proposed to deal with only a single output. Therefore, to tackle complex real-world problems, we propose a multi-output infinite-horizon Gaussian process (MOIHGP) that generalizes the single-output IHGP to multiple outputs. Our approach allows us to consider correlations between multiple outputs for better prediction, even under occlusions, in a Bayesian way. Finally, we demonstrate the effectiveness of our approach through benchmark and experimental results. In simulated benchmark experiments with high noise levels, our approach reduced the average RMSE achieved by the single-output IHGP by 16.6%.
|
|
TuAT24 Virtual-Asia, Time zone: GMT+1 |
Add to My Program |
Aerial Robotics: Planning and Control |
|
|
Chair: Xu, Chao | Zhejiang University |
|
02:00-02:15, Paper TuAT24.1 | Add to My Program |
Estimation and Adaption of Indoor Ego Airflow Disturbance with Application to Quadrotor Trajectory Planning |
|
Wang, Luqi | Hong Kong University of Science and Technology |
Zhou, Boyu | Hong Kong University of Science and Technology |
Liu, Chuhao | Hong Kong University of Science and Technology |
Shen, Shaojie | Hong Kong University of Science and Technology |
Keywords: Aerial Systems: Applications, Motion and Path Planning, Collision Avoidance
Abstract: It is widely accepted that during autonomous navigation of quadrotors, one of the most widely adopted unmanned aerial vehicles (UAVs), safety always has the highest priority. However, ego airflow disturbance can be a significant adverse factor during flights, causing potential safety issues, especially in narrow and confined indoor environments. Therefore, we propose a novel method to estimate and adapt to the indoor ego airflow disturbance of quadrotors and apply it to trajectory planning. First, hover experiments are conducted for different quadrotors to characterize the proximity effects. Then, with the collected acceleration variance, the disturbances are modeled for the quadrotors according to the proposed formulation. The disturbance model is also verified under hover conditions in different reconstructed complex environments. Furthermore, an approximation of Hamilton-Jacobi reachability analysis is performed according to the estimated disturbances to facilitate safe trajectory planning, which consists of kinodynamic path search as well as B-spline trajectory optimization. The whole planning framework is validated on multiple quadrotor platforms in different indoor environments.
|
|
02:15-02:30, Paper TuAT24.2 | Add to My Program |
Real-Time Active Detection of Targets and Path Planning Using UAVs |
|
Chen, Fangping | Peking University |
Lu, Yuheng | Peking University |
Li, Yunyi | Peking University |
Xie, Xiaodong | Peking University |
Keywords: Aerial Systems: Perception and Autonomy, Motion and Path Planning
Abstract: This article proposes a new method that enables Unmanned Aerial Vehicles (UAVs) to actively find targets and photograph them in an unknown environment, while avoiding surrounding obstacles and planning optimized routes. Owing to the limited computing capability on board UAVs, we obtain point cloud data of surrounding objects and select the best point cloud segmentation method to perform real-time semantic segmentation on the collected data. The point cloud data with semantic attributes are merged into voxels. We reconstruct in real time the distance and angle between the UAV and the surfaces of surrounding obstacles through Euclidean Signed Distance Fields (ESDFs), adjust the gimbal angle and focal length of the UAV, and use two-dimensional image recognition to photograph the target precisely. Given the increasing scale of UAV power inspections, the proposed method can improve the efficiency of fine inspections of power transmission lines.
|
|
02:30-02:45, Paper TuAT24.3 | Add to My Program |
EVA-Planner: Environmental Adaptive Quadrotor Planning |
|
Quan, Lun | Zhejiang University |
Zhang, Zhiwei | Zhejiang University |
Zhong, Xingguang | Zhejiang University |
Xu, Chao | Zhejiang University |
Gao, Fei | Zhejiang University |
Keywords: Aerial Systems: Applications, Collision Avoidance
Abstract: The quadrotor is widely used in challenging environments due to its superior agility and flexibility. In these scenarios, trajectory planning plays a vital role in generating safe motions that avoid obstacles while ensuring flight smoothness. Although many works on quadrotor planning have been proposed, a research gap remains in incorporating self-adaptation into a planning framework, enabling a drone to automatically fly slower in denser environments and increase its speed in safer areas. In this paper, we propose an environmental adaptive planner that effectively adjusts the flight aggressiveness based on the obstacle distribution and the quadrotor state. First, we design an environmental adaptive safety-aware method to assign priorities to the surrounding obstacles according to the environmental risk level and instantaneous motion tendency. Then, we apply it in a multi-layered model predictive contouring control (Multi-MPCC) framework to generate adaptive, safe, and dynamically feasible local trajectories. Extensive simulations and real-world experiments verify the efficiency and robustness of our planning framework. Benchmark comparisons also show the superior performance of our method over another advanced environmental adaptive planning algorithm. Moreover, we release our planning framework as open-source ROS packages.
|
|
02:45-03:00, Paper TuAT24.4 | Add to My Program |
EGO-Planner: An ESDF-Free Gradient-Based Local Planner for Quadrotors |
|
Zhou, Xin | Zhejiang University |
Wang, Zhepei | Zhejiang University |
Xu, Chao | Zhejiang University |
Gao, Fei | Zhejiang University |
Keywords: Motion and Path Planning, Autonomous Vehicle Navigation, Aerial Systems: Applications
Abstract: Gradient-based planners are widely used for quadrotor local planning, in which a Euclidean Signed Distance Field (ESDF) is crucial for evaluating gradient magnitude and direction. Nevertheless, computing such a field involves much redundancy, since the trajectory optimization procedure only covers a very limited subspace of the ESDF updating range. In this paper, an ESDF-free gradient-based planning framework is proposed, which significantly reduces computation time. The main improvement is that the collision term in the penalty function is formulated by comparing the colliding trajectory with a collision-free guiding path. The resulting obstacle information is stored only if the trajectory hits new obstacles, so the planner extracts only the necessary obstacle information. Then, we lengthen the time allocation if dynamical feasibility is violated. An anisotropic curve-fitting algorithm is introduced to adjust higher-order derivatives of the trajectory while maintaining its original shape. Benchmark comparisons and real-world experiments verify its robustness and high performance. The source code is released as ROS packages.
|
|
TuBT1 Virtual-Asia, Time zone: GMT+1 |
Add to My Program |
Navigation and Mapping |
|
|
Chair: Fu, Zhongtao | King's College London |
Co-Chair: Vidal-Calleja, Teresa A. | University of Technology Sydney |
|
03:00-03:15, Paper TuBT1.1 | Add to My Program |
Differential Information Aided 3-D Registration for Accurate Navigation and Scene Reconstruction |
|
Jin, Wu | UESTC |
Zhang, Shuyang | Shenzhen Unity Drive Innovation Technology Co., Ltd. |
Zhu, Yilong | HKUST |
Geng, Ruoyu | Hong Kong University of Science and Technology |
Fu, Zhongtao | King's College London |
Ma, Fulong | The Hong Kong University of Science and Technology |
Liu, Ming | Hong Kong University of Science and Technology |
Keywords: Sensor Fusion, Engineering for Robotic Systems
Abstract: A novel 3-dimensional (3-D) alignment method for point-cloud registration is proposed, in which the time-differential information of the measured points is employed. The new problem turns out to be a novel multi-dimensional optimization. An analytical solution to this optimization is then obtained, which lays the groundwork for further correspondence matching using k-d trees. Finally, through many examples, we show that the new method achieves better registration accuracy in real-world experiments.
|
|
03:15-03:30, Paper TuBT1.2 | Add to My Program |
Autonomous Navigation in Dynamic Environments with Multi-Modal Perception Uncertainties |
|
Guo, Hongliang | Singapore-MIT Alliance for Research and Technology |
Huang, Zefan | Singapore-MIT Alliance for Research and Technology |
Ho, Qi Heng | Singapore-MIT Alliance for Research and Technology |
Ang Jr, Marcelo H | National University of Singapore |
Rus, Daniela | MIT |
Keywords: Probabilistic Inference, Motion and Path Planning, Robot Safety
Abstract: This paper addresses the safe path planning problem for autonomous mobility with multi-modal perception uncertainties. Specifically, we assume that different sensor inputs lead to different Gaussian-process-regulated perception uncertainties (termed multi-modal perception uncertainties). We implement a Bayesian inference algorithm that merges the multi-modal GP-regulated uncertainties into a unified one and translates the unified uncertainty into a dynamic risk map. With the safe path planner taking the risk map as input, we are able to plan a safe path for the autonomous vehicle to follow. Experimental results on an autonomous golf cart testbed validate the applicability and efficiency of the proposed algorithm.
|
|
03:30-03:45, Paper TuBT1.3 | Add to My Program |
Learning World Transition Model for Socially Aware Robot Navigation |
|
Cui, Yuxiang | Zhejiang University |
Zhang, Haodong | Zhejiang University |
Wang, Yue | Zhejiang University |
Xiong, Rong | Zhejiang University |
Keywords: Social HRI, Human-Aware Motion Planning, Acceptability and Trust
Abstract: Moving in dynamic pedestrian environments is one of the important requirements for autonomous mobile robots. We present a model-based reinforcement learning approach for robots to navigate through crowded environments. The navigation policy is trained with both real interaction data from multi-agent simulation and virtual data from a deep transition model that predicts the evolution of the dynamics surrounding mobile robots. A reward function considering social conventions is designed to guide the training of the policy. Specifically, the policy model takes a laser scan sequence and the robot's own state as input and outputs a steering command. The laser sequence is further transformed into stacked local obstacle maps disentangled from the robot's ego motion to separate static and dynamic obstacles, simplifying model training. We observe that a policy using our method can be trained with significantly less real interaction data in the simulator yet achieve a similar success rate in social navigation tasks compared with other methods. Experiments are conducted in multiple social scenarios both in simulation and on real robots; the learned policy guides the robots to their final targets successfully in a socially compliant manner. Code is available at https://github.com/YuxiangCui/model-based-social-navigation.
|
|
03:45-04:00, Paper TuBT1.4 | Add to My Program |
Probabilistic Dynamic Crowd Prediction for Social Navigation |
|
Kiss, Stefan | University of Technology Sydney |
Katuwandeniya, Kavindie | University of Technology Sydney |
Alempijevic, Alen | University of Technology Sydney |
Vidal-Calleja, Teresa A. | University of Technology Sydney |
Keywords: Human-Aware Motion Planning, Probabilistic Inference, Motion and Path Planning
Abstract: In this paper, we present a novel approach that predicts crowd behaviour spatially and temporally for robotic social navigation. Integrating mobile robots into human society involves the fundamental problem of navigation in crowds. A robot should attempt to navigate in a way that is minimally invasive to the humans in its environment. However, planning in a dynamic environment is difficult, as the environment must be predicted into the future. This problem has been thoroughly studied at the level of individual pedestrians' behaviour. Instead, we represent a pedestrian crowd by its macroscopic properties over space, such as density and velocity. With this spatial representation, we propose to learn a convolutional recurrent model to predict these properties into the future. The key design of a probabilistic loss function capturing the crowd's macroscopic properties empowers the spatio-temporal crowd prediction. Using a social invasiveness metric defined on the properties predicted by our convolutional recurrent model, we develop a framework that produces globally optimal plans in expectation. Extensive results using a realistic pedestrian simulator show the validity and performance of the proposed social navigation approach.
|
|
TuBT2 Virtual-Asia, Time zone: GMT+1 |
Add to My Program |
Multiple and Distributed Systems I |
|
|
Chair: Tamura, Yasumasa | Tokyo Institute of Technology |
Co-Chair: Defago, Xavier | Tokyo Institute of Technology |
|
03:00-03:15, Paper TuBT2.1 | Add to My Program |
PRIMAL2: Pathfinding Via Reinforcement and Imitation Multi-Agent Learning - Lifelong |
|
Damani, Mehul | Nanyang Technological University |
Luo, Zhiyao | National University of Singapore |
Wenzel, Emerson | Tufts University |
Sartoretti, Guillaume Adrien | National University of Singapore (NUS) |
Keywords: Path Planning for Multiple Mobile Robots or Agents, AI-Based Methods, Distributed Robot Systems
Abstract: Multi-agent path finding (MAPF) is an indispensable component of large-scale robot deployments in numerous domains ranging from airport management to warehouse automation. In particular, this work addresses lifelong MAPF (LMAPF) for real-world warehouse operations, an online variant of the problem where agents are immediately assigned a new goal upon reaching their current one. Effectively solving LMAPF in such dense and highly structured environments requires expensive coordination between agents as well as frequent replanning abilities, a daunting task for existing coupled and decoupled approaches alike. With the purpose of achieving considerable agent coordination without any compromise on reactivity and scalability, we introduce PRIMAL2, a distributed RL framework for LMAPF where agents learn fully decentralized policies to reactively plan paths online in a partially observable world. We extend our previous work to dense and highly structured worlds by identifying behaviors and conventions which improve implicit agent coordination, and enable their learning through the construction of a novel local agent observation and various training aids. We present extensive results of PRIMAL2 in both MAPF and LMAPF environments and compare its performance to SotA planners in terms of makespan and throughput. We show that PRIMAL2 significantly surpasses our previous work and performs comparably to these baselines, while allowing real-time re-planning and scaling up to 2048 agents.
|
|
03:15-03:30, Paper TuBT2.2 | Add to My Program |
Consensus-Based Control Barrier Function for Swarm |
|
Machida, Manao | NEC |
Ichien, Masumi | NEC Corporation |
Keywords: Swarm Robotics, Multi-Robot Systems, Robot Safety
Abstract: In swarm control, many robots coordinate their actions in a distributed and decentralized way. We propose a consensus-based control barrier function (CCBF) for a swarm. CCBF restricts the states of the whole distributed system, not just those of the individual robots. The barrier function is approximated by a consensus filter. We prove that CCBF constrains the control inputs for holding the forward invariance of the safety set. Moreover, we applied CCBF to a practical problem and conducted an experiment with actual robots. The results showed that CCBF restricted the states of multiple robots to the safety set. To the best of our knowledge, this is the first CBF that can restrict the state of the whole distributed system with only local communication. CCBF has various applications such as monitoring with a swarm and maintaining the network between a swarm and a base station.
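For readers unfamiliar with control barrier functions, the generic single-agent idea underlying this abstract can be sketched in closed form. This is a minimal illustration only, under the assumption of a scalar single-integrator system; the paper's consensus-based, distributed formulation (CCBF) is not reproduced here, and the function name is hypothetical.

```python
def cbf_safe_input(x, u_des, alpha=1.0):
    """Min-norm safety filter for the scalar system x' = u with
    barrier h(x) = x (safe set: x >= 0).

    The CBF condition h'(x) >= -alpha * h(x) reduces to u >= -alpha * x,
    so the quadratic program that minimally modifies u_des has the
    closed-form solution of a one-sided clamp.
    """
    u_min = -alpha * x
    return max(u_des, u_min)
```

Near the boundary of the safe set, the filter overrides a desired input that would leave the set; far from the boundary, the desired input passes through unchanged.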
|
|
03:30-03:45, Paper TuBT2.3 | Add to My Program |
Bayesian Disturbance Injection: Robust Imitation Learning of Flexible Policies |
|
Oh, Hanbit | Nara Institute of Science and Technology |
Sasaki, Hikaru | Nara Institute of Science and Technology |
Michael, Brendan | Nara Institute of Science and Technology |
Matsubara, Takamitsu | Nara Institute of Science and Technology |
Keywords: Imitation Learning, Learning from Demonstration
Abstract: Scenarios requiring humans to choose from multiple seemingly optimal actions are commonplace; however, standard imitation learning often fails to capture this behavior. Instead, an over-reliance on replicating expert actions induces inflexible and unstable policies, leading to poor generalizability in application. To address this problem, this paper presents the first imitation learning framework that incorporates Bayesian variational inference for learning flexible non-parametric multi-action policies, while simultaneously robustifying the policies against sources of error by introducing and optimizing disturbances to create a richer demonstration dataset. This combined approach forces the policy to adapt to challenging situations, enabling stable multi-action policies to be learned efficiently. The effectiveness of our proposed method is evaluated through simulations and real-robot experiments on a table-sweep task using the UR3 6-DOF robotic arm. Results show that, through improved flexibility and robustness, both learning performance and control safety are better than those of comparison methods.
|
|
03:45-04:00, Paper TuBT2.4 | Add to My Program |
Active Modular Environment for Robot Navigation |
|
Kameyama, Shota | Tokyo Institute of Technology |
Okumura, Keisuke | Tokyo Institute of Technology |
Tamura, Yasumasa | Tokyo Institute of Technology |
Defago, Xavier | Tokyo Institute of Technology |
Keywords: Multi-Robot Systems, Path Planning for Multiple Mobile Robots or Agents, Autonomous Vehicle Navigation
Abstract: This paper presents a novel robot-environment interaction for navigation tasks in which robots have neither a representation of their working space nor a planning function; instead, an active environment takes charge of these aspects. This is realized by spatially deploying computing units, called cells, and having each cell manage traffic in its respective physical region. Unlike stigmergic approaches, cells interact with each other to manage environmental information and to construct instructions on how robots move. As a proof of concept, we present an architecture called AFADA and its prototype, consisting of modular cells and robots moving on the cells. The instructions from cells are based on a distributed routing algorithm and a reservation protocol. We demonstrate that AFADA enables a robot to move efficiently in a dynamic environment that stochastically changes its topology, compared to self-navigation by the robot itself. This is followed by several demos, including multi-robot navigation, highlighting the power of offloading both representation and planning from robots to the environment. We expect the concept of AFADA to contribute to infrastructure for multiple robots, as it enables online and lifelong planning and execution.
|
|
TuBT3 Virtual-Asia, Time zone: GMT+1 |
Add to My Program |
Multiple and Distributed Systems III |
|
|
Chair: Lee, Ki Myung Brian | University of Technology Sydney |
|
03:00-03:15, Paper TuBT3.1 | Add to My Program |
Deep Reinforcement Learning of Event-Triggered Communication and Control for Multi-Agent Cooperative Transport |
|
Shibata, Kazuki | Toyota Central R&D Labs., Inc. |
Jimbo, Tomohiko | Toyota Central R&D Labs., Inc. |
Matsubara, Takamitsu | Nara Institute of Science and Technology |
Keywords: Cooperating Robots, Multi-Robot Systems, Distributed Robot Systems
Abstract: In this paper, we explore a multi-agent reinforcement learning approach to address the design problem of communication and control strategies for multi-agent cooperative transport. Typical end-to-end deep neural network policies may be insufficient for covering communication and control; these methods cannot decide the timing of communication and can only work with fixed-rate communications. Therefore, our framework exploits event-triggered architecture, namely, a feedback controller that computes the communication input and a triggering mechanism that determines when the input has to be updated again. Such event-triggered control policies are efficiently optimized using a multi-agent deep deterministic policy gradient. We confirmed that our approach could balance the transport performance and communication savings through numerical simulations.
|
|
03:15-03:30, Paper TuBT3.2 | Add to My Program |
Multi-Robot Task Allocation Games in Dynamically Changing Environments |
|
Park, Shinkyu | KAUST |
Zhong, Yaofeng Desmond | Princeton University |
Leonard, Naomi | Princeton University |
Keywords: Multi-Robot Systems, Distributed Robot Systems, Planning, Scheduling and Coordination
Abstract: We propose a game-theoretic multi-robot task allocation framework that enables a large team of robots to optimally allocate tasks in dynamically changing environments. As our main contribution, we design a decision-making algorithm that defines how the robots select tasks to perform and how they repeatedly revise their task selections in response to changes in the environment. Our convergence analysis establishes that the algorithm enables the robots to learn and asymptotically achieve the optimal stationary task allocation. Through experiments with a multi-robot trash collection application, we assess the algorithm’s responsiveness to changing environments and resilience to failure of individual robots.
|
|
03:30-03:45, Paper TuBT3.3 | Add to My Program |
An Upper Confidence Bound for Simultaneous Exploration and Exploitation in Heterogeneous Multi-Robot Systems |
|
Lee, Ki Myung Brian | University of Technology Sydney |
Kong, Felix Honglim | The University of Technology Sydney |
Cannizzaro, Ricardo | DST Group |
Palmer, Jennifer L. | Defence Science and Technology Group |
Johnson, David | University of Sydney |
Yoo, Chanyeol | University of Technology Sydney |
Fitch, Robert | University of Technology Sydney |
Keywords: Multi-Robot Systems, Perception-Action Coupling, Aerial Systems: Applications
Abstract: Heterogeneous multi-robot systems are advantageous for operations in unknown environments because functionally specialised robots can gather environmental information, while other robots perform desired tasks. We define this decomposition as the scout-task robot architecture and show how it avoids the need to balance between exploration and exploitation by allowing the system to perform both simultaneously. The challenge is how to guide exploration in a way that improves overall system performance for time-limited tasks. We derive a novel upper confidence bound for simultaneous exploration and exploitation based on mutual information and present a general solution for scout-task coordination using decentralised Monte Carlo tree search. We evaluate the performance of our algorithms in a multi-drone surveillance scenario where scout robots are equipped with low resolution, long range sensors and task robots observe more detailed information using higher resolution, short range sensors. These results introduce a new class of coordination problems for heterogeneous teams with many practical applications beyond surveillance.
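For context, the classic UCB1 rule that upper-confidence-bound methods like the one in this abstract build on can be sketched as follows. This is a generic bandit illustration, not the paper's mutual-information-based bound; the function name and constants are hypothetical.

```python
import math

def ucb1_select(counts, means, c=math.sqrt(2)):
    """Return the index of the arm maximizing mean + c*sqrt(ln(t)/n).

    counts[i] is how often arm i was pulled, means[i] its empirical
    mean reward. Unpulled arms are tried first (their bonus would be
    infinite), so the bound trades off exploitation (high mean) against
    exploration (rarely pulled arms).
    """
    t = sum(counts)
    best_arm, best_value = 0, float("-inf")
    for i, (n, mu) in enumerate(zip(counts, means)):
        if n == 0:
            return i  # pull each arm at least once
        value = mu + c * math.sqrt(math.log(t) / n)
        if value > best_value:
            best_arm, best_value = i, value
    return best_arm
```

With a rarely pulled arm, the exploration bonus dominates even a lower empirical mean, which is the behaviour the scout robots exploit while task robots exploit the current best estimate.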
|
|
03:45-04:00, Paper TuBT3.4 | Add to My Program |
Priority Patrolling Using Multiple Agents |
|
Mallya, Deepak | Indian Institute of Technology Bombay |
Kandala, Sumanth | Indian Institute of Technology Bombay |
Vachhani, Leena | Indian Institute of Technology Bombay |
Sinha, Arpita | Indian Institute of Technology Bombay |
Keywords: Multi-Robot Systems, Surveillance Robotic Systems, Planning, Scheduling and Coordination
Abstract: The Patrolling Problem is a crucial feature of the surveillance task in defense and other establishments. Most of the works in the literature concentrate on reducing the Idleness value at each location in the environment. However, there are often a few prioritized locations that cannot be left unvisited beyond a certain Time Period. In this paper, we study the problem of Prioritized Patrolling: the task of patrolling the given environment using multiple agents while ensuring the prioritized locations are visited within the pre-specified Time Period. We present a novel algorithm, namely, the Time Period Based Patrolling (TPBP) algorithm, to solve the prioritized patrolling problem. It determines a sequence of walks for each agent online that complies with the Time Period requirement of the Priority nodes while reducing the Idleness of all the other nodes. We have tested and validated the algorithm using SUMO, a realistic simulator developed for traffic management. Since the existing strategies are not designed for Prioritized Patrolling, we show through comparison that the proposed algorithm is required to solve the problem.
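The interplay between Idleness and Time Period deadlines described in this abstract can be illustrated with a toy greedy rule. This is an illustrative sketch only, not the paper's TPBP algorithm; all names and the `slack` parameter are hypothetical.

```python
def next_node(last_visit, time_now, priority_period=None, slack=1):
    """Pick the next node for a single patrolling agent.

    last_visit maps node -> time of its last visit; priority_period
    maps a priority node -> the maximum time period allowed between
    visits. If a priority node is within `slack` steps of its deadline,
    visit the most overdue one; otherwise visit the node with the
    highest idleness (time since last visit).
    """
    priority_period = priority_period or {}
    idleness = {v: time_now - t for v, t in last_visit.items()}
    urgent = [v for v, period in priority_period.items()
              if idleness[v] >= period - slack]
    if urgent:
        return max(urgent, key=lambda v: idleness[v])
    return max(idleness, key=idleness.get)
```

Without priority constraints the rule reduces to plain idleness-greedy patrolling; a tight Time Period on one node forces the agent to revisit it even when other nodes are more idle.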
|
|
TuBT4 Virtual-Asia, Time zone: GMT+1 |
Add to My Program |
Motion Planning: Semantic Scene |
|
|
Chair: Huh, Jinwook | Samsung |
|
03:00-03:15, Paper TuBT4.1 | Add to My Program |
Anticipatory Navigation in Crowds by Probabilistic Prediction of Pedestrian Future Movements |
|
Zhi, Weiming | University of Sydney |
Lai, Tin | University of Sydney |
Ott, Lionel | ETH Zurich |
Ramos, Fabio | University of Sydney, NVIDIA |
Keywords: Big Data in Robotics and Automation, Autonomous Vehicle Navigation, Human-Aware Motion Planning
Abstract: Critical for the coexistence of humans and robots in dynamic environments is the capability for agents to understand each other's actions, and anticipate their movements. This paper presents Stochastic Process Anticipatory Navigation (SPAN), a framework that enables nonholonomic robots to navigate in environments with crowds, while anticipating and accounting for the motion patterns of pedestrians. To this end, we learn a predictive model to predict continuous-time stochastic processes to model future movement of pedestrians. Anticipated pedestrian positions are used to conduct chance constrained collision-checking, and are incorporated into a time-to-collision control problem. An occupancy map is also integrated to allow for probabilistic collision-checking with static obstacles. SPAN is novel in incorporating continuous-time stochastic processes, produced by a learned model, into a control problem for anticipatory navigation. We demonstrate the capability of SPAN in crowded simulation environments, as well as with a real-world pedestrian dataset.
|
|
03:15-03:30, Paper TuBT4.2 | Add to My Program |
Real-Time Human Lower Limbs Motion Estimation and Feedback for Potential Applications in Robotic Gait Aid and Training |
|
Wang, Lei | Zhejiang University |
Li, Qingguo | Queen's University |
Yi, Jingang | Rutgers University |
Zhang, Jinyuan | Weldon School of Biomedical Engineering, Purdue University |
Liu, Tao | Zhejiang University |
Keywords: Human-Centered Automation, Rehabilitation Robotics, Human Factors and Human-in-the-Loop
Abstract: Real-time lower limb motion or gait measurement is an important part of human-robot interaction for the control of robotic walkers and rehabilitation devices. Laser range finders or infrared sensors mounted on the device have been widely used in applications. Although these sensors can provide accurate horizontal motion information of the lower limbs during human walking, it is still difficult to measure the angular motion of the lower limbs due to their functional principles. Inertial measurement units (IMUs) can measure the angular motion of the lower limbs, but this requires a large number of IMUs to cover all lower limb segments. In this study, a novel method is developed for real-time monitoring of lower limb (shank and thigh) motion in human walking using just two shank-mounted IMUs. A pose prediction model based on multiple linear regression and a Kalman filter is proposed. The root-mean-square errors (RMSE) of the thigh orientation and knee joint angle estimates in the sagittal plane are 6.1 ± 1.3 and 6.8 ± 1.4 deg, respectively. The RMSE of the ankle, knee, and hip position estimates are 4.2 ± 1.3, 4.2 ± 1.1 and 3.5 ± 0.9 cm, respectively.
|
|
03:30-03:45, Paper TuBT4.3 | Add to My Program |
Virtual Surfaces and Attitude Aware Planning and Behaviours for Negative Obstacle Navigation |
|
Hines, Thomas | CSIRO |
Stepanas, Kazys | CSIRO Data61 |
Talbot, Fletcher | CSIRO |
Sa, Inkyu | CSIRO |
Lewis, Jake | Euclideon Holographics |
Hernandez, Emili | Emesent |
Kottege, Navinda | CSIRO |
Hudson, Nicolas | X, the Moonshot Factory |
Keywords: Field Robots, Search and Rescue Robots, Mapping
Abstract: This paper presents an autonomous navigation system for ground robots traversing aggressive unstructured terrain through a cohesive arrangement of mapping, deliberative planning, and reactive behaviour modules. All modules are aware of terrain slope, visibility, and vehicle orientation, enabling robots to recognize, plan, and react around unobserved areas and to overcome negative obstacles, slopes, steps, overhangs, and narrow passageways. This is one of the pioneering works to explicitly and simultaneously couple mapping, planning, and reactive components in dealing with negative obstacles. The system was deployed on three heterogeneous ground robots for the DARPA Subterranean Challenge, and we present results in Urban and Cave environments, along with simulated scenarios, that demonstrate this approach.
|
|
03:45-04:00, Paper TuBT4.4 | Add to My Program |
Cost-To-Go Function Generating Networks for High Dimensional Motion Planning |
|
Huh, Jinwook | Samsung |
Isler, Volkan | University of Minnesota |
Lee, Daniel | Cornell Tech |
Keywords: Motion and Path Planning, Deep Learning in Grasping and Manipulation, Deep Learning Methods
Abstract: This paper presents c2g-HOF networks which learn to generate cost-to-go functions for manipulator motion planning. The c2g-HOF architecture consists of a cost-to-go function over the configuration space represented as a neural network (c2g-network) as well as a Higher Order Function (HOF) network which outputs the weights of the c2g-network for a given input workspace. Both networks are trained end-to-end in a supervised fashion using costs computed from traditional motion planners. Once trained, c2g-HOF can generate a smooth and continuous cost-to-go function directly from workspace sensor inputs (represented as a point cloud in 3D or an image in 2D). At inference time, the weights of the c2g-network are computed very efficiently and near-optimal trajectories are generated by simply following the gradient of the cost-to-go function. We compare c2g-HOF with traditional planning algorithms for various robots and planning scenarios. The experimental results indicate that planning with c2g-HOF is significantly faster than other motion planning algorithms, resulting in orders of magnitude improvement when including collision checking. Furthermore, despite being trained from sparsely sampled trajectories in configuration space, c2g-HOF generalizes to generate smoother, and often lower cost, trajectories. We demonstrate cost-to-go based planning on a 7 DoF manipulator arm where motion planning in a complex workspace requires only 0.13 seconds for the entire trajectory.
|
|
TuBT5 Virtual-Asia, Time zone: GMT+1 |
Add to My Program |
Motion Planning: Optimization |
|
|
Chair: Kobayashi, Taisuke | Nara Institute of Science and Technology |
Co-Chair: Xiong, Rong | Zhejiang University |
|
03:00-03:15, Paper TuBT5.1 | Add to My Program |
Smooth-RRT*: Asymptotically Optimal Motion Planning for Mobile Robots under Kinodynamic Constraints |
|
Kang, Yiting | University of Science and Technology Beijing |
Yang, Zhi | University of Science & Technology Beijing |
Zeng, Riya | University of Science and Technology Beijing |
Wu, Qi | Beijing Electric Vehicle Co., Ltd |
Keywords: Field Robots, Nonholonomic Motion Planning, Dynamics
Abstract: Nowadays, various algorithms based on the Rapidly-exploring Random Tree (RRT) method are used to solve motion planning problems. Building on RRT*, we developed a novel reconnection method that enables the planner to directly generate a smooth curved trajectory. Meanwhile, kinodynamic constraints of the robot are considered when generating the control input, which improves the feasibility of the algorithm. The trajectory planned by Smooth-RRT* is particularly suitable for non-holonomic robots. Planning tests are conducted in four scenarios to demonstrate the performance of the proposed algorithm in comparison with the original RRT* and kinodynamic RRT (Kino-RRT). Smooth-RRT* yields shorter and smoother planned paths in all scenarios compared with Kino-RRT, and finds a solution with fewer expansion nodes than RRT* under the same time budget. The results demonstrate that the proposed algorithm can generate a smooth trajectory that satisfies the kinodynamic constraints while ensuring asymptotic optimality.
|
|
03:15-03:30, Paper TuBT5.2 | Add to My Program |
Continuous Optimization-Based Task and Motion Planning with Signal Temporal Logic Specifications for Sequential Manipulation |
|
Takano, Rin | NEC Corporation |
Oyama, Hiroyuki | NEC Corporation |
Yamakita, Masaki | Tokyo Inst. of Technology |
Keywords: Task and Motion Planning, Hybrid Logical/Dynamical Planning and Verification, Manipulation Planning
Abstract: We propose a new optimization-based task and motion planning (TAMP) method with signal temporal logic (STL) specifications for robotic sequential manipulation such as pick-and-place tasks. Given a high-level task specification, the TAMP problem is to plan a trajectory that satisfies the specification. This is, however, a challenging problem due to the difficulty of combining continuous motion planning with discrete task specifications. Optimization-based TAMP with temporal logic specifications is a promising approach, but existing works rely on mixed-integer programming (MIP) and do not scale well. To address this issue, we introduce a new hybrid system model without discrete variables and combine it with smooth approximation methods for STL. This allows the TAMP problem to be formulated as a nonlinear program whose computational cost is significantly lower than that of MIP. Furthermore, it is also possible to handle nonlinear dynamics and geometric constraints represented by nonlinear functions. The effectiveness of the proposed method is demonstrated with both numerical experiments and a real robot.
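The smooth approximation methods for STL mentioned in the abstract are commonly realized by replacing the non-differentiable max in STL robustness with a log-sum-exp soft maximum; a minimal sketch of that idea (the sharpness parameter k and the signal values are illustrative, and the exact approximation used in the paper may differ):

```python
import math

def soft_max_robustness(signal, k=20.0):
    """Smooth overapproximation of max(signal): (1/k) * log(sum(exp(k*s))).
    Approximates the robustness of the STL formula 'eventually (s > 0)'
    over the horizon, while remaining differentiable, unlike the exact max.
    Overestimates the true max by at most log(n)/k."""
    m = max(signal)  # shift for numerical stability
    return m + math.log(sum(math.exp(k * (s - m)) for s in signal)) / k

# Illustrative robustness signal over four time steps.
sig = [-0.5, -0.1, 0.3, 0.2]
exact = max(sig)               # true 'eventually' robustness
smooth = soft_max_robustness(sig)
```

Because the soft maximum is smooth in the signal values, the whole specification can enter a nonlinear program as a differentiable constraint, which is what removes the integer variables.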
|
|
03:30-03:45, Paper TuBT5.3 | Add to My Program |
Proximal Policy Optimization with Relative Pearson Divergence |
|
Kobayashi, Taisuke | Nara Institute of Science and Technology |
Keywords: Reinforcement Learning, Machine Learning for Robot Control, Deep Learning Methods
Abstract: The recent remarkable progress of deep reinforcement learning (DRL) rests on regularization of the policy for stable and efficient learning. A popular method, proximal policy optimization (PPO), has been introduced for this purpose. PPO clips the density ratio of the latest and baseline policies with a threshold, but its minimization target is unclear. Another problem with PPO is that the symmetric threshold is given numerically while the density ratio itself lies in an asymmetric domain, causing unbalanced regularization of the policy. This paper therefore proposes a new variant of PPO derived from a regularization problem of the relative Pearson (RPE) divergence, termed PPO-RPE. This regularization yields a clear minimization target, which constrains the latest policy to the baseline one. Through its analysis, an intuitive threshold-based design consistent with the asymmetry of the density-ratio domain can be derived. On four benchmark tasks, PPO-RPE performed as well as or better than conventional methods in terms of the task performance of the learned policy.
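For context, the PPO clipping that the abstract critiques can be written in a few lines. This is the standard clipped surrogate objective (per sample), not PPO-RPE itself, whose RPE-divergence regularizer is not reproduced here:

```python
def ppo_clip_objective(ratio, advantage, eps=0.2):
    """Standard PPO clipped surrogate for one sample:
    min(r * A, clip(r, 1 - eps, 1 + eps) * A),
    where r is the density ratio pi_new / pi_old and A the advantage."""
    clipped = max(1.0 - eps, min(1.0 + eps, ratio))
    return min(ratio * advantage, clipped * advantage)
```

Note the numerically symmetric thresholds 1 ± eps, even though the ratio lives in the asymmetric domain (0, ∞); this is the unbalanced regularization that PPO-RPE's divergence-derived design addresses.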
|
|
03:45-04:00, Paper TuBT5.4 | Add to My Program |
Optimal Object Placement for Minimum Discontinuity Non-Revisiting Coverage Task |
|
Yang, Tong | Zhejiang University |
Valls Miro, Jaime | University of Technology Sydney |
Wang, Yue | Zhejiang University |
Xiong, Rong | Zhejiang University |
Keywords: Manipulation Planning
Abstract: This work considers optimal non-repetitive coverage tasks with a single non-redundant manipulator for the case when the object can be positioned at a predefined set of locations within the workcell. The scenario is often encountered in typical industrial settings, for instance when the object presents itself along a conveyor belt and its surface cannot be serviced at a single location, the object being too large or complex for that endeavour. Given the non-bijective nature of manipulator kinematics between task and joint space, a continuous coverage path designed in task space without explicit consideration of joint-space continuity may easily be truncated into intermittent segments where the manipulator needs to adopt a different configuration to continue the task. The resulting motions force the end-effector to lift off the surface, an undesirable characteristic affecting the quality of the final product for smooth operations on objects such as polishing, painting or deburring. In this work, a novel algorithm is proposed that optimally partitions the task space whilst considering the finite locations where the object may be stationed, ensuring joint-space coverage continuity with minimal lift-offs. Results are presented from the algorithm being challenged to achieve coverage of a number of objects, both in simulation and in real tests with an industrial manipulator.
|
|
TuBT6 Virtual-Asia, Time zone: GMT+1 |
Add to My Program |
Motion Planning: Autonomous Driving |
|
|
Chair: Lam, Tin Lun | The Chinese University of Hong Kong, Shenzhen |
Co-Chair: Saha, Indranil | IIT Kanpur |
|
03:00-03:15, Paper TuBT6.1 | Add to My Program |
ICurb: Imitation Learning-Based Detection of Road Curbs Using Aerial Images for Autonomous Driving |
|
Xu, Zhenhua | The Hong Kong University of Science and Technology |
Sun, Yuxiang | Hong Kong University of Science and Technology |
Liu, Ming | Hong Kong University of Science and Technology |
Keywords: Field Robots, Imitation Learning, Deep Learning Methods
Abstract: Detection of road curbs is an essential capability for autonomous driving. It can be used by autonomous vehicles to determine drivable areas on roads. Usually, road curbs are detected on-line using vehicle-mounted sensors, such as video cameras and 3-D Lidars. However, on-line detection with video cameras may suffer from challenging illumination conditions, and Lidar-based approaches may have difficulty detecting far-away road curbs due to the sparsity of point clouds. In recent years, aerial images have become increasingly available worldwide, and we find that the visual appearances of road areas and off-road areas usually differ in aerial images, so we propose a novel solution to detect road curbs off-line using aerial images. The input to our method is an aerial image, and the output is directly a graph (i.e., vertices and edges) representing the road curbs. To this end, we formulate the problem as an imitation learning problem, and design a novel network and an innovative training strategy to train an agent to iteratively find the road-curb graph. The experimental results on a public dataset confirm the effectiveness and superiority of our method. This work is accompanied by a demonstration video and a supplementary document at https://sites.google.com/view/icurb.
|
|
03:15-03:30, Paper TuBT6.2 | Add to My Program |
Search-Based Online Trajectory Planning for Car-Like Robots in Highly Dynamic Environments |
|
Lin, Jiahui | The Chinese University of Hong Kong |
Zhou, Tong | The Chinese University of Hong Kong |
Zhu, Delong | The Chinese University of Hong Kong |
Liu, Jianbang | The Chinese University of Hong Kong |
Meng, Max Q.-H. | The Chinese University of Hong Kong |
Keywords: Motion and Path Planning, Collision Avoidance, Autonomous Vehicle Navigation
Abstract: This paper presents a search-based partial motion planner that generates dynamically feasible trajectories for car-like robots in highly dynamic environments. The planner searches for smooth, safe, and near-time-optimal trajectories by exploring a state graph built on motion primitives, which are generated by discretizing the time dimension and the control space. To enable fast online planning, we first propose an efficient path searching algorithm based on the aggregation and pruning of motion primitives. We then propose a fast collision checking algorithm that takes the motions of moving obstacles into account. The algorithm linearizes the relative motions between the robot and obstacles and then checks collisions by comparing a point-line distance. Benefiting from the fast searching and collision checking algorithms, the planner can effectively and safely explore the state-time space to generate near-time-optimal solutions. Results from extensive experiments show that the proposed method generates feasible trajectories within milliseconds while maintaining a higher success rate than state-of-the-art methods, demonstrating its advantages.
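The described collision check, linearize the obstacle's relative motion over a short interval into a segment and compare a point-to-segment distance against the combined safety radii, might look as follows; the geometry and radii are illustrative, not the paper's implementation:

```python
import math

def point_segment_distance(p, a, b):
    """Distance from point p to segment a-b in 2-D. The segment represents
    an obstacle's linearized relative motion over one time interval."""
    ax, ay = a
    bx, by = b
    px, py = p
    dx, dy = bx - ax, by - ay
    seg2 = dx * dx + dy * dy
    if seg2 == 0.0:
        return math.hypot(px - ax, py - ay)  # degenerate: stationary obstacle
    # Clamp the projection of p onto the segment to [0, 1].
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg2))
    cx, cy = ax + t * dx, ay + t * dy
    return math.hypot(px - cx, py - cy)

def in_collision(robot_pos, obst_start, obst_end, robot_r, obst_r):
    """Conservative check: collision if the linearized obstacle sweep comes
    closer than the sum of the two bounding radii."""
    return point_segment_distance(robot_pos, obst_start, obst_end) < robot_r + obst_r
```

Reducing each check to one clamped projection and one distance is what keeps the state-time search fast enough for millisecond-level planning.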
|
|
03:30-03:45, Paper TuBT6.3 | Add to My Program |
Task-Space Decomposed Motion Planning Framework for Multi-Robot Loco-Manipulation |
|
Zhang, Xiaoyu | The Shenzhen Institute of Artificial Intelligence and Robotics Fo |
Yan, Lei | The University of Edinburgh |
Lam, Tin Lun | The Chinese University of Hong Kong, Shenzhen |
Vijayakumar, Sethu | University of Edinburgh |
Keywords: Multi-Robot Systems, Motion and Path Planning, Manipulation Planning
Abstract: This paper introduces a novel task-space decomposed motion planning framework for multi-robot simultaneous locomotion and manipulation. When several manipulators hold an object, closed-chain kinematic constraints are formed, which makes the motion planning problem challenging by inducing lower-dimensional singularities. Unfortunately, the constrained manifold becomes even more complicated when the manipulators are equipped with mobile bases. We address the problem by introducing a dual-resolution motion planning framework that utilizes a convex task-region decomposition method, with each resolution tuned for efficient computation in its respective role. Concretely, this dual-resolution approach enables a global planner to explore the low-dimensional decomposed task-space regions toward the goal, after which a local planner computes a path in the high-dimensional constrained configuration space. We demonstrate the proposed method in several simulations, where a robot team transports an object toward the goal in obstacle-rich environments.
|
|
03:45-04:00, Paper TuBT6.4 | Add to My Program |
SMT-Based Optimal Deployment of Mobile Robot Rechargers |
|
Kundu, Tanmoy | Indian Institute of Technology - Kanpur |
Saha, Indranil | IIT Kanpur |
Keywords: Planning, Scheduling and Coordination, Path Planning for Multiple Mobile Robots or Agents, Motion and Path Planning
Abstract: Efficient recharging is an essential requirement for autonomous mobile robots. In an indoor robotic application, charging stations can be installed offline; however, frequent trips to the charging stations degrade the performance of the mobile robots. In an outdoor environment, a charging station cannot even be installed easily. We propose a framework and algorithms that enable a group of mobile wireless rechargers to fulfill the energy requirements of autonomous mobile robots in a workspace efficiently. Our algorithm finds optimal trajectories for the mobile rechargers such that, whenever a recharge is needed, the robots do not have to spend significant time and energy to reach a recharger. The algorithm is based on a reduction of the problem to Satisfiability Modulo Theories (SMT) solving. We present extensive experimental results showing that optimal trajectories for mobile rechargers can be generated for different types of robots and workspaces within a reasonable time. Moreover, a comparison with static charging stations establishes that mobile rechargers are more effective at allowing the autonomous robots to continue their work for a longer time.
|
|
TuBT7 Virtual-Asia, Time zone: GMT+1 |
Add to My Program |
Motion and Path Planning II |
|
|
Chair: Zhang, Wei | National University of Singapore |
Co-Chair: Gao, Fei | Zhejiang University |
|
03:00-03:15, Paper TuBT7.1 | Add to My Program |
A Global-Local Coupling Two-Stage Path Planning Method for Mobile Robots |
|
Jian, Zhiqiang | Xi'an Jiaotong University |
Zhang, Songyi | Xi'an Jiaotong University |
Chen, Shitao | Xi'an Jiaotong University |
Nan, Zhixiong | Xi'an Jiaotong University |
Zheng, Nanning | Xi'an Jiaotong University |
Keywords: Motion and Path Planning, Collision Avoidance, Wheeled Robots
Abstract: The path planning of mobile robots is an optimization problem that is difficult to solve directly owing to its nonlinear characteristics. This paper proposes the "global-local" Coupling Two-Stage Path Planning (CTSP) method. First, the globally optimal solution in the configuration space is given by the global planner. Then, in the local planning stage, the optimal solution of the local environment is constantly searched, guided by the prior information of the globally optimal solution. The strategy used in the global planning stage is the iterative optimization method based on an initial solution. The local planning stage adopts the sampling-evaluation strategy, that is, sampling the candidate paths and then using the evaluation function to perform path selection. The proposed method has two innovations: 1) a novel global iterative optimization method is proposed and 2) a new cost function for evaluating the sampled paths is constructed, which improves the coupling of the global and local paths. We implement and test this method in a simulation environment, where the experimental results verify the effectiveness of the proposed method.
|
|
03:15-03:30, Paper TuBT7.2 | Add to My Program |
Learn to Navigate Maplessly with Varied LiDAR Configurations: A Support Point-Based Approach |
|
Zhang, Wei | National University of Singapore |
Liu, Ning | National University of Singapore |
Zhang, Yunfeng | National University of Singapore |
Keywords: Machine Learning for Robot Control, AI-Enabled Robotics, Motion Control
Abstract: Deep reinforcement learning (DRL) demonstrates great potential in mapless navigation domain. However, such a navigation model is normally restricted to a fixed configuration of the range sensor because its input format is fixed. In this paper, we propose a DRL model that can address range data obtained from different range sensors with different installation positions. Our model first extracts the goal-directed features from each obstacle point. Subsequently, it chooses global obstacle features from all point-feature candidates and uses these features for the final decision. As only a few points are used to support the final decision, we refer to these points as support points and our approach as support point-based navigation (SPN). Our model can handle data from different LiDAR setups and demonstrates good performance in simulation and real-world experiments. Moreover, it shows great potential in crowded scenarios with small obstacles when using a high-resolution LiDAR.
|
|
03:30-03:45, Paper TuBT7.3 | Add to My Program |
Fast Replanning Multi-Heuristic A* |
|
Ha, Junhyoung | Korea Institute of Science and Technology |
Kim, Soonkyum | Korea Institute of Science and Technology |
Keywords: Motion and Path Planning, Optimization and Optimal Control
Abstract: In this paper, we propose a novel path replanning algorithm on arbitrary graphs. To avoid computationally heavy preprocessing and to reduce the memory required to store information about the previous search and expanded vertices, we define feature vertices, which are extracted from the previous path by a simple algorithm that compares the costs between adjacent vertices along the path once. Additional heuristic functions are designed for these feature vertices to act as local attractors that guide the search toward the neighbourhood of the previous path. To avoid unnecessary expansions and speed up the search, these additional heuristic functions are managed so as to stop attracting or guiding the search toward the feature vertices when appropriate. The proposed Fast Replanning Multi-Heuristic A* is a variation of Shared Multi-Heuristic A* that removes or deactivates the additional heuristic functions during the search. Fast Replanning Multi-Heuristic A* guarantees bounded suboptimality while efficiently exploring the graph toward the goal vertex. The performance of the proposed algorithm was compared with weighted A* by simulating numerous path replanning problems in maze-like maps.
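For reference, the weighted A* baseline that the algorithm is compared against (f = g + w·h, with solution cost bounded by w times optimal) can be sketched on a grid; the feature-vertex heuristics of the proposed replanner are not shown:

```python
import heapq

def weighted_astar(grid, start, goal, w=1.5):
    """Weighted A* on a 4-connected grid (0 = free, 1 = blocked).
    With an admissible Manhattan heuristic h, inflating it by w makes the
    search greedier while keeping the returned cost within a factor w
    of optimal. Returns the path cost, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])

    def h(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_q = [(w * h(start), 0, start)]
    g = {start: 0}
    while open_q:
        _, gc, cur = heapq.heappop(open_q)
        if cur == goal:
            return gc
        if gc > g.get(cur, float("inf")):
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nb = (cur[0] + dr, cur[1] + dc)
            if 0 <= nb[0] < rows and 0 <= nb[1] < cols and grid[nb[0]][nb[1]] == 0:
                ng = gc + 1
                if ng < g.get(nb, float("inf")):
                    g[nb] = ng
                    heapq.heappush(open_q, (ng + w * h(nb), ng, nb))
    return None

# A wall across the middle row forces a detour around the right side.
cost = weighted_astar([[0, 0, 0], [1, 1, 0], [0, 0, 0]], (0, 0), (2, 0))
```

The replanner keeps this bounded-suboptimality guarantee while adding (and later deactivating) the feature-vertex attractor heuristics.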
|
|
03:45-04:00, Paper TuBT7.4 | Add to My Program |
Generating Large-Scale Trajectories Efficiently Using Double Descriptions of Polynomials |
|
Wang, Zhepei | Zhejiang University |
Ye, Hongkai | Zhejiang University |
Xu, Chao | Zhejiang University |
Gao, Fei | Zhejiang University |
Keywords: Motion and Path Planning, Aerial Systems: Applications, Autonomous Vehicle Navigation
Abstract: For quadrotor trajectory planning, describing a polynomial trajectory through its coefficients or through its end-derivatives each offers its own convenience in energy minimization. We call these the double descriptions of polynomial trajectories. The transformation between them, which causes most of the inefficiency and instability, is formally analyzed in this paper. Leveraging its analytic structure, we design a linear-complexity scheme for both jerk/snap minimization and parameter gradient evaluation, which offers efficiency, stability, flexibility, and scalability. With our scheme, generating an energy-optimal (minimum-snap) trajectory costs only 1 μs per piece at scales of up to 1,000,000 pieces. Moreover, generating large-scale energy-time optimal trajectories is also accelerated by an order of magnitude against conventional methods.
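The double description for a single piece, end-derivatives (boundary positions and velocities) on one side and monomial coefficients on the other, admits a closed-form transformation. A minimal sketch of the end-derivative-to-coefficient map for one cubic piece; the paper's contribution is a linear-complexity scheme chaining such maps over many higher-order pieces, which is not shown:

```python
def hermite_to_coeffs(p0, v0, p1, v1, T):
    """Map the end-derivative description of a cubic piece (positions p0, p1
    and velocities v0, v1 at the piece boundaries) to monomial coefficients
    [c0, c1, c2, c3] with p(t) = c0 + c1*t + c2*t^2 + c3*t^3 on [0, T]."""
    c0, c1 = p0, v0
    c2 = (3.0 * (p1 - p0) - (2.0 * v0 + v1) * T) / T**2
    c3 = (2.0 * (p0 - p1) + (v0 + v1) * T) / T**3
    return [c0, c1, c2, c3]

def poly_eval(coeffs, t):
    """Horner evaluation of the coefficient description."""
    r = 0.0
    for c in reversed(coeffs):
        r = r * t + c
    return r

# Rest-to-rest piece from 0 to 1 over T = 2 in the end-derivative description.
c = hermite_to_coeffs(0.0, 0.0, 1.0, 0.0, 2.0)
```

Each description is convenient for a different task: end-derivatives make continuity constraints between pieces trivial, while coefficients make evaluation and energy integrals trivial, hence the cost of converting between them dominates.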
|
|
TuBT8 Virtual-Asia, Time zone: GMT+1 |
Add to My Program |
Mechanism Design V |
|
|
Chair: Okada, Kei | The University of Tokyo |
Co-Chair: Krebs, Hermano Igo | MIT |
|
03:00-03:15, Paper TuBT8.1 | Add to My Program |
Restoring Force Design of Active Self-Healing Tension Transmission System and Application to Tendon-Driven Legged Robot |
|
Nakashima, Shinsuke | The University of Tokyo |
Kawaharazuka, Kento | The University of Tokyo |
Nishiura, Manabu | University of Tokyo |
Asano, Yuki | The University of Tokyo |
Kakiuchi, Yohei | The University of Tokyo |
Okada, Kei | The University of Tokyo |
Kawasaki, Koji | The University of Tokyo |
Inaba, Masayuki | The University of Tokyo |
Keywords: Biologically-Inspired Robots, Tendon/Wire Mechanism, Failure Detection and Recovery
Abstract: A self-healing function is a promising approach to damage management in high-load robot applications such as legged robots. Although such functions are gaining prominence in soft robotics, their application to life-sized “stiff” robots has received relatively little attention. Although the authors have devised several self-healing tensile modules for tendon-driven robots, the design guidelines for achieving both large load endurance and large stroke are still unclear. This paper focuses on the parametric design for unleaked liquid-assisted healing of a low-melting-point alloy structure. The method was validated with a benchtop module test. Moreover, the module enabled a tendon-driven monopod testbed to perform squat motion three times after a landing-impact fracture and the self-healing sequence, which had never been accomplished before.
|
|
03:15-03:30, Paper TuBT8.2 | Add to My Program |
A Translational Parallel Continuum Robot Reinforced by Origami and Cross-Routing Tendons |
|
Troeung, Charles | Monash University |
Chen, Chao | Monash University |
Keywords: Tendon/Wire Mechanism, Parallel Robots, Mechanism Design
Abstract: We introduce an origami-reinforced parallel continuum robot which is capable of maintaining the orientation of the end effector regardless of the bending shape. Cross-routing tendons provide effective actuation of the robot because the constant length of the backbones prevents actuation by a parallel arrangement of tendons. We utilise the arclength relationship of parallel curves to show that parallel backbones of equal length and constant spacing between the backbones enable the constant orientation of the end effector. The origami shell is introduced to increase the torsional stiffness of the continuum robot and minimise any twisting, which leads to planar and parallel bending of all backbones. Planar quintic Pythagorean Hodograph curves are utilised for shape reconstruction, which is more accurate than curves with piecewise constant curvature. The concept of this continuum robot and the accuracy of the reconstruction are validated experimentally.
|
|
03:30-03:45, Paper TuBT8.3 | Add to My Program |
Design of a 3-DOF Coupled Tendon-Driven Waist Joint |
|
Wang, Yiwei | The University of Electro-Communications |
Li, Wenyang | University of Electro-Communications |
Togo, Shunta | Graduate School of Informatics and Engineering, the University O |
Yokoi, Hiroshi | The University of Electro-Communications |
Jiang, Yinlai | The University of Electro-Communications |
Keywords: Humanoid Robot Systems, Tendon/Wire Mechanism, Mechanism Design
Abstract: This paper proposes a coupled tendon-driven waist joint for humanoid robots. The waist joint was designed as a 3 degrees of freedom (DOF) structure to simulate the motion of a human waist. The power transmission was designed by adopting a 3-motor 3-DOF (3M3D) coupled tendon-driven mechanism, so that the torque on the joints was multiplied. We derived the torque transmission formula and the rotation angle formula of the 3M3D tendon-driven structures and designed the waist joint by adopting an appropriate structure according to their features. To evaluate the accuracy and load capacity of the waist joint, we performed a rotational accuracy experiment and a maximum torque experiment. The experimental results showed that the maximum error of joint rotation was below 1°, and the maximum torques of the pitch, roll, and yaw rotations were 87 Nm, 53 Nm, and 22.2 Nm, respectively.
|
|
03:45-04:00, Paper TuBT8.4 | Add to My Program |
Design and Modeling of a Variable-Stiffness Spring Mechanism for Impedance Modulation in Physical Human–Robot Interaction |
|
Chaichaowarat, Ronnapee | Chulalongkorn University |
Nishimura, Satoshi | Massachusetts Institute of Technology |
Krebs, Hermano Igo | MIT |
Keywords: Compliant Joints and Mechanisms, Mechanism Design, Rehabilitation Robotics
Abstract: Our goal is to investigate different approaches to modulating stiffness and to apply them to human-robot interaction. Here we report on our effort employing the concept of an adjustable unsupported-length cantilever leaf spring, which has previously been applied to various designs of variable-stiffness actuators. By transmitting the interaction force through the elastic component directly to the supporting structure instead of the actuation unit, this type of actuator requires little power to adjust and maintain a desired stiffness. In the design of a 1-translational degree-of-freedom body weight support system of a rehabilitation robot, we used a leaf-spring mechanism for stiffness modulation relying only on spring deflection, combined with a non-backdrivable actuator for adjusting the vertical equilibrium position. This paper describes our approach to determining the spring parameters that attain a desired stiffness range with a short traveling distance of the adjuster. To model the spring stiffness under deflection, the ideal cantilever support model cannot be assumed for a conventional dual roller-pair slider design, especially with a soft spring. A beam deflection model considering the non-zero slopes at the contact points between the rollers and the spring is presented, along with validation experiments using different spring thicknesses on our prototype.
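The ideal model that the abstract says must be corrected can be stated directly: a cantilever with rectangular cross-section has tip stiffness k = 3EI/L³ with I = bt³/12, so shortening the unsupported length L stiffens the spring cubically. A sketch with illustrative spring-steel values (E, b, t, and the two lengths are assumptions, not the paper's prototype parameters):

```python
def cantilever_stiffness(E, b, t, L):
    """Ideal cantilever tip stiffness k = 3*E*I / L**3 for a leaf spring of
    rectangular cross-section (width b, thickness t), I = b*t**3 / 12.
    L is the adjustable unsupported length: the stiffness-modulation
    variable. The paper's roller-contact model corrects this ideal case."""
    I = b * t**3 / 12.0
    return 3.0 * E * I / L**3

# Illustrative spring-steel leaf: E = 200 GPa, 20 mm wide, 1 mm thick.
E, b, t = 200e9, 0.020, 0.001
k_long = cantilever_stiffness(E, b, t, 0.10)   # 100 mm unsupported length
k_short = cantilever_stiffness(E, b, t, 0.05)  # 50 mm unsupported length
```

Halving the unsupported length multiplies the stiffness by eight, which is why a short adjuster travel can cover a wide stiffness range, and why the actuator only repositions a support rather than working against the load.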
|
|
TuBT9 Virtual-Asia, Time zone: GMT+1 |
Add to My Program |
Mechanism Design I |
|
|
Chair: Togo, Shunta | Graduate School of Informatics and Engineering, the University of Electro-Communications |
Co-Chair: Jiang, Yinlai | The University of Electro-Communications |
|
03:00-03:15, Paper TuBT9.1 | Add to My Program |
Development of a Humanoid Shoulder Based on 3-Motor 3 Degrees-Of-Freedom Coupled Tendon-Driven Joint Module |
|
Li, Wenyang | University of Electro-Communications |
Wang, Yiwei | The University of Electro-Communications |
Togo, Shunta | Graduate School of Informatics and Engineering, the University O |
Yokoi, Hiroshi | The University of Electro-Communications |
Jiang, Yinlai | The University of Electro-Communications |
Keywords: Developmental Robotics, Tendon/Wire Mechanism, Flexible Robotics
Abstract: In this study, a new humanoid shoulder is developed using a coupled tendon-driven mechanism that consists of three motors to drive three degrees of freedom (DoFs). In the coupled tendon-driven mechanism, multiple motors simultaneously drive multiple joints with each joint driven by at least two motors coupled with tendons. The torque of a motor is redistributed to multiple joints in a certain proportion to improve the load capacity of the joints. The 3M3D (3-motor 3-DoF) coupled tendon-driven joint module is systematically analyzed and divided into four categories according to the torque transmission characteristics between the motors and the joints: fully routed motor-joint form, 1-unrouted motor-joint form, 2-unrouted motor-joint form, and 3-unrouted motor-joint form. The four categories were analyzed and compared with respect to the characteristics of the motor torque redistribution with different tendon routing forms. The 2-unrouted motor-joint form was adopted for the development of the 3-DoF shoulder joint. The problem of the coexistence of coupling and non-coupling in one joint is solved by introducing a 2-DoF rolling joint. The relationship between the angle and torque between the motors and joints was verified experimentally. The experimental results revealed that the 3M3D coupled tendon-driven shoulder realized torque enhancement effectively by tendon coupling, and the angle accuracy was sufficient as a humanoid shoulder.
|
|
03:15-03:30, Paper TuBT9.2 | Add to My Program |
Mecanum Crank: A Novel Omni-Directional Vehicle Using Crank Leg |
|
Noda, Satsuya | National Institute of Technology, Fukushima College |
Kunii, Haruki | NITFC |
Yaginuma, Mutsuki | NITFC |
Yamanobe, Kazushi | NITFC |
Keywords: Mechanism Design, Wheeled Robots, Field Robots
Abstract: A vehicle is expected to exhibit omni-directional locomotion capability to provide improved rough terrain vehicle functionality. Generally, rough terrain vehicles are not holonomic and cannot travel in the lateral direction, whereas typical omni-directional vehicles have difficulty in traveling on rough terrain. This paper proposes installing a crank leg for a Mecanum-wheeled vehicle ("Mecanum crank") to enhance its rough-terrain locomotion. Compared with other holonomic vehicles designed for rough terrain locomotion, the proposed design exhibited superior capability in the longitudinal wheel direction. To avoid the conflict between the crank leg and Mecanum wheel, this paper proposes the use of a differential gear system. The simple structure of the crank leg enables the implementation of a swing equalizer and roller grousers, which further enhance its rough-terrain locomotion. In longitudinal locomotion experiments, the proposed mechanism could climb a 95 mm-high step, which is 95% of the Mecanum wheel diameter. Furthermore, Mecanum crank could climb miniature scale stairs and execute lateral locomotion on them.
|
|
03:30-03:45, Paper TuBT9.3 | Add to My Program |
Internally-Balanced Displacement-Force Converter for Stepless Control of Spring Deformation Compensated by Cam with Variable Pressure Angle |
|
Shimizu, Tori | Tohoku University |
Tadakuma, Kenjiro | Tohoku University |
Watanabe, Masahiro | Tohoku University |
Takane, Eri | Tohoku University |
Konyo, Masashi | Tohoku University |
Tadokoro, Satoshi | Tohoku University |
Keywords: Actuation and Joint Mechanisms, Force Control, Mechanism Design
Abstract: The force required to drive a mechanism can be compensated by adding an equivalent load in the opposite direction. By reversing the input and output of the load compensation, we proposed the concept of a displacement-force converter that enables the deformation of the elastic element to be controlled steplessly by a minimal external force. Its principle was proved in our previous study, but challenges arose owing to the use of a wire and pulley. Here, we introduce a new compensation method using a noncircular cam that generates a compensation torque due to the contact force from the follower, which is split in the tangential direction of the cam by the pressure angle varying at rotation. Using a prototype for proof of concept, the maximum control force required for the extension of the spring was successfully reduced by 23.2%. Furthermore, uniform forces were obtained between extension and compression so that the difference between them decreased from 543% to 49% relative to compression. Thus, actuators and current supplies requiring less power could be selected. Moreover, the prototype model was incorporated into a variable stiffness mechanism of a soft robotic gripper as a wire tensioner to show the expandability of the displacement-force converter.
|
|
03:45-04:00, Paper TuBT9.4 | Add to My Program |
2-DOF Spherical Parallel Mechanism Capable of Biaxial Swing Motion with Active Arc Sliders |
|
Saiki, Naoto | Tohoku University |
Tadakuma, Kenjiro | Tohoku University |
Watanabe, Masahiro | Tohoku University |
Takane, Eri | Tohoku University |
Nobutoki, Masashi | Aisin Seiki Co., Ltd |
Suzuki, Shintaro | Aisin Seiki Co., Ltd |
Konyo, Masashi | Tohoku University |
Tadokoro, Satoshi | Tohoku University |
Keywords: Mechanism Design, Actuation and Joint Mechanisms, Parallel Robots
Abstract: Most articulated robots comprise multiple joints and links that control the position and posture of the end effector. The kinematic pair arrangement determines characteristics such as output force. The link configurations can be classified into serial link and parallel link mechanisms. A typical parallel link mechanism is the spherical parallel mechanism (SPM), designed to ensure the end effector has only rotational degrees of freedom. However, the kinematic pair arrangement has not been sufficiently examined in two degrees of freedom (2-DOF) SPMs. Herein, we present a basic design method for the proposed 2-DOF SPM curved biaxial swing mechanism, with inputs comprising arc sliders. The swinging area of the passive link was small, and infinite rotation around a certain axis was achieved without collision or transfer to a singular posture. Using the kinematics of this mechanism, we clarified the linear roll output and non-linear pitch output. Moreover, we fabricated a prototype and measured its basic drive characteristics. The results revealed that the output performance was greatly dependent on the rotation angle, high movable range in the roll axis, and low movable range in the pitch axis.
|
|
TuBT10 Virtual-Asia, Time zone: GMT+1 |
Add to My Program |
Manipulation Control I |
|
|
Chair: Zhao, Ye | Georgia Institute of Technology |
Co-Chair: Maeda, Yusuke | Yokohama National University |
|
03:00-03:15, Paper TuBT10.1 | Add to My Program |
Position and Orientation Control of Polygonal Objects by Sensorless In-Hand Caging Manipulation |
|
Komiyama, Shun | Yokohama National University |
Maeda, Yusuke | Yokohama National University |
Keywords: In-Hand Manipulation, Manipulation Planning, Dexterous Manipulation
Abstract: In this work, we propose an approach to manipulate objects by position-controlled robot hands: in-hand caging manipulation. In this method, an object is manipulated based on caging without force sensing or force control. An object is caged by a robot hand throughout manipulation, and we can locate the object around a goal by deformation of the cage without sensing the object configuration. In this paper, we considered 2-D polygonal objects as the targets of in-hand caging manipulation. We also considered some position-controlled hands as the devices to manipulate the objects. A motion planning algorithm for the hands was proposed and applied to planar in-hand caging manipulation. By moving the hands according to the result of the motion planning, it was possible to manipulate the objects without sensing. Our proposed method can be applied to a device such as a part feeder that aligns variously shaped parts in the same orientation.
|
|
03:15-03:30, Paper TuBT10.2 | Add to My Program |
Non-Fixed Contact Manipulation Control Framework for Deformable Objects with Active Contact Adjustment |
|
Huang, Jing | The Chinese University of Hong Kong |
Cai, Yuanpei | CUHK |
Chu, Xiangyu | The Chinese University of Hong Kong |
Taylor, Russell H. | The Johns Hopkins University |
Au, K. W. Samuel | The Chinese University of Hong Kong |
Keywords: Contact Modeling, Dexterous Manipulation, Visual Servoing
Abstract: The assumption of fixed contact between robots and deformable objects (DOs) is widely used by previous DO manipulation (DOM) studies. However, the fixed contact setting is inapplicable to many real-life applications due to various factors, such as the end-effector's type and the DO's intrinsic material properties. In such cases, the non-fixed contact (NFC) configuration, which has not been well-investigated, usually demonstrates better applicability in terms of contact flexibility and payload capacity. In this paper, we investigate the problem of DOM with NFC, particularly the contact condition characterization and active contact adjustment strategy. To this end, we propose a versatile index and an optimization-based method for the contact description and adjustment. Then a systematic control framework combining both deformation control and active contact adjustment control is proposed for task-level DOM control. The feasibility and effectiveness of the proposed method were verified via the presented experiments.
|
|
03:30-03:45, Paper TuBT10.3 | Add to My Program |
3D Biped Locomotion Control Including Seamless Transition between Walking and Running Via 3D ZMP Manipulation |
|
Sugihara, Tomomichi | Preferred Networks, Inc |
Imanishi, Kenta | Osaka University |
Yamamoto, Takanobu | Graduate School of Engineering, Osaka University |
Caron, Stephane | ANYbotics AG |
Keywords: Humanoid and Bipedal Locomotion
Abstract: A novel control scheme for biped robots to manipulate the ZMP three-dimensionally, apart from the actual ground profile, is presented. It is shown that the linear inverted-pendulum-like dynamics with this scheme can represent a wider class of movements, including variation of the body height. Moreover, it can also represent motion in the aerial phase. Based on this, the foot-guided controller proposed by the authors is enhanced to enable robots to locomote on highly uneven terrain and to seamlessly transition between walking and running without pre-planning the overall motion reference. The controller guarantees capturability at landing and defines the motion by a time-variant state feedback, which is analytically derived from a model predictive optimization. The scheme is verified through computer simulations.
|
|
03:45-04:00, Paper TuBT10.4 | Add to My Program |
Modeling and Balance Control of SuperArm for Overhead Tasks |
|
Luo, Jianwen | The Chinese University of Hong Kong, Shenzhen |
Su, Yao | UCLA MAE Department |
Gong, Zelin | Southern University of Science and Technology |
Ruan, Lecheng | University of California Los Angeles |
Zhao, Ye | Georgia Institute of Technology |
Asada, Harry | MIT |
Fu, Chenglong | Southern University of Science and Technology |
Keywords: Physical Human-Robot Interaction, Wearable Robotics, Force Control
Abstract: Overhead manipulation often requires the collaboration of two operators, which is challenging in confined spaces such as a compartment or on a ladder. The Supernumerary Robotic Arm (SuperArm), a promising wearable robotics solution for overhead tasks, can provide assistance in terms of a broader workspace, diverse manipulation functionalities, and labor-saving operations. However, the human-centered SuperArm interaction mechanism, taking human safety into account, has rarely been studied to date, in particular in the context of human standing balance. Motivated by this gap, our study proposes a novel method for human-centered overhead tasks so that an individual operator can accomplish them with the assistance of the SuperArm via tunable interaction-force and support-force regulation. The SuperArm-human interaction is modeled, and a dynamics control method based on QR decomposition is adopted to decouple the joint torques of the SuperArm from the interaction forces. As such, the supporting force can be regulated independently to keep the operator-SuperArm interaction forces in a safe region. A force plate is used to measure the CoP position as an evaluation of standing balance. The critical horizontal push force is learned through experiments and used to guide the SuperArm balancing control. This method is implemented on a SuperArm prototype worn on the operator's back, providing necessary supporting forces for the overhead object while allowi
|
|
TuBT11 Virtual-Asia, Time zone: GMT+1 |
Add to My Program |
Localization IV |
|
|
Co-Chair: Hsu, Li-ta | Hong Kong Polytechnic University |
|
03:00-03:15, Paper TuBT11.1 | Add to My Program |
GCC-PHAT with Speech-Oriented Attention for Robotic Sound Source Localization |
|
Wang, Jiadong | National University of Singapore |
Qian, Xinyuan | National University of Singapore |
Pan, Zihan | National University of Singapore |
Zhang, Malu | National University of Singapore |
Li, Haizhou | Institute for Infocomm Research |
Keywords: Robot Audition, Localization
Abstract: Robotic audition is a basic sense that helps robots perceive their surroundings and interact with humans. Sound Source Localization (SSL) is an essential module for a robotic system. However, the performance of most sound source localization techniques degrades in noisy and reverberant environments due to inaccurate Time Difference of Arrival (TDoA) estimation. In robotic sound source localization, we are more interested in detecting the arrival of human speech than of other sound sources. Ideally, we expect an effective TDoA estimation to respond only to speech signals while masking off other interference. In this paper, we propose a novel technique that learns to attend to the speech fundamental frequency and harmonics while suppressing noise interference and reverberation. The novel TDoA feature is referred to as Generalized Cross Correlation with Phase Transform and Speech Mask (GCC-PHAT-SM). We perform sound source localization experiments on real-world data captured from a robotic platform. Experiments show that the GCC-PHAT-SM feature significantly outperforms the traditional Generalized Cross-Correlation (GCC) feature in noisy and reverberant acoustic environments.
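The speech-mask weighting is the paper's contribution, but the underlying GCC-PHAT TDoA estimator is a standard technique. A minimal numpy sketch, with illustrative function name and parameters (not from the paper), might look like:

```python
import numpy as np

def gcc_phat(sig, ref, fs=16000, max_tau=None):
    """Estimate the TDoA (in seconds) of `sig` relative to `ref` via GCC-PHAT."""
    n = sig.size + ref.size
    # Cross-power spectrum with PHAT weighting: keep the phase, discard magnitude.
    R = np.fft.rfft(sig, n=n) * np.conj(np.fft.rfft(ref, n=n))
    R /= np.abs(R) + 1e-15
    cc = np.fft.irfft(R, n=n)
    max_shift = n // 2
    if max_tau is not None:
        max_shift = min(int(fs * max_tau), max_shift)
    # Re-center the circular correlation around lag 0 and pick the peak.
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = int(np.argmax(np.abs(cc))) - max_shift
    return shift / fs
```

A learned speech mask, as in the paper, would enter as an additional frequency-domain weight on `R` before the inverse FFT.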
|
|
03:15-03:30, Paper TuBT11.2 | Add to My Program |
Towards Robust GNSS Positioning and Real-Time Kinematic Using Factor Graph Optimization |
|
Wen, Weisong | Hong Kong Polytechnic University |
Hsu, Li-ta | Hong Kong Polytechnic University |
Keywords: Localization, Autonomous Vehicle Navigation
Abstract: Global navigation satellite systems (GNSS) are one of the most popular sources of globally referenced positioning for autonomous systems. However, GNSS positioning performance is significantly challenged in urban canyons due to signal reflection and blockage from buildings. Given that GNSS measurements are highly environmentally dependent and time-correlated, the conventional filtering-based method for GNSS positioning cannot fully exploit the time-correlation among historical measurements. As a result, the filtering-based estimator is sensitive to unexpected outlier measurements. In this paper, we present a factor graph-based formulation for GNSS positioning and real-time kinematic (RTK). The formulated factor graph framework effectively explores the time-correlation of pseudorange, carrier-phase, and Doppler measurements, and leads to non-minimal state estimation of the GNSS receiver. The feasibility of the proposed method is evaluated using datasets collected in challenging urban canyons of Hong Kong, and significantly improved positioning accuracy is obtained compared with the filtering-based estimator.
|
|
03:30-03:45, Paper TuBT11.3 | Add to My Program |
Camera Relocalization Using Deep Point Cloud Generation and Hand-Crafted Feature Refinement |
|
Wang, Junyi | BeiHang University |
Qi, Yue | BeiHang University |
Keywords: Localization, Deep Learning Methods, Visual Learning
Abstract: Visual localization plays an indispensable role in robotics. Learning-based and hand-crafted-feature-based methods for relocalization each have their own strengths and weaknesses, yet current algorithms seldom consider the two kinds of features under one framework. In this paper, we propose a novel relocalization framework for RGB or RGB-D data sources, composed of a coarse localization stage using learned features and a pose refinement stage using hand-crafted features. In particular, the coarse stage comprises deep point cloud generation and registration. In this stage, instead of regressing the camera pose directly, we design a novel neural network called PGNet to construct a sparse point cloud from RGB or RGB-D inputs. Furthermore, a hand-crafted feature space is established from the training set. Based on the camera pose obtained in the coarse stage, accurate point-to-point correspondences are set up by searching this space. An accurate camera pose is then obtained by applying RANSAC to the correspondences or by solving PnP. Finally, experiments on both outdoor and indoor benchmark datasets demonstrate state-of-the-art performance over other existing methods.
|
|
03:45-04:00, Paper TuBT11.4 | Add to My Program |
Semantic Histogram Based Graph Matching for Real-Time Multi-Robot Global Localization in Large Scale Environment |
|
Guo, Xiyue | The Shenzhen Institute of Artificial Intelligence and Robotics F |
Hu, Junjie | The Chinese University of Hong Kong, Shenzhen |
Chen, Junfeng | Shenzhen Institute of Artificial Intelligence and Robotics for S |
Deng, Fuqin | Shenzhen Institute of Artificial Intelligence and Robotics for S |
Lam, Tin Lun | The Chinese University of Hong Kong, Shenzhen |
Keywords: Localization, Multi-Robot SLAM, SLAM
Abstract: The core problem of visual multi-robot simultaneous localization and mapping (MR-SLAM) is how to efficiently and accurately perform multi-robot global localization (MR-GL). The difficulties are two-fold. The first is the difficulty of global localization for significant viewpoint difference. Appearance-based localization methods tend to fail under large viewpoint changes. Recently, semantic graphs have been utilized to overcome the viewpoint variation problem. However, the methods are highly time-consuming, especially in large-scale environments. This leads to the second difficulty, which is how to perform real-time global localization. In this paper, we propose a semantic histogram-based graph matching method that is robust to viewpoint variation and can achieve real-time global localization. Based on that, we develop a system that can accurately and efficiently perform MR-GL for both homogeneous and heterogeneous robots. The experimental results show that our approach is about 30 times faster than Random Walk based semantic descriptors. Moreover, it achieves an accuracy of 95% for global localization, while the accuracy of the state-of-the-art method is 85%.
|
|
TuBT12 Virtual-Asia, Time zone: GMT+1 |
Add to My Program |
Localization and Mapping X |
|
|
Chair: Fischer, Tobias | Queensland University of Technology |
Co-Chair: Kim, Young J. | Ewha Womans University |
|
03:00-03:15, Paper TuBT12.1 | Add to My Program |
Intelligent Reference Curation for Visual Place Recognition Via Bayesian Selective Fusion |
|
Molloy, Timothy L. | University of Melbourne |
Fischer, Tobias | Queensland University of Technology |
Milford, Michael J | Queensland University of Technology |
Nair, Girish | University of Melbourne |
Keywords: Localization
Abstract: A key challenge in visual place recognition (VPR) is recognizing places despite drastic visual appearance changes due to factors such as time of day, season, weather or lighting conditions. Numerous approaches based on deep-learnt image descriptors, sequence matching, domain translation, and probabilistic localization have had success in addressing this challenge, but most rely on the availability of carefully curated representative reference images of the possible places. In this paper, we propose a novel approach, dubbed Bayesian Selective Fusion, for actively selecting and fusing informative reference images to determine the best place match for a given query image. The selective element of our approach avoids the counterproductive fusion of every reference image and enables the dynamic selection of informative reference images in environments with changing visual conditions (such as indoors with flickering lights, outdoors during sunshowers or over the day-night cycle). The probabilistic element of our approach provides a means of fusing multiple reference images that accounts for their varying uncertainty via a novel training-free likelihood function for VPR. On difficult query images from two benchmark datasets, we demonstrate that our approach matches and exceeds the performance of alternative fusion approaches along with state-of-the-art techniques that are provided with prior (unfair) knowledge of the best reference images.
|
|
03:15-03:30, Paper TuBT12.2 | Add to My Program |
Accelerating Probabilistic Volumetric Mapping Using Ray-Tracing Graphics Hardware |
|
Min, Heajung | Ewha Womans University |
Han, Kyung Min | Ewha Womans University |
Kim, Young J. | Ewha Womans University |
Keywords: Mapping, Simulation and Animation, Software, Middleware and Programming Environments
Abstract: Probabilistic volumetric mapping (PVM) represents a 3D environmental map for autonomous robotic navigation tasks. A popular implementation, Octomap, is widely used in the robotics community for this purpose. Octomap relies on an octree to represent a PVM, and its main bottleneck lies in massive ray shooting to determine the occupancy of the underlying volumetric voxel grids. In this paper, we propose GPU-based ray shooting to drastically improve the ray-shooting performance in Octomap. Our main idea is based on recent ray-tracing RTX GPUs, designed mainly for real-time photo-realistic computer graphics, and the accompanying graphics API, known as DXR. Our ray shooting first maps leaf-level voxels in the given octree to a set of axis-aligned bounding boxes (AABBs) and employs massively parallel ray shooting on them using GPUs to find free and occupied voxels. These are fed back to the CPU to update the voxel occupancy and restructure the octree. In our experiments, we have observed more than three orders of magnitude of performance improvement in ray shooting using ray-tracing RTX GPUs over a state-of-the-art Octomap CPU implementation, where the benchmarking environments consist of more than 77K points and 25K∼34K voxel grids.
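The DXR acceleration structure is GPU-specific, but the geometric test it accelerates per voxel — intersecting a ray with an AABB — is the classic slab method. A minimal CPU-side sketch (names illustrative; assumes a direction vector with no zero components):

```python
import numpy as np

def ray_aabb(origin, direction, box_min, box_max):
    """Slab-method ray/AABB test; returns (hit, t_near, t_far)."""
    inv_d = 1.0 / direction                    # assumes no zero components
    t0 = (box_min - origin) * inv_d
    t1 = (box_max - origin) * inv_d
    # Per-axis entry/exit times; the ray is inside all three slabs
    # only on the interval [t_near, t_far].
    t_near = np.max(np.minimum(t0, t1))
    t_far = np.min(np.maximum(t0, t1))
    hit = bool(t_near <= t_far and t_far >= 0.0)
    return hit, t_near, t_far
```

A production GPU version runs this test (or its hardware equivalent) over millions of ray/voxel pairs in parallel, which is what the RTX hardware provides for free.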
|
|
03:30-03:45, Paper TuBT12.3 | Add to My Program |
ERASOR: Egocentric Ratio of Pseudo Occupancy-Based Dynamic Object Removal for Static 3D Point Cloud Map Building |
|
Lim, Hyungtae | Korea Advanced Institute of Science and Technology |
Hwang, Sungwon | Korea Advanced Institute of Science and Technology |
Myung, Hyun | KAIST (Korea Adv. Inst. Sci. & Tech.) |
Keywords: Mapping, Range Sensing
Abstract: Scan data of urban environments often include representations of dynamic objects, such as vehicles and pedestrians. However, when constructing a 3D point cloud map through sequential accumulation of the scan data, dynamic objects often leave unwanted traces in the map. These traces act as obstacles and thus impede mobile vehicles from achieving good localization and navigation performance. To tackle this problem, this paper presents a novel static map building method called ERASOR (Egocentric RAtio of pSeudo Occupancy-based dynamic object Removal), which is fast and robust to motion ambiguity. Our approach exploits the fact that most dynamic objects in urban environments are inevitably in contact with the ground. Accordingly, we propose the novel concept of pseudo occupancy to express the occupancy of unit space and then discriminate spaces of varying occupancy. Finally, Region-wise Ground Plane Fitting (R-GPF) is adopted to distinguish static points from dynamic points within the candidate bins that potentially contain dynamic points. As experimentally verified on SemanticKITTI, our proposed method yields promising performance against state-of-the-art methods, overcoming the limitations of existing ray-tracing-based and visibility-based methods.
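R-GPF's region-wise formulation is the paper's contribution; its basic ingredient — separating ground from above-ground points with a least-squares plane fit — can be sketched as follows (function name and threshold are illustrative, not from the paper):

```python
import numpy as np

def fit_ground_plane(points, dist_thresh=0.2):
    """Fit z = ax + by + c by least squares over an (N, 3) array and
    return a boolean mask of points within dist_thresh of the plane."""
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    dist = np.abs(A @ coeffs - points[:, 2])
    return dist < dist_thresh
```

In a pipeline like the one described, such a fit would be run per bin, with robustness measures (outlier rejection, region-wise seeds) layered on top of this plain least-squares core.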
|
|
03:45-04:00, Paper TuBT12.4 | Add to My Program |
UVIP: Robust UWB Aided Visual-Inertial Positioning System for Complex Indoor Environments |
|
Yang, Bo | Nanjing University of Information Science & Technology |
Li, Jun | Southeast University |
Zhang, Hong | University of Alberta |
Keywords: Sensor Fusion, Localization, Range Sensing
Abstract: Indoor positioning without GPS is a challenging task, especially in complex scenes or when sensors fail. In this paper, we develop an ultra-wideband aided visual-inertial positioning system (UVIP) which aims to achieve accurate and robust positioning results in complex indoor environments. To this end, a point-line-based stereo visual-inertial odometry (PL-sVIO) is first designed to improve positioning accuracy in structured or low-textured scenarios by making use of line features. Second, a loop closure method is proposed to suppress the drift of PL-sVIO based on image patch features described by a CNN, for handling large environments and viewpoint variation. Third, an accurate relocalization approach is presented for the case when the visual sensor fails. In this scheme, a top-down matching strategy from image to point and line features is presented to improve relocalization performance. Finally, the UWB sensor is combined with the visual-inertial system to further improve the accuracy and robustness of the positioning system and provide results in a fixed reference frame. Thus, desirable real-time positioning results are derived for complex indoor scenes. Evaluations on challenging public datasets and real-world experiments demonstrate that the proposed UVIP can provide more accurate and robust positioning results in complex indoor environments, even when the visual sensor fails or in the absence of UWB anchors.
|
|
TuBT13 Virtual-Asia, Time zone: GMT+1 |
Add to My Program |
LiDAR-Based Localization II |
|
|
Chair: Cai, Kuanqi | Harbin Institute of Technology/The Chinese University of Hong Kong |
Co-Chair: Li, Zhaoting | Southern University of Science and Technology |
|
03:00-03:15, Paper TuBT13.1 | Add to My Program |
LiDAR-Based Initial Global Localization Using Two-Dimensional (2D) Submap Projection Image (SPI) |
|
Li, Yanhao | Shanghai Jiao Tong University |
Li, Hao | Shanghai Jiao Tong University |
Keywords: Intelligent Transportation Systems, SLAM, Localization
Abstract: Initial global localization is important to mobile robotics in terms of navigation initialization (or re-initialization) and loop closure in SLAM. 3D LiDARs are commonly used in mobile robotics, yet LiDAR-based initial global localization (especially at large scale, such as in outdoor environments) is still challenging due to the lack of salient features in LiDAR range data. Inspired by visual-SLAM-oriented initial global localization methods, we propose a method of LiDAR-based initial global localization using a 2D submap projection image (SPI). Global descriptors are extracted from SPIs for place recognition; pose estimation is realized by feature point matching between the queried SPI and SPIs from a global map database. The proposed initial global localization module runs at 2.4 Hz with a precision of 1.2 m for translation and 1.2° for rotation, which can serve as a suitable initial estimate for subsequent pose-estimation refinement via existing mature point cloud registration methods.
|
|
03:15-03:30, Paper TuBT13.2 | Add to My Program |
Automatic Hyper-Parameter Tuning for Black-Box LiDAR Odometry |
|
Koide, Kenji | National Institute of Advanced Industrial Science and Technology |
Yokozuka, Masashi | Nat. Inst. of Advanced Industrial Science and Technology |
Oishi, Shuji | National Institute of Advanced Industrial Science and Technology |
Banno, Atsuhiko | National Institute of Advanced Industrial Science and Technology |
Keywords: SLAM, Localization
Abstract: LiDAR odometry algorithms are complex and involve a number of hyper-parameters. The choice of hyper-parameters can substantially affect the performance of odometry estimation, and it is necessary to carefully fine-tune them depending on the sensor, environment, and algorithm to achieve the best estimation results. While odometry estimation algorithms are often tuned manually, this is time-consuming and may also result in a sub-optimal parameter set. This paper presents an automatic hyper-parameter tuning approach for LiDAR odometry estimation. By taking advantage of the sequential model-based optimization (SMBO) approach, we automatically optimize the hyper-parameter set of a black-box odometry estimation algorithm without detailed knowledge of the algorithm. In addition, a LiDAR data augmentation approach is proposed to prevent overfitting. Through evaluation, we show that the combination of SMBO-based parameter exploration and data augmentation enables us to efficiently and robustly optimize the hyper-parameter set for several different odometry estimation algorithms. We also demonstrate that the optimized parameter set exhibits superior performance on the KITTI dataset and in a real use scenario.
|
|
03:30-03:45, Paper TuBT13.3 | Add to My Program |
Locus: LiDAR-Based Place Recognition Using Spatiotemporal Higher-Order Pooling |
|
Vidanapathirana, Kavisha | Queensland University of Technology |
Moghadam, Peyman | CSIRO |
Harwood, Ben | Data61 CSIRO |
Zhao, Muming | CSIRO |
Sridharan, Sridha | Queensland University of Technology |
Fookes, Clinton | Queensland University of Technology |
Keywords: Localization, Field Robots, Recognition
Abstract: Place Recognition enables the estimation of a globally consistent map and trajectory by providing non-local constraints in Simultaneous Localisation and Mapping (SLAM). This paper presents Locus, a novel place recognition method using 3D LiDAR point clouds in large-scale environments. We propose a method for extracting and encoding topological and temporal information related to components in a scene and demonstrate how the inclusion of this auxiliary information in place description leads to more robust and discriminative scene representations. Second-order pooling along with a non-linear transform is used to aggregate these multi-level features to generate a fixed-length global descriptor, which is invariant to the permutation of input features. The proposed method outperforms state-of-the-art methods on the KITTI dataset. Furthermore, Locus is demonstrated to be robust across several challenging situations such as occlusions and viewpoint changes in 3D LiDAR point clouds. The open-source implementation is available at: https://github.com/csiro-robotics/locus.
|
|
03:45-04:00, Paper TuBT13.4 | Add to My Program |
Automated Extrinsic Calibration for 3D LiDARs with Range Offset Correction Using an Arbitrary Planar Board |
|
Kim, Junha | Seoul National University |
Kim, Changhyeon | Seoul National University |
Han, Youngsoo | Seoul National University |
Kim, H. Jin | Seoul National University |
Keywords: Range Sensing, Calibration and Identification
Abstract: This paper proposes an automatic, accuracy-enhanced extrinsic calibration method for 3D LiDARs with range offset correction, which needs only a single, arbitrarily shaped planar board. One of the most laborious parts of existing LiDAR calibration procedures is manually finding target objects in massive point clouds. To obviate user intervention, we propose automated planar board detection from LiDAR range images. To extract a target completely, we suppress outliers and restore rejected inliers of the target board by introducing a target completion method. We empirically find that range measurements of various LiDARs are mainly skewed by constant offset values. To compensate for this, we suggest a range offset model for each laser channel in the calibration procedure. The relative pose between LiDARs and the range offsets are jointly estimated by minimizing bi-directional point-to-board distances within the iterative re-weighted least squares (IRLS) framework. To verify the suggested range offset model, we obtain and analyze extensive real-world measurements. By conducting experiments using various sensor configurations and board shapes, we quantitatively and qualitatively confirm the accuracy and versatility of the proposed method by comparing with state-of-the-art LiDAR calibration methods. All the source code and data used in the paper are available at: https://github.com/JunhaAgu/AutoL2LCalib.
|
|
TuBT14 Virtual-Asia, Time zone: GMT+1 |
Add to My Program |
Learning-Based Human-Robot Interaction |
|
|
Chair: Sun, Liting | University of California, Berkeley |
Co-Chair: Park, Jaeheung | Seoul National University |
|
03:00-03:15, Paper TuBT14.1 | Add to My Program |
Machine Learning-Based Human-Following System: Following the Predicted Position of a Walking Human |
|
Wang, Ansheng | The University of Tokyo |
Makino, Yasutoshi | The University of Tokyo |
Shinoda, Hiroyuki | Univ. of Tokyo |
Keywords: Human-Centered Robotics, Human Detection and Tracking, Machine Learning for Robot Control
Abstract: Human–robot interaction (HRI) has been widely researched in diverse applications. A robot following a person is one such scenario investigated in the HRI field. However, human movements and actions are complex and can change dramatically. We herein demonstrate a machine learning-based system that allows a person-following robot to track, in real time, the predicted future motion of a walking human from a first-person perspective. We assume that a depth sensor that can detect the human skeleton is mounted on a mobile robot to provide data on the user's motion from a first-person perspective. The system calculates the coordinates of the user's center of gravity (COG) and 25 body joints. These coordinates, expressed relative to the robot based on the tracked person's position, are used as the input to a neural network (NN) that predicts human motion. A five-layer NN estimates, in real time, the relative vectors between the person's current COG and the future positions of the 25 body joints. Using a proportional–integral–derivative (PID) controller, the person-following robot tracks the predicted position of the walking human 0.5 s in advance to increase the robustness of following and to avoid delays.
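The NN predictor is the paper's own; the PID loop it feeds is the textbook discrete form. A minimal sketch (class name, gains, and the 1-D plant are placeholders, not from the paper):

```python
class PID:
    """Minimal discrete PID controller."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, error):
        # Accumulate the integral and difference the error for the D term.
        self.integral += error * self.dt
        derivative = (error - self.prev_err) / self.dt
        self.prev_err = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

In a following system like the one described, `error` would be the distance between the robot and the *predicted* (0.5 s ahead) person position, so the controller leads the target rather than lagging it.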
|
|
03:15-03:30, Paper TuBT14.2 | Add to My Program |
Anytime Game-Theoretic Planning with Active Reasoning about Humans' Latent States for Human-Centered Robots |
|
Tian, Ran | UC Berkeley |
Sun, Liting | University of California, Berkeley |
Tomizuka, Masayoshi | University of California |
Isele, David | University of Pennsylvania, Honda Research Institute USA |
Keywords: Human-Centered Robotics, Cognitive Modeling, Human-Aware Motion Planning
Abstract: A human-centered robot needs to reason about the cognitive limitations and potential irrationality of its human partner to achieve seamless interactions. This paper proposes an anytime game-theoretic planner that integrates iterative reasoning models, a partially observable Markov decision process, and chance-constrained Monte-Carlo belief tree search for robot behavioral planning. Our planner enables a robot to safely and actively reason about its human partner's latent cognitive states (bounded intelligence and irrationality) in real-time to better maximize its utility. We validate our approach in an autonomous driving domain where our behavioral planner and a low-level motion controller hierarchically control an autonomous car to negotiate traffic merges. Simulations and user studies are conducted to show our planner's effectiveness.
|
|
03:30-03:45, Paper TuBT14.3 | Add to My Program |
Momentum Observer-Based Collision Detection Using LSTM for Model Uncertainty Learning |
|
Lim, Daegyu | Seoul National University |
Kim, Donghyeon | Graduate School of Convergence Science and Technology, Seoul Nat |
Park, Jaeheung | Seoul National University |
Keywords: Safety in HRI, Deep Learning Methods, Physical Human-Robot Interaction
Abstract: As robots begin to collaborate with people in real life, their applicability and practicality are continuously increasing. To reliably employ robots near people, safety needs to be rigorously ensured. In addition to collision prevention algorithms, studies are being actively conducted on collision handling methods. The Momentum Observer (MOB) was developed to estimate disturbance torque without using joint acceleration. However, the disturbance estimated by the MOB contains not only the applied external torque but also model uncertainty, such as friction and modeling error due to imprecise system identification. Our proposed method handles this problem by learning the model uncertainty with a Long Short-Term Memory (LSTM) network and thereby estimates the purely external torque with only proprioceptive sensors. The proposed method can be applied even in the extreme modeling-error case where no dynamics model of the robot is available at all. Experiments using a real robot show that the external torque can be estimated, and collisions detected accordingly, even in a limited situation where precise dynamics and friction models are not available.
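For context, the standard momentum-observer residual (the generic formulation, not the paper's LSTM-augmented version) for a manipulator with dynamics $M(q)\ddot q + C(q,\dot q)\dot q + g(q) = \tau + \tau_{\mathrm{ext}}$ and generalized momentum $p = M(q)\dot q$ is

```latex
r(t) = K_O\!\left[\,p(t) - p(0)
  - \int_0^t \bigl(\tau + C(q,\dot q)^{\top}\dot q - g(q) + r\bigr)\,ds\right],
\qquad
\dot r = K_O\,\bigl(\tau_{\mathrm{ext}} - r\bigr),
```

so $r$ is a first-order low-pass estimate of $\tau_{\mathrm{ext}}$ with gain $K_O$. In practice $r$ also absorbs friction and modeling error — precisely the uncertainty the paper's LSTM learns to subtract.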
|
|
03:45-04:00, Paper TuBT14.4 | Add to My Program |
Deep Learning and Mixed Reality to Autocomplete Teleoperation |
|
Kassem Zein, Mohammad | American University of Beirut (AUB) |
Al Aawar, Majd | American University of Beirut |
Asmar, Daniel | American University of Beirut |
Elhajj, Imad | American University of Beirut |
Keywords: Telerobotics and Teleoperation, Virtual Reality and Interfaces, Human Performance Augmentation
Abstract: Teleoperation of robots can be challenging, especially for novice users with little to no experience at such tasks. The difficulty is largely due to the numerous degrees of freedom users must control and their limited perception bandwidth. To help mitigate these challenges, we propose in this paper a solution which relies on artificial intelligence to understand user intended motion and then on mixed reality to communicate the estimated trajectories to the users in an intuitive manner. User intended motion is estimated using a deep learning network trained on a dataset of motion primitives. During teleoperation, the estimated motions are augmented onto a first-person live video feed from the robot. Finally, if a suggested motion is accepted by the user, the robot is driven along that trajectory in an autonomous manner. We validate our proposed mixed reality teleoperation scheme with simulation experiments on a drone and demonstrate, through subjective and objective evaluation, its advantages over other teleoperation methods.
|
|
TuBT15 Virtual-Asia, Time zone: GMT+1 |
Add to My Program |
Learning in Robotics and Automation I |
|
|
Chair: Zhang, Zhengyan | Harbin Institute of Technology, Shenzhen |
Co-Chair: Kuniyoshi, Yasuo | The University of Tokyo |
|
03:00-03:15, Paper TuBT15.1 | Add to My Program |
Learning Spatial Context with Graph Neural Network for Multi-Person Pose Grouping |
|
Lin, Jiahao | National University of Singapore |
Lee, Gim Hee | National University of Singapore |
Keywords: Deep Learning for Visual Perception
Abstract: Bottom-up approaches for image-based multi-person pose estimation consist of two stages: (1) keypoint detection and (2) grouping of the detected keypoints to form person instances. Current grouping approaches rely on learned embedding from only visual features that completely ignore the spatial configuration of human poses. In this work, we formulate the grouping task as a graph partitioning problem, where we learn the affinity matrix with a Graph Neural Network (GNN). More specifically, we design a Geometry-aware Association GNN that utilizes spatial information of the keypoints and learns local affinity from the global context. The learned geometry-based affinity is further fused with appearance-based affinity to achieve robust keypoint association. Spectral clustering is used to partition the graph for the formation of the pose instances. Experimental results on two benchmark datasets show that our proposed method outperforms existing appearance-only grouping frameworks, which shows the effectiveness of utilizing spatial context for robust grouping.
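A hedged sketch of the final grouping step described above — spectral clustering of an affinity matrix. The toy keypoints and kernel bandwidth are invented for illustration, and the paper's learned geometry/appearance affinities are replaced by a plain Gaussian kernel:

```python
import numpy as np

# Toy detected keypoints (x, y): the first three belong to one person,
# the last three to another (hypothetical coordinates).
kps = np.array([[0.0, 0.0], [0.3, 0.1], [0.1, 0.4],
                [5.0, 5.0], [5.2, 4.9], [4.8, 5.3]])

# Stand-in affinity: Gaussian kernel on pairwise squared distances.
d2 = ((kps[:, None, :] - kps[None, :, :]) ** 2).sum(-1)
A = np.exp(-d2 / 8.0)
np.fill_diagonal(A, 0.0)

# Normalized graph Laplacian L = I - D^{-1/2} A D^{-1/2}.
d = A.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
L = np.eye(len(A)) - D_inv_sqrt @ A @ D_inv_sqrt

# The sign of the Fiedler vector (second-smallest eigenvector of L)
# bipartitions the keypoint graph into the two person instances.
eigvals, eigvecs = np.linalg.eigh(L)
labels = (eigvecs[:, 1] > 0).astype(int)
```

With a learned affinity in place of the Gaussian kernel, the same Fiedler-vector bipartition (or k-way clustering on more eigenvectors) yields the person instances.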
|
|
03:15-03:30, Paper TuBT15.2 | Add to My Program |
Automatic Hanging Point Learning from Random Shape Generation and Physical Function Validation |
|
Takeuchi, Kosuke | The University of Tokyo |
Yanokura, Iori | University of Tokyo |
Kakiuchi, Yohei | The University of Tokyo |
Okada, Kei | The University of Tokyo |
Inaba, Masayuki | The University of Tokyo |
Keywords: Deep Learning for Visual Perception, Data Sets for Robotic Vision, Recognition
Abstract: The purpose of this paper is robotic hanging manipulation of objects of various shapes, not limited to a specific category. To achieve this, we propose a method that allows the estimator to learn many different shapes with hanging points without any manual annotation. A random shape generator using a GAN overcomes the limited number of available 3D models and can handle objects of various shapes. In addition, hanging is repeated in a dynamics simulation, and hanging points are generated automatically. A large amount of training data is generated by rendering random-textured objects with hanging points in the randomized simulation environment. A deep neural network trained with these data was able to estimate hanging points of an unknown-category object in the real world and achieved hanging manipulation by a robot.
|
|
03:30-03:45, Paper TuBT15.3 | Add to My Program |
Gaze-Based Dual Resolution Deep Imitation Learning for High-Precision Dexterous Robot Manipulation |
|
Kim, Heecheol | The University of Tokyo |
Ohmura, Yoshiyuki | The University of Tokyo |
Kuniyoshi, Yasuo | The University of Tokyo |
Keywords: Imitation Learning, Deep Learning in Grasping and Manipulation, Bioinspired Robot Learning
Abstract: A high-precision manipulation task, such as needle threading, is challenging. Physiological studies have proposed connecting low-resolution peripheral vision and fast movement to transport the hand into the vicinity of an object, and using high-resolution foveated vision to achieve the accurate homing of the hand to the object. The results of this study demonstrate that a deep imitation learning based method, inspired by the gaze-based dual resolution visuomotor control system in humans, can solve the needle threading task. First, we recorded the gaze movements of a human operator who was teleoperating a robot. Then, we used only a high-resolution image around the gaze to precisely control the thread position when it was close to the target. We used a low-resolution peripheral image to reach the vicinity of the target. The experimental results obtained in this study demonstrate that the proposed method enables precise manipulation tasks using a general-purpose robot manipulator and improves computational efficiency.
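A minimal sketch of the dual-resolution input representation described in this abstract (the function name, crop size, and stride are invented; in the paper the gaze comes from a human operator, here it is just a coordinate):

```python
import numpy as np

def dual_resolution(image, gaze, fovea=32, stride=4):
    """Return a high-res crop around the gaze point and a subsampled
    peripheral view (illustrative stand-in for the paper's pipeline)."""
    h, w = image.shape[:2]
    # clamp the gaze so the foveal window stays inside the image
    cy = int(np.clip(gaze[0], fovea // 2, h - fovea // 2))
    cx = int(np.clip(gaze[1], fovea // 2, w - fovea // 2))
    foveal = image[cy - fovea // 2: cy + fovea // 2,
                   cx - fovea // 2: cx + fovea // 2]
    peripheral = image[::stride, ::stride]
    return foveal, peripheral

img = np.arange(128 * 128).reshape(128, 128)
fov, per = dual_resolution(img, gaze=(10, 120))   # gaze clamped inside image
```

The low-resolution peripheral view drives coarse reaching, while the foveal crop supports precise homing, mirroring the two control regimes in the abstract.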
|
|
03:45-04:00, Paper TuBT15.4 | Add to My Program |
Graph Convolutional Network Based Configuration Detection for Freeform Modular Robot Using Magnetic Sensor Array |
|
Tu, Yuxiao | The Chinese University of Hong Kong, Shenzhen |
Liang, Guanqi | The Chinese University of Hong Kong, Shenzhen |
Lam, Tin Lun | The Chinese University of Hong Kong, Shenzhen |
Keywords: Cellular and Modular Robots, Localization
Abstract: Modular self-reconfigurable robotic (MSRR) systems are potentially more robust and more adaptive than conventional systems. Following our previous work, in which we proposed a freeform MSRR module called FreeBOT, this paper presents a novel configuration detection system for FreeBOT using a magnetic sensor array. Up to 11 modules can be connected to a single FreeBOT module, and the proposed configuration detection system can locate a variable number of connection points accurately in real time. By equipping FreeBOT with 24 magnetic sensors, the magnetic flux density produced by the magnets and steel spherical shells can be monitored. The connectable area is split into 199 non-uniform regions, including 84 uniform regions. Using a Graph Convolutional Network (GCN) based algorithm, the connection points can be located accurately in ferromagnetic environments. The system can locate a variable number of connection points over this region division with only single-connection-point training data. Finally, the localization algorithm runs faster than 40 Hz on FreeBOT. With the real-time configuration detection system, the FreeBOT system has the potential to reconfigure automatically and accurately.
|
|
TuBT16 Virtual-Asia, Time zone: GMT+1 |
Add to My Program |
Learning and Control in Robotics and Automation |
|
|
Chair: Weng, Paul | Shanghai Jiao Tong University |
Co-Chair: Liu, Boyi | Chinese Academy of Sciences |
|
03:00-03:15, Paper TuBT16.1 | Add to My Program |
Hyperparameter Auto-Tuning in Self-Supervised Robotic Learning |
|
Huang, Jiancong | Guangdong University of Technology |
Rojas, Juan | Chinese University of Hong Kong |
Zimmer, Matthieu | Shanghai Jiao Tong University |
Wu, Hongmin | Guangdong Institute of Intelligent Manufacturing |
Guan, Yisheng | Guangdong University of Technology |
Weng, Paul | Shanghai Jiao Tong University |
Keywords: Reinforcement Learning, AI-Based Methods, Representation Learning
Abstract: Policy optimization in reinforcement learning requires the selection of numerous hyperparameters across different environments. Fixing them incorrectly may degrade optimization performance, leading notably to insufficient or redundant learning. Insufficient learning (due to convergence to local optima) results in under-performing policies, whilst redundant learning wastes time and resources. The effects are further exacerbated when using single policies to solve multi-task learning problems. In this paper, we study how the Evidence Lower Bound (ELBO) used in Variational Auto-Encoders (VAEs) is affected by the diversity of image samples. Different tasks or setups in visual reinforcement learning incur varying diversity. We exploit the ELBO to create an auto-tuning technique for self-supervised reinforcement learning. Our approach can auto-tune three hyperparameters: the replay buffer size, the number of policy gradient updates during each epoch, and the number of exploration steps during each epoch. We use the state-of-the-art self-supervised robotic learning framework (Reinforcement Learning with Imagined Goals (RIG) using Soft Actor-Critic) as the baseline for experimental verification. Experiments show that our method can auto-tune online and yields the best performance at a fraction of the time and computational resources. Code, video, and appendix for simulated and real-robot experiments can be found at www.JuanRojas.net/autotune.
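For reference, the ELBO monitored by this approach decomposes, for a Gaussian-posterior VAE, into a reconstruction term minus a KL term. A minimal numpy version (unit-variance Gaussian decoder assumed; the paper's diversity analysis and auto-tuning logic are not reproduced here):

```python
import numpy as np

def gaussian_kl(mu, log_var):
    """KL( N(mu, diag(exp(log_var))) || N(0, I) ), summed over latent dims."""
    return 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var, axis=-1)

def elbo(x, x_recon, mu, log_var):
    """Per-sample ELBO with a unit-variance Gaussian decoder (up to constants)."""
    recon = -0.5 * np.sum((x - x_recon) ** 2, axis=-1)
    return recon - gaussian_kl(mu, log_var)

# Sanity check: perfect reconstruction with the posterior equal to the prior
# gives an ELBO of zero (up to the dropped constants).
x = np.zeros((4, 8))
sanity = elbo(x, x, np.zeros((4, 2)), np.zeros((4, 2)))
```

A batch-averaged value of this quantity over recent rollouts is the kind of signal the abstract's auto-tuner can monitor as sample diversity changes.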
|
|
03:15-03:30, Paper TuBT16.2 | Add to My Program |
An Analytical Diabolo Model for Robotic Learning and Control |
|
von Drigalski, Felix Wolf Hans Erich | OMRON SINIC X Corporation |
Joshi, Devwrat Omkar | Osaka University |
Murooka, Takayuki | The University of Tokyo |
Tanaka, Kazutoshi | OMRON SINIC X Corporation |
Hamaya, Masashi | OMRON SINIC X Corporation |
Ijiri, Yoshihisa | OMRON Corp |
Keywords: Art and Entertainment Robotics, Data Sets for Robot Learning, Dual Arm Manipulation
Abstract: In this paper, we present a model of a diabolo that can be used to train agents in simulation to play diabolo, as well as to run it on a real dual-arm robot system. We first derive an analytical model of the diabolo-string system and validate its accuracy using data recorded via motion capture, which we release as a public dataset of skilled play with diabolos of different dynamics. We show that our model outperforms a deep-learning-based predictor, both in terms of precision and physically consistent behavior. Next, we describe a method based on optimal control to generate robot trajectories that produce the desired diabolo trajectory, as well as a system to transform higher-level actions into robot motions. Finally, we test our method on a real robot system playing the diabolo, throwing it to and catching it from a human player.
|
|
03:30-03:45, Paper TuBT16.3 | Add to My Program |
Peer-Assisted Robotic Learning: A Data-Driven Collaborative Learning Approach for Cloud Robotic Systems |
|
Liu, Boyi | Chinese Academy of Sciences |
Wang, Lujia | Shenzhen Institutes of Advanced Technology |
Chen, Xinquan | Shenzhen Institutes of Advanced Technology, Shenzhen, China |
Huang, Lexiong | Shenzhen Institutes of Advanced Technology |
Han, Dong | Shenzhen Institutes of Advanced Technology, Chinese Academy of S |
Xu, Cheng-Zhong | University of Macau |
Keywords: Big Data in Robotics and Automation, Multi-Robot Systems, Machine Learning for Robot Control
Abstract: A technological revolution is occurring in the field of robotics with data-driven deep learning technology. However, building datasets for each local robot is laborious. Meanwhile, data islands between local robots prevent data from being utilized collaboratively. To address this issue, this work presents Peer-Assisted Robotic Learning (PARL) in robotics, inspired by peer-assisted learning in cognitive psychology and pedagogy. PARL implements data collaboration within the framework of cloud robotic systems. Both data and models are shared by robots to the cloud after semantic computing and local training. The cloud converges the data and performs augmentation, integration, and transfer. Finally, models fine-tuned on this larger shared dataset in the cloud are transferred back to the local robots. Furthermore, we propose the DAT Network (Data Augmentation and Transferring Network) to implement the data processing in PARL. The DAT Network can realize the augmentation of data from multiple local robots. We conduct experiments on a simplified self-driving task for robots (cars). The DAT Network achieves a significant improvement in augmentation for self-driving scenarios. The self-driving experimental results also demonstrate that PARL is capable of improving learning effects through data collaboration among local robots.
|
|
03:45-04:00, Paper TuBT16.4 | Add to My Program |
Imitation Learning of Hierarchical Driving Model: From Continuous Intention to Continuous Trajectory |
|
Wang, Yunkai | Zhejiang University |
Zhang, Dongkun | Zhejiang University |
Wang, Jingke | Zhejiang University |
Chen, Zexi | Zhejiang University |
Li, Yuehua | Zhejiang Lab |
Wang, Yue | Zhejiang University |
Xiong, Rong | Zhejiang University |
Keywords: Imitation Learning, Motion and Path Planning, Vision-Based Navigation
Abstract: One of the challenges in reducing the gap between machine-level and human-level driving is endowing the system with the learning capacity to deal with the coupled complexity of environments, intentions, and dynamics. In this paper, we propose a hierarchical driving model with explicit models of continuous intention and continuous dynamics, which decouples the complexity in the observation-to-action reasoning present in human driving data. Specifically, the continuous intention module takes perception as input to generate a potential map encoding obstacles and intentions. The potential map is then regarded as a condition, together with the current dynamics, to generate a continuous trajectory as output by a continuous function approximator network, whose derivatives can be used for supervision without additional parameters. Finally, our method is validated on both datasets and in simulation, demonstrating higher prediction accuracy of displacement and velocity and smoother generated trajectories. Our method is also deployed on a real vehicle with loop latency, validating its effectiveness. To the best of our knowledge, this is the first work to produce the driving trajectory using a continuous function approximator network.
|
|
TuBT17 Virtual-Asia, Time zone: GMT+1 |
Add to My Program |
Humanoids and Animaloids III |
|
|
Chair: Zhao, Yue | Shanghai Jiao Tong University |
Co-Chair: Park, Hae-Won | Korea Advanced Institute of Science and Technology |
|
03:00-03:15, Paper TuBT17.1 | Add to My Program |
Lywal: A Leg-Wheel Transformable Quadruped Robot with Picking up and Transport Functions |
|
Xue, Yongjiang | Tiangong University |
Yuan, Xichen | Tiangong University |
Wang, Yuhai | Tiangong University |
Yang, Yang | Tiangong University |
Lu, Siyu | Tiangong University |
Zhang, Bo | Tiangong University |
Lai, Juezhu | Tiangong University |
Wang, Jianming | Tianjin Polytechnic University |
Xiao, Xuan | Tiangong University |
Keywords: Legged Robots, Mechanism Design, Wheeled Robots
Abstract: This paper introduces a leg-wheel transformable quadruped robot named Lywal, which can switch between a leg mode and a wheel mode for locomotion, and a claw mode for picking up and transporting objects. First, the mechanical structure of Lywal is designed using an innovative 2-DoF transformable mechanism. Second, the kinematics are analyzed in detail. Then, the mode-switching strategy and the mobile control strategies for the different modes are designed. Finally, a prototype of Lywal is built. The properties of the mobile modes are analyzed, and the picking-up and transport functions of the claw mode are verified through physical experiments.
|
|
03:15-03:30, Paper TuBT17.2 | Add to My Program |
Design of a Compact Embedded Hydraulic Power Unit for Bipedal Robots |
|
Cho, Buyoun | KAIST |
Kim, Min-Su | KAIST |
Kim, Sung Woo | KAIST |
Shin, Seunghoon | KAIST |
Jeong, Yeseong | KAIST |
Oh, Jun Ho | Korea Advanced Inst. of Sci. and Tech |
Park, Hae-Won | Korea Advanced Institute of Science and Technology |
Keywords: Legged Robots, Hydraulic/Pneumatic Actuators, Embedded Systems for Robotic and Automation
Abstract: This paper proposes a design method for a compact embedded hydraulic power unit (HPU) for a bipedal robot, along with a controller to regulate the supply pressure. The HPU consists of an integrated pump-motor unit. The unit is immersed in hydraulic oil for efficient space utilization and heat dissipation from the motor. The HPU design is analyzed to establish the relationship between the thermal variables and motor design parameters, such as the gap radius and wire radius, via thermal and electrical modeling. Through this analysis, the design parameters of a suitable pump-driving motor are chosen. This paper also proposes a control method for the HPU that regulates the supply pressure while minimizing the energy loss caused by bypass through a pressure-regulating valve. The HPU is mounted on top of the bipedal robot platform LIGHT, whose twelve degrees of freedom are actuated using the proposed HPU. Finally, the durability of the designed HPU is demonstrated through a long-term driving test at high pressure. Furthermore, air-walking and squat-motion experiments are conducted with the bipedal robot to demonstrate the capabilities of the HPU and its controller.
|
|
03:30-03:45, Paper TuBT17.3 | Add to My Program |
Stair Climbing Capability-Based Dimensional Synthesis for the Multi-Legged Robot |
|
Li, Huayang | Shanghai Jiao Tong University |
Qi, Chenkun | Shanghai Jiao Tong University |
Chen, Xianbao | Shanghai Jiao Tong University |
Mao, Liheng | Shanghai Jiao Tong University |
Zhao, Yue | Shanghai Jiao Tong University |
Gao, Feng | Shanghai Jiao Tong University |
Keywords: Legged Robots, Mechanism Design, Simulation and Animation
Abstract: The staircase is a typical obstacle for legged robots to overcome in buildings. This paper studies stair-climbing-capability-based dimensional synthesis for a hexapod legged robot, i.e., how to determine the leg length and the longitudinal body length with respect to the target staircase at the mechanical design stage. In climbing a staircase, leg-staircase interference is one of the predominant issues. The three possible interference cases are illustrated in detail with a 2-DOF (degree-of-freedom) leg mechanism and the staircase size, based on the predefined tripod gait sequence. The mathematical relationships between the leg length, the longitudinal body length, and the target staircase size are derived. The leg length and the body length are then determined from the target staircase size. Virtual simulations and prototype experiments verify the effectiveness of the dimensional synthesis for the hexapod robot.
|
|
03:45-04:00, Paper TuBT17.4 | Add to My Program |
Versatile Locomotion by Integrating Ankle, Hip, Stepping, and Height Variation Strategies |
|
Ding, Jiatao | Shenzhen Institute of Artificial Intelligence and Robotics for S |
Xin, Songyan | The University of Edinburgh |
Lam, Tin Lun | The Chinese University of Hong Kong, Shenzhen |
Vijayakumar, Sethu | University of Edinburgh |
Keywords: Humanoid and Bipedal Locomotion, Legged Robots, Motion and Path Planning
Abstract: Stable walking in real-world environments is a challenging task for humanoid robots, especially when considering the dynamic disturbances, e.g., caused by external perturbations that may be encountered during locomotion. The varying nature of disturbance necessitates high adaptability. In this paper, we propose an enhanced Nonlinear Model Predictive Control (NMPC) approach for robust and adaptable walking – we term it versatile locomotion, by limiting both the Center of Pressure (CoP) and Divergent Component of Motion (DCM) movements. Due to utilization of the Nonlinear Inverted Pendulum plus Flywheel model, the robot is endowed with the capabilities of CoP manipulation (if equipped with finite-sized feet), step location adjustment, upper body rotation, and vertical height variation. Considering the feasibility constraints, especially the usage of relaxed CoP constraints, the NMPC scheme is established as a Quadratically Constrained Quadratic Programming problem, which is solved efficiently by Sequential Quadratic Programming with enhanced solvability. Simulation experiments demonstrate the effectiveness of our method to recruit optimal hybrid strategies in order to realize versatile locomotion, for the robot with finite-sized or point feet.
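For readers unfamiliar with the quantities being constrained above: in the Linear Inverted Pendulum model underlying such controllers, the DCM is the unstable part of the CoM state and diverges away from the CoP. A minimal sketch of the standard LIP/DCM relations (not the paper's NMPC; values are illustrative):

```python
import numpy as np

g, z0 = 9.81, 0.8                 # gravity, nominal CoM height [m]
omega = np.sqrt(g / z0)           # LIP natural frequency [1/s]

def dcm(com, com_vel):
    """Divergent Component of Motion: xi = c + c_dot / omega."""
    return com + com_vel / omega

# The DCM obeys xi_dot = omega * (xi - cop): it diverges away from the CoP,
# which is why controllers bound CoP and DCM jointly. One Euler step:
dt = 0.01
xi = dcm(com=0.0, com_vel=0.3)
cop = 0.05                        # current Center of Pressure [m]
xi_next = xi + dt * omega * (xi - cop)
```

Keeping the CoP on the far side of the DCM (via feet, stepping, upper-body rotation, or height variation) is what stabilizes this otherwise divergent dynamic.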
|
|
TuBT18 Virtual-Asia, Time zone: GMT+1 |
Add to My Program |
Human-Robot Interaction VIII |
|
|
Chair: Chen, Chao | Monash University |
|
03:00-03:15, Paper TuBT18.1 | Add to My Program |
Human-In-The-Loop Auditory Cueing Strategy for Gait Modification |
|
Wu, Tina LY | Monash University |
Murphy, Anna | Monash Health |
Chen, Chao | Monash University |
Kulic, Dana | Monash University |
Keywords: Human Factors and Human-in-the-Loop, Rehabilitation Robotics, Wearable Robotics
Abstract: External feedback in the form of visual, auditory and tactile cues has been used to assist patients to overcome mobility challenges. However, these cues can become less effective over time. There is limited research on adapting cues to account for inter- and intra-personal variations in cue responsiveness. We propose a cue-provision framework that consists of a gait performance monitoring algorithm and an adaptive cueing strategy to improve gait performance. The proposed approach learns a model of the person's response to cues using Gaussian Process regression. The model is then used within an online optimization algorithm to generate cues that improve gait performance. We conduct a study with healthy participants to evaluate the ability of the adaptive cueing strategy to influence human gait, and compare its effectiveness to two other cueing approaches: the standard fixed-cue approach and a proportional-cue approach. The results show that, once the response model is learned, adaptive cueing is more effective in changing the person's gait state than the other methods.
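A hedged sketch of the core loop implied by this abstract — fit a GP to observed (cue, response) pairs, then pick the cue whose predicted response matches a target gait change. All data and hyperparameters below are invented for illustration:

```python
import numpy as np

def rbf(a, b, ell=0.5, sf=1.0):
    """Squared-exponential kernel between 1-D input vectors."""
    return sf ** 2 * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

# Hypothetical observations: cue tempo offset -> measured change in gait state.
X = np.array([-0.2, -0.1, 0.0, 0.1, 0.2])
y = np.array([-0.15, -0.07, 0.0, 0.08, 0.14])
noise = 1e-3

# GP posterior mean over a grid of candidate cues.
Xs = np.linspace(-0.25, 0.25, 101)
K = rbf(X, X) + noise * np.eye(len(X))
mu = rbf(Xs, X) @ np.linalg.solve(K, y)

# Choose the cue whose predicted response best matches a target gait change.
target = 0.10
best_cue = Xs[np.argmin(np.abs(mu - target))]
```

In an online setting, each new (cue, response) observation is appended to X and y and the posterior is refit, which is how the model tracks intra-personal drift in responsiveness.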
|
|
03:15-03:30, Paper TuBT18.2 | Add to My Program |
A Self-Training Approach-Based Traversability Analysis for Mobile Robots in Urban Environments |
|
Lee, Hyunsuk | Korea University |
Chung, Woojin | Korea University |
Keywords: Mapping, Learning from Experience
Abstract: This paper presents a method for LiDAR sensor-based traversability analysis for autonomous mobile robots in urban environments. Although urban environments are structured, typical terrain contains regions hazardous to mobile robots. Therefore, a reliable method for detecting traversable regions is required to prevent robots from getting stuck in the middle of the road. Conventional approaches require considerable effort to obtain a model for traversability analysis for a specific robot or environment. In particular, learning-based methods require explicit training data. This paper introduces a method for traversability mapping based on a self-training algorithm that eliminates the hand-labeling process. A neural network was applied as the underlying classifier of the self-training algorithm. With our approach, the model can be learned even from weakly labeled data obtained from robot-specific parameters and the robot's footprint. In practical experiments, the self-trained model achieved better performance than the existing supervised learning method. Moreover, as the fraction of unlabeled data increased, the performance also increased. The demonstrations in urban environments therefore indicate the effectiveness of the proposed method for traversability mapping.
|
|
03:30-03:45, Paper TuBT18.3 | Add to My Program |
Active and Interactive Mapping with Dynamic Gaussian Process Implicit Surfaces for Mobile Manipulators |
|
Liu, Liyang | University of Sydney |
Vidal-Calleja, Teresa A. | University of Technology Sydney |
Wu, Lan | University of Technology Sydney |
Paul, Gavin | University of Technology, Sydney |
Vu, Thanh | University of Technology Sydney |
Fryc, Simon | University of Technology Sydney |
Keywords: Mapping, Perception for Grasping and Manipulation
Abstract: In this letter, we present an interactive probabilistic mapping framework for a mobile manipulator picking objects from a pile. The aim is to map the scene, actively decide where to go next and which object to pick, make changes to the scene by picking the chosen object, and then map these changes as well. The proposed framework uses a novel dynamic Gaussian Process (GP) Implicit Surface method to incrementally build and update a scene map that reflects environment changes. The framework actively computes the next-best view, balancing object reachability for picking against map information gain (IG) for fidelity and coverage. To enforce a priority of visiting boundary segments over unknown regions, the IG formulation includes an uncertainty-gradient-based frontier score obtained by exploiting the GP kernel derivative. This leads to an efficient strategy that addresses the often conflicting requirements of unknown-environment exploration and object-picking exploitation given a limited execution horizon. We demonstrate the effectiveness of our framework with software simulation and real-life experiments.
|
|
03:45-04:00, Paper TuBT18.4 | Add to My Program |
Proactive Interaction Framework for Intelligent Social Receptionist Robots |
|
Xue, Yang | Baidu |
Wang, Fan | Baidu International Technology (Shenzhen) Co., Ltd |
Tian, Hao | Baidu |
Zhao, Min | Baidu |
Li, Jiangyong | Baidu |
Pan, Haiqing | Baidu |
Dong, Yueqiang | Baidu |
Keywords: Social HRI, Emotional Robotics, Intention Recognition
Abstract: Existing approaches to proactive HRI are based either on multi-stage decision processes or on end-to-end decision models. However, the rule-based approaches require sedulous expert effort and only handle minimal pre-defined scenarios. Besides, existing works with end-to-end models are limited to very general greetings or a few behavior patterns (< 10). To address these challenges, we propose a new end-to-end framework, the TransFormer with Visual Tokens for Human-Robot Interaction (TFVT-HRI). The proposed framework first extracts visual tokens of relevant objects from an RGB camera. To ensure correct interpretation of the scenario, a transformer decision model augmented with temporal and spatial information is then employed to process the visual tokens. It predicts the appropriate action to take in each scenario and identifies the right target. Our data are collected from an in-service receptionist robot in an office building and then annotated by experts for appropriate proactive behavior. The action set includes 1000+ diverse patterns combining language, emoji expressions, and body motions. We compare our model with other SOTA end-to-end models on both offline test sets and in online user experiments in realistic office building environments. The decision model achieves SOTA performance, resulting in more humanness and intelligence compared with previous reactive reception policies.
|
|
TuBT19 Virtual-Asia, Time zone: GMT+1 |
Add to My Program |
Field Robotics VIII |
|
|
Co-Chair: Shi, Fan | The University of Tokyo |
|
03:00-03:15, Paper TuBT19.1 | Add to My Program |
A Coach-Based Bayesian Reinforcement Learning Method for Snake Robot Control |
|
Jia, Yuanyuan | Ritsumeikan University |
Ma, Shugen | Ritsumeikan University |
Keywords: Biologically-Inspired Robots, Redundant Robots, Reinforcement Learning
Abstract: Reinforcement Learning (RL) usually needs thousands of episodes, making its application on physical robots expensive and challenging. Little research has been reported on snake robot control using RL because of the additional difficulty posed by the robot's highly redundant degrees of freedom. We propose a coach-based deep learning method for snake robot control, which can effectively reduce convergence time with far fewer episodes. The main contributions include: 1) a unified graph-based Bayesian framework integrating a coach module to guide the RL agent; 2) an explicit stochastic formulation of robot-environment interaction under uncertainty; 3) an efficient and robust training process for snake robot control that achieves path planning and obstacle avoidance simultaneously. The performance has been demonstrated on both simulation and real-world data in comparison with the state of the art, showing promising results.
|
|
03:15-03:30, Paper TuBT19.2 | Add to My Program |
Estimation of Spatially-Correlated Ocean Currents from Ensemble Forecasts and Online Measurements |
|
To, Kwun Yiu Cadmus | University of Technology Sydney |
Kong, Felix Honglim | The University of Technology Sydney |
Lee, Ki Myung Brian | University of Technology Sydney |
Yoo, Chanyeol | University of Technology Sydney |
Anstee, Stuart David | Defence Science and Technology Group |
Fitch, Robert | University of Technology Sydney |
Keywords: Marine Robotics, Environment Monitoring and Management, Probability and Statistical Methods
Abstract: We present a method to estimate two-dimensional, time-invariant oceanic flow fields based on data from both ensemble forecasts and online measurements. Our method produces a realistic estimate in a computationally efficient manner suitable for use in marine robotics for path planning and related applications. We use kernel methods and singular value decomposition to find a compact model of the ensemble data that is represented as a linear combination of basis flow fields and that preserves the spatial correlations present in the data. Online measurements of ocean current, taken for example by marine robots, can then be incorporated using recursive Bayesian estimation. We provide computational analysis, performance comparisons with related methods, and demonstration with real-world ensemble data to show the computational efficiency and validity of our method. Possible applications in addition to path planning include active perception for model improvement through deliberate choice of measurement locations.
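A minimal sketch of the pipeline this abstract describes — a truncated SVD of ensemble members gives a compact basis of flow fields, and sparse online measurements then update the basis coefficients via a Bayesian estimate (here a batch MAP solve rather than the recursive form; the synthetic ensemble and all constants are invented):

```python
import numpy as np

rng = np.random.default_rng(0)

# Ensemble of flattened flow fields: each column is one forecast member.
n_cells, n_members = 200, 30
base = np.sin(np.linspace(0, 3 * np.pi, n_cells))
ensemble = np.stack([a * base + 0.05 * rng.standard_normal(n_cells)
                     for a in rng.uniform(0.5, 1.5, n_members)], axis=1)

# Compact linear model: flow ~ U_r @ c, with U_r from a truncated SVD.
U, s, _ = np.linalg.svd(ensemble, full_matrices=False)
U_r = U[:, :3]

# A "true" field (hypothetical coefficients) and sparse online measurements,
# e.g. current readings taken by a marine robot at a few cells.
true_field = U_r @ np.array([5.0, 0.0, 0.0])
idx = rng.choice(n_cells, 20, replace=False)
z = true_field[idx] + 0.01 * rng.standard_normal(20)

# MAP estimate of the coefficients: Gaussian prior + Gaussian likelihood.
H = U_r[idx, :]
prior_var, meas_var = 10.0, 0.01 ** 2
c_hat = np.linalg.solve(H.T @ H / meas_var + np.eye(3) / prior_var,
                        H.T @ z / meas_var)
est_field = U_r @ c_hat
```

Because the basis columns carry the ensemble's spatial correlations, a handful of point measurements constrains the flow estimate everywhere, which is what makes the compact model useful for path planning.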
|
|
03:30-03:45, Paper TuBT19.3 | Add to My Program |
Semi-Supervised Gated Recurrent Neural Networks for Robotic Terrain Classification |
|
Ahmadi, Ahmadreza | KAIST |
Nygaard, Tønnes F. | University of Oslo |
Kottege, Navinda | CSIRO |
Howard, David | CSIRO |
Hudson, Nicolas | X, the Moonshot Factory |
Keywords: Legged Robots, Deep Learning Methods, Data Sets for Robot Learning
Abstract: Legged robots are popular candidates for missions in challenging terrains due to their versatile locomotion strategies. Terrain classification is a key enabling technology for autonomous legged robots, allowing them to harness their innate flexibility to adapt to the demands of their operating environment. We show how highly capable machine learning techniques, namely gated recurrent neural networks, allow our target legged robot to correctly classify the terrain it traverses in both supervised and semi-supervised fashions. Tests on a benchmark dataset show that our time-domain classifiers are well capable of handling raw and variable-length data with a small amount of labels, and outperform frequency-domain classifiers. The classification results on our own extended dataset open up a range of high-performance behaviours specific to those environments. Furthermore, we show how raw unlabelled data can be used to significantly improve classification results in a semi-supervised model.
|
|
03:45-04:00, Paper TuBT19.4 | Add to My Program |
Circus ANYmal: A Quadruped Learning Dexterous Manipulation with Its Limbs |
|
Shi, Fan | The University of Tokyo |
Homberger, Timon | ETH Zurich |
Lee, Joonho | ETH Zurich Robotic Systems Laboratory |
Miki, Takahiro | ETH Zurich |
Zhao, Moju | The University of Tokyo |
Farshidian, Farbod | ETH Zurich |
Okada, Kei | The University of Tokyo |
Inaba, Masayuki | The University of Tokyo |
Hutter, Marco | ETH Zurich |
Keywords: Legged Robots, In-Hand Manipulation, Deep Learning in Grasping and Manipulation
Abstract: Quadrupedal robots are skillful at locomotion tasks while lacking manipulation skills, not to mention dexterous manipulation abilities. Inspired by animal behavior and the duality between multi-legged locomotion and multi-fingered manipulation, we showcase a circus ball challenge on a quadrupedal robot, ANYmal. We employ a model-free reinforcement learning approach to train a deep policy that enables the robot to balance and manipulate a light-weight ball robustly using its limbs without any contact measurement sensor. The policy is trained in simulation, in which we randomize many physical properties with additive noise and inject random disturbance forces during manipulation, and it achieves zero-shot deployment on the real robot without any adjustment. In the hardware experiments, dynamic performance is achieved with a maximum rotation speed of 15 deg/s, and robust recovery is showcased under external poking. To the best of our knowledge, this is the first work to demonstrate dexterous dynamic manipulation on a real quadrupedal robot.
|
|
TuBT20 Virtual-Asia, Time zone: GMT+1 |
Add to My Program |
Deep Learning in Robotics I |
|
|
Chair: Tanaka, Kanji | University of Fukui |
|
03:00-03:15, Paper TuBT20.1 | Add to My Program |
Long-Range Hand Gesture Recognition Via Attention-Based SSD Network |
|
Zhou, Liguang | The Chinese University of Hong Kong, Shenzhen |
Du, Chenping | Sun Yat-sen University |
Sun, Zhenglong | Chinese University of Hong Kong, Shenzhen |
Lam, Tin Lun | The Chinese University of Hong Kong, Shenzhen |
Xu, Yangsheng | The Chinese University of Hong Kong, Shenzhen / Shenzhen Institu |
Keywords: Multi-Modal Perception for HRI, Data Sets for Robotic Vision, Aerial Systems: Applications
Abstract: Hand gesture recognition plays an essential role in the human-robot interaction (HRI) field. Most previous research studies hand gesture recognition only at short distances, which cannot be applied to interaction with mobile robots such as unmanned aerial vehicles (UAVs) at a longer and safer distance. Therefore, we investigate the challenging long-range hand gesture recognition problem for interaction between humans and UAVs. To this end, we propose a novel attention-based single shot multibox detector (SSD) model that incorporates both spatial and channel attention for hand gesture recognition. The proposed model notably extends the recognition distance from 1 meter to 7 meters without sacrificing speed. In addition, we present a long-range hand gesture (LRHG) dataset collected by a USB camera mounted on mobile robots. The hand gestures are collected at discrete distance levels from 1 meter to 7 meters, where most of the gestures are small and at low resolution. Experiments on the self-built LRHG dataset show that our method achieves a surprising performance boost over state-of-the-art methods such as the SSD network on both short-range (1 meter) and long-range (up to 7 meters) hand gesture recognition tasks.
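The spatial and channel attention combined in the model above can be sketched generically in numpy. The projection matrices are random stand-ins for learned weights, and the function names are hypothetical, not taken from the paper.

```python
import numpy as np

def channel_attention(feat):
    """Squeeze-and-excitation-style channel attention on a (C, H, W) map.
    The two small projection matrices are random stand-ins for learned weights."""
    C = feat.shape[0]
    rng = np.random.default_rng(0)
    w1 = rng.normal(0, 0.1, (C // 2, C))
    w2 = rng.normal(0, 0.1, (C, C // 2))
    squeeze = feat.mean(axis=(1, 2))                  # global average pool -> (C,)
    excite = 1 / (1 + np.exp(-(w2 @ np.maximum(w1 @ squeeze, 0))))  # (C,) weights
    return feat * excite[:, None, None]               # reweight channels

def spatial_attention(feat):
    """Spatial attention: weight each location by a sigmoid of its pooled response."""
    pooled = feat.mean(axis=0)                        # (H, W)
    mask = 1 / (1 + np.exp(-(pooled - pooled.mean())))
    return feat * mask[None, :, :]

feat = np.random.default_rng(1).normal(size=(8, 4, 4))
out = spatial_attention(channel_attention(feat))
print(out.shape)  # (8, 4, 4)
```

In an SSD-style detector, such reweighted feature maps would feed the multibox classification and regression heads unchanged, which is why attention can be added without sacrificing speed.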
|
|
03:15-03:30, Paper TuBT20.2 | Add to My Program |
Spectral Temporal Graph Neural Network for Trajectory Prediction |
|
Cao, Defu | Peking University |
Li, Jiachen | University of California, Berkeley |
Ma, Hengbo | University of California, Berkeley |
Tomizuka, Masayoshi | University of California |
Keywords: Intelligent Transportation Systems
Abstract: An effective understanding of the contextual environment and accurate motion forecasting of surrounding agents are crucial for the development of autonomous vehicles and social mobile robots. This task is challenging since the behavior of an autonomous agent is affected not only by its own intention, but also by the static environment and surrounding dynamically interacting agents. Previous works focused on utilizing spatial and temporal information in the time domain while not sufficiently taking advantage of cues in the frequency domain. To this end, we propose a Spectral Temporal Graph Neural Network (SpecTGNN), which can capture inter-agent correlations and temporal dependency simultaneously in the frequency domain in addition to the time domain. SpecTGNN operates on both an agent graph with dynamic state information and an environment graph with features extracted from context images in two streams. The model integrates graph Fourier transform, spectral graph convolution and temporal gated convolution to encode history information and forecast future trajectories. Moreover, we incorporate a multi-head spatio-temporal attention mechanism to mitigate the effect of error propagation over a long time horizon. We demonstrate the performance of SpecTGNN on two public trajectory prediction benchmark datasets, on which it achieves state-of-the-art prediction accuracy.
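The graph Fourier transform and spectral filtering that SpecTGNN builds on can be illustrated on a toy agent graph: signals on the nodes are projected onto the eigenbasis of the graph Laplacian, where convolution becomes elementwise filtering. This is the generic operation, not the paper's network.

```python
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # adjacency of 4 interacting agents
D = np.diag(A.sum(axis=1))
L = D - A                                   # combinatorial graph Laplacian

eigvals, U = np.linalg.eigh(L)              # columns of U: graph Fourier basis
x = np.array([1.0, 2.0, 3.0, 4.0])          # a signal on the nodes (e.g. speed)

x_hat = U.T @ x                             # graph Fourier transform
g = np.exp(-eigvals)                        # a low-pass spectral filter
y = U @ (g * x_hat)                         # filtered signal back in node domain
print(y.shape)  # (4,)
```

With the identity filter (g = 1) the transform round-trips exactly, which is the property that lets a network learn g as its spectral convolution weights.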
|
|
03:30-03:45, Paper TuBT20.3 | Add to My Program |
Dark Reciprocal-Rank: Teacher-To-Student Knowledge Transfer from Self-Localization Model to Graph-Convolutional Neural Network |
|
Takeda, Koji | University of Fukui |
Tanaka, Kanji | University of Fukui |
Keywords: Localization, Deep Learning for Visual Perception, Multi-Robot SLAM
Abstract: In visual robot self-localization, graph-based scene representation and matching have recently attracted research interest as robust and discriminative methods for self-localization. Although effective, their computational and storage costs do not scale well to large-scale environments. To alleviate this problem, we formulate self-localization as a graph classification problem and attempt to use the graph convolutional neural network (GCN) as a graph classification engine. A straightforward approach is to use visual feature descriptors that are employed by state-of-the-art self-localization systems, directly as graph node features. However, their superior performance in the original self-localization system may not necessarily be replicated in GCN-based self-localization. To address this issue, we introduce a novel teacher-to-student knowledge-transfer scheme based on rank matching, in which the reciprocal-rank vector output by an off-the-shelf state-of-the-art teacher self-localization model is used as the dark knowledge to transfer. Experiments indicate that the proposed graph-convolutional self-localization network (GCLN) can significantly outperform state-of-the-art self-localization systems, as well as the teacher classifier. The code and dataset are available at https://github.com/KojiTakeda00/Reciprocal_rank_KT_GCN.
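The reciprocal-rank "dark knowledge" described above can be computed as in this short sketch; the function name and scores are illustrative assumptions, not the authors' code.

```python
import numpy as np

def reciprocal_rank_vector(similarities):
    """Turn a teacher's similarity scores over candidate places into a
    reciprocal-rank vector (1 for the best match, 1/2 for the second, ...),
    which a student classifier can then be trained to match."""
    order = np.argsort(-np.asarray(similarities))    # indices in descending score
    rr = np.empty(len(similarities))
    rr[order] = 1.0 / (np.arange(len(similarities)) + 1)
    return rr

scores = [0.2, 0.9, 0.5, 0.1]                        # teacher place similarities
print(reciprocal_rank_vector(scores))  # [0.33... 1.   0.5  0.25]
```

Unlike raw scores, the reciprocal-rank vector is invariant to the teacher's score scale, which is what makes it transferable across heterogeneous teacher and student models.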
|
|
03:45-04:00, Paper TuBT20.4 | Add to My Program |
Efficient SE(3) Reachability Map Generation Via Interplanar Integration of Intra-Planar Convolutions |
|
Han, Yiheng | Tsinghua University |
Pan, Jia | University of Hong Kong |
Xia, Mengfei | Tsinghua University |
Zeng, Long | Tsinghua University |
Liu, Yong-Jin | Tsinghua University |
Keywords: Task and Motion Planning, Industrial Robots
Abstract: Convolution has been used for fast computation of reachability maps, but its application is limited to planar robots, 2D workspaces, or robots with special spatial joint arrangements, due to the high computational cost of performing SE(3) convolution operations for general joint arrangements in industrial robots and 3D workspaces. In this paper, we find that the SE(3) convolution can be decomposed into a set of SE(2) convolutions, which significantly reduces the computational complexity when computing the reachability map of high-DOF robotic manipulators in the 3D workspace. We also leverage GPU parallel computing and the fast Fourier transform to further accelerate the computation procedure. We demonstrate the time efficiency and quality of our approach using a set of numerical experiments for constructing reachability maps and also present a multi-robot plant phenotyping system that uses the computed reachability map for efficient viewpoint selection and path planning.
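The convolution-theorem speed-up underlying this approach can be shown on a single planar (SE(2)-style) slice. This toy FFT-based reachability sketch is our illustration of the idea, not the paper's SE(3) decomposition pipeline; the grid sizes and function name are assumptions.

```python
import numpy as np

def planar_reachability(targets, footprint):
    """targets, footprint: 2D 0/1 arrays on the same grid resolution.
    Returns, for each base cell, how many targets the arm footprint covers."""
    H = targets.shape[0] + footprint.shape[0] - 1
    W = targets.shape[1] + footprint.shape[1] - 1
    # Convolution theorem: pointwise product in the frequency domain.
    F = np.fft.rfft2(targets, s=(H, W)) * np.fft.rfft2(footprint, s=(H, W))
    return np.fft.irfft2(F, s=(H, W))

targets = np.zeros((8, 8)); targets[4, 4] = 1.0     # one target cell
footprint = np.ones((3, 3))                          # reachable offsets of the arm
reach = planar_reachability(targets, footprint)
print(int(reach.round().sum()))  # 9 base cells can reach the target
```

The FFT turns an O(N^2) sliding-window convolution per slice into O(N log N), and independent slices parallelize naturally on a GPU.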
|
|
TuBT21 Virtual-Asia, Time zone: GMT+1 |
Add to My Program |
Biomedical Robotics I |
|
|
Chair: Iordachita, Ioan Iulian | Johns Hopkins University |
Co-Chair: Ren, Hongliang | The Chinese University of Hong Kong (CUHK) |
|
03:00-03:15, Paper TuBT21.1 | Add to My Program |
Orientation Control of an Electromagnetically Actuated Soft-Tethered Colonoscope Based on 2OR Pseudo-Rigid-Body Model |
|
Li, Yehui | The Chinese University of Hong Kong |
Li, Weibing | The Chinese University of Hong Kong |
Xin, Wenci | The Chinese University of Hong Kong |
Zhang, Xue | CUHK |
Xian, Yitian | The Chinese University of Hong Kong |
Chiu, Philip Wai-yan | Chinese University of Hong Kong |
Li, Zheng | The Chinese University of Hong Kong |
Keywords: Medical Robots and Systems, Modeling, Control, and Learning for Soft Robots, Motion Control
Abstract: Colorectal cancer incidence has been rising steadily worldwide. Magnetic colonoscopes provide new approaches to colon inspection and treatment. This paper presents a novel electromagnetically actuated soft-tethered colonoscope that achieves precise and stable orientation control. An inflated balloon is designed to eliminate the unpredictable disturbance of the floating tether. A 2OR Pseudo-Rigid-Body (PRB) model of the soft tether is developed to analyze the relationship between tether deflection and the applied force and torque. A closed-loop control framework is constructed with visual position feedback. Experiments are first conducted to validate the assumptions of the PRB model and the efficacy of the magnetic field model. Then, trajectory tracking tasks and disturbance rejection tests are performed to validate the feasibility of the proposed solution and closed-loop control. Results show that the colonoscope can stably and accurately reach the desired orientation with an absolute mean position error of less than 0.5 mm and an average velocity of 3.5 mm/s. The distal tip can quickly re-stabilize to the desired orientation even under a large disturbance.
|
|
03:15-03:30, Paper TuBT21.2 | Add to My Program |
An Integrated High-Dexterity Cooperative Robotic Assistant for Intraocular Micromanipulation |
|
Jinno, Makoto | Kokushikan University |
Li, Gang | Johns Hopkins University |
Patel, Niravkumar | Johns Hopkins University |
Iordachita, Ioan Iulian | Johns Hopkins University |
Keywords: Medical Robots and Systems, Dexterous Manipulation
Abstract: Retinal surgeons are required to manipulate multiple surgical instruments in a confined intraocular space. The Steady-Hand Eye Robot (SHER), developed in our previous study, enables tremor-free tool manipulation through a cooperative control scheme whereby the surgeon and robot co-manipulate the surgical instruments. However, as straight tools can only approach a target from one direction, the instrument could collide with the eye lens when attempting to access the anterior portion of the retina. In addition, it can be difficult to approach a target on the retina from a suitable direction during procedures such as vein cannulation or membrane peeling. Snake-like robots offer greater dexterity and allow a target on the retina to be accessed from suitable directions. In this study, we present an integrated, high-dexterity, cooperative robotic assistant for intraocular micromanipulation. It comprises an improved integrated robotic intraocular snake (I2RIS) with a user interface (a tactile switch or joystick unit) for manipulating the snake-like distal end, together with the SHER. The integrated system was evaluated through a set of experiments wherein subjects were asked to touch, or insert the tool tip into, randomly assigned targets. The results indicate that the high-dexterity robotic assistant can touch or insert the tip into the same target from multiple directions, with no significant increase in task completion time for either user interface.
|
|
03:30-03:45, Paper TuBT21.3 | Add to My Program |
A Sigmoid-Colon-Straightening Soft Actuator with Peristaltic Motion for Colonoscopy Insertion Assistance: Easycolon |
|
Kim, Hansoul | Korea Advanced Institute of Science and Technology |
Kim, Joonhwan | Korea Advanced Institute of Science and Technology (KAIST) |
You, Jae Min | Korea Advanced Institute of Science and Technology |
Lee, Seung Woo | The Catholic University of Korea |
Kyung, Ki-Uk | Korea Advanced Institute of Science & Technology (KAIST) |
Kwon, Dong-Soo | KAIST |
Keywords: Medical Robots and Systems, Soft Robot Applications
Abstract: Colonoscopy is the most common method of screening for colorectal cancer; however, inserting a colonoscope through nonfixed sites, such as the sigmoid colon, requires highly skilled insertion techniques. Since the sigmoid colon is one of the most difficult nonfixed sites for insertion, straightening it is a major step in colonoscopy. Previous studies have proposed various methods to assist the insertion process, but challenges remain in clinical environments. The goal of this study was to help colonoscopy operators straighten the sigmoid colon using peristaltic motions generated by a soft actuator mounted on a commercial colonoscope. The peristaltic motions of the proposed system combine expanding and extending behaviors, and the straightening strategy was defined based on an analysis of how the sigmoid colon is handled during colonoscopy. The peristaltic motions of the soft actuator were implemented using two balloons and a tendon-sheath mechanism. The colon shortening speed was measured to be about 80 mm/min, which contributed to straightening the sigmoid colon and thereby significantly aided the colonoscopy process.
|
|
03:45-04:00, Paper TuBT21.4 | Add to My Program |
A Miniature Manipulator with Variable Stiffness towards Minimally Invasive Transluminal Endoscopic Surgery |
|
Li, Changsheng | Beijing Institute of Technology |
Yan, Yusheng | Harbin Engineering University |
Xiao, Xiao | Southern University of Science and Technology |
Gu, Xiaoyi | National University of Singapore |
Gao, Huxin | National University of Singapore |
Duan, Xingguang | Beijing Institute of Technology |
Zuo, Xiuli | Qilu Hospital of Shandong University |
Li, Yanqing | Qilu Hospital of Shandong University |
Ren, Hongliang | Faculty of Engineering, National University of Singapore |
Keywords: Medical Robots and Systems, Surgical Robotics: Laparoscopy, Tendon/Wire Mechanism
Abstract: This paper presents a miniature manipulator with variable stiffness for minimally invasive transluminal endoscopic surgery. The manipulator is composed of hollow modules with pinholes in the sidewall; it is compact, with a 4 mm diameter, and dexterous, with six degrees of freedom (DOFs). Cable tubes and torque coils are adopted to avoid motion coupling of the joints. Worm gears with an anti-backdrive property are used to transfer power from the motor to the reels. The variable stiffness of the manipulator is achieved using a spring-sliding-block-based driver system. The system, including the manipulator and the driving system, is described in detail. The kinematics, workspace and variable stiffness are analyzed, and tests of the grasping force and variable stiffness are conducted. Results show that the grasping force is larger than 3 N, and the manipulator's stiffness can be tuned with change rates of 2.45 and 1.69. The manipulator's performance, including its dexterity and the coordination of operation under the master-slave configuration, was preliminarily demonstrated through basic operations.
|
|
TuBT22 Virtual-Asia, Time zone: GMT+1 |
Add to My Program |
Applications of Micro and Nano Robotics II |
|
|
Co-Chair: Ohtsuka, Toshiyuki | Kyoto University |
|
03:00-03:15, Paper TuBT22.1 | Add to My Program |
Design of Soft Sensor for Feedback Control of Bio-Actuator Powered by Skeletal Muscle |
|
Kim, Eunhye | Meijo University |
Takeuchi, Masaru | Nagoya University |
Ohira, Ryosuke | Meijo University |
Nomura, Takuto | Nagoya University |
Hasegawa, Yasuhisa | Nagoya University |
Huang, Qiang | Beijing Institute of Technology |
Fukuda, Toshio | Meijo University |
Keywords: Micro/Nano Robots, Soft Sensors and Actuators, Biological Cell Manipulation
Abstract: Despite the recent attention given to biohybrid robot systems, previous research on their actuation has relied on simple on/off control without feedback. To address this problem, we propose a soft sensor for feedback control of a bio-actuator driven by skeletal muscle. The proposed soft sensor can measure the contraction forces of our previously proposed bio-actuator [1]. The bio-actuator is constructed with a tendon structure and a culture template made of polydimethylsiloxane (PDMS), and it generates contraction forces of 0.3 mN when electrical stimulation is applied. To measure such small contraction forces (0.3 mN), we fabricated a soft sensor using a liquid metal (Galinstan) and HTV-2000. We first measured the Young's modulus of the bio-actuator and then fabricated a soft sensor with a Young's modulus of 68.52 kPa, similar to that of the bio-actuator (45.8 kPa). Next, we simulated the sensor to estimate the resistance change as a function of the applied force. Since the resistance change is very small, we designed a circuit to amplify the signal, allowing resistance changes at the milliohm level to be detected. In addition, we analyzed the time response to confirm that the actuator signal can be detected within 200 ms. As a result, the proposed sensor can measure the force of the bio-actuator without time delay.
|
|
03:15-03:30, Paper TuBT22.2 | Add to My Program |
Molecular Transport of a Magnetic Nanoparticle Swarm towards Thrombolytic Therapy |
|
Manamanchaiyaporn, Laliphat | Thammasat University |
Tang, Xiuzhen | Department of Ultrasound in Medicine and Shanghai Institute of U |
Yan, Xiaohui | The Chinese University of Hong Kong |
Zheng, Yuanyi | Department of Ultrasound in Medicine and Shanghai Institute of U |
Keywords: Micro/Nano Robots, Medical Robots and Systems
Abstract: Due to their miniature size, controllability, navigation and versatile capabilities, micro/nanorobots are appealing for minimally invasive procedures. In particular, a blood clot, an unwanted biological object in a vessel, can lead to severe disease. Herein, we report a magnetite (Fe3O4) nanoparticle swarm that is capable of inducing a hydrodynamic effect to capture tissue plasminogen activator (t-PA) in emergent vortices under a dynamic magnetic field. Once the swarm moves to approach the site of a blood clot, the caged t-PA molecules are transported with it across the interface for thrombolysis. In addition, the dynamic motion of each individual spinning nanorobot can assist clot removal by exerting mechanical force to rub against the softened clot with loose fibrin fibers that is incompletely dissolved by the chemical lysis of t-PA. The feasibility and performance of the swarm for thrombolytic therapy are validated by in vitro experiments. The results show that a blood clot 3 mm in diameter and 9 mm in length is completely removed in two hours, about three times faster than the clinical procedure applying t-PA alone. The contribution is extensible to medical treatments ranging from simple tasks (e.g., detoxification) to complex tasks (e.g., tumor destruction).
|
|
03:30-03:45, Paper TuBT22.3 | Add to My Program |
Efficient Single Cell Mechanical Measurement by Integrating a Cell Arraying Microfluidic Device with Magnetic Tweezer |
|
Tang, Xiaoqing | Beijing Institute of Technology |
Liu, Xiaoming | Beijing Institute of Technology |
Li, Pengyun | Beijing Institute of Technology |
Liu, Dan | Beijing Institute of Technology |
Kojima, Masaru | Osaka University |
Huang, Qiang | Beijing Institute of Technology |
Arai, Tatsuo | University of Electro-Communications |
Keywords: Micro/Nano Robots, Biological Cell Manipulation, Automation at Micro-Nano Scales
Abstract: Cell stiffness is an essential label-free biomarker used to diagnose and sort cells at the single-cell level. Here, we integrated magnetic tweezers with an efficient cell-arraying microfluidic device to evaluate the mechanical properties of single cells. Two motion modes under pulsed electromagnetic fields are proposed for the magnetic beads. In rotation mode, the beads were magnetically actuated to approach the measured cells from a distance, moving experimentally with a maximal velocity of 23 μm/s under a rotating magnetic field of 45 Hz and 45 mT. In translation mode, by contrast, the beads were driven at close range to approach and apply extrusion pressure on the target cells under a locally constant magnetic field gradient. Simulation of the fluid field caused by the moving bead revealed the different velocity and pressure distributions under the two motion modes, supporting the rationale for the mode setting. Experimentally, HeLa cells and C2C12 cells arrayed in the microfluidic device were physically squeezed by the magnetic tweezers, and cell stiffness was measured. Compared with AFM measurements, the proposed method for measuring the Young's modulus of cells proved dependable. We envision that the proposed on-chip platform integrated with magnetic tweezers could find application in the efficient and flexible measurement of biomechanical properties in the future.
|
|
03:45-04:00, Paper TuBT22.4 | Add to My Program |
A Portable Acoustofluidic Device for Multifunctional Cell Manipulation and Reconstruction |
|
Zhang, Wei | Beihang University |
Song, Bin | Beihang University |
Bai, Xue | School of Mechanical Engineering & Automation, Beihang Universit |
Guo, Jingli | Beihang University |
Feng, Lin | Beihang University |
Arai, Fumihito | The University of Tokyo |
Keywords: Biological Cell Manipulation, Micro/Nano Robots, Motion Control
Abstract: Microbubble-induced acoustic microstreaming for efficient on-chip micromanipulation is widely developed in biological applications. However, it remains challenging to simultaneously transport, trap, and rotate single cells with one device in a biocompatible manner, while expensive and bulky traditional acoustic driving systems add further limitations. This paper presents a portable acoustofluidic device for multifunctional cell manipulation and 3D reconstruction using an acoustically oscillating bottom bubble array. Based on an Arduino-based driving system, multiple bubble-induced microvortices were generated and utilized to achieve multifunctional manipulation in a noninvasive manner. Self-propelled transportation of single or multiple cells is first accomplished by the bottom bubble array; controllable trapping and 3D rotation (in the x-y or x-z plane) of DU145 cells are then performed by each individual microbubble. Experiments show that the rotation direction, speed and axis can be modulated by tuning the driving frequency and voltage. Finally, 3D cell reconstruction, combining an image processing algorithm with out-of-plane rotation, provides a sufficient illustration of cell structures and surface morphology, enabling efficient measurement of cell properties. All these aspects show the device's great potential in bioengineering, biophysics and biomedicine.
|
|
TuBT23 Virtual-Asia, Time zone: GMT+1 |
Add to My Program |
Applications of Robotic Exploration |
|
|
Chair: Kong, Detian | Chinese University of HK |
Co-Chair: Zhang, Li | The Chinese University of Hong Kong |
|
03:00-03:15, Paper TuBT23.1 | Add to My Program |
Design and Soft-Landing Control of a Six-Legged Mobile Repetitive Lander for Lunar Exploration |
|
Yin, Ke | Shanghai Jiao Tong University |
Gao, Feng | Shanghai Jiao Tong University |
Sun, Qiao | Shanghai Jiao Tong University |
Liu, Jimu | Shanghai Jiao Tong University |
Xiao, Tao | Beijing Institute of Technology |
Yang, Jianzhong | Beijing Institute of Spacecraft System Engineering |
Jiang, Shuiqing | Beijing Institute of Spacecraft System Engineering |
Chen, Xianbao | Shanghai Jiao Tong University |
Sun, Jing | Shanghai Jiao Tong University |
Liu, Renqiang | Shanghai Jiao Tong University |
Qi, Chenkun | Shanghai Jiao Tong University |
Keywords: Space Robotics and Automation, Legged Robots, Force Control
Abstract: Autonomous robots consisting of an immovable lander and a rover are widely deployed to explore extraterrestrial planets. However, these robots have two main limitations: (1) designing the lander and rover separately results in a heavy mass and large volume for the whole system, which sharply increases the launch cost; (2) the rover's detection area is restricted to the vicinity of the immovable lander. To overcome these problems, we designed a novel six-legged mobile repetitive lander called "HexaMRL", which integrates the functions of both lander and rover, including folding, deploying, repetitive soft-landing, and walking. A hybrid compliant mechanism combining the advantages of active and passive compliance was adopted in its legs. An integrated drive unit (IDU) imitates the dynamics of a spring and a damper to absorb the landing impact energy while keeping the structure intact. Moreover, a state-machine-based control method for soft-landing on the Moon is proposed. The HexaMRL achieved repetitive soft-landing on a 5-DoF lunar gravity testing platform (5-DoF-LGTP) with a vertical landing velocity of 1.9 m/s and a payload of 140 kg. The drive torque safety margin is improved by 23.4% with the hybrid compliant leg compared with a standalone active compliant leg.
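The spring-damper behavior the IDU imitates can be sketched as a 1D touchdown simulation: a body landing at 1.9 m/s on a virtual spring-damper leg under lunar gravity. The stiffness and damping values are illustrative guesses, not HexaMRL parameters.

```python
import numpy as np

m, g = 140.0, 1.62            # payload mass (kg), lunar gravity (m/s^2)
k, c = 4000.0, 1200.0         # virtual spring stiffness and damping (assumed)
z, vz = 0.0, -1.9             # leg deflection and vertical velocity at touchdown
dt = 1e-3
peak_force = 0.0

# Explicit Euler integration of m*z'' = -k*z - c*z' - m*g.
for _ in range(5000):
    f = -k * z - c * vz        # virtual spring-damper reaction force
    az = (f - m * g) / m
    vz += az * dt
    z += vz * dt
    peak_force = max(peak_force, abs(f))

print(round(abs(vz), 3))  # residual velocity ~0 once the impact is absorbed
```

The near-critical damping ratio here (about 0.8) dissipates the touchdown kinetic energy in roughly one oscillation, which is the behavior an IDU would emulate in joint space instead of with a physical spring.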
|
|
03:15-03:30, Paper TuBT23.2 | Add to My Program |
LEAF: Latent Exploration Along the Frontier |
|
Bharadhwaj, Homanga | University of Toronto, Canada |
Garg, Animesh | University of Toronto |
Shkurti, Florian | University of Toronto |
Keywords: Reinforcement Learning, Machine Learning for Robot Control, Representation Learning
Abstract: Self-supervised goal proposal and reaching is a key component of exploration and efficient policy learning algorithms. Such a self-supervised approach, without access to any oracle goal-sampling distribution, requires deep exploration and commitment so that long-horizon plans can be efficiently discovered. In this paper, we propose an exploration framework that learns a dynamics-aware manifold of reachable states. For a given goal, our method deterministically visits a state at the current frontier of reachable states (commitment/reaching) and then stochastically explores to reach the goal (exploration). This allocates the exploration budget near the frontier of the reachable region rather than in its interior. We target the challenging problem of policy learning from initial and goal states specified as images, and do not assume any access to the underlying ground-truth states of the robot and the environment. To keep track of reachable latent states, we propose a distance-conditioned reachability network that is trained to infer whether one state is reachable from another within a specified latent-space distance. Given an initial state, we obtain a frontier of reachable states from that state. By incorporating a curriculum that samples easier goals (closer to the start state) before more difficult ones, we demonstrate that the proposed self-supervised exploration algorithm achieves superior performance compared to existing baselines on a set of challenging robotic environments.
|
|
03:30-03:45, Paper TuBT23.3 | Add to My Program |
LAFFNet: A Lightweight Adaptive Feature Fusion Network for Underwater Image Enhancement |
|
Yang, Hao-Hsiang | AsusTek Computer Inc |
Huang, Kuan-Chih | AsusTek Computer Inc |
Chen, Wei-Ting | National Taiwan University |
Keywords: Marine Robotics, AI-Based Methods, Deep Learning for Visual Perception
Abstract: Underwater image enhancement is an important low-level computer vision task for autonomous underwater vehicles and remotely operated vehicles to explore and understand underwater environments. Recently, deep convolutional neural networks (CNNs) have been successfully applied to many computer vision problems, including underwater image enhancement. Many deep-learning-based methods achieve impressive performance on this task, but their memory and model parameter costs hinder practical application. To address this issue, we propose a lightweight adaptive feature fusion network (LAFFNet). The model is an encoder-decoder network with multiple adaptive feature fusion (AAF) modules. Each AAF module subsumes multiple branches with different kernel sizes to generate multi-scale feature maps, and channel attention is used to merge these feature maps adaptively. Our method reduces the number of parameters from 2.5M to 0.15M (around a 94% reduction) while outperforming state-of-the-art algorithms in extensive experiments. Furthermore, we demonstrate that LAFFNet effectively improves high-level vision tasks such as salient object detection and single-image depth estimation.
|
|
03:45-04:00, Paper TuBT23.4 | Add to My Program |
Ultrasound Doppler Imaging and Navigation of Collective Magnetic Cell Microrobots in Blood |
|
Wang, Qianqian | The Chinese University of Hong Kong |
Tian, Yuan | The Chinese University of Hong Kong |
Du, Xingzhou | The Chinese University of Hong Kong |
Chan, Kai Fung | The Chinese University of Hong Kong |
Zhang, Li | The Chinese University of Hong Kong |
Keywords: Micro/Nano Robots, Swarm Robotics
Abstract: We propose ultrasound Doppler imaging and magnetic navigation of collective cell microrobots in whole blood. The cell microrobots are cultured from stem cells and iron microparticles; they have spheroidal structures and can be actuated by external magnetic fields. A collective of cell microrobots can be reversibly gathered and spread owing to the tunable magnetic interaction, and can exhibit collective motion in whole blood under rotating magnetic fields. Simulation results indicate that the induced blood flow around the collective pattern affects the motion of red blood cells (RBCs), and experimental results show that Doppler signals are observed when ultrasound waves are emitted toward the microrobots. The induced Doppler signals are affected by the input field frequency and the ultrasound parameters (pulse repetition frequency). Owing to the induced three-dimensional blood flow, Doppler signals can be observed even when the imaging plane is above the collective microrobots, which enables indirect localization when navigating on an uneven surface. Our study investigates a strategy for pattern formation and navigation of collective microrobots under ultrasound Doppler imaging, demonstrating that integrating a collective control approach with medical imaging holds great potential for real-time active delivery tasks.
|
|
TuBT24 Virtual-Asia, Time zone: GMT+1 |
Add to My Program |
Aerial Robotics: Mechanics and Control I |
|
|
Chair: Zhao, Moju | The University of Tokyo |
Co-Chair: Kim, H. Jin | Seoul National University |
|
03:00-03:15, Paper TuBT24.1 | Add to My Program |
Fixed-Root Aerial Manipulator: Design, Modeling, and Control of Multilink Aerial Arm to Adhere Foot Module to Ceilings Using Rotor Thrust |
|
Nishio, Takuzumi | The University of Tokyo |
Zhao, Moju | The University of Tokyo |
Anzai, Tomoki | The University of Tokyo |
Kojima, Kunio | The University of Tokyo |
Okada, Kei | The University of Tokyo |
Inaba, Masayuki | The University of Tokyo |
Keywords: Aerial Systems: Mechanics and Control, Motion Control, Mechanism Design
Abstract: Precise aerial manipulation is important for multirotor robots. For multirotors equipped with arms, the root pose error due to the floating body affects precision at the end effector. Fixed-root approaches, such as perching on surfaces using rotor suction force, are useful for addressing this problem. Furthermore, it is difficult for arm-equipped multirotors to generate large wrenches at the end effector owing to joint torque limitations. For multilink aerial robots with rotors distributed across the links, the rotor thrust can produce large torques, so such robots can generate comparatively large wrenches at the end effector. In this paper, we introduce a rotor-distributed multilink robot that can perch on surfaces. First, we designed a root footplate and arm module for a multilink aerial robot; during perching, the joint between these two links can be made passive to prevent peeling. Second, we propose a quadratic programming (QP) based controller that calculates the desired thrust for the perching motion, considering the static friction and zero moment point (ZMP) conditions on the footplate. Finally, we conducted root-body perching motion tests. Manipulation of the multilink aerial robot during perching is more accurate than during flight because the root position adheres to the environment.
|
|
03:15-03:30, Paper TuBT24.2 | Add to My Program |
Aerial Manipulator Pushing a Movable Structure Using a DOB-Based Robust Controller |
|
Lee, Dongjae | Seoul National University |
Seo, Hoseong | Seoul National University |
Jang, Inkyu | Seoul National University |
Lee, Seung Jae | University of California, Berkeley |
Kim, H. Jin | Seoul National University |
Keywords: Aerial Systems: Applications, Aerial Systems: Mechanics and Control, Robust/Adaptive Control
Abstract: This paper deals with the problem of an aerial manipulator pushing a movable structure. In contrast to physical interaction with a static structure, stable execution of this task requires suitable consideration of the interaction force during the motion of the structure. To accomplish the task of pushing a structure while ensuring the stability of the aerial manipulator, we present a nonlinear disturbance-observer (DOB)-based robust control approach that regards the interaction force as a disturbance to the system. Furthermore, to utilize the proposed controller for pushing a movable structure, we propose an algorithm that generates an end-effector position reference enabling safe operation in a realistic situation. We validate the proposed control framework with successful demonstrations of pushing two types of movable structures: a heavy rolling cart (42 kg) and a realistic hinged door.
|
|
03:30-03:45, Paper TuBT24.3 | Add to My Program |
Data-Driven MPC for Quadrotors |
|
Torrente, Guillem | Sony AI |
Kaufmann, Elia | University of Zurich |
Foehn, Philipp | University of Zurich |
Scaramuzza, Davide | University of Zurich |
Keywords: Model Learning for Control
Abstract: Aerodynamic forces render accurate high-speed trajectory tracking with quadrotors extremely challenging. These complex aerodynamic effects become a significant disturbance at high speeds, introducing large positional tracking errors, and are extremely difficult to model. To fly at high speeds, feedback control must be able to account for these aerodynamic effects in real-time. This necessitates a modeling procedure that is both accurate and efficient to evaluate. Therefore, we present an approach to model aerodynamic effects using Gaussian Processes, which we incorporate into a Model Predictive Controller to achieve efficient and precise real-time feedback control, leading to up to 70% reduction in trajectory tracking error at high speeds. We verify our method by extensive comparison to a state-of-the-art linear drag model in synthetic and real-world experiments at speeds of up to 50 km/h and accelerations beyond 4g. Upon acceptance, we plan to release the code open source.
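The residual-learning idea in this abstract, fitting a Gaussian Process to the gap between nominal dynamics and observed forces, can be illustrated with a minimal sketch. This is not the authors' code: the squared-exponential kernel, its hyperparameters, and the quadratic-drag toy data are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0, variance=1.0):
    # Squared-exponential kernel between the row vectors of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / length_scale**2)

def gp_predict(X_train, y_train, X_query, noise=1e-3):
    # Posterior mean of a zero-mean GP conditioned on observed residuals.
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = rbf_kernel(X_query, X_train)
    return Ks @ np.linalg.solve(K, y_train)

# Hypothetical data: residual drag force as a function of body velocity.
rng = np.random.default_rng(0)
v = rng.uniform(-10, 10, size=(50, 1))       # training velocities [m/s]
drag = -0.1 * v[:, 0] * np.abs(v[:, 0])      # unmodeled quadratic drag
pred = gp_predict(v, drag, np.array([[5.0]]))
```

Inside an MPC loop, such a predictor would correct the nominal dynamics at each query state; the paper additionally handles real-time evaluation, which this sketch ignores.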
|
|
03:45-04:00, Paper TuBT24.4 | Add to My Program |
Singularity-Free Aerial Deformation by Two-Dimensional Multilinked Aerial Robot with 1-DoF Vectorable Propeller |
|
Zhao, Moju | The University of Tokyo |
Anzai, Tomoki | The University of Tokyo |
Okada, Kei | The University of Tokyo |
Kawasaki, Koji | The University of Tokyo |
Inaba, Masayuki | The University of Tokyo |
Keywords: Aerial Systems: Mechanics and Control, Motion Control, Motion and Path Planning
Abstract: Two-dimensional multilinked structures can benefit aerial robots in both maneuvering and manipulation because of their deformation ability. However, certain types of singular forms must be avoided during deformation. Hence, an additional 1-Degree-of-Freedom (DoF) vectorable propeller is employed in this work to overcome singular forms by properly changing the thrust direction. In this paper, we first extend modeling and control methods from our previous works to an under-actuated model whose thrust forces are not unidirectional. We then propose a planning method for the vectoring angles that resolves the singularity by maximizing controllability under arbitrary robot forms. Finally, we demonstrate the feasibility of the proposed methods in experiments where a quad-type model performs trajectory tracking under challenging forms, such as a line-shape form, and deformation passing through these challenging forms.
|
|
TuCT1 Virtual-Asia, Time zone: GMT+1 |
Add to My Program |
Navigation in Humanoids and Animaloids |
|
|
Co-Chair: Sun, Caiming | The Chinese University of Hong Kong, Shenzhen |
|
04:00-04:15, Paper TuCT1.1 | Add to My Program |
Autonomous Decentralized Shape-Based Navigation for Snake Robots in Dense Environments |
|
Sartoretti, Guillaume Adrien | National University of Singapore (NUS) |
Wang, Tianyu | Carnegie Mellon University |
Chuang, Gabriel | Carnegie Mellon University |
Li, Qingyang | Carnegie Mellon University |
Choset, Howie | Carnegie Mellon University |
Keywords: Biologically-Inspired Robots, Redundant Robots, Sensor-based Control
Abstract: In this work, we focus on the autonomous navigation of snake robots in densely-cluttered environments, where collisions between the robot and obstacles are frequent, which could happen often in disaster scenarios, underground caves, or grassland/forest environments. This work takes the view that obstacles are not to be avoided, but rather exploited to support and direct the motion of the snake robot. We build upon a decentralized state-of-the-art compliant controller for serpenoid locomotion, and develop a bi-stable dynamical system that relies on inertial feedback to continuously steer the robot toward a desired direction. We experimentally show that this controller allows the robot to autonomously navigate dense environments by consistently locomoting along a given, global direction of travel in the world, which could be selected by a human operator or a higher level planner. We further equip the robot with an onboard vision system, allowing the robot to autonomously select its own direction of travel, based on the obstacle distribution ahead of its position (i.e., enacting feedforward control). In those additional experiments on hardware, we show how such an exteroceptive sensor can allow the robot to steer before hitting obstacles and to preemptively avoid challenging regions where proprioception-only (i.e., torque and inertial) feedback control would not suffice.
|
|
04:15-04:30, Paper TuCT1.2 | Add to My Program |
Real-Time Optimal Navigation Planning Using Learned Motion Costs |
|
Yang, Bowen | The Hong Kong University of Science and Technology, Robotics Institute |
Wellhausen, Lorenz | ETH Zürich |
Miki, Takahiro | ETH Zurich |
Liu, Ming | Hong Kong University of Science and Technology |
Hutter, Marco | ETH Zurich |
Keywords: Motion and Path Planning, Machine Learning for Robot Control, Legged Robots
Abstract: Navigation on challenging terrain topographies requires the understanding of robots' locomotion capabilities to produce optimal solutions. We present an integrated framework for real-time autonomous navigation of mobile robots based on 2.5D elevation maps. The framework performs rapid global path planning and optimization that is aware of the locomotion capabilities of the robot. A GPU-aided, sampling-based path planner combined with a gradient-based path optimizer provides optimal paths by using a neural network-based locomotion cost predictor which is trained in simulation. We show that our approach is capable of planning and optimizing paths three orders of magnitude faster than RRT* on GPU-enabled hardware, enabling real-time deployment on mobile platforms. We successfully evaluate the framework on the ANYmal C quadrupedal robot in both simulation and real-world environments for path planning tasks on multiple complex terrains.
|
|
04:30-04:45, Paper TuCT1.3 | Add to My Program |
Humanoid Loco-Manipulation Planning Based on Graph Search and Reachability Maps |
|
Murooka, Masaki | AIST |
Kumagai, Iori | National Inst. of AIST |
Morisawa, Mitsuharu | National Inst. of AIST |
Kanehiro, Fumio | National Inst. of AIST |
Kheddar, Abderrahmane | CNRS-AIST |
Keywords: Humanoid and Bipedal Locomotion, Manipulation Planning, Multi-Contact Whole-Body Motion Planning and Control
Abstract: In this letter, we propose an efficient and highly versatile loco-manipulation planning for humanoid robots. Loco-manipulation planning is a key technological brick enabling humanoid robots to autonomously perform object transportation by manipulating them. We formulate planning of the alternation and sequencing of footsteps and grasps as a graph search problem with a new transition model that allows for a flexible representation of loco-manipulation. Our transition model is quickly evaluated by relocating and switching the reachability maps depending on the motion of both the robot and object. We evaluate our approach by applying it to loco-manipulation use-cases, such as a bobbin rolling operation with regrasping, where the motion is automatically planned by our framework.
|
|
04:45-05:00, Paper TuCT1.4 | Add to My Program |
Autonomous Navigation for Adaptive Unmanned Underwater Vehicles Using Fiducial Markers |
|
Chen, Juan | Peng Cheng Laboratory (PCL) |
Sun, Caiming | The Chinese University of Hong Kong, Shenzhen |
Zhang, Aidong | The Chinese University of Hong Kong, Shenzhen |
Keywords: Marine Robotics, Autonomous Vehicle Navigation
Abstract: This paper presents an integrated methodology and experimental validation of an autonomous framework for unmanned underwater vehicles (UUVs) equipped with only a conventional monocular camera and a pressure sensor to accomplish high-performance autonomy. The optimal pose of the UUV is solved iteratively by Levenberg-Marquardt optimization of the Perspective-n-Point (PnP) problem. To guarantee a consistent localization system, a properly tuned EKF with additional outlier-removal approaches, including Chi-square tests on the innovations, adequately removes measurement noise while providing estimates of unknown navigation states. A classic adaptive controller is developed to enable autonomous mobility. Real-time experiments demonstrate underwater autonomous performance with a miniature commercial UUV, the BlueROV2 Heavy.
|
|
TuCT2 Virtual-Asia, Time zone: GMT+1 |
Add to My Program |
Multiple and Distributed Systems II |
|
|
Co-Chair: Kim, H. Jin | Seoul National University |
|
04:00-04:15, Paper TuCT2.1 | Add to My Program |
Command Filtered Tracking Control for High-Order Systems with Limited Transmission Bandwidth |
|
Bao, Jialei | Beijing Jiaotong University |
Liu, Peter X. | Carleton University |
Wang, Huanqing | Carleton University |
Zheng, Minhua | Beijing Jiaotong University |
Zhao, Ying | Beijing Jiaotong University |
Keywords: Motion Control, Industrial Robots, Robust/Adaptive Control
Abstract: This paper investigates the tracking control problem for a class of high-order decentralized systems subject to limited communication bandwidth. An event-triggered control method is proposed, in which the controller is triggered only when specific events happen. Moreover, the computational complexity is reduced by introducing command filters for the virtual control signals. Specifically, the backstepping scheme is adopted as the main design framework, by which the n-th order nonlinear system is divided into n command-cascaded first-order subsystems. The virtual control commands are passed through a second-order low-pass filter, from which the time derivatives of the virtual commands can be obtained directly. Theoretical analysis shows the stability of the proposed method, and the tracking performance is illustrated by a simulation example.
|
|
04:15-04:30, Paper TuCT2.2 | Add to My Program |
Online Trajectory Planning for Multiple Quadrotors in Dynamic Environments Using Relative Safe Flight Corridor |
|
Park, Jungwon | Seoul National University |
Kim, H. Jin | Seoul National University |
Keywords: Path Planning for Multiple Mobile Robots or Agents, Collision Avoidance, Distributed Robot Systems
Abstract: This paper presents a new distributed multi-agent trajectory planning algorithm that generates safe, dynamically feasible trajectories considering the uncertainty of obstacles in dynamic environments. We extend the relative safe flight corridor (RSFC) presented in previous work to replace time-variant, non-convex collision avoidance constraints with convex ones, and we adopt a relaxation method based on reciprocal collision avoidance (RCA) to reduce the total flight time and distance without loss of success rate. The proposed algorithm can compute trajectories for 50 agents in 49.7 ms per agent on average on an Intel i7 desktop, and can generate safe trajectories for 16 agents with a success rate of more than 93% in simulation environments with four dynamic obstacles, provided the velocity of the dynamic obstacles is below the maximum velocity of the quadrotors. We validate the robustness of the proposed algorithm through a real flight test with four quadrotors and one moving human.
|
|
04:30-04:45, Paper TuCT2.3 | Add to My Program |
Multi-Scale Cost Volumes Cascade Network for Stereo Matching |
|
Jia, Xiaogang | National University of Defense Technology |
Chen, Wei | National University of Defense Technology |
Li, Chen | National University of Defense Technology |
Liang, Zhengfa | National University of Defense Technology |
Wu, Mingfei | National University of Defense Technology |
Tan, Yusong | National University of Defense Technology |
Huang, Libo | National University of Defense Technology |
Keywords: RGB-D Perception, Vision-Based Navigation, Deep Learning for Visual Perception
Abstract: Stereo matching is essential for robot navigation. However, the accuracy of widely used traditional methods is low, while CNN-based methods incur high computational cost and long running times. Different cost volumes play a crucial role in balancing speed and accuracy, so we propose MSCVNet, which combines traditional methods and neural networks to improve the quality of the cost volume. Concretely, our network first generates multiple 3D cost volumes at different resolutions and then uses 2D convolutions to construct a novel cascade hourglass network for cost aggregation. Meanwhile, we design an algorithm to distinguish discontinuous areas of the disparity result and compute their loss separately. According to the KITTI official website, our network is much faster than most top-performing methods (24× faster than CSPN, 44× faster than GANet, etc.). Meanwhile, compared to traditional methods (SPS-St, SGM) and other real-time stereo matching networks (Fast DS-CS, DispNetC, RTSNet, etc.), our network achieves a large improvement in accuracy, demonstrating the feasibility and capability of the proposed method.
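The cost volume this abstract builds on can be sketched in a few lines: for each candidate disparity d, the right image is shifted and compared pixel-wise against the left. This toy absolute-difference version is only meant to show the data structure, not MSCVNet's learned multi-scale volumes; the synthetic image pair below is a hypothetical example.

```python
import numpy as np

def cost_volume(left, right, max_disp):
    # Absolute-difference matching cost for each candidate disparity.
    H, W = left.shape
    vol = np.full((max_disp, H, W), np.inf)
    for d in range(max_disp):
        # Compare left pixel at column x with right pixel at column x - d.
        vol[d, :, d:] = np.abs(left[:, d:] - right[:, :W - d])
    return vol

# Hypothetical single-channel pair whose true disparity is 3 pixels.
rng = np.random.default_rng(1)
right = rng.random((8, 16))
left = np.zeros_like(right)
left[:, 3:] = right[:, :-3]                 # shift right image by 3 pixels
disparity = cost_volume(left, right, 6).argmin(axis=0)
```

A winner-take-all `argmin` over the disparity axis recovers the shift; learned methods instead aggregate this volume with convolutions before regressing disparity.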
|
|
04:45-05:00, Paper TuCT2.4 | Add to My Program |
Hierarchical MCTS for Scalable Multi-Vessel Multi-Float Systems |
|
D'urso, Giovanni Salvatore | University of Technology Sydney |
Lee, James Ju Heon | University of Technology Sydney |
Pizarro, Oscar | Australian Centre for Field Robotics |
Yoo, Chanyeol | University of Technology Sydney |
Fitch, Robert | University of Technology Sydney |
Keywords: Marine Robotics, Multi-Robot Systems, Planning, Scheduling and Coordination
Abstract: Systems of multiple low-cost, underactuated floats combined with fully actuated surface vessels can improve the scalability and cost-effectiveness of autonomous systems for marine science and environmental monitoring. Here, we consider a coordination problem where surface vessels must drop off floats at locations such that they are likely to drift to observe given points of interest, and later must pick up the floats for redeployment. We define the Multi-Vessel Multi-Float (MVMF) problem and present a hierarchical solution based on the Dec-MCTS algorithm. Our solution defines customised sampling, rollout, and action generation algorithms to accommodate the problem's large search space and provide computational performance sufficient for practical application. We report analytical and simulation results that demonstrate the computational efficiency of our method and validate its behaviour in practical problems. These results immediately enable field experiments to progress the development of this exciting concept in multi-robot marine systems.
|
|
TuCT3 Virtual-Asia, Time zone: GMT+1 |
Add to My Program |
Multiple and Distributed Systems IV |
|
|
Chair: Ma, Hang | Simon Fraser University |
Co-Chair: Lee, Dongjun | Seoul National University |
|
04:00-04:15, Paper TuCT3.1 | Add to My Program |
Distributed Heuristic Multi-Agent Path Finding with Communication |
|
Ma, Ziyuan | Simon Fraser University |
Luo, Yudong | University of Waterloo |
Ma, Hang | Simon Fraser University |
Keywords: Path Planning for Multiple Mobile Robots or Agents, Reinforcement Learning
Abstract: Multi-Agent Path Finding (MAPF) is essential to large-scale robotic systems. Recent methods have applied reinforcement learning (RL) to learn decentralized policies in partially observable environments. A fundamental challenge in obtaining a collision-free policy is that agents need to learn cooperation to handle congested situations. This paper combines communication with deep Q-learning to provide a novel learning-based method for MAPF, where agents achieve cooperation via graph convolution. To guide the RL algorithm on long-horizon goal-oriented tasks, we embed the potential choices of shortest paths from a single source as heuristic guidance instead of using a specific path as in most existing works. Our method treats each agent independently and trains the model from a single agent's perspective. The final trained policy is applied to each agent for decentralized execution. The whole system is distributed during training and is trained under a curriculum learning strategy. Empirical evaluation in obstacle-rich environments indicates a high success rate with a low average step count for our method.
|
|
04:15-04:30, Paper TuCT3.2 | Add to My Program |
Distributed PDOP Coverage Control: Providing Large-Scale Positioning Service Using a Multi-Robot System |
|
Zhang, Liang | Harbin Institute of Technology, Swiss Federal Institute of Technology |
Zhang, Zexu | Harbin Institute of Technology |
Siegwart, Roland | ETH Zurich |
Chung, Jen Jen | Eidgenössische Technische Hochschule Zürich |
Keywords: Multi-Robot Systems, Optimization and Optimal Control, Distributed Robot Systems
Abstract: This manuscript develops a distributed control strategy for an active positioning service that provides a position dilution of precision (PDOP) field using a multi-robot system (MRS) in a three-dimensional (3D) mission space, solved via a coverage control scheme. After assigning the target coverage area on the x-y plane via a density distribution, the movement of each vehicle is generated by a gradient-descent controller and hence returns a locally optimal PDOP field on the ground plane. Each robot only needs to collect the positions of neighboring robots whose coverage areas intersect its own, and then calculates the gradient of the PDOP within its own coverage dominance. As a result, the proposed motion control strategy is distributed and robust to node failure in the network. Finally, simulation results validate the correctness and efficiency of the proposed approach.
|
|
04:30-04:45, Paper TuCT3.3 | Add to My Program |
Autonomous Distributed System for Gait Generation for Single-Legged Modular Robots Connected in Various Configurations (I) |
|
Hayakawa, Tomohiro | Kyoto University |
Kamimura, Tomoya | Nagoya Institute of Technology |
Kaji, Shizuo | Kyushu University |
Matsuno, Fumitoshi | Kyoto University |
Keywords: Legged Robots, Distributed Robot Systems, Swarm Robotics
Abstract: To date, many gait generation strategies have been designed for robots with leg configurations that model those of natural creatures. However, their leg configurations are limited to the 2*N type, such as hexapod or myriapod; hence, the potential ability of legged robots is implicitly limited. We consider single-legged modular robots that can be arranged to form a cluster with arbitrary 2-D leg configurations. By choosing configurations appropriately, these robots have the potential to perform several types of tasks, as is the case for reconfigurable modular robots. However, to use appropriate configurations for a given task, a unified gait generation system for various configurations of a cluster is required. In this study, we propose an autonomous distributed control system for each single-legged modular robot to collectively achieve static walking of the cluster with various leg configurations on planar ground. Moreover, our system is an autonomous distributed system with scalability and fault tolerance, in which each module determines the moving pattern of its foot through local communication without global information, such as the entire leg configuration of the cluster. We verified that several types of clusters achieved static walking using our system, not only in dynamic simulations but also in real robot experiments.
|
|
04:45-05:00, Paper TuCT3.4 | Add to My Program |
A Distributed Two-Layer Framework for Teleoperated Platooning of Fixed-Wing UAVs Via Decomposition and Backstepping |
|
Lee, Minhyeong | Seoul National University |
Lee, Dongjun | Seoul National University |
Keywords: Aerial Systems: Mechanics and Control, Distributed Robot Systems, Telerobotics and Teleoperation
Abstract: We propose a novel distributed control framework for teleoperated platooning of multiple three-dimensional (3D) fixed-wing unmanned aerial vehicles (UAVs), consisting of the following two layers: 1) virtual frame layer, which generates the target 3D nonholonomic motion of the virtual nonholonomic frames (VNFs) using the nonholonomic decomposition (D. J. Lee, 2010) and backstepping in such a way that the VNFs are to maintain the platoon formation in a distributed manner while respecting the directionality of the fixed-wing UAV; and 2) local control layer, which drives each fixed-wing UAV to track their respective VNF with their under-actuation and aerodynamic disturbance effects fully taken into account by using the backstepping and Lyapunov-based design techniques. Convergence and stability of salient aspects of each layer and their combination are theoretically established. Simulations with 25 fixed-wing UAVs and a haptic device are also performed to validate the theory with their multimedia provided at https://youtu.be/Z3Mo66KIsns.
|
|
TuCT4 Virtual-Asia, Time zone: GMT+1 |
Add to My Program |
Motion Planning: Task-Based Planning |
|
|
Chair: Nam, Changjoo | Inha University |
|
04:00-04:15, Paper TuCT4.1 | Add to My Program |
Integrated Task Assignment and Path Planning for Capacitated Multi-Agent Pickup and Delivery |
|
Chen, Zhe | Monash University |
Alonso-Mora, Javier | Delft University of Technology |
Bai, Xiaoshan | University of Groningen |
Harabor, Daniel Damir | Monash University |
Stuckey, Peter James | Monash University |
Keywords: Task and Motion Planning, Motion and Path Planning, Path Planning for Multiple Mobile Robots or Agents
Abstract: Multi-agent Pickup and Delivery (MAPD) is a challenging industrial problem in which a team of robots is tasked with transporting a set of packages, each from an initial location to a specified target location. Appearing in the context of automated warehouse logistics and automated mail sortation, MAPD requires first deciding which robot is assigned which package (i.e., Task Assignment or TA), followed by a subsequent coordination problem in which each robot must be assigned collision-free paths so as to successfully complete its assignment (i.e., Multi-Agent Path Finding or MAPF). Leading methods in this area solve MAPD sequentially: first assigning tasks, then assigning paths. In this work we propose a new coupled method in which task assignment choices are informed by actual delivery costs instead of by lower-bound estimates. The main ingredients of our approach are a marginal-cost assignment heuristic and a meta-heuristic improvement strategy based on Large Neighbourhood Search. As a further contribution, we also consider a variant of the MAPD problem where each robot can carry multiple packages instead of just one. Numerical simulations show that our approach yields efficient and timely solutions, and we report significant improvement compared to other recent methods from the literature.
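The marginal-cost assignment heuristic mentioned in this abstract can be illustrated with a deliberately simplified greedy variant: each task goes to the robot whose current route grows least when the task is appended. The 1-D warehouse, the append-only rule, and the distance function are illustrative assumptions, not the paper's actual MAPD cost model, which uses real delivery costs from path finding.

```python
def marginal_cost_assign(robots, tasks, travel):
    # Greedily append each task to the robot with the smallest marginal cost,
    # i.e. the extra travel from the end of that robot's current route.
    routes = {r: [r] for r in robots}        # each route starts at the robot
    for task in tasks:
        best = min(robots, key=lambda r: travel(routes[r][-1], task))
        routes[best].append(task)
    return routes

# Hypothetical 1-D warehouse: positions on a line, cost = distance.
robots = (0.0, 10.0)
tasks = [1.0, 9.0, 2.0]
routes = marginal_cost_assign(robots, tasks, lambda a, b: abs(a - b))
```

Here the robot at 0.0 collects tasks 1.0 and 2.0 while the robot at 10.0 takes 9.0; a Large Neighbourhood Search layer, as in the paper, would then destroy and re-insert parts of such routes to escape greedy local optima.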
|
|
04:15-04:30, Paper TuCT4.2 | Add to My Program |
Social Trajectory Planning for Urban Autonomous Surface Vessels (I) |
|
Park, Shinkyu | KAUST |
Cap, Michal | CTU in Prague |
Alonso-Mora, Javier | Delft University of Technology |
Ratti, Carlo | Massachusetts Institute of Technology |
Rus, Daniela | MIT |
Keywords: Autonomous Vehicle Navigation, Learning from Demonstration, Motion and Path Planning
Abstract: In this article, we propose a trajectory planning algorithm that enables autonomous surface vessels to perform socially compliant navigation in a city’s canal. The key idea behind the proposed algorithm is to adopt an optimal control formulation in which the deviation of movements of the autonomous vessel from nominal movements of human-operated vessels is penalized. Consequently, given a pair of origin and destination points, it finds vessel trajectories that resemble those of human-operated vessels. To formulate this, we adopt kernel density estimation (KDE) to build a nominal movement model of human-operated vessels from a prerecorded trajectory dataset, and use a Kullback–Leibler control cost to measure the deviation of the autonomous vessel’s movements from the model. We establish an analogy between our trajectory planning approach and the maximum entropy inverse reinforcement learning (MaxEntIRL) approach to explain how our approach can learn the navigation behavior of human-operated vessels. On the other hand, we distinguish our approach from the MaxEntIRL approach in that it does not require well-defined bases, often referred to as features, to construct its cost function as required in many of inverse reinforcement learning approaches in the trajectory planning context.
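The KDE-based nominal movement model described above can be sketched as follows: fit an isotropic Gaussian kernel density estimate to recorded vessel states, then score candidate states by log-likelihood so a planner can penalize deviation from typical behavior. The state parameterization, bandwidth, and synthetic trajectory data below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def kde_logpdf(samples, query, bandwidth=0.5):
    # Log-density of an isotropic Gaussian KDE evaluated at the query points.
    d2 = ((query[:, None, :] - samples[None, :, :]) ** 2).sum(-1)
    dim = samples.shape[1]
    log_k = (-0.5 * d2 / bandwidth**2
             - 0.5 * dim * np.log(2 * np.pi * bandwidth**2))
    # Log-sum-exp over kernels for numerical stability.
    return np.logaddexp.reduce(log_k, axis=1) - np.log(len(samples))

# Hypothetical recorded vessel states (x, y, heading) along a canal.
rng = np.random.default_rng(2)
traj = rng.normal([0.0, 0.0, 1.0], 0.2, size=(200, 3))
on_path = kde_logpdf(traj, np.array([[0.0, 0.0, 1.0]]))
off_path = kde_logpdf(traj, np.array([[5.0, 5.0, 1.0]]))
```

A trajectory planner can then use the negative log-likelihood as a deviation penalty, which is the role the Kullback–Leibler control cost plays in the abstract's formulation.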
|
|
04:30-04:45, Paper TuCT4.3 | Add to My Program |
A Geometric Folding Pattern for Robot Coverage Path Planning |
|
Zhu, Lifeng | Southeast University |
Yao, Shuai | Southeast University, China Jiliang University |
Li, Boyang | Southeast University, Tsinghua University |
Jia, Yiyang | University of Tsukuba |
Mitani, Jun | University of Tsukuba |
Song, Aiguo | Southeast University |
Keywords: Motion and Path Planning
Abstract: Conventional coverage path planning algorithms are mainly based on the zigzag and spiral patterns or their combinations. The traversal order is limited by the linear or inside-outside manner. We propose a new set of coverage patterns induced from geometric folding operations, called the geometric folding pattern, to make coverage paths with more flexible traversal order. We study the modeling and parameterization of the geometric folding patterns. Then, a sampling operator is introduced. Based on the computational tools, we demonstrate the application of the proposed patterns in designing coverage paths. We show that the simple geometric folding patterns are flexible and controllable, which enables more choices for the coverage path planning problem.
|
|
04:45-05:00, Paper TuCT4.4 | Add to My Program |
Tree Search-Based Task and Motion Planning with Prehensile and Non-Prehensile Manipulation for Obstacle Rearrangement in Clutter |
|
Lee, Jinhwi | Hanyang University |
Nam, Changjoo | Inha University |
Park, Jong Hyeon | Hanyang University |
Kim, ChangHwan | Korea Institute of Science and Technology |
Keywords: Task and Motion Planning, Manipulation Planning, Service Robotics
Abstract: We propose a tree search-based planning algorithm for a robot manipulator to rearrange objects and grasp a target in a dense space. We consider environments where tasks cannot be completed with prehensile planning only. Assuming that the manipulator is only allowed to grasp from the top, we aim to minimize the number of rearrangement actions and the total execution time, which affect the efficiency of manipulation. The proposed search algorithm determines the optimal sequence of object rearrangements with prehensile and non-prehensile grasping until the target is grasped. For non-prehensile grasping, a heuristic function is employed to model frictions and contacts between the objects and a table. Experimental results in a realistic simulated environment show that the proposed algorithm can reduce the number of rearranged obstacles by up to 27% and the total execution time by up to 15% with 14 objects, compared to previous work.
|
|
TuCT5 Virtual-Asia, Time zone: GMT+1 |
Add to My Program |
Motion Planning: Robot Perception |
|
|
Co-Chair: Ma, Han | The Chinese University of Hong Kong |
|
04:00-04:15, Paper TuCT5.1 | Add to My Program |
Active Information Acquisition under Arbitrary Unknown Disturbances |
|
Wakulicz, Jennifer | University of Technology Sydney, Centre for Autonomous Systems |
Kong, He | University of Sydney |
Sukkarieh, Salah | The University of Sydney: The Australian Centre for Field Robotics |
Keywords: Motion and Path Planning, Optimization and Optimal Control
Abstract: Trajectory optimization of sensing robots to actively gather information about targets has received much attention in the past. It is well known that, under the assumption of linear Gaussian target dynamics and sensor models, the stochastic Active Information Acquisition problem is equivalent to a deterministic optimal control problem. However, the above-mentioned assumptions regarding the target dynamic model are limiting. In real-world scenarios, the target may be subject to disturbances whose models or statistical properties are hard or impossible to obtain. Typical scenarios include abrupt maneuvers, jumping disturbances due to interactions with the environment, anomalous misbehaviors due to system faults/attacks, etc. Motivated by these considerations, in this paper we consider targets whose dynamic models are subject to arbitrary unknown inputs whose models or statistical properties are not assumed to be available. In particular, we formulate the sensor trajectory planning problem to best track the evolution of both the target state and the unknown input with the aid of an unknown-input-decoupled filter. Inspired by concepts from Reduced Value Iteration, a suboptimal solution that expands a search tree via Forward Value Iteration with informativeness-based pruning is proposed. Concrete suboptimality performance guarantees for tracking both the state and the unknown input are established. Numerical simulations of a target tracking example are presented to evaluate the proposed solution.
|
|
04:15-04:30, Paper TuCT5.2 | Add to My Program |
Real-Time Obstacle Avoidance with a Virtual Torque Approach for a Robotic Tool in the End Effector |
|
Song, Kai-Tai | National Chiao Tung University |
Lee, Yi-Hung | Institute of Electrical and Control Engineering, National Chiao Tung University |
Keywords: Collision Avoidance, Motion and Path Planning, RGB-D Perception
Abstract: This paper proposes a real-time obstacle avoidance control scheme for a 6-DOF manipulator with a tool in the end effector. The system consists of environment monitoring, robot-tool segmentation, and collision-free motion planning of the manipulator. A Kinect V2 RGB-D camera is used to track obstacles, including humans and objects, in the working environment. The K-D tree algorithm is then adopted to cluster the point clouds of the tool and the obstacles. For robot-tool segmentation, we propose a method to model the tool in the end effector and predict its pose in order to solve the camera occlusion problem. For collision-free motion planning, a novel potential field algorithm is proposed that takes the pose of the tool into consideration. A virtual torque approach is proposed and added to the potential field in order to generate a smoother and shorter avoidance motion. Experimental results on a TM5-700 cobot show that the manipulator with a tool in the end effector effectively avoided an obstacle in real time and completed the assigned task. The path length with the proposed virtual torque is shortened by 80.43% compared with the case without the virtual torque.
|
|
04:30-04:45, Paper TuCT5.3 | Add to My Program |
A Robotic Platform to Navigate MRI-Guided Focused Ultrasound System |
|
Dai, Jing | The University of Hong Kong |
He, Zhuoliang | The University of Hong Kong |
Fang, Ge | The University of Hong Kong |
Wang, Xiaomei | The University of Hong Kong |
Li, Yingqi | The University of Hong Kong |
Cheung, Chim Lee | The University of Hong Kong |
Liang, Liyuan | The University of Hong Kong |
Iordachita, Ioan Iulian | Johns Hopkins University |
Chang, Hing-Chiu | The University of Hong Kong |
Kwok, Ka-Wai | The University of Hong Kong |
Keywords: Surgical Robotics: Planning, Medical Robots and Systems, Hydraulic/Pneumatic Actuators
Abstract: Focused ultrasound (FUS) technology is attracting increasing interest owing to its non-invasive and painless treatment of tumors. Magnetic resonance imaging (MRI) guidance has been introduced to monitor this procedure, allowing the ultrasound foci to be precisely controlled. However, manual positioning of the FUS transducer is challenging, especially for intra-operative adjustment in the MRI room. Currently, there are very few devices capable of providing robotic transducer positioning for the treatment of abdominopelvic organ diseases under MRI. The high-intensity focused ultrasound spot would have to be “steered” to ablate large (>Ø 3.5 cm) or multiple tumors (e.g. in the liver). To this end, we propose a hydraulic-driven tele-operated robot platform that enables 5-DoF manipulation of the FUS transducer. Even when operated close to the MRI iso-center, the robot guarantees zero electromagnetic artifact in the MR image. Our proof-of-concept robot prototype offers a large workspace (100 mm × 100 mm × 35 mm) for FUS foci steering. Accurate manipulation (0.2 mm in translation, 0.4 degrees in rotation) of the FUS transducer holder is achieved using rolling-diaphragm-sealed hydraulic actuators. The robot control responsiveness (from 0.1 to 4 Hz) is also evaluated to show the potential to compensate for the spot tracking error induced by respiratory motion. We also demonstrate the use of wireless radiofrequency markers to continuously register the robot task space in the MRI coordinates.
|
|
04:45-05:00, Paper TuCT5.4 | Add to My Program |
Approximating Constraint Manifolds Using Generative Models for Sampling-Based Constrained Motion Planning |
|
Acar, Cihan | Institute for Infocomm Research (I2R), A*STAR |
Tee, Keng Peng | Institute for Infocomm Research |
Keywords: Motion and Path Planning, Manipulation Planning
Abstract: Sampling-based motion planning under task constraints is challenging because the null-measure constraint manifold in the configuration space makes rejection sampling extremely inefficient, if not impossible. This paper presents a learning-based sampling strategy for constrained motion planning problems. We investigate the use of two well-known deep generative models, the Conditional Variational Autoencoder (CVAE) and the Conditional Generative Adversarial Net (CGAN), to generate constraint-satisfying sample configurations. Instead of precomputed graphs, we use generative models conditioned on constraint parameters to approximate the constraint manifold. This approach allows constraint-satisfying samples to be drawn efficiently online without any modification to available sampling-based motion planning algorithms. We evaluate the efficiency of these two generative models in terms of their sampling accuracy and the coverage of their sampling distributions. Simulations and experiments are also conducted for different constraint tasks on two robotic systems.
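The sampling interface described above can be mimicked with a toy closed-form decoder standing in for the trained CVAE/CGAN. The circle constraint and the decoder below are purely illustrative assumptions, not the paper's models:

```python
import numpy as np

def decoder(z, radius):
    """Toy stand-in for a trained conditional decoder: it maps a latent
    sample z and a constraint parameter (here a circle radius) to a
    configuration on the constraint manifold x^2 + y^2 = radius^2.
    In the paper this mapping is a neural network trained on
    constraint-satisfying configurations; the closed form is illustrative.
    """
    theta = 2.0 * np.pi * z          # latent code selects a point on the manifold
    return radius * np.array([np.cos(theta), np.sin(theta)])

def sample_on_manifold(n, radius, seed=0):
    """Draw n constraint-satisfying samples online, as a sampling-based
    planner would, without modifying the planner itself."""
    rng = np.random.default_rng(seed)
    return np.array([decoder(rng.uniform(), radius) for _ in range(n)])
```

Because the decoder outputs land on (or near) the manifold by construction, the planner never wastes rejection-sampling effort on a null-measure set.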
|
|
04:45-05:00, Paper TuCT5.5 | Add to My Program |
Maintaining a Reliable World Model Using Action-Aware Perceptual Anchoring |
|
Liang, Ying Siu | Agency for Science, Technology and Research (A*STAR) |
Choi, Dongkyu | Agency for Science, Technology and Research |
Kwok, Kenneth | Institute of High Performance Computing |
Keywords: Cognitive Modeling, Visual Tracking
Abstract: Reliable perception is essential for robots that interact with the world. But sensors alone are often insufficient to provide this capability, and they are prone to errors due to various conditions in the environment. Furthermore, robots need to maintain a model of their surroundings even when objects go out of view and are no longer visible. This requires anchoring perceptual information onto symbols that represent the objects in the environment. In this paper, we present a model for action-aware perceptual anchoring that enables robots to track objects in a persistent manner. Our rule-based approach uses inductive biases to perform high-level reasoning over the results of low-level object detection, and it improves the robot's perceptual capability for complex tasks. We evaluate our model against existing baseline models for object permanence and show that it outperforms them on a snitch localisation task using a dataset of 1,371 videos. We also integrate our action-aware perceptual anchoring into a cognitive architecture and demonstrate its benefits in a realistic gearbox assembly task on a Universal Robot.
|
|
TuCT6 Virtual-Asia, Time zone: GMT+1 |
Add to My Program |
Motion Planning: Collision Avoidance |
|
|
Co-Chair: Zhang, Jiafan | ABB Corporate Research Center, China |
|
04:00-04:15, Paper TuCT6.1 | Add to My Program |
VR-ORCA: Variable Responsibility Optimal Reciprocal Collision Avoidance |
|
Guo, Ke | The University of Hong Kong |
Wang, Dawei | The University of Hong Kong |
Fan, Tingxiang | The University of Hong Kong |
Pan, Jia | University of Hong Kong |
Keywords: Collision Avoidance, Path Planning for Multiple Mobile Robots or Agents
Abstract: As one of the most popular multi-agent path planning approaches, the optimal reciprocal collision avoidance (ORCA) algorithm assumes that each agent takes half the responsibility for collision avoidance. However, due to the asymmetric situations faced by adjacent agents, they are expected to take different responsibilities for collision avoidance to improve the entire crowd's navigation performance. Thus, in this paper, we propose the variable responsibility optimal reciprocal collision avoidance (VR-ORCA) algorithm, which relaxes the original assumption in ORCA and only requires that the responsibilities of a pair of agents sum to one. In particular, the responsibility division between a pair of nearby agents is determined independently by each agent by minimizing a cost function involving their common neighbors. We validate our approach on a variety of simulated benchmarks, and the results demonstrate that our method yields shorter travel times and distances than ORCA.
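The relaxed responsibility assumption can be sketched as follows. This is illustrative only; the paper additionally optimises the share over common neighbours rather than taking it as given:

```python
import numpy as np

def split_avoidance(v_a, v_b, u, alpha=0.5):
    """Share a required relative-velocity correction u between two agents.

    Standard ORCA fixes alpha = 0.5 (each agent takes half the
    responsibility). VR-ORCA only requires that the two shares sum to
    one, so alpha may differ per pair of agents.
    """
    v_a, v_b, u = (np.asarray(v, dtype=float) for v in (v_a, v_b, u))
    v_a_new = v_a + alpha * u          # agent A takes a share alpha of u
    v_b_new = v_b - (1.0 - alpha) * u  # agent B takes the remaining share
    return v_a_new, v_b_new
```

For any alpha, the relative velocity still changes by exactly u, so the pair remains mutually collision-avoiding while the division of effort varies.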
|
|
04:15-04:30, Paper TuCT6.2 | Add to My Program |
Dynamic Window Approach with Human Imitating Collision Avoidance |
|
Matsuzaki, Sango | Honda R&D Co., Ltd |
Aonuma, Shinta | Honda R&D Co., Ltd |
Hasegawa, Yuji | Honda R&D Co., Ltd |
Keywords: Autonomous Vehicle Navigation, Imitation Learning, Motion and Path Planning
Abstract: Autonomous navigation in crowded environments is a challenging task due to sensor occlusion and the complex nature of abstract social interactions. And yet, humans are capable of navigating in such complex environments. In this paper, we propose an effective navigation method that combines learning-based and model-based methods: a cost function that includes a human imitation factor learned via deep learning is integrated into the dynamic window approach (DWA) [1]. Experiments conducted in simulation show that, by training the robot to imitate human trajectories, our navigation method is safer and more efficient than state-of-the-art methods. Additionally, we successfully deployed a physical robot in a real environment and validated that our navigation quality shows similar tendencies to humans' in path length, travel time, and collision avoidance.
|
|
04:30-04:45, Paper TuCT6.3 | Add to My Program |
Disruption-Resistant Deformable Object Manipulation on Basis of Online Shape Estimation and Prediction-Driven Trajectory Correction |
|
Tanaka, Daisuke | Shinshu University |
Arnold, Solvi | Shinshu University |
Yamazaki, Kimitoshi | Shinshu University |
Keywords: Deep Learning in Grasping and Manipulation, Motion and Path Planning
Abstract: We consider the problem of deformable object manipulation with variable goal states and mid-manipulation disruptions. We propose an approach that integrates online shape estimation, prediction of shape transitions, generation of manipulation trajectories, and mid-manipulation trajectory correction. All functionalities are implemented using two neural network architectures. We apply this approach to the problem of cloth folding, and perform evaluation experiments in simulation, and on real cloth on robot hardware. We demonstrate that the system can achieve good approximation of given goal states, even when the manipulation process is disrupted by cloth sliding or external interference.
|
|
04:45-05:00, Paper TuCT6.4 | Add to My Program |
Dynamic Movement Primitive Based Motion Retargeting for Dual-Arm Sign Language Motions |
|
Liang, Yuwei | Zhejiang University |
Li, Weijie | Zhejiang University |
Wang, Yue | Zhejiang University |
Xiong, Rong | Zhejiang University |
Mao, Yichao | ABB |
Zhang, Jiafan | ABB Corporate Research Center, China |
Keywords: Learning from Demonstration, Social HRI, Service Robotics
Abstract: We aim to develop an efficient programming method for equipping service robots with the skill of performing sign language motions. This paper addresses the problem of transferring complex dual-arm sign language motions, characterized by the coordination among arms and hands, from human to robot, which has seldom been considered in previous studies of motion retargeting techniques. In this paper, we propose a novel motion retargeting method that leverages graph optimization and Dynamic Movement Primitives (DMPs) for this problem. We employ DMPs in a leader-follower manner to parameterize the original trajectories while preserving motion rhythm and relative movements between human body parts, and adopt a three-step optimization procedure to find deformed trajectories for robot tracking while ensuring feasibility for robot execution. Several Chinese Sign Language (CSL) motions have been successfully performed on ABB’s YuMi dual-arm collaborative robot (14-DOF) with Inspire-Robotics’ multifingered hands (6-DOF), a system with 26 DOFs in total.
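A single discrete DMP, the building block that the paper couples across arms and hands in a leader-follower manner, can be integrated in a few lines. The gains below are common illustrative choices, not the paper's values:

```python
import numpy as np

def dmp_rollout(y0, goal, n_steps=500, dt=0.002,
                alpha=25.0, beta=6.25, alpha_x=8.0, forcing=None):
    """Minimal one-dimensional discrete DMP integration.

    Dynamics: ydd = alpha * (beta * (goal - y) - yd) + f(x), with the
    phase variable evolving as xd = -alpha_x * x. Without a learned
    forcing term f, the primitive converges smoothly to the goal.
    """
    y, yd, x = float(y0), 0.0, 1.0
    traj = [y]
    for _ in range(n_steps):
        f = forcing(x) if forcing is not None else 0.0
        ydd = alpha * (beta * (goal - y) - yd) + f
        yd += ydd * dt
        y += yd * dt
        x += -alpha_x * x * dt      # phase decays, so f vanishes over time
        traj.append(y)
    return np.array(traj)
```

A learned forcing term shaped by the demonstrated trajectory is what makes the rollout reproduce a specific sign language motion while the point attractor still guarantees convergence.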
|
|
TuCT7 Virtual-Asia, Time zone: GMT+1 |
Add to My Program |
Motion Planning and Control I |
|
|
Chair: Kurniawati, Hanna | Australian National University |
|
04:00-04:15, Paper TuCT7.1 | Add to My Program |
SA-LOAM: Semantic-Aided LiDAR SLAM with Loop Closure |
|
Li, Lin | Zhejiang University |
Kong, Xin | Zhejiang University |
Zhao, Xiangrui | Zhejiang University |
Li, Wanlong | Beijing Huawei Digital Technologies Co., Ltd |
Wen, Feng | Huawei Technologies Co., Ltd |
Zhang, Hongbo | Huawei Technologies |
Liu, Yong | Zhejiang University |
Keywords: SLAM, Semantic Scene Understanding, Localization
Abstract: LiDAR-based SLAM systems are admittedly more accurate and stable than others, while their loop closure optimization is still an open issue. With the development of 3D semantic segmentation for point clouds, semantic information can be obtained conveniently and reliably, which is essential for high-level intelligence and conducive to SLAM. In this paper, we present a novel semantic-aided LiDAR SLAM system with loop closure based on LOAM, coined SA-LOAM, which leverages semantics for both odometry and loop closure detection. Specifically, we propose a semantic-assisted ICP that includes semantic matching, downsampling, and plane constraints, and we integrate a semantic-graph-based place recognition method into our loop closure detection module. Benefiting from the reasonable use of semantics, we can improve localization accuracy, detect loop closures effectively, and construct a globally consistent semantic map even in large-scale scenes. Extensive experiments on the KITTI and Ford Campus datasets show that our system significantly improves on the baseline performance, has a certain generalization ability to different data, and achieves competitive results compared with state-of-the-art methods.
|
|
04:15-04:30, Paper TuCT7.2 | Add to My Program |
Reinforcement Learning-Based Visual Navigation with Information-Theoretic Regularization |
|
Wu, Qiaoyun | Nanjing University of Aeronautics and Astronautics |
Xu, Kai | National University of Defense Technology |
Wang, Jun | Nanjing University of Aeronautics and Astronautics |
Xu, Mingliang | National University of Defense Technology |
Gong, Xiaoxi | Nanjing University of Aeronautics and Astronautics |
Manocha, Dinesh | University of Maryland |
Keywords: Vision-Based Navigation, Reinforcement Learning, Machine Learning for Robot Control
Abstract: To enhance the cross-target and cross-scene generalization of target-driven visual navigation based on deep reinforcement learning (RL), we introduce an information-theoretic regularization term into the RL objective. The regularization maximizes the mutual information between navigation actions and visual observation transforms of an agent, thus promoting more informed navigation decisions. In this way, the agent models the action-observation dynamics by learning a variational generative model. Based on the model, the agent generates (imagines) the next observation from its current observation and navigation target. The agent thereby learns to understand the causality between navigation actions and the changes in its observations, finally embodied in predicting the next action for navigation by comparing the current and the imagined next observations. Cross-target and cross-scene evaluations on the AI2-THOR framework show that our method attains at least a 10% improvement in average success rate over several state-of-the-art models. We further evaluate our model in two real-world settings: navigation in unseen indoor scenes from the discrete Active Vision Dataset (AVD) and in continuous real-world environments with a TurtleBot. We demonstrate that our navigation model successfully achieves navigation tasks in these scenarios. Videos and models can be found in the supplementary material.
|
|
04:30-04:45, Paper TuCT7.3 | Add to My Program |
An On-Line POMDP Solver for Continuous Observation Spaces |
|
Hoerger, Marcus | Australian National University |
Kurniawati, Hanna | Australian National University |
Keywords: Motion and Path Planning
Abstract: Planning under partial observability is essential for autonomous robots. A principled way to address such planning problems is the Partially Observable Markov Decision Process (POMDP). Although solving POMDPs is computationally intractable, substantial advancements have been made in developing approximate POMDP solvers over the past two decades. However, computing robust solutions for problems with continuous observation spaces remains challenging. Most on-line solvers rely on discretising the observation space, or on artificially limiting the number of observations considered during planning, in order to compute tractable policies. In this paper we propose a new on-line POMDP solver, called Lazy Belief Extraction for Continuous Observation POMDPs (LABECOP), that combines methods from Monte-Carlo tree search and particle filtering to construct a policy representation that requires neither a discretised observation space nor a limit on the number of observations considered during planning. Experiments on three different problems involving continuous observation spaces indicate that LABECOP performs similarly to or better than state-of-the-art POMDP solvers.
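The particle-filtering ingredient, reweighting a belief by the likelihood of a continuous observation rather than a discretised one, can be sketched as follows. A Gaussian observation model is assumed purely for illustration; this is not the LABECOP solver itself:

```python
import numpy as np

def reweight(particles, weights, observation, obs_sigma=0.5):
    """Weight state particles by the likelihood of a continuous
    observation under a Gaussian observation model. This is the basic
    step that lets a particle-based belief handle continuous
    observation spaces without binning them.
    """
    particles = np.asarray(particles, dtype=float)
    weights = np.asarray(weights, dtype=float)
    lik = np.exp(-0.5 * ((particles - observation) / obs_sigma) ** 2)
    w = weights * lik
    return w / w.sum()       # renormalise to a proper belief
```

Each incoming real-valued observation simply reweights the particle set, so no observation ever needs to be mapped to a discrete bin before the belief update.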
|
|
04:45-05:00, Paper TuCT7.4 | Add to My Program |
Modeling and Simulation of Running Expansion with Trunk and Pelvic Rotation Assist Suit |
|
Ren, Hongyuan | Hokkaido University |
Tanaka, Takayuki | Hokkaido University |
Murai, Akihiko | The National Institute of Advanced Industrial Science and Techno |
Keywords: Physically Assistive Devices, Simulation and Animation, Wearable Robotics
Abstract: Human running is a complex motion that relies not only on the movement of the legs but also on the participation of the entire body. We therefore focused on the upper body and designed a trunk and pelvic rotation assist suit that applies external forces to the chest and pelvis to assist the running motion, changing the energy flow between the upper and lower body to improve running efficiency. However, human motion is complicated and difficult to analyze, and it is hard to isolate the specific effect on human motion of the external force provided by the rotation assist suit. In this paper, we propose an advanced spring-loaded inverted pendulum model for simulating human running in the natural running state and in the expanded running state produced by wearing the trunk and pelvic rotation assist suit. We parametrically represent the changes in the human motion due to the intervention of the external force, and discuss the effect of the external assistive force on human running.
|
|
TuCT8 Virtual-Asia, Time zone: GMT+1 |
Add to My Program |
Mechanism and Control II |
|
|
Chair: Pang, Jianxin | UBTECH |
Co-Chair: Shen, Yutian | The Chinese University of Hong Kong |
|
04:00-04:15, Paper TuCT8.1 | Add to My Program |
Pneumatic Actuation-Based Bidirectional Modules with Variable Stiffness and Closed-Loop Position Control |
|
Chen, Yaohui | Huazhong Agricultural University |
Chung, Hoam | Monash University |
Chen, Bernard | Monash University |
Ping, Ho Yi | Monash University |
Sun, Yonghang | Monash University |
Keywords: Hydraulic/Pneumatic Actuators, Actuation and Joint Mechanisms, Soft Sensors and Actuators
Abstract: This paper extends our previous work on a pneumatic bending module and presents two more modules for rotational and translational motions. In these modules, antagonistic chambers enveloped by rigid shells are adopted to realize bidirectional actuation, and they are characterized by safe actuation, enhanced torque/force output, independent stiffness tuning, and real-time position control. Due to their mechanical modularity, they can be conveniently assembled into robotic systems with multiple degrees of freedom (DoFs) according to different requirements. A complete workflow is presented including the module design, fabrication, theoretical modelling, controller design, and experimental validation. A reconfigurable robotic arm with high dexterity is also assembled using these modules, demonstrating the effectiveness of the proposed modules to develop robotic systems for safe, forceful, and precise tasks.
|
|
04:15-04:30, Paper TuCT8.2 | Add to My Program |
A Capturability-Based Control Framework for the Underactuated Bipedal Walking |
|
Yuan, Haihui | Zhejiang Lab |
Song, Sumian | Zhejiang Lab |
Du, Ruilong | Zhejiang Lab |
Zhu, Shiqiang | Zhejiang Lab |
Gu, Jason | Dalhousie University |
Zhao, Mingguo | Tsinghua University |
Pang, Jianxin | UBTECH |
Keywords: Humanoid and Bipedal Locomotion, Legged Robots
Abstract: This work considers the control of underactuated bipedal walking, and a novel capturability-based control framework is presented. First, a new definition of stable walking is given, and a novel foot-placement-based control method is proposed. Then, a controller design method is presented based on this control method. During the controller design, foot placement adjustment is achieved by updating the virtual constraints using a heuristic method, and an improved virtual constraint control method is proposed to enforce the virtual constraints. Finally, the effectiveness of the presented control framework is illustrated on a five-link underactuated planar biped through numerical simulations.
|
|
04:30-04:45, Paper TuCT8.3 | Add to My Program |
Appearance-Based Loop Closure Detection Via Bidirectional Manifold Representation Consensus |
|
Zhang, Kaining | Wuhan University |
Li, Zizhuo | Wuhan University |
Ma, Jiayi | Wuhan University |
Keywords: Vision-Based Navigation, SLAM, Autonomous Vehicle Navigation
Abstract: Loop closure detection (LCD), which aims to deal with the drift that emerges as robots travel along a route, plays a key role in a simultaneous localization and mapping system. Unlike most current methods, which focus on seeking an appropriate representation of images, we propose a novel two-stage pipeline dominated by the estimation of spatial geometric relationships. When a query image arrives, we select candidates on-line according to the similarity of global semantic features in the first stage, and then conduct robust geometric confirmation to verify true loop-closing pairs in the second stage. To this end, a robust feature matching algorithm, termed bidirectional manifold representation consensus (BMRC), is proposed. In particular, we utilize manifold representations to construct local neighborhood structures of feature points and formulate the matching problem as an optimization model, enabling linearithmic time complexity via a closed-form solution. Furthermore, we propose a dynamic place partition strategy based on BMRC to segment image streams with similar content into places, which can mine more valid candidate frames, improving the recall rate of the whole system. Extensive experiments on several publicly available datasets reveal that BMRC performs well on the general feature matching task and that the proposed pipeline outperforms current state-of-the-art approaches on the LCD task.
|
|
04:45-05:00, Paper TuCT8.4 | Add to My Program |
Synergetic Effect between Limbs and Spine Dynamics in Quadruped Walking Robots |
|
Li, Longchuan | Ritsumeikan University |
Ma, Shugen | Ritsumeikan University |
Tokuda, Isao | Ritsumeikan University |
Asano, Fumihiko | Japan Advanced Institute of Science and Technology |
Nokata, Makoto | Ritsumeikan University |
Tian, Yang | Ritsumeikan University |
Du, Liang | Ritsumeikan University |
Keywords: Biomimetics, Underactuated Robots, Dynamics
Abstract: Biological observations of tetrapod locomotion suggest that anti-phase synchronization (APS) between the fore and rear parts of the body is beneficial for achieving high-speed walking. On the other hand, theoretical analyses and experimental studies on quadruped robots suggest that a flexible spine potentially improves gait efficiency and adaptability by smoothing ground collisions. However, these two mechanisms have never been investigated together comprehensively in terms of their synergetic effect; that is, a principle for combining APS and spine flexibility in quadruped walking robots is still lacking. To address this issue, we construct a mathematical model of a quadruped dynamic walker under different spine conditions. First, the APS effect is generated via an entrainment-based control method under a rigid-spine condition. Then, flexible spines realized by three kinds of springs are compared with the rigid one via theoretical analysis. The results suggest that the APS mechanism and the flexible spine can be synergized via appropriate deformation control. These theoretical findings not only uncover locomotion control mechanisms for quadruped walking robots, but also provide additional understanding of tetrapod dynamic walking from a mechanical engineering point of view.
|
|
TuCT9 Virtual-Asia, Time zone: GMT+1 |
Add to My Program |
Mechanism Design II |
|
|
Chair: Liarokapis, Minas | The University of Auckland |
Co-Chair: Tahara, Kenji | Kyushu University |
|
04:00-04:15, Paper TuCT9.1 | Add to My Program |
A Locally-Adaptive, Parallel-Jaw Gripper with Clamping and Rolling Capable, Soft Fingertips for Fine Manipulation of Flexible Flat Cables |
|
Chapman, Jayden | The University of Auckland |
Gorjup, Gal | The University of Auckland |
Dwivedi, Anany | University of Auckland |
Matsunaga, Saori | Mitsubishi Electric Corporation |
Mariyama, Toshisada | Mitsubishi Electric Corporation |
MacDonald, Bruce | University of Auckland |
Liarokapis, Minas | The University of Auckland |
Keywords: Assembly, Compliant Assembly, Disassembly
Abstract: Flexible flat cables (FFCs) are very popular for connecting components in modern electronics (e.g., mobile phones, laptops, tablets, etc.). The manipulation of FFCs typically relies on highly trained workers who spend hours performing the same repetitive processes, or on autonomous robotic systems equipped with simple clamping mechanisms or pneumatically driven suction cups. Such robotic systems are difficult to program and reprogram and often rely on sophisticated sensing elements and complicated control laws. Moreover, their performance and robustness are far from sufficient, hindering their mass adoption. The manipulation of FFCs is also quite challenging: a good gripper should be able to pinch the cable steadily and execute insertion of the cable connector with ease. The suction-cup-based solution is a good approach for holding the cable, but it makes cable connector insertion very challenging, as it can apply only limited shear forces. In this paper, we propose a locally-adaptive, pneumatic, parallel-jaw robot gripper equipped with fingertips that can both pinch the cable with a soft clamping mechanism and roll the cable surface on the soft fingertip structure until it reaches the desired connector. The gripper base accommodates a camera that allows recognition and pose estimation of the flexible flat cables and other electronic components. The gripper is of low cost and low complexity.
|
|
04:15-04:30, Paper TuCT9.2 | Add to My Program |
Stable, Sensor-Less and Compliance-Less Module Connection for Automated Construction System of a Modularized Rail Structure |
|
Yasuda, Mari | The University of Tokyo |
Warisawa, Shin'ichi | The University of Tokyo |
Fukui, Rui | The University of Tokyo |
Keywords: Mechanism Design, Cellular and Modular Robots, Robotics in Hazardous Fields
Abstract: Unmanned robots have been proposed for the decommissioning of Fukushima Dai-ichi Nuclear Power Plant. To achieve efficient movement of robots in the high-radiation environment, we propose an ``automated construction system of a modularized rail structure." In the high-radiation environment, the rail structure must be constructed by a remotely controlled robot using minimal sensors. In addition, a compliant mechanism that allows small misalignments is not feasible in the module connection task due to multiple load conditions. Therefore, this study aims to achieve stable, sensor-less, and compliance-less construction using a remotely controlled robot. To achieve this goal, a geometrical model of a connection mechanism is generated and used for contact analysis of the kinematic chain transition. An analysis of the relative angle and distance between the connection surfaces of the modules effectively illustrates the conditions of connection success and failure. Based on the analysis results, the design is modified to stabilize the connection task and employed to update the constructor robot. Thus, the stability of the module connection is improved, even under various load conditions.
|
|
04:30-04:45, Paper TuCT9.3 | Add to My Program |
Numerical Simulations of a Novel Force Controller Serially Combining the Admittance and Impedance Controllers |
|
Fujiki, Takuto | Kyushu University |
Tahara, Kenji | Kyushu University |
Keywords: Force Control, Industrial Robots, Compliance and Impedance Control
Abstract: This paper proposes a novel force controller that serially combines admittance and impedance controllers. The proposed controller adapts to an environment of unknown, changing stiffness, and it achieves high control accuracy and stable operation. First, conventional admittance and impedance controllers are recalled, and based on them a new force controller is designed. Next, the proposed controller is applied to a one-DoF system in contact with an external environment whose contact stiffness is changeable, and its behavior is compared with that of conventional simple admittance and impedance controllers through numerical simulations. Additionally, the proposed controller is applied to a two-DoF system including some nonlinearities, and a design of the desired anisotropic admittance and impedance parameters for the proposed controller is presented. Its effectiveness is also demonstrated through numerical simulation results.
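For reference, a standard one-DoF admittance law, one of the two halves the paper combines in series, can be integrated as follows. The gains are illustrative; the paper's serial combination with an impedance law is not reproduced here:

```python
def admittance_step(f_ext, x, xd, x_ref, dt, m=1.0, d=20.0, k=100.0):
    """One Euler step of a standard one-DoF admittance law,
    m*xdd + d*xd + k*(x - x_ref) = f_ext, which maps a measured external
    force into a commanded motion around the reference position x_ref.
    """
    xdd = (f_ext - d * xd - k * (x - x_ref)) / m
    xd_new = xd + xdd * dt
    x_new = x + xd_new * dt
    return x_new, xd_new
```

Under a constant external force the commanded position settles at a displacement of f_ext / k from the reference, so the virtual stiffness k directly sets the compliance the environment feels.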
|
|
04:45-05:00, Paper TuCT9.4 | Add to My Program |
Kinematic Stability Based AFG-RRT* Path Planning for Cable-Driven Parallel Robots |
|
Mishra, Utkarsh Aashu | Indian Institute of Technology Roorkee |
Métillon, Marceau | LS2N-CNRS |
Caro, Stéphane | CNRS/LS2N |
Keywords: Tendon/Wire Mechanism, Motion and Path Planning, Parallel Robots
Abstract: Motion planning for Cable-Driven Parallel Robots (CDPRs) is a challenging task due to various restrictions on cable tensions, collisions, and obstacle avoidance. This paper deals with an optimal path planning strategy that maximizes both the wrench capability and the dexterity of the robot in a cluttered environment. First, an asymptotically optimal path finding method based on a variant of rapidly exploring random trees (RRT) is implemented along with the Gilbert-Johnson-Keerthi (GJK) algorithm to account for collision detection. Then, a goal-biased Artificial Field Guide (AFG) is employed to reduce convergence time and ensure directional exploration. Finally, a post-processing algorithm is added to obtain a short and smooth resultant path by fitting appropriate splines. The proposed path planning strategy is analyzed and demonstrated on simulated and experimental setups of a six-DOF spatial CDPR.
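The goal-biasing ingredient of such guided RRT variants can be sketched as follows. The AFG itself is a richer guiding field; this shows only the basic biased sampling step:

```python
import random

def sample_config(goal, bounds, goal_bias=0.1, rng=None):
    """Goal-biased sampling for an RRT-style planner: with probability
    goal_bias return the goal configuration, otherwise sample uniformly
    within the given per-dimension workspace bounds.
    """
    rng = rng or random.Random()
    if rng.random() < goal_bias:
        return tuple(goal)
    return tuple(rng.uniform(lo, hi) for lo, hi in bounds)
```

Raising goal_bias speeds up convergence toward the goal at the cost of exploration, which is the trade-off a guiding field is designed to manage more gracefully.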
|
|
TuCT10 Virtual-Asia, Time zone: GMT+1 |
Add to My Program |
Manipulation Control II |
|
|
Chair: Du, Liang | Ritsumeikan University |
Co-Chair: Lu, Haojian | Zhejiang University |
|
04:00-04:15, Paper TuCT10.1 | Add to My Program |
Generation of Efficient Rectilinear Gait Based on Dynamic Morphological Computation and Its Theoretical Analysis |
|
Li, Longchuan | Ritsumeikan University |
Ma, Shugen | Ritsumeikan University |
Tokuda, Isao | Ritsumeikan University |
Asano, Fumihiko | Japan Advanced Institute of Science and Technology |
Nokata, Makoto | Ritsumeikan University |
Tian, Yang | Ritsumeikan University |
Du, Liang | Ritsumeikan University |
Keywords: Underactuated Robots, Biologically-Inspired Robots, Dynamics
Abstract: Particular tasks performed in narrow and confined spaces require the capability of passing through a limited pathway. Consequently, previous research developed a snake-like robot that generates a 1-dimensional rectilinear gait through an appropriate mechanical design. In contrast, a rigorous mathematical model and a framework for its theoretical analysis are still lacking, both of which are indispensable for locomotion control and performance optimization. Against this background, this paper introduces a modified snake-like robot model with additional oscillation elements inspired by nature. An efficient and stable rectilinear gait is generated by positively utilizing the resonance effect. Moreover, linear components associated with oscillation properties are extracted from the nonlinear dynamical system. Accordingly, theoretical analysis based on dynamic morphological computation demonstrates that the resonance curves of the linear oscillator elucidate the locomotion performance quite well. This enables estimation of the optimal parameters for locomotion control by conveniently calculating the resonance curve instead of solving the nonlinear equations of motion. Our methods provide a novel approach for the theoretical analysis and performance optimization of locomotion robots with periodic gaits.
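The resonance-curve shortcut, evaluating a cheap closed-form amplitude curve instead of integrating nonlinear equations of motion, can be illustrated with a driven damped linear oscillator. The parameters are illustrative, not those of the robot model:

```python
import numpy as np

def resonance_curve(omega, m=1.0, c=0.2, k=4.0, F=1.0):
    """Steady-state amplitude of a driven damped linear oscillator,
    A(w) = F / sqrt((k - m*w^2)^2 + (c*w)^2). Evaluating this curve is
    far cheaper than numerically integrating a nonlinear equation of
    motion, which is the spirit of the parameter-estimation shortcut.
    """
    omega = np.asarray(omega, dtype=float)
    return F / np.sqrt((k - m * omega**2) ** 2 + (c * omega) ** 2)

# Sweep the drive frequency and locate the peak, which for light damping
# sits close to the natural frequency sqrt(k/m) = 2 rad/s.
omegas = np.linspace(0.1, 5.0, 1000)
peak = omegas[np.argmax(resonance_curve(omegas))]
```

Reading the optimal drive frequency off the curve in this way replaces an expensive search over simulations of the full nonlinear dynamics.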
|
|
04:15-04:30, Paper TuCT10.2 | Add to My Program |
Simultaneous Precision Assembly of Multiple Objects through Coordinated Micro-Robot Manipulation |
|
Liu, Song | ShanghaiTech University |
Jia, Yuyu | ShanghaiTech University |
Li, Y.F. | City University of Hong Kong |
Guo, Yao | Shanghai Jiao Tong University |
Lu, Haojian | Zhejiang University |
Keywords: Dual Arm Manipulation, Assembly, Force Control
Abstract: Simultaneous assembly of multiple objects is a key technology for forming solid connections among objects to obtain compact structures in precision assembly and micro-assembly. In dramatic contrast to the traditional assembly of two objects, the interaction among multiple objects is far more complicated to analyze and control. During simultaneous assembly of multiple objects, there are multiple mutually affecting contact surfaces, and multiple force sensors are needed to perceive the interaction status. In this paper, a coordinated micro-robot manipulation strategy based on microscopic vision and force information is proposed for the simultaneous assembly problem. Taking the simultaneous assembly of three objects as an instance, the proposed method is fully articulated, including calibration of the assembly system, force analysis for each contact surface, and an insertion control strategy for the assembly process. The proposed method is also applicable to cases with more objects. Experimental results demonstrate the effectiveness of the proposed method.
|
|
04:30-04:45, Paper TuCT10.3 |
Dynamic Compensation in Throwing Motion with High-Speed Robot Hand-Arm |
|
Takahashi, Akira | Chiba University |
Sato, Masaki | Chiba University |
Namiki, Akio | Chiba University |
Keywords: Motion Control, Dexterous Manipulation, Multifingered Hands
Abstract: In recent years, research and development have been carried out on manipulators equipped with multi-fingered robot hands as end-effectors to perform delicate and dexterous tasks. In high-speed movements of such multi-fingered hand-arms, the weight of the multi-fingered hands slows down the response of the arms. To solve this problem, we propose a control method in which a high-response hand is used to compensate for the delay in a low-response arm to throw a ball. In particular, the pitching motion accuracy is improved by predicting tracking errors based on the arm dynamics and using nonlinear model predictive control to compensate for arm tracking errors by using the hand motion. Experimental results of ball throwing are shown.
|
|
04:45-05:00, Paper TuCT10.4 |
Policy Blending and Recombination for Multimodal Contact-Rich Tasks |
|
Narita, Tetsuya | Sony Corporation |
Kroemer, Oliver | Carnegie Mellon University |
Keywords: Deep Learning in Grasping and Manipulation, Force and Tactile Sensing, Manipulation Planning
Abstract: Multimodal information such as tactile, proximity, and force sensing is essential for performing stable contact-rich manipulations. However, coupling multimodal information with motion control remains a challenging topic. Rather than learning a monolithic skill policy that takes in all feedback signals at all times, skills should be divided into phases that learn to use only the sensor signals applicable to that phase. This makes learning the primitive policies for each phase easier and allows the primitive policies to be more easily reused between different skills. However, stopping and abruptly switching between primitive policies results in longer execution times and less robust behaviours. We therefore propose a blending approach for seamlessly combining the primitive policies into a reliable combined control policy. We evaluate both time-based and state-based blending approaches. The resulting approach was successfully evaluated in simulation and on a real robot, with an augmented finger vision sensor, on three tasks: opening a cap, turning a dial, and flipping a breaker. The evaluations show that the blended policies with multimodal feedback can be easily learned and reliably executed.
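Time-based blending of two primitive policies can be pictured as a convex combination whose weight is scheduled over the transition window. A toy numpy sketch with made-up primitive actions (the paper's actual policies are learned and state-dependent):

```python
import numpy as np

def blend(a1, a2, w):
    """Convex combination of two primitive policies' actions."""
    return w * a1 + (1.0 - w) * a2

a_reach = np.array([1.0, 0.0])   # action proposed by a 'reach' primitive
a_turn = np.array([0.0, 1.0])    # action proposed by a 'turn' primitive

# Time-based blending: hand control over linearly across the transition.
T = 4
actions = [blend(a_reach, a_turn, 1.0 - t / T) for t in range(T + 1)]

assert np.allclose(actions[0], a_reach)     # starts fully on 'reach'
assert np.allclose(actions[2], [0.5, 0.5])  # midway: equal blend
assert np.allclose(actions[-1], a_turn)     # ends fully on 'turn'
```

State-based blending replaces the clock `t / T` with a function of the observed state (e.g., measured contact force), which is what makes the switch robust to timing variation.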
|
|
TuCT11 Virtual-Asia, Time zone: GMT+1 |
Localization and Mapping XIII |
|
|
Chair: Zhang, Fu | University of Hong Kong |
Co-Chair: Indelman, Vadim | Technion - Israel Institute of Technology |
|
04:00-04:15, Paper TuCT11.1 |
FAST-LIO: A Fast, Robust LiDAR-Inertial Odometry Package by Tightly-Coupled Iterated Kalman Filter |
|
Xu, Wei | University of Hong Kong |
Zhang, Fu | University of Hong Kong |
Keywords: Visual-Inertial SLAM, Sensor Fusion, Aerial Systems: Perception and Autonomy
Abstract: This paper presents a computationally efficient and robust LiDAR-inertial odometry framework. We fuse LiDAR feature points with IMU data using a tightly-coupled iterated extended Kalman filter to allow robust navigation in fast-motion, noisy, or cluttered environments where degeneration occurs. To lower the computational load in the presence of a large number of measurements, we present a new formula to compute the Kalman gain, whose computational load depends on the state dimension instead of the measurement dimension. The proposed method and its implementation are tested in various indoor and outdoor environments. In all tests, our method produces reliable navigation results in real-time: running on a quadrotor onboard computer, it fuses more than 1,200 effective feature points in a scan and completes all iterations of an iEKF step within 25 ms. Our code is open-sourced on GitHub.
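The abstract's key claim, a Kalman gain whose cost scales with the state dimension rather than the measurement dimension, corresponds to the standard matrix-inversion-lemma identity K = (P^-1 + H^T R^-1 H)^-1 H^T R^-1. A minimal numpy sketch of that equivalence (illustrative dimensions only, not the FAST-LIO implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 6, 1200                    # state dim << measurement dim
H = rng.standard_normal((m, n))   # measurement Jacobian
P = 0.1 * np.eye(n)               # prior state covariance
r = 0.01                          # measurement noise variance (R = r * I)
R = r * np.eye(m)

# Standard gain: inverts an m x m matrix (costly for many measurements).
K_std = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)

# Equivalent form via the matrix inversion lemma: inverts n x n only.
K_fast = np.linalg.inv(np.linalg.inv(P) + H.T @ H / r) @ H.T / r

assert np.allclose(K_std, K_fast)
```

With thousands of LiDAR feature points per scan, inverting the small n x n matrix instead of the m x m innovation covariance is what makes the per-iteration cost tractable.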
|
|
04:15-04:30, Paper TuCT11.2 |
BALM: Bundle Adjustment for Lidar Mapping |
|
Liu, Zheng | University of Hong Kong |
Zhang, Fu | University of Hong Kong |
Keywords: SLAM, Mapping, Localization
Abstract: A local Bundle Adjustment (BA) on a sliding window of keyframes has been widely used in visual SLAM and has proved very effective in lowering drift. In lidar SLAM, however, the BA method is hardly used because the sparse feature points (e.g., edge and plane points) make exact point matching impossible. In this paper, we formulate the lidar BA as minimizing the distance from a feature point to its matched edge or plane. Unlike visual SLAM (and prior plane adjustment methods in lidar SLAM), where the feature has to be co-determined along with the pose, we show that the feature can be solved analytically and removed from the BA, so the resultant BA depends only on the scan poses. This greatly reduces the optimization scale and allows large-scale dense plane and edge features to be used. To speed up the optimization, we derive the analytical derivatives of the cost function, up to second order, in closed form. Moreover, we propose a novel adaptive voxelization method to search for feature correspondences efficiently. The proposed formulations are incorporated into a LOAM back-end for map refinement. Results show that the local BA can be solved very efficiently as a back-end, even in real-time at 10 Hz when optimizing 20 scans of point cloud, and that it considerably lowers the LOAM drift. Our implementations of the BA optimization and LOAM are open-sourced to benefit the community.
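The analytic elimination of plane features rests on a classical identity: the mean squared point-to-plane distance, minimized over all candidate planes, equals the smallest eigenvalue of the points' scatter matrix, so the plane parameters never need to be estimated explicitly. A numpy sketch of that identity (an illustration, not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(1)
pts = rng.uniform(-1.0, 1.0, (50, 3))            # points near the plane z = 0.5
pts[:, 2] = 0.5 + 0.01 * rng.standard_normal(50)

centered = pts - pts.mean(axis=0)
S = centered.T @ centered / len(pts)             # 3x3 scatter matrix

# Closed form: the mean squared point-to-plane distance, minimized over
# all planes, is the smallest eigenvalue of S (the optimal plane passes
# through the centroid, with the matching eigenvector as its normal).
evals, evecs = np.linalg.eigh(S)
normal = evecs[:, 0]

d = centered @ normal                            # signed distances to that plane
assert np.isclose(d @ d / len(pts), evals[0])
```

Because the cost reduces to an eigenvalue that depends only on the (pose-transformed) points, the BA variables shrink to the scan poses alone.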
|
|
04:30-04:45, Paper TuCT11.3 |
Extrinsic Calibration of Multiple LiDARs of Small FoV in Targetless Environments |
|
Liu, Xiyuan | The University of Hong Kong |
Zhang, Fu | University of Hong Kong |
Keywords: Mapping, Localization, Calibration and Identification
Abstract: The integration of multiple solid-state LiDARs can achieve performance similar to a spinning LiDAR by focusing on dedicated areas of interest. However, due to their small FoVs, calibrating the extrinsic parameters between multiple LiDAR units requires either relying on external sensors or forming FoV overlaps. To overcome these limitations, we develop a targetless calibration method, which creates FoV overlaps (hence co-visible features) through movements and constructs a factor graph to resolve the constraints between LiDAR poses and extrinsic parameters. By solving the formulated problem with graph optimization, our proposed method can calibrate the extrinsic parameters of LiDARs with few or even no overlapping FoVs, while producing a globally consistent point cloud map. Experiments on different sensor setups and scenes have demonstrated the accuracy and robustness of our proposed approach.
|
|
04:45-05:00, Paper TuCT11.4 |
MOLTR: Multiple Object Localisation, Tracking and Reconstruction from Monocular RGB Videos |
|
Li, Kejie | The University of Adelaide |
Rezatofighi, Hamid | Monash University |
Reid, Ian | University of Adelaide |
Keywords: Mapping, Deep Learning for Visual Perception, Recognition
Abstract: Semantics-aware reconstruction is more advantageous than geometry-only reconstruction for future robotic and AR/VR applications because it represents not only where things are, but also what things are. Object-centric mapping is the task of building an object-level reconstruction in which objects are separate, meaningful entities that convey both geometric and semantic information. In this paper, we present MOLTR, a solution to object-centric mapping using only monocular image sequences and camera poses. It is able to localise, track, and reconstruct multiple rigid objects in an online fashion while an RGB camera captures a video of the surroundings. Given a new RGB frame, MOLTR first applies a monocular 3D detector to localise objects of interest and extract their shape codes, which represent each object's shape in a learnt embedding. Detections are then merged into existing objects in the map after data association. The motion state (i.e., kinematics and motion status) of each object is tracked by a multiple-model Bayesian filter, and each object's shape is progressively refined by fusing multiple shape codes. We evaluate localisation, tracking, and reconstruction on benchmark datasets for indoor and outdoor scenes, and show superior performance over previous approaches.
|
|
04:45-05:00, Paper TuCT11.5 |
Efficient Modification of the Upper Triangular Square Root Matrix on Variable Reordering |
|
Elimelech, Khen | Technion - Israel Institute of Technology |
Indelman, Vadim | Technion - Israel Institute of Technology |
Keywords: SLAM, Probabilistic Inference
Abstract: In probabilistic state inference, we seek to estimate the state of an (autonomous) agent from noisy observations. It can be shown that, under certain assumptions, finding the estimate is equivalent to solving a linear least squares problem. Solving such a problem is done by calculating the upper triangular matrix R from the coefficient matrix A, using the QR or Cholesky factorization; this matrix is commonly referred to as the "square root matrix". In sequential estimation problems, we are often interested in periodic optimization of the state variable order, e.g., to reduce fill-in or to apply a predictive variable ordering tactic; however, changing the variable order implies expensive re-factorization of the system. Thus, we address the problem of modifying an existing square root matrix R to reflect a reordering of the variables. To this end, we identify several conclusions regarding the effect of column permutation on the factorization, which allow efficient modification of R without accessing A at all, or with minimal re-factorization. The proposed parallelizable algorithm achieves a significant improvement in performance over the state-of-the-art incremental Smoothing And Mapping (iSAM2) algorithm, which utilizes incremental factorization to update R.
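The reason R can be modified without re-accessing A is that R carries the same information as A: R^T R = A^T A, so a column permutation P yields (AP)^T (AP) = P^T (R^T R) P. A naive dense numpy sketch of this fact (the paper's contribution is performing the modification efficiently, which this sketch does not attempt):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((20, 5))       # coefficient matrix (full column rank)
R = np.linalg.qr(A, mode='r')          # upper triangular square root of A^T A

perm = [3, 0, 4, 1, 2]                 # a new variable ordering
P = np.eye(5)[:, perm]                 # corresponding permutation matrix

# Since R^T R = A^T A, the reordered system satisfies
# (A P)^T (A P) = P^T (R^T R) P, so a valid square root for the new
# ordering can be computed from R alone, without touching A.
R_new = np.linalg.cholesky(P.T @ (R.T @ R) @ P).T

assert np.allclose(R_new.T @ R_new, (A @ P).T @ (A @ P))
```

The dense Cholesky here is for illustration; exploiting sparsity and the structure of the permutation is what makes the proposed modification cheaper than re-factorization.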
|
|
TuCT12 Virtual-Asia, Time zone: GMT+1 |
Localization and Mapping XII |
|
|
Co-Chair: Weiss, Stephan | Universität Klagenfurt |
|
04:00-04:15, Paper TuCT12.1 |
Signal Temporal Logic Synthesis As Probabilistic Inference |
|
Lee, Ki Myung Brian | University of Technology Sydney |
Yoo, Chanyeol | University of Technology Sydney |
Fitch, Robert | University of Technology Sydney |
Keywords: Formal Methods in Robotics and Automation, Motion and Path Planning, Probability and Statistical Methods
Abstract: We reformulate the signal temporal logic (STL) synthesis problem as a maximum a-posteriori (MAP) inference problem. To this end, we introduce the notion of random STL (RSTL), which extends deterministic STL with random predicates. This new probabilistic extension naturally leads to a synthesis-as-inference approach. The proposed method with RSTL allows for differentiable, gradient-based synthesis while extending the class of possible uncertain semantics. We demonstrate that the proposed framework scales well with GPU-acceleration, and present realistic applications of uncertain semantics in robotics that involve target tracking and the use of occupancy grids.
|
|
04:15-04:30, Paper TuCT12.2 |
Bias Compensated UWB Anchor Initialization Using Information-Theoretic Supported Triangulation Points |
|
Blüml, Julian | University of Klagenfurt |
Fornasier, Alessandro | University of Klagenfurt |
Weiss, Stephan | Universität Klagenfurt |
Keywords: Task and Motion Planning, Range Sensing, Sensor Networks
Abstract: For Ultra-Wide-Band (UWB) based navigation, accurate initialization of the anchors in a reference coordinate system is crucial for precise subsequent UWB-inertial pose estimation. This paper presents a strategy based on information theory to initialize such UWB anchors using raw tag-to-anchor distance measurements and aerial vehicle poses. We include a linear distance-dependent bias term and an offset in our estimation process in order to achieve unprecedented accuracy in the 3D position estimates of the anchors (error reduction by a factor of about 3.5 compared to current approaches) without the need for prior knowledge. After an initial coarse position triangulation of the anchors using random vehicle positions, a bounding volume is created in the vicinity of each roughly estimated anchor position. In this volume, we calculate points that provide the maximal triangulation-related information based on Fisher information theory. Using these information-theoretically optimal points, a fine triangulation is performed, including bias-term estimation. We evaluate our approach in simulations with realistic sensor noise as well as in real-world experiments. We also fly an aerial vehicle under UWB-inertial closed-loop control, demonstrating that precise anchor initialization does improve navigation precision. Our initialization approach is compared to the state of the art as well as to an initialization without simultaneous bias estimation.
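The measurement model named in the abstract, a linear distance-dependent bias plus an offset, can be written d_meas = (1 + k) * d_true + b. The sketch below fits that two-parameter model by plain least squares on synthetic data; it illustrates only the bias model, not the authors' information-theoretic initialization pipeline:

```python
import numpy as np

rng = np.random.default_rng(4)
true_d = rng.uniform(1.0, 20.0, 100)    # true tag-to-anchor distances (m)
k, b = 0.03, 0.15                       # distance-dependent bias and offset
meas = (1 + k) * true_d + b + 0.01 * rng.standard_normal(100)

# Plain least-squares fit of the two bias parameters.
A = np.column_stack([true_d, np.ones_like(true_d)])
(slope, offset), *_ = np.linalg.lstsq(A, meas, rcond=None)

assert abs(slope - (1 + k)) < 0.01
assert abs(offset - b) < 0.05
```

In the actual problem the true distances are themselves unknown functions of the anchor position, which is why the bias terms must be estimated jointly with the triangulation rather than by a simple regression like this one.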
|
|
04:30-04:45, Paper TuCT12.3 |
Multiresolution Representations for Large-Scale Terrain with Local Gaussian Process Regression |
|
Liu, Xu | Shenyang Institute of Automation, Chinese Academy of Sciences |
Li, Decai | Shenyang Institute of Automation, Chinese Academy of Sciences |
He, Yuqing | Shenyang Institute of Automation, Chinese Academy of Sciences |
Keywords: Mapping, Field Robots
Abstract: To address the problem of building accurate and coherent models of large-scale terrains from incomplete and noisy sensor data, this paper proposes a novel framework that can efficiently infer terrain structures by divisionally providing the best linear unbiased estimates of the elevation values. To avoid data ambiguity caused by the uncertainty of sensor data, the proposed method introduces elevation filtering to extract the terrain surfaces, which greatly reduces the amount of data while leaving the contained terrain information essentially unchanged. Then, for large-scale terrains, a Gaussian mixture model is used to divide the regions of interest, which remarkably improves prediction accuracy and speed. Finally, for each subregion, a Gaussian process regression model based on a static kernel is used to create a multiresolution terrain representation, which can deal with incomplete sensor data by considering the spatial correlations of the terrain. Evaluations of the proposed technique were conducted on diverse large-scale field terrains, including a quarry, planetary-emulation terrain, and highland, showing that the proposed method outperforms state-of-the-art terrain modeling techniques in terms of prediction accuracy, computation speed, and memory consumption. As a practical application, the path planning problem was explored based on this terrain modeling technique to produce a better path.
|
|
04:45-05:00, Paper TuCT12.4 |
DiSCO: Differentiable Scan Context with Orientation |
|
Xu, Xuecheng | Zhejiang University |
Yin, Huan | Zhejiang University |
Chen, Zexi | Zhejiang University |
Li, Yuehua | Zhejiang Lab |
Wang, Yue | Zhejiang University |
Xiong, Rong | Zhejiang University |
Keywords: Localization, Representation Learning, Range Sensing
Abstract: Global localization is essential for robot navigation, and its first step is to retrieve a query from the map database. This problem is called place recognition. In recent years, LiDAR scan based place recognition has drawn attention because it is robust against environmental change. In this paper, we propose a LiDAR-based place recognition method, named Differentiable Scan Context with Orientation (DiSCO), which simultaneously finds scans taken at a similar place and estimates their relative orientation. The orientation can further be used as the initial value for downstream locally optimal metric pose estimation, improving the pose estimate especially when a large orientation difference exists between the current and retrieved scans. Our key idea is to transform the feature learning into the frequency domain. We utilize the magnitude of the spectrum as the place signature, which is theoretically rotation-invariant. In addition, based on differentiable phase correlation, we can efficiently estimate the globally optimal relative orientation using the spectrum. With such structural constraints, the network can be learned in an end-to-end manner, and the backbone is fully shared by the two tasks, achieving interpretability and a lightweight design. Finally, DiSCO is validated on the NCLT and Oxford datasets under long-term outdoor conditions, showing better performance than the compared methods.
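The rotation invariance claimed here follows from standard Fourier facts: a rotation of a polar descriptor is a circular shift, the magnitude spectrum is invariant to circular shifts, and phase correlation recovers the shift itself. A 1-D numpy sketch (a stand-in for the paper's learned 2-D descriptors):

```python
import numpy as np

rng = np.random.default_rng(3)
sig = rng.standard_normal(360)     # one ring of a polar descriptor (1 deg bins)
shift = 42                         # unknown rotation, in degrees
rotated = np.roll(sig, shift)

F, G = np.fft.fft(sig), np.fft.fft(rotated)

# 1) The magnitude spectrum is invariant to the circular shift (rotation).
assert np.allclose(np.abs(F), np.abs(G))

# 2) Phase correlation recovers the shift: the normalized cross-power
#    spectrum's inverse FFT peaks exactly at the relative rotation.
num = np.conj(F) * G
peak = np.argmax(np.fft.ifft(num / np.abs(num)).real)
assert peak == shift
```

Both operations are differentiable, which is what lets the two tasks (signature matching and orientation estimation) be trained end-to-end through a shared backbone.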
|
|
TuCT13 Virtual-Asia, Time zone: GMT+1 |
Localization and Mapping IV |
|
|
Chair: Yue, Yufeng | Beijing Institute of Technology |
|
04:00-04:15, Paper TuCT13.1 |
MSTSL: Multi-Sensor Based Two-Step Localization in Geometrically Symmetric Environments |
|
Wu, Zhenyu | Nanyang Technological University |
Yue, Yufeng | Beijing Institute of Technology |
Wen, Mingxing | Nanyang Technological University |
Zhang, Jun | Nanyang Technological University |
Peng, Guohao | Nanyang Technological University |
Wang, Danwei | Nanyang Technological University |
Keywords: Localization, Range Sensing, Probability and Statistical Methods
Abstract: Symmetric environments are among the most intractable and challenging scenarios for mobile robots performing global localization tasks, due to highly similar geometric structures and insufficient distinctive features. Existing localization solutions for such scenarios either depend on pre-deployed infrastructure, which is expensive, inflexible, and hard to maintain, or rely on single-sensor methods whose initialization modules are incapable of providing enough unique information. Thus, this paper proposes a novel Multi-Sensor based Two-Step Localization framework named MSTSL, which addresses mobile robot global localization in geometrically symmetric environments by utilizing magnetic field measurements, 2-D LiDAR, and wheel odometry. The proposed system consists of two main steps: 1) Magnetic Field-based Initialization, and 2) LiDAR-based Localization. Based on a pre-built magnetic field database, multiple initial pose hypotheses are first determined by the proposed two-stage initialization algorithm. Then, utilizing these hypotheses, the robot can be localized more accurately by LiDAR-based localization. Extensive experiments demonstrate the practical utility and accuracy of the proposed system over alternative approaches in real-world scenarios.
|
|
04:15-04:30, Paper TuCT13.2 |
Range-Focused Fusion of Camera-IMU-UWB for Accurate and Drift-Reduced Localization |
|
Nguyen, Thien Hoang | Nanyang Technological University |
Nguyen, Thien-Minh | Nanyang Technological University |
Xie, Lihua | Nanyang Technological University |
Keywords: Sensor Fusion, Localization, SLAM
Abstract: In this work, we present a tightly-coupled fusion scheme of a monocular camera, a 6-DoF IMU, and a single unknown Ultra-wideband (UWB) anchor to achieve accurate and drift-reduced localization. Specifically, this paper focuses on incorporating the UWB sensor into an existing state-of-the-art visual-inertial system. Previous works toward this goal use a single nearest UWB range data to update robot positions in the sliding window ("position-focused") and have demonstrated encouraging results. However, these approaches ignore 1) the time-offset between UWB and camera sensors, and 2) all other ranges between two consecutive keyframes. Our approach shifts the perspective to the UWB measurements ("range-focused") by leveraging the propagated information readily available from the visual-inertial odometry pipeline. This allows the UWB data to be used in a more effective manner: the time-offset of each range data is eliminated and all available measurements can be utilized. Experimental results show that the proposed method consistently outperforms previous methods in both estimating the anchor position and reducing the drift in long-term trajectories.
|
|
04:30-04:45, Paper TuCT13.3 |
Interactive Planning for Autonomous Urban Driving in Adversarial Scenarios |
|
Luo, Yuanfu | School of Computing, National University of Singapore |
Meghjani, Malika | Singapore University of Technology and Design |
Ho, Qi Heng | University of Colorado Boulder |
Hsu, David | National University of Singapore |
Rus, Daniela | MIT |
Keywords: Autonomous Vehicle Navigation, Motion and Path Planning, Autonomous Agents
Abstract: Autonomous urban driving among human-driven cars requires a holistic understanding of road rules, driver intents, and driving styles. This is challenging because a short-term, single-instance driver intent, such as a lane change, may not correspond to the driver's longer-term driving style. This paper presents an interactive behavior planner which accounts for road context, short-term driver intent, and long-term driving styles to infer beliefs over the latent states of surrounding vehicles. We use a specialized Partially Observable Markov Decision Process to provide risk-averse decisions. Specifically, we consider adversarial driving scenarios caused by irrational drivers to validate the robustness of our proposed interactive behavior planner in simulation as well as on a full-size self-driving car. Our experimental results show that our algorithm enables safer and more time-efficient autonomous driving compared to baselines, even in adversarial scenarios.
|
|
04:45-05:00, Paper TuCT13.4 |
Kernel-Based 3-D Dynamic Occupancy Mapping with Particle Tracking |
|
Min, Youngjae | Korea Advanced Institute of Science and Technology |
Kim, Do-Un | KAIST |
Choi, Han-Lim | KAIST |
Keywords: Mapping, Aerial Systems: Perception and Autonomy, Object Detection, Segmentation and Categorization
Abstract: Mapping three-dimensional (3-D) dynamic environments is essential for aerial robots but challenging because of the increased dimensions in both space and time compared with 2-D static mapping. This paper presents a kernel-based 3-D dynamic occupancy mapping algorithm, K3DOM, that distinguishes between static and dynamic objects while estimating the velocities of dynamic cells via particle tracking. The proposed algorithm brings the benefits of kernel inference, such as simple computation, consideration of spatial correlation, and a natural measure of uncertainty, to the domain of dynamic mapping. We formulate the dynamic occupancy mapping problem in a Bayesian framework and represent the map through Dirichlet distributions to update posteriors recursively with intuitive heuristics. The proposed algorithm demonstrates promising performance compared to baselines in diverse scenarios simulated in ROS environments.
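The recursive Dirichlet update mentioned in the abstract amounts to adding evidence as pseudo-counts to the concentration parameters. A toy numpy sketch, with illustrative category names and integer counts standing in for the paper's kernel-weighted measurements:

```python
import numpy as np

# Dirichlet concentration over illustrative classes: (free, static, dynamic).
alpha = np.array([1.0, 1.0, 1.0])               # uniform prior

def update(alpha, counts):
    """Recursive Bayesian update: new evidence adds pseudo-counts."""
    return alpha + counts

# Suppose two measurements support 'static' and one supports 'dynamic'.
alpha = update(alpha, np.array([0.0, 2.0, 1.0]))

posterior_mean = alpha / alpha.sum()            # expected class probabilities
assert np.allclose(posterior_mean, [1/6, 3/6, 2/6])
```

The conjugacy of the Dirichlet with categorical observations is what keeps each cell's posterior update a constant-time operation, independent of the map history.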
|
|
TuCT14 Virtual-Asia, Time zone: GMT+1 |
Learning-Based Manipulation IV |
|
|
Chair: Yang, Xing | Harbin Institute of Technology, Shenzhen |
Co-Chair: Ijiri, Yoshihisa | OMRON Corp |
|
04:00-04:15, Paper TuCT14.1 |
AdaGrasp: Learning a Gripper-Aware Grasping Policy |
|
Xu, Zhenjia | Columbia University |
Qi, Beichun | Columbia University |
Agrawal, Shubham | Columbia University |
Song, Shuran | Columbia University |
Keywords: Grasping, Deep Learning in Grasping and Manipulation, Perception for Grasping and Manipulation
Abstract: This paper aims to improve robots' versatility and adaptability by allowing them to use a large variety of end-effector tools and quickly adapt to new tools. We propose AdaGrasp, a method to learn a single grasping policy that generalizes to novel grippers. By training on a large collection of grippers, our algorithm is able to acquire generalizable knowledge of how different grippers should be used in various tasks. Given a visual observation of the scene and the gripper, AdaGrasp infers the possible grasp poses and their grasp scores by computing the cross convolution between the shape encodings of the gripper and scene. Intuitively, this cross convolution operation can be considered as an efficient way of exhaustively matching the scene geometry with gripper geometry under different grasp poses (i.e., translations and orientations), where a good "match" of 3D geometry will lead to a successful grasp. We validate our methods in both simulation and real-world environments. Our experiment shows that AdaGrasp significantly outperforms the existing multi-gripper grasping policy method, especially when handling cluttered environments and partial observations. Code and Data are available at https://adagrasp.cs.columbia.edu.
|
|
04:15-04:30, Paper TuCT14.2 |
TRANS-AM: Transfer Learning by Aggregating Dynamics Models for Soft Robotic Assembly |
|
Tanaka, Kazutoshi | OMRON SINIC X Corporation |
Yonetani, Ryo | Omron Sinic X |
Hamaya, Masashi | OMRON SINIC X Corporation |
Lee, Robert | Australian Centre for Robotic Vision |
von Drigalski, Felix Wolf Hans Erich | OMRON SINIC X Corporation |
Ijiri, Yoshihisa | OMRON Corp |
Keywords: Transfer Learning, Reinforcement Learning, Soft Robot Applications
Abstract: Practical industrial assembly scenarios often require robotic agents to adapt their skills to unseen tasks quickly. While transfer reinforcement learning (RL) could enable such quick adaptation, much prior work has to collect many samples from source environments to learn target tasks in a model-free fashion, which still lacks sample efficiency at a practical level. In this work, we develop a novel transfer RL method named TRANSfer learning by Aggregating dynamics Models (TRANS-AM). TRANS-AM is based on model-based RL (MBRL) for its high sample efficiency and only requires dynamics models to be collected from source environments. Specifically, it learns to aggregate source dynamics models adaptively in an MBRL loop to better fit the state-transition dynamics of target environments and to execute optimal actions there. As a case study showing the effectiveness of the proposed approach, we address a challenging contact-rich peg-in-hole task with variable hole orientations using a soft robot. Our evaluations with both simulation and real-robot experiments demonstrate that TRANS-AM enables the soft robot to accomplish target tasks in fewer episodes than learning the tasks from scratch.
|
|
04:30-04:45, Paper TuCT14.3 |
Learning Deep Nets for Gravitational Dynamics with Unknown Disturbance through Physical Knowledge Distillation: Initial Feasibility Study |
|
Lin, Hongbin | Chinese University of Hong Kong |
Gao, Qian | The Chinese University of Hong Kong, Shenzhen |
Chu, Xiangyu | The Chinese University of Hong Kong |
Dou, Qi | The Chinese University of Hong Kong |
Deguet, Anton | Johns Hopkins University |
Kazanzides, Peter | Johns Hopkins University |
Au, K. W. Samuel | The Chinese University of Hong Kong |
Keywords: Medical Robots and Systems, Dynamics, Model Learning for Control
Abstract: Learning high-performance deep neural networks for dynamic modeling of high Degree-Of-Freedom (DOF) robots remains challenging due to the sampling complexity. Typical unknown system disturbance caused by unmodeled dynamics (such as internal compliance, cables) further exacerbates the problem. In this paper, a novel framework characterized by both high data efficiency and disturbance-adapting capability is proposed to address the problem of modeling gravitational dynamics using deep nets in feedforward gravity compensation control for high-DOF master manipulators with unknown disturbance. In particular, Feedforward Deep Neural Networks (FDNNs) are learned from both prior knowledge of an existing analytical model and observation of the robot system by Knowledge Distillation (KD). Through extensive experiments in high-DOF master manipulators with significant disturbance, we show that our method surpasses a standard Learning-from-Scratch (LfS) approach in terms of data efficiency and disturbance adaptation. Our initial feasibility study has demonstrated the potential of outperforming the analytical teacher model as the training data increases.
|
|
04:45-05:00, Paper TuCT14.4 |
Learning to Place Objects Onto Flat Surfaces in Upright Orientations |
|
Newbury, Rhys | Monash University |
He, Kerry | Monash University |
Cosgun, Akansel | Monash University |
Drummond, Tom | Monash University |
Keywords: Deep Learning in Grasping and Manipulation, Perception for Grasping and Manipulation
Abstract: We study the problem of placing a grasped object on an empty flat surface in an upright orientation, such as placing a cup on its bottom rather than on its side. We aim to find the required object rotation such that when the gripper is opened after the object makes contact with the surface, the object would be stably placed in the upright orientation. We iteratively use two neural networks. At every iteration, we use a convolutional neural network to estimate the required object rotation, which is executed by the robot, and then a separate convolutional neural network to estimate the quality of a placement in its current orientation. Our approach places previously unseen objects in upright orientations with a success rate of 98.1% in free space and 90.3% with a simulated robotic arm, using a dataset of 50 everyday objects in simulation experiments. Real-world experiments were performed, which achieved an 88.0% success rate, which serves as a proof-of-concept for direct sim-to-real transfer.
|
|
TuCT15 Virtual-Asia, Time zone: GMT+1 |
Learning in Robotics and Automation II |
|
|
Chair: Soh, Harold | National University of Singapore |
|
04:00-04:15, Paper TuCT15.1 |
PVStereo: Pyramid Voting Module for End-To-End Self-Supervised Stereo Matching |
|
Wang, Hengli | The Hong Kong University of Science and Technology |
Fan, Rui | UC San Diego |
Cai, Peide | Hong Kong University of Science and Technology |
Liu, Ming | Hong Kong University of Science and Technology |
Keywords: Computer Vision for Automation, Data Sets for Robotic Vision, Deep Learning for Visual Perception
Abstract: Supervised learning with deep convolutional neural networks (DCNNs) has seen huge adoption in stereo matching. However, the acquisition of large-scale datasets with well-labeled ground truth is cumbersome and labor-intensive, making supervised learning-based approaches often hard to implement in practice. To overcome this drawback, we propose a robust and effective self-supervised stereo matching approach, consisting of a pyramid voting module (PVM) and a novel DCNN architecture, referred to as OptStereo. Specifically, our OptStereo first builds multi-scale cost volumes, and then adopts a recurrent unit to iteratively update disparity estimations at high resolution; while our PVM can generate reliable semi-dense disparity images, which can be employed to supervise OptStereo training. Furthermore, we publish the HKUST-Drive dataset, a large-scale synthetic stereo dataset, collected under different illumination and weather conditions for research purposes. Extensive experimental results demonstrate the effectiveness and efficiency of our self-supervised stereo matching approach on the KITTI Stereo benchmarks and our HKUST-Drive dataset. PVStereo, our best-performing implementation, greatly outperforms all other state-of-the-art self-supervised stereo matching approaches. Our project page is available at sites.google.com/view/pvstereo.
|
|
04:15-04:30, Paper TuCT15.2 | Add to My Program |
Embedding Symbolic Temporal Knowledge into Deep Sequential Models |
|
Xie, Yaqi | National University of Singapore |
Zhou, Fan | National University of Singapore |
Soh, Harold | National University of Singapore |
Keywords: Deep Learning Methods, Representation Learning, Imitation Learning
Abstract: Sequences and time-series often arise in robot tasks, e.g., in activity recognition and imitation learning. In recent years, deep neural networks (DNNs) have emerged as an effective data-driven methodology for processing sequences given sufficient training data and compute resources. However, when data is limited, simpler models such as logic/rule-based methods work surprisingly well, especially when relevant prior knowledge is applied in their construction. Yet, unlike DNNs, these structured models can be difficult to extend, and they do not work well with raw unstructured data. In this work, we seek to learn flexible DNNs, yet leverage prior temporal knowledge when available. Our approach is to embed symbolic knowledge expressed as linear temporal logic (LTL) and use these embeddings to guide the training of deep models. Specifically, we construct semantic-based embeddings of automata generated from LTL formulae via a Graph Neural Network. Experiments show that these learnt embeddings can lead to improvements on downstream robot tasks such as sequential action recognition and imitation learning.
|
|
04:30-04:45, Paper TuCT15.3 | Add to My Program |
Multi-Modal Mutual Information (MuMMI) Training for Robust Self-Supervised Deep Reinforcement Learning |
|
Chen, Kaiqi | National University of Singapore |
Lee, Yong | National University of Singapore |
Soh, Harold | National University of Singapore |
Keywords: Deep Learning Methods, Reinforcement Learning, Representation Learning
Abstract: This work focuses on learning useful and robust deep world models using multiple, possibly unreliable, sensors. We find that current methods do not sufficiently encourage a shared representation between modalities; this can cause poor performance on downstream tasks and over-reliance on specific sensors. As a solution, we contribute a new multi-modal deep latent state-space model, trained using a mutual information lower-bound. The key innovation is a specially-designed density ratio estimator that encourages consistency between the latent codes of each modality. We tasked our method to learn policies (in a self-supervised manner) on multi-modal Natural MuJoCo benchmarks and a challenging Table Wiping task. Experiments show our method significantly outperforms state-of-the-art deep reinforcement learning methods, particularly in the presence of missing observations.
|
|
04:45-05:00, Paper TuCT15.4 | Add to My Program |
Linguistic Descriptions of Human Motion with Generative Adversarial Seq2Seq Learning |
|
Goutsu, Yusuke | Institute of Industrial Science, the University of Tokyo |
Inamura, Tetsunari | National Institute of Informatics |
Keywords: Recognition
Abstract: In this paper, we propose a generative model that learns a sequence-to-sequence (Seq2Seq) translation between human whole-body motions and linguistic descriptions in natural language. Our model merges the Seq2Seq model with the training strategy of sequence generative adversarial nets (SeqGAN), which extends a GAN framework to solve the problem that the gradient cannot pass back to the generator network. This model considers a generator, trained using a policy gradient method, as a stochastic parameterized policy. In the policy gradient, we employ a Monte Carlo (MC) search to receive the final reinforcement learning (RL) reward from the discriminator. The proposed generative network is trained on the KIT Motion-Language Dataset, which is one of the few large-scale datasets available and includes 3,911 human motions and 6,278 natural language descriptions. During the experiments, we evaluated the effectiveness of our model by comparing its various configurations and parameter settings. Finally, our model achieves remarkably high performance, outperforming an existing state-of-the-art method under the same dataset split for a fair comparison. In addition, the qualitative results of the motion-to-language translation demonstrate that our model can generate semantically and grammatically correct sentences with detailed linguistic descriptions from human motions.
|
|
TuCT16 Virtual-Asia, Time zone: GMT+1 |
Add to My Program |
Learning and Optimization |
|
|
Chair: Gao, Fei | Zhejiang University |
|
04:00-04:15, Paper TuCT16.1 | Add to My Program |
Evolvable Motion-Planning Method Using Deep Reinforcement Learning |
|
Nishi, Kaichiro | Hitachi, Ltd |
Nakasu, Nobuaki | Hitachi Ltd |
Keywords: Machine Learning for Robot Control, Industrial Robots, Deep Learning in Grasping and Manipulation
Abstract: A motion-planning method that can adapt to changes in the surrounding environment is proposed and evaluated. Automation of work is progressing in factories and distribution warehouses due to labor shortages. However, utilizing robots for transport operations in a distribution warehouse faces a problem: tasks for setting up a robot, such as adjustment of acceleration for stabilization of the transportation operation, are time-consuming. To solve that problem, we developed an evolvable robot motion-planning method. The aim of this method is to reduce the preparation cost by allowing the robot to automatically learn the optimized acceleration according to the weight and center of gravity of the objects to be transported. It was experimentally demonstrated that the proposed method can learn the optimized acceleration control from time-series data such as sensor information. The proposed method was evaluated in a simulator environment, and the results of the evaluation demonstrate that the learned model reduced the inertial force due to the acceleration of robot motion and shortened the transport time by 35% compared with the conventional method of manual adjustment. The proposed method was also evaluated in a real machine environment, and the evaluation results demonstrate that the method can be applied to a real robot. Since the speed of the robot does not need to be adjusted in the case of the proposed method, the adjustment man-hours can be reduced.
|
|
04:15-04:30, Paper TuCT16.2 | Add to My Program |
Learning Sequences of Manipulation Primitives for Robotic Assembly |
|
Vuong, Nghia | Nanyang Technological University |
Pham, Hung | Nanyang Technological University |
Pham, Quang-Cuong | NTU Singapore |
Keywords: Reinforcement Learning, Assembly
Abstract: This paper explores the idea that skillful assembly is best represented as dynamic sequences of Manipulation Primitives, and that such sequences can be automatically discovered by Reinforcement Learning. Manipulation Primitives, such as ``Move down until contact'', ``Slide along x while maintaining contact with the surface'', have enough complexity to keep the search tree shallow, yet are generic enough to generalize across a wide range of assembly tasks. Moreover, the additional ``semantics'' of the Manipulation Primitives make them more robust in sim2real and against model/environment variations and uncertainties, as compared to more elementary actions. Policies are learned in simulation, and then transferred onto the physical platform. Direct sim2real transfer (without retraining in real) achieves excellent success rates on challenging assembly tasks, such as round peg insertion with 0.04 mm clearance or square peg insertion with large hole position/orientation estimation errors.
|
|
04:30-04:45, Paper TuCT16.3 | Add to My Program |
Data-Efficient Learning for Complex and Real-Time Physical Problem Solving Using Augmented Simulation |
|
Ota, Kei | Mitsubishi Electric |
Jha, Devesh | Mitsubishi Electric Research Laboratories |
Romeres, Diego | Mitsubishi Electric Research Laboratories |
Vanbaar, Jeroen | MERL |
Smith, Kevin | Massachusetts Institute of Technology |
Semitsu, Takayuki | Mitsubishi Electric |
Oiki, Tomohiro | Mitsubishi Electric |
Sullivan, Alan | Mitsubishi Electric Research Lab |
Nikovski, Daniel | MERL |
Tenenbaum, Joshua | Massachusetts Institute of Technology |
Keywords: Cognitive Control Architectures, Reinforcement Learning, Model Learning for Control
Abstract: Humans quickly solve tasks in novel systems with complex dynamics, without requiring much interaction. While deep reinforcement learning algorithms have achieved tremendous success in many complex tasks, these algorithms need a large number of samples to learn meaningful policies. In this paper, we present a task for navigating a marble to the center of a circular maze. While this system is very intuitive and easy for humans to solve, it can be very difficult and inefficient for standard reinforcement learning algorithms to learn meaningful policies. We present a model that learns to move a marble in the complex environment within minutes of interacting with the real system. Learning consists of initializing a physics engine with parameters estimated using data from the real system. The error in the physics engine is then corrected using Gaussian process regression, which is used to model the residual between real observations and physics engine simulations. The physics engine augmented with the residual model is then used to control the marble in the maze environment using a model-predictive feedback over a receding horizon. To the best of our knowledge, this is the first time that a hybrid model consisting of a full physics engine along with a statistical function approximator has been used to control a complex physical system in real-time using nonlinear model-predictive control (NMPC).
|
|
04:45-05:00, Paper TuCT16.4 | Add to My Program |
EGO-Swarm: A Fully Autonomous and Decentralized Quadrotor Swarm System in Cluttered Environments |
|
Zhou, Xin | Zhejiang University |
Zhu, Jiangchao | Zhejiang University |
Zhou, Hongyu | Norwegian University of Science and Technology |
Xu, Chao | Zhejiang University |
Gao, Fei | Zhejiang University |
Keywords: Path Planning for Multiple Mobile Robots or Agents, Autonomous Vehicle Navigation, Aerial Systems: Applications
Abstract: This paper presents a decentralized and asynchronous systematic solution for multi-robot autonomous navigation in unknown obstacle-rich scenes using merely onboard resources. The planning system is formulated under a gradient-based local planning framework, where collision avoidance is achieved by formulating the collision risk as a penalty of a nonlinear optimization problem. In order to improve robustness and escape local minima, we incorporate a lightweight topological trajectory generation method. Agents then generate safe, smooth, and dynamically feasible trajectories in only several milliseconds using an unreliable trajectory sharing network. Relative localization drift among agents is corrected by using agent detection in depth images. Our method is demonstrated in both simulation and real-world experiments. The source code is released for the reference of the community.
|
|
TuCT17 Virtual-Asia, Time zone: GMT+1 |
Add to My Program |
Humanoids and Animaloids V |
|
|
Chair: Cheng, Hu | The Chinese University of Hong Kong |
|
04:00-04:15, Paper TuCT17.1 | Add to My Program |
Robust Landing Stabilization of Humanoid Robot on Uneven Terrain Via Admittance Control and Heel Strike Motion |
|
Jo, Joonhee | KIST |
Park, Gyunghoon | Korea Institute of Science and Technology |
Oh, Yonghwan | Korea Institute of Science & Technology (KIST) |
Keywords: Humanoid and Bipedal Locomotion, Whole-Body Motion Planning and Control, Body Balancing
Abstract: This paper addresses robust landing stabilization in humanoid locomotion on uneven terrain. The core idea is to find a configuration of the robot that results in small impulsive force when an unexpected obstacle is encountered, and to adjust post-contact reference for swing foot with which the pose of the foot is stabilized on the obstacle. This can be achieved by walking with heel strike motion (validated by the impact map analysis) and by employing hybrid admittance control combining the admittance control with reset of post-contact reference, embedded into the momentum-based whole-body control framework. The validity of the proposed algorithm is verified by simulation with a physics engine.
|
|
04:15-04:30, Paper TuCT17.2 | Add to My Program |
Toward Autonomous Driving by Musculoskeletal Humanoids: A Study of Developed Hardware and Learning-Based Software (I) |
|
Kawaharazuka, Kento | The University of Tokyo |
Tsuzuki, Kei | University of Tokyo |
Koga, Yuya | The University of Tokyo |
Omura, Yusuke | The University of Tokyo |
Makabe, Tasuku | The University of Tokyo |
Shinjo, Koki | The University of Tokyo |
Onitsuka, Moritaka | The University of Tokyo |
Nagamatsu, Yuya | The University of Tokyo |
Asano, Yuki | The University of Tokyo |
Okada, Kei | The University of Tokyo |
Kawasaki, Koji | The University of Tokyo |
Inaba, Masayuki | The University of Tokyo |
Keywords: Humanoid Robot Systems, Biomimetics, Field Robots
Abstract: This paper summarizes an autonomous driving project by musculoskeletal humanoids. The musculoskeletal humanoid, which mimics the human body in detail, has redundant sensors and a flexible body structure. These characteristics are suitable for motions with complex environmental contact, and the robot is expected to sit down on the car seat, step on the acceleration and brake pedals, and operate the steering wheel by both arms. We reconsider the developed hardware and software of the musculoskeletal humanoid Musashi in the context of autonomous driving. The respective components of autonomous driving are conducted using the benefits of the hardware and software. Finally, Musashi succeeded in the pedal and steering wheel operations with recognition.
|
|
04:30-04:45, Paper TuCT17.3 | Add to My Program |
Automatic Grouping of Redundant Sensors and Actuators Using Functional and Spatial Connections: Application to Muscle Grouping for Musculoskeletal Humanoids |
|
Kawaharazuka, Kento | The University of Tokyo |
Nishiura, Manabu | University of Tokyo |
Koga, Yuya | The University of Tokyo |
Omura, Yusuke | The University of Tokyo |
Toshimitsu, Yasunori | University of Tokyo |
Asano, Yuki | The University of Tokyo |
Okada, Kei | The University of Tokyo |
Kawasaki, Koji | The University of Tokyo |
Inaba, Masayuki | The University of Tokyo |
Keywords: Learning from Experience, Biomimetics, Redundant Robots
Abstract: For a robot with redundant sensors and actuators distributed throughout its body, it is difficult to construct a controller or a neural network using all of them due to computational cost and complexity. Therefore, it is effective to extract functionally related sensors and actuators, group them, and construct a controller or a network for each of these groups. In this study, the functional and spatial connections among sensors and actuators are embedded into a graph structure and a method for automatic grouping is developed. Taking a musculoskeletal humanoid with a large number of redundant muscles as an example, this method automatically divides all the muscles into regions such as the forearm, upper arm, scapula, neck, etc., which has been done by humans based on a geometric model. The functional relationship among the muscles and the spatial relationship of the neural connections are calculated without a geometric model. This study is applied to muscle grouping of musculoskeletal humanoids Musashi and Kengoro, and its effectiveness is verified.
|
|
04:45-05:00, Paper TuCT17.4 | Add to My Program |
State Estimation for Hybrid Wheeled-Legged Robots Performing Mobile Manipulation Tasks |
|
You, Yangwei | Institute for Infocomm Research |
Cheong, Samuel | Institute for Infocomm Research |
Chen, Lawrence Tai Pang | Institute for Infocomm Research |
Chen, Yuda | Institute for Infocomm Research, A*STAR Research Entities |
Zhang, Kun | Institute for Infocomm Research (I2R), A*STAR |
Acar, Cihan | Institute for Infocomm Research (I2R), A*STAR |
Lai, Fon Lin | Institute for Infocomm Research |
Adiwahono, Albertus Hendrawan | I2R A-STAR |
Tee, Keng Peng | Institute for Infocomm Research |
Keywords: Legged Robots, Mobile Manipulation, Climbing Robots
Abstract: This paper introduces a general state estimation framework fusing multiple sensor information for hybrid wheeled-legged robots performing mobile manipulation tasks. At the core of the state estimator is a novel unified odometry for hybrid locomotion which can seamlessly maintain tracking and has no need to switch between stepping and rolling modes. To the best of our knowledge, the proposed odometry is the first work in this area. It is calculated based on the robot kinematics and instantaneous contact points of wheels with sensor inputs from IMU, joint encoders, joint torque sensors estimating wheel contact status, as well as RGB-D camera detecting geometric features of the terrain (e.g. elevation and surface normal vector). Subsequently, the odometry output is utilized as the motion model of a 3D Lidar map-based Monte Carlo Localization module for drift-free state estimation. As part of the framework, visual localization is integrated to provide high precision guidance for the robot movement relative to an object of interest. The proposed approach was verified thoroughly by two experiments conducted on the Pholus robot with OptiTrack measurements as ground truth.
|
|
TuCT18 Virtual-Asia, Time zone: GMT+1 |
Add to My Program |
Grasping and Manipulation |
|
|
Co-Chair: Gorjup, Gal | The University of Auckland |
|
04:00-04:15, Paper TuCT18.1 | Add to My Program |
Adversarial Skill Learning for Robust Manipulation |
|
Jian, Pingcheng | Tsinghua University |
Yang, Chao | Tsinghua University |
Guo, Di | Tsinghua University |
Liu, Huaping | Tsinghua University |
Sun, Fuchun | Tsinghua University |
Keywords: Reinforcement Learning, Robust/Adaptive Control, Dexterous Manipulation
Abstract: Deep reinforcement learning has made significant progress in robotic manipulation tasks and it works well in the ideal disturbance-free environment. However, in a real-world environment, both internal and external disturbances are inevitable, thus the performance of the trained policy will dramatically drop. To improve the robustness of the policy, we introduce the adversarial training mechanism to the robotic manipulation tasks in this paper, and an adversarial skill learning algorithm based on soft actor-critic (SAC) is proposed for robust manipulation. Extensive experiments are conducted to demonstrate that the learned policy is robust to internal and external disturbances. Additionally, the proposed algorithm is evaluated in both the simulation environment and on the real robotic platform.
|
|
04:15-04:30, Paper TuCT18.2 | Add to My Program |
Learning Visual Affordances with Target-Orientated Deep Q-Network to Grasp Objects by Harnessing Environmental Fixtures |
|
Liang, Hengyue | University of Minnesota, Twin Cities |
Lou, Xibai | University of Minnesota Twin Cities |
Yang, Yang | University of Minnesota |
Choi, Changhyun | University of Minnesota, Twin Cities |
Keywords: Grasping, Perception for Grasping and Manipulation, Deep Learning in Grasping and Manipulation
Abstract: This paper introduces a challenging object grasping task and proposes to solve it with a self-supervised learning approach. The goal of the task is to grasp an object that cannot be grasped with a single robotic manipulator alone but only by harnessing environmental fixtures (e.g., walls, furniture, heavy objects). This Slide-to-Wall grasping task assumes no prior knowledge except a partial observation of the target object. Hence, the robot should learn a good manipulation policy given a scene observation that may include the target object, environmental fixtures, and any other disturbing objects. We formulate the problem as visual affordance learning, where a Target-Oriented Deep Q-Network (TO-DQN) is proposed to efficiently learn visual affordance maps (i.e., Q-maps) to guide robot actions. Since the training necessitates the robot exploring and colliding with the fixtures, TO-DQN is trained safely with a simulated robot manipulator. The learned policy is then applied to a real robot manipulator. We empirically show that TO-DQN can learn to solve the grasping task in different environment settings in simulation and outperforms a standard and a variant Deep Q-Network (DQN) in terms of training efficiency and robustness against unseen environmental changes. The testing performance in both simulation and real-robot experiments shows that the TO-DQN-trained policy achieves performance comparable to humans.
|
|
04:30-04:45, Paper TuCT18.3 | Add to My Program |
Enhancing Robot Perception in Grasping and Dexterous Manipulation through Crowdsourcing and Gamification |
|
Gorjup, Gal | The University of Auckland |
Gerez, Lucas | The University of Auckland |
Liarokapis, Minas | The University of Auckland |
Keywords: Human-Robot Collaboration, Multi-Modal Perception for HRI, Human-Robot Teaming
Abstract: Robot grasping and manipulation planning in unstructured and dynamic environments is heavily dependent on the attributes of manipulated objects. Although deep learning approaches have delivered exceptional performance in robot perception, human perception and reasoning are still superior in processing novel object classes. Moreover, training such models requires large datasets that are generally expensive to obtain. This work combines crowdsourcing and gamification to leverage human intelligence, enhancing the object recognition and attribute estimation aspects of robot perception. The framework employs an attribute matching system that encodes visual information into an online puzzle game, utilizing the collective intelligence of players to expand an initial attribute database and react to real-time perception conflicts. The framework is deployed and evaluated in a proof-of-concept application for enhancing object recognition in autonomous robot grasping and a model for estimating the response time is proposed. The obtained results demonstrate that given enough players, the framework can offer near real-time labeling of novel objects, based purely on visual information and human experience.
|
|
04:45-05:00, Paper TuCT18.4 | Add to My Program |
Teaching Robotic and Biomechatronic Concepts with a Gripper Design Project and a Grasping and Manipulation Competition |
|
Liarokapis, Minas | The University of Auckland |
Kontoudis, George | Virginia Tech |
Keywords: Education Robotics
Abstract: Lecturers of Engineering courses around the world are struggling to increase the engagement of students through the introduction of appropriate hands-on activities and assignments. In Biomechatronics and Robotics courses these assignments typically focus on how certain devices are designed, modelled, fabricated, or controlled. The hardware for these assignments is typically purchased from some external vendor, and the students only get the chance to analyze it or program it so as to execute a useful task (e.g., programming mobile robots to perform path following tasks). Student engagement can be increased by instructing the students to prepare the hardware for their assignment. This also increases their sense of ownership of the project outcomes. In this paper, we present how a robotic gripper / hand design project and the introduction of a grasping and manipulation competition as a course assignment can significantly increase student engagement and understanding of the taught concepts. The presented best practices have been trialed over the last four years in two different courses (one undergraduate and one postgraduate) of the Department of Mechanical Engineering at the University of Auckland in New Zealand. For the particular assignment the students were asked to fully develop a robotic gripper or hand from scratch using a single actuator (only the actuator and the power electronics were provided). The performance of the developed devices was assessed through the
|
|
TuCT19 Virtual-Asia, Time zone: GMT+1 |
Add to My Program |
Dynamics and Control IV |
|
|
Chair: Sreenivasa, Manish | University of Wollongong |
Co-Chair: Ohtsuka, Toshiyuki | Kyoto University |
|
04:00-04:15, Paper TuCT19.1 | Add to My Program |
Model Based Evaluation of Human and Lower-Limb Exoskeleton Interaction During Sit to Stand Motion |
|
Bottin-Noonan, Joel | University of Wollongong |
Sreenivasa, Manish | University of Wollongong |
Keywords: Physically Assistive Devices, Modeling and Simulating Humans, Prosthetics and Exoskeletons
Abstract: The interaction between an exoskeleton and its human user is complex, and it needs to conform to various requirements related to safety, comfort, and adaptability. It is, however, impractical to test a large number of prototype variations against a large number of user variations, especially in the initial design and testing phases. Model-based methods can help at this design stage by providing a virtual testbed. In this study, we develop a MATLAB-based toolbox that can simulate the interaction between a human model and a lower-limb exoskeleton during the sit-to-stand motion. We present results for different scales of human users as well as variation in the level of exoskeleton assistance. Our results show that large reductions in human joint torques of up to 57 Nm are possible, while also transmitting large forces of up to 300 N to the human model. Additionally, by varying the human body size by 15%, we found that the interaction forces changed by as much as 29.2%. Therefore, careful consideration of the human user and their limitations should be made in the exoskeleton design and concept phases.
|
|
04:15-04:30, Paper TuCT19.2 | Add to My Program |
Efficient Solution Method Based on Inverse Dynamics for Optimal Control Problems of Rigid Body Systems |
|
Katayama, Sotaro | Kyoto University |
Ohtsuka, Toshiyuki | Kyoto University |
Keywords: Optimization and Optimal Control
Abstract: We propose an efficient way of solving optimal control problems for rigid-body systems on the basis of inverse dynamics and the multiple-shooting method. We treat all variables, including the state, acceleration, and control input torques, as optimization variables and treat the inverse dynamics as an equality constraint. We eliminate the update of the control input torques from the linear equation of Newton's method by applying condensing for inverse dynamics. The size of the resultant linear equation is the same as that of the multiple-shooting method based on forward dynamics except for the variables related to the passive joints and contacts. Compared with the conventional methods based on forward dynamics, the proposed method reduces the computational cost of the dynamics and their sensitivities by utilizing the recursive Newton-Euler algorithm (RNEA) and its partial derivatives. In addition, it increases the sparsity of the Hessian of the Karush–Kuhn–Tucker conditions, which reduces the computational cost, e.g., of Riccati recursion. Numerical experiments show that the proposed method outperforms state-of-the-art implementations of differential dynamic programming based on forward dynamics in terms of computational time and numerical robustness.
|
|
04:30-04:45, Paper TuCT19.3 | Add to My Program |
Compensation for Undefined Behaviors During Robot Task Execution by Switching Controllers Depending on Embedded Dynamics in RNN |
|
Suzuki, Kanata | Fujitsu Laboratories LTD |
Mori, Hiroki | Waseda University |
Ogata, Tetsuya | Waseda University |
Keywords: Learning from Experience, Cognitive Control Architectures, Sensorimotor Learning
Abstract: Robotic applications require both correct task performance and compensation for undefined behaviors. Although deep learning is a promising alternative to model-based control to perform complex tasks, the response to undefined behaviors that are not reflected in the training dataset remains challenging. In a human-robot collaborative task, the robot may adopt an unexpected posture due to collisions and other unexpected events. Therefore, robots should be able to recover from disturbances for completing the execution of the intended task. We propose a compensation method for undefined behaviors by switching between two controllers. Specifically, the proposed method switches between learning-based and model-based controllers depending on the internal representation of a recurrent neural network that learns task dynamics. Undefined behaviors are detected from the embedded task dynamics in the learning-based controller, rendering an external anomaly detector unnecessary. We applied the proposed method to a pick-and-place task and evaluated the compensation for undefined behaviors. Experimental results from simulations and on a real robot demonstrate the effectiveness and high performance of the proposed method.
|
|
04:45-05:00, Paper TuCT19.4 | Add to My Program |
Reduction of Ground Impact of a Powered Exoskeleton by Shock Absorption Mechanism on the Shank |
|
Park, Jeongsu | KAIST |
Lee, Dae-Ho | KAIST |
Park, Kyeong-Won | KAIST |
Kong, Kyoungchul | Korea Advanced Institute of Science and Technology |
Keywords: Wearable Robotics, Physically Assistive Devices, Human-Centered Robotics
Abstract: Powered exoskeletons for people with paraplegia are subjected to large, repetitive impacts due to repeated ground contacts. These repetitive impact forces not only deteriorate wearing comfort but can also cause serious damage to the muscles and bones of the person wearing the powered exoskeleton. To address this issue, this paper designs a novel shock absorption mechanism for powered exoskeletons that can reduce the peak ground reaction force by up to 28%. The designed absorption mechanism is integrated into the WalkON Suit, a powered exoskeleton for people with paraplegia, and is verified through experiments with a human subject.
|
|
TuCT20 Virtual-Asia, Time zone: GMT+1 |
Add to My Program |
Deep Learning in Robotics II |
|
|
Chair: Hao, Qi | Southern University of Science and Technology |
|
04:00-04:15, Paper TuCT20.1 | Add to My Program |
FlowDriveNet: An End-To-End Network for Learning Driving Policies from Image Optical Flow and LiDAR Point Flow |
|
Wang, Shuai | University of Science and Technology of China |
Qin, Jiahu | University of Science and Technology of China |
Li, Menglin | University of Science and Technology of China |
Wang, Yaonan | Hunan University |
Keywords: Imitation Learning, Machine Learning for Robot Control, Computer Vision for Automation
Abstract: Learning driving policies using an end-to-end network has been proved a promising solution for autonomous driving. Due to the lack of a benchmark driver behavior dataset that contains both the visual and the LiDAR data, existing works solely focus on learning driving from visual sensors. Moreover, most works are limited to predicting steering angle and neglect the more challenging vehicle speed control problem. In this paper, we propose a novel end-to-end network, FlowDriveNet, which takes advantage of sequential visual data and LiDAR data jointly to predict steering angle and vehicle speed. The main challenges of this problem are how to efficiently extract driving-related information from images and point clouds, and how to fuse them effectively. To tackle these challenges, we introduce the concept of point flow and argue that image optical flow and LiDAR point flow are significant motion cues for driving policy learning. Specifically, we first create an enhanced dataset that consists of images, point clouds and corresponding human driver behaviors. Then, in FlowDriveNet, a deep but efficient visual feature extraction module and a point feature extraction module are utilized to extract spatial features from optical flow and point flow, respectively. Additionally, a novel temporal fusion and prediction module is designed to fuse temporal information from the extracted spatial feature sequences and predict vehicle driving commands.
|
|
04:15-04:30, Paper TuCT20.2 | Add to My Program |
PocoNet: SLAM-Oriented 3D LiDAR Point Cloud Online Compression Network |
|
Cui, Jinhao | Zhejiang University |
Zou, Hao | Zhejiang University |
Kong, Xin | Zhejiang University |
Yang, Xuemeng | Zhejiang University |
Zhao, Xiangrui | Zhejiang University |
Liu, Yong | Zhejiang University |
Li, Wanlong | Beijing Huawei Digital Technologies Co., Ltd |
Wen, Feng | Huawei Technologies Co., Ltd |
Zhang, Hongbo | Huawei Technologies |
Keywords: Deep Learning for Visual Perception, Localization, Semantic Scene Understanding
Abstract: In this paper, we present PocoNet: Point cloud Online COmpression NETwork, to address the task of SLAM-oriented compression. The aim of this task is to select a compact subset of high-priority points that maintains localization accuracy. The key insight is that points with high priority have similar geometric features in SLAM scenarios. Hence, we cast this task as point cloud segmentation to capture complex geometric information. We calculate observation counts by matching maps against point clouds and divide them into different priority levels. Trained on labels annotated with these observation counts, the proposed network can evaluate point-wise priority. Experiments are conducted by integrating our compression module into an existing SLAM system to evaluate compression ratios and localization performance. Experimental results on two different datasets verify the feasibility and generalization of our approach.
|
|
04:30-04:45, Paper TuCT20.3 | Add to My Program |
3D Reconstruction of Deformable Colon Structures Based on Preoperative Model and Deep Neural Network |
|
Zhang, Shuai | University of Technology Sydney |
Zhao, Liang | University of Technology Sydney |
Huang, Shoudong | University of Technology, Sydney |
Ma, Ruibin | University of North Carolina at Chapel Hill |
Hu, Boni | Northwestern Polytechnical University |
Hao, Qi | Southern University of Science and Technology |
Keywords: Surgical Robotics: Laparoscopy, SLAM, Computer Vision for Medical Robotics
Abstract: In colonoscopy procedures, it is important to reconstruct and visualize the colonic surface to minimize missed regions and to reinspect for abnormalities. Due to the fast camera motion and the deformation of the colon in standard forward-viewing colonoscopies, traditional simultaneous localization and mapping (SLAM) systems work poorly for 3D reconstruction of colon surfaces and are prone to severe drift. Thus, in this paper, a preoperative colon model segmented from CT scans is used together with the colonoscopic images to achieve 3D colon reconstruction. The proposed framework includes dense depth estimation from monocular colonoscopic images using a deep neural network (DNN), visual odometry (VO)-based camera motion estimation, and an embedded deformation (ED) graph-based non-rigid registration algorithm for deforming 3D scans to the segmented colon model. A realistic simulator is used to generate different simulation datasets with ground truth. Simulation results demonstrate the good performance of the proposed 3D colonic surface reconstruction method in terms of accuracy and robustness. In-vivo experiments are also conducted, and the results show the practicality of the proposed framework for providing useful shape and texture information in colonoscopy applications.
|
|
04:45-05:00, Paper TuCT20.4 | Add to My Program |
DenseLiDAR: A Real-Time Pseudo Dense Depth Guided Depth Completion Network |
|
Gu, Jiaqi | Zhejiang University |
Xiang, Zhiyu | Zhejiang University |
Ye, Yuwen | Zhejiang University |
Wang, Lingxuan | Zhejiang University |
Keywords: Deep Learning for Visual Perception, RGB-D Perception, Recognition
Abstract: Depth completion produces a dense depth map from a sparse input and provides a more complete 3D description of the environment. Despite great progress in depth completion, the sparsity of the input and the low density of the ground truth still make the problem challenging. In this work, we propose DenseLiDAR, a novel real-time pseudo-depth-guided depth completion network. We exploit a dense pseudo-depth map obtained from simple morphological operations to guide the network in three aspects: (1) constructing a residual structure for the output; (2) rectifying the sparse input data; (3) providing a dense structural loss for training the network. Thanks to these novel designs, higher-quality output can be achieved. In addition, two new metrics for better evaluating the quality of the predicted depth map are presented. Extensive experiments on the KITTI depth completion benchmark show that our model achieves state-of-the-art performance at the highest frame rate of 50 Hz. The predicted dense depth is further evaluated on several downstream robotic perception and positioning tasks. For 3D object detection, performance gains of 3-5 percent on small-object categories are achieved on the KITTI 3D object detection dataset. For RGB-D SLAM, higher accuracy of the vehicle's trajectory is also obtained. These promising results not only verify the high quality of our depth prediction but also demonstrate the potential of improving the related downstream tasks.
|
|
TuCT21 Virtual-Asia, Time zone: GMT+1 |
Add to My Program |
Control Applications |
|
|
Chair: Vidal-Calleja, Teresa A. | University of Technology Sydney |
Co-Chair: Park, Hae-Won | Korea Advanced Institute of Science and Technology |
|
04:00-04:15, Paper TuCT21.1 | Add to My Program |
Faithful Euclidean Distance Field from Log-Gaussian Process Implicit Surfaces |
|
Wu, Lan | University of Technology Sydney |
Lee, Ki Myung Brian | University of Technology Sydney |
Liu, Liyang | University of Sydney |
Vidal-Calleja, Teresa A. | University of Technology Sydney |
Keywords: Mapping
Abstract: In this letter, we introduce the Log-Gaussian Process Implicit Surface (Log-GPIS), a novel continuous and probabilistic mapping representation suitable for surface reconstruction and local navigation. Our key contribution is the realisation that the regularised Eikonal equation can be solved simply by applying a logarithmic transformation to a GPIS formulation to recover the accurate Euclidean distance field (EDF) and, at the same time, the implicit surface. To derive the proposed representation, Varadhan's formula is exploited to approximate the non-linear Eikonal partial differential equation (PDE) of the EDF by the logarithm of a linear PDE. We show that members of the Matérn covariance family directly satisfy this linear PDE. The proposed approach does not require post-processing steps to recover the EDF. Moreover, unlike sampling-based methods, Log-GPIS does not use sample points inside and outside the surface, as the derivative of the covariance allows direct estimation of the surface normals and distance gradients. We benchmarked the proposed method on simulated and real data against state-of-the-art mapping frameworks that also aim at recovering both the surface and a distance field. Our experiments show that Log-GPIS produces the most accurate results for the EDF and comparable results for surface reconstruction, and its computation time still allows online operation.
|
|
04:15-04:30, Paper TuCT21.2 | Add to My Program |
Force Control of a Hydraulic Actuator with a Neural Network Inverse Model |
|
Kim, Sung-Woo | Korea Advanced Institute of Science and Technology |
Cho, Buyoun | Korea Advanced Institute of Science and Technology |
Shin, Seunghoon | Korea Advanced Institute of Science and Technology |
Oh, Jun Ho | Korea Advanced Inst. of Sci. and Tech |
Hwangbo, Jemin | Korea Advanced Institute of Science and Technology |
Park, Hae-Won | Korea Advanced Institute of Science and Technology |
Keywords: Hydraulic/Pneumatic Actuators, Force Control, Neural and Fuzzy Control
Abstract: In this study, a learning-based force controller for a hydraulic actuator is presented. We propose a control method with an inverse model composed of a deep neural network, which accurately tracks a force trajectory. This learning-based controller can be trained offline using force and position data sets from the hydraulic actuator. The methodology for training the controller network and the experimental setup for data collection are proposed. The learning-based controller was implemented on a hydraulic actuator hardware platform. The proposed learning-based controller demonstrates improved tracking performance compared to that of conventional model-based adaptive control methods.
|
|
04:30-04:45, Paper TuCT21.3 | Add to My Program |
An Encoder-Free Joint Velocity Estimation Method for Serial Manipulators Using Inertial Sensors |
|
Xu, Xiaolong | Shandong University |
Sun, Yujie | Shandong University |
Tian, Xincheng | Shandong University |
Zhou, Lelai | Shandong University |
Li, Yibin | Shandong University |
Keywords: Surveillance Robotic Systems, Sensor Fusion, Kinematics
Abstract: This paper develops a real-time and flexible velocity estimation approach for serial revolute manipulators using only one inertial measurement unit (IMU) mounted on each link of the manipulator. In particular, the proposed approach places no requirement on the installation position and orientation of the IMU, which improves the flexibility of the implementation procedure. A joint velocity model is established based on the proposed principle of constructing a coordinate system from the robot's geometric information. The general solutions are derived in detail, so the proposed algorithm can be generalized to any other robot with the same geometric configuration. With this method, joint velocity measurements during both static and dynamic robot motion are provided and compared against encoder readings. Experimental results on a six-degree-of-freedom (DOF) collaborative manipulator validate the feasibility and effectiveness of the proposed approach. The method has the benefits of low cost and flexibility, and could serve as a redundant velocity monitor providing auxiliary joint velocity measurements.
|
|
04:45-05:00, Paper TuCT21.4 | Add to My Program |
D-ACC: Dynamic Adaptive Cruise Control for Highways with Ramps Based on Deep Q-Learning |
|
Das, Lokesh Chandra | The University of Memphis |
Won, Myounggyu | University of Memphis |
Keywords: Intelligent Transportation Systems, AI-Based Methods, Autonomous Agents
Abstract: An Adaptive Cruise Control (ACC) system allows a vehicle to automatically maintain a desired headway distance to a preceding vehicle, and it is increasingly adopted in commercial vehicles. Recent research demonstrates that the effective use of ACC can improve traffic flow through adaptation of the headway distance in response to current traffic conditions. In this paper, we demonstrate that a state-of-the-art intelligent ACC system performs poorly on highways with ramps, due to the limitation of model-based approaches that do not appropriately take the traffic dynamics on ramps into account when determining the optimal headway distance. We then propose a dynamic adaptive cruise control system (D-ACC) based on deep reinforcement learning that adapts the headway distance effectively according to dynamically changing traffic conditions on both the main road and the ramp to optimize traffic flow. Extensive simulations are performed with a combination of a traffic simulator (SUMO) and a vehicle-to-everything (V2X) network simulator (Veins) under numerous traffic scenarios. We demonstrate that D-ACC improves traffic flow by up to 70% compared with a state-of-the-art intelligent ACC system in a highway segment with a ramp.
|
|
TuCT22 Virtual-Asia, Time zone: GMT+1 |
Add to My Program |
Autonomous Manipulation |
|
|
Chair: Wang, Zhongkui | Ritsumeikan University |
|
04:00-04:15, Paper TuCT22.1 | Add to My Program |
Precise Multi-Modal In-Hand Pose Estimation Using Low-Precision Sensors for Robotic Assembly |
|
von Drigalski, Felix Wolf Hans Erich | OMRON SINIC X Corporation |
Hayashi, Kennosuke | OMRON Corporation |
Huang, Yifei | The University of Tokyo |
Yonetani, Ryo | Omron Sinic X |
Hamaya, Masashi | OMRON SINIC X Corporation |
Tanaka, Kazutoshi | OMRON SINIC X Corporation |
Ijiri, Yoshihisa | OMRON Corp |
Keywords: Perception for Grasping and Manipulation, Assembly, Object Detection, Segmentation and Categorization
Abstract: In industrial assembly tasks, the in-hand pose of grasped objects needs to be known with high precision for subsequent manipulation tasks such as insertion. This problem (in-hand-pose estimation) has traditionally been addressed using visual recognition or tactile sensing. On the one hand, while visual recognition can provide efficient pose estimates, it tends to suffer from low precision due to noise, occlusions and calibration errors. On the other hand, tactile fingertip sensors can provide precise complementary information, but their low durability significantly limits their use in real-world applications. To get the best of both worlds, we propose an efficient method for in-hand pose estimation using off-the-shelf cameras and robot wrist force sensors, which requires no precise camera calibration. The key idea is to utilize visual and contact information adaptively to maximally reduce the uncertainty about the in-hand object pose in a Bayesian state estimation framework. As most of the uncertainty can be resolved from visual observations, our approach reduces the number of physical environment interactions while keeping a high pose estimation accuracy. Our experimental evaluation demonstrates that our approach can estimate object poses with sub-mm precision with an off-the-shelf camera and force-torque sensor.
|
|
04:15-04:30, Paper TuCT22.2 | Add to My Program |
Assembly Sequences Based on Multiple Criteria against Products with Deformable Parts |
|
Kiyokawa, Takuya | Nara Institute of Science and Technology |
Takamatsu, Jun | Nara Institute of Science and Technology |
Ogasawara, Tsukasa | Nara Institute of Science and Technology |
Keywords: Assembly, Industrial Robots, Task Planning
Abstract: To generate assembly sequences that robots can easily handle, this study tackles assembly sequence generation (ASG) by considering two tradeoff objectives: (1) insertion conditions and (2) the degrees of the constraints affecting the assembled parts. We propose a multiobjective genetic algorithm to balance these two objectives. Furthermore, we extend our previously proposed 3D computer-aided design (CAD)-based method for extracting three types of two-part relationship matrices from 3D models that include deformable parts. The interference between deformable and other parts can be determined using scaled part shapes. The proposed ASG can produce Pareto-optimal sequences for multi-component models with deformable parts such as rubber bands, rubber belts, and roller chains. We further discuss the limitations of the generated sequences and their applicability to robotic assembly.
|
|
04:30-04:45, Paper TuCT22.3 | Add to My Program |
A Versatile End-Effector for Pick-And-Release of Fabric Parts |
|
Yamazaki, Kimitoshi | Shinshu University |
Abe, Taiki | Shinshu University |
Keywords: Grippers and Other End-Effectors, Factory Automation, Industrial Robots
Abstract: A novel robotic end-effector is introduced for pick-and-release of several types of fabric parts used in producing underwear. One of the main operations in a factory sewing cloth products is to pick up fabric parts, feed them to a sewing machine, and sew them together. In the case of underwear, thin cotton cloth is the target of manipulation. Since such cloth parts are placed in a stacked state, they tend to stick to each other, and a certain skill is required to pick them up. Therefore, this work is still mostly done manually. One purpose of this study is to automate such manipulation, so a mechanism is introduced to pick up only the top piece of cotton fabric from a stack. The essential part of the mechanism is a cylindrical brush with a removal cloth attached to its surface. The cylindrical brush is placed on the cotton fabric and then rotated to roll up only the top piece. This mechanism also makes it possible to release the fabric by rotating the brush in the reverse direction. In addition, two sets of cylindrical brushes are installed facing each other to enable pinching. This mechanism enables pick-and-release of various cloth parts, such as woven rubber pieces and piped cloth hems. Finally, a composite end-effector equipped with these gripping functions is manufactured, and evaluation experiments are conducted using actual fabric parts. The results show that the proposed end-effector is practical.
|
|
04:45-05:00, Paper TuCT22.4 | Add to My Program |
A Soft Robotic Hand Based on Bellows Actuators for Dishwashing Automation |
|
Wang, Zhongkui | Ritsumeikan University |
Hirata, Takao | Ritsumeikan University |
Sato, Takanori | Yamagata University |
Mori, Tomoharu | Yamagata University |
Kawakami, Masaru | Yamagata University |
Furukawa, Hidemitsu | Yamagata University |
Kawamura, Sadao | Ritsumeikan University |
Keywords: Grippers and Other End-Effectors, Soft Robot Applications
Abstract: Automation in the food services industry is not as developed as in the automotive and electrical industries. This may be because (1) the tasks involve high-speed operations in unstructured environments and (2) effective robotic end-effectors are lacking. In this paper, we focus on the automation of dishwashing operations and propose a robotic hand capable of withdrawing and grasping a dish plate from a pile, even when water or oil is left on the plate in a post-meal scenario. The robotic hand consists of withdrawing and grasping mechanisms, each actuated by two bellows actuators. Soft pads with special groove patterns were fabricated to provide sufficient friction when they come into contact with the plate. A theoretical analysis of the withdrawing and grasping operations was performed, and an analytical model of the bellows actuator was established by considering both the pressure thrust force and the elastic force. The bellows actuators were designed based on this analysis and fabricated using both 3D printing and casting methods. Characterization of the bellows actuator was performed to validate the analytical model; agreement was found for the thrust force, but a discrepancy occurred in the elastic force due to the complex structure and deformation mode of the bellows. Finally, handling experiments on different plates were conducted, and the results demonstrated that the proposed robotic hand can successfully withdraw and grasp a plate from a pile.
|
|
TuCT23 Virtual-Asia, Time zone: GMT+1 |
Add to My Program |
Autonomous Driving |
|
|
Chair: Kong, Detian | Chinese University of Hong Kong |
|
04:00-04:15, Paper TuCT23.1 | Add to My Program |
IDE-Net: Interactive Driving Event and Pattern Extraction from Human Data |
|
Jia, Xiaosong | University of California, Berkeley |
Sun, Liting | University of California, Berkeley |
Tomizuka, Masayoshi | University of California |
Zhan, Wei | University of California, Berkeley |
Keywords: Behavior-Based Systems, Agent-Based Systems, Intention Recognition
Abstract: Autonomous vehicles (AVs) need to share the road with multiple, heterogeneous road users in a variety of driving scenarios. It is overwhelming and unnecessary to carefully interact with all observed agents, so AVs need to determine whether and when to interact with each surrounding agent. To facilitate the design and testing of the prediction and planning modules of AVs, an in-depth understanding of interactive behavior with a proper representation is expected, and events in behavior data need to be extracted and categorized automatically. Beyond answering whether and when, answering what the essential patterns of interaction are is also crucial for these purposes. Thus, learning to extract interactive driving events and patterns from human data for tackling the whether-when-what tasks is of critical importance for AVs. In this paper, we propose the Interactive Driving event and pattern Extraction Network (IDE-Net), a deep learning framework that automatically extracts interaction events and patterns directly from vehicle trajectories. In IDE-Net, we leverage the power of multi-task learning and propose three auxiliary tasks to assist the pattern extraction in an unsupervised fashion. We also design a unique spatial-temporal block to encode the trajectory data. Experimental results on the INTERACTION dataset verified the effectiveness of these designs in terms of better generalizability and effective pattern extraction.
|
|
04:15-04:30, Paper TuCT23.2 | Add to My Program |
HD Map Update for Autonomous Driving with Crowdsourced Data |
|
Kim, Kitae | Korea University |
Cho, Soohyun | Korea University |
Chung, Woojin | Korea University |
Keywords: Mapping, Autonomous Vehicle Navigation, Object Detection, Segmentation and Categorization
Abstract: Current self-driving cars can perform precise localization and generate collision-free trajectories using high definition (HD) maps, which provide accurate road information. Therefore, keeping HD maps up to date is important for safe autonomous driving. In general, automotive HD maps are built using expensive mapping systems, and many manual modifications are required. Due to the high cost, conventional HD mapping cannot be carried out frequently. In this work, we use a large amount of road data collected by crowdsourcing devices, which consist of low-cost sensors mounted on repeatedly traveling vehicles such as buses. Although the collected data show high uncertainty and low accuracy, a large amount of data can be obtained in a short time at low expense. We present a solution that keeps HD maps up to date by using crowdsourced data. The developed solution concentrates on landmark information in the crowdsourced data and HD maps. Using uncertainty information, we choose reliable observations for map updating. Observation learner algorithms were carefully designed in consideration of the differences between discrete and continuous landmarks. The triggering condition for the map update can be adjusted by the proposed update mode selection strategy. The proposed map updating scheme has been experimentally verified using crowdsourced data collected from real road environments.
|
|
04:30-04:45, Paper TuCT23.3 | Add to My Program |
Distributed Dynamic Map Fusion Via Federated Learning for Intelligent Networked Vehicles |
|
Zhang, Zijian | Southern University of Science and Technology |
Wang, Shuai | Southern University of Science and Technology |
Hong, Yuncong | Southern University of Science and Technology |
Zhou, Liangkai | Southern University of Science and Technology |
Hao, Qi | Southern University of Science and Technology |
Keywords: Intelligent Transportation Systems, Multi-Robot Systems, Autonomous Agents
Abstract: The technology of dynamic map fusion among networked vehicles has been developed to enlarge sensing ranges and improve sensing accuracy for individual vehicles. This paper proposes a federated learning (FL) based dynamic map fusion framework to achieve high map quality despite unknown numbers of objects in the fields of view (FoVs), various sensing and model uncertainties, and missing data labels for online learning. The novelty of this work is threefold: (1) developing a three-stage fusion scheme to predict the number of objects effectively and to fuse multiple local maps with fidelity scores; (2) developing an FL algorithm which fine-tunes feature models distributively by aggregating model parameters; (3) developing a knowledge distillation method to generate FL training labels when data labels are unavailable. The proposed framework is implemented in the CARLA simulation platform. Extensive experimental results are provided to verify the superior performance and robustness of the developed map fusion and FL schemes.
|
|
04:45-05:00, Paper TuCT23.4 | Add to My Program |
Ground-Aware Monocular 3D Object Detection for Autonomous Driving |
|
Liu, Yuxuan | Hong Kong University of Science and Technology |
Yuan, Yixuan | City University of Hong Kong |
Wang, Lujia | Shenzhen Institutes of Advanced Technology |
Liu, Ming | Hong Kong University of Science and Technology |
Keywords: Automation Technologies for Smart Cities, Deep Learning for Visual Perception, Object Detection, Segmentation and Categorization
Abstract: Estimating the 3D position and orientation of objects in the environment with a single RGB camera is a critical and challenging task for low-cost urban autonomous driving and mobile robots. Most existing algorithms are based on geometric constraints in the 2D-3D correspondence, which stem from generic 6D object pose estimation. We first identify how the ground plane provides additional clues for depth reasoning in 3D detection in driving scenes. Based on this observation, we improve the processing of 3D anchors and introduce a novel neural network module to fully utilize such application-specific priors in a deep learning framework. Finally, we introduce an efficient neural network embedded with the proposed module for 3D object detection. We further verify the power of the proposed module with a neural network designed for monocular depth prediction. The two proposed networks achieve state-of-the-art performance on the KITTI 3D object detection and depth prediction benchmarks, respectively. The code will be published at https://www.github.com/Owen-Liuyuxuan/visualDet3D
|
|
TuCT24 Virtual-Asia, Time zone: GMT+1 |
Add to My Program |
Aerial Robotics: Mechanics and Control II |
|
|
Chair: Shen, Shaojie | Hong Kong University of Science and Technology |
Co-Chair: Chen, Ben M. | Chinese University of Hong Kong |
|
04:00-04:15, Paper TuCT24.1 | Add to My Program |
Underwater Stability of a Morphable Aerial-Aquatic Quadrotor with Variable Thruster Angles |
|
Tan, Yu Herng | National University of Singapore |
Chen, Ben M. | Chinese University of Hong Kong |
Keywords: Aerial Systems: Applications, Aerial Systems: Mechanics and Control, Marine Robotics
Abstract: The design of aerial-aquatic multirotors can benefit from thruster rotation, so that the thrusters can act directly in the lateral directions of surge and sway when submerged. This allows much more effective locomotion underwater than the aerial configuration, in which rotational acceleration is used to direct small components of thrust in the lateral directions. However, introducing lateral thruster components by rotating the thrusters about their respective arm axes creates additional coupled moment terms. Here, the dynamics of this design are analysed to show that critical angles of thruster rotation can achieve stable surge and sway movements while also decoupling lateral from rotational movements.
|
|
04:15-04:30, Paper TuCT24.2 | Add to My Program |
Development of Flapping Robot with Self-Takeoff from the Ground Capability |
|
Afakh, Muhammad Labiyb | Tokyo Metropolitan University |
Sato, Terukazu | Tokyo Metropolitan University |
Sato, Hidaka | Tokyo Metropolitan University |
Takesue, Naoyuki | Tokyo Metropolitan University |
Keywords: Biologically-Inspired Robots, Aerial Systems: Mechanics and Control, Mechanism Design
Abstract: Birds are agile in locomotion and able to move quickly and easily from one place to another. When a bird is on the ground and a threat approaches, the bird will fly away and escape. An ornithopter robot provides advantages in energy saving, maneuverability, and crash safety. However, most flapping robots require an operator or assistance to take off. The goal of this study is to enable self-takeoff from the ground. The developed robot can generate thrust exceeding its own weight using a simple flapping mechanism and a lightweight design. The takeoff experiment showed that the ornithopter robot was able to take off from the ground by itself, without assistance.
|
|
04:30-04:45, Paper TuCT24.3 | Add to My Program |
Fast-Tracker: A Robust Aerial System for Tracking Agile Target in Cluttered Environments |
|
Han, Zhichao | Zhejiang University |
Zhang, Ruibin | Zhejiang University |
Pan, Neng | Zhejiang University |
Xu, Chao | Zhejiang University |
Gao, Fei | Zhejiang University |
Keywords: Aerial Systems: Applications, Motion and Path Planning, Autonomous Vehicle Navigation
Abstract: This paper proposes a systematic solution that uses an unmanned aerial vehicle (UAV) to aggressively and safely track an agile target. It properly handles the challenging situations where both the intent of the target and the dense environment are unknown. Our work is divided into two parts: target motion prediction and tracking trajectory planning. The target motion prediction method utilizes target observations to reliably predict the target's future motion. The tracking trajectory planner follows a hierarchical workflow: a target-informed kinodynamic searching method is adopted as the front end, which heuristically searches for a safe tracking trajectory, and the back-end optimizer then refines it into a spatial-temporal optimal trajectory. The proposed solution is integrated into an onboard quadrotor system. We fully test the system in challenging real-world tracking missions. Moreover, benchmark comparisons validate that the proposed method surpasses cutting-edge methods in time efficiency and tracking effectiveness.
|
|
04:45-05:00, Paper TuCT24.4 | Add to My Program |
Teach-Repeat-Replan: A Complete and Robust System for Aggressive Flight in Complex Environments (I) |
|
Gao, Fei | Zhejiang University |
Wang, Luqi | Hong Kong University of Science and Technology |
Zhou, Boyu | Hong Kong University of Science and Technology |
Zhou, Xin | Zhejiang University |
Pan, Jie | Hong Kong University of Science and Technology |
Shen, Shaojie | Hong Kong University of Science and Technology |
Keywords: Aerial Systems: Applications, Motion and Path Planning, Autonomous Vehicle Navigation
Abstract: In this paper, we propose a complete and robust system, Teach-Repeat-Replan, for the aggressive flight of autonomous quadrotors. The proposed system is built upon the classical teach-and-repeat framework, which is widely adopted in infrastructure inspection, aerial transportation, and search-and-rescue. For these applications, a human's intention is essential for deciding the topological structure of the drone's flight trajectory. However, poor teaching trajectories and changing environments prevent a simple teach-and-repeat system from being applied flexibly and robustly. In this paper, instead of commanding the drone to precisely follow a teaching trajectory, we propose a method to automatically convert a human-piloted trajectory, which can be arbitrarily jerky, into a topologically equivalent one. The generated trajectory is guaranteed to be smooth, safe, and dynamically feasible, with human-preferable aggressiveness. Also, to avoid unmapped or moving obstacles during flight, a fast local perception method and a sliding-window replanning method are integrated into our system to generate safe and dynamically feasible local trajectories onboard.
|
|
TuDT1 Award Session, Time zone: GMT+1 |
Add to My Program |
Human-Robot Interaction Award Session |
|
|
Co-Chair: Xiao, Xiao | Southern University of Science and Technology |
|
10:00-10:15, Paper TuDT1.1 | Add to My Program |
Can I Pour into It? Robot Imagining Open Containability Affordance of Previously Unseen Objects Via Physical Simulations |
|
Wu, Hongtao | Johns Hopkins University |
Chirikjian, Gregory | Johns Hopkins University |
|
10:15-10:30, Paper TuDT1.2 | Add to My Program |
Collision Detection, Identification, and Localization on the DLR SARA Robot with Sensing Redundancy |
|
Iskandar, Maged | German Aerospace Center (DLR) |
Eiberger, Oliver | German Aerospace Center (DLR) |
Albu-Schäffer, Alin | German Aerospace Center (DLR) |
De Luca, Alessandro | Sapienza University of Rome |
Dietrich, Alexander | German Aerospace Center (DLR) |
|
10:30-10:45, Paper TuDT1.3 | Add to My Program |
Automated Acquisition of Structured, Semantic Models of Manipulation Activities from Human VR Demonstration |
|
Haidu, Andrei | University of Bremen |
Beetz, Michael | University of Bremen |
|
10:45-11:00, Paper TuDT1.4 | Add to My Program |
Reactive Human-To-Robot Handovers of Arbitrary Objects |
|
Yang, Wei | NVIDIA |
Paxton, Chris | NVIDIA Research |
Mousavian, Arsalan | NVIDIA |
Chao, Yu-Wei | NVIDIA |
Cakmak, Maya | University of Washington |
Fox, Dieter | University of Washington |
|
TuDT2 Award Session, Time zone: GMT+1 |
Add to My Program |
Service Robotics Award Session |
|
|
Chair: Minor, Mark | University of Utah |
|
10:00-10:15, Paper TuDT2.1 | Add to My Program |
Tactile SLAM: Real-Time Inference of Shape and Pose from Planar Pushing |
|
Suresh, Sudharshan | Carnegie Mellon University |
Bauza Villalonga, Maria | Massachusetts Institute of Technology |
Yu, Kuan-Ting | XYZ Robotics |
Mangelson, Joshua | Brigham Young University |
Rodriguez, Alberto | Massachusetts Institute of Technology |
Kaess, Michael | Carnegie Mellon University |
|
10:15-10:30, Paper TuDT2.2 | Add to My Program |
Robotic Guide Dog: Leading a Human with Leash-Guided Hybrid Physical Interaction |
|
Xiao, Anxing | Harbin Institute of Technology, Shenzhen |
Tong, Wenzhe | Harbin Institute of Technology, Weihai |
Yang, Lizhi | University of California, Berkeley |
Zeng, Jun | University of California, Berkeley |
Li, Zhongyu | University of California, Berkeley |
Sreenath, Koushil | University of California, Berkeley |
|
10:30-10:45, Paper TuDT2.3 | Add to My Program |
Compact Flat Fabric Pneumatic Artificial Muscle (ffPAM) for Soft Wearable Robotic Devices |
|
Kim, Woojong | KAIST |
Park, Hyunkyu | Korea Advanced Institute of Science and Technology |
Kim, Jung | KAIST |
|
10:45-11:00, Paper TuDT2.4 | Add to My Program |
BADGR: An Autonomous Self-Supervised Learning-Based Navigation System |
|
Kahn, Gregory | University of California, Berkeley |
Abbeel, Pieter | University of California, Berkeley |
Levine, Sergey | University of California, Berkeley |
|
TuDT3 Award Session, Time zone: GMT+1 |
Add to My Program |
Mechatronics and Design Award Session |
|
|
Co-Chair: Tang, Chao | Georgia Institute of Technology |
|
10:00-10:15, Paper TuDT3.1 | Add to My Program |
Soft Hybrid Aerial Vehicle Via Bistable Mechanism |
|
Li, Xuan | University of Pennsylvania |
McWilliams, Jessica | University of Pennsylvania |
Li, Minchen | University of Pennsylvania |
Sung, Cynthia | University of Pennsylvania |
Jiang, Chenfanfu | University of Pennsylvania |
|
10:15-10:30, Paper TuDT3.2 | Add to My Program |
A Versatile Inverse Kinematics Formulation for Retargeting Motions Onto Robots with Kinematic Loops |
|
Schumacher, Christian | Disney Research |
Knoop, Espen | The Walt Disney Company |
Bächer, Moritz | Disney Research |
|
10:30-10:45, Paper TuDT3.3 | Add to My Program |
Multi-Point Orientation Control of Discretely-Magnetized Continuum Manipulators |
|
Richter, Michiel | University of Twente |
Kalpathy Venkiteswaran, Venkatasubramanian | University of Twente |
Misra, Sarthak | University of Twente |
|
10:45-11:00, Paper TuDT3.4 | Add to My Program |
Surface Robots Based on S-Isothermic Surfaces |
|
Iwamoto, Noriyasu | Shinshu University |
Nishikawa, Atsushi | Osaka University |
Arai, Hiroaki | Taisei Corporation |
|