Last updated on May 27, 2019. This conference program is tentative and subject to change.
Technical Program for Tuesday May 21, 2019

TuPL Plenary Session, 210
Plenary Session II
Chair: Desai, Jaydev P. | Georgia Institute of Technology

08:30-09:30, Paper TuPL.1
Opportunities and Challenges for Autonomy in Micro Aerial Vehicles
Kumar, Vijay | University of Pennsylvania
Keywords: Aerial Systems: Applications
Abstract: VIJAY KUMAR is the Nemirovsky Family Dean of Penn Engineering with appointments in the Departments of Mechanical Engineering and Applied Mechanics, Computer and Information Science, and Electrical and Systems Engineering at the University of Pennsylvania. He received his Bachelor of Technology from the Indian Institute of Technology, Kanpur and his Ph.D. from The Ohio State University in 1987. He has been on the faculty in the Department of Mechanical Engineering at the University of Pennsylvania since 1987. Dr. Kumar served as the Deputy Dean for Research in the School of Engineering and Applied Science from 2000 to 2004. He directed the GRASP Laboratory, a multidisciplinary robotics and perception laboratory, from 1998 to 2004. He was the Chairman of the Department of Mechanical Engineering and Applied Mechanics from 2005 to 2008. He then served as the Deputy Dean for Education in the School of Engineering and Applied Science from 2008 to 2012. He served as the assistant director of robotics and cyber-physical systems at the White House Office of Science and Technology Policy (2012-2013). Dr. Kumar is a Fellow of the American Society of Mechanical Engineers (ASME) and the Institute of Electrical and Electronics Engineers (IEEE). He has served on the editorial boards of the IEEE Transactions on Robotics and Automation, IEEE Transactions on Automation Science and Engineering, ASME Journal of Mechanical Design, the ASME Journal of Mechanisms and Robotics, and the Springer Tracts in Advanced Robotics (STAR) series.

TuKN1 Keynote Session, 517ab
Keynote Session III
Chair: Desai, Jaydev P. | Georgia Institute of Technology

09:45-10:30, Paper TuKN1.1
Embracing Failure
Mason, Matthew T. | Carnegie Mellon University
Keywords: Manipulation Planning
Abstract: Matthew T. Mason has been working on robotic manipulation since the 1970s. He earned the BS, MS, and PhD degrees in Computer Science and Artificial Intelligence at MIT, finishing his PhD in 1982. Since that time he has been on the faculty at Carnegie Mellon University. He was Director of the Carnegie Mellon Robotics Institute from 2004 to 2014, and is presently Professor of Robotics and Computer Science. He is a Fellow of the AAAI and a Fellow of the IEEE. He is a winner of the System Development Foundation Prize, the IEEE RAS Pioneer Award, and the 2018 IEEE Robotics and Automation Award.

TuKN2 Keynote Session, 517cd
Keynote Session IV
Chair: Dudek, Gregory | McGill University

09:45-10:30, Paper TuKN2.1
Robotic Dresses and Emotional Interfaces
Wipprecht, Anouk | Self-Employed / Freelance / Hi-Tech Fashion Designer
Keywords: Education Robotics
Abstract: Dutch FashionTech designer Anouk Wipprecht creates designs ahead of her time, combining the latest in science and technology to make fashion an experience that transcends mere appearances. She wants her garments to facilitate and augment the interactions we have with ourselves and our surroundings. Her Spider Dress is a perfect example of this aesthetic: sensors and movable arms on the dress help to create a more defined boundary of personal space while employing a fierce style. Partnering with companies such as Intel, Autodesk, Google, Arduino, Microsoft, Adidas, Cirque du Soleil, Audi, Swarovski, and the 3D printing companies Shapeways and Materialise, she researches how our future would look as we continue to embed technology into what we wear and, more importantly, how this will change our perspective on how we interface with technology.

TuAT1 Interactive Session, 220
PODS: Tuesday Session I

11:00-12:15, Subsession TuAT1-01, 220
Marine Robotics I - 2.1.01 Interactive Session, 6 papers

11:00-12:15, Subsession TuAT1-02, 220
Pose Estimation - 2.1.02 Interactive Session, 6 papers

11:00-12:15, Subsession TuAT1-03, 220
Visual Odometry I - 2.1.03 Interactive Session, 6 papers

11:00-12:15, Subsession TuAT1-04, 220
Space Robotics I - 2.1.04 Interactive Session, 6 papers

11:00-12:15, Subsession TuAT1-05, 220
Deep Learning for Manipulation - 2.1.05 Interactive Session, 6 papers

11:00-12:15, Subsession TuAT1-06, 220
Product Design, Development and Prototyping - 2.1.06 Interactive Session, 6 papers

11:00-12:15, Subsession TuAT1-07, 220
Humanoid Robots IV - 2.1.07 Interactive Session, 6 papers

11:00-12:15, Subsession TuAT1-08, 220
Human-Robot Interaction I - 2.1.08 Interactive Session, 6 papers

11:00-12:15, Subsession TuAT1-09, 220
Perception for Manipulation I - 2.1.09 Interactive Session, 6 papers

11:00-12:15, Subsession TuAT1-10, 220
Intelligent Transportation I - 2.1.10 Interactive Session, 6 papers

11:00-12:15, Subsession TuAT1-11, 220
Medical Robotics V - 2.1.11 Interactive Session, 5 papers

11:00-12:15, Subsession TuAT1-12, 220
Field Robotics II - 2.1.12 Interactive Session, 6 papers

11:00-12:15, Subsession TuAT1-13, 220
Soft Robots II - 2.1.13 Interactive Session, 6 papers

11:00-12:15, Subsession TuAT1-14, 220
Haptics & Interfaces I - 2.1.14 Interactive Session, 6 papers

11:00-12:15, Subsession TuAT1-15, 220
SLAM - Session IV - 2.1.15 Interactive Session, 5 papers

11:00-12:15, Subsession TuAT1-16, 220
Mapping - 2.1.16 Interactive Session, 6 papers

11:00-12:15, Subsession TuAT1-17, 220
Aerial Systems: Mechanisms I - 2.1.17 Interactive Session, 6 papers

11:00-12:15, Subsession TuAT1-18, 220
Aerial Systems: Applications III - 2.1.18 Interactive Session, 6 papers

11:00-12:15, Subsession TuAT1-19, 220
Automation Technology - 2.1.19 Interactive Session, 6 papers

11:00-12:15, Subsession TuAT1-20, 220
Force and Tactile Sensing I - 2.1.20 Interactive Session, 6 papers

11:00-12:15, Subsession TuAT1-21, 220
Social HRI II - 2.1.21 Interactive Session, 6 papers

11:00-12:15, Subsession TuAT1-22, 220
Object Recognition & Segmentation I - 2.1.22 Interactive Session, 6 papers

11:00-12:15, Subsession TuAT1-23, 220
Localization and Estimation - 2.1.23 Interactive Session, 6 papers

11:00-12:15, Subsession TuAT1-24, 220
Under-Actuated Robots - 2.1.24 Interactive Session, 6 papers

11:00-12:15, Subsession TuAT1-25, 220
Human-Robot Interaction II - 2.1.25 Interactive Session, 6 papers

11:00-12:15, Subsession TuAT1-26, 220
Multi-Robot Systems V - 2.1.26 Interactive Session, 6 papers

TuAT1-01 Interactive Session, 220
Marine Robotics I - 2.1.01

11:00-12:15, Paper TuAT1-01.1
Online Estimation of Ocean Current from Sparse GPS Data for Underwater Vehicles
Lee, Ki Myung Brian | University of Technology Sydney
Yoo, Chanyeol | University of Technology Sydney
Hollings, Ben | Blue Ocean Monitoring Ltd
Anstee, Stuart David | Defence Science and Technology Group
Huang, Shoudong | University of Technology Sydney
Fitch, Robert | University of Technology Sydney
Keywords: Marine Robotics, Probability and Statistical Methods, Field Robots
Abstract: Underwater robots are subject to position drift due to the effect of ocean currents and the lack of accurate localisation while submerged. We are interested in exploiting such position drift to estimate the ocean current in the surrounding area, thereby assisting navigation and planning. We present a Gaussian process (GP)-based expectation maximisation (EM) algorithm that estimates the underlying ocean current using sparse GPS data obtained on the surface and dead-reckoned position estimates. We first develop a specialised GP regression scheme that exploits the incompressibility of ocean currents to counteract the underdetermined nature of the problem. We then use the proposed regression scheme in an EM algorithm that estimates the best-fitting ocean current in between each GPS fix. The proposed algorithm is validated in simulation and on a real dataset, and is shown to be capable of reconstructing the underlying ocean current field. We expect to use this algorithm to close the loop between planning and estimation for underwater navigation in unknown ocean currents.
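The regression step described above is easy to make concrete: surfacing drift divided by submerged time gives sparse current observations, which a GP then interpolates. The following numpy sketch is an illustrative reading of the abstract, not the authors' code; the incompressibility-exploiting kernel and the EM loop between GPS fixes are omitted, and all names and hyperparameters are made up.

```python
import numpy as np

def average_current(gps_fix, dead_reckoned, duration_s):
    # Drift between the GPS fix and the dead-reckoned surfacing position,
    # spread over the submerged duration: one sparse current observation.
    return (gps_fix - dead_reckoned) / duration_s

def rbf_kernel(A, B, length=500.0, sigma=0.1):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return sigma ** 2 * np.exp(-0.5 * d2 / length ** 2)

def gp_predict(X_train, y_train, X_query, noise=1e-3):
    # Plain GP regression; the paper's divergence-free kernel and EM
    # refinement over each submerged leg are omitted for brevity.
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    return rbf_kernel(X_query, X_train) @ np.linalg.solve(K, y_train)

# Three surfacing locations (m) with east/north current observations (m/s).
X = np.array([[0.0, 0.0], [800.0, 200.0], [300.0, 900.0]])
y = np.array([[0.12, -0.05], [0.10, -0.02], [0.15, -0.08]])
print(gp_predict(X, y, np.array([[400.0, 400.0]])))
```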

11:00-12:15, Paper TuAT1-01.2
Working towards Adaptive Sensing for Terrain-Aided Navigation
Zhou, Mingxi | University of Rhode Island
Bachmayer, Ralf | University of Bremen
deYoung, Brad | Memorial University
Keywords: Marine Robotics, Localization, Autonomous Vehicle Navigation
Abstract: An adaptive sensing method is presented to control the pinging interval of a downward-looking sonar on an Autonomous Underwater Vehicle. The goal is to conserve energy by adjusting the pinging rate automatically without reducing localization accuracy when using terrain-aided navigation (TAN). In this paper, TAN is implemented using a particle filter and a bias velocity estimator based on a Kalman filter. The adaptation of the sonar pinging interval is determined by the depth variation of the local seafloor topography, which is quantified using a modified Teager-Kaiser energy operator. As a result, more measurements are collected over high-relief regions, and fewer over relatively flat and smooth regions. We evaluated the adaptive sensing method in a simulated environment and applied it to a field data set. The results show that the adaptive sensing method produces improved navigational accuracy compared to missions with fixed sonar pinging rates. In the offline field missions, the energy consumed by the altimeter is reduced to about 30% of that of continuous-sensing missions, where the altimeter pings constantly without switching off.
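The Teager-Kaiser energy operator mentioned above is simple enough to show directly. A sketch follows, using the standard discrete operator rather than the paper's modified one; the mapping to a pinging interval and all gains are invented for illustration.

```python
import numpy as np

def teager_kaiser(x):
    # Discrete Teager-Kaiser energy operator: psi[n] = x[n]^2 - x[n-1] * x[n+1].
    return x[1:-1] ** 2 - x[:-2] * x[2:]

def ping_interval(depths, t_min=0.5, t_max=5.0, gain=50.0):
    # Map local terrain roughness to a sonar pinging interval (bounds and gain
    # are made up): rough terrain -> ping often, flat terrain -> ping rarely.
    z = np.asarray(depths, dtype=float)
    roughness = np.abs(teager_kaiser(z - z.mean())).mean()
    return float(np.clip(t_max - gain * roughness, t_min, t_max))

rough = [50.0, 52.5, 49.0, 53.5, 48.0]  # high-relief seafloor depths (m)
flat = [50.0, 50.1, 50.0, 50.1, 50.0]
print(ping_interval(rough), ping_interval(flat))  # 0.5 s vs. ~4.9 s
```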

11:00-12:15, Paper TuAT1-01.3
Non-Gaussian SLAM Utilizing Synthetic Aperture Sonar
Cheung, Mei Yi | MIT
Fourie, Dehann | Massachusetts Institute of Technology and Woods Hole Oceanographic Institution
Rypkema, Nicholas Rahardiyan | Massachusetts Institute of Technology
Vaz Teixeira, Pedro | Massachusetts Institute of Technology
Schmidt, Henrik | Massachusetts Institute of Technology
Leonard, John | MIT
Keywords: Marine Robotics, SLAM, Field Robots
Abstract: Synthetic Aperture Sonar (SAS) is a technique to improve the spatial resolution from a moving set of receivers by extending the array in time, increasing the effective array length and aperture. This technique is limited by the accuracy of the receiver position estimates, necessitating highly accurate, typically expensive aided-inertial navigation systems for submerged platforms. We leverage simultaneous localization and mapping (SLAM) to fuse acoustic and navigational measurements and obtain accurate pose estimates even without the benefit of absolute positioning for lengthy underwater missions. We demonstrate a method of formulating the well-known SAS problem in a SLAM framework, using acoustic data from hydrophones to simultaneously estimate platform and beacon positions. An empirical probability distribution is computed from a conventional beamformer to correctly account for uncertainty in the acoustic measurements. The non-parametric method relaxes the familiar Gaussian-only assumption currently used in the localization and mapping discipline and fits effectively into a factor graph formulation with conventional factors such as ground-truth priors and odometry. We present results from field experiments performed on the Charles River with an autonomous surface vehicle, which demonstrate simultaneous localization of an unknown acoustic beacon and vehicle positioning, and provide a comparison to GPS ground truth.

11:00-12:15, Paper TuAT1-01.4
Easily Deployable Underwater Acoustic Navigation System for Multi-Vehicle Environmental Sampling Applications
Quraishi, Anwar Ahmad | École Polytechnique Fédérale de Lausanne (EPFL)
Bahr, Alexander | École Polytechnique Fédérale de Lausanne (EPFL)
Schill, Felix | École Polytechnique Fédérale de Lausanne (EPFL)
Martinoli, Alcherio | École Polytechnique Fédérale de Lausanne (EPFL)
Keywords: Marine Robotics, Field Robots, Environment Monitoring and Management
Abstract: Water as a medium poses a number of challenges for robots, limiting the progress of research in underwater robotics vis-a-vis ground or aerial robotics. The primary challenge is that satellite-based positioning and radio communication are unusable under water due to the high attenuation of electromagnetic waves. We have developed miniature, agile, easy-to-carry-and-deploy Autonomous Underwater Vehicles (AUVs) equipped with a suite of sensors for underwater environmental sensing. We previously demonstrated adaptive sampling and feature tracking, and gathered data from a lake for limnological research, with the AUV performing inertial navigation. In this paper, we demonstrate a new underwater acoustic positioning system, which allows on-board estimation of the AUV position. Our system uses absolute time information from GNSS for initial clock synchronization and uses one-way travel time for range measurements, which makes it scalable in the number of robots. It is easily deployable and does not rely on any installed infrastructure in the environment. We describe the various hardware and software components of our system, and present results from experiments in Lake Geneva.
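One-way-travel-time ranging, the key to the system's scalability, reduces to a one-liner once clocks are synchronized. A hypothetical sketch (the sound-speed constant and timings are illustrative):

```python
SPEED_OF_SOUND = 1480.0  # m/s, a rough fresh-water value; varies with temperature

def owtt_range(t_emit, t_arrive):
    # With clocks synchronized (here via GNSS before diving), a timestamped
    # ping yields a range from reception alone, so any number of listening
    # robots can localize from the same broadcast.
    return SPEED_OF_SOUND * (t_arrive - t_emit)

print(owtt_range(0.0, 0.135))  # a ping heard 135 ms after emission -> ~200 m
```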

11:00-12:15, Paper TuAT1-01.5
Underwater Terrain Reconstruction from Forward-Looking Sonar Imagery
Wang, Jinkun | Stevens Institute of Technology
Shan, Tixiao | Stevens Institute of Technology
Englot, Brendan | Stevens Institute of Technology
Keywords: Marine Robotics, Mapping, SLAM
Abstract: In this paper, we propose a novel approach for underwater simultaneous localization and mapping using a multibeam imaging sonar for 3D terrain mapping tasks. The high levels of noise and the absence of elevation angle information in sonar images present major challenges for data association and accurate 3D mapping. Instead of repeatedly projecting extracted features into Euclidean space, we apply optical flow within bearing-range images for tracking extracted features. To deal with degenerate cases, such as when tracking is interrupted by noise, we model the subsea terrain as a Gaussian Process random field on a Chow–Liu tree. Terrain factors are incorporated into the factor graph, aimed at smoothing the terrain elevation estimate. We demonstrate the performance of our proposed algorithm in a simulated environment, which shows that terrain factors effectively reduce estimation error. We also show ROV experiments performed in a variable-elevation tank environment, where we are able to construct a descriptive and smooth height estimate of the tank bottom.

11:00-12:15, Paper TuAT1-01.6
Through-Water Stereo SLAM with Refraction Correction for AUV Localization
Suresh, Sudharshan | Carnegie Mellon University
Westman, Eric | Carnegie Mellon University
Kaess, Michael | Carnegie Mellon University
Keywords: Localization, SLAM, Marine Robotics
Abstract: In this work, we propose a novel method for underwater localization using natural visual landmarks above the water surface. High-accuracy, drift-free pose estimates are necessary for inspection tasks in underwater indoor environments, such as spent nuclear fuel pools. Inaccuracies in robot localization degrade the quality of the obtained map. Our framework uses sparse features obtained via an onboard upward-facing stereo camera to build a global ceiling feature map. However, adopting the pinhole camera model without explicitly modeling light refraction at the water-air interface introduces a systematic error in observations. Therefore, we use refraction-corrected projection and triangulation functions to obtain true landmark estimates. The SLAM framework jointly optimizes vehicle odometry and point landmarks in a global factor graph using an incremental smoothing and mapping backend. To the best of our knowledge, this is the first method that observes in-air landmarks through water for underwater localization. We evaluate our method via both simulation and real-world experiments in a test-tank environment. The results show accurate localization across various challenging scenarios.
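The refraction correction at the core of this abstract follows Snell's law at the water-air interface. A self-contained sketch of the standard vector form (not the authors' projection/triangulation code; the geometry and numbers are illustrative):

```python
import numpy as np

N_WATER, N_AIR = 1.333, 1.0  # refractive indices

def refract(d, n, eta):
    # Vector form of Snell's law: refract unit direction d at an interface with
    # unit normal n (pointing toward the incoming ray), where eta = n1 / n2.
    cos_i = -np.dot(n, d)
    k = 1.0 - eta ** 2 * (1.0 - cos_i ** 2)
    if k < 0.0:
        return None  # total internal reflection
    return eta * d + (eta * cos_i - np.sqrt(k)) * n

# An upward ray from a submerged camera, 30 degrees off vertical, crossing a
# flat water-air interface whose normal points down into the water.
d = np.array([np.sin(np.radians(30)), 0.0, np.cos(np.radians(30))])
t = refract(d, np.array([0.0, 0.0, -1.0]), N_WATER / N_AIR)
print(np.degrees(np.arcsin(t[0])))  # ~41.8 deg: a naive pinhole model is biased
```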

TuAT1-02 Interactive Session, 220
Pose Estimation - 2.1.02

11:00-12:15, Paper TuAT1-02.1
Detect in RGB, Optimize in Edge: Accurate 6D Pose Estimation for Texture-Less Industrial Parts
Zhang, Haoruo | Shanghai Jiao Tong University, Research Institute of Robotics
Cao, Qixin | Shanghai Jiao Tong University
Keywords: Object Detection, Segmentation and Categorization, Computer Vision for Automation, Perception for Grasping and Manipulation
Abstract: In order to solve the robotic bin-picking problem in many industrial applications, accurate 6D object pose estimation is one of the fundamental technologies. This paper presents a method for accurate 6D pose estimation from a single RGB image for texture-less industrial parts. These objects are common but still challenging to deal with, because their poor surface texture and brightness make it difficult to compute discriminative local appearance descriptors. The proposed method consists of two stages: a detection stage and an optimization stage. First, all known objects in the RGB image are detected with 2D bounding boxes via a tiny convolutional neural network. The second stage then optimizes the 6D pose in the edge image given several coarse initializations, which are generated from the edge image via a hypothesis-evaluation scheme. The proposed method achieves state-of-the-art results on texture-less industrial parts for RGB input. In practical experiments, it proves accurate and robust enough to be applied on a robotic manipulation platform to complete a simple assembly task.

11:00-12:15, Paper TuAT1-02.2
POSEAMM: A Unified Framework for Solving Pose Problems Using an Alternating Minimization Method
Campos, João | ISR-Lisbon
Cardoso, João | Coimbra Polytechnic - ISEC
Miraldo, Pedro | KTH Royal Institute of Technology, Stockholm
Keywords: Computer Vision for Automation, Omnidirectional Vision, Localization
Abstract: Pose estimation is one of the most important problems in computer vision. It can be divided into two categories - absolute and relative - and may involve two types of camera models: central and non-central. State-of-the-art methods have been designed to solve these problems separately. This paper presents a unified framework that is able to solve any pose problem by alternating optimization between two sets of parameters, rotation and translation. To make this possible, it is necessary to define an objective function that captures the problem at hand. Since the objective function depends on both the rotation and the translation, it cannot be solved as a simple minimization problem; hence the use of Alternating Minimization methods, in which the function is alternately minimized with respect to the rotation and the translation. We show how to use our framework in three distinct pose problems. Our methods are then benchmarked with both synthetic and real data, showing a better balance between computational time and accuracy.
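The alternation pattern the framework relies on can be illustrated on the simplest pose instance, point-set registration with known correspondences, where both sub-steps have closed forms. This numpy sketch is only an analogy to the rotation/translation alternation, not the POSEAMM objectives themselves:

```python
import numpy as np

def best_translation(P, Q, R):
    # With R fixed, the least-squares translation is a difference of centroids.
    return Q.mean(0) - R @ P.mean(0)

def best_rotation(P, Q, t):
    # With t fixed, R solves an orthogonal Procrustes problem via the SVD.
    U, _, Vt = np.linalg.svd((Q - t).T @ P)
    D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])  # guard against reflections
    return U @ D @ Vt

def alternating_pose(P, Q, iters=20):
    # Alternate the two closed-form updates to minimize sum ||R p + t - q||^2.
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        t = best_translation(P, Q, R)
        R = best_rotation(P, Q, t)
    return R, t

rng = np.random.default_rng(1)
theta = np.radians(25)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
P = rng.random((10, 3)) - 0.5                   # roughly centered point cloud
Q = P @ R_true.T + np.array([0.3, -0.2, 0.5])
R_est, t_est = alternating_pose(P, Q)
print(np.linalg.norm(P @ R_est.T + t_est - Q))  # residual shrinks toward zero
```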

11:00-12:15, Paper TuAT1-02.3
Learning Object Localization and 6D Pose Estimation from Simulation and Weakly Labeled Real Images
Mercier, Jean-Philippe | Université Laval
Mitash, Chaitanya | Rutgers University
Giguere, Philippe | Université Laval
Boularias, Abdeslam | Carnegie Mellon University
Keywords: Computer Vision for Automation, Computer Vision for Manufacturing, RGB-D Perception
Abstract: Accurate pose estimation is often a requirement for robust robotic grasping and manipulation of objects placed in cluttered, tight environments, such as a shelf with multiple objects. When deep learning approaches are employed to perform this task, they typically require a large amount of training data. However, obtaining precise 6-degree-of-freedom ground-truth poses can be prohibitively expensive. This work therefore proposes an architecture and a training process to solve this issue. More precisely, we present a weak object detector that enables localizing objects and estimating their 6D poses in cluttered and occluded scenes. To minimize the human labor required for annotations, the proposed detector is trained with a combination of synthetic and a few weakly annotated real images, for which a human provides only a list of the objects present in each image (no time-consuming annotations, such as bounding boxes, segmentation masks or object poses). To close the gap between real and synthetic images, we use multiple domain classifiers trained adversarially. During the inference phase, the resulting class-specific heatmaps of the weak detector are used to guide the search for 6D object poses. Our proposed approach is evaluated on several publicly available datasets for pose estimation. The results clearly indicate that this approach could provide an efficient way toward fully automating the training process of computer vision models used in robotics.

11:00-12:15, Paper TuAT1-02.4
Stampede: A Discrete-Optimization Method for Solving Pathwise-Inverse Kinematics
Rakita, Daniel | University of Wisconsin-Madison
Mutlu, Bilge | University of Wisconsin-Madison
Gleicher, Michael | University of Wisconsin-Madison
Keywords: Kinematics, Motion and Path Planning
Abstract: We present a discrete-optimization technique for finding feasible robot arm trajectories that pass through provided 6-DOF Cartesian-space end-effector paths with high accuracy, a problem called pathwise-inverse kinematics. The output from our method consists of a path function of joint angles that best follows the provided end-effector path function, given some definition of "best". Our method, called Stampede, casts the robot motion translation problem as a discrete-space graph-search problem where the nodes in the graph are individually solved for using non-linear optimization; framing the problem in such a way gives rise to a well-structured graph that affords an effective best-path calculation using an efficient dynamic-programming algorithm. We present techniques for sampling configuration space, such as diversity sampling and adaptive sampling, to construct the search space in the graph. Through an evaluation, we show that our approach performs well in finding smooth, feasible, collision-free robot motions that match the input end-effector trace with very high accuracy, while alternative approaches, such as a state-of-the-art per-frame inverse kinematics solver and a global non-linear trajectory-optimization approach, performed unfavorably.
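The best-path step of such a layered graph search is a short dynamic program. A sketch under strong simplifications (candidate IK solutions per waypoint are given rather than solved for, feasibility pruning is skipped, and all names are hypothetical):

```python
import numpy as np

def best_path_dp(layers, cost):
    # Dynamic program over a layered graph: layers[k] holds candidate joint
    # configurations that realize waypoint k; cost(a, b) scores a transition
    # (e.g., joint-space distance). Returns the minimum-cost configuration path.
    best, back = [np.zeros(len(layers[0]))], []
    for k in range(1, len(layers)):
        prev, cur = layers[k - 1], layers[k]
        c = np.array([[best[-1][i] + cost(prev[i], cur[j])
                       for i in range(len(prev))] for j in range(len(cur))])
        back.append(c.argmin(axis=1))
        best.append(c.min(axis=1))
    idx = [int(best[-1].argmin())]          # backtrack from the cheapest end
    for ptrs in reversed(back):
        idx.append(int(ptrs[idx[-1]]))
    return [layers[k][i] for k, i in enumerate(reversed(idx))]

layers = [[np.array([0.0, 0.0]), np.array([1.0, 1.0])],
          [np.array([0.1, 0.1]), np.array([2.0, 2.0])],
          [np.array([0.2, 0.2])]]
print(best_path_dp(layers, lambda a, b: float(np.linalg.norm(a - b))))
```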

11:00-12:15, Paper TuAT1-02.5
Reconstructing Human Hand Pose and Configuration Using a Fixed-Base Exoskeleton
Pereira, Aaron | Technische Universität München
Stillfried, Georg | German Aerospace Center (DLR)
Baker, Thomas | Technische Universität München
Schmidt, Annika | German Aerospace Center (DLR)
Maier, Annika | German Aerospace Center (DLR)
Pleintinger, Benedikt | Institute of Robotics and Mechatronics, German Aerospace Center (DLR)
Chen, Zhaopeng | Institute of Robotics and Mechatronics, German Aerospace Center (DLR)
Hulin, Thomas | German Aerospace Center (DLR)
Lii, Neal Y. | German Aerospace Center (DLR)
Keywords: Wearable Robots, Virtual Reality and Interfaces, Haptics and Haptic Interfaces
Abstract: Accurate real-time estimation of the pose and configuration of the human hand attached to a dexterous haptic input device is crucial to improve the interaction possibilities for teleoperation and in virtual and augmented reality. In this paper, we present an approach to reconstruct the pose of the human hand and the joint angles of the fingers when wearing a novel fixed-base (grounded) hand exoskeleton. Using a kinematic model of the human hand built from MRI data, we can reconstruct the hand pose and joint angles without sensors on the human hand, from attachment points on the first three fingers and the palm. We test the accuracy of our approach using motion capture as a ground truth. This reconstruction can be used to determine contact geometry and force-feedback from virtual or remote objects in virtual reality or teleoperation.

11:00-12:15, Paper TuAT1-02.6
Learning Pose Estimation for High-Precision Robotic Assembly Using Simulated Depth Images
Litvak, Yuval | Ben-Gurion University of the Negev
Biess, Armin | Ben-Gurion University of the Negev
Bar-Hillel, Aharon | Ben-Gurion University of the Negev
Keywords: Computer Vision for Manufacturing, Perception for Grasping and Manipulation, Deep Learning in Robotics and Automation
Abstract: Most industrial robotic assembly tasks today require fixed initial conditions for successful assembly. These constraints induce high production costs and low adaptability to new tasks. In this work we aim towards flexible and adaptable robotic assembly by using 3D CAD models for all parts to be assembled. We focus on a generic assembly task - the Siemens Innovation Challenge - in which a robot needs to assemble a gear-like mechanism with high precision into an operating system. To obtain the millimeter accuracy required for this task and industrial settings alike, we use a depth camera mounted near the robot's end-effector. We present a high-accuracy two-stage pose estimation procedure based on deep convolutional neural networks, which includes detection, pose estimation, refinement, and handling of near and full symmetries of parts. The networks are trained on simulated depth images with means to ensure successful transfer to the real robot. We obtain an average pose estimation error of 2.16 millimeters and 0.64 degrees, leading to a 91% success rate for robotic assembly of randomly distributed parts. To the best of our knowledge, this is the first time that the Siemens Innovation Challenge has been fully addressed, with all the parts assembled with high success rates.

TuAT1-03 Interactive Session, 220
Visual Odometry I - 2.1.03

11:00-12:15, Paper TuAT1-03.1
Aided Inertial Navigation: Unified Feature Representations and Observability Analysis
Yang, Yulin | University of Delaware
Huang, Guoquan | University of Delaware
Keywords: Localization, SLAM, Visual-Based Navigation
Abstract: Extending our recent work on the observability analysis of aided inertial navigation systems (INS) using homogeneous geometric features (points, lines, and planes), in this paper we complete the analysis for general aided INS using different combinations of geometric features. We analytically show that the linearized aided INS with different feature combinations generally possesses the same observability properties as with homogeneous features, i.e., 4 unobservable directions corresponding to the global yaw rotation and the global position of the sensor platform. During the analysis, we propose a novel minimal representation of line features, the "closest point" parameterization, which uses a 4D Euclidean vector to describe a line and is proved to preserve the same observability properties. Based on that, for the first time, we provide two sets of unified representations for points, lines and planes, the quaternion form and the closest point (CP) form, and perform extensive observability analysis with analytically computed Jacobians for these unified parameterizations. We validate the proposed CP representations and observability analysis with Monte-Carlo simulations, in which EKF-based vision-aided INS (VINS) with combinations of geometric features in CP form are developed and compared.

11:00-12:15, Paper TuAT1-03.2
A Linear-Complexity EKF for Visual-Inertial Navigation with Loop Closures
Geneva, Patrick | University of Delaware
Eckenhoff, Kevin | University of Delaware
Huang, Guoquan | University of Delaware
Keywords: Localization, SLAM, Mapping
Abstract: Enabling real-time visual-inertial navigation in unknown environments while achieving bounded-error performance holds great potential for robotic applications. To this end, in this paper, we propose a novel linear-complexity EKF for visual-inertial localization, which can efficiently utilize loop closure constraints, thus allowing for long-term persistent navigation. The key idea is to adapt the Schmidt-Kalman formulation within the multi-state constraint Kalman filter (MSCKF) framework, in which we selectively include keyframes as nuisance parameters in the state vector for loop closures but do not update their estimates and covariance, in order to save computation while still tracking their cross-correlations with the current navigation states. As a result, the proposed Schmidt-MSCKF has only O(n) computational complexity while still incorporating loop closures into the system. The proposed approach is validated extensively on large-scale real-world experiments, showing significant performance improvements when compared to the standard MSCKF, while only incurring marginal computational overhead.
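The Schmidt-Kalman idea referenced here, updating only the active states while merely tracking cross-covariances with the nuisance keyframe states, has a compact textbook form. A generic sketch, not the paper's MSCKF implementation:

```python
import numpy as np

def schmidt_update(x, P, z, h, Hx, Hn, R):
    # One Schmidt-Kalman measurement update on a state [active; nuisance].
    # Only the active block is corrected, but its cross-covariance with the
    # nuisance (keyframe) block is maintained, which is what keeps the filter
    # consistent at low cost per loop closure.
    H = np.hstack([Hx, Hn])
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)      # optimal gain for the full state...
    K[Hx.shape[1]:, :] = 0.0            # ...zeroed for the nuisance states
    IKH = np.eye(len(x)) - K @ H
    P = IKH @ P @ IKH.T + K @ R @ K.T   # Joseph form is valid for any gain
    return x + K @ (z - h), P

# Tiny example: 3 active states, 2 nuisance keyframe states, scalar measurement.
x, P = np.zeros(5), np.eye(5)
x, P = schmidt_update(x, P, z=np.array([0.2]), h=np.array([0.0]),
                      Hx=np.array([[1.0, 0.0, 0.0]]), Hn=np.array([[0.5, 0.0]]),
                      R=np.array([[0.01]]))
print(x[:3], P[3:, 3:])  # nuisance estimates and their covariance are untouched
```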

11:00-12:15, Paper TuAT1-03.3
Sensor-Failure-Resilient Multi-IMU Visual-Inertial Navigation
Eckenhoff, Kevin | University of Delaware
Geneva, Patrick | University of Delaware
Huang, Guoquan | University of Delaware
Keywords: Localization, SLAM, Failure Detection and Recovery
Abstract: In this paper, we present a real-time multi-IMU visual-inertial navigation system (mi-VINS) that utilizes the information from multiple inertial measurement units (IMUs) and thus is resilient to IMU sensor failures. In particular, in the proposed mi-VINS formulation, one of the IMUs serves as the "base" of the system, while the rest act as auxiliary sensors aiding state estimation. A key advantage of this architecture is the ability to seamlessly "promote" an auxiliary IMU to a new base, for example upon detection of a base IMU failure, thus avoiding the single point of sensor failure seen in conventional VINS. Moreover, in order to properly fuse the information of multiple IMUs, both the spatial (relative pose) and temporal (time offset) calibration parameters between each sensor and the base IMU are estimated online. The proposed mi-VINS with online spatial and temporal calibration is validated in both simulations and real-world experiments, and is shown to provide accurate localization and calibration even in scenarios with IMU sensor failures.

11:00-12:15, Paper TuAT1-03.4
Learning Monocular Visual Odometry through Geometry-Aware Curriculum Learning
Saputra, Muhamad Risqi U. | University of Oxford
Porto Buarque de Gusmão, Pedro | University of Oxford
Wang, Sen | Edinburgh Centre for Robotics, Heriot-Watt University
Markham, Andrew | University of Oxford
Trigoni, Niki | University of Oxford
Keywords: Localization, Visual Learning, Deep Learning in Robotics and Automation
Abstract: Inspired by the cognitive process of humans and animals, Curriculum Learning (CL) trains a model by gradually increasing the difficulty of the training data. In this paper, we study whether CL can be applied to complex geometry problems like estimating monocular Visual Odometry (VO). Unlike existing CL approaches, we present a novel CL strategy for learning the geometry of monocular VO by gradually making the learning objective more difficult during training. To this end, we propose a novel geometry-aware objective function by jointly optimizing relative and composite transformations over small windows via a bounded pose regression loss. A cascaded optical flow network followed by a recurrent network with a differentiable windowed composition layer, termed CL-VO, is devised to learn the proposed objective. Evaluation on three real-world datasets shows the superior performance of CL-VO over state-of-the-art feature-based and learning-based VO methods.

11:00-12:15, Paper TuAT1-03.5
Visual-Odometric Localization and Mapping for Ground Vehicles Using SE(2)-XYZ Constraints
Zheng, Fan | The Chinese University of Hong Kong
Liu, Yunhui | The Chinese University of Hong Kong
Keywords: Localization, SLAM, Sensor Fusion
Abstract: This paper focuses on the localization and mapping problem on ground vehicles using odometric and monocular visual sensors. To improve the accuracy of vision based estimation on ground vehicles, researchers have exploited the constraint of approximately planar motion, and usually implemented it as a stochastic constraint on an SE(3) pose. In this paper, we propose a simpler algorithm that directly parameterizes the ground vehicle poses on SE(2). The out-of-SE(2) motion perturbations are not neglected, but incorporated into an integrated noise term of a novel SE(2)-XYZ constraint, which associates an SE(2) pose and a 3D landmark via the image feature measurement. For odometric measurement processing, we also propose an efficient preintegration algorithm on SE(2). Utilizing these constraints, a complete visual-odometric localization and mapping system is developed, in a commonly used graph optimization structure. Its superior performance in accuracy and robustness is validated by real-world experiments in industrial indoor environments.
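Parameterizing poses directly on SE(2) and preintegrating odometry between keyframes can be illustrated compactly. A sketch of the idea only; the paper's SE(2) preintegration additionally propagates the increment's covariance, and the SE(2)-XYZ measurement model is not shown:

```python
import numpy as np

def se2_compose(a, b):
    # Compose two SE(2) poses given as (x, y, theta).
    x, y, th = a
    c, s = np.cos(th), np.sin(th)
    return np.array([x + c * b[0] - s * b[1],
                     y + s * b[0] + c * b[1],
                     np.arctan2(np.sin(th + b[2]), np.cos(th + b[2]))])

def preintegrate_se2(increments):
    # Fold many relative odometry increments (dx, dy, dtheta) into a single
    # SE(2) constraint between two keyframes, so the graph gets one factor
    # instead of one per odometry tick. (Covariance propagation omitted.)
    T = np.zeros(3)
    for d in increments:
        T = se2_compose(T, np.asarray(d, dtype=float))
    return T

steps = [(0.1, 0.0, np.radians(2))] * 10  # a gentle arc of small odometry steps
print(preintegrate_se2(steps))            # net relative pose over the interval
```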

11:00-12:15, Paper TuAT1-03.6
Keyframe-Based Direct Thermal–Inertial Odometry
Khattak, Shehryar | University of Nevada, Reno
Papachristos, Christos | University of Nevada, Reno
Alexis, Kostas | University of Nevada, Reno
Keywords: Localization, Sensor Fusion, Field Robots
Abstract: This paper proposes an approach for fusing direct radiometric data from a thermal camera with inertial measurements to extend the capabilities of aerial robots navigating GPS-denied and visually degraded environments, in darkness and in the presence of airborne obscurants such as dust, fog and smoke. An optimization-based approach is developed that jointly minimizes the re-projection error of 3D landmarks and inertial measurement errors. The developed solution is extensively verified against ground truth in an indoor laboratory setting, as well as inside an underground mine under severely visually degraded conditions.

TuAT1-04 Interactive Session, 220
Space Robotics I - 2.1.04

11:00-12:15, Paper TuAT1-04.1
On Parameter Estimation of Space Manipulator Systems with Flexible Joints Using the Energy Balance
Nanos, Kostas | National Technical University of Athens
Papadopoulos, Evangelos | National Technical University of Athens
Keywords: Space Robotics and Automation, Flexible Robots, Dynamics
Abstract: The on-orbit parameter estimation of space manipulator systems whose manipulators are subject to joint flexibilities is studied. To improve path planning and tracking capabilities, advanced control strategies that benefit from knowledge of the system parameters are required. These parameters include the system inertial parameters as well as the stiffness and damping parameters that describe the joint flexibilities. During operation, some of these parameters may change or be unknown. Estimation methods based on the equations of motion are sensitive to noise, while methods based on angular momentum conservation are tolerant to noise but cannot estimate the parameters that describe joint flexibilities. A parameter estimation method based on the energy balance, applied during the motion of a flexible-joint space manipulator system in the free-floating mode, is developed. The method is tolerant to noise and can reconstruct the full system dynamics. It is shown that the parameters estimated by the proposed method can describe the system dynamics fully. The developed method is valid for spatial systems; it is illustrated on a planar 7 degrees-of-freedom (DoF) example system.

11:00-12:15, Paper TuAT1-04.2
Coordinated Control of Spacecraft's Attitude and End-Effector for Space Robots
Giordano, Alessandro Massimo | German Aerospace Center (DLR)
Ott, Christian | German Aerospace Center (DLR)
Albu-Schäffer, Alin | German Aerospace Center (DLR)
Keywords: Space Robotics and Automation, Motion Control, Dynamics
Abstract: This paper addresses the coordinated control of the spacecraft's attitude and the end-effector pose of a manipulator-equipped space robot. A controller is proposed to simultaneously regulate the spacecraft's attitude, the global center-of-mass (CoM), and the end-effector pose. The control is based on a triangular actuation decomposition that decouples the end-effector task from the spacecraft's force actuator, increasing fuel efficiency. The strategy is validated in hardware using a robotic motion simulator composed of a seven degrees-of-freedom (DOF) arm mounted on a 6DOF base. The trade-off between control requirements and fuel consumption is discussed.

11:00-12:15, Paper TuAT1-04.3
Central Pattern Generators Control of Momentum Driven Compliant Structures
Bonardi, Stephane | Institute of Space and Astronautical Science (ISAS), Japan Aerospace Exploration Agency
Romanishin, John | MIT
Rus, Daniela | MIT
Kubota, Takashi | JAXA ISAS
Keywords: Space Robotics and Automation, Motion Control, Compliance and Impedance Control
Abstract: We introduce the concept of Momentum Driven Structures (MDS), made of inertially actuated units linked together by compliant elements, as a potential solution for rough-environment exploration. We propose a control method for MDS based on the bio-inspired concept of the Central Pattern Generator (CPG) and study in simulation the impact of compliance distribution on locomotion performance using population-based optimization techniques. Our results suggest that compliant structures outperform their rigid counterparts in terms of distance traveled. In addition, we show that co-evolved structures perform only marginally better than their control-only optimized equivalents, highlighting the fact that compliance modulation may not be a significant asset in such experiments, considering the related hardware complexity it introduces.

11:00-12:15, Paper TuAT1-04.4
Rover-IRL: Inverse Reinforcement Learning with Soft Value Iteration Networks for Planetary Rover Path Planning
Pflueger, Max | University of Southern California
Agha-mohammadi, Ali-akbar | NASA-JPL, Caltech
Sukhatme, Gaurav | University of Southern California
Keywords: Space Robotics and Automation, Deep Learning in Robotics and Automation
Abstract: Planetary rovers, such as those currently on Mars, face difficult path planning problems, both before landing during the mission planning stages as well as once on the ground. In this work we present a new approach to these planning problems based on inverse reinforcement learning (IRL) using deep convolutional networks and value iteration networks as important internal structures. Value iteration networks are an approximation of the value iteration (VI) algorithm implemented with convolutional neural networks to make VI fully differentiable. We propose a modification to the value iteration recurrence, referred to as the soft value iteration network (SVIN). SVIN is designed to produce more effective training gradients through the value iteration network. It relies on an internal soft policy model, where the policy is represented with a probability distribution over all possible actions, rather than a deterministic policy that returns only the best action. We demonstrate the effectiveness of our proposed architecture in both a grid world dataset as well as a highly realistic synthetic dataset generated from currently deployed rover mission planning tools and real Mars imagery.
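The soft value iteration recurrence is a one-line change to the standard backup: replace the max over actions with a softmax-weighted expectation so that gradients can flow through all actions. An illustrative tabular reconstruction (not the authors' network code):

```python
import numpy as np

def soft_value_iteration(R, P, gamma=0.95, iters=200):
    # Tabular soft value iteration: back up the softmax-weighted expectation of
    # Q instead of its max, mirroring the SVIN recurrence described above.
    # R: (S, A) rewards; P: (A, S, S) transition probabilities.
    S, A = R.shape
    V = np.zeros(S)
    for _ in range(iters):
        Q = R + gamma * np.einsum('ast,t->sa', P, V)
        pi = np.exp(Q - Q.max(axis=1, keepdims=True))   # softmax policy
        pi /= pi.sum(axis=1, keepdims=True)
        V = (pi * Q).sum(axis=1)                        # soft backup, not max
    return V, Q

# Two-state, two-action toy MDP: action 0 stays put, action 1 swaps states.
R = np.array([[0.0, 1.0], [0.5, 0.0]])
P = np.array([[[1.0, 0.0], [0.0, 1.0]],
              [[0.0, 1.0], [1.0, 0.0]]])
V, Q = soft_value_iteration(R, P)
print(V)
```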

11:00-12:15, Paper TuAT1-04.5
Contact-Event-Triggered Mode Estimation for Dynamic Rigid Body Impedance-Controlled Capture
Kato, Hiroki | Japan Aerospace Exploration Agency
Hirano, Daichi | Japan Aerospace Exploration Agency
Ota, Jun | The University of Tokyo
Keywords: Space Robotics and Automation, Contact Modeling, Grasping
Abstract: This paper presents a contact-event-triggered filter using only a force-torque sensor with impedance control for the capture of non-cooperative, rotating, heavy objects. Contact events are modeled for prediction, and detected to trigger the particle filter's update process. By combining these features, a computationally efficient, contact-event-triggered filter is proposed. For our purpose of capture using impedance control, the expected contact events, collisions and sliding, are defined for prediction and detection. This novel method is implemented on an air-bearing robotic system and demonstrated its superiority with the highest success rate (100%) for sliding contact mode cases, whereas the previous method could only yield a success rate of 87.9%. The computational cost is shown to be modest, with a computation time of 4.2 milliseconds on average and 8.3 milliseconds at worst.

11:00-12:15, Paper TuAT1-04.6
Multi-Rate Tracking Control for a Space Robot on a Controlled Satellite: A Passivity-Based Strategy
De Stefano, Marco | German Aerospace Center (DLR)
Mishra, Hrishik | German Aerospace Center (DLR)
Balachandran, Ribin | German Aerospace Center (DLR)
Lampariello, Roberto | German Aerospace Center (DLR)
Ott, Christian | German Aerospace Center (DLR)
Secchi, Cristian | University of Modena and Reggio Emilia
Keywords: Space Robotics and Automation, Multi-Robot Systems
Abstract: In this work we design a novel control strategy for a space manipulator operating on a controlled base. The proposed controllers achieve tracking of the end-effector and regulation of the base. In particular, we focus on the effects of the different sampling rates of the manipulator and base controllers, which can generate stability issues. These effects are analysed from an energetic perspective, and passivity-based controllers are designed for the base and the manipulator to avoid instability. The method is validated with simulations and experiments on a robotic facility, the OOS-Sim.

TuAT1-05 Interactive Session, 220
Deep Learning for Manipulation - 2.1.05

11:00-12:15, Paper TuAT1-05.1
Leveraging Contact Forces for Learning to Grasp
Merzic, Hamza | Herr
Bogdanovic, Miroslav | Max Planck Institute for Intelligent Systems
Kappler, Daniel | X (Google)
Righetti, Ludovic | New York University
Bohg, Jeannette | Stanford University
Keywords: Deep Learning in Robotics and Automation, Grasping, Sensor-based Control
Abstract: Grasping objects under uncertainty remains an open problem in robotics research. This uncertainty is often due to noisy or partial observations of the object pose or shape. To enable a robot to react appropriately to unforeseen effects, it is crucial that it continuously takes sensor feedback into account. While visual feedback is important for inferring a grasp pose and reaching for an object, contact feedback offers valuable information during manipulation and grasp acquisition. In this paper, we use model-free deep reinforcement learning to synthesize control policies that exploit contact sensing to generate robust grasping under uncertainty. We demonstrate our approach on a multi-fingered hand that exhibits more complex finger coordination than the commonly used two-fingered grippers. We conduct extensive experiments in order to assess the performance of the learned policies, with and without contact sensing. While it is possible to learn grasping policies without contact sensing, our results suggest that contact feedback allows for a significant improvement of grasping robustness under object pose uncertainty and for objects with a complex shape.

11:00-12:15, Paper TuAT1-05.2
Learning Latent Space Dynamics for Tactile Servoing
Sutanto, Giovanni | University of Southern California
Ratliff, Nathan | Lula Robotics Inc
Sundaralingam, Balakumar | University of Utah
Chebotar, Yevgen | University of Southern California
Su, Zhe | University of Southern California
Handa, Ankur | IIIT Hyderabad
Fox, Dieter | University of Washington
Keywords: Deep Learning in Robotics and Automation, Force and Tactile Sensing, Model Learning for Control
Abstract: To achieve dexterous robotic manipulation, we need to endow our robot with tactile feedback capability, i.e. the ability to drive action based on tactile sensing. In this paper we specifically address the challenge of tactile servoing: given the current tactile sensing and a target/goal tactile sensing, memorized from a successful task execution in the past, what is the action that will bring the current tactile sensing closer to the target tactile sensing at the next time step? We develop a data-driven approach that acquires a dynamics model for tactile servoing by learning from demonstration. Moreover, our method represents the tactile sensing information as lying on a surface, or a 2D manifold, and performs manifold learning, making it applicable to any tactile skin geometry. As a proof of concept, we evaluate our method on a robot equipped with a tactile finger.

11:00-12:15, Paper TuAT1-05.3
PointNetGPD: Detecting Grasp Configurations from Point Sets
Liang, Hongzhuo | University of Hamburg
Ma, Xiaojian | Tsinghua University
Li, Shuang | University of Hamburg
Görner, Michael | University of Hamburg
Tang, Song | University of Hamburg
Fang, Bin | Tsinghua University
Sun, Fuchun | Tsinghua University
Zhang, Jianwei | University of Hamburg
Keywords: Deep Learning in Robotics and Automation, Grasping, Perception for Grasping and Manipulation
Abstract: In this paper, we propose an end-to-end grasp evaluation model to address the challenging problem of localizing robot grasp configurations directly from a point cloud. Compared to recent grasp evaluation metrics that are based on handcrafted depth features and a convolutional neural network (CNN), our proposed PointNetGPD is lightweight and can directly process the 3D point cloud that lies within the gripper for grasp evaluation. Taking the raw point cloud as input, our proposed grasp evaluation network can capture the complex geometric structure of the contact area between the gripper and the object even if the point cloud is very sparse. To further improve our proposed model, we generate a larger-scale grasp dataset with 350k real point clouds and grasps with the YCB object set for training. The performance of the proposed model is quantitatively measured both in simulation and on robotic hardware. Experiments on object grasping and clutter removal show that our proposed model generalizes well to novel objects and outperforms state-of-the-art methods.

11:00-12:15, Paper TuAT1-05.4
Learning Deep Visuomotor Policies for Dexterous Hand Manipulation
Jain, Divye | University of Washington
Li, Andrew | University of Washington
Singhal, Shivam | University of Washington
Rajeswaran, Aravind | University of Washington
Kumar, Vikash | Google Brain
Todorov, Emanuel | University of Washington
Keywords: Deep Learning in Robotics and Automation, Dexterous Manipulation, Learning from Demonstration
Abstract: Multi-fingered dexterous hands are versatile and capable of acquiring a diverse set of skills such as grasping, in-hand manipulation, and tool use. To fully utilize their versatility in real-world scenarios, we require algorithms and policies that can control them using on-board sensing capabilities, without relying on external tracking or motion capture systems. Cameras and tactile sensors are the most widely used on-board sensors that do not require instrumentation of the world. In this work, we demonstrate an imitation learning based approach to train deep visuomotor policies for a variety of manipulation tasks with a simulated five-fingered dexterous hand. These policies directly control the hand using high-dimensional visual observations of the world and proprioceptive observations from the robot, and can be trained efficiently with a few hundred expert demonstration trajectories. We also find that using touch sensing information enables faster learning and better asymptotic performance for tasks with a high degree of occlusion.

11:00-12:15, Paper TuAT1-05.5
Learning to Identify Object Instances by Touch: Tactile Recognition Via Multimodal Matching
Lin, Justin | University of California, Berkeley
Calandra, Roberto | Facebook
Levine, Sergey | UC Berkeley
Keywords: Deep Learning in Robotics and Automation, Force and Tactile Sensing
Abstract: Much of the literature on robotic perception focuses on the visual modality. Vision provides a global observation of a scene, making it broadly useful. However, in the domain of robotic manipulation, vision alone can sometimes prove inadequate: in the presence of occlusions or poor lighting, visual object identification might be difficult. The sense of touch can provide robots with an alternative mechanism for recognizing objects. In this paper, we study the problem of touch-based instance recognition. We propose a novel framing of the problem as multi-modal recognition: the goal of our system is to recognize, given a visual and tactile observation, whether or not these observations correspond to the same object. To our knowledge, our work is the first to address this type of multi-modal instance recognition problem at such a large scale, with our analysis spanning 98 different objects. We employ a robot equipped with two GelSight touch sensors, one on each finger, and a self-supervised, autonomous data collection procedure to collect a dataset of tactile observations and images. Our experimental results show that it is possible to accurately recognize object instances by touch alone, including instances of novel objects that were never seen during training. Our learned model outperforms other methods on this complex task, including human volunteers.

11:00-12:15, Paper TuAT1-05.6
Dexterous Manipulation with Deep Reinforcement Learning: Efficient, General, and Low-Cost
Gupta, Abhishek | UC Berkeley
Zhu, Henry | UC Berkeley
Rajeswaran, Aravind | University of Washington
Levine, Sergey | UC Berkeley
Kumar, Vikash | Google Brain
Keywords: Deep Learning in Robotics and Automation, Dexterous Manipulation
Abstract: Dexterous multi-fingered robotic hands can perform a wide range of manipulation skills, making them an appealing component for general-purpose robotic manipulators. However, such hands pose a major challenge for autonomous control, due to the high dimensionality of their configuration space and complex intermittent contact interactions. In this work, we propose deep reinforcement learning (deep RL) as a scalable solution for learning complex, contact-rich behaviors with multi-fingered hands. Deep RL provides an end-to-end approach to directly map sensor readings to actions, without the need for task-specific models or policy classes. We show that contact-rich manipulation behavior with multi-fingered hands can be learned by directly training with model-free deep reinforcement learning algorithms in the real world, with minimal additional assumptions and without the aid of simulation. We learn a variety of complex behaviors on two different low-cost hardware platforms. We show that each task can be learned entirely from scratch, and further study how the learning process can be accelerated by using a small number of human demonstrations to bootstrap learning. Our experiments demonstrate that complex multi-fingered manipulation skills can be learned in the real world in about 3-5 hours, and that demonstrations can decrease this to 2-3 hours, indicating that direct deep RL training in the real world is a viable and practical alternative to simulation and model-based control.

TuAT1-06 Interactive Session, 220
Product Design, Development and Prototyping - 2.1.06

11:00-12:15, Paper TuAT1-06.1
Inkjet Printable Actuators and Sensors for Soft-Bodied Crawling Robots
Ta, Tung D. | The University of Tokyo
Umedachi, Takuya | The University of Tokyo
Kawahara, Yoshihiro | The University of Tokyo
Keywords: Product Design, Development and Prototyping, Flexible Robots, Biomimetics
Abstract: Soft-bodied robots are attracting attention from researchers for their potential in designing compliant and adaptive robots. However, soft-bodied robots also pose many challenges, not only in nonlinear control but also in design and fabrication. In particular, the incompatibility between soft materials and rigid sensors/actuators makes it difficult to design a fully compliant soft-bodied robot. In this paper, we propose an all-printed sensor and actuator for designing soft-bodied robots by printing silver nanoparticle ink on top of a flexible plastic film. We can print bending sensors and thermal actuators instantly with home-commodity inkjet printers without any pre/post-processing. We exemplify the application of this fabrication method with an all-printed paper caterpillar robot that can inch forward and sense its body's bending angle.

11:00-12:15, Paper TuAT1-06.2
Design and Evaluation of an Energy-Saving Drive for a Versatile Robotic Gripper
Neven, Johannes Job | Delft University of Technology
Baioumy, Mohamed | Delft University of Technology
Wolfslag, Wouter | University of Edinburgh
Wisse, Martijn | Delft University of Technology
Keywords: Product Design, Development and Prototyping, Grippers and Other End-Effectors, Mechanism Design
Abstract: The main task of robotic grippers, holding an object, theoretically requires no work. Yet grippers consume significant amounts of energy in practice. This paper presents an approach for designing an energy-saving drive for robotic grippers employing a Statically Balanced Force Amplifier (SBFA) and a non-backdrivable mechanism (NBDM). A novel metric (the Grip Performance Metric) to systematically evaluate drives regarding their energy consumption is used in the design phase; afterwards, the realization and testing of a prototype (REED, Robotic Energy-Efficient Drive) are presented. Results show that the actuation force can be reduced by 92%, resulting in energy savings of 86% for an example task. This shows the potential of drives based on SBFAs and NBDMs to achieve energy-neutral grippers.

11:00-12:15, Paper TuAT1-06.3
Generative Deformation: Procedural Perforation for Elastic Structures
Transue, Shane | University of Colorado Denver
Choi, Min-Hyung | University of Colorado Denver
Keywords: Simulation and Animation, Computational Geometry, Additive Manufacturing
Abstract: Procedural generation of elastic structures provides the fundamental basis for controlling and designing 3D printed deformable object behaviors. The automation through generative algorithms provides flexibility in how design and functionality can be seamlessly integrated into a cohesive process that generates 3D prints with variable elasticity. Generative deformation introduces an automated method for perforating existing volumetric structures, promoting simulated deformations, and integrating stress analysis into a cohesive pipeline model that can be used with existing consumer-level 3D printers with elastic material capabilities. In this work, we present a consolidated implementation of the design, simulate, refine, and 3D print procedure based on the automated generation of heterogeneous lattice structures. We utilize Finite Element Analysis (FEA) metrics to generate perforated deformation models that adhere to deformation behaviors created within our design environment. We present the core algorithms, automated pipeline, and 3D print deformations of various objects. Quantitative results illustrate how the heterogeneous geometric structure can influence elastic material behaviors towards design objectives. Our method provides an automated open-source tool for quickly prototyping elastic 3D prints.
|
|
11:00-12:15, Paper TuAT1-06.4 | Add to My Program |
Robotics Education and Research at Scale: A Remotely Accessible Robotics Development Platform |
Wiedmeyer, Wolfgang | Karlsruhe Institute of Technology (KIT) |
Mende, Michael | Karlsruhe Institute of Technology (KIT) |
Hartmann, Dennis | Karlsruhe Institute of Technology (KIT) |
Bischoff, Rainer | KUKA Roboter GmbH |
Ledermann, Christoph | Karlsruhe Institute of Technology |
Kroeger, Torsten | Karlsruher Institut Für Technologie (KIT) |
Keywords: Education Robotics, Product Design, Development and Prototyping
Abstract: This paper introduces the KUKA Robot Learning Lab at KIT - a remotely accessible robotics testbed. The motivation behind the laboratory is to make state-of-the-art industrial lightweight robots more accessible for education and research. Such expensive hardware is usually not available to students or less privileged researchers for conducting experiments. This paper describes the design and operation of the Robot Learning Lab and discusses the challenges that one faces when making experimental robot cells remotely accessible. In particular, safety and security must be ensured, while giving users as much freedom as possible when developing programs to control the robots. A fully automated and efficient processing pipeline for experiments makes the lab suitable for a large number of users and allows a high usage rate of the robots.
|
|
11:00-12:15, Paper TuAT1-06.5 | Add to My Program |
Automated Seedling Height Assessment for Tree Nurseries Using Point Cloud Processing |
Wanasinghe, Thumeera Ruwansiri | Memorial University of Newfoundland |
De Silva, Oscar | Memorial University of Newfoundland |
Mann, George K. I. | Memorial University of Newfoundland |
Dowden, Benjamin | Memorial University of Newfoundland |
Lundrigan, Cyril Gerard | Government of Newfoundland & Labrador |
Keywords: Agricultural Automation, Product Design, Development and Prototyping, Object Detection, Segmentation and Categorization
Abstract: This paper presents a prototype of an automated seedling height assessment system for tree nurseries. The proposed system can acquire and store real-time 3D point-cloud data of seedlings and perform offline identification, measurement, and report generation of seedling heights, with an overall system accuracy that meets a 5 mm accuracy specification. Periodic growth information of seedlings allows quantifying the effects of different factors on the overall seedling development process for research and production optimization purposes. However, current manual sampling approaches used at these facilities produce quite limited data samples, and the process is rather time-consuming and labor-intensive for industrial-scale operations. In contrast, the proposed system is capable of significantly increasing the measurement sample size, measurement resolution, and frequency of measurement by automating the seedling measurement process using a scanning laser profilometer and an application-specific point-cloud processing algorithm. The performance of the proposed profilometry solution for point-cloud generation is compared with several other point-cloud generation methods, such as 3D structured-light sensing, light detection and ranging (LiDAR), stereo vision, and photogrammetry. These comparison results demonstrate the superior performance of the laser profilometer over other sensing solutions available for seedling height measurement. The proposed system is experimen
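To make the processing step concrete, the following minimal sketch shows one way per-seedling heights could be extracted from a profilometer point cloud with NumPy. It is an illustrative stand-in under simple assumptions (flat ground estimated from a low z-percentile, seedlings separable by binning along the row axis), not the authors' algorithm; all names and thresholds are hypothetical.

    import numpy as np

    # Illustrative sketch (not the paper's algorithm): per-seedling heights
    # from a profilometer point cloud.
    def seedling_heights(points, ground_percentile=5.0, min_points=30):
        """points: (N, 3) array of x, y, z samples in metres."""
        ground_z = np.percentile(points[:, 2], ground_percentile)
        crop = points[points[:, 2] > ground_z + 0.01]   # drop ground returns
        bins = np.floor(crop[:, 0] / 0.05).astype(int)  # 5 cm cells along row
        heights = {}
        for b in np.unique(bins):
            cell = crop[bins == b]
            if len(cell) >= min_points:                 # ignore stray noise
                heights[b] = cell[:, 2].max() - ground_z
        return heights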
|
|
11:00-12:15, Paper TuAT1-06.6 | Add to My Program |
Adsorption Pad Using Capillary Force for Uneven Surface |
Ichikawa, Akihiko | Meijo University |
Shinya, Kajino | Meijo University |
Atsusi, Takeyama | Meijo University |
Yamato, Adachi | Meijo University |
Keisuke, Totsuka | Meijo University |
Ikemoto, Yusuke | Meijo University |
Ohara, Kenichi | Meijo University |
Oomichi, Takeo | Meijo University |
Fukuda, Toshio | Meijo University |
Keywords: Product Design, Development and Prototyping, Micro/Nano Robots, Soft Material Robotics
Abstract: We propose a novel adsorption pad for wall-climbing robots and irregular-surface objects using capillary force and water sealing. We call this pad the Super Wet Adsorption (SWA) pad. The SWA pad has a porous part and a capillary part. The porous part is made by a salt-leaching method. When the SWA pad adsorbs to a wall to which some sand and dust are attached, water comes from the porous part to avoid vacuum breaking. The capillary part is connected to the porous part to supply and store the water. In this paper, we show the design of the porous part and the capillary part and the fabrication process of each part, and we perform evaluation experiments on the capillary force and on adsorption to uneven surfaces, a demonstration of a wall-climbing robot, and adsorption to irregularly surfaced foods.
|
|
TuAT1-07 Interactive Session, 220 |
Add to My Program |
Humanoid Robots IV - 2.1.07 |
|
|
|
11:00-12:15, Paper TuAT1-07.1 | Add to My Program |
Effects of Foot Stiffness and Damping on Walking Robot Performance |
Schumann, Ethan | University of Pittsburgh |
Smit-Anseeuw, Nils | University of Michigan |
Zaytsev, Petr | University of Stuttgart |
Gleason, Rodney | University of Michigan |
Shorter, Alex | University of Michigan |
Remy, C. David | University of Michigan |
Keywords: Legged Robots, Humanoid and Bipedal Locomotion, Biomimetics
Abstract: In this paper, we investigated how the stiffness and damping properties of soft robotic feet affect the stability and energetic economy of bipedal robotic walking. To this end, we manufactured four different spherical feet from the following materials: hollow rubber, Sorbothane, Norsorex, and Neoprene. The materials were specifically chosen to cover a wide range of stiffness and damping values. The impact response of each design was first characterized in a drop test rig. We then evaluated the performance of each foot in an extensive series of walking experiments on the planar bipedal robot RAMone. Our results showed that, at low speeds, the feet with lower damping had a smaller energy cost of walking, possibly due to greater return of mechanical energy at lift-off. However, at speeds above 0.5 m/s, the feet with lower damping started to exhibit a bouncing behaviour which led to higher walking instability and increased the energy cost of walking. Additionally, we found the feet with lower stiffness to be more economical across all walking speeds. Our results provide insight into the role of foot properties in bipedal walking and may help with the design of walking robots.
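The stiffness and damping trade-off that the drop tests characterize can be previewed with a textbook Kelvin-Voigt contact model: a point mass dropped onto a linear spring-damper. The sketch below is only a first-order illustration with made-up parameters, not the paper's empirical characterization; varying k and c shows the qualitative effect of foot material on peak force and rebound.

    # Point mass m dropped onto a Kelvin-Voigt foot model (spring k, damper c).
    # All parameter values are illustrative, not measurements from the paper.
    m, g = 2.0, 9.81            # kg, m/s^2
    k, c = 2.0e4, 50.0          # N/m, N*s/m -- vary these per foot material
    dt, T = 1e-5, 0.25          # s

    z, zd = 0.05, 0.0           # released 5 cm above the ground
    zs, peak_force = [], 0.0
    for _ in range(int(T / dt)):
        # in contact below z = 0; the max() keeps the ground from pulling
        f_contact = max(0.0, -k * z - c * zd) if z < 0.0 else 0.0
        zd += (-g + f_contact / m) * dt   # semi-implicit Euler step
        z += zd * dt
        zs.append(z)
        peak_force = max(peak_force, f_contact)

    print(f"peak contact force: {peak_force:.1f} N")
    print(f"rebound apex: {max(zs[int(0.15 / dt):]):.4f} m")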
|
|
11:00-12:15, Paper TuAT1-07.2 | Add to My Program |
Dynamic Walking on Slippery Surfaces: Demonstrating Stable Bipedal Gaits with Planned Ground Slippage |
Ma, Wenlong | California Institute of Technology |
Or, Yizhar | Technion |
Ames, Aaron | Caltech |
Keywords: Legged Robots, Humanoid and Bipedal Locomotion, Optimization and Optimal Control
Abstract: Dynamic bipedal robot locomotion has achieved remarkable success due in part to recent advances in trajectory generation and nonlinear control for stabilization. A key assumption utilized in both theory and experiments is that the robot's stance foot always makes no-slip contact with the ground, including at impacts. This assumption breaks down on slippery low-friction surfaces, as commonly encountered in outdoor terrains, leading to failure and loss of stability. In this work, we extend the theoretical analysis and trajectory optimization to account for stick-slip transitions at point foot contact using Coulomb's friction law. Using the AMBER-3M planar biped robot as an experimental platform, we demonstrate for the first time a slippery walking gait which can be stabilized successfully both on a lubricated surface and on a rough no-slip surface. We also study the influence of foot slippage on reducing the mechanical cost of transport, and compare energy efficiency in both numerical simulation and experimental measurement.
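The stick-slip logic at the heart of the extended model reduces to checking the Coulomb friction cone at the stance foot. The sketch below shows only that mode check; in the paper it sits inside hybrid dynamics and trajectory optimization, and the names and velocity tolerance here are illustrative.

    # Coulomb-friction contact-mode check for a planar point foot
    # (illustrative mode logic only).
    def contact_mode(f_tangent, f_normal, v_slip, mu, eps=1e-6):
        if f_normal <= 0.0:
            return "separation"   # foot leaving the ground
        if abs(v_slip) < eps:
            # sticking is consistent only while friction can supply f_tangent
            return "stick" if abs(f_tangent) <= mu * f_normal else "slip-onset"
        return "slip"             # friction opposes motion at mu * f_normal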
|
|
11:00-12:15, Paper TuAT1-07.3 | Add to My Program |
Torque and Velocity Controllers to Perform Jumps with a Humanoid Robot: Theory and Implementation on the iCub Robot |
Bergonti, Fabio | Italian Institute of Technology |
Fiorio, Luca | Istituto Italiano Di Tecnologia |
Pucci, Daniele | Italian Institute of Technology |
Keywords: Legged Robots, Humanoid and Bipedal Locomotion, Humanoid Robots
Abstract: Jumping can be an effective way of locomotion to overcome small terrain gaps or obstacles. In this paper we propose two different approaches to perform jumps with a humanoid robot. Specifically, starting from a pre-defined CoM trajectory, we develop the theory for a velocity controller and for a torque controller based on an optimization technique for evaluating the joint inputs. The controllers have been tested both in simulation and on the humanoid robot iCub. In simulation the robot was able to jump using both controllers, while the real system jumped with the velocity controller only. The results highlight the importance of controlling the centroidal angular momentum, and they suggest that the joint performance (namely the maximum power of the leg and torso joints) and the low-level control performance are fundamental to achieving acceptable results.
|
|
11:00-12:15, Paper TuAT1-07.4 | Add to My Program |
Safe Adaptive Switching among Dynamical Movement Primitives: Application to 3D Limit-Cycle Walkers |
Veer, Sushant | University of Delaware |
Poulakakis, Ioannis | University of Delaware |
Keywords: Robust/Adaptive Control of Robotic Systems, Humanoid and Bipedal Locomotion, Legged Robots
Abstract: Complex robot motions are frequently generated by composing simpler primitive movements. We use this approach to formulate robot motion plans as sequences of primitives to be executed one after the other. When dealing with dynamical movement primitives, besides accomplishing the high-level objective, planners must also reason about the effect of the plan's execution on the safety of the platform. This task is exacerbated by the presence of disturbances, such as non-vanishing external forces. To address this issue, we present a framework that builds on rigorous control-theoretic tools to generate safely executable motion plans for externally excited robotic systems. We illustrate the proposed framework on adapting the motion of a 3D bipedal robot model to persistent external forcing by switching among dynamic movement primitives, each corresponding to a limit-cycle walking gait.
|
|
11:00-12:15, Paper TuAT1-07.5 | Add to My Program |
Torque-Based Balancing for a Humanoid Robot Performing High-Force Interaction Tasks |
Abi-Farraj, Firas | CNRS-Irisa |
Henze, Bernd | German Aerospace Center (DLR) |
Ott, Christian | German Aerospace Center (DLR) |
Robuffo Giordano, Paolo | Centre National De La Recherche Scientifique (CNRS) |
Roa, Maximo A. | DLR - German Aerospace Center |
Keywords: Humanoid Robots, Humanoid and Bipedal Locomotion
Abstract: Balancing is a critical feature for a robot interacting with an unstructured environment. The balancing control should account for unknown perturbation forces that might destabilize the robot when performing the intended tasks. In the case of humanoid robots, this challenge is greater, due to the inherent difficulty of balancing a robot on two legs, which results in a rather small footprint. Approaches for enabling a good balancing behavior on humanoid robots traditionally rely on whole-body balancing. This paper extends a passivity-based whole-body balancing framework to guarantee the equilibrium of a humanoid robot while performing different interaction tasks where the (high) task forces acting on the robot are difficult to foresee. Instead of controlling the center of mass, the proposed controller directly uses information from the Gravito-Inertial Wrench Cone to guarantee the feasibility of the balancing forces. The performance of the approach is validated in a number of successful experimental tests.
|
|
11:00-12:15, Paper TuAT1-07.6 | Add to My Program |
Humanoid Dynamic Synchronization through Whole-Body Bilateral Feedback Teleoperation (I) |
Ramos, Joao | Massachusetts Institute of Technology |
Kim, Sangbae | Massachusetts Institute of Technology |
Keywords: Humanoid and Bipedal Locomotion, Telerobotics and Teleoperation, Haptics and Haptic Interfaces
Abstract: This paper presents a method to achieve human and legged robot dynamic synchronization through bilateral feedback teleoperation. Our study shows how we can explore the interplay between the human Extrapolated Center of Mass and the contact forces with the environment in order to transmit to the robot the underlying balancing and stepping strategy. All the necessary key equations for the frontal plane coupled dynamics are presented along with the human feedback law derived from the proposed state normalization in length and time. Here, we pay special attention to how the natural frequency of each system influences the resulting motion and analyze how the coupled system responds to various robot sizes. Experiments in which a human operator controls a simulated bipedal robot show how the Balance Feedback Interface force varies according to different scales and responds to external disturbances. Finally, we show the method’s robustness to uneven terrain and how we can allow the point feet robot to synchronously take steps with the operator. This is an introductory study that aims to grant legged robots motor capabilities for power manipulation comparable to humans.
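For readers unfamiliar with the quantity, the extrapolated center of mass in the balance literature (Hof's formulation) is defined from the linear-inverted-pendulum natural frequency; the standard form is shown below, and the paper's exact length-and-time normalization may differ in detail:

\[
\xi = x_{\mathrm{CoM}} + \frac{\dot{x}_{\mathrm{CoM}}}{\omega_0},
\qquad
\omega_0 = \sqrt{\frac{g}{\ell}},
\]

where \(\ell\) is the pendulum (leg) length. Normalizing lengths by \(\ell\) and time by \(1/\omega_0\) renders the coupled dynamics dimensionless, which is what allows a human operator and robots of different sizes to be compared on equal footing.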
|
|
TuAT1-08 Interactive Session, 220 |
Add to My Program |
Human-Robot Interaction I - 2.1.08 |
|
|
|
11:00-12:15, Paper TuAT1-08.1 | Add to My Program |
Interactive Open-Ended Object, Affordance and Grasp Learning for Robotic Manipulation |
Mohades Kasaei, Seyed Hamidreza | University of Groningen |
Shafii, Nima | University of Aveiro |
Seabra Lopes, Luís | Universidade De Aveiro |
Tomé, Ana Maria | Universidade De Aveiro |
Keywords: Physical Human-Robot Interaction, Mobile Manipulation, Sensor Fusion
Abstract: Service robots are expected to work autonomously and efficiently in human-centric environments. For this type of robot, object perception and manipulation are challenging tasks due to the need for accurate, real-time responses. This paper presents an interactive open-ended learning approach to recognize multiple objects and their grasp affordances concurrently. This is an important contribution in the field of service robots since, no matter how extensive the training data used for batch learning, a robot might always be confronted with an unknown object when operating in human-centric environments. The paper describes the system architecture and the learning and recognition capabilities. Grasp learning associates grasp configurations (i.e., end-effector positions and orientations) with grasp affordance categories. The grasp affordance category and the grasp configuration are taught through verbal and kinesthetic teaching, respectively. A Bayesian approach is adopted for learning and recognition of object categories and an instance-based approach is used for learning and recognition of affordance categories. An extensive set of experiments has been performed to assess the performance of the proposed approach regarding recognition accuracy, scalability and grasp success rate on challenging datasets and real-world scenarios.
|
|
11:00-12:15, Paper TuAT1-08.2 | Add to My Program |
A Parallel Low-Impedance Sensing Approach for Highly Responsive Physical Human-Robot Interaction |
Boucher, Gabriel | Université Laval |
Laliberte, Thierry | Universite Laval |
Gosselin, Clement | Université Laval |
Keywords: Physical Human-Robot Interaction, Compliance and Impedance Control
Abstract: This paper presents a novel sensing approach for the physical interaction between a human user and a serial robotic arm. The approach is inspired by the concept of macro-mini robot architecture. The framework is developed for a general multi-degree-of-freedom serial robot and a corresponding impedance control scheme is proposed. In order to illustrate the concept, a five-degree-of-freedom robotic arm was built, as well as a six-degree-of-freedom low-impedance sensing device that is used to control the robot. Experimental results are provided.
|
|
11:00-12:15, Paper TuAT1-08.3 | Add to My Program |
Safe Human Robot Cooperation in Task Performed on the Shared Load |
Anvaripour, Mohammad | University of Windsor |
Khoshnam Tehrani, Mahta | Simon Fraser University |
Menon, Carlo | Simon Fraser University |
Saif, Mehrdad | Department of Electrical and Computer Engineering, University Of |
Keywords: Physical Human-Robot Interaction, Collision Avoidance, Sensor-based Control
Abstract: Human-robot collaboration in industrial settings calls for implementing safety measures to ensure there is no risk to humans working in such an environment. In human-robot physical collaboration, an object or a load is handled by both the human and the robot. Developing a safety framework for the robot is a requirement for preventing collisions while performing a task. In this paper, force myography (FMG) data are used to develop a control scheme for the robot such that it can work with the human worker while avoiding collisions. Force myography quantifies the activities of human muscles when applying force to handle an object. A neural network-based approach is then used to select the most informative features of the FMG signal. The proposed control scheme then incorporates the FMG data and the robot dynamics to obtain a prediction of the next step of the cooperation task and to plan the robot motion accordingly. The proposed approach is evaluated experimentally in real time in a moving-object task which requires appropriate complementary actions from the robot and the human user. The results of this study show that the proposed scheme can successfully plan the robot motion based on the actions of the human user.
|
|
11:00-12:15, Paper TuAT1-08.4 | Add to My Program |
A Multi-Modal Sensor Array for Safe Human-Robot Interaction and Mapping |
Abah, Colette | Vanderbilt University |
Orekhov, Andrew | Vanderbilt University |
Johnston, Garrison | Vanderbilt University |
Yin, Peng | Carnegie Mellon University |
Choset, Howie | Carnegie Mellon University |
Simaan, Nabil | Vanderbilt University |
Keywords: Physical Human-Robot Interaction, Force and Tactile Sensing, Robot Safety
Abstract: In the future, human-robot interaction will include collaboration in close quarters where the environment geometry is partially unknown. As a means for enabling such interaction, this paper presents a multi-modal sensor array capable of contact detection and localization, force sensing, proximity sensing, and mapping. The sensor array integrates Hall effect and time-of-flight (ToF) sensors in an I²C communication network. The design, fabrication, and characterization of the sensor array for a future in-situ collaborative continuum robot are presented. Possible perception benefits of the sensor array are demonstrated for accidental contact detection, mapping of the environment, selection of admissible zones for bracing, and constrained motion control of the end effector while maintaining a bracing constraint with an admissible rolling motion.
|
|
11:00-12:15, Paper TuAT1-08.5 | Add to My Program |
Disturbance-Observer-Based Compliance Control of Electro-Hydraulic Actuators with Backdrivability |
Lee, Woongyong | POSTECH |
Chung, Wan Kyun | POSTECH |
Keywords: Physical Human-Robot Interaction, Compliance and Impedance Control, Hydraulic/Pneumatic Actuators
Abstract: This paper proposes an intrinsically backdrivable electro-hydraulic torque actuator (EHTA) and associated control algorithms that could lead to the design of a high-performance interactive robot system. The EHTA consists of (electro-hydraulic) backdrivable servovalves and double vane rotary hydraulic actuators. It is represented as a flexible actuator model considering servovalve dynamics and fluid dynamics. In the flexible actuator structure, interaction stability is not guaranteed because the EHTA is a non-collocated system. However, due to the use of backdrivable servovalves, the EHTA is rigidly represented in the finite frequency region in which a robot operates in daily life; it has the property of finite frequency passivity. Based on this property, a disturbance observer (DOB) was designed to compensate for friction effects and model uncertainties. Then, compliance control was applied to the friction-free actuator. Consequently, we achieved an actuator with a torque sensitivity of 0.1 Nm and a maximum torque output of approximately 100 Nm. The proposed actuators and controllers were evaluated through experiments.
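As a rough illustration of the DOB idea described above, the sketch below estimates the lumped friction and model error of a single joint by low-pass filtering the mismatch between the commanded torque and the torque implied by a nominal inertia model. This is a generic first-order DOB with hypothetical names, not the paper's design, which is built specifically around the EHTA's finite frequency passivity.

    import math

    class SimpleDOB:
        """Generic 1-DOF disturbance observer (illustrative only)."""
        def __init__(self, J_n, cutoff_hz, dt):
            self.J_n = J_n                       # nominal joint inertia
            a = 2.0 * math.pi * cutoff_hz * dt   # first-order Q-filter gain
            self.alpha = a / (1.0 + a)
            self.d_hat = 0.0

        def update(self, tau_cmd, qdd_meas):
            # disturbance consistent with J_n * qdd = tau_cmd + d
            d_raw = self.J_n * qdd_meas - tau_cmd
            self.d_hat += self.alpha * (d_raw - self.d_hat)
            return self.d_hat

    # Compensation: send tau_des - d_hat so the plant behaves like the
    # friction-free nominal model the compliance controller expects.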
|
|
11:00-12:15, Paper TuAT1-08.6 | Add to My Program |
Dynamic Primitives in Human Manipulation of Non-Rigid Objects |
Guang, Hui | Tsinghua University |
Bazzi, Salah | Northeastern University |
Sternad, Dagmar | Northeastern University |
Hogan, Neville | Massachusetts Institute of Technology |
Keywords: Physical Human-Robot Interaction, Dexterous Manipulation, Biologically-Inspired Robots
Abstract: This study examined strategies humans chose to manipulate an object with complex (nonlinear, underactuated) dynamics, such as liquid sloshing in a cup of coffee. The problem was simplified to the well-known cart-and-pendulum system moving on a horizontal line. This model was implemented in a virtual environment and human subjects manipulated the object via a robotic manipulandum. The task was to maneuver the system from rest to arrive at a target position such that no residual oscillations of the pendulum bob remained. Our goal was to test whether humans simplified control by employing dynamic primitives, specifically submovements. Experimental velocity profiles of the human movements were compared to those predicted by three different control models. Two models used continuous optimization-based control, the third control model was based on Input Shaping. Input Shaping is a method for controlling flexible objects by convolving a motion profile with impulses of appropriate amplitude and timing. To evaluate whether humans used Input Shaping, we decomposed the velocity profiles recorded from humans into submovements, as proxies for the convolved impulses. Comparing the motion profiles from the 3 models with the experimentally measured human profiles showed superior performance of the Input Shaping model. These initial results are consistent with our hypothesis that combining dynamic primitives, submovements, is a competent description of human performance and may provide a simp
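Since Input Shaping is central to the comparison, the textbook two-impulse zero-vibration (ZV) shaper for a pendulum mode is sketched below; the study's exact shaper variant and parameters are not stated in the abstract, so treat the numbers as placeholders.

    import numpy as np

    def zv_shaper(wn, zeta, dt):
        """Two-impulse ZV shaper for natural freq wn (rad/s), damping zeta."""
        wd = wn * np.sqrt(1.0 - zeta**2)              # damped frequency
        K = np.exp(-zeta * np.pi / np.sqrt(1.0 - zeta**2))
        kernel = np.zeros(int(round(np.pi / wd / dt)) + 1)
        kernel[0], kernel[-1] = 1.0 / (1.0 + K), K / (1.0 + K)
        return kernel

    # Convolving a smooth point-to-point velocity command with the shaper
    # splits it into two overlapping submovements whose induced pendulum
    # oscillations cancel at arrival.
    dt = 0.001
    t = np.arange(0.0, 1.0, dt)
    v = np.sin(np.pi * t) ** 2                        # unshaped bell profile
    v_shaped = np.convolve(v, zv_shaper(wn=6.0, zeta=0.02, dt=dt))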
|
|
TuAT1-09 Interactive Session, 220 |
Add to My Program |
Perception for Manipulation I - 2.1.09 |
|
|
|
11:00-12:15, Paper TuAT1-09.1 | Add to My Program |
State Estimation in Contact-Rich Manipulation |
Wirnshofer, Florian | Siemens AG |
Schmitt, Philipp Sebastian | Siemens Corporate Technology |
Meister, Philine | Siemens |
v. Wichert, Georg | Siemens AG |
Burgard, Wolfram | University of Freiburg |
Keywords: Perception for Grasping and Manipulation, Compliant Assembly
Abstract: This paper introduces a Bayesian state estimator for contact-rich manipulation tasks with applications in non-prehensile manipulation, industrial assembly, and in-hand localization. The core idea of our approach is to explicitly model both the contact dynamics and a torque-based robot controller as part of the underlying system model. Our approach is capable of estimating the state of movable objects for a variety of robot kinematics and robot and object geometries. This includes complex scenarios with multiple robots, multiple objects and articulated objects. We have validated our approach in simulation and on a physical robot. The experiments show that multi-modal distributions of six-degree-of-freedom object poses can be accurately tracked in real time in a complex manipulation scenario.
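Tracking multi-modal six-degree-of-freedom pose distributions in real time is typically done with a particle filter whose prediction step rolls the system model forward; the skeleton below shows that generic structure. Whether the authors use exactly this bootstrap form is an assumption, and `predict` and `likelihood` are placeholders for their contact-dynamics-plus-controller model and measurement model.

    import numpy as np

    def particle_filter_step(particles, weights, predict, likelihood, z, rng):
        # 1. Prediction: propagate each pose hypothesis through the
        #    (stochastic) system model.
        particles = np.array([predict(p, rng) for p in particles])
        # 2. Correction: reweight by the measurement likelihood p(z | x).
        weights = weights * np.array([likelihood(z, p) for p in particles])
        weights = weights / weights.sum()
        # 3. Resample when the effective sample size collapses; keeping
        #    many weighted hypotheses is what preserves multi-modality.
        if 1.0 / np.sum(weights**2) < 0.5 * len(particles):
            idx = rng.choice(len(particles), size=len(particles), p=weights)
            particles = particles[idx]
            weights = np.full(len(particles), 1.0 / len(particles))
        return particles, weights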
|
|
11:00-12:15, Paper TuAT1-09.2 | Add to My Program |
Improved Proximity, Contact, and Force Sensing Via Optimization of Elastomer-Air Interface Geometry |
Lancaster, Patrick | University of Washington |
Smith, Joshua R. | University of Washington |
Srinivasa, Siddhartha | University of Washington |
Keywords: Perception for Grasping and Manipulation, Reactive and Sensor-Based Planning, Sensor-based Control
Abstract: We describe a single fingertip-mounted sensing system for robot manipulation that provides proximity (pre-touch), contact detection (touch), and force sensing (post-touch). The sensor system consists of optical time-of-flight range measurement modules covered in a clear elastomer. Because the elastomer is clear, the sensor can detect and range nearby objects, as well as measure deformations caused by objects that are in contact with the sensor and thereby estimate the applied force. We examine how this sensor design can be improved with respect to invariance to object reflectivity, signal-to-noise ratio, and continuous operation when switching between the distance and force measurement regimes. By harnessing time-of-flight technology and optimizing the elastomer-air boundary to control the emitted light’s path, we develop a sensor that is able to seamlessly transition between measuring distances of up to 50 mm and contact forces of up to 10 newtons. We demonstrate that our sensor improves manipulation accuracy in a block unstacking task. Thorough instructions for manufacturing the sensor from inexpensive, commercially available components are provided, as well as all relevant hardware design files and software sources.
|
|
11:00-12:15, Paper TuAT1-09.3 | Add to My Program |
Improving Haptic Adjective Recognition with Unsupervised Feature Learning |
Richardson, Benjamin A. | Max Planck Institute for Intelligent Systems |
Kuchenbecker, Katherine J. | Max Planck Institute for Intelligent Systems |
Keywords: Perception for Grasping and Manipulation, Force and Tactile Sensing
Abstract: Humans can form an impression of how a new object feels simply by touching its surfaces with the densely innervated skin of the fingertips. Many haptics researchers have recently been working to endow robots with similar levels of haptic intelligence, but these efforts almost always employ hand-crafted features, which are brittle, and concrete tasks, such as object recognition. We applied unsupervised feature learning methods, specifically K-SVD and Spatio-Temporal Hierarchical Matching Pursuit (ST-HMP), to rich multi-modal haptic data from a diverse dataset. We then tested the learned features on 19 more abstract binary classification tasks that center on haptic adjectives such as smooth and squishy. The learned features proved superior to traditional hand-crafted features by a large margin, almost doubling the average F1 score across all adjectives. Additionally, particular exploratory procedures (EPs) and sensor channels were found to support perception of certain haptic adjectives, underlining the need for diverse interactions and multi-modal haptic data.
|
|
11:00-12:15, Paper TuAT1-09.4 | Add to My Program |
Tactile Mapping and Localization from High-Resolution Tactile Imprints |
Bauza Villalonga, Maria | Massachusetts Institute of Technology |
Canal Anton, Oleguer | Massachusetts Institute of Technology |
Rodriguez, Alberto | Massachusetts Institute of Technology |
Keywords: Perception for Grasping and Manipulation, Force and Tactile Sensing, Dexterous Manipulation
Abstract: This work studies the problem of shape reconstruction and object localization using a vision-based tactile sensor, GelSlim. The main contributions are the recovery of local shapes from contact, an approach to reconstruct the tactile shape of objects from tactile imprints, and an accurate method for object localization of previously reconstructed objects. The algorithms can be applied to a large variety of 3D objects and provide accurate tactile feedback for in-hand manipulation. Results show that by exploiting the dense tactile information we can reconstruct the shape of objects with high accuracy and perform online object identification and localization, opening the door to reactive manipulation guided by tactile sensing. We provide videos and supplemental information on the project's website: web.mit.edu/mcube/research/tactile_localization.html
|
|
11:00-12:15, Paper TuAT1-09.5 | Add to My Program |
Maintaining Grasps within Slipping Bounds by Monitoring Incipient Slip |
Dong, Siyuan | MIT |
Ma, Daolin | Massachusetts Institute of Technology |
Donlon, Elliott | MIT |
Rodriguez, Alberto | Massachusetts Institute of Technology |
Keywords: Perception for Grasping and Manipulation, Sensor-based Control, Grasping
Abstract: In this paper, we propose an approach to detect incipient slip, i.e. predict slip, by using a high-resolution vision-based tactile sensor, GelSlim. The sensor dynamically captures the tactile imprints of the grasped object and their changes with a soft gel pad. The method assumes the object is mostly rigid and expects the motion of the object's imprint on the sensor surface to be a 2D rigid-body motion. We use the deviation of the true motion field from that of a 2D planar rigid transformation as a measure of slip. The output is a dense slip field which we monitor in real time to detect when small areas of the contact patch start to slip (incipient slip). The method can detect incipient slip in any direction, without any prior knowledge of the object, at 24 Hz. We test the method on 10 objects in 240 trials and achieve 86.25% detection accuracy, with the vast majority of failure cases occurring when grasping highly deformable objects. We further show how the slip feedback can be used to adjust the gripping force to avoid slip in a closed-loop bottle-cap screwing and unscrewing experiment. The method can be used to enable many manipulation tasks in both structured and unstructured environments.
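The slip measure, the deviation of the tracked motion field from the best-fit 2D rigid transform, can be sketched with a Kabsch-style fit as below; the function name and threshold are illustrative rather than the paper's implementation.

    import numpy as np

    def slip_field(p, q):
        """p, q: (N, 2) tracked marker positions in consecutive frames."""
        pc, qc = p.mean(axis=0), q.mean(axis=0)
        U, _, Vt = np.linalg.svd((p - pc).T @ (q - qc))
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:          # guard against reflections
            Vt[-1] *= -1.0
            R = Vt.T @ U.T
        t = qc - R @ pc
        residual = q - (p @ R.T + t)      # deviation from rigid motion
        return np.linalg.norm(residual, axis=1)

    # Incipient slip: a few markers deviate while most still follow the
    # rigid fit, e.g. slip_field(p, q) > 0.1 (pixel threshold, illustrative).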
|
|
11:00-12:15, Paper TuAT1-09.6 | Add to My Program |
From Pixels to Percepts: Highly Robust Perception and Exploration Using Deep Learning and an Optical Biomimetic Tactile Sensor |
Lepora, Nathan | University of Bristol |
Church, Alex | University of Bristol |
de Kerckhove, Conrad | University of Bristol |
Hadsell, Raia | DeepMind |
Lloyd, John | University of Bristol |
Keywords: Force and Tactile Sensing, Deep Learning in Robotics and Automation
Abstract: Deep learning has the potential to have the impact on robot touch that it has had on robot vision. Optical tactile sensors act as a bridge between the subjects by allowing techniques from vision to be applied to touch. In this paper, we apply deep learning to an optical biomimetic tactile sensor, the TacTip, which images an array of papillae (pins) inside its sensing surface analogous to structures within human skin. Our main result is that the application of a deep CNN can give reliable edge perception and thus a robust policy for planning contact points to move around object contours. Robustness is demonstrated over several irregular and compliant objects with both tapping and continuous sliding, using a model trained only by tapping onto a disk. These results relied on using techniques to encourage generalization to tasks beyond those on which the model was trained. We expect this is a generic problem in practical applications of tactile sensing that deep learning will solve.
|
|
TuAT1-10 Interactive Session, 220 |
Add to My Program |
Intelligent Transportation I - 2.1.10 |
|
|
|
11:00-12:15, Paper TuAT1-10.1 | Add to My Program |
Road Detection through CRF Based LiDAR-Camera Fusion |
Gu, Shuo | Nanjing University of Science and Technology |
Zhang, Yigong | Nanjing University of Science and Technology |
Tang, Jinhui | Nanjing University of Science and Technology |
Yang, Jian | Nanjing University of Science & Technology |
Kong, Hui | Nanjing University of Science and Technology |
Keywords: Intelligent Transportation Systems, Sensor Fusion
Abstract: In this paper, we propose a road detection method with LiDAR-camera fusion in a novel conditional random field (CRF) framework to exploit both range and color information. In the LiDAR-based part, a fast height-difference-based scanning strategy is applied in the 2D LiDAR range-image domain, and a dense road detection result in the camera image domain can be obtained through geometric upsampling given the LiDAR-camera calibration parameters. In the camera-based part, a fully convolutional network is applied in the camera image domain. Finally, we fuse the dense and binary road detection results from both LiDAR and camera in a single CRF framework. Experiments show that, using a single thread of a CPU, the proposed LiDAR-based part can operate at a frequency of over 250 Hz with sparse output in the range image and 40 Hz with a dense result in the camera image for the 64-beam Velodyne scanner. Our CRF fusion method achieves competitive performance on the KITTI-Road dataset.
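A minimal version of the height-difference scanning idea on the range image might look like the sketch below: ring-to-ring height jumps under a threshold count as flat, and the road region is grown outward from the lowest ring. The threshold and the simple growing rule are illustrative; the paper's strategy and its CRF fusion are more elaborate.

    import numpy as np

    def road_mask(z, max_step=0.05):
        """z: (rings, azimuth) height image from the 64-beam scanner."""
        flat = np.abs(np.diff(z, axis=0)) < max_step  # small vertical jumps
        mask = np.zeros_like(z, dtype=bool)
        mask[0] = True                                # lowest ring seeds road
        for r in range(1, z.shape[0]):
            # grow the road outward while the surface stays flat
            mask[r] = mask[r - 1] & flat[r - 1]
        return mask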
|
|
11:00-12:15, Paper TuAT1-10.2 | Add to My Program |
Semantic Mapping Extension for OpenStreetMap Applied to Indoor Robot Navigation |
Naik, Lakshadeep | Hochschule Bonn Rhein Sieg University of Applied Science |
Blumenthal, Sebastian | Locomotec |
Huebel, Nico | KU Leuven |
Bruyninckx, Herman | University of Leuven |
Prassler, Erwin | Bonn-Rhein-Sieg Univ. of Applied Sciences |
Keywords: Intelligent Transportation Systems, Autonomous Vehicle Navigation
Abstract: In this work, a graph-based semantic mapping approach for indoor robotics applications is presented, which extends OpenStreetMap (OSM) with robot-specific semantic, topological, and geometrical information. Models are introduced for basic indoor structures such as walls, doors, corridors, elevators, etc. The architectural principles support composition with additional domain- and application-specific knowledge. As an example, a model for an area is introduced, and it is explained how this can be used in navigation. A key advantage of the proposed graph-based map representation is that it allows exploiting the hierarchical structure of the graphs. Finally, the compatibility of the approach with existing, grid-based motion planning algorithms is shown.
|
|
11:00-12:15, Paper TuAT1-10.3 | Add to My Program |
Adaptive Probabilistic Vehicle Trajectory Prediction through Physically Feasible Bayesian Recurrent Neural Network |
Tang, Chen | University of California Berkeley |
Chen, Jianyu | UC Berkeley |
Tomizuka, Masayoshi | University of California |
Keywords: Intelligent Transportation Systems, Deep Learning in Robotics and Automation
Abstract: Probabilistic vehicle trajectory prediction is essential for the robust safety of autonomous driving. Current methods for long-term prediction cannot guarantee the physical feasibility of the predicted distribution. Moreover, their models cannot adapt to the driving policy of the predicted target human driver. In this work, we propose to overcome these two shortcomings with a Bayesian recurrent neural network model consisting of a Bayesian-neural-network-based policy model and a known physical model of the scenario. A Bayesian neural network can represent complicated output distributions, enabling a rich family of trajectory distributions. The embedded physical model ensures the feasibility of the distribution. Moreover, the adopted gradient-based training method allows direct optimization for better performance over a long prediction horizon. Furthermore, a particle-filter-based parameter adaptation algorithm is designed to adapt the policy Bayesian neural network to the predicted target online. The effectiveness of the proposed methods is verified with a toy example with multi-modal stochastic feedback gain and with naturalistic car-following data.
|
|
11:00-12:15, Paper TuAT1-10.4 | Add to My Program |
Optimizing Vehicle Distributions and Fleet Sizes for Shared Mobility-On-Demand |
Wallar, Alexander | Massachusetts Institute of Technology |
Alonso-Mora, Javier | Delft University of Technology |
Rus, Daniela | MIT |
Keywords: Intelligent Transportation Systems, Planning, Scheduling and Coordination, Path Planning for Multiple Mobile Robots or Agents
Abstract: Mobility-on-demand (MoD) systems are revolutionizing urban transit with the introduction of ride-sharing. Such systems have the potential to reduce vehicle congestion and improve accessibility of a city's transportation infrastructure. Recently developed algorithms can compute routes for vehicles in real-time for a city-scale volume of requests while allowing vehicles to carry multiple passengers at the same time. However, these algorithms focus on optimizing the performance for a given fleet of vehicles and do not tell us how many vehicles are needed to service all the requests. In this paper, we present an offline method to optimize the vehicle distributions and fleet sizes on historical demand data for MoD systems that allow passengers to share vehicles. We present an algorithm to determine how many vehicles are needed, where they should be initialized, and how they should be routed to service all the travel demand for a given period of time. Evaluation using 23,529,740 historical taxi requests from one month in Manhattan shows that on average 2,864 four-passenger vehicles are needed to service all of the taxi demand in a day with an average added travel delay of 2.8 mins.
|
|
11:00-12:15, Paper TuAT1-10.5 | Add to My Program |
Global Vision-Based Reconstruction of Three-Dimensional Road Surfaces Using Adaptive Extended Kalman Filter |
Li, Diya | Virginia Tech |
Furukawa, Tomonari | Virginia Polytechnic Institute and State University |
Keywords: Intelligent Transportation Systems, Automation Technologies for Smart Cities, Computer Vision for Transportation
Abstract: This paper presents a vision-based technique and a system developed for the global reconstruction of three-dimensional (3-D) road surfaces. Using the system, the technique globally reconstructs 3-D road surfaces by estimating the global camera pose using the Adaptive Extended Kalman Filter (AEKF) and integrating it with existing local road surface reconstruction techniques. The AEKF adaptively updates the covariance of uncertainties such that the estimation works well even in environments with varying uncertainties. Numerical results show that the proposed technique improves accuracy by 50% over the EKF-based technique, and an on-road test has demonstrated the ability of the proposed technique for real-world global 3-D road surface reconstruction.
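In an AEKF, "adaptive" usually means estimating the noise covariances online from the innovation sequence. A common covariance-matching update, given here as a plausible stand-in rather than the authors' exact law, is

\[
\nu_k = z_k - h(\hat{x}_{k|k-1}), \qquad
\hat{R}_k = (1-\alpha)\,\hat{R}_{k-1}
          + \alpha\left(\nu_k \nu_k^{\top} - H_k P_{k|k-1} H_k^{\top}\right),
\]

with forgetting factor \(0 < \alpha \le 1\). The standard EKF correction then uses \(\hat{R}_k\), so the filter inflates or shrinks its trust in the camera measurements as scene uncertainty varies.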
|
|
11:00-12:15, Paper TuAT1-10.6 | Add to My Program |
Deep Metadata Fusion for Traffic Light to Lane Assignment |
Langenberg, Tristan | Daimler AG |
Lüddecke, Timo | University of Göttingen |
Wörgötter, Florentin | University of Göttingen |
Keywords: Intelligent Transportation Systems, Computer Vision for Transportation, Deep Learning in Robotics and Automation
Abstract: We present a deep metadata fusion approach that connects image data and heterogeneous metadata inside a Convolutional Neural Network (CNN). This approach enables us to assign all relevant traffic lights to their associated lanes. To achieve this, a common CNN topology is trained on downsampled and transformed input images to predict an indication vector. The indication vector contains the column positions of all the relevant traffic lights that are associated with lanes. In parallel, we fuse prepared and adaptively weighted Metadata Feature Maps (MFM) with the convolutional feature map input of a selected convolutional layer. The results are compared to rule-based, metadata-only, and vision-only approaches. In addition, human performance on the traffic light to ego-vehicle lane assignment has been measured in a subjective test. The proposed approach outperforms all other approaches: it achieves about 93.0 % average precision on a real-world dataset and 87.1 % average precision on a more complex dataset. In particular, the new approach also exceeds human performance, with 93.7 % versus 91.0 % average accuracy on a real-world dataset.
|
|
TuAT1-11 Interactive Session, 220 |
Add to My Program |
Medical Robotics V - 2.1.11 |
|
|
|
11:00-12:15, Paper TuAT1-11.1 | Add to My Program |
Autonomous Tissue Manipulation Via Surgical Robot Using Learning Based Model Predictive Control |
Shin, Changyeob | University of California, Los Angeles |
Ferguson, Peter | University of California Los Angeles |
Aghajani Pedram, Sahba | University of California, Los Angeles |
Ma, Ji | University of California Los Angeles |
Dutson, Erik | UCLA |
Rosen, Jacob | University of California at Santa Cruz |
Keywords: Surgical Robotics: Planning, Model Learning for Control, Learning from Demonstration
Abstract: Tissue manipulation is a frequently used, fundamental subtask of surgical procedures, and in some cases it may require the involvement of a surgeon's assistant. The complex dynamics of soft tissue as an unstructured environment is one of the main challenges in any attempt to automate its manipulation via a surgical robotic system. Two AI learning-based model predictive control algorithms using vision strategies are proposed and studied: (1) reinforcement learning and (2) learning from demonstration. Comparison of the performance of these AI algorithms in a simulation setting indicated that the learning-from-demonstration algorithm can boost the learned policy by initializing the predicted dynamics with the given demonstrations. Furthermore, the learning-from-demonstration algorithm was implemented on a Raven IV surgical robotic system, and the feasibility of the proposed algorithm was successfully demonstrated experimentally. This study is part of a profound vision in which the role of a surgeon will be redefined as a pure decision maker, whereas the vast majority of the manipulation will be conducted autonomously by a surgical robotic system. A supplementary video can be found at: http://bionics.seas.ucla.edu/research/surgeryproject17.html
|
|
11:00-12:15, Paper TuAT1-11.2 | Add to My Program |
Robotic Control of a Multi-Modal Rigid Endoscope Combining Optical Imaging with All-Optical Ultrasound |
Dwyer, George | University College London |
Colchester, Richard J | UCL |
Alles, Erwin J. | University College London |
Maneas, Efthymios | Department of Medical Physics and Biomedical Engineering, Univer |
Ourselin, Sebastien | University College London |
Vercauteren, Tom | King's College London |
Deprest, Jan | University Hospital Leuven |
Vander Poorten, Emmanuel B | KU Leuven |
De Coppi, Paolo | UCL |
Desjardins, Adrien | Department of Medical Physics and Biomedical Engineering, Univer |
Stoyanov, Danail | University College London |
Keywords: Medical Robots and Systems, Surgical Robotics: Laparoscopy
Abstract: Fetoscopy is a technically challenging surgery, due to the dynamic environment and low-diameter endoscopes often resulting in a limited field of view. In this paper, we report on the design and operation of a robotic multimodal endoscope with optical ultrasound and a white-light stereo camera. The manufacture and control of the endoscope are presented, along with large-area (80 mm x 80 mm) surface visualisations of a placenta phantom using the optical ultrasound sensor. The repeatability of the surface visualisations was found to be 0.446±0.139 mm and 0.267±0.017 mm for a raster and a spiral scan, respectively.
|
|
11:00-12:15, Paper TuAT1-11.3 | Add to My Program |
Enabling Technology for Safe Robot-Assisted Retinal Surgery: Early Warning for Unsafe Scleral Force |
He, Changyan | Beihang University |
Patel, Niravkumar | Johns Hopkins University |
Iordachita, Ioan Iulian | Johns Hopkins University |
Kobilarov, Marin | Johns Hopkins University |
Keywords: Medical Robots and Systems, Force and Tactile Sensing, Robot Safety
Abstract: Retinal microsurgery is technically demanding and requires high surgical skill, with very little room for manipulation error. Any unexpected manipulation could cause extreme tool-sclera contact force (scleral force), which could in turn lead to sclera damage. The introduction of robotic assistance could enhance and expand a surgeon's manipulation capabilities. However, the potential intraoperative danger arising from a surgeon's mis-operations cannot be filtered or interrupted by existing robotic systems. Therefore, we propose a method to predict upcoming unsafe manipulation in robot-assisted retinal surgery; this prediction is then fed back to the surgeon via auditory substitution so that the surgeon can react to possible unsafe events in advance. This work focuses on scleral safety. A force-sensing tool is fabricated and calibrated to measure the scleral force. A recurrent neural network is designed and trained to predict the scleral force status 500 milliseconds in advance. Auditory substitution is implemented to feed the predicted force status back to the surgeon. A vessel-following manipulation is designed and performed on a dry eye phantom to simulate retinal surgery and is further used to examine the proposed method. Five users were involved in the validation experiments. The results show that the early warning can help to reduce the number of unsafe manipulation events.
|
|
11:00-12:15, Paper TuAT1-11.4 | Add to My Program |
Robotic Bronchoscopy Drive Mode of the Auris Monarch Platform |
Graetzel, Chauncey | Auris Health Inc |
Sheehy, Alexander | Auris Health Inc |
Noonan, David | Auris Health Inc |
Keywords: Surgical Robotics: Steerable Catheters/Needles, Telerobotics and Teleoperation, Medical Robots and Systems
Abstract: Robotic bronchoscopy has the potential to improve the early detection of lung cancer. For the technology to be broadly adopted, the physician needs to be able to control the robotic bronchoscope in an instinctive and effective manner. In this paper, we describe the algorithms used to manipulate/drive the Auris Monarch Platform, a 10 degree-of-freedom bronchoscope and sheath, using a 3 degree-of-freedom user input. We introduce the concept of paired driving where the devices co-insert and co-articulate depending on their relative insertion. The paper presents safety algorithms such as auto-relax on retract and tension monitoring. The drive modes were developed, optimized, and clinically tested in lung models, human cadavers and live porcine models prior to their commercial release. Clinical studies show that the physician is able to reach significantly deeper in the lung than with classic bronchoscopes.
|
|
11:00-12:15, Paper TuAT1-11.5 | Add to My Program |
Using Comanipulation with Active Force Feedback to Undistort Stiffness Perception in Laparoscopy |
Schmitt, François | ICube, University of Strasbourg |
Sulub, Josue | ISIR-Agathe, Sorbonne Universités, CNRS |
Avellino, Ignacio | Sorbonne Université, CNRS, INSERM, ISIR-Agathe |
Da Silva, Jimmy | Sorbonne Université, CNRS, INSERM, ISIR-Agathe |
Barbé, Laurent | University of Strasbourg, ICUBE CNRS |
Piccin, Olivier | ICube-AVR |
Bayle, Bernard | University of Strasbourg |
Morel, Guillaume | Univ. Pierre Et Marie Curie - Paris 6 |
Keywords: Surgical Robotics: Laparoscopy, Force Control, Cooperative Manipulators
Abstract: Surgeons performing laparoscopic surgery experience distortion when perceiving the stiffness of a patient’s tissues. This is due to the lever effect induced by the introduction of instruments into the patient’s body through a fulcrum. To address this problem, we propose to use the comanipulation paradigm. A robotic device is connected to the handle of the instrument while the instrument is simultaneously held by the surgeon. This device applies a force on the handle that reflects the force measured at the tool tip, with a gain that depends on the lever ratio. The implementation of this method is presented on an experimental setup, and a preliminary assessment experiment is presented.
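The distortion and the compensating gain follow from a lever balance at the fulcrum (trocar). With external lever arm \(\ell_{\text{out}}\) (handle to fulcrum) and internal lever arm \(\ell_{\text{in}}\) (fulcrum to tip), the standard laparoscopic-tool model gives

\[
F_{\text{handle}} = \frac{\ell_{\text{in}}}{\ell_{\text{out}}}\, F_{\text{tip}},
\]

so the stiffness perceived at the handle is scaled by the insertion-dependent lever ratio. Reflecting the measured tip force on the handle with a gain tied to this ratio is what allows the comanipulator to undo the distortion; the exact gain law used in the paper may differ.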
|
|
TuAT1-12 Interactive Session, 220 |
Add to My Program |
Field Robotics II - 2.1.12 |
|
|
|
11:00-12:15, Paper TuAT1-12.1 | Add to My Program |
VIKINGS : An Autonomous Inspection Robot for the ARGOS Challenge (I) |
Merriaux, Pierre | Irseem/Esigelec |
Rossi, Romain | ESIGELEC |
Boutteau, Rémi | IRSEEM |
Vauchey, Vincent | ESIGELEC |
Qin, Lei | ESIGELEC |
Chanuc, Pailin | ESIGELEC |
Rigaud, Florent | Compagnie Nationale Du Rhône |
Roger, Florent | SOMINEX |
Benoit, Decoux | Esigelec |
Savatier, Xavier | Irseem Ea 4353 |
Keywords: Field Robots, Localization, Robotics in Hazardous Fields
Abstract: This paper presents the overall architecture of the VIKINGS robot, one of the five contenders in the ARGOS challenge and the winner of two of its competitions. The VIKINGS robot is an autonomous or remote-operated robot for the inspection of oil and gas sites, able to assess various petrochemical risks based on embedded sensors and processing. The VIKINGS robot is able to autonomously monitor all the elements of a petrochemical process on a multi-storey oil platform (reading gauges, checking the state of the valves, verifying the proper functioning of the pumps) while facing many hazards (leaks, obstacles or holes in its path). This article presents the major components of the robot's architecture and the algorithms we developed for key functions (localization, gauge reading, etc.). We also present the methodology that we adopted and that allowed us to succeed in this challenge.
|
|
11:00-12:15, Paper TuAT1-12.2 | Add to My Program |
Coordinated Control of a Dual-Arm Space Robot (I) |
Shi, Lingling | Beijing Institute of Technology |
Jayakody, Hiranya Samanga | University of New South Wales |
Katupitiya, Jayantha | The University of New South Wales |
Jin, Xin | Beijing Institute of Technology |
Keywords: Space Robotics and Automation, Motion Control
Abstract: Dual-arm space robots have attracted increasing attention for performing on-orbit servicing missions autonomously or telerobotically. Coordinated control of the arms' motion and the spacecraft base attitude, considering the coupling dynamics between them, is essential for successful space operations. In this work, both arms of the space robot act as mission arms aimed at accomplishing the secure capture of a floating target. Two types of controllers, i.e., a smoothed quasi-continuous second-order sliding mode controller (SQC2S) and an adaptive variable structure controller (AVSC), are developed and compared in three cases: path tracking, set-point regulation, and an inaccurate system model. The AVSC has been demonstrated to achieve higher tracking accuracy in all scenarios. Furthermore, the AVSC saves a large amount of energy compared to the SQC2S. This is a critical advantage since energy is limited on board a spacecraft and is quite valuable. The SQC2S controller cannot deal with large system uncertainties, as reflected by large pose tracking errors of the end-effectors, whereas the AVSC can restrict the position tracking error to a much smaller value. Therefore, the AVSC has the ability to realize fast, high-accuracy tracking operation of space robots in space missions.
|
|
11:00-12:15, Paper TuAT1-12.3 | Add to My Program |
Radiological Monitoring of Nuclear Facilities Using the Continuous Autonomous Radiation Monitoring Assistance (CARMA) Robot (I) |
Bird, Benjamin | The University of Manchester |
Griffiths, Arron | The University of Manchester |
Martin, Horatio | The University of Manchester |
Codres, Eduard | The University of Manchester |
Jones, Jennifer | The University of Manchester |
Stancu, Alexandru | The University of Manchester |
Lennox, Barry | The University of Manchester |
Watson, Simon | University of Manchester |
Poteau, Xavier | Sellafield Ltd |
Keywords: Robotics in Hazardous Fields, Field Robots, Mapping
Abstract: Nuclear facilities can often require continuous monitoring to ensure there is no contamination by radioactive materials that might lead to safety or environmental issues. The current approach to radiological monitoring is to use human operators, which is both time-consuming and cost-inefficient, and as with many repetitive, routine tasks, there are considerable opportunities for the process to be improved through the utilization of autonomous robotic systems. This paper describes the design and development of an autonomous, ground-based radiological monitoring robot, Continuous Autonomous Radiation Monitoring Assistance (CARMA), and how it was able to detect and locate a fixed alpha source embedded in the floor when it was deployed into an active area on the Sellafield nuclear site. This deployment was the first time that a fully autonomous robot had ever been deployed at Sellafield, the largest nuclear site in Europe.
|
|
11:00-12:15, Paper TuAT1-12.4 | Add to My Program |
Robot Foraging: Autonomous Sample Return in a Large Outdoor Environment (I) |
Gu, Yu | West Virginia University |
Strader, Jared | West Virginia University |
Ohi, Nicholas | West Virginia University |
Harper, Scott | West Virginia University |
Lassak, Kyle | West Virginia University |
Yang, Chizhao | West Virginia University |
Kogan, Lisa | West Virginia University |
Hu, Boyi | Harvard University |
Gramlich, Matthew | West Virginia University |
Kavi, Rahul | West Virginia University |
Gross, Jason | West Virginia University |
Keywords: Field Robots, Autonomous Vehicle Navigation, Space Robotics and Automation
Abstract: Robotic foraging is a rich and potentially fruitful research field. Many robotics applications can be modeled as foraging problems, such as search and rescue, wildlife tracking, crop pollination and harvesting, mining and in-situ resource utilization, and scientific data/sample collection. In this article, the design of an autonomous foraging robot, named Cataglyphis, that won NASA’s Sample Return Robot Challenge in 2014, 2015, and 2016, is presented. The main goal of this article is to share the thinking process behind some of the key choices made during the design of Cataglyphis. The general framework that enabled autonomous sample return as described in this article may also be adapted to robots performing many other foraging-like applications.
|
|
11:00-12:15, Paper TuAT1-12.5 | Add to My Program |
Pictobot : A Cooperative Painting Robot for Interior Finishing of Industrial Developments (I) |
Asadi, Ehsan | Nanyang Technological University |
Li, Bingbing | Nanyang Technological University |
Chen, I-Ming | Nanyang Technological University |
Keywords: Robotics in Construction, Field Robots, Human-Centered Automation
Abstract: Interior painting of industrial developments is labor-intensive and performed with conventional techniques that are time-consuming and tiresome. We present a cooperative painting robot called Pictobot, which provides a way to combine the benefits of automation in construction with those of human dexterity and ingenuity. It thus relieves workers of the tiresome task and of considerable climbing, bending, kneeling, and reaching, freeing them to supervise the robot. Pictobot is empowered with a sensor-driven painting system, via in-situ 3D scanning and spray-gun motion planning, that adapts to the uncertainties of the construction environment and to robot deployment from various positions. In addition, we employ human perception to decompose a broad functional area into smaller workspaces and to position the robot approximately at the expected places with the help of both visual and data feedback. The human-Pictobot system thus works collaboratively: the worker's judgment and perception become the upper-level robot planner, and the robot adjusts the spray-gun path and the painting plans from the various deployed positions. The robot has been tested successfully in two actual industrial developments. Pictobot achieves higher spray transfer efficiency than manual spraying, which means reduced paint dust, paint waste, and human exposure to harmful chemicals. It also allows the existing workforce to achieve more, with consistent coat quality and higher productivity.
|
|
11:00-12:15, Paper TuAT1-12.6 | Add to My Program |
Teleoperated In-Situ Repair of an Aeroengine (I) |
Alatorre, David | University of Nottingham |
Nasser, Bilal | Rolls-Royce Plc |
Rabani, Amir | The University of Nottingham |
Nagy, Adam | University of Nottingham |
Dong, Xin | University of Nottingham |
Axinte, Dragos | University of Nottingham |
Kell, James | University of Nottingham |
Keywords: Telerobotics and Teleoperation, Robotics in Hazardous Fields, Motion Control
Abstract: There is a substantial financial incentive for in-situ repair of industrial assets. However, the need for highly trained mechanics to travel to the location of a repair often results in inconveniently long downtimes. The emergence of robots capable of replicating human interventions on industrial equipment can be coupled with remote control strategies to reduce the response time from several days to a few hours. This work outlines the design and remote control strategy for a novel robotic system to carry out repairs on aeroengine compressors in-situ via the internet. A high-level control computer serves as an interface with the skilled operator. A low-level controller receives instruction packets from the high-level controller via the internet and uses them to determine the necessary movements to carry out a machining operation. The robot, comprising a combination of rotary, prismatic and flexible (continuum) joints, was designed to replicate the degrees of freedom of hand-held tools. Sensors and encoders on the robot enable the low-level controller to independently detect faults and stop all motion despite the high latency of internet communications. The remote control system was tested by machining stress-relief features on eleven compressor blades with a median RMS error of 0.064 mm between the desired and measured blends. A successful demonstration on a production aeroengine shows the capability of the system.
|
|
TuAT1-13 Interactive Session, 220 |
Add to My Program |
Soft Robots II - 2.1.13 |
|
|
|
11:00-12:15, Paper TuAT1-13.1 | Add to My Program |
Stiffness-Tuneable Limb Segment with Flexible Spine for Malleable Robots |
Clark, Angus Benedict | Imperial College London |
Rojas, Nicolas | Imperial College London |
Keywords: Flexible Robots, Soft Material Robotics
Abstract: Robotic arms built from stiffness-adjustable, continuously bending segments serially connected with revolute joints have the ability to change their mechanical architecture and workspace, thus allowing high flexibility and adaptation to different tasks with less than six degrees of freedom, a concept that we call malleable robots. Known stiffening mechanisms may be used to implement suitable links for these novel robotic manipulators; however, these solutions usually show reduced performance when bending due to structural deformation. By including an inner support structure this deformation can be minimised, resulting in increased stiffening performance. This paper presents a new multi-material spine-inspired flexible structure for providing support in stiffness-controllable layer-jamming-based robotic links of large diameter. The proposed spine mechanism is highly movable, with a type and range of motion that match those of a robotic link using solely layer jamming, whilst maintaining a hollow and light structure. The mechanics and design of the flexible spine are explored, and a prototype of a link utilising it is developed and compared with limb segments based on granular jamming and on layer jamming without a support structure. Results of experiments verify the advantages of the proposed design, demonstrating that it maintains a constant central diameter across bending angles and presents an improvement of more than 203% in resisting force at 180 degrees.
|
|
11:00-12:15, Paper TuAT1-13.2 | Add to My Program |
A Reconfigurable Variable Stiffness Manipulator by a Sliding Layer Mechanism |
Li, Dickson Chun Fung | The Chinese University of Hong Kong |
Wang, Zerui | The Chinese University of Hong Kong |
Ouyang, Bo | City University of Hong Kong |
Liu, Yunhui | Chinese University of Hong Kong |
Keywords: Flexible Robots, Soft Material Robotics
Abstract: Inherent compliance plays an enabling role in soft robots, which rely on it to mechanically conform to the environment. However, it also limits the payload of the robots. Various variable stiffness approaches have been adopted to limit compliance and provide structural stability, but most of them can only stiffen discrete fixed regions, which means compliance cannot be precisely adjusted for different needs. This paper offers an approach to enhance the payload with finely adjusted compliance. We have developed a manipulator that incorporates a novel variable stiffness mechanism and a sliding layer mechanism. The variable stiffness mechanism achieves a stiffness-changing ratio of 6.4 in a miniaturized size (10 mm diameter for the testing prototype) through interlocking jamming layers with a honeycomb core. The sliding layer mechanism can actively shift the position of the stiffening regions by sliding the jamming layers. A model to predict the robot shape is derived and verified experimentally. The stiffening capacity of the variable stiffness mechanism is also empirically evaluated. A case study of a potential application in laparoscopic surgeries is showcased. The payload of the manipulator is investigated, and the prototype shows up to a 57.8% decrease in vertical deflection under an external load after reconfiguration.
|
|
11:00-12:15, Paper TuAT1-13.3 | Add to My Program |
A Novel Variable Stiffness Actuator Based on Pneumatic Actuation and Supercoiled Polymer Artificial Muscles |
Yang, Yang | The Hong Kong University of Science and Technology |
Kan, Zicheng | The Hong Kong University of Science and Technology |
Zhang, Yazhan | The Hong Kong University of Science and Technology |
Tse, Yu Alexander | The Hong Kong University of Science and Technology |
Wang, Michael Yu | Hong Kong University of Science & Technology |
Keywords: Soft Material Robotics, Hydraulic/Pneumatic Actuators, Mechanism Design
Abstract: This article describes an innovative design for a variable stiffness soft actuator, which can potentially be utilized for manipulation and locomotion of soft robots. The new actuator combines two types of actuation: soft pneumatic actuation and muscle-like supercoiled polymer (SCP) actuation. The soft pneumatic actuator has two roles: first, to generate bending motions, and second, to increase the stiffness of the whole actuator together with the SCP artificial muscles. The SCP artificial muscles are exploited to generate a pre-load that keeps the whole actuator from (excessive) deformation when an external load is applied. These two types of actuation are arranged antagonistically to realize stiffness tuning of the whole actuator. At a given bending position, the stiffness of the actuator can be tuned by controlling the pressure inside the air chamber and the tension on the SCP artificial muscles. In the experimental section, tests are conducted to characterize the SCP artificial muscles before they are applied to the proposed actuator. Afterwards, tests of the proposed actuator are performed to examine its variable stiffness capability. From the experimental results, the proposed actuator can achieve a 3.47 times stiffness variation ratio, from 0.0312 N/mm (40 kPa air pressure and no SCP actuation) to 0.1083 N/mm (82 kPa air pressure and SCP actuation at 0.143 W/cm) at the same position (bending angle of 56 degrees). This study exhibits the potential of applying SCP artificial muscles to
|
|
11:00-12:15, Paper TuAT1-13.4 | Add to My Program |
Design and Analysis of Pneumatic 2-DoF Soft Haptic Devices for Shear Display |
Kanjanapas, Smita | Stanford University |
Nunez, Cara M. | Stanford University |
Williams, Sophia R. | Stanford University |
Okamura, Allison M. | Stanford University |
Luo, Ming | Stanford University |
Keywords: Soft Material Robotics, Haptics and Haptic Interfaces
Abstract: Haptic devices use touch to enable communication in a salient and private manner. While most haptic devices are held or worn at the hand, there is recent interest in developing wearable haptic devices for the arms. This frees the hands for manipulation tasks, but creates challenges for wearability. One approach is to use pneumatically driven soft haptic devices that, compared to rigid devices, can be more readily worn due to their form factor and light weight. We propose a two-degree-of-freedom (2-DOF) pneumatic soft linear tactor that can be mounted on the forearm and provide shear force. The tactor comprises four soft fiber-constrained linear pneumatic actuators connected to a dome-shaped tactor head. The tactor can provide fast, repeatable forces on the order of 1 N in shear, in various directions in the plane of the skin surface. We demonstrate the tradeoffs of two housing schemes, one soft and one rigid, that mount the pneumatic soft linear actuator to the forearm. A user study demonstrated the performance of both versions of the device in providing directional cues, highlighting the challenges and importance of grounding soft wearable devices and the difficulties of designing haptic devices given the perceptual limits of the human forearm.
|
|
11:00-12:15, Paper TuAT1-13.5 | Add to My Program |
Pre-Charged Pneumatic Soft Gripper with Closed-Loop Control |
Li, Yunquan | The University of Hong Kong |
Chen, Yonghua | The University of Hong Kong |
Li, Yingtian | The University of Hong Kong |
Keywords: Soft Material Robotics, Grippers and Other End-Effectors, Mechanism Design
Abstract: Pneumatic soft grippers have nonlinear continuum deformation enabling them to adapt to irregular object shapes. Most existing pneumatic soft grippers use only open-loop control. Attempts at closed-loop control of soft grippers are all based on air pressure regulation, which is inaccurate and cumbersome due to the nonlinear performance of the soft actuator and the compressible nature of air. In this paper, we present a controllable soft gripper based on pre-charged pneumatic (PCP) soft actuators. The soft actuators, with the same design as a normal bending pneumatic soft actuator (PSA), are pre-charged with air to a preset pressure through a one-way check valve. The bending angle and bending speed of the soft actuators are controlled by servomotors through tendons, based on feedback data from force and proximity sensors. Kinematic models of the soft actuators are developed. Dynamic properties are experimentally studied. A prototype gripper with closed-loop control is developed for a controllable grasping demonstration.
|
|
11:00-12:15, Paper TuAT1-13.6 | Add to My Program |
A Novel Iterative Learning Model Predictive Control Method for Soft Bending Actuators |
Tang, ZhiQiang | The Chinese University of Hong Kong |
Heung, Ho Lam | The Chinese University of Hong Kong |
Tong, Kai Yu | The Chinese University of Hong Kong |
Li, Zheng | The Chinese University of Hong Kong |
Keywords: Soft Material Robotics, Motion Control, Model Learning for Control
Abstract: Soft robots attract research interest worldwide. However, their control remains challenging due to the difficulty of sensing and accurate modeling. In this paper, we propose a novel iterative learning model predictive control (ILMPC) method for soft bending actuators. The uniqueness of our approach is the ability to improve model accuracy gradually. In this method, a pseudo-rigid-body model is used to make an initial guess at the bending behavior of the actuator, and the model accuracy is improved with iterative learning. Compared with conventional model-free iterative learning control (ILC), the proposed method significantly shortens the learning process. Compared with model predictive control (MPC), the proposed method does not rely on an accurate model, and it outputs a satisfactory model after the learning process. A soft-elastic composite actuator (SECA) is used to validate the proposed method. Both simulation and experimental results show that the proposed method outperforms conventional MPC and ILC.
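For readers unfamiliar with the iterative-learning component, the sketch below shows a plain P-type ILC trial-to-trial update on a toy first-order plant. It is only a minimal stand-in: the paper wraps learning around an MPC with a pseudo-rigid-body model, whereas the plant, gain, and one-step error shift here are illustrative assumptions.

```python
import numpy as np

def ilc_iteration(u, y_ref, plant, gain=0.8):
    """One P-type ILC trial: u_{j+1}[k] = u_j[k] + gain * e_j[k+1]."""
    y = plant(u)
    e = y_ref - y
    u_next = u.copy()
    u_next[:-1] += gain * e[1:]      # one-step shift matches the plant's input delay
    return u_next, e

def plant(u, a=0.5):
    """Toy first-order lag standing in for the bending actuator's response."""
    y = np.zeros_like(u)
    for k in range(1, len(u)):
        y[k] = a * y[k - 1] + (1 - a) * u[k - 1]
    return y

u = np.zeros(50)
y_ref = np.linspace(0.0, 1.0, 50)    # desired bending profile over one trial
for _ in range(50):
    u, e = ilc_iteration(u, y_ref, plant)
print("RMS tracking error after 50 trials:", np.sqrt(np.mean(e ** 2)))
```

The trial-to-trial error contracts because the learning gain acts along the plant's input-output delay; the paper's contribution is to combine such learning with a predictive model rather than run it model-free.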
|
|
TuAT1-14 Interactive Session, 220 |
Add to My Program |
Haptics & Interfaces I - 2.1.14 |
|
|
|
11:00-12:15, Paper TuAT1-14.1 | Add to My Program |
Design and Experimental Validation of a 2DOF Sidestick Powered by Hyper-Redundant Magnetorheological Actuators Providing Active Feedback |
Begin, Marc-Andre | Universite De Sherbrooke |
Denninger, Marc | Université De Sherbrooke |
Plante, Jean-Sebastien | Université De Sherbrooke |
Keywords: Haptics and Haptic Interfaces, Mechanism Design, Tendon/Wire Mechanism
Abstract: Haptic joysticks for man-machine interaction used in aerospace flight control have highly demanding requirements of reliability, force density, and high dynamics that can hardly be met with conventional electromagnetic actuators. This work explores the potential of using an alternative actuation strategy based on hyper-redundant MR clutches that modulate the force of a tendon-driven 2-degree-of-freedom spherical gimbal. A system design and its closed-loop force control scheme are proposed. Experimental results for an open-loop characterization, static force control and dynamic force control are set out and compared with typical requirements for such devices from the literature. Results show that the proposed architecture leads to one of the lightest systems reported in the literature that has the potential to meet reliability requirements by providing a jam-free design with duplex fault tolerance, and yet, can generate high force levels while providing enough force resolution. The approach is promising and can extend to high-performance collaborative robot applications.
|
|
11:00-12:15, Paper TuAT1-14.2 | Add to My Program |
A Lightweight Force-Controllable Wearable Arm Based on Magnetorheological-Hydrostatic Actuators |
Veronneau, Catherine | Universite De Sherbrooke |
Denis, Jeff | Université De Sherbrooke |
Lebel, Louis-Philippe | Université De Sherbrooke |
Denninger, Marc | Université De Sherbrooke |
Plante, Jean-Sebastien | Université De Sherbrooke |
Girard, Alexandre | Université De Sehrbrooke |
Keywords: Wearable Robots, Hydraulic/Pneumatic Actuators, Haptics and Haptic Interfaces
Abstract: Supernumerary Robotic Limbs (SRLs) are wearable robots that augment human capabilities by acting as a co-worker, reaching objects, supporting human arms, etc. However, existing SRLs lack the mechanical backdrivability and bandwidth required for tasks where the interaction forces must be controlled, such as painting, drilling, or manipulating objects. Being highly backdrivable with a high bandwidth while minimizing weight presents a major technological challenge imposed by the limited performance of conventional electromagnetic actuators. This paper studies the feasibility of using magnetorheological (MR) clutches coupled to a low-friction hydrostatic transmission to provide a highly capable, yet lightweight, force-controllable SRL. A 2.7 kg 2-DOF wearable robotic arm is designed and built. The shoulder and elbow joints are designed to deliver 39 and 25 Nm, with 115° and 180° of range of motion. Experimental studies conducted on a one-DOF test bench and validated analytically demonstrate a high force bandwidth (>25 Hz) and a good ability to control interaction forces, even when interacting with an external impedance. Furthermore, three force-control approaches are studied and demonstrated experimentally: open-loop, closed-loop on force, and closed-loop on pressure. All three methods are shown to be effective. Overall, the proposed MR-hydrostatic actuation system is well suited for a lightweight SRL interacting with both the human and the environment, which add unpredictable disturbances.
|
|
11:00-12:15, Paper TuAT1-14.3 | Add to My Program |
Effects of Different Hand-Grounding Locations on Haptic Performance with a Wearable Kinesthetic Haptic Device |
Nisar, Sajid | Kyoto University |
Orta Martinez, Melisa | Stanford University |
Endo, Takahiro | Kyoto University |
Matsuno, Fumitoshi | Kyoto University |
Okamura, Allison M. | Stanford University |
Keywords: Haptics and Haptic Interfaces, Wearable Robots, Tendon/Wire Mechanism
Abstract: Grounding of kinesthetic feedback against a user's hand can increase the portability and wearability of a haptic device. However, the effects of different hand-grounding locations on a user's haptic perception are unknown. In this letter, we investigate the effects of three different hand-grounding locations (back of the hand, proximal phalanx of the index finger, and middle phalanx of the index finger) on haptic perception using a newly designed wearable haptic device. The novel device can provide kinesthetic feedback to the user's index finger in two directions: along the finger axis and in the finger's flexion-extension movement direction. We measure users' haptic perception for each grounding location through a user study for each of the two feedback directions. Results show that among the studied locations, grounding at the proximal phalanx has the smallest average just-noticeable difference for both feedback directions, indicating more sensitive haptic perception. The realism of the haptic feedback, based on user ratings, was highest with grounding at the middle phalanx for feedback along the finger axis, and at the proximal phalanx for feedback in the flexion-extension direction. The results provide insights for designing next-generation wearable hand-grounded kinesthetic devices to achieve better haptic performance and user experience in virtual reality and teleoperated robotic applications.
|
|
11:00-12:15, Paper TuAT1-14.4 | Add to My Program |
Optical Force Sensing in Minimally Invasive Robotic Surgery |
Hadi Hosseinabadi, Amir Hossein | University of British Columbia |
Honarvar, Mohammad | University of British Columbia |
Salcudean, Septimiu E. | University of British Columbia |
Keywords: Surgical Robotics: Laparoscopy, Force and Tactile Sensing, Haptics and Haptic Interfaces
Abstract: This paper evaluates the feasibility of a novel optical sensing concept to measure forces applied at the tip of da Vinci EndoWrist instruments. An optical slit is clamped onto the instrument shaft, in line with an infrared LED-bicell pair. Deflection of the shaft moves the slit with respect to the LED-bicell pair and modulates the light incident on each active element of the bicell. The differential photocurrent is conditioned and monitored to estimate the tip forces. The feasibility evaluation consists of a flexible beam model to quantify the required sensor performance, experimental results with a 3D-printed prototype, and an estimation of the sensor limitations, including the measurement bandwidth due to the structural dynamics. The proposed approach requires no modifications to the instrument, is adaptable to different instruments and robot platforms, and leads to high-resolution, high-dynamic-range sensing without hysteresis.
|
|
11:00-12:15, Paper TuAT1-14.5 | Add to My Program |
Mechanical Framework Design with Experimental Verification of a Wearable Exoskeleton Chair |
Han, Bin | Huazhong University of Science and Technology |
Du, Zihao | Huazhong University of Science and Technology |
Huang, Tiantian | Huazhong University of Science and Technology |
Zhang, Tao | Tsinghua University |
Li, Zhiyuan | Tsinghua University |
Bai, Ou | FIU |
Chen, Xuedong | Huazhong University of Science and Technology |
Keywords: Wearable Robots, Mechanism Design
Abstract: In this study, a human-chair model was developed as the basis for a wearable chair design. A prototype chair, HUST-EC, was fabricated and evaluated. Employing optimization under an interior-point penalty function, an optimized simulation of the operating mode with the lowest chair height was implemented. Solid models were established using the finite element analysis program embedded in SolidWorks, which revealed that the designed chair provides steady support to the user. An electromyography (EMG) test platform was developed, consisting of four EMG sensors, MATLAB-based acquisition software, and a loaded vest. Four healthy subjects participated in the evaluation experiment, in which EMG signals were collected from the rectus femoris, biceps femoris, vastus medialis, and vastus lateralis muscle groups under different loads and chair angles. The experimental data demonstrate that (1) the HUST-EC can greatly reduce muscle activation at a variety of loads and bending angles; (2) under the same load, muscle activation decreases slightly with an increased bending angle; and (3) at the same bending angle, muscle activation increases slightly with an increased load. The results show that the designed chair can effectively reduce the physical burden on workers and may improve work efficiency.
|
|
11:00-12:15, Paper TuAT1-14.6 | Add to My Program |
Fluidic Elastomer Actuators for Haptic Interactions in Virtual Reality |
Barreiros, Jose | Cornell University |
Claure, Houston | Cornell University |
Peele, Bryan | Cornell University |
Shapira, Omer | NVIDIA |
Spjut, Josef | NVIDIA Corporation |
Luebke, David | NVIDIA |
Jung, Malte | Cornell University |
Shepherd, Robert | Cornell University |
Keywords: Haptics and Haptic Interfaces, Virtual Reality and Interfaces, Soft Material Robotics
Abstract: Virtual reality experiences via immersive optics and sound are becoming ubiquitous; several consumer systems (e.g., Oculus Rift and HTC Vive) are now available with these capabilities. Other sensory experiences, such as that of touch, remain elusive in this field. The most successful examples of haptic sensation (e.g., the Nintendo 64's Rumble Pak and its descendants) are vibrotactile, and do not afford persistent, morphological shape experiences. This paper presents work on the development of a 12-DOF fluidically pressurized soft actuator for persistent and kinesthetic haptic sensations, a hardware controller for operating it, and a software interface to NVIDIA's VR Funhouse game.
|
|
TuAT1-15 Interactive Session, 220 |
Add to My Program |
SLAM - Session IV - 2.1.15 |
|
|
|
11:00-12:15, Paper TuAT1-15.1 | Add to My Program |
KO-Fusion: Dense Visual SLAM with Tightly-Coupled Kinematic and Odometric Tracking |
Houseago, Charlie | Imperial College London |
Bloesch, Michael | Imperial College |
Leutenegger, Stefan | Imperial College London |
Keywords: SLAM, Sensor Fusion, Kinematics
Abstract: Dense visual SLAM methods are able to estimate the 3D structure of an environment and locate the observer within it. They estimate the motion of a camera by matching visual information between consecutive frames, and are thus prone to failure under extreme motion conditions or when observing texture-poor regions. The integration of additional sensor modalities has shown great promise in improving the robustness and accuracy of such SLAM systems. In contrast to the popular use of inertial measurements, we propose to tightly couple a dense RGB-D SLAM system with kinematic and odometry measurements from a wheeled robot equipped with a manipulator. The system runs in real time on a GPU. It optimizes the camera pose by considering the geometric alignment of the map as well as kinematic and odometric data from the robot. Through real-world experiments, we show that the system is more robust to challenging trajectories featuring fast and loopy motion than the equivalent system without the additional kinematic and odometric knowledge, whilst retaining comparable performance to the equivalent RGB-D-only system on easy trajectories.
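Tight coupling here means the extra measurements enter the same pose optimization as the visual term rather than being filtered separately. The sketch below illustrates the idea with a toy SE(2) problem that stacks a point-alignment residual and an odometry-prior residual into one least-squares solve; the paper's actual system uses photometric terms on SE(3), so the residual forms, weights, and synthetic data here are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(pose, pts_map, pts_frame, odom_pose, w_geom=1.0, w_odom=0.5):
    """Stack map-alignment and odometry-prior residuals for a single SE(2) pose."""
    x, y, th = pose
    R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
    align = (pts_frame @ R.T + [x, y]) - pts_map   # geometric alignment term
    prior = pose - odom_pose                        # wheel-odometry prior term
    return np.concatenate([w_geom * align.ravel(), w_odom * prior])

rng = np.random.default_rng(0)
pts_map = rng.normal(size=(30, 2))                  # synthetic map points
true_pose = np.array([0.2, -0.1, 0.05])
c, s = np.cos(true_pose[2]), np.sin(true_pose[2])
pts_frame = (pts_map - true_pose[:2]) @ np.array([[c, -s], [s, c]])
odom = true_pose + np.array([0.03, -0.02, 0.01])    # noisy odometry estimate
sol = least_squares(residuals, np.zeros(3), args=(pts_map, pts_frame, odom))
print(np.round(sol.x, 3))                           # recovers ~[0.2, -0.1, 0.05]
```

Because both residual blocks share one state vector, the odometry prior keeps the solve well conditioned exactly where the visual term degenerates, which is the intuition behind the robustness result reported above.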
|
|
11:00-12:15, Paper TuAT1-15.2 | Add to My Program |
Diffraction-Aware Sound Localization for a Non-Line-Of-Sight Source |
An, Inkyu | KAIST |
Lee, Doheon | KAIST |
Choi, Jung-Woo | KAIST |
Manocha, Dinesh | University of North Carolina at Chapel Hill |
Yoon, Sung-eui | KAIST |
Keywords: Robot Audition, Localization
Abstract: We present a novel sound localization algorithm for a non-line-of-sight (NLOS) sound source in indoor environments. Our approach exploits the diffraction properties of sound waves as they bend around a barrier or an obstacle in the scene. We combine a ray-tracing-based sound propagation algorithm with a Uniform Theory of Diffraction (UTD) model, which simulates bending effects by placing a virtual sound source on a wedge in the environment. We precompute the wedges of a reconstructed mesh of an indoor scene and use them to generate diffraction acoustic rays to localize the 3D position of the source. Our method identifies the convergence region of the generated acoustic rays as the estimated source position based on a particle filter. We have evaluated our algorithm in multiple scenarios consisting of static and dynamic NLOS sound sources. In our tested cases, our approach can localize a source position with an average accuracy error of 0.7 m, measured by the L2 distance between estimated and actual source locations in a 7 m × 7 m × 3 m room. Furthermore, we observe a 37% to 130% improvement in accuracy over a state-of-the-art localization method that does not model diffraction effects, especially when a sound source is not visible to the robot.
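The convergence-region idea can be illustrated with a bare-bones particle filter that re-weights position hypotheses by their distance to the traced acoustic rays. This is a minimal sketch: the ray generation via UTD wedges is omitted, and the Gaussian weighting width, jitter, and two hand-placed rays are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

def point_to_ray_dist(p, origins, dirs):
    """Distance from a 3D point to each ray (origin, unit direction), with t >= 0."""
    t = np.maximum(np.einsum('ij,ij->i', p - origins, dirs), 0.0)
    closest = origins + t[:, None] * dirs
    return np.linalg.norm(p - closest, axis=1)

def pf_update(particles, origins, dirs, sigma=0.3):
    """Re-weight and resample particles by how close they lie to the acoustic rays."""
    w = np.array([np.exp(-0.5 * (point_to_ray_dist(p, origins, dirs) / sigma) ** 2).sum()
                  for p in particles])
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx] + rng.normal(0.0, 0.05, particles.shape)   # resample + jitter

# two hand-placed diffraction rays intersecting at a hypothetical source (3, 3, 1)
src = np.array([3.0, 3.0, 1.0])
origins = np.array([[0.0, 0.0, 1.0], [6.0, 0.0, 1.0]])
dirs = (src - origins) / np.linalg.norm(src - origins, axis=1, keepdims=True)
particles = rng.uniform([0, 0, 0], [7, 7, 3], size=(500, 3))
for _ in range(10):
    particles = pf_update(particles, origins, dirs)
print(particles.mean(axis=0))   # clusters near the assumed source position
```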
|
|
11:00-12:15, Paper TuAT1-15.3 | Add to My Program |
DeepFusion: Real-Time Dense 3D Reconstruction for Monocular SLAM Using Single-View Depth and Gradient Predictions |
Laidlow, Tristan | Imperial College London |
Czarnowski, Jan | Imperial College London |
Leutenegger, Stefan | Imperial College London |
Keywords: SLAM, Deep Learning in Robotics and Automation, Mapping
Abstract: While the keypoint-based maps created by sparse monocular Simultaneous Localisation and Mapping (SLAM) systems are useful for camera tracking, dense 3D reconstructions may be desired for many robotic tasks. Solutions involving depth cameras are limited in range and to indoor spaces, and dense reconstruction systems based on minimising the photometric error between frames are typically poorly constrained and suffer from scale ambiguity. To address these issues, we propose a 3D reconstruction system that leverages the output of Convolutional Neural Networks (CNNs) to produce fully dense depth maps for keyframes that include metric scale. Our system, DeepFusion, is capable of producing dense reconstructions in real time on a GPU. It fuses the output of a semi-dense multi-view stereo algorithm with the depth and gradient predictions of a CNN in a probabilistic fashion, using learned uncertainties produced by the network. While the network only needs to be run once per keyframe, we are able to optimise the depth map with each new frame so as to constantly make use of new geometric constraints. Based on its performance on synthetic and real-world datasets, we demonstrate that DeepFusion is capable of performing at least as well as other comparable systems.
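Probabilistic fusion with per-pixel uncertainties typically reduces to a precision-weighted average under a Gaussian assumption. The sketch below shows that standard rule; note it is a plausible stand-in rather than DeepFusion's exact cost, which also involves the CNN's depth-gradient predictions.

```python
import numpy as np

def fuse_depth(d_stereo, var_stereo, d_cnn, var_cnn):
    """Per-pixel inverse-variance fusion of two depth estimates (Gaussian assumption)."""
    w_s, w_c = 1.0 / var_stereo, 1.0 / var_cnn
    d = (w_s * d_stereo + w_c * d_cnn) / (w_s + w_c)
    return d, 1.0 / (w_s + w_c)

# one pixel: confident stereo depth vs. uncertain (but metric-scaled) CNN depth
d, var = fuse_depth(2.0, 0.04, 2.3, 0.25)
print(d, var)   # ~2.04 m: pulled only slightly toward the noisier CNN estimate
```

The CNN's metric scale anchors the reconstruction where stereo is unconstrained, while its larger variance keeps it from corrupting well-observed pixels.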
|
|
11:00-12:15, Paper TuAT1-15.4 | Add to My Program |
Sparse2Dense: From Direct Sparse Odometry to Dense 3D Reconstruction |
Tang, Jiexiong | KTH - Royal Institute of Technology |
Folkesson, John | KTH |
Jensfelt, Patric | KTH - Royal Institute of Technology |
Keywords: SLAM, Mapping, Visual Learning
Abstract: In this paper, we propose a new deep-learning-based dense monocular SLAM method. Compared to existing methods, the proposed framework constructs a dense 3D model via sparse-to-dense mapping using learned surface normals. Together with single-view learned depth estimation as a prior for monocular visual odometry, both accurate positioning and high-quality depth reconstruction are obtained. The depth and normals are predicted by a single network trained in a tightly coupled manner. Experimental results show that our method significantly improves the performance of visual tracking and depth prediction in comparison to the state of the art in deep monocular dense SLAM.
|
|
11:00-12:15, Paper TuAT1-15.5 | Add to My Program |
Loosely-Coupled Semi-Direct Monocular SLAM |
Lee, Seong Hun | University of Zaragoza |
Civera, Javier | Universidad De Zaragoza |
Keywords: SLAM, Localization, Mapping
Abstract: We propose a novel semi-direct approach for monocular simultaneous localization and mapping (SLAM) that combines the complementary strengths of direct and feature-based methods. The proposed pipeline loosely couples direct odometry and feature-based SLAM to perform three levels of parallel optimizations: (1) photometric bundle adjustment (BA) that jointly optimizes the local structure and motion, (2) geometric BA that refines keyframe poses and associated feature map points, and (3) pose graph optimization to achieve global map consistency in the presence of loop closures. This is achieved in real-time by limiting the feature-based operations to marginalized keyframes from the direct odometry module. Exhaustive evaluation on two benchmark datasets demonstrates that our system outperforms the state-of-the-art monocular odometry and SLAM systems in terms of overall accuracy and robustness.
|
|
TuAT1-16 Interactive Session, 220 |
Add to My Program |
Mapping - 2.1.16 |
|
|
|
11:00-12:15, Paper TuAT1-16.1 | Add to My Program |
Dynamic Hilbert Maps: Real-Time Occupancy Predictions in Changing Environments |
Guizilini, Vitor | University of Sydney |
Senanayake, Ransalu | University of Sydney |
Ramos, Fabio | University of Sydney |
Keywords: Mapping, Field Robots, Learning and Adaptive Systems
Abstract: This paper addresses the problem of learning instantaneous occupancy levels of dynamic environments and predicting future occupancy levels. Due to the complexity of most real environments, such as urban streets or crowded areas, the efficient and robust incorporation of temporal dependencies into otherwise static occupancy models remains a challenge. We propose a method to capture the uncertainty of moving objects and incorporate this uncertainty information into a continuous occupancy map represented in a rich high-dimensional feature space. This data-efficient model not only allows us to learn the occupancy states incrementally, but also makes predictions about what the future occupancy states will be. Experiments performed using 2D and 3D laser data collected from crowded unstructured outdoor environments show that the proposed methodology can accurately predict occupancy states for areas of around 1000 m^2 at 10 Hz, making the proposed framework ideal for online applications under real-time constraints.
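A static Hilbert map, the representation this paper extends with temporal dependencies, classifies occupancy by projecting points into a feature space and fitting a linear classifier there. The sketch below builds that static baseline with a grid of squared-exponential features and logistic regression; the grid layout, gamma, and simulated wall data are assumptions, and the paper's dynamic, incremental-learning machinery is not shown.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def rbf_features(X, centers, gamma=2.0):
    """Map 2D points onto a fixed set of squared-exponential basis functions."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
occ = np.column_stack([5.0 + 0.05 * rng.normal(size=100),      # hits on a wall at x = 5
                       rng.uniform(0.0, 10.0, 100)])
free = rng.uniform([0.0, 0.0], [4.5, 10.0], size=(200, 2))     # free space before it
X = np.vstack([occ, free])
y = np.hstack([np.ones(100), np.zeros(200)])

gx, gy = np.meshgrid(np.linspace(0, 10, 8), np.linspace(0, 10, 8))
centers = np.column_stack([gx.ravel(), gy.ravel()])
clf = LogisticRegression(max_iter=1000).fit(rbf_features(X, centers), y)

query = np.array([[5.0, 5.0], [2.0, 5.0]])
print(clf.predict_proba(rbf_features(query, centers))[:, 1])   # high on the wall, low in free space
```

Because the classifier lives in a continuous feature space, occupancy can be queried at any coordinate, which is the property the paper exploits when adding predictive temporal dynamics.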
|
|
11:00-12:15, Paper TuAT1-16.2 | Add to My Program |
Evaluating the Effectiveness of Perspective Aware Planning with Panoramas |
Mox, Daniel | University of Pennsylvania |
Cowley, Anthony | University of Pennsylvania |
Hsieh, M. Ani | University of Pennsylvania |
Taylor, Camillo Jose | University of Pennsylvania |
Keywords: Mapping
Abstract: In this work, we present an information-based exploration strategy tailored for the generation of high-resolution 3D maps. We employ RGBD panoramas because they have been shown to provide memory-efficient, high-quality representations of space. Robots explore the environment by selecting locations with maximal Cauchy-Schwarz Quadratic Mutual Information (CSQMI), computed on an angle-enhanced occupancy grid, at which to collect these RGBD panoramas. By employing the angle-enhanced occupancy grid, the resulting exploration strategy emphasizes perspective in addition to binary coverage. Furthermore, the goal selection strategy is improved by using image morphology to reduce the search space over which CSQMI is computed. We present experimental results demonstrating the improved performance in perception-related tasks of capturing panoramas using this approach, compared with near-frontier exploration and with a control condition in which images were logged at regular intervals while teleoperating the robot through the workspace. Collected imagery was passed through an object detection library, with our perspective-aware approach yielding a greater number of successful detections than near-frontier exploration.
|
|
11:00-12:15, Paper TuAT1-16.3 | Add to My Program |
Actively Improving Robot Navigation on Different Terrains Using Gaussian Process Mixture Models |
Nardi, Lorenzo | University of Bonn |
Stachniss, Cyrill | University of Bonn |
Keywords: Mapping, Learning and Adaptive Systems, Autonomous Vehicle Navigation
Abstract: Robot navigation in outdoor environments is exposed to detrimental factors, such as vibrations or increased power consumption, due to the different terrains on which the robot navigates. In this paper, we address the problem of actively improving navigation by planning paths that aim at reducing such phenomena over time. Our approach uses a Gaussian Process (GP) mixture model and an aerial image of the environment to continuously learn and improve a place-dependent model of these phenomena from the robot's experiences. We use this model to plan paths that trade off the exploration of unknown promising regions against the exploitation of known areas where the impact of the detrimental factors on navigation is low, leading to improved navigation over time. We implemented our approach and thoroughly tested it using real-world data. Our experiments suggest that our approach, starting with no initial information, leads the robot, after a few runs, to follow paths along which it experiences similar vibrations or energy consumption as if it were following the optimal path computed from ground-truth information.
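The exploration-exploitation trade-off described here can be captured by scoring candidate paths with the GP's predictive mean minus an uncertainty bonus, so uncertain regions occasionally get visited. The sketch below uses a single GP from scikit-learn rather than the paper's GP mixture, and the kernel, kappa, and vibration samples are illustrative assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# hypothetical vibration measurements (x, y) -> RMS acceleration from earlier runs
X_seen = np.array([[0.0, 0.0], [2.0, 0.0], [4.0, 1.0], [1.0, 3.0]])
vib = np.array([0.2, 0.8, 0.3, 0.9])
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.5), alpha=1e-2).fit(X_seen, vib)

def path_cost(waypoints, kappa=0.5):
    """Predicted vibration along the path minus an exploration bonus for uncertainty."""
    mu, std = gp.predict(waypoints, return_std=True)
    return float(np.sum(mu - kappa * std))

path_a = np.array([[0.5, 0.1], [1.0, 0.2], [1.5, 0.3]])
path_b = np.array([[1.5, 0.5], [1.8, 1.5], [1.5, 2.5]])
print(path_cost(path_a), path_cost(path_b))   # the planner would pick the cheaper path
```

As more traversals are logged, the predictive uncertainty shrinks and the cost converges toward pure exploitation, matching the behaviour the experiments report after a few runs.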
|
|
11:00-12:15, Paper TuAT1-16.4 | Add to My Program |
Continuous Occupancy Map Fusion with Fast Bayesian Hilbert Maps |
Zhi, Weiming | University of Sydney |
Ott, Lionel | University of Sydney |
Senanayake, Ransalu | University of Sydney |
Ramos, Fabio | University of Sydney |
Keywords: Mapping, Learning and Adaptive Systems
Abstract: Mapping the occupancy of an environment is central to robot autonomy. Traditional occupancy grid maps discretise the environment into independent cells, neglecting important spatial correlations, and are unable to capture the continuous nature of the real world. With these drawbacks of grid maps in mind, Hilbert Maps (HMs) and, more recently, Bayesian Hilbert Maps (BHMs) were introduced as a continuous representation of the environment. In this paper, we propose a method to merge Bayesian Hilbert Maps built by a team of robots in a decentralised manner. The training of BHMs requires the inversion of a large covariance matrix, incurring cubic complexity. We introduce an approximation, Fast Bayesian Hilbert Maps (Fast-BHM), which reduces the time complexity to below quadratic. Such speed-ups make the building and merging of Bayesian Hilbert Map models practical, opening the door to multi-robot Hilbert Map systems that can be much faster and more robust than an individual robot. By merging several individual Fast-BHMs in a decentralised manner, we obtain a unified model of the environment which is itself a Fast-BHM. We conduct experiments to show that global Fast-BHM models do not deteriorate after repeated merging and training. We then empirically demonstrate that, due to their compact representation, fused Fast-BHMs outperform fusion methods that discretise continuous representations when the amount of information communicated is limited.
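Merging Bayesian models held by different robots typically means combining their weight posteriors while discounting the prior they share. The sketch below shows that standard rule for diagonal Gaussian posteriors; it is an assumed stand-in for the paper's Fast-BHM merge, whose exact update is not reproduced here.

```python
import numpy as np

def merge_posteriors(mus, variances, mu0, var0):
    """Fuse diagonal-Gaussian weight posteriors from n robots sharing one prior.

    The common prior is divided out (n - 1) times so that shared prior
    information is not double counted when taking the product of posteriors.
    """
    n = len(mus)
    prec = sum(1.0 / v for v in variances) - (n - 1) / var0
    mean = (sum(m / v for m, v in zip(mus, variances)) - (n - 1) * mu0 / var0) / prec
    return mean, 1.0 / prec

# two robots' posteriors over the same map weight, merged into one
m, v = merge_posteriors([np.array([1.0]), np.array([1.2])],
                        [np.array([0.10]), np.array([0.20])],
                        mu0=np.array([0.0]), var0=np.array([1.0]))
print(m, v)   # mean ~1.14 with precision higher than either robot alone
```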
|
|
11:00-12:15, Paper TuAT1-16.5 | Add to My Program |
Regeneration of Normal Distributions Transform for Target Lattice Based on Fusion of Truncated Gaussian Components |
Hong, Hyunki | Seoul National University |
Yu, Hyeonwoo | Seoul National University |
Lee, Beom-Hee | Seoul National University |
Keywords: Mapping, Range Sensing, SLAM
Abstract: In this letter, we propose a method that can be used to regenerate the 3D normal distributions transform (NDT) for a target lattice. When a pose is updated by SLAM, the lattice at that pose is also transformed. Given that an NDT is a Gaussian mixture model generated from regular cells, the fusion of NDTs transformed with updated poses can distort the shapes of the Gaussian components (GCs). Moreover, when robots without information about other robots' initial poses share and fuse NDT maps, the simple fusion of NDT maps built in different lattices can distort GCs. To overcome this problem, we propose a method in which GCs are iteratively subdivided into truncated GCs by the target lattices on each axis, and the truncated GCs in the same target cell are fused. To determine whether a GC should be subdivided, we define a threshold on the weight corresponding to the truncated GC. In an experiment, we evaluated the receiver operating characteristics, the accuracy, the L2 value, the mean error, and the mean covariance distance (based on the Fréchet distance) to assess the similarity of the regenerated NDT and the ground-truth NDT. We also evaluated the computational performance of the proposed method and its application to map fusion. The NDT regenerated by the proposed method showed improvement in the L2 value, mean error, and mean covariance distance.
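Once the truncated components belonging to one target cell are known, fusing them into a single Gaussian is a moment-matching step: the merged mean is the weighted mean, and the merged covariance adds the spread between component means. The sketch below shows this step in isolation; the subdivision of components by the lattice planes, which is the paper's main contribution, is omitted, and the example weights are arbitrary.

```python
import numpy as np

def fuse_components(weights, means, covs):
    """Moment-matched fusion of weighted Gaussian components in one target cell."""
    w = np.asarray(weights, float)
    w = w / w.sum()
    mu = sum(wi * mi for wi, mi in zip(w, means))
    cov = sum(wi * (ci + np.outer(mi - mu, mi - mu))
              for wi, ci, mi in zip(w, covs, means))
    return mu, cov

mu, cov = fuse_components(weights=[0.6, 0.4],
                          means=[np.array([0.0, 0.0]), np.array([1.0, 0.0])],
                          covs=[0.1 * np.eye(2), 0.2 * np.eye(2)])
print(mu, cov)   # merged covariance widens along x, where the two means disagree
```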
|
|
11:00-12:15, Paper TuAT1-16.6 | Add to My Program |
Robust Global Structure from Motion Pipeline with Parallax on Manifold Bundle Adjustment and Initialization |
Liu, Liyang | University of Technology Sydney |
Zhang, Teng | University of Technology, Sydney |
Leighton, Brenton | University of Technology Sydney |
Zhao, Liang | Imperial College London |
Huang, Shoudong | University of Technology, Sydney |
Dissanayake, Gamini | University of Technology Sydney |
Keywords: Mapping, SLAM
Abstract: In this paper we present a novel global Structure from Motion (SfM) pipeline that is particularly effective in dealing with low-parallax scenes and camera motion collinear with the features that represent the environment structure. It is therefore particularly suitable for urban SLAM, in which frequent road-facing motion poses many challenges to conventional SLAM algorithms. Our pipeline includes a recently explored bundle adjustment (BA) method that exploits a feature parameterization using the parallax angle between on-manifold observation rays (PMBA). This BA stage is demonstrated to have a consistently stable optimization configuration for features with any parallax, so low-parallax features can stay in the reconstruction without pre-filtering. To allow practical usage of PMBA, we provide a compatible initialization stage in the SfM pipeline to initialize all camera poses simultaneously, exhibiting friendliness to collinear motion. This is achieved by simplifying PMBA into a hybrid graph problem with high connectivity yet a small node set, solved using a robust linear programming technique. Using simulations and a series of publicly available real datasets including “KITTI” and “BAL”, we demonstrate the robustness of the position initialization stage in handling collinear motion and outlier matches, the superior convergence performance of the BA stage in the presence of low-parallax features, and the effectiveness of our pipeline in handling many sequential or out-of-order urban scenes.
|
|
TuAT1-17 Interactive Session, 220 |
Add to My Program |
Aerial Systems: Mechanisms I - 2.1.17 |
|
|
|
11:00-12:15, Paper TuAT1-17.1 | Add to My Program |
Fault-Tolerant Flight Control of a VTOL Tailsitter UAV |
Fuhrer, Silvan | ETH Zurich |
Verling, Sebastian | ETH Zurich |
Stastny, Thomas | Swiss Federal Institute of Technology (ETH Zurich) |
Siegwart, Roland | ETH Zurich |
Keywords: Aerial Systems: Mechanics and Control, Failure Detection and Recovery
Abstract: Compared to other vertical take-off and landing (VTOL) systems, a tailsitter minimizes the number of actuators and moving parts necessary. The downside of having a minimalistic actuation is its inherent low fault-tolerance. The failure of an actuator usually results in a loss of controllability, resulting in a crash. In this paper we analyze the possible actuator failures and the constraints they pose on the capabilities of the system. We further present light-weight adaptations to the nominal flight controller to make it fault-tolerant. The fault-tolerant controller is implemented on a small tailsitter VTOL aircraft and adjusted to the system by means of extensive experimental studies. Finally, the capabilities and performance under failures are demonstrated and analyzed.
|
|
11:00-12:15, Paper TuAT1-17.2 | Add to My Program |
Modeling and Control of a Passively-Coupled Tilt-Rotor Vertical Takeoff and Landing Aircraft |
Chiappinelli, Romain | Coriolis Games Corporation |
Cohen, Mitchell | McGill University |
Doff-Sotta, Martin | McGill University |
Nahon, Meyer | McGill University |
Forbes, James Richard | McGill University |
Apkarian, Jacob | Coriolis G |
Keywords: Aerial Systems: Mechanics and Control, Dynamics, Control Architectures and Programming
Abstract: This paper presents the modeling and control of a passively-coupled tilt-rotor vertical takeoff and landing aircraft. The aircraft consists of a quadrotor frame attached to a fixed-wing aircraft by an unactuated hinged mechanism. The platform is capable of smooth transitions from hover to forward flight without the use of tilting actuators. The transition from hover to forward flight is made possible by differential thrust between the fore and aft propellers of the quadrotor frame. In this paper, the coupled dynamics between the quadrotor frame and the aircraft frame are modeled as a constrained multi-body system. The equations of motion are established using a constrained Lagrangian approach, and the developed model is used to build a realistic simulation environment for control design purposes. A cascaded control architecture based on P/PID controllers is proposed to achieve inner-loop attitude, height, and forward velocity control. Simulated and experimental results are obtained with a close match for hover, transitions, forward flight, and banked turn maneuvers.
|
|
11:00-12:15, Paper TuAT1-17.3 | Add to My Program |
Power-Minimizing Control of a Variable-Pitch Propulsion System for Versatile Unmanned Aerial Vehicles |
Henderson, Travis | CSE, UMN |
Papanikolopoulos, Nikos | University of Minnesota |
Keywords: Aerial Systems: Mechanics and Control, Aerial Systems: Applications
Abstract: In response to an abundance of applications, Unmanned Aerial Vehicles are being called upon to perform missions of high difficulty for increasingly long periods of time. Traditional paradigms of propeller design and actuation are reaching a design ceiling, motivating creative approaches to the design of propeller-based propulsion mechanisms. Within the last decade, one particular kind of mechanism, the variable pitch propeller, has been studied by researchers for its applications to the class of small UAVs. This paper pushes for new results in this area by exploring the use of Variable Pitch Propulsion (VPP) for maximizing efficiency for small, versatile UAVs. A control algorithm is presented to minimize the consumed electrical power during a quasi-steady propulsive state. In particular, the algorithm is not confined to operation in limited regions of the state space, but seeks to minimize power at whatever point in the state space a steady state is reached. Several experimental results are presented to validate the approach.
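The paper's claim, minimizing electrical power wherever a steady propulsive state is reached, is in the spirit of extremum-seeking control: probe the pitch setting and keep whichever direction lowers measured power. The sketch below is a loose, gradient-free stand-in under a hypothetical quadratic power curve; neither the algorithm nor the power model is taken from the paper.

```python
def extremum_seek(power_of_pitch, pitch0, step=0.2, n_iter=50):
    """Gradient-free descent on measured power vs. blade pitch at a fixed setpoint."""
    pitch = pitch0
    for _ in range(n_iter):
        if power_of_pitch(pitch + step) < power_of_pitch(pitch):
            pitch += step
        elif power_of_pitch(pitch - step) < power_of_pitch(pitch):
            pitch -= step
        else:
            step *= 0.5               # shrink the probe as the minimum is approached
    return pitch

# toy power curve (watts) with its minimum at 12 degrees of pitch
print(extremum_seek(lambda p: (p - 12.0) ** 2 + 40.0, pitch0=6.0))   # -> ~12.0
```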
|
|
11:00-12:15, Paper TuAT1-17.4 | Add to My Program |
Rapid Inertial Reorientation of an Aerial Insect-Sized Robot Using a Piezo-Actuated Tail |
Singh, Avinash | University of Washington |
Libby, Thomas | University of Washington |
Fuller, Sawyer | University of Washington |
Keywords: Aerial Systems: Mechanics and Control, Biologically-Inspired Robots
Abstract: We present the design, fabrication, and feedforward control of an insect-sized (142 mg) aerial robot that is equipped with a bio-inspired inertial tail. A tail allows the robot to perform rapid inertial reorientation as well as to shift weight to modulate aerodynamic torques on its body. Here we present the first analysis of inertial reorientation using a piezo actuator, departing from previous work that has focused exclusively on actuation by DC electric motor. The primary difference is that, unlike a geared motor system, the piezo-tail system operates as a resonant system, exhibiting slowly decaying oscillations. We present a dynamic model of piezo-driven inertial reorientation, along with an open-loop feedforward controller that reduces excitation of the resonant mode. We validate our approach on a tethered testbed as well as a flight-capable prototype. Our results indicate that incorporating a tail can allow for more rapid dynamic maneuvers and could stabilize the robot during flight.
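A common feedforward recipe for driving a lightly damped resonant mode without exciting it is input shaping. The sketch below builds the classic two-impulse zero-vibration (ZV) shaper and convolves it with a raw tail command; this is a textbook technique offered as an illustration, since the paper's own feedforward controller, and the 25 Hz mode and 3% damping assumed here, are not taken from the abstract.

```python
import numpy as np

def zv_shaper(wn, zeta):
    """Two-impulse zero-vibration shaper for a mode at wn rad/s with damping zeta."""
    wd = wn * np.sqrt(1.0 - zeta ** 2)
    K = np.exp(-zeta * np.pi / np.sqrt(1.0 - zeta ** 2))
    return np.array([1.0, K]) / (1.0 + K), np.array([0.0, np.pi / wd])

def shape(cmd, dt, amps, times):
    """Convolve a raw actuator command with the shaper impulses."""
    out = np.zeros(len(cmd) + int(times[-1] / dt) + 1)
    for a, t in zip(amps, times):
        k = int(round(t / dt))
        out[k:k + len(cmd)] += a * cmd
    return out[:len(cmd)]

amps, times = zv_shaper(wn=2 * np.pi * 25.0, zeta=0.03)   # assumed 25 Hz tail mode
shaped = shape(np.ones(200), dt=1e-3, amps=amps, times=times)
print(shaped[:3], shaped[-3:])   # step split into two delayed, scaled steps
```

The second impulse arrives half a damped period after the first, so the residual oscillations it excites cancel those of the first, which is one way to realise the "reduced excitation of the resonant mode" described above.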
|
|
11:00-12:15, Paper TuAT1-17.5 | Add to My Program |
Contact-Based Navigation Path Planning for Aerial Robots |
Khedekar, Nikhil Vijay | Birla Institute of Technology and Science (BITS), Pilani |
Mascarich, Frank | University of Nevada, Reno |
Papachristos, Christos | University of Nevada Reno |
Dang, Tung | University of Nevada, Reno |
Alexis, Kostas | University of Nevada, Reno |
Keywords: Aerial Systems: Mechanics and Control, Aerial Systems: Applications
Abstract: In this paper the problem of contact-based navigation path planning for aerial robots is considered with the goal of enabling the autonomous in-contact operation on surfaces that can be highly anomalous. Such a capacity can prove critical in inspection through contact missions, as well as when a flying robot is tasked to operate in very narrow environments rendering safe free-flight impossible. To achieve this objective, beyond sliding in contact, a new locomotion primitive is introduced, namely that of azimuth rotations perpendicular to the surface under consideration. This new navigation mode, called flying cartwheel mode, offers navigation resourcefulness and resilience when the system is tasked to move in contact with surfaces that are otherwise non-traversable. The designed path planning method exploits both navigation modalities and a traversability metric to decide when to switch from sliding to flying cartwheel mode, and overall provides cost-optimal trajectories for in-contact navigation. The proposed approach is verified both in simulation, as well as experimentally using a surface presenting complex anomalies. It is highlighted that the proposed method does not assume any specialized contact mechanism or a control law tailored to physical interaction tasks, and hence is applicable to almost any micro aerial vehicle integrating protective shrouds around its propellers.
|
|
11:00-12:15, Paper TuAT1-17.6 | Add to My Program |
Cargo Transportation Strategy Using T3-Multirotor UAV |
Lee, Seung Jae | Seoul National University |
Kim, H. Jin | Seoul National University |
Keywords: Aerial Systems: Mechanics and Control, Aerial Systems: Applications, Dynamics
Abstract: In this paper, we introduce a cargo transportation method using a new type of multi-rotor UAV platform known as the T3-multirotor, to achieve stable and consistent flight performance regardless of the type of cargo attached to the fuselage. The T3-multirotor, which consists of a 'Thrust Generating Part' and a 'Fuselage Part', can directly control the relative attitude between the two parts using a novel servomechanism. By utilizing the servomechanism with the proposed relative attitude control strategy, the T3-multirotor with cargo attached to the fuselage part can behave as a multi-rotor with only the moment of inertia of the thrust-generating part during the entire transportation, achieving stable platform motion control regardless of the attached cargo. A detailed hardware description and dynamic analysis of the T3-multirotor are provided, and the validity of the proposed control strategy is analyzed. The feasibility of the proposed control strategy is verified through experimental results.
|
|
TuAT1-18 Interactive Session, 220 |
Add to My Program |
Aerial Systems: Applications III - 2.1.18 |
|
|
|
11:00-12:15, Paper TuAT1-18.1 | Add to My Program |
Experimental Learning of a Lift-Maximizing Central Pattern Generator for a Flapping Robotic Wing |
Bayiz, Yagiz Efe | Pennsylvania State University |
Hsu, Shih-Jung | Penn State University |
Aguiles, Aaron | The Pennsylvania State University |
Shade-Alexander, Yano | Pennsylvania State University |
Cheng, Bo | Pennsylvania State University |
Keywords: Learning and Adaptive Systems, Aerial Systems: Applications, Biologically-Inspired Robots
Abstract: In this work, we present an application of a policy gradient algorithm to a real-time robotic learning problem, where the goal is to maximize the average lift generation of a dynamically scaled robotic wing at a constant Reynolds number (Re). Compared to our previous work, the merit of this work is two-fold. First, a central pattern generator (CPG) model was used as the motion controller, which provided smooth generation of, and transitions between, rhythmic wing motion patterns while the CPG was being updated by the policy gradient, thereby accelerating sample generation and reducing the total learning time. Second, the kinematics included three degrees of freedom (stroke, deviation, pitching) and were free of the half-stroke symmetry constraint; together these yielded a larger kinematic space, which was then explored by the policy gradient to maximize lift generation. The learned wing kinematics used the full range of stroke and deviation to maximize lift generation, implying that wing trajectories with larger disk area and lower frequencies were preferred for high lift generation at constant Re. Furthermore, the wing pitching amplitude converged to values between 45° and 49° regardless of the other parameters. Notably, the learning agent was able to find two locally optimal wing motion patterns, which had distinct wing trajectory shapes but generated similar cycle-averaged lift.
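Updating controller parameters from measured lift can be sketched as a finite-difference policy-gradient loop: perturb the parameter vector, measure the change in average lift, and step along the estimated gradient. Everything below is a toy: the two normalized parameters, the analytic stand-in for measured lift (peaking at a 47° pitch amplitude to echo the reported 45-49° convergence), and the learning-rate choice are assumptions, not the paper's algorithm or reward.

```python
import numpy as np

rng = np.random.default_rng(0)

def pg_step(theta, avg_lift, sigma=0.05, n_rollouts=8, lr=0.05):
    """One finite-difference policy-gradient step on normalized CPG parameters."""
    grad = np.zeros_like(theta)
    base = avg_lift(theta)
    for _ in range(n_rollouts):
        eps = rng.normal(0.0, sigma, theta.shape)
        grad += (avg_lift(theta + eps) - base) * eps / (sigma ** 2 * n_rollouts)
    return theta + lr * grad

def avg_lift(theta):
    """Analytic stand-in for the measured cycle-averaged lift (not the real reward).
    Parameters are normalized; pitch amplitude in degrees is 47 + 10 * theta[1]."""
    stroke, pitch = theta
    return np.sin(stroke) - 0.2 * pitch ** 2

theta = np.array([0.5, -1.7])            # start at roughly 30 deg pitch amplitude
for _ in range(300):
    theta = pg_step(theta, avg_lift)
print("pitch amplitude (deg):", round(47 + 10 * theta[1], 1))   # settles near 47
```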
|
|
11:00-12:15, Paper TuAT1-18.2 | Add to My Program |
Toward Lateral Aerial Grasping & Manipulation Using Scalable Suction |
Kessens, Chad C. | United States Army Research Laboratory |
Horowitz, Matthew | Engility Corp |
Liu, Chao | University of Pennsylvania |
Dotterweich, James | Engility Corp |
Yim, Mark | University of Pennsylvania |
Edge, Harris | US Army Research Lab |
Keywords: Aerial Systems: Applications, Mobile Manipulation, Grippers and Other End-Effectors
Abstract: This paper is an initial step toward the realization of an aerial robot that can perform lateral physical work, such as drilling a hole or fastening a screw in a wall. Aerial robots are capable of high maneuverability and can provide access to locations that would be difficult or impossible for ground-based robots to reach. However, to fully utilize this mobility, systems would ideally be able to perform functional work in those locations, requiring the ability to exert lateral forces. To substantially improve a hovering vehicle's ability to stably deliver large lateral forces, we propose the use of a versatile suction-based gripper that can establish pulling contact on featureless surfaces. Such contact enables access to environmental forces that can be used to further stabilize the vehicle and also increase the lateral force delivered to the surface through a possible secondary mechanism. This paper introduces the concept, describes the design of a new self-sealing suction cup based on a previous design, details the design of a gripper using those cups, and describes the arm and flight vehicle. It then evaluates the cup and gripper performance in several ways, culminating in physical grasping demonstrations using the arm and gripper, including one in the presence of simulated flight noise based on data from preliminary indoor flight experiments.
|
|
11:00-12:15, Paper TuAT1-18.3 | Add to My Program |
Light-Weight Whiskers for Contact, Pre-Contact and Fluid Velocity Sensing |
Deer, William | University of Queensland |
Pounds, Pauline | The University of Queensland |
Keywords: Aerial Systems: Applications, Force and Tactile Sensing
Abstract: This paper reports the design, fabrication and testing of light-weight whisker sensors intended for use on robotic platforms, especially drones, featuring multiple whisker fibres in an array. The whiskers transmit forces along the vibrissae fibres to a load plate bonded to embedded MEMS barometers potted in polyurethane rubber, which act as force sensors. This construction allows for directional sensing of forces with arrays of fibres weighing less than half a gram per whisker. Forces as low as 0.34 mg can be measured, and the whiskers are capable of sensing fluid stream velocities up to 7.5 m/s. The whiskers are sufficiently sensitive so as to be able to detect the pressure wave of an approaching hand moving at 0.53 m/s from 20 mm away.
|
|
11:00-12:15, Paper TuAT1-18.4 | Add to My Program |
There's No Place Like Home: Visual Teach and Repeat for Emergency Return of Multirotor UAVs During GPS Failure |
Warren, Michael | University of Toronto |
Greeff, Melissa | University of Toronto |
Patel, Bhavit | University of Toronto |
Collier, Jack | Defence R&D Canada |
Schoellig, Angela P. | University of Toronto |
Barfoot, Timothy | University of Toronto |
Keywords: Aerial Systems: Applications, Visual-Based Navigation, Sensor-based Control
Abstract: Redundant navigation systems are critical for safe operation of UAVs in high-risk environments. Since most commercial UAVs almost wholly rely on GPS, jamming, interference and multi-pathing are real concerns that usually limit their operations to low-risk environments and Visual Line-of-Sight. This paper presents a vision-based route-following system for the autonomous, safe return of UAVs under primary navigation failure such as GPS jamming. Using a Visual Teach and Repeat framework to build a visual map of the environment during an outbound flight, we show the autonomous return of the UAV by visually localising the live view to this map when a simulated GPS failure occurs, controlling the vehicle to follow the safe outbound path back to the launch point. Using gimbal-stabilised stereo vision and inertial sensing alone, without reliance on external infrastructure, VO and localisation are achieved at altitudes of 5-25 m and flight speeds up to 55 km/h. We examine the performance of the visual localisation algorithm under a variety of conditions and also demonstrate closed-loop autonomy along a complicated 450 m path.
|
|
11:00-12:15, Paper TuAT1-18.5 | Add to My Program |
Human Gaze-Driven Spatial Tasking of an Autonomous MAV |
Yuan, Liangzhe | University of Pennsylvania |
Reardon, Christopher M. | U.S. Army Research Laboratory |
Warnell, Garrett | U.S. Army Research Laboratory |
Loianno, Giuseppe | New York University |
Keywords: Aerial Systems: Applications, Human-Centered Robotics
Abstract: In this work, we address the problem of providing human-assisted quadrotor navigation using a set of eye tracking glasses. The advent of these devices (i.e., eye tracking glasses, virtual reality tools, etc.) provides the opportunity to create new, non-invasive forms of interaction between a human and robots. We show how a set of glasses equipped with a gaze tracker, a camera, and an Inertial Measurement Unit (IMU) can be used to (a) estimate the relative position of the human with respect to a quadrotor, (b) decouple the gaze direction from the head orientation, and (c) allow the human to spatially task (i.e., send new 3D navigation waypoints to) the robot in an uninstrumented environment. We employ a combination of camera and IMU data to track the human's head orientation, which allows us to decouple the gaze direction from the head motion. In order to detect the flying robot, we train and use a deep neural network. We evaluate the proposed approach experimentally, and show that our pipeline is able to successfully achieve human-guided autonomy for spatial tasking. The proposed approach can be employed in a wide range of scenarios including inspection and first response, and it can be used by people with disabilities that affect their mobility.
|
|
11:00-12:15, Paper TuAT1-18.6 | Add to My Program |
Optimal Trajectory Generation for Quadrotor Teach-And-Repeat |
Gao, Fei | Hong Kong University of Science and Technology |
Wang, Luqi | HKUST |
Wang, Kaixuan | Hong Kong University of Science and Technology |
Wu, William | HKUST |
Zhou, Boyu | Hong Kong University of Science and Technology |
Han, Luxin | Hong Kong University of Science and Technology |
Shen, Shaojie | Hong Kong University of Science and Technology |
Keywords: Aerial Systems: Applications, Motion and Path Planning, Autonomous Vehicle Navigation
Abstract: In this paper, we propose a novel motion planning framework for quadrotor teach-and-repeat applications. Instead of controlling the drone to precisely follow the teaching path, our method converts an arbitrary jerky human-piloted trajectory to a topologically equivalent one, which is guaranteed to be safe, smooth, and kinodynamically feasible with an expected aggressiveness. Our proposed planning framework optimizes the trajectory in both spatial and temporal aspects. In the spatial layer, a flight corridor is found to represent the free space which is topologically equivalent with the teaching path. Then a minimum-jerk piecewise trajectory is generated within the flight corridor. In the temporal layer, the trajectory is re-parameterized to obtain a minimum-time temporal trajectory under kinodynamic constraints. The spatial and temporal optimizations are both formulated as convex programs and are done iteratively. The proposed method is integrated into a complete quadrotor system and is validated to perform aggressive flights in challenging indoor and outdoor environments.
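The spatial layer's minimum-jerk piecewise trajectory reduces, per axis and per corridor segment, to quintic polynomials pinned by boundary position, velocity, and acceleration. The sketch below solves one such unconstrained segment in closed form; the paper's full formulation adds corridor constraints and a temporal re-parameterization, both omitted here, and the boundary values are arbitrary examples.

```python
import numpy as np

def min_jerk_segment(p0, v0, a0, pf, vf, af, T):
    """Quintic coefficients (ascending order) for a minimum-jerk segment of duration T."""
    M = np.array([
        [1, 0, 0,      0,       0,        0],
        [0, 1, 0,      0,       0,        0],
        [0, 0, 2,      0,       0,        0],
        [1, T, T**2,   T**3,    T**4,     T**5],
        [0, 1, 2*T,    3*T**2,  4*T**3,   5*T**4],
        [0, 0, 2,      6*T,     12*T**2,  20*T**3],
    ], dtype=float)
    return np.linalg.solve(M, np.array([p0, v0, a0, pf, vf, af], dtype=float))

# one axis of one corridor segment: rest-to-rest motion over 2 s
c = min_jerk_segment(0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 2.0)
t = np.linspace(0.0, 2.0, 5)
print(np.polyval(c[::-1], t))   # smooth position samples from 0 to 1
```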
|
|
TuAT1-19 Interactive Session, 220 |
Add to My Program |
Automation Technology - 2.1.19 |
|
|
|
11:00-12:15, Paper TuAT1-19.1 | Add to My Program |
Design and Implementation of Computer Vision Based In-Row Weeding System |
Wu, Xiaolong | Georgia Institute of Technology |
Aravecchia, Stephanie | Umi 2958 Gt-Cnrs |
Pradalier, Cedric | GeorgiaTech Lorraine |
Keywords: Agricultural Automation, Robotics in Agriculture and Forestry, Field Robots
Abstract: Autonomous robotic weeding systems in precision farming have demonstrated their potential to alleviate the current dependency on herbicides and pesticides by introducing selective spraying or mechanical weed removal modules, thus reducing environmental pollution and improving sustainability. However, most previous works require a fast weed detection system to achieve real-time treatment. In this paper, a novel computer vision based weeding control system is presented, in which a non-overlapping multi-camera system is introduced to compensate for indeterminate classification delays, thus allowing for more complicated and advanced detection algorithms, e.g. deep learning based methods. Suitable tracking and control strategies are developed to achieve accurate and robust in-row weed treatment, and the performance of the proposed system is evaluated under different terrain conditions in the presence of various delays.
|
|
11:00-12:15, Paper TuAT1-19.2 | Add to My Program |
LSTM-Based Network for Human Gait Stability Prediction in an Intelligent Robotic Rollator |
Chalvatzaki, Georgia | National Technical University of Athens |
Koutras, Petros | National Technical University of Athens |
Hadfield, Jack | National Technical University of Athens |
Papageorgiou, Xanthi S. | National Technical University of Athens |
Tzafestas, Costas S. | ICCS - Inst of Communication and Computer Systems |
Maragos, Petros | National Technical University of Athens |
Keywords: Automation in Life Sciences: Biotechnology, Pharmaceutical and Health Care, Human Detection and Tracking, Deep Learning in Robotics and Automation
Abstract: In this work, we present a novel framework for on-line human gait stability prediction for elderly users of an intelligent robotic rollator using Long Short-Term Memory (LSTM) networks, fusing multimodal RGB-D and Laser Range Finder (LRF) data from non-wearable sensors. A Deep Learning (DL) based approach is used for upper-body pose estimation. The detected pose is used for estimating the body Center of Mass (CoM) using an Unscented Kalman Filter (UKF). An Augmented Gait State Estimation framework exploits the LRF data to estimate the legs' positions and the respective gait phase. These estimates are the inputs of an encoder-decoder sequence-to-sequence model that predicts the gait stability state as Safe or Fall Risk walking. The framework is validated with data from real patients, by exploring different network architectures and hyperparameter settings and by comparing the proposed method with other baselines. The presented LSTM-based human gait stability predictor is shown to provide robust predictions of the human stability state, and thus has the potential to be integrated into a general user-adaptive control architecture as a fall-risk alarm.
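A minimal sketch of this kind of LSTM-based stability classifier is shown below; the layer sizes and the reduction of the encoder-decoder architecture to a single encoder with a linear head are illustrative assumptions, not the authors' network.

```python
import torch
import torch.nn as nn

class GaitStabilityLSTM(nn.Module):
    """Toy sequence classifier in the spirit of the paper's model."""
    def __init__(self, n_features=10, hidden=64, n_classes=2):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)   # Safe vs. Fall Risk

    def forward(self, x):                # x: (batch, time, features)
        _, (h, _) = self.encoder(x)      # final hidden state summarizes the gait
        return self.head(h[-1])          # per-sequence stability logits

model = GaitStabilityLSTM()
logits = model(torch.randn(8, 50, 10))  # 8 sequences of 50 fused CoM/gait features
```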
|
|
11:00-12:15, Paper TuAT1-19.3 | Add to My Program |
Urban Swarms: A New Approach for Autonomous Waste Management |
Alfeo, Antonio Luca | University of Pisa |
Castello Ferrer, Eduardo | MIT |
Lizarribar Carrillo, Yago | MIT |
Grignard, Arnaud | MIT |
Alonso Pastor, Luis | MIT |
Sleeper, Dylan T. | MIT |
Cimino, Mario G. C. A. | University of Pisa |
Lepri, Bruno | Bruno Kessler Foundation |
Vaglini, Gigliola | University of Pisa |
Larson, Kent | MIT |
Dorigo, Marco | Université Libre De Bruxelles |
Pentland, Alex ('Sandy') | MIT |
Keywords: Automation Technologies for Smart Cities, Swarms, Agent-Based Systems
Abstract: Modern cities are growing ecosystems that face new challenges due to increasing population demands. One of the many problems they face nowadays is waste management, which has become a pressing issue requiring new solutions. Swarm robotics systems have been attracting an increasing amount of attention in recent years, and they are expected to become one of the main driving factors for innovation in the field of robotics. The research presented in this paper explores the feasibility of a swarm robotics system in an urban environment. By using bio-inspired foraging methods such as multi-place foraging and stigmergy-based navigation, a swarm of robots is able to improve the efficiency and autonomy of the urban waste management system in a realistic scenario. To achieve this, a diverse set of simulation experiments was conducted using real-world GIS data and implementing different garbage collection scenarios driven by robot swarms. The results show that the proposed system outperforms current approaches; moreover, they not only show the efficiency of our solution but also give insights into how to design and customize these systems.
|
|
11:00-12:15, Paper TuAT1-19.4 | Add to My Program |
Automated Aortic Pressure Regulation in Ex Vivo Heart Perfusion |
Xin, Liming | University of Toronto |
Yao, Weiran | Harbin Institute of Technology |
Peng, Yan | Shanghai University |
Qi, Naiming | Harbin Institute of Technology |
Badiwala, Mitesh | University of Toronto |
Sun, Yu | University of Toronto |
Keywords: Automation in Life Sciences: Biotechnology, Pharmaceutical and Health Care
Abstract: This paper presents the first system for automated ex vivo perfusion of an isolated heart and regulation of the heart's aortic pressure (AoP). An adaptive controller was developed for AoP regulation and maintained the heart's physiological aerobic metabolism. A mathematical model of the perfusion system was established based on a nonlinear equivalent-circuit fluid flow model. The model, combined with a virtual controller, forms a reference model to generate the ideal trajectory of the AoP. An adaptation algorithm tunes the control parameters based on the reference model and the isolated heart. Experiments were conducted using large animal hearts (55±5 kg porcine, n = 6) to validate the adaptive controller's performance for stepwise and fast-switching AoP references. The results confirmed that the proposed controller is able to regulate the AoP of an isolated porcine heart in an accurate (mean error less than 2 mmHg) and fast (4–8 s settling time) manner.
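To give a flavor of model-reference adaptation, here is a textbook normalized MIT-rule regulator on a first-order surrogate plant; the paper's nonlinear equivalent-circuit model and virtual controller are not reproduced, and all constants are illustrative.

```python
dt, gamma = 0.01, 1.0
a, b = 1.0, 2.0            # surrogate plant parameters (unknown to the controller)
am = 3.0                   # reference-model pole: the ideal AoP response speed
y, ym = 0.0, 0.0           # plant output (AoP) and reference-model output [mmHg]
th1, th2 = 0.0, 0.0        # adaptive feedforward/feedback gains
log = []
for k in range(int(60 / dt)):
    r = 60.0 if k * dt < 30 else 80.0          # stepwise AoP reference
    u = th1 * r - th2 * y                      # adaptive control law
    y += dt * (-a * y + b * u)                 # first-order surrogate plant
    ym += dt * (-am * ym + am * r)             # reference model (unit DC gain)
    e, n = y - ym, 1.0 + r * r + y * y         # tracking error, gradient normalizer
    th1 += dt * (-gamma * e * r / n)           # normalized MIT-rule updates
    th2 += dt * (gamma * e * y / n)
    log.append((k * dt, y, ym))
```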
|
|
11:00-12:15, Paper TuAT1-19.5 | Add to My Program |
A Robotic Microscope System to Examine TCR Quality against Tumor Neoantigens: A New Tool for Cancer Immunotherapy Research |
Ong, Lee-Ling Sharon | Tilburg University |
Zhu, Hai | Singapore-MIT Alliance for Research and Technology |
Banik, Debasis | Singapore-MIT Alliance for Research and Technology |
Guan, Zhenping | Singapore-MIT Alliance for Research and Technology |
Feng, Yinnian | Vanderbilt University |
Reinherz, Ellis | Dana-Farber Cancer Institute |
Lang, Matthew | Vanderbilt |
Asada, Harry | MIT |
Keywords: Automation in Life Sciences: Biotechnology, Pharmaceutical and Health Care, Biological Cell Manipulation, Automation at Micro-Nano Scales
Abstract: During immune surveillance, cytotoxic T lymphocytes (CTL) can selectively identify and destroy tumor cells by recognizing tumor-specific peptides (neoantigens), bound to major histocompatibility complex molecules (pMHC) arrayed on cancer cell surfaces. CTL use the same machinery to destroy virally infected cells displaying pathogen-specific pMHC, while leaving intact healthy cells expressing normal self-pMHC. We present a robotic microscope that allows scientists to conduct highly sensitive and selective T cell-pMHC studies with high throughput. Our system manipulates micrometer-scale beads coated with a particular pMHC, presents them to T cells, and generates the piconewton-level intermolecular forces required to detect T cell acuity with a neoantigen. Our system integrates optical tweezers, precision nano-micro stages, and episcopic/diascopic illumination schemes at two magnifications. We create a coordinate-referencing system to locate translucent T cells in three dimensions over a large space, based on the characteristic intensity change at each focal plane caused by the cells. Our system performs automated experiments to detect the level of T cell acuity with specific pMHCs by measuring the downstream cellular responses. High-acuity T cells can be selectively recovered for single-cell analysis. Our new methodology and tool will have a significant impact on cancer immunotherapy and immunology research.
|
|
11:00-12:15, Paper TuAT1-19.6 | Add to My Program |
A Multi-Vehicle Trajectories Generator to Simulate Vehicle-To-Vehicle Encountering Scenarios |
Ding, Wenhao | Tsinghua University |
Wang, Wenshuo | Carnegie Mellon University |
Zhao, Ding | Carnegie Mellon University |
Keywords: Automation Technologies for Smart Cities, Big Data in Robotics and Automation, Autonomous Agents
Abstract: Generating multi-vehicle trajectories from existing limited data can provide rich resources for autonomous vehicle development and testing. This paper introduces a multi-vehicle trajectory generator (MTG) that can encode multi-vehicle interaction scenarios (called driving encounters) into an interpretable representation from which new driving encounter scenarios are generated by sampling. The MTG consists of a bi-directional encoder and a multi-branch decoder. A new disentanglement metric is then developed for model analysis and comparison in terms of model robustness and the independence of the latent codes. Comparison of our proposed MTG with β-VAE and InfoGAN demonstrates that the MTG has a stronger capability to purposefully generate rational vehicle-to-vehicle encounters by operating on the disentangled latent codes. Thus, the MTG can provide more data for engineers and researchers to develop testing and evaluation scenarios for autonomous vehicles.
|
|
TuAT1-20 Interactive Session, 220 |
Add to My Program |
Force and Tactile Sensing I - 2.1.20 |
|
|
|
11:00-12:15, Paper TuAT1-20.1 | Add to My Program |
Deep N-Shot Transfer Learning for Tactile Material Classification with a Flexible Pressure-Sensitive Skin |
Bäuml, Berthold | German Aerospace Center (DLR) |
Tulbure, Andreea Roxana | Karlsruhe Institute of Technology |
Keywords: Force and Tactile Sensing, Deep Learning in Robotics and Automation, Recognition
Abstract: n-shot learning, i.e., learning a classifier from only a few or even one training sample per class, is the ultimate goal in minimizing the cost of sample acquisition. This is especially important for active sensing tasks like tactile material classification. Achieving high classification accuracy from only a few samples is typically possible only when pre-knowledge is used. In n-shot transfer learning, knowledge from pre-training on a large knowledge set with many classes and samples per class has to be transferred to support the training for a given task set with only a few samples per new class. In this paper, we show for the first time that deep end-to-end transfer learning is feasible for tactile material classification. Based on the previously presented TactNet-II [1], a deep convolutional neural network (CNN) which reaches superhuman tactile classification performance, we adapt state-of-the-art deep transfer learning methods. We evaluate the resulting deep n-shot learning methods on a publicly available tactile material data set with 36 materials [1] in a 6-way n-shot learning task with 30 materials in the knowledge set. In 1-shot learning, our deep transfer learning method reaches 75.5% classification accuracy, and in 10-shot learning more than 90%, outperforming classification without knowledge transfer by more than 40%. This results in up to a 15-fold reduction in the number of samples needed to reach a desired accuracy level.
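A common transfer-learning skeleton for this setting is to freeze the feature extractor pre-trained on the knowledge set and train only a new head on the n support samples per task-set class. The placeholder network below is not the authors' released TactNet-II; dimensions are illustrative.

```python
import torch
import torch.nn as nn

class PretrainedTactNet(nn.Module):
    """Placeholder stand-in for a network pre-trained on 30 materials."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, 5), nn.ReLU(), nn.AdaptiveAvgPool1d(8), nn.Flatten())
        self.classifier = nn.Linear(16 * 8, 30)   # 30 knowledge-set classes

backbone = PretrainedTactNet()                    # assume weights are loaded here
for p in backbone.features.parameters():
    p.requires_grad = False                       # keep the pre-trained features
backbone.classifier = nn.Linear(16 * 8, 6)        # new 6-way task head
optim = torch.optim.Adam(backbone.classifier.parameters(), lr=1e-3)
```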
|
|
11:00-12:15, Paper TuAT1-20.2 | Add to My Program |
Towards Effective Tactile Identification of Textures Using a Hybrid Touch Approach |
Taunyazov, Tasbolat | National University of Singapore |
Koh, Hui Fang | Nanyang Technological University |
Wu, Yan | A*STAR Institute for Infocomm Research |
Cai, Caixia | Institute for Infocomm Research(I2R), A*STAR |
Soh, Harold | National Universtiy of Singapore |
Keywords: Force and Tactile Sensing, Haptics and Haptic Interfaces
Abstract: The sense of touch is arguably the first human sense to develop. Empowering robots with the sense of touch may augment their understanding of the objects they interact with and of the environment, beyond standard sensory modalities (e.g., vision). This paper investigates the effect of hybridizing touch and sliding movements for tactile-based texture classification. We develop three machine-learning algorithms within a framework to discriminate between surface textures; the first two methods use hand-engineered tactile features, whilst the third leverages convolutional and recurrent neural network layers to learn feature representations from raw data. To compare these methods, we constructed a dataset comprising tactile data from 23 textures gathered using the iCub platform under a loosely constrained setup, i.e., with nonlinear motion. In line with findings from neuroscience, our experiments show that a good initial estimate can be obtained via touch data, which can be further refined via sliding; combining both in our framework achieves 98% accuracy over unseen data.
|
|
11:00-12:15, Paper TuAT1-20.3 | Add to My Program |
"Touching to See" and "Seeing to Feel": Robotic Cross-Modal Sensory Data Generation for Visual-Tactile Perception |
Lee, Jet-Tsyn | University of Liverpool |
Bollegala, Danushka | University of Liverpool |
Luo, Shan | University of Liverpool |
Keywords: Force and Tactile Sensing, Haptics and Haptic Interfaces, Sensor Fusion
Abstract: The integration of visual and tactile stimuli is common when humans perform daily tasks. In contrast, using unimodal visual or tactile perception limits the perceivable dimensionality of a subject. However, it remains a challenge to integrate visual and tactile perception to facilitate robotic tasks. In this paper, we propose a novel framework for cross-modal sensory data generation for visual and tactile perception. Taking texture perception as an example, we apply conditional generative adversarial networks to generate pseudo visual images or tactile outputs from data of the other modality. Extensive experiments on the ViTac dataset of cloth textures show that the proposed method can produce realistic outputs from other sensory inputs. We adopt the structural similarity index to evaluate the similarity between the generated output and real data; the results show that realistic data have been generated. A classification evaluation has also been performed to show that the inclusion of generated data can improve perception performance. The proposed framework has the potential to expand datasets for classification tasks, generate sensory outputs that are not easy to access, and advance integrated visual-tactile perception.
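A minimal conditional-GAN skeleton for this kind of cross-modal generation is sketched below: a generator maps a tactile feature vector (the condition) plus noise to a pseudo visual patch, while a discriminator judges (condition, patch) pairs. The MLP architecture and dimensions are assumptions, not the paper's network.

```python
import torch
import torch.nn as nn

T, V, Z = 64, 256, 32                                  # tactile, visual, noise dims
G = nn.Sequential(nn.Linear(T + Z, 128), nn.ReLU(), nn.Linear(128, V), nn.Tanh())
D = nn.Sequential(nn.Linear(T + V, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))
bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

def train_step(tactile, visual):                       # one paired batch
    z = torch.randn(tactile.size(0), Z)
    fake = G(torch.cat([tactile, z], dim=1))
    # discriminator: real pairs -> 1, generated pairs -> 0
    d_loss = bce(D(torch.cat([tactile, visual], 1)), torch.ones(len(visual), 1)) + \
             bce(D(torch.cat([tactile, fake.detach()], 1)), torch.zeros(len(visual), 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # generator: fool the discriminator on generated pairs
    g_loss = bce(D(torch.cat([tactile, fake], 1)), torch.ones(len(visual), 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```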
|
|
11:00-12:15, Paper TuAT1-20.4 | Add to My Program |
Shear-Invariant Sliding Contact Perception with a Soft Tactile Sensor |
Aquilina, Kirsty | University of Bristol |
Barton, David A. W. | University of Bristol |
Lepora, Nathan | University of Bristol |
Keywords: Force and Tactile Sensing
Abstract: Manipulation tasks often require robots to be continuously in contact with an object. Therefore tactile perception systems need to handle continuous contact data. Shear deformation causes the tactile sensor to output path-dependent readings in contrast to discrete contact readings. As such, in some continuous-contact tasks, sliding can be regarded as a disturbance over the sensor signal. Here we present a shear-invariant perception method based on principal component analysis (PCA) which outputs the required information about the environment despite sliding motion. A compliant tactile sensor (the TacTip) is used to investigate continuous tactile contact. First, we evaluate the method offline using test data collected whilst the sensor slides over an edge. Then, the method is used within a contour-following task applied to 6 objects with varying curvatures; all contours are successfully traced. The method demonstrates generalisation capabilities and could underlie a more sophisticated controller for challenging manipulation or exploration tasks in unstructured environments.
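A rough sketch of the PCA idea is shown below: fit PCA on readings collected while sliding, then treat the leading components (dominated by shear-induced drift) as nuisance directions and keep the remainder as a shear-invariant representation. Which components are nuisance, and the data file, are assumptions for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA

X = np.load("tactip_pin_positions.npy")     # (samples, features), hypothetical file
pca = PCA(n_components=10).fit(X)
codes = pca.transform(X)
invariant = codes[:, 2:]                    # drop the first 2 shear-dominated axes
```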
|
|
11:00-12:15, Paper TuAT1-20.5 | Add to My Program |
Soft Tactile Sensing: Retrieving Force, Torque and Contact Point Information from Deformable Surfaces |
Ciotti, Simone | University of Pisa |
Sun, Teng | King's College London |
Battaglia, Edoardo | University of Pisa - Research Center E. Piaggio |
Bicchi, Antonio | Università Di Pisa |
Liu, Hongbin | King's College London |
Bianchi, Matteo | University of Pisa |
Keywords: Force and Tactile Sensing, Haptics and Haptic Interfaces
Abstract: Intrinsic Tactile Sensing (ITS) is a well-established technique, relying on force/torque measurements and a geometric surface description to find contact centroids. The method works well for rigid surfaces; however, finding a solution for deformable surfaces is an open issue. This work presents two solutions to extend ITS to deformable surfaces, relying on the force-deformation characteristics of the surface under exploration: (i) a closed-form approach that calculates the contact centroid using standard ITS, but on a shrunk geometry approximating the deformed surface; and (ii) an iterative procedure that takes into account soft surface deformation and force/torque equilibrium to minimize a cost function. We tested both using ellipsoidal silicone specimens with different softness levels, indented along different directions. Both linear and quadratic fitting of the force-indentation behavior were employed. The two methods have distinct advantages and limitations; however, a combination of the two, using one to produce the initial guess for the other, turns out to be very effective. Indeed, in our validation this solution showed convergence in under 1 ms, attaining errors lower than 1 mm. The proposed approaches were implemented in a ROS-based toolbox integrating both solutions.
|
|
11:00-12:15, Paper TuAT1-20.6 | Add to My Program |
Miniaturization of Multistage High Dynamic Range Six-Axis Force Sensor Composed of Resin Material |
Okumura, Daisuke | Saitama University |
Sakaino, Sho | Saitama University |
Tsuji, Toshiaki | Saitama University |
Keywords: Force and Tactile Sensing, Haptics and Haptic Interfaces, Force Control
Abstract: Accurate force sensing is essential for skilled robot motion, while the limitation of the dynamic range is one of the major issues for force sensing. This paper deals with the development of a miniature six-axis force sensor with the aim of enabling robotic force detection across a high dynamic range (HDR). A miniaturized structure for the multistage structure of the HDR force sensor is designed by horizontally arranging two bodies: a low-rigidity thin beam inside a high-rigidity thick beam. The proposed sensor demonstrated performance superior to that of a conventional force sensor in the low load region, even with the inexpensive and lightweight resin material. Although creep and stress relaxation degrade the sensor's accuracy, these effects could be reduced by 80 percent with a correction filter.
|
|
TuAT1-21 Interactive Session, 220 |
Add to My Program |
Social HRI II - 2.1.21 |
|
|
|
11:00-12:15, Paper TuAT1-21.1 | Add to My Program |
Robots Learn Social Skills: End-To-End Learning of Co-Speech Gesture Generation for Humanoid Robots |
Yoon, Youngwoo | Electronics and Telecommunications Research Institute |
Ko, Woo-Ri | ETRI |
Jang, Minsu | Electronics & Telecommunications Research Institute |
Lee, Jaeyeon | ETRI |
Kim, Jaehong | Electronics and Telecommunications Research Institute |
Lee, Geehyuk | KAIST |
Keywords: Social Human-Robot Interaction, Deep Learning in Robotics and Automation, AI-Based Methods
Abstract: Co-speech gestures enhance interaction experiences between humans as well as between humans and robots. Most existing robots use rule-based speech-gesture association, but this requires human labor and expert prior knowledge to implement. We present a learning-based co-speech gesture generation method trained on 52 hours of TED talks. The proposed end-to-end neural network model consists of an encoder for speech text understanding and a decoder to generate a sequence of gestures. The model successfully produces various gestures, including iconic, metaphoric, deictic, and beat gestures. In a subjective evaluation, participants reported that the gestures were human-like and matched the speech content. We also demonstrate co-speech gestures with a NAO robot working in real time.
|
|
11:00-12:15, Paper TuAT1-21.2 | Add to My Program |
The Doctor Will See You Now: Could a Robot Be a Medical Receptionist? |
Sutherland, Craig | University of Auckland |
Ahn, Byeong-Kyu | Sungkyunkwan University |
Brown, Bianca | The University of Auckland |
Lim, Jong Yoon | University of Auckland |
Johanson, Deborah | The University of Auckland |
Broadbent, Elizabeth | University of Auckland |
MacDonald, Bruce | University of Auckland |
Ahn, Ho Seok | The University of Auckland, Auckland |
Keywords: Social Human-Robot Interaction, Physical Human-Robot Interaction, Human-Centered Robotics
Abstract: A robot cannot be warm and friendly – or can it? To explore whether a robot can be a medical receptionist, we developed a robotic system for interacting with patients at a doctor's clinic, including acting friendly. We designed the robot to interact naturally with patients at the start and finish of a clinic visit. We investigated people's perceptions of the robot in a Wizard-of-Oz study in which 40 participants each interacted with the robot over four interactions. The results indicate the participants thought the robot could be a friendly receptionist, especially after repeated interactions with the robot. However, the participants mainly thought the robot was friendly in a "professional" way, rather than as a personal friend.
|
|
11:00-12:15, Paper TuAT1-21.3 | Add to My Program |
Designing a Personality-Driven Robot for a Human-Robot Interaction Scenario |
Beik Mohammadi, Hadi | University of Hamburg |
Xirakia, Nikoletta | Universität Hamburg |
Abawi, Fares | Universität Hamburg |
Barykina, Irina | Universität Hamburg |
Chandran, Krishnan | Universität Hamburg |
Nair, Gitanjali | Universität Hamburg |
Nguyen, Cuong | Universität Hamburg |
Speck, Daniel | Universität Hamburg |
Alpay, Tayfun | Universität Hamburg |
Griffiths, Sascha | Universität Hamburg |
Heinrich, Stefan | Universität Hamburg |
Strahl, Erik | Universität Hamburg |
Weber, Cornelius | Knowledge Technology Group, University of Hamburg |
Wermter, Stefan | University of Hamburg |
Keywords: Social Human-Robot Interaction, Cognitive Human-Robot Interaction
Abstract: In this paper, we present an autonomous AI system designed for a Human-Robot Interaction (HRI) study, set around a dice game scenario. We conduct a case study to answer our research question: Does a robot with a socially engaged personality lead to a higher acceptance than a competitive personality? The flexibility of our proposed system allows us to construct and attribute two different personalities to a humanoid robot: a socially engaged personality that maximizes its user interaction and a competitive personality that is focused on playing and winning the game. We evaluate both personalities in a user study, in which the participants play a turn-taking dice game with the robot. Each personality is assessed with four different evaluation tools: 1) the Godspeed Questionnaire, 2) the Mind Perception Questionnaire, 3) a custom questionnaire concerning the overall HRI experience, and 4) a Convolutional Neural Network analyzing the emotions on the participants' facial feedback throughout the game. Our results show that the socially engaged personality evokes stronger emotions among the participants and is rated higher in likability and animacy than the competitive one. We conclude that designing the robot with a socially engaged personality contributes to a higher acceptance within an HRI scenario.
|
|
11:00-12:15, Paper TuAT1-21.4 | Add to My Program |
How Shall I Drive? Interaction Modeling and Motion Planning towards Empathetic and Socially-Graceful Driving |
Ren, Yi | Arizona State University |
Elliott, Steven | Arizona State University |
Wang, Yiwei | Arizona State University |
Yang, Yezhou | Arizona State University |
Zhang, Wenlong | Arizona State University |
Keywords: Social Human-Robot Interaction, Autonomous Agents, Intelligent Transportation Systems
Abstract: While intelligence of autonomous vehicles (AVs) has significantly advanced in recent years, accidents involving AVs suggest that these autonomous systems lack gracefulness in driving when interacting with human drivers. In the setting of a two-player game, we propose model predictive control based on social gracefulness, which is measured by the discrepancy between the actions taken by the AV and those that could have been taken in favor of the human driver. We define social awareness as the ability of an agent to infer such favorable actions based on knowledge about the other agent's intent, and further show that empathy, i.e., the ability to understand others' intent by simultaneously inferring others' understanding of the agent's self intent, is critical to successful intent inference. Lastly, through an intersection case, we show that the proposed gracefulness objective allows an AV to learn more sophisticated behavior, such as passive-aggressive motions that gently force the other agent to yield.
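One way to write the stated gracefulness measure, using our own notation rather than necessarily the authors', is as the discrepancy between the AV's action and the action it would take purely in favor of the human driver under the inferred human intent:

```latex
% u_{AV} is the action the AV takes; u_{AV}^{*} the human-favoring action,
% given the inferred human intent \hat{\theta}_{H} and human cost C_{H}.
\[
  u_{AV}^{*} = \arg\min_{u} \; C_{H}\!\left(u, \hat{\theta}_{H}\right),
  \qquad
  \mathcal{G} = \left\lVert u_{AV} - u_{AV}^{*} \right\rVert^{2}
\]
```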
|
|
11:00-12:15, Paper TuAT1-21.5 | Add to My Program |
It Would Make Me Happy If You Used My Guess: Comparing Robot Persuasive Strategies in Social Human-Robot Interaction |
Saunderson, Shane | University of Toronto |
Nejat, Goldie | University of Toronto |
Keywords: Social Human-Robot Interaction, Robot Companions, Human-Centered Robotics
Abstract: This paper presents an exploratory social Human-Robot Interaction (HRI) study that investigates and compares the persuasive effectiveness of robots attempting to influence a user with different behavior strategies. Ten multimodal persuasive strategies were uniquely designed based on Compliance Gaining Behaviors (CGBs). These persuasive strategies were then compared using two competing social robots attempting to influence a participant’s estimate during a jelly bean guessing game. The results of our exploratory study with 200 participants showed that affective and logical strategies had a higher potential for persuasive influence and warrant further research.
|
|
11:00-12:15, Paper TuAT1-21.6 | Add to My Program |
Enabling Robots to Infer How End-Users Teach and Learn through Human-Robot Interaction |
Losey, Dylan | Stanford University |
O'Malley, Marcia | Rice University |
Keywords: Cognitive Human-Robot Interaction, Learning from Demonstration, Human Factors and Human-in-the-Loop
Abstract: During human-robot interaction (HRI), we want the robot to understand us, and we want to intuitively understand the robot. In order to communicate with and understand the robot, we can leverage interactions, where the human and robot observe each other's behavior. However, it is not always clear how the human and robot should interpret these actions: a given interaction might mean several different things. Within today's state-of-the-art, the robot assigns a single interaction strategy to the human, and learns from or teaches the human according to this fixed strategy. Instead, we here recognize that different users interact in different ways, and so one size does not fit all. Therefore, we argue that the robot should maintain a distribution over the possible human interaction strategies, and then infer how each individual end-user interacts during the task. We formally define learning and teaching when the robot is uncertain about the human's interaction strategy, and derive solutions to both problems using Bayesian inference. In examples and a benchmark simulation, we show that our personalized approach outperforms standard methods that maintain a fixed interaction strategy.
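The core inference step can be sketched as a discrete Bayesian filter over candidate interaction strategies, updated after each observed human action. The strategy names and likelihood values below are placeholders.

```python
import numpy as np

strategies = ["corrects_mistakes", "demonstrates_intent", "ignores_robot"]
belief = np.full(len(strategies), 1.0 / len(strategies))   # uniform prior

def update(belief, likelihoods):
    """likelihoods[i] = P(observed action | strategy i); returns the posterior."""
    posterior = belief * np.asarray(likelihoods)
    return posterior / posterior.sum()

belief = update(belief, [0.7, 0.2, 0.1])   # belief after one interaction
```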
|
|
TuAT1-22 Interactive Session, 220 |
Add to My Program |
Object Recognition & Segmentation I - 2.1.22 |
|
|
|
11:00-12:15, Paper TuAT1-22.1 | Add to My Program |
Detection-By-Localization: Maintenance-Free Change Object Detector |
Tanaka, Kanji | University of Fukui |
Keywords: Localization, Object Detection, Segmentation and Categorization, SLAM
Abstract: Recent research demonstrates that self-localization performance is a very useful measure of likelihood-of-change (LoC) for change detection. In this paper, this "detection-by-localization" scheme is studied in a novel generalized task of object-level change detection. In our framework, a given query image is segmented into object-level subimages (termed "scene parts"), which are then converted to subimage-level pixel-wise LoC maps via the detection-by-localization scheme. Our approach models a self-localization system as a ranking function, outputting a ranked list of reference images, without requiring relevance scores. Thanks to this new setting, we can generalize our approach to a broad class of self-localization systems. We further propose an aggregation of different self-localization results from different queries so as to achieve higher precision. Our ranking-based self-localization model allows us to fuse self-localization results from different modalities via an unsupervised rank fusion derived from the field of multi-modal information retrieval (MMR). Our framework does not rely on the raw-score-merging hypothesis. Challenging cross-season change detection experiments using the publicly available North Campus Long-Term (NCLT) dataset validate the efficacy of our proposed method.
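One well-known unsupervised rank fusion from the IR literature is reciprocal rank fusion (RRF), shown below as an example of score-free fusion of ranked lists; it is not necessarily the exact rule used by the author.

```python
def rrf(ranked_lists, k=60):
    """Fuse ranked lists of reference images using only their ranks."""
    scores = {}
    for ranking in ranked_lists:                 # one list per modality/query
        for rank, ref_image in enumerate(ranking, start=1):
            scores[ref_image] = scores.get(ref_image, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

fused = rrf([["img3", "img1", "img7"], ["img1", "img3", "img9"]])
```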
|
|
11:00-12:15, Paper TuAT1-22.2 | Add to My Program |
Customized Object Recognition and Segmentation by One Shot Learning with Human Robot Interaction |
Guo, Ping | Intel |
Cao, Lu | Intel |
Zhang, Lidan | Intel |
Ren, Haibing | Intel Labs China |
Zhang, Yimin | Intel Corporation |
Shen, Yingzhe | Intel Labs China |
Shi, Xuesong | Intel |
Keywords: Object Detection, Segmentation and Categorization, Learning and Adaptive Systems
Abstract: To apply state-of-the-art object recognition/detection/segmentation methods in real applications, there are two major obstacles. First, most deep learning models heavily depend on large amounts of labeled training data, which are expensive to obtain for each individual application. Second, the object categories must be pre-defined in the dataset, which is impractical for scenarios with varying object categories. To alleviate the reliance on pre-defined big data, this paper proposes a customized object recognition and segmentation method. It aims to recognize and segment any object defined by the user, given only one annotation. There are three steps in the proposed method. First, the robot takes an exemplar video of the target object, and the user defines the object name and masks its boundary in only one frame. Then the robot automatically propagates the annotation through the exemplar video based on a proposed data generation method; in the meantime, a segmentation model continuously updates itself on the generated data. Finally, only a lightweight segmentation network is required at the testing stage to recognize and segment any object that the user defines.
|
|
11:00-12:15, Paper TuAT1-22.3 | Add to My Program |
SEG-VoxelNet for 3D Vehicle Detection from RGB and LiDAR Data |
Dou, Jian | Xian Jiaotong University |
Xue, Jianru | Xi'an Jiaotong University |
Fang, Jianwu | Xian Jiaotong University |
Keywords: Object Detection, Segmentation and Categorization, RGB-D Perception, Sensor Fusion
Abstract: This paper proposes SEG-VoxelNet, which takes RGB images and LiDAR point clouds as inputs for accurately detecting 3D vehicles in autonomous driving scenarios and, for the first time, introduces a semantic segmentation technique to assist 3D LiDAR point cloud based detection. Specifically, SEG-VoxelNet is composed of two sub-networks: an image semantic segmentation network (SEG-Net) and an improved-VoxelNet. The SEG-Net generates a semantic segmentation map that represents the probability of the category for each pixel. The improved-VoxelNet is capable of effectively fusing point cloud data with image semantic features and generating accurate 3D bounding boxes of vehicles. Experiments on the KITTI 3D vehicle detection benchmark show that our approach outperforms state-of-the-art methods.
|
|
11:00-12:15, Paper TuAT1-22.4 | Add to My Program |
Object Classification Based on Unsupervised Learned Multi-Modal Features for Overcoming Sensor Failures |
Nitsch, Julia | Ibeo Automotive Systems GmbH |
Nieto, Juan | ETH Zürich |
Siegwart, Roland | ETH Zurich |
Schmidt, Max | Ibeo Automotive Systems GmbH |
Cadena Lerma, Cesar | ETH Zurich |
Keywords: Object Detection, Segmentation and Categorization, Sensor Fusion, Deep Learning in Robotics and Automation
Abstract: For autonomous driving applications, it is critical to know which types of road users and roadside infrastructure are present in order to plan driving manoeuvres accordingly. Autonomous cars are therefore equipped with different sensor modalities to robustly perceive their environment. However, for classification modules based on machine learning techniques, it is challenging to overcome unseen sensor noise. This work presents an object classification module operating on unsupervised learned multi-modal features with the ability to overcome gradual or total sensor failure. A two-stage approach composed of unsupervised feature training followed by uni-modal and multi-modal classifier training is presented. We propose a simple but effective decision module that switches between uni-modal and multi-modal classifiers based on the closeness in the feature space to the training data. Evaluations on the ModelNet40 data set show that the proposed approach has a 14% accuracy gain compared to a late fusion approach when operating on noisy point cloud data, and a 6% accuracy gain when operating on noisy image data.
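A minimal sketch of such a decision module is shown below, using nearest-neighbour distance in feature space as the closeness measure; the threshold and the specific distance are assumptions, not the paper's exact rule.

```python
import numpy as np

def choose_classifier(f_img, f_pcl, train_img, train_pcl, thresh=2.0):
    """Route a sample to the multi-modal classifier only when both
    modality features lie close to their training distributions."""
    d_img = np.linalg.norm(train_img - f_img, axis=1).min()   # nearest-neighbour
    d_pcl = np.linalg.norm(train_pcl - f_pcl, axis=1).min()   # distances
    if d_img < thresh and d_pcl < thresh:
        return "multi_modal"          # both modalities look in-distribution
    return "uni_modal_image" if d_img <= d_pcl else "uni_modal_pointcloud"
```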
|
|
11:00-12:15, Paper TuAT1-22.5 | Add to My Program |
SqueezeSegV2: Improved Model Structure and Unsupervised Domain Adaptation for Road-Object Segmentation from a LiDAR Point Cloud |
Wu, Bichen | UC Berkeley |
Zhou, Xuanyu | University of California Berkeley |
Zhao, Sicheng | University of California Berkeley |
Yue, Xiangyu | UC Berkeley |
Keutzer, Kurt | UC Berkeley |
Keywords: Object Detection, Segmentation and Categorization, Semantic Scene Understanding, AI-Based Methods
Abstract: Earlier work demonstrates the promise of deep-learning-based approaches for point cloud segmentation; however, these approaches need to be improved to be practically useful. To this end, we introduce a new model, SqueezeSegV2. With an improved model structure, SqueezeSegV2 is more robust against dropout noise in LiDAR point clouds and therefore achieves significant accuracy improvements. Training models for point cloud segmentation requires large amounts of labeled data, which are expensive to obtain. To sidestep the cost of data collection and annotation, simulators such as GTA-V can be used to create unlimited amounts of labeled, synthetic data. However, due to domain shift, models trained on synthetic data often do not generalize well to the real world. Existing domain-adaptation methods mainly focus on images, and most of them cannot be directly applied to point clouds. We address this problem with a domain-adaptation training pipeline consisting of three major components: 1) learned intensity rendering, 2) geodesic correlation alignment, and 3) progressive domain calibration. When trained on real data, our new model exhibits segmentation accuracy improvements of 6.0-8.6% over the original SqueezeSeg. When training our new model on synthetic data using the proposed domain-adaptation pipeline, we nearly double test accuracy on real-world data, from 29.0% to 57.4%. Our source code and synthetic dataset are open sourced.
|
|
11:00-12:15, Paper TuAT1-22.6 | Add to My Program |
Fully Automated Annotation with Noise-Masked Visual Markers for Deep Learning-Based Object Detection |
Kiyokawa, Takuya | Nara Institute of Science and Technology |
Tomochika, Keita | Nara Institute of Science and Technology |
Takamatsu, Jun | Nara Institute of Science and Technology |
Ogasawara, Tsukasa | Nara Institute of Science and Technology |
Keywords: Computer Vision for Automation, Deep Learning in Robotics and Automation, Object Detection, Segmentation and Categorization
Abstract: Automated factories use deep learning-based vision systems to accurately detect various products. However, training such vision systems requires manual annotation of a significant amount of data to optimize the large number of parameters of the deep convolutional neural networks. Such manual annotation is very time-consuming and laborious. To reduce this burden, we propose a fully automated annotation approach without any manual intervention. To do this, we associate one visual marker with one object and capture them in the same image. However, if an image showing the marker is used for training, the neural network normally learns the marker as a feature of the object. By hiding the marker with a noise mask, we succeeded in reducing this erroneous learning. Experiments verified the effectiveness of the proposed method in comparison with manual annotation, both in terms of the time needed to collect training data and the resulting detection accuracy of the vision system. The time required for data collection was reduced from 16.1 hours to 1.87 hours. The accuracy of the vision system trained with the proposed method was 87.3%, which is higher than the accuracy of a vision system trained with the manual method.
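The masking trick itself is simple to sketch: replace the pixels of the detected marker with random noise so the detector cannot learn the marker as an object feature. The bounding-box input is assumed to come from an off-the-shelf fiducial detector; the function below is illustrative.

```python
import numpy as np

def mask_marker(image, x0, y0, x1, y1):
    """image: HxWx3 uint8 array; (x0, y0)-(x1, y1): marker bounding box."""
    noisy = image.copy()
    noisy[y0:y1, x0:x1] = np.random.randint(
        0, 256, size=(y1 - y0, x1 - x0, 3), dtype=np.uint8)
    return noisy
```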
|
|
TuAT1-23 Interactive Session, 220 |
Add to My Program |
Localization and Estimation - 2.1.23 |
|
|
|
11:00-12:15, Paper TuAT1-23.1 | Add to My Program |
RoPose-Real: Real World Dataset Acquisition for Data-Driven Industrial Robot Arm Pose Estimation |
Gulde, Thomas | Reutlingen University |
Ludl, Dennis | Reutlingen University |
Andrejtschik, Johann | Reutlingen University |
Thalji, Salma Maath Ahmad | Reutlingen University |
Curio, Cristóbal | Reutlingen University |
Keywords: Computer Vision for Automation, Calibration and Identification, Surveillance Systems
Abstract: It is necessary to employ smart sensory systems in dynamic and mobile workspaces where industrial robots are mounted on mobile platforms. Such systems should be aware of flexible and non-stationary workspaces and be able to react autonomously to changing situations. Building upon our previously presented RoPose system [1], which employs a convolutional neural network architecture trained on purely synthetic data to estimate the kinematic chain of an industrial robot arm, we now present RoPose-Real. RoPose-Real extends the prior system with a comfortable and targetless extrinsic calibration tool, to allow for the production of automatically annotated datasets for real robot systems. Furthermore, we use the novel datasets to train the estimation network with real-world data. The extracted pose information is used to automatically estimate the pose of the observing sensor relative to the robot system. Finally, we evaluate the performance of the presented subsystems in a real-world robotic scenario.
|
|
11:00-12:15, Paper TuAT1-23.2 | Add to My Program |
A Framework for Self-Training Perceptual Agents in Simulated Photorealistic Environments |
Mania, Patrick | Universitaet Bremen |
Beetz, Michael | University of Bremen |
Keywords: Learning and Adaptive Systems, Visual Learning, RGB-D Perception
Abstract: The development of high-performance perception for mobile robotic agents is still challenging. Learning appropriate perception models usually requires extensive amounts of labeled training data that ideally follow the same distribution as the data an agent will encounter in its target task. Recent developments in the gaming industry have led to game engines able to generate photorealistic environments in real time, which can be used to realistically simulate the sensory input of an agent. We propose a novel framework that allows the definition of different learning scenarios and instantiates these scenarios in a high-quality game engine in which a perceptual agent can act and learn. The scenarios are specified in a newly developed scenario description language that allows the parametrization of the virtual environment and the perceptual agent. New scenarios can be sampled from a task-specific object distribution, which allows the automatic generation of extensive amounts of different learning environments for the perceptual agent. We demonstrate the plausibility of the framework by conducting object recognition experiments on a real robotic system trained within our framework.
|
|
11:00-12:15, Paper TuAT1-23.3 | Add to My Program |
Fast and Precise Detection of Object Grasping Positions with Eigenvalue Templates |
Mano, Kousuke | Chubu University |
Hasegawa, Takahiro | Chubu University |
Yamashita, Takayoshi | Chubu University |
Fujiyoshi, Hironobu | Chubu University |
Domae, Yukiyasu | The National Institute of Advanced Industrial Science and Techno |
Keywords: Factory Automation, Industrial Robots, RGB-D Perception
Abstract: Fast Graspability Evaluation (FGE) has been proposed as a method for detecting grasping positions on objects and is now being used for industrial robots. FGE uses convolution of hand templates with regions on the target object to estimate the optimum grasping posture. However, the hand opening width and rotation angles must be set with high resolution to achieve highly accurate results, so the computational load is high. To address that issue, we propose a method in which hand templates are represented in compact form for faster processing using singular value decomposition. Applying singular value decomposition enables hand templates to be represented as linear combinations of a small number of eigenvalue templates and eigenfunctions. The eigenfunctions take discrete values, but response values can be calculated for arbitrary parameters by fitting a continuous function. Experimental results show that the proposed method reduces computation time by two-thirds while maintaining the same detection accuracy as conventional FGE for both parallel hands and three-finger hands.
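The decomposition step can be sketched as follows: stack all rotated/opening-width hand templates as rows, take the SVD, and keep the top-k basis templates so that each original template becomes a short linear combination of them. Template count, size, and k below are illustrative.

```python
import numpy as np

templates = np.random.rand(360, 64 * 64)        # placeholder template stack
U, S, Vt = np.linalg.svd(templates, full_matrices=False)
k = 8                                           # number of basis templates kept
basis = Vt[:k].reshape(k, 64, 64)               # "eigenvalue templates"
coeffs = U[:, :k] * S[:k]                       # per-template mixing weights
approx = (coeffs @ Vt[:k]).reshape(-1, 64, 64)  # low-rank template reconstruction
# Convolving the image with the k basis templates and mixing the responses
# with `coeffs` then replaces convolving with every template individually.
```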
|
|
11:00-12:15, Paper TuAT1-23.4 | Add to My Program |
Improved Coverage Path Planning Using a Virtual Sensor Footprint: A Case Study on Demining |
Dogru, Sedat | University of Coimbra |
Marques, Lino | University of Coimbra |
Keywords: Demining Systems, Field Robots
Abstract: Coverage performance in a coverage path planning problem depends both on the path created and on the footprint of the sensor used. The footprint can be increased either by increasing the size of the sensor or by mounting the sensor on a robotic arm to allow scanning over larger areas as the platform moves, effectively creating a virtual sensor with a larger footprint than the physical sensor's. However, the virtual footprint comes at a cost, requiring an optimization problem to be formulated for the area of interest. In this work, three common strategies for using a metal detector on a platform are discussed, their time and energy performances are formulated, and the corresponding optima are found.
|
|
11:00-12:15, Paper TuAT1-23.5 | Add to My Program |
Model-Based Estimation of the Gravity-Loaded Shape and Scene Depth for a Slim 3-Actuator Continuum Robot with Monocular Visual Feedback |
Chen, Yuyang | Shanghai Jiao Tong University |
Zhang, Shu'an | Shanghai Jiao Tong University |
Zeng, Lingyun | Shanghai Jiao Tong University |
Zhu, Xiangyang | Shanghai Jiao Tong University |
Xu, Kai | Shanghai Jiao Tong University |
Keywords: Flexible Robots, Biologically-Inspired Robots, Motion and Path Planning
Abstract: Fruitful developments in continuum robots have been witnessed in recent years due to their movement and manipulation capabilities in confined spaces. Because a continuum robot has an infinite number of degrees of freedom (DoFs), the majority of existing systems deploy abundant actuators so that the robot can be controlled in separately modeled and actuated segments with constant or variable curvature. As the shape of a continuum robot is always jointly determined by its actuation and the interactions from the environment, it is worth exploring the opposite approach: how a task can be accomplished with a minimal number of actuators. This paper presents the first step of such an investigation, in which a slim 3-actuator continuum robot is controlled to reach different spatial locations under gravity. As gravity greatly affects the robot's shape, a monocular camera, together with two Unscented Kalman Filters (UKFs), was used to concurrently estimate the robot's shape and the feature depth. The estimated shape can then be used to update the kinematics model of the robot to achieve motion control. Experiments were conducted to validate the efficacy of the proposed shape estimation, which paves the way for a motion control implementation in the near future.
|
|
11:00-12:15, Paper TuAT1-23.6 | Add to My Program |
PedX: Benchmark Dataset for Metric 3D Pose Estimation of Pedestrians in Complex Urban Intersections |
Kim, Wonhui | University of Michigan |
Srinivasan Ramanagopal, Manikandasriram | University of Michigan |
Barto, Charles | University of Michigan |
Yu, Ming-Yuan | University of Michigan |
Rosaen, Karl | University of Michigan |
Goumas, Nick | University of Michigan |
Vasudevan, Ram | University of Michigan |
Johnson-Roberson, Matthew | University of Michigan |
Keywords: Computer Vision for Transportation, Human Detection and Tracking
Abstract: This paper presents a novel dataset titled PedX, a large-scale multimodal collection of pedestrians at complex urban intersections. PedX consists of more than 5,000 pairs of high-resolution (12 MP) stereo images and LiDAR data, along with 2D and 3D labels of pedestrians. We also present a novel 3D model fitting algorithm for automatic 3D labeling harnessing constraints across different modalities and novel shape and temporal priors. All annotated 3D pedestrians are localized into the real-world metric space, and the generated 3D models are validated using a mocap system configured in a controlled outdoor environment to simulate pedestrians in urban intersections. We also show that the manual 2D labels can be replaced by state-of-the-art automated labeling approaches, thereby facilitating the automatic generation of large-scale datasets.
|
|
TuAT1-24 Interactive Session, 220 |
Add to My Program |
Under-Actuated Robots - 2.1.24 |
|
|
|
11:00-12:15, Paper TuAT1-24.1 | Add to My Program |
Design of a Modular Continuum Robot Segment for Use in a General Purpose Manipulator |
Castledine, Nicholas Peter | University of Leeds |
Boyle, Jordan Hylke | University of Leeds |
Kim, Jongrae | University of Leeds |
Keywords: Underactuated Robots, Tendon/Wire Mechanism, Biologically-Inspired Robots
Abstract: This paper presents the development of a tendon-driven continuum robot segment with a modular design, simple construction and significant lifting capabilities. The segment features a continuous flexible core combined with rigid interlocking vertebrae evenly distributed along its length. This design allows bending in two degrees of freedom while minimising torsional movement. The segment is actuated by two antagonistic tendon pairs, each of which is driven by a single geared DC motor. Modularity is achieved by embedding these motors in one end of the segment, avoiding the need for a bulky actuation unit and allowing variable numbers of segments to be connected. The design features a large hollow central bore which could be used as a vacuum channel for suction-assisted gripping or to allow ingress and egress of fluids. The design process goes through four iterations, the final two of which are subjected to quantitative experiments to evaluate workspace, lifting capabilities and torsional rigidity. All iterations are fabricated using multi-material 3D printing, which allows the entire structure to be printed as a pre-assembled unit with the rigid vertebrae fused to the flexible core. Assembly is then a simple case of inserting the motors and connecting the tendons. This unconventional manufacturing approach is found to be efficient, effective and relatively cheap.
|
|
11:00-12:15, Paper TuAT1-24.2 | Add to My Program |
Reshaping Particle Configurations by Collisions with Rigid Objects |
Shahrokhi, Shiva | University of Houston |
Zhao, Haoran | University of Houston |
Becker, Aaron | University of Houston |
Keywords: Underactuated Robots, Swarms, Automation at Micro-Nano Scales
Abstract: Consider many particles actuated by a uniform global external field (e.g. gravitational or magnetic fields). This paper presents analytical results using workspace obstacles and global inputs to reshape such a group of particles. Shape control of many particles is necessary for conveying information, construction, and navigation. First we show how the particles' characteristic angle of repose can be used to reshape the particles by controlling angle of attack and the magnitude of the driving force. These can then be used to control the force and torque applied to a rectangular rigid body. Next, we examine the full set of stable, achievable mean and variance configurations for the shape of a particle group in two canonical environments: a square and a circular workspace. Finally, we show how workspaces with linear boundary layers can be used to achieve a more rich set of mean and variance configurations.
|
|
11:00-12:15, Paper TuAT1-24.3 | Add to My Program |
Velocity Constrained Trajectory Generation for a Collinear Mecanum Wheeled Robot |
Watson, Matthew Thomas | University of Sheffield |
Gladwin, Daniel T | University of Sheffield |
Prescott, Tony J | University of Sheffield |
Conran, Sebastian | Consequential Robotics Ltd |
Keywords: Motion and Path Planning, Underactuated Robots, Optimization and Optimal Control
Abstract: While much research has been conducted into the generation of smooth trajectories for underactuated unstable aerial vehicles such as quadrotors, less attention has been paid to the application of the same techniques to ground-based omnidirectional dynamically balancing robots. These systems have more control authority over their linear accelerations than aerial vehicles, meaning trajectory smoothness is less of a critical design parameter. However, when operating in indoor environments these systems must often adhere to relatively low velocity constraints, resulting in very conservative trajectories when enforced using existing trajectory optimisation methods. This paper makes two contributions. First, this gap is bridged by extending these existing methods to create a fast velocity-constrained trajectory planner, with trajectory timing characteristics derived from the optimal minimum-time solution of a simplified acceleration- and velocity-constrained model. Second, a differentially flat model of an omnidirectional balancing robot utilizing a collinear Mecanum drive is derived and used to allow an experimental prototype of this configuration to smoothly follow these velocity-constrained trajectories.
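For a 1D move, the minimum-time solution of the simplified acceleration- and velocity-constrained model is the textbook trapezoidal profile (degenerating to a triangular one for short moves); a sketch of the resulting segment timing is shown below. This illustrates only the timing rule, not the full planner.

```python
import math

def min_time(dist, v_max, a_max):
    """Minimum time for a 1D rest-to-rest move under |v| <= v_max, |a| <= a_max."""
    d = abs(dist)
    if d <= v_max**2 / a_max:                  # never reaches v_max: triangular
        return 2.0 * math.sqrt(d / a_max)
    t_ramp = v_max / a_max                     # accelerate, cruise, decelerate
    return 2.0 * t_ramp + (d - v_max**2 / a_max) / v_max

T = min_time(3.0, v_max=0.5, a_max=1.0)        # e.g. a 3 m move at indoor limits
```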
|
|
11:00-12:15, Paper TuAT1-24.4 | Add to My Program |
Vibration Control for Manipulators on a Translationally Flexible Base |
Beck, Fabian | German Aerospace Center (DLR) |
Garofalo, Gianluca | German Aerospace Center (DLR) |
Ott, Christian | German Aerospace Center (DLR) |
Keywords: Underactuated Robots, Flexible Robots, Dynamics
Abstract: In this contribution the problem of vibration control is studied on the basis of a fundamental oscillatory system consisting of a mass spring system and an additional mass. The proposed control strategy couples the orbits of the two masses such that both masses stop, while simultaneously stabilizing the second mass to a desired equilibrium. Using a coordinate and input transformation, the control strategy is directly transferred to an n-link manipulator mounted on a base with linear translational stiffness. Using semidefinite Lyapunov functions and a conditional stability argument, it is shown that the proposed control strategy damps out base vibrations, while additionally achieving a desired configuration in the task-space. Finally, the proposed method is compared to a state-of-the-art approach using numerical simulations.
|
|
11:00-12:15, Paper TuAT1-24.5 | Add to My Program |
Gaussian Processes Model-Based Control of Underactuated Balance Robots |
Chen, Kuo | Rutgers University |
Yi, Jingang | Rutgers University |
Song, Dezhen | Texas A&M University |
Keywords: Model Learning for Control, Underactuated Robots, Learning and Adaptive Systems
Abstract: Control of underactuated balance robots requires external subsystem trajectory tracking and internal unstable subsystem balancing with limited control authority. We present a learning-based control approach for underactuated balance robots in which the tracking and balancing controllers are designed in slow and fast time scales, respectively. In the slow time scale, model predictive control is adopted to plan a desired internal state profile that achieves the external trajectory tracking task. The internal state is then stabilized around the planned profile in the fast time scale. The control design is based on a learned Gaussian process (GP) regression model, without the need for a priori knowledge of the robot dynamics. The controller also incorporates the GP model's predicted variance to enhance robustness to modeling errors. Experiments are presented using a Furuta pendulum system.
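The model-learning step can be sketched with an off-the-shelf GP regressor queried for both mean and predicted standard deviation, which a controller could use to back off (e.g., lower gains) where the model is uncertain. The data below is synthetic and the state layout is an assumption.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

X = np.random.uniform(-1, 1, size=(200, 3))          # [angle, rate, input] samples
y = np.sin(X[:, 0]) + 0.05 * np.random.randn(200)    # surrogate dynamics target
gp = GaussianProcessRegressor(RBF(length_scale=0.5) + WhiteKernel(1e-3))
gp.fit(X, y)
mean, std = gp.predict(np.array([[0.1, 0.0, 0.2]]), return_std=True)
```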
|
|
11:00-12:15, Paper TuAT1-24.6 | Add to My Program |
Analysis of 3D Position Control for a Multi-Agent System of Self-Propelled Agents Steered by a Shared, Global Control Input |
Huang, Li | University of Houston |
Julien, Leclerc | University of Houston |
Becker, Aaron | University of Houston |
Keywords: Underactuated Robots, Multi-Robot Systems, Swarms
Abstract: This paper investigates strategies for multi-agent 3D position control using a shared control input and self-propelled agents. The only control inputs allowed are rotation commands that rotate all agents by the same rotation matrix. In the 2D case, only two degrees of freedom (DOF) in position are controllable. We review controllability results in 2D and then show that interesting things happen in 3D. We provide control laws for steering up to nine DOF in position, which can be mapped in various ways, including controlling the x, y, z positions of three agents, making four agents meet, or reducing the spread of n agents.
|
|
TuAT1-25 Interactive Session, 220 |
Add to My Program |
Human-Robot Interaction II - 2.1.25 |
|
|
|
11:00-12:15, Paper TuAT1-25.1 | Add to My Program |
Working with Walt: How a Cobot Was Developed and Inserted on an Auto Assembly Line (I) |
El Makrini, Ilias | Vrije Universiteit Brussel |
Elprama, Shirley A. | Imec-SMIT-VUB |
Van den Bergh, Jan | Hasselt University - tUL - Flanders Make |
Vanderborght, Bram | Vrije Universiteit Brussel |
Jewell, Charlotte. Isabelle. Catherine | Imec-Smit-Vrije Universiteit Brussel |
Jacobs, An | IMinds-SMIT-VUB |
Keywords: Control Architectures and Programming, Industrial Robots, Social Human-Robot Interaction
Abstract: Collaborative robots (cobots) are a category of robots designed to work together with humans. By combining the fortes of the robot, such as precision and strength, with the dexterity and problem-solving ability of the human, it is possible to accomplish tasks that cannot be fully automated and improve the production quality and working conditions of employees. This article presents the results of the ClaXon project, which studies and implements interactions between humans and cobots in factories. The project has led to the integration of a cobot in the Audi car manufacturing plant in Brussels, Belgium. Proofs of concept were realized to study multimodal perceptions for human–robot interaction. The project addressed technical challenges regarding the introduction of cobots on the factory floor. Social experiments were conducted with factory workers to assess the social acceptance of cobots and to study the interactions between human and robot.
|
|
11:00-12:15, Paper TuAT1-25.2 | Add to My Program |
Intuitive Physical Human-Robot Interaction Using a Passive Parallel Mechanism (I) |
Badeau, Nicolas | Université Laval |
Gosselin, Clement | Université Laval |
Foucault, Simon | Université Laval |
Laliberte, Thierry | Universite Laval |
Abdallah, Muhammad | General Motors R&D |
Keywords: Physical Human-Robot Interaction, Parallel Robots, Physically Assistive Devices
Abstract: In this paper, we propose a novel passive mechanism and a macro-mini architecture for effective and intuitive physical human-robot interaction (pHRI). The macro-mini concept allows the use of a mini low-impedance passive (LIP) mechanism to effortlessly and intuitively control a macro high-impedance active (HIA) system such as a gantry manipulator. The proposed mini LIP design is based on a three-degree-of-freedom (3-dof) translational parallel mechanism, which makes it simple and compact, thereby adding little inertia to the end-effector of the macro HIA mechanism. The kinematically and statically decoupled LIP mechanism is first described and analysed. Then, the kinematics of the macro-mini architecture is studied in order to establish the capabilities of the robot. A controller is then proposed that uses the passive joint coordinates of the LIP mechanism as input to control the motion of the HIA mechanism. Finally, experimental results are provided to illustrate the performance and intuitive behaviour of the robot, which is particularly suited for manufacturing applications.
|
|
11:00-12:15, Paper TuAT1-25.3 | Add to My Program |
SMErobotics: Smart Robots for Flexible Manufacturing (I) |
Perzylo, Alexander Clifford | Fortiss GmbH - An-Institut Technische Universitaet Muenchen |
Rickert, Markus | Fortiss, An-Institut Technische Universität München |
Kahl, Bjoern | Univ. of Applied Sciences Bonn-Rhein-Sieg |
Somani, Nikhil | Agency for Science, Technology and Research (A*STAR) |
Lehmann, Christian | Lehmann Robotic Solutions |
Kuss, Alexander | Fraunhofer Institute for Manufacturing Engineering and Automation |
Profanter, Stefan | Fortiss GmbH - An-Institut Technische Universitaet Muenchen |
Beck, Anders Billesø | Technical University of Denmark, Danish Technological Institute |
Haage, Mathias | Lund University |
Hansen, Mikkel Rath | Danish Technological Institute |
Roa, Maximo A. | DLR - German Aerospace Center |
Sornmo, Olof | Cognibotics AB |
Gestegård Robertz, Sven | Lund University |
Thomas, Ulrike | Chemnitz University of Technology |
Veiga, Germano | INESC TEC |
Topp, Elin Anna | Lund University - LTH |
Kessler, Ingmar | Fortiss GmbH |
Danzer, Marinus | KUKA |
Keywords: Intelligent and Flexible Manufacturing, Cognitive Human-Robot Interaction, AI-Based Methods
Abstract: Current market demands require an increasingly agile production environment throughout many manufacturing branches. Traditional automation systems and industrial robots, on the other hand, are often too inflexible to provide an economically viable business case for companies with rapidly changing products. The introduction of cognitive abilities into robotic and automation systems is, therefore, a necessary step toward lean changeover and seamless human-robot collaboration. In this article, we introduce the European Union (EU)-funded research project SMErobotics, which focuses on facilitating the use of robot systems in small and medium-sized enterprises (SMEs). We analyze open challenges for this target audience and develop multiple efficient technologies to address related issues. Real-world demonstrators from several end users and multiple application domains show the impact these smart robots can have on SMEs. This article intends to give a broad overview of the research conducted in SMErobotics. Specific details of individual topics are provided through references to our previous publications.
|
|
11:00-12:15, Paper TuAT1-25.4 | Add to My Program |
The Playful Software Platform: Reactive Programming for Orchestrating Robotic Behavior (I) |
Berenz, Vincent | Max Planck Institute for Intelligent Systems |
Schaal, Stefan | MPI Intelligent Systems & University of Southern California |
Keywords: Software, Middleware and Programming Environments, Control Architectures and Programming, Social Human-Robot Interaction
Abstract: For many service robots, reactivity to changes in their surroundings is a must. However, developing software suitable for dynamic environments is difficult. Existing robotic middleware allows engineers to design behavior graphs by organizing communication between components. But because these graphs are structurally inflexible, they hardly support the development of complex reactive behavior. To address this limitation, we propose Playful, a software platform that applies reactive programming to the specification of robotic behavior. The front-end of Playful is a scripting language which is simple (only five keywords), yet results in the runtime coordinated activation and deactivation of an arbitrary number of higher-level sensory-motor couplings. When using Playful, developers describe actions of various levels of abstraction via behavior trees. During runtime, an underlying engine applies a mixture of logical constructs to obtain the desired behavior. These constructs include conditional ruling, dynamic prioritization based on resource management, and finite state machines. Playful has been successfully used to program an upper-torso humanoid manipulator to perform lively interaction with any human approaching it.
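Playful's concrete syntax is not reproduced in the abstract; the Python sketch below only illustrates the kind of resource-based dynamic prioritization the engine performs (all names are hypothetical):

    # Each behavior declares a trigger condition, a priority, and the
    # resources it needs; per tick, the highest-priority triggered
    # behaviors that can lock all their resources are activated.
    class Behavior:
        def __init__(self, name, condition, priority, resources, action):
            self.name, self.condition = name, condition
            self.priority, self.resources, self.action = priority, resources, action

    def tick(behaviors, state):
        locked = set()
        for b in sorted(behaviors, key=lambda b: -b.priority):
            if b.condition(state) and not (b.resources & locked):
                locked |= b.resources
                b.action(state)

    behaviors = [
        Behavior("track_face", lambda s: s["face_visible"], 2, {"head"},
                 lambda s: print("orienting head to face")),
        Behavior("idle_gaze", lambda s: True, 1, {"head"},
                 lambda s: print("scanning the room")),
    ]
    tick(behaviors, {"face_visible": True})  # only "track_face" gets the head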
|
|
11:00-12:15, Paper TuAT1-25.5 | Add to My Program |
Better Teaming through Visual Cues: How Projecting Imagery in a Workspace Can Improve Human-Robot Collaboration (I) |
Kalpagam Ganesan, Ramsundar | Arizona State University |
Rathore, Yash | Arizona State University |
Ross, Heather | Arizona State University |
Ben Amor, Heni | Arizona State University |
Keywords: Social Human-Robot Interaction, Virtual Reality and Interfaces, Visual Tracking
Abstract: In this paper, we present a communication paradigm using a context-aware mixed reality approach for instructing human workers when collaborating with robots. The main objective of this approach is to utilize the physical work environment as a canvas to communicate task-related instructions and robot intentions in the form of visual cues. A vision-based object tracking algorithm is used to precisely determine the pose and state of physical objects in and around the workspace. A projection mapping technique is used to overlay visual cues on the tracked objects and the workspace. Simultaneous tracking and projection onto objects enable the system to provide just-in-time instructions for carrying out a procedural task. Additionally, the system can also inform and warn humans about the intentions of the robot and safety of the workspace. We hypothesized that using this system for executing a human-robot collaborative task will improve the overall performance of the team and provide a positive experience to the human partner. To test this hypothesis, we conducted an experiment involving human subjects and compared the performance (both objective and subjective) of the presented system with conventional forms of communication, namely printed and mobile display instructions. We found that projecting visual cues enabled human subjects to collaborate more effectively with the robot and resulted in higher efficiency in completing the task.
|
|
11:00-12:15, Paper TuAT1-25.6 | Add to My Program |
A Lower-Back Robotic Exoskeleton: Industrial Handling Augmentation Used to Provide Spinal Support (I) |
Zhang, Ting | Soochow University |
Huang, He (Helen) | North Carolina State University |
Keywords: Physically Assistive Devices, Prosthetics and Exoskeletons, Human Performance Augmentation
Abstract: This paper presents a lower-back exoskeleton prototype that provides back support for industrial workers who are required to manually handle heavy materials. Reducing spinal loads during these tasks can reduce the risk of work-related back injuries. Biomechanical studies show that compression of the lumbar spine is a key risk factor for musculoskeletal injuries. To address this issue, we present a wearable exoskeleton designed to provide back support and reduce lumbar spine compression. To provide effective assistance and avoid injury to muscles or tendons, we aim to apply a continuous torque of approximately 40 Nm on both hip joints, to actively assist both abduction/adduction (HAA) and flexion/extension (HFE). Each actuation unit includes a modular and compact series-elastic actuator (SEA) with a clutch. The SEA provides mechanical compliance at the interface between the exoskeleton and the user, and the clutches can automatically disengage the torque between the exoskeleton and the user. Experimental results show that the exoskeleton can reduce lumbar compression by reducing the need for muscular activity in the spine. Furthermore, powering both HFE and HAA can effectively reduce the lumbar spinal loading the user experiences when lifting and lowering objects in a twisted posture.
|
|
TuAT1-26 Interactive Session, 220 |
Add to My Program |
Multi-Robot Systems V - 2.1.26 |
|
|
|
11:00-12:15, Paper TuAT1-26.1 | Add to My Program |
A Heuristic for Task Allocation and Routing of Heterogeneous Robots While Minimizing Maximum Travel Cost |
Bae, Jungyun | Korea University |
Lee, Jungho | Korea University |
Chung, Woojin | Korea University |
Keywords: Planning, Scheduling and Coordination, Path Planning for Multiple Mobile Robots or Agents, Multi-Robot Systems
Abstract: The article proposes a new heuristic for task allocation and routing of heterogeneous robots. Specifically, we consider a path planning problem where there are two (structurally) heterogeneous robots that start from distinct depots and a set of targets to visit. The objective is to find a tour for each robot such that each target location is visited at least once by one of the robots while the maximum travel cost is minimized. A solution for the Multiple Depot Heterogeneous Traveling Salesman Problem (MDHTSP) with a min-max objective is in great demand, with many potential applications, because it can significantly reduce job completion time. However, there are still no reliable algorithms that run in a short amount of time. As an initial step toward solving the min-max MDHTSP, we present a heuristic based on a primal-dual technique that solves the two-robot case while focusing on task allocation. Based on computational results of the implementation, we show that the proposed algorithm produces good-quality feasible solutions within a relatively short computation time.
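For concreteness, the sketch below illustrates the min-max objective with a naive greedy baseline (not the paper's primal-dual heuristic): each target joins whichever robot's tour keeps the maximum tour cost smallest.

    import math

    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def greedy_minmax(depots, targets):
        # depots: two (x, y) start points; targets: list of (x, y) points.
        tours = [[d] for d in depots]
        costs = [0.0, 0.0]
        for t in targets:
            inc = [dist(tour[-1], t) for tour in tours]
            # Assign to the robot minimizing the resulting max tour cost.
            i = min((0, 1), key=lambda i: max(costs[i] + inc[i], costs[1 - i]))
            tours[i].append(t)
            costs[i] += inc[i]
        return tours, max(costs)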
|
|
11:00-12:15, Paper TuAT1-26.2 | Add to My Program |
Solving Methods for Multi-Robot Missions Planning with Energy Capacity Consideration |
Habibi, Muhammad Khoirul Khakim | ONERA - the French Aerospace Lab |
Grand, Christophe | ONERA |
Lesire, Charles | ONERA |
Pralet, Cedric | ONERA |
Keywords: Planning, Scheduling and Coordination, Task Planning, Multi-Robot Systems
Abstract: We consider the problem of minimizing the total duration of missions by heterogeneous vehicles. The problem contains constraints related to vehicles’ capabilities and energy. The goal is to determine the best routes of each vehicle deployed by choosing which waypoints to pass and which observations to perform. Each vehicle has a particular distance matrix and a limited energy. In order to provide high quality solutions within reasonable computational time, two decomposition-based approximate methods were implemented: (i) the Multiphase heuristic, and (ii) the Two-Phase iterative heuristic. The performance of the methods is evaluated against the Branch-and-Cut algorithm using generated instances.
|
|
11:00-12:15, Paper TuAT1-26.3 | Add to My Program |
Salty–A Domain Specific Language for GR(1) Specifications and Designs |
Elliott, Trevor | Groq, Inc |
Alshiekh, Mohammed | University of Texas at Austin |
Humphrey, Laura | Air Force Research Laboratory |
Pike, Lee | Groq, Inc |
Topcu, Ufuk | The University of Texas at Austin |
Keywords: Formal Methods in Robotics and Automation, Distributed Robot Systems, Process Control
Abstract: Designing robot controllers that correctly react to changes in the environment is a time-consuming and error-prone process. An alternative is to use “correct-by-construction” synthesis approaches to automatically generate controller designs from high-level specifications. In particular, Generalized Reactivity(1) or GR(1) specifications are well-suited to express specifications for robots that must act in dynamic environments, and approaches to generate controller designs from GR(1) specifications are highly computationally efficient. Toward that end, this paper presents Salty, a domain-specific language for GR(1) specifications. While tools exist to synthesize system designs from GR(1) specifications, Salty makes such specifications easier to write and debug by supporting features such as richer input and output types, user-defined macros, common specification patterns, and specification optimization and sanity checking. Salty interfaces with the separately developed synthesis tool Slugs to produce a system or controller design, and Salty translates this design to a software implementation in a variety of languages. We demonstrate Salty on an application involving coordination of multiple unmanned air vehicles (UAVs) and provide a workflow for connecting synthesized UAV controllers to freely available UAV planning and simulation software suites UxAS and AMASE.
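As a purely illustrative aside (this is not Salty syntax, and the propositions are hypothetical), a GR(1) specification pairs environment assumptions with system guarantees, each split into initial, safety, and liveness parts; laid out as plain Python data:

    uav_spec = {
        "env": {
            "init":     ["!intruder"],
            "safety":   [],                    # no assumed environment dynamics
            "liveness": ["!intruder"],         # intruder absent infinitely often
        },
        "sys": {
            "init":     ["at_base"],
            "safety":   ["intruder -> next(retreat)"],
            "liveness": ["at_goal"],           # reach the goal infinitely often
        },
    }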
|
|
11:00-12:15, Paper TuAT1-26.4 | Add to My Program |
Persistent Multi-Robot Mapping in an Uncertain Environment |
Mitchell, Derek | Carnegie Mellon University |
Michael, Nathan | Carnegie Mellon University |
Keywords: Task Planning, Mapping, Motion and Path Planning
Abstract: This paper proposes a method to deploy teams of robots with constrained energy capacities to persistently maintain a map of an uncertain environment. Typical occupancy map approaches assume a static world; however, we introduce a decay in confidence that degrades the occupancy probability of grid cells and promotes revisitation. Further, sections of the map whose occupancy differs between observations are visited more frequently, while unchanging areas are scheduled less frequently. While naive planning over the entire space of multi-agent spatio-temporal states is intractable, the proposed algorithm decouples planning such that constraints are resolved separately in smaller subproblems. We evaluate this approach in simulation and show how the uncertainty of our world model is maintained below an acceptable threshold while the algorithm retains a tractable computation time.
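A minimal sketch of one plausible form of the confidence decay (the exponential form and time constant are assumptions of this sketch, not the paper's exact model): grid-cell log-odds relax toward the uniform prior between observations, so stale cells invite revisits.

    import numpy as np

    def decay(logodds, dt, tau=60.0):
        # Exponential decay of log-odds toward 0 (occupancy probability 0.5).
        return logodds * np.exp(-dt / tau)

    def measurement_update(logodds, meas_logodds):
        # Standard Bayesian log-odds update on observation.
        return logodds + meas_logodds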
|
|
11:00-12:15, Paper TuAT1-26.5 | Add to My Program |
A Fog Robotics Approach to Deep Robot Learning: Application to Object Recognition and Grasp Planning in Surface Decluttering |
Tanwani, Ajay Kumar | UC Berkeley |
Mor, Nitesh | UC Berkeley |
Kubiatowicz, John | UC Berkeley |
Gonzalez, Joseph E. | UC Berkeley |
Goldberg, Ken | UC Berkeley |
Keywords: Networked Robots, Deep Learning in Robotics and Automation, Learning and Adaptive Systems
Abstract: The growing demand for industrial, automotive and service robots presents a challenge to the centralized Cloud Robotics model in terms of privacy, security, latency, bandwidth, and reliability. In this paper, we present a 'Fog Robotics' approach to deep robot learning that distributes compute, storage and networking resources between the Cloud and the Edge in a federated manner. Deep models are trained on non-private (public) synthetic images in the Cloud; the models are adapted to the private real images of the environment at the Edge within a trusted network and subsequently deployed as a service for low-latency and secure inference/prediction for other robots in the network. We apply this approach to surface decluttering, where a mobile robot picks and sorts objects from a cluttered floor by learning a deep object recognition and a grasp planning model. Experiments suggest that Fog Robotics can improve performance by sim-to-real domain adaptation in comparison to exclusively using Cloud or Edge resources, while reducing the inference cycle time by 4x in decluttering with 185 objects over 213 grasp attempts.
|
|
11:00-12:15, Paper TuAT1-26.6 | Add to My Program |
Multirobot Reconnection on Graphs: Problem, Complexity, and Algorithms (I) |
Banfi, Jacopo | Cornell University |
Basilico, Nicola | University of Milan |
Amigoni, Francesco | Politecnico Di Milano |
Keywords: Path Planning for Multiple Mobile Robots or Agents, Networked Robots
Abstract: In several multirobot applications in which communication is limited, the mission may require the robots to iteratively take coordinated joint decisions on how to spread out in the environment and on how to reconnect with each other to share data and compute plans. Exploration and surveillance are examples of these applications. In this paper, we consider the problem of computing robots' paths on a graph-represented environment for restoring connections at minimum traveling cost. We call it the multirobot reconnection problem; we show its NP-hardness and hardness of approximation on some important classes of graphs, and we provide optimal and heuristic algorithms to solve it in practical settings. The techniques we propose are then exploited to derive a new efficient planning algorithm for a relevant connectivity-constrained multirobot planning problem addressed in the literature, the multirobot informative path planning with periodic connectivity problem.
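A simple baseline makes the setting concrete (this is not the paper's optimal or heuristic algorithm, and meeting at a single vertex is only one restricted variant): pick the meeting vertex minimizing the robots' summed travel cost via one Dijkstra search per robot.

    import heapq

    def dijkstra(graph, src):
        # graph: dict mapping vertex -> list of (neighbor, edge_cost).
        dist = {src: 0.0}
        pq = [(0.0, src)]
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist.get(u, float("inf")):
                continue
            for v, w in graph[u]:
                if d + w < dist.get(v, float("inf")):
                    dist[v] = d + w
                    heapq.heappush(pq, (d + w, v))
        return dist

    def best_meeting_vertex(graph, robot_vertices):
        dists = [dijkstra(graph, r) for r in robot_vertices]
        return min(graph, key=lambda v: sum(d.get(v, float("inf")) for d in dists))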
|
|
TuAT2 Regular Session, 517d |
Add to My Program |
Award Session IV |
|
|
Chair: Brock, Oliver | Technische Universität Berlin |
Co-Chair: Aloimonos, Yiannis | University of Maryland |
|
11:00-11:12, Paper TuAT2.1 | Add to My Program |
Efficient Symbolic Reactive Synthesis for Finite-Horizon Tasks |
He, Keliang | Rice University |
Wells, Andrew | Rice University |
Kavraki, Lydia | Rice University |
Vardi, Moshe | Rice University |
Keywords: Formal Methods in Robotics and Automation, Manipulation Planning
Abstract: When humans and robots perform complex tasks together, the robot must have a strategy to choose its actions based on observed human behavior. One well-studied approach for finding such strategies is reactive synthesis. Existing approaches for finite-horizon tasks have used an explicit state approach, which incurs high runtime. In this work, we present a compositional approach to perform synthesis for finite-horizon tasks based on binary decision diagrams. We show that for pick-and-place tasks, the compositional approach achieves exponential speed-ups compared to previous approaches. We demonstrate the synthesized strategy on a UR5 robot.
|
|
11:12-11:24, Paper TuAT2.2 | Add to My Program |
Combined Task and Motion Planning under Partial Observability: An Optimization-Based Approach |
Phiquepal, Camille | University of Stuttgart |
Toussaint, Marc | University of Stuttgart |
Keywords: Task Planning, Motion and Path Planning, Manipulation Planning
Abstract: We propose a novel approach to Combined Task and Motion Planning (TAMP) under partial observability. Previous optimization-based TAMP methods compute optimal plans and paths assuming full observability. However, partial observability requires the solution to be a policy that reacts to the observations that the agent receives. We consider a formulation where observations introduce additional branching in the symbolic decision tree. The solution is now given by a reactive policy on the symbolic level together with a path tree that describes the branchings of optimal motion depending on the observations. Our method works in two stages: First, the symbolic policy is optimized using approximate path costs estimated from independent optimizations of trajectory pieces. Second, we fix the best symbolic policy and optimize a joint trajectory tree. We test our approach on object manipulation and autonomous driving examples. We also compare the algorithm’s performance to a state-of-the-art TAMP planner in fully observable cases.
|
|
11:24-11:36, Paper TuAT2.3 | Add to My Program |
Making Sense of Vision and Touch: Self-Supervised Learning of Multimodal Representations for Contact-Rich Tasks |
Lee, Michelle | Stanford University |
Zhu, Yuke | Stanford University |
Srinivasan, Krishnan | Stanford University |
Shah, Parth | Stanford University |
Savarese, Silvio | Stanford University |
Fei-Fei, Li | Stanford University |
Garg, Animesh | Stanford University |
Bohg, Jeannette | Stanford University |
Keywords: Deep Learning in Robotics and Automation, Perception for Grasping and Manipulation, Sensor-based Control
Abstract: Contact-rich manipulation tasks in unstructured environments often require both haptic and visual feedback. However, it is non-trivial to manually design a robot controller that combines modalities with very different characteristics. While deep reinforcement learning has shown success in learning control policies for high-dimensional inputs, these algorithms are generally intractable to deploy on real robots due to sample complexity. We use self-supervision to learn a compact and multimodal representation of our sensory inputs, which can then be used to improve the sample efficiency of our policy learning. We evaluate our method on a peg insertion task, generalizing over different geometry, configurations, and clearances, while being robust to external perturbations. Results for simulated and real robot experiments are presented.
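A minimal PyTorch sketch of the general idea (the architecture and dimensions are assumptions of this sketch, not the authors' network): separate encoders for camera images and force/torque readings are fused into one multimodal latent vector.

    import torch
    import torch.nn as nn

    class MultimodalEncoder(nn.Module):
        def __init__(self, latent=128):
            super().__init__()
            self.vision = nn.Sequential(
                nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
                nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
                nn.Flatten(), nn.LazyLinear(latent))
            self.haptic = nn.Sequential(
                nn.Linear(6 * 32, latent), nn.ReLU())  # assumed 32 F/T samples
            self.fuse = nn.Linear(2 * latent, latent)

        def forward(self, img, ft):
            # Concatenate per-modality embeddings, then fuse into one latent.
            z = torch.cat([self.vision(img), self.haptic(ft.flatten(1))], dim=1)
            return self.fuse(z)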
|
|
11:36-11:48, Paper TuAT2.4 | Add to My Program |
Deep Visuo-Tactile Learning: Estimation of Tactile Properties from Images |
Takahashi, Kuniyuki | Preferred Networks |
Tan, Jethro | Preferred Networks, Inc |
Keywords: Force and Tactile Sensing, Deep Learning in Robotics and Automation
Abstract: Estimation of tactile properties from vision, such as slipperiness or roughness, is important to effectively interact with the environment. These tactile properties help us decide which actions we should choose and how to perform them. For example, we can drive slower if we see that we have bad traction, or grasp tighter if an item looks slippery. We believe that this ability also helps robots to enhance their understanding of the environment, and thus enables them to tailor their actions to the situation at hand. We therefore propose a model to estimate the degree of tactile properties from visual perception alone (e.g., the level of slipperiness or roughness). Our method extends an encoder-decoder network, in which the latent variables are visual and tactile features. In contrast to previous works, our method does not require manual labeling, but only RGB images and the corresponding tactile sensor data. All our data is collected with a webcam and a uSkin tactile sensor mounted on the end-effector of a Sawyer robot, which strokes the surfaces of 25 different materials. We show that our model generalizes to materials not included in the training data by evaluating the feature space, indicating that it has learned to associate important tactile properties with images.
|
|
11:48-12:00, Paper TuAT2.5 | Add to My Program |
Variational End-To-End Navigation and Localization |
Amini, Alexander | Massachusetts Institute of Technology |
Rosman, Guy | Massachusetts Institute of Technology |
Karaman, Sertac | Massachusetts Institute of Technology |
Rus, Daniela | MIT |
Keywords: Deep Learning in Robotics and Automation, Computer Vision for Transportation, Autonomous Vehicle Navigation
Abstract: Deep learning has revolutionized the ability to learn "end-to-end" autonomous vehicle control directly from raw sensory data. While there have been recent advances on extensions to handle forms of navigation instruction, these works are unable to capture the full distribution of possible actions that could be taken and to reason about localization of the robot within the environment. In this paper, we extend end-to-end driving networks with the ability to understand maps. We define a novel variational network capable of learning from raw camera data of the environment as well as higher level roadmaps to predict (1) a full probability distribution over the possible control commands; and (2) a deterministic control command capable of navigating on the route specified within the map. Additionally, we formulate how our model can be used to localize the robot according to correspondences between the map and the observed visual road topology, inspired by the rough localization that human drivers can perform. We evaluate our algorithms on real-world driving data, and reason about the robustness of the inferred steering commands under various types of rich driving scenarios. In addition, we evaluate our localization algorithm over a new set of roads and intersections which the model has never driven through and demonstrate rough localization in situations without any GPS prior.
|
|
TuBT1 |
220 |
PODS: Tuesday Session II |
Interactive Session |
|
13:30-14:45, Subsession TuBT1-01, 220 | |
Marine Robotics II - 2.2.01 Interactive Session, 5 papers |
|
13:30-14:45, Subsession TuBT1-02, 220 | |
Marine Robotics III - 2.2.02 Interactive Session, 6 papers |
|
13:30-14:45, Subsession TuBT1-03, 220 | |
Visual Odometry II - 2.2.03 Interactive Session, 6 papers |
|
13:30-14:45, Subsession TuBT1-04, 220 | |
Space Robotics II - 2.2.04 Interactive Session, 6 papers |
|
13:30-14:45, Subsession TuBT1-05, 220 | |
Deep Visual Learning I - 2.2.05 Interactive Session, 6 papers |
|
13:30-14:45, Subsession TuBT1-06, 220 | |
Biological Cell Manipulation - 2.2.06 Interactive Session, 6 papers |
|
13:30-14:45, Subsession TuBT1-07, 220 | |
Human Detection and Tracking - 2.2.07 Interactive Session, 6 papers |
|
13:30-14:45, Subsession TuBT1-08, 220 | |
Visual Localization II - 2.2.08 Interactive Session, 6 papers |
|
13:30-14:45, Subsession TuBT1-09, 220 | |
Perception for Manipulation II - 2.2.09 Interactive Session, 6 papers |
|
13:30-14:45, Subsession TuBT1-10, 220 | |
Human-Robot Interaction III - 2.2.10 Interactive Session, 6 papers |
|
13:30-14:45, Subsession TuBT1-11, 220 | |
Medical Robotics VI - 2.2.11 Interactive Session, 5 papers |
|
13:30-14:45, Subsession TuBT1-12, 220 | |
Rehabilitation Robotics II - 2.2.12 Interactive Session, 6 papers |
|
13:30-14:45, Subsession TuBT1-13, 220 | |
Soft Robots III - 2.2.13 Interactive Session, 6 papers |
|
13:30-14:45, Subsession TuBT1-14, 220 | |
Haptics & Interfaces II - 2.2.14 Interactive Session, 6 papers |
|
13:30-14:45, Subsession TuBT1-15, 220 | |
SLAM - Session V - 2.2.15 Interactive Session, 6 papers |
|
13:30-14:45, Subsession TuBT1-16, 220 | |
Humanoid Robots V - 2.2.16 Interactive Session, 6 papers |
|
13:30-14:45, Subsession TuBT1-17, 220 | |
Aerial Systems: Mechanisms II - 2.2.17 Interactive Session, 6 papers |
|
13:30-14:45, Subsession TuBT1-19, 220 | |
Flexible Robots - 2.2.19 Interactive Session, 6 papers |
|
13:30-14:45, Subsession TuBT1-20, 220 | |
Force and Tactile Sensing II - 2.2.20 Interactive Session, 6 papers |
|
13:30-14:45, Subsession TuBT1-21, 220 | |
Deep Visual Learning II - 2.2.21 Interactive Session, 6 papers |
|
13:30-14:45, Subsession TuBT1-22, 220 | |
Object Recognition & Segmentation II - 2.2.22 Interactive Session, 6 papers |
|
13:30-14:45, Subsession TuBT1-23, 220 | |
Motion and Path Planning II - 2.2.23 Interactive Session, 6 papers |
|
13:30-14:45, Subsession TuBT1-24, 220 | |
Industrial Robotics - 2.2.24 Interactive Session, 6 papers |
|
13:30-14:45, Subsession TuBT1-25, 220 | |
Intelligent Transportation II - 2.2.25 Interactive Session, 6 papers |
|
13:30-14:45, Subsession TuBT1-26, 220 | |
Aerial Systems: Applications IV - 2.2.18 Interactive Session, 6 papers |
|
TuBT1-01 Interactive Session, 220 |
Add to My Program |
Marine Robotics II - 2.2.01 |
|
|
|
13:30-14:45, Paper TuBT1-01.1 | Add to My Program |
Streamlines for Motion Planning in Underwater Currents |
To, Kwun Yiu Cadmus | University of Technology Sydney |
Lee, Ki Myung Brian | University of Technology Sydney |
Yoo, Chanyeol | University of Technology Sydney |
Anstee, Stuart David | Defence Science and Technology Group |
Fitch, Robert | University of Technology Sydney |
Keywords: Marine Robotics, Motion and Path Planning, Field Robots
Abstract: Motion planning for underwater vehicles must consider the effect of ocean currents. We present an efficient method to compute reachability and cost between sample points in sampling-based motion planning that supports long-range planning over hundreds of kilometres in complicated flows. The idea is to search a reduced space of control inputs that consists of stream functions whose level sets, or streamlines, optimally connect two given points. Such stream functions are generated by superimposing a control input onto the underlying current flow. A streamline represents the resulting path that a vehicle would follow as it is carried along by the current given that control input. We provide rigorous analysis that shows how our method avoids exhaustive search of the control space, and demonstrate simulated examples in complicated flows, including a traversal between Sydney and Brisbane along the east coast of Australia using actual current predictions.
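A minimal sketch of the underlying picture (the vortex current and control velocity below are toy assumptions, not the paper's method): superimposing a control input onto the ambient flow and integrating yields the path the vehicle is carried along.

    import numpy as np

    def current(p):
        x, y = p
        return 0.1 * np.array([-y, x])  # assumed toy vortex current (m/s)

    def streamline(p0, u_control, dt=0.1, steps=500):
        # Integrate the combined flow field; the result is the path a
        # vehicle holding this control input would follow.
        path = [np.asarray(p0, dtype=float)]
        for _ in range(steps):
            p = path[-1]
            path.append(p + dt * (current(p) + u_control))
        return np.array(path)

    path = streamline([1.0, 0.0], u_control=np.array([0.2, 0.0]))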
|
|
13:30-14:45, Paper TuBT1-01.2 | Add to My Program |
A Distributed Predictive Control Approach for Cooperative Manipulation of Multiple Underwater Vehicle Manipulator Systems |
Heshmati-alamdari, Shahab | KTH Royal Institute of Technology |
Karras, George | National Technical University of Athens |
Kyriakopoulos, Kostas | National Technical Univ. of Athens |
Keywords: Marine Robotics, Cooperative Manipulators
Abstract: This paper addresses the problem of cooperative object transportation for multiple Underwater Vehicle Manipulator Systems (UVMSs) in a constrained workspace involving static obstacles. We propose a Nonlinear Model Predictive Control (NMPC) approach for a team of UVMSs to transport an object while respecting significant constraints and limitations such as kinematic and representation singularities, obstacles within the workspace, joint limits, and control input saturation. More precisely, by exploiting the coupled dynamics between the robots and the object, and using certain load-sharing coefficients, we design a distributed NMPC for each UVMS in order to cooperatively transport the object within the workspace's feasible region. Moreover, the control scheme adopts load sharing among the UVMSs according to their specific payload capabilities. Additionally, the feedback relies on each UVMS's local measurements, and no explicit data are exchanged online among the robots, thus reducing the required communication bandwidth. Finally, real-time simulation results conducted in the UWSim dynamic simulator running in a ROS environment verify the efficiency of the theoretical findings.
|
|
13:30-14:45, Paper TuBT1-01.3 | Add to My Program |
Coordinated Control of a Reconfigurable Multi-Vessel Platform: Robust Control Approach |
Park, Shinkyu | Massachusetts Institute of Technology |
Kayacan, Erkan | Massachusetts Institute of Technology |
Ratti, Carlo | Massachusetts Institute of Technology |
Rus, Daniela | MIT |
Keywords: Robust/Adaptive Control of Robotic Systems, Multi-Robot Systems, Marine Robotics
Abstract: We propose a feedback control system for a reconfigurable multi-vessel platform. The platform consists of N propeller-driven vessels, each of which is capable of latching to another vessel to form a rigid body of connected vessels. The main technical challenges are that i) the dynamic model differs with the configuration of the platform, and ii) the number of control variables in the control system design increases with the total number of vessels in the platform. To address these challenges, we develop a coordinated robust control scheme. Through experiments, we assess the trajectory tracking and disturbance attenuation performance of the control scheme in various configurations of the platform. Experimental results show that the average position and orientation tracking errors are approximately 0.09 m and 3 degrees, and the maximum tracking error-to-disturbance ratio is 1.12.
|
|
13:30-14:45, Paper TuBT1-01.4 | Add to My Program |
Ambient Light Based Depth Control of Underwater Robotic Unit AMussel |
Vasiljevic, Goran | Faculty of Electrical Engineering and Computing, Zagreb, Croatia |
Arbanas, Barbara | University of Zagreb, Faculty of Electrical Engineering and Comp |
Bogdan, Stjepan | University of Zagreb |
Keywords: Marine Robotics, Sensor-based Control, Multi-Robot Systems
Abstract: In this paper, we present a method for depth control of the one-degree-of-freedom (1DOF) underwater robotic platform aMussel, based on measurements from an ambient light sensor. Since ambient light values change during the day and depend on weather conditions, references for the controller are acquired from another aMussel that holds its depth using a pressure-sensor-based controller. Control inputs are transmitted using acoustic communication.
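A minimal sketch of the control idea, assuming a simple proportional law (the gain and sign convention are illustrative, not from the paper): the follower regulates its light reading toward the reference broadcast by the depth-holding aMussel.

    def light_depth_command(light_measured, light_reference, k_p=0.5):
        # More light than the reference implies the robot is too shallow,
        # so a positive command is interpreted here as "dive".
        return k_p * (light_measured - light_reference)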
|
|
13:30-14:45, Paper TuBT1-01.5 | Add to My Program |
A Unified Closed-Loop Motion Planning Approach for an I-AUV in Cluttered Environment with Localization Uncertainty |
Yu, Huan | University of Technology Sydney |
Lu, Wenjie | Duke University |
Liu, Dikai | University of Technology, Sydney |
Keywords: Marine Robotics, Motion and Path Planning, Mobile Manipulation
Abstract: This paper presents a unified motion planning approach for an Intervention Autonomous Underwater Vehicle (I-AUV) in a cluttered environment with localization uncertainty. With the uncertainty propagated by an information filter, a trajectory optimization problem closed by a Linear-Quadratic-Gaussian controller is formulated for a coupled design of optimal trajectory, localization, and control. Due to the presence of obstacles and the complexity of the cluttered environment, a set of feasible initial I-AUV trajectories covering multiple homotopy classes is required by the optimization solvers. Parameterized through polynomials, the initial base trajectories are obtained by solving quasi-quadratic optimization problems that are linearly constrained by waypoints from RRT-Connect, while the initial trajectories of the manipulator are generated by a null-space saturation controller. Simulations on an I-AUV with a 3-DOF manipulator in cluttered underwater environments demonstrate that initial trajectories are generated efficiently and that optimal, collision-free I-AUV trajectories with low state uncertainty are obtained.
|
|
TuBT1-02 Interactive Session, 220 |
Add to My Program |
Marine Robotics III - 2.2.02 |
|
|
|
13:30-14:45, Paper TuBT1-02.1 | Add to My Program |
A Bio-Robotic Remora Disc with Attachment and Detachment Capabilities for Reversible Underwater Hitchhiking |
Wang, Siqi | Beihang University |
Li, Lei | Beihang University |
Chen, YuFeng | Microrobotics Laboratory, School of Applied Sciences and Enginee |
Wang, Yueping | Beihang University |
Sun, Wenguang | Beihang University |
Xiao, Junfei | Beihang University |
Wainwright, Dylan | Harvard University |
Wang, Tianmiao | Beihang University |
Wood, Robert | Harvard University |
Wen, Li | Beihang University |
Keywords: Marine Robotics, Soft Material Robotics, Biologically-Inspired Robots
Abstract: Remoras employ their adhesive discs to rapidly attach to and detach from a wide range of marine surfaces. By analyzing high-speed images of remoras’ (Echeneis naucrates) hitchhiking behavior, we describe the fish’s detachment mechanism as a lip curling up to break the seal between the disc and substrate. By mimicking the kinematic and morphological properties of the biological disc, we fabricated a multi-material biomimetic disc (whose stiffness spans four orders of magnitude) that is capable of both attachment and detachment. Detachment is realized by a flexible cable-driven mechanism that curls the anterior region of the silicone soft lip, allows leakage under the disc, and equalizes the internal pressure to the external pressure. The disc lamellae with attached carbon fiber spinules can be rotated by hydraulic soft actuators whose internal pressure is precisely tuned to the ambient underwater pressure. During attachment, increasing the rotational angle of the lamellae and the preload of the disc significantly enhanced the adhesive forces. We found that curling up the soft lip and folding down the lamellae rapidly reduced the pulling force of the disc by a factor of 254 compared to that under the attached state, which led to detachment. Based on these mechanisms, underwater maneuvers involving repeated attachment and detachment were demonstrated with an integrated ROV unit that had a self-contained actuation and control system for the disc.
|
|
13:30-14:45, Paper TuBT1-02.2 | Add to My Program |
Robot Communication Via Motion: Closing the Human-Robot Interaction Loop Underwater |
Fulton, Michael | University of Minnesota |
Edge, Chelsey | University of Minnesota |
Sattar, Junaed | University of Minnesota |
Keywords: Marine Robotics, Social Human-Robot Interaction, Cognitive Human-Robot Interaction
Abstract: In this paper, we propose a novel method for underwater robot-to-human communication using the motion of the robot as "body language". To evaluate this system, we develop simulated examples of the system's body language gestures, called kinemes, and compare them to a baseline system using flashing colored lights through a user study. Our work shows evidence that motion can be used as a successful communication vector which is accurate, easy to learn, and quick enough to be used, all without requiring any additional hardware to be added to our platform. We thus contribute to "closing the loop" for human-robot interaction underwater by proposing and testing this system, suggesting a library of possible body language gestures for underwater robots, and offering insight on the design of nonverbal robot-to-human communication methods.
|
|
13:30-14:45, Paper TuBT1-02.3 | Add to My Program |
Three-Dimensionally Maneuverable Robotic Fish Enabled by Servo Motor and Water Electrolyser |
Zuo, Wenyu | University of Houston |
Keow, Alicia Li Jen | University of Houston |
Chen, Zheng | University of Houston |
Keywords: Marine Robotics, Biologically-Inspired Robots, Dynamics
Abstract: Three-dimensionally (3D) maneuverable robotic fish are highly desirable due to their ability to explore and survey the underwater environment. Existing depth control mechanisms focus on using compressed air or a piston to generate a volume change, which makes the system bulky and impractical for a small underwater robot. In this paper, a small and compact 3D maneuverable robotic fish is developed. Instead of using a compressed air tank, the robot is equipped with an on-board water electrolyzer to generate the gases for depth change. The fabricated robotic fish shows fast diving and rising performance. A servo motor is used to generate an asymmetric flapping motion of the caudal fin, which leads to two-dimensional (2D) planar motion. A 3D dynamic model is then derived for the fabricated robotic fish. Several open-loop control experiments have been conducted to validate the model as well as the design. The experimental results demonstrate that the robot is capable of generating 3D motion. The robot can achieve 0.13 m/s forward velocity and a 30.6 degree/s turning rate, and it takes about 5.5 s to dive to 0.55 m and 10 s to rise.
|
|
13:30-14:45, Paper TuBT1-02.4 | Add to My Program |
A Multimodal Aerial Underwater Vehicle with Extended Endurance and Capabilities |
Lu, Di | Shanghai Jiao Tong University |
Xiong, Chengke | Shanghai Jiaotong University |
Zeng, Zheng | Shanghai Jiao Tong University |
Lian, Lian | Shanghai Jiaotong University |
Keywords: Marine Robotics
Abstract: A new solution to improving the poor endurance of existing hybrid aerial underwater vehicles (HAUVs) is proposed in this paper. The proposed multimodal hybrid aerial underwater vehicle (MHAUV) merges the design concepts of the fixed-wing unmanned aerial vehicle (UAV), the multirotor, and the underwater glider (UG), and has a novel lightweight pneumatic buoyancy adjustment system. The MHAUV is well suited to moving in distinct media and can achieve extended endurance for long-distance travel in both air and water. The mathematical model is given based on the Newton-Euler formalism. Necessary design principles for the vehicle's physical parameters are obtained through different gliding equilibrium points. Then, a control scheme composed of two separate proportional-integral-derivative (PID) controllers is employed for the vehicle's motion control in multi-domain simulation. The simulation results are presented to verify the multi-domain mobility and mode-switching ability of the proposed vehicle. Finally, a prototype, NEZHA, is introduced as the experimental platform. The success of the flight test, the hovering test, the underwater glide test, and the medium transition test all demonstrate the feasibility of the proposed MHAUV concept.
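A generic PID sketch as a reference point (the gains below are placeholders; the paper employs two separate PID loops for multi-domain motion control):

    class PID:
        def __init__(self, kp, ki, kd):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.integral, self.prev_err = 0.0, None

        def step(self, err, dt):
            # Standard PID: proportional + integral + derivative terms.
            self.integral += err * dt
            deriv = 0.0 if self.prev_err is None else (err - self.prev_err) / dt
            self.prev_err = err
            return self.kp * err + self.ki * self.integral + self.kd * deriv

    depth_pid = PID(2.0, 0.1, 0.5)   # assumed gains
    pitch_pid = PID(1.5, 0.0, 0.3)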
|
|
13:30-14:45, Paper TuBT1-02.5 | Add to My Program |
Design and Experiments of a Squid-Like Aquatic-Aerial Vehicle with Soft Morphing Fins and Arms |
Hou, Taogang | Beihang University |
Yang, Xingbang | Beihang University |
Su, Haohong | Beihang University |
Jiang, Buhui | Beihang University |
Chen, Lingkun | Beihang University |
Wang, Tianmiao | Beihang University |
Liang, Jianhong | Beihang University |
Keywords: Biologically-Inspired Robots, Soft Material Robotics, Marine Robotics
Abstract: The aquatic-aerial multimodal vehicle is a new concept of aircraft that can freely shuttle between water and air. Several natural organisms provide inspiration for realizing this multimodal locomotion. Most current prototypes use rigid link mechanisms or hinges to morph the structure and thus adapt to the aquatic-aerial environment, which is commonly complicated and bulky. In this paper, we present a novel prototype with pneumatically driven soft fins and arms that can fold and spread just like those of the flying squid. The fins and arms can augment the lift force during flight by spreading and reduce the drag force during swimming by folding. The performance of the morphable structures was investigated in wind and water tunnels. The results explain the tradeoff strategies of multimodal locomotion between water and air, and verify the feasibility of the novel aquatic-aerial vehicle with soft morphable structures.
|
|
13:30-14:45, Paper TuBT1-02.6 | Add to My Program |
Nonlinear Orientation Controller for a Compliant Robotic Fish Based on Asymmetric Actuation |
Meurer, Christian | Tallinn University of Technology |
Simha, Ashutosh | Tallinn University of Technology |
Kotta, Ülle | Tallinn University of Technology |
Kruusmaa, Maarja | Tallinn University of Technology |
Keywords: Marine Robotics
Abstract: Compliant fish-like robots are being developed as efficient and dependable underwater observation platforms with low impact on the observed environment. Orientation control is an essential building block for achieving autonomy on these vehicles. So far, the major focus has been on rigid tails or on flexible tails with a high degree of actuation. We present a novel control strategy for an underactuated robotic fish with a flexible tail optimized for cruising. The basis for our approach is the generation of asymmetric velocity profiles for the robot's tail beats. To achieve such velocity profiles, the usual sinusoidal tail actuation is replaced with skewed triangle waves. We provide a simple formulation for such waves, where the skew depends on only one variable, which we define as the skew factor. Furthermore, a nonlinear control law is derived to achieve the desired turning motions. We implement the controller on a compliant fish-like robot with a simple actuation mechanism. The control scheme is experimentally validated, and its robustness is tested in field trials.
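A minimal sketch of a skewed triangle wave (the parameterization is an assumption consistent with the abstract: a single skew factor s in (0, 1), with s = 0.5 recovering the symmetric wave):

    import numpy as np

    def skewed_triangle(t, freq=1.0, amp=1.0, s=0.5):
        # Rise for a fraction s of the period, fall for the rest; the
        # asymmetry yields asymmetric tail-beat velocity profiles.
        phase = (np.asarray(t) * freq) % 1.0
        up = amp * (2.0 * phase / s - 1.0)
        down = amp * (1.0 - 2.0 * (phase - s) / (1.0 - s))
        return np.where(phase < s, up, down)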
|
|
TuBT1-03 Interactive Session, 220 |
Add to My Program |
Visual Odometry II - 2.2.03 |
|
|
|
13:30-14:45, Paper TuBT1-03.1 | Add to My Program |
Project AutoVision: Localization and 3D Scene Perception for an Autonomous Vehicle with a Multi-Camera System |
Heng, Lionel | DSO National Laboratories |
Choi, Benjamin | DSO National Laboratories |
Cui, Zhaopeng | ETH Zurich |
Geppert, Marcel | ETH Zürich |
Hu, Sixing | National University of Singapore |
Kuan, Benson | DSO National Laboratories |
Liu, Peidong | ETH Zurich |
Nguyen, Rang | Ho Chi Minh City University of Technology |
Yeo, YeChuan | DSO National Laboratories |
Geiger, Andreas | Max Planck Institute for Intelligent Systems, Tübingen |
Lee, Gim Hee | National University of Singapore |
Pollefeys, Marc | ETH Zurich |
Sattler, Torsten | ETH Zurich |
Keywords: Field Robots, Computer Vision for Transportation, Autonomous Vehicle Navigation
Abstract: Project AutoVision aims to develop localization and 3D scene perception capabilities for a self-driving vehicle. Such capabilities will enable autonomous navigation in urban and rural environments, in day and night, and with cameras as the only exteroceptive sensors. The sensor suite employs many cameras for both 360-degree coverage and accurate multi-view stereo; the use of low-cost cameras keeps the cost of this sensor suite to a minimum. In addition, the project seeks to extend the operating envelope to include GNSS-less conditions which are typical for environments with tall buildings, foliage, and tunnels. Emphasis is placed on leveraging multi-view geometry and deep learning to enable the vehicle to localize and perceive in 3D space. This paper presents an overview of the project, and describes the sensor suite and current progress in the areas of calibration, localization, and perception.
|
|
13:30-14:45, Paper TuBT1-03.2 | Add to My Program |
Improving the Robustness of Visual-Inertial Extended Kalman Filtering |
Jackson, James | Brigham Young University |
Nielsen, Jerel | Brigham Young University |
McLain, T.W. | Bringham Young University |
Beard, Randal | Brigham Young University |
Keywords: Visual-Based Navigation, Aerial Systems: Perception and Autonomy, Robust/Adaptive Control of Robotic Systems
Abstract: Visual-inertial navigation methods have been shown to be an effective, low-cost way to operate autonomously without GPS or other global measurements; however, most filtering approaches to visual-inertial navigation suffer from observability and consistency problems. To increase the robustness of state-of-the-art methods, we propose a three-fold improvement. First, we propose the addition of a linear drag term in the velocity dynamics, which improves estimation accuracy. Second, we propose the use of a partial-update formulation, which limits the effect of linearization errors in partially observable states, such as sensor biases. Finally, we propose the use of a keyframe reset step to enforce observability and consistency of the normally unobservable position and heading states. In this paper, we derive the proposed filter and use a Monte Carlo simulation experiment to analyze the response of visual-inertial Kalman filters with the above-described additions. The results of this study show that the combination of all of these features significantly improves estimation accuracy and consistency.
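A sketch of one published form of a partial-update step (the blending of prior and full-update covariance below is an assumption of this sketch; consult the paper for its exact formulation): each state receives only a fraction beta_i of the standard Kalman correction.

    import numpy as np

    def partial_update(x, P, r, H, R, beta):
        # r: measurement residual; beta: per-state fractions in [0, 1].
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        B = np.diag(beta)
        I = np.eye(len(x))
        x_new = x + B @ (K @ r)                 # scaled state correction
        P_full = (I - K @ H) @ P                # standard EKF covariance
        P_new = B @ P_full @ B.T + (I - B) @ P @ (I - B).T
        return x_new, P_new

Setting beta_i = 1 for all states recovers the standard EKF update; small beta_i on weakly observable states such as biases limits the damage from linearization errors.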
|
|
13:30-14:45, Paper TuBT1-03.3 | Add to My Program |
Towards Fully Dense Direct Filter-Based Monocular Visual-Inertial Odometry |
Hardt-Stremayr, Alexander | Alpen-Adria-Universität Klagenfurt |
Weiss, Stephan | Alpen-Adria-Universität Klagenfurt |
Keywords: Sensor Fusion, Visual-Based Navigation, Localization
Abstract: We propose a fully dense, direct, filter-based visual-inertial odometry method that estimates the depth of all pixels and the robot state simultaneously, with all uncertainties in the same state vector. Due to the fully dense formulation, our approach works even in low-textured areas with very low, smooth gradients (i.e., scenes where feature-based or semi-dense approaches fail). Our algorithm performs in real-time on a CPU with a time complexity linearly dependent on the number of pixels in the provided image. To achieve this, we propose complexity reduction methods for fast matrix inversion, exploiting specific structures of the covariance matrix. We provide both simulated and real-world results in low-textured areas with a smooth gradient.
|
|
13:30-14:45, Paper TuBT1-03.4 | Add to My Program |
Enhancing V-SLAM Keyframe Selection with an Efficient ConvNet for Semantic Analysis |
Alonso, Iñigo | University of Zaragoza |
Riazuelo, Luis | Instituto De Investigación En IngenieríadeAragón, University of Z |
Murillo, Ana Cristina | University of Zaragoza |
Keywords: Computer Vision for Other Robotic Applications, Semantic Scene Understanding, Deep Learning in Robotics and Automation
Abstract: Selecting relevant visual information from a video is a challenging task on its own, and even more so in robotics due to strong computational restrictions. This work proposes a novel keyframe selection strategy based on image quality and semantic information, which boosts the strategies currently used in Visual-SLAM (V-SLAM). Commonly used V-SLAM methods select keyframes based only on relative displacements and the number of tracked feature points. Our strategy of selecting these keyframes more carefully allows robotic systems to make better use of them. With minimal computational cost, we show that our selection includes more relevant keyframes, which are useful for additional posterior recognition tasks, without penalizing the existing ones, mainly place recognition. A key ingredient is our novel CNN architecture, which runs a quick semantic image analysis on the onboard CPU of the robot. It provides sufficient accuracy significantly faster than related works. We demonstrate our hypothesis on several public datasets with challenging robotic data.
|
|
13:30-14:45, Paper TuBT1-03.5 | Add to My Program |
Unsupervised Learning of Monocular Depth and Ego-Motion Using Multiple Masks |
Wang, Guangming | Shanghai Jiao Tong University |
Wang, Hesheng | Shanghai Jiao Tong University |
Liu, Yiling | Shanghai Jiao Tong University |
Chen, Weidong | Shanghai Jiao Tong University |
Keywords: Deep Learning in Robotics and Automation, SLAM
Abstract: A new unsupervised learning method for depth and ego-motion estimation from monocular video using multiple masks is proposed in this paper. The depth estimation network and the ego-motion estimation network are trained according to the constraints of depth and ego-motion, without ground-truth values. The main contribution of our method is to carefully consider the occlusion of pixels generated when adjacent frames are projected onto each other, and the blank regions generated in the projection target imaging plane. Two fine masks are designed to resolve most of the image pixel mismatches caused by the movement of the camera. In addition, some relatively rare circumstances are considered, and repeated masking is proposed. In essence, the method uses geometric relationships to filter out mismatched pixels during training, making unsupervised learning more efficient and accurate. Experiments on the KITTI dataset show that our method achieves good performance in terms of depth and ego-motion. The generalization capability of our method is demonstrated by training on the low-quality, uncalibrated bike video dataset and evaluating on the KITTI dataset, and the results remain good.
|
|
13:30-14:45, Paper TuBT1-03.6 | Add to My Program |
Experimental Comparison of Visual-Aided Odometry Methods for Rail Vehicles |
Tschopp, Florian | ETH Zurich |
Schneider, Thomas | ETH Zürich |
Palmer, Andrew William | Siemens |
Nourani-Vatani, Navid | Siemens |
Cadena Lerma, Cesar | ETH Zurich |
Siegwart, Roland | ETH Zurich |
Nieto, Juan | ETH Zürich |
Keywords: Computer Vision for Transportation, Intelligent Transportation Systems, SLAM
Abstract: Today, rail vehicle localization is based on infrastructure-side Balises (beacons) together with on-board odometry to determine whether a rail segment is occupied. Such a coarse locking leads to a sub-optimal usage of the rail networks. New railway standards propose the use of moving blocks centered around the rail vehicles to increase the capacity of the network. However, this approach requires accurate and robust position and velocity estimation of all vehicles. In this work, we investigate the applicability, challenges and limitations of current visual and visual-inertial motion estimation frameworks for rail applications. An evaluation against RTK-GPS ground truth is performed on multiple datasets recorded in industrial, sub-urban, and forest environments. Our results show that stereo visual-inertial odometry has a great potential to provide a precise motion estimation because of its complementing sensor modalities and shows superior performance in challenging situations compared to other frameworks.
|
|
TuBT1-04 Interactive Session, 220 |
Add to My Program |
Space Robotics II - 2.2.04 |
|
|
|
13:30-14:45, Paper TuBT1-04.1 | Add to My Program |
Characterizing the Effects of Reduced Gravity on Rover Wheel-Soil Interactions Using Computer Vision Techniques |
Niksirat, Parna | Concordia University |
Skonieczny, Krzysztof | Concordia University |
Forough Nassiraei, Amir | Concordia University |
Keywords: Space Robotics and Automation, Wheeled Robots
Abstract: Mitigating potential hazards for planetary rovers posed by soft soils requires testing in representative environments, such as with Martian soil simulants in reduced gravity. This work describes the experimentation, methods, and results of a rover-soil visualization technique that produced rich datasets of reduced-gravity wheel-terrain interaction. The activities are linked to the upcoming ExoMars space mission through the use of an ExoMars wheel prototype and a Martian soil simulant in simulated Martian gravity produced in parabolic flights. The results indicate that, with wheel normal load held equal between experiments, the amount of soil mobilized by wheel-soil interaction increases as gravity decreases. Moreover, the amount of soil mobilized is more sensitive to slip in lower gravity. The results of the visualization analysis suggest a deterioration in soil resistance and weaker soil bonding at lower gravities, which undermines rover mobility by reducing the net traction. The results have important implications regarding the practice of using a reduced-mass rover on Earth to assess the performance of a full-mass rover in similar soil on an extraterrestrial surface.
|
|
13:30-14:45, Paper TuBT1-04.2 | Add to My Program |
Adaptive H∞ Controller for Precise Manoeuvring of a Space Robot |
Seddaoui, Asma | University of Surrey |
Saaj, Chakravarthini | University of Surrey |
Eckersley, Steve | Surrey Satellite Technology Ltd |
Keywords: Space Robotics and Automation, Motion Control
Abstract: A space robot working in a controlled floating mode can be used for performing in-orbit telescope assembly through simultaneously controlling the motion of the spacecraft base and its robotic arm. Handling and assembling optical mirrors requires the space robot to achieve slow and precise manoeuvres regardless of the disturbances and errors in the trajectory. The robustness offered by the nonlinear H∞ controller, in the presence of environmental disturbances and parametric uncertainties, makes it a viable solution. However, using fixed tuning parameters for this controller does not always result in the desired performance as the arm’s trajectory is not known a priori for orbital assembly missions. In this paper, a complete study on the impact of the different tuning parameters is performed and a new adaptive H∞ controller is developed based on bounded functions. The simulation results presented show that the proposed adaptive H∞ controller guarantees robustness and precise tracking using a minimal amount of forces and torques for assembly operations using a small space robot.
|
|
13:30-14:45, Paper TuBT1-04.3 | Add to My Program |
Belief Space Planning for Reducing Terrain Relative Localization Uncertainty in Noisy Elevation Maps |
Fang, Eugene | Carnegie Mellon University |
Furlong, Michael | SGT/KBRWyle |
Whittaker, William | Carnegie Mellon University |
Keywords: Motion and Path Planning, Localization, Space Robotics and Automation
Abstract: Accurate global localization is essential for planetary rovers to reach mission goals and mitigate operational risk. For initial exploration missions, it is inappropriate to deploy GPS or build other infrastructure for navigating. One way of determining global position is to use terrain relative navigation (TRN). TRN compares planetary rover-perspective images and 3D models to existing satellite orbital imagery and digital elevation models (DEMs) for absolute positioning. However, TRN is limited by the quality of orbital data and the presence and uniqueness of terrain features. This work presents a novel combination of belief space planning with terrain relative navigation. Additionally, we introduce a new method for increasing the robustness of belief space planning to noisy map data. The new algorithm provides a statistically significant reduction in localization uncertainty when tested on elevation data produced from lunar orbital imagery.
|
|
13:30-14:45, Paper TuBT1-04.4 | Add to My Program |
Soil Displacement Terramechanics for Wheel-Based Trenching with a Planetary Rover |
Pavlov, Catherine | Carnegie Mellon University |
Johnson, Aaron | Carnegie Mellon University |
Keywords: Space Robotics and Automation, Wheeled Robots, Mobile Manipulation
Abstract: Planetary exploration rovers are expensive, weight constrained, and cannot be serviced once deployed. Here, we explore one way to increase their capabilities while avoiding the cost, mass, and complexity leading to these issues. We propose to re-use the large wheel actuators for trenching and other digging operations, which will enable a range of missions such as sampling deeper layers of soil. We present a new, closed-form model of the soil displaced by an angled, spinning wheel to analyze the trenching potential of a driving strategy and inform the control of the wheel. The model is demonstrated with single wheel experiments under different driving conditions. The model suggests: that a deep trench does not require large tractive efforts; that the shape of the trench can be controlled; and that a rear wheel has a lower risk of entrapment when trenching than a front wheel. Ultimately this model could be used in a nonprehensile manipulation planning or learning algorithm to enable autonomous trenching.
|
|
13:30-14:45, Paper TuBT1-04.5 | Add to My Program |
Haptic Inspection of Planetary Soils with Legged Robots |
Kolvenbach, Hendrik | ETHZ |
Bärtschi, Christian | ETH Zürich |
Wellhausen, Lorenz | ETH Zürich |
Grandia, Ruben | ETH Zurich |
Hutter, Marco | ETH Zurich |
Keywords: Space Robotics and Automation, Legged Robots, Force and Tactile Sensing
Abstract: Planetary exploration robots encounter challenging terrain during operation. Vision-based approaches have failed in the past to reliably predict soil characteristics, which makes it necessary to probe the terrain haptically. We present a robust, haptic inspection approach for a variety of fine, granular media, which are representative of Martian soil. In our approach, the robot uses one limb to perform an impact trajectory, while supporting the main body with the remaining three legs. The resulting vibration, which is recorded by sensors placed in the foot, is decomposed using the discrete wavelet transformation and assigned a soil class by a Support Vector Machine. We tested two foot designs and validated the robustness of this approach through the extensive use of an open-source dataset, which we recorded on a specially designed single-foot testbed. A remarkable overall classification accuracy of more than 98% could be achieved despite various introduced disturbances. The contributions of the different sensors to the classification performance are evaluated. Finally, we test the generalization performance on unknown soils and show that their behavior can be anticipated.
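For intuition, here is a minimal sketch of the described classification pipeline, assuming each probing impact yields a fixed-length vibration window with a soil label; the wavelet family, decomposition level, and SVM settings are illustrative choices, not the authors' configuration.

```python
# Sketch: wavelet-energy features from foot vibrations + SVM soil classifier.
import numpy as np
import pywt
from sklearn.svm import SVC

def wavelet_features(signal, wavelet="db4", level=4):
    """Energy of each discrete-wavelet-decomposition band."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.array([float(np.sum(c ** 2)) for c in coeffs])

def train_soil_classifier(windows, labels):
    feats = np.vstack([wavelet_features(x) for x in windows])
    clf = SVC(kernel="rbf")
    clf.fit(feats, labels)
    return clf  # clf.predict(wavelet_features(x)[None]) labels a new impact
```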
|
|
13:30-14:45, Paper TuBT1-04.6 | Add to My Program |
Experimental Evaluation of Teleoperation Interfaces for Cutting of Satellite Insulation |
Pryor, Will | Johns Hopkins University |
Vagvolgyi, Balazs | Johns Hopkins University |
Gallagher, William | Georgia Institute of Technology |
Deguet, Anton | Johns Hopkins University |
Leonard, Simon | The Johns Hopkins University |
Whitcomb, Louis | The Johns Hopkins University |
Kazanzides, Peter | Johns Hopkins University |
Keywords: Space Robotics and Automation, Telerobotics and Teleoperation, Virtual Reality and Interfaces
Abstract: On-orbit servicing of satellites is complicated by the fact that almost all existing satellites were not designed to be serviced. This creates a number of challenges, one of which is to cut and partially remove the protective thermal blanketing that encases a satellite prior to performing the servicing operation. A human operator on Earth can perform this task telerobotically, but must overcome difficulties presented by the multi-second round-trip telemetry delay between the satellite and the operator and the limited, or even obstructed, views from the available cameras. This paper reports the results of ground-based experiments with trained NASA robot teleoperators to compare our recently-reported augmented virtuality visualization to the conventional camera-based visualization. We also compare the master console of a da Vinci surgical robot to the conventional teleoperation interface. The results show that, for the cutting task, the augmented virtuality visualization can improve operator performance compared to the conventional visualization, but that operators are more proficient with the conventional control interface than with the da Vinci master console.
|
|
TuBT1-05 Interactive Session, 220 |
Add to My Program |
Deep Visual Learning I - 2.2.05 |
|
|
|
13:30-14:45, Paper TuBT1-05.1 | Add to My Program |
OmniDRL: Robust Pedestrian Detection Using Deep Reinforcement Learning on Omnidirectional Cameras |
Dias Pais, Gonçalo | Instituto Sistemas E Robótica, Lisboa |
Dias, Tiago | Institute for Systems and Robotics, Instituto Superior Técnico, |
Nascimento, Jacinto | Instituto De Sistemas E Robótica, |
Miraldo, Pedro | KTH Royal Institute of Technology, Stockholm |
Keywords: Deep Learning in Robotics and Automation, Visual Learning, Computer Vision for Automation
Abstract: Pedestrian detection is one of the most explored topics in computer vision and robotics. The use of deep learning methods has allowed the development of new and highly competitive algorithms. Deep Reinforcement Learning has proved to be among the state-of-the-art for both detection with perspective cameras and robotics applications. However, for detection in omnidirectional cameras, the literature is still scarce, mostly because of their high levels of distortion. This paper presents a novel and efficient technique for robust pedestrian detection in omnidirectional images. The proposed method uses deep Reinforcement Learning that takes advantage of the distortion in the image. By considering the 3D bounding boxes and their distorted projections into the image, our method is able to provide the pedestrian's position in the world, in contrast to the image positions provided by most state-of-the-art methods for perspective cameras. Our method avoids the computationally expensive pre-processing steps otherwise needed to remove the distortion. Beyond the novel solution, our method compares favorably with state-of-the-art methodologies that do not consider the underlying distortion for the detection task.
|
|
13:30-14:45, Paper TuBT1-05.2 | Add to My Program |
2D3D-MatchNet: Learning to Match Keypoints across 2D Image and 3D Point Cloud |
Feng, Mengdan | National University of Singapore |
Hu, Sixing | National University of Singapore |
Ang Jr, Marcelo H | National University of Singapore |
Lee, Gim Hee | National University of Singapore |
Keywords: Deep Learning in Robotics and Automation, Visual Learning, Localization
Abstract: Large-scale point clouds generated from 3D sensors are more accurate than their image-based counterparts. However, they are seldom used in visual pose estimation due to the difficulty of obtaining 2D-3D image-to-point-cloud correspondences. In this paper, we propose 2D3D-MatchNet - an end-to-end deep network architecture to jointly learn the descriptors for 2D and 3D keypoints from images and point clouds, respectively. As a result, we are able to directly match and establish 2D-3D correspondences between the query image and the 3D point cloud reference map for visual pose estimation. We create our Oxford 2D-3D Patches dataset from the Oxford RobotCar dataset with ground truth camera poses and 2D-3D image-to-point-cloud correspondences for training and testing the deep network. Experimental results verify the feasibility of our approach.
|
|
13:30-14:45, Paper TuBT1-05.3 | Add to My Program |
Teaching Robots to Draw |
Kotani, Atsunobu | Brown University |
Tellex, Stefanie | Brown |
Keywords: Deep Learning in Robotics and Automation, Visual Learning
Abstract: In this paper, we introduce an approach which enables manipulator robots to write handwritten characters or line drawings. Given an image of just-drawn handwritten characters, the robot infers a plan to replicate the image with a writing utensil, and then reproduces the image. Our approach draws each target stroke in one continuous drawing motion and does not rely on handcrafted rules or on predefined paths of characters. Instead, it learns to write from a dataset of demonstrations. We evaluate our approach in both simulation and on two real robots. Our model can draw handwritten characters in a variety of languages which are disjoint from the training set, such as Greek, Tamil, or Hindi, and also reproduce any stroke-based drawing from an image of the drawing.
|
|
13:30-14:45, Paper TuBT1-05.4 | Add to My Program |
Learning Probabilistic Multi-Modal Actor Models for Vision-Based Robotic Grasping |
Yan, Mengyuan | Stanford University |
Li, Adrian | X |
Kalakrishnan, Mrinal | X |
Pastor, Peter | Google X |
Keywords: Deep Learning in Robotics and Automation, Visual Learning, Grasping
Abstract: Many previous works approach vision-based robot grasping by training a value network that evaluates grasp proposals. These approaches require an optimization process at run-time to infer the best action from the value network. As a result, the inference time grows exponentially as the dimension of the action space increases. We propose an alternative method that directly trains a neural density model to approximate the conditional distribution of successful grasp poses given the input images. We construct a neural network that combines a Gaussian mixture with normalizing flows, which is able to represent multi-modal, complex probability distributions. We demonstrate in both simulation and on a real robot that the proposed actor model achieves performance similar to a value network using the Cross-Entropy Method (CEM) for inference, on top-down grasping with a 4-dimensional action space. Our actor model reduces inference time by a factor of three compared to the state-of-the-art CEM method. We believe that actor models will play an important role when scaling up these approaches to higher dimensional action spaces.
|
|
13:30-14:45, Paper TuBT1-05.5 | Add to My Program |
Self-Supervised Learning for Single View Depth and Surface Normal Estimation |
Zhan, Huangying | The University of Adelaide |
Weerasekera, Chamara Saroj | The University of Adelaide |
Garg, Ravi | The University of Adelaide |
Reid, Ian | University of Adelaide |
Keywords: Deep Learning in Robotics and Automation, Visual Learning, Mapping
Abstract: In this work we present a self-supervised learning framework to simultaneously train two Convolutional Neural Networks (CNNs) to predict depth and surface normals from a single image. In contrast to most existing frameworks which represent outdoor scenes as fronto-parallel planes at piece-wise smooth depth, we propose to predict depth with surface orientation while assuming that natural scenes have piece-wise smooth normals. We show that a simple depth-normal consistency used as a soft constraint on the predictions is sufficient and effective for training both these networks simultaneously. The trained normal network provides state-of-the-art predictions while the depth network, relying on the more realistic smooth-normal assumption, outperforms the traditional self-supervised depth prediction network by a large margin on the KITTI benchmark.
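As a hedged illustration of the depth-normal consistency constraint described above, the sketch below derives normals from a depth map by finite differences and penalizes disagreement with predicted normals; the simplified geometry (no full back-projection) and the loss form are assumptions, not the paper's exact formulation.

```python
# Sketch: soft depth-normal consistency between two network outputs.
import numpy as np

def normals_from_depth(depth, fx=1.0, fy=1.0):
    dzdx = np.gradient(depth, axis=1) * fx
    dzdy = np.gradient(depth, axis=0) * fy
    n = np.dstack([-dzdx, -dzdy, np.ones_like(depth)])
    return n / np.linalg.norm(n, axis=2, keepdims=True)

def consistency_loss(depth_pred, normal_pred):
    """normal_pred: (H, W, 3) unit normals predicted by the normal network."""
    cos = np.sum(normals_from_depth(depth_pred) * normal_pred, axis=2)
    return float(np.mean(1.0 - cos))  # 0 when the two predictions agree
```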
|
|
13:30-14:45, Paper TuBT1-05.6 | Add to My Program |
Learning to Drive from Simulation without Real World Labels |
Bewley, Alex | Google AI |
Rigley, Jessica | Wayve Technologies |
Liu, Yuxuan | Wayve Technologies LTD |
Hawke, Jeffrey | Wayve |
Shen, Richard | Wayve |
Lam, Vinh-Dieu | Wayve Technologies |
Kendall, Alex | Engineering Department, University of Cambridge |
Keywords: Deep Learning in Robotics and Automation, Visual Learning, Learning from Demonstration
Abstract: Simulation can be a powerful tool for understanding machine learning systems and designing methods to solve real-world problems. Training and evaluating methods purely in simulation is often ``doomed to succeed'' at the desired task in a simulated environment, but the resulting models are incapable of operation in the real world. Here we present and evaluate a method for transferring a vision-based lane following driving policy from simulation to operation on a rural road without any real-world labels. Our approach leverages recent advances in image-to-image translation to achieve domain transfer while jointly learning a single-camera control policy from simulation control labels. We assess the driving performance of this method using both open-loop regression metrics, and closed-loop performance operating an autonomous vehicle on rural and urban roads.
|
|
TuBT1-06 Interactive Session, 220 |
Add to My Program |
Biological Cell Manipulation - 2.2.06 |
|
|
|
13:30-14:45, Paper TuBT1-06.1 | Add to My Program |
Fabrication and Characterization of Muscle Rings Using Circular Mould and Rotary Electrical Stimulation for Bio-Syncretic Robots |
Zhang, Chuang | Shenyang Institute of Automation Chinese Academy of Sciences |
Shi, Jialin | Shenyang Institute of Automation, Chinese Academy of Sciences |
Wang, Wenxue | Shenyang Institute of Automation, CAS |
Xi, Ning | The University of Hong Kong |
Wang, Yuechao | Shenyang Inst. of Automation |
Liu, Lianqing | Shenyang Institute of Automation |
Keywords: Biological Cell Manipulation, Micro/Nano Robots, Soft Material Robotics
Abstract: Bio-syncretic robots, made up of living biological systems and electromechanical systems, can potentially match the excellent performance of natural biological entities. The study of bio-syncretic robots has therefore attracted considerable attention in recent years. 3D skeletal muscle has been widely used as an actuator owing to its considerable contraction force and controllability. However, the low differentiation quality of C2C12 cells in the tissues hinders broad application in the development of skeletal-muscle-actuated bio-syncretic robots. In this work, an approach based on a circular mould and rotary electrical stimulation is proposed to build high-quality muscle rings, which can be used to actuate various bio-syncretic robots. First, the advantage of the proposed circular mould for muscle ring culture is shown by simulation. Then, muscle rings are fabricated with different moulds using experimentally optimized compositions of the biological mixture. After that, muscle rings in the circular moulds are cultured under different electrical stimulations to show the superiority of the proposed rotary electrical stimulation. Moreover, the contractility of the muscle rings is measured under stimulation with different electrical pulses to study their control properties. This work may be meaningful not only for the development of bio-syncretic robots actuated by 3D muscle tissues but also for muscle tissue engineering.
|
|
13:30-14:45, Paper TuBT1-06.2 | Add to My Program |
Cell Injection Microrobot Development and Evaluation in Microfluidic Chip |
Feng, Lin | Beihang University |
Chen, Dixiao | Beihang University |
Zhou, Qiang | Beihang University |
Song, Bin | Beihang University |
Zhang, Wei | Beihang University |
Keywords: Micro/Nano Robots, Biological Cell Manipulation
Abstract: We propose an innovative microrobot design that can achieve donor cell suction, delivery, and injection into a mammalian oocyte on a microfluidic chip. The microrobot body contains a hollow space that produces suction and ejection forces for injection of cell nuclei through a nozzle at the tip of the robot. Specifically, a controller changes the hollow volume by balancing the magnetic and elastic forces of the membrane, in combination with motion of stages in the XY plane. A glass capillary attached at the tip of the robot contains the nozzle and is able to absorb and inject cell nuclei. The microrobot provides three degrees of freedom and generates micronewton forces. We demonstrate the effectiveness of the proposed microrobot through an experiment on absorption and ejection of 20 µm particles from the nozzle using magnetic control in a microfluidic chip.
|
|
13:30-14:45, Paper TuBT1-06.3 | Add to My Program |
Orienting Oocytes Using Vibrations for In-Vitro Fertilization Procedures |
Meyer, Daniel | Stanford University |
Perez Colon, Martin Luis | Stanford University |
Vahid Alizadeh, Hossein | Stanford University |
Su, Lisa | Stanford University |
Behr, Barry | Stanford University |
Camarillo, David B. | Stanford University |
Keywords: Biological Cell Manipulation, Automation at Micro-Nano Scales, Medical Robots and Systems
Abstract: Accurate positioning of cells is a fundamental task in many assisted reproductive technology procedures (e.g. intracytoplasmic sperm injection or preimplantation genetic diagnosis), where material must be extracted from or inserted into the cell without causing damage. The current method of manual manipulation is a trial-and-error procedure performed by skilled embryologists, who use two different micropipettes to rotate the cell and immobilize it in the desired orientation. This procedure is time-consuming, inconsistent and inefficient. Attempts to automate the process presented in the literature have not yet been implemented in IVF clinics because their high degree of automation requires extensive changes to the systems currently used in clinics. We designed a system that can easily be integrated into the standard equipment of IVF clinics and allows both automated and manual manipulation of cells. The system uses vibrations induced by a surface transducer at the pipette holder to rotate the cell around the pipette tip axis, resulting in 2D motion. To detect whether the polar body is in the desired position after a vibration burst, we developed a polar body detection algorithm. We performed simulations and experiments to confirm that vibrations at the natural frequencies of the system cause rotation around the pipette tip axis. Experimental results show that the system is capable of positioning the polar body in plane in less than 5.41 seconds.
|
|
13:30-14:45, Paper TuBT1-06.4 | Add to My Program |
Vision-Based Automated Sorting of C. Elegans on a Microfluidic Device |
Dong, Xianke | McGill University |
Song, Pengfei | McGill University |
Liu, Xinyu | University of Toronto |
Keywords: Automation at Micro-Nano Scales, Automation in Life Sciences: Biotechnology, Pharmaceutical and Health Care
Abstract: This paper reports a vision-based microfluidic system for automated, high-speed sorting of the nematode worm C. elegans. Exceeding the capabilities of conventional worm-sorting microfluidic devices, which rely purely on passive sorting mechanisms, our system is capable of accurate measurement of worm body length/width and active sorting of worms with the desired sizes from a mixture of worms at different developmental stages. This feature is enabled by the combination of vision-based worm detection and sizing algorithms with automated on-chip worm manipulation. A double-layer microfluidic device with computer-controlled pneumatic valves is developed for sequential loading, trapping, imaging, and sorting of single worms based on vision-based worm size measurements. To keep the system operation robust, vision-based algorithms for detecting multi-worm loading and worm size measurement failure have also been developed. We conducted sorting experiments on 319 worms and achieved an average sorting speed of 10.4 worms per minute (5.8 s/worm) with an operation success rate of 90.3%. This system will facilitate worm biology studies where body size measurement and size-based sorting of many worms are needed.
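The vision-based sizing step can be sketched as follows, under the assumption that a worm appears as a single dark elongated blob in the trapping channel; the thresholding, shape heuristics, and pixel-to-micron scale are illustrative, not the paper's algorithm.

```python
# Sketch: estimate worm body length/width from a grayscale frame.
import cv2
import numpy as np

def measure_worm(gray, um_per_px=1.0):
    _, mask = cv2.threshold(gray, 0, 255,
                            cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    if not contours:
        return None  # could flag a loading/measurement failure here
    worm = max(contours, key=cv2.contourArea)
    perimeter = cv2.arcLength(worm, True)
    area = cv2.contourArea(worm)
    length = perimeter / 2.0           # thin blob: perimeter ~ 2 x length
    width = area / max(length, 1e-6)   # mean width = area / length
    return length * um_per_px, width * um_per_px
```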
|
|
13:30-14:45, Paper TuBT1-06.5 | Add to My Program |
Automated Laser Ablation of Motile Sperm for Immobilization |
Zhang, Zhuoran | University of Toronto |
Dai, Changsheng | University of Toronto |
Wang, Xian | University of Toronto |
Ru, Changhai | Soochow University |
Abdalla, Khaled | CReATe Fertility Centre |
Jahangiri, Sahar | CReATe Fertility Centre |
Librach, Clifford | University of Toronto |
Jarvi, Keith | Mount Sinai Hospital |
Sun, Yu | University of Toronto |
Keywords: Biological Cell Manipulation, Automation at Micro-Nano Scales, Automation in Life Sciences: Biotechnology, Pharmaceutical and Health Care
Abstract: Automated manipulation of single cells is required in both biological and clinical applications. In clinical infertility treatments, a single motile sperm is immobilized and inserted into an egg cell for in vitro fertilization. Sperm immobilization is essential to ease the ensuing pick-up procedure, and importantly, it prevents the sperm tail from beating inside the egg cell, which would lower the fertilization rate. For immobilizing a motile sperm, the sperm tail must be accurately positioned and aligned with the manipulation tool (e.g., laser spot). Manual immobilization has stringent skill requirements and is not able to accurately position the sperm tail at the center of the laser spot for immobilization. This paper presents a visual servo system that is capable of accurately positioning the tail of a motile sperm relative to the laser spot for automated sperm immobilization. A visual servo control strategy was developed to estimate and compensate for the motion of the sperm tail. Experimental results showed that the visual servo controller achieved a positioning accuracy of 1.7 μm, independent of sperm speed or swimming direction. By quantitatively evaluating the effect of laser energy on immobilization, a consistent immobilization success rate of 100% was achieved (based on experiments on 900 sperm) with a throughput five times that of manual operation. Experimental results confirmed that this automated immobilization technique did not induce damage to sperm DNA.
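A heavily simplified, hypothetical version of the tail-positioning servo law is shown below; the gain, the sign convention (whether the stage moves the sample or the optics), and the tail-velocity estimate are all assumptions.

```python
# Sketch: proportional visual servo with motion feed-forward.
import numpy as np

def stage_velocity(tail_px, laser_px, tail_vel_px_s, gain=2.0):
    """Velocity command (px/s) driving the tracked tail point onto the laser spot."""
    error = np.asarray(laser_px, float) - np.asarray(tail_px, float)
    # Proportional term closes the position error; the feed-forward term
    # anticipates the sperm tail's own motion between frames.
    return gain * error + np.asarray(tail_vel_px_s, float)
```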
|
|
13:30-14:45, Paper TuBT1-06.6 | Add to My Program |
A Microrobotic System for Simultaneous Measurement of Turgor Pressure and Cell-Wall Elasticity of Individual Growing Plant Cells |
Burri, Jan Thomas | ETH Zurich |
Vogler, Hannes | Institute of Plant Biology and Zurich-Basel Plant Science Center |
Munglani, Gautam | University of Zürich |
Läubli, Nino Fabian | ETH Zürich |
Grossniklaus, Ueli | Institute of Plant Biology and Zurich-Basel Plant Science Center |
Nelson, Bradley J. | ETH Zurich |
Keywords: Automation at Micro-Nano Scales, Biological Cell Manipulation, Force Control
Abstract: Plant growth and morphogenesis are directed by cell division and the expansion of individual cells. How the tightly controlled process of cell expansion is regulated is poorly understood. We introduce a microrobotic platform able to separately measure the turgor pressure and cell wall elasticity of individual growing, turgid cells by combining microindentation with cell compression experiments. The system independently controls two indenters with geometries at different scales. Indentation measurements are performed automatically by deforming the cells with indenters at a spatial resolution in the nanometer range while recording force and displacement. The dual-indentation technique offers a noninvasive, high-throughput method to characterize the cytomechanics of single turgid cells by separately measuring elasticity and turgor pressure. In this way, the expansion regulation of growing cells can be investigated, as demonstrated here using Lilium longiflorum pollen tubes as an example.
|
|
TuBT1-07 Interactive Session, 220 |
Add to My Program |
Human Detection and Tracking - 2.2.07 |
|
|
|
13:30-14:45, Paper TuBT1-07.1 | Add to My Program |
Asymmetric Local Metric Learning with PSD Constraint for Person Re-Identification |
Wen, Zhijie | Shanghai University |
Sun, Mingyang | Shanghai University |
Li, Ying | Shanghai University |
Ying, Shihui | School of Science, Shanghai University |
Peng, Yaxin | Shanghai University |
Keywords: Human Detection and Tracking, Recognition, Learning and Adaptive Systems
Abstract: Person re-identification is a key issue in both machine learning and video surveillance applications. In particular, defining an appropriate distance metric between person images is very important. Existing metric learning approaches used in person re-identification either learn a single measure or ignore the positive semi-definite (PSD) constraint on the measurement matrix; moreover, because of the limited number of positive sample pairs, some metric learning methods are largely dominated by the large number of negative sample pairs. Considering these issues, we propose a new adaptive local metric learning method with a PSD constraint and an asymmetric sample weighting strategy. Unlike existing metric learning methods which learn a single distance metric, we use an approximation error bound of a smooth metric matrix function over the data manifold to learn local metrics as linear combinations of basis metrics defined on anchor points over different regions of the instance space. Besides, we develop an efficient two-stage algorithm that first learns the linear combination for each instance and then the metric matrices of the anchor points. We first apply the proposed method to 5 UCI databases. Then the proposed approach is applied to person re-identification, achieving better performance than existing methods on three challenging databases (GRID, VIPeR, CUHK01).
|
|
13:30-14:45, Paper TuBT1-07.2 | Add to My Program |
A Fast and Robust 3D Person Detector and Posture Estimator for Mobile Robotic Applications |
Lewandowski, Benjamin | Ilmenau University of Technology |
Liebner, Jonathan | Ilmenau University of Technology |
Wengefeld, Tim | Ilmenau University of Technology |
Mueller, Steffen | Ilmenau University of Technology |
Gross, Horst-Michael | Ilmenau University of Technology |
Keywords: Human Detection and Tracking, RGB-D Perception
Abstract: Due to recent deep learning techniques, person detection seems to be solved in the computer vision domain; however, it is still an issue in mobile robotics, where only limited computing capacity is available on board. The challenge becomes even harder when operating in an environment with people in poses different from the standard upright one. In this work, the environment of a supermarket is considered. Unlike most scenarios targeted by the community, people not only appear in standing postures but also grasp into the shelves or squat in front of them. Furthermore, people are heavily occluded, e.g. by shopping carts. In such a challenging environment, it is important to perceive people early enough and in real time in order to enable socially aware navigation. Classical person detectors often suffer from high posture variance or do not achieve acceptable real-time detection rates. For this reason, different components from the 3D object detection domain have been used to create a new robust person detector for mobile applications. Operating on 3D point clouds allows fast detections in real time up to our goal distance of ten meters and above using the Kinect2 depth sensor. The detector can even differentiate between typical postures of customers who stand or squat in front of shelves.
|
|
13:30-14:45, Paper TuBT1-07.3 | Add to My Program |
Spatiotemporal and Kinetic Gait Analysis System Based on Multisensor Fusion of Laser Range Sensor and Instrumented Insoles |
Eguchi, Ryo | Keio University |
Yorozu, Ayanori | Keio University |
Takahashi, Masaki | Keio University |
Keywords: Human Detection and Tracking, Medical Robots and Systems, Health Care Management
Abstract: Tracking of human legs during walking is a key technology for gait analysis, which evaluates the movement function of the elderly and of patients with gait disorders. Although motion capture cameras are the gold standard for gait analysis because of their high accuracy, they are not always accessible in clinical sites due to their cost, scale, and usability. In response, a laser range sensor (LRS), commonly used for obstacle avoidance and human detection by mobile robots, has recently been employed for tracking leg motions. Some previous studies set an LRS at shin height and tracked leg motions during walking using three or five observation patterns together with Kalman filtering and data association methods. However, these systems had difficulty tracking walking along a circular trajectory, which involves frequent overlaps and occlusions of the legs. Therefore, this paper presents a spatiotemporal and kinetic gait analysis system using a single LRS and instrumented insoles, and proposes a multisensor fusion algorithm for tracking leg motions. The instrumented insoles are in-shoe devices with embedded force sensors that can detect accurate timings of gait events via force sensing. The system identifies gait phases with the fusion algorithm and switches the acceleration input added to the motion models of the tracked legs for the Kalman filter and data association. The tracking performance of the proposed system was evaluated in experiments by measuring walking on a circular trajectory.
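To make the fusion idea concrete, here is a minimal constant-velocity Kalman filter step (one axis) with a gait-phase-dependent acceleration input; the matrices, noise levels, and the phase-to-acceleration mapping are illustrative assumptions rather than the paper's exact model.

```python
# Sketch: Kalman filter step with phase-switched acceleration input.
import numpy as np

def kf_step(x, P, z, accel, dt, q=1e-2, r=1e-3):
    """x = [position, velocity]; accel comes from the detected gait phase."""
    F = np.array([[1.0, dt], [0.0, 1.0]])
    B = np.array([0.5 * dt ** 2, dt])
    H = np.array([[1.0, 0.0]])
    Q = q * np.eye(2)
    R = np.array([[r]])
    x = F @ x + B * accel            # predict with insole-derived input
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R              # update with the LRS leg position z
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ (z - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P
```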
|
|
13:30-14:45, Paper TuBT1-07.4 | Add to My Program |
Part Segmentation for Highly Accurate Deformable Tracking in Occlusions Via Fully Convolutional Neural Networks |
Wan, Weilin | University of Washington |
Walsman, Aaron | University of Washington |
Fox, Dieter | University of Washington |
Keywords: Human Detection and Tracking, Visual Tracking, RGB-D Perception
Abstract: Successfully tracking the human body is an important perceptual challenge for robots that must work around people. Existing methods fall into two broad categories: geometric tracking and direct pose estimation using machine learning. While recent work has shown that direct estimation techniques can be quite powerful, geometric tracking methods using point clouds can provide a very high level of 3D accuracy, which is necessary for many robotic applications. However, these approaches can have difficulty in clutter when large portions of the subject are occluded. To overcome this limitation, we propose a solution based on fully convolutional neural networks (FCN). We develop an optimized Fast-FCN network architecture for our application which allows us to filter observed point clouds and improve tracking accuracy while maintaining interactive frame rates. We also show that this model can be trained with a limited number of examples and almost no manual labelling by using an existing geometric tracker and data augmentation to automatically generate segmentation maps. We demonstrate the accuracy of our full system by comparing it against an existing geometric tracker, and show significant improvement in these challenging scenarios.
|
|
13:30-14:45, Paper TuBT1-07.5 | Add to My Program |
Using Variable Natural Environment Brain-Computer Interface Stimuli for Real-Time Humanoid Robot Navigation |
Nik Aznan, Nik Khadijah | Durham University |
Connolly, Jason | Durham University |
Al Moubayed, Noura | Durham University |
Breckon, Toby | Durham University |
Keywords: Brain-Machine Interface, Humanoid Robots, Deep Learning in Robotics and Automation
Abstract: This paper addresses the challenge of humanoid robot teleoperation in a natural indoor environment via a Brain-Computer Interface (BCI). We leverage deep Convolutional Neural Network (CNN) based image and signal understanding to facilitate both real-time object detection and dry-Electroencephalography (EEG) based decoding of human cortical brain bio-signals. We employ recent advances in dry-EEG technology to stream and collect the cortical waveforms from subjects while they fixate on variable Steady State Visual Evoked Potential (SSVEP) stimuli generated directly from the environment the robot is navigating. To these ends, we propose the use of novel variable BCI stimuli by utilising the real-time video streamed via the on-board robot camera as visual input for SSVEP, where the CNN-detected natural scene objects are altered and flickered with differing frequencies (10Hz, 12Hz and 15Hz). These stimuli are not akin to traditional stimuli, as both the dimensions of the flicker regions and their on-screen positions change depending on the scene objects detected. On-screen object selection via such a dry-EEG enabled SSVEP methodology facilitates the on-line decoding of human cortical brain signals, via a specialised secondary CNN, directly into teleoperation robot commands (approach object, move in a specific direction: right, left or back). The resulting classification demonstrates high performance with a mean accuracy of 85% for the real-time robot navigation experiment.
|
|
13:30-14:45, Paper TuBT1-07.6 | Add to My Program |
General Hand-Eye Calibration Based on Reprojection Error Minimization |
Koide, Kenji | University of Padova |
Menegatti, Emanuele | The University of Padua |
Keywords: Calibration and Identification, Industrial Robots
Abstract: This paper describes a novel hand-eye calibration technique based on reprojection error minimization. In contrast to traditional hand-eye calibration methods, the proposed method directly takes images of the calibration pattern and does not require explicitly estimating the camera pose for each input image. The proposed method is implemented as a pose graph optimization problem, so it can solve the estimation problem efficiently and robustly, and it can easily be extended to different projection models. It can deal with different camera models (e.g., X-ray cameras with a source-detector projection model) by changing the projection model. Through simulations, we validated that the proposed method achieves good estimation accuracy and can be applied to hand-eye calibration with a source-detector camera model. The experimental results with real robots show that the proposed method is applicable to real environments and improves the quality of tasks that require accurate hand-eye estimation, such as 3D reconstruction.
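To make the idea concrete, the sketch below minimizes reprojection error directly over the hand-eye transform (plus a base-to-pattern transform) with batch nonlinear least squares; the paper formulates this as pose-graph optimization instead, and the pinhole projection, parameterization, and variable names here are assumptions.

```python
# Sketch: hand-eye calibration by direct reprojection-error minimization.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(K, cam_T_pat, pts):       # pts: (N, 3) points in the pattern frame
    p = cam_T_pat[:3, :3] @ pts.T + cam_T_pat[:3, 3:4]
    uv = K @ p
    return (uv[:2] / uv[2]).T          # (N, 2) pixel coordinates

def to_T(rotvec, t):
    T = np.eye(4)
    T[:3, :3] = Rotation.from_rotvec(rotvec).as_matrix()
    T[:3, 3] = t
    return T

def residuals(params, K, base_T_ee_list, pattern_pts, observations):
    ee_T_cam = to_T(params[0:3], params[3:6])     # hand-eye unknown
    base_T_pat = to_T(params[6:9], params[9:12])  # pattern pose unknown
    res = []
    for base_T_ee, obs in zip(base_T_ee_list, observations):
        cam_T_pat = np.linalg.inv(base_T_ee @ ee_T_cam) @ base_T_pat
        res.append(project(K, cam_T_pat, pattern_pts) - obs)
    return np.concatenate(res).ravel()

# sol = least_squares(residuals, x0, args=(K, poses, pts, obs))
```

Swapping `project` for a source-detector model is what would extend this style of formulation to X-ray cameras.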
|
|
TuBT1-08 Interactive Session, 220 |
Add to My Program |
Visual Localization II - 2.2.08 |
|
|
|
13:30-14:45, Paper TuBT1-08.1 | Add to My Program |
Estimating the Localizability in Tunnel-Like Environments Using LiDAR and UWB |
Zhen, Weikun | Carnegie Mellon University |
Scherer, Sebastian | Carnegie Mellon University |
Keywords: Localization, Range Sensing, Aerial Systems: Applications
Abstract: The application of robots to inspection tasks has been growing quickly thanks to advancements in autonomous navigation technology, especially robot localization techniques for GPS-denied environments. Although many methods have been proposed to localize a robot using onboard sensors such as cameras and LiDARs, achieving robust localization in geometrically degenerate environments, e.g. tunnels, remains a challenging problem. In this work, we focus on the robust localization problem in such situations. A novel degeneration characterization model is presented to estimate the localizability at a given location in the prior map, and the localizability of a LiDAR and an Ultra-Wideband (UWB) ranging radio is analyzed. Additionally, a probabilistic sensor fusion method is developed to combine the IMU, LiDAR and UWB. Experimental results show that this method allows for robust localization inside a long straight tunnel.
|
|
13:30-14:45, Paper TuBT1-08.2 | Add to My Program |
Global Localization with Object-Level Semantics and Topology |
Liu, Yu | Heriot-Watt University |
Petillot, Yvan R. | Heriot-Watt University |
Lane, David | Heriot-Watt University |
Wang, Sen | Edinburgh Centre for Robotics, Heriot-Watt University |
Keywords: Localization, Semantic Scene Understanding, Computer Vision for Other Robotic Applications
Abstract: Global localization lies at the heart of autonomous navigation and Simultaneous Localization and Mapping (SLAM). The appearance-based approach has been successful, but still faces many open challenges in environments where visual conditions vary significantly over time. In this paper, we propose an integrated solution to leverage object-level dense semantics and spatial understanding of the environment for global localization. Our approach models an environment with 3D dense semantics, semantic graph and their topology. This object-level representation is then used for place recognition via semantic object association, followed by 6-DoF pose estimation by the semantic-level point alignment. Extensive experiments show that our approach can achieve robust global localization under extreme appearance changes. It is also capable of coping with other challenging scenarios, such as dynamic environments and incomplete query observations.
|
|
13:30-14:45, Paper TuBT1-08.3 | Add to My Program |
Look No Deeper: Recognizing Places from Opposing Viewpoints under Varying Scene Appearance Using Single-View Depth Estimation |
Garg, Sourav | Queensland University of Technology |
Vankadari, Madhu Babu | TCS |
Dharmasiri, Thanuja | Monash University |
Hausler, Stephen | Queensland University of Technology |
Sünderhauf, Niko | Queensland University of Technology |
Kumar, Swagat | Tata Consultancy Services |
Drummond, Tom | Monash University |
Milford, Michael J | Queensland University of Technology |
Keywords: Localization, Deep Learning in Robotics and Automation
Abstract: Visual place recognition (VPR) - the act of recognizing a familiar visual place - becomes difficult when there is extreme environmental appearance change or viewpoint change. Particularly challenging is the scenario where both phenomena occur simultaneously, such as when returning for the first time along a road at night that was previously traversed during the day in the opposite direction. While such problems can be solved with panoramic sensors, humans solve this problem regularly with limited field-of-view vision and without needing to constantly turn around. In this paper, we present a new depth- and temporal-aware visual place recognition system that solves the opposing viewpoint, extreme appearance-change visual place recognition problem. Our system performs sequence-to-single frame matching by extracting depth-filtered keypoints using a state-of-the-art depth estimation pipeline, constructing a keypoint sequence over multiple frames from the reference dataset, and comparing these keypoints to the keypoints extracted from a single query image. We evaluate the system on a challenging benchmark dataset and show that it consistently outperforms state-of-the-art techniques. We also develop a range of diagnostic simulation experiments that characterize the contribution of depth-filtered keypoint sequences with respect to key domain parameters including the degree of appearance change and camera motion.
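A hedged sketch of the depth-filtering step is below: only keypoints whose estimated depth exceeds a threshold are kept, since distant structure is more stable across opposing viewpoints. ORB stands in for whatever features the actual system uses, the single-view depth network is a placeholder, and the threshold is illustrative.

```python
# Sketch: keep only far-away keypoints using a single-view depth estimate.
import cv2

def depth_filtered_keypoints(gray, depth_map, min_depth=20.0):
    orb = cv2.ORB_create(nfeatures=2000)
    kps, descs = orb.detectAndCompute(gray, None)
    if descs is None:
        return [], None
    keep = [i for i, kp in enumerate(kps)
            if depth_map[int(kp.pt[1]), int(kp.pt[0])] > min_depth]
    return [kps[i] for i in keep], descs[keep]
```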
|
|
13:30-14:45, Paper TuBT1-08.4 | Add to My Program |
Geometric Relation Distribution for Place Recognition |
Lodi Rizzini, Dario | University of Parma |
Galasso, Francesco | Elettric80 S.p.A |
Caselli, Stefano | University of Parma |
Keywords: Mapping, Localization, Range Sensing
Abstract: In this paper, we illustrate the Geometric Relation Distribution (GRD), a novel signature for place recognition and loop closure with landmark maps. GRD encodes geometric pairwise relations between landmark points into a continuous probability density function. The pairwise angles are represented by a von Mises distribution, whereas two alternative distributions, Erlang or biased Rayleigh, are proposed for distances. The GRD function is represented through its expansion into a Fourier series and a Laguerre polynomial basis. This orthogonal basis representation enables efficient computation of the translation- and rotation-invariant metric used to compare signatures and find potential loop closure candidates. The effectiveness of the proposed method is assessed through experiments on standard datasets.
|
|
13:30-14:45, Paper TuBT1-08.5 | Add to My Program |
Multi-Process Fusion: Visual Place Recognition Using Multiple Image Processing Methods |
Hausler, Stephen | Queensland University of Technology |
Jacobson, Adam | Queensland University of Technology |
Milford, Michael J | Queensland University of Technology |
Keywords: Localization, Visual-Based Navigation
Abstract: Typical attempts to improve the capability of visual place recognition techniques include the use of multi-sensor fusion and the integration of information over time from image sequences. These approaches can improve performance but have disadvantages including the need for multiple physical sensors and calibration processes, both for multiple sensors and for tuning the image matching sequence length. In this paper we address these shortcomings with a novel “multi-sensor” fusion approach applied to multiple image processing methods for a single visual image stream, combined with a dynamic sequence matching length technique and an automatic weighting scheme. In contrast to conventional single method approaches, our approach reduces the performance requirements of a single image processing methodology, instead requiring that within the suite of image processing methods, at least one performs well in any particular environment. In comparison to static sequence length techniques, the dynamic sequence matching technique enables reduced localization latencies through analysis of recognition quality metrics when re-entering familiar locations. We evaluate our approach on multiple challenging benchmark datasets, achieving superior performance to two state-of-the-art visual place recognition systems across environmental changes including winter to summer, afternoon to morning and night to day.
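A minimal sketch of the fusion step, assuming each image processing method yields a query-vs-reference distance matrix; the paper's automatic weighting and dynamic sequence matching are replaced here by fixed weights and single-frame matching for brevity.

```python
# Sketch: fuse normalized distance matrices from several image methods.
import numpy as np

def fuse_difference_matrices(diff_matrices, weights=None):
    mats = []
    for D in diff_matrices:
        D = np.asarray(D, dtype=float)
        mats.append((D - D.min()) / (D.max() - D.min() + 1e-12))
    if weights is None:
        weights = [1.0 / len(mats)] * len(mats)
    fused = sum(w * D for w, D in zip(weights, mats))
    return fused  # argmin along the reference axis = best match per query
```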
|
|
13:30-14:45, Paper TuBT1-08.6 | Add to My Program |
Effective Visual Place Recognition Using Multi-Sequence Maps |
Vysotska, Olga | University of Bonn |
Stachniss, Cyrill | University of Bonn |
Keywords: Localization
Abstract: Visual place recognition is a challenging task, especially in outdoor environments as the scenes naturally change their appearance. In this paper, we propose a method for visual place recognition that is able to deal with seasonal changes, different weather conditions as well as illumination changes. Our approach localizes the robot in a map, which is represented by multiple image sequences collected in the past at different points in time. Our approach is also able to localize a vehicle in a map generated from Google Street View images. Due to the deployment of an efficient hashing-based image retrieval strategy for finding potential matches, in combination with informed search in a data association graph, our approach robustly localizes a robot and quickly relocalizes it if it gets lost. Our experiments suggest that our algorithm is an effective matching approach for aligning the currently obtained images with multiple trajectories for online operation.
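The hashing-based retrieval step can be sketched with random-hyperplane locality-sensitive hashing, which turns image descriptors into binary codes comparable by cheap Hamming distance; the actual hashing scheme, descriptor source, and code length in the paper may differ.

```python
# Sketch: binary hashing of image descriptors for fast candidate retrieval.
import numpy as np

rng = np.random.default_rng(0)

def make_hasher(dim, bits=256):
    planes = rng.standard_normal((bits, dim))
    return lambda desc: (planes @ desc > 0).astype(np.uint8)

def hamming(a, b):
    return int(np.count_nonzero(a != b))

# hasher = make_hasher(dim=4096)
# codes = [hasher(d) for d in reference_descriptors]
# best = min(range(len(codes)),
#            key=lambda i: hamming(hasher(query_desc), codes[i]))
```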
|
|
TuBT1-09 Interactive Session, 220 |
Add to My Program |
Perception for Manipulation II - 2.2.09 |
|
|
|
13:30-14:45, Paper TuBT1-09.1 | Add to My Program |
Exploiting Trademark Databases for Robotic Object Fetching |
Song, Joshua | The University of Queensland |
Kurniawati, Hanna | Australian National University |
Keywords: Service Robots, Semantic Scene Understanding, Deep Learning in Robotics and Automation
Abstract: Service robots require the ability to recognize various household objects in order to carry out certain tasks, such as fetching an object for a person. Manually collecting information on all the objects a robot may encounter in a household is tedious and time-consuming; therefore this paper proposes the use of large-scale data from existing trademark databases. These databases contain logo images and a description of the goods and services the logo was registered under. For example, Pepsi is registered under soft drinks. We extend domain randomization in order to generate synthetic data to train a convolutional neural network logo detector, which outperformed previous logo detectors trained on synthetic data. We also provide a practical implementation for object fetching on a robot, which uses a Kinect and the logo detector to identify the object the human user requested. Tests on this robot indicate promising results, despite not using any real-world photos for training.
|
|
13:30-14:45, Paper TuBT1-09.2 | Add to My Program |
Object Detection Approach for Robot Grasp Detection |
Karaoguz, Hakan | Royal Institute of Technology KTH |
Jensfelt, Patric | KTH - Royal Institute of Technology |
Keywords: Perception for Grasping and Manipulation, Object Detection, Segmentation and Categorization, Deep Learning in Robotics and Automation
Abstract: In this paper, we focus on the robot grasping problem for parallel grippers using image data. For this task, we propose and implement an end-to-end approach. In order to detect good grasping poses for a parallel gripper from RGB images, we employ transfer learning for a Convolutional Neural Network (CNN) based object detection architecture. Our results show that the adapted network either outperforms or is on par with state-of-the-art methods on a benchmark dataset. We also performed grasping experiments on a real robot platform to evaluate our method's real-world performance.
|
|
13:30-14:45, Paper TuBT1-09.3 | Add to My Program |
MetaGrasp: Data Efficient Grasping by Affordance Interpreter Network |
Cai, Junhao | Sun Yat-Sen University |
Cheng, Hui | Sun Yat-Sen University |
Zhang, Zhanpeng | SenseTime Group Limited |
Su, Jingcheng | Sun Yat-Sen University |
Keywords: Visual Learning, Perception for Grasping and Manipulation, Grasping
Abstract: Data-driven approaches to grasping have advanced significantly in recent years, but they usually require large amounts of training data. To increase the efficiency of grasping data collection, this paper presents a novel grasp training system covering the whole pipeline from data collection to model inference. The system collects effective grasp samples with a corrective strategy assisted by an antipodal grasp rule, and we design an affordance interpreter network to predict a pixelwise grasp affordance map. We define graspability, ungraspability and background as grasp affordances. The key advantage of our system is that the pixel-level affordance interpreter network, trained with only a small number of grasp samples under the antipodal rule, achieves significant performance on totally unseen objects and backgrounds. Training samples are collected only in simulation. Extensive qualitative and quantitative experiments demonstrate the accuracy and robustness of our proposed approach. In real-world grasp experiments, we achieve a grasp success rate of 93% on a set of household items and 91% on a set of adversarial items with only about 6,300 simulated samples. We also achieve 87% accuracy in a clutter scenario. Although the model is trained using only RGB images, it also performs well when the background textures are changed, achieving up to 94% accuracy on the set of adversarial objects, which outperforms current state-of-the-art methods.
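As a hedged illustration of consuming such an affordance map, the snippet below picks the most graspable pixel from a three-channel (graspable / ungraspable / background) prediction; the network itself and any grasp-angle decoding are placeholders for the paper's model.

```python
# Sketch: select a grasp point from a pixelwise affordance map.
import numpy as np

def select_grasp(affordance):
    """affordance: (H, W, 3) per-pixel scores over the three affordances."""
    graspable = affordance[..., 0]
    v, u = np.unravel_index(np.argmax(graspable), graspable.shape)
    return (u, v), float(graspable[v, u])   # pixel location and confidence
```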
|
|
13:30-14:45, Paper TuBT1-09.4 | Add to My Program |
Toward Fingertip Non-Contact Material Recognition and Near-Distance Ranging for Robotic Grasping |
Fang, Cheng | Texas A&M University |
Wang, Di | Texas A&M University |
Song, Dezhen | Texas A&M University |
Zou, Jun | Texas A&M University |
Keywords: Perception for Grasping and Manipulation, Range Sensing, Grasping
Abstract: We report the feasibility study of a new acoustic and optical bi-modal distance & material sensor for robotic grasping. The new sensor is designed to be mounted on the robot fingertip to provide last-moment perception before contact happens. It is based on both pulse-echo ultrasound and optoacoustic effects enabled by single-element air-coupled transducers. In contrast to conventional contact-based and recent pre-touch approaches, this new method overcomes their disadvantages and provides robotic fingers with the capability to detect the distance and material type of the target at a near distance before contact occurs, which is crucial for robust and nimble grasping. The proposed sensor has been tested with different materials, shapes, and porous properties. The experimental results show that this sensor design is functional and practical.
|
|
13:30-14:45, Paper TuBT1-09.5 | Add to My Program |
Video-Based Prediction of Hand-Grasp Preshaping with Application to Prosthesis Control |
Taverne, Luke T. | ETH Zurich |
Cognolato, Matteo | University of Applied Sciences Western Switzerland (HES-SO) |
Bützer, Tobias | ETH Zurich |
Gassert, Roger | ETH Zurich |
Hilliges, Otmar | ETH Zurich |
Keywords: Perception for Grasping and Manipulation, Prosthetics and Exoskeletons, Deep Learning in Robotics and Automation
Abstract: Among the currently available grasp-type selection techniques for hand prostheses, there is a distinct lack of intuitive, robust, low-latency solutions. In this paper we investigate the use of a portable, forearm-mounted, video-based technique for the prediction of hand-grasp preshaping for arbitrary objects. The purpose of this system is to automatically select the grasp-type for the user of the prosthesis, potentially increasing ease-of-use and functionality. This system can be used to supplement and improve existing control strategies, such as surface electromyography (sEMG) pattern recognition, for prosthetic and orthotic devices. We designed and created a suitable dataset consisting of RGB-D video data for 2212 grasp examples split evenly across 7 classes; 6 grasps commonly used in activities of daily living, and an additional no-grasp category. We processed and analyzed the dataset using several state-of-the-art deep learning architectures. Our selected model shows promising results for realistic, intuitive, real-world use, reaching per-frame accuracies on video sequences of up to 95.90% on the validation set. Such a system could be integrated into the palm of a hand prosthesis, allowing an automatic prediction of the grasp-type without requiring any special movements or aiming by the user.
|
|
13:30-14:45, Paper TuBT1-09.6 | Add to My Program |
Learning Affordance Segmentation for Real-World Robotic Manipulation Via Synthetic Images |
Chu, Fu-Jen | University of Michigan |
Xu, Ruinian | Georgia Institute of Technology |
Vela, Patricio | Georgia Institute of Technology |
Keywords: Perception for Grasping and Manipulation, Deep Learning in Robotics and Automation, RGB-D Perception
Abstract: This paper presents a deep learning framework to predict the affordances of object parts for robotic manipulation. The framework segments affordance maps by jointly detecting and localizing candidate regions within an image. Rather than requiring annotated real-world images, the framework learns from synthetic data and adapts to real-world data without supervision. The method learns domain-invariant region proposal networks and task-level domain adaptation components with regularization on the predicted domains. A synthetic version of the UMD dataset is collected for auto-generating annotated, synthetic input data. Experimental results show that the proposed method outperforms an unsupervised baseline and achieves performance close to state-of-the-art supervised approaches. An ablation study establishes the performance gap between the proposed method and the supervised equivalent (30%). Real-world manipulation experiments demonstrate use of the affordance segmentations for task execution, which achieves the same performance as supervised approaches.
|
|
TuBT1-10 Interactive Session, 220 |
Add to My Program |
Human-Robot Interaction III - 2.2.10 |
|
|
|
13:30-14:45, Paper TuBT1-10.1 | Add to My Program |
Reactive Walking Based on Upper-Body Manipulability: An Application to Intention Detection and Reaction |
Mohammadi, Pouya | Braunschweig University of Technology |
Mingo Hoffman, Enrico | Fondazione Istituto Italiano Di Tecnologia |
Muratore, Luca | Istituto Italiano Di Tecnologia |
Tsagarakis, Nikos | Istituto Italiano Di Tecnologia |
Steil, Jochen J. | Technische Universität Braunschweig |
Keywords: Physical Human-Robot Interaction, Humanoid Robots, Humanoid and Bipedal Locomotion
Abstract: In this paper, we look at the challenge of human-robot interaction during locomotion. We consider a hand-in-hand interaction scenario in which a human compliantly interacts with the upper body of an impedance-controlled humanoid. By exploiting the velocity transmission of the robot arms and monitoring their manipulability as a measure of manipulation quality during the interaction, the proposed method derives suitable reactive steps in appropriate directions, ensuring that the robot's manipulation ability is maintained and that its arms retain a high capacity for motion along the different directions. The proposed approach can be combined with different walking pattern generators and is not tailored to the specific one used in this work. The results of the proposed method are experimentally validated on the COMAN+ humanoid robot, showing the efficacy of the method in generating reactive stepping driven by the interaction and manipulation motion of the human operator. In addition, the work provides a real-time software architecture to control the COMAN+ humanoid, which is also flexible enough to be used for the control of other robot platforms.
|
|
13:30-14:45, Paper TuBT1-10.2 | Add to My Program |
A Self-Modulated Impedance Multimodal Interaction Framework for Human-Robot Collaboration |
Muratore, Luca | Istituto Italiano Di Tecnologia |
Laurenzi, Arturo | Istituto Italiano Di Tecnologia |
Tsagarakis, Nikos | Istituto Italiano Di Tecnologia |
Keywords: Physical Human-Robot Interaction, Compliance and Impedance Control, Humanoid Robots
Abstract: Human-robot interaction is a fundamental prerequisite for any robot performing a physical task in collaboration with a human. Disturbances arising from partially known task payloads, unexpected interaction forces in general, and uncertainty in the interpretation of human intention in terms of motions and forces can pose significant challenges and eventually compromise the execution of the collaborative task. This work presents a novel, intrinsically adaptable multimodal (force, motion and verbal) interaction framework for human-robot collaboration (HRC) that leverages an online self-tuning stiffness regulation principle to adapt to interaction/payload forces and reject disturbances arising from unexpected interaction loads. The presented method also rejects unnecessary motion commands (e.g. oscillations generated by the human operator) before they reach the robot co-worker by filtering out human-generated motions that are outside the range (in terms of speed and acceleration) of the envisioned manipulation manoeuvres. Finally, a verbal interaction channel allows the operator to securely convey high-level intentions and to control the states of the task execution. We evaluated and demonstrated the effectiveness of the proposed multimodal interaction framework in a heavy-load-carrying human-robot collaboration task using the humanoid robot COMAN+.
|
|
13:30-14:45, Paper TuBT1-10.3 | Add to My Program |
SMT-Based Control and Feedback for Social Navigation |
Campos, Thais | Cornell University |
Pacheck, Adam | Cornell University |
Hoffman, Guy | Cornell University |
Kress-Gazit, Hadas | Cornell University |
Keywords: Physical Human-Robot Interaction, Formal Methods in Robotics and Automation
Abstract: This paper combines techniques from Formal Methods and Human-Robot Interaction (HRI) to address the challenge of a robot walking with a human while maintaining a socially acceptable distance and avoiding collisions. We formulate a set of constraints on the robot motion using Satisfiability Modulo Theories (SMT) formulas, and synthesize robot control that is guaranteed to be safe and correct. Due to its use of high-level formal specifications, the controller is able to provide feedback to the user in situations where human behavior causes the robot to fail. This feedback allows the human to adjust their behavior and recover joint navigation. We demonstrate the behavior of the robot in a variety of simulated scenarios and compare it to utility-based side-by-side navigation control.
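As a toy illustration of posing such constraints to an SMT solver, the snippet below uses Z3 to check whether any one-step velocity command keeps a socially acceptable distance from a predicted human position; the bounds, distance threshold, and single-step model are invented for illustration and are far simpler than the synthesized controller in the paper.

```python
# Sketch: one-step social-distance feasibility check with Z3.
from z3 import Real, Solver, And, sat

vx, vy = Real("vx"), Real("vy")
rx, ry, hx, hy, dt = 0.0, 0.0, 1.0, 0.5, 0.1   # robot/human poses (made up)
dmin2 = 0.49                                    # (0.7 m)^2 minimum distance
s = Solver()
s.add(And(vx >= -1, vx <= 1, vy >= -1, vy <= 1))  # velocity limits
nx, ny = rx + vx * dt, ry + vy * dt               # next robot position
s.add((nx - hx) ** 2 + (ny - hy) ** 2 >= dmin2)   # keep social distance
if s.check() == sat:
    m = s.model()
    print("feasible step:", m[vx], m[vy])
else:
    print("no safe velocity exists: explain the failure to the human")
```

When the query is unsatisfiable, the violated constraints themselves are what the robot can verbalize as feedback.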
|
|
13:30-14:45, Paper TuBT1-10.4 | Add to My Program |
Safe and Efficient High Dimensional Motion Planning in Space-Time with Time Parameterized Prediction |
Li, Shen | MIT |
Shah, Julie A. | MIT |
Keywords: Physical Human-Robot Interaction, Human-Centered Robotics, Manipulation Planning
Abstract: In this work, we propose an algorithm that can plan safe and efficient robot trajectories in real time, given time-parameterized motion predictions, in order to avoid fast-moving obstacles in human-robot collaborative environments. Our algorithm is able to reduce the robot configuration space and the time domain significantly by constructing a Lazy Safe Interval Probabilistic Roadmap based on a pre-planned path. The algorithm then plans efficient obstacle-avoidance strategies within the space-time roadmap. We benchmarked our algorithm by evaluating the performance of a simulated 6-joint manipulator attempting to avoid a quickly moving human hand, using a dataset collected from human experiments. We compared our algorithm's performance with those of 8 variations of prior state-of-the-art planners. Results from this empirical evaluation indicate that our method generated safe plans in 97.5% of the evaluated situations, achieved a planning speed 30 times faster than the benchmarked methods that planned in the time domain without space reduction, and accomplished the minimal solution execution time among the benchmarked planners with a similar planning speed.
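The notion of safe intervals can be sketched as follows for a single configuration, given a time-parameterized obstacle prediction; the collision check, prediction function, horizon, and time step are illustrative placeholders.

```python
# Sketch: compute collision-free time intervals for one configuration.
import numpy as np

def safe_intervals(config, predict_obstacle, in_collision,
                   horizon=5.0, dt=0.05):
    """Return [(t_start, t_end), ...] during which `config` is collision-free."""
    times = np.arange(0.0, horizon, dt)
    free = [not in_collision(config, predict_obstacle(t)) for t in times]
    intervals, start = [], None
    for t, ok in zip(times, free):
        if ok and start is None:
            start = t
        elif not ok and start is not None:
            intervals.append((start, t))
            start = None
    if start is not None:
        intervals.append((start, times[-1] + dt))
    return intervals
```

Nodes in a safe-interval roadmap then pair a configuration with one such interval, which keeps the time dimension compact.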
|
|
13:30-14:45, Paper TuBT1-10.5 | Add to My Program |
Fast Online Segmentation of Activities from Partial Trajectories |
Iqbal, Tariq | Massachusetts Institute of Technology |
Li, Shen | MIT |
Fourie, Christopher K | Massachusetts Institute of Technology (MIT) |
Hayes, Bradley | University of Colorado Boulder |
Shah, Julie A. | MIT |
Keywords: Physical Human-Robot Interaction, Human Detection and Tracking, Industrial Robots
Abstract: Augmenting a robot with the capacity to understand, label, and segment the activities of the people it collaborates with allows the robot to generate an efficient and safe plan for performing its own actions. In this work, we introduce an online activity segmentation algorithm that can detect activity segments by processing a partial trajectory. We model the transitions through activities as a hidden Markov model, which runs online by implementing an efficient particle-filtering approach to infer the maximum a posteriori estimate of the activity sequence. This process is complemented by an online search process to refine activity segments using task model information about the partial order of activities. We evaluated our algorithm by comparing its performance to two state-of-the-art activity segmentation algorithms on three human activity datasets. The proposed algorithm improved activity segmentation accuracy across all three datasets compared with the other two approaches, with a range from 11.3% to 65.5%, and could accurately recognize an activity through observation alone for 31.6% of the initial trajectory of that activity, on average. We also implemented the algorithm on an industrial mobile robot during an automotive assembly task in which the robot tracked a human worker's progress and provided the worker with the correct materials at the appropriate time.
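A minimal sketch of the particle-filtering step over an activity HMM, as described at a high level in the abstract; the transition matrix, observation likelihood, and activity set are toy assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
N_ACT, N_PART = 3, 200
T = np.array([[0.9, 0.1, 0.0],   # assumed left-to-right activity transitions
              [0.0, 0.9, 0.1],
              [0.0, 0.0, 1.0]])

def likelihood(obs, act):
    """Toy observation model: each activity prefers a feature value."""
    return np.exp(-0.5 * (obs - act) ** 2)

particles = np.zeros(N_PART, dtype=int)
weights = np.full(N_PART, 1.0 / N_PART)
for obs in [0.1, 0.2, 1.1, 0.9, 2.0]:      # partial feature trajectory
    # propagate each particle through the transition model
    particles = np.array([rng.choice(N_ACT, p=T[p]) for p in particles])
    weights *= likelihood(obs, particles)
    weights /= weights.sum()
    # resample when the effective sample size degenerates
    if 1.0 / np.sum(weights ** 2) < N_PART / 2:
        idx = rng.choice(N_PART, size=N_PART, p=weights)
        particles, weights = particles[idx], np.full(N_PART, 1.0 / N_PART)
    map_act = np.bincount(particles, weights, minlength=N_ACT).argmax()
    print("current activity estimate:", map_act)
```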
|
|
13:30-14:45, Paper TuBT1-10.6 | Add to My Program |
Tactile-Based Whole-Body Compliance with Force Propagation for Mobile Manipulators (I) |
Leboutet, Quentin | Technical University of Munich |
Dean-Leon, Emmanuel | Technischen Universitaet Muenchen |
Bergner, Florian | Technical University of Munich |
Cheng, Gordon | Technical University of Munich |
Keywords: Physical Human-Robot Interaction, Force and Tactile Sensing, Wheeled Robots
Abstract: We propose a control method that provides mobile robots with whole-body compliance capabilities in response to multi-contact physical interaction with their environment. The external forces applied to the robot, as well as their localization on its kinematic tree, are measured using a multimodal, self-configuring and self-calibrating artificial skin. We formulate a compliance control law in Cartesian space as a set of quadratic optimization problems, solved in parallel for each limb involved in the interaction. This formulation makes it possible to determine the torque commands required to generate the desired reactive behaviors while taking the robot's kinematic and dynamic constraints into account. When a given limb fails to produce the desired compliant behavior, the generalized force residual at the contact points in question is propagated to a parent limb in order to be adequately compensated. Hence, the robot's compliance range can be extended in a manner that is both robust and easily adjustable. Experiments performed on a dual-arm velocity-controlled mobile manipulator show that our methodology is robust to null-space interactions and robot physical constraints.
|
|
TuBT1-11 Interactive Session, 220 |
Add to My Program |
Medical Robotics VI - 2.2.11 |
|
|
|
13:30-14:45, Paper TuBT1-11.1 | Add to My Program |
Laparoscopy Instrument Tracking for Single View Camera and Skill Assessment |
Gautier, Benjamin | Heriot-Watt University |
Tugal, Harun | Heriot-Watt University |
Erden, Mustafa Suphi | Heriot-Watt University |
Keywords: Surgical Robotics: Laparoscopy, Visual Tracking
Abstract: Assessment of minimally invasive surgical skills is a non-trivial task. It usually requires the presence and time of expert observers, is subject to the subjectivity of those observers, and demands special, expensive equipment and software dedicated to assessment. This study develops an algorithm for tracking laparoscopy instruments in the video stream of a standard laparoscopy training box equipped with a single webcam, and proposes new criteria to assess skill level using the extracted tool trajectories. Together, these two techniques constitute a significant step towards a low-cost, automated, and widely applicable laparoscopy training and assessment system built on a standard physical training box with a webcam, without requiring any special extra equipment or sensors. The developed visual tracking algorithm recovers the 3D positions of the laparoscopic instrument tips, to which simple colored tapes (markers) are attached. The new assessment criteria we propose are based on frequency analysis and linear discriminant analysis of the 3D reconstructed instrument trajectories. The performance of the proposed criteria is compared to conventional criteria for laparoscopy training and demonstrated to be superior on data we recorded from six professional laparoscopy surgeons and ten novice subjects.
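A hedged sketch of the frequency-analysis-plus-LDA pipeline: extract band-limited spectral power from 3D tool-tip trajectories and separate skill classes with linear discriminant analysis. The synthetic trajectories and the assumption that novices show more high-frequency tremor are ours, standing in for the recorded data:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)

def spectral_features(traj, n_bins=8):
    """Band-limited spectral power of each axis of a (T, 3) trajectory."""
    power = np.abs(np.fft.rfft(traj, axis=0)) ** 2
    bins = np.array_split(power, n_bins, axis=0)
    return np.concatenate([b.mean(axis=0) for b in bins])  # 3*n_bins features

experts = [np.cumsum(rng.normal(0, 0.5, (200, 3)), axis=0) for _ in range(10)]
novices = [np.cumsum(rng.normal(0, 0.5, (200, 3)), axis=0)
           + rng.normal(0, 1.0, (200, 3)) for _ in range(10)]
X = np.array([spectral_features(t) for t in experts + novices])
y = np.array([0] * 10 + [1] * 10)       # 0 = expert, 1 = novice

lda = LinearDiscriminantAnalysis().fit(X, y)
print("training accuracy:", lda.score(X, y))
```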
|
|
13:30-14:45, Paper TuBT1-11.2 | Add to My Program |
OffsetNet: Deep Learning for Localization in the Lung Using Rendered Images |
Sganga, Jake | Stanford University |
Eng, David | Stanford |
Graetzel, Chauncey | Auris Health Inc |
Camarillo, David B. | Stanford University |
Keywords: Computer Vision for Automation, Deep Learning in Robotics and Automation, Surgical Robotics: Steerable Catheters/Needles
Abstract: Navigating surgical tools in the dynamic and tortuous anatomy of the lung's airways requires accurate, real-time localization of the tools with respect to the preoperative scan of the anatomy. Such localization can inform human operators or enable closed-loop control by autonomous agents, which would require accuracy not yet reported in the literature. In this paper, we introduce a deep learning architecture, called OffsetNet, to accurately localize a bronchoscope in the lung in real-time. After training on only 30 minutes of recorded camera images in conserved regions of a lung phantom, OffsetNet tracks the bronchoscope's motion on a held-out recording through these same regions at an update rate of 47 Hz and an average position error of 1.4 mm. Because this model performs poorly in less conserved regions, we augment the training dataset with simulated images from these regions. To bridge the gap between camera and simulated domains, we implement domain randomization and a generative adversarial network (GAN). After training on simulated images, OffsetNet tracks the bronchoscope's motion in less conserved regions at an average position error of 2.4 mm, which meets conservative thresholds required for successful tracking.
|
|
13:30-14:45, Paper TuBT1-11.3 | Add to My Program |
A Self-Adaptive Motion Scaling Framework for Surgical Robot Remote Control |
Zhang, Dandan | Imperial College London |
Xiao, Bo | King's College London |
Huang, Baoru | Imperial College London |
Zhang, Lin | Imperial College London |
Liu, Jindong | Imperial College London |
Yang, Guang-Zhong | Imperial College London |
Keywords: Surgical Robotics: Laparoscopy, Medical Robots and Systems, Learning and Adaptive Systems
Abstract: Master-slave control is a common form of human-robot interaction for robotic surgery. To ensure seamless and intuitive control, a mechanism of self-adaptive motion scaling during teleoperation is proposed in this paper. The operator can retain precise control when conducting delicate or complex manipulation, while the movement to a remote target is accelerated via adaptive motion scaling. The proposed framework consists of three components: 1) situation awareness, 2) skill level awareness, and 3) task awareness. The self-adaptive motion scaling ratio allows the operators to perform surgical tasks with high efficiency, forgoing the need for frequent clutching and instrument repositioning. The proposed framework has been verified on a da Vinci Research Kit (dVRK) to assess its usability and robustness. An in-house database is constructed for offline model training and parameter estimation, including both the kinematic data obtained from the robot and visual cues captured through the endoscope. Detailed user studies indicate that a suitable motion-scaling ratio can be obtained and adjusted online. The overall performance of the operators in terms of control efficiency and task completion is significantly improved with the proposed framework.
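To make the self-adaptive scaling idea concrete, here is a hedged sketch of a scaling law that blends between a fine and a gross motion-scaling ratio based on master-side speed; all thresholds and the proximity-damping term are illustrative assumptions, not the paper's awareness models:

```python
import numpy as np

def motion_scale(master_speed, dist_to_target,
                 s_fine=0.3, s_gross=1.5, v_lo=0.01, v_hi=0.10):
    """Blend the scaling ratio from fine to gross as master speed rises."""
    blend = np.clip((master_speed - v_lo) / (v_hi - v_lo), 0.0, 1.0)
    # damp amplification when the slave is already close to the target
    proximity = np.clip(dist_to_target / 0.05, 0.0, 1.0)
    return s_fine + blend * proximity * (s_gross - s_fine)

dx_master = np.array([0.002, 0.0, 0.001])       # master increment [m]
speed = np.linalg.norm(dx_master) / 0.001       # sampled at 1 kHz
dx_slave = motion_scale(speed, dist_to_target=0.12) * dx_master
```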
|
|
13:30-14:45, Paper TuBT1-11.4 | Add to My Program |
Autonomous Flexible Endoscope for Minimally Invasive Surgery with Enhanced Safety |
Ma, Xin | Chinese University of Hong Kong |
Song, Chengzhi | Chinese University of Hong Kong |
Chiu, Philip Wai Yan | Chinese University of Hong Kong |
Li, Zheng | The Chinese University of Hong Kong |
Keywords: Surgical Robotics: Laparoscopy, Visual Tracking, Robot Safety
Abstract: Automation in robotic surgery has become an increasingly attractive topic. Although full automation remains out of reach, task autonomy and conditional autonomy are highly achievable. Apart from task performance, one major concern in robotic surgery is safety. In this paper, we present a flexible endoscope that can automatically guide minimally invasive surgical operations. It is based on a tendon-driven continuum mechanism and is integrated with the da Vinci Research Kit (DVRK). In total, the proposed flexible endoscope has six degrees of freedom (DOFs). Visual servoing is adopted to automatically track the surgical instruments. During tracking, an optimal control method is used to minimize the motion and space occupation of the flexible endoscope, improving the safety of both the robot system and the assistants nearby. Compared with an existing rigid endoscope, both the experimental results and a user study show that the proposed flexible endoscope is safer and occupies less space without reducing comfort.
|
|
13:30-14:45, Paper TuBT1-11.5 | Add to My Program |
Using Augmentation to Improve the Robustness to Rotation of Deep Learning Segmentation in Robotic-Assisted Surgical Data |
Itzkovich, Danit | Ben-Gurion University of the Negev |
Sharon, Yarden | Ben-Gurion University of the Negev |
Jarc, Tony | Intuitive Surgical |
Refaely, Yael | Soroka Medical Center |
Nisky, Ilana | Ben Gurion University of the Negev |
Keywords: Surgical Robotics: Laparoscopy, Deep Learning in Robotics and Automation
Abstract: Robotic-Assisted Minimally Invasive Surgery allows for easy recording of kinematic data, and presents excellent opportunities for data-intensive approaches to assessment of surgical skill, system design, and automation of procedures. However, typical surgical cases result in long data streams, and therefore, automated segmentation into gestures is important. The public release of the JIGSAWS dataset allowed for developing and benchmarking data-intensive segmentation algorithms. However, this dataset is small and the gestures are similar in their structure and directions. This may limit the generalization of the algorithms to real surgical data that are characterized by movements in arbitrary directions. In this paper, we use a recurrent neural network to segment a suturing task, and demonstrate one such generalization problem - limited generalization to rotation. We propose a simple augmentation that can solve this problem without collecting new data, and demonstrate its benefit using: (1) the JIGSAWS dataset, and (2) a new dataset that we recorded with a da Vinci Research Kit. Our study highlights the prospect of using data augmentation in the analysis of kinematic data in surgical data science.
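A minimal version of the proposed augmentation: rotate recorded Cartesian kinematic trajectories by random angles so the segmentation network sees gestures in arbitrary directions. The axis choice, angle range, and number of copies are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

def rotate_about_z(traj, angle):
    """Rotate a (T, 3) position trajectory about the vertical axis."""
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return traj @ R.T

def augment(trajectories, copies=4):
    out = []
    for traj in trajectories:
        out.append(traj)                                   # keep the original
        for _ in range(copies):
            out.append(rotate_about_z(traj, rng.uniform(0, 2 * np.pi)))
    return out
```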
|
|
TuBT1-12 Interactive Session, 220 |
Add to My Program |
Rehabilitation Robotics II - 2.2.12 |
|
|
|
13:30-14:45, Paper TuBT1-12.1 | Add to My Program |
Deep Learning Based Motion Prediction for Exoskeleton Robot Control in Upper Limb Rehabilitation |
Ren, Jialiang | National Taiwan University |
Chien, Ya-Hui | National Taiwan University |
Chia, En-Yu | National Taiwan University |
Fu, Li-Chen | National Taiwan University |
Lai, Jin-Shin | National Taiwan University |
Keywords: Deep Learning in Robotics and Automation, Rehabilitation Robotics, Prosthetics and Exoskeletons
Abstract: Synchronizing the movement of an exoskeleton robot with the human arm is crucial for robot-assisted training (RAT) in upper-limb rehabilitation. In this paper, we propose a deep-learning-based motion prediction model applied to our recently developed 8-degrees-of-freedom (DoF) upper-limb rehabilitation exoskeleton, NTUH-II. Human arm dynamics and surface electromyography (sEMG) are first measured by two wireless sensors and used as input to the deep learning model to predict the user's motion. The prediction is then used as the desired motion trajectory of the exoskeleton, so that the robot arm can follow the user's arm movement in real time on either side. Various experiments have been conducted to verify the performance of the proposed motion prediction model, and the results show that it reduces both the mean absolute error and the average delay between human arm and robot arm movement.
|
|
13:30-14:45, Paper TuBT1-12.2 | Add to My Program |
Adaptive Gait Planning for Walking Assistance Lower Limb Exoskeletons in Slope Scenarios |
Zou, Chaobin | University of Electronic Science and Technology of China |
Huang, Rui | University of Electronic Science and Technology of China |
Cheng, Hong | University of Electronic Science and Technology |
Chen, Qiming | University of Electronic Science and Technology of China |
Qiu, Jing | University of Electronic Science and Technology of China |
Keywords: Rehabilitation Robotics, Robust/Adaptive Control of Robotic Systems, Wearable Robots
Abstract: Lower-limb exoskeletons have gained considerable interest for walking assistance of paraplegic patients. In such applications, the exoskeleton should be able to help patients walk over the different terrains of daily life, such as slopes. One critical issue is how to plan stepping locations on slopes with different gradients and generate stable, human-like gaits for patients. This paper proposes an adaptive gait planning approach that can generate gait trajectories adapted to slopes with different gradients for lower-limb walking-assistance exoskeletons. We model the human-exoskeleton system as a 2D Linear Inverted Pendulum Model (2D-LIPM) with an external force in the sagittal plane, and propose a Dynamic Gait Generator (DGG) based on an extension of conventional Capture Point (CP) theory and Dynamic Movement Primitives (DMPs). The proposed approach dynamically generates reference foot locations for each step on a slope, and human-like adaptive gait trajectories can be reproduced after learning from trajectories demonstrated during level-ground walking of healthy humans. We demonstrate the efficiency of the proposed approach on both the Gazebo simulation platform and an exoskeleton named AIDER. Experimental results indicate that the proposed approach enables exoskeletons to generate appropriate gaits adapted to slopes with different gradients.
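The core quantities behind capture-point-based foot placement on the linear inverted pendulum, sketched below; the slope-compensation bias is our simplified stand-in for the DGG, and all numbers are illustrative:

```python
import numpy as np

g, z_c = 9.81, 0.9                 # gravity, assumed CoM height [m]
omega = np.sqrt(g / z_c)           # LIPM natural frequency

def capture_point(x_com, xd_com):
    """Instantaneous capture point of the 1D LIPM (sagittal plane)."""
    return x_com + xd_com / omega

def foot_location_on_slope(x_com, xd_com, slope_rad, step_gain=1.0):
    """Shift the reference foothold to compensate the gravity component
    acting along an inclined surface (assumed compensation term)."""
    bias = z_c * np.tan(slope_rad)
    return step_gain * capture_point(x_com, xd_com) + bias

print(foot_location_on_slope(x_com=0.0, xd_com=0.4,
                             slope_rad=np.deg2rad(10)))
```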
|
|
13:30-14:45, Paper TuBT1-12.3 | Add to My Program |
A Data-Driven Predictive Model of Individual-Specific Effects of FES on Human Gait Dynamics |
Drnach, Luke | Georgia Institute of Technology |
Allen, Jessica | West Virginia University |
Essa, Irfan | Georgia Institute of Technology |
Ting, Lena | Emory University and Georgia Tech |
Keywords: Model Learning for Control, Human-Centered Robotics, Rehabilitation Robotics
Abstract: Modeling individual-specific gait dynamics based on kinematic data could aid development of gait rehabilitation robotics by enabling robots to predict the user's gait kinematics with and without external inputs, such as mechanical or electrical perturbations. Here we address a current limitation of data-driven gait models, which do not yet predict human responses to perturbations. We used Switched Linear Dynamical Systems (SLDS) to model joint angle kinematic data from healthy individuals walking on a treadmill during normal gait and during gait perturbed by functional electrical stimulation (FES) to the ankle muscles. Our SLDS models were able to predict the time-evolution of joint kinematics in each of four gait phases, as well as across an entire gait cycle. Because the SLDS dynamics matrices encoded significant coupling across joints, we compared the SLDS predictions to those of a kinematic model in which the joint angles were independent. Gait kinematics predicted by SLDS and kinematic models were similar over time horizons of a few milliseconds, but SLDS models provided better predictions over time horizons of up to a second. We also demonstrated that SLDS models can infer and predict individual-specific responses to FES during swing phase. As such, SLDS models may be a promising approach for online estimation and control of human gait dynamics, allowing robotic control strategies to be tailored to an individual's specific gait coordination patterns.
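A sketch of prediction with a switched linear dynamical system: one linear model per gait phase, with an additive input for the FES perturbation. The matrices below are random placeholders; in the paper they are fit to joint-angle data:

```python
import numpy as np

rng = np.random.default_rng(3)
n_phases, n_joints = 4, 6
A = [np.eye(n_joints) + 0.01 * rng.normal(size=(n_joints, n_joints))
     for _ in range(n_phases)]                    # per-phase dynamics
B = [0.01 * rng.normal(size=n_joints)
     for _ in range(n_phases)]                    # assumed FES effect

def predict(x0, phase_seq, fes_on):
    """Roll the SLDS forward over a known gait-phase sequence."""
    x, traj = x0.copy(), []
    for k, z in enumerate(phase_seq):
        x = A[z] @ x + (B[z] if fes_on[k] else 0.0)
        traj.append(x.copy())
    return np.array(traj)

x0 = rng.normal(size=n_joints)
phases = [0] * 25 + [1] * 25 + [2] * 25 + [3] * 25   # one gait cycle
traj = predict(x0, phases, fes_on=[z == 3 for z in phases])  # FES in swing
```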
|
|
13:30-14:45, Paper TuBT1-12.4 | Add to My Program |
The (Sensorized) Hand Is Quicker Than the Eye: Restoring Grasping Speed and Confidence for Amputees with Tactile Reflexes |
Fishel, Jeremy | SynTouch, LLC |
Matulevich, Blaine | California Institute of Technology |
Muller, Kelsey A. | SynTouch |
Berke, Gary M. | Berke Prosthetics |
Keywords: Force and Tactile Sensing, Prosthetics and Exoskeletons, Perception for Grasping and Manipulation
Abstract: Myoelectric prosthetic hand users have difficulty with, and frequently avoid, grasping fragile objects with their prosthesis. While the sense of touch is known to be critical for human hand dexterity, it has been virtually absent in prosthetic hands. In this study, a standard myoelectric prosthetic hand was modified with tactile sensors and a simple tactile reflex to inhibit excessive forces on contact. The tactile sensors were made from an open-cell self-skinning polyurethane foam that produced a detectable increase in air pressure inside the foam when contacted. This contact signal was then used by an inhibitory reflex controller which served to reduce the gain of weaker closing signals after contact but allow stronger closing signals to pass through. Four unilateral myoelectric prosthesis users completed five trials of three different timed grasping tasks with fragile and rigid items. Subjects performed each task in three different scenarios: with their sound-side limb, their current myoelectric hand, and the modified prosthesis with tactile reflex. Findings demonstrated that grasping performance with fragile objects was significantly enhanced using the modified prosthesis, even approaching the performance of the subjects' sound-side limbs. Results suggest that this approach can substantially improve the speed and success of grasping fragile items, leading to improved use patterns, decreased cognitive effort, and improved user confidence.
|
|
13:30-14:45, Paper TuBT1-12.5 | Add to My Program |
Development of a Soft Power Suit for Lower Back Assistance |
Yao, Zhejun | Helmut Schmidt University |
Linnenberg, Christine | University of Innsbruck |
Weidner, Robert | Helmut Schmidt University |
Wulfsberg, Jens | Helmut Schmidt University |
Keywords: Wearable Robots, Biomimetics
Abstract: Mechanical stresses on the spine are a significant risk factor for low back pain, a highly prevalent health problem around the world. Certain occupational activities, such as repetitive heavy lifting and static bending postures, lead to high loads on the lower back. To address this problem, we are developing a soft power suit capable of reducing physical load on the lower back during dynamic lifting and static forward bending. The power suit is designed to mimic force transmission in the body and duplicate the forces generated by muscles and tendons. Two twisted string actuators (TSAs) attached to a back brace generate tensile forces that assist the underlying muscles in controlling trunk flexion. The fabric construction and TSAs enable a lightweight design: without the battery, the entire system weighs only 2.4 kg. Here we present the design and implementation of the prototype system along with a preliminary biomechanical study that evaluates the effect of the system on the body. The results show that the power suit does not change the wearer's bending kinematics and helps the wearer maintain a static bending posture. Moreover, using the power suit significantly reduced the muscle activation required for both static bending and dynamic lifting (50.2–54.0% and 21.4–25.2% reduction, respectively).
|
|
13:30-14:45, Paper TuBT1-12.6 | Add to My Program |
Toward Controllable Hydraulic Coupling of Joints in a Wearable Robot (I) |
Treadway, Emma | University of Michigan |
Gan, Zhenyu | University of Michigan |
Remy, C. David | University of Michigan |
Gillespie, Brent | University of Michigan |
Keywords: Rehabilitation Robotics, Haptics and Haptic Interfaces, Physical Human-Robot Interaction
Abstract: In this paper, we develop theoretical foundations for a new class of rehabilitation robot: body-powered devices that route power between a user’s joints. By harvesting power from a healthy joint to assist an impaired joint, novel bimanual and self-assist therapies are enabled. This approach complements existing robotic therapies aimed at promoting recovery of motor function after neurological injury. We employ hydraulic transmissions for routing power, or equivalently for coupling the motions of a user’s joints. Fluid power routed through flexible tubing imposes constraints within a limb or between homologous joints across the body. Variable transmissions allow constraints to be steered on the fly, and simple valve switching realizes free space and locked motion. We examine two methods for realizing variable hydraulic transmissions: using valves to switch among redundant cylinders (digital hydraulics) or using an intervening electromechanical link. For both methods, we present a rigorous mathematical framework for describing and controlling the resulting constraints. Theoretical developments are supported by experiments using a prototype fluid-power exoskeleton.
|
|
TuBT1-13 Interactive Session, 220 |
Add to My Program |
Soft Robots III - 2.2.13 |
|
|
|
13:30-14:45, Paper TuBT1-13.1 | Add to My Program |
A New Soft Fingertip Based on Electroactive Hydrogels |
López-Díaz, Antonio | Universidad De Castilla-La Mancha |
Martin Pacheco, Ana | University of Castilla La Mancha (IRICA) |
Fernandez, Raul | Universidad De Castilla La Mancha |
Rodríguez, Antonio M. | UCLM |
Herrero, María Antonia | Universidad De Castilla-La Mancha |
Vázquez, Ester | Universidad De Castilla La Mancha |
Vazquez, Andres S. | Universidad De Castilla La Mancha |
Keywords: Soft Material Robotics, Grasping
Abstract: In this work we present the design and application of an active soft fingertip for robotic hands. The fingertip is based on a new type of hydrogel designed to overcome some of the major drawbacks of previous hydrogels, such as their dependence on aqueous solutions. Fingertip applications benefit from the changes in stiffness and volume that take place in our hydrogel when electric fields are applied. Theoretical modeling and experimental verification of the fingertip properties are presented, showing its potential usability in grasping and manipulation tasks.
|
|
13:30-14:45, Paper TuBT1-13.2 | Add to My Program |
Open Loop Position Control of Soft Continuum Arm Using Deep Reinforcement Learning |
Satheeshbabu, Sreeshankar | University of Illinois Urbana Champaign |
Uppalapati, Naveen Kumar | University of Illinois at Urbana-Champaign |
Chowdhary, Girish | University of Illinois at Urbana Champaign |
Krishnan, Girish | University of Illinois Urbana Champaign |
Keywords: Soft Material Robotics, Deep Learning in Robotics and Automation, Motion Control of Manipulators
Abstract: Soft robots undergo large nonlinear spatial deformations due to both inherent actuation and external loading. The physics underlying these deformations is complex, and often requires intricate analytical and numerical models. The complexity of these models may render traditional model-based control difficult and unsuitable. Model-free methods offer an alternative for analyzing the behavior of such complex systems without the need for elaborate modeling techniques. In this paper, we present a model-free approach for open loop position control of a soft spatial continuum arm, based on deep reinforcement learning. The continuum arm is pneumatically actuated and attains a spatial workspace by a combination of unidirectional bending and bidirectional torsional deformation. We use Deep-Q Learning with experience replay to train the system in simulation. The efficacy and robustness of the control policy obtained from the system is validated both in simulation and on the continuum arm prototype for varying external loading conditions.
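A minimal sketch of the experience-replay machinery that Deep-Q Learning relies on (the Q-network itself is omitted); the buffer size, batch size, and the q_update hook are assumptions for illustration:

```python
import random
from collections import deque

replay = deque(maxlen=50_000)     # experience replay buffer
BATCH = 64

def store(state, action, reward, next_state, done):
    replay.append((state, action, reward, next_state, done))

def sample_batch():
    """Uniform sampling breaks the temporal correlation of transitions."""
    batch = random.sample(replay, BATCH)
    return [list(col) for col in zip(*batch)]  # states, actions, ...

# inside the training loop (q_update is the assumed network update step):
#   store(s, a, r, s2, done)
#   if len(replay) >= BATCH:
#       q_update(*sample_batch())
```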
|
|
13:30-14:45, Paper TuBT1-13.3 | Add to My Program |
Motion Planning for High-DOF Manipulation Using Hierarchical System Identification |
Pan, Zherong | The University of North Carolina at Chapel Hill |
Jia, Biao | University of Maryland at College Park |
Manocha, Dinesh | University of North Carolina at Chapel Hill |
Keywords: Soft Material Robotics, Learning and Adaptive Systems, Model Learning for Control
Abstract: We present an efficient algorithm for motion planning and controlling a robot system with a high number of degrees-of-freedom (DOF). These systems include high-DOF soft robots or articulated robots interacting with a deformable environment. We present a novel technique to accelerate the evaluations of the forward dynamics function by storing the results of costly computations in a hierarchical adaptive grid. Furthermore, we exploit the underactuated properties of the robot systems and build the grid in a low-dimensional space. Our approach approximates the forward dynamics function with guaranteed error bounds and can be used in optimization-based motion planning and reinforcement-learning-based feedback control. We highlight the performance on two high-DOF robot systems: a line-actuated elastic robot arm and an underwater swimming robot in water. Compared to prior techniques based on exact dynamics evaluation, we observe one to two orders of magnitude improvement in the performance.
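A toy version of caching forward-dynamics evaluations on a grid in a reduced state space: quantize the low-dimensional state and memoize the expensive call. The dynamics function, cell size, and dimensions are placeholders; the paper's grid is additionally hierarchical and adaptive:

```python
import numpy as np

cache = {}

def expensive_forward_dynamics(x):
    return np.sin(x) + 0.1 * x        # stand-in for a costly simulation

def cached_dynamics(x, cell=0.05):
    """Nearest-cell lookup with spacing `cell` per dimension."""
    key = tuple(np.round(np.asarray(x) / cell).astype(int))
    if key not in cache:
        cache[key] = expensive_forward_dynamics(np.asarray(key) * cell)
    return cache[key]

x = np.array([0.31, -0.12])
print(cached_dynamics(x), len(cache))   # an identical query is now free
```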
|
|
13:30-14:45, Paper TuBT1-13.4 | Add to My Program |
Resilient Task Planning and Execution for Reactive Soft Robots |
Hamill, Scott | Cornell University |
Whitehead, John | Cornell University |
Ferenz, Peter | Cornell University |
Shepherd, Robert | Cornell University |
Kress-Gazit, Hadas | Cornell University |
Keywords: Soft Material Robotics, Formal Methods in Robotics and Automation, Task Planning
Abstract: Soft robots utilize compliant materials to perform motions and behaviors not typically achievable by rigid bodied systems. These materials and soft actuator fabrication methods have been leveraged to create multigait walking soft robots. However, soft materials are prone to failure, restricting the ability of soft robots to accomplish tasks. In this work we address the problem of generating reactive controllers for multigait walking soft robots that are resilient to actuator failure by applying methods of formal synthesis. We present a sensing-based abstraction for actuator performance, provide a framework for encoding multigait behavior and actuator failure in Linear Temporal Logic (LTL), and demonstrate synthesized controllers on a physical soft robot.
|
|
13:30-14:45, Paper TuBT1-13.5 | Add to My Program |
Dynamic Morphological Computation through Damping Design of Soft Material Robots: Application to Under-Actuated Grippers |
Di Lallo, Antonio | Università Di Pisa |
Catalano, Manuel Giuseppe | Istituto Italiano Di Tecnologia |
Garabini, Manolo | Università Di Pisa |
Grioli, Giorgio | Istituto Italiano Di Tecnologia |
Gabiccini, Marco | University of Pisa |
Bicchi, Antonio | Università Di Pisa |
Keywords: Soft Material Robotics, Underactuated Robots, Grippers and Other End-Effectors
Abstract: This article presents the design of soft material robots with tunable damping properties. The study stems from the investigation of an under-actuated dynamic approach involving multi-chamber pneumatic systems. Co-designing the mechanical parameters of the system (stiffness and damping) along with the time profile of the input makes it possible to obtain different behaviors using a reduced number of feeding lines. In this work we analyze, via simulations and experiments, several approaches to tuning the damping of soft robots. The most effective solution employs a layer of granular material immersed in viscous oil within the chamber wall. This method has been employed to realize bending actuators with a continuous deformation pattern. Finally, we show an application involving a two-fingered gripper fed by a single pneumatic line, which is able to perform both pinch and power grasps.
|
|
13:30-14:45, Paper TuBT1-13.6 | Add to My Program |
Model-Based Reinforcement Learning for Closed-Loop Dynamic Control of Soft Robotic Manipulators (I) |
George Thuruthel, Thomas | The BioRobotics Institute - Scuola Superiore Sant'Anna |
Falotico, Egidio | Scuola Superiore Sant'Anna |
Renda, Federico | Khalifa University of Science and Technology |
Laschi, Cecilia | Scuola Superiore Sant'Anna |
Keywords: Soft Material Robotics, Model Learning for Control, Dynamics
Abstract: Dynamic control of soft robotic manipulators is an open problem yet to be well explored and analyzed. Most of the current applications of soft robotic manipulators utilize static or quasi-dynamic controllers based on kinematic models or linearity in the joint space. However, such approaches are not truly exploiting the rich dynamics of a soft-bodied system. In this paper, we present a model-based policy learning algorithm for closed-loop predictive control of a soft robotic manipulator. The forward dynamic model is represented using a recurrent neural network. The closed-loop policy is derived using trajectory optimization and supervised learning. The approach is verified first on a simulated piecewise constant strain model of a cable driven under-actuated soft manipulator. Furthermore, we experimentally demonstrate on a soft pneumatically actuated manipulator how closed-loop control policies can be derived that can accommodate variable frequency control and unmodeled external loads.
|
|
TuBT1-14 Interactive Session, 220 |
Add to My Program |
Haptics & Interfaces II - 2.2.14 |
|
|
|
13:30-14:45, Paper TuBT1-14.1 | Add to My Program |
Augmented Reality Assisted Instrument Insertion and Tool Manipulation for the First Assistant in Robotic Surgery |
Qian, Long | Johns Hopkins University |
Deguet, Anton | Johns Hopkins University |
Wang, Zerui | The Chinese University of Hong Kong |
Liu, Yunhui | Chinese University of Hong Kong |
Kazanzides, Peter | Johns Hopkins University |
Keywords: Virtual Reality and Interfaces, Medical Robots and Systems, Human Factors and Human-in-the-Loop
Abstract: In robotic-assisted laparoscopic surgery, the first assistant (FA) stands at the bedside assisting the intervention, while the surgeon sits at the console teleoperating the robot. Tasks for the FA include navigating new instruments into the surgeon's field-of-view and passing in or retracting materials from the body using hand-held tools. We previously developed ARssist, an augmented reality application based on an optical see-through head-mounted display, to aid the FA. In this paper, we refine the system and first perform a pilot study with three experienced surgeons for two specific tasks: instrument insertion and tool manipulation. The results suggest that ARssist would be especially useful for less experienced assistants and for difficult hand-eye configurations. We then perform a multi-user study with inexperienced subjects. The results show that ARssist can reduce navigation time by 34.57%, enhance insertion path consistency by 41.74%, reduce root-mean-square path deviation by 40.04%, and reduce tool manipulation time by 72.25%. Thus, ARssist has the potential to improve efficiency, safety and hand-eye coordination, especially for novice assistants.
|
|
13:30-14:45, Paper TuBT1-14.2 | Add to My Program |
High-Fidelity Grasping in Virtual Reality Using Glove-Based System |
Liu, Hangxin | University of California, Los Angeles |
Zhang, Zhenliang | Beijing Institute of Technology |
Xie, Xu | UCLA |
Zhu, Yixin | University of California, Los Angeles |
Zhu, Song-Chun | UCLA |
Keywords: Sensor Networks, Virtual Reality and Interfaces
Abstract: This paper presents a design that jointly provides hand pose sensing, hand localization, and haptic feedback to facilitate real-time stable grasps in Virtual Reality (VR). The design is based on an easy-to-replicate glove-based system that reliably performs (i) high-fidelity hand pose sensing in real time through a network of 15 IMUs, and (ii) hand localization using a Vive Tracker. The physics-based simulation in VR detects collisions and contact points for virtual object manipulation; collision events trigger the vibration motors on the glove to signal the user, providing better realism inside virtual environments. A caging-based approach using collision geometry is integrated to determine whether a grasp is stable. In the experiments, we showcase successful grasps of virtual objects with large geometric variations. Compared to the popular LeapMotion sensor, the proposed glove-based design yields a higher success rate in various VR tasks. We hope such a glove-based system can simplify the collection of human manipulation data in VR.
|
|
13:30-14:45, Paper TuBT1-14.3 | Add to My Program |
On the Role of Wearable Haptics for Force Feedback in Teleimpedance Control for Dual-Arm Robotic Teleoperation |
Clark, Janelle | Rice University |
Lentini, Gianluca | University of Pisa |
Barontini, Federica | Italian Institute of Technology |
Catalano, Manuel Giuseppe | Istituto Italiano Di Tecnologia |
Bianchi, Matteo | University of Pisa |
O'Malley, Marcia | Rice University |
Keywords: Haptics and Haptic Interfaces, Telerobotics and Teleoperation, Human Factors and Human-in-the-Loop
Abstract: Robotic teleoperation enables humans to safely complete exploratory procedures in remote locations for applications such as deep sea exploration or building assessments following natural disasters. Successful task completion requires meaningful dual-arm robotic coordination and proper understanding of the environment. While these capabilities are inherent to humans via impedance regulation and haptic interactions, they can be challenging to achieve in telerobotic systems. Teleimpedance control has allowed impedance regulation in such applications, and bilateral teleoperation systems aim to restore haptic sensation to the operator, though often at the expense of stability or workspace size. Wearable haptic devices have the potential to apprise the operator of key forces during task completion while maintaining stability and transparency. In this paper, we evaluate the impact of wearable haptics for force feedback in teleimpedance control for dual-arm robotic teleoperation. Participants completed a peg-in-hole box-placement task, aiming to seat as many boxes as possible within the trial period. Experiments were conducted with both transparent and opaque boxes. With the opaque box, participants achieved a higher number of successful placements with haptic feedback, and we observed higher mean interaction forces. Results suggest that the provision of wearable haptic feedback may increase confidence when visual cues are obscured.
|
|
13:30-14:45, Paper TuBT1-14.4 | Add to My Program |
Application of a Redundant Haptic Interface in Enhancing Soft-Tissue Stiffness Discrimination |
Torabi, Ali | University of Alberta |
Khadem, Mohsen | University of Edinburgh |
Zareinia, Kourosh | Ryerson University |
Sutherland, Garnette | University of Calgary |
Tavakoli, Mahdi | University of Alberta |
Keywords: Haptics and Haptic Interfaces, Telerobotics and Teleoperation, Medical Robots and Systems
Abstract: Haptic-enabled teleoperated surgical systems have the potential to enhance the accuracy and performance of surgical interventions. The user interface of such a system can provide haptic feedback to the surgeon to more intuitively perform surgical tasks. In this paper, we study the added benefits of redundant manipulators as haptic interfaces for teleoperated surgical systems. First, we introduce the intrinsic benefits of employing a redundant haptic interface, namely, reduced apparent inertia and increased manipulability (one result of which is reduced friction forces). Next, we demonstrate that the haptic interface redundancy can further reduce its apparent inertia and friction via appropriately manipulating the extra degrees of freedom of the interface. This will consequently enhance the haptic feedback resolution (sensitivity) for the user. Finally, a psychophysical experiment is performed to validate the improved force perception for the user in a virtual soft-tissue palpation task. We conduct a set of perceptual experiments to evaluate how a redundant and non-redundant user interface affects the perception of the virtual stiffness. Experimental results demonstrate that the redundancy in the haptic user interface helps to enhance tissue stiffness discrimination ability of the user by reducing the distortions caused by the kinematics and dynamics of the user interface.
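The two quantities the paper leverages, computed for an illustrative redundant-arm Jacobian: Yoshikawa's manipulability measure and the Cartesian (apparent) inertia at the end effector. J and M below are random placeholders for a real kinematic/dynamic model:

```python
import numpy as np

rng = np.random.default_rng(7)
J = rng.normal(size=(3, 7))             # task Jacobian of a redundant arm
M = np.diag(rng.uniform(0.5, 2.0, 7))   # joint-space inertia (assumed)

# Yoshikawa manipulability: larger means farther from singularities
manipulability = np.sqrt(np.linalg.det(J @ J.T))

# apparent (Cartesian) inertia at the end effector: Lambda = (J M^-1 J^T)^-1
Lambda = np.linalg.inv(J @ np.linalg.inv(M) @ J.T)
print(manipulability, np.linalg.eigvalsh(Lambda))
```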
|
|
13:30-14:45, Paper TuBT1-14.5 | Add to My Program |
Towards Robotic Feeding: Role of Haptics in Fork-Based Food Manipulation |
Bhattacharjee, Tapomayukh | University of Washington |
Lee, Gilwoo | University of Washington |
Song, Hanjun | University of Washington |
Srinivasa, Siddhartha | University of Washington |
Keywords: Haptics and Haptic Interfaces, Perception for Grasping and Manipulation, Force and Tactile Sensing
Abstract: Autonomous feeding is challenging because it requires manipulation of food items with various compliance, sizes, and shapes. To understand how humans manipulate food items during feeding and to explore ways to adapt their strategies to robots, we collected a rich dataset of human trajectories by asking them to pick up food and feed it to a mannequin. From the analysis of the collected haptic and motion signals, we demonstrate that humans adapt their control policies to accommodate to the compliance and shape of the food item being acquired. We propose a taxonomy of manipulation strategies for feeding to highlight such policies. As a first step to generate compliance-dependent policies, we propose a set of classifiers for compliance-based food categorization from haptic and motion signals. We compare these human manipulation strategies with fixed position-control policies via a robot. Our analysis of success and failure cases of human and robot policies further highlights the importance of adapting the policy to the compliance of a food item.
|
|
13:30-14:45, Paper TuBT1-14.6 | Add to My Program |
Data-Driven Haptic Modeling of Normal Interactions on Viscoelastic Deformable Objects Using a Random Forest |
Bhardwaj, Amit | Pohang University of Science and Technology (POSTECH) |
Cha, Hojun | Computer Science and Technology |
Choi, Seungmoon | POSTECH |
Keywords: Haptics and Haptic Interfaces, Virtual Reality and Interfaces, Contact Modeling
Abstract: In this paper, we propose a new data-driven approach for haptic modeling of normal interactions on homogeneous viscoelastic deformable objects. The approach is based on a well-known machine learning technique: random forest. Here we employ a random forest for regression. We acquire discrete-time interaction data for many automated cyclic compressions of a deformable object. A random forest is trained to estimate a nonparametric relationship between the position and response forces. We train the forest on very simple normal interactions. Our results show that a model trained with just 10% of the training data is capable of modeling other unseen complex normal homogeneous interactions with good accuracy. Thus, it can handle large and complex datasets. In addition, our approach requires five times less training data than the standard approach in the literature to provide similar accuracy.
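A direct sketch of the pipeline: learn a nonparametric map from interaction state to response force for a viscoelastic object with a random forest. The synthetic "compression data" and the use of velocity as a second feature are our assumptions, standing in for the recorded interaction data:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(5)
# toy cyclic compressions: position, velocity -> force (Kelvin-Voigt-like)
pos = 0.01 * (1 - np.cos(np.linspace(0, 20 * np.pi, 4000)))
vel = np.gradient(pos)
force = 800 * pos + 3000 * vel + rng.normal(0, 0.02, pos.size)

X = np.column_stack([pos, vel])
n_train = int(0.1 * len(X))       # the paper reports ~10% of data suffices
idx = rng.permutation(len(X))[:n_train]

rf = RandomForestRegressor(n_estimators=100, random_state=0)
rf.fit(X[idx], force[idx])
print("R^2 on all data:", rf.score(X, force))
```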
|
|
TuBT1-15 Interactive Session, 220 |
Add to My Program |
SLAM - Session V - 2.2.15 |
|
|
|
13:30-14:45, Paper TuBT1-15.1 | Add to My Program |
CNN-SVO: Improving the Mapping in Semi-Direct Visual Odometry Using Single-Image Depth Prediction |
Loo, Shing Yan | University of Alberta, Universiti Putra Malaysia |
Jahani Amiri, Ali | University of Alberta |
Mashohor, Syamsiah | Universiti Putra Malaysia |
Tang, Sai Hong | University Putra Malaysia |
Zhang, Hong | University of Alberta |
Keywords: SLAM, Localization, Visual Learning
Abstract: Reliable feature correspondence between frames is a critical step in visual odometry (VO) and visual simultaneous localization and mapping (V-SLAM) algorithms. In comparison with existing VO and V-SLAM algorithms, semi-direct visual odometry (SVO) has two main advantages that lead to state-of-the-art frame rate camera motion estimation: direct pixel correspondence and efficient implementation of probabilistic mapping method. This paper improves the SVO mapping by initializing the mean and the variance of the depth at a feature location according to the depth prediction from a single-image depth prediction network. By significantly reducing the depth uncertainty of the initialized map point (i.e., small variance centred about the depth prediction), the benefits are twofold: reliable feature correspondence between views and fast convergence to the true depth in order to create new map points. We evaluate our method with two outdoor datasets: KITTI dataset and Oxford Robotcar dataset. The experimental results indicate that improved SVO mapping results in increased robustness and camera tracking accuracy. The implementation of this work is available at https://github.com/yan99033/CNN-SVO
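The core of the mapping change can be sketched as follows, assuming a Gaussian inverse-depth filter (SVO's actual filter also models an inlier ratio); the relative uncertainty and the numbers are illustrative:

```python
import numpy as np

def init_depth_filter(cnn_depth, rel_sigma=0.1):
    """(mean, variance) of the inverse-depth estimate at a new feature,
    centred tightly on the CNN prediction instead of the scene average."""
    mu = 1.0 / cnn_depth
    sigma = rel_sigma * mu
    return mu, sigma ** 2

def fuse(mu, var, mu_obs, var_obs):
    """Standard Gaussian fusion as new triangulated observations arrive."""
    k = var / (var + var_obs)
    return mu + k * (mu_obs - mu), (1 - k) * var

mu, var = init_depth_filter(cnn_depth=8.0)      # metres, from the network
mu, var = fuse(mu, var, mu_obs=1 / 7.5, var_obs=1e-4)
```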
|
|
13:30-14:45, Paper TuBT1-15.2 | Add to My Program |
A Unified Framework for Mutual Improvement of SLAM and Semantic Segmentation |
Wang, Kai | CloudMinds Technologies |
Lin, Yimin | CloudMinds Technologies Inc |
Wang, Luowei | CloudMinds Technologies Inc |
Han, Liming | CloudMinds Technologies Inc |
Hua, Minjie | CloudMinds Technologies Inc |
Wang, Xiang | CloudMinds Technologies Inc |
Lian, Shiguo | CloudMinds Technologies Inc |
Huang, Bill | CloudMinds Technologies Inc |
Keywords: SLAM, Object Detection, Segmentation and Categorization, RGB-D Perception
Abstract: This paper presents a novel framework for simultaneously implementing localization and segmentation, two of the most important vision-based tasks in robotics. While these two tasks have previously been addressed with different goals and techniques, we show that by making use of the intermediate results of the two modules, their performance can be enhanced at the same time. Our framework handles both the instantaneous motion and long-term changes of instances in localization with the help of the segmentation result, which in turn benefits from the refined 3D pose information. We conduct experiments on various datasets, and prove that our framework works effectively on improving the precision and robustness of the two tasks and outperforms existing localization and segmentation algorithms.
|
|
13:30-14:45, Paper TuBT1-15.3 | Add to My Program |
MID-Fusion: Octree-Based Object-Level Multi-Instance Dynamic SLAM |
Xu, Binbin | Imperial College London |
Li, Wenbin | Imperial College London |
Tzoumanikas, Dimos | Imperial College London |
Bloesch, Michael | Imperial College |
Davison, Andrew J | Imperial College London |
Leutenegger, Stefan | Imperial College London |
Keywords: SLAM, Mapping, RGB-D Perception
Abstract: We propose a new multi-instance dynamic RGB-D SLAM system using an object-level octree-based volumetric representation. It can provide robust camera tracking in dynamic environments and at the same time, continuously estimate geometric, semantic, and motion properties for arbitrary objects in the scene. For each incoming frame, we perform instance segmentation to detect objects and refine mask boundaries using geometric and motion information. Meanwhile, we estimate the pose of each existing moving object using an object-oriented tracking method and robustly track the camera pose against the static scene. Based on the estimated camera pose and object poses, we associate segmented masks with existing models and incrementally fuse corresponding colour, depth, semantic, and foreground object probabilities into each object model. In contrast to existing approaches, our system is the first system to generate an object-level dynamic volumetric map from a single RGB-D camera, which can be used directly for robotic tasks. Our method can run at 2-3 Hz on a CPU, excluding the instance segmentation part. We demonstrate its effectiveness by quantitatively and qualitatively testing it on both synthetic and real-world sequences.
|
|
13:30-14:45, Paper TuBT1-15.4 | Add to My Program |
Surfel-Based Dense RGB-D Reconstruction with Global and Local Consistency |
Yang, Yi | Carnegie Mellon University |
Dong, Wei | Carnegie Mellon University |
Kaess, Michael | Carnegie Mellon University |
Keywords: SLAM, Localization, Mapping
Abstract: Achieving high surface reconstruction accuracy in dense mapping has been a desirable target for both robotics and vision communities. In the robotics literature, simultaneous localization and mapping (SLAM) systems use RGB-D cameras to reconstruct a dense map of the environment. They leverage the depth input to provide accurate local pose estimation and a locally consistent model. However, drift in the pose tracking over time leads to misalignments and artifacts. On the other hand, offline computer vision methods, such as the pipeline that combines structure-from-motion (SfM) and multi-view stereo (MVS), estimate the camera poses by performing batch optimization. These methods achieve global consistency, but suffer from heavy computation loads. We propose a novel approach that integrates both methods to achieve locally and globally consistent reconstruction. First, we estimate poses of keyframes in the offline SfM pipeline to provide strong global constraints at relatively low cost. Afterwards, we compute odometry between frames driven by off-the-shelf SLAM systems with high local accuracy. We fuse the two pose estimations using factor graph optimization to generate accurate camera poses for dense reconstruction. Experiments on real-world and synthetic datasets demonstrate that our approach produces more accurate models comparing to existing dense SLAM systems, while achieving significant speedup with respect to state-of-the-art SfM-MVS pipelines.
|
|
13:30-14:45, Paper TuBT1-15.5 | Add to My Program |
A-SLAM: Human-In-The-Loop Augmented SLAM |
Sidaoui, Abbas | American University of Beirut |
Kassem Zein, Mohammad | American University of Beirut (AUB) |
Elhajj, Imad | American University of Beirut |
Asmar, Daniel | American University of Beirut |
Keywords: SLAM, Virtual Reality and Interfaces, Wheeled Robots
Abstract: In this work, we propose an intuitive Augmented SLAM method (A-SLAM) that allows the user to interact, in real time, with a robot running SLAM to correct pose and map errors. We built an AR application that runs on HoloLens and allows the operator to view the robot's map superposed on the physical environment and edit it. Through map editing, the operator can correct errors in the representation of the real environment, for example by adding navigation-forbidden areas to the map, and can also correct localization errors. The proposed system allows the operator to edit the robot's pose (on request from SLAM) and can be extended to sending navigation goals to the robot, viewing the planned path to evaluate it before execution, and tele-operating the robot. The proposed solution can be applied to any 2D SLAM algorithm and easily extended to 3D SLAM techniques. We validated our system through experiments on pose correction and map editing. Experiments demonstrated that with A-SLAM, SLAM run time is cut in half, post-processing of maps is eliminated entirely, and high-quality occupancy grid maps can be achieved with minimal added computational and hardware costs.
|
|
13:30-14:45, Paper TuBT1-15.6 | Add to My Program |
Iteratively Reweighted Midpoint Method for Fast Multiple View Triangulation |
Yang, Kui | Beihang University |
Fang, Wei | Beijing University of Posts and Telecommunications |
Zhao, Yan | Beihang University |
Deng, Nianmao | Beihang University |
Keywords: SLAM, Mapping
Abstract: The classic midpoint method for triangulation is extremely fast, but usually labelled as inaccurate. We investigate the cost function that the midpoint method minimizes, and the result shows that the midpoint method tends to underestimate the accuracy of measurements acquired relatively far from the 3D point. Accordingly, the cost function used in this work is enhanced by assigning each measurement a weight that is inversely proportional to the distance between the 3D point and the corresponding camera center. After analyzing the gradient of the modified cost function, we propose to minimize it by applying fixed-point iterations to find the roots of the gradient; the proposed method is therefore called the iteratively reweighted midpoint method. In addition, a theoretical study reveals that the proposed method approximates Newton's method near the optimal point, and hence inherits its quadratic convergence rate. Finally, comparisons of experimental results on both synthetic and real datasets demonstrate that the proposed method is more efficient than the state of the art while achieving the same level of accuracy.
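The algorithm fits in a few lines; the sketch below implements the weighted midpoint solve with fixed-point reweighting by inverse distance, as the abstract describes (the iteration count and toy geometry are ours):

```python
import numpy as np

def triangulate(centers, dirs, n_iter=5):
    """centers: (N, 3) camera centers; dirs: (N, 3) unit bearing vectors."""
    w = np.ones(len(centers))
    x = np.zeros(3)
    for _ in range(n_iter):
        A = np.zeros((3, 3))
        b = np.zeros(3)
        for wi, c, u in zip(w, centers, dirs):
            P = np.eye(3) - np.outer(u, u)   # projector orthogonal to ray
            A += wi * P
            b += wi * P @ c
        x = np.linalg.solve(A, b)            # weighted midpoint solve
        w = 1.0 / np.linalg.norm(x - centers, axis=1)  # reweight by distance
    return x

centers = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0]], float)
target = np.array([0.3, 0.4, 2.0])
dirs = target - centers
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
print(triangulate(centers, dirs))            # ~ [0.3, 0.4, 2.0]
```

With noise-free rays the fixed point is exact; the reweighting only matters once the bearings are perturbed.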
|
|
TuBT1-16 Interactive Session, 220 |
Add to My Program |
Humanoid Robots V - 2.2.16 |
|
|
|
13:30-14:45, Paper TuBT1-16.1 | Add to My Program |
Balance Map Analysis As a Measure of Walking Balance Based on Pendulum-Like Leg Movements |
Kagawa, Takahiro | Aichi Institute of Technology |
Keywords: Humanoid and Bipedal Locomotion, Passive Walking, Legged Robots
Abstract: This paper proposes an analysis of walking balance in terms of the movements of the stance and swing legs, modeled as an inverted pendulum and a simple pendulum, respectively. Linearization, decoupling, and non-dimensionalization of a compass gait model make it possible to characterize the relationship between the stance- and swing-leg trajectories with only two parameters: the energy ratio and the phase difference. The energy ratio is defined as the ratio of orbital energy between the two pendulums; the phase difference represents the position of the stance leg relative to the swing leg. This study considers orbital energy conservation across a step transition and analyzes the reachability of a desirable touchdown condition. If the time evolution from a current state cannot reach the desired touchdown region, the state is labeled as balance loss. By analyzing the reachability limits of the energy ratio and phase difference, we illustrate the balance-loss and safe regions on the phase portrait of the inverted pendulum, which we term the balance map. We examined the effects of the simplification and linearization of the compass gait model through computer simulation. Through simulations of walking with perturbations, we confirmed that the balance map analysis can predict a future fall at an early phase, even for trajectories derived from the nonlinear model.
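A hedged sketch of the two balance-map coordinates for the linearized compass gait, written under unit non-dimensionalization (natural frequency of 1); the paper's exact normalization and phase definition may differ:

```python
import numpy as np

def orbital_energy_stance(x, xd):
    """Inverted pendulum (stance leg): E = xd^2/2 - x^2/2."""
    return 0.5 * xd ** 2 - 0.5 * x ** 2

def orbital_energy_swing(th, thd):
    """Simple pendulum (swing leg): E = thd^2/2 + th^2/2."""
    return 0.5 * thd ** 2 + 0.5 * th ** 2

def balance_coordinates(x, xd, th, thd):
    ratio = orbital_energy_swing(th, thd) / orbital_energy_stance(x, xd)
    phase = np.arctan2(xd, x) - np.arctan2(thd, th)  # assumed phase measure
    return ratio, phase

print(balance_coordinates(x=0.05, xd=0.4, th=-0.2, thd=0.5))
```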
|
|
13:30-14:45, Paper TuBT1-16.2 | Add to My Program |
Non-Parametric Imitation Learning of Robot Motor Skills |
Huang, Yanlong | Istituto Italiano Di Tecnologia |
Rozo, Leonel | Bosch Center for Artificial Intelligence |
Silvério, João | Istituto Italiano Di Tecnologia |
Caldwell, Darwin G. | Istituto Italiano Di Tecnologia |
Keywords: Learning from Demonstration, Humanoid Robots
Abstract: Unstructured environments impose several challenges when robots are required to perform different tasks and adapt to unseen situations. In this context, a relevant problem arises: how can robots learn to perform various tasks and adapt to different conditions? A potential solution is to endow robots with learning capabilities. In this line, imitation learning emerges as an intuitive way to teach robots different motor skills. This learning approach typically mimics human demonstrations by extracting invariant motion patterns and subsequently applies these patterns to new situations. In this paper, we propose a novel kernel treatment of imitation learning, which endows the robot with imitative and adaptive capabilities. In particular, due to the kernel treatment, the proposed approach is capable of learning human skills associated with high-dimensional inputs. Furthermore, we study a new concept of correlation-adaptive imitation learning, which allows for the adaptation of correlations exhibited in high-dimensional demonstrated skills. Several toy examples and a collaborative task with a real robot are provided to verify the effectiveness of our approach.
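A generic stand-in for the kernel treatment, not the authors' exact formulation: kernel ridge regression from an input (here simply time) to a demonstrated degree of freedom, which extends directly to the high-dimensional inputs the paper emphasizes:

```python
import numpy as np

def rbf(a, b, ell=0.1):
    """RBF kernel matrix between 1D input arrays a and b."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

t_demo = np.linspace(0, 1, 50)            # demonstration timestamps
y_demo = np.sin(2 * np.pi * t_demo)       # demonstrated DoF trajectory

lam = 1e-4                                # ridge regularizer
alpha = np.linalg.solve(rbf(t_demo, t_demo) + lam * np.eye(50), y_demo)

t_query = np.linspace(0, 1, 200)
y_repro = rbf(t_query, t_demo) @ alpha    # reproduced skill
```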
|
|
13:30-14:45, Paper TuBT1-16.3 | Add to My Program |
Dynamic Stepping on Unknown Obstacles with Upper-Body Compliance and Angular Momentum Damping from the Reaction Null-Space |
Hidaka, Yuki | Tokyo City University |
Nishizawa, Kajun | Tokyo City University |
Nenchev, Dragomir | Tokyo City University |
Keywords: Humanoid Robots, Humanoid and Bipedal Locomotion, Dynamics
Abstract: Contact destabilization after a high-speed impact, e.g. when a robot steps on an obstacle of unknown height, can be tackled by injecting angular momentum damping for a short time interval immediately after the impact. This is done by making use of motion from within the reaction null-space (RNS). The angular momentum damping results in an appropriate arm motion that stabilizes the contacts. A high-speed impact occurs when the stepping time is very short; in this case, conventional controllers cannot handle the reaction stemming from the swing-leg dynamics. A general whole-body controller is designed that uses the relative angular acceleration control component to inject the angular momentum damping. The proposed control method is robust: it can deal with obstacles of various heights and inclinations without altering the feedback gains. The controller is also fast, since iterative optimization is avoided. The performance is examined via simulated dynamic stepping.
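A generic sketch of selecting joint accelerations from the reaction null-space: motions satisfying H_c qdd = 0 produce no coupling wrench on the supporting body, so they can be superposed to inject angular momentum damping. H_c below is a random placeholder for the coupling inertia matrix of a real model:

```python
import numpy as np

rng = np.random.default_rng(11)
H_c = rng.normal(size=(6, 15))     # coupling inertia matrix (assumed)

# projector onto the reaction null-space
N = np.eye(15) - np.linalg.pinv(H_c) @ H_c

qdd_damp = rng.normal(size=15)     # desired damping motion
qdd = N @ qdd_damp                 # reaction-free component of that motion
print(np.linalg.norm(H_c @ qdd))   # ~ 0: no reaction on the base
```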
|
|
13:30-14:45, Paper TuBT1-16.4 | Add to My Program |
Efficient Humanoid Contact Planning Using Learned Centroidal Dynamics Prediction |
Lin, Yu-Chi | University of Michigan |
Ponton, Brahayam | Max Planck Institute for Intelligent Systems |
Righetti, Ludovic | New York University |
Berenson, Dmitry | University of Michigan |
Keywords: Motion and Path Planning, Humanoid and Bipedal Locomotion
Abstract: Humanoid robots dynamically navigate an environment by interacting with it via contact wrenches exerted at intermittent contact poses. Therefore, it is important to consider dynamics when planning a contact sequence. Traditional contact planning approaches assume a quasi-static balance criterion to reduce the computational challenges of selecting a contact sequence over rough terrain. This, however, limits the applicability of the approach when dynamic motions are required, such as when walking down a steep slope or crossing a wide gap. Recent methods overcome this limitation with the help of efficient mixed-integer convex programming solvers capable of synthesizing dynamic contact sequences. Nevertheless, the exponential-time complexity of such solvers limits their applicability to short-horizon contact sequences within small environments. In this paper, we go beyond current approaches by learning a prediction of the dynamic evolution of the robot centroidal momenta, which can then be used for quickly generating dynamically robust contact sequences for robots with arms and legs using a search-based contact planner. We demonstrate the efficiency and quality of the results of the proposed approach in a set of dynamically challenging scenarios.
|
|
13:30-14:45, Paper TuBT1-16.5 | Add to My Program |
Sparse Optimization of Contact Forces for Balancing Control of Multi-Legged Humanoids |
Parigi Polverini, Matteo | Istituto Italiano Di Tecnologia |
Mingo Hoffman, Enrico | Fondazione Istituto Italiano Di Tecnologia |
Laurenzi, Arturo | Istituto Italiano Di Tecnologia |
Tsagarakis, Nikos | Istituto Italiano Di Tecnologia |
Keywords: Optimization and Optimal Control, Humanoid Robots, Force Control
Abstract: Multi-legged humanoid platforms present an inherent redundancy in the number of end-effectors required to perform interaction tasks, such as balancing and manipulation. The most relevant possibility opened up by end-effector redundancy consists in using a subset of the available end-effectors to perform a primary task, while employing the remaining end-effectors to perform secondary tasks. Exploiting this redundancy, however, requires a methodology to automatically find the smallest set of end-effectors needed to perform the primary task. For the balancing control of a torque-controlled humanoid, this is equivalent to finding a sparse solution of a contact force distribution problem. To this end, two different sparse optimization approaches are presented and extensively discussed in this work. The effectiveness of the proposed approaches has been validated on a simulated model of the CENTAURO robot developed at the Istituto Italiano di Tecnologia.
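Because contact normal force magnitudes are nonnegative, one common sparse formulation (an l1-norm objective; a hedged illustration of the general idea, not the paper's two specific approaches) reduces to a linear program:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical setup: 8 candidate end-effector contacts, each contributing a
# 6-D unit wrench (columns of G); the robot must balance the wrench w.
rng = np.random.default_rng(0)
G = rng.normal(size=(6, 8))                       # illustrative wrench matrix
w = G @ np.array([5.0, 0, 0, 3.0, 0, 0, 0, 0])    # balanceable by two contacts

# Sparse force distribution: for nonnegative force magnitudes the l1 norm is
# just the sum, so minimizing it subject to balance and limits is an LP.
res = linprog(c=np.ones(8), A_eq=G, b_eq=w, bounds=[(0.0, 10.0)] * 8)
print(res.x)   # vertex solutions activate few contacts; l1 promotes sparsity
```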
|
|
13:30-14:45, Paper TuBT1-16.6 | Add to My Program |
Scalable Closed-Form Trajectories for Periodic and Non-Periodic Human-Like Walking |
Faraji, Salman | EPFL |
Ijspeert, Auke | EPFL |
Keywords: Humanoid and Bipedal Locomotion, Natural Machine Motion, Simulation and Animation
Abstract: We present a new framework to generate human-like lower-limb trajectories in periodic and non-periodic walking. In our method, walking dynamics is encoded in 3LP, a linear simplified model composed of three pendulums to simulate falling, swing, and torso balancing dynamics. To stabilize the motion, we use an optimal time-projecting controller which suggests new footstep locations. On top of gait generation and stabilization in the simplified space, we introduce a kinematic conversion that synthesizes more human-like trajectories by combining geometric variables of the 3LP model adaptively. Without any tuning, numerical optimization, or off-line data, our walking gaits are scalable with respect to body properties and gait parameters. We can change body mass and height, walking direction, speed, frequency, double support time, torso style, ground clearance, and terrain inclinations. We can also simulate constant external dragging forces or momentary perturbations. The proposed framework offers closed-form solutions with simulation speeds orders of magnitude faster than real time. This can be used for video games and animations on portable electronic devices with limited power. It also gives insights into the generation of more human-like walking gaits on humanoid robots.
|
|
TuBT1-17 Interactive Session, 220 |
Add to My Program |
Aerial Systems: Mechanisms II - 2.2.17 |
|
|
|
13:30-14:45, Paper TuBT1-17.1 | Add to My Program |
Flying STAR, a Hybrid Crawling and Flying Sprawl Tuned Robot |
Meiri, Nir | Ben Gurion University of the Negev |
Zarrouk, David | Ben Gurion University |
Keywords: Aerial Systems: Mechanics and Control, Mechanism Design, Search and Rescue Robots
Abstract: This paper presents Flying STAR (FSTAR), a reconfigurable hybrid flying and crawling quadcopter robot. FSTAR is the latest in the family of STAR robots, fitted with a sprawling mechanism and propellers that allow it to both run and fly using the same motors. The combined capabilities of running and flying allow FSTAR to fly over obstacles or run underneath them and move inside pipes. The robot can reduce its width to crawl in confined spaces or underneath obstacles while touching the ground. We first describe the design of the robot and the configuration of the wheels and propellers in the flying and running modes. Then we present the 3D-printed prototype of the FSTAR robot, which we used for our experiments. We evaluate the energy requirements of the robot and the forces it can generate. The experimental robot can fly like an ordinary quadcopter but can also run on the ground at speeds of up to 2.6 m/s to save energy (see video).
|
|
13:30-14:45, Paper TuBT1-17.2 | Add to My Program |
Autonomous Cooperative Flight of Rigidly Attached Quadcopters |
González Morín, Diego | Ericsson Research |
Araujo, Jose | Ericsson |
Tayamon, Soma | Ericsson |
Andersson, Lars A. A. | Ericsson Research |
Keywords: Aerial Systems: Mechanics and Control, Model Learning for Control
Abstract: In this paper, a method for online parameter estimation and automatic control of a system of rigidly attached quadcopters is introduced. First, the method estimates the physical structure attaching the quadcopters by relying solely on information from the quadcopters' Inertial Measurement Units (IMUs). This information is obtained via simple and short online experiments, allowing plug-and-play assembly without any human intervention. Then, given the estimated physical attachment parameters, stable operation of the quadcopters is achieved via an adaptive controller architecture, where the controller parameters are obtained using Reinforcement Learning. Finally, experimental results validate the proposed method, showing that a correct estimation of the physical structure is obtained, allowing the autonomous flight of a pair of attached quadcopters.
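The online estimation step can be illustrated with generic recursive least squares for a linear-in-parameters model; the regressor construction from IMU data and the actual parameterization are the paper's and are not reproduced in this sketch:

```python
import numpy as np

class RecursiveLeastSquares:
    """Online estimator for y = phi^T theta. Here phi would collect IMU-derived
    regressors and theta the attachment parameters; both are generic stand-ins
    rather than the paper's exact formulation."""
    def __init__(self, n_params, forgetting=1.0):
        self.theta = np.zeros(n_params)
        self.P = 1e3 * np.eye(n_params)      # large initial covariance
        self.lam = forgetting

    def update(self, phi, y):
        Pphi = self.P @ phi
        gain = Pphi / (self.lam + phi @ Pphi)
        self.theta += gain * (y - phi @ self.theta)
        self.P = (self.P - np.outer(gain, Pphi)) / self.lam
        return self.theta

rng = np.random.default_rng(1)
theta_true = np.array([0.4, -1.2, 2.0])      # unknown attachment parameters
rls = RecursiveLeastSquares(3)
for _ in range(200):                          # short online experiment
    phi = rng.normal(size=3)
    rls.update(phi, phi @ theta_true + 0.01 * rng.normal())
print(rls.theta)                              # converges close to theta_true
```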
|
|
13:30-14:45, Paper TuBT1-17.3 | Add to My Program |
Energy Optimal Control Allocation in a Redundantly Actuated Omnidirectional UAV |
Dyer, Eric | McMaster University |
Sirouspour, Shahin | McMaster University |
Jafarinasab, Mohammad | McMaster University |
Keywords: Aerial Systems: Mechanics and Control, Redundant Robots, Optimization and Optimal Control
Abstract: This paper presents a novel actuation model and control allocation strategy for a redundantly actuated multi-rotor unmanned aerial vehicle (UAV), referred to as the omnicopter. With an unconventional configuration, the omnicopter's eight propellers are able to produce all six components of the net force/torque, with two degrees of actuation redundancy. This enables the vehicle to execute motion trajectories unattainable with conventional underactuated multi-rotors. A new inverse actuator model is proposed that accounts for significant propeller airflow interactions in relating output thrust forces to input motor commands. Actuation redundancy is resolved by solving a convex constrained optimization problem. Its solution yields the most power-efficient set of propeller thrusts that produces a required net force/torque while respecting the propeller thrust limits. When the required force/torque is infeasible due to the thrust limits, the solution instead minimizes the norm of the error between the desired and actual net force/torque vectors. Experimental results demonstrate the effectiveness of the proposed model and control allocation strategy.
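A hedged sketch of the allocation step: minimize a power surrogate subject to producing the commanded wrench within thrust limits. The allocation matrix, limits, and objective below are illustrative placeholders, not the omnicopter's:

```python
import numpy as np
from scipy.optimize import minimize

# B maps 8 propeller thrusts to a 6-D net force/torque; entries and limits
# are fabricated for illustration, not the omnicopter's geometry.
rng = np.random.default_rng(2)
B = rng.normal(size=(6, 8))
w_cmd = np.array([0.0, 0.0, 9.81, 0.1, 0.0, 0.0])    # commanded wrench

res = minimize(
    fun=lambda t: t @ t,                             # power-proxy objective
    x0=np.zeros(8),
    bounds=[(-10.0, 10.0)] * 8,                      # assumed thrust limits
    constraints={"type": "eq", "fun": lambda t: B @ t - w_cmd},
    method="SLSQP",
)
# If the wrench is infeasible under the limits, the paper's allocator falls
# back to minimizing ||B t - w_cmd|| instead; here we just report failure.
print(res.x if res.success else "infeasible")
```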
|
|
13:30-14:45, Paper TuBT1-17.4 | Add to My Program |
Development of SAM: Cable-Suspended Aerial Manipulator |
Sarkisov, Yuri | Skolkovo Institute of Science and Technology |
Kim, Min Jun | DLR |
Bicego, Davide | LAAS-CNRS |
Tsetserukou, Dzmitry | Skolkovo Institute of Science and Technology |
Ott, Christian | German Aerospace Center (DLR) |
Franchi, Antonio | LAAS-CNRS |
Kondak, Konstantin | German Aerospace Center |
Keywords: Aerial Systems: Mechanics and Control, Mobile Manipulation
Abstract: The high risk of collision between rotor blades and obstacles in complex environments imposes restrictions on aerial manipulators. To address this issue, a novel cable-Suspended Aerial Manipulator (SAM) is presented in this paper. Instead of attaching a robotic manipulator directly to an aerial carrier, it is mounted on an active platform that is suspended from the carrier by a cable. As a result, higher safety can be achieved because the aerial carrier can keep its distance from obstacles. For self-stabilization, the SAM is equipped with two actuation systems: winches and propulsion units. This paper presents an overview of the SAM, including the underlying concept, hardware realization, control strategy, and first experimental results.
|
|
13:30-14:45, Paper TuBT1-17.5 | Add to My Program |
The Phoenix Drone: An Open-Source Dual-Rotor Tail-Sitter Platform for Research and Education |
Wu, Yilun | University of Toronto |
Du, Xintong | University of Toronto |
Duivenvoorden, Rikky Ricardo Petrus Rufino | University of Toronto |
Kelly, Jonathan | University of Toronto |
Keywords: Aerial Systems: Mechanics and Control, Motion Control, Field Robots
Abstract: In this paper, we introduce the Phoenix drone: the first completely open-source tail-sitter micro aerial vehicle (MAV) platform. The vehicle has a highly versatile, dual-rotor design and is engineered to be low-cost and easily extensible/modifiable. Our open-source release includes all of the design documents, software resources, and simulation tools needed to build and fly a high-performance tail-sitter for research and educational purposes. The drone has been developed for precision flight with a high degree of control authority. Our design methodology included extensive testing and characterization of the aerodynamic properties of the vehicle. The platform incorporates many off-the-shelf components and 3D-printed parts, in order to keep the cost down. Nonetheless, the paper includes results from flight trials which demonstrate that the vehicle is capable of very stable hovering and accurate trajectory tracking. Our hope is that the open-source Phoenix reference design will be useful to both researchers and educators. In particular, the details in this paper and the available open-source materials should enable learners to gain an understanding of aerodynamics, flight control, state estimation, software design, and simulation, while experimenting with a unique aerial robot.
|
|
13:30-14:45, Paper TuBT1-17.6 | Add to My Program |
Fast and Efficient Aerial Climbing of Vertical Surfaces Using Fixed-Wing UAVs |
Mehanovic, Dino | Université De Sherbrooke |
Rancourt, David | Université De Sherbrooke |
Lussier Desbiens, Alexis | Université De Sherbrooke |
Keywords: Aerial Systems: Mechanics and Control, Climbing Robots, Sensor-based Control
Abstract: We present improvements to Sherbrooke’s multimodal autonomous drone (S-MAD), a microspine-based perching fixed-wing UAV that enables thrust-assisted climbing along vertical surfaces. Aircraft models are used to predict the performance of various aerial climb regimes and to design a controller for wall distance tracking. It is found that fast, long, and vertical climbs are favorable. Both short and long vertical autonomous climb maneuvers are demonstrated on rough surfaces (e.g., brick, roofing shingles). Results show that the S-MAD compares favorably with existing climbers, reaching a specific resistance of 19 with a much faster vertical speed (i.e., 2 m/s). A reduction in S-MAD’s aerodynamic drag and an improved motor efficiency could bring its specific resistance down to 7, at a vertical speed of 5 m/s.
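For reference, specific resistance (cost of transport) is epsilon = P / (m g v); the numbers below are assumed for illustration only, not S-MAD's published specifications:

```python
# Specific resistance (cost of transport): epsilon = P / (m * g * v).
# Mass and speed below are assumed for illustration only.
m, g, v = 0.6, 9.81, 2.0          # kg, m/s^2, m/s
P = 19 * m * g * v                # power implied by the reported epsilon = 19
print(P)                          # ~224 W, showing why cutting drag and motor
                                  # losses would lower epsilon toward 7
```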
|
|
TuBT1-19 Interactive Session, 220 |
Add to My Program |
Flexible Robots - 2.2.19 |
|
|
|
13:30-14:45, Paper TuBT1-19.1 | Add to My Program |
1-Actuator 3-DoF Manipulation Using an Underactuated Mechanism with Multiple Nonparallel and Viscoelastic Passive Joints |
Kurita, Taisuke | Osaka University |
Higashimori, Mitsuru | Osaka University |
Keywords: Underactuated Robots, Compliant Joint/Mechanism, Dexterous Manipulation
Abstract: This paper presents a nonprehensile manipulation method based on the vibration of a plate, in which three degrees of freedom (DoF) of a planar part are controlled using only one actuator. First, the model of a manipulator with a flat plate end effector is proposed. The manipulator employs an underactuated mechanism including an active joint and multiple passive viscoelastic joints, in which the joint axes are arranged nonparallel to each other. Based on the model, the orbit of the plate for a sinusoidal displacement input to the active joint is theoretically derived. It is revealed that not only the orbital shape but also the orbital direction can be varied according to the input frequency. Based on the switching frequency of the orbital direction, a design index for the mechanical parameters is shown. Subsequently, the contribution of the switching of the orbital direction to the three-DoF manipulation of a part is explored via simulation. Eight primitives utilizing the plate orbital motions in both counter-clockwise and clockwise directions are provided. Finally, the proposed method is demonstrated by experiments.
|
|
13:30-14:45, Paper TuBT1-19.2 | Add to My Program |
Spline Based Curve Path Following of Underactuated Snake Robots |
Yang, Weixin | University of Nevada, Reno |
Wang, Gang | University of Nevada |
Shao, Haiyan | University of Jinan |
Shen, Yantao | University of Nevada, Reno |
Keywords: Underactuated Robots, Motion and Path Planning, Biologically-Inspired Robots
Abstract: This paper investigates the curve path following problem for a class of planar underactuated bio-inspired snake robots. The time-varying line-of-sight (LOS) guidance law and the cubic spline interpolation (CSI) path-planning method are employed. Existing studies focus on straight-line path following, which only provides a solution for snake robot motion control in relatively simple environments. Considering the snake robot's many degrees of freedom and excellent mobility over various terrains, we propose a more broadly applicable solution for curve path following by snake robots on the ground. The improved LOS helps the snake robot to steer aggressively at a sharp turning point. Furthermore, to avoid sideslip of the snake robot caused by changes in ground friction, an integral controller is introduced in the design of the heading reference. Simulations and experiments on an 8-link custom-built snake robot are conducted, and the results demonstrate and validate the effectiveness of the proposed curve path following algorithm.
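A generic integral line-of-sight guidance step of the kind described above (gains, lookahead distance, and the integral form are illustrative; the paper's time-varying law may differ):

```python
import numpy as np

def integral_los_heading(e, e_int, path_angle, lookahead=0.5, k_i=0.1, dt=0.01):
    """Integral line-of-sight guidance: steer toward a point a lookahead
    distance down the path; the integral of the cross-track error e counters
    sideslip from ground friction. Gains and lookahead are illustrative."""
    e_int = e_int + e * dt
    psi_d = path_angle - np.arctan2(e + k_i * e_int, lookahead)
    return psi_d, e_int

# One step: 0.2 m left of a path segment whose tangent heads 30 degrees.
psi_d, e_int = integral_los_heading(0.2, 0.0, np.deg2rad(30.0))
print(np.rad2deg(psi_d))   # heading reference steers back toward the path
```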
|
|
13:30-14:45, Paper TuBT1-19.3 | Add to My Program |
High-Bandwidth Control of Twisted String Actuators |
Nedelchev, Simeon | Korea University of Technology and Education |
Gaponov, Igor | Korea University of Technology and Education |
Ryu, Jee-Hwan | Korea Univ. of Tech. and Education |
Keywords: Tendon/Wire Mechanism, Motion Control, Learning and Adaptive Systems
Abstract: Twisted string actuators (TSAs) are an emerging type of transmission system that may benefit various applications in robotics and mechatronics. However, control of TSAs in applications that require high bandwidth has attracted comparatively little interest from the research community, mainly due to the complexity of twisted string behavior. This paper proposes a new adaptive control methodology that allows a substantial increase in the bandwidth of TSA-based systems. We reformulate the mathematical model of the TSA into a form suitable for online parameter estimation, outline adaptive estimation methods, and propose a method to design a variable controller gain that rectifies nonlinearities in the system. We present an experimental comparison of the proposed adaptive control strategies with two conventional TSA control techniques. Experimental results demonstrate that the proposed adaptive control architecture with a feedforward speed term was nearly insensitive to increases in input signal frequency while reducing position tracking error by 80%. The proposed algorithm can be applied in any TSA control system that has input and output signal measurements.
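The nonlinearity the variable gain has to rectify comes from the standard twisted-string kinematics, sketched below with illustrative string length and radius:

```python
import numpy as np

L, r = 0.3, 0.0005   # string length [m] and radius [m]; illustrative values

def tsa_contraction(theta):
    """Standard twisted-string kinematics: motor twist angle theta [rad]
    contracts the string by x = L - sqrt(L^2 - (theta r)^2)."""
    return L - np.sqrt(L**2 - (theta * r) ** 2)

def tsa_gain(theta):
    """dx/dtheta, the configuration-dependent transmission gain that a
    variable controller gain must compensate."""
    return theta * r**2 / np.sqrt(L**2 - (theta * r) ** 2)

theta = np.linspace(0.0, 400.0, 5)   # motor twist [rad]
print(tsa_contraction(theta))        # contraction grows nonlinearly with twist
print(tsa_gain(theta))               # gain varies strongly along the stroke
```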
|
|
13:30-14:45, Paper TuBT1-19.4 | Add to My Program |
TREE: A Variable Topology, Branching Continuum Robot |
Lastinger, Michael | Clemson University |
Verma, Siddharth | Clemson University |
Kapadia, Apoorva | Clemson University |
Walker, Ian | Clemson University |
Keywords: Tendon/Wire Mechanism, Flexible Robots, Biologically-Inspired Robots
Abstract: We describe the design and physical realization of a novel branching continuum robot, aimed at inspection and cleaning operations in hard-to-reach environments at depths greater than human arm lengths. The design, based on a hybrid concentric-tube/tendon actuated continuum trunk core, features two pairs of fully retractable continuum branches. The retractable nature of the branches allows the robot to actively change its topology, allowing it to penetrate narrow openings and expand to adaptively engage complex environmental geometries. We detail and discuss the realization of a physical prototype of the design, and its testing in a simulated glove box environment.
|
|
13:30-14:45, Paper TuBT1-19.5 | Add to My Program |
Learning a State Transition Model of an Underactuated Adaptive Hand |
Sintov, Avishai | Rutgers University |
Morgan, Andrew | Yale University |
Kimmel, Andrew | Rutgers University |
Dollar, Aaron | Yale University |
Bekris, Kostas E. | Rutgers, the State University of New Jersey |
Boularias, Abdeslam | Carnegie Mellon University |
Keywords: Tendon/Wire Mechanism, Underactuated Robots, Dexterous Manipulation
Abstract: Fully-actuated, multi-fingered robotic hands are often expensive and fragile. Low-cost, under-actuated hands are appealing but present challenges due to the lack of analytical models. This paper aims to learn a stochastic version of such models automatically from data with minimum user effort. The focus is on identifying the dominant, sensible features required to express hand state transitions given quasi-static motions, thereby enabling the learning of a probabilistic transition model from recorded trajectories. Experiments both with Gaussian Processes (GP) and Neural Network models are included for analysis and evaluation. The metric for local GP regression is obtained with a manifold learning approach, known as Diffusion Maps, to uncover the lower-dimensional subspace in which the data lies and provide a geodesic metric. Results show that using Diffusion Maps with a feature space composed of the object position, actuator angles, and actuator loads, sufficiently expresses the hand-object system configuration and can provide accurate enough predictions for a relatively long horizon. To the best of the authors' knowledge, this is the first learned transition model for such underactuated hands that achieves this level of predictability. Notably, the same feature space implicitly embeds the size of the manipulated object and can generalize to new objects of varying sizes. Furthermore, the learned model can identify states that are on the verge of failure and should be avoided.
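A minimal stand-in for the GP variant of the learned transition model, using a plain RBF kernel on synthetic data (the paper instead derives a geodesic metric via Diffusion Maps and trains on recorded trajectories):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Synthetic stand-in: learn s_{t+1} = f(s_t, a_t). The feature space mirrors
# the paper's (object position, actuator angles, actuator loads), but the data
# is fabricated and a plain RBF metric replaces the Diffusion Maps geodesic.
rng = np.random.default_rng(3)
S = rng.normal(size=(300, 6))                # [obj_xy, angles(2), loads(2)]
A = rng.normal(size=(300, 2))                # quasi-static actuator commands
S_next = S + 0.1 * np.tanh(A).repeat(3, axis=1) + 0.01 * rng.normal(size=S.shape)

gp = GaussianProcessRegressor(RBF(1.0) + WhiteKernel(1e-3))
gp.fit(np.hstack([S, A]), S_next)

mu, std = gp.predict(np.hstack([S[:1], A[:1]]), return_std=True)
print(mu, std)   # a large predictive std can flag states near model failure
```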
|
|
13:30-14:45, Paper TuBT1-19.6 | Add to My Program |
Continuum Robot Stiffness under External Loads and Prescribed Tendon Displacements (I) |
Oliver-Butler, Kaitlin | University of Tennessee |
Till, John | University of Tennessee, Knoxville |
Rucker, Caleb | University of Tennessee |
Keywords: Tendon/Wire Mechanism, Flexible Robots, Soft Material Robotics
Abstract: Soft and continuum robots driven by tendons or cables have wide-ranging applications, and many mechanics-based models for their behavior have been proposed. In this paper, we address the unsolved problem of predicting robot deflection and stiffness with respect to environmental loads where the axial displacements of the tendon ends are held constant. We first solve this problem analytically for a tendon-embedded Euler–Bernoulli beam. Nondimensionalized equations and plots describe how tendon stretch and routing path affect the robot’s output stiffness at any point. These analytical results enable stiffness analysis of candidate robot designs without extensive computational simulations. Insights gained through this analysis include the ability to increase robot stiffness by using converging tendon paths. Generalizing to large deflections in three dimensions (3-D), we extend a previous nonlinear Cosserat-rod-based model for tendon-driven robots to handle prescribed tendon displacements, tendon stretch, pretension, and slack. We then provide additional dimensionless plots in the actuated case for loads in 3-D. The analytical formulas and numerically computed model are experimentally validated on a prototype robot with good agreement.
|
|
TuBT1-20 Interactive Session, 220 |
Add to My Program |
Force and Tactile Sensing II - 2.2.20 |
|
|
|
13:30-14:45, Paper TuBT1-20.1 | Add to My Program |
Model Based in Situ Calibration with Temperature Compensation of 6 Axis Force Torque Sensors |
Andrade Chavez, Francisco Javier | Instituto Italiano Di Tecnologia |
Nava, Gabriele | Istituto Italiano Di Tecnologia |
Traversaro, Silvio | Istituto Italiano Di Tecnologia |
Nori, Francesco | DeepMind |
Pucci, Daniele | Italian Institute of Technology |
Keywords: Force and Tactile Sensing, Calibration and Identification, Humanoid Robots
Abstract: It is well known that sensors using strain gauges have a potential dependency on temperature. This creates temperature drift in the measurements of six-axis force/torque (F/T) sensors. The temperature drift can be considerable if an experiment is long or the environmental conditions differ from those present when the sensor was calibrated. Other in situ methods disregard the effect of temperature on the sensor measurements. Experiments performed using the humanoid robot platform iCub show that the effect of temperature is relevant. The model-based in situ calibration method for six-axis force/torque sensors is extended to perform temperature compensation.
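The essence of temperature compensation can be sketched as adding a temperature regressor to a linear calibration model; the data below is synthetic, and the paper's method additionally exploits the robot model to obtain in situ ground truth:

```python
import numpy as np

# Fit wrench = C @ gauges + k * T + bias by least squares on synthetic data.
# C_true, k_true, and the noise level are fabricated for illustration.
rng = np.random.default_rng(4)
n = 500
gauges = rng.normal(size=(n, 6))              # raw strain-gauge readings
T = 25.0 + 10.0 * rng.random(n)               # sensor temperature [deg C]
C_true, k_true = rng.normal(size=(6, 6)), rng.normal(size=6)
wrench = gauges @ C_true.T + np.outer(T, k_true) + 0.01 * rng.normal(size=(n, 6))

X = np.hstack([gauges, T[:, None], np.ones((n, 1))])   # [s, T, 1] regressors
theta, *_ = np.linalg.lstsq(X, wrench, rcond=None)     # stacks C^T, k, bias
print(np.abs(X @ theta - wrench).max())                # small residual: drift
                                                       # absorbed by the T term
```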
|
|
13:30-14:45, Paper TuBT1-20.2 | Add to My Program |
Whole-Body Active Compliance Control for Humanoid Robots with Robot Skin |
Dean-Leon, Emmanuel | Technischen Universitaet Muenchen |
Guadarrama-Olvera, Julio Rogelio | Technical University of Munich |
Bergner, Florian | Technical University of Munich |
Cheng, Gordon | Technical University of Munich |
Keywords: Force and Tactile Sensing, Compliance and Impedance Control, Physical Human-Robot Interaction
Abstract: Humanoid robots are expected to interact in human environments, where physical interactions are unavoidable. Therefore, whole-body control methods that include multi-contact interactions are required. The new emerging technologies in touch sensing are fundamental to acquire online and rich information about these physical interactions with the environment. These technologies lead to the design of novel control systems that can profit from the tactile sensor information in an efficient form, thus producing reactive and compliant robots capable of interacting with their environment. In this paper, we present a novel control framework to integrate the multi-modal tactile information of a robot skin with different control strategies, producing dynamic behaviors suitable for Human-Robot Interactions (HRI). The control framework was experimentally evaluated on a full-size humanoid robot covered with more than 1260 skin cells distributed in the whole robot body. The results show that multi-modal tactile information can be fused hierarchically with multiple control strategies, producing active compliance in a position-controlled stiff humanoid robot.
|
|
13:30-14:45, Paper TuBT1-20.3 | Add to My Program |
Internal Array Electrodes Improve the Spatial Resolution of Soft Tactile Sensors Based on Electrical Resistance Tomography |
Lee, Hyosang | Max Planck Institute for Intelligent Systems |
Park, Kyungseo | KAIST |
Kim, Jung | KAIST |
Kuchenbecker, Katherine J. | Max Planck Institute for Intelligent Systems |
Keywords: Force and Tactile Sensing, Soft Material Robotics, Physical Human-Robot Interaction
Abstract: Robots operating in unstructured environments would benefit from soft whole-body tactile sensors, but implementing such systems typically requires complex electrical wiring to a large number of sensing elements. The reconstruction method called electrical resistance tomography (ERT) has shown promising results (good coverage, manufacturability, and robustness) using electrodes located only along the boundary of the sensing region. However, relatively poor spatial resolution in the sensor’s central region is a major drawback of the ERT approach. This paper introduces a new scheme of internal array electrodes to improve spatial resolution. We also systematically derive the optimal pairwise current injection patterns from a mathematical formulation of the ERT system. By highlighting the importance of each electrode pair, this approach enabled us to reduce the number of current injection patterns. Simulation of the standard and proposed sensor designs revealed that the internal array electrodes greatly improve distinguishability in the central region. For validation, a fabric-based soft tactile sensor made of multiple conductive fabrics was developed, including electronics that enable sampling at 200 Hz. During a 225-point localization test conducted without sensor-specific calibration, the constructed sensor showed average localization errors of 2.85 cm ± 1.02 cm. This result is notable because only 16 point electrodes were used to achieve this performance.
|
|
13:30-14:45, Paper TuBT1-20.4 | Add to My Program |
Dense Tactile Force Estimation Using GelSlim and Inverse FEM |
Ma, Daolin | Massachusetts Institute of Technology |
Donlon, Elliott | MIT |
Dong, Siyuan | MIT |
Rodriguez, Alberto | Massachusetts Institute of Technology |
Keywords: Force and Tactile Sensing, Mechanism Design, Contact Modeling
Abstract: In this paper, we present GelSlim 2.0, a new version of the GelSlim tactile sensor with the capability to estimate the contact force distribution in real time. The sensor is vision-based and uses an array of markers to track deformations of a gel pad due to contact. A new hardware design makes the sensor more rugged and parametrically adjustable, and improves illumination. Leveraging the sensor's increased functionality, we propose to use the inverse Finite Element Method (iFEM), a numerical method to reconstruct the contact force distribution based on marker displacements. The sensor is able to provide the contact force distribution with high spatial density. Experiments and comparison with ground truth show that the reconstructed force distribution is physically reasonable and has good accuracy.
|
|
13:30-14:45, Paper TuBT1-20.5 | Add to My Program |
Sensing the Frictional State of a Robotic Skin Via Subtractive Color Mixing |
Lin, Xi | ISM, CNRS, Aix-Marseille Université |
Wiertlewski, Michael | CNRS, Aix Marseille University |
Keywords: Force and Tactile Sensing, Soft Material Robotics
Abstract: The perception of surface properties such as shape and adherence is crucial to ensure that a hand-held object is stable. Without touch, precise manipulation becomes difficult. Some robotic tactile sensors use cameras that observe the elastic deformation of a membrane to detect edges or slippage of the contact. Information about the contact state drives innovative control strategies. However, most previous methods along these lines do not include quantitative means of measuring the 3-dimensional deformation of the skin or suffer from a lack of spatial resolution. Here we present a tactile sensor based on a subtractive color mixing process designed to track the 3-dimensional displacement of an array of markers, using the information delivered by the color channels of off-the-shelf cameras. The distributed shear and normal deformation can be assessed from the spectrum of the light reflected and refracted by an array of diffusive and transmissive markers placed on two superimposed layers. The markers show various blends of colors, depending on the displacement at the surface. The color pattern of each marker can be tracked with little computation and remains robust to external lighting. The ability to sense the 3-dimensional deformation field can improve robotic perception of frictional properties, which has applications in the fields of robotic control and human-robot interaction.
|
|
13:30-14:45, Paper TuBT1-20.6 | Add to My Program |
A Sense of Touch for the Shadow Modular Grasper |
Pestell, Nicholas | University of Bristol |
Cramphorn, Luke | Bristol University |
Papadopoulos, Fotios | Plymouth University |
Lepora, Nathan | University of Bristol |
Keywords: Force and Tactile Sensing, Perception for Grasping and Manipulation, Grippers and Other End-Effectors
Abstract: In this study, we have designed and built a set of tactile fingertips for integration with a commercial, three-fingered robot hand, the Shadow Modular Grasper. The fingertips are an evolution of an established optical, biomimetic tactile sensor, the TacTip. In developing the tactile fingertips, we have progressed the technology in areas such as miniaturization, development of custom-shaped finger-pads and integration of multiple sensors. From these fingertips, we extract a set of high-level features with intuitive relationships to tactile quantities such as contact location and pressure. We present a simple linear-regression method for predicting roll and pitch of the finger-pad relative to a surface normal and show that the method generalises to unknown depths and shapes. Finally, we apply this prediction to a grasp-control method with the Modular Grasper and show that it can adjust the grasp on three real-world objects from the YCB object set in order to attain a greater area of contact at each fingertip.
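The roll/pitch prediction step is plain linear regression from high-level tactile features; the sketch below uses synthetic features and targets, not the TacTip pipeline:

```python
import numpy as np

# Synthetic stand-in: high-level tactile features (e.g., contact location,
# pressure) -> finger-pad (roll, pitch) by ordinary least squares with bias.
rng = np.random.default_rng(5)
features = rng.normal(size=(400, 5))
W_true = rng.normal(size=(5, 2))
angles = features @ W_true + 0.05 * rng.normal(size=(400, 2))  # [roll, pitch]

X = np.hstack([features, np.ones((400, 1))])    # append bias column
W, *_ = np.linalg.lstsq(X, angles, rcond=None)  # fit once, evaluate anywhere

x_new = np.r_[rng.normal(size=5), 1.0]
print(x_new @ W)    # predicted (roll, pitch) relative to the surface normal
```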
|
|
TuBT1-21 Interactive Session, 220 |
Add to My Program |
Deep Visual Learning II - 2.2.21 |
|
|
|
13:30-14:45, Paper TuBT1-21.1 | Add to My Program |
Pose Graph Optimization for Unsupervised Monocular Visual Odometry |
Li, Yang | The University of Tokyo |
Ushiku, Yoshitaka | OMRON SINIC X Corporation |
Harada, Tatsuya | The University of Tokyo |
Keywords: Deep Learning in Robotics and Automation, SLAM, Localization
Abstract: Unsupervised learning based monocular visual odometry (VO) has lately drawn significant attention for its potential label-free learning ability and robustness to camera parameters and environmental variations. However, partially due to the lack of drift correction techniques, these methods are still far less accurate than geometric approaches for large-scale odometry estimation. In this paper, we propose to leverage graph optimization and loop closure detection to overcome the limitations of unsupervised learning based monocular visual odometry. To this end, we propose a hybrid VO system which combines an unsupervised monocular VO called NeuralBundler with a pose graph optimization back-end. NeuralBundler is a neural network architecture that uses temporal and spatial photometric losses as its main supervision and generates a windowed pose graph consisting of multi-view 6DoF constraints. We propose a novel pose cycle consistency loss to relieve the tensions in the windowed pose graph, leading to improved performance. In the back-end, a global pose graph is built from local and loop 6DoF constraints estimated by NeuralBundler and optimized over SE(3). Empirical evaluation on the KITTI odometry dataset demonstrates that 1) NeuralBundler achieves state-of-the-art performance on unsupervised monocular VO estimation, and 2) our whole approach can achieve efficient loop closing and shows favorable overall translational accuracy compared to established monocular SLAM systems.
|
|
13:30-14:45, Paper TuBT1-21.2 | Add to My Program |
Probably Unknown: Deep Inverse Sensor Modelling in Radar |
Weston, Robert James | Oxford Robotics Institute, University of Oxford |
Cen, Sarah Huiyi | University of Oxford |
Newman, Paul | Oxford University |
Posner, Ingmar | Oxford University |
Keywords: Deep Learning in Robotics and Automation, Range Sensing, Mapping
Abstract: Radar presents a promising alternative to lidar and vision in autonomous vehicle applications, able to detect objects at long range under a variety of weather conditions. However, distinguishing between occupied and free space from raw radar power returns is challenging due to complex interactions between sensor noise and occlusion. To counter this we propose to learn an Inverse Sensor Model (ISM) converting a raw radar scan to a grid map of occupancy probabilities using a deep neural network. Our network is self-supervised using partial occupancy labels generated by lidar, allowing a robot to learn about world occupancy from past experience without human supervision. We evaluate our approach on five hours of data recorded in a dynamic urban environment. By accounting for the scene context of each grid cell our model is able to successfully segment the world into occupied and free space, outperforming standard CFAR filtering approaches. Additionally by incorporating heteroscedastic uncertainty into our model formulation, we are able to quantify the variance in the uncertainty throughout the sensor observation. Through this mechanism we are able to successfully identify regions of space that are likely to be occluded.
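Heteroscedastic uncertainty is typically trained with a negative log-likelihood in which the network predicts a per-cell variance alongside the mean; a generic (Gaussian) form is sketched below, though the paper's exact loss for occupancy probabilities is not reproduced:

```python
import numpy as np

def heteroscedastic_nll(y, mu, log_sigma):
    """Per-cell negative log-likelihood with input-dependent variance: the
    network outputs both mu and log_sigma, so it can report high uncertainty
    in occluded cells instead of committing to a wrong label. Gaussian form
    for illustration; the paper's exact loss is not reproduced here."""
    sigma = np.exp(log_sigma)
    return 0.5 * ((y - mu) / sigma) ** 2 + log_sigma   # up to a constant

# A confident wrong prediction costs far more than an uncertain one.
print(heteroscedastic_nll(1.0, 0.0, np.log(0.1)))   # ~47.7
print(heteroscedastic_nll(1.0, 0.0, np.log(1.0)))   # 0.5
```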
|
|
13:30-14:45, Paper TuBT1-21.3 | Add to My Program |
Uncertainty-Aware Occupancy Map Prediction Using Generative Networks for Robot Navigation |
Katyal, Kapil | Johns Hopkins University Applied Physics Lab |
Popek, Katie | Johns Hopkins University Applied Physics Lab |
Paxton, Chris | NVIDIA Research |
Burlina, Philippe | Johns Hopkins University Applied Physics Laboratory |
Hager, Gregory | Johns Hopkins University |
Keywords: Deep Learning in Robotics and Automation, Mapping, Visual-Based Navigation
Abstract: Efficient exploration through unknown environments remains a challenging problem for robotic systems. In these situations, the robot's ability to reason about its future motion is often severely limited by sensor field of view (FOV). By contrast, biological systems routinely make decisions by taking into consideration what might exist beyond their FOV based on prior experience. We present an approach for predicting occupancy map representations of sensor data for future robot motions using deep neural networks. We develop a custom loss function used to make accurate prediction while emphasizing physical boundaries. We further study extensions to our neural network architecture to account for uncertainty and ambiguity inherent in mapping and exploration. Finally, we demonstrate a combined map prediction and information-theoretic exploration strategy using the variance of the generated hypotheses as the heuristic for efficient exploration of unknown environments.
|
|
13:30-14:45, Paper TuBT1-21.4 | Add to My Program |
Empty Cities: Image Inpainting for a Dynamic-Object-Invariant Space |
Bescos, Berta | University of Zaragoza |
Neira, José | Universidad De Zaragoza |
Siegwart, Roland | ETH Zurich |
Cadena Lerma, Cesar | ETH Zurich |
Keywords: Deep Learning in Robotics and Automation, Localization, SLAM
Abstract: In this paper we present an end-to-end deep learning framework to turn images that show dynamic content, such as vehicles or pedestrians, into realistic static frames. This objective encounters two main challenges: detecting all the dynamic objects, and inpainting the static occluded background with plausible imagery. The second problem is approached with a conditional generative adversarial model that, taking as input the original dynamic image and its dynamic/static binary mask, is capable of generating the final static image. The former challenge is addressed by the use of a convolutional network that learns a multi-class semantic segmentation of the image. These generated images can be used for applications such as augmented reality or vision-based robot localization purposes. To validate our approach, we show both qualitative and quantitative comparisons against other state-of-the-art inpainting methods by removing the dynamic objects and hallucinating the static structure behind them. Furthermore, to demonstrate the potential of our results, we carry out pilot experiments that show the benefits of our proposal for visual place recognition.
|
|
13:30-14:45, Paper TuBT1-21.5 | Add to My Program |
Autonomous Exploration, Reconstruction, and Surveillance of 3D Environments Aided by Deep Learning |
Ly, Louis | UT Austin |
Tsai, Richard | UT Austin |
Keywords: Deep Learning in Robotics and Automation, Mapping, Autonomous Agents
Abstract: We propose a greedy and supervised learning approach for visibility-based exploration, reconstruction and surveillance. Using a level set representation, we train a convolutional neural network to determine vantage points that maximize visibility. We show that this method drastically reduces the on-line computational cost and determines a small set of vantage points that solve the problem. This enables us to efficiently produce highly-resolved and topologically accurate maps of complex 3D environments. Unlike traditional next-best-view and frontier-based strategies, the proposed method accounts for geometric priors while evaluating potential vantage points. While existing deep learning approaches focus on obstacle avoidance and local navigation, our method aims at finding near-optimal solutions to the more global exploration problem. We present realistic simulations on 2D and 3D urban environments.
|
|
13:30-14:45, Paper TuBT1-21.6 | Add to My Program |
GANVO: Unsupervised Deep Monocular Visual Odometry and Depth Estimation with Generative Adversarial Networks |
Almalioglu, Yasin | The University of Oxford |
Saputra, Muhamad Risqi U. | University of Oxford |
Porto Buarque de Gusmão, Pedro | University of Oxford |
Markham, Andrew | Oxford University |
Trigoni, Niki | University of Oxford |
Keywords: Deep Learning in Robotics and Automation, Localization, Visual Tracking
Abstract: In the last decade, supervised deep learning approaches have been extensively employed in visual odometry (VO) applications; however, they are not feasible in environments where labelled data is not abundant. On the other hand, unsupervised deep learning approaches for localization and mapping in unknown environments from unlabelled data have received comparatively less attention in VO research. In this study, we propose a generative unsupervised learning framework that predicts 6-DoF camera motion and a monocular depth map of the scene from unlabelled RGB image sequences, using deep convolutional Generative Adversarial Networks (GANs). We create a supervisory signal by warping view sequences and assigning the re-projection minimization to the objective loss function that is adopted in the multi-view pose estimation and single-view depth generation networks. Detailed quantitative and qualitative evaluations of the proposed framework on the KITTI and Cityscapes datasets show that the proposed method outperforms both existing traditional and unsupervised deep VO methods, providing better results for both pose estimation and depth recovery.
|
|
TuBT1-22 Interactive Session, 220 |
Add to My Program |
Object Recognition & Segmentation II - 2.2.22 |
|
|
|
13:30-14:45, Paper TuBT1-22.1 | Add to My Program |
Fast Instance and Semantic Segmentation Exploiting Local Connectivity, Metric Learning, and One-Shot Detection for Robotics |
Milioto, Andres | University of Bonn |
Mandtler, Leonard | University of Bonn |
Stachniss, Cyrill | University of Bonn |
Keywords: Object Detection, Segmentation and Categorization, Semantic Scene Understanding, Deep Learning in Robotics and Automation
Abstract: Semantic scene understanding is important for autonomous robots that aim to navigate dynamic environments, manipulate objects, or interact with humans in a natural way. In this paper, we address the problem of jointly performing semantic segmentation as well as instance segmentation in an online fashion, so that autonomous robots can use this information on-the-go and without sacrificing accuracy. We achieve this by exploiting a local connectivity prior of objects in the real world and a multi-task convolutional neural network architecture. The network identifies the individual object instances and their classes without region proposals or pre-segmentation of the images into individual classes. We implemented and thoroughly evaluated our approach, and our experiments suggest that our method can be used to accurately segment instance masks of objects and identify their class in an online fashion.
|
|
13:30-14:45, Paper TuBT1-22.2 | Add to My Program |
Adding Cues to Binary Feature Descriptors for Visual Place Recognition |
Schlegel, Dominik | Sapienza - University of Rome |
Grisetti, Giorgio | Sapienza University of Rome |
Keywords: Recognition, Localization, SLAM
Abstract: In this paper we propose an approach to embed multi-dimensional continuous cues in binary feature descriptors used for visual place recognition. The embedding is achieved by extending each feature descriptor with a binary string that encodes a cue and supports the Hamming distance metric. Augmenting the descriptors in such a way has the advantage of being transparent to the procedure used to compare them. We present a concrete application of our methodology, demonstrating the considered type of continuous cue. Additionally, we conducted a broad quantitative and comparative evaluation on that application, covering five benchmark datasets and several state-of-the-art image retrieval approaches in combination with various binary descriptor types.
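One encoding consistent with the abstract's requirements — though not necessarily the authors' — is a thermometer (unary) code, whose Hamming distance is proportional to the cue difference and therefore composes transparently with binary descriptor matching:

```python
import numpy as np

def cue_to_binary(cue, lo=0.0, hi=1.0, n_bits=32):
    """Thermometer (unary) code: quantize the cue into n_bits levels and set
    the first k bits, so the Hamming distance between two codes equals their
    level difference. Range and bit count are illustrative assumptions."""
    k = int(round((np.clip(cue, lo, hi) - lo) / (hi - lo) * n_bits))
    return np.r_[np.ones(k, dtype=np.uint8), np.zeros(n_bits - k, dtype=np.uint8)]

def hamming(a, b):
    return int(np.count_nonzero(a != b))

desc = np.random.default_rng(6).integers(0, 2, 256).astype(np.uint8)
aug1 = np.r_[desc, cue_to_binary(0.20)]   # same descriptor, different cue value
aug2 = np.r_[desc, cue_to_binary(0.35)]
print(hamming(aug1, aug2))   # grows linearly with the cue difference
```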
|
|
13:30-14:45, Paper TuBT1-22.3 | Add to My Program |
Recursive Bayesian Classification for Perception of Evolving Targets Using a Gaussian Toroid Prediction Model |
Steckenrider, J. Josiah | Virginia Polytechnic Institute and State University |
Furukawa, Tomonari | Virginia Polytechnic Institute and State University |
Keywords: Probability and Statistical Methods, Recognition, Sensor Fusion
Abstract: This paper proposes a probabilistic framework for classification of evolving targets, leveraging the principles of recursive Bayesian estimation in a perception-oriented context. By implementing a Gaussian toroid prediction model of the perception target's evolution, the proposed recursive Bayesian classification (RBC) scheme provides probabilistically robust classification. Appropriate features are extracted from the target, which is then probabilistically represented in a belief space. This approach is capable of handling high-dimensional belief spaces, while simultaneously allowing for multi-Gaussian representation of belief without computational complexity that hinders real-time analysis. The proposed technique is validated over several parameter values by thousands of simulated experiments, where it is shown to outperform naive classification when high observational uncertainty is present.
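The recursive structure can be sketched in a discrete-class form: propagate the belief through an evolution model, then fuse the new observation. The Gaussian toroid model would supply the transition term; here it is a generic stochastic matrix:

```python
import numpy as np

def rbc_update(belief, transition, likelihood):
    """One recursive Bayesian classification step: propagate the class belief
    through the target-evolution (prediction) model, then fuse the latest
    feature observation. `transition` stands in for the paper's Gaussian
    toroid prediction model; here it is a generic stochastic matrix."""
    predicted = transition @ belief          # prior after target evolution
    posterior = likelihood * predicted       # Bayes observation update
    return posterior / posterior.sum()

belief = np.array([0.5, 0.3, 0.2])           # three candidate classes
transition = np.array([[0.9, 0.1, 0.0],      # illustrative evolution model
                       [0.1, 0.8, 0.1],
                       [0.0, 0.1, 0.9]])
for z_lik in ([0.6, 0.3, 0.1], [0.7, 0.2, 0.1]):   # successive likelihoods
    belief = rbc_update(belief, transition, np.array(z_lik))
print(belief)   # belief sharpens as noisy evidence accumulates
```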
|
|
13:30-14:45, Paper TuBT1-22.4 | Add to My Program |
Large-Scale Object Mining for Object Discovery from Unlabeled Video |
Osep, Aljosa | RWTH Aachen University |
Voigtlaender, Paul | RWTH Aachen University |
Luiten, Jonathon | Mr |
Breuers, Stefan | RWTH Aachen University |
Leibe, Bastian | RWTH Aachen University |
Keywords: Object Detection, Segmentation and Categorization, Visual Learning, Visual Tracking
Abstract: This paper addresses the problem of object discovery from unlabeled driving videos captured in a realistic automotive setting. Identifying recurring object categories in such raw video streams is a very challenging problem. Not only do object candidates first have to be localized in the input images, but many interesting object categories occur relatively infrequently. Object discovery will therefore have to deal with the difficulties of operating in the long tail of the object distribution. We demonstrate the feasibility of performing fully automatic object discovery in such a setting by mining object tracks using a generic object tracker. In order to facilitate further research in object discovery, we release a collection of more than 360,000 automatically mined object tracks from 10+ hours of video data (560,000 frames). We use this dataset to evaluate the suitability of different feature representations and clustering strategies for object discovery.
|
|
13:30-14:45, Paper TuBT1-22.5 | Add to My Program |
Goal-Oriented Object Importance Estimation in On-Road Driving Videos |
Gao, Mingfei | University of Maryland |
Tawari, Ashish | Honda Research Institute |
Martin, Sujitha | Honda Research Institute |
Keywords: Computer Vision for Automation
Abstract: We formulate a new problem as Object Importance Estimation (OIE) in on-road driving videos, where the road users are considered as important objects if they have influence on the control decision of the ego-vehicle's driver. The importance of a road user depends on both its visual dynamics, e.g., appearance, motion and location, in the driving scene and the driving goal, e.g., the planned path, of the ego vehicle. We propose a novel framework that incorporates both visual model and goal representation to conduct OIE. To evaluate our framework, we collect an on-road driving dataset at traffic intersections in the real world and conduct human-labeled annotation of the important objects. Experimental results show that our goal-oriented method outperforms baselines and has much more improvement on the left-turn and right-turn scenarios. Furthermore, we explore the possibility of using object importance for driving control prediction and demonstrate that binary brake prediction can be improved with the information of object importance.
|
|
13:30-14:45, Paper TuBT1-22.6 | Add to My Program |
Priming Deep Pedestrian Detection with Geometric Context |
Chakraborty, Ishani | Microsoft |
Hua, Gang | Stevens Institute of Technology |
Keywords: Object Detection, Segmentation and Categorization, Visual Learning
Abstract: We investigate the role of geometric context in deep neural networks to establish better pedestrian detectors that are more robust to occlusions. Notwithstanding their demonstrated successes in the wild, deep object detectors underperform in crowded scenes with high intra-category occlusions. One brute-force solution is to collect a large number of labeled training samples under occlusion, but the combinatorial increase in the labeling effort makes it an unaffordable solution. We argue that a promising and complementary direction to solve this problem is to bring geometric context to modulate feature learning in a DNN. We identify that an effective way to leverage geometric context is to induce it in two steps - through early fusion, by guiding region proposal generation to focus on occluded regions, and through late fusion, by penalizing misalignments of bounding boxes in both 2D and 3D. Our experiments on multiple state-of-the-art DNN detectors and several detection benchmarks clearly demonstrate that our proposed method outperforms strong baselines by an average of 5%.
|
|
TuBT1-23 Interactive Session, 220 |
Add to My Program |
Motion and Path Planning II - 2.2.23 |
|
|
|
13:30-14:45, Paper TuBT1-23.1 | Add to My Program |
The Robust Canadian Traveler Problem Applied to Robot Routing |
Guo, Hengwei | University of Toronto |
Barfoot, Timothy | University of Toronto |
Keywords: Motion and Path Planning
Abstract: The stochastic Canadian Traveler Problem (CTP), which finds application in robot route selection under uncertainty, aims to find the traversal policy with the minimum expected cost. This paper extends the CTP to what we call the Robust Canadian Traveler Problem (RCTP), in which the variability of the policy cost is also part of the evaluation criteria. An optimal (offline) algorithm and an approximate (online) algorithm are then proposed to compute the policy that has a good balance of both mean and variation of the traversal cost. The benefit of the proposed framework versus traditional approaches is shown by doing simulations in randomly generated worlds as well as on a map of 5 km of paths built from robot field trials. Specifically, the RCTP framework is able to search for sub-optimal policy alternatives with significantly lower worst-case cost and less computational time compared to the optimal policy, but with little sacrifice on the expected cost.
|
|
13:30-14:45, Paper TuBT1-23.2 | Add to My Program |
Improved A-Search Guided Tree Construction for Kinodynamic Planning |
Wang, Yebin | Mitsubishi Electric Research Laboratories |
Keywords: Motion and Path Planning, Autonomous Agents, Nonholonomic Motion Planning
Abstract: With node selection directed by a heuristic cost [1]-[3], an A-search guided tree (AGT) is constructed on-the-fly and enables fast kinodynamic planning. This work presents two variants of AGT to improve computational efficiency. An improved A-search guided tree (i-AGT) biases node expansion by prioritizing control actions, an analogy of prioritizing nodes in AGT. Focusing on node selection, a bi-directional A-search guided tree (BAGT) introduces a second tree originating from the goal in order to offer a better heuristic cost for the first tree. The effectiveness of BAGT pivots on the fact that the second tree encodes obstacle information near the goal. A case study demonstrates that i-AGT consistently reduces the complexity of the tree and improves computational efficiency, while BAGT helps in most but not all cases, with no benefit observed for simple problems.
|
|
13:30-14:45, Paper TuBT1-23.3 | Add to My Program |
Balancing Global Exploration and Local-Connectivity Exploitation with Rapidly-Exploring Random Disjointed-Trees |
Lai, Tin | University of Sydney |
Ramos, Fabio | University of Sydney |
Francis, Gilad | The University of Sydney |
Keywords: Motion and Path Planning
Abstract: Sampling efficiency in a highly constrained environment has long been a major challenge for sampling-based planners. In this work, we propose Rapidly-exploring Random disjointed-Trees* (RRdT*), an incremental optimal multi-query planner. RRdT* uses multiple disjointed trees to exploit the local connectivity of spaces via Markov chain random sampling, which utilises neighbourhood information derived from previous successful and failed samples. To balance local exploitation, RRdT* actively explores unseen global spaces when local-connectivity exploitation is unsuccessful. The active trade-off between local exploitation and global exploration is formulated as a multi-armed bandit problem. We argue that this active balancing of global exploration and local exploitation is the key to improving sample efficiency in sampling-based motion planners. We provide rigorous proofs of completeness and optimal convergence for this novel approach. Furthermore, we demonstrate experimentally the effectiveness of RRdT*'s locally exploring trees in granting improved visibility for planning. Consequently, RRdT* outperforms existing state-of-the-art incremental planners, especially in highly constrained environments.
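The bandit view can be illustrated with standard UCB1 over arms such as "extend local tree i" versus "sample globally"; RRdT*'s exact arm and reward design may differ from this sketch:

```python
import numpy as np

def ucb1_pick(successes, pulls, t, c=np.sqrt(2.0)):
    """Choose which arm to play (e.g., 'extend local tree i' vs. 'sample
    globally') by UCB1: exploit high success rates, keep exploring rarely
    tried arms. Generic bandit; RRdT*'s reward design may differ."""
    untried = np.flatnonzero(pulls == 0)
    if untried.size:                          # play every arm once first
        return int(untried[0])
    return int(np.argmax(successes / pulls + c * np.sqrt(np.log(t) / pulls)))

arms = 4                                      # e.g., 3 local trees + global sampler
succ, pulls = np.zeros(arms), np.zeros(arms)
rng, p_true = np.random.default_rng(7), np.array([0.2, 0.7, 0.4, 0.1])
for t in range(1, 500):
    a = ucb1_pick(succ, pulls, t)
    succ[a] += rng.random() < p_true[a]       # reward: extension succeeded?
    pulls[a] += 1
print(pulls)   # pulls concentrate on the most successful strategy over time
```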
|
|
13:30-14:45, Paper TuBT1-23.4 | Add to My Program |
Locomotion Planning through a Hybrid Bayesian Trajectory Optimization |
Seyde, Tim Niklas | MIT, ETH Zurich |
Carius, Jan | ETH Zurich |
Grandia, Ruben | ETH Zurich |
Farshidian, Farbod | ETH Zurich |
Hutter, Marco | ETH Zurich |
Keywords: Motion and Path Planning, Optimization and Optimal Control, Underactuated Robots
Abstract: Locomotion planning for legged systems requires reasoning about suitable contact schedules. The contact sequence and timings constitute a hybrid dynamical system and prescribe a subset of achievable motions. State-of-the-art approaches cast motion planning as an optimal control problem. In order to decrease computational complexity, one common strategy separates footstep planning from motion optimization and plans contacts using heuristics. In this paper, we propose to learn contact schedule selection from high-level task descriptors using Bayesian optimization. A bi-level optimization is defined in which a Gaussian process model predicts the performance of trajectories generated by a motion planning nonlinear program. The agent, therefore, retains the ability to reason about suitable contact schedules, while explicit computation of the corresponding gradients is avoided. We delineate the algorithm in its general form and provide results for planning single-legged hopping. Our method is capable of learning contact schedule transitions that align with human intuition. It performs competitively against a heuristic baseline in predicting task appropriate contact schedules.
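The bi-level structure can be sketched as a standard Bayesian-optimization loop: a GP surrogate over a contact-schedule parameter, with expected improvement selecting the next schedule to evaluate. The inner planner cost below is a stand-in function, not a motion-planning NLP:

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expected_improvement(mu, std, best):
    # Minimization convention: improvement is cost below the incumbent best.
    z = (best - mu) / np.maximum(std, 1e-9)
    return (best - mu) * norm.cdf(z) + std * norm.pdf(z)

# Stand-in for the inner motion-planning NLP: maps a scalar contact-timing
# parameter to a trajectory cost (hypothetical, for illustration only).
inner_nlp_cost = lambda x: np.sin(3.0 * x) + 0.5 * (x - 0.6) ** 2

X = np.array([[0.1], [0.5], [0.9]])                  # initial schedules tried
y = np.array([inner_nlp_cost(x[0]) for x in X])
grid = np.linspace(0.0, 1.0, 200)[:, None]
for _ in range(10):
    gp = GaussianProcessRegressor(Matern(nu=2.5), alpha=1e-6,
                                  normalize_y=True).fit(X, y)
    mu, std = gp.predict(grid, return_std=True)
    x_next = grid[np.argmax(expected_improvement(mu, std, y.min()))]
    X = np.vstack([X, [x_next]])
    y = np.append(y, inner_nlp_cost(x_next[0]))
print(X[np.argmin(y)], y.min())   # good schedule found without cost gradients
```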
|
|
13:30-14:45, Paper TuBT1-23.5 | Add to My Program |
Dynamic Channel: A Planning Framework for Crowd Navigation |
Cao, Chao | Carnegie Mellon University |
Trautman, Peter | Galois Inc |
Iba, Soshi | Honda Research Institute USA |
Keywords: Motion and Path Planning, Collision Avoidance, Human-Centered Robotics
Abstract: Real-time navigation in dense human environments has been a challenging problem in robotics for years. Most existing path planners fail to account for the dynamics of pedestrians because introducing time as an additional dimension in the search space often becomes computationally prohibitive. On the other hand, most local motion planners only address the imminent collision avoidance problem and fail to offer long-term optimality. In this work, we present an approach, Dynamic Channels, to solve the crowd navigation problem and, more generally, the dynamic obstacle avoidance problem. Our method combines high-level topological path planning with low-level motion planning into a complete pipeline. By formulating the path planning problem as graph search in the triangulation space, our planner is able to explicitly reason about the dynamics of obstacles and capture changes in the environment efficiently. We evaluate the efficiency and performance of our approach on public pedestrian datasets and compare it to a state-of-the-art planning algorithm for dynamic obstacle avoidance.
|
|
13:30-14:45, Paper TuBT1-23.6 | Add to My Program |
Composition of Local Potential Functions with Reflection |
Stager, Adam | University of Delaware |
Tanner, Herbert G. | University of Delaware |
Keywords: Motion and Path Planning, Wheeled Robots
Abstract: This paper suggests that reflections can be practically useful if they are included in planning for collision-capable robot platforms. By modifying a proven strategy for navigation with reflections, we maintain global convergence results and reach the goal in less time. An algorithm for identifying reflection surfaces for a given cell decomposition is reported. Baseline and reflected scenarios are compared for two different cell decompositions. Omnipuck, a reflection-capable omnidirectional robot designed to store and release impact energy, is used to obtain experimental results and draw conclusions for future work.
|
|
TuBT1-24 Interactive Session, 220 |
Add to My Program |
Industrial Robotics - 2.2.24 |
|
|
|
13:30-14:45, Paper TuBT1-24.1 | Add to My Program |
Analyzing Electromagnetic Actuator Based on Force Analysis |
Ahn, Jaewon | DGIST |
Yun, Dongwon | Daegu Gyeongbuk Institute of Science and Technology (DGIST) |
Keywords: Industrial Robots, Motion Control, Semiconductor Manufacturing
Abstract: By modeling the system's mechanical, electrical, and magnetic-field domains, we derive the system equations for the actuator model. Since it is not easy to obtain the output signal from the equations directly, we used Simulink to simulate the system and examine its performance. We then performed several experiments to verify whether the measured force meets the conditions required for impact hot embossing and matches the simulated force, adjusting the system parameters to bring the experimental and simulated forces into agreement. From this comparison, we conclude that the actuator analysis is accurate. A successful study can contribute to better application of new impact hot-embossing techniques and to a better understanding and use of the electromagnetic actuator when it is applied to other technologies and research.
|
|
13:30-14:45, Paper TuBT1-24.2 | Add to My Program |
A Novel Robotic System for Finishing of Freeform Surfaces |
Wen, Yalun | Texas A&M University |
Hu, Jie | Texas A&M University |
Pagilla, Prabhakar Reddy | Texas A&M University |
Keywords: Industrial Robots, Intelligent and Flexible Manufacturing, Factory Automation
Abstract: Surface finishing of freeform surfaces is predominantly a manual operation that requires a considerable amount of operator skill; automation of this process has many benefits, including consistent surface quality and preventing hazardous exposure to particulates. A novel robotic surface finishing system, consisting of a robot and an end-effector that includes a force sensor, finishing tool, and proximity laser sensor, is developed in this paper to automate the surface finishing process. The laser sensor is treated as an additional link, and based on it a novel perception system is developed for real-time scanning of the surface that provides the surface profile mesh and the corresponding normal vectors, which can be used directly by the robot closed-loop control system for pose tracking. A unique feature of the perception system is that the geometry of the surface profile and the normal vectors are all obtained in real time in the robot base coordinate system, thus eliminating issues such as precise registration of the workpiece in the fixture and its location with respect to the robot base coordinates. An impedance-type closed-loop control algorithm is developed for pose tracking. The proposed system and control algorithm are employed to conduct surface finishing experiments on wooden surfaces. A representative sample of the results and measurement images of surface finish are provided to illustrate the capabilities of the robotic surface finishing system. A video
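An impedance-type contact law of the general kind mentioned above can be sketched along the surface normal; the gains and structure here are illustrative, not the paper's controller:

```python
import numpy as np

def impedance_force_cmd(x, v, x_d, f_d, K=800.0, D=60.0):
    """Impedance-type law along the tool/surface normal: blend a desired
    contact force with a virtual spring-damper on the pose error.
    Gains and structure are illustrative, not the paper's controller."""
    return f_d + K * (x_d - x) - D * v

# Tool 2 mm above the target depth, approaching at 10 mm/s, 15 N desired force.
print(impedance_force_cmd(x=0.002, v=-0.010, x_d=0.0, f_d=15.0))   # 14.0 N
```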
|
|
13:30-14:45, Paper TuBT1-24.3 | Add to My Program |
Context-Dependent Compensation Scheme to Reduce Trajectory Execution Errors for Industrial Manipulators |
Bhatt, Prahar | University of Southern California |
Rajendran, Pradeep | University of Southern California |
McKay, Keith | Hexagon AB |
Gupta, Satyandra K. | University of Southern California |
Keywords: Industrial Robots, Intelligent and Flexible Manufacturing, Calibration and Identification
Abstract: Currently, automatically generated trajectories cannot be used directly in tasks that require high execution accuracy because of errors caused by inaccuracies in the robot model, actuator errors, and controller limitations. These trajectories often need manual refinement, which is not economically viable in low-production-volume applications. Unfortunately, execution errors depend on the nature of the trajectory and the end-effector loads, so devising a general-purpose automated compensation scheme for reducing trajectory errors is not possible. This paper presents a method for analyzing a given trajectory, executing an exploratory physical run over a small portion of it, and learning a compensation scheme from the measured data. The learned compensation scheme is context-dependent and can be used to reduce the execution error. We demonstrate the feasibility of this approach through physical experiments.
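A minimal sketch of the learn-then-compensate workflow; the linear error model and the data below are illustrative stand-ins for the paper's context-dependent scheme:
```python
# Fit an execution-error model from a short exploratory run, then
# pre-subtract the predicted error from subsequent commands.
import numpy as np

# Exploratory run: commanded joint positions and the errors measured there.
q_cmd = np.linspace(0.0, 1.0, 20)[:, None]   # commanded positions [rad]
err = 0.01 * q_cmd.ravel() + 0.002           # measured errors [rad] (synthetic)

# Simple linear model err ~ a*q + b; a richer, context-dependent feature set
# (trajectory shape, end-effector load, ...) would be used in practice.
A = np.hstack([q_cmd, np.ones_like(q_cmd)])
coef, *_ = np.linalg.lstsq(A, err, rcond=None)

def compensate(q):
    """Shift the command so that the predicted error cancels out."""
    return q - (coef[0] * q + coef[1])

print(compensate(0.5))   # compensated command for a new trajectory point
```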
|
|
13:30-14:45, Paper TuBT1-24.4 | Add to My Program |
Identifying Feasible Workpiece Placement with Respect to Redundant Manipulator for Complex Manufacturing Tasks |
Malhan, Rishi | University of Southern California |
Kabir, Ariyan M | University of Southern California |
Shah, Brual C. | University of Maryland, College Park |
Gupta, Satyandra K. | University of Southern California |
Keywords: Industrial Robots, Intelligent and Flexible Manufacturing, Manufacturing, Maintenance and Supply Chains
Abstract: Successfully completing a complex manufacturing task requires finding a feasible placement of the workpiece in the robot workspace. The workpiece placement should be such that the task surfaces on the workpiece are reachable by the robot, the robot can apply the required forces, and the end-effector/tool can move with the desired velocity. This paper formulates the problem of identifying a feasible placement as a non-linear optimization problem over the constraint violation functions, which is computationally challenging. We show that the problem can be solved by searching for the solution in stages, incrementally applying the different constraints. We demonstrate the feasibility of our approach on several complex workpieces.
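A minimal sketch of placement search as penalty minimization with constraints applied incrementally; the violation functions below are placeholders, not the paper's:
```python
# Search over workpiece placement p = (x, y, yaw) by minimizing a weighted
# sum of constraint-violation penalties, adding constraints stage by stage.
import numpy as np
from scipy.optimize import minimize

def reachability_violation(p):
    # Placeholder: penalize placements far from a hypothetical workspace center.
    return max(0.0, np.linalg.norm(p[:2] - np.array([0.6, 0.0])) - 0.4)

def force_violation(p):
    # Placeholder: some orientations make the required force infeasible.
    return max(0.0, abs(p[2]) - np.pi / 3)

def total_violation(p, w=(10.0, 1.0)):
    return w[0] * reachability_violation(p) + w[1] * force_violation(p)

# Stage 1: reachability only; Stage 2: add the force constraint, warm-started
# from the stage-1 solution (mirrors the incremental constraint application).
p0 = np.array([1.2, 0.5, 1.5])
stage1 = minimize(lambda p: 10.0 * reachability_violation(p), p0, method="Nelder-Mead")
stage2 = minimize(total_violation, stage1.x, method="Nelder-Mead")
print("feasible placement:" if total_violation(stage2.x) < 1e-6 else "infeasible:", stage2.x)
```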
|
|
13:30-14:45, Paper TuBT1-24.5 | Add to My Program |
Geometric Search Based Inverse Kinematics of 7-DoF Redundant Manipulator with Multiple Joint Offsets |
Sinha, Anirban | Stony Brook University |
Chakraborty, Nilanjan | Stony Brook University |
Keywords: Industrial Robots, AI-Based Methods, Manipulation Planning
Abstract: We propose a geometric method to solve inverse kinematics (IK) problems of 7-DoF manipulators with joint offsets at the shoulder, elbow, and wrist. Traditionally, inverse position kinematics for redundant manipulators is solved with an iterative method based on the pseudo-inverse of the manipulator Jacobian, which yields a single solution among the infinitely many possible solutions. There are no closed-form IK solutions for redundant manipulators with multiple joint offsets. Using our method, we can compute multiple IK solutions through a two-parameter search that exploits the geometric structure of a redundant manipulator. Our proposed IK algorithm handles multiple joint offsets and is mathematically simple to implement in a few lines of code. We apply the algorithm to compute IK solutions for the 7-DoF redundant Baxter robot (which has joint offsets at the shoulder, elbow, and wrist) for end-effector configurations where existing geometry-based IK solvers fail to find solutions. We also demonstrate its use in an application where we want an IK solution (among the infinitely many possible ones) with a minimum error bound on the end-effector position in the presence of random joint actuation and sensing uncertainties.
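A minimal sketch of the two-parameter search structure; the geometric subproblem solver below is a placeholder for the closed-form computation the paper derives:
```python
# Sweep two redundancy parameters and collect every feasible IK solution,
# in contrast to the single answer a pseudo-inverse iteration returns.
import numpy as np

def closed_form_subproblem(target_pose, phi, psi):
    """Hypothetical stand-in: return a 7-vector of joint angles or None."""
    q = np.full(7, (phi + psi) / 2.0)            # placeholder computation
    return q if abs(phi - psi) < 2.0 else None   # placeholder feasibility test

def ik_candidates(target_pose, n=36):
    """Grid the two redundancy parameters and keep all feasible solutions."""
    solutions = []
    for phi in np.linspace(-np.pi, np.pi, n):     # first redundancy parameter
        for psi in np.linspace(-np.pi, np.pi, n): # second redundancy parameter
            q = closed_form_subproblem(target_pose, phi, psi)
            if q is not None:
                solutions.append(q)
    return solutions

print(len(ik_candidates(target_pose=None)), "candidate solutions")
```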
|
|
13:30-14:45, Paper TuBT1-24.6 | Add to My Program |
New Automated Guided Vehicle System Using Real-Time Holonic Scheduling for Warehouse Picking |
Yoshitake, Hiroshi | Hitachi, Ltd |
Kamoshida, Ryota | Hitachi, Ltd |
Nagashima, Yoshikazu | Hitachi, Ltd |
Keywords: Industrial Robots, Planning, Scheduling and Coordination, Logistics
Abstract: We propose a new robotic system that uses automated guided vehicles (AGVs) for order picking in logistics warehouses. In AGV picking systems, AGVs transport entire shelves containing the required items (inventory shelves) or shipping boxes (sorting shelves) to the pickers, instead of the pickers moving to the shelves, which improves the productivity of picking in warehouses. In conventional systems, the sorting shelves are fixed or are transported only after sorting on them is complete. Our new system can transport a sorting shelf even while sorting on it is still in progress, improving picking productivity by moving both kinds of shelves to appropriate locations at the appropriate times. To handle the resulting complex transport tasks, the proposed system requires a real-time scheduling method, and this study applies a real-time holonic scheduling method to solve the scheduling problems. We evaluated the productivity of the proposed system against that of the conventional approach, which transports sorting shelves only after the sorting work is finished. The results show that the larger the working area and the higher-mix, lower-volume the picking orders, the more efficiently the proposed method solves the scheduling problems.
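An illustrative, simplified sketch of holon-style task allocation, rendered here as an auction among AGVs; the paper's real-time holonic scheduler is more elaborate, and all names below are ours:
```python
# Each AGV "holon" bids on a shelf-transport task; the lowest bid wins.
# Re-running the auction whenever conditions change is what allows a
# sorting shelf to be moved even mid-sort.
import math

agvs = {"agv1": (0, 0), "agv2": (5, 2), "agv3": (1, 8)}   # current positions

def bid(agv_pos, shelf_pos):
    """An AGV's bid: travel distance to the shelf (lower is better)."""
    return math.dist(agv_pos, shelf_pos)

def allocate(shelf_pos):
    """Auction the transport task among all idle AGV holons."""
    return min(agvs, key=lambda name: bid(agvs[name], shelf_pos))

print(allocate(shelf_pos=(4, 3)))   # -> "agv2", the closest vehicle
```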
|
|
TuBT1-25 Interactive Session, 220 |
Add to My Program |
Intelligent Transportation II - 2.2.25 |
|
|
|
13:30-14:45, Paper TuBT1-25.1 | Add to My Program |
Design and Formal Verification of a Safe Stop Supervisor for an Automated Vehicle |
Krook, Jonas | Zenuity |
Svensson, Lars | KTH Royal Institute of Technology |
Li, Yuchao | KTH Royal Institute of Technology |
Feng, Lei | KTH Royal Institute of Technology |
Fabian, Martin | Department of Electrical Engineering |
Keywords: Formal Methods in Robotics and Automation, Intelligent Transportation Systems, Robot Safety
Abstract: Autonomous vehicles apply pertinent planning and control algorithms under different driving conditions, and the mode switch between these algorithms should also be autonomous. On top of the nominal planners, a safe fallback routine is needed to stop the vehicle at a safe position if nominal operational conditions are violated, such as during a system failure. This paper describes the design and formal verification of a supervisor that manages all requirements for mode switching between the nominal planners, along with additional requirements for switching to a safe-stop trajectory planner that acts as the fallback routine. The supervisor is designed via a model-based approach, and its abstraction is formally verified by model checking. The supervisor is implemented and integrated into the Research Concept Vehicle, an experimental research and demonstration vehicle developed at the KTH Royal Institute of Technology. Simulations and experiments show that the vehicle is able to drive autonomously and safely between two parking lots and can successfully come to a safe stop upon GPS sensor failure.
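A minimal sketch of a mode-switching supervisor automaton; the states and guards below are illustrative, whereas the paper's supervisor is designed from the full requirement set and verified by model checking:
```python
# A tiny supervisor automaton: nominal driving falls back to a safe-stop
# planner on sensor failure, then reports STOPPED once the vehicle halts.
from enum import Enum, auto

class Mode(Enum):
    NOMINAL = auto()
    SAFE_STOP = auto()
    STOPPED = auto()

def supervisor_step(mode, gps_ok, vehicle_speed):
    """One transition of the supervisor."""
    if mode is Mode.NOMINAL and not gps_ok:
        return Mode.SAFE_STOP        # fallback: track the safe-stop trajectory
    if mode is Mode.SAFE_STOP and vehicle_speed == 0.0:
        return Mode.STOPPED          # safe position reached
    return mode

mode = Mode.NOMINAL
for gps_ok, speed in [(True, 5.0), (False, 5.0), (False, 2.0), (False, 0.0)]:
    mode = supervisor_step(mode, gps_ok, speed)
print(mode)   # Mode.STOPPED
```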
|
|
13:30-14:45, Paper TuBT1-25.2 | Add to My Program |
Optimization-Based Terrain Analysis and Path Planning in Unstructured Environments |
Graf, Ueli | 9T Labs AG |
Borges, Paulo Vinicius Koerich | CSIRO |
Hernandez, Emili | CSIRO |
Siegwart, Roland | ETH Zurich |
Dubé, Renaud | ETH Zürich |
Keywords: Field Robots, Autonomous Vehicle Navigation, Motion and Path Planning
Abstract: Accurate environment representation is one of the key challenges in autonomous ground vehicle navigation in unstructured environments. We propose a real-time optimization-based approach to terrain modeling and path planning in off-road and rough environments. Our method uses an irregular, hierarchical, graph-like environment model. A space-dividing tree is used to define a compact data structure capturing vertex positions and establishing connectivity. The same underlying data structure is used for both terrain modeling and path planning without memory reallocation. Local plans are generated by graph search algorithms and are continuously regenerated for on-the-fly obstacle avoidance within the scope of the local terrain map. We show that implementing a hierarchical model over a regular space division reduces graph edge expansions by up to 84%. We illustrate the applicability of the method through experiments with an unmanned ground vehicle in both structured and unstructured environments.
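A minimal sketch (our own simplification, not the authors' data structure) of the space-dividing-tree idea: flat terrain remains a single coarse leaf, rough terrain is subdivided, and the resulting leaves double as graph vertices for planning:
```python
# Recursively split a height-map patch until it is flat enough or at the
# minimum cell size; each leaf becomes one vertex of the planning graph.
import numpy as np

def build_quadtree(heights, x0, y0, size, roughness_tol=0.05, min_size=1):
    patch = heights[y0:y0 + size, x0:x0 + size]
    if size <= min_size or np.ptp(patch) < roughness_tol:
        return [(x0, y0, size, float(patch.mean()))]   # one leaf = one vertex
    h = size // 2
    leaves = []
    for dx, dy in [(0, 0), (h, 0), (0, h), (h, h)]:
        leaves += build_quadtree(heights, x0 + dx, y0 + dy, h,
                                 roughness_tol, min_size)
    return leaves

terrain = np.zeros((8, 8))
terrain[4:, 4:] = np.random.rand(4, 4)    # one rough corner, rest flat
leaves = build_quadtree(terrain, 0, 0, 8)
print(len(leaves), "graph vertices instead of", terrain.size)
```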
|
|