Last updated on July 1, 2021. This conference program is tentative and subject to change.
Technical Program for Monday July 12, 2021
|
MoAT1 |
Lakai Ballroom |
Best Paper Award Competition 1 |
Spotlight |
Chair: Yi, Byung-Ju | Hanyang University |
|
09:00-09:15, Paper MoAT1.1 | |
Microbot Swarm Control through Sequencing of Motion Primitives from Optimal Control Trajectories |
|
Thelasingha, Neelanga | Rensselaer Polytechnic Institute |
Julius, Agung | Rensselaer Polytechnic Institute |
Kim, MinJun | Southern Methodist University |
Keywords: Motion Planning and Obstacle Avoidance, Multi-Robot Systems, Micro/Nano Robots
Abstract: Spatial variance reduction of microbots through a global input is a challenging task in microbot manipulation. In this paper, we propose to use a sequence of primary motion maneuvers, called motion primitives, to perform swarm variance reduction. We extract these primitives from the principal directions of the optimal control trajectories. The primitives efficiently discretize the input space and significantly reduce the dimension of the search space. This enables lightweight, adaptable search algorithms such as A^* to be exploited for fast sub-optimal input primitive sequence generation. Further, we propose a receding horizon planner to increase robustness to process noise. We validate the proposed methods with several simulated case studies.
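The search step described in the abstract, A^* over a discretized primitive set, can be sketched as follows. The primitive set, unit costs, and grid-like state space here are hypothetical illustrations, not the authors' swarm model:

```python
import heapq

# Hypothetical primitive set: each primitive maps to a net state displacement.
PRIMITIVES = {"N2": (0, 2), "S2": (0, -2), "E2": (2, 0), "W2": (-2, 0),
              "NE": (1, 1), "SW": (-1, -1)}

def heuristic(p, goal):
    # Manhattan distance: admissible for the unit-cost moves below.
    return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

def astar_primitives(start, goal):
    """Search for a primitive sequence driving the state from start to goal."""
    frontier = [(heuristic(start, goal), 0, start, [])]
    seen = set()
    while frontier:
        f, g, state, seq = heapq.heappop(frontier)
        if state == goal:
            return seq
        if state in seen:
            continue
        seen.add(state)
        for name, (dx, dy) in PRIMITIVES.items():
            nxt = (state[0] + dx, state[1] + dy)
            cost = abs(dx) + abs(dy)  # cost = path length of the primitive
            heapq.heappush(frontier,
                           (g + cost + heuristic(nxt, goal),
                            g + cost, nxt, seq + [name]))
    return None

print(astar_primitives((0, 0), (4, 4)))
```

Because each expansion considers only a handful of primitives instead of a continuous input space, the branching factor stays small, which is what makes a lightweight search such as A^* practical here.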
|
|
09:15-09:30, Paper MoAT1.2 | |
Enabling Landings on Irregular Surfaces for Unmanned Aerial Vehicles Via a Novel Robotic Landing Gear |
|
Huang, Tsung-Hsuan | Japan Advanced Institute of Science and Technology |
Elibol, Armagan | Japan Advanced Institute of Science and Technology |
Chong, Nak Young | Japan Advanced Institute of Science and Technology |
Keywords: Aerial and Flying Robots, Robotics in Hazardous Applications, Contact: Modeling, Sensing and Control
Abstract: Unmanned Aerial Vehicles (UAVs) have been taking an important place in our daily lives. Rapid technological advancements have greatly enhanced their capabilities and widened their range of applications. Among many others, one area that benefits from UAVs is inspection, since they can collect data from areas that are difficult for humans to reach. Lately, within the inspection task, wall-climbing UAVs have been proposed to collect data with contact-type sensors. However, their major drawback is that they can be used only on flat surfaces. In this paper, we present a lightweight robotic landing gear that enables UAVs to land on irregular surfaces. Our novel design uses a vacuum system to attach the landing gear to the surface, and a movable counterweight, composed of a vacuum motor and other control components, to balance the flight. To keep its weight low, the robotic landing gear uses only one servo motor and a passive mechanical structure that guides the vacuum cups on the frontal robotic legs to adapt to differently shaped surfaces. We present experimental results from different scenarios generated within our laboratory environment.
|
|
09:30-09:45, Paper MoAT1.3 | |
Novel Bi-Directional Artificial Muscle Using Shape Memory Alloy Spring Bundles in Honeycomb Architecture |
|
Ali, Hussein | Assistant Professor, Mechanical Department, Benha Faculty of Engineering |
Kim, Youngshik | Hanbat National University |
Keywords: Biomimetic and Bioinspired Robots, Soft Robotics, Actuation and Actuators
Abstract: In this work, we developed a novel artificial muscle using shape memory alloy (SMA) spring bundles. We arranged the SMA springs in a honeycomb architecture to maximize the power-to-volume ratio. This artificial muscle is small and lightweight and can easily be replicated for various biologically inspired applications, such as an exo-suit, where small actuator size and weight are critical. The muscle consists of two sets of SMA springs: six SMA springs (Set A) and six antagonistic SMA springs (Set B), arranged at hexagon vertices, for forward and reverse motion respectively. The muscle can be extended in parallel to increase the output force or in series to increase the stroke length. We used an inertial measurement unit (IMU) for angle feedback and two sets of temperature sensors. The system is modeled and experimentally verified under open-loop and closed-loop control. We used a proportional-integral-derivative (PID) controller to track the desired trajectories. The experimental results show that the system tracks the desired trajectories as designed.
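The closed-loop tracking mentioned in the abstract uses a standard PID law; a minimal discrete-time sketch follows. The gains and the first-order plant standing in for the SMA muscle dynamics are illustrative assumptions, not the paper's identified model:

```python
class PID:
    """Textbook discrete-time PID controller."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Hypothetical plant: a first-order lag standing in for the muscle angle dynamics.
pid = PID(kp=2.0, ki=0.5, kd=0.05, dt=0.01)
angle = 0.0
for _ in range(2000):                   # simulate 20 s at 100 Hz
    u = pid.step(30.0, angle)           # track a 30-degree setpoint
    angle += (u - 0.1 * angle) * 0.01   # toy dynamics: angle' = u - 0.1 * angle
print(round(angle, 1))
```

The integral term removes the steady-state offset that a proportional-only controller would leave against the plant's restoring term.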
|
|
09:45-10:00, Paper MoAT1.4 | |
External Force Estimation of a Planar Robot with Variable Stiffness Actuators |
|
Ohe, Tatsuya | Ehime University |
Alemayoh, Tsige Tadesse | Ehime University |
Lee, Jae Hoon | Ehime University |
Okamoto, Shingo | Ehime University |
Keywords: Force and Tactile Sensing, Actuation and Actuators, Contact: Modeling, Sensing and Control
Abstract: A variable stiffness actuator is considered a promising mechanism-based approach for embodying compliance in robot systems. By changing the stiffness of each joint, the robot can modulate the stiffness of the whole system to enhance safety or the effectiveness of physical interaction with other systems. To make better use of this capability, this paper investigates an algorithm to estimate the external force applied at the end effector. A variable stiffness mechanism utilizing a lever mechanism was adopted as a joint actuator, and its real-time stiffness model, based on hysteresis analysis, was used to estimate the applied torque of each joint. The external force is then computed using the estimated torques and the kinematic relationship between the joint space and the operational space of the robot mechanism. The proposed algorithm was verified through experiments on two-DOF serial and parallel planar robot systems, and their results were compared. It was confirmed that the external force can be estimated correctly in real time using the proposed method.
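For a planar arm, the final step of such a scheme amounts to solving J^T F = tau for the end-effector force, where tau holds the joint torques estimated from the stiffness model. A sketch for a two-DOF serial planar arm follows; the link lengths, configuration, and torque values are hypothetical, not the paper's hardware:

```python
import math

def planar_2dof_jacobian(q1, q2, l1, l2):
    """Geometric Jacobian of a planar 2R arm (end-effector x, y vs joint angles)."""
    j11 = -l1 * math.sin(q1) - l2 * math.sin(q1 + q2)
    j12 = -l2 * math.sin(q1 + q2)
    j21 = l1 * math.cos(q1) + l2 * math.cos(q1 + q2)
    j22 = l2 * math.cos(q1 + q2)
    return [[j11, j12], [j21, j22]]

def external_force(tau, J):
    """Solve J^T F = tau for the planar force F = (Fx, Fy) via Cramer's rule."""
    a, b = J[0][0], J[1][0]   # first row of J^T
    c, d = J[0][1], J[1][1]   # second row of J^T
    det = a * d - b * c
    fx = (tau[0] * d - b * tau[1]) / det
    fy = (a * tau[1] - tau[0] * c) / det
    return fx, fy

# Illustrative configuration and stiffness-model torque estimates.
J = planar_2dof_jacobian(q1=0.3, q2=0.8, l1=0.25, l2=0.20)
tau = (1.2, 0.4)
print(external_force(tau, J))
```

The inversion fails near singular configurations (det close to zero), which is one reason real implementations must monitor the arm's pose.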
|
|
10:00-10:15, Paper MoAT1.5 | |
3D-Printable Toe-Joint Design of Prosthetic Foot |
|
Um, Huijin | Hanyang University |
Kim, Heonsu | Hanyang University |
Hong, Woolim | Texas A&M University |
Kim, Haksung | Hanyang University |
Hur, Pilwon | Texas A&M University |
Keywords: Mechanism and Design, Rehabilitation and Healthcare Robotics, Performance Evaluation and Optimization
Abstract: The toe joint is one of the important design factors for a prosthetic foot. The toe joint provides a feeling of springiness during toe-off, and its stiffness affects the ankle kinematics of a robotic prosthesis during the gait cycle. Moreover, since human toe joints exhibit nonlinear torque-angle behavior, these nonlinear characteristics should be considered when designing a prosthetic foot that mimics human gait more naturally. To implement nonlinear toe joint behavior without additional mechanical components such as actuators, sensors, and electronic circuits, the structural design of the foot must be considered. In this study, an auxetic structure with a negative Poisson's ratio was applied to the toe joint design, and a bending space was included for stable bending deformation of the prosthetic foot. Finite element analysis (FEA) was performed to analyze the designed toe joint behavior. The mechanical properties of Onyx, a short-carbon-fiber-reinforced nylon filament, were applied in the FE simulation to account for 3D-printed manufacturing. The torque-angle curve obtained from the FEA was compared with human toe torque-angle behavior. Consequently, the nonlinear toe stiffness characteristics were implemented through a structured single-part prosthetic design.
|
|
MoBT1 |
Sandpine |
[OS] AI Guidance and Navigation for Mobile Robots in Crowded Space |
Spotlight |
Chair: Hwang, Jung-Hoon | Korea Electronics Technology Institute |
|
12:45-13:00, Paper MoBT1.1 | |
What Do Pedestrians See?: Left Right Pose Classification in Pedestrian-View (LRPose Recognizer) |
|
Choi, Seungmin | ETRI AI Lab, KAIST Future Vehicle Department |
Lee, Jae-Yeong | ETRI |
Keywords: AI Reasoning Methods for Robotics, Simultaneous Localization and Mapping (SLAM), Intelligent Robotic Vehicles
Abstract: A robot moving on an outdoor sidewalk can recognize its location using GPS signals. However, in urban environments surrounded by skyscrapers, GPS signals are often inaccurate. Moreover, even when the signal is accurate, it is not easy to know in which lane the robot is located on a road that is several tens of meters wide. In this article, an image-based neural network is proposed to recognize the position of a moving robot on the sidewalk. Specifically, we propose a classifier, the Left Right Pose (lrpose) recognizer, that determines whether the pedestrian is on the left or right side of the road in the pedestrian view. The image is assumed to be a frontal image taken from the sidewalk. The lrpose recognizer converts the input image into a feature map through convolution layers and classifies the features into three classes: left, right, and uncertain. About 36,000 ground-truth images were collected for training the network. So that the lrpose recognizer works robustly against changes in illumination, weather, and environment, images acquired downtown and in the suburbs, at night and during the day, were included. In the experiments, the proposed lrpose recognizer showed an accuracy of 94.7% in suburban areas, 74.75% in urban areas with very high population density, and 84.7% overall.
|
|
13:00-13:15, Paper MoBT1.2 | |
Image-Goal Navigation Via Keypoint-Based Reinforcement Learning |
|
Choi, Yunho | Seoul National University |
Oh, Songhwai | Seoul National University |
Keywords: Computer Vision and Visual Servoing, AI Reasoning Methods for Robotics
Abstract: In this paper, we tackle the problem of image-goal navigation, a crucial robot navigation task that is especially hard when obstacles exist and the field of view (FoV) of the camera is limited. Conventional visual servoing approaches require depth information and camera parameters and are susceptible to FoV loss, while previous learning-based approaches depend on an unrealistic dense reward function to train the agent with reinforcement learning. To this end, we propose a novel reinforcement learning-based approach that simultaneously utilizes self-supervised local features and global features from an observed image and a target image. The proposed method, KeypointRL, exploits keypoint matching information and generates a self-supervised reward signal that allows the agent to be easily transferred to unseen environments. The proposed model is trained on a subset of the image-goal dataset in the photo-realistic Gibson dataset with the Habitat simulator, and is shown to outperform baseline algorithms and generalize better.
|
|
13:15-13:30, Paper MoBT1.3 | |
Localizability-Based Topological Local Object Occupancy Map for Homing Navigation |
|
Yoo, Hwiyeon | Seoul National University |
Oh, Songhwai | Seoul National University |
Keywords: Intelligent Robotic Vehicles
Abstract: In this paper, we propose a localizability-based topological local object occupancy map (TLO2M) for homing navigation. The proposed approach combines topological and metric map representations. We utilize object detection to enhance the occupancy grid map and train a structural localizability measuring network with it. As a result, the TLO2M is built based on structural localizability and feature similarity. The proposed method achieves a 0.955 success rate on the homing task in the Gibson environments.
|
|
13:30-13:45, Paper MoBT1.4 | |
Positional Weighted Memory Module for Semantic Segmentation |
|
Hyun, Junhyuk | Yonsei University |
Kim, Euntai | Yonsei University |
Keywords: Object Recognition, AI Reasoning Methods for Robotics
Abstract: A robot must be aware of its surroundings while driving. Semantic segmentation is the task of recognizing the surrounding environment from images. In this paper, we confirm that semantic segmentation performance improves when a Transformer is applied. Furthermore, we improve the query-key (QK) matching inside the Transformer to achieve higher performance. We used the Cityscapes dataset for the experiments, and the proposed method achieved 0.43% higher performance than positional encoding.
|
|
13:45-14:00, Paper MoBT1.5 | |
Data Association Based on Sensor Fusion for Multi Object Tracking |
|
Jo, Sungjin | Yonsei University |
Cho, Minho | Yonsei University |
Kim, Dong Yeop | KETI (Korea Electronics Technology Institute) |
Hwang, Jung-Hoon | Korea Electronics Technology Institute |
Kim, Euntai | Yonsei University |
Keywords: Wheeled Mobile Robots, Intelligent Robotic Vehicles
Abstract: A mobile robot driving in crowded real-world environments should be capable of reliably detecting and tracking multiple objects. While most multi-object tracking (MOT) approaches exhibit impressive advances, they follow a detection-based tracking framework. In real environments, objects outside common detection classes, such as kickboards, exist, so model-free tracking that is not constrained by detection classes is required. In this paper, we propose a new data association method based on sensor fusion between LiDAR and camera for model-free object tracking.
|
|
14:00-14:15, Paper MoBT1.6 | |
Floorplan-Based Localization and Map Update Using LiDAR Sensor |
|
Song, Seungwon | KAIST |
Myung, Hyun | KAIST (Korea Adv. Inst. Sci. & Tech.) |
Keywords: Simultaneous Localization and Mapping (SLAM), Range, Sonar, GPS and Inertial Sensing
Abstract: In this paper, we propose a novel localization and map update method for indoor environments using a LiDAR sensor and a floorplan. Existing indoor localization algorithms need a previously generated 3D map and match it to the actual structure to obtain a precise location, because there is no position reference such as GPS. To solve this problem, we propose a localization and map update method based on the floorplan, which is generally available. For this, 3D LiDAR point clouds are accumulated and the ceiling parts, which are less sensitive to environmental changes such as furniture, are extracted. Thereafter, lines are extracted from the border of the ceiling, and the position is estimated with the Monte Carlo Localization algorithm by comparing them with the floorplan lines. Although the ceiling is less sensitive to environmental changes, the floorplan and the actual environment may still differ due to structural modifications such as added walls. Therefore, if the estimated position is determined to be accurate, the extracted lines are merged into the previous floorplan line map. The proposed algorithm was tested in a real environment, and the results verify its performance.
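The Monte Carlo Localization cycle the abstract builds on, predict, weight against the map, resample, can be sketched in one dimension. The Gaussian measurement model below is a toy stand-in for the line-to-floorplan comparison, and all numbers are illustrative:

```python
import math
import random

random.seed(0)

def mcl_step(particles, likelihood, motion=0.0, noise=0.05):
    """One predict-weight-resample cycle of Monte Carlo Localization."""
    # Predict: propagate each particle through a noisy motion model.
    moved = [p + motion + random.gauss(0.0, noise) for p in particles]
    # Weight: score each particle against the measurement (map agreement).
    w = [likelihood(p) for p in moved]
    total = sum(w)
    w = [wi / total for wi in w]
    # Resample: draw a new particle set proportionally to the weights.
    return random.choices(moved, weights=w, k=len(moved))

def likelihood(p):
    # Toy measurement model: the "floorplan match" peaks at position 2.0.
    return math.exp(-((p - 2.0) ** 2) / (2 * 0.1 ** 2))

particles = [random.uniform(0.0, 4.0) for _ in range(500)]
for _ in range(10):
    particles = mcl_step(particles, likelihood)
estimate = sum(particles) / len(particles)
print(round(estimate, 2))
```

After a few cycles the particle cloud collapses around the position where the measurement model agrees best with the map, which is the behavior the paper relies on for floorplan-based localization.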
|
|
14:15-14:30, Paper MoBT1.7 | |
Cost Analysis for Semantic Drivable Area of Autonomous Navigation Planner |
|
Kim, Dong Yeop | KETI (Korea Electronics Technology Institute) |
Son, Hyun Sik | Korea Electronics Technology Institute |
Lee, Jae Min | KETI (Korea Electronics Technology Institute) |
Kim, Keunhwan | Korea Electronics Technology Institute |
Kim, Tae-Keun | Korea Electronics Technology Institute |
Kim, Euntai | Yonsei University |
Hwang, Jung-Hoon | Korea Electronics Technology Institute |
Keywords: Intelligent Robotic Vehicles, Wheeled Mobile Robots, Motion Planning and Obstacle Avoidance
Abstract: For mobile robot navigation, many algorithms exploit the concept of cost used in optimization theory. In other words, the cost is key to extracting spatial intelligence for the mobile robot. We designed a cost for a last-mile delivery robot in urban space and propose it in this paper. Our design is based on the time consumed during navigation. Additionally, our mobile robot system uses a bogie suspension to overcome curbs, which is reflected in our cost design.
|
|
14:30-14:45, Paper MoBT1.8 | |
Outdoor Navigation of a Mobile Robot in Crowded Environment |
|
Kang, Keundong | Korea University |
Lee, Jinwon | Korea University |
Cho, Soohyun | Korea University |
Cho, Ikhyeon | Korea University |
Park, Geonhyeok | Korea University |
Cho, Minwoo | Korea University |
Pyo, Daehyun | Korea University |
Chung, Woojin | Korea University |
Keywords: Robotic Systems Architectures and Programming, Wheeled Mobile Robots, Dynamics and Control
Abstract: Developing a safe mobile robot navigation system for delivery services is receiving a lot of attention. To guarantee the safety of a mobile robot, it is necessary to generate a low-risk path. In this study, we propose a safe mobile robot navigation system that reflects information about areas with a high risk of collision in global path planning. Experimental results show that the proposed method generates a path that is safer and more efficient than the one generated by the original method.
|
|
14:45-15:00, Paper MoBT1.9 | |
Curriculum Reinforcement Learning for Robot Navigation Via Rapidly Exploring Randomized Tree |
|
Lee, Kyowoon | Ulsan National Institute of Science and Technology |
Kim, Seongun | Korea Advanced Institute of Science and Technology |
Han, Jiyeon | Korea Advanced Institute of Science and Technology |
Choi, Hwanil | Korea Advanced Institute of Science and Technology |
Matsunaga, Daiki | KAIST |
Choi, Jaesik | Ulsan National Institute of Science and Technology |
Keywords: Motion Planning and Obstacle Avoidance, Wheeled Mobile Robots
Abstract: This paper addresses a challenge facing reinforcement learning (RL) for robot navigation: designing a good reward function for escaping local optima during the policy learning stage in a variety of environments. Previously, this problem was tackled by Hindsight Experience Replay (HER), which exploits previous replays with heuristic goals but underperforms in challenging tasks where goals are difficult to achieve through random exploration. To handle this problem, we propose an efficient solution to curriculum learning for robot navigation in the presence of obstacles. We use a sampling-based planner, the Rapidly Exploring Randomized Tree (RRT), to generate intermediate goals that are easy to achieve and to select one that is neither too hard nor too easy for the current agent. The proposed method allows learning the robot navigation policy in settings where the reward is sparse and binary, and therefore avoids the need for designing a complicated shaped reward function. Experimental results on two simple yet illustrative simulated environments show that our proposed algorithm significantly improves performance in terms of sample efficiency and success rate.
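The goal-selection idea, choosing an intermediate planner waypoint that is neither too hard nor too easy, can be sketched as follows. The waypoints, success-probability estimates, and the [0.2, 0.8] difficulty band are hypothetical illustrations, not the paper's parameters:

```python
def select_curriculum_goal(waypoints, success_prob, low=0.2, high=0.8):
    """Pick the hardest waypoint the agent can still sometimes reach.

    waypoints: path from start to the final goal (e.g., from an RRT planner).
    success_prob: estimated probability the current policy reaches each point.
    The band [low, high] excludes goals that are too easy or too hard.
    """
    candidates = [(w, p) for w, p in zip(waypoints, success_prob) if low <= p <= high]
    if not candidates:
        return waypoints[-1]  # fall back to the final goal
    # Prefer the hardest goal still inside the band (lowest success probability).
    return min(candidates, key=lambda wp: wp[1])[0]

path = [(1, 1), (3, 2), (5, 4), (8, 6)]     # hypothetical RRT waypoints
probs = [0.95, 0.7, 0.35, 0.05]             # easy ... nearly impossible
print(select_curriculum_goal(path, probs))  # prints (5, 4)
```

As the policy improves, the success estimates shift and the selected goal moves along the path toward the true goal, which is the curriculum effect described in the abstract.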
|
|
MoCT2 |
Sandpine |
[OS] Cloud Robot Intelligence |
Spotlight |
Chair: Jang, Minsu | Electronics & Telecommunications Research Institute |
|
15:00-15:15, Paper MoCT2.1 | |
Selection of Class-Conditional Filters for Semantic Shifted OOD Detection |
|
Yu, Yeonguk | Gwangju Institute of Science and Technology |
Sung, Ho Shin | Gwangju Institute of Science and Technology |
Kim, Jong-Won | GIST(Gwangju Institute of Science and Technology) |
Lee, Kyoobin | Gwangju Institute of Science and Technology |
Keywords: Object Recognition
Abstract: Deep neural networks have been deployed in a wide range of applications with remarkable performance but can be easily fooled by data that are out-of-distribution (OOD). Recent works have proposed detection methods for OOD benchmarks consisting of small image datasets from separate domains (e.g., an object classification dataset for training samples and a digit classification dataset for OOD samples). However, these methods fail for OOD benchmarks consisting of image datasets from the same domain (e.g., a Korean food classification dataset for training samples and an Italian food classification dataset for OOD samples). To solve this issue, we propose an OOD detection framework that utilizes two simple operations: counting, to find class-wise highly activated filters in the last convolution layer, and summation, to calculate a confidence score from the activations of those filters. The framework is based on our assumption that, given an OOD sample from the same domain, the CNN model will produce similar feature maps through all its filters, while some differences might be found in the feature maps from highly activated filters. We show that our method achieves the highest performance on OOD benchmarks built from the Food-101 dataset, providing meaningful insight into how this issue, encountered in recent works, can be effectively addressed.
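The two operations, counting class-wise highly activated filters and summing their activations into a confidence score, can be sketched with plain lists. The four-filter activations and two classes below are fabricated toy data, not Food-101 features:

```python
def top_filters_per_class(train_acts, labels, k=2):
    """Count which filters activate most strongly for each class (training pass)."""
    by_class = {}
    for acts, y in zip(train_acts, labels):
        sums = by_class.setdefault(y, [0.0] * len(acts))
        for i, a in enumerate(acts):
            sums[i] += a
    # Keep the indices of the k most-activated filters per class.
    return {y: sorted(range(len(s)), key=lambda i: -s[i])[:k]
            for y, s in by_class.items()}

def ood_score(acts, predicted_class, top):
    """Confidence = summed activation over the predicted class's top filters."""
    return sum(acts[i] for i in top[predicted_class])

# Hypothetical last-layer activations (4 filters) for two training classes.
train = [[5.0, 0.1, 4.0, 0.2], [4.5, 0.0, 3.5, 0.1],   # class 0
         [0.2, 6.0, 0.1, 5.0], [0.1, 5.5, 0.3, 4.5]]   # class 1
labels = [0, 0, 1, 1]
top = top_filters_per_class(train, labels)

in_dist = [4.8, 0.2, 3.9, 0.1]   # activates class 0's top filters strongly
ood     = [1.0, 1.0, 1.0, 1.0]   # flat response: low score on the top filters
print(ood_score(in_dist, 0, top), ood_score(ood, 0, top))
```

An in-distribution sample scores high because its activation mass concentrates on its class's preferred filters, while a same-domain OOD sample spreads its activations more evenly and scores low.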
|
|
15:15-15:30, Paper MoCT2.2 | |
Study on Requirements of Cloud-Based Development Environment for Easy Development of ROS Modules |
|
Kim, Mi-sook | Kangwon National University |
Kim, SangGyu | Kangwon National University |
Song, ByoungYoul | Electronics and Telecommunications Research Institute |
Jeong, Young-sook | Electronics and Telecommunications Research Institute |
Park, Hong Seong | Kangwon National University |
Keywords: Robotic Systems Architectures and Programming, Modular Robots
Abstract: This paper analyzes use cases of machine learning modules in Amazon's cloud-based Robot Operating System (ROS) development environment and in a general ROS development environment in order to investigate the requirements of such a tool. Based on this analysis, the paper proposes requirements for an optimal cloud-based ROS development environment.
|
|
15:30-15:45, Paper MoCT2.3 | |
Optimizing Human-Robot Interaction through Personalization: An Evidence-Informed Guide to Designing Social Service Robots |
|
Gasteiger, Norina | The University of Auckland |
Hellou, Mehdi | The University of Auckland |
Ahn, Ho Seok | The University of Auckland, Auckland |
Keywords: Social and Socially Assistive Robotics
Abstract: The application of social service robots is increasing in our daily lives, including in healthcare, education and public spaces (e.g., hospitality or museums). With this near-ubiquity comes the requirement of robots to interact with a diverse group of individuals; each with their own needs, preferences and expectations. This review identifies and synthesizes previous literature on personalization and culturalization within the field of human-robot interaction. Three over-arching considerations are discussed: (1) intended purpose and actions (i.e. service and behavior), (2) interactive functions (i.e. communication, language and proxemics) and (3) physical appearance of robots. Lastly, recommendations are made for other roboticists, when designing individually responsive social service robots.
|
|
15:45-16:00, Paper MoCT2.4 | |
Investigating Frontline Service Employees to Identify Behavioral Goals of Restaurant Service Robot: An Exploratory Study |
|
Kim, Min-Gyu | Korea Institute of Robot and Convergence |
Yoon, Heeyoon | Korea Institute of Robotics & Technology Convergence |
Kim, Juhyun | Korea Institute of Robotics & Technology Convergence |
Kim, Jungjun | Korea Institute of Robotics and Technology Convergence |
Sohn, Dongseop | Korea Institute of Robot and Convergence |
Kim, Kyoung Ho | Korea Institute of Robotics & Technology Convergence |
Keywords: Social and Socially Assistive Robotics
Abstract: This paper presents an investigation of frontline service employees conducted to identify and suggest behavioral goals for restaurant service robots. We conducted a survey-based study to examine the optimal service action selection as evaluated by restaurant service employees. In addition, a video-based exploratory behavior analysis was performed to identify behavioral factors that differ between expert and novice employees in a real restaurant. We found that employees have preferred service actions for each service episode in the restaurant setting, and that the behaviors of a frontline restaurant service robot need to be designed proactively for effective service delivery.
|
|
16:00-16:15, Paper MoCT2.5 | |
A Survey on Simulation Environments for Reinforcement Learning |
|
Kim, Taewoo | Electronics and Telecommunications Research Institute |
Jang, Minsu | Electronics & Telecommunications Research Institute |
Kim, Jaehong | ETRI |
Keywords: World Modelling, Social and Socially Assistive Robotics
Abstract: Most recent studies of reinforcement learning and robotics employ computer simulation because of its time and cost advantages. For this reason, users have to spend time investigating which environment is optimal for their purposes. This paper presents a survey that can guide users in choosing a simulation environment. The investigation covers the features, brief historical backgrounds, license policies, and robot and object description formats of the eight environments most popular in robot RL studies. We also propose a quantitative evaluation method for these simulation environments that considers their features from a pragmatic point of view.
|
|
MoDT1 |
Lakai Ballroom |
Best Paper Award Competition 2 |
Spotlight |
Chair: Yi, Byung-Ju | Hanyang University |
|
18:00-18:15, Paper MoDT1.1 | |
Comparison of Omnidirectional Image Retrieval Based on Image Representations |
|
Yun, Jongseob | NAVER LABS |
Yeon, Suyong | NAVER LABS |
Lee, Taejae | NAVER LABS |
Lee, Donghwan | NAVER LABS |
Keywords: Computer Vision and Visual Servoing
Abstract: With the increasing importance of visual localization in robotics, image retrieval methods that can improve visual localization have been studied. With the advent of multi-camera robot platforms, attention is now being paid to omnidirectional image retrieval. However, omnidirectional image retrieval has drawbacks that make it difficult to apply standard image retrieval methods: it not only suffers from image distortion but also incurs inefficiency during feature extraction, since omnidirectional images are defined in non-Euclidean space. In this paper, we discuss the effect of image distortion and this inefficiency on the image retrieval task. We compare two omnidirectional image representations with different levels of image distortion and feature extraction efficiency using a single image retrieval network. The Gangnam street view dataset, captured by a multi-camera system on a moving vehicle, is used for evaluation. From the comparison results, we recommend a suitable approach for omnidirectional image retrieval.
|
|
18:15-18:30, Paper MoDT1.2 | |
Object Manipulation System Based on Image-Based Reinforcement Learning |
|
Kim, Sunin | Korea University |
Jo, HyunJun | Korea University |
Song, Jae-Bok | Korea University |
Keywords: AI Reasoning Methods for Robotics, Manipulation Planning and Control, Robotic Systems Architectures and Programming
Abstract: Advances in reinforcement learning algorithms allow robots to learn complex tasks such as object manipulation. However, most of these tasks have been implemented only in simulation. In addition, it is difficult to apply reinforcement learning in the real world because it is not easy to obtain the states required during learning, such as the position of an object, and it is difficult to collect large amounts of data. Moreover, existing reinforcement learning algorithms are designed to learn one task, which limits learning various tasks. To address these problems, a novel system is proposed that can be applied to the real world after learning multiple tasks in simulation. First, by proposing a generative model that converts real-world images into simulation ones, simulation-to-real-world transfer becomes possible, in which the results of learning in simulation are applied directly to the real world. In addition, to learn multiple tasks from images, a reinforcement learning algorithm combining a variational auto-encoder (VAE) and an asymmetric Actor-Critic was developed. To verify this system, experiments were conducted in which the algorithms learned in simulation were applied to the real world; they achieved a success rate of 83.8%, showing that the proposed system can successfully perform multiple manipulation tasks.
|
|
18:30-18:45, Paper MoDT1.3 | |
Pepper to Fall: A Perception Method for Sweet Pepper Robotic Harvesting |
|
Polic, Marsela | University of Zagreb |
Tabak, Jelena | University of Zagreb, Faculty of Electrical Engineering and Comp |
Orsag, Matko | University of Zagreb, Faculty of Electrical Engineering and Comp |
Keywords: Object Recognition, Computer Vision and Visual Servoing, Manipulation Planning and Control
Abstract: In this paper we propose a robotic system for picking peppers in a structured robotic greenhouse environment. A commercially available collaborative robot manipulator (cobot) is equipped with an RGB-D camera used to detect a suitable pose for grasping peppers. The detection method is developed using sim2real and transfer learning. Point cloud data is used to detect the pepper's 6-DOF grasping pose through geometric model fitting. A state machine is derived to control the system workflow. A series of experiments is conducted to test the precision and robustness of the detection and the success rate of the harvesting procedure.
|
|
18:45-19:00, Paper MoDT1.4 | |
Deep Skill Chaining from Incomplete Demonstrations |
|
Kang, Minjae | Seoul National University (SNU) |
Oh, Songhwai | Seoul National University |
Keywords: AI Reasoning Methods for Robotics, Learning From Humans, Manipulation Planning and Control
Abstract: Imitation learning is a methodology that trains an agent using demonstrations from skilled experts without external rewards. However, for a complex task with a long horizon, it is challenging to obtain data that exactly match the desired task. In general, humans can easily assign a sequence of simple tasks for performing complex tasks. If a person gives an agent an order of simple tasks to carry out a complex task, we can find a skill sequence efficiently by learning the corresponding skills. However, independently trained low-level skills (simple tasks) are incompatible, so they cannot be performed in sequence without additional refinement. In this context, we propose a method to create a skill chain by connecting independently learned skills. For connecting two consecutive low-level policies, we need to find a new policy defined as a bridge skill. To train a bridge skill, a well-designed reward function is required, but in the real world, only sparse rewards can be given according to the success of the overall task. To complement this issue, we introduce a novel latent-distance reward function from fragmented demonstrations. Also, we use binary classifiers to determine whether the current state is capable of performing the skill that follows. As a result, the skill chain formed from incomplete demonstrations can successfully perform complex tasks which require performing multiple skills in a sequence. In the experiment, we perform manipulation tasks with RGBD images as input in the Baxter simulator implemented using MuJoCo. We verify that skill chains can be successfully trained from incomplete data while confirming that the agent can be trained much more efficiently and stably through the proposed latent-distance rewards.
|
|
19:00-19:15, Paper MoDT1.5 | |
What If There Were No Loops? Large-Scale Graph-Based SLAM with Traffic Sign Detection in an HD Map Using LiDAR Inertial Odometry |
|
Sung, Chang Ki | KAIST |
Jeon, Seulgi | KAIST |
Lim, Hyungtae | Korea Advanced Institute of Science and Technology |
Myung, Hyun | KAIST (Korea Adv. Inst. Sci. & Tech.) |
Keywords: Simultaneous Localization and Mapping (SLAM)
Abstract: This paper proposes a large-scale graph-based SLAM (Simultaneous Localization and Mapping) approach that uses the traffic sign data contained in an HD (high definition) map. The graph is structured by an IMU factor, a LiDAR-inertial factor, a traffic sign factor, and a loop closure factor. The IMU factor is generated by IMU pre-integration, whose result is used to de-skew the point cloud during preprocessing and as an initial guess for LiDAR odometry optimization. The traffic sign factor is generated by the detection and map matching process. Loop closures are searched for based on geometric information in Euclidean space. The graph is optimized whenever the loop closure factor or the traffic sign factor is updated. The proposed method solves the long-term drift problem of SLAM in large-scale environments and improves localization accuracy compared with state-of-the-art LiDAR-inertial odometry methods. The proposed method is also intensively tested on datasets collected in a city where the GPS multi-path problem occurs and inside a campus.
|
|
MoET1 |
Online Only |
[OS] Robots in the Household: A Review of Task Knowledge Acquisition, Planning and Execution - Online Only |
Spotlight |
Chair: Paulius, David Andres | Technical University of Munich |
|
21:00-21:15, Paper MoET1.1 | |
Work in Progress - Automated Generation of Robotic Planning Domains from Observations |
|
Diehl, Maximilian | Chalmers University of Technology |
Ramirez-Amaro, Karinne | Chalmers University of Technology |
Keywords: Learning From Humans, AI Reasoning Methods for Robotics
Abstract: In this paper, we report the results of our latest work on the automated generation of planning operators from human demonstrations, and we present some of our future research ideas. To automatically generate planning operators, our system segments and recognizes different observed actions from human demonstrations. We then propose an automatic extraction method to detect the relevant preconditions and effects from these demonstrations. Finally, our system generates the associated planning operators and finds a sequence of actions that satisfies a user-defined goal using a symbolic planner. The plan is deployed on a simulated TIAGo robot. Our future research directions include learning from and explaining execution failures and detecting cause-effect relationships between demonstrated hand activities and their consequences on the robot's environment. The former is crucial for trust-based and efficient human-robot collaboration and the latter for learning in realistic and dynamic environments.
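One common way to extract preconditions and effects from demonstrations is to intersect symbolic state predicates across observations: shared pre-state predicates become preconditions, and shared state changes become add/delete effects. The sketch below illustrates that idea only; the predicate names and demonstration data are invented, not taken from the paper.

```python
# Hypothetical demonstrations of a "pick" action: each entry records
# the symbolic state (before, after) the observed action.
demos = [
    ({"hand_empty", "on_table(cup)"}, {"holding(cup)"}),
    ({"hand_empty", "on_table(cup)", "door_open"},
     {"holding(cup)", "door_open"}),
]

def extract_operator(demos):
    """Keep only predicates common to all demonstrations:
    preconditions = shared pre-state predicates,
    add effects   = predicates gained in every demo,
    delete effects = predicates lost in every demo."""
    pre = set.intersection(*(before for before, _ in demos))
    add = set.intersection(*(after - before for before, after in demos))
    delete = set.intersection(*(before - after for before, after in demos))
    return {"preconditions": pre, "add": add, "delete": delete}

op = extract_operator(demos)
```

The resulting operator has the shape of a STRIPS-style action that a symbolic planner can chain toward a user-defined goal.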
|
|
21:15-21:30, Paper MoET1.2 | |
Evaluating Recipes Generated from Functional Object-Oriented Network |
|
Sakib, Md Sadman | University of South Florida |
Baez, Hailey | University of South Florida |
Paulius, David Andres | Technische Universität München (Technical University of Munich) |
Sun, Yu | University of South Florida |
Keywords: AI Reasoning Methods for Robotics, Learning From Humans
Abstract: The functional object-oriented network (FOON) has been introduced as a knowledge representation, which takes the form of a graph, for symbolic task planning. To get a sequential plan for a manipulation task, a robot can obtain a task tree through a knowledge retrieval process from the FOON. To evaluate the quality of an acquired task tree, we compare it with a conventional form of task knowledge, such as recipes or manuals. We first automatically convert task trees to recipes, and we then compare them with the human-created recipes in the Recipe1M+ dataset via a survey. Our preliminary study finds no significant difference between the recipes in Recipe1M+ and the recipes generated from FOON task trees in terms of correctness, completeness, and clarity.
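Converting a task tree into recipe text can be as simple as filling a sentence template per functional unit (motion plus input/output objects). This is a hedged sketch of that conversion step; the tree structure and object names are illustrative assumptions, not the FOON schema or Recipe1M+ data.

```python
# Hypothetical fragment of a task tree: each functional unit pairs a
# motion with its input and output objects.
task_tree = [
    {"motion": "pour", "inputs": ["water", "kettle"],
     "outputs": ["kettle(filled)"]},
    {"motion": "boil", "inputs": ["kettle(filled)"],
     "outputs": ["water(boiled)"]},
]

def unit_to_step(i, unit):
    """Render one functional unit as a numbered recipe instruction."""
    return (f"{i}. {unit['motion'].capitalize()} "
            f"{' and '.join(unit['inputs'])} "
            f"to get {', '.join(unit['outputs'])}.")

def tree_to_recipe(tree):
    return "\n".join(unit_to_step(i, u) for i, u in enumerate(tree, 1))

recipe = tree_to_recipe(task_tree)
```

Text produced this way can then be compared against human-written recipes in a survey, as the abstract describes.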
|
|
21:30-21:45, Paper MoET1.3 | |
A Road-Map to Robot Task Execution with the Functional Object-Oriented Network |
|
Paulius, David Andres | Technische Universität München (Technical University of Munich) |
Agostini, Alejandro | Technical University of Munich |
Sun, Yu | University of South Florida |
Lee, Dongheui | Technical University of Munich |
Keywords: Learning From Humans, AI Reasoning Methods for Robotics
Abstract: Following works on joint object-action representations, the functional object-oriented network (FOON) was introduced as a knowledge graph representation for robots. Taking the form of a bipartite graph, a FOON contains symbolic or high-level information that would be pertinent to a robot's understanding of its environment and tasks in a way that mirrors human understanding of actions. In this work, we outline a road-map for future development of FOON and its application in robotic systems for task planning as well as knowledge acquisition from demonstration. We propose preliminary ideas to show how a FOON can be created in a real-world scenario with a robot and human teacher in a way that can jointly augment existing knowledge in a FOON and teach a robot the skills it needs to replicate the demonstrated actions and solve a given manipulation problem.
|
| |