Last updated on January 18, 2021. This conference program is tentative and subject to change.
Technical Program for Tuesday, January 12, 2021
TuC1 Special Session, Room 1
Applied Field Robotics through Machine Learning - Part 1
Chair: Miyagusuku, Renato | Utsunomiya University
Organizer: Yamashita, Atsushi | The University of Tokyo
Organizer: Kobayashi, Yuichi | Shizuoka University
Organizer: Miyagusuku, Renato | Utsunomiya University
Organizer: Louhi Kasahara, Jun Younes | The University of Tokyo

15:50-16:10, Paper TuC1.1
Positive Weak Supervision Quality Increase by Consolidation for Acoustic Defect Detection in Concrete Structures (I)
Louhi Kasahara, Jun Younes | The University of Tokyo
Yamashita, Atsushi | The University of Tokyo
Asama, Hajime | The University of Tokyo
Keywords: Machine Learning, Automation Systems, Sensor Fusion
Abstract: The aging of concrete social infrastructure such as tunnels, bridges, and highways is a growing concern worldwide. These structures require careful inspection to ensure their users' safety, and traditional manual methods are not viable due to the growing population of structures in need of testing and the manpower shortage. Among inspection methods, the hammering test has been the focus of several previous works, notably including weakly supervised approaches. These approaches query a human user on the similarity of random audio sample pairs to transform the feature space into one suited for defect detection. However, the quality of the weak supervision obtained in this way is often variable. Therefore, we propose a method to improve positive weak supervision quality by consolidating the dataset prior to the query process. Experiments conducted with concrete test blocks showed the effectiveness of the proposed method.

16:10-16:30, Paper TuC1.2
An Unsupervised Learning Approach Toward Automatic Selection of Recognition Parameters for Mobile Robot Navigation in Less Structured Environments (I)
Tanaka, Seima | Shizuoka University
Kobayashi, Yuichi | Shizuoka University
Keywords: Machine Learning, Vision Systems, Autonomous Vehicle Navigation
Abstract: Autonomous mobile robots are expected to act in less structured environments such as outdoor settings, specifically aiming at agricultural use. In such environments, it is desirable that the environment recognition needed to detect pathways for the robot can be prepared without costly hand-tuning by an engineer. This paper presents an unsupervised learning approach that supports the automatic selection of sensor features and parameters for sensor information processing. As an example of a sensor feature, lines are detected in a 2D image obtained by a depth camera, their corresponding 3D configurations are calculated using depth information, and the resulting line segments in 3D space serve as cues to detect pathways for a mobile robot. The image resolution is one parameter that requires adjustment depending on the environment. The proposed method obtains the features necessary for automatic selection of an appropriate resolution parameter by performing mean shift clustering of the 3D line configurations for different resolution parameters. The proposed unsupervised learning approach was tested in multiple environments with diverse appearances, and the results suggest that it is an effective method for automatically selecting appropriate parameters depending on the environment.
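To make the clustering step concrete, here is a rough Python sketch (not the authors' code; the line-segment feature encoding, the bandwidth choice, and the compactness criterion are all assumptions) that clusters 3D line-segment features obtained at candidate resolutions with mean shift and picks the resolution whose clusters are most compact.

```python
# Illustrative sketch: mean shift clustering of 3D line-segment features
# extracted at different image resolutions, compared by cluster compactness.
import numpy as np
from sklearn.cluster import MeanShift, estimate_bandwidth

def cluster_score(line_features: np.ndarray) -> float:
    """line_features: (N, D) array, e.g. midpoint (x, y, z) + direction (dx, dy, dz)."""
    bandwidth = estimate_bandwidth(line_features, quantile=0.2)
    ms = MeanShift(bandwidth=bandwidth).fit(line_features)
    labels, centers = ms.labels_, ms.cluster_centers_
    # Mean distance of each feature to its cluster center (lower = more compact).
    return float(np.mean(np.linalg.norm(line_features - centers[labels], axis=1)))

# Compare candidate resolutions and keep the one whose 3D line clusters are most compact.
features_per_resolution = {
    320: np.random.rand(60, 6),   # placeholder for lines detected at 320 px width
    640: np.random.rand(120, 6),  # placeholder for lines detected at 640 px width
}
best = min(features_per_resolution, key=lambda r: cluster_score(features_per_resolution[r]))
print("selected resolution:", best)
```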

16:30-16:50, Paper TuC1.3
Effects of Video Filters for Learning an Action Recognition Model for Construction Machinery from Simulated Training Data (I)
Sim, Jinhyeok | The University of Tokyo
Louhi Kasahara, Jun Younes | The University of Tokyo
Chikushi, Shota | The University of Tokyo
Nagatani, Keiji | The University of Tokyo
Chiba, Takumi | Fujita Co., Ltd
Chayama, Kazuhiro | Fujita Co., Ltd
Yamashita, Atsushi | The University of Tokyo
Asama, Hajime | The University of Tokyo
Keywords: Vision Systems, Machine Learning, Automation Systems
Abstract: In the construction industry, construction machinery is an important factor in the overall productivity and efficiency of a worksite. Thus, emphasis is put on monitoring the actions conducted by construction machinery. This was traditionally done manually by humans, which is a time-consuming and laborious task. Automatic action recognition of construction machinery is therefore needed. The field of action recognition is predominantly occupied by Deep Learning approaches, and several previous works focused on adapting such approaches to construction machinery. However, the issue of obtaining training data is particularly troublesome for construction machinery. Our previous work proposed a Deep Learning method for learning an action recognition model from training data generated in a simulator using video filters, but the precise contributions of the introduced video filters were unclear. The purpose of this study is therefore to clarify the effects of video filters for learning an action recognition model for construction machinery from simulated training data.

16:50-17:10, Paper TuC1.4
Evaluation of Mapping and Path Planning for Non-Holonomic Mobile Robot Navigation in Narrow Pathway for Agricultural Application (I)
Choudhary, Anupam | Shizuoka University
Kobayashi, Yuichi | Shizuoka University
Arjonilla Garcia, Francisco Jesus | Shizuoka University, Graduate School of Science and Technology
Nagasaka, Satoshi | Somic Management Holdings Inc
Koike, Megumu | Somic Management Holdings Inc
Keywords: Motion and Path Planning, Sensor Fusion, Autonomous Vehicle Navigation
Abstract: This paper evaluates mapping and path planning methods for a mobile robot with non-holonomic constraints in narrow pathways. The selection of sensors such as a depth camera or a LiDAR is a complex problem, as it depends on the application and on demands for cost, robustness, and data processing. Along with sensor selection, map generation is an essential task for mobile robot navigation. This paper presents an experimental evaluation of a laser-based mapping algorithm, Gmapping, and a vision-based one, RTAB-Map. The platform used for autonomous navigation is a mobile robot with a non-holonomic constraint. Path planning for such a robot is more complex because not all arbitrary trajectories are kinematically feasible. The application is to transfer agricultural products in a greenhouse from one place to another. Greenhouse pathways are generally narrow, which often causes the planner to fail to generate a traversable trajectory if the mobile robot is restricted to forward movement; hence, switchback (forward and backward) path planning is essential to navigate such environments. We therefore implement Reeds-Shepp curve based path planning for a mobile robot with a non-holonomic constraint to navigate narrow pathways. Reeds-Shepp curves can generate various combinations of such switchback trajectories and remain unmatched in computational efficiency and reliability compared to other curves. The effectiveness of the proposed path planning method is validated experimentally.
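The following minimal Python sketch illustrates why switchback maneuvers matter in narrow pathways. It does not implement Reeds-Shepp curves themselves; it rolls out a kinematic bicycle model over hand-picked forward/backward segments, of the kind a Reeds-Shepp planner would output, and checks that all poses stay within an assumed corridor width.

```python
# Minimal sketch (all values are placeholders): checking whether a
# forward/backward (switchback) maneuver stays inside a narrow corridor.
import math

def rollout(segments, x=0.0, y=0.0, yaw=0.0, wheelbase=1.0, dt=0.05):
    poses = [(x, y, yaw)]
    for v, steer, duration in segments:   # v < 0 means reversing
        for _ in range(int(duration / dt)):
            x += v * math.cos(yaw) * dt
            y += v * math.sin(yaw) * dt
            yaw += v / wheelbase * math.tan(steer) * dt
            poses.append((x, y, yaw))
    return poses

# Forward arc, then reverse arc with opposite steering: a simple switchback.
switchback = [(0.5, 0.4, 2.0), (-0.5, -0.4, 2.0)]
poses = rollout(switchback)
corridor_half_width = 0.6
feasible = all(abs(y) <= corridor_half_width for _, y, _ in poses)
print("stays inside corridor:", feasible)
```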

17:10-17:30, Paper TuC1.5
Potential of Incorporating Motion Estimation for Image Captioning (I)
Iwamura, Kiyohiko | The University of Tokyo
Louhi Kasahara, Jun Younes | The University of Tokyo
Moro, Alessandro | Ritecs Inc
Yamashita, Atsushi | The University of Tokyo
Asama, Hajime | The University of Tokyo
Keywords: Machine Learning, Vision Systems, Systems for Service/Assistive Applications
Abstract: Automatic image captioning has various important applications such as indexing images on the Web or depicting visual contents for the visually impaired. Recently, deep learning based probabilistic frameworks have been heavily researched for image captioning. However, existing deep learning methods are established only on visual features, which makes it difficult to generate captions related to motions, because visual features from images do not include motion features. In this paper, we propose a novel, end-to-end trainable, deep learning image captioning model that estimates motion features from an image to help generate captions. Our proposed model was evaluated on two datasets: MSR-VTT2016-Image and several copyright-free images. We demonstrate that our proposed method using motion features improves caption generation performance and that the quality of the motion features is important for generating captions.

17:30-17:50, Paper TuC1.6
Distance Invariant Sparse Autoencoder for Wireless Signal Strength Mapping (I)
Miyagusuku, Renato | Utsunomiya University
Ozaki, Koichi | Utsunomiya University
Keywords: Machine Learning, Systems for Field Applications
Abstract: Wireless signal strength based localization can enable robust localization for robots using inexpensive sensors. For this, a location-to-signal-strength map has to be learned for each access point in the environment. Due to the ubiquity of wireless networks in most environments, this can result in tens or hundreds of maps. To reduce the dimensionality of this problem, we employ autoencoders, a popular unsupervised approach for feature extraction and data compression. In particular, we propose the use of sparse autoencoders that learn latent spaces which preserve the relative distance between inputs. Distance invariance between the input and latent spaces allows our system to learn compact representations that permit precise data reconstruction while having a low impact on localization performance when maps from the latent space are used instead of the input space. We demonstrate the feasibility of our approach through experiments in outdoor environments.
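A minimal PyTorch sketch of the loss structure described above (the paper's exact formulation and weightings may differ): reconstruction error, plus an L1 sparsity penalty on the latent code, plus a term penalizing mismatch between pairwise distances in the input and latent spaces.

```python
# Sketch of a distance-preserving sparse autoencoder loss (weights assumed).
import torch
import torch.nn as nn

class SparseAE(nn.Module):
    def __init__(self, n_in=100, n_latent=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_in, 64), nn.ReLU(), nn.Linear(64, n_latent))
        self.dec = nn.Sequential(nn.Linear(n_latent, 64), nn.ReLU(), nn.Linear(64, n_in))

    def forward(self, x):
        z = self.enc(x)
        return z, self.dec(z)

def loss_fn(x, z, x_hat, sparsity_w=1e-3, dist_w=1e-1):
    recon = nn.functional.mse_loss(x_hat, x)
    sparsity = z.abs().mean()                       # L1 penalty on latent codes
    d_in = torch.cdist(x, x)                        # pairwise distances in input space
    d_lat = torch.cdist(z, z)                       # pairwise distances in latent space
    dist = nn.functional.mse_loss(d_lat, d_in)      # distance-invariance term
    return recon + sparsity_w * sparsity + dist_w * dist

model = SparseAE()
x = torch.rand(32, 100)                             # batch of signal-strength vectors
z, x_hat = model(x)
loss = loss_fn(x, z, x_hat)
loss.backward()
```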

TuC2 Regular Session, Room 2
Automation Systems
Chair: Harada, Kensuke | Osaka University
Co-Chair: Lian, Feng-Li | National Taiwan University

15:50-16:10, Paper TuC2.1
Evaluation of Tomato Fruit Harvestability for Robotic Harvesting
Fujinaga, Takuya | Kyushu Institute of Technology
Yasukawa, Shinsuke | Kyushu Institute of Technology
Ishii, Kazuo | Kyushu Institute of Technology
Keywords: Automation Systems, Systems for Field Applications, Vision Systems
Abstract: Harvestability is a quantitative index of how easy tomato fruits are to harvest using a robot. Previous studies on tomato harvesting robots have focused on tomato fruit detection methods, harvesting mechanisms, harvesting success rates, and harvesting times. However, the harvestability of tomato fruits by robots has not been quantitatively assessed. In this paper, we propose a method for evaluating tomato fruit harvestability using a tomato harvesting robot. We first evaluated harvestability qualitatively, based on the results of harvesting experiments conducted in a tomato greenhouse. Harvestability was then quantitatively evaluated using a camera (hereinafter referred to as a hand camera) attached to the end-effector of the developed tomato harvesting robot. The hand camera consists of an RGB camera and a depth camera. The occlusion ratio of obstacles (stems, peduncles, and other fruits) with respect to a target fruit is calculated from the RGB and depth images acquired by the hand camera. The larger the occlusion ratio, i.e., the more obstacles in front of the target fruit, the more difficult the fruit is to harvest. Conversely, if the occlusion ratio is low, the harvestability is high. This study shows that the occlusion ratio is effective as a quantitative indicator of tomato fruit harvestability.
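A toy sketch of the occlusion-ratio idea (the segmentation mask, the median depth estimate, and the margin are assumptions, not the paper's exact procedure): within the target fruit's image region, count the fraction of pixels whose depth is clearly closer to the camera than the fruit surface.

```python
# Toy occlusion-ratio computation from a depth image and a fruit mask.
import numpy as np

def occlusion_ratio(depth: np.ndarray, fruit_mask: np.ndarray, margin: float = 0.02) -> float:
    """depth: (H, W) metric depth image; fruit_mask: boolean mask of the target fruit region."""
    region = depth[fruit_mask]
    fruit_depth = np.median(region)                 # robust estimate of fruit surface depth
    occluders = region < (fruit_depth - margin)     # anything clearly in front of the fruit
    return float(np.count_nonzero(occluders) / region.size)

depth = np.full((4, 4), 0.80)
depth[0, :2] = 0.60                                 # a stem passing in front of the fruit
mask = np.ones((4, 4), dtype=bool)
print(occlusion_ratio(depth, mask))                 # 0.125 -> relatively easy to harvest
```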

16:10-16:30, Paper TuC2.2
Azimuth Angle Estimation Based on Sound Wave Reflection for Mirrors and Transparent Objects
Katayama, Hiroki | Nara Institute of Science and Technology
Kiyokawa, Takuya | Nara Institute of Science and Technology
Takamatsu, Jun | Nara Institute of Science and Technology
Ogasawara, Tsukasa | Nara Institute of Science and Technology
Keywords: Automation Systems
Abstract: Man-made environments contain many transparent objects, but it is difficult for robots to recognize them. To address this issue, we use sound waves to recognize objects that are hard to perceive with light. This research addresses the estimation of the azimuth angle of a transparent window to enable a robot to wipe the window. To achieve this, we propose a model-free learning method using Support Vector Regression (SVR) to capture features of the sound reflection from the target plane. To determine the input sound signal for the SVR, we derive a sound reflection model inspired by Shape-from-Shading in the computer vision field. Following the model, we select a frequency-domain property of the sound recorded by a microphone as the input to the SVR. In experiments using a transparent plate, we were able to estimate the azimuth angle with an error of less than 3 degrees in an anechoic chamber. As an example robot application, we developed a robot wiping system that can handle such a transparent window. Even in this realistic environment for the sensor system, we were able to estimate the azimuth angle with an error of less than 5 degrees in almost all cases.
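A hedged sketch of the regression stage only, with placeholder data (the paper's input features follow its reflection model; here they are simplified to a generic frequency-domain feature vector):

```python
# Sketch: SVR mapping frequency-domain reflection features to azimuth angle.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n_samples, n_fft = 200, 64
# Placeholder dataset: FFT magnitudes of recorded reflections and the
# ground-truth azimuth angle of the plate (degrees).
X = rng.random((n_samples, n_fft))
y = rng.uniform(-30, 30, n_samples)

model = SVR(kernel="rbf", C=10.0, epsilon=0.5).fit(X, y)
print("predicted azimuth [deg]:", model.predict(X[:1]))
```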

16:30-16:50, Paper TuC2.3
Error Identification and Recovery in Robotic Snap Assembly
Hayami, Yusuke | Osaka University
Wan, Weiwei | Osaka University
Koyama, Keisuke | Osaka University
Shi, Peihao | Osaka University
Rojas, Juan | Chinese University of Hong Kong
Harada, Kensuke | Osaka University
Keywords: Factory Automation, Automation Systems
Abstract: Existing methods for predicting robotic snap joint assembly cannot predict failures before their occurrence. To address this limitation, this paper proposes a method for predicting error states before an error occurs, thereby enabling timely recovery. Robotic snap joint assembly requires precise positioning; therefore, even a slight offset between parts can lead to assembly failure. To correctly predict error states, we apply functional principal component analysis (fPCA) to 6D force/torque profiles that are terminated before the occurrence of an error. The error state is identified by applying a feature vector to a decision tree, wherein a support vector machine (SVM) is employed at each node. If the estimation accuracy is low, we perform additional probing to identify the error state more reliably. Finally, after identifying the error state, the robot performs an error recovery motion based on the identified state. Through experiments on assembling plastic parts with four snap joints, we show that error states can be correctly estimated and that a robot can recover from the identified error state.
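An illustrative sketch of the classification pipeline (ordinary PCA on resampled force/torque profiles stands in for fPCA, and a single SVM replaces the paper's decision tree of SVMs; all data are placeholders):

```python
# Sketch: principal-component features of F/T profiles fed to an SVM classifier.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n_trials, n_steps, n_axes = 80, 100, 6                 # 6D force/torque over time
profiles = rng.random((n_trials, n_steps * n_axes))    # flattened F/T time series
error_state = rng.integers(0, 3, n_trials)             # e.g. 0: ok, 1: offset-x, 2: offset-y

pca = PCA(n_components=8).fit(profiles)
features = pca.transform(profiles)                     # principal component scores
clf = SVC(kernel="rbf").fit(features, error_state)
print("predicted error state:", clf.predict(features[:1]))
```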

16:50-17:10, Paper TuC2.4
Picking of One Sheet of Cotton Cloth by Rolling up Using Cylindrical Brushes
Kawasaki, Yuichi | Shinshu University
Arnold, Solvi | Shinshu University
Yamazaki, Kimitoshi | Shinshu University
Keywords: Factory Automation, Automation Systems, Vision Systems
Abstract: In this paper, we describe a method for automating the process of lifting a single sheet of cloth from a stack of cotton sheets. In factory manufacturing of cloth products, many of the procedures for installing fabric parts on machines are still performed manually. In this study, we propose a method that first detects the edge of a sheet of cotton cloth and then lifts it by means of a cylindrical brush. Advantages of this method are that it avoids cloth damage and that it has future potential for enabling dexterous manipulation using fine brushes. In verification experiments, the proposed end-effector was attached to the tip of a serial link manipulator, and a system combining a color camera, a tactile sensor, and a lighting source was constructed. Using this system, we confirmed that a certain level of picking performance was obtained. In addition, we catalogued possible failures in the picking task and devised a vision process that can distinguish failure patterns.

17:10-17:30, Paper TuC2.5
Automatic Toolpath Pattern Recommendation for Various Industrial Applications Based on Deep Learning
Xie, Zhen | Agency for Science, Technology and Research (A*STAR)
Somani, Nikhil | Agency for Science, Technology and Research (A*STAR)
Tan, Yong Jian Samuel | University of Glasgow
Chen, Ye Seng Josh | Agency for Science, Technology and Research (A*STAR)
Keywords: Robotics Technology, Automation Systems, Decision Making Systems
Abstract: Automatic toolpath generation (ATG) systems are a class of robotic systems aimed at generating customized patterns of robotic trajectories and toolpaths to automate industrial processes such as polishing, deburring, and masking. ATG systems are especially valuable in automating industrial processes that require high precision or are repetitive or labor intensive. ATG systems face challenges when CAD data for the manufactured object/workpiece is either unavailable or inaccurate. We have developed an adaptive ATG system that can generate the robotic toolpath based on scan data for both contact and non-contact manufacturing processes. In total, five toolpath patterns are designed for our ATG system: zigzag, spiral, meridian, contour, and boundary toolpaths. Since workpieces/coupons differ in shape and size, the choice of a specific toolpath pattern, or a combination of several patterns, for a given process is a critical criterion for optimal manufacturing results. In this paper, we present our toolpath pattern recommendation methods based on deep neural networks in TensorFlow. 3D point cloud segmentation and classification are deployed to identify the object features. Moreover, transfer learning is used to enhance performance and save training time. This collective decision making can be implemented either on the computing edge or in the cloud.

17:30-17:50, Paper TuC2.6
Characterization and Modeling of Grinding Speed and Belt Wear Condition for Robotic Grinding Process
Yang, Hsuan-Yu | National Taiwan University
Lian, Feng-Li | National Taiwan University
Keywords: Factory Automation, Control Theory and Technology, Automation Systems
Abstract: Nowadays, robot arms are increasingly used in the plumbing industry due to the severe problem of labor shortage. When robot arms conduct the grinding process, belt wear occurs gradually and affects workpiece quality. In order to keep workpiece quality constant, a belt speed control system is proposed. The proposed model can compute the wear condition of the abrasive belt, judge when to change to a new belt, and control the belt speed to maintain consistent workpiece quality. Experimental results showed that the coefficients of determination of the model are all higher than 0.878, and that when belt wear affects workpiece quality, the proposed system can update the belt speed to restore consistent workpiece quality within three control rounds.

TuC3 Regular Session, Room 3
Soft Robotics/Bio-Inspired Robotics
Chair: Hayakawa, Takeshi | Chuo University

15:50-16:10, Paper TuC3.1
The Flatworm-Like Pedal Locomotory Robot WORMESH-I: Locomotion Based on the Combination of Pedal Waves
Ganegoda Vidanage, Charaka Rasanga | Saitama University
Hodoshima, Ryuichi | Saitama University
Kotosaka, Shinya | Saitama University
Keywords: Biologically-Inspired Robotic Systems
Abstract: Flatworms are dorsoventrally flattened, bilaterally symmetrical, and soft-bodied. They can move on rough terrain, swim, and climb a shore reef using pedal locomotion with continuous gliding propulsion along the bottom of the body. Inspired by the flatworm, we have been developing a flatworm-like robot consisting of identical modules connected via multi-degree-of-freedom (DOF) joints. Pedal locomotion is the primary locomotor mode; however, motion generation is highly complex. Therefore, this research focused on the locomotion of a flatworm-like pedal locomotory robot for translational, spinning, and omnidirectional motions. We propose a new method of generating various locomotion patterns by combining multiple pedal waves. The proposed method was verified in a simulator with a physics engine. After this verification, we analyzed locomotion characteristics considering 1) the parameters of the pedal wave based on the serpenoid curve, namely the initial winding angle α and the temporal frequency of the traveling wave ω, and 2) the number of robot modules arranged in a matrix form: 3, 4, and 8. The simulation results showed the relationship between the translational and angular velocity of the COG and the parameters of the pedal wave.
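As a rough illustration of how a pedal wave parameterized by α and ω can be turned into per-module joint commands (the mapping to WORMESH-I's actual joints is an assumption for illustration), consider this traveling-wave generator:

```python
# Illustrative serpenoid-style traveling-wave generator for module joints.
import math

def pedal_wave(module_index, t, n_modules=4, alpha=math.radians(30), omega=2.0, n_waves=1.0):
    """Joint angle [rad] of one module at time t for a traveling pedal wave."""
    phase = 2.0 * math.pi * n_waves * module_index / n_modules
    return alpha * math.sin(omega * t - phase)

t = 0.5
angles = [pedal_wave(i, t) for i in range(4)]
print([round(a, 3) for a in angles])
```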

16:10-16:30, Paper TuC3.2
Emergence of Motor Synergy in Multi-Directional Reaching with Deep Reinforcement Learning
Han, Jihui | Tohoku University
Chai, Jiazheng | Tohoku University
Hayashibe, Mitsuhiro | Tohoku University
Keywords: Machine Learning, Biologically-Inspired Robotic Systems, Biomimetics
Abstract: In this study, we apply Deep Reinforcement Learning (DRL) to a full-dimensional 7-degree-of-freedom arm reaching task and demonstrate the relations among motion error, energy, and synergy emergence during the learning process, to reveal the mechanism by which motor synergy is employed. Although synergy information is never encoded into the reward function, the synergy effect naturally emerges, leading to a situation similar to human motion learning. To the best of our knowledge, this is a pioneering study verifying a concurrent relation between the error-energy index and synergy development in DRL for multi-directional reaching tasks. In addition, our proposed feedback-augmented DRL controller shows better capability than DRL alone in terms of the error-energy index.
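One common way to quantify synergy emergence, which may differ from the paper's exact index, is the fraction of variance in logged joint commands explained by a few principal components:

```python
# Sketch: synergy index as variance explained by the first k principal components.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
joint_commands = rng.random((1000, 7))           # 7-DoF actions logged during learning

pca = PCA().fit(joint_commands)
k = 3                                            # number of candidate synergies
synergy_index = float(np.sum(pca.explained_variance_ratio_[:k]))
print(f"variance explained by {k} synergies: {synergy_index:.2f}")
```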

16:30-16:50, Paper TuC3.3
Laser Scanning Drive of the Peristaltic Micro-Gelrobot with Soft Rigid Hybrid Structures
Kodera, Shunnosuke | Chuo University
Watanabe, Tomoki | Chuo University
Yokoyama, Yoshiyuki | Toyama Industrial Technology Research and Development Center
Hayakawa, Takeshi | Chuo University
Keywords: Micro/Nano Systems, Soft Robotics, Biomimetics
Abstract: We present a micro-gelrobot driven by laser irradiation. The proposed micro-gelrobot mimics living organisms and moves by peristaltic motion. In this paper, we show that the micro-gelrobot has multiple soft structures and rigid structures. The soft structures work as actuators, like muscles, and the rigid structures work as supporting bodies, like skeletons. For the rigid structures, SU-8, a hard photoresist, is used. For the soft actuators, bioresist, a photo-patternable temperature-responsive gel, is used. Bioresist swells by absorbing water at temperatures below 32°C and shrinks by releasing water at temperatures above 32°C. We use this volume change as the actuator displacement. In addition, we mixed the bioresist with graphene as a light absorber to actuate the gel actuator by near-infrared laser irradiation. The proposed micro-gelrobot is fabricated using photolithography, a common MEMS fabrication process. The fabrication additionally includes a release process that frees the robot from the glass substrate, for which we use a sacrificial layer of dextran, a kind of polysaccharide. We use a digital mirror device (DMD) to irradiate patterned light and actuate the multiple gel actuators. We succeeded in driving the proposed micro-gelrobot using laser scanning and evaluated the displacement of the peristaltic motion. The proposed micro-gelrobot can move forward by performing a peristaltic motion.

16:50-17:10, Paper TuC3.4
Stiffness Control of Variable Stiffness Link Using a Conductive Fabric Based Proximity Sensor
Ishihara, Mana | Ritsumeikan University
Matsuno, Takahiro | Ritsumeikan University
Althoefer, Kaspar | Queen Mary University of London
Hirai, Shinichi | Ritsumeikan University
Keywords: Soft Robotics, Human-Robot Cooperation/Collaboration
Abstract: With ongoing interest in creating solutions for safe physical interaction between humans and robots, new approaches to the design of robot arms are appearing. One example is the recently developed Variable Stiffness Link (VSL), a new type of inflatable soft robot link made of flexible yet inextensible fabric sleeves and capable of changing its stiffness. At high stiffness, the VSL can perform comparably to a rigid link within a standard robot arm, e.g., carrying out picking and placing tasks in a factory environment. However, by decreasing the air pressure in such links, their stiffness also decreases and the robot arm's compliance increases. Hence, when a human erroneously intrudes into the robot's working area, safety can be ensured by simply reducing the pressure in the robot's links. In this context, a major challenge is to determine whether a human is within the robot's range. In this study, we propose to integrate a proximity, capacitance-based sensor made of conductive fabric with a VSL. Since the proposed proximity sensor is made of fabric, it is ideally suited for integration with the VSL: the sensor material is flexible and at the same time inextensible, and as such can be used as the outer layer of a VSL without affecting its stiffness controllability. An appropriate control strategy was developed, capable of changing the stiffness of the VSL depending on the distance between the VSL and a nearby human or object as measured by the new sensor. The proposed sensor/control system determines the distance to a human or object in the vicinity of the VSL by evaluating the capacitance measured by the integrated proximity sensor and adjusts the link's stiffness accordingly to ensure safety. It was experimentally validated that our new approach is capable of reducing the VSL's stiffness when a human approaches, even before contact occurs.

TuC4 Special Session, Room 4
Robotic Teleoperation and Environmental Sensing
Chair: Woo, Hanwool | The University of Tokyo
Co-Chair: Ji, Yonghoon | JAIST
Organizer: Woo, Hanwool | The University of Tokyo
Organizer: Tamura, Yusuke | Tohoku University
Organizer: Kono, Hitoshi | Tokyo Polytechnic University
Organizer: Ji, Yonghoon | JAIST
Organizer: Fujii, Hiromitsu | Chiba Institute of Technology

15:50-16:10, Paper TuC4.1
Gamma-Ray Image Noise Generation Using Energy-Image Converter Based on Image Histogram (I)
Komatsu, Ren | The University of Tokyo
Woo, Hanwool | The University of Tokyo
Tamura, Yusuke | Tohoku University
Yamashita, Atsushi | The University of Tokyo
Asama, Hajime | The University of Tokyo
Keywords: System Simulation, Vision Systems
Abstract: We propose a novel method to simulate image noise caused by gamma-ray irradiation. Monte Carlo simulation is utilized to calculate the interaction between the gamma-ray and the image sensor, and the energy deposit in each pixel is estimated. A converter module is proposed for generating image noise from the energy deposit. The conversion is designed so that the converted energy deposit has a similar image histogram to the real gamma-ray image noise. The real gamma-ray image noise is obtained by using circular fisheye cameras in gamma-ray irradiation tests. We demonstrate the effectiveness of the proposed method in experiments. We believe that this study would be beneficial for the development of methods for decommissioning the Fukushima Daiichi Nuclear Power Plant by the research community. Our source code is available at https://github.com/matsuren/pixel_noise_sim_geant4.
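The converter's goal, matching the histogram of converted energy deposits to that of real noise, can be illustrated with standard quantile-based histogram matching (a generic technique; the paper's converter design may differ):

```python
# Sketch: quantile-based histogram matching of simulated energy deposits
# to the histogram of real gamma-ray image noise.
import numpy as np

def match_histogram(source: np.ndarray, reference: np.ndarray) -> np.ndarray:
    s_values, s_idx, s_counts = np.unique(source.ravel(), return_inverse=True, return_counts=True)
    r_values, r_counts = np.unique(reference.ravel(), return_counts=True)
    s_quantiles = np.cumsum(s_counts) / source.size
    r_quantiles = np.cumsum(r_counts) / reference.size
    matched = np.interp(s_quantiles, r_quantiles, r_values)   # quantile mapping
    return matched[s_idx].reshape(source.shape)

energy_deposit = np.random.exponential(1.0, (64, 64))   # simulated per-pixel energy
real_noise = np.random.rayleigh(10.0, (64, 64))          # placeholder for measured noise
noise_image = match_histogram(energy_deposit, real_noise)
```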

16:10-16:30, Paper TuC4.2
ICP-Based SLAM Using LiDAR Intensity and Near-Infrared Data (I)
Kataoka, Ryosuke | Chuo University
Suzuki, Ryuki | Chuo University
Ji, Yonghoon | JAIST
Fujii, Hiromitsu | Chiba Institute of Technology
Kono, Hitoshi | Tokyo Polytechnic University
Umeda, Kazunori | Chuo University
Keywords: Autonomous Vehicle Navigation, Sensor Fusion, Vision Systems
Abstract: This paper describes a new scan matching-based mapping method for a mobile robot in an environment containing various physical properties. In scan matching, localization is performed mainly based on the shape information of the environment. However, if only shape information is taken into account, localization cannot be performed correctly and matching may fail when similar shapes exist in different places. Therefore, we propose a new method to improve the accuracy of scan matching by considering the abundant physical features existing in the environment. In this method, it is possible to utilize not only the shape information but also the physical information of the environment as features, by measuring LiDAR intensity and near-infrared data to detect water puddles. Experimental results in a real environment show that a relatively accurate map of the environment can be built by utilizing the physical information.
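A sketch of the data-association idea (the channel scaling and weights are assumptions): augment each scan point with its intensity and near-infrared values so that nearest-neighbor matching inside ICP accounts for physical appearance as well as shape.

```python
# Sketch: ICP correspondence search in a geometry+appearance augmented space.
import numpy as np
from scipy.spatial import cKDTree

def augmented_matches(src_xyz, src_feat, dst_xyz, dst_feat, feat_weight=0.5):
    """xyz: (N, 3) points; feat: (N, 2) [intensity, near-infrared], both normalized."""
    src = np.hstack([src_xyz, feat_weight * src_feat])
    dst = np.hstack([dst_xyz, feat_weight * dst_feat])
    dist, idx = cKDTree(dst).query(src)      # correspondences in the augmented space
    return idx, dist

src_xyz, dst_xyz = np.random.rand(100, 3), np.random.rand(100, 3)
src_feat, dst_feat = np.random.rand(100, 2), np.random.rand(100, 2)
idx, dist = augmented_matches(src_xyz, src_feat, dst_xyz, dst_feat)
```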

16:30-16:50, Paper TuC4.3
Simplified Image Reconstruction Method in 4π Compton Imaging for Radioactive Source Identification (I)
Mukai, Atsushi | Nagoya University
Tomita, Hideki | Nagoya University
Hara, Shintaro | Nagoya University
Yamagishi, Keita | Nagoya University
Terabayashi, Ryohei | Nagoya University
Shimoyama, Tetsuya | Nagoya University
Shimazoe, Kenji | The University of Tokyo
Keywords: Environment Monitoring and Management
Abstract: We propose a simplified image reconstruction method in 4π Compton imaging to determine the intensity and location of point-like gamma-ray sources. The proposed reconstruction can be executed quickly by fixing the apex of each Compton cone to the center of the Compton imaging detector and using a lookup table of cone projection profiles for all possible combinations. Using a prototype CdTe array detector with the proposed reconstruction method, 4π Compton images were obtained for a 137Cs gamma-ray point source far from the detector. The identification of a 137Cs point source in a plastic box was also successfully demonstrated.

16:50-17:10, Paper TuC4.4
Primary-View-Consistent Operation Method of Master Controller in Multiple Screen Teleoperation System (I)
Tsuji, Hiroki | Kobe University
Katayama, Raita | Kobe University
Nagano, Hikaru | Kobe University
Tazaki, Yuichi | Kobe University
Yokokohji, Yasuyoshi | Kobe University
Keywords: Human Interface
Abstract: In teleoperation systems, the reference frame and the posture of the master controller should be consistent with those of the follower arm in the remote environment; otherwise, the operator is forced to perform mental rotation, which is a burden. Such consistency becomes more important in multiple screen teleoperation systems, where the operator can change the primary view, for example among an overhead view, a side view, and a hand camera view, depending on the situation. In this paper, we propose a unified operation method for the master controller that is consistent with the primary view in multiple screen teleoperation systems. We first propose a method for constraining the posture of the master controller during indexing operations in the master-follower mode, so that posture consistency is preserved even when the master-follower connection is disengaged. This constraining method is also applied when the operator changes the primary view, so that the master controller is constrained to the posture with respect to the new reference frame. We also propose assigning an appropriate control mode (either position mode or velocity (joystick) mode) depending on the selected primary view, where the control modes for translational and rotational motion are considered separately. We have verified the effectiveness of the proposed method by experiments.

17:10-17:30, Paper TuC4.5
Non-Contact Temperature Sensing Using Resonance Frequency for the Robotic Arm with LMPA Joints (I)
Seino, Akira | Fukushima University
Seto, Noriaki | Fukushima University
Takahashi, Takayuki | Fukushima University
Keywords: Mechatronics Systems, Hardware Platform, Plant Engineering
Abstract: In this paper, we propose a temperature sensing method for a robotic arm with low melting point alloy (LMPA) joints. At the Fukushima Daiichi Nuclear Power Station (F1NPS), the decommissioning plan is in progress, and to remove the debris safely and reliably, radiation monitoring in the pedestal is needed. For this task, we previously proposed a highly expandable robotic arm system using LMPA at all joints. Each joint of the arm can switch between free and fixed states using the phase change of the LMPA. In order to operate the arm system efficiently, the joint temperature must be sensed to recognize the phase state of the LMPA. We focus on the resonance frequency of an LC resonance circuit as the temperature sensing method. The LMPA is heated through heating plates by induction heating (IH). The resonance frequency of the LC resonant circuit built into the IH device varies according to the temperature of the object being heated. By measuring the oscillating frequency of the IH device and the temperature of the heating plates, we confirmed that they correlate linearly around the melting point of the LMPA, demonstrating the usefulness of this temperature sensing method.
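The sensing principle rests on the LC resonance relation f = 1/(2π√(LC)): heating the target changes the effective inductance seen by the IH coil, shifting the resonance frequency. The sketch below inverts an assumed linear calibration around the melting point (component values and slope are placeholders, not the paper's measurements):

```python
# Sketch: LC resonance frequency and an assumed linear frequency-to-temperature map.
import math

def resonance_frequency(L_henry: float, C_farad: float) -> float:
    return 1.0 / (2.0 * math.pi * math.sqrt(L_henry * C_farad))

def temperature_from_frequency(f_hz, f0_hz, slope_hz_per_degC, t0_degC=25.0):
    """Invert a linear calibration f = f0 + slope * (T - t0)."""
    return t0_degC + (f_hz - f0_hz) / slope_hz_per_degC

f0 = resonance_frequency(10e-6, 100e-9)          # ~159 kHz for L=10 uH, C=100 nF
print(temperature_from_frequency(f0 + 500.0, f0, slope_hz_per_degC=20.0))  # 50.0 degC
```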

17:30-17:50, Paper TuC4.6
Virtual Kinesthetic Teaching for Bimanual Telemanipulation (I)
Jang, Inmo | The University of Manchester
Niu, Hanlin | The University of Manchester
Collins, Emily Charlotte | The University of Manchester
Weightman, Andrew | The University of Manchester
Carrasco, Joaquin | The University of Manchester
Lennox, Barry | The University of Manchester
Keywords: Systems for Field Applications, Human-Robot/System Interaction, Human Interface
Abstract: This paper proposes a novel telemanipulation system that enables a human operator to control a dual-arm robot. The operation provides kinesthetic teaching via a digital twin of the robot which the operator cyber-physically guides to perform a task. Its key enabler is the concept of a virtual reality interactive marker, which serves as a simplified end effector of the digital twin robot. In virtual reality, the operator can interact with the marker using bare hands, which are sensed by the Leap Motion on top of a virtual reality headset. Then, the status (e.g. position/orientation) of the marker is transformed to the corresponding joint space command to the remote robot so that its end effector can follow the marker. We provide the details of the system architecture, and implement the system based on commercial robots/devices (i.e. UR5, Robotiq gripper, Leap Motion), virtual reality, ROS, and Unity3D. Moreover, the paper discusses the technical challenges that we had to address, and the system’s potential benefits from a human-robot interaction perspective.

17:50-18:10, Paper TuC4.7
Localization in a Semantic Map Via Bounding Box Information and Feature Points (I)
Pathak, Sarthak | The University of Tokyo
Uygur, Irem | The University of Tokyo
Lin, Shize | Tsinghua University
Miyagusuku, Renato | Utsunomiya University
Moro, Alessandro | Ritecs Inc
Yamashita, Atsushi | The University of Tokyo
Asama, Hajime | The University of Tokyo
Keywords: Vision Systems, Sensor Fusion
Abstract: Mobile service robots often operate in human environments such as corridors, offices, classrooms, and homes. In order to function properly, they need to be aware of their 6 Degree of Freedom (6 DoF) location. In addition, it is important that they possess semantic information, i.e., knowledge of the types and positions of objects around them. In this paper, we propose a method which obtains all of the above information directly, using a camera as a "semantic sensor". The robot obtains the directions of objects such as doors, windows, and tables around itself from 2D camera images by detecting bounding boxes. It then uses these object locations to localize itself within a floor map of the environment, which is typically available for most indoor environments. However, bounding box information is highly unstable due to changes in lighting, pose, size, etc. Hence, we also semantically tag feature points on detected objects and use them in our Monte-Carlo based localization framework. This increases the robustness and accuracy of our approach, as demonstrated by experiments.

TuD1 Special Session, Room 1
Applied Field Robotics through Machine Learning - Part 2
Chair: Louhi Kasahara, Jun Younes | The University of Tokyo
Organizer: Yamashita, Atsushi | The University of Tokyo
Organizer: Kobayashi, Yuichi | Shizuoka University
Organizer: Miyagusuku, Renato | Utsunomiya University
Organizer: Louhi Kasahara, Jun Younes | The University of Tokyo

18:30-18:50, Paper TuD1.1
Magnetic-Based Localization Considering Robot’s Attitude in Slopes (I)
Fukushima, Akinori | Utsunomiya University
Miyagusuku, Renato | Utsunomiya University
Ozaki, Koichi | Utsunomiya University
Keywords: Autonomous Vehicle Navigation, Systems for Field Applications
Abstract: In this work, we propose a novel approach for robust localization on slopes using inertial sensors and magnetic information. In environments with slopes, sensors tilt with the robot as it moves on a slope, so sensor data cannot be measured correctly, reducing localization accuracy. To perform localization using magnetic information, we use an inertial measurement unit to estimate the robot's attitude and compute accurate magnetic azimuth information that accounts for attitude changes. Furthermore, pitch information computed from the inertial measurement unit can even be used to enhance localization if pitch information about the environment is also collected. By combining these methods with standard geometric landmark localization using 2D laser rangefinders, we have developed a localization system that is robust to the presence of steep slopes, as demonstrated by our testing in real environments.
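Computing a tilt-compensated magnetic azimuth from IMU roll and pitch is a standard operation; a sketch follows (sign conventions vary by sensor frame, and this is not necessarily the authors' exact formulation):

```python
# Sketch: tilt-compensated magnetic heading from roll/pitch and a magnetometer.
import numpy as np

def tilt_compensated_azimuth(mag_xyz, roll, pitch):
    """mag_xyz: magnetometer reading in the sensor frame; roll/pitch in radians."""
    mx, my, mz = mag_xyz
    # Project the magnetic field vector onto the horizontal plane.
    xh = mx * np.cos(pitch) + mz * np.sin(pitch)
    yh = (mx * np.sin(roll) * np.sin(pitch) + my * np.cos(roll)
          - mz * np.sin(roll) * np.cos(pitch))
    return np.arctan2(-yh, xh)                   # heading w.r.t. magnetic north

print(np.degrees(tilt_compensated_azimuth([0.3, 0.1, 0.4], roll=0.05, pitch=0.10)))
```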

18:50-19:10, Paper TuD1.2
Development and Testing of Garbage Detection for Autonomous Robots in Outdoor Environments (I)
Arai, Yuki | Utsunomiya University
Miyagusuku, Renato | Utsunomiya University
Ozaki, Koichi | Utsunomiya University
Keywords: Vision Systems, Machine Learning, Systems for Field Applications
Abstract: In Japan, there is growing concern about labor shortages due to the declining birthrate and aging population, and there are high expectations for robots to help solve such social problems and create industries. However, due to the prohibition of public road tests in Japan, there are few examples of actual applications of robots, and the considerations and problems involved in the practical application of robots are still unclear. In this paper, focusing on the implementation of garbage collection technology, we have developed an autonomous garbage collection robot using deep learning. In addition, we have verified the usefulness of our garbage detection technology in outdoor environments by conducting demonstrations at HANEDA INNOVATION CITY, a large-scale commercial and business complex on private property; at Utsunomiya University; and at Nakanoshima Challenge 2019, a field demonstration experiment in an outdoor environment. Our garbage detector was designed to detect cans, plastic bottles, and lunch boxes automatically. Through experiments on test data and outdoor experiments in the real world, we have confirmed that our detector achieves 95.6% precision and 96.8% recall. Comparisons to other state-of-the-art detectors are also presented.

19:10-19:30, Paper TuD1.3
Accelerated Sim-To-Real Deep Reinforcement Learning: Learning Collision Avoidance from Human Player (I)
Niu, Hanlin | The University of Manchester
Ji, Ze | Cardiff University
Arvin, Farshad | The University of Manchester
Lennox, Barry | The University of Manchester
Yin, Hujun | The University of Manchester
Carrasco, Joaquin | The University of Manchester
Keywords: Machine Learning, Autonomous Vehicle Navigation, Systems for Field Applications
Abstract: This paper presents a sensor-level mapless collision avoidance algorithm for mobile robots that maps raw sensor data to linear and angular velocities and navigates in an unknown environment without a map. An efficient training strategy is proposed to allow a robot to learn from both human experience data and self-exploratory data. A game-format simulation framework is designed to allow the human player to tele-operate the mobile robot to a goal, and human actions are also scored using the reward function. Both human player data and self-playing data are sampled using the prioritised experience replay algorithm. The proposed algorithm and training strategy have been evaluated in two different experimental configurations: Environment 1, a simulated cluttered environment, and Environment 2, a simulated corridor environment. It was demonstrated that the proposed method achieved the same level of reward using only 16% of the training steps required by the standard Deep Deterministic Policy Gradient (DDPG) method in Environment 1 and 20% of that in Environment 2. In the evaluation of 20 random missions, the proposed method achieved no collisions in less than 2 h and 2.5 h of training time in the two Gazebo environments, respectively. The method also generated smoother trajectories than DDPG. The proposed method has also been implemented on a real robot in a real-world environment for performance evaluation. We confirm that the model trained in simulation can be directly applied to the real-world scenario without further fine-tuning, further demonstrating its robustness relative to DDPG.

19:30-19:50, Paper TuD1.4
Brain-Mobility-Interface Based on Deep Learning Techniques for Classifying EEG Signals into Control Commands (I)
Hoshino, Satoshi | Utsunomiya University
Tagami, Takuya | Utsunomiya University
Yagi, Hideaki | Utsunomiya University
Kanda, Kohnosuke | Utsunomiya University
Keywords: Human Interface, Machine Learning, Human-Robot Cooperation/Collaboration
Abstract: This paper proposes an interface that enables users to mentally control a personal mobility robot (PMR). The user interface is named the brain-mobility-interface (BMI). In the BMI, EEG signals of a user are measured and fed as inputs. From the EEG signals, the brain state and face direction of the user, which indicate the intention for PMR control, are estimated. For this purpose, two control command classifiers based on deep neural networks (DNNs) are applied in the BMI. As the output, the EEG signals are classified into control commands depending on the estimated user intentions. The control commands are composed of linear and angular velocities of the PMR. Through network training, the estimation performance of both classifiers reaches more than 99%. In the control experiment, we further show that the classification performance of the BMI is sufficient for a user to control the PMR as intended using only mental commands.

19:50-20:10, Paper TuD1.5
Autonomous Mobile Robot for Apple Plant Disease Detection Based on CNN and Multi-Spectral Vision System
Karpyshev, Pavel | Skolkovo Institute of Science and Technology
Ilin, Valery | Skolkovo Institute of Science and Technology
Kalinov, Ivan | Skolkovo Institute of Science and Technology
Petrovsky, Alexander | Skolkovo Institute of Science and Technology (Skoltech)
Tsetserukou, Dzmitry | Skolkovo Institute of Science and Technology
Keywords: Systems for Field Applications, Sensor Fusion, Automation Systems
Abstract: This paper presents an autonomous system for apple orchard inspection and early-stage disease detection. Various sensors, including hyperspectral, multispectral, and visible-range scanners, are used for disease detection. For localization and obstacle detection, 2D LiDARs and RTK GNSS receivers are used. The proposed system makes it possible to minimize the use of pesticides and increase harvests. The detection approach is based on the use of neural networks for both plant segmentation and disease detection.

TuD2 Regular Session, Room 2
Machine Learning
Chair: Kobayashi, Taisuke | Nara Institute of Science and Technology
Co-Chair: Ramirez-Alpizar, Ixchel Georgina | National Institute of Advanced Industrial Science and Technology

18:30-18:50, Paper TuD2.1
Finer-Level Sequential WiFi-Based Indoor Localization
Khassanov, Yerbolat | ISSAI
Nurpeiissov, Mukhamet | Nazarbayev University
Sarkytbayev, Azamat | Nazarbayev University
Kuzdeuov, Askat | Nazarbayev University
Varol, Huseyin Atakan | Nazarbayev University
Keywords: Machine Learning, Sensor Networks, Building Automation
Abstract: The WiFi-based indoor localization problem aims to identify the location of a user from the signals received from surrounding wireless access points. A major approach to this problem is machine learning algorithms trained on precollected radio maps. However, these approaches either completely ignore the temporal aspects of the problem or use intervals between consecutive reference points that are too large. Therefore, in this work, we study the application of end-to-end sequence models for finer-level WiFi-based indoor localization. We show that the localization task can be formulated as a sequence learning problem by using recurrent neural networks with a regression output. The regression output is used to estimate three-dimensional positions and allows the network to scale easily to larger areas. In addition, we present our WiFine dataset containing 290 trajectories sequentially collected at finer-level reference points. The dataset is made publicly available for advancing sequential indoor localization research. The experiments performed on the WiFine dataset show that on the finer-level localization task, recurrent neural networks are superior to non-sequential models such as k-nearest neighbors and feedforward neural networks.
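A minimal TensorFlow/Keras sketch of the sequence-to-position formulation (layer sizes, the number of access points, and the data are placeholders, not the paper's configuration):

```python
# Sketch: an RNN consumes a window of RSSI vectors and regresses a 3D
# position at each step.
import numpy as np
import tensorflow as tf

n_aps, window = 50, 10                            # RSSI vector size, sequence length
model = tf.keras.Sequential([
    tf.keras.Input(shape=(window, n_aps)),
    tf.keras.layers.LSTM(128, return_sequences=True),
    tf.keras.layers.Dense(3),                     # (x, y, z) regression output per step
])
model.compile(optimizer="adam", loss="mse")

X = np.random.rand(256, window, n_aps).astype("float32")   # placeholder RSSI sequences
Y = np.random.rand(256, window, 3).astype("float32")       # placeholder positions
model.fit(X, Y, epochs=1, verbose=0)
```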

18:50-19:10, Paper TuD2.2
Autonomous Navigation in Complex Environments Using Memory-Aided Deep Reinforcement Learning
Kästner, Linh | Technische Universität Berlin
Shen, Zhengcheng | Technische Universität Berlin
Marx, Cornelius | TU Berlin
Lambrecht, Jens | Technische Universität Berlin
Keywords: Motion and Path Planning, Software, Middleware and Programming Environments, Machine Learning
Abstract: Mobile robots have gained increased importance in industrial tasks such as commissioning, delivery, and operation in hazardous environments. The ability to navigate unknown and complex environments is paramount in industrial robotics. Reinforcement learning approaches have shown remarkable success in dealing with unknown situations and reacting accordingly, without manually engineered guidelines or over-conservative measures. However, these approaches are often restricted to short-range navigation and are prone to local minima due to the lack of a memory module. Thus, navigation in complex environments such as mazes, long corridors, or concave areas is still an open frontier. In this paper, we incorporate a variety of recurrent neural networks to cope with these challenges. We train a reinforcement learning based agent within the 2D simulation environment of our previous work and extend it with a memory module. The agent is able to navigate solely on sensor data observations, which are directly mapped to actions. We evaluate the performance in different complex environments and achieve enhanced results compared to memory-free baseline approaches.

19:10-19:30, Paper TuD2.3
Active Exploration for Unsupervised Object Categorization Based on Multimodal Hierarchical Dirichlet Process
Yoshino, Ryo | Ritsumeikan University
Takano, Toshiaki | Shizuoka Institute of Science and Technology
Tanaka, Hiroki | Ritsumeikan University
Taniguchi, Tadahiro | Ritsumeikan University
Keywords: Machine Learning, Sensor Fusion, Decision Making Systems
Abstract: This paper describes an effective active exploration method for multimodal object categorization using a multimodal hierarchical Dirichlet process (MHDP). The MHDP is a type of multimodal latent variable model, like multimodal latent Dirichlet allocation and the multimodal variational autoencoder, that enables a robot to perform unsupervised multimodal object categorization on the basis of different types of sensor information. The goal of active exploration is to reduce the number of actions executed to collect multimodal sensor information from a variety of objects when acquiring knowledge of object categories. An active exploration method employing the information gain (IG) criterion for the MHDP is described, extending the IG-based active perception method. Exploiting the submodular property of IG in the MHDP, greedy and lazy greedy algorithms with a theoretical performance guarantee are proposed. The effectiveness of the proposed method is evaluated in a robot experiment. The results show that the proposed active exploration method with the greedy algorithm works well and significantly reduces the number of exploration steps. Further, the performance of the lazy greedy algorithm is found to deteriorate at times due to estimation error in the IG, unlike in active perception.
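The lazy greedy algorithm exploits submodularity by keeping stale upper bounds on each action's gain in a max-heap and re-evaluating only the current front-runner. A textbook sketch with a placeholder gain function (the paper's IG estimator is not reproduced here):

```python
# Textbook lazy greedy selection for a submodular gain function.
import heapq

def lazy_greedy(actions, ig, budget):
    """Select up to `budget` actions maximizing a submodular gain ig(selected, a)."""
    selected = []
    heap = [(-ig([], a), a) for a in actions]     # negate: heapq is a min-heap
    heapq.heapify(heap)
    while heap and len(selected) < budget:
        neg_gain, a = heapq.heappop(heap)
        fresh = ig(selected, a)                   # re-evaluate the stale bound
        if not heap or fresh >= -heap[0][0]:      # still the best -> take it
            selected.append(a)
        else:
            heapq.heappush(heap, (-fresh, a))     # otherwise re-insert and retry
    return selected

# Toy gain with diminishing returns in the number of selected actions.
print(lazy_greedy(range(5), lambda S, a: (a + 1) / (1 + len(S)), budget=2))
```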

19:30-19:50, Paper TuD2.4
Behavioral Cloning from Observation with Bi-Directional Dynamics Model
Betz, Tobias | Technical University Munich
Fujiishi, Hidehito | Nara Institute of Science and Technology
Kobayashi, Taisuke | Nara Institute of Science and Technology
Keywords: Machine Learning, Intelligent and Flexible Manufacturing, Control Theory and Technology
Abstract: Robotics is rapidly expanding its workplace from industrial factories to more complicated fields where robots work on behalf of humans. The difficulty of programming in advance the operations that humans perform, however, prevents this expansion. Behavioral cloning is one of the promising approaches to acquiring operations effectively from an expert's demonstrations, which consist of the states and the actions performed by the expert. However, it is intractable and/or highly expensive for robots to measure the expert's actions. Behavioral cloning from observation fills this gap and makes it possible to imitate with state-only demonstrations by inferring the actions the expert performed using an inverse dynamics model. Our goal is to improve the accuracy of this algorithm. This is done by evaluating the inferred action using an additional forward dynamics model. Specifically, we focus on the consistency of both dynamics models, which have to be bi-directional. This bi-directionality can classify whether an inferred action is realistic or not and can prevent wrong updates. We show the successful improvement with our new method using various simulation tasks that are typically used as benchmarks.

19:50-20:10, Paper TuD2.5
Towards Deep Robot Learning with Optimizer Applicable to Non-Stationary Problems
Kobayashi, Taisuke | Nara Institute of Science and Technology
Keywords: Machine Learning, Software Platform, Control Theory and Technology
Abstract: This paper proposes a new optimizer for deep learning, named d-AmsGrad. In real-world data, noise and outliers cannot be excluded from the datasets used for learning robot skills. This problem is especially striking for robots that learn by collecting data in real time, which cannot be sorted manually. Several noise-robust optimizers have been developed to resolve this problem, and one of them, AmsGrad, a variant of the Adam optimizer, has a proof of convergence. In practice, however, it does not improve learning performance in robotics scenarios. We hypothesize that this is because most robot learning problems are non-stationary, whereas AmsGrad assumes that the maximum second momentum during learning is stationary. To adapt to non-stationary problems, an improved version that slowly decays the maximum second momentum is proposed. The proposed optimizer has the same capability of reaching the global optimum as the baselines, and its performance outperforms the baselines on robotics problems.
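A sketch of the idea as described in the abstract (the actual decay schedule and details are assumptions): AmsGrad normalizes steps by the running maximum of the second momentum; letting that maximum decay slowly prevents stale large values from freezing the step size in non-stationary problems.

```python
# Sketch of a decaying-maximum AmsGrad-style update (bias correction omitted).
import numpy as np

def d_amsgrad_step(theta, grad, state, lr=1e-3, b1=0.9, b2=0.999, decay=0.999, eps=1e-8):
    m, v, v_max = state
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad**2
    v_max = np.maximum(decay * v_max, v)          # decaying maximum (plain AmsGrad: decay=1)
    theta = theta - lr * m / (np.sqrt(v_max) + eps)
    return theta, (m, v, v_max)

theta = np.zeros(3)
state = (np.zeros(3), np.zeros(3), np.zeros(3))
for _ in range(100):
    grad = 2 * (theta - np.array([1.0, -2.0, 0.5]))   # gradient of a toy quadratic
    theta, state = d_amsgrad_step(theta, grad, state)
print(theta)
```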

20:10-20:30, Paper TuD2.6
Cooking Actions Inference Based on Ingredient’s Physical Features
Ramirez-Alpizar, Ixchel Georgina | National Institute of Advanced Industrial Science and Technology
Hiraki, Ryosuke | Osaka University
Harada, Kensuke | Osaka University
Keywords: Systems for Service/Assistive Applications, Human Factors and Human-in-the-Loop, Machine Learning
Abstract: Most of the cooking recipes available on the internet describe only major cooking steps, since the detailed actions are considered common knowledge. However, when we want a robot to cook a meal based on a recipe, we have to give the robot a step-by-step plan of each of the tasks needed to execute one recipe step. In this paper, we develop a framework for inferring the executable cooking actions for ingredients, in order to compensate for the common knowledge of humans. We tuned the existing VGG16 Convolutional Neural Network (CNN) to learn the physical features of ingredients. Then, we built an inference model for six different cooking actions based on the learnt physical features. The inferred action(s) represent the next possible action(s) that can be executed. As there can be more than one executable action for the same ingredient state, we prioritize the cooking actions considering the previously executed action, the kind of people the meal is being prepared for, and the cooking time allowed. We show experimental results on five different types of ingredients that are not contained in the training dataset of the CNN.

TuD3 Special Session, Room 3
Human-Robotic Systems Collaboration
Chair: Sziebig, Gabor | SINTEF Manufacturing
Organizer: Sziebig, Gabor | SINTEF Manufacturing
Organizer: She, Jinhua | Tokyo University of Technology
Organizer: Yokota, Sho | Toyo University
Organizer: Niitsuma, Mihoko | Chuo University
Organizer: Solvang, Bjoern | The Arctic University of Norway

18:30-18:50, Paper TuD3.1
Cooperation between Media Changing Robot System and Humans for Cell Processing
Nonoyama, Ryosuke | Kokushikan University
Jinno, Makoto | Kokushikan University
Yori, Koichiro | Terumo Corporation
Sugiura, Keiichi | Terumo Corporation
Sameshima, Tadashi | Terumo Corporation
Keywords: Human-Robot Cooperation/Collaboration
Abstract: In the field of regenerative medicine, cell processing is currently performed manually. The process is labor intensive and expensive, and its efficiency must be improved. Automatic cell culture apparatuses equipped with a vertical articulated robot have been recently proposed. However, the automation of all tasks of cell processing complicates the system. This study aims to develop a simple and rational cell processing system by combining the tasks performed by a robot with those performed by a human. In a previous study, we improved the efficiency of discarding and injecting tasks using a robot arm in the media changing process. In this study, we report the results of implementing and evaluating prototypes that improve the efficiency of the discarding and injecting tasks. Our experimental results showed the feasibility of discarding and injecting robots that can be used in a safety cabinet. Robots can perform the media changing process more efficiently and more accurately than humans. Moreover, the risk of dripping can be reduced using robots.

18:50-19:10, Paper TuD3.2
Architecture for Task-Dependent Human-Robot Collaboration (I)
Shu, Beibei | UiT the Arctic University of Norway |
Solvang, Bjoern | The Arctic University of Norway |
Keywords: Human-Robot/System Interaction, Human-Robot Cooperation/Collaboration, System Simulation
Abstract: Nowadays, industrial robots are considered standard equipment in modern production systems, and many small and medium-sized enterprises (SMEs) have already adopted them in their daily operations. However, the setup, programming, and testing of various robots require a high level of expertise, and the emerging trend of small-batch production and rapid product changeover has brought further challenges. To improve flexibility and agility, collaborative robots have been widely used by large manufacturers, but their high cost hinders adoption in SMEs. In this paper, we therefore exploit the potential of using standard industrial robots for human-robot collaboration. We propose a new system architecture that provides a cost-effective solution and allows traditional industrial robots to perform human-robot collaboration tasks. The proposed architecture has the advanced features of Industry 4.0 and improves both the flexibility and the agility of a manufacturing system. Due to current safety regulations, our initial experiments are conducted mainly through computer-based simulations.
|
|
19:10-19:30, Paper TuD3.3 | Add to My Program |
The Application of Virtual Reality in Programming of a Manufacturing Cell (I) |
|
Arnarson, Halldor | UiT the Arctic University of Norway |
Solvang, Bjoern | The Arctic University of Norway |
Shu, Beibei | UiT the Arctic University of Norway |
Keywords: Virtual Reality and Interfaces, Software, Middleware and Programming Environments, Human-Robot/System Interaction
Abstract: Programming industrial robots and manufacturing equipment in general requires product-specific expertise for every member of a manufacturing cell. Typically, old and new equipment is present in the same setup, and several experts are often involved in operating these systems. To decrease the complexity of programming manufacturing equipment, this paper investigates the use of virtual reality (VR) to create a common programming platform for the typical members of a manufacturing setup. A two-way digital twin is created in which all robots can be programmed through the same human-machine interface (HMI). This cyber-physical system (CPS) allows for simulation, testing, and safety checks before all programs are converted and downloaded to the respective units.
|
|
19:30-19:50, Paper TuD3.4 | Add to My Program |
Application to Biomaterial of a Vibration Exciter for Ultrasonic Controlled Growing Rod System (I) |
|
Makino, Koji | University of Yamanashi |
Kitano, Yudai | University of Yamanashi
Taniguchi, Naofumi | University of Yamanashi |
Ishii, Takaaki | University of Yamanashi |
Ohba, Tetsuro | University of Yamanashi
Miyashita, Masaki | University of Yamanashi |
Ota, Kento | University of Yamanashi |
Haro, Hirotaka | University of Yamanashi |
Terada, Hidetsugu | University of Yamanashi |
Keywords: Human Interface, Biologically-Inspired Robotic Systems
Abstract: Scoliosis is a deformation of the backbone from which a few percent of teenagers suffer. A patient needs surgery if the curvature of the backbone exceeds a certain degree. In the surgery, pedicle screws are implanted in the backbone and fixed with a straight rod, and the operation must be repeated many times until the patient stops growing, since the backbone extends as the patient grows. A method that makes these repeated operations unnecessary is therefore desirable. We have developed a growing rod that can be controlled using ultrasonic vibration. In previous work, the basic principle of the growing rod was proposed and verified by experiments in an idealized setting. To progress to the next stage (clinical trials), it is important to verify the proposed principle using biomaterial. In this paper, the proposed method is verified using biomaterial, namely bone-in meat sold as food, and the extension force that arises in the backbone is discussed, since the gap in the backbone grows as the patient grows. The results show that the extension force of the rod is proportional to the force that arises in the spine; therefore, the extension of the rod stops naturally if the force in the backbone is released. We confirm that this method is safe.
|
|
19:50-20:10, Paper TuD3.5 | Add to My Program |
Design and Basic Experiment of Online Feedback Training System for Golf Putting (I) |
|
Kawano, Hibiki | Toyo University |
Yokota, Sho | Toyo University |
Matsumoto, Akihiro | Toyo University |
Chugo, Daisuke | Kwansei Gakuin University |
Hashimoto, Hiroshi | Advanced Institute of Industrial Technology |
Keywords: Systems for Service/Assistive Applications, Human Interface
Abstract: The purpose of this study is to develop an online feedback training system for golf putting that can improve repeatability without affecting the swing feeling. By using the proposed system to provide auditory feedback in real time, the repeatability of the swing can be improved while keeping the same swing feeling as in a game. To realize this system, an IMU (inertial measurement unit) is mounted on the grip end of the putter and measures the angle of the putter head. This mounting position of the IMU was examined and designed, and the authors experimentally determined the maximum weight of IMU that can be mounted without affecting the swing feeling. As a result, it turned out that an IMU of up to 10 g can be mounted on the club. Moreover, the system gives auditory feedback whose loudness corresponds to the angle of the putter, so users can know the angle of the putter head in real time. In particular, this paper reports on the proposal and evaluation experiments of this system.
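As a toy illustration of the angle-to-loudness mapping described above (the tone frequency, gain law, and 10-degree ceiling are assumptions, not the authors' design):

import numpy as np

def feedback_tone(head_angle_deg, fs=44100, dur=0.05, freq=880.0,
                  max_angle=10.0):
    # Loudness grows with the putter-head angle, so a square impact
    # (angle near zero) is nearly silent and a large misalignment is loud.
    gain = min(abs(head_angle_deg) / max_angle, 1.0)
    t = np.arange(int(fs * dur)) / fs
    return gain * np.sin(2.0 * np.pi * freq * t)  # mono audio buffer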
|
|
TuD4 Special Session, Room 4 |
Add to My Program |
System Integration for Underwater Investigation |
|
|
Chair: Sakagami, Norimitsu | Tokai University |
Co-Chair: Sagara, Shinichi | Kyushu Institute of Technology |
Organizer: Sagara, Shinichi | Kyushu Institute of Technology |
Organizer: Takemura, Fumiaki | Okinawa National College of Technology |
Organizer: Sakagami, Norimitsu | Tokai University |
|
18:30-18:50, Paper TuD4.1 | Add to My Program |
Underwater 3D Scanner Using RGB Laser Pattern (I) |
|
Nishida, Yuya | Kyushu Institute of Technology |
Yasukawa, Shinsuke | Kyushu Institute of Technology |
Ishii, Kazuo | Kyushu Institute of Technology
Keywords: Systems for Field Applications, Vision Systems
Abstract: To preserve and manage fishery resources, the authors developed a scanner based on the structured-light method that efficiently measures the target shape in a single shot. The scanner projects a laser pattern consisting of six color lasers, coded as a De Bruijn sequence, so that targets can be measured even in water, where light is easily attenuated. This paper proposes a data processing method, including image processing and decoding, for the reflection image captured by the scanner. Evaluation experiments showed that the scanner can measure color blocks other than black located 1,000 mm away, with less than 1.4% error over the measurement range.
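The De Bruijn coding mentioned above guarantees that every window of n consecutive stripes is unique, so a decoder can localize any observed window within the pattern. A standard construction is sketched below; the code order n = 2 and the color labels are illustrative, since the abstract does not give the coding parameters:

def de_bruijn(k, n):
    # Classic Lyndon-word construction of a De Bruijn sequence B(k, n):
    # every length-n string over k symbols appears exactly once cyclically.
    a = [0] * (k * n)
    seq = []
    def db(t, p):
        if t > n:
            if n % p == 0:
                seq.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)
    db(1, 1)
    return seq

colors = ["R", "G", "B", "C", "M", "Y"]          # six laser colors
pattern = [colors[s] for s in de_bruijn(6, 2)]   # 36-stripe cyclic code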
|
|
18:50-19:10, Paper TuD4.2 | Add to My Program |
A Preliminary Research to Develop Obstacle Detection System Including Distance Measurement (I) |
|
Hakozaki, Katsuya | Tokyo University of Marine Science and Technology, Graduate School
Shimizu, Etsuro | Tokyo University of Marine Science and Technology |
Keywords: Autonomous Vehicle Navigation, Automation Systems, Decision Making Systems
Abstract: Many nautical instruments such as radar, AIS, and ECDIS support the seafarer's look-out on large vessels. However, few small ships are equipped with these devices, even though small ships account for 80% of the ships that suffer marine accidents. Therefore, to support the look-out on these ships, we developed a distance measurement system in addition to an obstacle detection system and evaluated its performance. With this system, using only a camera and a personal computer, we aim to provide a look-out environment equivalent to that of a small vessel equipped with navigation equipment such as radar, and to reduce the burden of the seafarers' look-out.
|
|
19:10-19:30, Paper TuD4.3 | Add to My Program |
Computational and Experimental Investigation of a Negative Pressure Effect Plate for Underwater Inspection Robots (I) |
|
Iwahori, Takazumi | Ritsumeikan University Graduate School
Takebayashi, Takahiro | Ritsumeikan University |
Sakagami, Norimitsu | Tokai University |
Kawamura, Sadao | Ritsumeikan University |
Keywords: System Simulation, Environment Monitoring and Management, Mechanism Design
Abstract: We present a Computational Fluid Dynamics (CFD) simulation of a Negative Pressure Effect Plate (NPEP), which generates a large force for stabilizing the position and orientation of underwater robots. To design NPEPs and improve their mathematical model, we applied a CFD method to understand the hydrodynamic phenomena occurring around the NPEP. In the simulation, we analyze the pressure and flow distributions inside NPEPs with different diameters. We also conducted an experiment to evaluate the performance of one NPEP and to compare it with the CFD results. Comparison of the experimental and numerical results revealed some clues for the design of the NPEP and for improving its prediction model.
|
|
19:30-19:50, Paper TuD4.4 | Add to My Program |
Sound Analysis to Develop Operation Confirmation System for Remotely-Controlled Ship (I) |
|
Hoshino, Ai | Tokyo University of Marine Science and Technology |
Umeda, Ayako | Tokyo University of Marine Science and Technology |
Shimizu, Etsuro | Tokyo University of Marine Science and Technology |
Keywords: Human-Robot/System Interaction, Autonomous Vehicle Navigation, Automation Systems
Abstract: In recent years, research on autonomous and remotely controlled ships has become a major topic in the maritime field. At our laboratory, research on the remote control of ship operations was established in 2015, and a prototype of the remote-control system has already been completed. We are studying methods and systems to support remote control. We believe it is necessary to confirm that the commands of an operator controlling the ship from a remote place are executed on the ship, not only through measuring instruments but also through actual measurements (e.g., sound and vibration). Since seafarers actually use sound for engine monitoring, in this study we examine a system that analyzes measured engine-sound data and estimates the actual operating conditions. In this paper, we show the results of preliminary research to detect feature quantities by analyzing engine-sound data collected for each maneuvering command.
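As an illustrative sketch of the kind of feature extraction such a system might use (the abstract does not specify the analysis, so a short-time spectrum with coarse band energies is assumed here):

import numpy as np

def engine_sound_features(signal, fs, n_fft=4096):
    # Short-time magnitude spectra of the engine recording, averaged
    # into coarse frequency bands to give a compact feature vector.
    window = np.hanning(n_fft)
    frames = [signal[i:i + n_fft] * window
              for i in range(0, len(signal) - n_fft, n_fft // 2)]
    spectra = np.abs(np.fft.rfft(frames, axis=1))
    freqs = np.fft.rfftfreq(n_fft, d=1.0 / fs)
    bands = np.array_split(np.arange(len(freqs)), 8)
    return np.array([spectra[:, b].mean() for b in bands])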
|
|
19:50-20:10, Paper TuD4.5 | Add to My Program |
Cooperative Manipulation of a Floating Object by Two Underwater Robots with Arms (I) |
|
Sagara, Shinichi | Kyushu Institute of Technology |
Takahashi, Takuya | Kyushu Institute of Technology |
Radzi, Ambar | Universiti Tun Hussein Onn Malaysia |
Keywords: Systems for Field Applications, Control Theory and Technology
Abstract: In future ocean development, many tasks are expected to be accomplished by the cooperative motion of several underwater robots. However, little research has addressed the cooperative work of underwater robots. We have previously proposed a cooperative control method for space robots. In this paper, as a first step toward cooperative control methods for underwater robots, our resolved acceleration control method for underwater vehicle-manipulator systems is applied to the cooperative manipulation of a floating object by two underwater robots. Computer simulations are performed to validate the control method, and the results show its effectiveness.
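For reference, resolved acceleration control in its generic textbook form (not necessarily the authors' exact formulation) commands a task-space acceleration from PD feedback on the tracking error and maps it to the joint/vehicle coordinates through the Jacobian:

\[
\ddot{x}_{\mathrm{cmd}} = \ddot{x}_d + K_v(\dot{x}_d - \dot{x}) + K_p(x_d - x),
\qquad
\ddot{q}_{\mathrm{cmd}} = J^{+}\left(\ddot{x}_{\mathrm{cmd}} - \dot{J}\dot{q}\right)
\]

where x is the end-effector pose, q the joint/vehicle coordinates, J^{+} the pseudoinverse of the task Jacobian, and K_p, K_v positive-definite gain matrices.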
|
|
20:10-20:30, Paper TuD4.6 | Add to My Program |
Model Checking for Decision Making System of Long Endurance Unmanned Surface Vehicle (I) |
|
Niu, Hanlin | The University of Manchester |
Ji, Ze | Cardiff University |
Savvaris, Al | Cranfield University |
Tsourdos, Antonios | Cranfield University |
Carrasco, Joaquin | The University of Manchester |
Keywords: Formal Methods in System Integration, Decision Making Systems, Autonomous Vehicle Navigation
Abstract: This work develops a model checking method to verify the decision making system of an Unmanned Surface Vehicle (USV) on a long-range surveillance mission. The scenario was captured from a long-endurance USV surveillance mission using C-Enduro, a USV manufactured by ASV Ltd. The C-Enduro USV may encounter multiple non-deterministic and concurrent problems, including lost communication signals, collision risk, and malfunction. The vehicle is designed to utilise multiple energy sources: a solar panel, a wind turbine, and a diesel generator. The energy state can be affected by the solar irradiance, the wind condition, the state of the diesel generator, the sea current, and the state of the USV. In this research, the states of, and the interactions between, the environmental uncertainties, sensors, USV energy system, and the USV and Ground Control Station (GCS) decision making systems are abstracted and modelled using Kripke models. The desirable properties to be verified are expressed as temporal logic statements, and the safety and long-endurance properties are verified using MCMAS, a model checker for multi-agent systems. The verification results show the feasibility of applying model checking to verify the desirable properties of the USV decision making system. This method could help researchers identify potential design errors in decision making systems in advance.
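As a toy illustration of the Kripke-model idea (the states, transitions, and property below are invented for exposition; the paper itself uses MCMAS and its input language, not reproduced here):

STATES = {"patrol", "low_energy", "comms_lost", "return_home", "collision"}
TRANSITIONS = {
    "patrol":      {"patrol", "low_energy", "comms_lost"},
    "low_energy":  {"return_home"},
    "comms_lost":  {"return_home"},
    "return_home": {"patrol"},
}

def reachable(init):
    # Fixed-point reachability over the transition relation.
    seen, frontier = set(), {init}
    while frontier:
        s = frontier.pop()
        seen.add(s)
        frontier |= TRANSITIONS.get(s, set()) - seen
    return seen

# Safety property AG(not collision): no reachable state is "collision".
assert "collision" not in reachable("patrol")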
|
|
20:30-20:50, Paper TuD4.7 | Add to My Program |
Development of Pressure Measurement Equipment Fabricated by Robot Packaging Method (I) |
|
Shibata, Mizuho | Kindai University
Sakagami, Norimitsu | Tokai University |
Keywords: Systems for Field Applications, Soft Robotics
Abstract: This manuscript describes pressure measurement equipment for external attachment to underwater robots. It is difficult to add sensors to many underwater robots: most pressure sensors have a diaphragm plate, and the sensor must be exposed to the environment so the diaphragm can deform. Therefore, adding a pressure sensor to an underwater robot generally requires redesigning both the outer shell and the robot's inner structure. To solve this problem, we propose encapsulating a pressure sensor in a plastic film bag. Vacuum packaging technology is applied to fabricate the sensor equipment; this technique, called the robot packaging method, makes it possible to fabricate the sensor equipment rapidly and inexpensively. Experimental results show that a sufficient amount of insulating fluid must be encapsulated in the plastic film bag to fabricate the pressure sensor equipment by the robot packaging method.
|
| |