Last updated on January 18, 2021. This conference program is tentative and subject to change.
Technical Program for Wednesday January 13, 2021

WeB1 Regular Session, Room 1
Vision Systems I

Chair: Ye, Minying | University of Fukui
Co-Chair: Yamazaki, Kimitoshi | Shinshu University

14:00-14:20, Paper WeB1.1
Multi-Person Pose Tracking with Occlusion Solving Using Motion Models

Gamez, Lucas | National Inst. of AIST
Yoshiyasu, Yusuke | CNRS-AIST JRL
Yoshida, Eiichi | National Inst. of AIST
Keywords: Vision Systems, Machine Learning
Abstract: We present a method for multi-person pose tracking that includes occlusion solving. To track and associate frame-by-frame human detections obtained with a deep learning approach, we propose combining motion features extracted by optical flow with Kalman filtering, which allows us to predict the future poses of targets. By taking advantage of the characteristics of both motion features, we are able to handle sharp motions of the target as well as occlusions. With our simple occlusion handling mechanism, we achieve results comparable with the state of the art and are able to keep track of a target's identity even when occlusions occur.
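The Kalman-filter pose prediction mentioned in the abstract can be illustrated with a minimal per-coordinate constant-velocity filter. This is a sketch, not the authors' implementation; the noise values q and r and the simplified diagonal process noise are arbitrary assumptions:

```python
# Minimal constant-velocity Kalman filter for one coordinate of a keypoint.
# Illustrative sketch only; q (process) and r (measurement) noise are arbitrary.

def predict(state, cov, dt, q=1e-2):
    """Predict (pos, vel) and the 2x2 covariance one step ahead."""
    x, v = state
    p00, p01, p10, p11 = cov
    x_new, v_new = x + v * dt, v          # x' = x + v*dt ; v' = v
    # P' = F P F^T + Q with F = [[1, dt], [0, 1]] and diagonal Q
    n00 = p00 + dt * (p10 + p01) + dt * dt * p11 + q
    n01 = p01 + dt * p11
    n10 = p10 + dt * p11
    n11 = p11 + q
    return (x_new, v_new), (n00, n01, n10, n11)

def update(state, cov, z, r=1.0):
    """Correct with a measured position z (H = [1, 0])."""
    x, v = state
    p00, p01, p10, p11 = cov
    s = p00 + r                       # innovation covariance
    k0, k1 = p00 / s, p10 / s         # Kalman gain
    y = z - x                         # innovation
    x, v = x + k0 * y, v + k1 * y
    cov = ((1 - k0) * p00, (1 - k0) * p01,
           p10 - k1 * p00, p11 - k1 * p01)
    return (x, v), cov

# Track a keypoint moving at ~2 px/frame, then predict through an occlusion.
state, cov = (0.0, 0.0), (1.0, 0.0, 0.0, 1.0)
for z in [2.1, 3.9, 6.0, 8.1]:               # observed x positions
    state, cov = predict(state, cov, dt=1.0)
    state, cov = update(state, cov, z)
for _ in range(3):                            # occluded frames: predict only
    state, cov = predict(state, cov, dt=1.0)
```

During occluded frames only the predict step runs, which is how a motion model can carry a target's identity through an occlusion until detections reappear.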

14:20-14:40, Paper WeB1.2
Fault-Diagnosing Monocular-SLAM for Scale-Aware Change Detection

Sugimoto, Takuma | University of Fukui
Yamaguchi, Kousuke | University of Fukui
Bao, Zhongshan | University of Fukui
Ye, Minying | University of Fukui
Hiroki, Tomoe | University of Fukui
Tanaka, Kanji | University of Fukui
Keywords: Vision Systems, Sensor Fusion, Multi-Modal Perception
Abstract: In this paper, we present a new fault-diagnosis (FD)-based approach for image change detection (ICD) that can detect significant changes as inconsistencies between different visual experiences of monocular SLAM. Unlike classical change detection approaches such as pairwise image comparison (PC) and anomaly detection (AD), this FD approach requires neither the memorization of each map image nor the maintenance of up-to-date place-specific anomaly detectors. A significant challenge encountered when incorporating different visual experiences into FD is dealing with the varying scales of changed objects. To address this issue, we reconsider the bag-of-words (BoW) image representation and focus on the state-of-the-art BoW-based SLAM paradigm. As a key advantage, the local-feature-based representation makes it possible to re-organize the BoW at any scale without modifying the database entries (i.e., the map). Furthermore, it makes it possible to control the discriminative power and expected inconsistencies of local features. Experiments on challenging cross-season ICD using the publicly available NCLT dataset, and comparisons against state-of-the-art ICD algorithms, validate the efficacy of the proposed FD approach with and without combining AD and/or PC.

14:40-15:00, Paper WeB1.3
Cylinder Detection from RGBD Data Based on Radius Estimation Using Number of Measurement Points

Kawagoshi, Tomoki | Shinshu University
Yamazaki, Kimitoshi | Shinshu University
Keywords: Vision Systems, Automation Systems, Autonomous Vehicle Navigation
Abstract: In this paper, we describe a method to estimate cylindrical parameters using RGBD data. Since cylinders are among the shape elements that compose a building, knowing their parameters accurately is useful for robots that aim to grasp such parts. We therefore propose a method to estimate the diameter and inclination of a cylindrical part. We formulate the relationship between the cylinder diameter and the number of 3D points measured from the cylinder, which makes it possible to estimate the diameter directly from the measured point cloud. Through verification experiments, we show that our method is more accurate than conventional methods. Moreover, as an example of search activity in disaster environments, we report an experiment on grasping a ladder with an autonomous mobile manipulator and show the effectiveness of the proposed method.
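The abstract does not give the paper's formula, but the general idea of relating diameter to point count can be sketched for a depth camera with a fixed angular pixel resolution: a cylinder of radius r at distance d spans roughly 2*asin(r/d) radians, so the relation can be inverted to recover the radius from the number of pixel columns. The camera parameters below are invented for illustration:

```python
import math

# Hypothetical sketch: with angular pixel resolution dtheta, a cylinder of
# radius r at distance d covers n ~ 2*asin(r/d)/dtheta pixel columns.
def expected_columns(r, d, dtheta):
    return 2.0 * math.asin(r / d) / dtheta

def radius_from_columns(n, d, dtheta):
    # Invert the relation to recover the radius from the point count.
    return d * math.sin(n * dtheta / 2.0)

dtheta = math.radians(58.0) / 640.0          # e.g. 58 deg horizontal FOV, 640 px
n = expected_columns(0.05, 1.0, dtheta)      # 5 cm radius pipe at 1 m
r = radius_from_columns(n, 1.0, dtheta)      # round-trips back to 0.05 m
```

The appeal of such a closed-form relation is that the diameter comes directly from the measured point count, without first fitting a surface to the cloud.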

15:00-15:20, Paper WeB1.4
Visual Servoing Using Virtual Space for Both Learning and Task Execution

Kawagoshi, Tomoki | Shinshu University
Arnold, Solvi | Shinshu University
Yamazaki, Kimitoshi | Shinshu University
Keywords: Vision Systems, Motion and Path Planning, Automation Systems
Abstract: In this paper, we describe a framework for performing an object-picking task by visual servoing. While a robotic manipulator approaches the object to be grasped, a convolutional neural network (CNN) is used to generate the motions that realize visual servoing. To obtain an appropriate CNN, it is necessary to prepare a large amount of training data; we therefore propose utilizing a virtual environment to reduce this burden. In addition, when an object-grasping task is actually performed, sensor data acquisition and motion generation are carried out using the virtual environment. This makes it possible to approach the object even if the texture changes in the actual environment in which the robot moves. An object-grasping experiment was conducted on a rectangular box and a cylindrical object, and the performance of the proposed framework was verified.

15:20-15:40, Paper WeB1.5
Diagnosing Deep Self-Localization Network for Domain-Shift Localization

Tanaka, Kanji | University of Fukui
Keywords: Vision Systems, Machine Learning, Sensor Fusion
Abstract: The deep convolutional neural network (DCN) has become a common approach to visual robot self-localization. In a typical self-localization system, a DCN is trained as a visual place classifier from past visual experiences in the target environment. However, its classification performance can deteriorate when it is tested in a different domain (e.g., time of day, weather, season) due to domain shift. An efficient domain-adaptation (DA) approach that suppresses the per-domain DA cost is therefore desirable. In this study, we address this issue with a novel ``domain-shift localization (DSL)" technique that diagnoses the DCN classifier with the goal of localizing which region of the robot workspace is significantly affected by domain shift. In our approach, the DSL task is formulated as a fault-diagnosis (FD) problem, in which the deterioration of DCN-based self-localization for a given query image is viewed as an indicator of domain shift at the imaged region. Our contributions address the following non-trivial issues: (1) we address a subimage-level, fine-grained DSL task given a typical coarse, image-level DCN classifier, in which the target DCN system is queried with a region-of-interest (RoI)-masked synthesized query image to diagnose the RoI region; (2) we extend the DSL task to a relevance feedback (RF) framework, performing a further query to return improved diagnosis results; and (3) we implement the proposed framework on 3D point-cloud-imagery-based self-localization and experimentally demonstrate the effectiveness of the proposed algorithm.

15:40-16:00, Paper WeB1.6
Study of Image Processing Methods for Space Debris Capture

Kobayashi, Taichi | Tottori University
Nishida, Shin-Ichiro | Tottori University
Nakamura, Shunsuke | Tottori University
Nakatani, Shintaro | Tottori University
Keywords: Vision Systems, Autonomous Vehicle Navigation, System Simulation
Abstract: The amount of space debris in Earth orbit is steadily increasing, and micro-robotic satellites are expected to actively remove it. A stereo camera is employed to navigate the robotic satellite to the debris by measuring the debris's position and attitude. In this study, we propose and evaluate a stereo-camera image processing algorithm for extracting circular portions of the target debris under multiple illumination environments.

WeB2 Special Session, Room 2
Real Space Service System

Chair: Wada, Kazuyoshi | Tokyo Metropolitan University
Co-Chair: Ohara, Kenichi | Meijo University
Organizer: Wada, Kazuyoshi | Tokyo Metropolitan University
Organizer: Ohara, Kenichi | Meijo University
Organizer: Niitsuma, Mihoko | Chuo University
Organizer: Nakamura, Sousuke | Hosei University

14:00-14:20, Paper WeB2.1
Hierarchical Probabilistic Task Recognition Based on Spatial Memory for Care Support (I)

Katsunaga, Tappei | Hokkaido University
Tanaka, Takayuki | Hokkaido University
Niitsuma, Mihoko | Chuo University
Takahashi, Saburo | Panasonic Advanced Technology Development Co., Ltd
Abe, Toshihisa | Panasonic Advanced Technology Development Co., Ltd
Keywords: Machine Learning, Welfare systems, Human-Robot Cooperation/Collaboration
Abstract: In this research, we propose a method for recognizing the task performed by a worker at a care site. The time-series sample data of each feature value during a task are defined as the task history, and spatiotemporal task information is created from the task history by referring to Niitsuma's spatial memory. A simulated care task was performed in an environment that recreates an actual care site; the worker's time-series feature data, measured by a motion capture system, were divided into training data and evaluation data, and the recognition accuracy was verified. The recognition accuracy for the seven defined elemental tasks was close to 80% on average, demonstrating the effectiveness of this method.

14:20-14:40, Paper WeB2.2
Pose Identification for Task Recognition in Care Work (I)

Kato, Shinnosuke | Chuo University
Niitsuma, Mihoko | Chuo University
Tanaka, Takayuki | Hokkaido University
Keywords: Welfare systems, Sensor Networks, Machine Learning
Abstract: In Japan, the number of caregivers is not increasing at the same rate as the population is aging, resulting in an increased workload per caregiver, which has become a social issue. In response, information and communication technology has been introduced to support caregivers, but the tasks it can support are currently limited. In addition, caregivers are subjected to a variety of events and conditions at the work site, which results in a heavy cognitive burden, even for skilled caregivers. To solve this problem, we develop a system to support caregivers, reduce their burden, and improve work efficiency by observing caregivers and patients using an intelligent space. First, it is necessary to understand the caregiver's tasks in detail. In this paper, we propose a task recognition method based on the recognition of poses from images captured by a camera. Moreover, we investigated alternatives to improve the results of the method. Our experiments indicated that the method can recognize the caregiver's pose with 96.9% accuracy, and we successfully solved the problem of insufficient training data.

14:40-15:00, Paper WeB2.3
Teaching System for Multimodal Object Categorization by Human-Robot Interaction in Mixed Reality (I)

El Hafi, Lotfi | Ritsumeikan University
Nakamura, Hitoshi | Ritsumeikan University
Taniguchi, Akira | Ritsumeikan University
Hagiwara, Yoshinobu | Ritsumeikan University
Taniguchi, Tadahiro | Ritsumeikan University
Keywords: Human-Robot/System Interaction, Virtual Reality and Interfaces, Multi-Modal Perception
Abstract: As service robots are becoming essential to support aging societies, teaching them how to perform general service tasks remains a major challenge preventing their deployment in daily-life environments. In addition, developing an artificial intelligence for general service tasks requires bottom-up, unsupervised approaches that let the robots learn from their own observations and interactions with users. However, in contrast to top-down, supervised approaches such as deep learning, where the extent of the learning is directly related to the amount and variety of pre-existing data provided to the robots and is thus relatively easy to understand from a human perspective, the learning status of bottom-up approaches is by nature much harder to appreciate and visualize. To address these issues, we propose a teaching system for multimodal object categorization by human-robot interaction through Mixed Reality (MR) visualization. In particular, our proposed system enables a user to monitor and intervene in the robot's object categorization process, which is based on Multimodal Latent Dirichlet Allocation (MLDA), to correct unexpected results and accelerate learning. Our contribution is twofold: 1) describing the integration of a service robot, MR interactions, and MLDA object categorization in a unified system, and 2) proposing an MR user interface to teach robots through intuitive visualization and interactions.

15:00-15:20, Paper WeB2.4
Bidirectional Generation of Object Images and Positions Using Deep Generative Models for Service Robotics Applications (I)

Hayashi, Kaede | Ritsumeikan University
Zheng, Wenru | Ritsumeikan University
El Hafi, Lotfi | Ritsumeikan University
Hagiwara, Yoshinobu | Ritsumeikan University
Taniguchi, Tadahiro | Ritsumeikan University
Keywords: Human-Robot/System Interaction, Multi-Modal Perception, Machine Learning
Abstract: The introduction of systems and robots for automated services is important for reducing running costs and improving operational efficiency in the retail industry. To this end, we develop a system that enables robot agents to display products in stores. The main problem in automating product display with robot agents using common supervised methods is the huge amount of data required to recognize product categories and arrangements in a variety of different store layouts. To solve this problem, we propose a cross-modal inference system based on a joint multimodal variational autoencoder (JMVAE) that learns the relationship between object image information and location information observed on site by robot agents. In our experiments, we created a simulation environment replicating a convenience store that allows a robot agent to observe an object image and its 3D coordinate information, and verified whether JMVAE can learn and generate a shared representation of an object image and 3D coordinates in a bidirectional manner.

15:20-15:40, Paper WeB2.5
Rearranging Tasks in Daily-Life Environments Using a Monte Carlo Tree Search and a Feasibility Database (I)

Uriguen Eljuri, Pedro Miguel | Nara Institute of Science and Technology
Garcia Ricardez, Gustavo Alfonso | Nara Institute of Science and Technology (NAIST)
Koganti, Nishanth | GEP Worldwide Inc
Takamatsu, Jun | Nara Institute of Science and Technology
Ogasawara, Tsukasa | Nara Institute of Science and Technology
Keywords: Logistics Systems, Systems for Service/Assistive Applications, Decision Making Systems
Abstract: In this paper, we address the task of rearranging items with a robot. A rearranging task is challenging because one must solve the following issues: determining how to pick the items and planning how and where to place them. In this study, we focus on obtaining a sequence of actions that the robot can execute while reducing the failures that occur when the motion planner creates the trajectory to move the robot, such as failing to find a solution. To confirm the sequence of instructions before executing them on the robot, we combine a motion planner with a symbolic planner. For that purpose, we propose a Motion Feasibility Checker (MFC), which quickly decides whether a given set of pick-and-place poses can be executed with respect to the robot's kinematics. The MFC uses a database of possible pick/place poses of the target robot; given the initial and target poses of the item, the MFC finds a set of pick-and-place poses to execute that action with the robot. We use Monte Carlo Tree Search (MCTS) to achieve high performance in the symbolic planning. In the proposed method, the MCTS searches for the goal while collaborating with the MFC. We tested the proposed method in a simulation environment on a sandwich-rearranging task in a convenience store setup.
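The coupling of MCTS with a feasibility check can be sketched on a toy problem. The rules below are invented stand-ins for the paper's MFC database: three items must be placed, item 1 is only reachable after item 2 is moved, and placing item 0 first blocks item 2 for good, so ordering matters:

```python
import math, random

random.seed(0)

def feasible(placed, item):
    # Hypothetical kinematic constraints standing in for an MFC lookup.
    if item == 1 and 2 not in placed:
        return False
    if item == 2 and 0 in placed:
        return False
    return True

def actions(placed):
    return [i for i in (0, 1, 2) if i not in placed and feasible(placed, i)]

class Node:
    def __init__(self, placed, parent=None):
        self.placed, self.parent = placed, parent
        self.children, self.visits, self.value = {}, 0, 0.0

def rollout(placed):
    # Random playout; reward = fraction of items successfully placed.
    placed = set(placed)
    while actions(placed):
        placed.add(random.choice(actions(placed)))
    return len(placed) / 3.0

def search(root, iters=200):
    for _ in range(iters):
        node = root
        # Selection: follow UCB1 while all actions are already expanded.
        while actions(node.placed) and len(node.children) == len(actions(node.placed)):
            node = max(node.children.values(),
                       key=lambda c: c.value / c.visits +
                       math.sqrt(2 * math.log(node.visits) / c.visits))
        # Expansion of one untried, feasible action.
        untried = [a for a in actions(node.placed) if a not in node.children]
        if untried:
            a = random.choice(untried)
            node.children[a] = Node(node.placed | {a}, node)
            node = node.children[a]
        # Simulation and backpropagation.
        reward = rollout(node.placed)
        while node:
            node.visits += 1
            node.value += reward
            node = node.parent

root = Node(frozenset())
search(root)
best_first_move = max(root.children, key=lambda a: root.children[a].visits)
```

Because infeasible placements are pruned by the check before expansion, the search only spends its budget on action sequences the (toy) feasibility database would accept, and it learns to move item 2 first.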

15:40-16:00, Paper WeB2.6
A Free-Rotating Gripper for Grasping or Rotating an Object (I)

Teraguchi, Tomoya | Tokyo Metropolitan University
Wada, Kazuyoshi | Tokyo Metropolitan University
Seki, Masashi | Tokyo Metropolitan University
Tomizawa, Tetsuo | National Defense Academy of Japan
Keywords: Mechanism Design, Mechatronics Systems, Automation Systems
Abstract: Automation of product display and disposal is strongly required at convenience stores. However, it is difficult for robots to correct the posture of fallen products and to grasp and transport products in any posture. In this paper, we developed a robot hand with a free-rotating gripper that can easily switch between grasping and rotating products for efficient work. It has a rotating plate at the end of the hand and two grip modes: "Grasp mode" and "Rotation mode". Grasp mode prevents free rotation of the gripped object, and Rotation mode can change the posture of the gripped object. The posture of the object is determined by the relationship between the grip position and the position of the center of gravity. A performance evaluation experiment was conducted with this robot hand; the results confirmed that it was able to grip a cork block, a convenience-store rice ball, and a sandwich in both Grasp mode and Rotation mode, and that the gripper's behavior changed with the grip mode.

WeB3 Regular Session, Room 3
Welfare Systems I

Chair: Miyake, Tamon | Waseda University
Co-Chair: Akiyama, Yasuhiro | Nagoya University

14:00-14:20, Paper WeB3.1
A Ground-Stair Walking Strategy of the Assistive Device Based on RGB-D Camera

Yu, Shuai-Hong | Waseda University
Yang, Bo-Rong | Waseda University
Lee, Hee-hyol | Waseda University
Tanaka, Eiichiro | Waseda University
Keywords: Rehabilitation Systems, Vision Systems, Control Theory and Technology
Abstract: To ensure the safety of elderly people using a walking assistive device during the ground-stair transition, we propose a system that automatically switches the walking mode between level walking and stair climbing. The system uses a road-condition detection subsystem with an RGB-D camera and ultrasonic sensors, and the walking-mode transition is triggered by the detection of environment change. When the stairs are still far away, the RGB-D camera is responsible for stair detection; the ultrasonic sensors are used during the near approach to the stairs and during stair traversal. The ultrasonic sensors were mounted on the toe and heel to detect upward and downward stairs, respectively. During stair traversal, if the ultrasonic sensors find that the device's foot is too close to the edge of the stairs, the target trajectory is raised and the stride is adjusted to prevent collisions. Impedance control was introduced to let the device trace the predefined walking trajectories. In experiments, the device changed walking modes successfully when approaching the stairs. Results from three subjects showed that foot height increased compared with walking without assistance. This suggests that the system has great potential to improve the adaptability of walking assistive devices to different surroundings.
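The impedance control mentioned in the abstract can be sketched as a virtual spring-damper pulling a joint toward the target trajectory. This is a generic 1-DoF illustration; the gains and the unit mass are arbitrary assumptions, not the device's parameters:

```python
# Minimal 1-DoF impedance-control sketch (illustrative only; k, d, and the
# unit mass are invented, not the assistive device's parameters).
def impedance_force(x, v, x_d, v_d, k=100.0, d=20.0):
    """Virtual spring-damper force toward the target position/velocity."""
    return k * (x_d - x) + d * (v_d - v)

# Simulate a unit mass tracking a fixed target of 0.1 rad for 5 s.
x, v, dt = 0.0, 0.0, 0.001
for _ in range(5000):
    a = impedance_force(x, v, x_d=0.1, v_d=0.0)   # F = m*a with m = 1
    v += a * dt                                    # semi-implicit Euler step
    x += v * dt
```

With d = 2*sqrt(k*m) the virtual system is critically damped, so the joint settles on the raised target trajectory without overshoot, which is the behavior one wants near a stair edge.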

14:20-14:40, Paper WeB3.2
Kinematic Gait Stability Index Highly Correlated with the Margin of Stability: Concept and Interim Report

Iwasaki, Tomoyuki | Nagoya University
Okamoto, Shogo | Nagoya University
Akiyama, Yasuhiro | Nagoya University
Inagaki, Takashi | Nagoya University
Yamada, Yoji | Nagoya University
Keywords: Welfare systems
Abstract: Gait stability indices that are easy to measure and compute are necessary to enhance their commercial applications. We constructed a new kinematic gait stability index that is highly correlated with a popular kinetic stability index, the margin of stability (MoS). The new index is computed from the velocity of the center of mass of the human body, which can be easily measured by equipment such as inertial measurement units. A time-series extension of partial least squares regression was applied to the velocity series to construct principal motions. These motions were independent of each other and correlated with the MoS. The linear combination of these principal motions exhibited a high correlation coefficient of 0.82 with the minimum values of the MoS, suggesting that the proposed index can be used as an easy-to-measure alternative to the MoS.
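For context, the margin of stability referenced here is commonly defined (after Hof's extrapolated-center-of-mass formulation) as the distance between the base-of-support boundary and XcoM = x + v/omega0 with omega0 = sqrt(g/l). A minimal sketch with invented values:

```python
import math

# Sketch of the margin of stability (MoS): XcoM = x + v/omega0, compared
# against the anterior boundary of the base of support. Numbers are invented.
def margin_of_stability(com_pos, com_vel, bos_edge, leg_length, g=9.81):
    omega0 = math.sqrt(g / leg_length)       # inverted-pendulum eigenfrequency
    xcom = com_pos + com_vel / omega0        # extrapolated center of mass
    return bos_edge - xcom                   # positive = stable in this direction

mos = margin_of_stability(com_pos=0.02, com_vel=0.3,
                          bos_edge=0.20, leg_length=0.9)
```

The dependence on center-of-mass velocity is what makes a velocity-based kinematic index a plausible easy-to-measure surrogate for the MoS.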

14:40-15:00, Paper WeB3.3
Consideration about Structure of Gait Motion Assist Device for Trunk Using McKibben Type Pneumatic Artificial Muscle

Yase, Hayato | Kagawa University
Sasaki, Daisuke | Kagawa University
Kadowaki, Jun | Kagawa University
Kimura, Taiga | Kagawa University
Keywords: Welfare systems, Mechanism Design, Soft Robotics
Abstract: Stability and efficiency of walking depend on trunk motions such as lateral flexion and rotation. In this study, a spine-type wearable device that can correct posture and assist these trunk motions was developed to support exercise therapy for people with declined walking ability. The proposed device has the advantage of high affinity with the human body owing to its adjustable-stiffness mechanism. In this paper, the mechanism of the device, a design method for the artificial muscle based on rubber properties, and the assisting method are described. The mechanism imitates the spine structure to realize adjustable stiffness characteristics. We applied an artificial muscle model to simplify the mechanism and control method. Finally, the effectiveness of the proposed mechanism and assisting method was confirmed by experimental results.

15:00-15:20, Paper WeB3.4
Extraction of Shoulder Parts to Avoid Heavy Load Based on Pain While Walking with Backpack

Wako, Nenta | Waseda University
Miyake, Tamon | Waseda University
Sugano, Shigeki | Waseda University
Keywords: Human Factors and Human-in-the-Loop, Welfare systems
Abstract: When using a backpack, proper shoulder load reduction is required. We focused on pain (nociceptive pain), which is a warning signal that protects the human body, and aimed to extract the shoulder parts where heavy loads should be avoided while walking with a backpack. We set 19 measuring points on each shoulder and 12 measuring points on the lower back. Using three-axis tactile sensors, we then recorded the interface load on the shoulders and lower back under two back-panel conditions: a general flat panel and a panel with a lumbar pad. With 180 data points for mean load and 150 for peak load at each measuring point, we confirmed the load distribution and the load-shift effects of a lumbar pad by comparing the shoulder load and the lower-back load. The shoulder load data were then normalized by the pain threshold for a single-point pressure stimulus at each measuring point of the subject. The pain threshold was estimated by an approximate expression with a sigmoid function for pain scores, which were collected by subjective evaluation with a pain scale. In the statistical analysis, through multiple comparisons (Steel-Dwass test) between the mean normalized shoulder load at each measuring point and the mean value over the entire shoulder, we extracted seven potential high-risk points (coracoid process, medial and lateral parts of the clavicle region, medial and lateral parts of the ridgeline of the shoulder, and supraspinatus). Moreover, we observed that high-risk loads remained locally even after a significant reduction of the entire shoulder load with a lumbar pad. These results can be used to improve backpack design for proper loads on the shoulder.
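The sigmoid-based threshold estimation mentioned in the abstract can be sketched as follows: if subjective pain scores (0 to 1) follow a sigmoid of the applied load, the threshold is the load at which the fitted curve crosses a criterion score. The parameters below are invented, not fitted values from the paper:

```python
import math

def pain_score(load, l50=30.0, slope=5.0):
    """Fitted sigmoid: score 0.5 at l50 newtons (hypothetical values)."""
    return 1.0 / (1.0 + math.exp(-(load - l50) / slope))

def pain_threshold(criterion=0.5, l50=30.0, slope=5.0):
    """Invert the sigmoid to get the load at the criterion score."""
    return l50 - slope * math.log(1.0 / criterion - 1.0)

t = pain_threshold(criterion=0.25)   # load at which the score reaches 0.25
```

Normalizing each point's measured load by such a per-point threshold is what lets loads at anatomically different sites be compared on a common pain-referenced scale.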

15:20-15:40, Paper WeB3.5
Investigation of Relationship between Multi-Point Mechanical Stimuli on Shoulder and Overall Pain on Backpack Wearers

Wako, Nenta | Waseda University
Miyake, Tamon | Waseda University
Sugano, Shigeki | Waseda University
Keywords: Human Factors and Human-in-the-Loop, Modeling and Simulating Humans, Welfare systems
Abstract: With the increasing use of backpacks on a daily basis, appropriate assessment of the shoulder load, which has adverse effects on the body, has become more important. We focused on nociceptive pain, which is a physiological warning signal, and performed a subjective evaluation of loading conditions. In this study, we investigated the relationship between multi-point mechanical stimuli, measured at 38 points on the shoulder, and overall pain. In the experiment, eight subjects rated their pain levels under 24 loading conditions (combinations of 3 weight, 2 weight-distance, 2 weight-height, and 2 padding conditions) using a pain scale. In the statistical analysis, the overall pain intensities under different loading conditions were compared through ANOVA, and weight and distance from the body were confirmed as the main contributing factors. In the regression analysis, four different models were used to fit the overall data; a generalized linear model (GLM) with a polynomial sigmoid function resulted in the best fit. GLM fitting was also performed on the data after they had been divided into 8 groups based on combinations of distance, height, and padding. The independent variables, i.e., the selected combinations of loads at the measuring points, differed depending on the loading conditions. For more accurate regression, the loads that contribute to determining overall pain intensity should be selected appropriately according to the loading conditions. These results can be used to comprehensively evaluate backpack design based on shoulder pain.

WeB4 Special Session, Room 4
Robot Audition and Its System Integration - Part 1

Chair: Itoyama, Katsutoshi | Tokyo Institute of Technology
Co-Chair: Suzuki, Reiji | Nagoya University
Organizer: Itoyama, Katsutoshi | Tokyo Institute of Technology
Organizer: Hoshiba, Kotaro | Kanagawa University
Organizer: Kumon, Makoto | Kumamoto University
Organizer: Suzuki, Reiji | Nagoya University
Organizer: Matsubayashi, Shiho | Osaka University

14:00-14:20, Paper WeB4.1
Assessment of a Beamforming Implementation Developed for Surface Sound Source Separation (I)

Zhong, Zhi | Tokyo Institute of Technology
Shakeel, Muhammad | Tokyo Institute of Technology
Itoyama, Katsutoshi | Tokyo Institute of Technology
Nishida, Kenji | Tokyo Institute of Technology
Nakadai, Kazuhiro | Honda Research Inst. Japan Co., Ltd
Keywords: Human-Robot/System Interaction, System Simulation
Abstract: This paper presents the assessment of a scan-and-sum beamformer by numerical simulations. The scan-and-sum beamformer has been proposed and analyzed theoretically for the separation of general wide-band surface sources distributed in the azimuth dimension. Sound sources emitted from regions are called surface sources and tend to have various shapes and sizes, e.g., a waterfall or an orchestra on a stage. Conventionally, a sound source is modeled as a point source without shape or size; hence, conventional beamformers are mainly designed for point source separation. A scan-and-sum beamformer deploys a conventional beamformer as a sub-beamformer and scans the region where a target surface source exists at an appropriate scanning density. The separated surface source is formed through a weighted summation of sub-beamformer outputs. Implementations based on the MVDR sub-beamformer are presented under a framework that reduces overlapping calculation of inverse correlation matrices. For inverse correlation estimation, two methods are provided: block-wise processing, which further reduces computational cost, and RLS-based inverse matrix calculation, which offers greater estimation accuracy. A self-designed, diverse dataset with various mixtures of surface sound sources was also created to carry out extensive numerical simulations for a detailed comparison between the scan-and-sum beamformer and the conventional MVDR approach. The simulations validated the efficiency and effectiveness of the current implementation and showed that the proposed scan-and-sum beamformer outperforms a conventional one in surface sound source separation.
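The scan-and-sum idea can be sketched in a narrowband setting. For brevity, a delay-and-sum sub-beamformer stands in for the paper's MVDR sub-beamformer, and the array geometry, frequency, and sectors are invented:

```python
import cmath, math

# Narrowband scan-and-sum sketch: a conventional delay-and-sum sub-beamformer
# (standing in for MVDR) is steered at several directions across a target
# sector and the outputs are averaged. Geometry and frequency are invented.
C, F = 343.0, 1000.0                                  # sound speed (m/s), Hz
MICS = [(-0.15, 0.0), (-0.05, 0.0), (0.05, 0.0), (0.15, 0.0)]  # linear array

def steering(theta):
    """Per-microphone phase of a plane wave arriving from azimuth theta."""
    k = 2 * math.pi * F / C
    return [cmath.exp(1j * k * (x * math.cos(theta) + y * math.sin(theta)))
            for x, y in MICS]

def das_output(snapshot, theta):
    """Delay-and-sum sub-beamformer steered at theta."""
    a = steering(theta)
    return sum(s * v.conjugate() for s, v in zip(snapshot, a)) / len(MICS)

def scan_and_sum(snapshot, sector, n_scans=5):
    """Average sub-beamformer outputs scanned across [lo, hi] radians."""
    lo, hi = sector
    thetas = [lo + (hi - lo) * i / (n_scans - 1) for i in range(n_scans)]
    return sum(das_output(snapshot, t) for t in thetas) / n_scans

# A single snapshot from a point source at 60 degrees: the sector containing
# the source responds strongly, a disjoint sector responds weakly.
snap = steering(math.radians(60))
on = abs(scan_and_sum(snap, (math.radians(45), math.radians(75))))
off = abs(scan_and_sum(snap, (math.radians(105), math.radians(135))))
```

Replacing `das_output` with an MVDR sub-beamformer (weights R^-1 a / (a^H R^-1 a)) recovers the structure described in the abstract, where the inverse correlation matrix R^-1 is shared across scans.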

14:20-14:40, Paper WeB4.2
An Unsupervised Auditory Scene Analysis System Using Incremental Low-Dimensional Embedding (I)

Shinzato, Kenta | Kyoto University
Kojima, Ryosuke | Kyoto University
Keywords: Machine Learning, Systems for Field Applications, Automation Systems
Abstract: This paper addresses the low-dimensional embedding of sounds for unsupervised auditory scene analysis. Summarizing long-time recordings by mapping them into a low-dimensional space is an essential task in long-time environmental monitoring. In this paper, we propose a novel low-dimensional embedding system using an incremental embedding algorithm. To analyze long-time recordings, we design an incremental system consisting of recording, feature extraction, low-dimensional embedding, and visualization. Recently, many low-dimensional embedding methods for acoustic scenes have been studied; however, the applicability of these methods to long-time recordings has not been adequately evaluated. Thus, this paper describes the construction of the scene analysis system and evaluates its performance. We especially focus on two important viewpoints in long-time monitoring: incremental methods and the effects of noisy data. To realize an incremental system, we use Self-Organizing Nebulous Growths (SONG), which can incrementally construct a low-dimensional embedding space. In our experiments, we apply the system to bird song analysis under noisy conditions. Through preliminary experiments using benchmark datasets, we characterize the noise sensitivity of our system and its applicability to environmental monitoring.

14:40-15:00, Paper WeB4.3
Multi-Channel Environmental Sound Segmentation Utilizing Sound Source Localization and Separation U-Net (I)

Sudo, Yui | Tokyo Institute of Technology
Itoyama, Katsutoshi | Tokyo Institute of Technology
Nishida, Kenji | Tokyo Institute of Technology
Nakadai, Kazuhiro | Honda Research Inst. Japan Co., Ltd
Keywords: Machine Learning, Human-Robot/System Interaction
Abstract: This paper proposes a multi-channel environmental sound segmentation method. Environmental sound segmentation is an integrated method that deals with sound source localization, sound source separation, and class identification. When multiple microphones are available, spatial features can be used to improve the separation accuracy of signals from different directions; however, conventional methods have two drawbacks: (a) since sound source localization and separation using spatial features and class identification using spectral features are trained in the same neural network, the network overfits to the relationship between the direction of arrival and the class; and (b) although the permutation-invariant training used in speech recognition could be extended, it is not practical for environmental sounds due to the limit on the maximum number of speakers. This paper proposes a multi-channel environmental sound segmentation method that combines a U-Net, which simultaneously performs sound source localization and separation, with a convolutional neural network that classifies the separated sounds. This method prevents overfitting to the relationship between the direction of arrival and the class. Simulation experiments using datasets containing 75 classes of environmental sounds showed that the root mean squared error of the proposed method was lower than that of the conventional method.
|
|
15:00-15:20, Paper WeB4.4 | Add to My Program |
EMC: Earthquake Magnitudes Classification on Seismic Signals Via Convolutional Recurrent Networks (I) |
|
Shakeel, Muhammad | Tokyo Institute of Technology |
Itoyama, Katsutoshi | Tokyo Institute of Technology |
Nishida, Kenji | Tokyo Institute of Technology
Nakadai, Kazuhiro | Honda Research Inst. Japan Co., Ltd |
Keywords: Machine Learning, Systems for Search and Rescue Applications, Environment Monitoring and Management
Abstract: We propose a novel framework for reliable automatic classification of earthquake magnitudes. Using deep learning methods, we classify earthquake magnitudes into different categories. The method is based on a convolutional recurrent neural network in which a new feature extraction approach using Log-Mel spectrogram representations is applied to seismic signals. The neural network is able to classify earthquake magnitudes from minor to major. The Stanford Earthquake Dataset (STEAD) is used to train and validate the proposed method. The evaluation results demonstrate the efficacy of the proposed method in a rigorous event-independent scenario, reaching an F-score of 67% depending on the earthquake magnitude.
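A Log-Mel front end of the kind described above can be sketched in a few lines of NumPy: a Hann-windowed STFT followed by a triangular mel filterbank and log compression. This is a generic illustration, not the authors' feature extractor; the FFT size, hop length, and number of mel bands are arbitrary assumptions.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def log_mel_spectrogram(x, sr, n_fft=512, hop=256, n_mels=40):
    """Log-Mel spectrogram of a 1-D signal: Hann-windowed STFT,
    triangular mel filterbank, then log compression."""
    win = np.hanning(n_fft)
    n_frames = 1 + (len(x) - n_fft) // hop
    frames = np.stack([x[i * hop:i * hop + n_fft] * win for i in range(n_frames)])
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2      # (frames, n_fft//2+1)
    # triangular filters with edges equally spaced on the mel scale
    edges = mel_to_hz(np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2), n_mels + 2))
    freqs = np.fft.rfftfreq(n_fft, 1.0 / sr)
    fb = np.zeros((n_mels, len(freqs)))
    for m in range(n_mels):
        lo, c, hi = edges[m], edges[m + 1], edges[m + 2]
        fb[m] = np.clip(np.minimum((freqs - lo) / (c - lo),
                                   (hi - freqs) / (hi - c)), 0.0, None)
    return np.log(power @ fb.T + 1e-10)                   # (frames, n_mels)
```

For a pure tone, the mel band with the largest average log energy should sit near the tone's frequency, which is a quick sanity check on the filterbank layout.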
|
|
15:20-15:40, Paper WeB4.5 | Add to My Program |
Sound Source Tracking Using Integrated Direction Likelihood for Drones with Microphone Arrays (I) |
|
Yamada, Taiki | Tokyo Institute of Technology |
Itoyama, Katsutoshi | Tokyo Institute of Technology |
Nishida, Kenji | Tokyo Institute of Technology
Nakadai, Kazuhiro | Honda Research Inst. Japan Co., Ltd |
Keywords: Systems for Search and Rescue Applications, Sensor Fusion, Formal Methods in System Integration
Abstract: This paper presents a method for sound source localization and tracking using drones equipped with microphone arrays. Signal processing with multiple microphone arrays enables estimation of the sound source location via triangulation. In drone applications, however, triangulation is difficult because direction estimation is generally unstable due to drone noise. Accordingly, we propose a novel method called PArticle Filtering with Integrated MUSIC (PAFIM). PAFIM adopts a direct localization strategy that estimates the location likelihood distribution rather than first estimating the sound source direction, so source locations can be estimated more robustly than with discrete triangulation. Numerical simulations demonstrated that PAFIM is effective in tracking moving sound sources, with a root mean square error within 15% of the sound source distance.
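The direct-localization idea behind PAFIM, weighting position hypotheses by a direction likelihood instead of intersecting discrete direction estimates, can be illustrated with a generic bearing-only particle filter. This sketch is not the authors' algorithm: the random-walk motion model, the von-Mises-style bearing weight, and all parameter values are illustrative assumptions.

```python
import numpy as np

def particle_filter_track(bearing_obs, array_positions, n_particles=2000,
                          step=0.1, kappa=20.0, seed=0):
    """Bearing-only particle filter over 2D source positions.
    bearing_obs: per-frame tuples of bearings (radians), one per array."""
    rng = np.random.default_rng(seed)
    particles = rng.uniform(-5.0, 5.0, size=(n_particles, 2))
    estimates = []
    for obs in bearing_obs:
        # predict: random-walk motion model
        particles += rng.normal(0.0, step, size=particles.shape)
        # update: von-Mises-style weight against each array's measured bearing
        w = np.ones(n_particles)
        for (ax, ay), theta in zip(array_positions, obs):
            pred = np.arctan2(particles[:, 1] - ay, particles[:, 0] - ax)
            w *= np.exp(kappa * np.cos(pred - theta))
        w /= w.sum()
        estimates.append(particles.T @ w)   # weighted-mean position estimate
        # resample according to the weights
        particles = particles[rng.choice(n_particles, n_particles, p=w)]
    return np.array(estimates)
```

With two arrays observing exact bearings to a stationary source, the weighted mean converges near the ray intersection after a few frames.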
|
|
WeC1 Regular Session, Room 1 |
Add to My Program |
Vision Systems II |
|
|
Chair: Miura, Jun | Toyohashi University of Technology |
Co-Chair: Takemura, Hiroshi | Tokyo University of Science |
|
16:20-16:40, Paper WeC1.1 | Add to My Program |
Automatic Wood Splinter Detection System Using Cotton for Gymnasium Inspection |
|
Inamine, Moriaki | Tokyo University of Science |
Inaba, Wataru | Tokyo University of Science |
Tsuichihara, Satoki | University of Fukui |
Takemura, Hiroshi | Tokyo University of Science |
Sumiya, Shigeki | Senoh Corporation |
Keywords: Automation Systems, Vision Systems
Abstract: The purpose of this research is to develop equipment that inspects a gymnasium floor in a short time, with few people and few inspection omissions. The wood splinters to be detected are very small relative to the gymnasium floor and are hard to find directly. In this research, we attach cotton to the wood splinters and find them by detecting the cotton. We developed apparatuses that attach cotton and detect wood splinters while moving through the gymnasium. The detection procedure is as follows: first, attach cotton at the front of the apparatus; second, move the apparatus forward; third, film the cotton behind the apparatus; fourth, analyze the video and detect the cotton with an object recognition method, You Only Look Once (YOLO). Because wood splinters point along the longitudinal direction of the floorboards, the apparatus must move in both the forward and reverse directions; to satisfy this, we adopted a rectangular scan (snake scan) as the inspection route. In addition, we built a system that generates the route from a range set by the inspector, and a system that displays the coordinates where cotton was detected on a map together with the detection image when a coordinate is selected. In an accuracy test of attaching cotton to simulated wood splinters, the attachment accuracy was 76%. In a precision test of detecting cotton, the true positive rate was 99%, but there were many false positives. Overall, we can detect 75% of splinters. In a running test through the gymnasium, the total inspection time was 87 minutes, operation required one person for setup and none while running, and inspection omissions were at most 2.4%. We thus developed equipment that inspects the gymnasium floor in a short time with a small number of people. We will improve the attachment accuracy and reduce false positives in cotton detection in future work.
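The rectangular (snake) scan route described above can be generated as a boustrophedon waypoint list over the inspection area. A minimal sketch, assuming a rectangular area anchored at the origin; the lane spacing would in practice come from the camera footprint:

```python
def snake_scan(width, height, lane_spacing):
    """Boustrophedon (snake-scan) waypoints covering a width x height area,
    alternating the sweep direction on each lane."""
    waypoints = []
    y, forward = 0.0, True
    while y <= height + 1e-9:
        if forward:
            waypoints += [(0.0, y), (width, y)]
        else:
            waypoints += [(width, y), (0.0, y)]
        y += lane_spacing
        forward = not forward
    return waypoints
```

Each lane ends where the next begins, so the apparatus never retraces a lane, and every floorboard is swept in both travel directions across adjacent lanes.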
|
|
16:40-17:00, Paper WeC1.2 | Add to My Program |
A Long-Time Automated Honeybees Count System Using High-Speed Vision |
|
Yoshida, Hironori | Hiroshima University |
Shimasaki, Kohei | Hiroshima University |
Senoo, Taku | Hiroshima University |
Ishii, Idaku | Hiroshima University |
Yamamoto, Kazuhiko | Department of Biotechnology and Chemistry, Kindai University |
Keywords: Vision Systems, Surveillance Systems, Automation Systems
Abstract: Honeybees play a significant role in increasing the effectiveness of agricultural practices. However, their numbers have declined worryingly in recent years for several reasons, exemplified by colony collapse disorder, so honeybee monitoring has recently attracted increased research attention. The method presented in this paper counts honeybees flying in their natural environment by capturing high-frame-rate (HFR) images. It inspects periodic changes in image brightness caused by the honeybees' wing flapping at frequencies of the order of hundreds of hertz. Vibration-source localization based on the short-time Fourier transform of all pixels enables the detection of bees even when their individual appearances remain unclear. The number of honeybees flying in an image is determined in three steps: (1) acquisition of high-speed images, whose histograms are equalized to be robust to changes in ambient lighting; (2) vibration-source localization, which captures luminance fluctuations of a specific frequency at the pixel level; and (3) a counting process that labels localized pixels as connected components, which are then counted. By equalizing the brightness histogram of the captured images, the system is resistant to changes in sunlight, and it can reliably determine the number of honeybees flying near their hive per day in counting experiments lasting several hours (eight or more).
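The pixel-level vibration-source step, flagging pixels whose brightness fluctuates in the wing-beat band, can be sketched with an FFT over a stack of frames. A generic illustration, not the paper's pipeline; the frame rate, frequency band, and energy-ratio threshold are assumed values.

```python
import numpy as np

def detect_vibrating_pixels(frames, fps, f_lo, f_hi, ratio=0.5):
    """frames: (T, H, W) brightness stack. Flag pixels whose spectral energy
    in [f_lo, f_hi] Hz exceeds `ratio` of their total AC energy."""
    T = frames.shape[0]
    ac = frames - frames.mean(axis=0)               # remove per-pixel DC level
    spec = np.abs(np.fft.rfft(ac, axis=0)) ** 2     # per-pixel power spectrum
    freqs = np.fft.rfftfreq(T, d=1.0 / fps)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    total = spec[1:].sum(axis=0) + 1e-12            # skip the DC bin
    return spec[band].sum(axis=0) / total > ratio
```

A pixel oscillating at 200 Hz in a 1000 fps stack is flagged, while static pixels are not, regardless of their absolute brightness.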
|
|
17:00-17:20, Paper WeC1.3 | Add to My Program |
Viewpoint Planning for Automated Fruit Harvesting Using Deep Learning |
|
Rehman, Hafiza Ufaq | Toyohashi University of Technology |
Miura, Jun | Toyohashi University of Technology |
Keywords: Vision Systems, Machine Learning, Systems for Field Applications
Abstract: This study presents a viewpoint planning method for harvesting robots to improve fruit detection results. With viewpoint planning, robots can employ active sensing instead of relying on a single fixed viewpoint. The planner takes the current scene as input and outputs the best of three pre-defined viewpoints: move left, move right, or stay at the current position. We formulate viewpoint planning as a classification problem and implement it with a deep neural network. We extract local fruit regions, together with the neighboring regions surrounding each fruit, from the current scene using a fruit detector. After fruit-wise classification, the viewpoint planner uses the labels assigned by the classifier to select the best viewpoint. Our system classifies fruits with up to 82.9% and 81% accuracy on unseen test data for computer-graphics and real-farm datasets, respectively. Overall, we conclude that deep learning is a promising direction for advancing the state of the art in harvesting robots.
|
|
17:20-17:40, Paper WeC1.4 | Add to My Program |
KMOP-vSLAM: Dynamic Visual SLAM for RGB-D Cameras Using K-Means and OpenPose |
|
Liu, Yubao | Toyohashi University of Technology |
Miura, Jun | Toyohashi University of Technology |
Keywords: Vision Systems, Motion and Path Planning, Control Theory and Technology
Abstract: Although tremendous progress has been made in Simultaneous Localization and Mapping (SLAM), the scene rigidity assumption limits the wide use of visual SLAM in real-world computer vision, smart robotics, and augmented reality applications. To make SLAM more robust in dynamic environments, outliers on dynamic objects, including unknown objects, need to be removed from the tracking process. To address this challenge, we present a novel real-time visual SLAM system, KMOP-vSLAM, which adds unsupervised learning segmentation and human detection to reduce tracking drift in indoor dynamic environments. An efficient geometric outlier detection method is proposed that uses dynamic information from previous frames, together with a novel probability model, to judge moving objects with the help of geometric constraints and human detection. Outlier features belonging to moving objects are largely detected and removed from tracking. The well-known TUM dataset is used to evaluate tracking errors in dynamic scenes where people walk around. Our approach yields a significantly lower trajectory error than state-of-the-art visual SLAM methods using an RGB-D camera.
|
|
17:40-18:00, Paper WeC1.5 | Add to My Program |
Real-Time Interpolation Method for Sparse LiDAR Point Cloud Using RGB Camera |
|
Hasegawa, Tomohiko | Hokkaido University |
Emaru, Takanori | Hokkaido University |
Ravankar, Ankit | Faculty of Engineering, Hokkaido University |
Keywords: Sensor Fusion, Vision Systems, Autonomous Vehicle Navigation
Abstract: LiDAR (Light Detection and Ranging) sensor-based mapping and navigation is one of the fundamental techniques for achieving autonomous driving in urban scenarios. LiDARs generate long-distance omni-directional measurements of their surroundings, which are crucial for object detection and obstacle avoidance in autonomous vehicles. Currently, the most popular LiDAR sensors for autonomous driving are 360-degree rotating multi-layer sensors, which are expensive for general use and suffer from poor vertical resolution. In this study, we propose a method for interpolating a LiDAR point cloud in real time using information from an RGB camera. We propose treating the sparse point cloud as a depth image, which enables us to apply depth enhancement methods to the point cloud. Additionally, we propose a new depth enhancement method using image segmentation and compare its accuracy with existing methods. The results demonstrate the usefulness of the depth-image representation and of the new depth enhancement method for LiDAR point clouds.
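Once a sparse point cloud is projected into a depth image, simple image-domain hole filling becomes applicable. Below is a minimal neighborhood-averaging fill over invalid (zero) pixels, not the authors' segmentation-guided method; the 3x3 window and iteration count are assumptions.

```python
import numpy as np

def fill_sparse_depth(depth, iterations=1):
    """Fill zero (missing) pixels of a sparse depth image with the mean of
    valid 3x3 neighbors; repeating grows coverage outward from valid pixels."""
    d = depth.astype(float).copy()
    H, W = d.shape
    for _ in range(iterations):
        valid = (d > 0).astype(float)
        pd = np.pad(d, 1)
        pv = np.pad(valid, 1)
        num = np.zeros_like(d)
        den = np.zeros_like(d)
        for dy in (0, 1, 2):          # sum shifted copies of the padded image
            for dx in (0, 1, 2):
                num += pd[dy:dy + H, dx:dx + W] * pv[dy:dy + H, dx:dx + W]
                den += pv[dy:dy + H, dx:dx + W]
        fill = (d == 0) & (den > 0)
        d[fill] = num[fill] / den[fill]
    return d
```

Pixels with no valid neighbor stay empty, so each iteration only extends depth one pixel beyond the current valid region, which keeps the fill conservative.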
|
|
18:00-18:20, Paper WeC1.6 | Add to My Program |
Pose Estimation of a Simple-Shaped Object Based on PoseClass Using RGBD Camera |
|
Yamada, Rikuto | Meijo University |
Yamamori, Koki | Meijo University |
Tasaki, Tsuyoshi | Meijo University |
Keywords: Vision Systems, Machine Learning
Abstract: The problem of pose estimation of a simple-shaped object using an RGBD camera is addressed, with the purpose of developing a robot capable of arranging goods. The demand for such robots in retail stores is high. However, the goods are usually simple in shape, such as a rectangular box or a triangular prism, and it is difficult to estimate their pose using conventional methods based on shape features without a rough pose as an initial value. In this study, a new concept called PoseClass is proposed, in which the object surface placed on the shelf is treated as a class, and a deep neural network (DNN) is developed that estimates the PoseClass and outputs the pose. The developed method is 3.8 times more accurate than previous DNN-based methods.
|
|
WeC2 Regular Session, Room 2 |
Add to My Program |
Software Design and Framework I |
|
|
Chair: Matsubara, Katsuya | Future University Hakodate |
Co-Chair: Ho, Van | Japan Advanced Institute of Science and Technology |
|
16:20-16:40, Paper WeC2.1 | Add to My Program |
Efficient Tailoring of Universal Software Components for Conserving Resources: Evaluation in Practical ROS-Based System |
|
Shizukuishi, Takuya | Future University Hakodate |
Matsubara, Katsuya | Future University Hakodate |
Keywords: Software, Middleware and Programming Environments, Software Platform, Integration Platform
Abstract: The method of development for robot systems is shifting toward rapid and efficient construction utilizing universal software components, such as various common libraries and operating systems, with tailored implementations rather than programming from scratch. The Robot Operating System (ROS), which is becoming widely used as a robotics infrastructure, is built on the Linux operating system, which has been commonly used in PC and server environments. However, many universal software components may contain functionalities and implementation codes that are unnecessary in certain cases, and these may have negative impacts, especially on resource-constrained platforms. Unfortunately, identifying and eliminating unnecessary code appropriately is a difficult process because it requires a deep understanding of the system behavior and implementation details of the component. This study proposes a dynamic analysis method and tools to help determine the necessity of each functionality of the Linux kernel, one of the largest universal software components in a ROS-based system, on the basis of energy and memory consumption. To evaluate the practicality of the proposal, we conduct an experimental application in a practical ROS-based robot system. The results show that even without understanding the details of the whole kernel, memory consumption can be reduced by about 40% by efficiently disabling unnecessary features.
|
|
16:40-17:00, Paper WeC2.2 | Add to My Program |
Determining the Configuration of a Manipulator Using Minimal Amount of Passive Fiducial Markers (I) |
|
Erich, Floris Marc Arden | Institute of Advanced Industrial Science and Technology |
Ando, Noriaki | National Institute of Advanced Industrial Science and Technology |
Keywords: Software, Middleware and Programming Environments, Control Theory and Technology, Human-Robot/System Interaction
Abstract: In this paper we present a method for determining the configuration of a robot using at least one fiducial marker per joint on average, tracked by an optical motion capture system. Typically a rigid body requires at least three markers per tracked body for a passive motion capture system to determine its configuration. By using the kinematic model of the robot we can determine the configuration of a robot by applying on average a single marker per joint. The relation between markers and the links of a robot is specified in a markerfile which can be constructed using custom developed tools. In this paper we present such tools and an algorithm for online determination of the configuration of the robot.
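The core idea, recovering joint angles from a single marker per joint by exploiting the kinematic model, can be illustrated for a planar serial arm. This is a hypothetical minimal example, not the authors' tool chain: link lengths and per-link marker offsets are assumed known (in the paper's terms, from the marker file), and each marker is assumed to lie along its link.

```python
import numpy as np

def marker_positions(angles, link_lengths, marker_offsets):
    """Forward kinematics of a planar serial arm: return one marker position
    per link, where marker i sits marker_offsets[i] along link i from joint i."""
    x = y = th = 0.0
    markers = []
    for a, length, off in zip(angles, link_lengths, marker_offsets):
        th += a
        markers.append((x + off * np.cos(th), y + off * np.sin(th)))
        x += length * np.cos(th)
        y += length * np.sin(th)
    return markers

def solve_joint_angles(markers, link_lengths):
    """Recover joint angles from one marker per link: each marker fixes the
    absolute direction of its link, so one scalar observation per joint suffices."""
    x = y = th = 0.0
    angles = []
    for (mx, my), length in zip(markers, link_lengths):
        abs_th = np.arctan2(my - y, mx - x)
        angles.append(abs_th - th)
        th = abs_th
        x += length * np.cos(th)
        y += length * np.sin(th)
    return angles
```

Because the marker's distance along the link does not affect the link direction, the recovery works for any nonzero offset, which is why one marker per joint is enough in this planar case.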
|
|
17:00-17:20, Paper WeC2.3 | Add to My Program |
A 3-Dimensional Printing System Using an Industrial Robotic Arm (I) |
|
Luu, Quan | Japan Advanced Institute of Science and Technology |
La, Hung | University of Nevada at Reno |
Ho, Van | Japan Advanced Institute of Science and Technology |
Keywords: Integration Platform, Automation Systems, Mechatronics Systems
Abstract: This paper describes the development of a three-dimensional (3D) printing system that integrates a six-degree-of-freedom industrial robot into a fused deposition modeling process. With the robot-based 3D printing system, printing on inclined planes becomes possible, which cannot be achieved by a conventional 3D printer. Moreover, robotic 3D printing is expected to achieve faster and smoother motion than its counterpart under the same temporal settings, thanks to a knowledge-based strategy for re-planning printing trajectories from a set of G-commands. The accurate execution of the printing trajectories and the other components required for the printing process (for example, an extruder) are regulated by the Robot Operating System (ROS). The efficiency of the printing system was evaluated by 3D printing a couple of simple 3D models using a six-axis Denso robot. The preliminary results revealed great potential for rapid prototyping and printing in close contact with humans, especially in the field of interactive manufacturing and human-robot collaboration.
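Re-planning printing trajectories from G-commands starts with parsing them into waypoints. A minimal sketch of such a parser, assuming absolute positioning (G90) and only linear G0/G1 moves; it is not the authors' knowledge-based re-planner.

```python
def parse_gcode(lines):
    """Parse G0/G1 moves into absolute (x, y, z, e) waypoints, carrying
    forward any axis a line does not mention (absolute-positioning assumption)."""
    state = {'X': 0.0, 'Y': 0.0, 'Z': 0.0, 'E': 0.0}
    waypoints = []
    for line in lines:
        line = line.split(';')[0].strip()   # drop trailing comments
        if not line:
            continue
        words = line.split()
        if words[0] in ('G0', 'G1'):
            for w in words[1:]:
                if w[0] in state:
                    state[w[0]] = float(w[1:])
            waypoints.append((state['X'], state['Y'], state['Z'], state['E']))
    return waypoints
```

The waypoint list can then be re-timed or re-oriented for the robot arm, with non-motion commands (temperatures, fan control) routed to the extruder controller instead.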
|
|
17:20-17:40, Paper WeC2.4 | Add to My Program |
Development of IoT Educational Materials for Engineering Students |
|
Kimura, Noriyuki | National Institute of Technology, Asahikawa College |
Okita, Yuto | National Institute of Technology, Asahikawa College |
Goka, Ryota | National Institute of Technology, Asahikawa College |
Yamazaki, Takuto | National Institute of Technology, Asahikawa College |
Satake, Toshifumi | National Institute of Technology, Asahikawa College |
Igo, Naoki | National Institute of Technology, Asahikawa College |
Keywords: Integration Platform, Sensor Fusion, Virtual Reality and Interfaces
Abstract: In recent years, IoT technology has spread rapidly; however, there is a lack of appropriate educational materials for students. The educational materials in this paper refer to IoT educational materials for engineering students. Our survey found two problems: engineers and students who have just started learning IoT cannot understand it because suitable educational materials do not exist, and there are few opportunities to learn IoT itself. Therefore, this study aims to have students develop IoT educational materials themselves and to improve the recognition of such materials by presenting them in the free category of the 29th KOSEN Programming Contest. In this system, the "drone field" is the center, and the "IoT field", "Virtual Reality (VR) / Augmented Reality (AR) field", and "3D printer field" are placed around it to form the IoT teaching materials. A step-by-step manual for beginners is provided. As a result, 52 teams applied for the KOSEN Programming Contest, and our system passed the first round of judging and won the Fighting Spirit Award.
|
|
17:40-18:00, Paper WeC2.5 | Add to My Program |
Development of a Tower-Type Cooking Robot |
|
Inagawa, Masahiro | Hirosaki University |
Takei, Toshinobu | Hirosaki University |
Imanishi, Etsujiro | Hirosaki University |
Keywords: Integration Platform, Hardware Platform, Software, Middleware and Programming Environments
Abstract: Cooking robots still have problems, such as poor cooking performance, unsatisfactory methods for executing recipe instructions, and large system scale. The objective of this study is to develop a cooking robot that improves cooking performance while reducing the scale of the system. In this paper, we propose a tower-type cooking robot that functions as a single, small-scale cooking robot system by implementing individual cooking mechanisms as units and stacking them together. In addition, we apply our recipe analysis algorithm to the proposed tower-type cooking robot to solve the problem of executing recipe instructions. To achieve our objective, we developed a prototype of a tower-type cooking robot and conducted an experiment. Since the improvement in cooking performance depends on each unit, we will improve their controls and mechanisms in the future.
|
|
WeC3 Regular Session, Room 3 |
Add to My Program |
Welfare Systems II |
|
|
Chair: Kikuchi, Takehito | Oita University |
Co-Chair: Wake, Naoki | Microsoft |
|
16:20-16:40, Paper WeC3.1 | Add to My Program |
A Learning-From-Observation Framework: One-Shot Robot Teaching for Grasp-Manipulation-Release Household Operations |
|
Wake, Naoki | Microsoft |
Arakawa, Riku | The University of Tokyo |
Yanokura, Iori | The University of Tokyo |
Kiyokawa, Takuya | Nara Institute of Science and Technology |
Sasabuchi, Kazuhiro | Microsoft |
Takamatsu, Jun | Nara Institute of Science and Technology |
Ikeuchi, Katsushi | Microsoft |
Keywords: Systems for Service/Assistive Applications, Formal Methods in System Integration, Multi-Modal Perception
Abstract: A household robot is expected to perform various manipulative operations with an understanding of the purpose of the task. To this end, a desirable robotic application should provide an on-site robot teaching framework for non-experts. Here we propose a Learning-from-Observation (LfO) framework for grasp-manipulation-release class household operations (GMR-operations). The framework maps human demonstrations to predefined task models through one-shot teaching. Each task model contains both high-level knowledge regarding the geometric constraints and low-level knowledge related to human postures. The key idea is to design a task model that 1) covers various GMR-operations and 2) includes human postures to achieve tasks. We verify the applicability of our framework by testing an operational LfO system with a real robot. In addition, we quantify the coverage of the task model by analyzing online videos of household operations. In the context of one-shot robot teaching, the contribution of this study is a framework that 1) covers various GMR-operations and 2) mimics human postures during the operations.
|
|
16:40-17:00, Paper WeC3.2 | Add to My Program |
Human Pose Recognition under Cloth-Like Objects from Depth Images Using a Synthetic Image Dataset with Cloth Simulation |
|
Ochi, Shunsuke | Toyohashi University of Technology |
Miura, Jun | Toyohashi University of Technology |
Keywords: Systems for Service/Assistive Applications, Vision Systems, Welfare systems
Abstract: This paper proposes a method of human pose recognition when the body is largely covered by cloth-like objects such as blankets. Such recognition is useful for robotic monitoring of the elderly and the disabled. Human pose recognition under cloth-like objects is challenging due to the large variety of shapes the covering objects can take, and our use of depth images, chosen to address privacy and illumination issues, makes the problem harder still. In this paper, we utilize computer graphics tools, including cloth simulation, to generate a synthetic dataset, which is then used to train a deep neural network for body-part segmentation. We achieved around 90% accuracy on synthetic data, showing the effectiveness of simulating cloth-like objects in data generation. We also applied the network to real data and examined the results to identify remaining issues.
|
|
17:00-17:20, Paper WeC3.3 | Add to My Program |
Urine Volume Estimation by Electrical Impedance Tomography with Fewer Electrodes: A Simulation Study |
|
Noyori, Shuhei | The University of Tokyo |
Noguchi, Hiroshi | Osaka City University |
Nakagami, Gojiro | The University of Tokyo |
Mori, Taketoshi | The University of Tokyo |
Sanada, Hiromi | The University of Tokyo |
Keywords: Welfare systems, Rehabilitation Systems, Systems for Service/Assistive Applications
Abstract: Urinary incontinence is prevalent among elderly people. Recent studies have demonstrated the effectiveness of continence care based on bladder volume measurement in elderly people, who maintain the urinary storage function but have difficulty in feeling bladder fullness due to dementia or some neurological disorders. To help caregivers provide the bladder volume measurement-based care, our group is developing a wearable bladder volume sensor with an error of < 20 mL (10% error around 200 mL). This study evaluated the performance of a bladder volume estimation by impedance measurement with fewer electrodes than ordinary electrical impedance tomography. The result in simulation data by 8-electrode measurement showed a root mean square error of 27.7 (0.2) mL (mean (SD)) with the normal measurement noise, and this decreased to 15.2 (0.1) mL with a smaller noise. This study confirmed that the impedance measurement with eight electrodes accurately estimates the bladder volume. As the urine conductivity is subject to diurnal variation, we tested the model in the mixed urine conductivity condition. The root mean square error increased to 57.2 (0.3) mL and was not improved even with smaller noise. Further research will include the development of an estimation model that is robust to urine conductivity variations.
|
|
17:20-17:40, Paper WeC3.4 | Add to My Program |
Motion Analysis of Transfer Operation from Bed to Wheelchair for Care-Giver and -Receiver with Wearable Motion Capture |
|
Kikuchi, Takehito | Oita University |
Shimazu, Okito | Oita University |
Yamamoto, Yasuhiro | Nishi-Nippon Junior College
Nakano, Mikiko | Nishi-Nippon Junior College
Ichiyama, Sachiko | Nishi-Nippon Junior College
Kawai, Sayuri | Nishi-Nippon Junior College
Orii, Asuka | Nishi-Nippon Junior College
Tanabe, Shinichi | Nishi-Nippon Junior College
Keywords: Welfare systems, Rehabilitation Systems
Abstract: Low back pain (LBP) is a common and major health problem in caregiving work. There is strong epidemiological evidence that the physical demands of work are associated with increased reports of back pain. High-quality education for caregivers plays an important role in avoiding such work-related injuries; however, evidence-based education for caregivers has not been sufficiently established. To measure and analyze caregiving operations, which involve many body contacts, we developed a motion analysis system and software as an educational tool for caregivers. The system is composed of two sets of wearable motion capture sensors with 31 inertial measurement units (IMUs). Using it, we measured transfer operations from a bed to a wheelchair for professional and amateur caregivers and compared their features.
|
|
WeC4 Special Session, Room 4 |
Add to My Program |
Robot Audition and Its System Integration - Part 2 |
|
|
Chair: Itoyama, Katsutoshi | Tokyo Institute of Technology |
Co-Chair: Kumon, Makoto | Kumamoto University |
Organizer: Itoyama, Katsutoshi | Tokyo Institute of Technology |
Organizer: Hoshiba, Kotaro | Kanagawa University |
Organizer: Kumon, Makoto | Kumamoto University |
Organizer: Suzuki, Reiji | Nagoya University |
Organizer: Matsubayashi, Shiho | Osaka University |
|
16:20-16:40, Paper WeC4.1 | Add to My Program |
Development of Microstructured Low Noise Propeller for Aerial Acoustic Surveillance (I) |
|
Noda, Ryusuke | Kyoto University |
Nakata, Toshiyuki | Chiba University |
Senda, Kei | Kyoto University |
Liu, Hao | Chiba University |
Keywords: Biomimetics, Systems for Search and Rescue Applications, Surveillance Systems
Abstract: The multi-rotor drone market is expected to expand rapidly in the next few years. Drones have huge potential for missions such as delivery and surveillance, and they will likely be operated in urban areas, close to humans. In this situation, drones are expected to cause noise pollution, and solutions are urgently required to prevent it. In this study, we developed a noise reduction method that utilizes an attachment at the trailing edge of the propellers. The effect of the attachment on aerodynamic and acoustic performance was investigated experimentally and numerically. From single-propeller experiments and simulations, we found that the trailing-edge attachment can reduce the rotation speed, and that the microstructure of the trailing edge is crucial for preventing the large-scale flow separations that greatly increase the noise level at high frequencies. With appropriate design, the attachment can reduce the noise level even below that of the baseline propeller. Our results indicate that a simple trailing-edge attachment may improve the acoustic performance of propellers and will be useful for aerial acoustic surveillance.
|
|
16:40-17:00, Paper WeC4.2 | Add to My Program |
Visualizing Directional Soundscapes of Bird Vocalizations Using Robot Audition Techniques (I) |
|
Suzuki, Reiji | Nagoya University |
Zhao, Hao | Nagoya University |
Sumitani, Shinji | Nagoya University |
Matsubayashi, Shiho | Osaka University |
Arita, Takaya | Nagoya University |
Nakadai, Kazuhiro | Honda Research Inst. Japan Co., Ltd. / Tokyo Institute of Technology
Okuno, Hiroshi G. | Kyoto University / Waseda University |
Keywords: Environment / Ecological Systems, Environment Monitoring and Management, Sensor Networks
Abstract: Visualisation of soundscape dynamics is one of the important topics in ecoacoustics. However, existing approaches have mainly focused on the soundscape in the frequency domain, while the soundscape in the directional or spatial domain is also essential for better understanding animal vocalizations. This paper proposes and discusses novel applications of robot audition techniques to visualize soundscape dynamics in the directional or spatial domain, using the directional information of sound sources obtained from the robot audition software HARK (Honda Research Institute Japan Audition for Robots with Kyoto University) and the birdsong localization software HARKBird. First, we create a false-color spectrogram in which the color reflects the direction of arrival of the separated sounds. We also visualize the distribution of directional soundscapes by combining the entropy of the likelihood of sound existence (the MUSIC spectrum) with a latent-space embedding method (UMAP). We applied these techniques to a 5-minute recording of six Zebra Finch individuals to show that the extracted visual information can reflect acoustic structures within the group of bird individuals in the directional domain.
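The entropy of a directional likelihood such as a MUSIC spatial spectrum is the ordinary Shannon entropy of the normalized spectrum: low entropy indicates one dominant direction, high entropy a diffuse soundscape. A minimal sketch (not the HARK implementation):

```python
import numpy as np

def direction_entropy(music_spectrum):
    """Shannon entropy (bits) of a normalized directional likelihood,
    e.g. a MUSIC spatial spectrum over candidate directions."""
    p = np.asarray(music_spectrum, dtype=float)
    p = p / p.sum()          # normalize to a probability distribution
    p = p[p > 0]             # 0 * log(0) is taken as 0
    return float(-(p * np.log2(p)).sum())
```

A uniform spectrum over N directions yields log2(N) bits, while a single-peak spectrum yields 0, so the value can be plotted over time as a one-dimensional summary of directional diversity.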
|
|
17:00-17:20, Paper WeC4.3 | Add to My Program |
Observing Nocturnal Birds Using Localization Techniques (I) |
|
Matsubayashi, Shiho | Osaka University |
Saito, Fumiyuki | IDEA Consultants, Inc |
Suzuki, Reiji | Nagoya University |
Nakadai, Kazuhiro | Honda Research Inst. Japan Co., Ltd |
Okuno, Hiroshi G. | Waseda University |
Keywords: Environment Monitoring and Management, Environment / Ecological Systems
Abstract: Although nocturnal birds in Japan are rare, they often play critical roles in the ecosystem. Because they are elusive, however, accurate and efficient monitoring of such birds has been a challenge for field researchers, and the difficulties multiply when the population size of the target species is decreasing. This paper introduces recording examples conducted in the field to unobtrusively monitor nocturnal birds in different environments using localization techniques. We observed two rare species whose conservation is of environmental concern: the ruddy-breasted crake (Porzana fusca) in a wetland and the Ural owl (Strix uralensis) in a forest. We localized the territorial calls of the crakes and the feeding and fledgling scenes of the owls using one or multiple microphone arrays. The localized sounds successfully captured fine-scale movements of these species in space and time that cannot easily be obtained with any other monitoring method. Our results provide the first cases of monitoring such rare species using microphone arrays in the field.
|
|
WeD1 Special Session, Room 1 |
Add to My Program |
Intelligent Sensing Applications for Human Assistive Systems |
|
|
Chair: Shimizu, Sota | Shibaura Institute of Technology |
Co-Chair: Motoi, Naoki | Kobe University |
Organizer: Motoi, Naoki | Kobe University |
Organizer: Igarashi, Hiroshi | Tokyo Denki University |
Organizer: Ito, Shin-ichi | Tokushima University |
Organizer: Shimizu, Sota | Shibaura Institute of Technology |
|
18:40-19:00, Paper WeD1.1 | Add to My Program |
Local Path Planning Method Based on Virtual Manipulators and Dynamic Window Approach for a Wheeled Mobile Robot (I) |
|
Kobayashi, Masato | Kobe University |
Motoi, Naoki | Kobe University |
Keywords: Human-Robot/System Interaction, Control Theory and Technology, Motion and Path Planning
Abstract: This paper proposes a local path planning method based on virtual manipulators and the dynamic window approach (VMDWA) for a wheeled mobile robot. With conventional methods such as the dynamic window approach (DWA), the mobile robot follows a desired path and avoids obstacles in dynamic and static environments. However, the DWA calculates predicted paths assuming constant velocities, and the best path is then selected from these candidates using a cost function. Because the velocities are held constant, the candidate paths cannot generate flexible movements, and it may not be possible to generate a path that avoids collision with obstacles. To solve these problems, this paper proposes VMDWA, in which virtual manipulators are taken into account when calculating the DWA-based path plan. This makes it possible to use variable velocity values for the predicted paths, and VMDWA selects the optimal path from the resulting candidates. Simulation results confirmed the effectiveness of VMDWA.
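The baseline DWA step that the abstract contrasts against can be sketched as follows (a toy illustration, not the authors' VMDWA; the cost weights, the 0.2 m collision radius, and the velocity samples are all invented). Each candidate (v, w) pair is rolled out as a constant-velocity arc, which is exactly the rigidity the proposed method relaxes:

```python
import math

def predict(x, y, th, v, w, dt=0.1, steps=10):
    """Roll out a constant-velocity arc, as classical DWA assumes."""
    traj = []
    for _ in range(steps):
        th += w * dt
        x += v * math.cos(th) * dt
        y += v * math.sin(th) * dt
        traj.append((x, y))
    return traj

def dwa_select(pose, goal, obstacles, v_samples, w_samples):
    """Pick the (v, w) pair whose predicted arc scores best."""
    best, best_cost = None, float("inf")
    for v in v_samples:
        for w in w_samples:
            traj = predict(*pose, v, w)
            gx, gy = traj[-1]
            goal_cost = math.hypot(goal[0] - gx, goal[1] - gy)
            clearance = min(
                (math.hypot(ox - px, oy - py)
                 for px, py in traj for ox, oy in obstacles),
                default=float("inf"))
            if clearance < 0.2:          # arc collides -> reject
                continue
            cost = goal_cost + 1.0 / clearance
            if cost < best_cost:
                best, best_cost = (v, w), cost
    return best

cmd = dwa_select((0, 0, 0), goal=(2, 0), obstacles=[(1.0, 0.05)],
                 v_samples=[0.2, 0.5, 1.0], w_samples=[-0.5, 0.0, 0.5])
print(cmd)
```

Because v and w are frozen over the whole horizon, an arc that would be collision-free with a mid-horizon speed change is simply rejected; allowing the velocities to vary along the prediction is the gap VMDWA addresses.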
|
|
19:00-19:20, Paper WeD1.2 | Add to My Program |
SpheriCol: A Driving Assistance System for Power Wheelchairs Based on Spherical Vision and Range Measurements (I) |
|
Delmas, Sarah | MIS Laboratory, Université De Picardie Jules Verne |
Morbidi, Fabio | MIS Laboratory, Université De Picardie Jules Verne |
Caron, Guillaume | MIS Laboratory, Université De Picardie Jules Verne |
Albrand, Julien | MIS Laboratory, Université De Picardie Jules Verne |
Jeanne-Rose, Méven | MIS Laboratory, Université De Picardie Jules Verne |
Devigne, Louise | IRISA/Inria Rennes and INSA Rennes |
Babel, Marie | IRISA/Inria Rennes and INSA Rennes |
Keywords: Vision Systems, Systems for Service/Assistive Applications, Sensor Fusion
Abstract: This paper presents "SpheriCol", a new driving assistance system for power wheelchair users. The ROS-based aid system combines spherical images from a twin-fisheye camera with range measurements from on-board exteroceptive sensors to synthesize different augmented views of the surrounding environment. Experiments with a Quickie Salsa wheelchair show that SpheriCol improves situational awareness and supports the user's decisions in challenging maneuvers, such as passing through a door or centering in a corridor.
|
|
19:20-19:40, Paper WeD1.3 | Add to My Program |
Quick Torsion Torque Control Based on Model Error Compensator and Disturbance Observer with Torsion Torque Sensor (I) |
|
Kawai, Yusuke | Nagaoka University of Technology |
Nagao, Sora | Nagaoka University of Technology |
Yokokura, Yuki | Nagaoka University of Technology |
Ohishi, Kiyoshi | Nagaoka University of Technology |
Miyazaki, Toshimasa | Nagaoka University of Technology |
Keywords: Control Theory and Technology, Sensor Fusion, Human-Robot/System Interaction
Abstract: Conventionally, I-P-I-P torsion torque control has been proposed to realize load-side acceleration control, a robust motion control for flexible joint manipulators. However, the conventional torsion torque control is designed as a 4th-order delay system, which makes it difficult to improve the control bandwidth; reducing the order of the control system is therefore required. This paper proposes a quick torsion torque control based on a force and position sensor integrated disturbance observer (FPIDO) and a model error compensator (MEC) to improve human-interaction performance in flexible joint manipulators. Combining the FPIDO and MEC makes it possible to design the torsion torque control as a 2nd-order delay system. The proposed approach is verified through numerical simulations and experimental results.
|
|
19:40-20:00, Paper WeD1.4 | Add to My Program |
Tracking Method of Medaka Considering Proximity State (I) |
|
Sakakibara, Takanori | Kagawa University |
Takahashi, Satoru | Kagawa University |
Kawabata, Kuniaki | Japan Atomic Energy Agency |
Oda, Shoji | University of Tokyo |
Keywords: Vision Systems, Automation Systems, Human Interface
Abstract: In biology, analyzing group behavior is expected to clarify the mechanism by which groups form. However, it is very difficult to obtain such behavioral data manually. In this paper, we introduce a method for automatically extracting medaka swimming trajectories from time-series images in order to analyze medaka behavior. In particular, extraction accuracy is improved by explicitly handling the overlapping and approaching states of medaka in the image.
|
|
20:00-20:20, Paper WeD1.5 | Add to My Program |
Robust Heartbeat Interval Estimation Method against Various Postures on Bed Using Contactless Measurement (I) |
|
Sato, Tsuyoshi | Keio University |
Nakaigawa, Toshiya | Keio University |
Hamada, Nozomu | Keio University |
Mitsukura, Yasue | Keio University |
Keywords: Welfare systems, Systems for Service/Assistive Applications
Abstract: The purpose of this paper is to propose a robust heartbeat interval estimation system based on contactless measurement, enabling easy and stable health monitoring in daily life. In recent years, cardiovascular disease has become a major cause of death. The ballistocardiogram (BCG), which records the mechanical activity of the heart, has been studied as an unobtrusive measurement method. This technique offers the possibility of observing health status without causing any discomfort; however, the signal quality can vary greatly due to artifacts associated with breathing or the user's posture. Therefore, this study assesses a robust algorithm for estimating heartbeat intervals from BCG signals measured by high-sensitivity load sensors mounted on the bed legs. Three healthy subjects participated in experiments in which they lay on the bed in various postures. As a result, the mean beat-to-beat interval errors were less than about 50 ms when subjects remained in a decubitus position.
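The beat-to-beat interval estimation described above can be illustrated with a deliberately naive sketch (not the authors' algorithm; the synthetic sinusoidal "BCG", sampling rate, and threshold are invented): detect local maxima above a threshold and report the spacing between successive peaks in milliseconds.

```python
import math

def beat_intervals(signal, fs, threshold):
    """Naive beat-to-beat interval estimation: find local maxima
    above a threshold and return successive peak spacings in ms."""
    peaks = [i for i in range(1, len(signal) - 1)
             if signal[i] > threshold
             and signal[i] >= signal[i - 1]
             and signal[i] > signal[i + 1]]
    return [(b - a) * 1000.0 / fs for a, b in zip(peaks, peaks[1:])]

# Synthetic 1 Hz "heartbeat" sampled at 100 Hz
fs = 100
sig = [math.sin(2 * math.pi * 1.0 * n / fs) for n in range(3 * fs)]
print(beat_intervals(sig, fs, threshold=0.9))  # [1000.0, 1000.0]
```

A real BCG is far noisier than this sinusoid; the paper's contribution is precisely making such interval estimation robust to breathing and posture artifacts that defeat a fixed-threshold detector.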
|
|
WeD2 Regular Session, Room 2 |
Add to My Program |
Software Design and Framework II |
|
|
Chair: Yu, Hao | UiT the Arctic University of Norway |
Co-Chair: Kuwahara, Toshinori | Tohoku University |
|
18:40-19:00, Paper WeD2.1 | Add to My Program |
Development and Demonstration of the Mission Control System for Artificial Meteor Generating Micro-Satellites |
|
Shibuya, Yoshihiko | Tohoku University |
Sato, Yuji | Tohoku University |
Tomio, Hannah | Tohoku University |
Fujita, Shinya | Tohoku University |
Kuwahara, Toshinori | Tohoku University |
Kamachi, Koh | ALE Co., Ltd |
Watanabe, Hayato | ALE Co., Ltd |
Keywords: Network Systems, Integration Platform, Software Platform
Abstract: The Space Robotics Laboratory of Tohoku University and ALE Co., Ltd. have developed the micro-satellites "ALE-1" and "ALE-2" to demonstrate the generation of artificial meteors. These meteors will be created by the ejection of meteor particles on orbit. This mission necessitates strict safety requirements to prevent the released particles from colliding with other satellites and spacecraft. In this project, we constructed a ground system for the operation of these satellites and established an operation plan to meet their safety requirements. The ground system uses a virtual ground station interface to operate multiple ground stations around the world via an antenna sharing service. A meteor particle release simulation was conducted with ALE-2 on orbit to demonstrate and validate the ground system and operation plan. The results of this test, presented here, show that the ground system can be successfully used to conduct the artificial meteor mission and that these operations meet the safety requirements.
|
|
19:00-19:20, Paper WeD2.2 | Add to My Program |
Rowma: A Reconfigurable Robot Network Construction System |
|
Suenaga, Ryota | Meiji University |
Morioka, Kazuyuki | Meiji University |
Keywords: Software, Middleware and Programming Environments, Software Platform, Network Systems
Abstract: This paper describes an open-source system called Rowma that can form a network of ROS-based robots. Rowma manages multiple ROS-based robots on the cloud: it sends arbitrary data such as rosrun/roslaunch commands and topics, and exchanges topics between robots. Rowma can compose a network among any set of robots, and a user can select which topics are communicated between robots according to the application. Robots can easily join a network because Rowma integrates into an existing ROS-based system without affecting it, and developers can easily build robot applications using the distributed Rowma SDKs. Using this system, robots can exchange data with each other and be operated over the Internet. This paper describes one of the use cases of Rowma and shows measurements of the data transmission delay using Rowma in two types of cases.
|
|
19:20-19:40, Paper WeD2.3 | Add to My Program |
A Simulation-Based Approach for Improving the Performance of a Manufacturing System |
|
Azarian, Mohammad | UiT the Arctic University of Norway |
Yu, Hao | UiT the Arctic University of Norway |
Deng Solvang, Wei | The Arctic University of Norway |
Keywords: System Simulation, Enterprise Resource Planning Systems
Abstract: The latest manufacturing paradigms, e.g., flexible manufacturing systems (FMS) and reconfigurable manufacturing systems (RMS), have provided opportunities to achieve efficient and effective manufacturing processes. However, due to the inherent stochasticity of customer demands and other influencing factors, factories still face dynamic and complex challenges in production management, among which machine failure is one of the most significant. This research therefore investigates a novel solution that incorporates the concept of Industry 4.0 (I4.0) into a manufacturing process, where reconfigurable machines are used to compensate for the production loss of a dedicated production line caused by machine failure. A simulation-based approach is developed with FlexSim to examine two general configurations with respect to three key performance indicators (KPIs). The results illustrate that, by properly adopting the concepts and technologies of I4.0, the performance of a manufacturing process can be improved to react better to unexpected production events. Several recommendations for further development, from both technical and operational perspectives, are given in this paper.
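The compensation idea — a reconfigurable machine absorbing the production loss of a failed dedicated line — can be sketched with a toy stochastic simulation (a stdlib stand-in, not the FlexSim model; the rates, failure probability, and repair time are all invented):

```python
import random

def simulate_throughput(hours, fail_prob, repair_hours, backup=False,
                        rate=10, backup_rate=6, seed=42):
    """Monte Carlo sketch: a dedicated line that fails at random;
    an optional reconfigurable machine absorbs lost production."""
    random.seed(seed)
    produced, down = 0, 0
    for _ in range(hours):
        if down > 0:                 # line under repair
            down -= 1
            produced += backup_rate if backup else 0
        else:                        # line running
            produced += rate
            if random.random() < fail_prob:
                down = repair_hours
    return produced

base = simulate_throughput(1000, fail_prob=0.05, repair_hours=8)
with_backup = simulate_throughput(1000, fail_prob=0.05, repair_hours=8,
                                  backup=True)
print(base, with_backup)  # throughput KPI without / with the backup
```

With a fixed seed the failure schedule is identical in both runs, so the difference isolates the backup machine's contribution — the kind of KPI comparison the two FlexSim configurations are used for.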
|
|
WeD3 Special Session, Room 3 |
Add to My Program |
System Integration for Human Centric Design |
|
|
Chair: An, Qi | Kyushu University |
Co-Chair: Tsuchiya, Yoshio | National Institute of Technology, Tomakomai College |
Organizer: An, Qi | Kyushu University |
Organizer: Imamura, Yumeko | National Inst. of AIST |
Organizer: Tsuchiya, Yoshio | National Institute of Technology, Tomakomai College |
Organizer: Hamasaki, Shunsuke | The University of Tokyo |
|
18:40-19:00, Paper WeD3.1 | Add to My Program |
Subjective Evaluation of Lumbar Load and Tightening Force Using a Pelvic Belt (I) |
|
Tsuchiya, Yoshio | National Institute of Technology, Tomakomai College |
Tanaka, Takayuki | Hokkaido University |
Yoshida, Michihiro | Hokkaido University |
Keywords: Environment Monitoring and Management, Welfare systems, Biomimetics
Abstract: Lumbar loading causes increased intervertebral pressure and is an important factor in low back pain. However, it is difficult to quantitatively assess what kinds of actions affect lumbar load and how much the lumbar load changes. Low back pain occurs not only in the workplace but also in activities of daily living, so it is necessary to investigate factors in low back pain by measuring movements at various locations and determining the magnitude of the lumbar load. Lumbar load varies depending on posture and external load. However, each person perceives lumbar loading differently and at different times. It is also known that wearing a pelvic belt can improve posture and reduce lumbar load. Accordingly, in this study, the tightening force of a pelvic belt was measured to determine its effect on posture and lumbar load, and the relationship between perceived burden and lumbar load was evaluated.
|
|
19:00-19:20, Paper WeD3.2 | Add to My Program |
Development of Walking Assist Robot with Body Weight Support Mechanism (I) |
|
Dong, Zonghao | Tohoku University |
Salazar Luces, Jose Victorio | Tohoku University |
Hirata, Yasuhisa | Tohoku University |
Keywords: Rehabilitation Systems, Human-Robot/System Interaction, Mechatronics Systems
Abstract: Gait rehabilitation is a necessary part of early-stage treatment for patients suffering from stroke or spinal cord injury (SCI). However, patients with decreased lower-extremity muscle strength may have difficulty maintaining the stability of the upper trunk and risk falling. Therefore, it is necessary to provide patients with partial body weight support (PBWS) and to ensure safety during bipedal locomotion. In this paper, we introduce the mechatronic design of a walking assist robot with a body weight support (BWS) mechanism to assist locomotor rehabilitation training for patients with stroke or SCI. The BWS functionality is realized using a variable stiffness mechanism (VSM), and ground load signals are measured using a pair of force-sensor-based robotic shoes. The proposed control system is implemented on the QNX real-time operating system, and the experimental results illustrate the validity of the proposed robotic architecture.
|
|
19:20-19:40, Paper WeD3.3 | Add to My Program |
Task-Space Control Interface for SoftBank Humanoid Robots and Its Human-Robot Interaction Applications (I) |
|
Bolotnikova, Anastasia | SoftBank Robotics Europe, University of Montpellier–CNRS LIRMM |
Gergondet, Pierre | Beijing Advanced Innovation Center for Intelligent Robots and Systems |
Tanguy, Arnaud | CNRS-AIST Joint Robotics Laboratory |
Courtois, Sébastien | SoftBank Robotics Europe |
Kheddar, Abderrahmane | CNRS-AIST Joint Robotics Laboratory, University of Montpellier–CNRS LIRMM |
Keywords: Software, Middleware and Programming Environments, Systems for Service/Assistive Applications, Human-Robot/System Interaction
Abstract: We present an open-source software interface, called mc_naoqi, that allows whole-body task-space Quadratic Programming based control, implemented in the mc_rtc framework, to be performed on SoftBank Robotics Europe humanoid robots. We describe the control interface, the associated robot description packages, robot modules, and sample whole-body controllers. We demonstrate the use of these tools in simulation for a robot interacting with a human model. Finally, we showcase and discuss the use of the developed open-source tools for running close-contact human-robot interaction experiments with real human subjects, inspired by assistance scenarios.
|
|
19:40-20:00, Paper WeD3.4 | Add to My Program |
The Effect of Passive Lower Limb Assist Device on Muscle Synergy in Standing-Up Motion (I) |
|
Torigai, Shunsuke | Hokkaido University |
Tanaka, Takayuki | Hokkaido University |
Keywords: Welfare systems, Rehabilitation Systems, Biologically-Inspired Robotic Systems
Abstract: We aim to develop an assist system that corrects changes in muscle synergy during human body movements and supports those movements. In this paper, we investigate the effect of a passive lower limb assist device with four spring arrangement patterns on human muscle synergy in the standing-up motion. As a result, it was possible to change the activity of the muscle synergies that are dominant in the standing motion without changing their composition.
|
|
20:00-20:20, Paper WeD3.5 | Add to My Program |
Care Training Assistant Robot and Visual-Based Feedback for Elderly Care Education Environment |
|
Lee, Miran | Ritsumeikan University |
Tran, Dinh Tuan | Graduate School of Information Science and Engineering, Ritsumeikan University |
Yamazoe, Hirotake | University of Hyogo |
Lee, Joo-Ho | Ritsumeikan University |
Keywords: Systems for Service/Assistive Applications, Human-Robot/System Interaction, Modeling and Simulating Humans
Abstract: This paper proposes a care training assistant robot (CaTARo) with 3D facial pain expression for improving care skills in elderly care education. To ensure accurate and efficient training, this study focuses on a fuzzy-logic-based assessment method that calculates a pain level used as feedback. We developed CaTARo's 3D facial avatar with pain expressions based on the UNBC-McMaster Shoulder Pain Archive and generated four pain groups according to pain level. During care training, the pain expression is rendered in real time by projecting it onto a 3D facial mask with a beam projector. The results of the study confirm the feasibility of a care training robot with pain expression, and we conclude that, with further research, the proposed approach can potentially improve caregiving and nursing skills.
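The fuzzy-logic pain-level calculation can be illustrated with a toy inference sketch (not the authors' rule base; the membership functions, input variables, and rule outputs are all invented): fuzzify the inputs with triangular memberships, fire rules with min-AND, and defuzzify by weighted average.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def pain_level(joint_angle_error, speed):
    """Toy fuzzy inference: larger care-motion errors and faster
    motions map to a higher pain score in [0, 1]."""
    low_err = tri(joint_angle_error, -10, 0, 10)
    high_err = tri(joint_angle_error, 5, 20, 35)
    slow = tri(speed, -1, 0, 1)
    fast = tri(speed, 0.5, 2, 3.5)
    # Rule strengths (min for AND) -> weighted average of rule outputs
    rules = [(min(low_err, slow), 0.1),   # gentle care -> low pain
             (min(high_err, fast), 0.9)]  # rough care  -> high pain
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0

print(pain_level(2, 0.2))   # gentle handling -> low score
print(pain_level(25, 2.5))  # rough handling  -> high score
```

In the actual system the resulting pain level would select one of the four pain groups and drive the projected facial expression.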
|
|
WeD_VD5 Regular Session, Room 5 |
Add to My Program |
Video Demonstration Session - Part 1 |
|
|
Chair: Hoshino, Satoshi | Utsunomiya University |
Co-Chair: Carrasco, Joaquin | The University of Manchester |
|
18:40-19:00, Paper WeD_VD5.1 | Add to My Program |
Advanced Collaborative Robots for the Factory of the Future |
|
Rothomphiwat, Kongkiat | Vidyasirimedhi Institute of Science and Technology (VISTEC) |
Harnkhamen, Atthanat | Vidyasirimedhi Institute of Science and Technology (VISTEC) |
Tothong, Tanyatep | Vidyasirimedhi Institute of Science and Technology (VISTEC) |
Suthisomboon, Tachadol | Vidyasirimedhi Institute of Science and Technology (VISTEC) |
Dilokthanakul, Nat | Vidyasirimedhi Institute of Science and Technology (VISTEC) |
Manoonpong, Poramate | Vidyasirimedhi Institute of Science and Technology (VISTEC) |
Keywords: Multi-Robot Systems, Intelligent Transportation Systems, Integration Platform
Abstract: This paper presents an integrated robotic platform for advanced collaborative robots and demonstrates an application of multiple robots collaboratively transporting an object to different positions in a factory environment. The proposed platform integrates a drone, a mobile manipulator robot, and a dual-arm robot to work autonomously, while also collaborating with a human worker.
|
|
19:00-19:10, Paper WeD_VD5.2 | Add to My Program |
Study on Generality of Roller Chain Assembly Strategy with Parallel Jaw Gripper |
|
Tatemura, Keiki | Wakayama University |
Dobashi, Hiroki | Wakayama University |
Keywords: Factory Automation, Intelligent and Flexible Manufacturing, Automation Systems
Abstract: To realize a versatile and flexible robotic assembly system in manufacturing, robots are expected to handle not only rigid parts but also parts with flexible properties, such as roller chains. This paper studies the generality of our previously proposed strategy for roller chain assembly with a parallel jaw gripper. Taking the roller chain assembly of a belt drive unit designed for the World Robot Summit 2018 as an example, the generality of the strategy is experimentally verified with roller chains and sprockets of different dimensions.
|
|
19:10-19:20, Paper WeD_VD5.3 | Add to My Program |
Teach and Playback for Robotic Handling through Object Recognition |
|
Hoshino, Satoshi | Utsunomiya University |
Urayama, Kazuki | Utsunomiya University |
Keywords: Factory Automation, Vision Systems, Motion and Path Planning
Abstract: In this paper, we develop a teach-and-playback system for object handling. The robot in this system is assumed to be equipped with an RGB-D camera. In the teaching phase, the robot records 3D point cloud models of objects and is taught the respective handling motions through object recognition. In the playback phase, the position and posture of a target object are estimated through object recognition and 3D point cloud matching, and the taught motion is modified accordingly. In the experiments, we discuss the effectiveness of the teach-and-playback system for object handling.
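The pose-estimation step in the playback phase can be illustrated with a minimal 2D analogue of point cloud matching, assuming known point correspondences (the real system matches 3D clouds without given correspondences; this closed-form 2D rigid alignment is only a sketch):

```python
import math

def rigid_align_2d(model, scene):
    """Closed-form 2D rigid transform (theta, t) mapping model -> scene,
    given corresponding points; the 2D analogue of cloud matching."""
    n = len(model)
    mcx = sum(x for x, _ in model) / n
    mcy = sum(y for _, y in model) / n
    scx = sum(x for x, _ in scene) / n
    scy = sum(y for _, y in scene) / n
    # Rotation from the centred cross/dot correlation sums
    s_cross = sum((mx - mcx) * (sy - scy) - (my - mcy) * (sx - scx)
                  for (mx, my), (sx, sy) in zip(model, scene))
    s_dot = sum((mx - mcx) * (sx - scx) + (my - mcy) * (sy - scy)
                for (mx, my), (sx, sy) in zip(model, scene))
    theta = math.atan2(s_cross, s_dot)
    tx = scx - (mcx * math.cos(theta) - mcy * math.sin(theta))
    ty = scy - (mcx * math.sin(theta) + mcy * math.cos(theta))
    return theta, (tx, ty)

model = [(0, 0), (1, 0), (0, 1)]
# Scene = model rotated 90 deg and shifted by (2, 3)
scene = [(2, 3), (2, 4), (1, 3)]
theta, t = rigid_align_2d(model, scene)
print(round(math.degrees(theta), 1), round(t[0], 6), round(t[1], 6))
# 90.0 2.0 3.0
```

The recovered rotation and translation are exactly the pose offset used to warp the taught handling motion onto the object's current position and posture.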
|
|
19:20-19:30, Paper WeD_VD5.4 | Add to My Program |
Development of Rack Loading System for Plating Process |
|
Asaishi, Kenta | Iwate Industry Promotion Center |
Miura, Syuhei | Toa Denka Co., Ltd |
Chiba, Hiroshi | Toa Denka Co., Ltd |
Miyoshi, Tasuku | Iwate University |
Keywords: Factory Automation, Mechatronics Systems, Intelligent and Flexible Manufacturing
Abstract: As a plating method for precision parts, the rack loading method, in which the workpiece is placed on a rack, is often used. In many plating factories that use this method, workers perform the loading task manually. Existing model-based labor-saving rack-loading systems that rely on simulation results must be recalculated every time the shape or position of a workpiece or rack changes, so setup changes take considerable time and effort. To achieve a rack-loading task that requires fewer setup changes, we treated it as a non-contact inverse peg-in-hole problem. In this study, we constructed a rack loading system that recognizes the pin positions on the rack side and the hole positions on the workpiece side without any contact.
|
|
19:30-19:40, Paper WeD_VD5.5 | Add to My Program |
Development of Autonomous Explorer for Arctic Ice Survey |
|
Yonekura, Tatsuro | Iwate University |
Furudate, Morimichi | Iwate University |
Sato, Ryo | Japan Agency for Marine-Earth Science and Technology |
Yoshida, Hiroshi | Japan Agency for Marine-Earth Science and Technology |
Miyoshi, Tasuku | Iwate University |
Keywords: Hardware Platform, Autonomous Vehicle Navigation
Abstract: The decrease in Arctic ice due to global warming has a great impact on the environment, so investigating the current state of Arctic ice has become an important issue. The goal of this study is to investigate ice conditions using an AUV. To obtain the global position of an AUV under sea ice, communication must be established between the sea and land across the ice, which requires installing a radio tower on the ice within communication range of the AUV. It is practically difficult to install a large number of radio towers; however, if a radio tower could move autonomously, it would be able to communicate with the AUV. Hence, the purpose of this study was to construct an Arctic Explorer that serves as an autonomously movable radio tower. Here, we show the concept and operation of the developed Arctic Explorer.
|
|
19:40-19:50, Paper WeD_VD5.6 | Add to My Program |
Position Control of Remotely Operated Vehicle Using Template Matching |
|
Masuzaki, Chikara | Nagasaki University |
Yamamoto, Ikuo | Nagasaki University |
Morinaga, Akihiro | Nagasaki University |
Keywords: Automation Systems, Systems for Field Applications, Vision Systems
Abstract: In recent years, marine renewable energy sources, such as offshore wind and tidal power, have received much attention. One issue in promoting them is how to inspect and repair marine facilities. At present, this is commonly done by divers, but that approach has various problems, including high risk and high cost. Therefore, inspection and repair using a Remotely Operated Vehicle (ROV) is being studied. An ROV is an underwater vehicle connected to a base station by a communication cable and operated by a person on the surface, which makes it less expensive, safer, and more accurate than conventional underwater surveys. Normally, an ROV is equipped with an Inertial Measurement Unit (IMU) and controls its attitude and position based on the IMU's output; however, IMUs are expensive and large. Therefore, in this study, the vehicle's displacement is calculated from the camera that most ROVs already carry, and this estimate is used to control its position. The proposed method enables the development of small and inexpensive ROVs.
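The camera-based displacement estimation rests on template matching: a patch from one frame is searched for in the next frame, and the offset of the best match gives the vehicle's apparent motion. A minimal sum-of-squared-differences search (toy grayscale values, not the authors' implementation) looks like:

```python
def match_template(image, template):
    """Exhaustive SSD template matching: return the (row, col) offset
    where the template fits the image best."""
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    best, best_ssd = None, float("inf")
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            ssd = sum((image[r + i][c + j] - template[i][j]) ** 2
                      for i in range(th) for j in range(tw))
            if ssd < best_ssd:
                best, best_ssd = (r, c), ssd
    return best

frame = [[0, 0, 0, 0, 0],
         [0, 0, 9, 8, 0],
         [0, 0, 7, 9, 0],
         [0, 0, 0, 0, 0]]
patch = [[9, 8],
         [7, 9]]
print(match_template(frame, patch))  # (1, 2)
```

Tracking how this offset changes between consecutive frames yields the displacement signal that replaces the IMU in the position control loop.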
|
|
19:50-20:00, Paper WeD_VD5.7 | Add to My Program |
Design, Integration and Sea Trials of 3D Printed Unmanned Aerial Vehicle and Unmanned Surface Vehicle for Cooperative Missions |
|
Niu, Hanlin | The University of Manchester |
Ji, Ze | Cardiff University |
Liguori, Pietro | Cardiff University |
Yin, Hujun | The University of Manchester |
Carrasco, Joaquin | The University of Manchester |
Keywords: Autonomous Vehicle Navigation, Integration Platform, Systems for Field Applications
Abstract: In recent years, Unmanned Surface Vehicles (USVs) have been extensively deployed for maritime applications. However, a USV has a limited detection range because its sensors are installed at the same elevation as the targets. In this research, we propose a cooperative Unmanned Aerial Vehicle - Unmanned Surface Vehicle (UAV-USV) platform to improve the detection range of the USV. A floatable and waterproof UAV is designed and 3D printed, which allows it to land on the sea. A catamaran USV and a landing platform are also developed. To land the UAV on the USV precisely in various lighting conditions, an IR beacon detector and an IR beacon are mounted on the UAV and USV, respectively. Finally, a two-phase UAV precision landing method, a USV control algorithm, and a USV path-following algorithm are proposed and tested.
|
|
20:00-20:10, Paper WeD_VD5.8 | Add to My Program |
3D Vision-Guided Pick-And-Place Using Kuka LBR Iiwa Robot |
|
Niu, Hanlin | The University of Manchester |
Ji, Ze | Cardiff University |
Zhu, Zihang | Körber Digital GmbH/Körber AG |
Yin, Hujun | The University of Manchester |
Carrasco, Joaquin | The University of Manchester |
Keywords: Automation Systems, Vision Systems, Integration Platform
Abstract: This paper presents the development of a control system for vision-guided pick-and-place tasks using a robot arm equipped with a 3D camera. The main steps include camera intrinsic and extrinsic calibration, hand-eye calibration, initial object pose registration, an object pose alignment algorithm, and pick-and-place execution. The proposed system allows the robot to pick and place an object after only a limited number of registrations of a new object, and the developed software can be applied to new object scenarios quickly. The integrated system was tested using combinations of a KUKA LBR iiwa arm, Robotiq grippers (two-finger and three-finger), and 3D cameras (Intel RealSense D415, Intel RealSense D435, and Microsoft Kinect V2). The system can also be adapted to other combinations of robotic arm, gripper, and 3D camera.
|
| |