Last updated on January 7, 2024. This conference program is tentative and subject to change
Technical Program for Tuesday January 9, 2024
|
TueAK1 |
Event Hall 1 |
Medical Systems |
In-person Regular Session |
Chair: Do, Thanh Nho | University of New South Wales |
Co-Chair: Konno, Atsushi | Hokkaido University |
|
10:15-10:30, Paper TueAK1.1 | |
Development of Machine Learning-Based Assessment System for Laparoscopic Surgical Skills Using Motion-Capture |
|
Ebina, Koki | Hokkaido University |
Abe, Takashige | Department of Urology, Hokkaido University Graduate School of Me |
Yan, Lingbo | Hokkaido University |
Hotta, Kiyohiko | Hokkaido University Hospital |
Higuchi, Madoka | Department of Renal and Genitourinary Surgery, Graduate School O |
Iwahara, Naoya | Hokkaido University |
Furumido, Jun | Department of Renal and Genitourinary Surgery, Hokkaido Universi |
Kon, Masafumi | Hokkaido University |
Murai, Sachiyo | Department of Urology, Hokkaido University Graduate School of Me |
Kurashima, Yo | Hokkaido University |
Komizunai, Shunsuke | Hokkaido University |
Tsujita, Teppei | National Defense Academy of Japan |
Sase, Kazuya | Tohoku Gakuin University |
Chen, Xiaoshuai | Hirosaki University |
Senoo, Taku | Hokkaido University |
Shinohara, Nobuo | Hokkaido University Graduate School of Medicine, Department of R |
Konno, Atsushi | Hokkaido University |
Keywords: Medical Systems, Systems for Service/Assistive Applications, Machine Learning
Abstract: Laparoscopic surgery is a widely used surgical technique; however, its high degree of difficulty makes it hard for beginners to learn efficiently. In addition, recent working-hour restrictions and shortages of surgeons have resulted in insufficient training time, so establishing efficient training methods has become an urgent need. Therefore, to promote the skill proficiency of novice surgeons, a machine learning-based assessment system for laparoscopic surgical skills was developed. A measurement system with a simple configuration was introduced so that trainees can easily use it alone. However, the indices related to the opening ratio and the rotation angle of the surgical instruments, which were measured in the authors' previous study, were no longer available. Therefore, comparative experiments were conducted to verify the effect of the lack of these indices on the accuracy of skill evaluation. Based on the measurement data of 104 wet-lab trainings collected in the previous study, machine learning models were established that evaluate a surgeon's skill at three levels based on the number of surgical experiences and on the global operative assessment of laparoscopic skills (GOALS), a widely used surgical skill evaluation index. By using an explainable AI method, the system can present the skill evaluation result, including its basis, to the trainee. Since the developed system can be easily operated through a GUI-based program, trainees can confirm the quantitative evaluation result on-site immediately after training.
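The three-level evaluation step can be illustrated in miniature. The sketch below is not the authors' model: it uses a nearest-centroid rule over hypothetical motion features (mean instrument speed, path length, task time) and reports per-feature distance contributions as a crude stand-in for the explainable-AI "basis" mentioned in the abstract.

```python
import math

# Hypothetical per-level centroids of motion features
# (mean speed [mm/s], path length [mm], task time [s]);
# all values are illustrative, not from the paper.
CENTROIDS = {
    "novice":       (35.0, 4200.0, 300.0),
    "intermediate": (28.0, 3100.0, 210.0),
    "expert":       (22.0, 2300.0, 150.0),
}

def classify_skill(features):
    """Assign the nearest skill-level centroid and report the
    per-feature squared-distance contributions as a crude
    'basis' for the decision."""
    best, best_d, best_contrib = None, math.inf, None
    for level, centroid in CENTROIDS.items():
        contrib = [(f - c) ** 2 for f, c in zip(features, centroid)]
        d = sum(contrib)
        if d < best_d:
            best, best_d, best_contrib = level, d, contrib
    return best, best_contrib

level, basis = classify_skill((23.0, 2400.0, 160.0))
print(level, basis)  # nearest level plus per-feature contributions
```

A real system would replace the centroids with a trained model and the contribution list with a proper attribution method (e.g., SHAP-style values).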
|
|
10:30-10:45, Paper TueAK1.2 | |
Development of a Perioperative Medication Suspension Decision Algorithm Based on Bayesian Networks |
|
Kawaguchi, Shuhei | Saga University |
Fukuda, Osamu | Saga University |
Kimura, Sakiko | Saga University |
Yeoh, Wen Liang | Saga University |
Yamaguchi, Nobuhiko | Saga University |
Okumura, Hiroshi | Saga University |
Keywords: Medical Systems, Decision Making Systems, Software, Middleware and Programming Environments
Abstract: In this study, we developed a decision-support system that uses a Bayesian network to estimate the appropriate suspension period for antithrombotic drugs in the perioperative period. In the past, physicians relied on a vast amount of information in the guidelines to determine the drug suspension period. However, determining the appropriate period was sometimes difficult when competing thrombotic and bleeding risks were present at the time the guidelines were consulted. The proposed method accumulates expert judgments and builds a Bayesian network model from these data, successfully demonstrating estimation of the drug suspension period even in the presence of competing risks. Additionally, a web-application-based interface was created to visually present the causal relationships.
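The core inference, weighing competing risks to reach a suspension decision, can be sketched with a toy two-parent network and inference by enumeration. All variable names and CPT values below are illustrative placeholders, not the accumulated expert judgments used in the paper, and emphatically not clinical guidance.

```python
# Toy network: thrombotic risk (T) and bleeding risk (B) jointly
# determine the suspension decision (S). CPT values are made up.
P_T = {"high": 0.3, "low": 0.7}
P_B = {"high": 0.4, "low": 0.6}
# P(S = "suspend_long" | T, B)
P_S_LONG = {
    ("high", "high"): 0.5,
    ("high", "low"):  0.2,
    ("low",  "high"): 0.9,
    ("low",  "low"):  0.6,
}

def posterior_long(t=None, b=None):
    """P(S='suspend_long' | evidence), enumerating over any
    unobserved parent variables."""
    num = den = 0.0
    for ti, pt in P_T.items():
        if t is not None and ti != t:
            continue
        for bi, pb in P_B.items():
            if b is not None and bi != b:
                continue
            w = pt * pb
            num += w * P_S_LONG[(ti, bi)]
            den += w
    return num / den

print(round(posterior_long(t="high", b="low"), 3))
print(round(posterior_long(), 3))  # no evidence: prior prediction
```

A production system would learn the CPTs from the accumulated expert judgments rather than hard-coding them.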
|
|
10:45-11:00, Paper TueAK1.3 | |
Development of a Smart Textile-Driven Soft Spine Exosuit for Lifting Tasks in Industrial Applications |
|
Zhu, Kefan | UNSW Sydney |
Sharma, Bibhu | UNSW Sydney |
Phan, Phuoc Thien | University of New South Wales |
Davies, James J. | University of New South Wales |
Thai, Mai Thanh | University of New South Wales |
Hoang, Trung Thien | University of New South Wales |
Nguyen, Chi Cong | University of New South Wales |
Ji, Adrienne | University of New South Wales |
Nicotra, Emanuele | UNSW Sydney |
Lovell, Nigel Hamilton | University of New South Wales |
Do, Thanh Nho | University of New South Wales |
Keywords: Medical Systems, Soft Robotics, Human-Robot Cooperation/Collaboration
Abstract: Work-related musculoskeletal disorders (WMSDs) are often caused by repetitive lifting, making them a significant concern in occupational health. Although wearable assist devices have become the norm for mitigating the risk of back pain, most spinal assist devices still possess a partially rigid structure that impacts the user's comfort and flexibility. This paper addresses this issue by presenting a smart textile-actuated spine assistance robotic exosuit (SARE), which conforms to the back seamlessly without impeding the user's movement and is extremely lightweight. The SARE assists the human erector spinae in completing actions with virtually infinite degrees of freedom. To detect the strain on the spine and to control the smart textile automatically, a soft knitted sensor that utilizes fluid pressure as its sensing element is used. The new device is validated experimentally with human subjects, in whom it reduces the peak electromyography (EMG) signal of the lumbar erector spinae by 32%±15% in loaded and 22%±8.2% in unloaded conditions. Moreover, the integrated EMG decreased by 24.2%±13.6% under loaded and 23.6%±9% under unloaded conditions. In summary, the artificial-muscle wearable device represents an anatomical solution for reducing the risk of muscle strain, metabolic energy cost, and back pain associated with repetitive lifting tasks.
|
|
11:00-11:15, Paper TueAK1.4 | |
Detection of Exposed Nerves in Two Individuals in Vivo and Unexposed Nerves Ex Vivo with Near-Infrared Hyperspectral Laparoscope |
|
Fukushima, Ryodai | Tokyo University of Science |
Takemura, Hiroshi | Tokyo University of Science |
Takamatsu, Toshihiro | Tokyo University of Science |
Sato, Kounosuke | Tokyo University of Science |
Hernandez-Guedes, Abian | Universidad De Las Palmas De Gran Canaria |
Callico, Gustavo M. | Universidad De Las Palmas De Gran Canaria |
Okubo, Kyouhei | Tokyo University of Science |
Umezawa, Masakazu | Tokyo University of Science |
Yokota, Hideo | RIKEN |
Soga, Kouhei | Tokyo University of Science |
Keywords: Medical Systems, Hardware Platform
Abstract: In this study, a laparoscopic device capable of near-infrared hyperspectral imaging (NIR-HSI) was developed for the first time. The optical system was built around an acousto-optic tunable filter (AOTF), a spectroscopic element that enables non-invasive acquisition of specimen spectra in the wavelength range of 490 nm to 1600 nm. Using this system, images of exposed nerves in living pigs were acquired; a total of four nerves from two pigs were classified with a neural network using a leave-one-out cross-validation procedure. The system achieved an accuracy of 84.6 %, a recall of 65.7 %, and a specificity of 85.3 %. In addition, detection was performed on a specimen with a 2 mm thick mesentery membrane placed on top of nerves extracted from a pig, yielding an accuracy of 95.2 %, a recall of 95.2 %, and a specificity of 95.1 %. The developed laparoscopic device is therefore expected to be useful for visualizing tissues in living specimens and in deep tissues of the living body.
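The reported accuracy, recall (sensitivity), and specificity follow the standard binary confusion-matrix definitions; a minimal sketch, with made-up pixel counts rather than the paper's data:

```python
def metrics(tp, fn, fp, tn):
    """Accuracy, recall (sensitivity), and specificity from a
    binary confusion matrix: tp/fn/fp/tn are true-positive,
    false-negative, false-positive, and true-negative counts."""
    acc = (tp + tn) / (tp + fn + fp + tn)
    rec = tp / (tp + fn)
    spec = tn / (tn + fp)
    return acc, rec, spec

# Illustrative counts only (not the paper's data):
acc, rec, spec = metrics(tp=50, fn=10, fp=5, tn=35)
print(acc, round(rec, 3), spec)
```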
|
|
11:15-11:30, Paper TueAK1.5 | |
MRI Compatible Robotic Dosimeter System for Safety Assessment of Medical Implants |
|
Martinez, Daniel Enrique | Georgia Institute of Technology |
Nieves-Vazquez, Heriberto A | Georgia Institute of Technology |
Yaras, Yusuf S | Georgia Institute of Technology |
Khotimsky, Alexey | Georgia Institute of Technology |
Skowronski, Ben | Georgia Institute of Technology |
Bradley, Lee | Georgia Institute of Technology |
Oshinski, John | Emory University |
Degertekin, F Levent | Georgia Institute of Technology |
Ueda, Jun | Georgia Institute of Technology |
Keywords: Medical Systems, Integration Platform, Mechatronics Systems
Abstract: Magnetic Resonance Imaging (MRI) is considered a safe imaging modality since there is no use of ionizing radiation. However, safety concerns still arise due to Radiofrequency (RF)-induced heating of electrically conducting structures such as medical implants. While recent advancements in robotics and sensors have enabled the measurement of temperature and electric fields outside the MRI setting, the heat generated by electromagnetic components within an MRI scanner still poses a challenge. This paper proposes the use of an MRI-compatible robot to accurately move and position a novel MRI-compatible sensor at different points in a gel phantom to generate heat and electric field maps around implantable medical devices. The effectiveness of the system is demonstrated by measuring a heat map around an abandoned pacemaker lead. The system provides a novel method of medical device safety evaluation in a clinical MRI setting.
|
|
11:30-11:45, Paper TueAK1.6 | |
Discrimination of Lifted Object Weight Using a Triaxial Accelerometer When Lifting an Object of Unknown Weight by the Stoop Lifting |
|
Otsuka, Keisuke | Aoyama Gakuin University |
Itami, Taku | Aoyama Gakuin University |
Yoneyama, Jun | Aoyama Gakuin University |
Keywords: Medical Systems, Systems for Service/Assistive Applications, Machine Learning
Abstract: In modern-day Japan, a society facing an aging population and declining birth rates, there is a shortage of nurses and caregivers. One contributing factor to this shortage is low back pain (LBP). Given the current environment of hospitals and care facilities, a simpler approach should be adopted for the prevention and improvement of LBP. Therefore, this study aims to raise awareness of the risk of LBP through the determination of lifted object weight, encouraging individuals to actively engage in LBP prevention. A previous study demonstrated the feasibility of distinguishing between objects weighing 0 kg and 25 kg during lifting motions based on spikes in acceleration data. However, there is a need to differentiate between weights with smaller differences. Hence, this study focuses on lifting objects of unknown weight by stoop lifting, which is known to impose significant lumbar strain, and proposes a method for discriminating between objects weighing 10 kg and 20 kg using a single triaxial accelerometer. This discrimination is achieved by measuring changes in posture during lifting motions through the decomposition of gravitational acceleration. The effectiveness of the approach is verified through the discrimination accuracy obtained with a support vector machine (SVM).
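The gravitational-acceleration decomposition used to track posture can be illustrated as follows. The sketch assumes quasi-static motion, so the accelerometer reading is dominated by gravity; the axis convention and function name are hypothetical, not the paper's.

```python
import math

def trunk_inclination_deg(ax, ay, az):
    """Estimate trunk inclination from the gravity component of a
    triaxial accelerometer reading (quasi-static assumption):
    the angle between the sensor x-axis and the horizontal plane."""
    return math.degrees(math.atan2(ax, math.hypot(ay, az)))

# Sensor level, gravity entirely on the z-axis: no inclination.
print(round(trunk_inclination_deg(0.0, 0.0, 9.81), 1))   # 0.0
# Sensor pitched so gravity projects fully onto the x-axis:
print(round(trunk_inclination_deg(9.81, 0.0, 0.0), 1))   # 90.0
```

Features such as this angle over the lifting motion would then feed the SVM classifier.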
|
|
TueAK2 |
Event Hall 2 |
Soft Robotics for System Integration 1 |
In-person Special Session |
Chair: Nabae, Hiroyuki | Tokyo Institute of Technology |
Organizer: Maeda, Shingo | Tokyo Institute of Technology |
Organizer: Hirai, Shinichi | Ritsumeikan Univ |
Organizer: Suzumori, Koichi | Tokyo Institute of Technology |
Organizer: Nabae, Hiroyuki | Tokyo Institute of Technology |
Organizer: Wiranata, Ardi | University of Gadjah Mada |
|
10:15-10:30, Paper TueAK2.1 | |
Vibration Experiment for a Conductive Structure with Curved Surface Using a Dielectric Elastomer Actuator with Electro Adhesion Technique (I) |
|
Hiruta, Toshiki | Toyohashi University of Technology |
Ohno, Junya | Toyohashi University of Technologies |
Takagi, Kentaro | Toyohashi University of Technology |
Keywords: Soft Robotics
Abstract: This study proposes a novel vibration excitation method for conductive structures using a dielectric elastomer actuator (DEA) combined with an electro-adhesion technique. The characteristics of mechanical structures are evaluated through vibration experiments. For flexible structures with curved surfaces, DEA excitation should be applied instead of traditional excitation techniques (e.g., impulse hammers; heavy, rigid exciters; and lead zirconate titanate (PZT) actuators). The advantages of the DEA include its high flexibility, stretchability, and fast response. Ordinarily, adhesives are required to attach the DEA to the target structure for vibration excitation; when the DEA is removed from the target, the adhesive damages the DEA, reducing the reliability of its performance. The DEA proposed in this study can attach to a structure made of conductive material owing to the electro-adhesion technique. A vibration experiment on an aluminum cylindrical shell was conducted using DEA excitation, and the effectiveness of the proposed technique was evaluated based on the vibration responses of the target structure.
|
|
10:30-10:45, Paper TueAK2.2 | |
Multi-DOF Blower-Powered and Inner Tendon-Driven Soft Inflatable Robotic Arm (I) |
|
Uchiyama, Katsu | Meiji University |
Otsuka, Masayuki | Meiji University |
Niiyama, Ryuma | Meiji University |
Keywords: Soft Robotics
Abstract: We propose a soft inflatable robotic arm as a type of inflatable robot that is lightweight and can be scaled up. The arm is based on a soft inflatable joint, actuated by an internal tendon and constantly supplied with air by a blower. Because no tendon drive method had been established for this type of joint, it was previously difficult to make the arm articulated. To achieve multiple degrees of freedom, a guide was fabricated that prevents tendon interference and routes the tendons. The guide was made of a thin plate so that the flexibility and light weight of the inflatable robot are not compromised. Guides were provided at the joints to derive the relationship between the joint angle and the amount of wire pulled, and to maintain the relationship between the tendon wire intersections and the anchor points between the joints. The function of the guide was fully confirmed through motion experiments on a robot arm using these guides. Based on this, a robot arm over 1 meter long was created, and it was verified that various postures are possible. These results will contribute to expanding the design space for low-pressure, large inflatable robots.
|
|
10:45-11:00, Paper TueAK2.3 | |
Fiber Jamming Mechanism for Back-Stretchable McKibben Muscles (I) |
|
Tanaka, Shoma | Tokyo Institute of Technology |
Kobayashi, Ryota | Tokyo Institute of Technology |
Nabae, Hiroyuki | Tokyo Institute of Technology |
Suzumori, Koichi | Tokyo Institute of Technology |
Keywords: Soft Robotics, Mechanism Design
Abstract: McKibben muscles struggle to elongate passively beyond their natural length under an external force when air pressure is not applied. This problem occurs in systems with mutual interference between artificial muscles, such as antagonistic drives. To solve it, we developed a new type of McKibben muscle, the back-stretchable McKibben muscle (BSM), comprising a thin McKibben muscle and a pneumatically driven variable stiffness mechanism for the tensile direction. In principle, however, the previously developed variable stiffness mechanism requires parts with high bending stiffness. Consequently, unlike the thin McKibben muscle, the BSM cannot operate in a bent state or maintain high flexibility when air pressure is applied. This study proposes a new mechanism that keeps the bending stiffness low both when air pressure is applied and when it is not, improving the variable stiffness mechanism of the BSM. The proposed mechanism presents low tensile and bending stiffness when air pressure is not applied; when air pressure is applied, the bending stiffness remains unchanged and only the tensile stiffness increases. We devised a pneumatically driven fiber jamming mechanism (FJM) with these characteristics. A prototype was fabricated, and experiments confirmed that applying pressure increases the tensile stiffness by a factor of 4.3 while the bending stiffness remains almost unchanged.
|
|
11:00-11:15, Paper TueAK2.4 | |
Gluing-Free Soft Robotic Hands for Cake Topping (I) |
|
Matsuno, Takahiro | Kindai University |
Wang, Zhongkui | Ritsumeikan University |
Hirai, Shinichi | Ritsumeikan Univ |
Keywords: Soft Robotics, Factory Automation
Abstract: This paper focuses on cake topping performed by soft robotic hands. Recently, robots have been introduced into food factories to improve productivity and reduce labor costs, and they are actively used in various manufacturing processes. Unfortunately, it is currently difficult to introduce robots into topping, a delicate operation dealing with soft food materials. This research therefore tackles topping operations on cakes using soft robotic hands. We fabricated soft fingers without gluing: the fingers were cast in silicone rubber using gelatin as a sacrificial material, and a soft robotic hand was composed of a pair of such fingers. Topping experiments were conducted using the fabricated soft hands. First, we confirmed the usefulness of the soft hands in a topping experiment using strawberry samples. Next, topping experiments were conducted using real strawberries and cake bases. On the 'nappe' base with a flat surface, strawberries could be placed straight in a uniform posture. Although topping was also successful on the 'piping' base with an uneven surface, the posture of the strawberries varied.
|
|
11:15-11:30, Paper TueAK2.5 | |
Deformable Soft Tactile Sensing System Integrating Electropermanent Magnet and Magnetorheological Fluid (I) |
|
Nakayama, Yunosuke | Ryukoku University |
Ho, Van | Japan Advanced Institute of Science and Technology |
Shibuya, Koji | Ryukoku University |
Keywords: Soft Robotics, Haptics and tactile sensors
Abstract: In this study, we proposed a deformable soft tactile sensing system that integrates a tactile sensor made from silicone rubber with a strain gauge, electropermanent magnets (EPMs), and magnetorheological (MR) fluid. The EPMs change the magnetic field according to an electrical signal, causing a viscosity change in the MR fluid circulating in the system via a peristaltic pump. This viscosity change alters the flow rate and internal pressure of the fluid, resulting in deformation of the tactile sensor. This implies that the proposed tactile sensing system has multiple sensitivities, which can expand its field of application. Compared with the systems commonly used in soft actuators and sensors with pneumatic actuators, it is small and has a simple control system. We fabricated EPMs and conducted an experiment to investigate their characteristics. Moreover, we created a soft tactile sensor with a chamber filled with the MR fluid, in which a strain gauge was embedded. Finally, we integrated the EPMs and the MR fluid into the proposed soft tactile sensing system and conducted several experiments on the effect of the EPMs on tactile sensor deformation and on the time response of the sensing system.
|
|
TueAM1 |
Meeting room 1 |
Motion and Path Planning |
In-person Regular Session |
Chair: Ogata, Tetsuya | Waseda University |
|
10:15-10:30, Paper TueAM1.1 | |
Fast and Lightweight Scene Regressor for Camera Relocalization |
|
Bui, Bach-Thuan | Ritsumeikan University |
Tran, Dinh Tuan | College of Information Science and Engineering, Ritsumeikan Univ |
Lee, Joo-Ho | Ritsumeikan University |
Keywords: Motion and Path Planning, Virtual Reality and Interfaces, Vision Systems
Abstract: Camera relocalization involving a prior 3D reconstruction plays a crucial role in many mixed reality and robotics applications. Estimating the camera pose directly with respect to pre-built 3D models can be prohibitively expensive for applications with limited storage and/or communication bandwidth. Although recent scene and absolute pose regression methods have become popular for efficient camera localization, most are computation-intensive, and it is difficult to obtain real-time inference under high accuracy constraints. This study proposes a simple scene regression method that requires only a multi-layer perceptron network to map scene coordinates and achieve accurate camera pose estimates. The proposed approach regresses the scene coordinates from sparse descriptors instead of a dense RGB image. The use of sparse features provides several advantages. First, the proposed regressor network is substantially smaller than those reported in previous studies, making our system highly efficient and scalable. Second, the pre-built 3D models provide the most reliable and robust 2D-3D matches; learning from them therefore leads to an awareness of equivalent features and substantially improves generalization performance. A detailed analysis of our approach and extensive evaluations on existing datasets are provided to support the proposed method. The implementation is available at https://github.com/ais-lab/feat2map.
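The regressor's structure, a plain multi-layer perceptron mapping one sparse descriptor to a 3D scene coordinate, can be sketched in miniature. The layer sizes and random weights below are placeholders, not the trained network from the paper.

```python
import random

random.seed(0)

def mlp_forward(desc, weights):
    """One hidden layer with ReLU, then a linear head that outputs
    an (x, y, z) scene coordinate for a single sparse descriptor."""
    (W1, b1), (W2, b2) = weights
    hidden = [max(0.0, sum(w * d for w, d in zip(row, desc)) + b)
              for row, b in zip(W1, b1)]
    return [sum(w * h for w, h in zip(row, hidden)) + b
            for row, b in zip(W2, b2)]

D, H = 8, 16   # descriptor and hidden sizes (the real sizes differ)
W1 = [[random.uniform(-0.1, 0.1) for _ in range(D)] for _ in range(H)]
b1 = [0.0] * H
W2 = [[random.uniform(-0.1, 0.1) for _ in range(H)] for _ in range(3)]
b2 = [0.0] * 3

xyz = mlp_forward([1.0] * D, ((W1, b1), (W2, b2)))
print(xyz)  # one regressed 3D scene coordinate
```

Given many such 2D-3D correspondences, the camera pose would then be recovered with a standard PnP solver.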
|
|
10:30-10:45, Paper TueAM1.2 | |
LiDAR Scan Images for Mobile Robot Motion Planners |
|
Hoshino, Satoshi | Utsunomiya University |
Unuma, Kyohei | Utsunomiya University |
Keywords: Motion and Path Planning, Autonomous Vehicle Navigation, Machine Learning
Abstract: During autonomous navigation, mobile robots are often required to avoid obstacles ahead. We assume a mobile robot equipped with a 2D LiDAR. For this robot, we previously proposed motion planners based on deep neural networks, such as multilayer perceptrons (MLPs), trained through imitation learning. However, a robot using these motion planners with raw range-data inputs sometimes failed to avoid obstacles in unknown environments, because it planned avoidance motions without considering the spatial configuration. In this paper, we therefore focus on the obstacle avoidance capability of the robot and propose to use LiDAR scan images as inputs. The LiDAR scan images are generated from the range data and fed to four types of motion planners based on MLPs and a convolutional neural network (CNN). In navigation experiments, we discuss the effectiveness of these motion planners for improving obstacle avoidance capability. Moreover, we examine the applicability of the motion planners to an actual mobile robot for autonomous navigation in a real environment.
|
|
10:45-11:00, Paper TueAM1.3 | |
Path Planning Considering Energy Consumption of Crawler Robots in Mountain Environments |
|
Adachi, Namihei | University of Tsukuba |
Date, Hisashi | University of Tsukuba |
Keywords: Motion and Path Planning, Environment / Ecological Systems
Abstract: In the forestry industry, the demand for task automation is increasing to address labor shortages. We examine the path planning problem for tasks such as afforestation, harvesting, and surveying, in which a robot visits a predetermined set of locations. We specifically address path planning that quasi-minimizes the energy consumption of a crawler robot operating in an environment with inclines. This problem requires consideration of the energy lost in ascending and descending the mountainous terrain, as well as losses due to friction during the on-the-spot rotations specific to crawler robots, and is therefore difficult to treat as a standard Traveling Salesman Problem (TSP). The path planning method proposed in this paper casts the problem as a Generalized Traveling Salesman Problem (GTSP), a known extension of the TSP, which allows the method to account for energy consumption during movement. We modeled the energy consumption of the robot during straight movements and rotations through a preliminary experiment. A graph reflecting the energy consumption model is generated by setting the points to be traversed on a point cloud map obtained from a 3D scan of an outdoor unstructured terrain field with both flat and inclined ground. We confirmed that the proposed method can plan a path that reduces energy consumption by 22% compared with a standard TSP solution that quasi-minimizes Euclidean distance.
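The kind of edge cost such a GTSP graph needs can be sketched as follows. The constants and the linear friction/rotation model are illustrative assumptions; the paper identifies its actual energy model from preliminary experiments with the real robot.

```python
import math

# Illustrative constants (the paper fits these experimentally):
M, G  = 50.0, 9.81    # robot mass [kg], gravity [m/s^2]
MU    = 0.15          # effective friction coefficient for straight travel
C_ROT = 120.0         # energy per radian of on-the-spot rotation [J/rad]

def edge_energy(dist, dh, dtheta):
    """Energy [J] assigned to one graph edge: friction over the
    travelled distance, lifting work on ascents (descents treated
    as free in this toy model), and loss during on-the-spot rotation."""
    return MU * M * G * dist + M * G * max(0.0, dh) + C_ROT * abs(dtheta)

# A 10 m uphill edge (2 m rise) with a 90-degree turn at the start:
print(round(edge_energy(10.0, 2.0, math.pi / 2), 1))
```

These per-edge costs, rather than Euclidean distances, would populate the GTSP graph handed to the solver.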
|
|
11:00-11:15, Paper TueAM1.4 | |
Interactively Robot Action Planning with Uncertainty Analysis and Active Questioning by Large Language Model |
|
Hori, Kazuki | Waseda University |
Suzuki, Kanata | Fujitsu Limited |
Ogata, Tetsuya | Waseda University |
Keywords: Motion and Path Planning, Human-Robot/System Interaction, Machine Learning
Abstract: The application of large language models (LLMs) to robot action planning has been actively studied. Natural-language instructions given to an LLM may be ambiguous or lack information, depending on the task context. The output of the LLM can be adjusted by making the instruction more detailed, but the design cost is high. In this paper, we propose an interactive robot action planning method that allows the LLM to analyze what information is missing and gather it by asking questions to humans. The method can minimize the design cost of generating precise robot instructions. We demonstrate its effectiveness through concrete examples in cooking tasks. However, our experiments also revealed challenges in robot action planning with LLMs, such as asking unimportant questions and assuming crucial information without asking. Shedding light on these issues provides valuable insights for future research on utilizing LLMs in robotics.
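The ask-when-uncertain loop can be sketched without any real LLM call. The slot names and the rule for deciding what is missing below are stand-ins for the uncertainty analysis the LLM performs in the paper.

```python
# Required slots for a cooking instruction; in the real method the
# LLM itself judges what is missing, whereas this stub hard-codes it.
REQUIRED = ("ingredient", "tool", "target_location")

def plan_or_ask(instruction):
    """Return a question for the first missing slot, or an action
    plan once everything needed is known."""
    missing = [key for key in REQUIRED if key not in instruction]
    if missing:
        return {"ask": f"Which {missing[0]} should I use?"}
    return {"plan": [f"pick up {instruction['tool']}",
                     f"apply to {instruction['ingredient']}",
                     f"place at {instruction['target_location']}"]}

state = {"ingredient": "carrot"}
print(plan_or_ask(state))            # asks about the missing tool
state["tool"] = "knife"
state["target_location"] = "cutting board"
print(plan_or_ask(state))            # returns a three-step plan
```

The interesting failure modes in the abstract (unimportant questions, unasked assumptions) arise precisely because a real LLM replaces the hard-coded `REQUIRED` check.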
|
|
11:15-11:30, Paper TueAM1.5 | |
Spatiotemporal Motion Profiles for Cost-Based Optimal Approaching Pose Estimation |
|
Nguyen, Trung-Tin | University of Prince Edward Island |
Ngo, Trung Dung | University of Prince Edward Island |
Keywords: Motion and Path Planning, Human-Robot/System Interaction, Decision Making Systems
Abstract: Recent public perceptions indicate a positive shift towards a society in which humans and robots co-exist, especially among aged populations. The ability to navigate socially is becoming crucial for mobile robots: it enables them to guarantee not only human physical safety but also psychological comfort, and it enhances robots' contextual awareness in human-robot interaction (HRI). In this study, we introduce an extended navigation scheme for approaching a moving target based on tracking of human spatiotemporal motion, social studies on proxemics, and the kinodynamics of the mobile robot. The strategy utilizes existing multi-layer cost-based navigation mapping for complete integration with planners and introduces soft social constraints by extending the costmap value range. The primary contributions are (i) spatiotemporal motion profiles (SMPs) of all humans under tracking and (ii) a social navigation cost function (SNCF) for filtering socially optimal goal poses. Results from simulated tests across three normative social situations, together with statistical analysis, demonstrate the effectiveness of the SMPs through the measured spatial and temporal coefficients. The driving factors, safety and appropriate social construct, are found to be statistically or practically significant, and the scheme forms a complete navigation pipeline that takes socially acceptable robot behaviours into account.
|
|
11:30-11:45, Paper TueAM1.6 | |
UAV Speed Control for Mobile Relay System to Ensure Transmission Quality and Throughput Stability |
|
Pham, Thi Quynh Trang | University of Industry |
Ha, Xuan Son | VNU |
Dinh, Trieu Duong | VNU |
Trinh, Anh Vu | VNU |
Keywords: Path Planning for Multiple Mobile Robots or Agents, System Simulation
Abstract: In this paper, we propose a novel mobile relaying method in which the relay nodes are mounted on unmanned aerial vehicles (UAVs). In the proposed method, each UAV is utilized as a moving relay to offer high performance for relay networks, especially large and highly stable throughputs. Compared with conventional static relaying, our method can not only exploit the advantages of a dynamic relay system but also optimize the speed of each UAV to improve network throughput and quality of service (QoS). In addition, given a predetermined trajectory for each UAV relay, we define a novel solution to the optimization problem that both minimizes throughput variation and optimizes the speed of each UAV. Numerical results show that the proposed method achieves significant throughput gains over conventional relaying techniques.
|
|
11:45-12:00, Paper TueAM1.7 | |
Active Search and Re-Localization Framework for Position-Lost Recovery under Limited Number of AMRs |
|
Hisatsugu, Hiroki | Hitachi, Ltd |
Hashizume, Jiro | Hitachi, Ltd |
Keywords: Path Planning for Multiple Mobile Robots or Agents, Sensor Fusion, Multi-Robot Systems
Abstract: Autonomous mobile robots are required to operate continuously in uncertain environments without human intervention. Addressing the self-localization failure problem, this paper proposes an efficient re-localization method for multi-robot systems in which surrounding robots perform searching actions for a lost robot, based on a rough estimate of the lost robot's positional area. This approach ensures swift re-localization through active movements, effectively countering the occlusion challenges inherent to cooperative localization. The implemented system provides functions for lost detection, searching, relative position recognition, and information sharing. Real-world experiments in a 5 × 6 m indoor environment demonstrate the effectiveness of our approach: re-localization was achieved within 0.5 m of localization error in all 20 tests in an environment with occlusion between the two robots.
|
|
TueAM2 |
Meeting room 2 |
Robotic Teleoperation and Environmental Sensing |
In-person Special Session |
Co-Chair: Tamura, Yusuke | Tohoku University |
Organizer: Ji, Yonghoon | JAIST |
Organizer: Tamura, Yusuke | Tohoku University |
Organizer: Kono, Hitoshi | Tokyo Denki University |
Organizer: Woo, Hanwool | Kogakuin University |
Organizer: Fujii, Hiromitsu | Chiba Institute of Technology |
|
10:15-10:30, Paper TueAM2.1 | |
Path Planning for Identification of Radiation Source Using Mobile Robot with Directional Gamma-Ray Detector (I) |
|
Takahashi, Yurika | Kogakuin University |
Woo, Hanwool | Kogakuin University |
Keywords: Systems for Search and Rescue Applications, Motion and Path Planning, Systems for Field Applications
Abstract: We propose a path planning method for identifying an unknown radiation source using a mobile robot with a directional gamma-ray detector. The proposed method autonomously determines the next measurement point based on the direction and number of incident gamma rays obtained from the detector. For efficient exploration, the robot needs to approach the source to a distance where sufficient incident events can be acquired; however, if the robot gets too close to the source, the radiation may damage its equipment. It is therefore necessary to explore while maintaining a certain distance from the source. This paper proposes a novel index for estimating the distance from the source even when the radiation intensity is unknown. Using this index, the proposed system automatically plans the exploration path and successfully identifies the radiation source.
|
|
10:30-10:45, Paper TueAM2.2 | |
3D Voxel Pattern Modeling for Monte Carlo Simulation of Radiation Transport and Its Application to Radiation Source Finding (I) |
|
Tomita, Hideki | Nagoya University |
Kanda, Minato | Nagoya University |
Mukai, Atsushi | Nagoya University |
Kase, Hiroki | Shizuoka University |
Aoki, Toru | Shizuoka University |
Keywords: Sensor Fusion, Systems for Search and Rescue Applications, Environment Monitoring and Management
Abstract: In radiation measurements, in addition to direct radiation from a radiation source, radiation produced by scattering and shielding of the primary radiation by objects around the source also reaches a radiation detector/camera. By taking the effects of scattering and shielding into account, more accurate measurements or imaging can be achieved. For Monte Carlo simulation of radiation transport to evaluate scattered and attenuated radiation, a 3D voxel pattern modeling approach based on 3D point cloud data was adopted. The geometry built by this modeling method was used to evaluate 662 keV gamma-ray attenuation by surrounding objects when locating a 137Cs point source, and the results showed that the estimated source location and activity agreed well, within the uncertainties, once the attenuation was considered.
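The attenuation evaluated in this work follows the Beer-Lambert law accumulated along a ray through the voxel model. As a hedged illustration (not the authors' code; the attenuation coefficient, geometry, and function names below are assumed for the example), a voxel-wise sketch in Python:

```python
import math

def attenuated_intensity(i0, voxels_on_ray, voxel_size_cm):
    """Beer-Lambert attenuation of a gamma ray crossing a list of voxels.

    i0            -- unattenuated intensity at the detector
    voxels_on_ray -- linear attenuation coefficients (1/cm) of the voxels
                     the ray traverses (e.g. taken from a 3D voxel model)
    voxel_size_cm -- path length through each voxel (here, the voxel edge)
    """
    optical_depth = sum(mu * voxel_size_cm for mu in voxels_on_ray)
    return i0 * math.exp(-optical_depth)

# Illustrative only: a 662 keV beam crossing 10 cm of concrete,
# taking mu ~ 0.15 /cm as a ballpark attenuation coefficient.
i = attenuated_intensity(1000.0, [0.15] * 10, 1.0)
```

With these illustrative numbers, the wall attenuates the beam to exp(-1.5), roughly 22% of its unattenuated intensity, which is the kind of correction the voxel model supplies to the source-finding step.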
|
|
10:45-11:00, Paper TueAM2.3 | |
Radiation Source Localization Considering Shielding Effect of Structures Using 3D Object Recognition (I) |
|
Nguyen, Baduy | Tohoku University |
Tamura, Yusuke | Tohoku University |
Hirata, Yasuhisa | Tohoku University |
Keywords: System Simulation, Sensor Fusion, Decision Making Systems
Abstract: In Fukushima Daiichi, post-accident conditions have led to the accumulation of radioactive fuel debris. Cleaning up this debris necessitates human intervention, but workers face a severe risk of radiation exposure due to the work's location within the reactor building and potential contaminant exposure. To establish a safe working environment, efforts focus on identifying and removing these radiation sources. However, challenges arise from shielding effects caused by structures such as concrete walls. This study proposes a novel approach to identify the positions of these radiation sources. Specifically, it employs 3D object recognition to identify objects that could potentially shield radiation. By correcting the measured radiation dose that has been reduced by the shielding effect of the identified structures, the study utilizes a sequential image reconstruction method known as the Maximum Likelihood Expectation Maximization (ML-EM) technique to estimate the positions of radiation sources. It is worth noting that the ML-EM technique focuses on detailed estimation within localized areas. To estimate these localized areas with a high probability of radiation source presence, an Extended Kalman Filter is utilized.
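The ML-EM iteration referred to above is standard in emission reconstruction: each voxel's activity estimate is scaled by the ratio of measured to predicted counts, back-projected through the system matrix. A minimal pure-Python sketch (not the authors' implementation; the toy 2-detector/2-voxel system matrix is hypothetical):

```python
def mlem_update(lam, A, y):
    """One ML-EM iteration:
    lam[j] <- lam[j] / sens[j] * sum_i A[i][j] * y[i] / ybar[i]."""
    n_det, n_vox = len(A), len(lam)
    # forward projection: expected counts in each detector bin
    ybar = [sum(A[i][j] * lam[j] for j in range(n_vox)) for i in range(n_det)]
    new = []
    for j in range(n_vox):
        sens = sum(A[i][j] for i in range(n_det))  # sensitivity of voxel j
        corr = sum(A[i][j] * y[i] / ybar[i] for i in range(n_det) if ybar[i] > 0)
        new.append(lam[j] * corr / sens if sens > 0 else 0.0)
    return new

# Toy system (hypothetical geometry): A[i][j] is the detection
# probability of activity in voxel j landing in detector bin i.
A = [[1.0, 0.5], [0.5, 1.0]]
y = [30.0, 60.0]          # measured counts
lam = [1.0, 1.0]          # uniform initial estimate
for _ in range(200):
    lam = mlem_update(lam, A, y)
```

A useful property visible in the sketch is that each iteration conserves the total expected count, while activity migrates toward voxels that best explain the measurement.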
|
|
11:00-11:15, Paper TueAM2.4 | |
Development of a Seamless Telerobotic System Combining Teleoperation and Autonomous Navigation Using the Robot Service Network Protocol (I) |
|
Matsuhira, Nobuto | Shibaura Institute of Technology, the University of Tokyo |
Sasaki, Takeshi | Shibaura Institute of Technology |
Asama, Hajime | The University of Tokyo |
Keywords: Human-Robot Cooperation/Collaboration, Systems for Service/Assistive Applications, Systems for Field Applications
Abstract: With the increase in the elderly population, the decline in the birth rate, the occurrence of natural disasters, and the recent coronavirus outbreak, drastic changes have been observed in the overall public lifestyle, and the role of robots has been reconsidered. To cope with complicated social problems, several types of robots, including sensors, should be networked and managed through collaborative control systems. In this study, an interface and a graphical user interface (GUI) are developed for ease of system building as well as user operation. The prototype system, namely the telerobotic system, is developed to network different types of robots and combine different control modes, i.e., autonomous control and teleoperated control, into seamless control. The system uses the robot operating system (ROS), the robot service network protocol (RSNP), and a common operation GUI. The system is verified through experiments combining autonomous navigation and teleoperation.
|
|
11:15-11:30, Paper TueAM2.5 | |
Three-Dimensional Environmental Measurement of Surroundings Using Camera Pose Estimation Based on Line Features (I) |
|
Iriyama, Shingo | Chuo University |
Pathak, Sarthak | Chuo University |
Umeda, Kazunori | Chuo University |
Keywords: Sensor Fusion, Vision Systems
Abstract: This study introduces a cost-effective measurement approach for indoor environments, intended for inspection purposes in autonomous mobile robotics and infrastructure maintenance. The method utilizes a spherical camera, capable of capturing a 360-degree view of the surroundings, along with a ring laser, to obtain a 3D point cloud representing a cross-section of a room through the structured-light method. The camera and laser are rotated 360 degrees in the target environment. The camera's orientation is established by comparing the distribution of line features within the room to the three directions in real space. By considering the extent of change in the camera's posture, the method integrates the multiple point clouds generated by the structured-light method. This results in a comprehensive 3D point cloud representing the entire indoor environment.
|
|
11:30-11:45, Paper TueAM2.6 | |
Semantic and Volumetric 3D Plant Structures Modeling Using Projected Image of 3D Point Cloud (I) |
|
Imabuchi, Takashi | Japan Atomic Energy Agency |
Kawabata, Kuniaki | Japan Atomic Energy Agency |
Keywords: Vision Systems, Environment Monitoring and Management, Machine Learning
Abstract: This paper describes a method for volumetric-based semantic 3D modeling from a 3D point cloud obtained in a plant environment. To calculate the radiation dose distribution of a workspace during decommissioning, the shape, arrangement, materials, and thicknesses of structures are required in addition to dose values. However, it is costly to create such enriched 3D models from a 3D point cloud. In this study, we propose a method to create 3D models with structural category and material thickness by combining 2D image-based deep learning and a volumetric reconstruction method. To discriminate structures, structural category labels are predicted by a pre-trained 2D semantic segmentation network on a projected image created from the 3D point cloud. Then, a triangular mesh is generated from the integrated Truncated Signed Distance Function (TSDF) according to the predicted labels. In addition, we optimize the TSDF thickness assignment function to reduce the surface distance error. Our evaluation reports thickness and surface distance errors when generating meshes with three different structural categories in a mock-up plant environment.
|
|
11:45-12:00, Paper TueAM2.7 | |
Obstacle Detection and Height Estimation Using Fisheye Stereo Camera Considering Intensity Information (I) |
|
Chikugo, Hikaru | Chuo University |
Sakuda, Tomoyu | Chuo University |
Pathak, Sarthak | Chuo University |
Umeda, Kazunori | Chuo University |
Keywords: Autonomous Vehicle Navigation, Vision Systems, Factory Automation
Abstract: In this paper, we propose a novel method for obstacle detection and height estimation based on disparity and intensity information using a fisheye stereo camera. A method using only disparity information may incorrectly detect road surfaces as obstacles. Therefore, the proposed method detects obstacles by comparing the intensity of the obstacle edges with that derived from the disparity information. Experimental results show that the proposed method, using both disparity and intensity information, detects only obstacles, without incorrectly detecting road surfaces. It is also shown that the accuracy of the height estimation is unchanged even when road surfaces are no longer detected incorrectly.
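For context, in an ordinary pinhole stereo model (a simplification; the paper itself uses fisheye optics), depth and a rough obstacle height follow from disparity as below. The focal length, baseline, and pixel coordinates here are purely illustrative assumptions:

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Pinhole stereo relation: depth Z = f * B / d
    (focal length f and disparity d both in pixels, baseline B in meters)."""
    return f_px * baseline_m / disparity_px

def obstacle_height(f_px, baseline_m, d_px, v_top_px, v_bottom_px):
    """Rough metric height from the vertical pixel extent of an obstacle
    edge, assuming the whole edge lies at roughly the same depth."""
    z = depth_from_disparity(f_px, baseline_m, d_px)
    return z * (v_bottom_px - v_top_px) / f_px

z = depth_from_disparity(700.0, 0.12, 28.0)            # -> 3.0 m
h = obstacle_height(700.0, 0.12, 28.0, 200.0, 340.0)   # -> 0.6 m
```

The sketch also shows why false road-surface detections matter: a spurious "obstacle" at large disparity would be assigned a small depth and a misleading height.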
|
|
TueAM3 |
Meeting room 3 |
Human Interfaces |
In-person Regular Session |
Co-Chair: Klein, Jordan | USACE ERDC |
|
10:15-10:30, Paper TueAM3.1 | |
Synthesis of Speech Reflecting Features from Lip Images |
|
Matsuura, Atsushi | Shibaura Institute of Technology |
Shimizu, Sota | Shibaura Institute of Technology |
Keywords: Human Interface, Multi-Modal Perception, Human-Robot/System Interaction
Abstract: Models that synthesize speech from multimodal information other than text have been devised and have attracted much attention. We focus on lip image information and propose a model for speech synthesis that reflects lip movements. The architecture consists of an image feature extractor using an autoencoder and an encoder-decoder model, similar to Tacotron2, that outputs a mel spectrogram. We have succeeded in synthesizing speech that reflects lip movements under limited conditions.
|
|
10:30-10:45, Paper TueAM3.2 | |
Registration of 3D Point Clouds with Credibility for Changing Environments |
|
Noguchi, Yosuke | Shibaura Institute of Technology |
Shimizu, Sota | Shibaura Institute of Technology |
Takamura, Tomoki | Shibaura Institute of Technology |
Carfì, Alessandro | University of Genoa |
Mastrogiovanni, Fulvio | University of Genoa |
Keywords: Human Interface, Vision Systems, Systems for Service/Assistive Applications
Abstract: With progress in technologies related to augmented reality and autonomous robot mobility, the need for registration techniques that use a pre-made global 3D point cloud as a map and a presently acquired local one for localization is increasing rapidly. However, in the warehouses of shops and restaurants, the locations of objects such as products and food materials change very frequently, resulting in larger differences between the pre-made environmental map and the local 3D point cloud data. This difference reduces localization accuracy. One problem in localization is that when a moved object is captured in both point clouds, it is used as a reference, introducing an error into the overall registration accuracy. Therefore, in this paper, the environmental point cloud is updated at each registration to reduce the difference from the local point cloud. The updated information is also used to find reference points during registration. Specifically, point cloud data with ordinary coordinate values is converted into point cloud data with attributes by recording the past acquisition history. From that acquisition history, a value called Credibility is calculated, which evaluates the likelihood that a point can be used as a reference point during estimation. Then, based on Credibility, we propose a positioning method that updates the environmental point cloud and reduces the impact of moving objects. In the experiment, after updating the environmental point cloud using point clouds acquired at different times while changing the placement of objects on a desk, and adding the acquisition history, the positioning accuracy against a point cloud acquired after moving the objects again was evaluated using the squared error. The evaluation results showed an approximately 30 percent increase in positioning accuracy, relative to the proper location based on a non-moving object, compared to the existing ICP algorithm.
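The abstract does not give the formula for Credibility. One plausible reading, sketched here with hypothetical names and a hypothetical threshold, is an observation frequency over each point's recent acquisition history, used to filter out unstable points before they can serve as registration references:

```python
class MapPoint:
    """A map point carrying an acquisition history used for Credibility."""

    def __init__(self):
        self.history = []  # True if observed in a given scan, else False

    def record(self, observed):
        self.history.append(observed)

    def credibility(self, window=5):
        """Fraction of the last `window` scans in which the point was seen."""
        recent = self.history[-window:]
        return sum(recent) / len(recent) if recent else 0.0

def reference_points(points, threshold=0.8):
    """Keep only stable points as references for the next registration."""
    return [p for p in points if p.credibility() >= threshold]

static_pt, moved_pt = MapPoint(), MapPoint()
for seen in (True, True, True, True, True):       # always observed
    static_pt.record(seen)
for seen in (True, True, False, False, False):    # object was relocated
    moved_pt.record(seen)
refs = reference_points([static_pt, moved_pt])    # only static_pt survives
```

Under this interpretation, relocated objects decay out of the reference set over a few scans, which is consistent with the reported robustness gain over plain ICP.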
|
|
10:45-11:00, Paper TueAM3.3 | |
Study on Wearable Gaze-Based Communication System for Patients with Amyotrophic Lateral Sclerosis (ALS) |
|
Sawada, Nonoka | University of Tsukuba |
Uehara, Akira | University of Tsukuba |
Kawamoto, Hiroaki | University of Tsukuba |
Sankai, Yoshiyuki | University of Tsukuba |
Keywords: Human Interface, Virtual Reality and Interfaces, Welfare systems
Abstract: This study focuses on communication among ALS patients with severe symptoms, which is essential to daily life yet remains difficult. Communication is also important for promoting social participation and independence. To promote social participation and improve independence without restrictions on patients, the purpose of this study is to develop a wearable interface that enables communication based on the patient's voluntary intent, without temporal, spatial, or physical restrictions, and to confirm the feasibility of intention transmission using a new text entry algorithm. The developed interface utilizes a see-through head-mounted display (HMD) with a gaze estimation sensor, enabling patients to transition between cyberspace and physical space and to communicate through the system without restrictions on posture or usage environment during daily life activities. Letter selection is triggered by a change in the direction of the gaze trajectory. This method of letter selection reduces the burden on the user by avoiding the effects of dwell time and other factors associated with gaze input. The key layout of the text input UI considers the characteristics of human eye movements, to minimize key selection errors and allow quick viewpoint movement. To confirm the basic performance, we conducted an experiment with three able-bodied participants using the developed system and a conventional system as a comparison. As a result, the developed system was able to input sentences in the same amount of time as the conventional system. The results show that the developed system has the basic performance of a communication interface with a new character input method based on eye movement. We therefore confirmed the feasibility of transmitting intention using a new text entry algorithm that enables communication without such restrictions.
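The trigger described, letter selection by a change in the direction of the gaze trajectory rather than by dwell time, can be sketched as a turn-angle test over consecutive gaze samples. This is an illustrative interpretation, not the authors' algorithm; the threshold value is assumed:

```python
import math

def direction_changes(gaze_xy, angle_threshold_deg=60.0):
    """Indices where the gaze trajectory turns sharply (selection trigger).

    gaze_xy -- list of (x, y) gaze points; a sharp change in travel
    direction, rather than dwell time, marks a letter selection.
    """
    triggers = []
    for k in range(1, len(gaze_xy) - 1):
        v1 = (gaze_xy[k][0] - gaze_xy[k-1][0], gaze_xy[k][1] - gaze_xy[k-1][1])
        v2 = (gaze_xy[k+1][0] - gaze_xy[k][0], gaze_xy[k+1][1] - gaze_xy[k][1])
        n1, n2 = math.hypot(*v1), math.hypot(*v2)
        if n1 == 0 or n2 == 0:
            continue  # stationary gaze: no direction defined
        cosang = max(-1.0, min(1.0, (v1[0]*v2[0] + v1[1]*v2[1]) / (n1 * n2)))
        if math.degrees(math.acos(cosang)) > angle_threshold_deg:
            triggers.append(k)
    return triggers

# Straight sweep to the right, then a sharp turn upward at sample 3
path = [(0, 0), (1, 0), (2, 0), (3, 0), (3, 1), (3, 2)]
hits = direction_changes(path)  # -> [3]
```

A direction-change test of this kind fires on an intentional turn regardless of how long the user lingers, which is the stated advantage over dwell-based selection.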
|
|
11:00-11:15, Paper TueAM3.4 | |
Immersive VR System for Evaluating the Severity of Object-Centered Neglect Based on Environmental Complexity |
|
Koshino, Akira | Waseda University |
Yasuda, Kazuhiro | Waseda University |
Takazawa, Saki | Waseda University |
Kawaguchi, Shuntaro | Sonoda Rehabilitation Hospital |
Iwata, Hiroyasu | Waseda University |
Keywords: Human Interface, Rehabilitation Systems, Virtual Reality and Interfaces
Abstract: Unilateral spatial neglect occurs as a sequela of stroke. In particular, object-centered neglect (OCN) is the neglect of one half of an object, bounded by the object's midline. Although conventional methods of assessing OCN have been introduced, there is no technology for assessing OCN in three dimensions. Furthermore, it is not clear whether neglect is affected by the complexity of the surrounding environment. Therefore, this study aims to assess the severity of OCN spatially, based on the complexity of the environment. The use of an immersive virtual reality space made it possible to adjust the three-dimensional assessment and the complexity of the environment, and to assess the severity of symptoms at different locations.
|
|
11:15-11:30, Paper TueAM3.5 | |
Smartphone-Based Teaching System for Neonate Soothing Motions |
|
Tsuji, Hiyori | Keio University |
Yamamoto, Takumi | Keio University |
Yamaji, Sora | Kanagawa University |
Kobayashi, Maiko | Waseda University |
Sasaki, Kyoshiro | Kansai University |
Aso, Noriko | Kanagawa University |
Sugiura, Yuta | Keio University |
Keywords: Human Interface, Systems for Service/Assistive Applications
Abstract: Inappropriate soothing motions can negatively impact the physical and socio-emotional health of infants and the formation of attachment, that is, bonding, between caregivers and children. Thus, it is important for caregivers to learn proper soothing motions. In this study, we propose a system that supports learning neonatal soothing motions at home using a smartphone. A smartphone is attached to a stuffed animal to evaluate its movement and provide feedback accordingly. To evaluate the proposed system, we assessed its effectiveness using two groups with different learning methods: Group Video (V), which practiced soothing motions with a stuffed animal while watching videos of a midwife's soothing motions, and Group Video + System (V+S), which practiced with a stuffed animal while watching the videos and using the system for additional guidance. The results show that this system is effective in teaching the correct inclination angle, that is, the angle used to rotate the head up around the chest of the neonate from a ground-parallel position.
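The inclination angle the system evaluates can, in principle, be read from the smartphone's accelerometer while the motion is slow. A minimal sketch under that quasi-static assumption (not the authors' implementation; the axis convention is assumed):

```python
import math

def inclination_deg(ax, ay, az):
    """Tilt of the device from the ground-parallel position, from the
    accelerometer's gravity components (m/s^2). Valid only when the
    device is quasi-static, as when slowly rotating the head upward."""
    return math.degrees(math.atan2(ax, math.hypot(ay, az)))

# Device lying flat: gravity entirely on the z axis -> 0 degrees
flat = inclination_deg(0.0, 0.0, 9.81)
# Device rotated 30 degrees about the y axis
tilted = inclination_deg(9.81 * math.sin(math.radians(30)), 0.0,
                         9.81 * math.cos(math.radians(30)))
```

Comparing such an angle trace against a reference range recorded from the midwife's demonstration is one way the system's feedback could be realized.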
|
|
11:30-11:45, Paper TueAM3.6 | |
Human Metrics Explorer System for Multi-Device Physiological Measurements in Emotion Estimation |
|
Kitagawa, Akane | Shimadzu Corporation |
Murata, Koichi | Shimadzu Corporation |
Uraoka, Yasuyuki | Shimadzu Corporation |
Furuta, Masafumi | Shimadzu Corporation |
Munaka, Tatsuya | Shimadzu Corporation |
Keywords: Human Interface, Systems for Service/Assistive Applications, Integration Platform
Abstract: Synchronously measuring and analyzing multiple physiological signals is useful for estimating human emotion in more detail; however, it is difficult in practical environments because few systems can integrate various types of devices and be used easily. We developed a human metrics explorer (HuME)TM system for realizing easy synchronous measurement and analysis of data from several devices. The purposes of this study are to confirm the accuracy of the trigger signal used for synchronization of the HuME system in a basic experiment and to demonstrate the feasibility of the use of the system through synchronous measurement and analysis of signals from multiple device types through a practical experiment of emotion estimation. In a basic experiment, we examined the time lag between the transmission of trigger signals from the transmitter and the recording of these signals on sensing devices placed at various positions. In a practical experiment concerning emotion estimation applications, we measured the physiological data of two participants during a board game using the developed wearable facial electromyogram and electrocardiogram devices, commercial electrodermal activity devices, and cameras. Consequently, the absolute maximum variation in the time lag was 2 ms. Additionally, we could measure and analyze data synchronously from the multiple device types using the HuME system and capture the physiological responses of the participants during the game. In conclusion, we confirmed that the trigger signal was accurate for the HuME system synchronization and demonstrated the feasibility of this system in the synchronization of multiple device types.
|
|
11:45-12:00, Paper TueAM3.7 | |
Electromyography Acquisition System Via Conductive Fabric for Wearable Skill Transfer Device Focused on Drum-Oriented Activities |
|
Molina Padilla, Ximena | Autonomous University of San Luis Potosi |
Martínez Fuerte, Sadya Mariana | Universidad Autónoma De Nuevo León |
Hernández-Ríos, E. Rafael | Mirai Innovation Research Institute |
Penaloza, Christian Isaac | Mirai Innovation Research Institute |
Keywords: Human Interface, Integration Platform, Machine Learning
Abstract: This research seeks to establish a precedent for future work in motor skill transfer using a human-to-human interface system. We propose an integrated wearable device for electromyography signal acquisition, an AI-based data classification algorithm, and a graphic interface for signal visualization. Our proposed design is based on an intelligent wearable system composed of a shirt with Nylon-silver fabric electrodes that emulates a conventional textile. The textile electrodes are embedded at body locations optimized to detect the muscle signals corresponding to drum-playing activity. We demonstrate that the proposed textile electrodes are capable of monitoring EMG signals of sufficient quality to train an AI algorithm to classify diverse types of drum strokes, positions, and strength levels.
|
|
TueBK1 |
Event Hall 1 |
Robotic Hands, Grasping |
In-person Regular Session |
Chair: Tahara, Kenji | Kyushu University |
Co-Chair: Wang, Zhongkui | Ritsumeikan University |
|
13:30-13:45, Paper TueBK1.1 | |
Increasing the Graspable Objects by Controlling the Errors in the Grasping Points of a Suction Pad Unit and Selecting an Optimal Hand |
|
Miura, Ryuichi | Meijo University |
Fujita, Kohei | Meijo University |
Tasaki, Tsuyoshi | Meijo University |
Keywords: Robotic hands and grasping, Machine Learning, Vision Systems
Abstract: In this study, we developed a new method utilizing two types of hands, selecting a two-finger gripper or a suction pad unit depending on the object, aiming to automate the task of displaying products, i.e., stacking shelves. Although a grasping point must be estimated to grasp an object, large errors occur in estimating the grasping point for conventional suction pad units. Moreover, in prior works, hand selection was often performed by stipulating rules according to the shape and posture of a given object, but these methods were unable to handle an increasing number of objects. Accordingly, in this study, to improve the performance of object-grasping systems, we dealt with two problems. The first is improving the robustness of deep neural network (DNN) models against errors in the estimation of grasping points for a suction pad unit. The second is selecting a hand without utilizing fixed rules. For the first problem, we developed a new method that imparts flexibility to the suction pad unit and processes the input to the DNN in order to focus on the object. For the second problem, we developed a DNN to appropriately select the optimal grasping hand even with limited training data. Experimental results showed that the product conveyance success rate of the proposed method was 90%, which was 44 percentage points better than that of the conventional method.
|
|
13:45-14:00, Paper TueBK1.2 | |
A Sensorless Parallel Gripper Capable of Generating Sub-Newton Level Grasping Force |
|
Sato, Mutsuhito | Ritsumeikan University |
Arita, Hikaru | Kyushu University |
Mori, Yoshiki | Ritsumeikan University |
Kawamura, Sadao | Ritsumeikan University |
Wang, Zhongkui | Ritsumeikan University |
Keywords: Robotic hands and grasping, Mechanism Design
Abstract: When grasping a fragile object, a small grasping force and gripper compliance are required to avoid large deformation of, or damage to, the object. In this study, a linear-servomotor-based mechanism is used to construct a parallel gripper that achieves a small gripping force and compliance without using external force sensors. The linear motor mechanism has low friction and no reduction gear; therefore, it can produce a small thrust force with high backdrivability. The proposed parallel gripper consists of two linear motor mechanisms, a guide rail, a frame, and two fingers. The thrust force of the linear motor can be controlled through a driver. The components of the gripper were manufactured using metal machining and 3D printing, and were carefully assembled to ensure good axial alignment. Force calibration was conducted, and the minimum grasping force was confirmed to be 0.076 N. The friction force of the guide rail was also experimentally measured and confirmed to be 0.090 N. Finally, grasping experiments were conducted on potato chips and tofu. The results suggest that the proposed parallel gripper can handle fragile objects with sub-Newton-level force.
|
|
14:00-14:15, Paper TueBK1.3 | |
Reinforcement Learning-Based Grasping Via One-Shot Affordance Localization and Zero-Shot Contrastive Language–Image Learning |
|
Long, Xiang | University College London |
Beddow, Luke Jonathan | University College London |
Hadjivelichkov, Denis | University College London |
Delfaki, Andromachi Maria | Independent Researcher |
Wurdemann, Helge Arne | University College London |
Kanoulas, Dimitrios | University College London |
Keywords: Robotic hands and grasping, Machine Learning, Automation Systems
Abstract: We present a novel robotic grasping system using a caging-style gripper, that combines one-shot affordance localization and zero-shot object identification. We demonstrate an integrated system requiring minimal prior knowledge, focusing on flexible few-shot object agnostic approaches. For grasping a novel target object, we use as input the color and depth of the scene, an image of an object affordance similar to the target object, and an up to three-word text prompt describing the target object. We demonstrate the system using real-world grasping of objects from the YCB benchmark set, with four distractor objects cluttering the scene. Overall, our pipeline has a success rate of the affordance localization of 96%, object identification of 62.5%, and grasping of 72%. Videos are on the project website: https://sites.google.com/view/rl-affcorrs-grasp.
|
|
14:15-14:30, Paper TueBK1.4 | |
Learning and Generalizing Tasks on Humanoid Robots with an Automatic Multisensory Segmentation Method |
|
Barberteguy, Victor | Ecole Polytechnique |
Kanehiro, Fumio | National Inst. of AIST |
Keywords: Robotic hands and grasping, Machine Learning, Human-Robot/System Interaction
Abstract: We provide a complete framework for learning and reproducing tasks from human demonstrations. This framework adapts recent developments in automatic, unsupervised segmentation of time series to humanoid robotics by preprocessing data obtained from a broad range of the robot's sensors, and then reproduces the learned task in similar environments. In more detail, we reproduce and extend the acquired multi-step task using Dynamic Movement Primitives in simulation for the JVRC1 robot, and further validate the segmentation process in the real world with the HRP-4C robot, showcasing the possibility of creating an extensive library of reusable skills for complex humanoids with our approach.
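Dynamic Movement Primitives, which the framework uses for reproduction, integrate a spring-damper system shaped by a learned forcing term. A minimal 1-D sketch with zero forcing, using standard Ijspeert-style dynamics (the gains and step size are illustrative, not the authors' values):

```python
def dmp_rollout(y0, goal, tau=1.0, dt=0.01, steps=300,
                alpha_z=25.0, beta_z=25.0 / 4, alpha_x=1.0,
                forcing=lambda x: 0.0):
    """Integrate a 1-D discrete Dynamic Movement Primitive (Euler method).

    With zero forcing, the transformation system is a critically damped
    spring pulling y toward `goal`; a forcing term learned from the
    demonstration would shape the trajectory along the way.
    """
    y, z, x = y0, 0.0, 1.0   # position, scaled velocity, canonical phase
    traj = [y]
    for _ in range(steps):
        zdot = (alpha_z * (beta_z * (goal - y) - z) + forcing(x)) / tau
        ydot = z / tau
        z += zdot * dt
        y += ydot * dt
        x += (-alpha_x * x / tau) * dt   # phase decays from 1 toward 0
        traj.append(y)
    return traj

traj = dmp_rollout(y0=0.0, goal=1.0)  # converges toward the goal
```

Because the goal and time constant are explicit parameters, a segmented demonstration can be replayed toward new targets, which is the generalization mechanism the paper relies on.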
|
|
14:30-14:45, Paper TueBK1.5 | |
External Sensor-Less Fingertip Force/Position Estimation Framework for a Linkage-Based Under-Actuated Hand with Self-Locking Mechanism |
|
Doan, Ha Thang Long | Kyushu University |
Arita, Hikaru | Kyushu University |
Tahara, Kenji | Kyushu University |
Keywords: Robotic hands and grasping, Mechatronics Systems
Abstract: Precision grasping is an important skill for robotic hands to master so that they can be utilized in various manipulation tasks. To control a robotic hand precisely, modeling its kinematic and static behavior is one of the active areas of robotics research. While becoming popular because of their self-adaptability in robust power grasping, linkage-based under-actuated hands are difficult to model analytically for precision fingertip grasping, due to the stochastic and nonlinear dynamical behavior caused by the passive mechanisms inside each finger. In this paper, we propose a fingertip force/position estimation framework that detects in real time, using internal sensor data, whether the passive locking mechanism is in action, and uses kinematics and statics models with gravity compensation in each case to compute the estimate. Using the proposed framework, an example of a precision grasping task is carried out to evaluate its reliability and show its potential for future dexterous manipulation tasks.
|
|
14:45-15:00, Paper TueBK1.6 | |
Development of a Gripper for Manipulation of Soft Line-Shaped Object |
|
Ishikawa, Subaru | Kanazawa University |
Nishimura, Toshihiro | Kanazawa University |
Watanabe, Tetsuyou | Kanazawa University |
Keywords: Robotic hands and grasping, Mechanism Design
Abstract: This study presents a gripper for manipulating bundles of line-shaped flexible objects, focusing on noodles. The target operation is aligning the noodles to achieve an appetizing appearance. For this purpose, the developed gripper has grasping, aligning, and releasing functions. The three modes are realized by a single motor, which is another feature of the developed gripper. The efficacy of the developed gripper was experimentally validated.
|
|
TueBK2 |
Event Hall 2 |
Soft Robotics for System Integration 2 |
In-person Special Session |
Co-Chair: Nabae, Hiroyuki | Tokyo Institute of Technology |
Organizer: Maeda, Shingo | Tokyo Institute of Technology |
Organizer: Hirai, Shinichi | Ritsumeikan Univ |
Organizer: Suzumori, Koichi | Tokyo Institute of Technology |
Organizer: Nabae, Hiroyuki | Tokyo Institute of Technology |
Organizer: Wiranata, Ardi | University of Gadjah Mada |
|
13:30-13:45, Paper TueBK2.1 | |
A Detachable Distance Sensor Unit Using Optical Fiber for a Pneumatic-Driven Bellows Actuator (I) |
|
Mori, Yoshiki | Ritsumeikan University |
Wang, Zhongkui | Ritsumeikan University |
Shimada, Nobutaka | Ritsumeikan University |
Kawamura, Sadao | Ritsumeikan University |
Keywords: Soft Robotics, Mechatronics Systems
Abstract: To improve the functionality of soft robots, many soft sensors have been developed to measure the deformation of soft actuators. Many of them are glued inside or on the surface of the soft actuator to measure deformation. However, the durability of soft actuators is basically low, and the soft sensor must then be replaced together with the damaged soft actuator itself. This leads to increased costs in terms of system integration. Therefore, in this paper, we focus on distance sensors using optical fibers and propose a detachable sensor unit installed in an air tube. This makes it possible to measure deformation without gluing a sensor to the soft actuator body. The sensor unit measures deformation through changes in reflected light intensity. We evaluate it using a pneumatic-driven bellows actuator, which is relatively easy to manufacture and has a linear motion. The sensor unit (air tube) is installed either horizontally or vertically with respect to the motion direction of the bellows actuator. Finally, we experimentally demonstrated that this sensor unit can measure the position of the bellows actuator.
|
|
13:45-14:00, Paper TueBK2.2 | |
Dynamics Modeling and Validation of a Bio-Inspired Soft Eel Robot for Underwater Motion (I) |
|
Trinh, Hiep | Le Quy Don Technical University |
Nguyen, Bich Ngoc | VNU University of Science, Vietnam National University, Ha Noi, |
Phung Van, Binh | Le Quy Don Technical University |
Nguyen, Anh Tuan | Le Quy Don Technical University |
Hoang, Kien Trung | Le Quy Don Technical University |
Nguyen, Dinh | Hanoi University of Industry |
Keywords: Soft Robotics, Biologically-Inspired Robotic Systems, Mechanism Design
Abstract: This paper presents advancements in our previous research of bio-inspired soft eel robots with a focus on investigating their underwater motion dynamics. An equivalent model is proposed, consisting of rigid links interconnected by rotating joints and springs, with pneumatic pressure activation modeled through joint moments and hydrodynamic forces determined through explicit formulas incorporating motion dynamic parameters. A novel virtual loading method combining kinetic modeling and simulation of the eel robot’s deformation is introduced to determine the configuration parameters of the equivalent model. The model is extensively investigated using MSC Adams multibody dynamics simulation software and validated through real-underwater motion experiments. Comparative analysis between experimental and simulation results demonstrates the proposed approach’s reliability and accuracy. The findings facilitate the optimization of the eel robot’s design and control and can be extended to study other continuous soft robots’ dynamics.
|
|
14:00-14:15, Paper TueBK2.3 | |
Fabric Manipulation by Pulling-Driven Soft Hand with Closing-Approaching Coupling (I) |
|
Hanamura, Kenta | Ritsumeikan University |
Hirai, Shinichi | Ritsumeikan Univ |
Wang, Zhongkui | Ritsumeikan University |
Keywords: Soft Robotics, Factory Automation
Abstract: This paper proposes a pulling-driven soft hand for fabric manipulation. Automatic fabric manipulation is required in various industries, such as the garment, linen supply, automotive parts, and composite manufacturing industries, where fabrics with different materials, shapes, and surface properties are used. Therefore, we propose pulling-driven soft hands to pick up various fabrics. We find that picking up a fabric on a table requires soft fingertips in contact with the fabric, large friction between the fingertips and the fabric, and fingertips moving along the table to maintain contact with the fabric. We thus introduce closing-approaching coupling to the pulling-driven soft hand. We present a prototype of the proposed soft hand and demonstrate grasping of a single fabric and picking multiple fabrics one by one from a stack. Additionally, a set of the proposed soft hands was applied to pick-and-place operations on several fabrics.
|
|
14:15-14:30, Paper TueBK2.4 | |
A Preliminary Study of a Soft Artificial Pump Based on ROBIN - Rotation-Based Buckling Instability Analysis (I) |
|
Nguyen, Nhan Huu | Japan Advanced Institute of Science and Technology |
Do, Thanh Nho | University of New South Wales |
Ho, Van | Japan Advanced Institute of Science and Technology |
Keywords: Soft Robotics, Mechanism Design
Abstract: This paper introduces a novel soft artificial fluidic pump based on the concept of ROBIN (Rotation-based Buckling Instability Analysis), which leverages the unique deformation of a two-layer shell body under rotational actuation to achieve pumping. The proposed device, hereafter called the ROBIN-based pump, employs two servo motors to twist soft skin layers until buckling deformation manifests as a series of inward folds. This complex transformation triggers a significant collapse of the fluid cavity (filled with water in this paper) to perform the pumping action. The proposed soft pump underwent empirical evaluation through a series of experiments with different actuation states defined by the speed ω and rotation angle θ of the motors. The main aim is to establish the relationship between the actuation configuration [ω, θ] and the pump's characteristics, described by two features: the flow rate of the fluid stream and the generated stroke volume. Preliminary results demonstrate that the soft pump could fulfill the functional requirements of either soft robotic mechanisms (e.g., soft grippers) or biomedical devices (e.g., human heart simulators or support devices). This work showcases a sustainable approach to developing soft robotic systems, in which the deformation of one component can provide energy to others.
|
|
14:30-14:45, Paper TueBK2.5 | |
Origami-Based Robotic Gripper for Transporting Solids with Liquids |
|
Nate, Issei | Ritsumeikan University |
Wang, Zhongkui | Ritsumeikan University |
Hirai, Shinichi | Ritsumeikan Univ |
Keywords: Soft Robotics, Robotic hands and grasping
Abstract: In the field of aquaculture and aquatic organism management, there is a need to transport both liquids and solids simultaneously. For instance, when dealing with fry or delicate organisms, it is preferable to transport them together with water, without removing them from their aquatic environment. However, accomplishing this task with a robot requires complex control mechanisms to account for the water flow generated when moving containers. Additionally, when pouring water, precise calculations and careful pouring techniques are necessary to determine the exact location where the water should be delivered. To address these challenges, this study proposes a gripper that can deform its shape underwater according to the shape of the container. The gripper utilizes a cylindrical shell with a Kresling structure, a folding structure capable of transforming a cylinder through compression. In this research, we analyze the conditions required for the gripper to transport water and investigate how the gripper's performance is affected by the number of folds. The design of the cylindrical shell is based on the condition of filling the center of the cylinder after deformation. To enhance underwater usability, the gripper incorporates a locking mechanism instead of relying on external actuators. As a result, we have successfully developed a gripper capable of transporting both liquids and solids using only vertical movements. To evaluate the gripper's performance, two experiments were conducted. The first assessed the sealing ability of the gripper, which revealed a water leakage of 7 g per minute. The second confirmed that the gripper can simultaneously transport solids with liquids using only vertical movements of the robot arm.
|
|
14:45-15:00, Paper TueBK2.6 | |
Integration of Origami Twisted Tower to Soft Mechanism through Rapid Fabrication Process |
|
Kusunoki, Mikiya | Japan Advanced Institute of Science and Technology |
Nguyen, Viet Linh | Japan Advanced Institute of Science and Technology |
Tsai, Hsin-Ruey | National Chengchi University |
Ho, Van | Japan Advanced Institute of Science and Technology |
Xie, Haoran | Japan Advanced Institute of Science and Technology |
Keywords: Soft Robotics, Human Interface
Abstract: Origami structures have been widely integrated into various soft robotics applications, especially the origami twisted tower with its continuum mechanism. However, integrating such towers into existing structures is difficult and time-consuming, since the fabrication process relies on either manual folding or 3D printing. To address this issue, we propose a rapid fabrication approach for the origami twisted tower as a soft mechanism, using a design interface and laser cutting. Following the crease patterns of the fabricated diagram, we verified the origami material and number of layers through evaluation experiments and found polypropylene to provide the most suitable material and stiffness for potential applications of the origami structures.
|
|
TueBM1 |
Meeting room 1 |
Robotics and AI for Healthcare: Empowering Independence, Enriching Lives |
In-person Special Session |
Chair: Inamura, Tetsunari | Tamagawa University |
Organizer: Ravankar, Ankit A. | Tohoku University |
Organizer: Salazar Luces, Jose Victorio | Tohoku University |
Organizer: Hirata, Yasuhisa | Tohoku University |
Organizer: Ravankar, Abhijeet | Kitami Institute of Technology |
Organizer: Paez-Granados, Diego | ETH Zurich |
|
13:30-13:45, Paper TueBM1.1 | |
Implementation of a Virtual Success Experience System with Difficulty Adjustment for Enhancing Self-Efficacy (I) |
|
Inamura, Tetsunari | Tamagawa University |
Nagata, Kouhei | National Institute of Informatics |
Takahashi, Nanami | National Institute of Informatics |
Keywords: Human Factors and Human-in-the-Loop, Virtual Reality and Interfaces, Human-Robot/System Interaction
Abstract: Self-efficacy is a psychological term defined as the confidence that ``I can perform this action in the future.'' In nursing and rehabilitation facilities, improving self-efficacy has been indicated to be as crucial as enhancing physical movement performance. When AI robot systems assist care receivers, it is desirable to improve users' self-efficacy by adjusting the difficulty of the target task according to the individual's state. This paper proposes a kendama task in a VR environment that provides virtual success experiences in order to model the relationship between difficulty level and self-efficacy. We performed an experiment with 24 participants to investigate the effects of difficulty-level adjustment on self-efficacy. The experimental findings suggest that merely reducing the difficulty level is not appropriate for improving self-efficacy; rather, it is necessary to increase the difficulty level to leave positive effects on the recall of past experiences and on future expectations.
|
|
13:45-14:00, Paper TueBM1.2 | |
Expanding the Integration of Multiscopic Cyber-Physical-Social System with Physical Sensors in Block-Design Test (I) |
|
Anom Besari, Adnan Rachmat | Tokyo Metropolitan University |
Saputra, Azhar Aulia | Tokyo Metropolitan University |
Obo, Takenori | Tokyo Metropolitan University |
Kurnianingsih, Kurnianingsih | Politeknik Negeri Semarang |
Kubota, Naoyuki | Tokyo Metropolitan University |
Keywords: Integration Platform, Multi-Modal Perception, Rehabilitation Systems
Abstract: This paper presents an expansion of the integrated multiscopic Cyber-Physical-Social System (CPSS), designed to assess physical and cognitive aspects within an independent Block-Design Test (BDT). Our approach comprises three tiers. At the microscopic level, we utilize a physical sensor to capture physical features during hand manipulation. Moving to the mesoscopic level, we incorporate block rotation and represent them in a graph data structure. At the macroscopic level, an upper table vision system employs color feature recognition within each block. Subsequently, Graph Convolutional Networks (GCN) represent the graph generated at the mesoscopic level in an embedding space. Our system effectively quantifies physical and cognitive variables during BDT exercises by evaluating eight WAIS-IV BDT designs. While our findings are promising, further research and development are essential to advance BDT applications.
|
|
14:00-14:15, Paper TueBM1.3 | |
Concept of Seamless Physical Observation of Human Hand through Block Design Test (I) |
|
Saputra, Azhar Aulia | Tokyo Metropolitan University |
Adnan Rachmat, Anom Besari | Tokyo Metropolitan University |
Obo, Takenori | Tokyo Metropolitan University |
Kurnianingsih, Kurnianingsih | Politeknik Negeri Semarang |
Kubota, Naoyuki | Tokyo Metropolitan University |
Keywords: Integration Platform, Rehabilitation Systems, System Simulation
Abstract: A seamless physical monitoring system for human observation is required to address the privacy and comfort concerns of the observed individuals. In this paper, we propose a human physical monitoring system integrated with a block design game-based approach to capture both the physical and cognitive capacities of the human hand. By adopting this concept, we are able to acquire human physical data naturally. We have developed a smart block design, equipped with several sensors to track human activities from a physical standpoint. Supported by human skeleton and hand pose recognition, the data from the Kohs blocks are processed in a human musculoskeletal simulation using MuJoCo, which comprises 59 Hill-type muscles for each limb. To analyze performance for future studies, we conducted preliminary experiments involving humans performing the Kohs block design test. The results demonstrate that the proposed model effectively recognizes human hand muscle activity and muscle tension. Furthermore, the proposed concept is not only effective for assessing physical capabilities but can also be extended to cognitive capacities.
|
|
14:15-14:30, Paper TueBM1.4 | |
EMD-Based Feature Extraction Toward Real-Time Fear Emotion Recognition Application Using EEG |
|
Ishizuka, Shoma | Aoyama Gakuin University |
Kurebayashi, Yuhi | Aoyama Gakuin University |
Tobe, Yoshito | Aoyama Gakuin University |
Keywords: Human Factors and Human-in-the-Loop, Human Interface
Abstract: In recent years, many researchers have shown interest in EEG-based emotion recognition for brain-computer interface applications. This study investigates the applicability of an Empirical Mode Decomposition (EMD)-based feature extraction method for real-time EEG fear emotion recognition. Instead of relying on publicly available datasets such as the DEAP dataset, the EEG data are collected independently, using video clips available on the Internet to elicit fearful emotions. The algorithm mainly consists of two parts: feature extraction and fear emotion recognition. In the feature extraction stage, the acquired EEG signals are divided into five-second segments and decomposed into several Intrinsic Mode Functions (IMFs) using EMD. Subsequently, the mean and differential entropy are extracted from the first five IMFs. These features are then classified by a Support Vector Machine. To investigate the applicability of EMD, the EMD-based feature extraction method is compared to conventional methods, namely the Short-Time Fourier Transform and the Wavelet Transform. As a result, the EMD-based method demonstrated superior accuracy in both subject-dependent and subject-independent classification compared to the other two methods.
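The per-IMF feature extraction described above (mean and differential entropy of the first five IMFs) can be sketched as follows, assuming the IMFs have already been obtained from an EMD implementation (e.g., the PyEMD package); the Gaussian differential-entropy formula DE = ½·log(2πeσ²) is a common choice and an assumption here, not a detail stated in the abstract.

```python
import math

def imf_features(imfs, n_imfs=5):
    """Return (mean, differential entropy) for each of the first n_imfs IMFs.
    DE uses the Gaussian closed form 0.5*log(2*pi*e*variance)."""
    feats = []
    for imf in imfs[:n_imfs]:
        n = len(imf)
        mean = sum(imf) / n
        var = sum((x - mean) ** 2 for x in imf) / n
        # Guard against zero variance in a degenerate segment.
        feats.append((mean, 0.5 * math.log(2 * math.pi * math.e * max(var, 1e-12))))
    return feats
```

In the full pipeline these feature vectors (one pair per IMF, concatenated per five-second segment) would be fed to the SVM classifier.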
|
|
14:30-14:45, Paper TueBM1.5 | |
Pinch Force Measurement Using a Geomagnetic Sensor |
|
Yamamoto, Sarii | Keio University |
Ikematsu, Kaori | LY Corporation |
Kato, Kunihiro | Tokyo University of Technology |
Sugiura, Yuta | Keio University |
Keywords: Rehabilitation Systems, Mechanism Design, Medical Systems
Abstract: Hand muscle strength, such as grip strength, pinch strength, and holding strength, is an important indicator when determining hand diseases, treatments, and rehabilitation methods. However, devices for measuring pinch strength, one type of hand muscle strength, are expensive and primarily intended for medical use, so the general public does not typically use them. In this study, we propose a pinch measurement method that uses only a smartphone and a measurement device with an embedded magnet. We created a measurement device in which the magnet position changes when force is applied. We then used the magnetic sensor built into a smartphone to estimate pinch force from the change in the magnetic field, which depends on the distance between the magnet and the smartphone. Magnetic field readings can be easily obtained from existing applications. This makes it possible to routinely measure pinch force with an inexpensive device rather than a conventional pinch measuring device.
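The estimation idea can be sketched under two simplifying assumptions that are not the authors' calibrated model: a point-dipole field falloff (B ∝ 1/r³) to recover magnet-to-sensor distance, and a linear elastic element linking magnet displacement to applied force. All constants and the function name below are illustrative.

```python
def pinch_force(b_measured, b_rest, k_spring=500.0, c_dipole=1e-7):
    """Estimate pinch force (N) from magnetometer readings (tesla).
    Distance from the dipole model B = c/r^3; force from Hooke's law
    F = k * (r_rest - r). Constants are illustrative, not calibrated."""
    r_rest = (c_dipole / b_rest) ** (1.0 / 3.0)   # magnet distance at zero force
    r = (c_dipole / b_measured) ** (1.0 / 3.0)    # magnet distance under load
    return k_spring * (r_rest - r)
```

A stronger field than at rest implies the magnet moved closer, yielding a positive estimated force; in practice the mapping would be calibrated against a reference pinch gauge.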
|
|
14:45-15:00, Paper TueBM1.6 | |
Effect of Time Delay in Movable Prosthetic Eyes on Observers' Feeling of Asynchronization |
|
Urata, Tatsuto | Nara Institute of Science and Technology |
Orita, Yasuaki | Nara Institute of Science and Technology |
Wada, Takahiro | Nara Institute of Science and Technology |
Keywords: Welfare systems, Human-Robot/System Interaction, Human Interface
Abstract: The lack of motility in a prosthetic eye can trigger a sense of unnaturalness in observers who interact with its user, mainly owing to the asynchronization between the movements of the prosthetic and unaffected eyes. For people using a prosthetic eye, this is a crucial concern in addition to the aesthetics and immobility of the prosthesis. To address this limitation, it is crucial to realize an artificial eye that moves in conjunction with the unaffected eyeball, although some delay is expected regardless: inherent delays in actuator response and control in mechanical approaches make perfect synchronization unattainable. To obtain design standards for prosthetic eyes that facilitate natural face-to-face interaction, we investigated the impact of delay in the response of prosthetic eyes on the unnaturalness perceived by observers. In the experiment, a unilateral prosthetic eye with a delay was reproduced by a 2D animated movie, and participants watched movies in which the time constant and dead time were varied. The results demonstrated that there is a range within which observers do not feel unnaturalness due to delay differences in the movements between the left and right eyes, even if they notice the delay itself. These results will contribute to realizing a better mechanically movable prosthetic eye.
|
|
TueBM2 |
Meeting room 2 |
Assistive System Integration |
In-person Regular Session |
Chair: Lee, Joo-Ho | Ritsumeikan University |
Co-Chair: Uehara, Akira | University of Tsukuba |
|
13:30-13:45, Paper TueBM2.1 | |
360° Sound Localization Support System for Deaf and Hard-Of-Hearing People Using Smartglasses Equipped with Two Microphones |
|
Matsuo, Akemi | Aoyama Gakuin University |
Itami, Taku | Aoyama Gakuin University |
Yoneyama, Jun | Aoyama Gakuin University |
Keywords: Systems for Service/Assistive Applications, Welfare systems, Biomimetics
Abstract: Deaf and hard-of-hearing (DHH) people have difficulty obtaining auditory information. As a result, they may fail to notice emergency warning sounds such as ambulance sirens or car horns, and it therefore takes them longer to avoid danger. In this paper, we propose a system that can calculate the direction of a sound source over 360° using two microphones. The effectiveness of this system is demonstrated by configuring a system that obtains the sound source direction from the sound level relationship and the time difference between the two microphones installed on the smartglasses, and displays the calculated direction on the lens. We discuss the proposed method and our validation of the sound source direction calculations.
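The time-difference part of the computation can be sketched for the far-field case. The helper below is a hypothetical illustration (the microphone spacing and the `asin` geometry are assumptions, not the paper's published formulation); note that a single time difference only resolves angles within one half-plane, which is presumably why the paper also uses the level relationship between the two microphones to cover the full 360°.

```python
import math

def doa_from_tdoa(tdoa_s, mic_distance_m=0.15, c=343.0):
    """Direction of arrival (degrees from broadside) from the time
    difference of arrival between two microphones, assuming a far-field
    source: tdoa = d * sin(theta) / c."""
    s = tdoa_s * c / mic_distance_m
    s = max(-1.0, min(1.0, s))  # clamp against measurement noise
    return math.degrees(math.asin(s))
```

A zero time difference maps to a source directly ahead (or behind), while the maximum difference d/c maps to a source on the microphone axis.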
|
|
13:45-14:00, Paper TueBM2.2 | |
Dishflipper: Development of a Large Food Debris Rinsing and Removal System Integration for Fully Automating Dish-Washing in a Pilot Soba Noodle Stand |
|
Lo, WingSum | Tokyo University of Agriculture and Technology |
Ozeki, Sara | Tokyo University of Agriculture and Technology |
Nakashima, Kazuya | Connected Robotics Inc |
Tsukamoto, Koichi | Connected Robotics Inc |
Sawanobori, Tetsuya | Connected Robotics Inc |
Mizuuchi, Ikuo | Tokyo University of Agriculture and Technology |
Keywords: Systems for Service/Assistive Applications, Human-Robot/System Interaction, Multi-Robot Systems
Abstract: This work presents a system integration designed for washing reusable dishes at a soba noodle restaurant. To address the need to manually rinse food debris and flip the dishes in our previous system integration, we developed an add-on system that utilizes the Dishflipper mechanism mounted on a robot arm, improving the overall capability of our system. Our proposed system integration is unique in that it automates the removal of large food pieces initially left in the tableware using a cobot. The developed system integration has been shown to be effective in matching the workflow of a soba noodle stand, as demonstrated at the Hotels and Restaurant Show 2021 (HCJ) hosted in Tokyo. (Extended video: https://www.youtube.com/watch?v=-DcNNwO5h1I)
|
|
14:00-14:15, Paper TueBM2.3 | |
Distinguishing between Stooping and Squatting Using Difference in Angular Velocity of Upper Body Forward Tilt |
|
Ishidoh, Yudai | Aoyama Gakuin University |
Otsuka, Keisuke | Aoyama Gakuin University |
Itami, Taku | Aoyama Gakuin University |
Yoneyama, Jun | Aoyama Gakuin University |
Itami, Kimiwa | University of Shiga Prefecture |
Keywords: Systems for Service/Assistive Applications, Welfare systems, Human Interface
Abstract: The purpose of this study is to classify postures that produce different lumbar loads in order to eliminate occupational low back pain among nurses and caregivers. We focus on the forward tilting motion during the lifting of heavy objects, which nurses perform frequently when caring for patients. There are two types of forward tilting postures: stoop lifting, which results in a high lumbar load, and squat lifting, which reduces the lumbar load; sensor attachment points are an issue in distinguishing these two methods. In this study, we propose a method to classify the forward tilting motion from the angular velocity of the upper body using a single sensor with a fixed mounting point, for practical use in the nursing field. Assuming that differences in the angular velocity of the upper body's forward tilt arise from whether squat lifting or stoop lifting is performed, the classification threshold is calculated through formulation and simulation. To validate the calculated threshold, experiments were conducted on five healthy adult subjects performing the stoop and squat lifting methods to see whether the threshold could distinguish between the two.
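Once a threshold on the upper-body tilt angular velocity is available, the classification itself reduces to a one-line decision. The sketch below uses a made-up threshold value and assumes stoop lifting produces the faster tilt; the paper derives the actual threshold by formulation and simulation.

```python
def classify_lift(omega_peak, threshold=1.2):
    """Classify a forward-tilt motion from the peak upper-body angular
    velocity (rad/s): 'stoop' if it exceeds the threshold, else 'squat'.
    The threshold value here is purely illustrative."""
    return "stoop" if omega_peak >= threshold else "squat"
```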
|
|
14:15-14:30, Paper TueBM2.4 | |
Basic Study on Cybernic Interface for Amyotrophic Lateral Sclerosis Patients to Perform Daily Living Tasks by Transiting Seamlessly between Cyberspace and Physical Space |
|
Uehara, Akira | University of Tsukuba |
Sankai, Yoshiyuki | University of Tsukuba |
Keywords: Virtual Reality and Interfaces, Human-Robot/System Interaction, Systems for Service/Assistive Applications
Abstract: Amyotrophic lateral sclerosis (ALS) gradually makes it difficult for patients to move, breathe, and express their will as motor neuron degeneration progresses. It is important for patients not only to communicate their daily needs to family members or caregivers but also to perform daily activities, participate in society, and fulfill social roles without constant support from other people. To improve their independence, the purpose of this study is to develop a cybernic interface that enables daily living tasks to be performed by transiting seamlessly between cyberspace and physical space based on the patient’s remaining voluntary motor functions, and to confirm the basic performance of the developed interface. The system consists of a bio-electrical signal measurement unit, a display unit, a processor unit, and an IoT unit. The system controls the flipping up and down of the head-mounted display (HMD) by estimating the intention to transition between cyberspace and physical space from the bio-electrical signals of the forearm. When the HMD is attached to the user’s face, the user can move through the virtual environment, operate IoT devices in a physical room via the virtual environment, and select voice output expressing their own state by gaze. To confirm the basic performance of the developed interface for performing daily living tasks independently, we conducted a basic experiment with an able-bodied participant. As a result, the participant was able to flip up the HMD, move through the virtual environment, and operate the TV and a lock in physical space without moving the body, as ALS patients would need to.
In conclusion, we confirmed the basic performance of performing daily living tasks to improve the independence of ALS patients. A cybernic interface that connects the central nervous system to cyberspace and physical space has the potential to remove constraints imposed by motor dysfunction and spatial limitations.
|
|
14:30-14:45, Paper TueBM2.5 | |
Simulating the Gravitational Displacement of a Gigantic Manipulator Posture Using Lumped Stiffness-Matrix Modeling on a Physics Engine |
|
Maruyama, Takahito | University of Tsukuba |
Ogawa, Shota | National Institutes for Quantum and Radiological Science and Tec |
Jaklin, Norman | Tree C Technology B.V |
Tolsma, Sander | Tree C Technology B.V |
Tsubouchi, Takashi | University of Tsukuba |
Keywords: Virtual Reality and Interfaces, System Simulation
Abstract: The ITER project seeks to construct the world's largest fusion device to validate fusion as a viable energy source. The blanket remote handling system (BRHS) manages blanket modules weighing up to four tons, which shield the vessel from fusion plasma. A virtual reality (VR) environment is indispensable for effective motion planning and monitoring during BRHS operations. However, gravitational displacements over 100 mm at the end-effector pose a challenge to achieving the required ±50 mm error range of the VR relative to the actual manipulator. To address this issue, we propose a method utilizing the lumped model, in which virtual joints simulating the stiffness of the manipulator links are implemented. The physics engine of the VR software calculates the deformation of the virtual joints. Since stiffness parameters are obtained through structural analysis, this approach eliminates the need for actual equipment measurements. The virtual joints feature a six-degree-of-freedom (6-DoF) spring, commonly used in the lumped model, and a full stiffness matrix for better simulation. Through evaluation, we reduced the maximum VR error from 97.8 mm (rigid model) to 64.6 mm (6-DoF spring model). Although the maximum error still exceeded the ±50 mm requirement, the proposed method significantly improved the VR accuracy.
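The lumped virtual-joint idea above reduces to computing a restoring wrench from a 6×6 stiffness matrix: a diagonal matrix corresponds to the independent 6-DoF spring, while off-diagonal terms realize the full stiffness matrix that couples, e.g., translation and bending. A minimal numpy sketch with illustrative names and values:

```python
import numpy as np

def joint_wrench(K, delta):
    """Restoring wrench [Fx, Fy, Fz, Mx, My, Mz] of a virtual joint with
    stiffness matrix K (6x6) for a small displacement/rotation delta (6,)."""
    K = np.asarray(K, dtype=float)
    assert K.shape == (6, 6)
    return -K @ np.asarray(delta, dtype=float)

# Diagonal K: a plain 6-DoF spring. Off-diagonal entries would couple
# translational and rotational deflection, as in a full link stiffness matrix.
K_diag = np.diag([1e6, 1e6, 1e6, 1e4, 1e4, 1e4])
```

In the paper's setting, a physics engine integrates these joint wrenches so that gravity deflects the virtual joints until the model settles into the deformed posture.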
|
|
14:45-15:00, Paper TueBM2.6 | |
Latency Improvement Strategy for Temporally Stable Sequential 3DMM-Based Face Expression Tracking |
|
Nguyen, Tri Tung Nguyen | Ritsumeikan University |
Tran, Dinh Tuan | College of Information Science and Engineering, Ritsumeikan Univ |
Lee, Joo-Ho | Ritsumeikan University |
Keywords: Virtual Reality and Interfaces, Software Platform, Human-Robot/System Interaction
Abstract: 2D image-based face tracking is a core feature of many AR/VR applications. Recent advancements in self-supervised 3DMM face reconstruction have achieved high-accuracy analysis-by-synthesis tracking but were not designed for online inference settings requiring low latency. Recently, state-of-the-art models such as MICA have demonstrated significant improvements in accuracy for the offline face reconstruction task, but their design is ill-suited for practical use cases due to long processing times on low- and middle-end hardware. The original workflow includes two analysis-by-synthesis stages: face shape reconstruction and face tracking. Shape reconstruction regresses a neutral 3DMM model from the input; the tracking process then learns the relevant parameters for expressions, eyes, mouth, etc., for a differentiable renderer to reconstruct the original photographic input. This study proposes an interface design that adapts offline 3DMM face tracking into an online inference pipeline for facial-analysis-based applications.
|
|
TueBM3 |
Meeting room 3 |
Automation System |
In-person Regular Session |
Chair: Kikuuwe, Ryo | Hiroshima University |
Co-Chair: Kanehiro, Fumio | National Inst. of AIST |
|
13:30-13:45, Paper TueBM3.1 | |
Remote Shape Prediction of Submarine Cables Using Fiber-Optic Distributed Sensors |
|
Long, Zeyu | Osaka University |
Wakamatsu, Hidefumi | Grad. School of Eng., Osaka Univ |
Iwata, Yoshiharu | Osaka University |
Keywords: Automation Systems, System Simulation, Sensor Networks
Abstract: Maintenance of submarine cables requires a significant amount of human resources, time, and cost. Therefore, in this paper, we propose a method that can predict the shape of submarine cables remotely. We use multiple fiber-optic distributed sensors wrapped around the cable to predict its shape from the normal strain data obtained from the fibers. This paper mainly provides the modeling and theoretical support for this method. We establish a discrete model of the cable and the fibers, and use an optimization method to minimize the potential energy. We set a sample shape and output the corresponding normal strain data. Finally, we use this normal strain data to simulate and predict the shape of the cable. The prediction results match the sample shape, supporting the reliability of this method. We conducted multiple simulations to find the minimum number of fibers required to accurately predict the cable shape. Finally, we analyze the shortcomings of this method.
|
|
13:45-14:00, Paper TueBM3.2 | |
Trajectory Control with Consideration of Vibration Suppression in Straight Transfer System with Arbitrary Initial State |
|
Uchida, Akira | University of Yamanashi |
Noda, Yoshiyuki | University of Yamanashi |
Kaneshige, Akihiro | Toyota College of Technology |
Keywords: Automation Systems, Factory Automation, Logistics Systems
Abstract: This paper contributes an advanced transfer control system for load transport systems with a vibration element. In most previous approaches to trajectory design with vibration suppression, the reference trajectory was designed from an initial state in which the cart is static. However, when a part of an existing trajectory is redesigned, the trajectory must be designed from a desired initial state that is not static. Therefore, in this study, we propose a trajectory design method for a straight transfer system with vibration suppression and an arbitrary initial state. In this approach, the reference trajectory of the cart motion is designed by separating the transient and stationary state variables. The stationary state variables can be determined easily from the initial state. The reference trajectory in the transient state variables can be derived by formulating the trajectory design problem as a quadratic program with state constraints. The efficacy of the proposed approach is verified by simulations of straight transfer in an overhead traveling crane.
|
|
14:00-14:15, Paper TueBM3.3 | |
End-Effector Position Estimation and Control of Hydraulic Excavators with Total Stations |
|
Yamamoto, Yuki | Hiroshima University |
Kikuuwe, Ryo | Hiroshima University |
Keywords: Automation Systems, Sensor Fusion
Abstract: This paper presents an end-effector position estimator for excavators equipped with total stations and joint angle sensors. The total stations are external sensors that measure positions in world coordinates with high accuracy, but their sampling interval is long, typically about 0.3 s or longer. The estimator is a simple integrator of the end-effector velocity and acceleration obtained from the angle sensors at a high sampling frequency. To combine the information from the total stations and the angle sensors, which have different sampling intervals, the estimator resets its output to each total-station measurement as it arrives at the lower sampling rate. Moreover, this paper presents an end-effector position controller employing the estimator. The estimator and the controller are validated using a real-time simulator of a hydraulic excavator.
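The estimator structure — fast integration of sensor-derived velocity, periodically reset by the slow but accurate total-station fix — can be sketched in one dimension. The class below is a hypothetical simplification (it omits the acceleration term and any latency compensation for the total-station measurement):

```python
class ResetIntegrator:
    """High-rate position estimate that integrates velocity on each fast
    tick and snaps to the accurate, low-rate external fix on arrival."""

    def __init__(self, x0=0.0):
        self.x = x0

    def predict(self, velocity, dt):
        # Fast path: velocity derived from joint angle sensors.
        self.x += velocity * dt
        return self.x

    def reset(self, x_measured):
        # Slow path: total-station measurement overrides accumulated drift.
        self.x = x_measured
        return self.x
```

Between resets the estimate drifts with the integration error; each total-station sample bounds that drift to at most one slow-sample interval's worth.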
|
|
14:15-14:30, Paper TueBM3.4 | |
Robotic Automation System of Polymer Press Process for Materials Lab-Automation |
|
Asano, Yuki | The University of Tokyo |
Okada, Kei | The University of Tokyo |
Shiomi, Junichiro | University of Tokyo |
Keywords: Automation Systems, Intelligent and Flexible Manufacturing, Vision Systems
Abstract: In this paper, we describe an automation system using a robot arm for the press process in polymer materials development. In the system, the press machines are operated through robot manipulation and control signals from the system. To achieve press machine operation by the robot arm, we developed tools and a gripper interface that transfer human operations to the robotic system. To evaluate the molded polymers, we constructed a method to recognize the top-view shape of the polymers by image processing and estimate their thickness. Evaluation functions were proposed to assess the press process, considering the thickness of the molded polymers and the pressing times. Through verification experiments, it was confirmed that the system could perform the operations sequentially.
|
|
14:30-14:45, Paper TueBM3.5 | |
Sensor Anomaly Detection for Biped Robot Using the Dynamic Equation of a Robotic System |
|
Kanehiro, Fumio | National Inst. of AIST |
Alaverdov, Antoine | CNRS-AIST JRL |
Keywords: Automation Systems, Decision Making Systems, Sensor Networks
Abstract: Falling down is a critical event for biped robots. A robot can easily fall when one of its sensors is out of order. This paper aims to develop an anomaly detection method for biped robots that can detect anomalies in sensor outputs. A large literature on anomaly detection exists: machine learning algorithms, statistical techniques, and various other methods have been developed to detect anomalies. However, each technique performs well only under certain conditions, and unfortunately none has proven efficient at accurately detecting anomalies in real-world robotic systems. In this paper, we introduce a novel approach to detecting anomalies in a biped robot, based on the dynamic equation of the robotic system coupled with a robust z-score. Our method achieves unsupervised, real-time anomaly detection with high accuracy.
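The robust z-score mentioned above is a standard construction based on the median and the median absolute deviation (with the 1.4826 factor making the MAD consistent with the Gaussian standard deviation); here it would be applied to the residuals between measured sensor outputs and the dynamic-equation prediction. The ~3.5 threshold in the comment is a common convention, not a value stated in the abstract.

```python
from statistics import median

def robust_zscores(residuals):
    """Robust z-score: (x - median) / (1.4826 * MAD). Values above
    roughly 3.5 in magnitude are commonly flagged as anomalies."""
    med = median(residuals)
    mad = median(abs(x - med) for x in residuals)
    scale = 1.4826 * mad if mad > 0 else 1e-12  # avoid division by zero
    return [(x - med) / scale for x in residuals]
```

Unlike the ordinary z-score, the median/MAD version is not itself inflated by the outliers it is trying to detect, which suits unsupervised online use.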
|
|
14:45-15:00, Paper TueBM3.6 | |
Increasing Efficiency and Reliability of RF Machinery Testing Using Cartesian Robotics and Automatic Data Collection |
|
Miller, Paul | Wentworth Institute of Technology |
Baliki, Sam | Wentworth Institute of Technology |
Hanna, Rami | Wentworth Institute of Technology |
McCusker, James | Wentworth Institute of Technology |
Wadell, Brian C. | Teradyne, Inc |
Keywords: Automation Systems, Mechatronics Systems, Mechanism Design
Abstract: When analyzing high-throughput radio frequency (RF) chip testing machinery, the limitations of manual tests are evident. Manual testing can be imprecise and costly. Robotic automation, force feedback, and remote access can be used as comprehensive solutions that modernize testing procedures. The integration of a Cartesian robot with an automatic tool-changer brings precision and repeatability to testing processes without the need for an operator. Furthermore, the addition of a camera and constant load monitoring allows for imaging and fault detection. These features address the complications posed by delicate and expensive RF coaxial connectors. Most importantly, this system enables efficient data collection that lays the foundation for future research and process characterization.
|
|
TueCK1 |
Event Hall 1 |
Award Candidate Session 1 |
In-person Regular Session |
Chair: Do, Thanh Nho | University of New South Wales |
Co-Chair: Solvang, Bjoern | The Arctic University of Norway |
|
15:45-16:00, Paper TueCK1.1 | |
Integration of Soft Tactile Sensing Skin with Controllable Thermal Display Toward Pleasant Human-Robot Interaction |
|
Osawa, Yukiko | National Institute of Advanced Industrial Science and Technology |
Luu, Quan | Japan Advanced Institute of Science and Technology |
Nguyen, Viet Linh | Japan Advanced Institute of Science and Technology |
Ho, Van | Japan Advanced Institute of Science and Technology |
Keywords: Integration Platform, Hardware Platform, Human-Robot/System Interaction
Abstract: Safe and pleasant physical human-robot interaction (pHRI) is an essential goal for robotic skin and its related control paradigms. Among existing approaches, integrating tactile sensing with thermal techniques enables a robot to provide pleasant and gentle touch perceptions, thus enhancing likability and trustworthiness during interactions with humans. However, achieving both accurate tactile sensing and the desired thermal comfort perception poses significant challenges. In response, this paper presents the development of a novel soft skin named ThermoTac. This innovative skin not only senses physical touch effectively but also provides thermal comfort sensations. The integration of these capabilities is realized through a carefully designed and strategic system. The developed skin is integrated with vision-based tactile sensing and water circulation systems to evaluate its performance and effectiveness in realistic interaction scenarios. The experimental results highlight the potential of ThermoTac as a promising solution for safe and stable human-robot interaction, paving the way for advanced tactile sensing and thermal comfort in robotics.
|
|
16:00-16:15, Paper TueCK1.2 | |
Real-Time Failure/Anomaly Prediction for Robot Motion Learning Based on Model Uncertainty Prediction |
|
Ichiwara, Hideyuki | Hitachi, Ltd |
Ito, Hiroshi | Hitachi, Ltd |
Yamamoto, Kenjiro | Hitachi, Ltd |
Keywords: Machine Learning, Motion and Path Planning
Abstract: End-to-end robot motion generation methods using deep learning have achieved various tasks. However, due to insufficient training or the occurrence of abnormal conditions, the model sometimes fails tasks unexpectedly. If failures/anomalies can be predicted before they occur, irreversible task failures can be prevented. In this paper, we propose a method of predicting model uncertainty to predict failures/anomalies in real time. As a naive baseline, we used a model that predicts the robot's actions stochastically and predicted failures/anomalies on the basis of the predicted variance. However, it was experimentally shown that this variance cannot distinguish variation in the training data from the uncertainty of the model. Therefore, we predict the likelihood of the model in real time, which corresponds to the degree of discrepancy between the model and observations, treat it as the model's uncertainty, and apply it to failure/anomaly prediction. The method's effectiveness was demonstrated by achieving a high judgment accuracy rate of 85% (17/20 cases) in an object-picking task.
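The likelihood-as-uncertainty idea can be sketched with a Gaussian predictive model: the negative log-likelihood of the actual observation under the predicted distribution measures the model-observation discrepancy. Everything below (function names, the threshold value, the example numbers) is a hypothetical illustration of the mechanism, not the paper's implementation.

```python
import math

def gaussian_nll(observed, pred_mean, pred_var):
    """Negative log-likelihood of an observation under a predicted Gaussian.

    A high NLL means the model's prediction disagrees with what the robot
    actually observed, which is treated here as model uncertainty.
    """
    return 0.5 * (math.log(2 * math.pi * pred_var)
                  + (observed - pred_mean) ** 2 / pred_var)

def predict_failure(observed, pred_mean, pred_var, threshold=5.0):
    # Hypothetical threshold; in practice it would be tuned on held-out runs.
    return gaussian_nll(observed, pred_mean, pred_var) > threshold

# Observation close to the prediction: low uncertainty, no alarm.
assert not predict_failure(0.11, 0.10, 0.01)
# Observation far from the prediction: high uncertainty, alarm raised.
assert predict_failure(1.0, 0.10, 0.01)
```

Unlike the raw predicted variance, this score grows only when the observation actually departs from the prediction, which matches the abstract's distinction between data variation and model uncertainty.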
|
|
16:15-16:30, Paper TueCK1.3 | |
Generalized Framework for Wheel Loader Automatic Shoveling Task with Expert Initialized Reinforcement Learning |
|
Shen, Chengyandan | Unicontrol |
Sloth, Christoffer | University of Southern Denmark |
Keywords: Machine Learning, Automation Systems, Control Theory and Technology
Abstract: This paper presents a generalized framework for fast retrofitting of wheel loaders to enable automatic bucket shoveling with human-level performance. The retrofitting is accomplished in three steps: parameter estimation, expert demonstration, and reinforcement learning (RL), and can be applied to any wheel loader. First, the dynamics of the given wheel loader are identified from a simple parameter estimation procedure. Second, data of an expert demonstrating the task with the wheel loader are recorded. Third, the recorded expert demonstrations are used in an expert-initialized RL method called Circle of Learning (CoL). Unlike typical model-free RL methods, which take a long training time to learn such tasks with human-level performance, CoL can shorten the training phase by pre-training the initial behavior of the agent to imitate expert demonstrations. The proposed framework is validated on an industrial wheel loader. The results demonstrate that the retrofitted wheel loader can achieve a bucket fill rate above 80% when automatically shoveling wet soil and medium-coarse gravel; the deployed policy trained by CoL took only 2 hours of training with 10 expert demonstrations. In contrast, the policy trained using TD3 achieved less than half the bucket fill rate within the same training duration.
|
|
16:30-16:45, Paper TueCK1.4 | |
Real-Time Motion Generation and Data Augmentation for Grasping Moving Objects with Dynamic Speed and Position Changes |
|
Yamamoto, Kenjiro | Hitachi, Ltd |
Ito, Hiroshi | Hitachi, Ltd |
Ichiwara, Hideyuki | Hitachi, Ltd |
Mori, Hiroki | Waseda University |
Ogata, Tetsuya | Waseda University |
Keywords: Machine Learning, Motion and Path Planning
Abstract: While deep learning enables real robots to perform complex tasks that had been difficult to implement in the past, the challenge is the enormous amount of trial-and-error and motion teaching required in a real environment. The manipulation of moving objects, due to their dynamic properties, requires learning a wide range of factors such as the object's position, movement speed, and grasp timing. We propose a data augmentation method for enabling a robot to grasp moving objects with different speeds and grasping timings at low cost. Specifically, the robot is taught to grasp an object moving at low speed using teleoperation, and multiple data with different speeds and grasping timings are generated by down-sampling and padding the robot sensor data in the time-series direction. By learning multiple sensor data in a time series, the robot can generate motions while adjusting the grasping timing for unlearned movement speeds and sudden speed changes. We have shown using a real robot that this data augmentation method facilitates learning the relationship between object position and velocity and enables the robot to perform robust grasping motions for unlearned positions and for objects with dynamically changing positions and velocities.
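The down-sampling/padding augmentation can be sketched as resampling a recorded sensor sequence along the time axis. The helper below is an assumption-laden stand-in (the paper pads raw sensor data; here linear interpolation plays that role, and `augment_speed` is a hypothetical name):

```python
import numpy as np

def augment_speed(sensor_seq, factor):
    """Generate a sequence at a different apparent object speed.

    factor > 1 simulates a faster object by down-sampling the time axis;
    factor < 1 simulates a slower one by interpolating extra samples
    (a stand-in for the padding described in the abstract).
    """
    seq = np.asarray(sensor_seq, dtype=float)
    t_src = np.arange(len(seq))
    t_new = np.arange(0, len(seq) - 1 + 1e-9, factor)
    return np.interp(t_new, t_src, seq)

# One demonstration taught at low speed yields variants at other speeds.
demo = [0.0, 1.0, 2.0, 3.0, 4.0]
fast = augment_speed(demo, 2.0)   # half as many steps: object moves faster
slow = augment_speed(demo, 0.5)   # twice as many steps: object moves slower
```

Training on the original sequence plus such resampled variants is what lets a single teleoperated demonstration cover a range of object speeds and grasp timings.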
|
|
16:45-17:00, Paper TueCK1.5 | |
Distributed Cascade Force Control of Soft-Tactile-Based Multi-Robot System for Object Transportation |
|
Nguyen, Anh Duy | University of Prince Edward Island |
Le Dinh, Minh Nhat | Japan Advanced Institute of Science and Technology |
Nguyen, Nhan Huu | Japan Advanced Institute of Science and Technology |
Pham Duy, Hung | University of Engineering and Technology (VNU-UET) |
Ho, Van | Japan Advanced Institute of Science and Technology |
Ngo, Trung Dung | University of Prince Edward Island |
Keywords: Multi-Robot Systems, Mechatronics Systems, Soft Robotics
Abstract: In this paper, we present a distributed cascade force control system (DCFC) for multiple robots with the aim of pushing a rigid object towards a desired moving target without inter-robot communication. These mobile robots are equipped with 360-degree vision-based soft tactile sensors used to determine contact location and resultant impact force. By investigating the dynamics of rigid objects moving on a flat surface, we propose a distributed cascade control. The inner control loop incorporates contact force and positioning, ensuring the robots' pushing contact and applying the desired force to the object. The outer control loop coordinates the robots to push the object in a desired direction without inter-robot communication, regardless of unknown object mass and friction uncertainty. The stability and convergence of the control system are verified using Lyapunov stability theory. We also conducted simulation and real-world experiments to validate the performance of the proposed control method, and the experimental results showcase the successful coordination of multiple robots in pushing an object towards a moving target.
|
|
17:00-17:15, Paper TueCK1.6 | |
Autonomous Driving of Personal Mobility by Imitation Learning from Small and Noisy Dataset |
|
Kobayashi, Taisuke | National Institute of Informatics |
Enomoto, Takahito | Nara Institute of Science and Technology |
Keywords: Machine Learning, Autonomous Vehicle Navigation, Human Factors and Human-in-the-Loop
Abstract: The concept of personal mobility is gaining popularity, and autonomous driving specialized for individual drivers is expected as its next step. However, it is difficult to collect a large driving dataset from an individual driver of personal mobility. In addition, when the driver is not familiar with operating the personal mobility, the dataset will contain non-optimal data as noise. This study therefore focuses on an autonomous driving method for personal mobility with such a small and noisy personal dataset. To exclude noise while maximizing the use of the normal data, we introduce a new loss function based on Tsallis statistics that weights gradients depending on the original loss function, allowing noise to be excluded during optimization. The experimental results showed that while conventional autonomous driving failed to drive, the proposed method learned robustly against the noise and drove successfully.
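The Tsallis-statistics weighting can be illustrated with the q-exponential, which decays polynomially and thus suppresses high-loss (likely noisy) samples. The sketch below is an assumption about the general mechanism, not the paper's exact loss function; `weighted_loss` and the choice q = 1.5 are illustrative.

```python
import numpy as np

def q_exp(x, q):
    """Tsallis q-exponential: exp_q(x) = max(0, 1 + (1-q)x)^(1/(1-q))."""
    base = np.maximum(1.0 + (1.0 - q) * x, 0.0)
    return base ** (1.0 / (1.0 - q))

def weighted_loss(per_sample_losses, q=1.5):
    """Down-weight high-loss samples with a q-exponential of the loss.

    Samples with a large per-sample loss (plausibly noisy, non-optimal
    demonstrations) receive a polynomially smaller weight, so they
    contribute little to the aggregated loss and hence to the gradient.
    """
    losses = np.asarray(per_sample_losses, dtype=float)
    w = q_exp(-losses, q)          # weight shrinks as the loss grows
    return float(np.sum(w * losses) / np.sum(w))
```

With q = 1.5 the weight of a sample with loss L is (1 + L/2)^(-2): normal samples keep a weight near 1, while outlier demonstrations are effectively excluded from the optimization, matching the abstract's description.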
|
|
TueCK2 |
Event Hall 2 |
Soft Robotics for System Integration 3 |
In-person Special Session |
Chair: Wang, Zhongkui | Ritsumeikan University |
Co-Chair: Nabae, Hiroyuki | Tokyo Institute of Technology |
Organizer: Maeda, Shingo | Tokyo Institute of Technology |
Organizer: Hirai, Shinichi | Ritsumeikan Univ |
Organizer: Suzumori, Koichi | Tokyo Institute of Technology |
Organizer: Nabae, Hiroyuki | Tokyo Institute of Technology |
Organizer: Wiranata, Ardi | University of Gadjah Mada |
|
15:45-16:00, Paper TueCK2.1 | |
A Synthetic Dataset for Robotic Food Handling System (I) |
|
Xue, Yitong | Ritsumeikan University
Qiu, Zhe | Ritsumeikan University |
Zhang, Huayan | Shenzhen Institute of Artificial Intelligence and Robotics for S |
Wang, Zhongkui | Ritsumeikan University |
Hirai, Shinichi | Ritsumeikan Univ |
Keywords: Automation Systems, Robotic hands and grasping, Machine Learning
Abstract: This paper aims to address the issue of labor shortage in the food industry by constructing an automatic food sorting system to replace manual labor and alleviate the workload of employees. Currently, deep learning is widely applied in the food industry. However, the process of creating a self-made dataset by capturing and annotating images consumes significant resources. To overcome this challenge, this study adopts a method of automatically generating the dataset, specifically instance segmentation data, to train the deep learning model for visual prediction of overlapped food items. Additionally, a soft robotic end-effector is used for food handling to prevent food items from being damaged during the sorting process. With the proposed food sorting system, overlapped food items were accurately predicted using a limited number of food models and were successfully sorted in the food handling tasks.
|
|
16:00-16:15, Paper TueCK2.2 | |
Development of a Flexible Multi-Degrees-Of-Freedom Robot for Plant Pollination |
|
Masuda, Naoya | Toyohashi Univ. of Tech |
Khalil, Mohamed M. | Toyohashi Univ. of Tech |
Toda, Seitaro | Toyohashi Univ. of Tech |
Takayama, Kotaro | Toyohashi Univ. of Tech |
Kanada, Ayato | Kyushu Univ |
Mashimo, Tomoaki | Okayama Univ |
Keywords: Soft Robotics, Systems for Field Applications, Mechanism Design
Abstract: Pollination is an important factor in crop growth, but agricultural fields are currently suffering from a lack of natural pollinators due to a variety of factors. Recently, artificial pollination has been introduced to help solve this severe problem. Robotic pollinators not only can aid farmers by providing more cost-effective and stable methods for pollinating plants but also benefit crop production in environments not suitable for natural pollinators, such as greenhouses. Robotic pollination requires precision and autonomy, but few systems have addressed both aspects in practice. In this paper, a flexible multi-degrees-of-freedom robot is presented that is capable of precisely pollinating flowers. The robot imitates the effect of wind blowing as a pollination technique. Based on the cultivation constraints, the robot adopts a collision-free motion design to move the end-effector to the blowing position without hitting the crop or the greenhouse structure. The proposed robot can navigate to and pollinate flowers at any orientation when tested with artificial flowers in a simulated pollination experiment.
|
|
16:15-16:30, Paper TueCK2.3 | |
Temperature-Gradient-Based Partial Activation of Wet SMA Actuators for Displacement Output Linearity |
|
Osorio Salazar, Andres | Tokyo Institute of Technology |
Sugahara, Yusuke | Tokyo Institute of Technology |
Keywords: Soft Robotics, Mechatronics Systems, Control Theory and Technology
Abstract: Shape Memory Alloy (SMA) actuators have output non-linearities, posing challenges for control in robotic applications. As an alternative to model- and controller-based compensation, this paper presents a novel physical compensation method whose objective is to enhance the linearity of the SMA output so that conventional control algorithms can be used. This method applies a temperature gradient to activate and deactivate specific longitudinal sections of the SMA. Adjusting the position of the gradient's inflection point allows control over the active/inactive length ratio, which is ideally linearly tied to the total actuator output. By using this activation technique, the output linearity of the SMA can be improved. An actuator based on this concept was built and tested. The experimental results showed improved open-loop output linearity, time constant, and dead time compared to typical temperature-controlled wet SMA activation, which allowed us to combine the approach with a PID controller and displacement feedback to achieve a resolution of ~2.6%, comparable to a typical pneumatic cylinder position control system. The presented concept enables position control of SMA actuators without characterization experiments or temperature feedback and simplifies their control strategies.
|
|
16:30-16:45, Paper TueCK2.4 | |
Wearable Device to Inhibit Wrist Dorsiflexion for Improving Movement Form in Table Tennis Backhand |
|
Shirota, Kazuki | Kyushu University |
Kashiwagi, Akihiko | Panasonic Connect |
Kiguchi, Kazuo | Kyushu University |
Nishikawa, Satoshi | Kyushu University |
Keywords: Soft Robotics, Human-Robot Cooperation/Collaboration, Human-Robot/System Interaction
Abstract: Proper movement form is crucial in sports. However, novices often struggle to achieve it without proper guidance. While current engineering research has introduced devices to facilitate necessary movements, our research explores the potential of a device that eliminates unnecessary movement. We developed a wearable device to restrict wrist dorsiflexion during table tennis backhand motion. In this study, experiments were conducted to investigate the effect of the device and compare the teaching of movements by the device with verbal instruction. Novice table tennis players were divided into three groups for our experiments: the experimental group used the device, the oral group received verbal instructions to minimize wrist movement, and the control group received neither device nor verbal guidance. Each group consisted of four participants. Comparative analysis among the groups revealed that three out of four participants in the experimental group tended to suppress their wrist motion and utilize their elbows while using the device. Furthermore, upon removing the device, these three participants also tended to suppress wrist motion and utilize their elbows, indicating enhanced movement form. In contrast, the oral group demonstrated suppressed wrist movement during instruction, yet tended to underutilize elbow movement. Based on these findings, it can be concluded that the use of our device contributed to a proper movement form.
|
|
16:45-17:00, Paper TueCK2.5 | |
An Empirical Model of Soft Bellows Actuator (I) |
|
Zhang, Shengyang | Ritsumeikan University |
Mori, Yoshiki | Ritsumeikan University
Qiu, Zhe | Ritsumeikan University |
Wang, Zhongkui | Ritsumeikan University |
Hirai, Shinichi | Ritsumeikan Univ |
Keywords: Soft Robotics, Mechanism Design, Intelligent and Flexible Manufacturing
Abstract: Soft actuators are used for automation in the food industry and in medical applications due to their compliance and flexibility. However, modeling such actuators has been a challenging task owing to their softness and continuous nature. In this study, we focus on the soft bellows actuator and propose a simple empirical model for estimating its output force. The model includes parameters of geometry and material, which can be used to facilitate actuator design and control. To validate the proposed empirical model, we first conducted finite element (FE) simulations of the bellows actuator with different design parameters and then performed experiments on an actual bellows actuator fabricated using a 3D printer and TPU material. Both simulation and experimental results show that the proposed empirical model can accurately estimate the output force of the soft bellows actuator. The model is expected to effectively facilitate the design and control of bellows actuators.
|
|
TueCM1 |
Meeting room 1 |
Machine Learning |
In-person Regular Session |
Chair: Do, Ton | Nazarbayev University |
Co-Chair: Tasaki, Tsuyoshi | Meijo University |
|
15:45-16:00, Paper TueCM1.1 | |
Autonomous Driving from Diverse Demonstrations with Implicit Selection of Optimal Mode |
|
Kobayashi, Taisuke | National Institute of Informatics |
Takeda, Yusuke | Nara Institute of Science and Technology |
Keywords: Machine Learning, Autonomous Vehicle Navigation, Human Factors and Human-in-the-Loop
Abstract: Imitation learning has been attracting attention as one of the technologies for autonomous driving. Among its methods, behavioral cloning (BC) can learn the optimal control policy offline from expert demonstrations prepared in advance, thus achieving autonomous driving without risky trial-and-error in a real environment. However, one drawback of BC is that when expert demonstrations are multi-modally distributed, it acquires average behaviors without capturing any of the modes. This can lead to erroneous driving in situations such as detour routes, where multiple optimal behaviors may be considered, and cause risks such as collisions. In this study, we propose a novel BC that can implicitly select the most appropriate mode among multimodal demonstrations. The proposed method first learns an inverse dynamics model that infers the action that caused a state transition. By transferring its skill to the policy with priority given to the most accurate mode, imitating only the better detour route is enabled while still allowing offline learning. In the experiments, the proposed method succeeds in stable autonomous driving even when one of the detour demonstrations is noisy.
|
|
16:00-16:15, Paper TueCM1.2 | |
Zero-Shot Pose Estimation Using Image Translation Maintaining Object Pose |
|
Fujita, Kohei | Meijo University |
Tasaki, Tsuyoshi | Meijo University |
Keywords: Machine Learning, Vision Systems, Multi-Modal Perception
Abstract: An object pose estimation neural network (NN) using RGBD images can estimate the pose of known objects in the training data with high accuracy. However, for unknown objects that are not in the training data, even if their shapes are similar, pose estimation becomes difficult because they appear different. Therefore, we create RGB images of known objects from the shape data of unknown objects and improve pose estimation accuracy. Creating RGB images from the shape data of unknown objects has problems; for example, for a simple shape such as a rectangular box, the created image can have the top and bottom reversed. Therefore, we have developed a new method in which the coarse pose of the unknown object is estimated from the similarities between the appearances of the unknown and known objects, where the similarity is defined by the NN itself without using any unknown object data. Using the coarse pose and the shape data of the unknown object as input, our image translation NN creates an RGB image of a known object while maintaining the pose of the unknown object. In the zero-shot setting, where not a single sample of the unknown object was used in training, the percentage of pose estimates within 30 deg of error was 66.6% without image translation and 75.7% with image translation, demonstrating a 9.1-point improvement.
|
|
16:15-16:30, Paper TueCM1.3 | |
Simplifying Hyperparameter Derivation for Integration Neural Networks Using Information Criterion |
|
Iwata, Yoshiharu | Grad. School of Eng., Osaka Univ |
Wakamatsu, Hidefumi | Grad. School of Eng., Osaka Univ |
Keywords: Machine Learning
Abstract: When an optimal design is performed using simulation, sufficient optimization cannot be achieved if the time required for simulation is too long. On the other hand, simulation with reduced accuracy cannot guarantee the accuracy of the optimal solution. To address this conflict between accuracy and time, attempts have been made to construct highly accurate approximators using machine learning. However, a large amount of training data is currently required to achieve high accuracy. In contrast, a high-precision approximator requiring only a small amount of data has been proposed using integration neural networks (INN, INN2), which fuse deductive and inductive knowledge. The optimization of the network structure, a hyperparameter of INN and INN2, has conventionally been determined by evaluating accuracy on evaluation data prepared separately from the training data. However, creating this evaluation data also takes much time. Therefore, this study focused on the information criterion, which statistically evaluates the balance between the diversity of the input data and the accuracy of the model, and examined structure optimization based on this criterion using only the training data. The results showed that INN and INN2 behave differently: for INN, the structure chosen using the information criterion agreed with the structure chosen using evaluation data, whereas INN2 was found to still require evaluation data.
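One common information criterion of the kind the abstract describes is AIC, which trades residual fit against parameter count using only the training data. The formula and the example values below are illustrative assumptions; the abstract does not state which criterion the authors use.

```python
import math

def aic(n_samples, residual_sum_squares, n_params):
    """Akaike information criterion for a Gaussian-error regression model.

    Lower is better: the first term rewards a good fit to the training
    data, while the 2k term penalizes larger network structures, so no
    separate evaluation dataset is needed for the comparison.
    """
    rss = max(residual_sum_squares, 1e-12)  # guard against log(0)
    return n_samples * math.log(rss / n_samples) + 2 * n_params

# A slightly worse fit with far fewer parameters can win the comparison.
small_model = aic(n_samples=50, residual_sum_squares=1.2, n_params=10)
large_model = aic(n_samples=50, residual_sum_squares=1.0, n_params=60)
assert small_model < large_model
```

Sweeping candidate network structures and keeping the one with the lowest criterion value is the selection loop the abstract contrasts with evaluation-data-based tuning.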
|
|
16:30-16:45, Paper TueCM1.4 | |
Odometry-Less Indoor Dynamic Object Detection and Localization with Spatial Anticipation |
|
Shen, Zhengcheng | TU Berlin |
Gao, Yi | TU Berlin |
Kästner, Linh | T-Mobile, TU Berlin |
Lambrecht, Jens | Technische Universität Berlin |
Keywords: Machine Learning, Vision Systems, Sensor Fusion
Abstract: The continually evolving image segmentation methods in computer vision can further broaden the cognitive abilities of robots. As humans, we do not judge whether an object is moving by accurate speed estimation but through semantic information based on visual input. With existing video segmentation methods, the robot can pay more attention to foreground objects, which are more important for path planning and navigation in most cases. The proposed method deploys a first-person-view segmentation method with an RGBD sensor and obtains location information and a locally egocentric map. Errors in the segmentation methods cause unexpected objects in the egocentric map, and self-occlusion influences location and distance estimation. To further improve the method, a spatial anticipation unit is embedded into the framework. We improve localization accuracy (RMSE) by 5.6%, distance estimation (MAE) by 12.9% with trained objects, and recall by 28.9% for trained objects. The method also shows a margin on an untrained-object dataset for similar objects.
|
|
16:45-17:00, Paper TueCM1.5 | |
Real Time Sound Source Localization Using Von-Mises ResNet |
|
Bozkurtlar, Mert | Tokyo Institute of Technology, Istanbul Technical University |
Yen, Benjamin | Tokyo Institute of Technology |
Itoyama, Katsutoshi | Tokyo Institute of Technology |
Nakadai, Kazuhiro | Tokyo Institute of Technology |
Keywords: Machine Learning, Human-Robot Cooperation/Collaboration
Abstract: This paper addresses the task of learning periodic information using deep neural networks to achieve real-time, environment-independent sound source localization. Previous papers showed that phase data is the most significant cue in sound source localization tasks, and the proposed vM-B DNN was validated to handle such periodic information using synthesized data. However, they have not shown its effectiveness and robustness in realistic use cases. This paper introduces a more complex model based on residual networks and adapts the vM-B activation function for convolutional layers, targeting use cases that require real-time predictions in dynamically changing environments.
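Why periodic phase data needs special handling can be shown with a minimal sketch: mapping an angle onto the unit circle makes wrapped-around phases close in feature space. The von Mises-based activations in the paper are more elaborate; this only illustrates the underlying wrap-around issue, and the helper names are invented.

```python
import math

def phase_features(phase):
    """Encode a phase angle as a point on the unit circle.

    Angles that wrap around (e.g. -pi and pi) map to nearly identical
    features, so a network consuming them never sees a discontinuity.
    """
    return (math.cos(phase), math.sin(phase))

def feature_distance(p1, p2):
    """Euclidean distance between the circular encodings of two phases."""
    return math.dist(phase_features(p1), phase_features(p2))

# Raw phases -pi and pi differ by 2*pi, yet their features coincide.
assert feature_distance(-math.pi, math.pi) < 1e-9
# A genuinely different phase is far away in feature space.
assert feature_distance(0.0, math.pi) > 1.0
```

A plain regression on raw phase values would penalize a prediction of -pi against a target of pi as a maximal error; the circular encoding removes that artifact, which is the property the vM-B formulation exploits.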
|
|
17:00-17:15, Paper TueCM1.6 | |
Instance Segmentation-Based Markerless Tracking of Fencing Sword Tips |
|
Sawahata, Takehiro | Chuo University |
Moro, Alessandro | Ritecs Inc |
Pathak, Sarthak | Chuo University |
Umeda, Kazunori | Chuo University |
Keywords: Machine Learning, Vision Systems
Abstract: This study addresses the challenge of detecting the tip of a fencing sword. The swift motion and diminutive size of the fencing sword tip not only pose difficulties in detection but also occasionally lead to its omission from video recordings. Moreover, conventional detection approaches such as affixing markers to the sword tip are unsuitable in sports contexts as they could encumber the athletes. In light of these considerations, our research has devised a system that exclusively employs monocular camera images to consistently gather information about the sword tip. Even in cases where the tip is not captured, we propose a method for predicting its position based on historical data and subsequent interpolation. Specifically, the entire sword is recognized using instance segmentation, and the tip of the sword is identified with skeletal point information. In instances where the tip eludes detection, its position is projected using preceding information and skeletal wrist point data to ensure uninterrupted tracking. Our proposed method's efficacy was confirmed through various experiments conducted under conditions mirroring actual match scenarios. These experiments demonstrate the effectiveness of our approach.
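The history-based prediction step can be sketched as a constant-velocity extrapolation of the last two detections. The paper additionally uses skeletal wrist-point data; this simplified `predict_tip` helper is a hypothetical illustration of the history-only part.

```python
def predict_tip(history):
    """Constant-velocity extrapolation of the sword-tip position.

    When detection fails on the current frame, project the last observed
    motion one frame forward from the two most recent positions.
    """
    (x1, y1), (x2, y2) = history[-2], history[-1]
    return (2 * x2 - x1, 2 * y2 - y1)

# Pixel positions of the tip over the last three detected frames.
track = [(100, 50), (110, 48), (120, 46)]
# Detection dropped this frame: fill the gap by extrapolation.
predicted = predict_tip(track)
```

Once the tip is re-detected, the gap between the last detection and the new one can be filled by interpolation, which is the "subsequent interpolation" the abstract refers to.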
|
|
17:15-17:30, Paper TueCM1.7 | |
Generating Long-Horizon Task Actions by Leveraging Predictions of Environmental States |
|
Iino, Hiroto | Waseda University |
Kase, Kei | Waseda University |
Nakajo, Ryoichi | National Institute of Advanced Industrial Science and Technology |
Chiba, Naoya | Tohoku University |
Mori, Hiroki | Waseda University |
Ogata, Tetsuya | Waseda University |
Keywords: Machine Learning, Multi-Modal Perception, Motion and Path Planning
Abstract: To expand the usage of robots in daily life, it is crucial for robots to learn compound, long-horizon tasks. To learn long-horizon tasks, the tasks are generally segmented into shorter subtasks. The segmentation of a task can be done autonomously or manually, and is often based on changes in the environment such as the movement of object positions. The environmental state of each subtask contains valuable information that may be used as an auxiliary input for the robot learning model. However, leveraging environmental states for robot learning has the following two difficulties: maintaining and remembering environmental states that are not observable, and predicting accurate environmental states. To address these difficulties, we propose a method that predicts environmental states along with the confidence of each prediction. The proposed method improves learning performance in long-horizon tasks by maintaining long-term memory of the environmental states through its predictions and by considering their accuracy. In this study, we evaluate our method by executing a long-horizon manipulation task that requires memory of past environmental states to pick and place multiple cubes into appropriate boxes. The six-degree-of-freedom robot was able to correctly remember past behavior and place the cubes into the appropriate boxes.
|
|
TueCM2 |
Meeting room 2 |
Vision Integrated System
In-person Regular Session |
Chair: Caron, Guillaume | CNRS |
Co-Chair: Bunkley, Steven | US Army Corps of Engineers |
|
15:45-16:00, Paper TueCM2.1 | |
Integration of Vision-Based Object Detection and Grasping for Articulated Manipulator in Lunar Conditions |
|
Boucher, Camille | IMT Atlantique |
Diaz Huenupan, Gustavo Hernan | Tohoku University |
Santra, Shreya | Tohoku University |
Uno, Kentaro | Tohoku University |
Yoshida, Kazuya | Tohoku University |
Keywords: Vision Systems, Robotic hands and grasping, Systems for Field Applications
Abstract: Robotic manipulators will play a crucial role in performing various tasks on the Moon, such as resource extraction and the construction and assembly of human outposts. To enable autonomous lunar robotic applications, robust vision-based frameworks must be integrated efficiently. However, this encounters numerous challenges, for instance uneven terrain configurations and extreme lighting conditions on the Moon. This paper presents a versatile task pipeline that incorporates object detection, instance segmentation, and grasp detection, the results of which can be applied to diverse manipulator applications. We demonstrate the successful execution of two experiments by a 7-DoF manipulator. The first experiment involves stacking differently sized rocks on a non-flat surface in challenging lighting conditions with an impressive success rate of 92%. In the second experiment, we assemble 3D-printed robotic components, paving the way to initiate more complex tasks in the future.
|
|
16:00-16:15, Paper TueCM2.2 | |
Simultaneous Object Tracking and Shape Reproduction Using LiDAR Point Cloud Data |
|
Narita, Ryota | Tokyo City University |
Sekiguchi, Kazuma | Tokyo City University |
Nonaka, Kenichiro | Tokyo City University |
Keywords: Vision Systems, Autonomous Vehicle Navigation, Motion and Path Planning
Abstract: In this study, we use light detection and ranging (LiDAR) to track moving objects and estimate their positions, velocities, and shapes. Both the position and shape of a target moving object are estimated based on the time-series point cloud for the object using simultaneous localization and mapping (SLAM), which estimates the self-position and the surrounding environment. After point cloud data for dynamic objects (e.g., people and vehicles) are extracted, the center of gravity of each dynamic object point cloud is tracked using a joint probabilistic data association filter (JPDAF) to obtain the point cloud for the target. It is shown that it is possible to estimate the position, velocity, and shape of moving objects based on LiDAR data.
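The JPDAF associates detections with tracks probabilistically; a greatly simplified, hypothetical stand-in is greedy nearest-neighbor association with a gating distance, sketched below to show what the association step accomplishes for the dynamic-object centroids.

```python
import math

def associate(track_positions, detections, gate=5.0):
    """Greedy nearest-neighbor data association (a simplified stand-in
    for the JPDAF used in the paper).

    Each track is matched to the closest unclaimed detection within the
    gating distance; tracks with no candidate in range get None.
    """
    matches, used = [], set()
    for tx, ty in track_positions:
        best, best_d = None, gate
        for i, (dx, dy) in enumerate(detections):
            d = math.hypot(dx - tx, dy - ty)
            if i not in used and d < best_d:
                best, best_d = i, d
        if best is not None:
            used.add(best)
        matches.append(best)
    return matches

# Two tracked centroids, three new detections (one is clutter).
tracks = [(0.0, 0.0), (10.0, 10.0)]
dets = [(10.5, 9.5), (0.2, -0.1), (30.0, 30.0)]
```

The full JPDAF instead weights every gated detection by its association probability before updating each track, which is what makes it robust when detections from nearby objects overlap.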
|
|
16:15-16:30, Paper TueCM2.3 | |
Strawberry Packaging Support System Based on Image Recognition |
|
Esaki, Yuta | Saga University |
Fukuda, Osamu | Saga University |
Yeoh, Wen Liang | Saga University |
Okumura, Hiroshi | Saga University |
Yamaguchi, Nobuhiko | Saga University |
Keywords: Vision Systems, Machine Learning, Decision Making Systems
Abstract: Strawberry packaging requires strict weight control and precise work for each pack. Owing to the fragility of the fruit, the quality of strawberries deteriorates when handled often; the mass has to be measured with minimal human contact. Hence, packaging work tends to increase the burden on skilled workers who can estimate weight by visual inspection. Moreover, the hiring of workers is problematic. In this study, by leveraging camera images, we developed a system based on image recognition that estimates strawberry mass and ripeness. This visualized information would enable inexperienced workers to perform appropriate tasks. The system can thus determine the optimal solution for appropriate strawberry packaging, enabling informed packaging decisions.
|
|
16:30-16:45, Paper TueCM2.4 | |
Proposal of Voxel Semantics Estimation System for Automated Snowplow Operation |
|
Kubo, Kohei | Hokkaido University |
Emaru, Takanori | Hokkaido University |
Keywords: Vision Systems, Multi-Modal Perception, Autonomous Vehicle Navigation
Abstract: A shortage of snow removal workers is a problem in Japan's heavy snowfall regions. To solve this problem, snowplow operation needs to be automated. For automation, it is necessary to accurately recognize the surrounding environment, such as snow piles, buildings, and roads, in a three-dimensional space. Therefore, in this study, we propose an accurate environment recognition system using LiDAR and a prior map (Semantic Map). The system models the 3D space with Voxels and estimates the semantics of each Voxel by probabilistically integrating the Semantic Map and the Point Cloud Semantic Segmentation, so that the system can be easily integrated with applications such as path planning and snow volume estimation. Appropriate integration of the two types of information can compensate for the recognition errors of each method, resulting in highly accurate environmental recognition. Furthermore, although two semantics estimation methods were integrated in this study, the proposed system can also handle three or more methods, making it extensible. Through experiments in snowy environments, we thoroughly analyzed the proposed method. As a result, we found that the building information in the Semantic Map can compensate for the misrecognition of snow piles in the Point Cloud Semantic Segmentation. The results of the quantitative evaluation showed that the proposed method can obtain high IoUs: 73.8% for snow piles, 75.2% for roads, and 83.6% for buildings, exceeding the accuracy of Semantic Segmentation alone in all classes.
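The probabilistic per-Voxel integration described above can be illustrated with a naive Bayes-style fusion of two class distributions. This is only a sketch of the general idea, not the authors' formulation; the class ordering (building, road, snow pile) is an assumption.

```python
import numpy as np

def fuse_voxel_semantics(p_map, p_seg, eps=1e-9):
    """Fuse a Semantic Map prior and a point-cloud segmentation output
    for one voxel by element-wise product and renormalisation."""
    p = np.asarray(p_map, float) * np.asarray(p_seg, float) + eps
    return p / p.sum(axis=-1, keepdims=True)
```

With a map prior strongly favouring "building" and a segmentation output favouring "snow pile", the fused distribution can still recover the building class, mirroring the compensation effect reported in the abstract.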
|
|
16:45-17:00, Paper TueCM2.5 | |
3D Measurement Using Line Laser and Stereo Camera with Background Subtraction |
|
Nonaka, Shunya | Chuo University |
Pathak, Sarthak | Chuo University |
Umeda, Kazunori | Chuo University |
Keywords: Vision Systems
Abstract: In this paper, we propose a method to improve the accuracy of three-dimensional (3D) measurement with a stereo camera by marking the measured object with a line laser. Stereo cameras are often used for 3D measurement. However, the search for corresponding points in the left and right images depends on the amount of texture on the measured object. In addition, devices combining a monocular camera with a laser or an optical projector are often proposed for image-based 3D measurement. However, these devices require precise calibration. Therefore, we previously developed a measurement system that combines a stereo camera and a line laser. This system improved the accuracy of 3D measurement with a stereo camera by marking arbitrary points with a line laser and measuring those points, independent of the texture of the object to be measured. In addition, the line laser could be moved freely, eliminating the need for calibration against the stereo camera. In accuracy evaluation experiments, measurements on the order of millimeters were achieved. However, that method had issues with robustness and processing time. In this paper, we therefore propose a new method using background subtraction that solves those issues.
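A minimal sketch of the background-subtraction step, assuming one frame is captured with the laser on and one with it off; the paper's actual pipeline and threshold values are not specified in the abstract, and all names are hypothetical.

```python
import numpy as np

def laser_mask(frame_on, frame_off, thresh=30):
    """Isolate the projected line-laser pixels by subtracting a frame
    captured with the laser off from one captured with it on."""
    diff = frame_on.astype(np.int16) - frame_off.astype(np.int16)
    return diff > thresh

def line_centroids(mask):
    """One candidate laser point per image row: the mean column of the
    masked pixels in that row (a simple sub-pixel estimate)."""
    rows, cols = np.nonzero(mask)
    return {int(r): float(cols[rows == r].mean()) for r in np.unique(rows)}
```

The per-row centroids would then be matched between the left and right images to triangulate 3D points.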
|
|
17:00-17:15, Paper TueCM2.6 | |
Scanning and Affordance Segmentation of Glass and Plastic Bottles |
|
Erich, Floris Marc Arden | National Institute of Advanced Industrial Science and Technology |
Ando, Noriaki | National Institute of Advanced Industrial Science and Technology |
Yoshiyasu, Yusuke | CNRS-AIST JRL |
Keywords: Vision Systems
Abstract: Glass objects and objects made from translucent plastics are difficult to scan using ordinary 3D scanning techniques because they violate the Lambertian assumption that most 3D scanning technology relies on. We first present a method using Neural Radiance Fields to create meshes of transparent and translucent objects. After scanning an object, it is useful to identify the parts that can be manipulated by a robot. For example, to grasp a bottle we should identify a grasping point on its main body, but to open the bottle we need to identify a grasping point on its cap. In this paper, we present scanning results for translucent bottles obtained with Neural Scanning and discuss an algorithm for segmenting the bottles into task-specific parts.
|
|
17:15-17:30, Paper TueCM2.7 | |
A Study on Learned Feature Maps Toward Direct Visual Servoing |
|
Quaccia, Matthieu | CNRS-AIST JRL |
André, Antoine N. | CNRS-AIST JRL |
Yoshiyasu, Yusuke | AIST AIRC |
Caron, Guillaume | CNRS-AIST JRL, Université De Picardie Jules Verne, MIS Laborator |
Keywords: Vision Systems
Abstract: Direct Visual Servoing (DVS) is a technique used in robotics and computer vision where visual information, typically the brightness of camera pixels, is used directly to control the motion of a robot. DVS is known for its ability to achieve accurate positioning, thanks to the redundancy of the information, without relying on geometric features. In this paper, we introduce a novel approach in which pixel brightness is replaced with learned feature maps as the visual information for the servoing loop. The aim of this paper is to present a procedure to extract, transform, and integrate deep neural network feature maps so as to replace brightness in a DVS control loop.
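Whatever the visual measurement (pixel brightness or, as here, learned feature maps), a direct visual servoing loop typically applies the classical control law v = -λ L⁺ e. A minimal sketch, with the interaction matrix L assumed to be given and the function names hypothetical:

```python
import numpy as np

def feature_error(f_cur, f_des):
    """Stacked error between current and desired feature maps."""
    return (np.asarray(f_cur) - np.asarray(f_des)).ravel()

def control_law(L, e, lam=0.5):
    """Classical visual-servoing velocity command: v = -lambda * pinv(L) @ e,
    where L is the interaction matrix relating feature motion to camera
    velocity and e is the stacked feature error."""
    return -lam * np.linalg.pinv(L) @ e
```

In the paper's setting, e would be built from deep-network feature maps of the current and desired images rather than raw intensities.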
|
|
TueCM3 |
Meeting room 3 |
Haptics Interface and Human Modeling |
In-person Regular Session |
Chair: Cisneros Limon, Rafael | National Institute of Advanced Industrial Science and Technology (AIST) |
Co-Chair: Kappassov, Zhanat | Nazarbayev University |
|
15:45-16:00, Paper TueCM3.1 | |
Preliminary Study on the Feasibility of Using Permanent Magnetic Elastomer As a Stretchable Skin Sensor |
|
Abhyankar, Devesh | Waseda University |
Wang, Yushi | Waseda University |
Kamezaki, Mitsuhiro | The University of Tokyo |
Wang, Qichen | WASEDA University |
Sugano, Shigeki | Waseda University |
Keywords: Haptics and tactile sensors, Soft Robotics, Human-Robot Cooperation/Collaboration
Abstract: Sensor technology will greatly accelerate the development of automation, especially in robotics, where safe interaction with both humans and the environment is essential. In previous work, permanent magnet elastomers (PMEs), produced by mixing neodymium powder into a silicone base and subsequently magnetizing the material, were integrated into skin sensors, and force detection experiments based on magnetic field changes were performed. This study investigates the changes in the magnetic fields of PMEs made from silicone bases of various softness when stretched, and performs cyclic tests to analyze the elongation of each PME sample. The experimental results show that a linear relationship exists between the magnetic field of a PME and the stretched distance, evaluated using a linear fit with a slope in the range of -0.006 to -0.057 mT/mm. Moreover, the findings demonstrate that PMEs can measure pressing forces even while being stretched. The elongation changes by 4% for the sample made from Dragon Skin™ 10 MEDIUM (D10) and 1.4% for the sample made from Ecoflex™ 00-50 (E50). The results confirm the feasibility of using PMEs as stretchable sensors.
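The reported linear field-versus-stretch relationship can be reproduced in analysis code with an ordinary least-squares fit. A sketch with made-up sample data; the slope used below (-0.02 mT/mm) is simply chosen inside the paper's reported range of -0.006 to -0.057 mT/mm.

```python
import numpy as np

def fit_field_vs_stretch(stretch_mm, field_mT):
    """Least-squares slope (mT/mm) and intercept (mT) of the measured
    magnetic field against stretched distance."""
    slope, intercept = np.polyfit(stretch_mm, field_mT, 1)
    return slope, intercept
```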
|
|
16:00-16:15, Paper TueCM3.2 | |
Action-Driven Tactile Object Exploration for Shape Reconstruction Using Silver Nanowire Injected Sensors |
|
Mussin, Tleukhan | Nazarbayev University |
Kassym, Yermakhan | Nazarbayev University |
Kabdyshev, Nurlan | Nazarbayev University |
Kaikanov, Marat | Nazarbayev University |
Kappassov, Zhanat | Nazarbayev University |
Keywords: Haptics and tactile sensors, Robotic hands and grasping, Decision Making Systems
Abstract: We present an action-driven tactile exploration system using flexible touch sensors integrated into the fingertips of a robot hand. These sensors are made from foams injected with silver nanowires that exhibit resistance changes under pressure, enabling tactile perception. A microcontroller-based unit captures the resistance, with communication deployed using the Robot Operating System. Our approach is evaluated in tactile object exploration, where robotic finger movements estimate the coordinates of the points of contact upon collisions with an object. Through repeated grasps, a point cloud of the shape of an object is generated. Distinct finger movements, or affordances, are developed to collect new contact points to discriminate four different shapes: cube, cylinder, pyramid, and sphere. We benchmark our approach on medium and large sets of objects.
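Accumulating contact points into a point cloud and discriminating shapes can be sketched with a simple rotation-invariant feature. This is an illustration only, not the authors' classifier; the feature choice is an assumption.

```python
import numpy as np

def accumulate(point_cloud, contacts):
    """Append newly estimated contact coordinates to the point cloud."""
    return point_cloud + [np.asarray(c, float) for c in contacts]

def shape_feature(points):
    """Coefficient of variation of point distances to the centroid:
    zero for points on a sphere, larger for a cube or pyramid."""
    p = np.asarray(points, float)
    d = np.linalg.norm(p - p.mean(axis=0), axis=1)
    return float(d.std() / d.mean())
```

A real pipeline would gather many such features over repeated grasps before classifying among cube, cylinder, pyramid, and sphere.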
|
|
16:15-16:30, Paper TueCM3.3 | |
Learning to Classify Surface Roughness Using Tactile Force Sensors |
|
Houhou, Younes | CNRS-AIST Joint Robotics Laboratory |
Cisneros Limon, Rafael | National Institute of Advanced Industrial Science and Technology |
Singh, Rohan Pratap | Univerity of Tsukuba, National Institute of Advanced Industrial |
Keywords: Haptics and tactile sensors, Machine Learning, Human-Robot/System Interaction
Abstract: This article explores the use of Multi-Layer Perceptron (MLP) and Long Short-Term Memory (LSTM) neural networks to classify force sequences for the purpose of distinguishing surface roughness levels. The force data used for this classification is extracted from simulated interactions on the MuJoCo platform. This study presents a methodology, intended for later use in haptic feedback, involving the classification of force profiles to distinguish three distinct surface textures. The article also demonstrates the potential of employing MLP and LSTM networks to enhance the accuracy of surface roughness identification in haptic interfaces, thereby fostering advancements in human-robot interaction. The outcomes presented here showcase the results achieved by our neural networks on MuJoCo data. The overarching goal of this surface roughness detection is to offer an improved haptic system for our robot avatar.
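A minimal NumPy sketch of the MLP branch: a forward pass with a softmax output over three roughness classes. The layer sizes and initialization are assumptions; the paper's networks would be trained on MuJoCo force data rather than used with random weights.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(n_in, n_hidden, n_out):
    """Random small-weight initialization for a one-hidden-layer MLP."""
    return {
        "W1": rng.normal(0, 0.1, (n_in, n_hidden)), "b1": np.zeros(n_hidden),
        "W2": rng.normal(0, 0.1, (n_hidden, n_out)), "b2": np.zeros(n_out),
    }

def forward(params, x):
    """Forward pass: tanh hidden layer, softmax over roughness classes."""
    h = np.tanh(x @ params["W1"] + params["b1"])
    logits = h @ params["W2"] + params["b2"]
    z = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return z / z.sum(axis=-1, keepdims=True)
```

The LSTM variant would consume the force sequence step by step instead of flattening it into a fixed-length input vector.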
|
|
16:30-16:45, Paper TueCM3.4 | |
Tactile Presentation of Orchestral Conductor's Motion Trajectory |
|
Ueda, Yuto | Keio University |
Withana, Anusha | The University of Sydney |
Sugiura, Yuta | Keio University |
Keywords: Human Interface
Abstract: Visually impaired people experience difficulties participating in orchestral or other musical performances because they cannot see the temporal instructions the conductor conveys through hand and baton movements. In this study, as an early validation of the concept's feasibility, we propose an approach that presents the conductor's movements to the target musicians using an array of vibro-tactile actuators. Specifically, we utilize tactile apparent movement to convey the conductor's motion to visually impaired people. We conducted comparative experiments using the response time between the correct beat timing and the predicted beat timing as the accuracy measure to evaluate the proposed system. The results show that in situations such as a change in the tempo of the music or in the starting time of the performance, the proposed method significantly outperformed existing solutions.
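Tactile apparent movement is commonly driven by staggering actuator onsets along the array. One published timing model, the Tactile Brush algorithm of Israr and Poupyrev, relates the stimulus onset asynchrony (SOA) to stimulus duration as SOA = 0.32·duration + 47.3 (milliseconds); the abstract does not state which timing model the authors use, so the sketch below is only an illustration.

```python
def actuator_onsets(n_actuators, duration_ms):
    """Onset times (ms) producing tactile apparent motion along an
    actuator array, using the Tactile Brush SOA model:
    SOA = 0.32 * duration + 47.3."""
    soa = 0.32 * duration_ms + 47.3
    return [i * soa for i in range(n_actuators)]
```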
|
|
16:45-17:00, Paper TueCM3.5 | |
Converting Tatamis into Touch Sensors by Measuring Capacitance |
|
Sawada, Naoharu | Keio University |
Yamamoto, Takumi | Keio University |
Sugiura, Yuta | Keio University |
Keywords: Human Interface
Abstract: In traditional Japanese-style rooms, tatamis are used as the flooring material. Because people are in continuous contact with the floor, daily activities can be measured through it. In this study, we propose a method of converting tatamis into touch sensors by measuring capacitance to obtain this floor-level activity information. The system detects contact by placing conductive sheets under the surface of the tatami and measuring the capacitance, which changes when a person's body comes into close proximity. To evaluate the system, we conducted a user study (N = 5) on gesture recognition with 12 hand gestures and obtained an average identification accuracy of 93.5% (SD = 2.1%).
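Touch detection from capacitance readings can be sketched as a baseline-relative threshold per conductive sheet; the threshold value and data format below are assumptions, and gesture recognition would sit on top of such per-sheet detections.

```python
def detect_touches(readings, baseline, rel_thresh=0.05):
    """Mark a conductive sheet as touched when its capacitance rises
    more than rel_thresh (fractionally) above its no-contact baseline."""
    return [r > b * (1 + rel_thresh) for r, b in zip(readings, baseline)]
```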
|
|
17:00-17:15, Paper TueCM3.6 | |
The Cognitive Bias-Informed Latent Class Choice Model: A Novel Approach to Predicting Human Behavior |
|
Mitomi, Tatsuya | Fujitsu Limited |
Makihara, Fumiya | Fujitsu Limited |
Segawa, Eigo | Fujitsu Limited |
Keywords: Modeling and Simulating Humans, Intelligent Transportation Systems
Abstract: Economic policies aimed at achieving a sustainable society have been formulated in several countries. For such policies to be effective, individuals' behavior must be guided in line with policymakers' intentions, so policymakers often rely on models designed to predict people's behavior. However, it is difficult for existing models to accurately predict human behavior shaped by cognitive biases and individual differences while maintaining a clear interpretation of the mechanisms behind that behavior. In this study, we propose a new method that predicts actual human behavior while ensuring the interpretability of behavioral choices by integrating knowledge from cumulative prospect theory and latent class choice models into an advanced multinomial logit model. This model achieves better predictive performance than conventional models.
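The cumulative-prospect-theory component can be illustrated with the standard Tversky-Kahneman value function feeding a multinomial logit. The parameter values below (α = 0.88, λ = 2.25) are the commonly cited 1992 estimates, not necessarily the authors', and the latent-class mixing is omitted.

```python
import numpy as np

def cpt_value(x, alpha=0.88, lam=2.25):
    """Prospect-theory value function: concave over gains, with losses
    amplified by the loss-aversion coefficient lam."""
    x = np.asarray(x, float)
    v = np.empty_like(x)
    pos = x >= 0
    v[pos] = x[pos] ** alpha
    v[~pos] = -lam * (-x[~pos]) ** alpha
    return v

def choice_probs(values, beta=1.0):
    """Multinomial-logit choice probabilities over (CPT-transformed) values."""
    z = np.exp(beta * np.asarray(values, float))
    return z / z.sum()
```

In a latent class choice model, each class would carry its own parameters and the final probability would be a class-weighted mixture of such logits.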
|
|
17:15-17:30, Paper TueCM3.7 | |
Stochastic Fluctuation in EEG Evaluated Via Scale Mixture Model for Decoding Emotional Valence |
|
Fukuda, Shunya | Hiroshima University |
Furui, Akira | Hiroshima University |
Machizawa, Maro G. | Hiroshima University |
Tsuji, Toshio | Hiroshima University |
Keywords: Modeling and Simulating Humans, Human Interface, Systems for Service/Assistive Applications
Abstract: Electroencephalogram (EEG) analysis has garnered attention as a method for quantitatively decoding human emotions, and EEG amplitude values in specific frequency bands are typically used for this purpose. However, as brain states can fluctuate rapidly in response to external stimuli, accounting for temporal fluctuations in amplitude could enhance the accuracy of emotion decoding. In this paper, we investigate the relationship between pleasant/unpleasant emotions and fluctuations in EEG amplitude by utilizing a scale mixture model that assumes a hierarchical stochastic structure for EEG variance. This model focuses on the connection between the non-Gaussianity of the EEG amplitude distribution and the stochastic fluctuation of the EEG variance (i.e., amplitude), which can be quantitatively evaluated by introducing a feature value. In the experiments, we used an EEG dataset obtained during the presentation of pleasant and unpleasant images and computed the proposed and conventional features, such as simple variance and approximate entropy values, for comparison. Statistical tests and receiver operating characteristic analyses of the calculated features indicated that the proposed feature, which reflects the stochastic fluctuation of variance, can distinguish between pleasant and unpleasant emotions more accurately than conventional features. These findings suggest that not only the conventional amplitude value but also its fluctuation may be useful in assessing emotional valence.
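The link between a fluctuating variance and non-Gaussianity can be illustrated with sample excess kurtosis, used here as a simple stand-in for the paper's model-based feature: a Gaussian signal with constant variance has excess kurtosis near zero, while a scale mixture (variance changing over time) is heavy-tailed and scores positive.

```python
import numpy as np

def kurtosis_feature(x):
    """Sample excess kurtosis: approximately 0 for Gaussian data with
    constant variance, positive when the variance itself fluctuates."""
    x = np.asarray(x, float)
    z = (x - x.mean()) / x.std()
    return float(np.mean(z ** 4) - 3.0)
```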
|
| |