| |
Last updated on July 1, 2024. This conference program is tentative and subject to change.
Technical Program for Tuesday June 25, 2024
|
TO2A |
Rosenthal |
Human-Robot Interaction I |
Regular |
Chair: Kyung, Ki-Uk | Korea Advanced Institute of Science & Technology (KAIST) |
Co-Chair: Frederiksen, Morten Roed | IT-University of Copenhagen |
|
10:30-10:40, Paper TO2A.1 | |
A Novel Design of Thin Flexible Force Myography Sensor Using Weaved Optical Fiber: A Proof-Of-Concept Study |
|
Chung, Chongyoung | Korea Advanced Institute of Science and Technology (KAIST) |
Mun, Heeju | Korea Advanced Institute of Science and Technology |
Atashzar, S. Farokh | New York University (NYU), US |
Kyung, Ki-Uk | Korea Advanced Institute of Science & Technology (KAIST) |
Keywords: Physical and Cognitive Human-Robot Interaction, Force and Tactile Sensing, Soft Robotics
Abstract: Motion recognition and tracking is one of the crucial features for intuitive and direct human-robot interaction (HRI), prompting researchers to explore various sensor types. This paper introduces a sleeve-type force myography (FMG) system based on a novel, thin, and flexible FMG sensor that uses weaved plastic optical fibers for motion recognition and tracking. The proposed sensor has a compact form factor (15 mm width, 25 mm height, and 2 mm thickness) and high flexibility, making it suitable for embedding in clothing without causing discomfort. Evaluations confirm its high sensitivity and wide force sensing range (>10 N). The force estimation accuracy of the proposed sensor was approximately 99.17% or higher, and its response time of 85 ms ensures effectiveness in real-time applications, emphasizing its potential for applications such as prosthetics and virtual reality (VR) interactions. As a proof of concept for the FMG sensor, elbow flexion angle estimation was performed focusing solely on the biceps muscle, and high-precision flexion angle tracking was achieved with a correlation coefficient of 94.27%. Overall, the proposed FMG sensor presents a promising solution for intuitive and accurate motion recognition in various HRI applications.
|
|
10:40-10:50, Paper TO2A.2 | |
The Effect of Use of Social Robot NAO on Children's Motivation and Emotional States in Special Education |
|
Namlısesli, Deniz | Yeditepe University |
Baş, Hale Nur | Medipol University |
Bostancı, Hilal | Medipol University |
Coşkun, Buket | Yeditepe University |
Erol Barkana, Duygun | Yeditepe University |
Tarakci, Devrim | Medipol University |
Keywords: Social and Socially Assistive Robotics
Abstract: The utilization of social robots in therapeutic and educational settings offers promising advancements, especially for children with special needs. Twenty-one children receiving special education at the Dilbade Special Education and Rehabilitation Center were evaluated under two conditions: sessions led by special education teachers and sessions supported by the social robot NAO. The Pediatric Motivation Scale (PMS) results demonstrate that special education sessions involving the social robot NAO increased the children's motivation compared to traditional sessions. Moreover, significant statistical features were found in the physiological signals, including Blood Volume Pulse (BVP), Electrodermal Activity (EDA), and Skin Temperature (ST). Notably, all significant features from BVP, EDA, and ST increased, indicating that children were excited and felt positive during the sessions involving the social robot NAO. Furthermore, subjective evaluations from children, families, and special education teachers supported the quantitative findings, with a majority expressing enthusiasm for including the social robot NAO in special education.
|
|
10:50-11:00, Paper TO2A.3 | |
Toward Anxiety-Reducing Pocket Robots for Children |
|
Frederiksen, Morten Roed | IT-University of Copenhagen |
Stoy, Kasper | IT University of Copenhagen |
Mataric, Maja | University of Southern California |
Keywords: Physical and Cognitive Human-Robot Interaction, Social and Socially Assistive Robotics, Force and Tactile Sensing
Abstract: A common denominator for most therapy treatments for children who suffer from an anxiety disorder is daily practice routines to learn techniques needed to overcome anxiety. However, applying those techniques while experiencing anxiety can be highly challenging. This paper presents the design, implementation, and pilot study of a tactile hand-held pocket robot “AffectaPocket”, designed to work alongside therapy as a focus object to facilitate coping during an anxiety attack. The robot does not require daily practice to be used, has a small form factor, and has been designed for children 7 to 12 years old. The pocket robot works by sensing when it is being held and attempts to shift the child's focus by presenting them with a simple three-note rhythm-matching game. We conducted a pilot study of the pocket robot involving four children aged 7 to 10 years, and then a main study with 18 children aged 6 to 8 years; neither study involved children with anxiety. Both studies aimed to assess the reliability of the robot's sensor configuration, its design, and the effectiveness of the user tutorial. The results indicate that the morphology and sensor setup performed adequately and the tutorial process enabled the children to use the robot with little practice. This work demonstrates that the presented pocket robot could represent a step toward developing low-cost accessible technologies to help children suffering from anxiety disorders.
|
|
11:00-11:10, Paper TO2A.4 | |
Advancing Interactive Robot Learning: A User Interface Leveraging Mixed Reality and Dual Quaternions |
|
Feith, Nikolaus | Montanuniversität Leoben |
Rueckert, Elmar | Montanuniversitaet Leoben |
Keywords: Physical and Cognitive Human-Robot Interaction, Learning From Humans
Abstract: This paper proposes an innovative mixed reality (MR) user interface using dual quaternions (DQ) to enhance interactive robot learning (IntRL). The interface, developed for the Microsoft HoloLens 2, facilitates intuitive interaction and visualization of robot pose trajectories in 3D space. It is designed with three main modes: Subscribe, for observing robot movements; Publish, for controlling robot actions; and Interaction, the main feature that allows users to adjust and refine trajectories. The use of DQ in this context provides a robust and efficient way to represent complex spatial relationships and motion. By bridging the gap between human operators and robotic systems, this interface aims to simplify complex robotic manipulations and demonstrates potential for broader applications in interactive learning environments, offering a novel approach in the field of robotics.
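For readers unfamiliar with the representation, the following minimal Python sketch (an illustrative assumption, not the authors' implementation) encodes a rigid-body pose as a unit dual quaternion and composes two poses, the core operation behind DQ-based trajectory handling.

```python
# Minimal sketch: rigid-body pose as a unit dual quaternion (real, dual) and composition.
import numpy as np

def qmul(a, b):
    """Hamilton product of two quaternions given as (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def qconj(q):
    return q * np.array([1.0, -1.0, -1.0, -1.0])

def pose_to_dq(q_rot, t):
    """Unit rotation quaternion + translation vector -> dual quaternion (real, dual)."""
    t_quat = np.array([0.0, *t])
    return q_rot, 0.5 * qmul(t_quat, q_rot)

def dq_mul(dq1, dq2):
    """Compose two poses: (r1 + eps*d1)(r2 + eps*d2)."""
    r1, d1 = dq1
    r2, d2 = dq2
    return qmul(r1, r2), qmul(r1, d2) + qmul(d1, r2)

def dq_translation(dq):
    """Recover the translation vector from a unit dual quaternion."""
    r, d = dq
    return (2.0 * qmul(d, qconj(r)))[1:]

# Example: a 90-degree rotation about z plus a translation, composed with a pure translation.
q_z90 = np.array([np.cos(np.pi/4), 0.0, 0.0, np.sin(np.pi/4)])
dq_a = pose_to_dq(q_z90, [0.1, 0.0, 0.0])
dq_b = pose_to_dq(np.array([1.0, 0.0, 0.0, 0.0]), [0.0, 0.2, 0.0])
print(dq_translation(dq_mul(dq_a, dq_b)))   # translation of the composed pose
```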
|
|
11:10-11:20, Paper TO2A.5 | |
Kinesthetic Skill Refinement for Error Recovery in Skill-Based Robotic Systems |
|
Kowalski, Victor | German Aerospace Center (DLR) |
Eiband, Thomas | German Aerospace Center (DLR) |
Lee, Dongheui | Technische Universität Wien (TU Wien) |
Keywords: Physical and Cognitive Human-Robot Interaction, Robotic Systems Architectures and Programming, Learning From Humans
Abstract: Skill-based robotic systems can perform tasks more flexibly than typical industrial manipulators. These systems are equipped with a repertoire of reusable skills and take advantage of a knowledge base about their workspace. That being so, the robot can execute tasks composed of a combination of different skills, tools, and objects without having to be reprogrammed explicitly for each task. Despite their advantages, these systems are affected by modeling errors and an inaccurate knowledge base. Such issues lead to failures in production. Since automated error detection is still an open problem, these failures often have to be resolved by a robot operator. That is generally done by accessing the implementation of the faulty task and determining what to change to achieve the desired outcome, which is time-consuming and requires expertise. The proposed work aims to provide the robot operator with a faster and more intuitive error recovery method for a skill-based system via GUI-assisted kinesthetic refinement of robot skills. Furthermore, partially automated error recovery strategies are included. First, the targeted skills can be composed of an arbitrary number of steps with corresponding reversion behaviors. Second, consecutive human corrections on different parts of a given object are analyzed to infer a possible object pose error. Experiments show that our method takes one-fourth of the time required for conventional manual correction.
|
|
11:20-11:30, Paper TO2A.6 | |
An Intuitive Framework to Minimize Cognitive Load in Robotic Control: A Comparative Evaluation of Body Parts |
|
Kim, Joonhyun | Hanyang University |
Lee, Jungsoo | Hanyang University |
Kim, Wansoo | Hanyang University ERICA |
Keywords: Physical and Cognitive Human-Robot Interaction, Wheeled Mobile Robots
Abstract: Within the domain of robotic control frameworks, a critical consideration is the minimization of cognitive load for the operator. Many past studies have aimed to achieve this by incorporating human motion into control systems. However, these methods often relied on motion capture systems, necessitating the cumbersome procedure of wearing equipment and calibrating sensors. This paper introduces an intuitive framework that utilizes raw values from a single Inertial Measurement Unit (IMU) sensor to capture the operator’s intent for robot control, thereby eliminating the need for complex sensor configurations and lengthy setup procedures. Furthermore, our study includes a comparative evaluation to determine the most effective body part - wrist, torso, or head - compared to traditional joystick control, in terms of minimizing cognitive load and maximizing intuitiveness. The evaluation criteria include stability, cognitive load, usability, and task completion time, with experiments involving both expert and non-expert users. Our findings indicate that wrist-based control is most beneficial for experts, improving stability, cognitive load management, usability, and completion speed. In contrast, non-experts prefer torso-based control for its intuitive nature, ease of use, and stability. Notably, the wrist and torso controls, which were most favored by the subjects, are assessed as more user-friendly than traditional joystick controls due to their hands-free operation capability. The practicality of our proposed framework is underscored by its potential compatibility with commonly available smart devices, paving the way for future research in realistic application scenarios.
|
|
TO2B |
KC 905 |
Medical Robotics |
Regular |
Chair: Nguyen, Nhan Huu | Japan Advanced Institute of Science and Technology |
Co-Chair: Lee, Deukhee | KIST |
|
10:30-10:40, Paper TO2B.1 | |
Haptic-Enhanced Virtual Reality Simulator for Robot-Assisted Femur Fracture Surgery |
|
Alruwaili, Fayez | Rowan University |
Halim-Banoub, David | Rowan University |
Rodgers, Jessica | Rowan University |
Dalkilic, Adam | Rowan University |
Haydel, Christopher | Orthopedic Trauma Surgery with Virtua Health |
Parvizi, Javad | Rothman Orthopedic Institute |
Iordachita, Ioan Iulian | Johns Hopkins University |
Abedin-Nasab, Mohammad | Rowan University |
Keywords: Robotic Systems Architectures and Programming, Medical Robotics and Computer-Integrated Surgery, Haptics
Abstract: In this paper, we develop a haptic-enhanced virtual reality (VR) simulator for the Robossis robot-assisted femur fracture surgery system. Given the complex nature of robot-assisted surgery and its steep learning curve, a dedicated training tool is vital for equipping surgeons with the necessary skills to effectively operate the surgical system. We develop the Robossis Surgical Simulator (RSS) to closely replicate the surgical environment of the Robossis system. The user interacts with the RSS using external hardware that includes the Sigma-7 Haptic Controller and the Meta Quest VR headset. Further, we extended the implementation of the separating axis theorem to detect collisions between the distal and proximal bone segments and, hence, determine the required haptic feedback that restricts bone-bone collision. This development demonstrates a promising avenue and a novel approach to enhancing the training protocol for the Robossis system.
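The separating axis theorem mentioned above is a standard convex-collision test. The sketch below is a generic 2D illustration of the principle (not the Robossis code; the simulator applies it to 3D bone-segment geometry): two convex shapes are disjoint if and only if their projections onto some edge normal do not overlap.

```python
# Illustrative sketch of the separating axis theorem (SAT) for two convex 2D polygons.
import numpy as np

def project(poly, axis):
    dots = poly @ axis
    return dots.min(), dots.max()

def convex_polygons_collide(poly_a, poly_b):
    """poly_a, poly_b: (N, 2) arrays of vertices in order. Returns True if overlapping."""
    for poly in (poly_a, poly_b):
        for i in range(len(poly)):
            edge = poly[(i + 1) % len(poly)] - poly[i]
            axis = np.array([-edge[1], edge[0]])          # normal of the edge
            norm = np.linalg.norm(axis)
            if norm == 0:
                continue
            axis /= norm
            min_a, max_a = project(poly_a, axis)
            min_b, max_b = project(poly_b, axis)
            if max_a < min_b or max_b < min_a:            # a separating axis exists
                return False
    return True                                           # no separating axis found

square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
triangle = np.array([[0.5, 0.5], [2.0, 0.5], [2.0, 2.0]])
print(convex_polygons_collide(square, triangle))  # True (they overlap)
```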
|
|
10:40-10:50, Paper TO2B.2 | |
Konjac Glucomannan-Based Soft Sensorized Phantom Simulator for Detection of Cutting Action |
|
Nguyen, Nhan Huu | Japan Advanced Institute of Science and Technology |
Ho, Van | Japan Advanced Institute of Science and Technology |
Keywords: Soft Robotics, Medical Robotics and Computer-Integrated Surgery
Abstract: As automation machines continue to integrate into various industrial sectors, the rise of soft robotic technologies holds promise for their application in everyday human life. In light of concerns about the environmental impact of this transition, including energy consumption and pollution, sustainability has become a key focus for the soft robotics community. This study introduces an innovative solution using a biodegradable material derived from Konjac Glucomannan (KGM) powder combined with Gelatin (referred to as KGE) for the development of eco-friendly soft devices. Tensile testing demonstrates that KGE possesses mechanical integrity comparable to commonly used hydrogels. Additionally, we present the inaugural implementation of medical simulators designed to replicate the skin's response during surgical procedures. An Electrical Impedance Tomography (EIT) system is employed to track incision trails and physical damages on the simulated skin. Experimental testing across various injury types reveals the system's ability to capture the wound's shape, albeit with limited precision in localization. This research marks a significant step toward expanding the utilization of KGM in the creation of a new generation of sustainable soft machines.
|
|
10:50-11:00, Paper TO2B.3 | |
Admittance Control for Adaptive Remote Center of Motion in Robotic Laparoscopic Surgery |
|
Nasiri, Ehsan | Stevens Institute of Technology |
Wang, Long | Stevens Institute of Technology |
Keywords: Medical Robotics and Computer-Integrated Surgery, Rehabilitation and Healthcare Robotics, Force and Tactile Sensing
Abstract: In laparoscopic robot-assisted minimally invasive surgery, the kinematic control of the robot is subject to the remote center of motion (RCM) constraint at the port of entry (e.g., trocar) into the patient’s body. During surgery, after the instrument is inserted through the trocar, intrinsic physiological movements such as the patient’s heartbeat and breathing, and/or other purposeful body repositioning, may shift the position of the port of entry. This can cause a conflict between the registered RCM and the moved port of entry. To mitigate this conflict, we seek to utilize the interaction forces at the RCM. We develop a novel framework that integrates admittance control into a redundancy resolution method for the RCM kinematic constraint. Using the force/torque sensory feedback at the base of the instrument driving mechanism (IDM), the proposed framework estimates the forces at the RCM, rejects forces applied at other locations along the instrument, and uses them in the admittance controller. In this paper, we report analysis from kinematic simulations to validate the proposed framework. In addition, a hardware platform has been completed, and future work is planned for experimental validation.
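A generic discrete-time admittance law of the form M·ẍ + D·ẋ + K·x = f, mapping an estimated RCM force to a small corrective offset, can be sketched as follows; the gains, time step, and interface are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch: discrete-time admittance from estimated RCM force to an RCM position offset.
import numpy as np

class RCMAdmittance:
    def __init__(self, m=1.0, d=40.0, k=200.0, dt=0.002):
        self.m, self.d, self.k, self.dt = m, d, k, dt
        self.x = np.zeros(3)       # RCM offset from the registered position [m]
        self.v = np.zeros(3)       # offset velocity [m/s]

    def step(self, f_rcm):
        """f_rcm: estimated 3D force at the RCM [N]; returns the updated offset."""
        a = (f_rcm - self.d * self.v - self.k * self.x) / self.m
        self.v += a * self.dt
        self.x += self.v * self.dt
        return self.x

adm = RCMAdmittance()
for _ in range(500):               # 1 s of a constant 2 N lateral force
    offset = adm.step(np.array([2.0, 0.0, 0.0]))
print(offset)                      # settles toward f/k = 0.01 m along x
```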
|
|
11:00-11:10, Paper TO2B.4 | |
Automatic Exposure Control for HoloLens 2 Camera: Detection of Reflective Markers for Robust Tool Tracking under Surgical Light |
|
Diana, Nova Eka | KIST School, University of Science and Technology |
Lee, Deukhee | KIST |
Keywords: Medical Robotics and Computer-Integrated Surgery, Computer Vision and Visual Servoing
Abstract: The introduction of surgical navigation into the operating room presents challenges related to lighting. While augmented reality (AR) has the potential to aid in surgery, most research on this subject has been conducted in preclinical settings with controlled lighting. HL2 (HoloLens 2), an OST-HMD (optical see-through head-mounted device) for AR commonly used in surgery, experiences performance limitations when exposed to direct and intense lighting. To tackle this problem, our research presents a real-time method to modify the exposure settings of the HL2 camera and cater to alterations in lighting conditions. The proposed algorithm identifies the brightest spot in a series of frames and then calculates its luminance level. Using this information, the process automatically determines the exposure time at a predefined ISO value while minimizing image artifacts. To assess the efficacy of this approach, we conducted experiments utilizing two different surgical lights and two distinct marker designs, both under static and dynamic conditions. The experimental findings suggest that a detection rate of 93.59 ± 7.58% (p = 0.001*) can be achieved for smaller marker geometries under both luminaires. For larger geometries, the detection rate was 91.90 ± 9.02%, with p-values of 0.0137* and 0.0189* under the two luminaire sources, respectively.
|
|
11:10-11:20, Paper TO2B.5 | |
A Realistic Surgical Simulator for Non-Rigid and Contact-Rich Manipulation in Surgeries with the Da Vinci Research Kit |
|
Ou, Yafei | University of Alberta |
Zargarzadeh, Sadra | University of Alberta |
Sedighi, Paniz | University of Alberta |
Tavakoli, Mahdi | University of Alberta |
Keywords: Medical Robotics and Computer-Integrated Surgery, Grasping, Telerobotics
Abstract: Realistic real-time surgical simulators play an increasingly important role in surgical robotics research, such as surgical robot learning and automation, and surgical skills assessment. Although there are a number of existing surgical simulators for research, they generally lack the ability to simulate the diverse types of objects and contact-rich manipulation tasks typically present in surgeries, such as tissue cutting and blood suction. In this work, we introduce CRESSim, a realistic surgical simulator based on PhysX 5 for the da Vinci Research Kit (dVRK) that enables simulating various contact-rich surgical tasks involving different surgical instruments, soft tissue, and body fluids. The real-world dVRK console and the master tool manipulator (MTM) robots are incorporated into the system to allow for teleoperation through virtual reality (VR). To showcase the advantages and potential of the simulator, we present three examples of surgical tasks, including tissue grasping and deformation, blood suction, and tissue cutting. These tasks are performed using the simulated surgical instruments, including the large needle driver, suction irrigator, and curved scissors, through VR-based teleoperation.
|
|
TO2C |
KC 907 |
Reasoning and Cognition |
Regular |
Chair: Sørensen, Sune Lundø | University of Southern Denmark |
Co-Chair: Yumbla, Francisco | ESPOL Polytechnic University |
|
10:30-10:40, Paper TO2C.1 | |
Perceptual Anchoring for Gaze-Tracking Wearables and Robot-Mounted Sensors |
|
Sørensen, Sune Lundø | University of Southern Denmark |
Huang, Shouren | Tokyo University of Science |
Cao, Yongpeng | The University of Tokyo |
Kjærgaard, Mikkel | University of Southern Denmark |
Keywords: World Modelling, Physical and Cognitive Human-Robot Interaction, Rehabilitation and Healthcare Robotics
Abstract: For persons with motor disabilities, robots have great potential to improve the quality of daily interactions. A fully-fledged solution requires addressing many different challenges. Gaze-tracking wearables provide a new input modality for such solutions. In this paper, we address the challenge of building and maintaining a world model using robot and gaze-tracking sensing data. We construct a world modeling pipeline consisting of a data association step and an anchoring step. In the data association step, we compare four different methods. The results show that three possible data association methods outperform a geometric baseline. However, the results have potential for improvement, and thus future work to enhance the performance of the pipeline is needed. To guide future work, we have investigated potential factors which affect the performance of both the data association and the anchoring process, and found that the positions of detected objects have a significant impact.
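For context, a purely geometric baseline of the kind the data-association step is compared against can be as simple as gated nearest-neighbour matching of detected object positions to existing anchors; the sketch below is an assumed illustration, not the paper's pipeline.

```python
# Sketch: greedy, gated nearest-neighbour association of detections to anchors.
import numpy as np

def associate(detections, anchors, gate=0.3):
    """detections: (N, 3) and anchors: (M, 3) arrays of 3D positions.
    Returns a list of (detection_idx, anchor_idx or None) pairs."""
    unmatched = set(range(len(anchors)))
    pairs = []
    for i, det in enumerate(detections):
        best_j, best_d = None, gate
        for j in unmatched:
            d = np.linalg.norm(det - anchors[j])
            if d < best_d:
                best_j, best_d = j, d
        if best_j is not None:
            unmatched.discard(best_j)
        pairs.append((i, best_j))          # None -> would create a new anchor
    return pairs

anchors = np.array([[0.0, 0.0, 0.8], [0.5, 0.2, 0.8]])
detections = np.array([[0.52, 0.18, 0.82], [1.5, 0.0, 0.8]])
print(associate(detections, anchors))      # [(0, 1), (1, None)]
```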
|
|
10:40-10:50, Paper TO2C.2 | |
Semi-Autonomous Fast Object Segmentation and Tracking Tool for Industrial Applications |
|
Neubauer, Melanie | Montanuniversität Leoben |
Rueckert, Elmar | Montanuniversitaet Leoben |
Keywords: Computer Vision and Visual Servoing, Object Recognition
Abstract: In the domain of deep learning in computer vision, minimizing the annotation workload for data is crucial. Due to the uniqueness of individual objects, comprehensive data annotation is essential for training deep neural networks. To streamline this process, a partially automated video annotation approach is proposed. The idea is to segment and classify each object with a single click, enabling automatic annotation through interpolating and tracking across subsequent frames, where the object is visible. In this paper, we developed a Fast Object Segmentation and Tracking Tool (FOST), which significantly reduces the labor-intensive nature of labeling image data from videos. Compared to other annotation tools, ours has the capability to automatically segment pre-selected objects in subsequent frames through the utilization of optical flow calculations. FOST is evaluated on three industrial applications. In our tests, we achieve significant results, with segmentation times ranging from approximately 0.14 to 0.29 seconds per frame, contingent on the number of segmented objects within each frame.
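The frame-to-frame propagation described above can be illustrated with pyramidal Lucas-Kanade optical flow; the sketch below (OpenCV-based, with a hypothetical input file and assumed parameters) illustrates the mechanism rather than the FOST implementation.

```python
# Sketch: propagate user-selected points through a video with Lucas-Kanade optical flow.
import cv2
import numpy as np

def propagate_points(prev_gray, next_gray, points):
    """points: (N, 2) float32 pixel coordinates in the previous frame."""
    prev_pts = points.reshape(-1, 1, 2).astype(np.float32)
    next_pts, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, next_gray, prev_pts, None,
        winSize=(21, 21), maxLevel=3)
    ok = status.ravel() == 1
    return next_pts.reshape(-1, 2)[ok], ok   # tracked positions + which points survived

cap = cv2.VideoCapture("demo_video.mp4")     # hypothetical input file
ret, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
pts = np.array([[320.0, 240.0]], dtype=np.float32)   # e.g. one clicked object point
while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    pts, _ok = propagate_points(prev_gray, gray, pts)
    if len(pts) == 0:
        break                                 # object lost; would trigger re-annotation
    prev_gray = gray
```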
|
|
10:50-11:00, Paper TO2C.3 | |
Mobile Robot Based Personalized Thermal Comfort Scheme |
|
Kim, Hyunju | KAIST |
Kim, Geon | KAIST |
Lee, Dongman | KAIST |
Keywords: Behavior-Based Systems, AI Reasoning Methods for Robotics, Wheeled Mobile Robots
Abstract: Indoor thermal control is a crucial technique for ensuring user comfort and efficient energy usage, and typically relies on conventional methods that standardize the indoor thermal environment, neglecting individual personalized preferences. Mobile robots have emerged as a potential solution for personalizing temperature comfort. However, the existing research often falls short in considering factors like robot movement control, user activities, human states, and potential disturbance to users, leading to inaccurate estimations of a user's temperature adjustment needs. This paper introduces a mobile robot-based personalized thermal control system, designed to enhance the accuracy in recognizing human states relevant to thermal comfort in real indoor environments. This system achieves accurate thermal comfort estimation using vision-based recognition while reducing robot movement to decrease user inconvenience. The robot dynamically navigates to optimal positions, guided by the confidence level in vision-based human state recognition. We train the robot's movement control policy using a Deep Reinforcement Learning-based model. Real-world evaluation shows the system's success in accurately recognizing human states with minimal movement trajectories and reduced user discomfort. The proposed robot-based approach offers a significant advancement in personalized thermal control, allowing for more accurate thermal comfort estimation.
|
|
11:00-11:10, Paper TO2C.4 | |
Action2Code: Transforming Video Demonstrations into Sequential Robotic Instructions |
|
Upadhyay, Abhinav | Accenture Labs |
Mortala, Gautam Reddy | Indraprastha Institute of Information Technology Delhi |
Dubey, Alpana | Accenture |
Saurav, Shubham | Accenture |
Sengupta, Shubhashis | Accenture Solutions Pvt. Ltd |
Keywords: Learning From Humans, AI Reasoning Methods for Robotics, Manipulation Planning and Control
Abstract: The time-consuming nature of programming robotic motion and manipulation to achieve specific tasks is a key reason that robots have generally been restricted to repetitive tasks with little variation. Developers need to manually write specific code for each task, making it challenging to adapt the code to different environments and assembly scenarios. This inefficiency leads to significant time being spent on creating redundant code for similar robotic actions in various assemblies. To address this, we develop a generative network to program robots through demonstration, bringing more agility to the process. We propose a network, Action2Code, that takes video demonstration as an input and translates it to robotic instructions. We evaluate our approach on two datasets, namely RH20T and ASMCode, using quantitative analysis across four metrics to assess code generation. We observe a CodeBLEU score of 0.69 and 0.65, along with a CodeBERTScore of 0.78 and 0.73 for RH20T and ASMCode, respectively. Our results demonstrate the effectiveness of the model in generating code from video demonstrations.
|
|
11:10-11:20, Paper TO2C.5 | |
TSP-Bot: Robotic TSP Pen Art Using High-DoF Manipulators |
|
Song, Daeun | Ewha Womans University |
Lim, Eunjung | Ewha Womans University |
Park, Jiyoon | Ewha Womans University |
Jung, Minjung | Ewha Womans University |
Kim, Young J. | Ewha Womans University |
Keywords: Contact: Modeling, Sensing and Control, Foundations of Sensing and Estimation, Robotic Systems Architectures and Programming
Abstract: TSP art is an art form for drawing an image using piecewise-continuous line segments. We present TSP-Bot, a robotic pen drawing system capable of creating complicated TSP pen art on a planar surface using multiple colors. The system begins by converting a colored raster image into a set of points that represent the image's tone, which can be controlled by adjusting the point density. Next, the system finds a piecewise-continuous linear path that visits each point exactly once, which is equivalent to solving a Traveling Salesman Problem (TSP). The path is simplified with fewer points using bounded approximation and smoothed and optimized using Bezier spline curves with bounded curvature. Our robotic drawing system consisting of single or dual manipulators with fingered grippers and a mobile platform performs the drawing task by following the resulting complex and sophisticated path composed of thousands of TSP sites. As a result, our system can draw complicated and visually pleasing TSP pen art.
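The central TSP step can be illustrated with a standard construction-plus-improvement heuristic; the sketch below (assumed pipeline, not the authors' solver) builds a nearest-neighbour tour over stand-in stipple points and shortens it with 2-opt before any smoothing.

```python
# Sketch: nearest-neighbour tour construction followed by 2-opt improvement.
import numpy as np

def tour_length(points, tour):
    return sum(np.linalg.norm(points[tour[i]] - points[tour[i - 1]]) for i in range(len(tour)))

def nearest_neighbour_tour(points):
    unvisited = set(range(1, len(points)))
    tour = [0]
    while unvisited:
        last = points[tour[-1]]
        nxt = min(unvisited, key=lambda j: np.linalg.norm(points[j] - last))
        unvisited.remove(nxt)
        tour.append(nxt)
    return tour

def two_opt(points, tour, passes=3):
    n = len(tour)
    for _ in range(passes):
        improved = False
        for i in range(1, n - 1):
            for j in range(i + 1, n):
                a, b = points[tour[i - 1]], points[tour[i]]
                c, d = points[tour[j]], points[tour[(j + 1) % n]]
                # Reverse tour[i:j+1] if that shortens the two affected edges.
                if (np.linalg.norm(a - c) + np.linalg.norm(b - d)
                        < np.linalg.norm(a - b) + np.linalg.norm(c - d) - 1e-12):
                    tour[i:j + 1] = tour[i:j + 1][::-1]
                    improved = True
        if not improved:
            break
    return tour

rng = np.random.default_rng(0)
pts = rng.random((150, 2))                       # stand-in for image stipple points
tour = two_opt(pts, nearest_neighbour_tour(pts))
print(tour_length(pts, tour))
```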
|
|
11:20-11:30, Paper TO2C.6 | |
MARVIN: Mobile Autonomous Robot Vehicle for Investigation & Navigation |
|
Andrade Proaño, Luis Alberto | Escuela Superior Politecnica Del Litoral |
Fajardo-Pruna, Marcelo | Escuela Superior Politécnica Del Litoral, ESPOL |
Tutivén, Christian Javier | Escuela Superior Politécnica Del Litoral |
Valarezo Añazco, Edwin | Escuela Superior Politecnica Del Litoral |
Recalde, Angel | Escuela Superior Politécnica Del Litoral |
Cajo, Ricardo | Escuela Superior Politecnica Del Litoral (ESPOL) |
Yumbla, Francisco | ESPOL Polytechnic University |
Keywords: Wheeled Mobile Robots, Intelligent Robotic Vehicles, Motion Planning and Obstacle Avoidance
Abstract: Self-driving vehicles are a rising field of investigation in both the commercial and academic worlds, but most framework platforms currently are not designed for the Latin American infrastructure environment, especially regarding manufacturing costs and their implementation. Therefore, this project involves designing, building, and programming a scaled self-driving vehicle based on the ROS2 infrastructure to offer a more economical alternative for this market. SLAM tools were implemented in the prototype so that it could map its environment in real time and navigate through it autonomously. In addition, a 3D model and a Gazebo-based simulation were developed to replicate the real test environment, so that the user can test the prototype in a simulated setting. All of this research, including the list of parts necessary for the development of the car and the open-source architecture, is uploaded to a GitHub repository. The final cost is comparatively lower than that of its competition; therefore, this product can be considered a viable alternative as a learning platform in the current market while maintaining the characteristics of autonomous vehicles.
|
|
TO2D |
KC 909 |
Automation & Industrial Robots |
Regular |
Chair: Ji, Yonghoon | JAIST |
Co-Chair: Kojima, Fumio | Kobe University |
|
10:30-10:40, Paper TO2D.1 | |
A Practical Evaluation of Multi-Agent Pathfinding in Automated Warehouse |
|
Park, Chanwook | LG Electronics |
Nam, Moonsik | LG Electronics |
Mun, Hyeongil | LG Electronics |
Kim, Youngjae | LG Electronics |
Keywords: Multi-Robot Systems, Industrial Robots
Abstract: Multi-agent pathfinding (MAPF) has drawn more attention with the increasing demand for deploying multi-robot applications in industry. Warehouse automation is one particular application of MAPF that is led by global logistics companies. In this application, a fleet of robots simultaneously navigates to their goal locations without collisions among themselves. The key purpose is to optimize operation efficiency in terms of throughput and operation costs. An increasing number of robots initially leads to higher throughput, but inefficiency in path-planning becomes unavoidable due to the dense robot population. In this work, we suggest a novel evaluation metric for automated warehouse applications, called a multi-agent efficiency factor. This metric attempts to quantify the efficiency of multi-robot operations in terms of time or energy consumption in congested environments. We simulate the lifelong version of MAPF in several environments using CCBS-PGA, a highly adaptive MAPF algorithm. Then we evaluate the efficiency of the multi-robot operations using the proposed factor, together with the throughput per agent. Our experiments demonstrate the effectiveness of the multi-agent efficiency factor as an evaluation metric for lifelong MAPF. Finally, we discuss the importance of agent density in designing multi-robot applications.
|
|
10:40-10:50, Paper TO2D.2 | |
Exploring the Effect of Anthropomorphic Design on Trust in Industrial Robots: Insights from a Metaverse Cobot Experiment |
|
Wittmann, Maximilian | Friedrich-Alexander-Universität Erlangen-Nürnberg |
Keywords: Industrial Robots
Abstract: Collaborative robot (cobot) solutions offer several benefits, among them their cost-effectiveness, easy implementation on the shop floor, and the ability to automate repetitive processes. Increasingly, these systems are powered by Artificial Intelligence (AI). Cognitive and emotional barriers, however, often prevent a widespread introduction of AI-powered cobot solutions. One potential remedy for this lack of trust may be anthropomorphism, which has been empirically shown to improve the likeability of systems and affect the trust perception of human users. We developed a metaverse collaboration game set in a final assembly environment. Using an industrial robotic use case as an example, we investigate the impact of anthropomorphic design on trust in the cobot. We ran a between-subject experiment with a sample size of 65 participants who interacted with either a mechanical robot version or the anthropomorphized robot. The perception of the robot’s reliability, functionality, helpfulness, and trust increased within each group. Contrary to our assumptions, however, there were no significant differences in the median trust ratings between the anthropomorphically and the non-anthropomorphically designed robots. We discuss the ramifications for industrial human-robot interaction.
|
|
10:50-11:00, Paper TO2D.3 | |
Hybrid Force Motion Control with Estimated Surface Normal for Manufacturing Applications |
|
Nasiri, Ehsan | Stevens Institute of Technology |
Wang, Long | Stevens Institute of Technology |
Keywords: Industrial Robots, Contact: Modeling, Sensing and Control, Manipulation Planning and Control
Abstract: This paper proposes a hybrid force-motion framework that utilizes real-time surface normal updates. The surface normal is estimated via a novel method that leverages force sensing measurements and velocity commands to compensate for the friction bias. This approach is critical for the robust execution of precision force-controlled tasks in manufacturing, such as thermoplastic tape replacement that traces surfaces or paths on a workpiece subject to uncertainties that deviate from the model. We formulated the proposed method and implemented the framework in a ROS2 environment. The approach was validated using kinematic simulations and a hardware platform. Specifically, we demonstrated the approach on a 7-DoF manipulator equipped with a force/torque sensor at the end-effector.
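One simple way to estimate a surface normal from force and velocity signals, sketched below under the assumption that friction acts along the commanded sliding direction, is to remove the tangential force component and normalize the remainder; this is an illustrative simplification, not necessarily the paper's estimator.

```python
# Hedged sketch: surface normal from measured contact force with friction stripped out.
import numpy as np

def estimate_surface_normal(f_measured, v_cmd, eps=1e-9):
    """f_measured: 3D contact force on the tool [N]; v_cmd: commanded sliding velocity."""
    v_dir = v_cmd / max(np.linalg.norm(v_cmd), eps)
    f_normal = f_measured - np.dot(f_measured, v_dir) * v_dir   # remove friction component
    n = np.linalg.norm(f_normal)
    return f_normal / n if n > eps else None

# Example: pressing into a surface while sliding along +x.
f = np.array([-1.5, 0.0, 9.0])          # measured force: friction (-x) + normal reaction
v = np.array([0.05, 0.0, 0.0])          # commanded sliding velocity
print(estimate_surface_normal(f, v))    # approximately [0, 0, 1]
```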
|
|
11:00-11:10, Paper TO2D.4 | |
Identification of Belt Conveyor Malfunctions Using Machine Learning with Diverse Anomalous Sound Data |
|
Yoon, Jiyoung | Korea Polytechnics |
Kim, Do-Yoon | WITHROBOT Inc |
Kim, Hyun-Don | Robot Campus of Korea Polytechnic |
Keywords: Industrial Robots, AI Reasoning Methods for Robotics, Robotic Systems Architectures and Programming
Abstract: This study introduces a novel approach to identifying belt conveyor malfunctions through the application of machine learning techniques, utilizing diverse anomalous sound data as training input. Traditional methods often rely on specific fault sound data for training, limiting their adaptability to unforeseen anomalies. In contrast, the methodology of this study utilizes machine learning algorithms trained on various abnormal sounds such as glass breaking or screams to identify malfunctions of the belt conveyor. The study focuses on the collection and curation of a diverse dataset encompassing various anomalous sounds, ensuring a comprehensive representation of potential disturbances in an industrial setting. Machine learning models, including deep neural networks, are trained on this heterogeneous dataset, enabling them to recognize abnormal patterns associated with belt conveyor faults. Experimental results demonstrate the efficacy of the proposed approach in accurately identifying belt conveyor malfunctions, even when the learned models are exposed to novel and previously unseen anomalies. The versatility of the method showcases its potential for real-world applications where conventional fault-specific training data may be insufficient. This study not only contributes to the advancement of anomaly detection in industrial settings but also highlights the importance of leveraging diverse sound data for enhanced machine learning-based fault identification.
|
|
11:10-11:20, Paper TO2D.5 | |
Markov Decision Process Approach for Battery Charging of an Automated Guided Vehicle |
|
Lee, Min Seok | Korea Advanced Institute of Science and Technology |
Hwang, Illhoe | DAIM Research, Co., Ltd |
Jang, Sungwook | KAIST |
Im, Nakjoon | KAIST |
Jang, Young Jae | Korea Advanced Institute of Science and Technology |
Keywords: Industrial Robots, Wheeled Mobile Robots
Abstract: The escalating automation of operations in manufacturing systems has seen a notable rise in the utilization of automated guided vehicles (AGVs) within automated material handling systems. AGVs, reliant on battery power, necessitate strategic charging policies to avert battery depletion during operations. However, prevailing heuristic approaches often yield inefficiencies. This study formulates the AGV charging problem as a Markov decision process (MDP) model, considering the uncertainty and geometric information of the environment. Relative value iteration has been implemented to optimize the MDP model. The proposed charging policy undergoes rigorous analysis and comparison with existing heuristics through simulation experiments. This process establishes its efficacy in advancing AGV charging efficiency.
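Relative value iteration for an average-reward MDP can be sketched generically as follows; the transition and reward arrays here are toy placeholders, not the paper's AGV charging model.

```python
# Minimal sketch of relative value iteration for an average-reward MDP.
import numpy as np

def relative_value_iteration(P, R, ref_state=0, tol=1e-8, max_iter=10_000):
    """P: (S, A, S) transition probabilities, R: (S, A) rewards.
    Returns (gain, bias values h, greedy policy)."""
    S, A, _ = P.shape
    h = np.zeros(S)
    for _ in range(max_iter):
        q = R + P @ h                      # (S, A): one-step lookahead values
        th = q.max(axis=1)                 # Bellman operator applied to h
        gain = th[ref_state]
        h_new = th - gain                  # subtract reference value to keep h bounded
        if np.max(np.abs(h_new - h)) < tol:
            h = h_new
            break
        h = h_new
    return gain, h, q.argmax(axis=1)

# Toy 2-state, 2-action example (state 1 = "battery low"; action 1 = "go charge").
P = np.array([[[0.9, 0.1], [0.9, 0.1]],
              [[0.0, 1.0], [1.0, 0.0]]])
R = np.array([[1.0, 0.2],
              [0.0, -0.5]])
print(relative_value_iteration(P, R))
```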
|
|
11:20-11:30, Paper TO2D.6 | |
Off-The-Shelf Bin Picking Workcell with Visual Pose Estimation: A Case Study on the World Robot Summit 2018 Kitting Task |
|
Hagelskjær, Frederik | University of Southern Denmark |
Høj Lorenzen, Kasper | University of Southern Denmark |
Kraft, Dirk | University of Southern Denmark |
Keywords: Industrial Robots, Object Recognition, Grasping
Abstract: The World Robot Summit 2018 Assembly Challenge included four different tasks. The kitting task, which required bin-picking, was the task in which the fewest points were obtained. However, bin-picking is a vital skill that can significantly increase the flexibility of robotic set-ups and is, therefore, an important research field. In recent years, advancements have been made in sensor technology and pose estimation algorithms. These advancements allow for better performance when performing visual pose estimation. This paper shows that by utilizing new vision sensors and pose estimation algorithms, pose estimation in bins can be performed successfully. We also implement a workcell for bin picking along with a force-based grasping approach to perform the complete bin-picking task. Our set-up is tested on the World Robot Summit 2018 Assembly Challenge and successfully obtains a higher score compared with all teams at the competition. This demonstrates that current technology can perform bin-picking at a much higher level compared with previous results.
|
|
TO2E |
KC 912 |
Aerial and Flying Robots |
Regular |
Chair: Geckeler, Christian | ETH Zürich |
Co-Chair: Adoni, Wilfried Yves Hamilton | Helmholtz-Zentrum Dresden-Rossendorf - (HZDR) |
|
10:30-10:40, Paper TO2E.1 | |
Autotarget*: A Distributed Robot Operating System Framework for Autonomous Aerial Swarms |
|
Adoni, Wilfried Yves Hamilton | Helmholtz-Zentrum Dresden-Rossendorf - (HZDR) |
Fareedh-Shaik, Junaidh | Helmholtz-Institut Freiberg Für Ressourcentechnologie |
Lorenz, Sandra | Helmholtz-Institut Freiberg Für Ressourcentechnologie |
Gloaguen, Richard | Helmholtz-Institut Freiberg Für Ressourcentechnologie |
Kühne, Thomas D. | Center for Advanced Systems Understanding -(HZDR-CASUS) |
Keywords: Aerial and Flying Robots, Robotic Systems Architectures and Programming, Multi-Robot Systems
Abstract: Robot Operating System (ROS) has proven itself as a viable framework for developing robot-related applications. It offers features such as hardware abstraction, low-level device support, inter-process communication, and useful libraries for autonomous robot systems. Concerning aerial robots, commonly called unmanned aerial vehicles (UAVs) or drones, ROS unfortunately provides only very basic functions. Moreover, it does not guarantee real-time operation, as it runs under Linux. Consequently, it is difficult to implement advanced ROS applications that involve a swarm of drones that need to communicate with each other to carry out a common mission. This paper proposes an extended version of the ROS framework called autotarget∗, which provides a set of efficient functions designed for distributed operation on multiple UAVs flying at the same time. autotarget∗ relies on a multi-tier architecture with a decentralized communication layer, enabling intra-UAV messaging as well as the scalability of swarm UAVs. It has a set of daemons whose role is to regulate the swarm’s consensus control and failover policy to ensure convergence towards a common goal. Experiments with real-world swarms revealed that autotarget∗ is portable and satisfies performance requirements for collaborative mission applications. We further conducted a coverage planning mission using the parallel back-and-forth algorithm, which demonstrated the efficiency of the framework in terms of time and energy. Our work should pave the way for an open-source environment dedicated to simplifying collaborative ROS application development, particularly for multi-UAV systems.
|
|
10:40-10:50, Paper TO2E.2 | |
QuadFormer: Real-Time Unsupervised Power Line Segmentation with Transformer-Based Domain Adaptation |
|
Rao, Pratyaksh | New York University |
Qiao, Feng | Autel Robotics |
Weide, Zhang | Autel Robotics |
Xu, Yiliang | Autel Robotics |
Deng, Yong | Autel Robotics |
Wu, Guangbin | Autel Robotics |
Zhang, Qiang | Autel Robotics |
Loianno, Giuseppe | New York University |
Keywords: Computer Vision and Visual Servoing, AI Reasoning Methods for Robotics, Aerial and Flying Robots
Abstract: Accurately identifying power lines (PL) is crucial for ensuring the safety of aerial vehicles. Despite the potential of recent deep learning approaches, obtaining high-quality ground truth annotations remains a challenging and labor-intensive task. Unsupervised domain adaptation (UDA) emerges as a promising solution, leveraging knowledge from labeled synthetic data to improve performance on unlabeled real images. However, existing UDA methods often suffer from their huge computation costs, limiting their deployment on real-time embedded systems commonly utilized in aerial vehicles. To mitigate this problem, this paper introduces QuadFormer, a real-time framework designed for unsupervised semantic segmentation within the UDA paradigm. QuadFormer integrates a lightweight transformer-based segmentation model with a cross-attention mechanism to narrow the domain gap. Additionally, we propose a novel pseudo label scheme to enhance the segmentation accuracy of the unlabelled real data. Furthermore, to facilitate the evaluation of our framework and promote research in power line segmentation, we present two new datasets: AutelPL Synthetic and AutelPL Real. Experimental results demonstrate that QuadFormer achieves state-of-the-art performance on both the AutelPL Synthetic → TTPLA and AutelPL Synthetic → AutelPL Real tasks. We will publicly release the dataset to the research community.
|
|
10:50-11:00, Paper TO2E.3 | |
Particle Swarm Optimization for Training Quadrotor PID Controller |
|
Rodriguez, Eric | The University of Texas at Rio Grande Valley |
Lu, Qi | The University of Texas Rio Grande Valley |
Keywords: Aerial and Flying Robots, Dynamics and Control
Abstract: Energy expenditure for quadrotor control is likely to be costly when parameter-dependent controllers are less than optimal. The cost can grow proportionally when applied to multiple quadrotors for tracking and collaborative navigation tasks. This research aims to establish a basic approach to tuning PID (Proportional-Integral-Derivative) parameters for a simulated quadrotor drone. A PID controller for autonomy provides a straightforward method for correcting robotic movement based on its current state. However, applying a PID system to a flight controller poses challenges with an inherently under-actuated system, including the likelihood of large overshoots and lengthy adjustment times. To address this, we utilize PSO (Particle Swarm Optimization) for optimizing PID parameters in a simulated quadrotor. The PSO is employed to find optimal PID values for thrust, yaw, and translational movement on x- and y-positions by identifying converging values across randomly created particles. We conducted a set of experiments and compared the optimized controller to the default PID controller. The experiments demonstrate converging properties for particles that achieve minimal fitness scores, particularly in reducing overshoot. The results indicate that the optimized PID controller outperforms the default PID controller without optimization. Using optimized PID controllers can decrease the amount of positional error during flight and when adjusting position with collaborative navigation and collision avoidance algorithms.
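A generic PSO loop for PID tuning looks roughly like the sketch below; the plant, cost function, and hyperparameters are stand-in assumptions rather than the paper's quadrotor setup.

```python
# Sketch: PSO tuning (kp, ki, kd) against a step-response cost on a toy second-order plant.
import numpy as np

def step_response_cost(gains, dt=0.01, t_end=5.0):
    kp, ki, kd = gains
    y = y_prev = integ = v = 0.0           # plant state: position y, velocity v
    cost = 0.0
    for _ in range(int(t_end / dt)):
        err = 1.0 - y                       # unit step reference
        integ += err * dt
        deriv = -(y - y_prev) / dt          # derivative on measurement
        u = kp * err + ki * integ + kd * deriv
        y_prev = y
        v += (u - 0.5 * v) * dt             # toy double integrator with damping
        y += v * dt
        if abs(y) > 1e3:                    # diverged: bail out with a large penalty
            return cost + 1e6
        cost += (abs(err) + 0.1 * max(y - 1.0, 0.0)) * dt   # IAE + overshoot penalty
    return cost

def pso(n_particles=30, iters=60, bounds=(0.0, 20.0), w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(*bounds, size=(n_particles, 3))
    v = np.zeros_like(x)
    pbest, pbest_cost = x.copy(), np.array([step_response_cost(p) for p in x])
    gbest = pbest[pbest_cost.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, *bounds)
        cost = np.array([step_response_cost(p) for p in x])
        improved = cost < pbest_cost
        pbest[improved], pbest_cost[improved] = x[improved], cost[improved]
        gbest = pbest[pbest_cost.argmin()].copy()
    return gbest, pbest_cost.min()

print(pso())    # best (kp, ki, kd) and the associated cost
```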
|
|
11:00-11:10, Paper TO2E.4 | |
Collision Dynamics of Motorized Deformable Propellers for Drones |
|
Pham, Tien Hung | Japan Advanced Institute of Science and Technology |
Nguyen, Dinh | Hanoi University of Industry |
Bui, Son | Hanoi University of Industry |
Loianno, Giuseppe | New York University |
Ho, Van | Japan Advanced Institute of Science and Technology |
Keywords: Aerial and Flying Robots, Soft Robotics, Modeling, Identification, Calibration
Abstract: This paper investigates and analyzes the behavior of a deformable propeller during and after collisions. The experimental setup includes a deformable propeller, a BLDC motor, and a collision initiated while the propeller is rotating steadily. Here, we examine the changes in the propeller’s angular velocity over time from the start of the collision until it fully recovers its initial velocity. This variation is compared between the wing velocity measured experimentally using an encoder and the propeller’s angular velocity calculated in simulation. The constructed model describes the relationship between the propeller’s angular velocity and the input voltage supplied to the motor based on the Lagrange method. The study confirmed the shape transformation process and full restoration of the propeller’s original shape following collisions through high-speed video analysis. The results demonstrate consistent monitoring of collision initiation and the subsequent recovery process. This research enhances comprehension of the collision dynamics, thereby contributing to a deeper understanding of the fundamental physics governing deformable propellers, ultimately enhancing safety for drones.
|
|
11:10-11:20, Paper TO2E.5 | |
User-Centric Payload Design and Usability Testing for Agricultural Sensor Placement and Retrieval Using Off-The-Shelf Micro Aerial Vehicles |
|
Geckeler, Christian | ETH Zürich |
Kong, Iris | Technical University of Denmark |
Mintchev, Stefano | ETH Zurich |
Keywords: Aerial and Flying Robots, Telerobotics, Mechanism and Design
Abstract: Increased flight time and advanced sensors are making MAVs easier to use, facilitating their widespread adoption in fields such as precision agriculture and environmental monitoring. However, current applications are limited mainly to passive visual observation from far above; to enable the next generation of aerial robot applications, MAVs must begin to directly physically interact with objects in the environment, such as by placing and collecting sensors. Enabling these applications for a wide spectrum of end-users is only possible if the mechanism is safe and easy to use, without overburdening the user with complex integration, complicated control, or overwhelming and convoluted feedback. To this end, we propose a self-sufficient passive payload system to enable both the deployment and retrieval of sensors for agriculture. This mechanism can be simply mechanically attached to a commercial, off-the-shelf MAV, without requiring further electrical or software integration. The user-centric design and mechanical intelligence of the system facilitate ease of use through simplified control with targeted perceptual feedback. The usability of the system is validated quantitatively and qualitatively in a user study demonstrating sensor deployment and collection. All participants were able to deploy and collect at least four sensors within 10 minutes in visual line-of-sight and within 12 minutes beyond visual line-of-sight, after only three minutes of practice. Enabling MAVs to physically interact with their environment will usher in the next stage of MAV utility and applications. Complex tasks, such as sensor deployment and retrieval, can be realized relatively simply by relying on a mechanically passive system designed with the user in mind; such payloads can make these applications more widely available and inclusive to end-users.
|
|
TO4A |
Rosenthal |
Human-Robot Interaction II |
Regular |
Chair: Erol Barkana, Duygun | Yeditepe University |
Co-Chair: Oh, Paul Y. | University of Nevada, Las Vegas (UNLV) |
|
14:20-14:30, Paper TO4A.1 | |
Safety-Optimized Strategy for Grasp Detection in High-Clutter Scenarios |
|
Li, Chenghao | Japan Advanced Institute of Science and Technology |
Zhou, Peiwen | Japan Advanced Institute of Science and Technology |
Chong, Nak Young | Japan Advanced Institute of Science and Technology |
Keywords: Grasping
Abstract: The detection accuracy and speed of grasp detection models on benchmarks are the focal points of concern in the robotic grasping community. Especially in a collaborative robot setting, the safety of the model is an essential aspect that cannot be overlooked. In this paper, we explore how to enhance the safety of grasp detection models in autonomous vision-guided grasping. Specifically, we propose a simple yet practical Safety-optimized Strategy, which consists of two parts. The first part involves depth prioritization, optimizing the grasp sequence from top to bottom based on the order of depth values, which can mitigate the issue of grasp collisions that may arise when the depth value of the object with the highest grasp quality is significantly higher than that of other objects in high-clutter scenarios. The second part is false-positive protection, where we introduce the robust ArUco marker as the lowest grasp priority. The marker is fixed at certain positions within the camera's field of view, enabling the robot to halt its movement, thereby restraining the robot from grasping objects that should not be grasped. Once the marker disappears, the robot can resume its operations. We validate our method through real grasping experiments with a parallel-jaw gripper and an industrial robotic arm, demonstrating its effectiveness in high-clutter scenarios.
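The depth-prioritization part of the strategy amounts to sorting grasp candidates by the depth value at the grasp centre; the sketch below is an assumed illustration with hypothetical data, not the authors' code.

```python
# Sketch: order grasp candidates top-down by depth, with quality breaking ties.
import numpy as np

def order_grasps_by_depth(grasps, depth_image):
    """grasps: list of dicts with pixel centre (u, v) and a 'quality' score.
    Returns grasps sorted so the shallowest (highest) object is picked first."""
    def key(g):
        u, v = g["center"]
        return (depth_image[v, u], -g["quality"])
    return sorted(grasps, key=key)

depth = np.full((480, 640), 0.60)              # flat table 0.60 m from the camera
depth[100:140, 200:240] = 0.45                 # a taller object on top of the pile
candidates = [
    {"center": (220, 120), "quality": 0.70},   # on the taller object
    {"center": (400, 300), "quality": 0.95},   # higher quality but lower in the pile
]
print(order_grasps_by_depth(candidates, depth))   # the shallower grasp comes first
```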
|
|
14:30-14:40, Paper TO4A.2 | |
Mixed Reality-Based Teleoperation of Mobile Robotic Arm: System Apparatus and Experimental Case Study |
|
Jarecki, Annalisa | Texas A&M University |
Lee, Kiju | Texas A&M University |
Keywords: Human-Robot Augmentation, Wheeled Mobile Robots, Telerobotics
Abstract: This paper presents a system apparatus for supporting the remote operation of a mobile robot arm using a mixed reality (MR)-based user interface (UI). The presented system is based on Robot Operating System 2, utilizing newly developed and existing software packages for gesture-based control of the mobile base and the robotic arm. An experimental case study was designed to evaluate the system-level integration and usability. The case study involved seven participants completing a simple sequence of remote operation tasks using two different UI modalities, the MR device and a conventional computer interface (i.e., a 2D display and a keyboard). The results showed that the MR-based UI might be perceived by participants as more intuitive than the conventional control interfaces, while some limitations, such as gesture sensitivity and increased task load due to unfamiliarity, were also identified.
|
|
14:40-14:50, Paper TO4A.3 | |
A Preliminary Study of the Mobile Robot-Based Cooperative System for Outdoor Hazardous Testing Spaces |
|
Kim, Boseong | ADD |
Kim, Yoonkeon | ADD |
Kim, Eungsoo | ADD |
Kang, Jaeun | ADD |
Kim, Kihoon | Newtech Inc |
Jo, Younghyun | Innotech Inc |
Keywords: Robotics in Hazardous Applications, Wheeled Mobile Robots, Robot Surveillance and Security
Abstract: Unlike the diverse improvement efforts actively ongoing in general industrial sites, military development products that involve explosives and propellants demand a high level of operational reliability and safety due to their specificity. In this paper, a mobile robot-based cooperative testing system has been proposed and developed as an attempt to mitigate various threats that may arise during firearm and ammunition testing. The robot is designed to autonomously execute the procedures necessary for firearm preparation, relying on a configuration of both commercially available and self-developed sensors. A vision-based automatic sorting algorithm has been implemented to self-detect variations in the reference azimuth angle during live firing tests. To verify the ammunition loading safety of the developed system, a measurement environment was configured to measure the impact force on the ammunition using PCB PIEZOTRONICS 350C03 sensors and the Dewetron DEWE-2072 data acquisition system. The analysis results show a 70% reduction in impulse compared to manual loading cases, demonstrating the safety effectiveness of the proposed system. Moreover, the feasibility of the mobile robot-based testing technique has also been verified in the live firing test.
|
|
14:50-15:00, Paper TO4A.4 | |
A Data Collection Scheme to Develop Future Autonomous Manipulation for Military Applications |
|
Kim, Dongbin | United States Military Academy |
Manjunath, Pratheek | United States Army |
Adeniran, Emmanuel | Yale University |
Davis, Joseph | United States Military Academy |
Keywords: Telerobotics, Human-Robot Augmentation, Learning From Humans
Abstract: The U.S. Department of Defense is advancing mobile robotic manipulation to conduct dangerous tasks such as Explosive Ordnance Disposal (EOD) and hazardous materials (HazMat) handling, where current autonomous systems fall short. In a battlefield environment, contested communications may prohibit the use of telemanipulation, thus establishing the need for highly dexterous autonomous mobile manipulation robots. This paper presents a data collection scheme using telemanipulation, comprised of a Mixed Reality (MR) control interface, enabling a human in the loop to remotely execute a specialized task. The user-interaction data collected is instrumental in developing advanced, predictive, and adaptive control systems via machine learning algorithms. These systems enhance robot autonomy while ensuring operator oversight, particularly critical in military settings. We detail an initial pick-and-place task with novice cadet researchers, analyzing the results and setting the stage for future research in autonomous mobile manipulation for battlefield applications.
|
|
15:00-15:10, Paper TO4A.5 | |
Usability Study of a Human-Robot Tele-Collaboration System for Nuclear Glove Boxes and Isolators |
|
Kim, BaekSeok | University of Nevada, Las Vegas |
Kassai, Nathan | University of Nevada, Las Vegas |
Oh, Paul Y. | University of Nevada, Las Vegas (UNLV) |
Keywords: Telerobotics, Human-Robot Augmentation, Physical and Cognitive Human-Robot Interaction
Abstract: This paper describes our ongoing research to develop a Human Robot Tele-Collaboration System. The goal is to enhance the usability of Glove Boxes commonly used in various nuclear facilities through the collaboration of robots and humans. In many cases, Glove Box operators have limited visibility and cannot reach the entire workspace, necessitating the use of additional tools inside the Glove Box. While attempts exist to solve these issues by placing robots inside the Glove Box, remotely operating a robot to perform fine tasks remains a challenging problem. This paper presents a Tele-Collaboration System that utilizes robots placed inside the Glove Box to overcome the limited workspace issues of the operator. Moreover, Glove Box operators can complement the robot's lacking dexterity, enabling more effective task performance.
|
|
15:10-15:20, Paper TO4A.6 | |
Integrating Human Expertise in Continuous Spaces: A Novel Interactive Bayesian Optimization Framework with Preference Expected Improvement |
|
Feith, Nikolaus | Montanuniversität Leoben |
Rueckert, Elmar | Montanuniversitaet Leoben |
Keywords: Learning From Humans, Physical and Cognitive Human-Robot Interaction
Abstract: Interactive Machine Learning (IML) seeks to integrate human expertise into machine learning processes. However, most existing algorithms cannot be applied to real-world scenarios because their state spaces and/or action spaces are limited to discrete values. Furthermore, the interaction is limited to either a binary, good-or-bad decision or the choice of which of the proposed solutions is the best. We therefore propose a novel framework based on Bayesian Optimization (BO). Interactive Bayesian Optimization (IBO) captures user preferences and provides an interface for users to shape the strategy by hand. Additionally, we incorporate a new acquisition function, Preference Expected Improvement (PEI), to refine the system's efficiency using a probabilistic model of the user preferences. Our approach is geared towards ensuring that machines can benefit from human expertise, aiming for a more aligned and effective learning process. In the course of this work, we applied our method to simulations and to a real-world task using a Franka Panda robot to show human-robot collaboration.
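For reference, standard Expected Improvement and one possible preference-weighted variant can be sketched as follows; the exact form of the paper's PEI acquisition is not reproduced here, and weighting EI by a Gaussian preference bump is only an illustrative assumption.

```python
# Sketch: standard Expected Improvement (minimization) and a preference-weighted variant.
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best, xi=0.01):
    """mu, sigma: GP posterior mean/std at candidate points."""
    sigma = np.maximum(sigma, 1e-12)
    z = (f_best - mu - xi) / sigma
    return (f_best - mu - xi) * norm.cdf(z) + sigma * norm.pdf(z)

def preference_weighted_ei(mu, sigma, f_best, x, pref_mean, pref_std):
    """Multiply EI by a Gaussian bump around the user's preferred parameter value."""
    pref = norm.pdf(x, loc=pref_mean, scale=pref_std)
    return expected_improvement(mu, sigma, f_best) * pref / pref.max()

x = np.linspace(0.0, 1.0, 101)
mu = np.sin(3 * x)                     # stand-in GP posterior mean
sigma = 0.2 * np.ones_like(x)
acq = preference_weighted_ei(mu, sigma, f_best=mu.min(), x=x, pref_mean=0.8, pref_std=0.1)
print(x[np.argmax(acq)])               # next query, pulled toward the user's preference
```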
|
|
TO4B |
KC 905 |
Rehabilitation and Healthcare Robotics |
Regular |
Chair: Recchiuto, Carmine Tommaso | University of Genova |
Co-Chair: Park, Young-Bin | Ulsan National Institute of Science and Technology |
|
14:20-14:30, Paper TO4B.1 | |
Investigating the Generalizability of Assistive Robots Models Over Various Tasks |
|
Osooli, Hamid | University of Massachusetts Lowell |
Coco, Christopher | University of Massachusetts Lowell |
Spanos, Jonathan | University of Massachusetts Lowell |
majdi, amin | University of Massachusetts Lowell |
Azadeh, Reza | University of Massachusetts Lowell |
Keywords: Rehabilitation and Healthcare Robotics, Modeling, Identification, Calibration
Abstract: In the domain of assistive robotics, the significance of effective modeling is well acknowledged. Prior research has primarily focused on enhancing model accuracy or involved the collection of extensive, often impractical amounts of data. While improving individual model accuracy is beneficial, it necessitates constant remodeling for each new task and user interaction. In this paper, we investigate the generalizability of different modeling methods. We focus on constructing the dynamic model of an assistive exoskeleton using six data-driven regression algorithms. Six tasks are considered in our experiments: horizontal, vertical, diagonal from the left leg to the right eye and the reverse, as well as eating and pushing. We constructed thirty-six unique models by applying the different regression methods to data gathered from each task. Each trained model's performance was evaluated in a cross-validation scenario using five folds for each dataset. The trained models are then tested on the tasks they were not trained on, and finally the models are assessed in terms of generalizability. Results show that the model of the task performed along the horizontal plane and decision-tree-based algorithms generalize best.
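The cross-task evaluation protocol can be illustrated with a short sketch: train a regressor on one task, cross-validate it with five folds, and then score it on every other task. The task names, feature dimensions, and synthetic data below are placeholders, not the paper's dataset or chosen regressors.

```python
# Sketch of the train-on-one-task, test-on-all-other-tasks protocol with
# synthetic data; the six tasks and feature size are illustrative assumptions.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
tasks = {name: (rng.normal(size=(200, 6)), rng.normal(size=200))
         for name in ["horizontal", "vertical", "diag_lr", "diag_rl", "eat", "push"]}

for train_name, (X_tr, y_tr) in tasks.items():
    model = DecisionTreeRegressor(max_depth=5)
    cv_r2 = cross_val_score(model, X_tr, y_tr, cv=5, scoring="r2").mean()
    model.fit(X_tr, y_tr)
    for test_name, (X_te, y_te) in tasks.items():
        if test_name != train_name:
            r2 = model.score(X_te, y_te)  # generalization to a task not seen in training
            print(f"train={train_name:10s} test={test_name:10s} R2={r2:+.2f}")
```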
|
|
14:30-14:40, Paper TO4B.2 | |
Quantification of Shoulder Joint Impedance During Dynamic Motion: A Pilot Study Using a Parallel-Actuated Shoulder Exoskeleton Robot |
|
Hwang, Seunghoon | Arizona State University |
Chan, Edward | Arizona State University |
Lee, Hyunglae | Arizona State University |
Keywords: Rehabilitation and Healthcare Robotics
Abstract: Previous studies characterizing shoulder joint impedance were either strictly limited to 2D planar motion or static postures in 3D space. It is still not clear how shoulder joint impedance is regulated during dynamic motion in 3D space. To address this knowledge gap, this study presents our initial efforts to quantify shoulder joint impedance during dynamic shoulder motion using a parallel-actuated shoulder exoskeleton robot. The robot's 4-bar spherical parallel manipulation mechanism, characterized by low inertia, allows for transparent and natural arm motion in 3D space and applies rapid perturbations to the upper arm during dynamic shoulder motion. Two unimpaired individuals participated in an experiment involving repeated shoulder flexion and extension motions at a fixed horizontal shoulder extension angle of 45 degrees. Shoulder impedance was quantified by estimating the relationship between the kinematics of input position perturbations, which were applied in the orthogonal direction of the arm motion, and the output torque responses resulting from these perturbations. This relationship was approximated by a second-order model consisting of inertia, damping, and stiffness. Both subjects showed high reliability in impedance quantification during shoulder flexion and extension movements, evidenced by a high percentage Variance Accounted For that exceeds 96%. The experimental results showed the following notable trends. First, the contribution of stiffness to the shoulder torque was greater than that of the other two impedance parameters. Next, damping was larger during shoulder extension (downward motion) as opposed to flexion (upward motion). Lastly, inertia remained relatively constant regardless of shoulder motions. This pilot study validated the reliability of the presented robotic approach, paving the way for future shoulder impedance studies involving various dynamic motions in 3D space.
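The second-order impedance model mentioned above can be identified by linear least squares from the perturbation kinematics and the measured torque response. The sketch below uses synthetic signals and a one-shot fit for illustration; the authors' estimation pipeline and VAF computation may differ in detail.

```python
# Sketch: fit tau(t) = I*ddq(t) + B*dq(t) + K*q(t) to perturbation data by
# linear least squares, using synthetic signals (not the study's recordings).
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1000)
q = 0.02 * np.sin(2 * np.pi * 5 * t)            # perturbation position [rad]
dq = np.gradient(q, t)
ddq = np.gradient(dq, t)
I_true, B_true, K_true = 0.05, 0.8, 30.0        # arbitrary ground-truth parameters
tau = I_true * ddq + B_true * dq + K_true * q + 0.01 * rng.normal(size=t.size)

A = np.column_stack([ddq, dq, q])
(I_hat, B_hat, K_hat), *_ = np.linalg.lstsq(A, tau, rcond=None)

tau_hat = A @ np.array([I_hat, B_hat, K_hat])
vaf = 100 * (1 - np.var(tau - tau_hat) / np.var(tau))  # % Variance Accounted For
print(I_hat, B_hat, K_hat, vaf)
```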
|
|
14:40-14:50, Paper TO4B.3 | |
A Hybrid CNN-LSTM Network with Attention Mechanism for Myoelectric Control in Upper Limb Exoskeletons |
|
Sedighi, Paniz | University of Alberta |
Marey, Amr Mohamed Fawzy | University of Alberta |
Golabchi, Ali | University of Michigan |
Li, Xingyu | University of Alberta |
Tavakoli, Mahdi | University of Alberta |
Keywords: Rehabilitation and Healthcare Robotics
Abstract: This paper introduces a novel attention-based sequence-to-sequence network for predicting upper-limb exoskeleton joint angles, enhancing the control of assistive technologies for individuals with upper limb impairments. By integrating EMG and IMU signals, our model facilitates real-time decoding of user intentions, generating precise movement trajectories for a 3-DoF cable-driven upper-limb exoskeleton. The implementation of an attention mechanism within an encoder-decoder architecture allows for the dynamic prioritization of the most pertinent EMG features and historical angular positions, significantly improving prediction accuracy and system responsiveness. This approach not only offers a tailored response to varying sequence lengths and compensates for sensor unreliability but also introduces a level of personalization and adaptability previously unattainable in robotic rehabilitation and assistive devices. Through this model, we demonstrate a more effective, user-specific method of enhancing motor function recovery and facilitating daily activities, setting a new standard for assistive exoskeleton technology.
|
|
14:50-15:00, Paper TO4B.4 | |
Human-Robot Interactive Control for Knee Exoskeleton Using Feedback Torque and Adjustable Stiffness |
|
Du, Zhao-Ning | Shenzhen University |
Cao, Guang-Zhong | Shenzhen University |
Zhang, Yue-Peng | Shenzhen University |
Li, Ling-Long | Shenzhen University |
Huang, Su-Dan | Shenzhen University |
Keywords: Rehabilitation and Healthcare Robotics
Abstract: Rigid exoskeletons suffer from insufficient compliance caused by mechanical vibration during human-robot cooperative motion and can move only within a small range of designed trajectories, so the user's active motion intention cannot be fully expressed. In this paper, a human-robot cooperative control method using a compliance factor with feedback torque and adjustable stiffness is proposed, which can arbitrarily change the current motion trajectory according to the user's active intention torque while the knee exoskeleton maintains compliant and stable motion. The compliance factor includes a stiffness adjustment term and a torque adjustment term. The stiffness-change feedback based on the SEA improves flexibility, and an intention amplification function is designed to enhance the user's active intention. The proposed method is validated on the designed knee exoskeleton. With the feedback compliance factor, the knee exoskeleton shows good compliance and flexibility. In extension and flexion, the highest γ values (γ reflects the intensity of the subject's initiative) are 3.390 and 3.705, respectively.
|
|
TO4C |
KC 907 |
AI Reasoning Methods for Robotics |
Regular |
Chair: Farzan, Siavash | California Polytechnic State University |
Co-Chair: Lee, Kyoobin | Gwangju Institute of Science and Technology |
|
14:20-14:30, Paper TO4C.1 | |
Relational Q-Functionals: Multi-Agent Learning to Recover from Unforeseen Robot Malfunctions in Continuous Action Domains |
|
Findik, Yasin | University of Massachusetts Lowell |
Robinette, Paul | University of Massachusetts Lowell |
Jerath, Kshitij | University of Massachusetts Lowell |
Azadeh, Reza | University of Massachusetts Lowell |
Keywords: Performance Evaluation and Optimization, AI Reasoning Methods for Robotics, Behavior-Based Systems
Abstract: Cooperative multi-agent learning methods are essential in developing effective cooperation strategies in multi-agent domains. In robotics, these methods extend beyond multi-robot scenarios to single-robot systems, where they enable coordination among different robot modules (e.g., robot legs or joints). However, current methods often struggle to quickly adapt to unforeseen failures, such as a malfunctioning robot leg, especially after the algorithm has converged to a strategy. To overcome this, we introduce the Relational Q-Functionals (RQF) framework. RQF leverages a relational network, representing agents' relationships, to enhance adaptability, providing resilience against malfunction(s). Our algorithm also efficiently handles continuous state-action domains, making it adept for robotic learning tasks. Our empirical results show that RQF enables agents to use these relationships effectively to facilitate cooperation and recover from an unexpected malfunction in single-robot systems with multiple interacting modules. Thus, our approach offers promising applications in multi-agent systems, particularly in scenarios with unforeseen malfunctions.
|
|
14:30-14:40, Paper TO4C.2 | |
A Methodology of Stable 6-DoF Grasp Detection for Complex Shaped Object Using End-To-End Network |
|
Jeong, Woojin | LG Electronics |
Gu, Yongwoo | LG Electronics |
Lee, Jaewook | LG Electronics |
Yi, June-Sup | LG Electronics |
Keywords: AI Reasoning Methods for Robotics, Grasping, Foundations of Sensing and Estimation
Abstract: The proficient grasping of objects is a significant challenge in robotics, particularly in dynamic environments. Deep learning-based approaches have shown promise in adapting to changing situations and achieving successful grasping. Previous research utilizing deep learning to generate grasp candidates can be categorized into two approaches based on the degrees of freedom of a grasp: the 4-DoF (top-down) and 6-DoF methods. Because the 4-DoF approach is limited by its lack of gripper orientation flexibility, the 6-DoF approach offers more accuracy and precision in grasping objects. This paper proposes improvements to the GSNet network, an open-source state-of-the-art network for 6-DoF grasping, through parameter tuning and the application of a stable score and Multiscale Cylinder Grouping strategies. Detailed explanations are also provided on the method of applying different strategies to a single network and on the approaches to parameter tuning. Through this implementation, grasping of complex-shaped objects and small objects improved. To validate the improvements, experiments were conducted to measure the AP on the scenes of the GraspNet-1Billion dataset. The results indicate that the maximum AP achieved in the case of novel objects is 27.92, which is higher than that of the original network. Additionally, the experimental results showed a success rate of 90.9% for bin picking with cluttered objects, demonstrating the practical utility of our network even in real-world environments.
|
|
14:40-14:50, Paper TO4C.3 | |
Transparent Object Depth Reconstruction Framework for Mixed Scenes with Transparent and Opaque Objects |
|
Jeong, Woojin | LG Electronics |
Gu, Yongwoo | LG Electronics |
Lee, Jaewook | LG Electronics |
Yi, June-Sup | LG Electronics |
Keywords: AI Reasoning Methods for Robotics, Grasping, Object Recognition
Abstract: The increasing demand for autonomous robots necessitates the recognition and handling of transparent objects commonly found in daily life. Recognizing transparent objects through sensors is challenging due to their partial transmission and refraction of light, resulting in inaccurate depth measurements. Through testing, we found that previous studies on transparent object depth reconstruction yielded inaccurate results in scenes where transparent and opaque objects are mixed, as well as false-positive results in scenes without transparent objects. We propose a framework that performs transparent object depth reconstruction even in scenes where both transparent and opaque objects coexist, while avoiding false-positive results in scenes without transparent objects. We utilize the structure of ClearGrasp, which has separate networks for different roles, allowing easy modifications. The Translab network, trained on the Trans10K-v2 dataset, is used for transparent object segmentation during inference. For the Depth Completion network, we utilize a network with a self-attention mechanism that effectively completes significant depth differences in the surroundings. Additionally, we design and apply a Depth Modification module to enhance the depth input of the Depth Completion network by retaining accurate depth values in transparent object regions. The experimental results showed that our network achieved a lower depth estimation RMSE of 0.019 for real novel objects compared to existing state-of-the-art networks. Furthermore, the experimental results showed a success rate of 94.1% for bin picking with cluttered objects.
|
|
14:50-15:00, Paper TO4C.4 | |
Curiosity-Driven Learning for Visual Control of Nonholonomic Mobile Robots |
|
Soualhi, Takieddine | Belfort-Montbéliard University of Technology |
Crombez, Nathan | Université De Technologie De Belfort-Montbéliard |
Lombard, Alexandre | Université De Technologie De Belfort-Montbéliard, Laboratoire Co |
Galland, Stephane | Université De Technologie De Belfort Montvéliard |
Ruichek, Yassine | University of Technology of Belfort-Montbeliard - France |
Keywords: AI Reasoning Methods for Robotics, Computer Vision and Visual Servoing, Wheeled Mobile Robots
Abstract: In this paper, we study the problem of visual servoing of nonholonomic mobile robots. Achieving precise positioning becomes particularly challenging within the classical approaches of visual servoing, primarily due to motion and field-of-view constraints. Previous work has demonstrated the effectiveness of deep reinforcement learning in addressing visual servoing tasks for robotic manipulators. In light of this, we propose a novel deep reinforcement learning framework that integrates deep recurrent policies and curiosity-driven learning to tackle the problem of visual servoing of nonholonomic mobile robots. First, we analyze the influence of the nonholonomic constraints on control policy learning, and subsequently, we evaluate our approach on both simulated and real-world environments. Our results demonstrate the superiority of our model in terms of spatial trajectories and convergence accuracy compared to the existing approaches.
|
|
15:00-15:10, Paper TO4C.5 | |
Is It Safe to Cross? Interpretable Risk Assessment with GPT-4V for Safety-Aware Street Crossing |
|
Hwang, Hochul | University of Massachusetts Amherst |
Kwon, Sunjae | UMass Amherst |
Kim, Yekyung | University of Masachusetts, Amherst |
Kim, Donghyun | University of Massachusetts Amherst |
Keywords: AI Reasoning Methods for Robotics, Computer Vision and Visual Servoing, Object Recognition
Abstract: Safely navigating street intersections is a complex challenge for blind and low-vision individuals, as it requires a nuanced understanding of the surrounding context -- a task heavily reliant on visual cues. Traditional methods for assisting in this decision-making process often fall short, lacking the ability to provide a comprehensive scene analysis and safety level. This paper introduces an innovative approach that leverages vision-language models (VLMs) to interpret complex street crossing scenes, offering a potential advancement over conventional traffic signal recognition techniques. By generating a safety score and scene description in natural language, our method supports safe decision-making for blind and low-vision individuals. We collected crosswalk intersection data that contains multiview egocentric images captured by a quadruped robot and annotated the images with corresponding safety scores based on our predefined safety score categorization. Grounded on the visual knowledge extracted from images and text prompts, we evaluate a VLM for safety score prediction and scene description. Our findings highlight the reasoning and safety score prediction capabilities of the VLM, activated by various prompts, as a pathway to developing a trustworthy system, crucial for applications requiring reliable decision-making support.
|
|
15:10-15:20, Paper TO4C.6 | |
Seq2Act: A Sequence-To-Action Framework for Novel Shapes in Robotic Peg-In-Hole Assembly |
|
Lee, Geonhyup | Gwangju Institute of Science and Technology |
Lee, Joosoon | Gwangju Institute of Science and Technology |
Lee, Kyoobin | Gwangju Institute of Science and Technology |
Keywords: AI Reasoning Methods for Robotics, Dynamics and Control, Industrial Robots
Abstract: Robotic peg-in-hole assembly is a critical task in manufacturing that often faces misalignment issues due to sensor inaccuracies and control mechanisms. Traditional methods for addressing these issues, while effective, have limitations in flexibility and efficiency. This paper presents Seq2Act, a new framework for generating sequential peg-in-hole data to address the challenges mentioned above, using physical simulation and imitation learning. The framework is powered by a transformer model that predicts the next action for insertion. By leveraging simulation, the framework generates data and strategies for peg-in-hole insertion across a wide range of shapes, from quadrilaterals to decagons. The model anticipates peg movements for alignment and insertion, achieving impressive success rates of 87.7% and 81.4% for seen and unseen shapes, respectively. This highlights its adaptability without heavy reliance on reinforcement learning or human-guided demonstrations. Our approach represents a significant advancement in robotic assembly, providing a solution that is both data-driven and adaptable to a variety of shapes and sizes.
|
|
TO4D |
KC 909 |
Intelligent Robotic Vehicles |
Regular |
Chair: Park, Chung Hyuk | George Washington University |
Co-Chair: Englot, Brendan | Stevens Institute of Technology |
|
14:20-14:30, Paper TO4D.1 | |
An Efficient Method for Solving Routing Problems with Energy Constraints Using Reinforcement Learning |
|
Do, Haggi | KAIST |
Son, Hakmo | Korea Advanced Institute of Science &Technology (KAIST) |
Kim, Jinwhan | KAIST |
Keywords: Intelligent Robotic Vehicles
Abstract: With the increasing popularity of electric vehicles (EVs), research on planning efficient routes for these vehicles is gaining growing attention. As there are a limited number of charging stations for EVs compared to gas stations for fossil fuel vehicles, EV routing requires careful consideration of energy constraints and replenishment. The classical traveling salesperson problem (TSP) and vehicle routing problem (VRP) are known to be NP-hard, which means that the electric vehicle routing problem (EVRP), a similar problem with added energy constraints, is computationally even more challenging. Reinforcement learning (RL) has recently been suggested as an effective tool that can alleviate the computational burden of such challenging problems. This paper presents an RL-based method for solving routing problems with energy constraints. Multi-head attention mechanisms are employed for both the encoder and decoder, and a masking scheme is applied during the decoding phase to compute a feasible solution and minimize energy constraint violations. The method generates an efficient route in which all task nodes are visited while meeting the energy requirements by visiting charging stations when needed. The performance of the methodology is demonstrated through a Monte Carlo simulation, and the results are discussed and analyzed.
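One way to realize the masking idea mentioned above is to disallow, at each decoding step, any node from which the vehicle could not still reach a charging station on its remaining energy. The sketch below is a hypothetical illustration with toy coordinates and a unit consumption rate, not the paper's learned decoder.

```python
# Sketch of an energy-feasibility mask over next-node candidates; the
# instance, consumption model, and threshold logic are illustrative only.
import numpy as np

def feasibility_mask(dist, current, battery, is_station, consumption=1.0):
    """Return a boolean mask of nodes that are safe to visit next."""
    n = dist.shape[0]
    mask = np.zeros(n, dtype=bool)
    for j in range(n):
        need = consumption * dist[current, j]
        if need > battery:
            continue  # cannot even reach node j
        remaining = battery - need
        nearest_station = dist[j, is_station].min()
        mask[j] = is_station[j] or consumption * nearest_station <= remaining
    return mask

rng = np.random.default_rng(0)
xy = rng.uniform(0, 10, size=(8, 2))                       # 6 task nodes + 2 stations
dist = np.linalg.norm(xy[:, None] - xy[None, :], axis=-1)
is_station = np.array([False] * 6 + [True] * 2)
print(feasibility_mask(dist, current=0, battery=6.0, is_station=is_station))
```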
|
|
14:30-14:40, Paper TO4D.2 | |
Improving Multi-Robot Visual Navigation Using Cooperative Consensus |
|
Lyons, Damian | Fordham University |
Rahouti, Mohamed | Fordham University |
Keywords: Multi-Robot Systems, Computer Vision and Visual Servoing, Intelligent Robotic Vehicles
Abstract: WAVN (Wide Area Visual Navigation) is an approach to robot team navigation that uses the current visual information from all team members to allow any team member to cooperatively navigate to a distant (out of view) destination. Neither map nor robot position information is needed. Current applications include agriculture and forestry, particularly in remote locations. However, increasingly ubiquitous camera sensors indicate that WAVN could be used as a lightweight navigation paradigm for robot teams in households, offices, public areas, and manufacturing applications. In this paper, we propose and evaluate a novel synergy of navigation and blockchain methodologies: A novel robot-centric blockchain consensus mechanism based on common visual landmarks between pairs of robots. We show how this mechanism guarantees specific navigability properties for the ledger, resulting in an improved WAVN navigation algorithm, Randomly Shortened Chain (RSC), and we present navigation performance results to demonstrate the improved efficiency due to cooperation.
|
|
14:40-14:50, Paper TO4D.3 | |
Buoy Light Detection and Pattern Classification for Unmanned Surface Vehicle Navigation |
|
Lee, Junseok | GIST(Gwangju Institute of Science and Technology) |
Kim, Taeri | Gwangju Institute of Science and Technology(GIST) |
Lee, Seongju | Gwangju Institue of Science and Technology (GIST) |
Park, Jumi | Gwangju Institute of Science and Technology |
Lee, Kyoobin | Gwangju Institute of Science and Technology |
Keywords: Object Recognition, Intelligent Robotic Vehicles, AI Reasoning Methods for Robotics
Abstract: Buoys and beacons indicate information about dangers in coastal navigation. At night, owing to challenging visibility, buoy lights are employed instead of buoys and beacons. For safe navigation, it is crucial to comprehend these lights, and autonomous vessels require algorithms capable of classifying buoy lights without human intervention, particularly at night. To address this, we propose a Buoy Light Detection and Classification Network (BLDCNet), which combines buoy light detection and pattern classification. BLDCNet incorporates the Temporal Shift Module (TSM), known for its excellent performance in video understanding, to achieve precise classification based on continuous light patterns in sequential images. We evaluate the performance of BLDCNet using a synthetic dataset generated to resemble real maritime environments and a real-world dataset obtained by capturing buoy light pattern videos onshore. BLDCNet achieved a classification performance of 89.21% for 11 different buoy light patterns.
|
|
14:50-15:00, Paper TO4D.4 | |
Adaptive Dynamic Window Approach for Robot Navigation in Disturbance Vector Fields |
|
Redwan Newaz, Abdullah Al | University of New Orleans |
Alam, Tauhidul | Louisiana State University Shreveport |
Keywords: Intelligent Robotic Vehicles, Motion Planning and Obstacle Avoidance, Dynamics and Control
Abstract: Reliable autonomous robot navigation in dynamic environments with external disturbances remains challenging. The Dynamic Window Approach (DWA) generates collision-free trajectories by optimizing over sensor observations and motion constraints. However, the DWA lacks the ability to account for unpredictable disturbances, making robot navigation unreliable. We propose an enhanced planning and control approach that incorporates learned disturbance models to improve adaptability. Our key idea is to represent disturbances as parametric vector fields. By learning the vector field online, we capture environmental flows to be leveraged during planning. We integrate the learned model into the DWA objective to generate optimized trajectories through disturbance flows while avoiding obstacles. The proposed adaptive planning framework is validated in simulations and real-world experiments with ground and aquatic robots. Different case studies demonstrate the approach’s ability to produce smooth, collision-free robot navigation in varied disturbance fields and environments. Compared to the standard DWA, our planner handles uncertainties and changing conditions better by learning online. Thus, this disturbance-incorporated planning enables more reliable autonomous navigation in uncertain, dynamic environments.
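A rough sketch of folding a learned disturbance vector field into DWA-style trajectory rollouts and scoring is given below; the parametric field, cost weights, and simplified unicycle model are illustrative placeholders rather than the authors' formulation.

```python
# Sketch: DWA-style candidate selection where each rollout adds a (toy)
# disturbance vector field to the predicted motion before scoring.
import numpy as np

def disturbance(p):
    """Toy parametric field: constant drift plus a rotational component."""
    return np.array([0.1, 0.0]) + 0.05 * np.array([-p[1], p[0]])

def rollout(state, v, w, dt=0.1, steps=20):
    x, y, th = state
    traj = []
    for _ in range(steps):
        dx = np.array([v * np.cos(th), v * np.sin(th)]) + disturbance(np.array([x, y]))
        x, y, th = x + dx[0] * dt, y + dx[1] * dt, th + w * dt
        traj.append((x, y))
    return np.array(traj)

def dwa_select(state, goal, obstacles, v_range=(0.0, 1.0), w_range=(-1.0, 1.0)):
    best, best_cost = None, np.inf
    for v in np.linspace(*v_range, 11):
        for w in np.linspace(*w_range, 11):
            traj = rollout(state, v, w)
            clearance = np.min(np.linalg.norm(traj[:, None] - obstacles[None], axis=-1))
            if clearance < 0.3:
                continue  # reject candidates that come too close to an obstacle
            cost = np.linalg.norm(traj[-1] - goal) - 0.5 * clearance - 0.1 * v
            if cost < best_cost:
                best, best_cost = (v, w), cost
    return best

obstacles = np.array([[1.0, 0.2], [2.0, -0.5]])
print(dwa_select(np.array([0.0, 0.0, 0.0]), goal=np.array([3.0, 0.0]), obstacles=obstacles))
```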
|
|
15:00-15:10, Paper TO4D.5 | |
3D Spatial Information Restoration Based on G-ICP Approach with LiDAR and Camera Mounted on an Autonomous Surface Vehicle |
|
Heo, Suhyeon | KRISO, Korea Research Institute of Ships & Ocean Engineering |
Kang, Minju | Korea Research Institute of Ships & Ocean Engineering |
Choi, Jinwoo | KRISO, Korea Research Institute of Ships & Ocean Engineering |
Park, Jeonghong | KRISO |
Keywords: Multisensor Data Fusion, Intelligent Robotic Vehicles
Abstract: In this study, we propose a 3D spatial information restoration approach using LiDAR and a camera to improve the autonomy level of autonomous surface vehicles (ASVs). The preprocessing phase removes the inherent noise in the data obtained from the LiDAR and the camera. Because RGB color information is sensitive to changes in illumination, gamma correction and a dark channel prior (DCP) approach are applied to minimize the variation of color information due to environmental factors. In addition, because processing the raw LiDAR point cloud data (PCD) is time-consuming and noise reduces processing accuracy, a filtering step removes noise and outliers. The relative coordinate information between the LiDAR and the camera, obtained through prior calibration, is then used to project the RGB color information onto the filtered PCD. Subsequently, the colored point clouds are accumulated using the generalized iterative closest point (G-ICP) approach to generate 3D spatial information. Field data obtained in an inland water environment are used to demonstrate the validity of the proposed approach, and the results are described.
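As an illustration of the filtering and accumulation steps, the sketch below downsamples and de-noises two point clouds and aligns them with Open3D ICP. The file names, voxel size, and correspondence threshold are placeholder assumptions, and point-to-point ICP stands in for the G-ICP variant the paper uses; the gamma/DCP color correction and LiDAR-camera calibration are not reproduced here.

```python
# Sketch only: preprocess two scans and register them with ICP using Open3D.
import numpy as np
import open3d as o3d

def preprocess(pcd, voxel=0.1):
    pcd = pcd.voxel_down_sample(voxel)                        # cut processing time
    pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
    return pcd

source = preprocess(o3d.io.read_point_cloud("scan_t1.pcd"))   # hypothetical scan files
target = preprocess(o3d.io.read_point_cloud("scan_t0.pcd"))

result = o3d.pipelines.registration.registration_icp(source, target, 1.0, np.eye(4))
source.transform(result.transformation)                       # move source into target frame
accumulated = target + source                                 # accumulate into one map
print(result.fitness, result.inlier_rmse)
```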
|
|
15:10-15:20, Paper TO4D.6 | |
Randomized Multi-Robot Patrolling with Unidirectional Visibility |
|
Echefu, Louis | Louisiana State University, Shreveport |
Alam, Tauhidul | Louisiana State University Shreveport |
Redwan Newaz, Abdullah Al | University of New Orleans |
Keywords: Robot Surveillance and Security, Multi-Robot Systems, Intelligent Robotic Vehicles
Abstract: Patrolling an adversarial environment with multiple robots equipped with vision sensors poses challenges, such as the potential for wireless communication jamming and limited visibility ranges. Early methods relying on deterministic paths are susceptible to predictability by adversaries. Conversely, recent non-deterministic approaches work in discrete environments but overlook sensor footprints and require synchronization. Therefore, this paper proposes an approach to compute patrolling policies for multiple distributed robots that monitor any polygonal environment leveraging limited unidirectional visibility regions in a continuous space and randomized patrolling paths. A visibility roadmap graph is initially constructed from a given environment through its recursive decomposition to account for unidirectional visibility. Our proposed multi-robot task allocation method then partitions the constructed visibility roadmap graph into a set of disjoint subgraphs (areas) and allocates them to multiple robots. Distributed randomized patrolling policies are finally computed in the form of Markov chains, utilizing convex optimization to minimize the average expected commute times for all pairs of locations in allocated areas. We present multiple simulation results to demonstrate the effectiveness of our visibility-based randomized patrolling approach. We also analyze the performance of our approach in detecting targets by robots through a series of simulation runs while they follow the computed policies during patrolling.
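For intuition, the expected commute times of a given randomized patrolling policy (a Markov chain over patrol locations) can be computed from the chain's fundamental matrix, as in the sketch below. The transition matrix is a toy example, and the convex program the paper solves to minimize these times is not reproduced.

```python
# Sketch: evaluate a patrolling Markov chain P by its pairwise expected
# commute times, via the stationary distribution and fundamental matrix.
import numpy as np

def commute_times(P):
    n = P.shape[0]
    # Stationary distribution: left eigenvector of P for eigenvalue 1.
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1))])
    pi = pi / pi.sum()
    Z = np.linalg.inv(np.eye(n) - P + np.outer(np.ones(n), pi))  # fundamental matrix
    M = (np.diag(Z)[None, :] - Z) / pi[None, :]                  # mean first-passage times
    return M + M.T                                               # commute times

P = np.array([[0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4],
              [0.5, 0.3, 0.2]])
print(commute_times(P))
```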
|
|
TO4E |
KC 912 |
Robotic Mechanism and Design |
Regular |
Chair: Lu, Qi | The University of Texas Rio Grande Valley |
Co-Chair: Bae, Jangho | University of Pennsylvania |
|
14:20-14:30, Paper TO4E.1 | |
Design and Fabrication of Multi-Functional Optical Microbots |
|
Jamil, Md Faiyaz | The Ohio State University |
Konara, Menaka | University of Massachusetts Dartmouth |
Pokharel, Mishal | University of Massachusetts, Dartmouth |
Park, Kihan | University of Massachusetts Dartmouth |
Keywords: Micro/Nano Robots, Micro/nanosystems, Mechanism and Design
Abstract: Microrobotics has evolved into an interesting research area with the development of new microfabrication techniques and the applicability of these micro units to real-world applications. Among the various types of microrobots, light-actuated microbots are widely studied for their promise of biocompatibility, real-time control, live feedback, etc. Optically controlled microbots utilize tightly focused laser beams for micro/nano-scale motion. A microbot that can perform different tasks improves the applicability of these micro-agents to applications ranging from micro-manipulation to interesting biomedical uses. Many researchers believe that, with proper engineering, these microbots can be used for cancer treatment and diagnosis in a non-invasive manner. In this study, multi-functional microbots have been developed mainly focusing on biomedical applications such as targeted drug delivery, cell characterization, and cell manipulation. Two-photon polymerization has been utilized for fabricating three-dimensional microbots using bio-compatible materials. Multiple optical traps were generated using a low-cost modular optical tweezer system, and time-shared optical trapping is used to control the microbots. This study gives a glimpse into the future possibilities of using light-actuated microbots to perform complex therapeutic tasks.
|
|
14:30-14:40, Paper TO4E.2 | |
Advancing Planar Magnetic Microswimmers: Swimming, Channel Navigation, and Surface Motion |
|
Duygu, Yasin Cagatay | Southern Methodist University |
Kararsiz, Gokhan | Southern Methodist University |
Liu, Austin | The Harker School |
Cheang, U Kei | Southern University of Science and Technology (SUSTech) |
Leshansky, Alexander | Technion |
Kim, MinJun | Southern Methodist University |
Keywords: Micro/Nano Robots, Micro/nanosystems, Dynamics and Control
Abstract: Planar magnetic microswimmers are well-suited for in vivo biomedical applications due to their cost-effective mass production through standard photolithography techniques. The precise control of their motion in diverse environments is a critical aspect of their application. This study demonstrates the control of these swimmers individually and as a swarm, exploring navigation through channels and showcasing their functional capabilities for future biomedical settings. We also introduce the capability of microswimmers for surface motion, complementing their traditional fluid-based propulsion and extending their functionality. Our research reveals that microswimmers with varying magnetization directions exhibit unique trajectory patterns, enabling complex swarm tasks. This study further delves into the behavior of these microswimmers in intricate environments, assessing their adaptability and potential for advanced applications. The findings suggest that these microswimmers could be pivotal in areas such as targeted drug delivery and precision medical procedures, marking significant progress in the biomedical and micro-robotic fields and offering new insights into their control and behavior in diverse environments.
|
|
14:40-14:50, Paper TO4E.3 | |
Introducing H4ND: Hyper-Resilient, 4-Fingered, Nimble, Dexterous Anthropomorphic Robot Hand Optimized for Research |
|
Kosanovic, Nicolas | University of Louisville |
Chagas Vaz, Jean | University of Louisville |
Keywords: Robotic Hands, Mechanism and Design, Biomimetic and Bioinspired Robots
Abstract: General-purpose grasping is a vital component of robotic manipulation that requires resilient and sophisticated gripper hardware. Anthropomorphic robot hands try to address this need by imitating the universal manipulation abilities of human hands; however, such technology tends to be more mechanically complex, expensive, and fragile than simpler grippers. This work presents the H4ND, an inventive, next-generation servo-driven 3D printed robotic hand designed to be repeatedly damaged and repaired at never-before-seen rates. Unlike its servo-driven counterparts (e.g. Allegrohand, HDHM, LEAP Hand), the H4ND uses size-optimized sacrificial linkages to shift its point-of-failure onto a laughably inexpensive, massively manufacturable, 3D-printed part. Hence, this grasping platform can suffer merciless abuse during experimentation, undergo a quick-and-easy repair process, and be fully functional again. This is demonstrated in experiments featuring telepresence control and heavy object manipulation. With a 4.50 kg maximum payload, 0.25 s finger closing time, 0.590 kg weight, 16 Degrees of Freedom, humanoid form factor, a 500 USD price tag, and less than a 5-minute mean repair time, the H4ND is a hyper-resilient, inexpensive, and potentially market-disrupting solution to robotic grasping. Therefore, the H4ND can empower researchers to spend less time worrying about their hardware’s cost/longevity, and more time doing research.
|
|
14:50-15:00, Paper TO4E.4 | |
Detection and Mitigation of Misleading Pheromone Trails in Foraging Robot Swarms |
|
Luna, Ryan | The University of Texas Rio Grande Valley |
Lu, Qi | The University of Texas Rio Grande Valley |
Keywords: Multi-Robot Systems, Robot Surveillance and Security
Abstract: This study addresses the overlooked aspect of security in swarm robotics by exploring the vulnerabilities of pheromone-based foraging robot swarms to deceptive pheromone trail attacks. We simulate scenarios in which detractor robots lay misleading trails to capture benign foraging robots in the swarm, analyze the impact of the attack on the swarm, and evaluate the foraging efficiency. We introduce a defense mechanism using density-based clustering (DBSCAN) along with a cluster grouping mechanism to isolate large batches of detractors at a time. The isolation strategy also incorporates an adaptive timing mechanism to identify detractors by computing the estimated travel time of pheromone trails. Our experiments show a decline in resource collection and an increase in forager robots captured as the number of detractors grows. However, the defense strategy effectively counters this challenge: it can isolate all detractors early in the simulation, significantly reducing forager capture rates and preserving the foraging performance of the swarm. This research highlights the security vulnerabilities in pheromone-based foraging algorithms and proposes a robust defense mechanism, contributing significantly to the development of more resilient foraging algorithms in swarm robotics. These findings are pivotal for deploying secure and efficient swarm robotics systems in real-world scenarios where both efficiency and security are paramount.
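A toy sketch of the clustering step is shown below: DBSCAN groups pheromone trail endpoints, and a simple heuristic flags tight, far-off clusters as candidate detractor trails. The features, eps value, and flagging rule are assumptions for illustration, not the paper's full detection and adaptive-timing logic.

```python
# Sketch: cluster trail endpoints with DBSCAN and flag suspicious clusters.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
benign = rng.normal(loc=[0, 0], scale=2.0, size=(40, 2))       # trails to real resources
deceptive = rng.normal(loc=[8, 8], scale=0.3, size=(10, 2))    # tight misleading cluster
trail_ends = np.vstack([benign, deceptive])

labels = DBSCAN(eps=0.8, min_samples=5).fit_predict(trail_ends)
for lbl in set(labels) - {-1}:
    members = trail_ends[labels == lbl]
    # Hypothetical rule: dense clusters far from the nest are treated as detractor trails.
    if np.linalg.norm(members.mean(axis=0)) > 5.0:
        print(f"cluster {lbl}: {len(members)} trails flagged as potentially deceptive")
```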
|
|
TI5A |
Room T1 |
Poster Session I |
Interactive |
|
15:20-17:00, Paper TI5A.1 | |
Toward 6D Velocity Estimation for Legged Robot Using Rolling Motion |
|
Jung, Sangwoo | Seoul National University |
Kim, Ayoung | Seoul National University |
Keywords: Simultaneous Localization and Mapping (SLAM), Legged Robots, Multisensor Data Fusion
Abstract: Recent advances in quadruped robot SLAM have demonstrated significant progress by integrating contact sensors and joint sensors. However, existing methods assume the contact frame to be stationary on the ground during the contact states, ignoring the effect of rolling motion and the resulting drift. In this work, we propose leveraging the rolling motion of the contact frame to compute the instantaneous velocity of a legged robot base relative to the world frame. We estimate the 6D velocity by exploiting the IMU and forward kinematics through this derivation. The efficacy of our approach is validated through real-world experiments, particularly for z-axis precision.
|
|
15:20-17:00, Paper TI5A.2 | |
Design of Control System for the Antarctic Exploration Robot |
|
Uhm, Taeyoung | Korean Institute of Robotics and Technology Convergence |
Kwon, Ji-Wook | Korea Institute of Robotics and Technology Convergence(KIRO) |
Lee, JongDeuk | Korea Institute of Robotics & Technology Convergence(KIRO) |
KIM, JONG CHAN | Korea Institute of Robotics & Technology Convergenc |
HYOJUN, LEE | Korea Institute of Robotics & Technology Convergence |
Choi, Young-Ho | Korean Institute of Robot and Convergence |
Keywords: Robotics in Hazardous Applications, Multi-Robot Systems, Wheeled Mobile Robots
Abstract: Recently, research has been conducted on exploring extremely cold regions such as Antarctica by operating unmanned robots. Operating unmanned robots requires a system that can monitor and remotely control multiple robots. Therefore, in this paper, we propose a control system designed to enable remote monitoring and control of robots and to command exploration missions over a wide-area (about 50 km) communication network in expansive regions such as extreme cold environments. The proposed system was tested with the robot and demonstrated its usefulness.
|
|
15:20-17:00, Paper TI5A.3 | |
Targeted Delivery of Deployable Therapeutic Sheets Using Magnetically Actuated Capsule |
|
Lee, Jihun | Daegu Gyeongbuk Institute of Science and Technology |
Park, Sukho | DGIST |
Keywords: Micro/Nano Robots, Medical Robotics and Computer-Integrated Surgery, Rehabilitation and Healthcare Robotics
Abstract: This study proposes a new target-delivery method that deploys therapeutic sheets (TheraSs) into the gastrointestinal (GI) tract using a magnetically actuated capsule. The TheraS is designed for GI-tract treatments and comprises therapeutic (chitosan–catechol), guard (triethylene glycol dimethacrylate), and unrolling (polyethylene glycol dimethacrylate) layers. Four rolled TheraSs are loaded in the four-channeled capsule and individually delivered to the targeted sites by magnetic actuation. A TheraS maintains its rolled state until it contacts GI fluid, which causes it to unroll and adhere to the GI-tract surface. Additionally, the TheraS induces hyperthermia and drug release under an alternating magnetic field (AMF), as magnetic nanoparticles and drugs are loaded onto its therapeutic layer. Herein, the TheraS is characterized morphologically, and the deliverability of the TheraS to the GI tract is verified through ex vivo tests. Finally, the cancer cell-killing performance of the TheraS was confirmed through cytotoxic therapy. Delivery of multiple TheraSs to multiple GI-tract lesion sites through the proposed capsule is confirmed, while the therapeutic functionalities are verified by hyperthermia and drug release under AMF.
|
|
15:20-17:00, Paper TI5A.4 | |
Ground-Relative Positioning in 3D Pose Estimation: A Novel Approach for Real-World Alignment of Skeleton Data |
|
Kim, Myeongseop | Korea Electronics Technology Institute |
Taehyeon, Kim | Korea Electronics Technology Institute |
Oh, Jean | Carnegie Mellon University |
Lee, Kyu In | University of Houston |
Lee, Kyung-Taek | Korea Electronics Technology Institute |
Keywords: Physical and Cognitive Human-Robot Interaction, Human-Robot Augmentation, Performance Evaluation and Optimization
Abstract: This paper presents a novel Dynamic Ground Alignment (DGA) algorithm, revolutionizing 3D pose estimation for enhanced ground-relative positioning in both real-world and digital twin environments. Our approach, integrating with the Mediapipe Pose framework, focuses on a body-centered coordinate system and employs 3D pose landmarks to accurately align digital characters with the ground plane in virtual settings. This method addresses the common issue of unrealistic character grounding, often found in traditional pose estimation techniques, by dynamically recalibrating the character's Y-position to maintain realistic grounding and interaction within the virtual environment. The application of DGA in various fields, including animation, virtual reality, sports science, physical therapy, and ergonomic studies, significantly improves the fidelity of digital avatars and models, offering a more integrated and realistic interpretation of human movement. While our study primarily demonstrates the efficacy of DGA through visual comparisons, it lays the groundwork for future research in quantitative validation and broader applications in human-computer interaction and digital human modeling.
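The grounding recalibration described above can be illustrated with a few lines that shift a skeleton so its lowest foot landmark lies on the ground plane. The MediaPipe-style landmark indices and the +y-up convention are assumptions for illustration, not necessarily the authors' exact implementation.

```python
# Sketch: shift a 3D skeleton so the lowest foot landmark touches y = 0.
import numpy as np

FOOT_IDS = (27, 28, 31, 32)  # assumed ankle / foot-index landmark indices

def align_to_ground(landmarks, foot_ids=FOOT_IDS):
    """landmarks: (N, 3) array of x, y, z positions, +y assumed up."""
    ground_offset = landmarks[list(foot_ids), 1].min()
    aligned = landmarks.copy()
    aligned[:, 1] -= ground_offset  # dynamically recalibrate the character's Y-position
    return aligned

rng = np.random.default_rng(0)
pose = rng.normal(loc=[0.0, 1.0, 0.0], scale=0.3, size=(33, 3))  # toy 33-landmark skeleton
aligned = align_to_ground(pose)
print(aligned[list(FOOT_IDS), 1].min())  # ~0: lowest foot landmark now on the ground plane
```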
|
|
15:20-17:00, Paper TI5A.5 | |
Preliminary Research on Underwater Object Recognition and Localization Using Stereo Hand Eye System |
|
Park, Daegil | Korea Research Institute of Ships & Ocean Engineering (KRISO) |
PYO, Seunghyun | UST KRISO School |
Lee, Yeongjun | Korea Research Institute of Ships and Ocean Engineering |
Keywords: Underwater Robotics, Computer Vision and Visual Servoing, Object Recognition
Abstract: Recently, the need for autonomous marine work robots has been growing due to labor shortages and the aging of fishing villages. However, underwater object perception and recognition are much more difficult than on the ground. Underwater sonar sensors are difficult to utilize at close range, and vision sensors are difficult to use in water not only because of their short sensing range but also because of feature-point uncertainty that depends on turbidity, light intensity, and direction. In this paper, we propose a stereo hand-eye system that mounts a camera on the end-tip of each of dual manipulators and changes the relative distance and direction between the end-tip and the object so that feature points are captured uniformly. To verify this system, the proposed system explored a target area in a water tank environment, tracked the target object, and then precisely estimated the underwater object's position using stereo vision algorithms.
|
|
15:20-17:00, Paper TI5A.6 | |
Pressure-Based EGaIn Soft Sensor with Three-Layer Structure |
|
Cho, Geun Sik | Kangwon National University |
Park, Yong-Jai | Kangwon National University |
Keywords: Soft Robotics, Force and Tactile Sensing
Abstract: Recent studies focusing on wearables aim to assist human movement and conserve energy, leading to a growing interest in developing various soft sensors for these applications. This research involves the use of diverse materials and fabrication methods. Among the various materials under exploration, EGaIn, a conductive eutectic gallium-indium alloy, has attracted significant attention. This paper focuses on the development of a three-layer pressure sensor utilizing EGaIn. We investigate the performance of these sensors, particularly examining the impact of cone-shaped protrusions in the pathways where the EGaIn is located. The fabricated sensors with these inserted protrusions were subjected to testing, and it was observed that the sensors incorporating protrusions demonstrated more stable resistance changes and could measure higher pressures.
|
|
15:20-17:00, Paper TI5A.7 | |
Pose Estimation Method for Depth Camera Using Indoor 3D Map and 3D Registration |
|
Jung, Sukwoo | Korea Electronics Technology Institute |
Kim, Myeongseop | Korea Electronics Technology Institute |
Taehyeon, Kim | Korea Electronics Technology Institute |
Lee, Kyung-Taek | Korea Electronics Technology Institute |
Keywords: Simultaneous Localization and Mapping (SLAM), Multisensor Data Fusion, Object Recognition
Abstract: Recent research in sensor pose estimation has gained considerable attention due to its wide-ranging applicability in various domains, including robotics, Virtual Reality (VR), and Augmented Reality (AR). Current sensor pose estimation methods typically rely on sensor data such as visual odometry, Inertial Measurement Unit (IMU), Wi-Fi, or Bluetooth. In this study, we introduce an innovative approach that diverges from conventional techniques, which often depend solely on images or IMU sensors for pose estimation. Instead, we take advantage of pre-reconstructed 3D maps to significantly enhance pose estimation accuracy. To accomplish this, we employ high-precision indoor maps obtained through the use of a LiDAR scanner. These pre-reconstructed 3D maps serve not only to sense the initial position of the sensor but also to accurately calculate positional coordinates by applying a 3D registration algorithm alongside the acquired data from the depth camera sensor. The algorithm's potential has been validated in this paper, and further experiments are planned in the future to confirm its effectiveness.
|
|
15:20-17:00, Paper TI5A.8 | |
Reinforcement Learning Based Control for a Continuum Mechanism Actuated by Pneumatic Artificial Muscles |
|
Kang, Bongsoo | Hannam University |
Keywords: Soft Robotics, Actuation and Actuators, AI Reasoning Methods for Robotics
Abstract: In this paper, a reinforcement learning technique is applied to control an internal continuum mechanism driven by pneumatic artificial muscles. Pneumatic artificial muscles are lightweight and can produce large forces, but their complex dynamic characteristics make mathematical modeling difficult. It is therefore not easy for conventional model-based control schemes to achieve the desired motions of the mechanism, so reinforcement learning, similar to human behavior, is implemented to perform given tasks through iterative processing. In particular, experimental results showed that the proposed reinforcement learning incorporating deep learning techniques yielded good performance even under substantial environmental uncertainty.
|
|
15:20-17:00, Paper TI5A.9 | |
Integrating Robotic Navigation with Blockchain: A Novel PoS-Based Approach for Heterogeneous Robotic Teams |
|
Paykari, Nasim | Fordham University |
Alfatemi, Ali | Fordham University |
Lyons, Damian | Fordham University |
Rahouti, Mohamed | Fordham University |
Keywords: Multi-Robot Systems, Dynamics and Control, Computer Vision and Visual Servoing
Abstract: This project explores a novel integration of blockchain methodologies with Wide Area Visual Navigation (WAVN) to address challenges in visual navigation for a heterogeneous team of mobile robots deployed for unstructured applications in agriculture, forestry etc. Focusing on overcoming challenges such as GPS independence, environmental changes, and computational limitations, the study introduces the Proof of Stake (PoS) mechanism, commonly used in blockchain systems, into the WAVN framework. This integration aims to enhance the cooperative navigation capabilities of robotic teams by prioritizing robot contributions based on their navigation reliability. The methodology involves a stake weight function, consensus score with PoS, and a navigability function, addressing the computational complexities of robotic cooperation and data validation. This innovative approach promises to optimize robotic teamwork by leveraging blockchain principles, offering insights into the scalability, efficiency, and overall system performance. The project anticipates significant advancements in autonomous navigation and the broader application of blockchain technology beyond its traditional financial context.
|
|
15:20-17:00, Paper TI5A.10 | |
Design of Fuzzy Logic Parameter Tuners for Upper-Limb Assistive Robots |
|
Coco, Christopher | University of Massachusetts Lowell |
Spanos, Jonathan | University of Massachusetts Lowell |
Osooli, Hamid | University of Massachusetts Lowell |
Azadeh, Reza | University of Massachusetts Lowell |
Keywords: Rehabilitation and Healthcare Robotics
Abstract: Assistive Exoskeleton Robots are helping restore functions to people suffering from underlying medical conditions. These robots require precise tuning of hyper-parameters to feel natural to the user. The device hyper-parameters often need to be re-tuned from task to task, which can be tedious and require expert knowledge. To address this issue, we develop a fuzzy logic controller that can dynamically tune robot gain parameters to adapt its sensitivity to the user's intention determined from muscle activation. The designed fuzzy controllers benefit from a set of expert-defined rules and do not rely on extensive amounts of training data. We evaluate the designed controller with three different tasks and compare our results against the manually tuned system. Our preliminary results show that our controller reduces the amount of fighting between the device and the human, measured using a set of pressure sensors.
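A minimal sketch of a rule-based fuzzy tuner of the kind described is given below, mapping a normalized muscle-activation input to a gain multiplier through triangular membership functions and centroid defuzzification. The breakpoints, rules, and gain values are illustrative, not the paper's expert-defined rule base.

```python
# Sketch: three-rule fuzzy tuner from muscle activation to an assistance gain.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def fuzzy_gain(activation):
    # Fuzzify the input (activation in [0, 1]) into three linguistic sets.
    low = tri(activation, -0.2, 0.0, 0.5)
    med = tri(activation, 0.0, 0.5, 1.0)
    high = tri(activation, 0.5, 1.0, 1.2)
    # Rules: low activation -> gentle gain, medium -> nominal, high -> strong assist.
    gains = np.array([0.5, 1.0, 1.8])
    weights = np.array([low, med, high])
    return float((weights * gains).sum() / (weights.sum() + 1e-9))  # centroid defuzzification

for a in (0.1, 0.5, 0.9):
    print(a, round(fuzzy_gain(a), 2))
```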
|
|
15:20-17:00, Paper TI5A.11 | |
Soft Wearable Thermotouch Haptic Actuator |
|
Lee, Seohu | Korea University |
Jang, Seongkwan | Korea University |
Cha, Youngsu | Korea University |
Keywords: Haptics, Soft Robotics
Abstract: In this paper, we present a soft wearable thermotouch haptic actuator to produce both touch and thermal tactile feedback simultaneously and independently. The actuator is a two-layer structure of a pneumatic actuator and a thermoelectric device. The significantly different two parts are assembled by a novel design. A wearable air pump based on an origami pattern is also proposed to replace a bulky external air compressor. The pneumatic actuator for touch haptic feedback is also specially designed, tailored to the air pump.
|
|
15:20-17:00, Paper TI5A.12 | |
Determination of Calf Contact Point in Ultrasound Examination for Diagnosis of Chronic Venous Insufficiency |
|
Choi, ByeongSeon | JeonBuk National University |
Park, Jaebyung | Jeonbuk National University |
Keywords: Computer Vision and Visual Servoing, Rehabilitation and Healthcare Robotics, Medical Robotics and Computer-Integrated Surgery
Abstract: Chronic venous insufficiency (CVI) induces discomfort through symptoms such as lower extremity edema and venous hypertension. This paper proposes a method for determining calf contact points for ultrasound examination during the robotic CVI diagnosis process. The proposed perception system integrates image and point cloud processing to precisely identify contact points on calf models. We first utilize a color filtering technique to extract regions of interest from images. The filtered images are then combined with depth information to generate a point cloud. For efficiency in data processing, points beyond one meter from the camera are removed. Subsequently, a planar patch detection technique is applied to the point cloud data to identify the shapes of the calf models. Utilizing the center point coordinates and xyz extents of the detected patches, the robot determines the path to perform the ultrasound examination. Our approach establishes a foundation for efficient and accurate CVI diagnosis by enabling the robot to identify contact points for ultrasound examination.
|
|
15:20-17:00, Paper TI5A.13 | |
Development of Variable Scaling Teleoperation Framework for Robotic Spine Surgery System |
|
Lee, Hunjo | Korea University of Science and Technology, Korea Institute of I |
Yang, Gi-Hun | KITECH |
Keywords: Telerobotics, Medical Robotics and Computer-Integrated Surgery, Robotic Systems Architectures and Programming
Abstract: This paper introduces a variable scaling teleoperation framework for intuitive robotic surgery. Differences in characteristics between the master and slave devices and limited information from the remote environment are important issues that make teleoperation difficult and unintuitive. Therefore, a variable scaling teleoperation framework is necessary to improve the manipulation abilities of surgeons. In this study, we propose a variable scaling framework that can modulate the motion scale or stiffness scale of a slave robot in real time according to the current process. We use the grip force to represent the motion intention of operators and to adjust the scale factors. The relationship between grip force and the scale factors is based on instinctive human skill. The proposed framework enables surgeons to execute robotic surgery intuitively. Our future work aims to implement the proposed system in a real surgical environment and verify its effectiveness.
|
|
15:20-17:00, Paper TI5A.14 | |
Optimization of Navigating Magnetic Particles in Blood Vessels Using FFP in Open-Type EMA System |
|
Yang, Seungun | DGIST |
Kee, Hyeonwoo | DGIST |
Nguyen, Kim Tien | Korean Institute of Medical Microrobotics |
Kim, Jayoung | Korea Institute of Medical Microrobotics |
Park, Sukho | DGIST |
Keywords: Micro/Nano Robots, Micro/nanosystems, Medical Robotics and Computer-Integrated Surgery
Abstract: Many papers have proposed steering magnetic microparticles (MMPs) using magnetic fields for targeted drug delivery. In particular, research is actively being conducted on both steering and tracking magnetic particles using a field-free point (FFP). However, existing studies use a closed-type electromagnetic actuation (EMA) coil system, making it difficult to apply in an actual surgical environment or to use together with external imaging devices such as X-rays. In this study, we aim to overcome these limitations by using an open-type EMA system. However, an open-type EMA system presents challenges such as magnetic force reduction with distance and magnetic field anisotropy. To address these issues, this study proposes an optimization of the open-type EMA system, an improved FFP generation method, and a logic to steer multiple magnetic particles in blood vessels using the anisotropic FFP. Finally, through simulations and experiments with a channel phantom, we validate the feasibility of navigating MMPs using FFPs within an open-type EMA system.
|
|
15:20-17:00, Paper TI5A.15 | |
Antenna Tracking System Application for Seamless UAV Flight Based on oneM2M IoT Platform |
|
Lee, Jiho | Korea Electronics Technology Institute |
Park, Jong-Hong | Korea Electronics Technology Institute |
Ahn, Il-Yeop | Korea Electronics Technology Institute |
Keywords: Aerial and Flying Robots
Abstract: Unmanned Aerial Vehicles (UAVs) are a revolutionary technology with the potential to impact civil environments in the near future. For seamless and safe UAV operations, both RF-based local communication and LTE-based global communication operating environments are essential. To achieve this, reliable connectivity for UAVs, even in communication shadow areas, is indispensable. The aim of this paper is to provide dependable connectivity between the UAV (drone) and the GCS (Ground Control System) using an autonomous antenna tracker with RF and LTE communication. The proposed system and application will enable safe operation of multiple heterogeneous UAVs in communication shadow areas and low-altitude airspace, as well as remote control beyond line of sight.
|
|
15:20-17:00, Paper TI5A.16 | |
The Effect of Robotic Ankle and MTP Joints Stretching on Plantar Fasciitis |
|
Kang, Hyun Soo | University of California Berkeley |
Chae, Seongok | Korea Advanced Institute of Science and Technology |
Jun, Yoojin | Korea Advanced Institute of Science and Technology |
Lee, Hojik | Korea Advanced Institute of Science and Technology |
Park, Hyung-Soon | Korea Advanced Institute of Science and Technology |
Keywords: Rehabilitation and Healthcare Robotics
Abstract: Plantar fasciitis, causing severe foot pain, is commonly treated with stretching; however, its efficacy varies based on factors like foot arch posture. This study, utilizing the PASS robot, explores how short-duration stretching affects plantar fasciitis based on foot arch types. Six plantar fasciitis patients, classified into normal arch, pes cavus, and pes planus groups, underwent simultaneous MTP and ankle joint stretching with the PASS robot. To confirm the stretching effects, stiffness values, stance period, and plantar fascia thickness were measured and analyzed before and after stretching. No significant differences were found in stretching effects based on foot arch groups. However, numerically, MTP joint stiffness showed more improvement in the normal arch group compared to the pes cavus group. Characteristics of pes cavus may contribute to diminished stretching effects.
|
|
15:20-17:00, Paper TI5A.17 | |
Development of a Gripping System for Sorting Agricultural Produce Post Harvest |
|
Kim, Myeongjin | Korea Institute of Industrial Technology (KITECH) |
Kim, Jiwoong | Korea Institute of Industrial Technology (KITECH) |
Yun, Dongho | Korea Institute of Industrial Technology (KITECH) |
Ju, Chanyoung | Korea Institute of Industrial Technology |
Keywords: Grasping, Mechanism and Design, Robotic Hands
Abstract: With the recent advancements in agricultural technology and the increasing demand for agricultural products, research on grippers for the purpose of sorting agricultural produce has been actively progressing. This paper addresses the conceptual design of a sorting system and grippers specifically tailored for identifying and sorting spoiled agricultural produce, such as onions and apples. The proposed sorting system, based on a camera, tracks the positions of deteriorated crops and employs a gripper attached to a delta robot to remove them. In this context, the gripper proposed in this paper envelops the entire deteriorated crop using gripper fingers and a rubber membrane, ensuring that contaminants are not introduced to high-quality produce during the sorting process. The paper covers the mechanism design for developing such grippers and presents conceptual diagrams of the sorting system. Additionally, insights into potential applications of the proposed system and suggestions for future research directions are provided.
|
|
15:20-17:00, Paper TI5A.18 | |
A Study on the Development of Guidelines for Evaluation of Usability of Care Robots for Lift and Transfer |
|
Oh, Hyejung | Korea Orthopedics & Rehabilitation Engineering Center |
JUNG, Sungbae | Korea Orthopedics and Rehabilitation Engineering Center |
YUK, SUNWOO | Korea Orthopedics & Rehabilitation Engineering Center |
Keywords: Rehabilitation and Healthcare Robotics
Abstract: With the aging of the population, the development of IT, and the 4th Industrial Revolution, various care robots are being developed. A 'care robot' is a robot, or a device to which robot technology is applied, that provides physical and emotional assistance to the elderly or the disabled who have difficulties in their daily lives. When selecting or using a care robot, the safety and convenience of the product are very important factors. Accordingly, this study intends to develop guidelines for evaluating the usability of care robots. In this study, a usability evaluation guideline index for care robots for mobility is developed, and the effectiveness, efficiency, and satisfaction of the product are evaluated by analyzing the usability evaluation and the collected data. There were a total of 11 usability evaluation scenarios, and the usability evaluation was conducted with elderly users and caregivers. Questionnaires were distributed and opinions were collected. The results of the usability evaluation are as follows: - Regarding safety, many evaluators considered the product satisfactory because it had no parts that might cause harm to the user. - Regarding effectiveness, no evaluators found the product very difficult to use, although some noted that skill was needed. - Regarding convenience, there were comments on the need to improve the battery weight and charging guidance.
|
|
15:20-17:00, Paper TI5A.19 | |
A New Adaptive Robotic Finger with the Selectively Actuatable Passive Joint Mechanism |
|
Park, Jongwoo | Korea Institute of Machinery & Materials |
Jeong, Hyunhwan | Korea University |
Keywords: Robotic Hands, Mechanism and Design, Grasping
Abstract: In this paper, we introduce a new type of robotic finger with an under-actuated adaptive joint mechanism. The proposed finger mechanism has three joints with just one active joint. It is equipped with gear trains through the links to transfer torque from a single actuator to all passive joints, and the clutch brake in each joint is selectively engaged to control each joint independently. To realize the proposed mechanism, modeling and design are conducted. Then, a prototype robotic gripper equipped with three of the proposed robotic finger mechanisms is developed. The feasibility of the proposed robotic finger mechanism is verified through initial experiments with the actual hardware.
|
|
15:20-17:00, Paper TI5A.20 | |
Camber-Changing Flapping Hydrofoils for Efficient and Environmental-Safe Water Propulsion System |
|
Romanello, Luca | TUM |
Hohaus, Leonard | Technische Universität München |
Schmitt, David-Marian | Technical University Munich |
Armanini, Sophie Franziska | Technical University of Munich |
Keywords: Underwater Robotics, Mechanism and Design, Biomimetic and Bioinspired Robots
Abstract: This research introduces a novel hydrofoil-based propulsion framework for unmanned aquatic robots, inspired by the undulating locomotion observed in select aquatic species. The proposed system incorporates a camber-modulating mechanism to enhance hydrofoil efficiency and propulsive force generation. Through dynamic simulations, we validate the effectiveness of the camber-adjusting hydrofoil compared to a symmetric counterpart. The results demonstrate a significant improvement in horizontal thrust, emphasizing the potential of the cambering approach to enhance propulsive performance. Additionally, a prototype flipper design is presented, featuring individual control of heave and pitch motions, as well as a camber-adjustment mechanism. The integrated system not only provides efficient water-based propulsion but also offers the capacity for generating vertical forces during take-off maneuvers for seaplanes. The design is tailored to harness wave energy, contributing to the exploration of alternative energy resources. This work advances the understanding of bionic oscillatory principles for aquatic robots and provides a foundation for future developments in environmentally safe and agile underwater exploration.
|
|
15:20-17:00, Paper TI5A.21 | |
Estimating Vertical Forces on a Trampoline Using Shadow Images of a Foot-Shaped Jig Mounted on a Robotic Manipulator |
|
Park, Gunseok | Korea Institute of Industrial Technology |
Choi, Seung-Hwan | Korea Institute of Industrial Technology |
Kim, Min Young | Kyungpook National University |
Lee, Suwoong | Korea Institute of Industrial Technology |
Keywords: Contact: Modeling, Sensing and Control, Force and Tactile Sensing, Computer Vision and Visual Servoing
Abstract: Trampoline exercise is recognized as a beneficial activity for rehabilitation and fitness, contributing significantly to lower limb strength enhancement, overall physical conditioning, and rehabilitation therapy. Traditionally, collecting data involved attaching sensors to the trampoline equipment or users, a method that carried the risk of sensor malfunction due to continuous use of the trampoline. This study proposes a new approach using a camera sensor installed beneath the trampoline to estimate the vertical forces exerted solely through shadow images of a foot-shaped jig. The camera sensor captured shadow images at various points under the trampoline using a foot-shaped jig mounted on a robotic manipulator placed on the trampoline. These images were processed using Convolutional Neural Network (CNN) models, such as ResNet50, VGG16, DenseNet121, and AlexNet, through transfer learning. After a comprehensive performance evaluation, the ResNet50 and DenseNet121 models exhibited superior results.
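As a rough illustration of the transfer-learning setup described above, the sketch below adapts an ImageNet-pretrained ResNet50 to single-value force regression from shadow images; the placeholder batch, image size, and hyperparameters are our assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code): adapting a pretrained ResNet50
# for single-value vertical-force regression from shadow images.
import torch
import torch.nn as nn
from torchvision import models

def build_force_regressor(freeze_backbone: bool = True) -> nn.Module:
    # Load an ImageNet-pretrained backbone and replace the classifier head
    # with a single regression output (estimated vertical force).
    model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
    if freeze_backbone:
        for p in model.parameters():
            p.requires_grad = False
    model.fc = nn.Linear(model.fc.in_features, 1)  # regression head
    return model

model = build_force_regressor()
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)

# One hypothetical training step on a batch of shadow images (3x224x224).
images = torch.randn(8, 3, 224, 224)   # placeholder batch
forces = torch.randn(8, 1)             # placeholder ground-truth forces
optimizer.zero_grad()
loss = criterion(model(images), forces)
loss.backward()
optimizer.step()
```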
|
|
15:20-17:00, Paper TI5A.22 | |
A Study on Fault Detection in Rotating Machines Using STFT Image of Time-Series Current Data |
|
Choi, Seung-Hwan | Korea Institute of Industrial Technology |
Lee, Suwoong | Korea Institute of Industrial Technology |
Keywords: Contact: Modeling, Sensing and Control, Industrial Robots, Performance Evaluation and Optimization
Abstract: This paper proposes a method for detecting faults in rotating machines. Typically, fault detection in rotating machines is carried out by attaching an accelerometer to collect vibration data, which is then applied to a fault detection algorithm. In this study, we present and experimentally validate a method of detecting faults using current data collected from endurance tests of driving modules, which are commonly used in rotating machines. In the experiment, the collected current and vibration data were transformed into short-time Fourier transform (STFT) image data, which were then applied to a convolutional neural network (CNN) DenseNet model for fault detection and performance comparison. The experimental results confirmed that the current data demonstrated fault detection performance comparable to that of the vibration data, thereby validating the effectiveness of the method.
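For readers unfamiliar with the preprocessing step, the following minimal sketch shows how a one-dimensional current signal can be turned into an STFT magnitude image for a CNN; the sampling rate, window length, and toy signal are assumptions, not the paper's settings.

```python
# Illustrative sketch (assumed parameters): converting a time-series current
# signal into an STFT magnitude image suitable for CNN-based fault detection.
import numpy as np
from scipy.signal import stft
import matplotlib.pyplot as plt

fs = 10_000                      # assumed sampling rate [Hz]
t = np.arange(0, 2.0, 1 / fs)
current = np.sin(2 * np.pi * 60 * t) + 0.05 * np.random.randn(t.size)  # toy signal

# Short-time Fourier transform: 256-sample windows with 50% overlap.
f, tt, Zxx = stft(current, fs=fs, nperseg=256, noverlap=128)
magnitude_db = 20 * np.log10(np.abs(Zxx) + 1e-12)

# Save the spectrogram as an image; a stack of such images would feed the CNN.
plt.imsave("stft_sample.png", magnitude_db, origin="lower", cmap="viridis")
```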
|
|
15:20-17:00, Paper TI5A.23 | |
Nonlinear Identification of Unknown Object Dynamics for Human-Robot Collaborative Tasks |
|
Kim, Hakjun | Kyung Hee University |
Kim, Sanghyun | Kyung Hee University |
Park, Jinseong | Korea Institute of Machinery and Materials |
Keywords: Modeling, Identification, Calibration, Physical and Cognitive Human-Robot Interaction
Abstract: In this paper, the Sparse Identification of Nonlinear Dynamics (SINDy) method was adopted to identify the dynamics of an object when it is held jointly by a human and a robot for collaborative tasks. A non-threatening perturbation that satisfies the persistence-of-excitation criterion is generated to improve the estimation accuracy. In simulation, the estimation performance was compared with the results obtained by an extended Kalman filter.
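To make the SINDy idea concrete, the sketch below runs a minimal sequentially thresholded least-squares identification on a toy damped oscillator; the polynomial candidate library and system are our own choices, not the object-dynamics model identified in the paper.

```python
# Minimal SINDy-style sketch on a toy damped oscillator (not the paper's model):
# sequentially thresholded least squares over a polynomial candidate library.
import numpy as np

def library(X):
    # Candidate functions: [1, x, v, x^2, x*v, v^2]
    x, v = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x), x, v, x**2, x * v, v**2])

def stlsq(Theta, dXdt, threshold=0.05, iters=10):
    Xi = np.linalg.lstsq(Theta, dXdt, rcond=None)[0]
    for _ in range(iters):
        small = np.abs(Xi) < threshold
        Xi[small] = 0.0
        for k in range(dXdt.shape[1]):          # re-fit only the surviving terms
            big = ~small[:, k]
            if big.any():
                Xi[big, k] = np.linalg.lstsq(Theta[:, big], dXdt[:, k], rcond=None)[0]
    return Xi

# Simulate x'' = -2x - 0.3x' and build numerical derivatives.
dt = 1e-3
X = np.zeros((5000, 2)); X[0] = [1.0, 0.0]
for i in range(1, len(X)):
    x, v = X[i - 1]
    X[i] = [x + v * dt, v + (-2.0 * x - 0.3 * v) * dt]
dXdt = np.gradient(X, dt, axis=0)

Xi = stlsq(library(X), dXdt)
print(Xi)   # expected dominant terms: dx/dt = v, dv/dt = -2x - 0.3v
```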
|
|
15:20-17:00, Paper TI5A.24 | |
Imaginary Meta Reinforcement Learning for Decision-Making in Autonomous Driving |
|
Wen, Lu | University of Michigan, Ann Arbor |
Zhang, Songan | Shanghai Jiaotong University |
Keywords: Intelligent Robotic Vehicles, Motion Planning and Obstacle Avoidance
Abstract: Meta reinforcement learning (Meta RL) has been amply explored to quickly learn an unseen task by transferring previously learned knowledge from similar tasks. However, most state-of-the-art Meta RL algorithms require the meta-training tasks to have a dense coverage of the task distribution and a great amount of data for each of them. In this paper, we propose MetaDreamer, a context-based Meta RL algorithm that requires fewer real training tasks and less data by performing meta-imagination and MDP-imagination. We perform meta-imagination by interpolating on the learned latent context space with disentangled properties, and MDP-imagination through a generative world model into which physical knowledge is introduced. Our experiments with various benchmarks show that MetaDreamer outperforms existing approaches in data efficiency and interpolated generalization.
|
|
15:20-17:00, Paper TI5A.25 | |
The Concept of the Automated Fit-Up System and CAD Based Work Pieces Detection Algorithm for Sub-Assembly of Ship Building |
|
Kim, Han-Gyeol | Korea Institute of Robotics & Technology Convergence |
Jeong, Yujeong | Korea Institute of Robotics and Technology Convergence |
Jang, Minwoo | Korea Institute of Robotics and Technology Convergence |
Baek, Jonghwan | Korea Institute of Robotics and Technology Convergence |
Lee, Jae Youl | Korea Institute of Robotics and Technology Convergence |
Keywords: Mechanism and Design, Object Recognition, Industrial Robots
Abstract: The sub-assembly process significantly influences the quality of shipbuilding, prompting extensive research to enhance productivity through automation. However, progress in automating the fit-up process has been relatively slow because the task is complex, involving recognition, transportation, fixation, and pre-welding of components. In this study, the concept of an automated fit-up system for shipbuilding sub-assembly is proposed, and the fit-up process using the automated system is introduced. Additionally, an algorithm for recognizing plates placed on the assembly table is proposed. The algorithm was tested on a miniature testbed, demonstrating an accuracy of 98.8% based on the RMSE criterion.
|
|
15:20-17:00, Paper TI5A.26 | |
High Bandwidth Position Control of a Non-Collocated Tendon-Driven Manipulator |
|
Kim, Nam Gyun | Korea Advanced Institute of Science and Technology |
Ryu, Jee-Hwan | Korea Advanced Institute of Science and Technology |
Keywords: Dynamics and Control, Manipulation Planning and Control, Contact: Modeling, Sensing and Control
Abstract: Tendon-driven manipulators with non-collocated actuator placement provide numerous advantages. However, accurate high-bandwidth control of these manipulators remains a challenge. This paper proposes a high-bandwidth position control scheme for non-collocated tendon-driven manipulators. We modified the previously proposed successive stiffness increment (SSI) approach for higher-bandwidth position control by introducing a non-offset releasing path. Furthermore, the time-domain passivity approach was integrated with SSI to ensure the stability of the SSI-based high-bandwidth non-collocated control system. Consequently, the proposed dual-loop control scheme allows higher bandwidth in position tracking while ensuring stability and removing undesirable oscillations.
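For context on the time-domain passivity idea, the sketch below implements a generic, simplified passivity observer and controller on sampled port signals: the observer integrates the power flow and damping is injected only when the observed energy becomes negative. It illustrates the general principle only, not the paper's dual-loop SSI scheme.

```python
# Generic time-domain passivity observer/controller sketch (simplified; not the
# paper's dual-loop SSI scheme): monitor the energy flow at a port and add just
# enough damping to keep the observed energy non-negative.
import numpy as np

def tdpa(force, velocity, dt):
    energy = 0.0
    damped_force = np.zeros_like(force)
    for n in range(len(force)):
        energy += force[n] * velocity[n] * dt          # passivity observer
        alpha = 0.0
        if energy < 0.0 and abs(velocity[n]) > 1e-9:   # passivity controller
            alpha = -energy / (velocity[n] ** 2 * dt)  # dissipate exactly the deficit
            energy = 0.0
        damped_force[n] = force[n] + alpha * velocity[n]
    return damped_force

# Toy usage with synthetic port signals.
dt = 1e-3
t = np.arange(0, 1.0, dt)
v = np.sin(2 * np.pi * 2 * t)
f = -1.2 * v + 0.1 * np.random.randn(t.size)   # slightly active (energy-generating) port
f_safe = tdpa(f, v, dt)
```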
|
|
15:20-17:00, Paper TI5A.27 | |
Steering Mechanism Via Tail Bias Effect for Soft Toroidal Robots |
|
Park, Shinwoo | KAIST |
Kim, Nam Gyun | Korea Advanced Institute of Science and Technology |
Ryu, Jee-Hwan | Korea Advanced Institute of Science and Technology |
Keywords: Mechanism and Design, Soft Robotics, Modeling, Identification, Calibration
Abstract: This paper introduces a novel soft toroidal robot made from nylon fabric, representing a significant advancement in soft robotics. The robot features a unique steering mechanism leveraging the tail bias effect of nylon, enabling effective navigation in various terrains while maintaining its soft nature. Experimental results validate the theoretical model, demonstrating that the restoring moment is linearly related to curvature and independent of pressure. Simulations and demonstrations, including a 90-degree T-shaped pipe test, confirm the robot's maneuverability and efficiency in steering, especially in confined spaces.
|
|
15:20-17:00, Paper TI5A.28 | |
Image Processing Based on Deep Learning for Detecting Defect Problems in Ropeway Wire |
|
Baek, Jonghwan | Korea Institute of Robotics and Technology Convergence |
kim, namki | Korea Institute of Robotics & Technology Convergence |
Lee, Eun-Bi | Korea Institute of Robotics and Technology Convergence |
Jeong, Yujeong | Korea Institute of Robotics and Technology Convergence |
Jeong, Myeongsu | Korea Institute of Robotics & Technology Convergence |
Lee, Jae Youl | Korea Institute of Robotics and Technology Convergence |
Keywords: Object Recognition, Computer Vision and Visual Servoing, AI Reasoning Methods for Robotics
Abstract: Continuous and regular inspections are necessary because of the aging of ropeway facilities, but current inspection methods that rely on professional workers suffer from safety accidents and manpower shortages. To solve this problem, attempts are being made to automatically inspect the rope surface, and low-cost, high-efficiency vision-based inspection is attracting attention. We propose a wire rope defect detection method using deep learning-based image processing installed in cableway facilities. In this paper, we introduce a method to separate wire ropes from the background, classify defects using deep learning inference, and analyze the classified defect area through image processing. The performance evaluation of our wire surface defect detection showed errors within 0.4 mm.
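As a loose illustration of the defect-area analysis step, the sketch below measures the extent of a detected defect region in millimetres from a binary mask using OpenCV; the mask, the pixel-to-millimetre scale, and the toy blob are placeholders, not the authors' pipeline.

```python
# Illustrative sketch (not the authors' pipeline): measuring the extent of a
# detected defect region in millimetres from a binary defect mask.
import cv2
import numpy as np

MM_PER_PIXEL = 0.05           # assumed spatial calibration of the camera setup

# Hypothetical binary mask (255 = defect) as produced by a segmentation step.
mask = np.zeros((480, 640), dtype=np.uint8)
cv2.circle(mask, (320, 240), 12, 255, -1)     # toy defect blob

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    width_mm = w * MM_PER_PIXEL
    height_mm = h * MM_PER_PIXEL
    print(f"defect bounding box: {width_mm:.2f} mm x {height_mm:.2f} mm")
```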
|
|
15:20-17:00, Paper TI5A.29 | |
Study on Robotic Systems for Automatic Weld Condition Definition for Welded Joints in Ship Build Manufacturing |
|
Sung ho, Hong | Korea Institute of Robotics & Technology Convergence |
Lee, Eun-Bi | Korea Institute of Robotics and Technology Convergence |
Lee, Change Hee | Korea Institute of Robotics and Technology Convergence |
Lee, Jae Youl | Korea Institute of Robotics and Technology Convergence |
Ju, Chungho | Korea Institute for Robot Industry Advancement |
Yoo, Dong Joo | Sunmoon University |
Keywords: Industrial Robots, AI Reasoning Methods for Robotics, Computer Vision and Visual Servoing
Abstract: Even experienced on-site welders in the shipbuilding industry may encounter trial and error, leading to welding defects despite their familiarity with welding tasks. When performing familiar welding tasks, it is necessary to set welding conditions by searching a database of welding parameters based on the shape and thickness of the welded component. This process includes configuring welding conditions based on the shape, material, and thickness of each component, and finding the optimal welding conditions often requires numerous iterations of initial welding to check the welding status. This study investigates a welding process support system to minimize such trial and error. The system is designed to assist in finding optimal welding conditions from input parameters related to welding form, material, and thickness. It finds optimal welding conditions using machine learning, drawing on existing databases, the welding conditions used by experienced welders in the field, and the welding environment (ambient temperature and humidity). This system can contribute to improving welding quality and increasing productivity.
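Purely as a hypothetical illustration of the machine-learning component, the sketch below fits a multi-output tree-ensemble regressor mapping joint shape, material, thickness, and ambient conditions to welding current and voltage; all column names and data are invented and do not reflect the paper's database or model.

```python
# Hypothetical sketch (column names and data are invented): learning welding
# current/voltage targets from a database of past welds and ambient conditions.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestRegressor
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

df = pd.DataFrame({
    "joint_shape":  ["fillet", "butt", "fillet", "lap"],
    "material":     ["AH36", "AH36", "DH32", "AH36"],
    "thickness_mm": [8.0, 12.0, 10.0, 6.0],
    "temp_C":       [18.0, 22.0, 25.0, 15.0],
    "humidity_pct": [55.0, 60.0, 40.0, 70.0],
    "current_A":    [180.0, 230.0, 210.0, 160.0],
    "voltage_V":    [22.0, 26.0, 24.0, 21.0],
})

X = df[["joint_shape", "material", "thickness_mm", "temp_C", "humidity_pct"]]
y = df[["current_A", "voltage_V"]]

model = Pipeline([
    ("encode", ColumnTransformer(
        [("cat", OneHotEncoder(handle_unknown="ignore"), ["joint_shape", "material"])],
        remainder="passthrough")),
    ("regress", RandomForestRegressor(n_estimators=200, random_state=0)),
])
model.fit(X, y)
print(model.predict(X.iloc[[0]]))   # recommended current/voltage for a new joint
```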
|
|
15:20-17:00, Paper TI5A.30 | |
Development of a Tele-Inspection Manipulation Robot for Sheave Liner Wear Inspection of Ropeway Wheels |
|
Kim, Seolha | Korea Institute of Robotics Technology Convergence |
Baek, Jonghwan | Korea Institute of Robotics and Technology Convergence |
Jang, Minwoo | Korea Institute of Robotics and Technology Convergence |
Jeong, Myeongsu | Korea Institute of Robotics & Technology Convergence |
kim, namki | Korea Institute of Robotics & Technology Convergence |
Lee, Eun-Bi | Korea Institute of Robotics and Technology Convergence |
Lee, Jae Youl | Korea Institute of Robotics and Technology Convergence |
Keywords: Manipulation Planning and Control, Motion Planning and Obstacle Avoidance, Contact: Modeling, Sensing and Control
Abstract: Currently, inspection of ropeway facilities such as cable cars and ski lifts is conducted visually by specialized inspectors. However, because visual inspection is based on individual experience, inspectors may express varying opinions during the inspection process. In addition, ensuring safety in adverse climates and environments is challenging, leading to a continual shortage of skilled inspectors. Accordingly, in this study, a remote monitoring inspection system was developed to inspect ropeway wheel wear regularly regardless of climate. A YOLOv5-based deep learning wheel detection method was used to detect wheel wear, and a manipulator capable of inspecting the upper, lateral, and lower surfaces of the wheel was developed. The appropriate link lengths and mounting location were determined through simulation, and the manipulator posture and joint angles at each inspection position were obtained from inverse kinematics calculations. A testbed was established for wheel inspection, on which operational tests of the actual manipulator will be conducted. Based on this, a safety-assuring inspection system is to be developed that can inspect the wheels even in adverse environments.
|
|
15:20-17:00, Paper TI5A.31 | |
Automated Peg-In-Hole Insertion: A Supervised Learning-Based Approach to Misalignment Error Compensation |
|
Cho, Taeyeop | Hanyang University, KITECH |
Kim, Jinseok | UST, KITECH |
Choi, Iksu | Sungkyunkwan University, KITECH |
Pyo, Dongbum | Korea Institute of Industrial Technology |
Keywords: Manipulation Planning and Control
Abstract: Tasks with frequent contact, such as peg-in-hole assembly, pose a risk of hazardous forces on the robot system due to uncertainty-induced collisions. To address this, we propose a regression learning model that infers the pose error angles of the peg and hole from contact data. We demonstrate its effectiveness in overcoming jamming caused by misalignment in peg-in-hole insertion through a learned misalignment error compensation network (MEN). Experimental results show that the MEN achieves insertion with a 100% success rate, independent of gain tuning, compared to the single system, demonstrating stable peg-in-hole tasks without jamming.
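To give a sense of what a contact-data regression model might look like, the sketch below maps six-axis force/torque readings to two pose-error angles with a small fully connected network; the architecture, dimensions, and training data are assumptions, not the MEN described in the paper.

```python
# Illustrative sketch (dimensions and architecture assumed, not the paper's MEN):
# a regression network mapping six-axis force/torque contact data to the
# peg/hole misalignment angles about the two lateral axes.
import torch
import torch.nn as nn

class MisalignmentRegressor(nn.Module):
    def __init__(self, in_dim: int = 6, out_dim: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, out_dim),          # predicted error angles [rad]
        )

    def forward(self, ft):
        return self.net(ft)

model = MisalignmentRegressor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# One hypothetical training step on recorded contact samples.
ft_batch = torch.randn(32, 6)        # placeholder F/T readings
angle_batch = torch.randn(32, 2)     # placeholder ground-truth error angles
optimizer.zero_grad()
loss = loss_fn(model(ft_batch), angle_batch)
loss.backward()
optimizer.step()
```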
|
|
15:20-17:00, Paper TI5A.32 | |
Maritime Object Detection for Autonomous Surface Vehicles through Distinct Problem Understanding |
|
Choi, Hyun-Taek | Korea Research Institute of Ships and Oceans Engineering |
Park, Jeonghong | KRISO |
Choi, Jinwoo | KRISO, Korea Research Institute of Ships & Ocean Engineering |
Kang, Minju | Korea Research Institute of Ships & Ocean Engineering |
Ha, Namhoon | Korea Research Institute of Ships and Oceans Engineering |
Choo, Ki-Beom | Korea Research Institute of Ships & Ocean Engineering (KRISO) |
Kim, Jinwhan | KAIST |
Keywords: Multisensor Data Fusion, Object Recognition, Intelligent Robotic Vehicles
Abstract: In recent years, high-performance deep learning-based detection algorithms have been advancing rapidly, raising high expectations for autonomous vehicles and, in particular, autonomous navigation. However, current detection studies primarily focus on improving the performance of general-purpose detection. Considering the performance limitations and resource constraints, the pursuit of detecting all maritime objects with the highest performance is not always practical. In this paper, we categorize detection into two objectives, (1) safe navigation and (2) surveillance/reconnaissance, and describe the characteristics of each objective in terms of purpose, priority, sensor weighting, and usage of additional information. We then propose a two-stage structure that can effectively handle these objectives. The algorithm designed within this structure selectively applies the first-stage detection results to the corresponding objects according to the objective, allowing efficient use of computational resources while the continuously running first stage still ensures a minimum level of performance. We also provide example results to show the effectiveness of our proposed method.
|
|
15:20-17:00, Paper TI5A.33 | |
Efficient In-Pipe Cleaning Using Planetary Gear Mechanism-Based Brush Module |
|
Jeong, Byeongchan | Sungkyunkwan University |
Hur, Jaehyuk | Sungkyunkwan University |
Lee, Dong Young | Sungkyunkwan University |
Choi, Hyouk Ryeol | Sungkyunkwan University |
Keywords: Mechanism and Design, Industrial Robots
Abstract: Industrial pipe systems can become clogged with powdered waste products, causing the equipment to slow or stop functioning entirely. Because the pipes are located in difficult-to-reach places and contain hazardous substances, manual cleaning is difficult. The problem of powdery wastes building up in pipes and impeding suction-based removal is addressed in this work. A planetary gear system was added to the brush module of a pipe cleaning robot, enabling the brush to change its angle for effective cleaning. Large debris is broken up into smaller pieces by this mechanism, making it easier to remove. Furthermore, changing the brush angle helps loosen tough particles. The goal is to create an autonomous pipe cleaning robot that can inspect and clean pipes, saving labor and guaranteeing industrial site upkeep.
|
|
15:20-17:00, Paper TI5A.34 | |
Restoration of Underwater Vehicles Surface Pressure: Gappy POD Analysis of CFD Simulation Data |
|
Kim, Jinwoo | Seoul National University of Science and Technology |
Kim, Gyurae | Seoul National University of Science and Technology |
Kim, Jinhyun | Seoul National University of Science and Technology |
Keywords: Underwater Robotics, Multisensor Data Fusion, Range, Sonar, GPS and Inertial Sensing
Abstract: In this paper, we employed the gappy POD (Proper Orthogonal Decomposition) technique to restore high-dimensional pressure data from the surface of an underwater vehicle. To utilize the gappy POD technique, snapshot data were obtained through CFD (Computational Fluid Dynamics) simulations capturing the motion of the underwater vehicle as it moved forward. We conducted an analysis of the singular values of the snapshot data and the corresponding POD modes. The restored data, utilizing only a few data points and three modes, exhibited a high level of accuracy when compared to the original dataset.
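For readers new to gappy POD, the minimal sketch below builds POD modes from a synthetic snapshot matrix and reconstructs a new field from a few point measurements by least squares on the leading modes; the data are synthetic, not the CFD pressure fields used in the paper.

```python
# Minimal gappy-POD sketch on synthetic data (not the CFD pressure fields):
# build POD modes from full snapshots, then reconstruct a new field from a
# handful of point measurements via least squares on the leading modes.
import numpy as np

rng = np.random.default_rng(0)
n_points, n_snapshots, n_modes = 500, 40, 3

# Synthetic snapshot matrix: each column is one "pressure field".
basis = rng.standard_normal((n_points, n_modes))
coeffs = rng.standard_normal((n_modes, n_snapshots))
snapshots = basis @ coeffs + 0.01 * rng.standard_normal((n_points, n_snapshots))

# POD modes from the SVD of the snapshot matrix.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
Phi = U[:, :n_modes]

# A new field observed only at a few "sensor" locations (the gappy data).
true_field = basis @ rng.standard_normal(n_modes)
sensors = rng.choice(n_points, size=12, replace=False)

# Least-squares fit of the mode coefficients to the sparse measurements,
# then reconstruction of the full field.
a, *_ = np.linalg.lstsq(Phi[sensors, :], true_field[sensors], rcond=None)
reconstructed = Phi @ a
print("relative error:",
      np.linalg.norm(reconstructed - true_field) / np.linalg.norm(true_field))
```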
|
|
15:20-17:00, Paper TI5A.35 | |
A Multi-Agent 3D Scene Graph Framework in Real-Time |
|
Kim, Yirum | Gwang-Ju Institute of Science and Technology |
Kim, Ue-Hwan | Gwangju Institute of Science and Technology (GIST) |
Keywords: Multi-Robot Systems, Simultaneous Localization and Mapping (SLAM)
Abstract: 3D scene graphs effectively encapsulate spatial and semantic information of 3D environments, with nodes denoting objects and edges indicating the predicates between objects. Recent studies have primarily focused on the 3D scene graph prediction of a single agent, overlooking the practicality of multi-agent systems in the real world. In this work, we propose a multi-agent 3D scene graph (MA3DSG) framework designed for large indoor environments, in which multiple agents collaboratively generate 3D scene graphs from RGB-D sequences in real time. As a key component of the proposed framework, we introduce a sequential feature-based place recognition method using a graph neural network (GNN) to update nodes and edges for spaces previously visited by other agents. In addition, enhancing place recognition and the overall 3D scene graph performance requires the extraction of more effective features; to this end, we employ kernel point convolution to improve the point cloud encoder. Furthermore, we introduce a new benchmark for evaluating MA3DSG systems, including a dataset, metrics, and baselines. Through comprehensive experiments, we showcase our framework's ability to rapidly and efficiently construct hierarchical 3D scene graphs for extensive indoor spaces.
|
|
15:20-17:00, Paper TI5A.36 | |
Development of Intelligent Situational Awareness System (iSAS) for Maritime Autonomous Surface Ships: Preliminary Field Tests at Sea |
|
Park, Jeonghong | KRISO |
Kang, Minju | Korea Research Institute of Ships & Ocean Engineering |
Choi, Hyun-Taek | Korea Research Institute of Ships and Oceans Engineering |
Ha, Namhoon | Korea Research Institute of Ships and Oceans Engineering |
Choo, Ki-Beom | Korea Research Institute of Ships & Ocean Engineering (KRISO) |
Choi, Jinwoo | KRISO, Korea Research Institute of Ships & Ocean Engineering |
Keywords: Intelligent Robotic Vehicles, Multisensor Data Fusion, Object Recognition
Abstract: This paper presents the development of an intelligent situational awareness system (iSAS) for the autonomous navigation of maritime autonomous surface ships (MASSs) maneuvering in a marine environment. A sensor fusion-based multimodal system including cameras, lidar, and radar was configured to seamlessly and reliably detect and identify various maritime objects at sea. In particular, considering the unique characteristics of each sensor, we designed and implemented detection approaches to detect objects quickly. Subsequently, an extended Kalman filter (EKF)-based tracking filter was applied to estimate the detected objects' position, speed, and course. Furthermore, considering the time-varying uncertainty contained in the estimates, the estimated information was employed to evaluate a quantitative indicator of the potential collision risk along the predicted course of the MASS. Preliminary field tests were carried out at sea to demonstrate the practical feasibility of the developed iSAS, and the results are described.
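As background for the tracking step, the sketch below shows a generic predict/update cycle for estimating an object's position and velocity under a constant-velocity model (with a linear model the EKF reduces to the ordinary Kalman filter); the noise levels and position-only measurement model are assumptions and do not reflect the iSAS implementation.

```python
# Generic predict/update sketch for tracking a detected object's position and
# velocity under a constant-velocity model (noise levels are assumptions).
import numpy as np

dt = 0.1
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)   # constant-velocity motion model
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)   # position-only measurement
Q = 0.05 * np.eye(4)                        # assumed process noise
R = 0.5 * np.eye(2)                         # assumed measurement noise

x = np.zeros(4)                             # state: [x, y, vx, vy]
P = np.eye(4)

def kf_step(x, P, z):
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with a position fix z = [x, y]
    y = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P

for k in range(50):                          # toy measurement stream
    z = np.array([0.5 * k * dt, 0.2 * k * dt]) + 0.1 * np.random.randn(2)
    x, P = kf_step(x, P, z)
print("estimated speed:", np.hypot(x[2], x[3]))
```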
|
|
15:20-17:00, Paper TI5A.37 | |
An Industrial Robotic System for the Efficient Recycling of Printed Circuit Boards |
|
Pagano, Francesco | Hiro Robotics |
Lottero, Jacopo | Hiro Robotics |
Labolani, Davide | Ecole Centrale De Nantes |
Sgorbissa, Antonio | University of Genova |
Recchiuto, Carmine Tommaso | University of Genova |
Keywords: Manipulation Planning and Control, Industrial Robots, Robotic Systems Architectures and Programming
Abstract: Printed circuit boards play a crucial role in various industries, including Electronics Manufacturing, Telecommunications, and Automotive. However, the growth of electronic waste poses significant environmental and health risks, necessitating urgent actions to safeguard our planet and conserve resources. As a result, there is a need for highly efficient waste management techniques. This work aims to address this challenge by designing and developing an automated system that integrates an anthropomorphic manipulator and stereo vision for the efficient recycling of Printed Circuit Boards. The approach involves creating a ROS 2-based software architecture for an anthropomorphic manipulator, integrating computer vision algorithms, planning, and control modules.
|
|
15:20-17:00, Paper TI5A.38 | |
Effects of Transfer-Assistive Robots on the Caregiver Burden |
|
Shin, Yong Soon | Hanyang University |
Kim, Min-Jung | Hanyang University |
Keywords: Rehabilitation and Healthcare Robotics
Abstract: This study aimed to evaluate the effects of transfer-assistive robots on the caregiver burden. Thirty caregivers of people with severe disabilities and older adults used the transfer-assistive robot for 7 days. The transfer-assistive robots had a significant effect in reducing caregiver burden.
|
|
15:20-17:00, Paper TI5A.39 | |
Care Workers’ Caring Experience on Transfer-Assistive Robot in Long-Term Care Facility |
|
Shin, Yong Soon | Hanyang University |
Jang, Hye-Young | Hanyang University |
KIM, MI YOUNG | HANYANG UNIVERSITY |
Park, So Seul | Hanyang University, Seoul |
YOUNG A, LEE | Hanyang University, Seoul, Korea |
Keywords: Rehabilitation and Healthcare Robotics
Abstract: This study was conducted to explore the experience of using transfer-assistive robots. A focus group interview was conducted with 11 care workers who had experience using transfer-assistive robots. A total of two groups of FGIs were performed for two hours each, and the collected data were analyzed using content analysis. As a result of the analysis, 5 themes and 16 sub-themes were derived; 1) Experience of changes in caregiving, 2) Overcoming inconveniences, 3) Positive change in professional identity through care robots, 4) Enabling person-centered care through the use of care robots, 5) Crossing the boundary from current to future care. The significance of this study lies in confirming the possibility of person-centered care in nursing homes through the use of transfer-assistive robots.
|
|
15:20-17:00, Paper TI5A.40 | |
Educational Approach to Human-Centered Care for Care Robot Users |
|
Shin, Yong Soon | Hanyang University |
KIM, MI YOUNG | HANYANG UNIVERSITY |
Jang, Hye-Young | Hanyang University |
Park, So Seul | Hanyang University, Seoul |
YOUNG A, LEE | Hanyang University, Seoul, Korea |
Keywords: Multi-Robot Systems, Learning From Humans, Performance Evaluation and Optimization
Abstract: The global rise in the elderly population, alongside an increasing number of disabled individuals and the aging of this demographic, underscores significant societal trends. This demographic shift amplifies the demand for care, particularly among the elderly and disabled communities. This study explores the development of an education program for care robot users, integrating insights from academic and practical experts and a literature review. It identifies anticipated challenges, necessary caregiver competencies, and educational initiatives essential for safe and effective care robot utilization. The findings emphasize the importance of addressing ethical, legal, and practical considerations while promoting collaborative and humane care practices. By incorporating expert discourse, the program can equip users with the necessary knowledge for proper usage, including its humanities aspects. The study's findings serve as foundational data for developing the program's content and are expected to be useful as educational material for users and professional educators in future care.
|
|
15:20-17:00, Paper TI5A.41 | |
Solving the Multi-Depot Vehicle Routing Problem with Acyclic Solution Using Deep Reinforcement Learning |
|
Son, Hakmo | Korea Advanced Institute of Science & Technology (KAIST) |
Do, Haggi | KAIST |
Kim, Jinwhan | KAIST |
Keywords: Intelligent Robotic Vehicles, Multi-Robot Systems
Abstract: This paper presents a reinforcement learning (RL)-based solution that leverages an attention mechanism to enhance performance and cost-effectiveness in addressing the multi-depot vehicle routing problem (MDVRP). Like the traveling salesman problem (TSP) and the vehicle routing problem (VRP), the MDVRP is a combinatorial optimization challenge and an NP-hard problem. RL techniques and the attention mechanism are introduced to navigate the complexity of optimizing the logistics associated with delivery robots and unmanned systems. We demonstrate the potential utility and feasibility of the proposed method for complex logistics applications, towards which our future research efforts will be directed.
|
|
15:20-17:00, Paper TI5A.42 | |
Development of a Rehabilitation Robot with Surface Electromyography-Based Guidance Force Feedback |
|
Shin, Wonseok | KITECH |
Park, Seungtae | Korea National University of Science and Technology |
Kang, Jihun | UST Graduate School |
Ahn, Bummo | Korea Institute of Industrial Technology |
Kwon, Suncheol | KITECH |
Keywords: Rehabilitation and Healthcare Robotics, Neurorobotics, Physical and Cognitive Human-Robot Interaction
Abstract: We present a surface electromyography-based force feedback robot system that can be utilized in upper extremity rehabilitation. We hypothesized that providing electromyographic signal-based guidance force in line with the subject's intent during rehabilitation training can encourage more active muscle use. We developed an upper limb rehabilitation robot that provides electromyographic signal-based guidance force only when the subject voluntarily attempts to correct the trajectory. As a feasibility test, a chronic-phase stroke patient used the developed rehabilitation robot for 6 weeks. The results showed that the muscles required for the movement were more activated and the unnecessary muscles were less activated. These results demonstrate that the proposed robotic guidance force feedback to induce muscle activation can be effective for upper limb rehabilitation training.
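For context, a common way to derive an activation signal from raw surface EMG is band-pass filtering, rectification, and low-pass envelope extraction; the sketch below shows that standard chain with assumed cut-off frequencies and a hypothetical rest-baseline threshold, not the authors' parameters.

```python
# Standard sEMG envelope sketch (cut-off frequencies are assumptions, not the
# authors' parameters): band-pass, rectify, then low-pass to obtain an
# activation envelope that could gate a guidance force.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1000                                   # assumed sampling rate [Hz]
t = np.arange(0, 5, 1 / fs)
emg = 0.3 * np.random.randn(t.size) * (1 + np.sin(2 * np.pi * 0.5 * t))  # toy sEMG

def emg_envelope(signal, fs, band=(20, 450), lp_cut=6):
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="bandpass")
    filtered = filtfilt(b, a, signal)
    rectified = np.abs(filtered)
    b, a = butter(4, lp_cut / (fs / 2), btype="lowpass")
    return filtfilt(b, a, rectified)

envelope = emg_envelope(emg, fs)
threshold = 1.5 * envelope[: fs].mean()     # hypothetical rest-baseline threshold
assist_on = envelope > threshold            # when True, guidance force could be enabled
```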
|
| |