Last updated on August 28, 2023. This conference program is tentative and subject to change.
Technical Program for Tuesday August 29, 2023

TuAT1
Room T1
HRI in Academia and Industry: Bridging the Gap I
Special Session
Chair: Eum, Younseal | Sookmyung Women’s University

10:20-10:30, Paper TuAT1.1
Guidelines for a Human-Robot Interaction Specification Language (I)

Porfirio, David | U.S. Naval Research Laboratory |
Roberts, Mark | Naval Research Laboratory |
Hiatt, Laura M. | Naval Research Laboratory |
Keywords: Computational Architectures, Novel Interfaces and Interaction Modalities
Abstract: Designing novel application development environments (ADEs) is a growing area of systems research within the human-robot interaction (HRI) community. This research involves the design of a novel system, the ADE, to afford end users and application designers the ability to develop robot applications. Researchers then usually validate their ADEs in the form of user studies or a series of case studies. In this paper, we highlight a problem with the typical approach to conducting ADE research within HRI—there is currently little standardization in how these systems are designed, developed, and validated, leading to difficulty in sharing resources between different research groups and the inability to compare similar ADEs to each other. We argue that a standardized formal representation embedded within an Interaction Specification Language (ISL) can lead to more streamlined development and validation of ADEs for HRI. Furthermore, we discuss several desired characteristics that an ISL should embody.

10:30-10:40, Paper TuAT1.2
Tactical Empathy for Long-Term HRI in Commercial In-Home Robots: An Academic Approach to Building a Bridge to the HRI Industry (I)

Haring, Kerstin Sophie | University of Denver |
Keywords: Affective Computing, Embodiment, Empathy and Intersubjectivity, Social Intelligence for Robots
Abstract: Tactical empathy uses neuroscience concepts to navigate difficult situations. Long-term human-robot interaction, and commercially successful long-term interaction in particular, requires navigating all situations, including difficult ones. Employing tactical empathy in robots appears to be a path to building trust, rapport, and teamwork whenever humans interact with robots. This paper takes a high-level look at results from academic research and how they can inform tactical empathy in commercial in-home robots, and discusses how academic research paired with the applied concept of tactical empathy can facilitate long-term interactions between humans and commercially available robots that are expected to maintain a lasting interaction with their users.

10:40-10:50, Paper TuAT1.3
The Road Ahead: Advancing Interactions between Autonomous Vehicles, Pedestrians, and Other Road Users (I)

Block, Avram | MassRobotics |
Joshi, Swapna | Northeastern University |
Tabone, Wilbert | Delft University of Technology |
Pandya, Aryaman | Motional |
Lee, Seonghee | Stanford University |
Patil, Vaidehi | Carnegie Mellon University |
Britten, Nicholas | Virginia Tech |
Schmitt, Paul | Motional |
Keywords: Novel Interfaces and Interaction Modalities, Innovative Robot Designs, Long-term Experience and Longitudinal HRI Studies
Abstract: While great strides have been taken in advancing the field of Human-Robot Interaction (HRI), challenges abound in understanding and improving how Autonomous Vehicles (AVs) will interact with and within society. Through this paper, the authors attempt to paint the picture of challenges unique to the study and advancement of interfaces between AVs and vulnerable road users (VRUs). In turn, these gaps in research highlight the opportunities for academia, industry, and public policy to collaborate and advance the state of the art of AV-VRU interaction, and the need for a dedicated forum for sharing insights across these various sectors.

10:50-11:00, Paper TuAT1.4
Community in HRI: Extending Academic and Industry Collaboration (I)

Joshi, Swapna | Northeastern University |
Keywords: User-centered Design of Robots, Applications of Social Robots, Social Intelligence for Robots
Abstract: The growing robot adoption in real-world communities emphasizes the role of community factors in promoting robot acceptance and interaction. This paper advocates for formalizing community involvement to expand industry-academia collaboration in HRI. It explores the importance of community in HRI, highlights unique aspects of academia-industry relationships resulting from community engagement, and shares examples and a community involvement experience report. The paper also proposes a framework and considerations for sustainable collaboration with the community, aiming to unlock the full potential of HRI research. Lastly, it envisions generative communal labs as a way to tackle unresolved challenges of long-term integration of robots into communities and achievement of positive social and community impact.

11:00-11:10, Paper TuAT1.5
A Framework for Realistic Simulation of Daily Human Activity (I)

Idrees, Ifrah | Brown University |
Singh, Siddharth | Amazon |
Xu, Kerui | Amazon |
Glas, Dylan F. | Amazon |
Keywords: Robot Companions and Social Robots, Applications of Social Robots
Abstract: For social robots like Astro which interact with and adapt to the daily movements of users within the home, realistic simulation of human activity is needed for feature development and testing. This paper presents a framework for simulating daily human activity patterns in home environments at scale, supporting manual configurability of different personas or activity patterns, variation of activity timings, and testing on multiple home layouts. We introduce a method for specifying day-to-day variation in schedules and present a bidirectional constraint propagation algorithm for generating schedules from templates. We validate the expressive power of our framework through a use case scenario analysis and demonstrate that our method can be used to generate data closely resembling human behavior from three public datasets and a self-collected dataset. Our contribution supports systematic testing of social robot behaviors at scale, enables procedural generation of synthetic datasets of human movement in different households, and can help minimize bias in training data, leading to more robust and effective robots for home environments.
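The bidirectional constraint propagation algorithm itself is not detailed in the abstract; the sketch below is only a rough, hypothetical illustration of the general idea (the activity names, windows, and durations are invented): each activity in a schedule template carries an [earliest, latest] start window, a forward sweep tightens earliest starts, a backward sweep tightens latest starts, and concrete times are then sampled from the tightened windows.

```python
import random

# Hypothetical schedule template: minutes after midnight, in daily order.
TEMPLATE = [
    {"name": "wake", "earliest": 360, "latest": 420, "duration": 30},
    {"name": "breakfast", "earliest": 380, "latest": 480, "duration": 20},
    {"name": "leave home", "earliest": 420, "latest": 520, "duration": 10},
]

def propagate(template):
    """Tighten start windows in both directions so activities cannot overlap."""
    acts = [dict(a) for a in template]
    for i in range(1, len(acts)):  # forward: can't start before predecessor ends
        acts[i]["earliest"] = max(acts[i]["earliest"],
                                  acts[i - 1]["earliest"] + acts[i - 1]["duration"])
    for i in range(len(acts) - 2, -1, -1):  # backward: must end in time for successor
        acts[i]["latest"] = min(acts[i]["latest"],
                                acts[i + 1]["latest"] - acts[i]["duration"])
    if any(a["earliest"] > a["latest"] for a in acts):
        raise ValueError("template is over-constrained")
    return acts

def sample_schedule(template, rng=None):
    """Draw one day's concrete start times; day-to-day variation comes from the rng."""
    rng = rng or random.Random(0)
    cursor, day = 0, []
    for a in propagate(template):
        start = rng.randint(max(a["earliest"], cursor), a["latest"])
        day.append((a["name"], start))
        cursor = start + a["duration"]
    return day
```

Because the backward sweep guarantees each activity can still finish before its successor's latest start, the sampling loop never produces an overlapping day.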

TuAT3
Room T3
Creating Human-Robot Relationships
Regular Session
Chair: Jin, Sangrok | Pusan National University

10:20-10:30, Paper TuAT3.1
Effects of Robots' "Body Torque" on Participation and Sustaining Multi-Person Conversations

Takagi, Karebu | Shizuoka University |
Sakamoto, Takafumi | Shizuoka University |
Ichikawa, Jun | Shizuoka University |
Takeuchi, Yugo | Shizuoka University |
Keywords: Creating Human-Robot Relationships, Curiosity, Intentionality and Initiative in Interaction, Embodiment, Empathy and Intersubjectivity
Abstract: The physical signal called body torque, expressed as a combination of face and body directions, is likely to be effective in newcomer situations in dialogues with three or more participants (multi-person conversations). Newcomers in a multi-person conversation infer the current participants' internal states from their physical signals and decide when and to what extent they will be allowed to join the conversation. However, insufficient research has been conducted on designing agent behavior to encourage or discourage participation in conversations based on the current and new participants. This paper analyzes the effects of body torque on new participants based on the results of a field experiment at the National Museum of Emerging Science and Innovation (Miraikan). Our results suggest that current participants' head and body directions independently and interactively affect a newcomer's participation.

10:30-10:40, Paper TuAT3.2
From Research to Design: Developing the Social Robotic Persuasive Design Cards and Its Techniques

Liu, Baisong | Eindhoven University of Technology |
Tetteroo, Daniel | Eindhoven University of Technology |
Markopoulos, Panos | Eindhoven University of Technology |
Keywords: User-centered Design of Robots, Applications of Social Robots, Affective Computing
Abstract: Existing work on social robotic persuasion (SRP) has provided ample knowledge that can benefit the design of social robotics. However, as recent research points out, this knowledge is not presented in a format that effectively supports design practice. Research on translational science provides a theoretical foundation for connecting research to design practice, but the process of knowledge appropriation remains under-explored. In this paper, we present the development and evaluation of the SRPD (social robot persuasive design) cards and the corresponding Roundtable and Spotlight generative methods. Our results show that the SRPD cards can benefit the ideation phase in social robotic design and that our generative methods can optimize participants' experience in a brainstorming activity.

10:40-10:50, Paper TuAT3.3
Improvisation ≠ Randomness: A Study on Playful Rule-Based Human-Robot Interactions

Alcubilla Troughton, Irene | Utrecht University |
Von Kentzinsky, Hendrik | Vrije Universiteit Amsterdam
Bleeker, Maaike | Utrecht University |
Baraka, Kim | Vrije Universiteit Amsterdam |
Keywords: Creating Human-Robot Relationships, Interaction Kinesics, Robots in art and entertainment
Abstract: To develop and sustain rich social interactions between humans and robots, previous research has looked at task-oriented performance metrics or the ability of a robot to adequately express messages, emotions, or intents. In contrast, our research starts from the premise that movement, as a nonverbal modality of social interaction, can cover other essential aspects of social interaction that do not have to do with the expression of messages or inner states but that nonetheless contribute to improving the quality of interaction. These aspects have to do with interaction dynamics and highly depend on appropriate action choice. Drawing inspiration from rule-based improvisation, this paper seeks to show that there exists implicit expert knowledge that can be used to inform these movement action choices in playful, non-goal-oriented settings. We present an experimental study conducted at a performing arts festival, in which participants interacted with a robot in three simple rule-based movement games, in two conditions: one where the robot was fully controlled by an improvisation expert (Improv Timing/Improv Action) and one where the timing of the actions was controlled by the expert but the robot's action choices were drawn randomly (Improv Timing/Random Action). This was done in order to focus on action choice, beyond the timing of a response. Our results show that the IT/IA condition not only performs better in terms of anthropomorphism and animacy, but also increases people's interest in interacting with the robot for longer periods of time. These results serve as preliminary evidence of how improvisational knowledge in this context contributes to improving the quality of an interaction, and point to the value of further work in this field.

10:50-11:00, Paper TuAT3.4
Scale Development of Anxiety Toward Robots in Consumer Robotics: An Approach Using Item Response Theory

Song, Christina Soyoung | Illinois State University |
Lee, Jinha | Indiana Wesleyan University, DeVoe Division of Business |
Jo, Bruce | Tennessee Technological University |
Keywords: Creating Human-Robot Relationships, Detecting and Understanding Human Activity, Monitoring of Behaviour and Internal States of Humans
Abstract: Anxiety toward robots is a pre-conditioned emotion that might inhibit the formation of positive attitudes about interactions with robots. In the context of consumer robotics, this study operationalizes anxiety toward robots as consumers’ anxious feelings and fears about robots in frontline retail encounters. The study evaluates the item quality of the anxiety toward robots construct and showcases the benefits of using Classical Test Theory (CTT) and Item Response Theory (IRT) for developing and assessing a scale measurement. The findings of this study suggest that the five items comprising the anxiety toward robots scale demonstrate high measurement quality, indicating that they accurately represent the intended construct. By integrating CTT and IRT analytic techniques, the study builds a reliable scale for measuring anxiety toward robots and contributes an alternative statistical method for facilitating the construction of valid research instruments in consumer robotics.
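The abstract does not reproduce the items or the fitted parameters; as a minimal sketch of the IRT machinery involved, a two-parameter logistic (2PL) model gives the probability of endorsing an item as a function of the latent trait, and the item information function indicates where on the trait continuum an item measures well (the discrimination and difficulty values below are invented):

```python
import math

def irt_2pl(theta, a, b):
    """2PL IRT model: probability that a respondent at latent trait level
    `theta` (here, anxiety toward robots) endorses an item with
    discrimination `a` and difficulty `b`."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information the item provides at trait level `theta`; IRT-based
    scale development retains items that stay informative across the range."""
    p = irt_2pl(theta, a, b)
    return a * a * p * (1.0 - p)
```

At theta equal to the item difficulty, endorsement probability is exactly 0.5 and information peaks, which is why difficulty spread across the trait range matters in scale construction.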

11:00-11:10, Paper TuAT3.5
ChatHRC: Personalized Human-Robot Collaboration Using Fuzzy Reinforcement Learning with Natural Language Rewards

Hu, Zhe | City University of Hong Kong |
Lu, Weifeng | City University of Hong Kong |
Zheng, Yu | Tencent |
Pan, Jia | University of Hong Kong |
Keywords: HRI and Collaboration in Manufacturing Environments, Machine Learning and Adaptation, Multimodal Interaction and Conversational Skills
Abstract: Collaboration between humans and robots can be challenging because robots may have difficulty understanding a specific person's intentions, particularly in complicated tasks such as co-manipulation and assembly in computer, communication, and consumer electronics (3C) manufacturing. These tasks require different weights on accuracy and speed for various fabrication steps, making traditional physical interaction inadequate. In this paper, we introduce a fuzzy reinforcement learning-based admittance controller that can infer humans' intentions not only through physical interaction but also through natural language. During training, the natural language is encoded into a reward term to help the robot reach the human-intended convergence point, allowing us to develop a "personalized" policy. During testing, the language serves as a tool to help the robot understand and obey humans' intentions when physical interaction alone is insufficient. For example, if the user finds it difficult to push the robot and needs it to move faster, they can say "it's really slow," while a request for high-accuracy operation can be conveyed through "the damping is too small." With this algorithm, the robot can comprehend the intentions and act accordingly in such situations. Further results and videos can be found at: https://sites.google.com/view/hri-nlp
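The paper's fuzzy RL formulation is not spelled out in the abstract; the sketch below only illustrates the two ingredients in their plainest form, with an invented phrase table: a discrete-time admittance law that yields to the human's applied force, and a rule that nudges the damping parameter from an utterance (the actual system encodes language into an RL reward and learns a policy, rather than applying fixed rules).

```python
# Invented utterance -> parameter-scaling table, for illustration only.
ADJUSTMENTS = {
    "it's really slow": {"damping": 0.8},           # yield more, move faster
    "the damping is too small": {"damping": 1.25},  # stiffen for accuracy
}

def interpret(utterance, params):
    """Return a copy of the controller parameters scaled per the utterance."""
    scale = ADJUSTMENTS.get(utterance.lower().strip(), {})
    return {k: v * scale.get(k, 1.0) for k, v in params.items()}

def admittance_step(v, f_ext, mass, damping, dt=0.01):
    """One Euler step of the admittance law m*dv/dt + d*v = f_ext:
    the commanded velocity yields to the externally applied force."""
    return v + dt * (f_ext - damping * v) / mass
```

With lower damping the same push produces a larger velocity update, which is the physical effect "it's really slow" is asking for.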

11:10-11:20, Paper TuAT3.6
Robotic Backchanneling in Online Conversation Facilitation: A Cross-Generational Study

Kobuki, Sota | Tokyo Institute of Technology |
Seaborn, Katie | Tokyo Institute of Technology |
Tokunaga, Seiki | RIKEN |
Fukumori, Kosuke | Tokyo University of Agriculture and Technology |
Hidaka, Shun | Tokyo Institute of Technology |
Tamura, Kazuhiro | RIKEN
Inoue, Koji | Kyoto University
Kawahara, Tatsuya | Kyoto University
Otake-Matsuura, Mihoko | RIKEN |
Keywords: Robot Companions and Social Robots, Creating Human-Robot Relationships, Non-verbal Cues and Expressiveness
Abstract: Japan faces many challenges related to its aging society, including increasing rates of cognitive decline in the population and a shortage of caregivers. Efforts have begun to explore solutions using artificial intelligence (AI), especially socially embodied intelligent agents and robots that can communicate with people. Yet, there has been little research on the compatibility of these agents with older adults in various everyday situations. To this end, we conducted a user study to evaluate a robot that functions as a facilitator for a group conversation protocol designed to prevent cognitive decline. We modified the robot to use backchannelling, a natural human way of speaking, to increase receptiveness of the robot and enjoyment of the group conversation experience. We conducted a cross-generational study with young adults and older adults. Qualitative analyses indicated that younger adults perceived the backchannelling version of the robot as kinder, more trustworthy, and more acceptable than the non-backchannelling robot. Finally, we found that the robot's backchannelling elicited nonverbal backchanneling in older participants.

TuAT4
Room T4
Non-Verbal Cues and Expressiveness I
Regular Session
Chair: Kang, Dahyun | Korea Institute of Science and Technology

10:20-10:30, Paper TuAT4.1
Show Me What to Pick: Pointing versus Spatial Gestures for Conveying Intent

Surendran, Vidullan | Pennsylvania State University |
Wagner, Alan Richard | Penn State University |
Keywords: Non-verbal Cues and Expressiveness, Cooperation and Collaboration in Human-Robot Teams, HRI and Collaboration in Manufacturing Environments
Abstract: Gestures are a convenient modality for conveying human intent in collaborative human-robot tasks, and the pointing gesture is commonly used in pick-and-place tasks. However, it is difficult to accurately detect the location pointed to with stereo cameras, and experiments in the literature tend to space out the objects of interest to make the task easier. We propose the use of gestures conveying spatial directions as an alternative to the pointing gesture when objects are closely packed, since inaccuracies in detecting the spatial location pointed to can increase task completion difficulty. Using a human study, we confirmed that the gestures we propose are naturally used by humans collaborating with other humans when performing the task. We then develop a computer vision pipeline capable of generating a vector representing the pointing direction and detecting specific spatial gestures from an RGB-D video stream. Using a self-report survey, we show statistically significant evidence that subjects report higher satisfaction and better team performance when using spatial gestures instead of the pointing gesture to communicate with a robotic teammate. Finally, we show preliminary evidence that this trend holds even when the accuracy of the pointing location detection is artificially inflated.
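The full pipeline is beyond the abstract; the core geometric step it names, producing a vector representing the pointing direction, can be sketched as follows, assuming 3D elbow and wrist keypoints (e.g., from an RGB-D skeleton tracker) and a horizontal table plane:

```python
import math

def pointing_direction(elbow, wrist):
    """Unit vector along the forearm, elbow -> wrist, from 3D keypoints."""
    v = [w - e for w, e in zip(wrist, elbow)]
    n = math.sqrt(sum(c * c for c in v))
    if n == 0:
        raise ValueError("coincident keypoints")
    return [c / n for c in v]

def intersect_table(wrist, direction, table_z=0.0):
    """Point where the pointing ray meets the horizontal plane z = table_z."""
    if direction[2] >= 0:
        return None  # ray does not descend toward the table
    t = (table_z - wrist[2]) / direction[2]
    return [wrist[0] + t * direction[0],
            wrist[1] + t * direction[1],
            table_z]
```

Small angular noise in the keypoints scales with the ray length `t`, which is exactly why tightly packed targets make pure pointing unreliable.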

10:30-10:40, Paper TuAT4.2
The Robot in the Room: Influence of Robot Facial Expressions and Gaze on Human-Human-Robot Collaboration

Fu, Di | University of Hamburg |
Abawi, Fares | Universität Hamburg |
Wermter, Stefan | University of Hamburg |
Keywords: Cooperation and Collaboration in Human-Robot Teams, Non-verbal Cues and Expressiveness, Robot Companions and Social Robots
Abstract: Robot facial expressions and gaze are important factors for enhancing human-robot interaction (HRI), but their effects on human collaboration and perception are not well understood, for instance, in collaborative game scenarios. In this study, we designed a collaborative triadic HRI game scenario, where two participants worked together to insert objects into a shape sorter. One participant assumed the role of a guide. The guide instructed the other participant, who played the role of an actor, to place occluded objects into the sorter. A humanoid robot issued instructions, observed the interaction, and displayed social cues to elicit changes in the two participants' behavior. We measured human collaboration as a function of task completion time and the participants' perceptions of the robot by rating its behavior as intelligent or random. Participants also evaluated the robot by filling out the Godspeed questionnaire. We found that human collaboration was higher when the robot displayed a happy facial expression at the beginning of the game compared to a neutral facial expression. We also found that participants perceived the robot as more intelligent when it displayed a positive facial expression at the end of the game. The robot's behavior was also perceived as intelligent when directing its gaze toward the guide at the beginning of the interaction, not the actor. These findings provide insights into how robot facial expressions and gaze influence human behavior and perception in collaboration.

10:40-10:50, Paper TuAT4.3
Recognizing Diver Hand Gestures for Human to Robot Communication Underwater

Codd-Downey, Robert | York University |
Jenkin, Michael | York University |
Keywords: Non-verbal Cues and Expressiveness, Cooperation and Collaboration in Human-Robot Teams, Novel Interfaces and Interaction Modalities
Abstract: The underwater environment provides a range of interesting applications for human-robot teams. A critical issue for such teams is the development of an appropriate communication mechanism between humans and robots operating at depth. Humans operating at depth have developed an applied gesture-based communication language that can be leveraged to enable this communication, but it would be expensive and perhaps impractical to develop a hand-labelled dataset of these gestures to support a machine learning-based approach to the task. To avoid the cost of hand labelling such a large dataset, here we automate the process of collecting a labelled dataset through the use of a simple model trained on a hand-labelled dataset that only identifies salient objects (divers, their heads and hands), and then use a weakly supervised learning process to label a complex set of diver gestures. The result of this process is a system that can recognize a large number of diver hand gestures. Performance of the resulting system is compared against a hand-labelled set of diver gestures.
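The weakly supervised process is only named in the abstract; a toy illustration of the idea is to turn the salient-object detector's per-frame outputs into gesture pseudo-labels with a hand-crafted rule (the frame format and gesture class below are invented):

```python
def pseudo_label(frames, min_run=5):
    """Assign a weak gesture label from per-frame hand/head detections.
    Each frame is {'hand_y': float, 'head_y': float} in image coordinates
    (y grows downward), as a simple salient-object detector might emit;
    the label then serves as training data for the gesture recognizer."""
    run = 0
    for f in frames:
        run = run + 1 if f["hand_y"] < f["head_y"] else 0
        if run >= min_run:
            return "hand-above-head"  # invented gesture class
    return "unlabelled"
```

Requiring a sustained run of frames rather than a single detection makes the weak label robust to one-frame detector glitches.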

10:50-11:00, Paper TuAT4.4
Exploring the Use of Colored Ambient Lights to Convey Emotional Cues with Conversational Agents: An Experimental Study

Straßmann, Carolin | University of Applied Sciences Ruhr West |
Helgert, Andrè | University of Applied Sciences Ruhr West |
Breil, Valentin | University of Applied Sciences Ruhr West |
Settelmayer, Lina | University of Applied Sciences Ruhr West |
Diehl, Inga | University of Applied Sciences Ruhr West |
Keywords: Non-verbal Cues and Expressiveness, Embodiment, Empathy and Intersubjectivity, Cooperation and Collaboration in Human-Robot Teams
Abstract: Conversational agents (CAs) lack possibilities to enrich interaction with emotional cues, although such cues make conversation more human-like and enhance user engagement. Thus, the potential of CAs is not fully exploited, and ways to convey emotional cues are needed. In this work, CAs use colored ambient lights to display moral emotions during the interaction. To evaluate this approach, a between-subject lab experiment (N = 64) was conducted. Participants played a cooperation game with Amazon’s Alexa. Depending on the experimental condition, participants received different light expressions: no light, neutral light, or morally emotional light (yellow = joy, blue = sorrow, red = anger, matching the game decisions). The effect of the light expressions on the perception of the CA, users’ empathy, and cooperation behavior was tested. Against our assumptions, the results indicated no positive effect of the emotional light cues. Limitations, next steps, and implications are discussed.
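The emotion-to-color pairing comes from the study's conditions; the RGB triples below are assumptions made for this sketch of how such a condition table might be represented:

```python
# Colour assignments follow the study's conditions (yellow = joy,
# blue = sorrow, red = anger); the RGB values are assumed here.
MORAL_EMOTION_COLORS = {
    "joy": (255, 210, 0),    # yellow
    "sorrow": (0, 70, 255),  # blue
    "anger": (255, 0, 0),    # red
}

def ambient_light(emotion):
    """RGB for the ambient light, or None for the no-light control condition."""
    return MORAL_EMOTION_COLORS.get(emotion)
```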

11:00-11:10, Paper TuAT4.5
Can a Gender-Ambiguous Voice Reduce Gender Stereotypes in Human-Robot Interactions?

Torre, Ilaria | Chalmers University of Technology |
Lagerstedt, Erik | University of Skövde |
Dennler, Nathaniel | University of Southern California |
Seaborn, Katie | Tokyo Institute of Technology |
Leite, Iolanda | KTH Royal Institute of Technology |
Szekely, Eva | KTH Royal Institute of Technology |
Keywords: Non-verbal Cues and Expressiveness, Embodiment, Empathy and Intersubjectivity, Ethical Issues in Human-robot Interaction Research
Abstract: When a robot is deployed, its physical characteristics, role, and tasks are often fixed. Such factors can also be associated with gender stereotypes among humans, which then transfer to the robots. One factor that can induce gendering but is comparatively easy to change is the robot's voice. Designing the voice in a way that interferes with the fixed factors might therefore be a way to reduce gender stereotypes in human-robot interaction contexts. To this end, we conducted a video-based online study to investigate how factors that might inspire gendering of a robot interact. In particular, we investigated how giving the robot a gender-ambiguous voice can affect perception of the robot. We compared assessments (n=111) of videos in which a robot's body presentation and occupation mis/matched with human gender stereotypes. We found evidence that a gender-ambiguous voice can reduce gendering of a robot endowed with stereotypically feminine or masculine attributes. The results can inform more just robot design while opening new questions regarding the phenomenon of robot gendering.

11:10-11:20, Paper TuAT4.6
Improving Sign Language Understanding Introducing Label Smoothing

Tan, Sihan | Tokyo Institute of Technology |
Khan, Nabeela Khanum | Tokyo Institute of Technology |
Itoyama, Katsutoshi | Tokyo Institute of Technology |
Nakadai, Kazuhiro | Tokyo Institute of Technology |
Keywords: Detecting and Understanding Human Activity, Multimodal Interaction and Conversational Skills, Non-verbal Cues and Expressiveness
Abstract: Sign language is one of the most important communication methods when considering equality, diversity, and inclusion. Sign language understanding means understanding sign language using machines, and it mainly involves two functions: sign language recognition and sign language translation. To improve sign language understanding performance, this paper proposes using label smoothing with CTC (Connectionist Temporal Classification) loss as the training criterion for the sign language understanding neural network. Experimental results showed the effectiveness of the proposed method in both sign language recognition and translation.
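How the smoothing is folded into the CTC criterion is a detail of the paper; label smoothing itself, replacing a one-hot target with a partly uniform distribution before taking cross-entropy, can be sketched as:

```python
import math

def smooth_targets(num_classes, true_idx, eps=0.1):
    """Label smoothing: move eps of the probability mass from the true class
    to a uniform distribution over all classes."""
    u = eps / num_classes
    t = [u] * num_classes
    t[true_idx] += 1.0 - eps
    return t

def cross_entropy(log_probs, targets):
    """Cross-entropy H(t, p) given model log-probabilities; with smoothed
    targets this acts as a regularizer against over-confident predictions."""
    return -sum(t * lp for t, lp in zip(targets, log_probs))
```

With eps = 0.1 and four classes, the true class gets probability 0.925 and every class keeps 0.025, so the model is never pushed toward a fully saturated output.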

TuAT5
Room T5
Innovative Robot Designs I
Regular Session
Chair: Sousa Silva, Rafael | Colorado School of Mines

10:20-10:30, Paper TuAT5.1
Single Actuator Tendon Driven Two Finger Linkage Gripper with Strong Pinch and Adaptable Cylindrical Grasp

Unde, Jayant | Nagoya University |
Colan, Jacinto | Nagoya University |
Zhu, Yaonan | Nagoya University |
Aoyama, Tadayoshi | Nagoya University |
Hasegawa, Yasuhisa | Nagoya University |
Keywords: Innovative Robot Designs, HRI and Collaboration in Manufacturing Environments, Anthropomorphic Robots and Virtual Humans
Abstract: This paper presents the design and development of a single-actuator tendon-driven two-finger linkage gripper that can perform both a strong pinch and an adaptable cylindrical grasp. The gripper mechanism consists of an anthropomorphic linkage finger with an additional revolute joint driven by a single actuator, and a fixed thumb. The gripper can achieve a maximum pinch force of 11.7 N and an adaptable grasp ranging from 30 mm to 145 mm in diameter, making it suitable for various applications, such as pick-and-place tasks in robotics and automation. Moreover, the compliant design makes it suitable for safe physical human-robot interaction. In addition, the proposed linkage finger’s characteristics were evaluated through kinematic analysis, simulation, and experimental tests of a prototype. The proposed gripper design is simple, low-cost, and easy to implement, making it an attractive alternative to more complex and expensive gripper designs.

10:30-10:40, Paper TuAT5.2
Worth the Wait: Understanding How the Benefits of Performative Autonomy Depend on Communication Latency

Sousa Silva, Rafael | Colorado School of Mines |
Lieng, Michelle | Colorado School of Mines |
Muly, Emil | Colorado School of Mines |
Williams, Tom | Colorado School of Mines |
Keywords: Degrees of Autonomy and Teleoperation, Cooperation and Collaboration in Human-Robot Teams, Linguistic Communication and Dialogue
Abstract: Robots deployed in space exploration contexts need to efficiently communicate with both co-located and remote teammates to perform tasks and resolve points of uncertainty. In recent work, researchers have proposed Performative Autonomy, an autonomy design strategy for enabling language-capable robots in these contexts to enhance interactants' Situation Awareness. However, it is not yet clear how the efficacy of this autonomy design strategy might be impacted by the extreme latency that characterizes interplanetary communication. In this work, we thus present the results of the first study exploring the impact of interaction latency on the effectiveness of Performative Autonomy. Our results suggest that while Performative Autonomy exacerbates the increased task performance times required under high latency, this autonomy design strategy can be used without increasing cognitive load, even under substantial communication latency. Moreover, our results suggest that robots performing lower levels of autonomy were viewed as better teammates, and that this autonomy design strategy helped provide resilience to degradation to such perceptions that would otherwise be caused by increasing levels of latency. Overall, these results motivate further work within the new Performative Autonomy paradigm for both remote and proximal human-robot interactions, in both space-oriented and traditional, terrestrial, human-robot interaction domains.

10:40-10:50, Paper TuAT5.3
Yousu: A Mythical Character Robot Design for Public Scene Interaction

Sun, Qirui | Tsinghua University |
Guo, Yijie | Tsinghua University |
Yao, Zhihao | Tsinghua University |
Mi, Haipeng | Tsinghua University |
Keywords: Robots in art and entertainment, Personalities for Robotic or Virtual Characters, Curiosity, Intentionality and Initiative in Interaction
Abstract: With the advancement of interactive technology in the information age, the problem of "visual blindness" in the field of display design has become increasingly prevalent in public scene interaction design. Therefore, it has become crucial to address how new forms of interaction and interaction scenes can be adopted to attract the public. In the context of robotics' continuous development, robots are playing an increasingly prominent role as interactive subjects in public scene interaction experiences. In new fields such as digital entertainment, spatial experience, and new media art, various typical scenes of robot interaction have emerged. In this study, a window robot named "Yousu" was developed based on an ancient Chinese mythological character and deployed in the window of a bookstore in Beijing. A user experiment was conducted to investigate how to design a reasonable and effective character robot interaction in public scenes to enhance the interaction scenes' attractiveness.

10:50-11:00, Paper TuAT5.4
Development of a 3-DOF Interactive Modular Robot with Human-Like Head Motions

Moon, Chaerim | University of Illinois, Urbana-Champaign |
Yamsani, Sankalp | University of Illinois Urbana-Champaign |
Kim, Joohyung | University of Illinois at Urbana-Champaign |
Keywords: Innovative Robot Designs, Interaction Kinesics, Non-verbal Cues and Expressiveness
Abstract: When introducing robotic systems to home environments, there are several aspects to consider, such as accessibility of the robotic system and the capability to interact with human subjects. Thus, in this paper, a 3-DOF robotic sensor module is proposed to address these concerns. It includes a handy plug-and-play feature so that one module can be used in different spots at home. In addition, it delivers non-verbal, non-display communicative cues through human-like head motions while plugged into docking mounts in different states, including angled or moving. It can also track human subjects to detect and focus on users who try to interact with the robot. The performance of the robotic module was evaluated, and its compatibility with different systems was demonstrated.
|
|
11:00-11:10, Paper TuAT5.5 | |
HRITI - Human Robot Interaction with Translational Intelligence |
|
Mahale, Gopalkrishna | PES University |
Subramanian, Karpagavalli | PES University |
Srikantan, Maalavika | PES University |
Kulkarni, Vaishnavi | PES University |
R, Rathan | PES University |
Tripathi, Shikha | Faculty of Engineering PES University, Bangalore, India |
Keywords: Innovative Robot Designs, Assistive Robotics, Child-Robot Interaction
Abstract: In today's manufacturing and supply chain industries, robots play an increasingly prominent role. The presence of robots in the day-to-day life of humans is estimated to increase rapidly in the next couple of decades. Modern-day robots are designed to perform specific tasks rather than to connect with humans emotionally. Humanoid robots aim to solve this problem by resembling humans in appearance and body mechanisms. This paper focuses on designing and constructing "HRITI", a female humanoid robot capable of generating various facial expressions, voice outputs, and body movements, allowing it to track its environment and act accordingly. This is achieved by designing and 3D printing the robot's body mechanism, coupled with motors and an electronic control system consisting of various processing units, sensors, and actuators. The skin of the robot's face is made of liquid silicone using a 3D-printed mold and mold press. An 8-bit Atmel microcontroller is programmed with firmware written in embedded C to access sensor data and control the robot's body movement via AT commands, which can be accessed and controlled by a Raspberry Pi 4 combined with AI-accelerator hardware such as Google Coral. This will benefit artificial intelligence researchers by providing a low-cost, open-source robot hardware platform for research in socio-emotional interactions between humans and machines. Existing robotic faces are not natural and do not show many emotions. Another unique feature of this robot is its olfactory capabilities.
|
|
11:10-11:20, Paper TuAT5.7 | |
Proposal of a New Performance Partner: "Soft Flying Robot" |
|
Shido, Hiroki | Waseda University |
Nishi, Hiroko | Toyo Eiwa University |
Ishii, Hiroyuki | Waseda University |
Keywords: Robots in Art and Entertainment, Cooperation and Collaboration in Human-Robot Teams, Creating Human-Robot Relationships
Abstract: Unmanned aerial vehicles (UAVs) are used in many industrial fields, and human–drone interaction (HDI) has seen increasing research attention in recent years. Their ability to fly distinguishes UAVs from other robots, and novel social robots are expected to be developed from UAVs in the future. However, touch interaction is critical in social robotics, and ordinary UAVs tend to be unsafe to touch. Furthermore, studies on whole-body interaction between drones and humans, including dancing, are far less common than those with traditional robots. Therefore, we developed “Soft Flying Robot,” a new type of flying robot for HDI research. Its flight uses helium gas and piezoelectric air pumps, making it blade-free and safe for humans to touch and interact with. In addition, it is equipped with touch sensors and light-emitting units to enable clearer and more immersive interaction. We evaluated Soft Flying Robot by introducing it to a group of performers, observing them use it in a dance workshop, and asking them for their impressions. Initial results indicate that Soft Flying Robot induces new behaviors in users and draws visual attention. This paper mainly discusses the results of the qualitative analysis of Soft Flying Robot’s potential in human–drone co-creation.
|
|
TuAT6 |
Room T6 |
Novel Interfaces and Interaction Modalities I |
Regular Session |
Chair: Park, Juyoun | Korea Institute of Science and Technology |
|
10:20-10:30, Paper TuAT6.1 | |
Hands-Free Interface Using Breath for Robot-Assisted Operation |
|
Imai, Atsuhiro | Kogakuin University |
Misaki, Daigo | Kogakuin University |
Keywords: Novel Interfaces and Interaction Modalities, Medical and Surgical Applications, User-centered Design of Robots
Abstract: In this paper, we propose a novel hands-free interface for robot-assisted operation. Surgeons use both hands during surgery, making it difficult for them to operate the display. By using a modality other than the hands to operate the display, however, the surgeon can check the information necessary for the operation in real time. Therefore, we developed a system that recognizes breath from the temperature change on the mask surface. We then compared the proposed system with a directly recognized breath method. The proposed method matched the performance of the directly recognized breath method while minimizing the risk of contamination. We believe that the application of our developed system will contribute to improved safety and efficiency in the medical field.
|
|
10:30-10:40, Paper TuAT6.2 | |
A Gesture-Based Multimodal Interface for Human-Robot Interaction |
|
Uimonen, Mikael Petro Juhana | VTT Technical Research Centre of Finland |
Kemppi, Paul Mikael | Mr |
Hakanen, Taru | VTT Technical Research Centre of Finland |
Keywords: Novel Interfaces and Interaction Modalities, Cooperation and Collaboration in Human-Robot Teams, Detecting and Understanding Human Activity
Abstract: Surface electromyography (sEMG) has been proposed as one of the possible input modalities for gesture or proportional control-based human-robot interaction to relieve the operator from hand-held controllers. However, when it comes to mobile robotics, the applications have been limited, often providing only direct control over the velocity of the robot. In this work, we propose a multimodal interface for controlling mobile robots in collaborative settings. The robot navigates using a combination of its internal sensors, a LiDAR, and spatial input from a person-detecting neural network. The operator of the robot is identified from the view of the robot's camera as the person wearing an sEMG armband, although the main use of the armband is for detecting hand gestures for commanding the robot. While the navigation is dependent on the line of sight between the robot and the operator, using the armband for gesture detection allows the robot to be commanded regardless of occlusions or changes in lighting conditions. Gesture-detection neural networks are first trained and tested offline with multiple subjects. Then, the complete interface is evaluated as a proof of concept with an expert user performing a sequence of tasks in cooperation with a quadruped mobile robot (Boston Dynamics). We demonstrate the usability of the interface in a realistic environment and show great long-term online performance of the gesture detection model with an average F-score of 0.94.
|
|
10:40-10:50, Paper TuAT6.3 | |
Design and Validation of a Torso-Dynamics Estimation System (TES) for Hands-Free Physical Human-Robot Interaction |
|
Song, Seung Yun | University of Illinois at Urbana-Champaign |
Guo, Yixiang | University of Illinois at Urbana-Champaign |
Yuan, Chentai | University of Illinois at Urbana-Champaign |
Marin, Nadja | The University of Illinois at Urbana-Champaign |
Xiao, Chenzhang | University of Illinois at Urbana-Champaign |
Bleakney, Adam | University of Illinois |
Elliott, Jeannette | University of Illinois |
Ramos, Joao | University of Illinois at Urbana-Champaign |
Hsiao-Wecksler, Elizabeth | University of Illinois at Urbana-Champaign |
Keywords: Novel Interfaces and Interaction Modalities, Evaluation Methods, User-centered Design of Robots
Abstract: We designed and validated two interfaces for physical human-robot interaction that utilize torso motions for hands-free navigation control of riding or remote mobile robots. The Torso-dynamics Estimation System (TES), which consisted of an instrumented seat (Force Sensing Seat, FSS) and a wearable sensor (inertial measurement unit, IMU), was developed to quantify the translational and rotational motions of the torso, respectively. The FSS was constructed from six uniaxial load cells to output 3D resultant forces and torques, which were used to compute the translational movement of the 2D center of pressure (COP) under the seated user. Two versions of the FSS (Gen 1.0 and 2.0) with different load cell layouts, materials, and manufacturing methods were developed to showcase the versatility of the FSS design and construction. Both FSS versions utilized low-cost components and a simple calibration protocol to correct for dimensional inaccuracies. The IMU, attached to the user’s upper chest, used a proprietary algorithm to compute the 3D torso angles without relying heavily on magnetometers, minimizing errors from electromagnetic noise. A validation study was performed on eight test subjects (six able-bodied users and two manual wheelchair users with reduced torso range of motion) to validate TES estimations by comparing them to data collected with a research-grade force plate and motion capture system. TES readings displayed high accuracy (average RMSE of 3D forces, 3D torques, 2D COP, and torso angles were well below the maximum limits of 5 N, 5 Nm, 10 mm, and 6˚, respectively).
|
|
10:50-11:00, Paper TuAT6.4 | |
Intuitive Arm-Pointing Based Home-Appliance Control from Multiple Camera Views |
|
Yokota, Masae | Chuo University |
Majima, Soichiro | Chuo University |
Pathak, Sarthak | Chuo University |
Umeda, Kazunori | Chuo University |
Keywords: Novel Interfaces and Interaction Modalities, Human Factors and Ergonomics
Abstract: The purpose of this paper is to construct and evaluate a system for operating home appliances by pointing. In Human Machine Interface (HMI) design, a natural operating method is important. Pointing is a universal gesture for selecting an object. Arm-pointing at an appliance and selecting it to perform a simple operation is a very intuitive and easy-to-use method of operation. Many studies prepare data on the locations and sizes of appliances in advance. In this paper, a camera-based system is proposed in which the user can simply point at an appliance to select and operate it. The user's pointing direction and appliance locations are estimated automatically from image frames. This eliminates the need for any preparation beforehand, and the appliances can be moved during operation. The proposed method was implemented and experimentally evaluated. The average recognition rates were about 87% and 57% when a humidifier and a TV were operated, respectively.
|
|
11:00-11:10, Paper TuAT6.5 | |
Adapting Behavior and Persistence Via Reinforcement and Self-Emotion Mediated Exploration in a Social Robot |
|
Assunção, Gustavo | Institute of Systems and Robotics - University of Coimbra |
Sorrentino, Alessandra | University of Florence |
Dias, Jorge | Khalifa University |
Castelo-Branco, Miguel | University of Coimbra, Institute for Biomedical Imaging and Tran |
Menezes, Paulo | Institute of Systems and Robotics |
Cavallo, Filippo | University of Florence |
Keywords: Social Intelligence for Robots, Creating Human-Robot Relationships, Social Learning and Skill Acquisition Via Teaching and Imitation
Abstract: Adaptability and behavioral diversity are core components of social interactions between humans. Naturally, these are traits research should strive to achieve in social robotics so agents may be better accepted and engage with their user peers. In this paper, we propose a novel activity modulation to increase behavioral diversity, based on a surprise-exploration correlation model, in a social robot undergoing behavioral optimization to user state and preference. This framework was tested with 21 participants to assess preferences as well as the impact that action variability and persistence would have on user perception of the robot. Results indicate a positive effect of persistence and variability over robot likability as well as user engagement, contributing insight for future research in social robotics.
|
|
11:10-11:20, Paper TuAT6.6 | |
Characterizing the Sense of Embodiment: The Development of a Sensorimotor Robotic Platform |
|
Hong, Kihun | University of California, Davis |
Trieu, Patrick | University of California, Davis |
Schofield, Jonathon | University of California, Davis |
Keywords: Embodiment, Empathy and Intersubjectivity, Evaluation Methods, Creating Human-Robot Relationships
Abstract: Embodiment is the experience of owning and controlling our bodies. It is a product of cognitive sensorimotor integration processes and by manipulating what we see and feel, we can extend the perceived boundaries of our bodies to include non-biological objects. Consequently, it is of interest to many human-robotic control applications that aim to promote the seamless interaction of the human user and machine. The sense of embodiment has three constituent components that robotic systems may engage with: the Sense of Ownership (SoO), the Sense of Agency (SoA), and our peripersonal space (PPS). Despite the recognition that each is linked to the other, how these components interact to form a cohesive sense of embodiment remains poorly understood. To address this issue, in this work, we designed an embodiment research platform that allows us to explore the relationships between the SoO, SoA, and PPS. We developed a sensorimotor interface, experimental setup, and protocol in which a robotic hand mirrored the actions of the participants’ real hand. Participants performed a series of grasping tasks in which we manipulated the cutaneous sensations they experienced and the position of the robot relative to their body. We then employed multiple independent measures of SoO, SoA, and PPS. Our results indicated that (1) cutaneous feedback improved all three components of embodiment, and (2) there appeared to be multiple complex interrelationships among the three components and their measures.
|
|
TuBT1 |
Room T1 |
HRI in Academia and Industry: Bridging the Gap II |
Special Session |
Chair: Eum, Younseal | Sookmyung Women’s University |
|
11:30-11:40, Paper TuBT1.1 | |
Developing Autonomous Behaviors for a Consumer Robot to Be Near People in the Home (I) |
|
Lee, Jin Joo | Amazon |
Atrash, Amin | Amazon Lab126 |
Glas, Dylan F. | Amazon |
Fu, Hanxiao | Amazon |
Keywords: Long-term Experience and Longitudinal HRI Studies, Robot Companions and Social Robots, Motion Planning and Navigation in Human-Centered Environments
Abstract: This paper describes the development of algorithms that decide when to move, where to move, and how to look for people in a home environment. We introduce a design framework as a tool to guide the development of a social robot to proactively be with people for companionship and assistance in the home. Through a series of experiments ranging from simulations to longitudinal A/B studies, we demonstrate how to utilize the design framework to help guide the evaluation and selection of solutions. We deployed our autonomous robot in a long-term in-situ study and found our proposed approach to be more capable of being co-present with its household members compared to a baseline approach. Conducted in an industry setting, our research approach departs from typical academic practices as the motivations are inherently different. We share our perspective on the differences of industry research when developing a social robot as a commercial product.
|
|
11:40-11:50, Paper TuBT1.2 | |
Using Decision Support in Human-In-The-Loop Experimental Design Toward Building Trustworthy Autonomous Systems (I) |
|
Gregory, Jason M. | US Army Research Laboratory |
Sanchez, Felix | Booz Allen Hamilton |
Lancaster, Eli | Booz Allen Hamilton |
Agha-mohammadi, Ali-akbar | NASA-JPL, Caltech |
Gupta, Satyandra K. | University of Southern California |
Keywords: Evaluation Methods, User-centered Design of Robots
Abstract: Experimental design of autonomous systems involves defining experimental inputs to maximize the experimenter's information gained, minimize costs, and balance risk. This effectively leads to improved understanding and trustworthiness, which are necessary for deployment in real-world settings. Since experimental design is inherently a human-in-the-loop, sequential decision making problem, and decisions are being made about complex systems, an investigation into decision-making quality and decision-supporting methods is warranted. In this work, we investigate a decision support system (DSS) to augment the human's experimental design decision making abilities, and conduct an exploratory user study to investigate the potential for decision support. Our findings show that experimenters, including experienced field roboticists, make suboptimal decisions and mistakes during the experimental design process, which suggests robotics research could benefit from DSSs. Our proposed DSS shows promise in some select aspects of experimental design, including helping to reduce suboptimal decisions, and participants in the user study reported favorable opinions of using such a system, including a sense of usefulness and lack of burden. The broader implication of this work is the identification of decision support in experimental design as one way to help bridge the gap between academia and industry by way of accelerated, informative experimentation and increased system explainability.
|
|
11:50-12:00, Paper TuBT1.3 | |
Defining Interaction As Coordination Benefits Both HRI Research and Robot Development: Entering Service Interactions (I) |
|
Fischer, Kerstin | University of Southern Denmark |
|
|
12:00-12:10, Paper TuBT1.4 | |
Robotic Tutors for Nurse Training: Opportunities for HRI Researchers (I) |
|
Quintero-Peña, Carlos | Rice University |
Qian, Peizhu | Rice University |
Fontenot, Nicole | Houston Methodist |
Chen, Hsin-Mei | Houston Methodist |
Hamlin, Shannan | Houston Methodist |
Kavraki, Lydia | Rice University |
Unhelkar, Vaibhav V. | Rice University |
Keywords: Medical and Surgical Applications, Robots in Education, Therapy and Rehabilitation, Applications of Social Robots
Abstract: An ongoing nurse labor shortage has the potential to impact patient care and well-being across the entire healthcare system. Moreover, more complex and sophisticated nursing care is required for hospital patients today, forcing hospital-based nurses to carry out frequent training and assessment procedures, both to onboard new nurses and to validate the skills of existing staff, guaranteeing best practices and safety. In this paper, we identify an opportunity for the development and integration of intelligent robot tutoring technology into nursing education to tackle the growing challenges of the nurse deficit. To this end, we identify specific research problems in the area of human-robot interaction that will need to be addressed to enable robot tutors for nurse training.
|
|
12:10-12:20, Paper TuBT1.5 | |
Teaching a Robot Where to Park: A Scalable Crowdsourcing Approach (I) |
|
Bryant, De'Aira | Georgia Institute of Technology |
Etiene, Tiago | Amazon Lab126 |
Howard, Ayanna | Georgia Institute of Technology |
Smart, William | Oregon State University |
Glas, Dylan F. | Amazon |
Keywords: Social Intelligence for Robots, Detecting and Understanding Human Activity, Motion Planning and Navigation in Human-Centered Environments
Abstract: For social robots to successfully integrate into daily life in home environments, they will need reliable models of the way people perceive and use space in the home. This paper explores the problem of obtaining annotated training data at scale for subjective judgments about spatial locations. Focusing on the use case of identifying good and bad parking spots for a social robot operating in a home environment, two experiments are presented. The first study shows that the presentation of context-rich 3D images to human annotators yields notably different outcomes from those obtained when using 2D robot navigation maps. We attribute the source of these differences to a set of features visible only in the 3D views and introduce a technique for labeling these features on the 2D maps. The second study reveals that using labeled 2D maps produces annotation data very similar to that obtained using 3D images. Since a labeled 2D map can be generated at a fraction of the cost of a full set of 3D views, we recommend this method as a scalable approach to collecting subjective spatial data annotations in everyday environments.
|
|
12:20-12:30, Paper TuBT1.6 | |
From Assistive Devices to Manufacturing Cobot Swarms (I) |
|
Li, Monica Mengqi | Polytechnique Montreal |
Belzile, Bruno | ETS Montreal |
Imran, Ali | École De Technologie Supérieure ÉTS |
Birglen, Lionel | Ecole Polytechnique De Montreal |
Beltrame, Giovanni | Ecole Polytechnique De Montreal |
St-Onge, David | Ecole De Technologie Superieure |
Keywords: HRI and Collaboration in Manufacturing Environments, User-centered Design of Robots, Detecting and Understanding Human Activity
Abstract: This paper provides an overview of the latest trends in robotics research and development, with a particular focus on applications in manufacturing and industrial settings. We highlight recent advances in robot design, including cutting-edge collaborative robot mechanics and advanced safety features, as well as exciting developments in perception and human-swarm interaction. By examining recent contributions from Kinova, a leading robotics company, we illustrate the differences between industry and academia in their approaches to developing innovative robotic systems and technologies that enhance productivity and safety in the workplace. Ultimately, this paper demonstrates the tremendous potential of robotics to revolutionize manufacturing and industrial operations, and underscores the crucial role of companies like Kinova in driving this transformation forward.
|
|
TuBT3 |
Room T3 |
Assistive Robotics I |
Regular Session |
Chair: Lee, Jongwon | Korea Institute of Science and Technology |
|
11:30-11:40, Paper TuBT3.1 | |
Haptically-Displayed Proprioceptive Feedback Via Simultaneous Rotary Skin Stretch and Vibrotactile Stimulation |
|
Lima, Bryanna | Georgia Institute of Technology |
Hammond III, Frank L. | Georgia Institute of Technology |
Keywords: Assistive Robotics
Abstract: The provision of proprioceptive feedback can be crucial to the use of wearable devices (prostheses) and teleoperated robots. This paper focuses on the development and evaluation of a wearable haptic feedback system capable of displaying proprioceptive information through rotary skin stretch and vibrotactile sensations. An experimental study was conducted to determine how well subjects can perceive skin stretch, applied to the posterior aspect of the upper arm by a rotating end effector, as proprioceptive feedback from a rotary input dial (potentiometer), and whether vibration can improve the accuracy of that proprioception. Results show that with skin stretch feedback, subjects could locate the angle of the input dial to within 5.48 degrees, compared to 5.82 degrees when participants had no feedback (used hand proprioception alone). With vibration alone and with skin stretch plus vibration, accuracy improved to 4.17 degrees and 4.25 degrees, respectively. Though proprioception of input dial angles improved with all forms of feedback, the time for subjects to determine the angle increased by as much as 45%, from 4.4 sec to 6.4 sec.
|
|
11:40-11:50, Paper TuBT3.2 | |
Using the OptiBand to Increase the Long-Range Spatial Perception of People with Vision Disabilities |
|
Quick, Ryan Racel | Oregon State University |
Bontula, Anisha | Oregon State University |
Puente, Karina | Oregon State University |
Fitter, Naomi T. | Oregon State University |
Keywords: Assistive Robotics, User-centered Design of Robots, Robots in Education, Therapy and Rehabilitation
Abstract: Mobility aids such as the white cane provide close-range information to help people with vision disabilities navigate the world. However, this technology has a limited sensing range and does not provide long-distance scene awareness. This paper proposes a vibrotactile feedback device to fill this gap: the OptiBand, which was developed based on design criteria from a blind stakeholder. The presented user study (N=27) compared the OptiBand to a proxy for existing shorter-range mobility aids, considered two potential sensed distance-to-vibration mapping strategies, and covered the use cases of locating and approaching objects of interest. Results of the object-locating trials showed that using the OptiBand led to faster and more successful performance, as well as lower task load and more satisfaction with the device, compared to using a proxy state-of-the-art device. A final trial with the original stakeholder demonstrated that the design criteria were met and supplied insights for the next iteration of participatory design for the OptiBand. Those who are interested in assistive devices for people with vision disabilities can benefit from this work.
|
|
11:50-12:00, Paper TuBT3.3 | |
Tracker: Model-Based Reinforcement Learning for Tracking Control of Human Finger Attached with Thin McKibben Muscles |
|
Saito, Daichi | Tokyo Institute of Technology |
Nagatomo, Eri | Tokyo Institute of Technology |
Pardomuan, Jefferson | Tokyo Institute of Technology |
Koike, Hideki | Tokyo Institute of Technology |
Keywords: Machine Learning and Adaptation, Assistive Robotics, User-centered Design of Robots
Abstract: To adopt a soft hand exoskeleton to support activities of daily living, it is necessary to control finger joints precisely with the exoskeleton. The problem of controlling joints to follow a given trajectory is called the tracking control problem. In this study, we focus on the tracking control problem of a human finger attached to thin McKibben muscles. Achieving precise control with thin McKibben muscles poses two problems: one is the complex characteristics of the muscles, for example, non-linearity, hysteresis, and uncertainties in the real world; the other is the difficulty of accessing a precise model of the muscles and human fingers. To solve these problems, we adopted DreamerV2, a model-based reinforcement learning method, but the target trajectory cannot be generated by the learned model. Therefore, we propose Tracker, an extension of DreamerV2 for the tracking control problem. In our experiments, Tracker achieved an approximately 81% smaller error than PID for the control of a two-link manipulator that imitates the part of the human index finger from the metacarpal bone to the proximal bone. Tracker achieved control of the third joint of the human index finger with a small error after being trained for approximately 60 minutes. In addition, it took approximately 15 minutes, less than the time required for the initial training, to regain almost the same accuracy by fine-tuning the policy pre-trained on the user’s finger after the thin McKibben muscles were removed and reattached.
|
|
12:00-12:10, Paper TuBT3.4 | |
An EMG-Based Spatio-Spectro-Temporal Index for Muscle Fatigue Quantification |
|
Dasanayake, Nimantha | University of Moratuwa |
Gopura, R.A.R.C. | Department of Mechanical Engineering |
Ranaweera, Pubudu | University of Moratuwa |
Lalitharatne, Thilina Dulantha | Queen Mary University of London |
Keywords: Assistive Robotics, HRI and Collaboration in Manufacturing Environments, Machine Learning and Adaptation
Abstract: This study introduces a new EMG-based muscle fatigue index that combines signal features in spatial, temporal, and spectral domains. This index incorporates a novel spatial EMG feature: Eigen Ratio Based Fatigue Index (ERFI) that can capture the variations in the active motor unit distribution during dynamic muscle fatiguing exercises. The new fatigue index maps the ERFI, a wavelet-based feature, a spectral feature, and an amplitude-based feature to the reduction in maximum voluntary contraction, which is considered the direct measure of muscle fatigue. The mapping function was implemented as a Multi-layer Perceptron (MLP). To evaluate the fatigue index, several fatigue tests under various speed and load conditions were conducted on four subjects. ERFI showed a significant variation (p < 0.01) over time for more than 85% of the tests. Several MLP input configurations to predict muscle fatigue were compared in this study based on various combinations of EMG features. An input configuration that used the novel spatial EMG feature among five other features performed the best and was able to predict muscle fatigue with a mean coefficient of determination over 65%. It was also noticed that ERFI’s relationship with muscle fatigue is less dependent on the load and speed of the cyclic exercise when compared with other EMG features that were proposed in previous studies. Thus, it can be considered a better alternative to use in EMG-based control of active prosthetic and orthotic devices to compensate for the effect of muscle fatigue.
|
|
12:10-12:20, Paper TuBT3.5 | |
Walking Outdoor with a Zoomorphic Mobile Robot: Exploration of Robot-Assisted Physical Activities for Older Adults |
|
Wu, Chia-Hsin | Tampere University |
Ahtinen, Aino | Tampere University |
Vaananen, Kaisa | Tampere University |
Keywords: Assistive Robotics, Robot Companions and Social Robots, User-centered Design of Robots
Abstract: In the field of human-robot interaction (HRI), assistive robots have been integrated to promote social interactions and physically active lifestyles in the wellness context of eldercare. Despite their potential benefits, the current applications of assistive robots are constrained by limited usage environments and their predefined roles. Our research aims to explore older adults’ perceptions of assistive robots and approaches for delivering motivational physical activities by integrating Spot, a zoomorphic mobile robot, as an outdoor walking guide. We conducted a participatory design study at a Finnish nursing home, consisting of three phases: the co-design workshop, the conceptual design, and the field study. This qualitative research collected data through observations and interviews. The findings report positive attitudes and natural social interactions among older adults during the outdoor physical activities assisted by the Spot robot. Based on these findings, we present a set of design implications for wellness robots in eldercare, including robot roles and tasks, methods for introducing robot literacy, and approaches to presenting robotic solutions to older adults.
|
|
12:20-12:30, Paper TuBT3.6 | |
Understanding Human-Robot Teamwork in the Wild: The Difference between Success and Failure for Mobile Robots in Hospitals |
|
Tornbjerg Eriksen, Kristina | Aalborg University |
Bodenhagen, Leon | University of Southern Denmark |
Keywords: Cooperation and Collaboration in Human-Robot Teams, Medical and Surgical Applications
Abstract: This paper communicates findings from an ethnographically inspired field study of human-robot teamwork in a hospital. This is a highly significant topic, as the use of robots has expanded substantially in recent years and robots are increasingly deployed in naturalistic environments, including hospitals, where they are expected to take part in socio-technical practices and collaborate with humans in teams. The field study took place in a Danish hospital where mobile robots were installed to take on courier tasks, and it identified two primary human-robot teams in the given setting: one consisting of the hospital’s Technical Manager and the mobile robots, and another consisting of Medical Laboratory Technicians and the mobile robots. In daily hospital operations, the team comprising Medical Laboratory Technicians depended strongly on the team encompassing the Technical Manager. In addition, two main elements affected the teamwork between hospital staff and mobile robots in the given hospital. First, a clear division of responsibility for the robots, including well-defined, simple tasks and instant troubleshooting, was important in ensuring collaborative teamwork. Second, environmental factors were crucial, as the hospital setting must suit both staff and robots for the teamwork to succeed. The results were evaluated against a similar, earlier study conducted at another Danish hospital and consequently reveal how a clear division of responsibility for robots and appropriate environmental infrastructure allow the teamwork between humans and robots to flow satisfactorily.
|
|
TuBT4 |
Room T4 |
Non-Verbal Cues and Expressiveness II |
Regular Session |
Chair: Celiktutan, Oya | King's College London |
|
11:30-11:40, Paper TuBT4.1 | |
To Cross or Not-To-Cross: A Robotic Object for Mediating Interactions between Autonomous Vehicles and Pedestrians |
|
Chakravarthi Kumaran, Srivatsan | Media Innovation Lab, School of Communication, Reichman University |
Oberlender, Agam | Media Innovation Lab, School of Communication, Reichman University |
Grishko, Andrey | Media Innovation Lab, Interdisciplinary Center Herzliya |
Megidish, Benny | Media Innovation Lab, the Interdisciplinary Center (IDC) Herzliya |
Erel, Hadas | Media Innovation Lab, Interdisciplinary Center Herzliya |
Keywords: Non-verbal Cues and Expressiveness, Applications of Social Robots, User-centered Design of Robots
Abstract: A main challenge in incorporating autonomous vehicles (AVs) into urban environments concerns communication with pedestrians crossing in front of them. We introduce a robotic object for facilitating natural crossing interactions by leveraging pedestrians' existing crossing habits. The robot was designed to fit on the vehicle's dashboard and simulate head gestures to indicate whether it is safe to cross. In an in-person experiment, we evaluated the perception of the robot as mediating the AV's communication and the comprehension of its gestures. Participants were asked to cross the road in front of a vehicle that was presented as having autonomous driving capabilities. The robot was placed on the vehicle's dashboard in a location where pedestrians habitually look. The results indicated that when asked to cross, participants immediately looked toward the driver's seat and easily noticed the robot. They consistently categorized the robot's gestures into "cross" or "do-not-cross" categories and reported a strong sense of safety. Our findings suggest that robotic objects are a promising technology for mediating natural pedestrian-AV communication.
|
|
11:40-11:50, Paper TuBT4.2 | |
Advantages of Multimodal versus Verbal-Only Robot-To-Human Communication with an Anthropomorphic Robotic Mock Driver |
|
Schreiter, Tim | Örebro University |
Morillo-Mendez, Lucas | Örebro University |
Chadalavada, Ravi Teja | Örebro University |
Rudenko, Andrey | Robert Bosch GmbH |
Billing, Erik Alexander | University of Skövde |
Magnusson, Martin | Örebro University |
Arras, Kai Oliver | Bosch Research |
Lilienthal, Achim J. | Örebro University |
Keywords: Anthropomorphic Robots and Virtual Humans, Multimodal Interaction and Conversational Skills, Cooperation and Collaboration in Human-Robot Teams
Abstract: Robots are increasingly used in shared environments with humans, making effective communication a necessity for successful human-robot interaction. In our work, we study a crucial component: active communication of robot intent. Here, we present an anthropomorphic solution where a humanoid robot communicates the intent of its host robot acting as an “Anthropomorphic Robotic Mock Driver” (ARMoD). We evaluate this approach in two experiments in which participants work alongside a mobile robot on various tasks, while the ARMoD communicates a need for human attention, when required, or gives instructions to collaborate on a joint task. The experiments feature two interaction styles of the ARMoD: a verbal-only mode using only speech and a multimodal mode, additionally including robotic gaze and pointing gestures to support communication and register intent in space. Our results show that the multimodal interaction style, including head movements and eye gaze as well as pointing gestures, leads to more natural fixation behavior. Participants naturally identified and fixated longer on the areas relevant for intent communication, and reacted faster to instructions in collaborative tasks. Our research further indicates that the ARMoD intent communication improves engagement and social interaction with mobile robots in workplace settings.
|
|
11:50-12:00, Paper TuBT4.3 | |
A Study on Customer's Perception of Robot Nonverbal Communication Skills in a Service Environment |
|
Tuyen, Nguyen Tan Viet | King's College London |
Okazaki, Shintaro | King's College London |
Celiktutan, Oya | King's College London |
Keywords: Non-verbal Cues and Expressiveness, Robot Companions and Social Robots
Abstract: Nonverbal communication has the potential to enable robots to interact with customers in service environments efficiently. While previous work in this domain has focused on understanding customers' interaction experience from different aspects, there is a lack of studies on the configuration of multimodal interaction (i.e., the combination of nonverbal gestures, voice, and touch) in service environments and on the effect of nonverbal communication styles when performed in this setting. This paper aims to address the gap in the literature by introducing a multimodal HRI framework operated in a cafe setting. A systematic study is conducted with 171 customers. It is followed by an in-depth analysis based on objective and subjective measurements to build an understanding of customers' attitudes towards the robot's nonverbal behaviours.
|
|
12:00-12:10, Paper TuBT4.4 | |
Hey Robot, It’s Not What You Say, It’s How You Say It |
|
Miniotaite, Jura | KTH Royal Institute of Technology |
Wang, Siyang | KTH, Royal Institute of Technology |
Beskow, Jonas | KTH |
Gustafson, Joakim | KTH |
Szekely, Eva | KTH Royal Institute of Technology |
Pereira, Andre | KTH Royal Institute of Technology |
Keywords: Non-verbal Cues and Expressiveness, Linguistic Communication and Dialogue, Evaluation Methods
Abstract: Many robots use their voice to communicate with people in spoken language, but the voices commonly used for robots are often optimized for transactional interactions rather than social ones. This can limit their ability to create engaging and natural interactions. To address this issue, we designed a spontaneous text-to-speech tool and used it to author natural and spontaneous robot speech. A crowdsourcing evaluation methodology is proposed to compare this type of speech to natural speech and state-of-the-art text-to-speech technology, both in disembodied and embodied form. We created speech samples in a naturalistic setting of people playing tabletop games and conducted a user study evaluating Naturalness, Intelligibility, Social Impression, Prosody, and Perceived Intelligence. The speech samples were chosen to represent three contexts that are common in tabletop games, and the contexts were introduced to the participants who evaluated the speech samples. The study results show that the proposed evaluation methodology allowed for a robust analysis that successfully compared the different conditions. Moreover, the spontaneous voice met our target design goal of being perceived as more natural than a leading commercial text-to-speech voice.
|
|
12:10-12:20, Paper TuBT4.5 | |
Longitudinal Evolution of Coachees’ Behavioural Responses to Interaction Ruptures in Robotic Positive Psychology Coaching |
|
Spitale, Micol | University of Cambridge |
Axelsson, Minja | University of Cambridge |
Kara, Neval | Cankaya University |
Gunes, Hatice | University of Cambridge |
Keywords: Non-verbal Cues and Expressiveness, Affective Computing, Long-term Experience and Longitudinal HRI Studies
Abstract: Robotic mental well-being coaches could be used to help people maintain their well-being and improve access to mental healthcare. In coaching, the alliance between the coach and coachee is important for the success of the practice. However, this alliance might be negatively affected by interaction ruptures (e.g., the robot making mistakes and the user feeling awkward) that still commonly occur in human-robot interactions. Therefore, robotic coaches should be able to recognize ruptures occurring during their interactions with human users to guarantee the success of the well-being practice. To this end, we analyse coachee behavioural responses to interaction ruptures during a robotic positive psychology coaching practice and how these behavioural cues evolve over time. We focus our analysis on a dataset we collected in a previous work, where 26 participants interacted with either a QTrobot or a Misty II robot at their workplace over 4 weeks. We undertake a longitudinal analysis of coachees' multimodal non-verbal cues (i.e., facial expressions, vocal acoustic features, and body pose features) to investigate the contribution of individual modalities for detecting interaction ruptures. Our results show that coachees: i) displayed facial cues of rupture (e.g., laughing at the robot) and suspicion more in the first week than in the last week; ii) talked more and were less silent in the last week than in the previous weeks; and iii) exhibited a higher number of hand-over-face gestures (a cue for self-disclosure) in the last week than in the previous weeks. Our findings aim to inform the development of AI models for multimodal detection of interaction ruptures, which can be used to improve the effectiveness and success of robotic well-being coaching.
|
|
12:20-12:30, Paper TuBT4.6 | |
Touch Me Right: Lateral Preferences During Touch in Human-Robot-Interactions |
|
Hitzmann, Arne | Advanced Telecommunications Research Institute International |
Sumioka, Hidenobu | ATR |
Shiomi, Masahiro | ATR |
Keywords: Social Touch in Human–Robot Interaction, Social Intelligence for Robots
Abstract: This study investigated the influence of behavior variations in a robot-initiated greeting. Previous studies on the physical interaction between humans and robots typically focused on constructing behaviors to increase the perceived naturalness of the interaction; the specific parameters of the physical contact were of secondary interest. The experiments in our study were designed to focus mainly on the physical aspect of the robot's interaction with the participants. We varied two parameters of our interaction, which consisted of a shoulder tap to initiate a greeting by the robot. This scenario was selected to withdraw the robot from the participant's awareness as much as possible. The parameters were the timing of the vocal salutation as well as an optional social cue in the form of a waving motion the robot executed when the participant looked at it after the shoulder tap. Contrary to our predictions, the results showed that neither the timing nor the performance of the greeting motion had a significant effect on the naturalness perceived by the participants. Instead, the most influential factor on the participants was the shoulder (left or right) on which the robot tapped them.
|
|
TuBT5 |
Room T5 |
Innovative Robot Designs II |
Regular Session |
Chair: Tanaka, Fumihide | University of Tsukuba |
|
11:30-11:40, Paper TuBT5.1 | |
Orthrus: A Dual-Arm Quadrupedal Robot for Mobile Manipulation and Entertainment Applications |
|
Yamsani, Sankalp | University of Illinois Urbana-Champaign |
Taylor, Sean | University of Illinois at Urbana Champaign |
Shin, Kazuki | University of Illinois at Urbana-Champaign |
Hong, Jooyoung | University of Illinois at Urbana-Champaign |
Mathur, Dhruv | John Deere Intelligent Solutions Group |
Gim, Kevin | University of Illinois, Urbana-Champaign |
Kim, Joohyung | University of Illinois at Urbana-Champaign |
Keywords: Innovative Robot Designs, Robots in art and entertainment, User-centered Design of Robots
Abstract: In this paper, we present an add-on system that enhances the capabilities of a quadrupedal robot. The add-on system efficiently allows the integration of two 6-DOF manipulators with a quadruped as a single system. The design of the system is developed with modularity as an important principle, allowing for versatility and adaptability in various applications of mobile manipulation. With the modular design, the system can easily be used not only for mobile-manipulation tasks but also as a system for human entertainment. We show the modularity and versatility of the system through applications in a home setting and various entertainment settings. The proposed system leads to an enhanced level of human-robot interaction with more engaging and interactive experiences.
|
|
11:40-11:50, Paper TuBT5.2 | |
A robotIc Radial palpatIon mechaniSm for Breast Examination (IRIS) |
|
Jenkinson, George | University of Bristol |
Tiemann, Karl | University of Bristol |
Papathanasiou, Angeliki | University of Bristol |
Bewley, Jonny | University of Bristol |
Conn, Andrew | University of Bristol |
Tzemanaki, Antonia | University of Bristol |
Keywords: Medical and Surgical Applications, User-centered Design of Robots, HRI and Collaboration in Manufacturing Environments
Abstract: In this paper, we present IRIS, a manipulator that is capable of applying contact forces when interacting with an object/stimulus, in increments on the order of mN, up to 6 N at 5 radial locations simultaneously. IRIS is based upon the contractile mechanism of its namesake: the iris diaphragm often found in cameras. Complete coverage of the surface of a realistic breast phantom is demonstrated using this contractile mechanism combined with control over the angle of incidence between the sensors and the stimulus using sim-to-real concepts. A significant amount of the complexity in control is outsourced to the morphology and compliance of the mechanism. The manipulator demonstrates the technological feasibility of a robotic clinical breast examination.
|
|
11:50-12:00, Paper TuBT5.3 | |
A Two-Layer Haptic Device for Presenting a Wide Range of Softness and Hardness Using a Pneumatic Balloon and a Mechanical Piston |
|
Sasaki, Takuya | Nara Institute of Science and Technology |
Hagimori, Daiki | Nara Institute of Science and Technology |
Perusquia-Hernandez, Monica | Nara Institute of Science and Technology |
Isoyama, Naoya | Nara Institute of Science and Technology |
Uchiyama, Hideaki | Nara Institute of Science and Technology |
Kiyokawa, Kiyoshi | Osaka University |
Kuroda, Yoshihiro | University of Tsukuba |
Keywords: Innovative Robot Designs, User-centered Design of Robots, Virtual and Augmented Tele-presence Environments
Abstract: Although a variety of haptic devices are used for virtual reality (VR) and augmented reality (AR) experiences, few can present a wide range of softness-hardness of the surface of the virtual objects. We propose a haptic device that can present a wide range of softness-hardness by using a two-layered structure consisting of a pneumatic balloon and a mechanical piston. Through a series of user studies, we confirmed that the prototype can present five levels of softness and three levels of hardness, and that the prototype device improves the VR experience in terms of realism, enjoyment, and comfort for virtual objects with a variety of softness/hardness.
|
|
12:00-12:10, Paper TuBT5.4 | |
Exploring the Design of Robot Mediation with Bodily Contact for Remote Conflict |
|
Wang, Ruhan | Tsinghua University |
Li, Chih-Heng | Tsinghua University |
Guo, Yijie | Tsinghua University |
Tanaka, Fumihide | University of Tsukuba |
Mi, Haipeng | Tsinghua University |
Keywords: Innovative Robot Designs, User-centered Design of Robots, Robot Companions and Social Robots
Abstract: Interpersonal conflicts are often more difficult to mediate when communicating remotely. The lack of social cues and external mediation makes it difficult for positive conflict behaviors to occur. To this end, robots have been shown to have the potential as mediators. In this paper, we attempt to discuss how to design appropriate bodily contact interactions for the different roles of a robot mediator so as to facilitate the effectiveness of its mediation. We first conduct a pilot interview to probe the potential roles and design elements of robot contact in this study. Then, we explore the relationship between these roles and design elements through a 16-participant design workshop. Finally, we analyze these findings and propose design suggestions for future robot mediator design.
|
|
12:10-12:20, Paper TuBT5.5 | |
Pneumatically Driven Ophthalmologic Surgery Robot with Intraocular Pressure Control |
|
Sogabe, Maina | The University of Tokyo |
Ito, Keiya | The University of Tokyo |
Miyazaki, Tetsuro | The University of Tokyo |
Ito, Norihiko | Tottori University |
Kawashima, Kenji | The University of Tokyo |
Keywords: Medical and Surgical Applications, Assistive Robotics
Abstract: We developed a novel ophthalmologic surgery robot for needle insertion to assist ophthalmologists. The robot consists of a single-degree-of-freedom injection device with a needle on its tip and an intraocular pressure control device. The device is driven by a pneumatic cylinder and uses the back-drivability of the actuator to estimate the needle force. The insertion force of the needle is estimated from the position and pressure difference of the cylinder. Recognition of this insertion force is used as a trigger to start intraocular pressure control and to assist smooth insertion of the needle delivering the medical solution. The pressure is increased by controlling the syringe with a pneumatic cylinder. The operator can manually inject medication after the needle is inserted. The effectiveness of the robot was confirmed in experiments using porcine eyes. The robot successfully inserted the needle when the target intraocular pressure was 2.4 kPa.
|
|
12:20-12:30, Paper TuBT5.6 | |
Open-Ended Multi-Modal Relational Reasoning for Video Question Answering |
|
Luo, Haozheng | Northwestern University |
Qin, Ruiyang | Georgia Institute of Technology |
Xu, Chenwei | Northwestern University |
Ye, Guo | Northwestern University |
Luo, Zening | Northwestern University |
Keywords: Innovative Robot Designs, Multimodal Interaction and Conversational Skills, Linguistic Communication and Dialogue
Abstract: In this paper, we introduce a robotic agent specifically designed to analyze external environments and address participants' questions. The primary focus of this agent is to assist individuals using language-based interactions within video-based scenes. Our proposed method integrates video recognition technology and natural language processing models within the robotic agent. We investigate the crucial factors affecting human-robot interactions by examining pertinent issues arising between participants and robot agents. Our experimental findings reveal a positive relationship between trust and interaction efficiency. Furthermore, our model demonstrates a 2% to 3% performance enhancement in comparison to other benchmark methods.
|
|
12:30-12:40, Paper TuBT5.7 | |
BioMORF: A Soft Robotic Skin to Increase Biomorphism and Enable Nonverbal Communication |
|
Bering Christiansen, Mads | University of Southern Denmark |
Asawalertsak, Naris | Vidyasirimedhi Institute of Science and Technology (VISTEC) |
Do, Cao Danh | University of Southern Denmark |
Nantareekurn, Worameth | Vidyasirimedhi Institute of Science and Technology |
Rafsanjani, Ahmad | University of Southern Denmark |
Manoonpong, Poramate | Vidyasirimedhi Institute of Science and Technology (VISTEC) |
Jørgensen, Jonas | Center for Soft Robotics, the Maersk Mc-Kinney Moller Institute, University of Southern Denmark |
Keywords: Innovative Robot Designs, Novel Interfaces and Interaction Modalities, Non-verbal Cues and Expressiveness
Abstract: In this work, we introduce a biomorphic soft robotic skin for a hexapod robot platform and a Central Pattern Generator (CPG) based neural controller to generate respiratory-like motions on the skin. The design enables visio-haptic nonverbal communication between humans and robots and improves the robot's aesthetics by enhancing its biomorphic qualities. We investigated whether the soft robotic skin could increase user ratings of the robot’s warmth (RoSAS) and reported trust levels during interaction (MDMT). Contrary to our expectations and earlier findings, we did not find an increase in either warmth or trust from adding the soft robotic part. Furthermore, comments received from study participants indicate that trust in the robot is influenced by multiple factors, including appearance, movements, haptic qualities, and contextual factors. Based on our results, we propose directions for further research on pneumatically actuated soft robotic skins as a means of nonverbal communication in human-robot interaction.
|
|
TuBT6 |
Room T6 |
Novel Interfaces and Interaction Modalities II |
Regular Session |
Chair: Kyung, Ki-Uk | Korea Advanced Institute of Science & Technology (KAIST) |
|
11:30-11:40, Paper TuBT6.1 | |
I^3: Interactive Iterative Improvement for Few-Shot Action Segmentation |
|
Gassen, Martina | Technical University of Darmstadt |
Metzler, Frederic | Technical University Darmstadt |
Prescher, Erik | Technical University Darmstadt |
Scherf, Lisa | Technische Universität Darmstadt |
Prasad, Vignesh | TU Darmstadt |
Kaiser, Felix | Technical University of Darmstadt |
Koert, Dorothea | Technische Universitaet Darmstadt |
Keywords: Machine Learning and Adaptation, Novel Interfaces and Interaction Modalities, Detecting and Understanding Human Activity
Abstract: Extracting modular segments from raw video demonstrations of high-level actions is important to understand the underlying building blocks for different tasks in human-robot interaction. While (data-hungry) supervised learning approaches for Action Segmentation show good performance when the underlying segments are predefined, their performance degrades when unseen actions are introduced on-the-go, as new data samples are scarce. In this regard, Zero- and Few-Shot Learning approaches have shown good performance in generalizing to unseen examples. In Action Segmentation, where each frame needs to be labeled, annotating new data even for a few tasks can become tedious as the number of tasks scales. In this work, we propose Interactive Iterative Improvement (I^3) for Few-Shot Action Segmentation, a Semi-Supervised Interactive Meta-Learning approach for Zero-Shot Learning on unlabeled videos and Few-Shot Learning on small amounts of labeled videos. I^3 consists of a Prototypical Network model for frame-wise prediction coupled with a Hidden Semi-Markov Model to prevent over-segmentation. The model is iteratively improved in an interactive manner through users' annotations provided via a web interface. This is done in a task-agnostic manner that, in theory, can be reused for a number of different actions. Our model provides sequentially accurate segmentations using only a limited amount of labeled data, which shows the efficacy of our learning approach. A lower edit distance compared to baselines indicates a lower number of required user edits, making it well suited for non-expert users to smoothly provide annotations and giving them more control over the learned model.
|
|
11:40-11:50, Paper TuBT6.2 | |
Considerations on Interaction with Manipulator in Virtual Reality Teleoperation Interface for Rescue Robots |
|
Kanazawa, Kotaro | Nagoya Institute of Technology |
Sato, Noritaka | Nagoya Institute of Technology |
Morita, Yoshifumi | Nagoya Institute of Technology |
Keywords: Novel Interfaces and Interaction Modalities, Degrees of Autonomy and Teleoperation, Virtual and Augmented Tele-presence Environments
Abstract: In recent years, commercially available head-mounted displays and virtual reality (VR) controllers have made it possible to implement user interfaces that facilitate situational awareness and reduce operator workload. This study focuses on operator interaction and workload in a VR teleoperation interface for rescue robots. Three interaction methods are implemented in this interface, which differ in the timing between the operation in VR space and the movement of the robot. The operating characteristics of these three interaction methods and a conventionally used gamepad are compared through a manipulator teleoperation task with nine participants.
|
|
11:50-12:00, Paper TuBT6.3 | |
Feeling the Slope? Teleoperation of a Mobile Robot Using a 7DOF Haptic Device with Attitude Feedback |
|
Luz, Rute | Instituto Superior Técnico, Institute of Systems and Robotics |
Pereira, Aaron | German Aerospace Center (DLR) |
Corujeira, Jessica | Instituto Superior Técnico, Universidade De Lisboa |
Krueger, Thomas | European Space Agency |
Beck, Jacob | ESA |
den Exter, Emiel | ESA Human Robot Interaction Lab |
Chupin, Thibaud | European Space Agency |
Silva, José Luís | Instituto Universitário De Lisboa (ISCTE-IUL), ISTAR-IUL and Madeira-ITI |
Ventura, Rodrigo | Instituto Superior Técnico |
Keywords: Novel Interfaces and Interaction Modalities, Multi-modal Situation Awareness and Spatial Cognition, Human Factors and Ergonomics
Abstract: A well-known challenge in rover teleoperation is the operator's lack of situational awareness (SA). This often leads to an inaccurate perception of the rover's status and surroundings and, consequently, to faulty decision-making by the operator. We present a novel teleoperation interface to control the locomotion of a ground rover with a 7DOF force feedback device (sigma.7), while providing haptic feedback to ensure appropriate SA. In particular, the device provides proprioceptive cues to convey the rover's attitude. This can be particularly useful for environments with insufficient visual cues to estimate attitude (e.g., a cave). In systematic experimental trials controlling a robot in an outdoor environment, we evaluated the validity of employing the sigma.7 as an alternative to a standard joystick. We tested the use of attitude as an aid to situational awareness. We found no significant detriment in manoeuvrability compared to a conventional joystick, thus validating the sigma.7 as an effective control device. Regarding SA, results showed no statistical difference between the visual and haptic cues for attitude feedback, thus validating the haptic method as an effective alternative that offloads the visual channel by conveying attitude information through the haptic channel instead of visual cues. Finally, qualitative observations of the participants' behaviour during the experiments showed that operators with haptic feedback were comprehensively aware of the rover's status.
|
|
12:00-12:10, Paper TuBT6.4 | |
Object Identification Using Augmented Reality with Haptic Feedback |
|
Akita, Emmanuel | The University of Texas at Austin |
Regal, Frank | The University of Texas at Austin |
Torres, Kevin | University of Texas at Austin, Nuclear and Applied Robotics Group |
Majewicz Fey, Ann | University of Texas at Austin |
Pryor, Mitchell | University of Texas |
Keywords: Novel Interfaces and Interaction Modalities, Multi-modal Situation Awareness and Spatial Cognition, Detecting and Understanding Human Activity
Abstract: We propose a novel Augmented Reality (AR) Head Mounted Display (HMD) haptic-enabled device which is capable of providing visual and vibrotactile directional cues to locate objects of interest. Using the vibrotactile cues, the device communicates prioritization information to users without the need for additional graphics. This work builds upon a human-robot teaming AR application, AugRE, which provides both situational awareness and control interfaces for any number of ROS-enabled robotic systems. The vibrotactile haptic component developed attaches to the AR-HMD and uses a sequence of vibrations to direct the user to specific objects in their proximity. The visual haptic component does the same by overlaying a holographic arrow on the HMD. We present results from a pilot study, and discuss system limitations and research areas that may help direct future development for human-robot teaming applications. Results indicate that visual haptic cues provide the best response times. However, high frequency vibrotactile haptic cues may be a viable alternative for some tasks where the visual space is already saturated.
|
|
12:10-12:20, Paper TuBT6.5 | |
Speech-Gesture GAN: Gesture Generation for Robots and Embodied Agents |
|
Liu, Carson Yu | University of New South Wales |
Mohammadi, Gelareh | University of New South Wales |
Song, Yang | University of New South Wales |
Johal, Wafa | University of New South Wales |
Keywords: Social Intelligence for Robots, Machine Learning and Adaptation, Anthropomorphic Robots and Virtual Humans
Abstract: Embodied agents, in the form of virtual agents or social robots, are rapidly becoming more widespread. In human-human interactions, humans use nonverbal behaviours to convey their attitudes, feelings, and intentions. Therefore, this capability is also required for embodied agents in order to enhance the quality and effectiveness of their interactions with humans. In this paper, we propose a novel framework that can generate sequences of joint angles from the speech text and speech audio utterances. Based on a conditional Generative Adversarial Network (GAN), our proposed neural network model learns the relationships between the co-speech gestures and both semantic and acoustic features from the speech input. In order to train our neural network model, we employ a public dataset containing co-speech gestures with corresponding speech audio utterances, which were captured from a single male native English speaker. The results from both objective and subjective evaluations demonstrate the efficacy of our gesture-generation framework for Robots and Embodied Agents.
|
|
12:20-12:30, Paper TuBT6.6 | |
Let Me Be Your Service Robot: Exploring Early User Experiences of Human-Robot Collaboration for Service Domains |
|
Golchinfar, David | University of Applied Sciences Bonn-Rhein-Sieg |
Vaziri, Daryoush | University of Applied Sciences Bonn-Rhein-Sieg |
Hennekeuser, Darius | University of Applied Sciences Bonn-Rhein-Sieg |
Stevens, Gunnar | University of Siegen |
Schreiber, Dirk | University of Applied Sciences Bonn-Rhein-Sieg |
Keywords: Cooperation and Collaboration in Human-Robot Teams, Degrees of Autonomy and Teleoperation, Novel Interfaces and Interaction Modalities
Abstract: There has been increasing interest in the application of service robots in retail service domains in recent years. In most cases, deployed robot systems focus on serving customer needs autonomously. Specific and individual customer needs often cannot be addressed by these systems, promoting frustration and dissatisfaction in customers. In this study, we investigate the potential of human-robot collaboration in service domains and how humans may be kept in the loop while customers interact with autonomous service robots. To this end, we developed a graphical user interface allowing users to control a service robot remotely. We tested the interface with 19 participants to understand their perceptions of the usability of the interface and their user experiences while serving customers in two different use cases. Results illustrate that participants easily learned to interact with the robot and successfully completed the service use cases. They reported diverse user experiences, ranging from feeling odd to having great experiences while remotely operating the robot. We discuss implications of our results for the design of human-robot interaction in service domains and emphasize a shift of focus from full robot automation to human-robot collaboration.
|
|
TuCT1 |
Room T1 |
Humanoid Robots in Healthcare: Exploring Real World Applications |
Special Session |
Chair: Sørensen, Linda | Sunnaas Hospital |
Co-Chair: Markelius, Alva Jamina Ka | University of Cambridge |
|
14:00-14:10, Paper TuCT1.1 | |
Health Professionals’ Views on the Use of Social Robots with Vulnerable Users: A Scenario-Based Qualitative Study Using Story Dialogue Method (I) |
|
Saplacan, Diana | University of Oslo |
Schulz, Trenton | Norwegian Computing Center |
Torresen, Jim | University of Oslo |
Pajalic, Zada | VID Specialized University |
Keywords: Storytelling in HRI, Robot Companions and Social Robots, Human Factors and Ergonomics
Abstract: We used the story dialog method (SDM) to gather the viewpoints of health professionals about the use of social robots in the home and healthcare services with vulnerable users. SDM consists of participants bringing stories that they discuss together. The aim of the study was to address universal design and accessibility issues with robots in specific use situations. We used three social robots in four stories: TIAGo, Romibo, and robot pets. We applied SDM in two workshops with eight participants. The participants uncovered issues regarding ethics, responsibility, use of data, infrastructure, design, and user concerns based on the provided stories and their own experiences. These issues highlight important aspects that researchers and roboticists should consider when using robots with vulnerable users and when ensuring that a robot is usable by as many people as possible.
|
|
14:10-14:20, Paper TuCT1.2 | |
Humanoid Robots in Healthcare: Lessons Learned from an Innovation Project (I) |
|
Fernandes, Alexandra | Institute for Energy Technology |
Reegård, Kine | Institute for Energy Technology |
Kaarstad, Magnhild | Institute for Energy Technology |
Eitrheim, Maren | Institute for Energy Technology |
Bloch, Marten | Institute for Energy Technology |
Keywords: Assistive Robotics, Androids, Robots in Education, Therapy and Rehabilitation
Abstract: This paper presents lessons learned from an ongoing innovation project exploring the integration of a humanoid robot at a rehabilitation hospital in Norway. By combining human factors and human-robot interaction approaches, we suggest a framework to identify key concepts and methods to evaluate the healthcare staff and patients’ perceptions of the robot. Findings from initial studies on the topic are presented, based upon semi-structured interviews with stakeholders; a survey identifying specific challenges in today's activities at the hospital; and a meeting and a workshop with key stakeholders to identify concrete scenarios and tasks where the humanoid robot could support staff. We wrap up by assessing the usefulness of the framework and discussing key challenges and opportunities for humanoid robots within healthcare, as identified in the current experiences in the project.
|
|
14:20-14:30, Paper TuCT1.3 | |
Challenges of Deploying Assistive Robots in Real-Life Scenarios: An Industrial Perspective (I) |
|
Cooper, Sara | Honda Research Institute Japan |
Ros, Raquel | PAL Robotics |
Lemaignan, Séverin | PAL Robotics |
Keywords: Assistive Robotics, User-centered Design of Robots, Applications of Social Robots
Abstract: With the increase in life expectancy and staff shortages, there is an urgent need to understand the needs of older adults and to explore emerging fields such as social robotics to tackle the challenges of ageing. The paper highlights the importance of providing cognitive and physical support, reducing loneliness, and increasing social engagement among older adults, as well as reducing caregiver burden, and suggests that socially assistive robots (SAR) can assist older adults and their carers with such needs. However, the paper also points out that there are several challenges associated with designing and deploying SAR systems, and that involving end-users in the design process is necessary to improve user acceptability and adoption. The paper describes the approaches used by PAL Robotics to facilitate real-world deployment of its ARI and TIAGo social robots, and provides examples of how these robots have been used to tackle different healthcare needs.
|
|
14:30-14:40, Paper TuCT1.4 | |
The Robot Will Feel You Now: The Ethics of Artificial Emotional Intelligence in Sex Robots (I) |
|
Sica, Arianna | Østfold University College |
Keywords: Robot Companions and Social Robots, Affective Computing, Creating Human-Robot Relationships
Abstract: Sex robots have emerged as a topic of growing ethical and social concern, especially in terms of their impact on the individuals’ sexual health and their potential to establish loving relationships with users. The implementation of artificial emotional intelligence (AEI) into sex robots could increase the likelihood of users developing feelings of love towards these machines. This article explores whether the integration of AEI would exacerbate or offer a solution to the ethical issues surrounding sex robots, while also evaluating the impact of AEI on users’ emotional and sexual wellbeing. It also proposes some practical guidelines for an ethical design of sex robots and emphasises the need for ongoing dialogue and research on the role of AEI technology in sex robots, seeking to contribute to the broader discussion on the ethical implications of such technological advancements.
|
|
TuCT3 |
Room T3 |
Assistive Robotics II |
Regular Session |
Chair: Winkle, Katie | Uppsala University |
|
14:00-14:10, Paper TuCT3.1 | |
Autonomous or Manual Control? Qualitative Analysis of Control Perceptions from Current Robotic Arm Owners |
|
Wang, Eileen | University of Pittsburgh |
Kane Styler, Breelyn | University of Pittsburgh, Human Engineering Research Laboratorie |
Ding, Dan | University of Pittsburgh |
Keywords: Assistive Robotics, Degrees of Autonomy and Teleoperation, Long-term Experience and Longitudinal HRI Studies
Abstract: Assistive Robotic Manipulators (ARMs) provide individuals with upper limb impairments the ability to independently manipulate objects for Activities of Daily Living (ADLs). In this qualitative study, we interviewed eleven current ARM owners to gather information on how they assess different ARM control methods as well as their perceptions of autonomous behavior for the ARM. Information was gathered by presenting a video demonstrating three control interfaces using touchscreen, voice, and software autonomy, and asking guided questions about opinions and perceptions of each interface. The results of this study show that ARM users prefer autonomy, but do not want their sense of control taken away by the robot. Another insight is that ARM users may benefit from using a combination of different control modalities for different situations, rather than relying on one specific control modality. The types of control modalities that should be used also vary among people with different disabilities and preferences, meaning there is not one control modality that is favored by all. Rather, each individual prefers different control interfaces depending on their needs. By analyzing the perspectives of current ARM owners, common frustrations and desires can be addressed in future ARM control development.
|
|
14:10-14:20, Paper TuCT3.2 | |
The Effect of Tactor Composition and Vibrotactile Stimulation on Sensory Memory for a Haptic Feedback Display |
|
Kelly, Erin | Georgia Institute of Technology |
Wheaton, Lewis | Georgia Tech |
Hammond III, Frank L. | Georgia Institute of Technology |
Keywords: Assistive Robotics, Novel Interfaces and Interaction Modalities
Abstract: Previously, a wearable multimodal sensory feedback device (SFD) was developed to communicate proprioceptive information from a robotic gripper onto the operator’s forearm. The SFD showed promise in that it could effectively communicate proprioceptive sensory information and enhance the body’s natural proprioceptive sense. This study examines the feedback modes implemented in the device by evaluating the effect that increased skin-stretch in combination with vibrotactile stimulation has on users' abilities to discern the location of the tactor after time has passed. The SFD used in this study implements a tactor, made of either silicone or foam, that translates laterally across the ventral side of the forearm. Subjects were asked to sense the location of the tactor after it had been stationary for a period of time. The experimental results indicate that a material providing an increased skin-stretch sensation can extend the duration of skin-stretch feedback for sensory feedback devices. Additionally, vibrotactile stimulation has been shown to be promising, though its compatibility with the silicone material was not ideal.
|
|
14:20-14:30, Paper TuCT3.3 | |
Flexible Control and Task Manager System for Non-Contact Delivery Robots in COVID-19 Isolated Facilities |
|
Cho, SungJoon | Korea Institute of Science and Technology |
Lee, Yisoo | Korea Institute of Science and Technology |
Kim, KangGeon | Korea Institute of Science and Technology |
Ihn, Yong Seok | Korea Institute of Science and Technology |
Kim, Jun-Sik | Korea Institute of Science & Technology |
You, Bum Jae | KIST (Korea Institute of Science and Technology) |
Keywords: Assistive Robotics, Computational Architectures, Creating Human-Robot Relationships
Abstract: The COVID-19 pandemic has caused a global public health crisis, leading to increased costs for operating essential quarantine facilities and risks of infection for medical staff. To address these issues, we have developed a non-contact delivery robot, UTD-pro, that can deliver food and supplies to patients in an isolated residential treatment center. In emergency situations such as communication failure with the control system or malfunction of specific modules, the non-contact delivery robot should be able to perform commands robustly or return home. Otherwise, as before, medical staff may have to enter the high-risk environment again. In this paper, we propose a flexible and robust remote control system and a robot task management system that enables the robot to execute commands correctly. Our robot task manager allows for autonomous task execution with a single command from the remote control system. Each robot task operates independently, allowing users to change task plans flexibly during robot operation. The sensor data communication and task execution communication processes are also independent, preventing any issue in one process from affecting the other. Applying our system to the robot, we conducted a long-term delivery experiment covering a total of 7.723 km over 79 days, achieving 125 successes out of 149 trials. This experiment demonstrates that our system can lead to stable delivery processes and contribute to the high reliability of the control system and the robot task manager.
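The task-manager design described in the abstract — one high-level command expanding into a sequence of independent tasks, with a failure in one task isolated from the others — can be sketched as follows. All class, command, and task names are hypothetical illustrations, not the UTD-pro implementation:

```python
# Hypothetical sketch of an independent-task robot task manager: a single
# command expands into an ordered plan, and an error in one task does not
# prevent the remaining tasks from running.

class Task:
    def __init__(self, name, action):
        self.name = name
        self.action = action  # callable returning True on success

class TaskManager:
    def __init__(self):
        self.plans = {}  # command -> ordered list of Task

    def register(self, command, tasks):
        self.plans[command] = list(tasks)

    def execute(self, command):
        """Run each task independently and record its outcome."""
        results = {}
        for task in self.plans.get(command, []):
            try:
                results[task.name] = bool(task.action())
            except Exception:
                # A malfunction in one module/task is isolated from the rest.
                results[task.name] = False
        return results

manager = TaskManager()
manager.register("deliver", [
    Task("navigate_to_room", lambda: True),
    Task("open_tray", lambda: 1 / 0),   # simulated module malfunction
    Task("return_home", lambda: True),  # still runs despite the failure
])
print(manager.execute("deliver"))
```

In this sketch the robot can still "return home" even though the tray module failed, mirroring the robustness requirement stated in the abstract.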
|
|
14:30-14:40, Paper TuCT3.4 | |
End-To-End Planner for Self-Reconfigurable Modular Robots Collaborative Objects Manipulation, Transport and Handover to Human Application |
|
Morel, Aurélien | Sorbonne Université, France / Ecole Polytechnique Fédérale De Lau |
Bolotnikova, Anastasia | EPFL |
Ju, Celinna | EPFL |
Rabaey, Jan M. | University of California, Berkeley |
Ijspeert, Auke | EPFL |
Keywords: Assistive Robotics, Motion Planning and Navigation in Human-Centered Environments
Abstract: Collaborative object manipulation and transport with self-reconfigurable modular robots can take a major role in improving the modularity and adaptability of smart-home and factory-like environments. Controlling modules to achieve efficient behaviours is challenging due to the high number of degrees of freedom in the system and the physical constraints. We present an end-to-end planner that discovers collaborative behaviours for modules to manipulate and transport objects and bring them to a human-defined place. Our approach is based on a centralized planner using stochastic best-first search with a custom heuristic and pruning strategy. We use Quadratic Programming to define a multi-robot controller that evaluates action feasibility for transitions between the search tree nodes with respect to important constraints of the system (collisions, joint and torque limits). The controller can be designed to be aware of the human reachable space for object handover and use it as a measure to assess closeness to the goal node. Results show that the proposed method can effectively coordinate the actions of multiple robots, leading to emergent, efficient manipulation and transport of objects with variable shapes and weights into the human reachable space. This work brings self-reconfigurable modular robots one step closer to assistive human-robot interaction applications and smart logistics.
|
|
TuCT4 |
Room T4 |
Applications of Social Robots I |
Regular Session |
Chair: Takashio, Kazunori | Keio University |
|
14:00-14:10, Paper TuCT4.1 | |
The Future of Home Appliances: A Study on the Robotic Toaster As a Domestic Social Robot |
|
Ye, Meryl | Cornell University |
Schneiders, Eike | University of Nottingham |
Lee, Wen-Ying | Cornell University |
Jung, Malte | Cornell University |
Keywords: Personalities for Robotic or Virtual Characters, Social Intelligence for Robots, Creating Human-Robot Relationships
Abstract: Robotic appliances are continually being adopted into private homes. However, users have yet to exhibit the same acceptance towards domestic social robots. In this paper, we seek to address this gap by augmenting already-existing home appliances with capabilities mimicking social robots. We present a robotic toaster designed with animated movements to enhance and personalize the toast-making experience. Not only does the robotic toaster assist in completing the task itself, it also acts as a conscious agent with whom users may interact in a social and playful manner. Using a series of video vignettes, we identify three key themes of the robotic toaster that influence its relationship with users: (1) context awareness, (2) increased interactivity through initiative action, and (3) expression of personality despite limited degrees of freedom. Lastly, we discuss how the portrayal of home appliances with social characteristics can potentially serve as an introductory step for social robots in the home.
|
|
14:10-14:20, Paper TuCT4.2 | |
Exploring Measures for Engagement in a Collaborative Game Using a Robot Play-Mediator |
|
Azizi, Negin | University of Waterloo |
Fan, Kevin | University of Waterloo |
Jouaiti, Melanie | Imperial College London |
Dautenhahn, Kerstin | University of Waterloo |
Keywords: Applications of Social Robots, Assistive Robotics
Abstract: Play is valuable in making therapy more enjoyable, and has been studied intensively in human-robot interaction. However, the use of robots as play-mediators in multiplayer games, and the study of the dynamics of players have barely been explored. In this work, pairs of participants played with the MyJay robot in a game with two collaborative conditions (Shared and Fusion). In the Shared condition, participants shared the tasks and in the Fusion condition, participants had to synchronize their commands for the robot. In previous work, we analyzed the video recordings and questionnaires and observed that participants perceived the Fusion condition as more challenging, and requiring more coordination, while the Shared condition was perceived as more enjoyable. This paper will report on new analyses based on physiological and joystick data. The results revealed different patterns of heart rate and usage of the joysticks in the two conditions, while no link between physiological data and enjoyment was found.
|
|
14:20-14:30, Paper TuCT4.3 | |
Pepper on the Job: Applying Social Robots in Employee Training |
|
Donnermann, Melissa | Julius-Maximilians University Wuerzburg |
Rossin, Franziska | Julius-Maximilians-Universität Würzburg |
Lugrin, Birgit | University of Wuerzburg |
Keywords: Applications of Social Robots, Robots in Education, Therapy and Rehabilitation, User-centered Design of Robots
Abstract: Advancing digitisation in working environments brings with it the necessity of lifelong learning as well as technology-supported employee training. Research on social robots has already demonstrated their potential to support adults in their learning process. In this study, we focus on the potential benefits of applying a social robot to employee training. We conducted a field study in cooperation with a company and set up two conditions: a robot-supported learning environment and the onscreen learning environment the company usually uses for employee training. Our results show a positive perception of the robot, and participants in the robot condition reported significantly more enjoyment while learning. Half of the participants were willing to use it again in the future: some preferred the robot-supported training over the onscreen training, while others were interested in using both options. However, the other half would stick with onscreen learning in the future, and there were no significant differences in motivation or learning success between the two conditions.
|
|
14:30-14:40, Paper TuCT4.4 | |
Autonomous UAV Navigation in Complex Environments Using Human Feedback |
|
Karumanchi, Sambhu Harimanas | University of Illinois, Urbana-Champaign |
Diddigi, Raghuram Bharadwaj | International Institute of Information Technology, Bangalore |
K J, Prabuchandran | Indian Institute of Technology Dharwad |
Bhatnagar, Shalabh | Indian Institute of Science, Bangalore |
Keywords: Applications of Social Robots, Motion Planning and Navigation in Human-Centered Environments, Machine Learning and Adaptation
Abstract: Autonomous navigation of Unmanned Aerial Vehicles (UAVs) has real-life applications in remote sensing, wildlife surveillance, search and rescue operations. A popular training paradigm to learn optimal actions for navigating such complex, dynamic, and uncertain environments is Reinforcement Learning (RL), where the optimal decisions are learnt over time through a reward-feedback received from the environment. However, manually constructing a feedback function that can help guide the UAV to accomplish the desired objective is often very hard. Preference-based Reinforcement Learning (PbRL) is an emerging sub-field of RL where the manual construction of reward function is replaced with human feedback. In this setting, a human is presented with a pair of trajectories followed by the RL agent to elicit the subject's preference for one over the other. A PbRL algorithm would then compute an optimal sequence of actions using just the set of preferences collected over different trajectories. In this work, we consider PbRL for UAV navigation and follow an ensemble approach to enhance navigation performance. We demonstrate the efficacy of the proposed algorithm through experiments on a range of complex environments and tasks. Ours is the first work that uses human preferences to solve the UAV navigation problem to the best of our knowledge.
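The preference-learning step described in this abstract — replacing a hand-crafted reward with pairwise human preferences over trajectories — is commonly formalized with the Bradley-Terry model. The sketch below (an assumption for illustration, not the authors' ensemble algorithm) fits a linear reward over toy trajectory features by gradient ascent on the preference log-likelihood:

```python
# Illustrative PbRL reward fitting: reward(trajectory) = w . features, with
# P(a preferred over b) = sigmoid(reward(a) - reward(b)) (Bradley-Terry).
# Features and preference data below are toy assumptions.
import math

def reward(w, feats):
    return sum(wi * fi for wi, fi in zip(w, feats))

def fit_reward(pairs, dim, lr=0.5, epochs=200):
    """pairs: list of (feats_a, feats_b, pref) with pref=1 if a is preferred."""
    w = [0.0] * dim
    for _ in range(epochs):
        for fa, fb, pref in pairs:
            # Probability that trajectory a is preferred over b
            p = 1.0 / (1.0 + math.exp(reward(w, fb) - reward(w, fa)))
            g = pref - p  # gradient scale of the log-likelihood
            for i in range(dim):
                w[i] += lr * g * (fa[i] - fb[i])
    return w

# Toy features: [progress toward goal, proximity to obstacles]; the simulated
# human prefers progress and dislikes flying close to obstacles.
pairs = [
    ([1.0, 0.1], [0.2, 0.9], 1),
    ([0.3, 0.8], [0.9, 0.2], 0),
    ([0.8, 0.3], [0.4, 0.7], 1),
]
w = fit_reward(pairs, dim=2)
print(w)  # positive weight on progress, negative on obstacle proximity
```

A PbRL agent would then optimize a policy against this learned reward instead of a manually constructed one.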
|
|
TuCT5 |
Room T5 |
Motion Planning and Navigation in Human-Centered Environments I |
Regular Session |
Chair: Kim, Soonkyum | Korea Institute of Science and Technology |
|
14:00-14:10, Paper TuCT5.1 | |
Instance-Level Semantic Maps for Vision Language Navigation |
|
Nanwani, Laksh | Robotics Research Center, IIIT Hyderabad, India |
Agarwal, Anmol | International Institute of Information Technology - Hyderabad |
Jain, Kanishk | IIIT Hyderabad |
Prabhakar, Raghav | IIIT Hyderabad |
Monis, Aaron | IIIT Hyderabad |
Mathur, Aditya | IIIT Hyderabad |
Jatavallabhula, Krishna Murthy | MIT |
Abdul Hafez, A. H. | Hasan Kalyoncu University |
Gandhi, Vineet | IIIT Hyderabad |
Krishna, Madhava | IIIT Hyderabad |
Keywords: Motion Planning and Navigation in Human-Centered Environments, Linguistic Communication and Dialogue
Abstract: Humans have a natural ability to perform semantic associations with the surrounding objects in the environment. This allows them to create a mental map of the environment, allowing them to navigate on-demand when given linguistic instructions. A natural goal in Vision Language Navigation (VLN) research is to impart autonomous agents with similar capabilities. Recent works take a step towards this goal by creating a semantic spatial map representation of the environment without any labeled data. However, their representations are limited for practical applicability as they do not distinguish between different instances of the same object. In this work, we address this limitation by integrating instance-level information into spatial map representation using a community detection algorithm and utilizing word ontology learned by large language models (LLMs) to perform open-set semantic associations in the mapping representation. The resulting map representation improves the navigation performance by two-fold (233%) on realistic language commands with instance-specific descriptions compared to the baseline. We validate the practicality and effectiveness of our approach through extensive qualitative and quantitative experiments.
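The instance-splitting idea in this abstract — separating different instances of the same semantic class, in the spirit of community detection on a proximity graph — can be sketched minimally as follows. The 2D points, labels, and distance threshold are simplifying assumptions, not the paper's method:

```python
# Hypothetical sketch: group points that share a semantic label into
# instances by linking points closer than a radius (union-find over a
# proximity graph); points of different classes are never merged.
def split_instances(points, labels, radius=1.0):
    """points: list of (x, y); labels: semantic class per point."""
    parent = list(range(len(points)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if labels[i] != labels[j]:
                continue  # never merge across semantic classes
            dx = points[i][0] - points[j][0]
            dy = points[i][1] - points[j][1]
            if dx * dx + dy * dy <= radius * radius:
                parent[find(i)] = find(j)

    groups = {}
    for i in range(len(points)):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

# Two "chair" clusters far apart become two instances; the "table" is a third.
points = [(0, 0), (0.5, 0), (5, 5), (5.5, 5), (0, 5)]
labels = ["chair", "chair", "chair", "chair", "table"]
print(split_instances(points, labels))
```

An instance-aware map of this kind is what lets a command like "the second chair" resolve to a specific object rather than the whole class.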
|
|
14:10-14:20, Paper TuCT5.2 | |
Model-Based Imitation Learning for Real-Time Robot Navigation in Crowds |
|
Moder, Martin | University Duisburg-Essen |
Oezgan, Fatih | Universität Duisburg-Essen |
Pauli, Josef | Universität Duisburg-Essen |
Keywords: Social Learning and Skill Acquisition Via Teaching and Imitation, Machine Learning and Adaptation, Motion Planning and Navigation in Human-Centered Environments
Abstract: We are increasingly interacting with robots in our everyday life. To further this development, a critical capability is safe and socially compliant robot navigation in a crowd. In this work, we extract a navigation strategy from past human-to-human interactions with a model-based approach to imitation learning. We propose a hybrid model of crowd dynamics that combines an autoregressive and an inverse autoregressive model for real-time sampling-based planning with respect to human decision making. Furthermore, we constrain the optimization to allow only admissible velocities for any given robot dynamics that lead to a trajectory on which the robot can safely stop. Experiments are conducted in crowded prerecorded environments, where the robot is placed in a variety of scenarios with varying numbers of humans. The results show that the algorithm is able to navigate in these environments with a lower collision rate and a shorter path than the state-of-the-art.
|
|
14:20-14:30, Paper TuCT5.3 | |
Robot Localization and Reconstruction Based on 3D Point Cloud |
|
Chi, Peng | South China University of Technology |
Wang, Zhenmin | South China University of Technology |
Liao, Haipeng | South China University of Technology |
Wu, Xiangmiao | South China University of Technology |
Tian, Jiyu | South China University of Technology |
Zhang, Qin | South China University of Technology |
Keywords: Motivations and Emotions in Robotics, Degrees of Autonomy and Teleoperation, Cooperation and Collaboration in Human-Robot Teams
Abstract: The 3D point cloud is widely used in robotics because of its accurate positioning results and dense environment information. However, most existing methods perform real-time positioning and 3D reconstruction in unknown environments. In scenarios that require repeated, regular operations, such as robot patrol and maintenance, the stability of such systems is insufficient. At present, methods in this field are mostly based on fixed starting points or manual positioning, with an insufficient degree of automation. In this paper, a real-time robot localization and reconstruction system based on 3D vision is proposed, which includes pose estimation, environment reconstruction, and relocalization based on a 3D point cloud. First, a more accurate pose estimation method is applied for 3D environment reconstruction, using the coordinate transformation of the point cloud and the point cloud matching of the key frames. Then, a new point cloud segmentation method is proposed for local map maintenance to realize point cloud map display and human-robot interaction under real-time network transmission. Finally, a new robot relocalization method is proposed for map updating when the mapping is interrupted or repeated. The M2DGR dataset and a real robot test were used to verify the accuracy and effectiveness of the system, and the results showed that our method performed well.
|
|
14:30-14:40, Paper TuCT5.4 | |
Wearable Indoor UWB Localization Performance in Smartphone Carrying Contexts: An Investigative Study |
|
Naheem, Khawar | Gwangju Institute of Science and Technology |
Kim, Mun Sang | GIST |
Keywords: Detecting and Understanding Human Activity, Multi-modal Situation Awareness and Spatial Cognition, Androids
Abstract: The embedding of ultra-wideband (UWB) chips inside smartphones makes UWB technology a top contender for indoor pedestrian tracking at centimeter-level accuracy. However, signal blocking between the wearable UWB sensor (tag) and infrastructure UWB sensors (anchors) can result in meter-level tracking inaccuracy. Accordingly, the varying nature of daily-life smartphone carrying contexts can deteriorate UWB tracking performance depending on the degree of signal blocking in each body location or context. This paper presents a performance analysis of UWB localization accuracy under daily-life smartphone carrying contexts such as texting, calling, swinging, and pocketing. First, we implemented an extended Kalman filter-based localization algorithm using the raw UWB ranging measurements in each context. Then, we evaluated the UWB localization accuracy by conducting a real-time experiment comprising four campaigns in a multipath indoor environment. In each campaign, a pedestrian is equipped with a smartphone-attached tag and, for comparison, a head-mounted tag. The comparative results established that the 90th percentile of UWB localization error increases from 0.51 m to 2.72 m for the smartphone-attached tag across daily-life carrying contexts, compared to 0.38 m for the head-mounted tag. Our investigation can contribute to the adoption of the smartphone’s built-in UWB chip for indoor UWB pedestrian tracking in daily-life use cases.
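The paper's pipeline filters raw tag-to-anchor ranges with an extended Kalman filter; the simpler gradient-based multilateration below illustrates the underlying ranging-to-position step. The anchor layout and noise-free ranges are assumed for illustration, not taken from the experiment:

```python
# Illustrative UWB multilateration: estimate a 2D position that minimizes
# squared residuals between measured ranges and distances to known anchors.
# (An EKF additionally fuses a motion model and handles noise over time.)
import math

def locate(anchors, ranges, lr=0.1, iters=500):
    """Gradient descent on the sum of squared range residuals."""
    x, y = 0.0, 0.0  # initial guess
    for _ in range(iters):
        gx = gy = 0.0
        for (ax, ay), r in zip(anchors, ranges):
            d = math.hypot(x - ax, y - ay) or 1e-9  # guard divide-by-zero
            e = d - r  # range residual
            gx += e * (x - ax) / d
            gy += e * (y - ay) / d
        x -= lr * gx
        y -= lr * gy
    return x, y

anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
true_pos = (3.0, 4.0)
ranges = [math.hypot(true_pos[0] - ax, true_pos[1] - ay) for ax, ay in anchors]
print(locate(anchors, ranges))  # close to (3.0, 4.0)
```

Body-blocking of the kind studied in the paper corrupts individual range measurements, which is why the filtered estimate degrades in pocketing or calling contexts.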
|
|
TuCT6 |
Room T6 |
Novel Interfaces and Interaction Modalities III |
Regular Session |
Chair: Lee, Hee Rin | Michigan State University |
|
14:00-14:10, Paper TuCT6.1 | |
Humans' Spatial Perspective-Taking When Interacting with a Robotic Arm |
|
Abrini, Mouad | Sorbonne University |
Auvray, Malika | ISIR, CNRS, Sorbonne-University |
Chetouani, Mohamed | Sorbonne University |
Keywords: Evaluation Methods, Novel Interfaces and Interaction Modalities, Cognitive Skills and Mental Models
Abstract: Perceiving the environment from another person's perspective, in other words, being in someone else's shoes spatially, is not always an easy task. Perspective-taking can be even more challenging when working with a robot as a collaborator. The study reported here aims at investigating humans' level 2 spatial perspective-taking performance when interacting with a collaborative robotic arm through a novel in-person experiment. First, a robotic arm drew ambiguous shapes on a whiteboard and participants had to answer questions that require performing spatial perspective-taking. A metric was used to compute a score based on their responses. Second, participants completed the PTSOT, a test measuring spatial orientation and perspective-taking ability. The results revealed a correlation between the scores computed using our metric and those obtained in the PTSOT. This suggests the efficiency of our new setup and associated evaluation metric in assessing spatial perspective-taking skills in a human-robot interaction context, as well as the validity of our findings, in line with prior studies on perspective-taking.
|
|
14:10-14:20, Paper TuCT6.2 | |
Successful Swarms: Operator Situational Awareness with Modelling and Verification at Runtime |
|
Gu, Yue | University of Glasgow |
Hunt, William | University of Southampton |
Archibald, Blair | University of Glasgow |
Xu, Mengwei | University of Glasgow |
Sevegnani, Michele | School of Computing Science, University of Glasgow |
Soorati, Mohammad Divband | University of Southampton |
Keywords: Novel Interfaces and Interaction Modalities, Human Factors and Ergonomics, Detecting and Understanding Human Activity
Abstract: Robot swarms, through redundancy, offer fault-tolerant distributed sensing and actuation, but can lack complex mission-level decision making. Pairing a human operator with the swarm can improve decision making but only if the operator maintains situational awareness - knowledge of the current state of the swarm - as well as being able to anticipate future states. We show how formal methods, in the form of probabilistic models, executed and verified at runtime alongside the system can aid situational awareness by providing valuable insight into both current and future situations. Two models, for determining task and mission success probabilities, are given, and we show that statistical model checking allows timely approximate predictions that take no more than 1s while staying within 2% of the exact solution. We highlight and implement approaches to display this information to an operator, and show how models can be used to try what-if scenarios before decisions are made.
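Runtime statistical model checking of the kind described — approximating a mission-success probability by sampling a probabilistic model rather than solving it exactly — can be illustrated with a toy Bernoulli task model. The model, parameters, and sample count here are hypothetical, not the paper's swarm model:

```python
# Toy statistical model checking: estimate P(mission success) by Monte Carlo
# sampling of a simple probabilistic model in which each robot completes its
# task independently with probability p_task.
import random

def sample_mission(n_robots, p_task, tasks_needed, rng):
    """One sampled run: mission succeeds if enough robots finish their task."""
    done = sum(1 for _ in range(n_robots) if rng.random() < p_task)
    return done >= tasks_needed

def estimate_success(n_robots, p_task, tasks_needed, samples=20000, seed=0):
    rng = random.Random(seed)  # seeded for reproducibility
    hits = sum(sample_mission(n_robots, p_task, tasks_needed, rng)
               for _ in range(samples))
    return hits / samples

# 10 robots, each succeeding with probability 0.9; mission needs 8 completions.
print(estimate_success(10, 0.9, 8))  # near the exact value P(X >= 8) ~ 0.930
```

Because each estimate is just repeated sampling, it can run in a fixed time budget alongside the system, which is the property the abstract exploits for timely approximate predictions.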
|
|
14:20-14:30, Paper TuCT6.3 | |
Detecting the Intention of Object Handover in Human-Robot Collaborations: An EEG Study |
|
Rajabi, Nona | KTH Royal Institute of Technology |
Khanna, Parag | KTH Royal Institute of Technology |
Demir Kanik, Sumeyra Ummuhan | Ericsson Research |
Yadollahi, Elmira | KTH |
Vasco, Miguel | INESC-ID |
Björkman, Mårten | KTH |
Smith, Claes Christian | KTH Royal Institute of Technology |
Kragic, Danica | KTH |
Keywords: Novel Interfaces and Interaction Modalities, Detecting and Understanding Human Activity, Monitoring of Behaviour and Internal States of Humans
Abstract: Human-robot collaboration (HRC) relies on smooth and safe interactions. In this paper, we focus on the human-to-robot handover scenario, where the robot acts as a taker. We investigate the feasibility of detecting the intention of a human-to-robot handover action through the analysis of electroencephalogram (EEG) signals. Our study confirms that temporal patterns in EEG signals provide information about motor planning and can be leveraged to predict the likelihood of an individual executing a motor task with an average accuracy of 94.7%. We also suggest the effectiveness of the time-frequency features of EEG signals in the final second prior to the movement for distinguishing between handover action and other actions. Furthermore, we classify human intentions for different tasks based on time-frequency representations of pre-movement EEG signals and achieve an average accuracy of 63.5% for contrasting every two tasks against each other. The result encourages the possibility of using EEG signals to detect human handover intention in HRC tasks.
|
|
14:30-14:40, Paper TuCT6.4 | |
Hands-Free Physical Human-Robot Interaction and Testing for Navigating a Virtual Ballbot |
|
Song, Seung Yun | University of Illinois at Urbana-Champaign |
Marin, Nadja | The University of Illinois at Urbana-Champaign |
Xiao, Chenzhang | University of Illinois at Urbana-Champaign |
Okubo, Ryu | University of Illinois Urbana-Champaign |
Ramos, Joao | University of Illinois at Urbana-Champaign |
Hsiao-Wecksler, Elizabeth T. | University of Illinois at Urbana-Champaign |
Keywords: Novel Interfaces and Interaction Modalities, Virtual and Augmented Tele-presence Environments, Evaluation Methods
Abstract: A hands-free (HF) lean-to-steer control concept that uses torso motions is demonstrated by navigating a virtual robotic mobility device based on a ball-based robotic (ballbot) wheelchair. A custom sensor system (i.e., Torso-dynamics Estimation System (TES)) was utilized to measure and convert the dynamics of the rider’s torso motions into commands to provide HF control of the robot. A simulation study was conducted to explore the efficacy of the HF controller compared to a traditional joystick (JS) controller, and whether there were differences in performance by manual wheelchair users (mWCUs), who may have reduced torso function, compared to able-bodied users (ABUs). Twenty test subjects (10 mWCUs + 10 ABUs) used the subject-specific adjusted TES while wearing a virtual reality headset and were asked to navigate a virtual human rider on the ballbot through obstacle courses replicating seven indoor environment zones. Repeated measures MANOVA tests assessed performance metrics representing efficiency (i.e., number of collisions), effectiveness (i.e., completion time), comfort (i.e., NASA TLX scores), and robustness (i.e., index of performance). As expected, more challenging zones took longer to complete and resulted in more collisions. An interaction effect was observed such that ABUs had significantly more collisions using JS vs. HF control, while mWCUs had little difference with either interface. All subjects reported greater physical demand was needed for HF control than JS control; although, no users visibly showed or expressed fatigue or exhaustion when using HF control. In general, HF control performed as well as JS control, and mWCUs performed similarly to ABUs.
|
|
TuDT1 |
Room T1 |
SARCHA: Socially Assistive Robots in Clinical and Healthcare Applications |
Special Session |
Chair: Markelius, Alva Jamina Ka | University of Cambridge |
Co-Chair: Sørensen, Linda | Sunnaas Hospital |
|
14:40-14:50, Paper TuDT1.1 | |
Robot-Mediated Job Interview Training for Individuals with ASD: A Pilot Study (I) |
|
Shahverdi, Pourya | Oakland University, Michigan, USA |
Rousso, Katelyn | Intelligent Robotics Lab, Oakland University, Michigan |
Bakhoda, Iman | Intelligent Robotics Laboratory, Oakland University, Michigan |
Huang, Nathan | Oakland University |
Rohrbeck, Kristin | Joanne and Ted Lindsay Foundation Autism Outreach Services (OUCA |
Louie, Wing-Yue Geoffrey | Oakland University |
Keywords: Robots in Education, Therapy and Rehabilitation, Applications of Social Robots, Assistive Robotics
Abstract: This study aimed to evaluate the effectiveness of robot-mediated training for job interviews for young job seekers with autism spectrum disorder (ASD). The six-week intervention involved mock job interviews with a teleoperated Furhat social robot, targeting nonverbal behavior and communication skills. To measure the efficacy of the intervention, four common nonverbal behavioral challenges among individuals with ASD were identified and quantitative metrics were defined: eye gaze, excessive body movement, atypical vocalization, and orientation toward the interviewer. Results indicated varying levels of success among participants, with some showing consistent improvement and others exhibiting unexpected results from session to session, underscoring the need for personalized, objective, and quantitative analysis. The study highlights the importance of addressing nonverbal communication challenges for individuals with ASD and equipping them with the necessary job market skills. While the pilot results from robot-mediated training appear promising, further research with a larger group including a wide range of participants with ASD is required to generalize the outcomes.
|
|
14:50-15:00, Paper TuDT1.2 | |
The Role of Conversational AI in Ageing and Dementia Care at Home: A Participatory Study (I) |
|
R. Lima, Maria | Imperial College London |
Horrocks, Sophie | Imperial College London |
Daniels, Sarah | Imperial College London |
Lamptey, Moesha | Imperial College London |
Harrison, Matthew | Helix Centre Imperial College London |
Vaidyanathan, Ravi | Imperial College London |
Keywords: User-centered Design of Robots, Applications of Social Robots, Robots in Education, Therapy and Rehabilitation
Abstract: Conversational artificial intelligence (AI) technologies hold significant promise to support the independence, well-being and safety of older adults living with frailty or dementia at home. However, further studies are needed to identify: 1) valuable scenarios of support, 2) desired interactive features, and 3) key challenges preventing long-term adoption and utility in dementia care. In this paper, we explore the role of conversational technology in ageing and dementia care at home. Using a community-based participatory approach, we engaged 20 stakeholders, including people with lived experience of dementia and frailty, to understand preferences, perceived benefits and concerns about integrating conversational AI into daily routines at home. We uncovered key roles of the technology, including support of daily functions, health monitoring, risk mitigation, and cognitive stimulation. We emphasize the need for adapting interactions to different levels of user familiarity and progression of cognitive decline. We address the importance of the communication style and suggest careful use of open-ended questions with target populations. We further discuss feasibility considerations to overcome current barriers to adoption. Overall, this work proposes design guidelines to shape the future conceptualization and development of natural language interactions to support dementia care at home.
|
|
15:00-15:10, Paper TuDT1.3 | |
Socially Assistive Robot "Sister Robot" As a Covid-19 Response and Its Future Plans in Health Care and Clinical Applications (I) |
|
Malla, Dipawoli | Islington College, London Met University Partnered, Manager - Cr |
Bhandari, Pawan | Tribhuvan University |
Keywords: Applications of Social Robots, Assistive Robotics, Cooperation and Collaboration in Human-Robot Teams
Abstract: The COVID-19 pandemic has brought unprecedented challenges to healthcare systems. In Nepal, the shortage of healthcare professionals and the limited capacity of hospitals amplified the need for innovative solutions to assist healthcare professionals and patients. This paper reviews the development of the service robot in Nepal and its usefulness during the COVID-19 pandemic. It also describes how such simple service robots, socially assistive in nature, can assist nurses and support the overall hospital management system. This paper explores the potential of these robots in the context of Nepal, discussing additional features and integrations that can upgrade their functionality for a social setting. Furthermore, it highlights the possibility of producing socially assistive robots in Nepal and their applications in healthcare.
|
|
15:10-15:20, Paper TuDT1.4 | |
A Pilot Study on Factors of Social Attributes in Desktop-Size Interactive Robots (I) |
|
Sin Tung, Chan | The Hong Kong Polytechnic University |
Chan, Chui Yi | The Hong Kong Polytechnic University |
Chan, Sum Yee | The Hong Kong Polytechnic University |
Zeng, Jingqiang | The Hong Kong Polytechnic University |
Zhong, Junpei | The Hong Kong Polytechnic University |
Keywords: Anthropomorphic Robots and Virtual Humans, Creating Human-Robot Relationships, Robots in Education, Therapy and Rehabilitation
Abstract: Desktop-size robots are becoming commonly used in many fields such as care, education, and entertainment. However, the factors that affect human perception of desktop-size robots and interaction with them are still unclear. This study examined the impact of robot behavior and appearance on human-robot interaction (HRI). The results showed that robots with human-like behavior were perceived more positively than those with random behavior, but the appearance of the robot did not have a significant impact on perception. The findings suggest that the design of human-like behavior should be prioritized in future HRI studies and robot design. Real-world experiments are also recommended to verify the findings for the application of desktop-size robots in the healthcare field.
|
|
TuDT3 |
Room T3 |
Assistive Robotics III |
Regular Session |
Chair: Winkle, Katie | Uppsala University |
|
14:40-14:50, Paper TuDT3.1 | |
What Can I Help You With: Towards Task-Independent Detection of Intentions for Interaction in a Human-Robot Environment |
|
Trick, Susanne | Technische Universität Darmstadt |
Lott, Vilja | Technische Universität Darmstadt |
Scherf, Lisa | Technische Universität Darmstadt |
Rothkopf, Constantin | Frankfurt Institute for Advanced Studies |
Koert, Dorothea | Technische Universitaet Darmstadt |
Keywords: Detecting and Understanding Human Activity, Curiosity, Intentionality and Initiative in Interaction
Abstract: Assistive robots interacting with people promise to increase quality of life and productivity in households, caregiving, or industry settings. Importantly, the quality of such interactions crucially depends on how intuitively and reliably humans can request the robot's assistance. Thus, the ability to detect a human's Intention for Interaction (IFI) is beneficial for human-robot interaction across multiple application domains. However, existing works that detect IFIs often focus on single tasks, contexts, or interactions, or restrict their data collection to fixed human positions. In contrast, here we aim for more task-independent IFI detection. We record natural human behavior in an experimental setup with a two-armed robot that includes different tasks and interactions, and different positions and orientations of the human towards the robot. We collected audio and RGB-D data from 21 human subjects in the proposed experimental setup, resulting in 405 IFIs in total. Using head orientation, shoulder orientation, distance, speech activity recognition, and hotword detection as features, we trained multimodal probabilistic classifiers. We compare feature fusion and decision fusion using the Bayesian fusion method Independent Opinion Pool. The resulting multimodal classifiers can detect task-independent IFIs from natural human behavior with an F1 score of up to 0.81. Overall, we show that good IFI detection can be achieved by modularly combining individual classifiers probabilistically.
|
|
14:50-15:00, Paper TuDT3.2 | |
Towards Realistic Prosthetic Gait Simulations: Enhancing the Accuracy of OpenSim Analysis by Integrating the Transfemoral Prosthesis Model |
|
Ryu, HyungSeok | Gwangju Institute of Science and Technology (GIST) |
Hong, Woolim | North Carolina State University |
Hur, Pilwon | Gwangju Institute of Science and Technology |
Keywords: Assistive Robotics
Abstract: Powered transfemoral prostheses offer the potential to improve mobility and quality of life for individuals with amputations. This study aimed to develop and validate an OpenSim model of a subject with a unilateral transfemoral amputation wearing a powered transfemoral prosthesis and to compare the model's performance with that of a model without prosthesis characteristics. We utilized experimental walking data from a single transfemoral amputee subject to demonstrate the feasibility of the model. Inverse kinematics and inverse dynamics were performed to compare the results with the encoder and current data of the knee and ankle actuators, which served as ground truth. The model with prosthesis characteristics demonstrated a closer match to the actuator data, particularly during the stance phase, suggesting that it better reflects the dynamic features of a real powered prosthesis. However, discrepancies were observed during the swing phase, highlighting the need for further refinements. This study provides valuable insights into the importance of incorporating prosthesis characteristics in biomechanical models to simulate joint behavior accurately. It has implications for the development and assessment of prosthetic devices.
|
|
15:00-15:10, Paper TuDT3.3 | |
Differing Care Giver and Care Receiver Perceptions of Robot Agency in an In-Home Socially Assistive Robot for Exercise Engagement |
|
Winkle, Katie | Uppsala University |
Moradbakhti, Laura | Imperial College London |
Keywords: Assistive Robotics, Applications of Social Robots, Ethical Issues in Human-robot Interaction Research
Abstract: We present the results of an online, video-based experimental study investigating the impact of robot agency on perceptions of a socially assistive robot (SAR) shown supporting in-home care. We consider two key participant groups: care givers and care receivers. We did not find significant results regarding the impact of agency on overall participant perceptions of the SAR, but we did identify some differences in what these two participant groups might perceive as being best for themselves versus each other. Firstly, care givers perceived more potential benefit from the robot than care receivers did, challenging possible assumptions about who is set to gain most from deployment of these systems. Secondly, care receivers generally perceived the lower agency robot as being more beneficial for themselves, even as they ascribed the higher agency robot more potential to benefit care receivers.
|
|
15:10-15:20, Paper TuDT3.4 | |
An End-To-End Human Simulator for Task-Oriented Multimodal Human-Robot Collaboration |
|
Mehri Shervedani, Afagh | University of Illinois Chicago |
Li, Siyu | University of Illinois at Chicago |
Monaikul, Natawut | University of Illinois at Chicago |
Abbasi, Bahareh | California State University - Channel Islands |
Di Eugenio, Barbara | University of Illinois at Chicago |
Zefran, Milos | University of Illinois at Chicago |
Keywords: Novel Interfaces and Interaction Modalities, Assistive Robotics, Multimodal Interaction and Conversational Skills
Abstract: This paper proposes a neural network-based user simulator that can provide a multimodal interactive environment for training Reinforcement Learning (RL) agents in collaborative tasks involving multiple modes of communication. The simulator is trained on the existing ELDERLY-AT-HOME corpus and accommodates multiple modalities such as language, pointing gestures, and haptic-ostensive actions. The paper also presents a novel multimodal data augmentation approach, which addresses the challenge of using a limited dataset due to the expensive and time-consuming nature of collecting human demonstrations. Overall, the study highlights the potential for using RL and multimodal user simulators in developing and improving domestic assistive robots.
|
|
TuDT4 |
Room T4 |
Applications of Social Robots II |
Regular Session |
Chair: Lugrin, Birgit | University of Wuerzburg |
|
14:40-14:50, Paper TuDT4.1 | |
Individual Squash Training Is More Effective and Social with a Humanoid Robotic Coach |
|
Ross, Martin Keith | Heriot-Watt University |
Broz, Frank | TU Delft |
Baillie, Lynne | Heriot-Watt University |
Keywords: Applications of Social Robots, User-centered Design of Robots, Machine Learning and Adaptation
Abstract: With the aim of providing extra motivation to adhere to repetitive, individual sports training, this paper presents an autonomous robotic squash coach capable of high-level personalisation. The system was evaluated in person with 16 participants each conducting three 15-minute solo practice sessions. We compared a baseline, non-coaching robotic condition to two conditions in which the robot executed one of 12 different coaching policies, each of which was based on human coaching data. In one of the coaching conditions, the policy was selected based on categories for personalisation and in the other it was selected randomly among policies. The coaching policy conditions were found to be more enjoyable, more socially competent, and perceived as a more effective coach than the baseline.
|
|
14:50-15:00, Paper TuDT4.2 | |
Influencing Health-Related Decision Making and Therapeutic Alliance with Robot Mobility and Deixis |
|
Terzioglu, Yunus | Northeastern University |
Rebello, Keith | Northeastern University |
Bickmore, Timothy | Northeastern University |
Keywords: Creating Human-Robot Relationships, Assistive Robotics, Robot Companions and Social Robots
Abstract: Recent trends and developments in robotics have enabled socially assistive mobile robotic platforms to be deployed in everyday human lives. These robots have the ability to navigate to a user's location and engage in multimodal interactions to serve a variety of purposes such as promoting health behavior change. We conducted a randomized two-factor experiment to study the utility of robot mobility and multimodal cuing in a collaborative meal assembly task. We found that robot mobility, proxemics, and deictic and verbal cuing have significant positive effects on compliance with the robot's food recommendations and resulting nutritional quality of assembled meals. These robot behaviors also led to stronger therapeutic alliance between the robot and the user and higher user engagement.
|
|
15:00-15:10, Paper TuDT4.3 | |
Face Robot Performing Interaction with Emphasis on Eye Blink Entrainment |
|
Iimori, Masato | Keio University |
Furuya, Yuki | Keio University |
Takashio, Kazunori | Keio University |
Keywords: Applications of Social Robots, Creating Human-Robot Relationships, Non-verbal Cues and Expressiveness
Abstract: Eyes play a significant role in human-human interaction, and blinking is particularly important as it can indicate a pause in the conversation and even lead to eye blink entrainment. However, most communication robots cannot reproduce eye blink movements due to cost constraints. Thus, our aim is to create a low-cost robot that can physically reproduce eye blink movements and induce eye blink entrainment. In this paper, we describe the implementation of the robot and evaluate the subjective impression of the robot's eye blink movements. Our results suggest that the robot's blinking behavior at pauses in the conversation facilitated the participants' understanding of the robot's speech. Our findings also suggest that simulating eye blink entrainment movement can increase the participant's affinity and acceptance towards the robot in certain cases, and if the blinking is not well designed, affinity may be adversely affected.
|
|
15:10-15:20, Paper TuDT4.4 | |
Investigating the Influence of Task-Dependent and Task-Independent Robot Behavior on the Impression of Robots and the User Experience |
|
Chamoto, Yuki | Ritsumeikan University |
Okafuji, Yuki | CyberAgent, Inc |
Matsumura, Kohei | Future University Hakodate |
Baba, Jun | CyberAgent, Inc |
Nakanishi, Junya | Osaka University |
Keywords: Applications of Social Robots, User-centered Design of Robots
Abstract: Service robots are beginning to be used as a new kind of support for human labor. However, in many cases, we implement only specific task-dependent behaviors in robots according to the purpose of robot introduction, and rarely implement task-independent behaviors. In general, noninstrumental functions are known to be one factor that improves user experience. Therefore, task-independent robot behavior, as an aspect of noninstrumental functions, also has the potential to improve the impression made by robots and deliver a user experience beyond users' expectations during human-robot interaction. This study aims to investigate the influence of task-dependent and task-independent behavior on the impression made by robots and on user experience. We extracted dialogue task-dependent and dialogue task-independent behaviors during human-robot interaction from previous studies and investigated their influence through a video-based survey. The results show that dialogue task-dependent behavior improves the functionality of robots and decreases factors of negative user experience, such as frustration, while also fulfilling users' expectations for interaction with robots. They also show that dialogue task-independent behavior builds a stronger relationship between users and robots and provides a user experience that exceeds users' expectations regarding interaction with robots.
|
|
TuDT5 |
Room T5 |
Motion Planning and Navigation in Human-Centered Environments II |
Regular Session |
Chair: Kim, Soonkyum | Korea Institute of Science and Technology |
|
14:40-14:50, Paper TuDT5.1 | |
Holistic Deep-Reinforcement-Learning-Based Training for Autonomous Navigation in Crowded Environments (withdrawn from program) |
|
Kästner, Linh | T-Mobile, TU Berlin |
Meusel, Marvin | Technische Universität Berlin |
Bhuiyan, Teham | TU Berlin |
Lambrecht, Jens | Technische Universität Berlin |
|
14:50-15:00, Paper TuDT5.2 | |
S&Reg: End-To-End Learning-Based Model for Multi-Goal Path Planning Problem |
|
Huang, Yuan | Waseda University |
Gu, Kairui | Waseda University |
Lee, Hee-hyol | Waseda University |
Keywords: Motion Planning and Navigation in Human-Centered Environments, Machine Learning and Adaptation
Abstract: In this paper, we propose a novel end-to-end approach for solving the multi-goal path planning problem in obstacle environments. Our proposed model, called S&Reg, integrates multi-task learning networks with a TSP solver and a path planner to quickly compute a closed and feasible path visiting all goals. Specifically, the model first predicts promising regions that potentially contain the optimal paths connecting two goals as a segmentation task. Simultaneously, estimations of pairwise distances between goals are conducted as a regression task by the neural networks, and the results construct a symmetric weight matrix for the TSP solver. Leveraging the TSP result, the path planner efficiently explores feasible paths guided by promising regions. We extensively evaluate the S&Reg model through simulations and compare it with other sampling-based algorithms. The results demonstrate that our proposed model achieves superior performance with respect to computation time and solution cost, making it an effective solution for multi-goal path planning in obstacle environments. The proposed approach has the potential to be extended to other sampling-based algorithms for multi-goal path planning.
|
|
15:00-15:10, Paper TuDT5.3 | |
VAFOR: Proactive Voice Assistant for Object Retrieval in the Physical World |
|
Satyev, Bekatan | Independent |
Ahn, Hyemin | Ulsan National Institute of Science and Technology |
Keywords: Motion Planning and Navigation in Human-Centered Environments, Linguistic Communication and Dialogue, Assistive Robotics
Abstract: In this paper, we present a proactive robotic voice assistant with a perceive-reason-act loop that carries out pick-and-place operations based on verbal commands. Unlike existing systems, our robot can retrieve a target object not only when the target is explicitly spelled out, but also given an indirect command that implicitly reflects the human intention or emotion. For instance, when the verbal command is "I had a busy day, so I didn't have much to eat," the target object would be something that can help with hunger. To successfully estimate the target object from indirect commands, our framework consists of separate modules for the complete perceive-reason-act loop as follows. First, for perception, it runs an object detector on the robot's onboard computer to detect all objects in the surroundings and records a verbal command from a microphone. Second, for reasoning, a list of available objects as well as a transcription of the verbal command are integrated into a prompt for a Large Language Model (LLM) in order to identify the target object in the command. Finally, for action, a TurtleBot3 with a 5 DOF robotic arm finds the target object and brings it to the human. Our experiments show that with a properly designed prompt, the robot can identify the correct target object from implicit commands with up to 97% accuracy. In addition, it is shown that the technique of fine-tuning a language model based on the proposed prompt designing process amplifies the performance of the smallest language model by a factor of five. Our data and code are available at https://github.com/bekatan/vafor
|
|
15:10-15:20, Paper TuDT5.4 | |
Real-Life Experiment Metrics for Evaluating Human-Robot Collaborative Navigation Tasks |
|
Repiso, Ely | LAAS-CNRS, Toulouse |
Garrell, Anais | UPC-CSIC |
Sanfeliu, Alberto | Universitat Politècnica de Catalunya |
Keywords: Evaluation Methods, Motion Planning and Navigation in Human-Centered Environments, Robot Companions and Social Robots
Abstract: As robots move from laboratories and industries to the real world, they must develop new abilities to collaborate with humans in various aspects, including human-robot collaborative navigation (HRCN) tasks. It is therefore necessary to develop general methodologies for evaluating these robots' behaviors. These methodologies should incorporate objective and subjective measurements. Objective measurements for evaluating a robot's behavior while navigating with others can be obtained using social distances in conjunction with task characteristics, people-robot relationships, and physical space. Additionally, the objective evaluation of the task must consider human behavior, which is influenced by changes in the environment and its structure. Subjective evaluations of robots' behaviors can be conducted using surveys that address various aspects of robot usability. This includes people's perceptions of their interaction during their collaborative task with the robot, focusing on aspects such as sociability, comfort, and task-intelligence. Moreover, the communicative interaction between the agents (people and robots) involved in the collaborative task should also be evaluated. Therefore, this paper presents a comprehensive methodology for objectively and subjectively evaluating HRCN tasks.
|
|
TuDT6 |
Room T6 |
Novel Interfaces and Interaction Modalities IV |
Regular Session |
Chair: Lee, Jaeryoung | Chubu University |
|
14:40-14:50, Paper TuDT6.1 | |
RobotScale: A Framework for Adaptable Estimation of Static and Dynamic Object Properties with Object-Dependent Sensitivity Tuning |
|
Pavlic, Marko | Technical University of Munich |
Markert, Timo | Resense GmbH |
Matich, Sebastian | WITTENSTEIN SE |
Burschka, Darius | Technische Universitaet Muenchen |
Keywords: Novel Interfaces and Interaction Modalities, Multimodal Interaction and Conversational Skills, Multi-modal Situation Awareness and Spatial Cognition
Abstract: We propose a framework for the measurement of static and dynamic physical properties of manipulation objects using both robotic tactile and kinesthetic sensing -- in particular, data from fingertip force/torque (F/T) and robot joint torque sensors. It complements the manipulation-relevant information about new objects that cannot be estimated from passive camera observation. The system allows the accuracy and complexity of the estimation to be balanced against the costs and complexity of the approach. We evaluate methods that improve robustness against noise and model errors in the manipulation system used for the estimation. The approach is validated on experimental results using data from a torque-controlled robot manipulator and precision F/T sensors.
|
|
14:50-15:00, Paper TuDT6.2 | |
Physical Embodiment versus Novelty – Which Influences Interactions with Embodied Conversational Agents More? |
|
Galiza Cerdeira Gonzalez, Antonio | Tokyo University of Agriculture and Technology |
Mizuuchi, Ikuo | Tokyo University of Agriculture and Technology |
Keywords: Embodiment, Empathy and Intersubjectivity, Novel Interfaces and Interaction Modalities
Abstract: With the increasing presence of embodied conversational agents (ECAs) in our daily lives, it is crucial to understand how the degree of their physical embodiment influences user engagement and perception. Previous research has explored this relationship, revealing a tendency for higher embodiment levels to result in better engagement and performance. However, the potential impact of novelty in the ECA experience has yet to be thoroughly investigated, despite being acknowledged in prior studies. To address this research gap, we conducted an experiment where participants interacted with three distinct Social Plantroid embodiment levels and provided ratings of their perception and preference, while engagement was estimated from the volunteers' facial expressions. Our findings indicate weak to moderate correlations between participants' experience with robots, engagement, and their perceived characteristics of the ECAs, suggesting that both novelty and physical embodiment play a role in shaping interactions with ECAs.
|
|
TuPO |
Room T11 |
Late Breaking Report |
Panel Session |
Chair: Hwang, Minho | Daegu Gyeongbuk Institute of Science and Technology (DGIST) |
|
15:30-16:30, Paper TuPO.1 | |
Teaching Industrial Robots Using a VR-Based Learning Environment: A Qualitative Study |
|
Arntz, Alexander | University of Applied Sciences Ruhr West |
Straßmann, Carolin | University of Applied Sciences Ruhr West |
Eimler, Sabrina C. | Hochschule Ruhr West, University of Applied Sciences |
Keywords: Virtual and Augmented Tele-presence Environments, Evaluation Methods, Human Factors and Ergonomics
Abstract: This work presents a virtual reality (VR) based learning platform that allows students to explore and experiment with industrial robot manipulators in a virtual environment. This platform provides an immersive experience for students, allowing them to interact with robots in scenarios that would be difficult to replicate in a physical classroom. The VR application is equipped with a variety of interaction mechanics that enable the portrayal of a range of different tasks, giving students a comprehensive understanding of the functionality of industrial robots. The platform also enables network integration, allowing multiple students to be present in the virtual environment simultaneously, promoting collaboration and teamwork. The application is designed as a learning tool that can be extended to fit the requirements of different robotics-related courses. To optimize future iterations, a qualitative study was conducted, where students provided feedback on the VR application, to further improve their learning experience with robotic systems.
|
|
15:30-16:30, Paper TuPO.2 | |
Robotic Assistance for Extended Sensing, Locomotion and Manipulation by Gaze Control |
|
Huang, Shouren | University of Tokyo |
Sørensen, Sune Lundø | University of Southern Denmark |
Cao, Yongpeng | The University of Tokyo |
Ishikawa, Masatoshi | University of Tokyo |
Kjærgaard, Mikkel | University of Southern Denmark |
Yamakawa, Yuji | The University of Tokyo |
Keywords: Assistive Robotics, User-centered Design of Robots, Robots in Education, Therapy and Rehabilitation
Abstract: For people with severe sensory-based motor disorders or musculoskeletal disorders, robotic assistance is a promising solution for improving their daily living standards. In this study, we propose a robotic assistance method that realizes extended sensing, locomotion, and manipulation, allowing a user to interact with the environment through a 2D/3D gaze interface. Specifically, the proposed method focuses on situations where a user desires to explore the environment beyond their physical perception capability, requiring robotic assistance to realize extended sensing. The proposed 2D/3D gaze interface is wearable, does not restrain the user, and makes it straightforward to give commands for robot control. Preliminary studies showed the effectiveness of the method in an indoor environment for fetching an invisible target from a box on a shelf.
|
|
15:30-16:30, Paper TuPO.3 | |
Dementia Prevention Using Flowerpot-Type "Famileaf" Robot |
|
Gouko, Manabu | Tohoku Gakuin University |
Ishizumi, Nagisa | Techno Mind Corporation |
Keywords: Cooperation and Collaboration in Human-Robot Teams, Assistive Robotics, Creating Human-Robot Relationships
Abstract: This paper describes a verification experiment conducted to investigate the dementia prevention effect of using an interactive flowerpot-type robot. The "Famileaf" flowerpot-type robot can grow plants while interacting with the grower. Famileaf can post a comment on a social networking service to interact with an elderly person (the grower). As the grower tends a plant with this robot and becomes increasingly attached to it, the use of Famileaf is expected to help prevent dementia. In this experiment, elderly participants were asked to grow plants using the Famileaf robot, and the dementia prevention effect was verified. The participants were asked to use the Famileaf robot for two weeks, and a questionnaire about changes in their impressions of the plants was administered. The participants also took a dementia prevention self-diagnosis test. As a result, we confirmed that dialog with the Famileaf robot increased the grower's attachment to the plant and improved the dementia prevention test scores.
|
|
15:30-16:30, Paper TuPO.4 | |
A Study on the High Aspect Ratio Grasp Manipulator with Spiral Zipper Mechanism |
|
Choi, Myeongjin | Hanyang University |
Park, Inha | Hanyang University |
Bae, Jangho | University of Pennsylvania |
Yim, Mark | University of Pennsylvania |
Seo, TaeWon | Hanyang University |
Keywords: Innovative Robot Designs, Motion Planning and Navigation in Human-Centered Environments
Abstract: This paper presents a force estimation method for a Variable Topology Truss (VTT) performing grasp manipulation. In a robot's grasp manipulation, force sensing is important because the robot must maintain the grasp force, based on sensed data, while holding an object. However, installing force sensors on a VTT is difficult due to the VTT's structural constraints. Therefore, a force estimation method was studied to replace direct sensing. For force estimation of the VTT, a force analysis based on the truss structure was used.
|
|
15:30-16:30, Paper TuPO.5 | |
A Study on the Locomotion Planning Method of VTT Platform on Uneven Surfaces |
|
Park, Inha | Hanyang University |
Bae, Jangho | University of Pennsylvania |
Yim, Mark | University of Pennsylvania |
Seo, TaeWon | Hanyang University |
Keywords: Innovative Robot Designs, Motion Planning and Navigation in Human-Centered Environments
Abstract: This paper presents a locomotion planning method that incorporates terrain characterization. A Variable Topology Truss (VTT) was used as the hardware platform, but the method can be adapted to other polyhedral robots. To generate a path, a cost function for terrain characterization is defined in terms of the robot's stability and traversability. Based on this characterization, a desired path is generated using a Polygon-based Random Tree (PRT) search algorithm. For stable locomotion planning, a motion primitive is generated from an ideal step of the VTT and then distorted to match the desired path.
|
|
15:30-16:30, Paper TuPO.6 | |
Deep Learning Based Real-Time Korean Sign Language Translation Algorithm |
|
Lim, Wansu | Kumoh National Institute of Technology |
Jeong-in, Kim | Kumoh National Institute of Technology |
Jihwan, Park | Kumoh National Institute of Technology |
Keywords: Interaction Kinesics, Linguistic Communication and Dialogue, Machine Learning and Adaptation
Abstract: Sign language translation plays a vital role in bridging the communication gap between hearing-impaired individuals and the general population. This paper presents a Korean sign language translation algorithm that utilizes gesture analysis and deep learning techniques to enable real-time translation of sign motions. The system model incorporates components for sign gesture input, sign gesture learning, and real-time translation. By leveraging the Kinect camera, the algorithm captures and extracts the features of joint movements and hand shapes, which are then trained using deep learning techniques. Experimental results demonstrate the effectiveness and accuracy of the proposed algorithm, achieving recognition rates of 84% or higher for a variety of sign gestures, including common phrases and numbers. Notably, the algorithm successfully distinguishes between similar gestures and accurately recognizes individual numbers despite similarities in hand shapes.
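The abstract above describes extracting features of joint movements from Kinect data before deep-learning classification. The paper's actual pipeline is not specified; as a hedged sketch of one common preprocessing step (function names and the root-joint normalization are assumptions), joint coordinates can be normalized for signer position and body size before being flattened into a feature vector:

```python
import math

def normalize_joints(frames, root=0):
    """Translate each frame so the root joint (e.g. the spine base) sits at
    the origin, then scale by the frame's maximum joint distance, making the
    features invariant to the signer's position and body size."""
    out = []
    for joints in frames:  # joints: list of (x, y, z) tuples from the camera
        rx, ry, rz = joints[root]
        centered = [(x - rx, y - ry, z - rz) for x, y, z in joints]
        scale = max(math.sqrt(x * x + y * y + z * z) for x, y, z in centered) or 1.0
        out.append([(x / scale, y / scale, z / scale) for x, y, z in centered])
    return out

def flatten(frames):
    """Concatenate all normalized coordinates into one flat feature vector,
    ready to feed a classifier."""
    return [c for joints in frames for joint in joints for c in joint]
```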
|
|
15:30-16:30, Paper TuPO.7 | |
Understanding Privacy Concerns with Delivery Robots in Office Environments |
|
Grasso, Maria Antonietta | Naver Labs Europe |
Park, Jisun | Naver Labs Europe |
Willamowski, Jutta | Naver Labs Europe |
Keywords: Ethical Issues in Human-robot Interaction Research
Abstract: Robots powered by Artificial Intelligence (AI) require continuous sensing to function and interact autonomously with the environment where they are deployed. This raises questions around privacy. Technology can be developed to address data privacy by design, automatically anonymizing captured data as much as possible. However, this may not be enough, and giving users a better understanding of how data are captured and used is a complementary way to address privacy. Indeed, studies have shown that a better understanding leads to both increased comfort and an increased desire to be in control.
|
|
15:30-16:30, Paper TuPO.8 | |
Towards Inclusive Human-Robot Interaction: Designing for Diversity and Accessibility |
|
Law, Wing Ting | Hong Kong Productivity Council |
Fan, Kam Wah | Hong Kong Productivity Council |
Lo, Kwok Wai | Hong Kong Productivity Council |
Chan, Hing Yi | Hong Kong Productivity Council |
Li, Ki Sing | Hong Kong Productivity Council |
Mo, Tiande | Hong Kong Productivity Council |
Keywords: Social Touch in Human–Robot Interaction, User-centered Design of Robots, Creating Human-Robot Relationships
Abstract: This paper presents a compassionate robotic design that prioritises inclusive Human-Robot Interaction (HRI), catering to the elderly, children, and individuals with visual and auditory impairments. The design incorporates ergonomic principles, inclusive visual and auditory cues, and an intuitive user interface supported by empirical research. A pilot experiment validates the success of the design in enhancing user experience and promoting barrier-free HRI.
|
|
15:30-16:30, Paper TuPO.9 | |
Humans Helping Robots: The Role of Knowledge, Attitudes, and Context of Use |
|
Potinteu, Andreea Elena | University of Tübingen, Leibniz Institute for Knowledge Media |
Said, Nadia | University of Tübingen |
Jahn, Georg | Chemnitz University of Technology |
Huff, Markus | Leibniz-Institut Für Wissensmedien |
Keywords: Creating Human-Robot Relationships
Abstract: Understanding pro-social behavior towards robots is crucial to their integration into our society. To better understand people’s reported willingness to help robots across different contexts (delivery, medical, service, and security), we conducted a study on a German-speaking population (N = 542, representative of age and gender). We assessed knowledge about robots, attitudes, and anthropomorphism, investigating their effect on reported willingness to help. Results show that positive attitudes significantly predicted a higher willingness to help. Importantly, having more knowledge about robots increased reported willingness to help. Furthermore, results point to a context-dependency for willingness to help. Unexpectedly, we found no effect of anthropomorphism, neither in the form of robot appearance nor as participants' own views about robots, on reported willingness to help. Our findings highlight the relevance of knowledge and attitudes in understanding helping behavior toward robots. Eventually, our results raise questions about the relevance of anthropomorphism in pro-sociality toward robots.
|
|
15:30-16:30, Paper TuPO.10 | |
Robots vs. AI - How Attitudes, Familiarity, Anthropomorphism, Knowledge, and Risk-Opportunity Perception Influence Users' Preference for Robots and Artificial Intelligence |
|
Said, Nadia | University of Tübingen |
Wagner, Julia | Reutlingen University |
Potinteu, Andreea Elena | University of Tübingen, Leibniz Institute for Knowledge Media |
Keywords: Cooperation and Collaboration in Human-Robot Teams, Embodiment, Empathy and Intersubjectivity
Abstract: In recent years the fast development of artificial intelligence (AI) and robotic technology has led to tremendous growth in various industries, especially in the medical and transportation sectors. While AI and (humanoid) robots are often viewed as promising technology for the future, there is also an increase in concern about the negative impact of those developments on society. Even though AI and robots could be used interchangeably in many application areas, they fundamentally differ in that robots have a physical presence as opposed to AI being more abstract algorithms running in the background of an application. That poses the question of whether people differ in their preferences regarding robots or AI and what cognitive factors influence people's preferences. Our study investigated the preference for robots or AI in 15 different interaction contexts for a sample of N = 526 of the German population (representative for age and gender). To identify the most important factors influencing participants' preferences for robots or AI, attitudes, perceived anthropomorphism, knowledge, familiarity, and risk-opportunity perception were measured. Results show that attitudes and risk-opportunity perception are the most important predictors for participants' preferences. Furthermore, differences in perception of robots and AI are discussed.
|
|
15:30-16:30, Paper TuPO.11 | |
Aromanoidics: Towards a Framework of Robotic Scents |
|
Hidaka, Shun | Tokyo Institute of Technology |
Kobuki, Sota | Tokyo Institute of Technology |
Seaborn, Katie | Tokyo Institute of Technology |
Venture, Gentiane | The University of Tokyo |
Keywords: Non-verbal Cues and Expressiveness, Novel Interfaces and Interaction Modalities, Anthropomorphic Robots and Virtual Humans
Abstract: People and robots are increasingly coexisting within societies around the world. Improving the human-robot interaction experience has become a key challenge. In human relationships, scent is a significant factor. Yet, very little work on how robot scent mediates interactions has been done, and no framework on user perceptions of robot-scent matching exists. The purpose of this study was to establish an initial framework of scents for robots for future matching studies. Through an online perceptions study, we explored the relationship between the anthropomorphic level of 19 robots and the scents naïve respondents expected each robot to have. We found a positive correlation between anthropomorphism in robot appearance and scent attribution, with lower anthropomorphism associated with the attribution of metallic or mechanical smells. We offer our initial framework for future work on aromanoidics.
|
|
15:30-16:30, Paper TuPO.12 | |
Integration of the Child-Robot Interaction Model to Improve Interplay through Emotional Interaction and Communication |
|
Rybakova, Anastasiya | Korea Institute of Science and Technology |
Choi, Jongsuk | Korea Inst. of Sci. and Tech |
Keywords: Child-Robot Interaction, Robots in Education, Therapy and Rehabilitation, Linguistic Communication and Dialogue
Abstract: Social robots are becoming a familiar part of day-to-day life. They interact with humans based on developed social intelligence, which gives robots the ability to manage exchanges of thoughts and feelings, with primal empathy serving as a key to comfortable communication between people. Furthermore, past research indicates that children are more comfortable and better adapted than adults when interacting with social robots. Building on these previous studies, we propose that implementing empathic abilities in a social robot that interacts with children can help them feel comfortable while improving their language skills through emotional interaction and communication.
|
|
15:30-16:30, Paper TuPO.14 | |
Comparison of Energy Consumption Rate and Walking Ability According to Exoskeleton Robot Type after Robot-Assisted Over-Ground Walking Training in Motor Complete Spinal Cord Injury |
|
Cho, Duk Youn | National Rehabilitation Research Center |
Lim, Jung Eun | National Rehabilitation Center |
Yang, SungPhil | National Rehabilitation Center |
Lee, Jun Min | Korea National Rehabilitation Center |
Shin, Beongju | National Rehabilitation Center |
Kim, Onyoo | National Rehabilitation Center |
|
15:30-16:30, Paper TuPO.15 | |
Design of a Self-Cleanable Electroadhesive Carrier for Stable Conveying System |
|
Lim, Sein | Korea Advanced Institute of Science & Technology (KAIST) |
Kim, Jihoon | KAIST |
Hwang, Geonwoo | Korea Advanced Institute of Science and Technology |
Kyung, Ki-Uk | Korea Advanced Institute of Science & Technology (KAIST) |
Keywords: HRI and Collaboration in Manufacturing Environments
Abstract: This paper presents the utilization of a self-cleanable electroadhesive chuck (EA chuck) as a technology to prevent the dropping of objects in a high-speed linear motion system. By applying a direct current (DC) voltage to the electrodes of the EA chuck, it generates an electroadhesive (EA) force. Additionally, the chuck induces the electrodynamic dust shield (EDS) effect to autonomously clean its surface when an alternating current (AC) voltage is applied to patterned electrodes within a single layer. Design parameters, including the electrode spacing and insulation layer material, were optimized to determine the suitable EA force and enhance the EDS efficiency for a high-speed conveying system. The results demonstrate that the optimized adhesive chuck ensures stable transportation of objects in the linear motion system. It effectively prevents objects from falling under high acceleration and deceleration conditions.
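The abstract does not give the force model, but a common first-order estimate for electroadhesion treats the chuck as a parallel-plate capacitor, F = ε0·εr·A·V²/(2d²). The sketch below only illustrates why EA force scales with the square of the applied voltage and inversely with the square of the insulation gap; the paper's patterned-electrode geometry will deviate from this idealization:

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def ea_force(voltage, area, gap, eps_r):
    """First-order parallel-plate estimate of electroadhesive force:
    F = eps0 * eps_r * A * V^2 / (2 * d^2), with area A in m^2 and
    insulation gap d in m. Illustrative only, not the paper's model."""
    return EPS0 * eps_r * area * voltage ** 2 / (2 * gap ** 2)
```

Doubling the voltage quadruples the estimated holding force, which is why EA chucks are driven at kilovolt levels.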
|
|
15:30-16:30, Paper TuPO.16 | |
Low-Cost and Light-Weight Assistive Suit for Caregivers' Transfer Work and an Evaluation of Compensation of the Load to the Spine's L5/S1 Segment |
|
Sakaki, Taisuke | Kyushu Sangyo University |
Ushimi, Nobuhiro | Kyushu Sangyo University |
Shimokawa, Toshihiko | Kyushu Sangyo University |
Keywords: Assistive Robotics, Human Factors and Ergonomics, Applications of Social Robots
Abstract: Life-supporting technology helps elderly and/or disabled individuals maintain their ability to engage in activities of daily life such as transferring between a bed and a wheelchair. A caregiver can be trained in these activities to avoid suffering lower back pain, especially in the standing-up motion. Caregivers also help care-receivers use their residual functions more actively. Devices for such care services can be used in hospitals, rehabilitation facilities and nursing homes, as these institutions are facing a workforce shortage, the aging of staff, and heavy care workloads. In scenarios in which a heavy load is placed on the hands, the load to the spine's L5/S1 segment is calculated using the erector spinae muscle force and the posture of the spine, which is derived from the upper-body posture, the angle between the upper body and the thighs, and the knee-joint angles. However, such a model requires complex calculations and a large amount of physical information. Here, we present a simple method that can be used to evaluate the risk of lower-back pain by calculating the load to the L5/S1 segment. This knowledge will help caregivers and therapists manage their risk of lower-back pain. We determined the total compressive forces to the L5/S1 segment in two scenarios: (1) without an assistive suit, as the conventional work mode, and (2) with the assistive suit with the handles at the shoulders, pulled by a care-receiver. We present the simplified model and the method for calculating the load to the L5/S1 segment as an indicator of the risk of lower-back pain in transfer work by caregivers.
|
|
15:30-16:30, Paper TuPO.17 | |
Design of Novel Prosthetic Wrist Using Shape Memory Alloy Actuators and Rolling Contact Joint with sEMG Control |
|
Chung, Chongyoung | Korea Advanced Institute of Science and Technology (KAIST) |
Hyeon, Kyujin | KAIST |
Ma, Jihyeong | Korea Advanced Institute of Science and Technology |
Kyung, Ki-Uk | Korea Advanced Institute of Science & Technology (KAIST) |
Keywords: Anthropomorphic Robots and Virtual Humans, Robots in Education, Therapy and Rehabilitation, Assistive Robotics
Abstract: This paper proposes a novel prosthetic wrist design that replicates human wrist movements by utilizing surface electromyography (sEMG) signals for control. The design includes two rolling contact joints and shape memory alloy (SMA) spring actuators, which mimic the two-row structures of carpal bones and wrist muscles, respectively. It can perform functional ranges of motion, including 53° for flexion, 50° for extension, 40° for radial deviation, and 42° for ulnar deviation. Due to the restoring force of the SMA spring actuator, it can maintain the stiffness of the wrist even when external forces are applied, while also providing flexibility. Also, mimicking the wrist muscles enables direct mapping without complex motion detection algorithms, resulting in fast and precise control using sEMG signals. To address the cooling rate limitation of SMA actuators, a portable air compressor and nozzle structures are employed to enhance cooling efficiency. The air passing through the nozzle increases in velocity and decreases in temperature, improving the cooling rate.
|
|
15:30-16:30, Paper TuPO.18 | |
Evaluation of a Social Robot at the Reception Desk for Exam Registration During Covid-19 |
|
Steinhaeusser, Sophia C. | University of Wuerzburg |
Donnermann, Melissa | Julius-Maximilians University Wuerzburg |
Lein, Martina | Julius-Maximilians-Universität of Würzburg |
Lugrin, Birgit | University of Wuerzburg |
Keywords: Applications of Social Robots, Evaluation Methods
Abstract: Due to the global COVID-19 outbreak in 2019, contact between people had to be minimized, which led to changes in our daily lives. An important tool for reducing interpersonal contact lies in technological progress and new media. Social robots are one of these new technologies, as they are able to take on repetitive tasks, e.g. reminding people of COVID-19 rules. In a field study, we evaluated a robot deployed at the reception desk for exam registration during the pandemic. Overall, our prototype received positive feedback. We identify further functions to be implemented in future iterations.
|
|
15:30-16:30, Paper TuPO.19 | |
A Data-Driven Approach to Positioning Grab Bars in the Sagittal Plane for Elderly Persons |
|
Bolli, Roberto | MIT |
Asada, Harry | MIT |
Keywords: Human Factors and Ergonomics, Assistive Robotics, Detecting and Understanding Human Activity
Abstract: The placement of grab bars for elderly users is based largely on ADA building codes and does not reflect the large differences in height, mobility, and muscle power between individual persons. The goal of this study is to see if there are any correlations between an elderly user’s preferred handlebar pose and various demographic indicators, self-rated mobility for tasks requiring postural change, and biomechanical markers. For simplicity, we consider only the case where the handlebar is positioned directly in front of the user, as this confines the relevant body kinematics to a 2D sagittal plane. Previous eldercare devices have been constructed to position a handlebar in various poses in space. Our work augments these devices and adds to the body of knowledge by assessing how the handlebar should be positioned based on data on actual elderly people instead of simulations.
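The study above looks for correlations between an elderly user's preferred handlebar pose and demographic, mobility, and biomechanical indicators. The abstract does not name the statistic used; Pearson's r is the standard choice for such pairwise correlations, sketched here as a minimal, self-contained function:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples:
    r = cov(x, y) / (std(x) * std(y)). Returns a value in [-1, 1]."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```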
|
|
15:30-16:30, Paper TuPO.20 | |
Detecting of Shear Direction with Piezoelectric Sensors in Cylinder Structure |
|
Min, Jiyong | Korea University |
Kim, Hojoon | KIST (Center for Intelligent and Interactive Robotics, KoreaInst |
Lee, Min Hyeok | Korea University |
Cha, Youngsu | Korea University |
Keywords: Detecting and Understanding Human Activity
Abstract: In this paper, we propose a method for detecting shear force directions with piezoelectric sensors inside an object. Specifically, a soft cylindrical structure was selected to insert the piezoelectric sensors. A test was performed by using a vibration exciter to generate shear forces on top of the structure. During the experiment, we applied sinusoidal waves to the soft structure. From the result, sensor values were obtained and we observed a tendency for the voltage response to be dependent on shear force direction.
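The abstract reports that the voltage response depends on the shear force direction. One hedged way such readings could be fused into a direction estimate (the evenly spaced sensor layout and the weighting scheme below are assumptions, not the paper's method) is an amplitude-weighted vector sum over the sensors' placement angles:

```python
import math

def shear_direction(angles_deg, amplitudes):
    """Estimate the shear direction (in degrees, 0-360) as the
    amplitude-weighted vector sum of the sensors' placement directions,
    assuming sensors evenly spaced around the cylinder wall."""
    x = sum(a * math.cos(math.radians(t)) for t, a in zip(angles_deg, amplitudes))
    y = sum(a * math.sin(math.radians(t)) for t, a in zip(angles_deg, amplitudes))
    return math.degrees(math.atan2(y, x)) % 360.0
```

With four sensors at 0°, 90°, 180°, and 270°, equal responses from the first two sensors would place the estimated shear at 45°.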
|
|
15:30-16:30, Paper TuPO.21 | |
Emotional Changes in Children with Developmental Disabilities in Clinical Experiments Using SAR Robots |
|
Lee, Jaeryoung | Chubu University |
Stefanov, Dimitar | Middlesex University |
Keywords: Robots in Education, Therapy and Rehabilitation, Child-Robot Interaction, Assistive Robotics
Abstract: Previous studies have focused on robotic systems and therapies for autism spectrum disorder (ASD). While the results of those studies have had an impact, they face difficulties in practical application. For example, there are cases where ASD-based systems and interactions fail when treating other developmental disorders. Emotional expression differs depending on the symptoms and severity of each developmental disorder, creating differences in the level of engagement. In this study, we compared changes in the biological signals of children with developmental disabilities, according to their symptoms, for each emotion trained during robot-assisted therapy. The results showed that changes in arousal values (such as electrodermal activity) differed depending on the symptoms and their severity. Therefore, instead of a single ASD-based therapy, it is necessary to realize various interactions tailored to the symptoms and their severity.
|
|
15:30-16:30, Paper TuPO.22 | |
Tangible-E-M-Otion: Interactive Cloth That Calms People Down |
|
Lee, Jaeryoung | Chubu University |
Kim, SunKyoung | University of Tsukuba |
Jeon, Eunjeong | Independent Researcher |
Keywords: Robots in Education, Therapy and Rehabilitation, User-centered Design of Robots
Abstract: The purpose of this research is to develop interactive cloth that gives people a sense of calm. The main users of the cloth are those who take a long time to calm down, or who suddenly panic, when they feel anxiety or tension. Children with developmental disabilities, who often experience these symptoms, seek a sense of calm by entering a large cardboard box when they panic. Therefore, in this study, we confirm whether human emotions can be changed by wearing interactive cloth that can output colors. It is hypothesized that an emotional state corresponding to a high arousal value shifts to the low-arousal region of Russell’s circumplex model of emotion. Electrodermal activity (EDA) is used as an index of the change in emotional state during interactions between the cloth and each participant. Experimental results have shown that the developed interactive cloth has a calming effect on people.
|
|
15:30-16:30, Paper TuPO.23 | |
Developing Social Robots with Empathetic Non-Verbal Cues Using Large Language Models |
|
Lee, Yoon Kyung | Seoul National University |
Jung, Yoonwon | Seoul National University |
Kang, Gyuyi | Seoul National University |
Hahn, Sowon | Seoul National University |
Keywords: Social Intelligence for Robots, Embodiment, Empathy and Intersubjectivity, Cognitive Skills and Mental Models
Abstract: We propose augmenting social robots’ empathetic capacities by integrating non-verbal cues. Our primary contribution encompasses designing and labeling four types of empathetic non-verbal cues (SAFE: Speech, Action (gesture), Facial expression, and Emotion) in a social robot, employing a Large Language Model (LLM). We developed an LLM-based conversational system for a social robot and assessed the alignment of the social cues with those defined by human counselors. Our preliminary results reveal distinct patterns in LLM-based responses, including a preference for calm and positive social emotions ('joy', 'lively') and frequent nodding gestures. Despite these patterns, our innovative approach has facilitated the development of a social robot capable of context-aware and more authentic interactions. Our work establishes a foundation for future human-robot interactions, emphasizing the pivotal role of verbal and non-verbal cues in constructing social and empathetic robots.
|
|
15:30-16:30, Paper TuPO.24 | |
Effect of Factual and Empathetic Feedback Styles in Robotic Fitness Coaching on Exercise Behavior Change |
|
Lee, Yoon Kyung | Seoul National University |
Park, Yong-Ha | Seoul National University |
Shin, Minjung | Seoul National University |
Hahn, Sowon | Seoul National University |
Keywords: Cooperation and Collaboration in Human-Robot Teams, Creating Human-Robot Relationships, Robots in Education, Therapy and Rehabilitation
Abstract: We investigate the effectiveness of a humanoid fitness coach, implementing two distinct styles of encouragement: factual, providing real-time performance feedback, and empathetic, focusing on emotional support. Our study establishes an experimental setup where participants engage in exercises (e.g., stretching and walking on a treadmill) under two distinct conditions, each influenced by the robot’s real-time feedback styles. Physiological and behavioral indices, such as heart rate and treadmill speed, were monitored, offering insight into the impact of the robotic intervention. Participants’ experiences and perceptions were further explored through post-exercise surveys. Our study highlights the significance of providing practical and informative feedback in real-time to enhance the intensity and effectiveness of exercising, thereby providing implications for fields such as AI-assisted personal fitness, healthcare, and exercise interventions.
|
|
15:30-16:30, Paper TuPO.25 | |
The Influence of Perceived Animacy on Human Perceptions of Robot Errors |
|
Miao, Xin | Tsinghua University |
Zhang, Xiaohan | Beijing Zhipu Huazhang Technology Co., Ltd |
Tang, Jie | Tsinghua University |
Peng, Kaiping | Tsinghua University |
Wang, Fei | Tsinghua University |
Keywords: Applications of Social Robots, Anthropomorphic Robots and Virtual Humans, Storytelling in HRI
Abstract: This paper investigates how animacy affects human perceptions of robot errors by utilizing the Stereotype Content Model (SCM). Experiment 1 used a hypothetical scenario and Experiment 2 observed actual human-robot interaction (n = 215 and 124, respectively). The results showed that the decline in the robot's perceived competence following an error is significantly greater than the decline in its perceived warmth. Mediation analysis further revealed that animacy mediated competence and warmth perceptions in response to robot errors. These findings highlight the crucial role of animacy in robot design and provide insights for future robot design.
|
|
15:30-16:30, Paper TuPO.26 | |
Designing Adaptive Navigation Sound for Indoor Delivery Robots |
|
Mouton, Baptiste | Naver Labs Europe |
Abe, Naoko | Naver Labs Europe |
Gallo, Danilo | Naver Labs Europe |
Colombino, Tommaso | Naver Labs Europe |
Lee, Dagyeong | Naver Labs |
Keywords: Sound design for robots, Evaluation Methods, Novel Interfaces and Interaction Modalities
Abstract: This paper presents our on-going project on adaptive sounds for delivery robots deployed in a high-rise office building. It focuses on the design principles and process for creating an adaptive navigation sound. In this paper, we describe the design process of 1) a robot navigation base sound and 2) an adaptation framework. The former consists of creating a base navigation sound considering several factors such as sound function and robot specifications. The latter presents the design process of an adaptive sound by questioning how the navigation sound should be adapted to suit different social contexts in the office building (e.g. food court, office area). One of the key challenges of our research is to design a sound that can provide the robots with a distinct and consistent identity while at the same time being adaptable to diverse environments and social contexts.
|
|
15:30-16:30, Paper TuPO.27 | |
“Take a Smallish Nap!”: Inducing Relaxation Using a Tapping Robot |
|
Furusawa, Minori | University of Tsukuba |
Osawa, Hirotaka | Keio University |
Keywords: Creating Human-Robot Relationships, Embodiment, Empathy and Intersubjectivity
Abstract: In this study, a robot was developed to induce a state of relaxation in the user by performing tapping. Insomnia is now recognized as a significant global health risk. To address this issue, the present study focused on the "Relaxation" method of CBT-I, an insomnia treatment. The tapping robot was designed to assist users in achieving a relaxed state characterized by the dominance of the parasympathetic nervous system and their perception of being "relaxed." The robot can maintain a more consistent rhythm and intensity compared to a human practitioner. Additionally, the robot is expected to utilize sensors and other devices to detect users' mental and physical states, providing appropriate feedback.
|
|
15:30-16:30, Paper TuPO.28 | |
A Hybrid Haptic Simulator for Realistic Car Door Interactions: Design and Implementation |
|
Kim, Ji-Sung | KAIST |
Ma, Jihyeong | Korea Advanced Institute of Science and Technology |
Kyung, Ki-Uk | Korea Advanced Institute of Science & Technology (KAIST) |
Keywords: HRI and Collaboration in Manufacturing Environments, User-centered Design of Robots, Creating Human-Robot Relationships
Abstract: Designing car doors with optimal haptic sensations is crucial for improving user experience. However, traditional physical prototyping can be time-consuming and costly, and it limits design iterations. In this paper, we propose a hybrid haptic simulator for virtual prototyping that allows users to experience the kinesthetic haptic feedback of opening and closing car doors. The hybrid haptic simulator, which utilizes a motor and a brake, ensures safe and realistic human-simulator interaction despite the high torque requirement (>50 Nm). In addition, we have developed an impedance control scheme to control both the motor and the brake at the same time. The proposed system can render the modeled car door torque profile accurately. Our proposed simulator significantly reduces the need for physical prototyping, thus enhancing the efficiency of car door design development while providing accurate haptic feedback.
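The motor-plus-brake impedance scheme is not detailed in the abstract. As an illustrative sketch (the split rule below is an assumption, not the paper's controller), a desired torque can be divided so the passive brake only ever resists motion, which improves safety at high torques, while the motor supplies the remainder:

```python
def split_torque(tau_des, velocity, brake_max):
    """Split a desired torque between an active motor and a passive brake.
    The brake is dissipative: it can only oppose the current velocity, up
    to brake_max. Returns (motor_torque, brake_torque) summing to tau_des."""
    if velocity != 0 and tau_des * velocity < 0:  # desired torque resists motion
        b = min(abs(tau_des), brake_max)
        brake = -b if velocity > 0 else b
    else:
        brake = 0.0  # torque would aid motion: the brake cannot help
    motor = tau_des - brake
    return motor, brake
```

For example, resisting a door swinging at positive velocity with -50 Nm while the brake saturates at 40 Nm leaves only -10 Nm for the motor, keeping the active actuator small.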
|
|
15:30-16:30, Paper TuPO.29 | |
Confluences and Conflicts in Stakeholder Imaginaries of ‘Robots for Care' |
|
de Saille, Stevienna | University of Sheffield |
Cameron, David | University of Sheffield |
Labinjo, Temitope | University of Sheffield |
Keywords: Assistive Robotics, User-centered Design of Robots, Robot Companions and Social Robots
Abstract: Substantial financial investment is being poured into care robotics research without truly understanding what ‘robots for care’ might mean to different people. The project ‘Imagining Robotic Care’ used a mix of policy review, expert interviews and focus group data gathered using LEGO® Serious Play® to investigate how stakeholders and publics imagine robots delivering aspects of care. Findings reveal a fundamental conflict not so much between different groups as between those with direct experience of the health-social care ecosystem (whether as professionals, informal carers or care users), and the policy-level sociotechnical imaginary of using robotics and AI to solve critical problems within the social care system as it now stands.
|
|
15:30-16:30, Paper TuPO.30 | |
FurNav: Development and Preliminary Study of a Robot Direction Giver |
|
Wilson, Bruce W | Heriot-Watt University |
Schlosser, Yann | Heriot-Watt University |
Tarkany, Rayane | Heriot-Watt University |
Moujahid, Meriam | Heriot-Watt University |
Nesset, Birthe | Heriot-Watt University |
Dinkar, Tanvi | Heriot-Watt University |
Rieser, Verena | Heriot-Watt University |
Keywords: Applications of Social Robots, Linguistic Communication and Dialogue, Multimodal Interaction and Conversational Skills
Abstract: When giving directions to a lost-looking tourist, would you first reference street names, cardinal directions, or landmarks, or simply tell them to walk five hundred metres in one direction and then turn left? Depending on the circumstances, one could reasonably use any of these direction-giving styles. However, research on direction giving with a robot does not often examine how these different styles affect perceptions of the robot's intelligence, nor does it take into account how users' prior dispositions may influence ratings. In this work, we generate natural language for two navigation styles using a system created for a Furhat robot, then measure perceived intelligence and animacy alongside users' prior dispositions toward robots in a small preliminary study (N = 7). Our results confirm previous findings that prior negative attitudes towards robots correlate negatively with propensity to trust robots, and they suggest avenues for future research. For example, more data is needed to explore the link between perceived intelligence and direction style. We end by discussing our plan to run a larger-scale experiment and how to improve our existing study design.
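The contrast between direction-giving styles can be illustrated with a toy template-based generator. The style names, route format, and templates below are hypothetical, not the FurNav system's:

```python
def directions(route, style="metric"):
    """Render a route, given as (distance_m, turn, landmark) steps, in one
    of two hypothetical styles: 'metric' uses distances and turns, while
    'landmark' references landmarks along the way instead."""
    lines = []
    for dist, turn, landmark in route:
        if style == "metric":
            lines.append(f"Walk {dist} metres, then turn {turn}.")
        else:
            lines.append(f"Go past the {landmark}, then turn {turn}.")
    return " ".join(lines)
```

The same underlying route thus yields two surface realizations, which is the manipulation a style-comparison study needs.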
|
|
15:30-16:30, Paper TuPO.31 | |
Discriminating between Autonomous and Human Remote Control in Human-Robot Interaction: The Role of Sensorimotor Adaptation |
|
Ciardo, Francesca | Dr |
Radice, Marta | University of Milano-Bicocca |
Russi, Nicola Severino | IIT |
De Tommaso, Davide | Istituto Italiano Di Tecnologia |
Wykowska, Agnieszka | Istituto Italiano Di Tecnologia |
Keywords: User-centered Design of Robots, Human Factors and Ergonomics, Cooperation and Collaboration in Human-Robot Teams
Abstract: Sensorimotor Synchronization (SMS) plays a crucial role in social interactions. The present study investigated whether SMS helps users distinguish between autonomous and human remote control during HRI. Specifically, we focused on the temporal adaptation mechanisms underlying SMS. To this end, we developed an interactive task in which users were asked to synchronize with the iCub robot to play a melody of four tones. The robot either ran autonomously or was remotely controlled by a human confederate. The task was administered during the 2022 Science Fair held in Genoa, Italy. The results revealed that specific SMS parameters, such as phase correction and inter-tap interval, affect the probability of correctly discriminating between autonomous and human remote control. The study highlights the importance of considering the level of temporal adaptation displayed by robots during HRI so that users can correctly identify how a robot is controlled.
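Phase correction in SMS is commonly modeled with a linear correction gain applied to the previous asynchrony. A sketch of that standard model, with a simulated (not measured) gain recovered by regression; the parameter values are illustrative:

```python
import numpy as np

# Linear phase-correction model often used in SMS analysis:
#   ITI_n = T - alpha * A_n + noise, with asynchrony A_n (tap - beat time).
# We simulate taps with a known alpha, then recover it by least squares.
rng = np.random.default_rng(0)
T, alpha, n = 0.5, 0.6, 500           # beat period (s), gain, number of taps
A = np.zeros(n)                        # asynchronies
iti = np.zeros(n - 1)                  # inter-tap intervals
for i in range(n - 1):
    iti[i] = T - alpha * A[i] + rng.normal(0, 0.01)
    A[i + 1] = A[i] + iti[i] - T       # next asynchrony follows from timing

# Estimated gain = negated slope of ITI regressed on previous asynchrony.
alpha_hat = -np.polyfit(A[:-1], iti, 1)[0]
print(f"estimated phase-correction gain: {alpha_hat:.2f}")
```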
|
|
15:30-16:30, Paper TuPO.32 | |
Scale and Motion Adaptive Multi-Object Tracking Algorithm for Unmanned Aerial Vehicles |
|
SONG, INPYO | Sungkyunkwan University |
Lee, Jangwon | Sungkyunkwan University |
Keywords: Detecting and Understanding Human Activity, Monitoring of Behaviour and Internal States of Humans
Abstract: Tracking multiple objects in aerial videos is a crucial task with numerous applications for unmanned aerial vehicles (UAVs), such as human-drone interaction and real-time suspect tracking by the police. However, this task is challenging due to the fast and unpredictable motion of UAVs and the small size of target objects in the videos caused by the high-altitude and wide-angle views of drones. In this study, we introduce a novel method to overcome these challenges. Specifically, we present a new tracking strategy that involves initiating the tracking of target objects from low-confidence detections, which are frequently encountered in various UAV application scenarios. Additionally, we propose revisiting traditional appearance matching algorithms to improve the association of low-confidence detections. Benchmark evaluations on two UAV-specific datasets (VisDrone2019, UAVDT) and a general dataset (MOT17) reveal that our approach surpasses current state-of-the-art methodologies, showcasing its robustness and adaptability in various tracking environments.
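The core idea of starting and extending tracks from low-confidence detections can be sketched as a two-stage association, in the spirit of the paper's strategy (thresholds and the greedy IoU matcher below are illustrative, not the authors' exact algorithm):

```python
def iou(a, b):
    """Intersection-over-union of two boxes (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def associate(tracks, detections, high_thr=0.6, iou_thr=0.3):
    """Match high-confidence detections first, then low-confidence ones."""
    high = [d for d in detections if d["score"] >= high_thr]
    low = [d for d in detections if d["score"] < high_thr]
    matches, unmatched = [], list(tracks)
    for pool in (high, low):                  # two association stages
        for det in pool:
            best = max(unmatched, key=lambda t: iou(t["box"], det["box"]),
                       default=None)
            if best is not None and iou(best["box"], det["box"]) >= iou_thr:
                matches.append((best["id"], det))
                unmatched.remove(best)
    return matches
```

A low-confidence detection that overlaps a track can thus still keep the track alive, which is what makes the strategy robust to the small, blurry targets typical of UAV footage.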
|
|
15:30-16:30, Paper TuPO.33 | |
Evaluation of Operator Performance and Workload in Robotic Teleoperation Assembly Task |
|
Prinz, Theresa | Technical University of Munich, TUM School of Engineering and De |
Wagner, Marlene | Technical University Munich |
Bengler, Klaus | Technical University of Munich |
Keywords: HRI and Collaboration in Manufacturing Environments, Human Factors and Ergonomics, Degrees of Autonomy and Teleoperation
Abstract: This study compares two visualization methods of a teleoperation setup for assembly tasks with respect to performance and workload on human operators. An experimental between-subjects design with 42 participants was used to assess the cognitive strain that the usage of two different visualization methods for (dis)assembly tasks puts on its operators. To measure performance, effectiveness and efficiency dimensions were objectively addressed through task completion time, and the workload was measured subjectively (NASA-TLX) and objectively (one-back task). The results show that incomplete substitution of direct visualization by a two-dimensional video stream leads to significantly lower performance with respect to task completion times. However, no significant differences were found in workload. The results of the evaluations aid in the development of intuitive and efficient human-robot interfaces for teleoperated manufacturing tasks and in considering teleoperation workplaces in the production planning process.
|
|
15:30-16:30, Paper TuPO.34 | |
AI-Based Interactive Telemedical Query System for Medical Inquiries |
|
Burum, Krystian | George Washington University |
Lee, Myungeun | George Washington University |
Teoh, Jia Yuan | George Washington University |
Park, Chung Hyuk | George Washington University |
Keywords: Robots in Education, Therapy and Rehabilitation, Narrative and Story-telling in Interaction, Virtual and Augmented Tele-presence Environments
Abstract: The complexity of medical information can lead to confusion, be associated with bias, and further increase the spread of misinformation. For example, parents who are concerned about the risks of vaccines might postpone or opt out of vaccinating their children based on information acquired from the internet and social media. Moreover, parents who use biased search terms when seeking information online can land on websites that support misconceptions about vaccines. Our objective in this study is to develop an Artificial Intelligence-based Health Information Query System (AI-HIQ) that provides informed answers to users' queries based on peer-reviewed scientific articles, published medical journals, and expert-written papers. Our first AI-HIQ retrieves abstracts of research articles from PubMed relevant to the user's query and formulates a set of candidate answers via the Bidirectional Encoder Representations from Transformers (BERT) model; it then picks the best answer by assessing sentence similarity scores computed from Universal Sentence Encoder embeddings. The second AI-HIQ uses the ChatGPT API with additional guidelines for performance improvement, such as prompts designed to increase accuracy and reduce bias. GPT (Generative Pre-trained Transformer) is a large language model built on the transformer architecture and its attention mechanism. We anticipate that the proposed AI-HIQ system can contribute to providing more objective and well-informed answers to the public on topics such as healthcare, reducing biases and misconceptions during information access.
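The answer-selection step amounts to ranking candidates by embedding similarity to the query. A minimal sketch, with hypothetical pre-computed vectors standing in for Universal Sentence Encoder embeddings (the texts and vectors below are invented):

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def best_answer(query_vec, candidates):
    """candidates: list of (answer_text, embedding) pairs; pick the closest."""
    return max(candidates, key=lambda c: cosine(query_vec, c[1]))[0]

query = np.array([0.9, 0.1, 0.0])
candidates = [
    ("Vaccines are rigorously safety-tested.", np.array([0.8, 0.2, 0.1])),
    ("Unrelated sentence.", np.array([0.0, 0.1, 0.9])),
]
print(best_answer(query, candidates))
```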
|
|
15:30-16:30, Paper TuPO.35 | |
Design of a Miniature Ultrasound Transducer Using PMN-PT Single Crystal for Side-Lobe Elimination in Mid-Air Haptic Feedback |
|
Han, Jaeseung | KAIST |
Park, Jihwan | KAIST |
Kyung, Ki-Uk | Korea Advanced Institute of Science & Technology (KAIST) |
Keywords: Novel Interfaces and Interaction Modalities, Multi-modal Situation Awareness and Spatial Cognition, HRI and Collaboration in Manufacturing Environments
Abstract: As VR and AR content continue to evolve, the demand for high-fidelity haptic feedback is increasing. Mid-air haptic feedback, which focuses ultrasonic waves through phased-array transducers, has emerged to address this need. However, commercial ultrasonic transducers often generate side lobes due to their diaphragm size, leading to unintended focal points. To rectify this, our research aims to develop a miniature, high-output ultrasound transducer with a diaphragm smaller than half the wavelength. We designed a transducer using PMN-PT single crystal, which offers high piezoelectric performance. Through impedance and velocity measurements, we confirmed the resonance frequency of the fabricated transducer with a 3.6 mm diaphragm to be 33.2 kHz. Its sound pressure performance exhibits a wide directivity profile. By refining and optimizing our approach, we aim to advance the field of mid-air haptic feedback.
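The half-wavelength design target can be checked with the plane-wave relation λ = c/f; the speed of sound in air (assumed 343 m/s, ~20 °C) is the only value below not taken from the abstract:

```python
# Half-wavelength criterion: element apertures/pitch below lambda/2 avoid
# grating/side lobes in a phased array.
c = 343.0               # assumed speed of sound in air, m/s
f = 33.2e3              # measured resonance frequency, Hz
wavelength = c / f      # ~10.3 mm at 33.2 kHz
half = wavelength / 2   # ~5.2 mm

print(f"lambda/2 = {half * 1e3:.1f} mm")
print("3.6 mm diaphragm below lambda/2:", 3.6e-3 < half)
```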
|
|
15:30-16:30, Paper TuPO.36 | |
Embracing Digital (Self-)Care: Early Insights from a Field Test of a Social Robot-Assisted Health Monitoring System for Older Adults |
|
Neef, Caterina | TH Köln - University of Applied Sciences |
Linden, Katharina Friederike | TH Köln - University of Applied Sciences |
Richert, Anja | University of Applied Sciences Cologne |
Keywords: Applications of Social Robots, Medical and Surgical Applications, User-centered Design of Robots
Abstract: To enable the successful deployment of autonomously usable health monitoring systems for older adults in uncontrolled, real-life environments, high levels of usability and user experience are essential. This paper presents preliminary findings from an eight-week field test of our social robot-assisted health monitoring system with older adults in assisted living. The initial results indicate high interest in the system and good usability, with users learning to interact with its various components and gaining a better understanding of it. We also give a first indication of the prerequisites for the use of such systems, namely the existing technical infrastructure, the involvement of the right target group motivated to use the system, and participatory design and development.
|
|
15:30-16:30, Paper TuPO.37 | |
Anthropomorphic Knee with Human-Mimetic Ligament Constraint Aiming Human-Like Motions |
|
Yamamoto, Yudai | Tokyo University of Agriculture and Technology |
Mizuuchi, Ikuo | Tokyo University of Agriculture and Technology |
Keywords: Anthropomorphic Robots and Virtual Humans, Innovative Robot Designs
Abstract: The purpose of this study is to realize human-like motions by mimicking the number and kinds of ligaments and the shapes of the femur and tibia. We propose that these features may improve the perceived degree of humanity. We conducted experiments to confirm that the ligaments constrain motion as human ligaments do, and verified that slipping and rolling motions, prevention of overextension, and rotation around the yaw axis were realized. Throughout the experiments, we also observed that the collateral ligaments may contribute to preventing overextension. In addition, we formed two hypotheses on the extent to which these motions affect perceived humanity and conducted a survey. The presence of the two motions did not yield a significant difference, and the additional items, although expected to, did not show one either. We identified several problems with the simulation videos: they showed no whole-body movement, the legs did not appear in harmony with other body parts, and their motion conveyed no emotion. We plan to conduct further experiments after modifying the knees shown in the videos.
|
|
15:30-16:30, Paper TuPO.38 | |
The Delicate Dance of Unintended Offense: Robots As Agents of Social Repair for Microaggressions |
|
Kim, Boyoung | George Mason University Korea |
Winkle, Katie | Uppsala University |
Korman, Joanna | The MITRE Corporation |
Keywords: Ethical Issues in Human-robot Interaction Research, Applications of Social Robots, Linguistic Communication and Dialogue
Abstract: In the field of human-robot interaction (HRI), there has been growing interest in examining what positive attitudinal and behavioral changes social robots can induce and how those influences can be exerted through robots' natural language speech. In this work, we explore the different kinds of verbal remarks a social robot can make to reinforce positive social norms about members of historically marginalized groups while breaking prejudices and stereotypes against them. We apply a recent framework proposed for facilitating ethical HRI research and practice to a scenario that portrays a case of gender microaggression. We discuss the power differences in this scenario, hypothesize a robot witnessing the incident, and inspect the norms, expectations, and outcomes of possible verbal responses the robot could make to either of the two human interactants in order to enhance positive norms about female professionals and challenge stereotypes against them. We thus demonstrate the application of a theoretical framework for ethical HRI research and practice and explore the use of social robots to promote positive societal change.
|
|
15:30-16:30, Paper TuPO.39 | |
Real-Time Personality Prediction System Using Multi-Modal Sensor in Human-Robot Interactions |
|
Bhin, Hyeonuk | Korea Institute of Science and Technology |
Lim, Yoonseob | Korea Institute of Science and Technology |
Choi, Jongsuk | Korea Inst. of Sci. and Tech. |
|
15:30-16:30, Paper TuPO.40 | |
Actuation Optimization of Hyper-Vacuum Artificial Muscles |
|
Coutinho, Altair | Sungkyunkwan University |
Rodrigue, Hugo | Sungkyunkwan University |
Keywords: Creating Human-Robot Relationships, Innovative Robot Designs
Abstract: Hyperbaric Vacuum Artificial Muscles (Hyper-VAM) use positive and negative pressures interchangeably through a vacuum-based actuator enclosed in a hyperbaric chamber. Unlike other pneumatic artificial muscles, the actuator's performance relies on efficient airflow control in a single chamber, enabling straightforward fluidic strategies. Although the Hyper-VAM already produces a large range of forces and can lift heavy payloads (up to 80 kg), its linear deformation and actuation speed can be improved by using the pressure equilibrium between its two chambers as the building block for advanced fluidic strategies. Through closed-loop pneumatic actuation, the actuator can be driven by exchanging air between the two chambers, allowing it to operate with a single pump and without exchanging air with the environment. We also show that the Hyper-VAM can operate in sub- and hyper-atmospheric conditions during closed-loop actuation and that the atmosphere can serve as a natural pump when starting from a sub-atmospheric equilibrium. This work introduces closed-loop pneumatic control of the actuator and demonstrates and compares new fluidic hardware strategies that exploit the pressure equilibrium between chambers to increase actuation speed.
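The pressure-equilibrium building block follows from isothermal ideal-gas mixing when the two chambers are connected; the chamber values below are illustrative, not measured on the Hyper-VAM:

```python
def equilibrium_pressure(p1, v1, p2, v2):
    """Isothermal equilibrium when two chambers are connected:
    P_eq = (P1*V1 + P2*V2) / (V1 + V2). Absolute pressures, any volume unit."""
    return (p1 * v1 + p2 * v2) / (v1 + v2)

# e.g., a sub-atmospheric actuator chamber and a pressurized outer chamber
# (hypothetical values in kPa absolute and litres)
p_eq = equilibrium_pressure(p1=26.0, v1=0.5, p2=200.0, v2=1.5)
print(f"equilibrium pressure: {p_eq:.1f} kPa")
```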
|
|
15:30-16:30, Paper TuPO.41 | |
Impression Evaluation of Rewarding/Punitive Behavior Using Robotic Gestures and Gaze in the Older Adults |
|
Uchikawa, Otono | Chuo University |
Niitsuma, Mihoko | Chuo University |
Keywords: Non-verbal Cues and Expressiveness, Robot Companions and Social Robots, Creating Human-Robot Relationships
Abstract: As social robots play an increasingly active role in nursing care and medical settings, opportunities for robots to communicate with older adults are expected to increase. When robots try to persuade people to perform certain actions, we believe it is necessary to clarify the behavioral factors that allow robots to ultimately elicit those actions, even from people who initially refuse. The purpose of this study is to clarify a method of generating robot behaviors that can gain people's attention and make them reconsider their choices of action by incorporating not only behaviors that make people feel comfortable but also deliberately unpleasant behaviors at effective moments. We defined two types of robot behavior, rewarding behaviors that give comfort and punishing behaviors that give discomfort, and clarified the impressions the two types of behavior give in communication between a robot and older adults, as well as which behaviors can gain attention.
|
|
15:30-16:30, Paper TuPO.42 | |
The Impact of ‘Head’ on Robotic Threat Perception in Rats |
|
Jo, Kyeong Im | Korea University |
Jeong, Ji Hoon | Korea University |
Choi, June-Seek | Korea University |
Keywords: Social Intelligence for Robots, Robot Companions and Social Robots
Abstract: There has been ongoing research investigating how robot characteristics influence interactions with animals and the resulting relationships. We specifically examine how the presence or absence of a head on a predator robot impacts animals' responses, including their avoidance strategies, vigilance levels, and threat perception, and how the predator robot modulates the rat's defensive behavior, including avoidance and approach strategies. The presence of a robot head reduced the behavioral activity of rats, indicating an increase in their fear level. A further retention test showed that rats maintained a greater distance when the robot's head was present. These findings provide valuable insights into the role of visual cues in predator-prey interactions and enhance our understanding of how the design of predator robots can affect animal behavior.
|
|
15:30-16:30, Paper TuPO.43 | |
A Tunable Tensile Element for Variable Compliance of Tensegrity Robots |
|
Arshad, Vaqas | Sungkyunkwan University |
Jamil, Babar | Sungkyunkwan University |
Rodrigue, Hugo | Sungkyunkwan University |
Keywords: Innovative Robot Designs, HRI and Collaboration in Manufacturing Environments
Abstract: Soft and tensegrity structures are two configurations that have appeared over the years as robots continue to draw inspiration from nature and consequently move towards compliant systems. As artificial manifestations of key biological attributes continue to make their way into robot designs, one premium feature that has been identified is the body’s capacity to change its stiffness to have a more harmonious relationship with its environment. Herein, we present a soft tensile element capable of achieving a high increase in stiffness constant of about 2007% with change in pressure. At a vacuum of 75 kPa, it provides a high blocking force of 284.5 N when stretched by approximately 29% of its original length. The hardware is designed to be used as the tensioned element in a tensegrity structure to animate the characteristic of biological structures to transition between flexible and stiff states to provide variable load-bearing capability and compliance to their larger incorporative bodies. As opposed to classical tensegrity structures for which shape change is always inexpensive in terms of energy on account of their flexible elements, this arrangement can alter the magnitude of its demand for external energy required to change its shape, thus having variable compliance that is crucial for operation in partially mapped environments.
|
|
15:30-16:30, Paper TuPO.44 | |
What Predicts Interpersonal Affect? Preliminary Analyses from Retrospective Evaluations |
|
Parreira, Maria Teresa | Cornell University |
Sack, Michael | Cornell University |
Jung, Malte | Cornell University |
Keywords: Affective Computing, Creating Human-Robot Relationships, Social Intelligence for Robots
Abstract: While the field of affective computing has greatly improved the seamlessness of human-robot interactions, the focus has primarily been on the emotional processing of the self rather than the perception of the other. To address this gap, in a user study with 30 participant dyads, we collected users' retrospective ratings of their interpersonal perception of the other interactant after a short interaction, using CORAE, a novel web-based open-source tool for COntinuous Retrospective Affect Evaluation. In this work, we analyze how these interpersonal ratings correlate with different aspects of the interaction, namely personality traits, participation balance, and sentiment analysis. Notably, we found that conversational imbalance has a significant effect on the retrospective ratings, among other findings. These analyses and methodologies lay the groundwork for enhanced human-robot interactions in which affect is understood as a highly dynamic and context-dependent outcome of interaction history.
|
|
15:30-16:30, Paper TuPO.45 | |
I’ll Get by with a Little Help from AI – Initial Exploration of New Perspectives for Non-Engineer Scholars with ChatGPT |
|
Müller, Ana | University of Applied Sciences Cologne |
Richert, Anja | University of Applied Sciences Cologne |
Keywords: Machine Learning and Adaptation, Innovative Robot Designs, Evaluation Methods
Abstract: Large-scale language models (LLMs), like ChatGPT, have revolutionized text generation, prompting concerns about evaluation, ethics, and interdisciplinary impact. This paper advocates for interdisciplinary collaboration by empowering non-engineering scholars to shape technology without coding expertise. Using a practical example from social robotics research, it demonstrates the potential of LLMs in the first steps of programming a Furhat robot for an empirical study. It highlights the opportunities of Human-Artificial Intelligence (AI) collaboration, emphasizing different ways of problem-solving through dialogue between humans and AI. The paper aims to alleviate the fear of missing out for non-engineers, fostering a sense of inclusion.
|
|
15:30-16:30, Paper TuPO.46 | |
GSIP: A GRU-Based System for Human Impression Prediction and Automatic Prosody Selection for Gibberish Speech |
|
Galiza Cerdeira Gonzalez, Antonio | Tokyo University of Agriculture and Technology |
Mizuuchi, Ikuo | Tokyo University of Agriculture and Technology |
Keywords: Linguistic Communication and Dialogue, Affective Computing, Sound design for robots
Abstract: This work presents the development and evaluation of GSIP (Gibberish Speech Impression Predictor), a bi-directional GRU neural network that predicts human impressions of speech inputs from both phonetic information and prosody matrices. The objective is to select appropriate acoustic prosody for gibberish and semantic speech. The proposed system was validated through a user study: participants ranked its performance against constant-prosody and random-prosody patterns and completed adapted Godspeed questionnaires to assess their perception of the GSIP-based prosody system and the conversational agents. The experiment employed three embodied conversational agents: two screen-based avatars and a physical robot. It found support for the view that gibberish speech is not very engaging in conversation and that higher degrees of anthropomorphism create a higher perception of intelligence; the proposed system accurately predicted human impressions but failed to generate more engaging reactions than constant prosody.
|
|
15:30-16:30, Paper TuPO.47 | |
Intrinsic Force Sensing on Nonlinear Shape of Collaborative Robots |
|
Jung, Dawoon | Ajou Unversity |
Bu, Seongun | Ajou University |
Kang, Yuna | Ajou University |
Kim, Uikyum | Ajou University |
Keywords: HRI and Collaboration in Manufacturing Environments, Creating Human-Robot Relationships
Abstract: Physical human-robot interaction is emerging as a trend in human-robot interaction, offering both enhanced human safety and the possibility of touch-based interaction on the robot link. However, conventional intrinsic force sensing methods employing force/torque (F/T) sensors have a limitation: they can only be applied to simple surface shapes that can be described analytically. In this paper, we present a novel intrinsic force sensing method capable of identifying the contact point on the nonlinear shape of a robot link by utilizing a mesh representation of the link surface. An algorithm was developed to identify the triangle within the mesh actually contacted by a human and to subsequently calculate the contact point on the identified triangle. The method was evaluated using a sensorized cover composed of an F/T sensor and a resized model of a collaborative robot link. The results demonstrate that contact points and forces on arbitrary surface regions were successfully detected, and that even gesture signs used for interaction could be recognized.
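The geometric core of intrinsic contact localization is that, for a single-point contact with negligible contact torque, the contact lies on the force's line of action recovered from the F/T reading (a point on that line is F x tau / |F|^2), intersected with the surface. A minimal sketch for one triangle, with illustrative values; the paper's mesh search and evaluation details are omitted:

```python
import numpy as np

def contact_point(F, tau, tri):
    """Intersect the force's line of action with triangle tri (3 vertices).
    Returns the intersection point, or None if it falls outside the triangle."""
    F, tau = np.asarray(F, float), np.asarray(tau, float)
    r0 = np.cross(F, tau) / np.dot(F, F)       # point on the line of action
    d = F / np.linalg.norm(F)                  # line direction
    a, b, c = (np.asarray(v, float) for v in tri)
    n = np.cross(b - a, c - a)                 # triangle normal
    t = np.dot(a - r0, n) / np.dot(d, n)       # line-plane intersection
    p = r0 + t * d
    # barycentric inside-test via edge cross-products
    inside = (np.dot(np.cross(b - a, p - a), n) >= 0 and
              np.dot(np.cross(c - b, p - b), n) >= 0 and
              np.dot(np.cross(a - c, p - c), n) >= 0)
    return p if inside else None

# Hypothetical reading: a 5 N push along -z at (0.2, 0.3, 1.0) on a triangle
# lying in the plane z = 1 (tau = r x F at the sensor frame origin).
tri = [(0, 0, 1), (1, 0, 1), (0, 1, 1)]
p = contact_point(F=(0, 0, -5), tau=(-1.5, 1.0, 0), tri=tri)
print(p)  # recovers the contact point on the link surface
```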
|
|
15:30-16:30, Paper TuPO.48 | |
Design of a Robotic Gripper for Fruits Harvesting with Fin Ray Mechanism |
|
An, Byeongchan | Ajou University |
Song, Minseok | Ajou University |
Kim, Uikyum | Ajou University |
Keywords: Innovative Robot Designs
Abstract: Grippers are essential for harvesting fruit automatically, which is important for addressing a shrinking agricultural labor force. In previous studies, grippers grasp fruit and twist it, or add extra degrees of freedom, to harvest it; these approaches can damage the fruit and are not economical. This research proposes a soft gripper with a Fin Ray mechanism and a linkage for fruit harvesting. The compliant hinges of the Fin Ray are replaced with revolute joints. When the gripper grasps the fruit, the Fin Ray deforms, causing the linkage to rotate and allowing the cutter to adaptively cut the stem. A force analysis of the linkage and fruit-grasping experiments were conducted. The proposed gripper is fabricated with a 3D printer and can be adapted to various fruits with simple design changes. The target fruit in this paper is the tomato.
|
|
15:30-16:30, Paper TuPO.49 | |
Unsupervised Learning-Based Endoscopic Scene Homography Estimation and Image Stitching |
|
Zhao, ShiZun | Fudan University |
Luo, jingjing | Fudan University |
Wang, Hongbo | Fudan University |
Han, Yuan | Eye & ENT Hospital of Fudan University |
WenXian, Li | Eye & ENT Hospital of Fudan University |
Keywords: Machine Learning and Adaptation, Medical and Surgical Applications
Abstract: This study combines image stitching technology with endoscopic vision to propose a real-time endoscopic image expansion solution that addresses the problem of limited visibility during tracheal intubation. Specifically, we propose an unsupervised homography estimation network based on hybrid dilation convolution. Compared with other homography estimation methods, the proposed network has only 332K parameters, and the mean absolute error of the overlapping area after registration is 0.013, with a Structural Similarity Index Measure of 0.985. In addition, this study designs a simple fusion network to eliminate artifacts in panoramic images and proposes an end-to-end incremental multi-image stitching solution that can expand the field of view of real-time endoscopic images.
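Homography estimation networks of this kind typically build on the classical four-point direct linear transform (DLT), which maps four point correspondences to a 3x3 homography. A sketch of that underlying step, with illustrative points (this is the textbook DLT, not the paper's network):

```python
import numpy as np

def dlt_homography(src, dst):
    """Estimate 3x3 H with dst ~ H @ src from four point correspondences."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(A, float))
    H = vt[-1].reshape(3, 3)       # null-space vector of the DLT system
    return H / H[2, 2]

src = [(0, 0), (1, 0), (1, 1), (0, 1)]
dst = [(2, 1), (4, 1), (4, 3), (2, 3)]   # scale by 2, translate by (2, 1)
H = dlt_homography(src, dst)
pt = H @ np.array([0.5, 0.5, 1.0])       # map the unit square's center
print(pt[:2] / pt[2])
```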
|
|
15:30-16:30, Paper TuPO.50 | |
Taotie: Designing a Museum Robot Utilizing Cultural Metaphors |
|
Yao, Zhihao | Tsinghua University |
Guo, Yijie | Tsinghua University |
Lu, Yao | Tsinghua University |
Sun, Qirui | Tsinghua University |
Gao, Mingyue | Tsinghua University |
Mi, Haipeng | Tsinghua University |
Keywords: Robots in art and entertainment, Innovative Robot Designs, Assistive Robotics
Abstract: This work investigates a museum robot design method that incorporates cultural metaphors, together with a practical study of customised museum robot design for a distinct cultural element: traditional Chinese bronze. Taotie is a museum robot that exhibits 27 different bronze-pattern faces through a rotating mechanism, with the objective of sparking visitors' curiosity and motivating them to learn about the culture behind the patterns.
|
|
15:30-16:30, Paper TuPO.51 | |
The Imitation Game: A Dance Task to Explore Social Influence in Child-Robot Mixed Groups |
|
Pusceddu, Giulia | Istituto Italiano Di Tecnologia, Università Di Genova |
Cocchella, Francesca | Italian Institute of Technology/University of Genoa |
Belgiovine, Giulia | Istituto Italiano Di Tecnologia |
Lastrico, Linda | Italian Institute of Technology |
Bogliolo, Michela | Scuola Di Robotica |
Rea, Francesco | Istituto Italiano Di Tecnologia |
Casadio, Maura | University of Genoa |
Sciutti, Alessandra | Italian Institute of Technology |
Keywords: Cooperation and Collaboration in Human-Robot Teams, Child-Robot Interaction, Robots in Education, Therapy and Rehabilitation
Abstract: Given that a significant portion of our daily lives revolves around group interactions, the integration of social robots into everyday existence requires understanding the dynamics that emerge when social robots are introduced into human groups, with the ultimate goal of equipping them with the ability to behave within such group settings effectively and naturally. In this work, we explore the influence among the members of child-robot mixed groups during a motor task. To achieve this, we propose a gamified task in which children and the NAO robot mimic simple gestures. The robot is programmed to perform the actions in both a "typical" and "atypical" manner. We observe the execution of the actions of the participants before and after they have witnessed the robot's version of the moves. Preliminary results show that participants imitate the robot less than they self-report. Additionally, it is observed that the group members tend to focus their gaze on the human individual who initiates the action.
|
|
TuET1 |
Room T1 |
Social Human-Robot Interaction of Human-Care Service Robots |
Special Session |
Chair: Jang, Minsu | Electronics & Telecommunications Research Institute |
Co-Chair: Ahn, Ho Seok | The University of Auckland, Auckland |
|
16:40-16:50, Paper TuET1.1 | |
Can a Robot Elicit Emotions? a Global Optimization Model to Attribute Mental States to Human Users in HRI (I) |
|
Staffa, Mariacarla | University of Naples Parthenope |
D'Errico, Lorenzo | University of Naples Federico II |
Keywords: Monitoring of Behaviour and Internal States of Humans, Personalities for Robotic or Virtual Characters, Cognitive Skills and Mental Models
Abstract: In this work, we investigate whether a distinct robot personality can impact the emotional state of users, which we propose to detect using neuroscience theories that classify emotions by valence and arousal metrics derived from brain wave activity. We devised an experimental study in which EEG data were gathered while individuals interacted with a robot with different personalities. Support Vector Machine, Decision Tree, Random Forest, K-Nearest Neighbors, and Multi-Layer Perceptron classifiers were all trained on the EEG signals with valence and arousal labels. All classifiers were subjected to a Global Optimization Model (GOM) that used feature selection and hyper-parameter optimization to improve classification results and address common issues affecting classifier accuracy in supervised learning, such as the bias-variance trade-off, the dimensionality of the input space, and noise in the input data. The findings of the experiments are presented and discussed.
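The GOM's combination of feature selection and hyper-parameter search can be sketched as a joint loop over feature subsets and hyper-parameter values scored on a validation split. A toy, self-contained version with a hand-rolled k-NN on synthetic data (everything below is illustrative, not the paper's optimizer):

```python
import numpy as np

# Synthetic two-class problem: label depends on features 0 and 1 only,
# so feature 2 is pure noise and good feature selection should drop it.
rng = np.random.default_rng(1)
X = rng.normal(0, 1, (200, 3))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
Xtr, ytr, Xva, yva = X[:150], y[:150], X[150:], y[150:]

def knn_acc(features, k):
    """Validation accuracy of a simple k-NN on the chosen feature subset."""
    pred = []
    for x in Xva[:, features]:
        d = np.linalg.norm(Xtr[:, features] - x, axis=1)
        votes = ytr[np.argsort(d)[:k]]
        pred.append(int(votes.mean() > 0.5))
    return np.mean(np.array(pred) == yva)

# Joint search over feature subsets and the hyper-parameter k.
best = max(((f, k, knn_acc(list(f), k))
            for f in [(0, 1), (0, 1, 2)] for k in (1, 3, 5)),
           key=lambda t: t[2])
print("best features / k / accuracy:", best)
```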
|
|
16:50-17:00, Paper TuET1.2 | |
Evaluation of Large Tweet Dataset for Emotion Detection Model: A Comparative Study between Various ML and Transformer (I) |
|
Lee, Sanghyub John | University of Auckland |
Lim, JongYoon | University of Auckland |
Paas, Leo | The University of Auckland |
Ahn, Ho Seok | The University of Auckland, Auckland |
Keywords: Social Intelligence for Robots, Machine Learning and Adaptation, Social Touch in Human–Robot Interaction
Abstract: Specific emotion detection in written human language is a challenging problem in various research fields, including psychology, neuroscience, and computer science. Twitter is a suitable source for collecting large emotion datasets, as users provide tweets with emotion hashtags (e.g., #fear, #anger, #sadness, #joy, #surprise, and #disgust) expressing their emotions. However, the criteria for data collection, i.e., the position of representative or synonymous emotion hashtags, remain unclear. In addition to this ambiguity, we assess the suitability of various machine learning (ML) algorithms for this purpose. In this study, we collected over five million tweets (n=5,645,139) with 24 emotion hashtags and investigated the efficacy of different criteria for collecting tweets. Contrary to previous research, we found that accepting representative emotion hashtags in any position achieves strong performance, rather than requiring synonymous emotion hashtags in the last position. Our study shows that the RoBERTa-large transformer model outperforms deep learning algorithms and traditional ML algorithms for specific emotion detection in tweets, especially when trained on a dataset that balances size and quality. We also found that larger datasets are more efficient for RoBERTa model training than smaller datasets. Along with these empirical contributions, we share the collected emotion dataset.
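The two collection criteria the study compares (hashtag in any position vs. only in the final position) reduce to simple token filters; the tiny tweet list below is invented for illustration:

```python
EMOTION_TAGS = {"#fear", "#anger", "#sadness", "#joy", "#surprise", "#disgust"}

def has_tag_any_position(tweet):
    """Keep the tweet if an emotion hashtag appears anywhere."""
    return any(tok.lower() in EMOTION_TAGS for tok in tweet.split())

def has_tag_last_position(tweet):
    """Keep the tweet only if an emotion hashtag is the final token."""
    toks = tweet.split()
    return bool(toks) and toks[-1].lower() in EMOTION_TAGS

tweets = [
    "What a day #joy",
    "#anger is all I feel about this commute",
    "Nothing to see here",
]
print(sum(map(has_tag_any_position, tweets)))   # matches 2 tweets
print(sum(map(has_tag_last_position, tweets)))  # matches 1 tweet
```

The any-position criterion necessarily yields a superset of the last-position one, which is why it produces the larger datasets the paper found more effective.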
|
|
17:00-17:10, Paper TuET1.3 | |
The Video Game to Robot Driver Pipeline: Sociability with Humans-In-The-Loop (I) |
|
Knight, Heather | Oregon State University |
Buchmeier, Sean | Oregon State University |
|
17:10-17:20, Paper TuET1.4 | |
The Effects of Socio-Relational Context and Robotization on Human Group (I) |
|
Kang, Dahyun | Korea Institute of Science and Technology |
Kim, Sangmin | Korea Institute of Science and Technology |
Choi, Jongsuk | Korea Inst. of Sci. and Tech |
Kwak, Sonya Sona | Korea Institute of Science and Technology (KIST) |
Keywords: User-centered Design of Robots, Robot Companions and Social Robots
Abstract: Nunchi is a high-context communication skill that enables an individual to understand a counterpart's indirect social cues and respond appropriately. Robotic things can perceive and recognize situations and express appropriate responses to them. Thus, this study was conducted with the expectation that robotic things equipped with Nunchi, which understand the situation well and behave appropriately on behalf of workers, would allow the workers to focus on their work. To investigate the effect of robotizing objects on the degree to which Nunchi is required of the participant, on task load, and on the impression of the objects according to the socio-relational context, a 2 (socio-relational context: stranger group vs. friend group) X 2 (robotization: robotic things vs. baseline things) mixed-participant experiment was designed. As a result, participants evaluated robotic things as more useful and social than baseline things. In addition, less Nunchi was required of people in the stranger group when robotic things were used in their work than when baseline things were used. Finally, participants felt their task load was reduced when the robotic things were used.
|
|
17:20-17:30, Paper TuET1.5 | |
Abnormal Detection of Worker by Interaction Analysis of Accident-Causing Objects (I) |
|
Kim, Won Shik | UST |
Kim, Kyekyung | Electronics and Telecommunications Research Institute |
Keywords: HRI and Collaboration in Manufacturing Environments, Cooperation and Collaboration in Human-Robot Teams, Machine Learning and Adaptation
Abstract: In industrial sites, many industrial accidents cause human casualties every year, and deep learning-based object detection and danger zone management technologies are being proposed to minimize accidents. However, existing studies have focused only on object detection, which performs poorly when a dangerous situation arises from the interaction of two or more accident-causing objects. This paper proposes a method that detects accident risks in advance through object detection and risk interaction analysis between objects. It consists of four modules: image acquisition, object detection, worker action analysis, and risk event detection based on dangerous object interaction. YOLOv4 is selected and fine-tuned to detect workers and conveyor objects that can cause accidents through their interaction. After danger and caution zones are defined, the system determines whether a detected object is present around the danger zone. A dataset of 68,621 images collected from industrial sites was created to train and evaluate the system, which achieves an mAP of 91.79% for object detection and an F1 score of 88.9% for risk event detection.
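The zone check described in the abstract above, flagging a risk event when a detected object's bounding box overlaps a predefined danger zone, can be sketched in a few lines. This is an assumed minimal illustration, not the paper's implementation; the box coordinates and labels are invented.

```python
# Sketch: after a detector such as YOLOv4 returns bounding boxes, flag a risk
# event when a detected box overlaps the danger zone. Box format: (x1, y1, x2, y2).

def overlaps(box_a, box_b) -> bool:
    """Axis-aligned overlap test between two (x1, y1, x2, y2) boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    return ax1 < bx2 and bx1 < ax2 and ay1 < by2 and by1 < ay2

def risk_events(detections, danger_zone):
    """Return labels of detected objects that intersect the danger zone."""
    return [label for label, box in detections if overlaps(box, danger_zone)]

danger_zone = (100, 100, 300, 300)                 # hypothetical conveyor area
detections = [("worker", (250, 250, 320, 400)),    # overlaps the zone
              ("worker", (500, 500, 560, 650))]    # clear of the zone
```

A production system would also incorporate the worker-action analysis the paper describes; the geometric test here covers only the zone-intersection step.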
|
|
17:30-17:40, Paper TuET1.6 | |
To Shake or Not to Shake: Intuitive Reactions of Senior Adults to a Robot Handshake in a Western Culture (I) |
|
van Otterdijk, Maria Theodorus Henricus | University of Oslo |
Saplacan, Diana | University of Oslo |
Baselizadeh, Adel | University of Oslo (UiO) |
Laeng, Bruno | Department of Psychology and with the RITMO Centre for Interdisc |
Torresen, Jim | University of Oslo |
Keywords: Non-verbal Cues and Expressiveness, Robot Companions and Social Robots, Assistive Robotics
Abstract: Robots have the potential to provide everyday care and support for senior adults, but acceptance is essential for successful implementation in the domestic environment. Nonverbal social behavior can enhance this acceptance, and behavioral cues should be easy and intuitive to understand. But which factors contribute to senior adults’ intuitive understanding of social cues such as handshakes? Our research addresses this question using video observations and semi-structured interviews. Based on a thematic analysis and the video observations, our findings indicate that some participants intuitively understood how to shake hands, while most did not shake hands out of fear or because they did not understand the robot’s behavior. Other identified themes included contributing features for intuitive handshakes, design improvements, and experiences with the robot’s end effector. Lastly, we found no significant relationship between participants’ initial response to the handshake and either the reaction time or the handshake duration. By designing the gripper and the robot itself in a more familiar, less fear-eliciting way, senior adults might understand the gesture of shaking hands more intuitively.
|
|
17:40-17:50, Paper TuET1.7 | |
Development and Validation of a Motion Dictionary to Create Emotional Gestures for the NAO Robot (I) |
|
Hellou, Mehdi | University of Manchester |
Gasteiger, Norina | University of Manchester |
Kweon, Andy | The University of Auckland |
Lim, JongYoon | University of Auckland |
MacDonald, Bruce | University of Auckland |
Cangelosi, Angelo | University of Manchester |
Ahn, Ho Seok | The University of Auckland, Auckland |
Keywords: Non-verbal Cues and Expressiveness, Assistive Robotics, Robots in art and entertainment
Abstract: Social robots are becoming increasingly present in our daily lives and will continue to be integrated into society to help people with their daily routines. In this paper, we create a general motion dictionary for the NAO robot to generate emotional gestures when the robot is interacting with humans. We implemented the motions in the context of a museum setting, wherein NAO interacts with visitors as a guide. We present a Motion Dictionary that integrates each gesture’s features and the corresponding emotions. By using the Choregraphe simulator to create the motions and validating them with a real robot, we aim to simplify and support the generation of emotional gestures for human-robot interaction.
|
|
17:50-18:00, Paper TuET1.8 | |
Connecting without Reaching: How Voice-Cloned Robot Can Enhance Mental Health of Isolated People During a Pandemic (I) |
|
Kim, Jun San | KB Financial Group |
Shin, Soyeon | LG Electronics |
Kang, Dahyun | Korea Institute of Science and Technology |
Lim, Yoonseob | Korea Institute of Science and Technology |
Kwak, Sonya Sona | Korea Institute of Science and Technology (KIST) |
Keywords: User-centered Design of Robots, Robot Companions and Social Robots
Abstract: Voice cloning techniques using deep neural networks have been used for fraud, such as false financial transactions or fake news, limiting their widespread application. However, when incorporated into communication robots, voice cloning may increase social connectedness for isolated people. Here, we propose the concept of a voice-cloned communication robot (VCR) equipped with acquaintances’ voices to allow people in isolation to feel connected with their acquaintances. We developed a prototype VCR, conducted exploratory qualitative and quantitative studies, and verified its potential effectiveness in enhancing the mental health of isolated people.
|
|
18:00-18:10, Paper TuET1.9 | |
Deep Learning-Based Head Pose Estimation for Enhancing Nonverbal Communication in Human-Robot Interaction (I) |
|
Yoon, Chanyoung | Korea Institute of Industrial Technology |
Lim, Yoongu | Korea Institute of Industrial Technology |
Lee, Dong-Wook | Korea Institute of Industrial Technology |
Ko, KwangEun | Korea Institute of Industrial Technology |
Keywords: Affective Computing, Applications of Social Robots, Non-verbal Cues and Expressiveness
Abstract: This study presents an approach for enhancing human-robot interaction by developing a deep learning-based algorithm that recognizes nonverbal communicative information, with a focus on head pose estimation. Head pose estimation is a crucial component of nonverbal communication, and constructing an accurate model for it requires both a suitable architecture and a large-scale dataset that reflects diverse real-life conditions. However, it is challenging to annotate the 3D movement of the facial region from real images. To address this issue, this study proposes a pipeline for creating a large-scale training dataset consisting of images and 3D head pose annotations generated with a photo-realistic 3D head model. Based on the dataset collected in this way, a deep learning-based 6D object pose model was trained to estimate the head pose directly from an RGB image. The 6D pose model is based on a convolutional neural network with multiple heads designed to perform facial expression classification, facial region detection, and head pose estimation. Experimental results on a custom test dataset show that the proposed method achieves a mean absolute error of 3.49 degrees over the Euler angles.
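The error metric quoted in the abstract above, mean absolute error over Euler angles, can be made concrete with a short sketch. This is an assumed reading of the metric, not the paper's evaluation code; the angle triples below are invented examples.

```python
# Sketch: mean absolute error in degrees over (yaw, pitch, roll) triples,
# averaged across all angles of all test samples.
def euler_mae(predicted, ground_truth):
    """MAE over flattened Euler-angle triples, in degrees."""
    errors = [abs(p - g)
              for pred, gt in zip(predicted, ground_truth)
              for p, g in zip(pred, gt)]
    return sum(errors) / len(errors)

pred = [(10.0, -5.0, 2.0), (0.0, 3.0, -1.0)]
gt   = [(12.0, -4.0, 2.0), (1.0, 1.0, -1.0)]
# per-angle errors: 2, 1, 0, 1, 2, 0 -> mean = 1.0
```

Note that a robust evaluation would also handle angle wrap-around near ±180 degrees, which this minimal version omits.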
|
|
TuET3 |
Room T3 |
Mental Models of the Human User in Social HRI |
Regular Session |
Chair: Trafton, Greg | Naval Research Laboratory |
|
16:40-16:50, Paper TuET3.1 | |
Towards Benchmarking Human-Aware Robot Navigation: A New Perspective and Metrics |
|
Singamaneni, Phani Teja | LAAS-CNRS |
Favier, Anthony | LAAS-CNRS |
Alami, Rachid | CNRS |
Keywords: Evaluation Methods, Motion Planning and Navigation in Human-Centered Environments, Robot Companions and Social Robots
Abstract: Human-aware robot navigation planning enables robots to traverse human-occupied spaces socially. However, evaluating and benchmarking the 'human awareness' of such navigation schemes is challenging. With the growing necessity and research interest in the field, there is a need to define metrics to quantify and benchmark such qualities. In this regard, this paper proposes a set of metrics by looking at the problem from a new perspective. These proposals are made by inspecting the robot's navigation from the viewpoint of a human experiencing it and then defining proxies for the perceived human feelings. Analyses of some commonly occurring human-robot navigation scenarios using these metrics show their capability in benchmarking and differentiating human-aware robot navigation from standard robot navigation.
|
|
16:50-17:00, Paper TuET3.2 | |
Investigating NARS: Inconsistent Practice of Application and Reporting |
|
Rosén, Julia | University of Skövde |
Lagerstedt, Erik | University of Skövde |
Lamb, Maurice | Högskolan I Skövde |
Keywords: Evaluation Methods, Motivations and Emotions in Robotics
Abstract: The Negative Attitude toward Robots Scale (NARS) is one of the most common questionnaires used in studies of human-robot interaction (HRI). It was established in 2004 and has since been used in several domains to measure attitudes, both as a main result and as a potential confounding factor. To better understand this important tool of HRI research, we reviewed the HRI literature with a specific focus on practice and reporting related to NARS. We found that the use of NARS is being increasingly reported and that there is large variation in how NARS is applied. The reporting, however, is often not done in sufficient detail, meaning that NARS results are often difficult to interpret, and comparing results between studies or performing meta-analyses is even more difficult. After providing an overview of the current state of NARS in HRI, we conclude with reflections and recommendations on the practice and reporting of NARS.
|
|
17:00-17:10, Paper TuET3.3 | |
Does It Affect You? Social and Learning Implications of Using Cognitive-Affective State Recognition for Proactive Human-Robot Tutoring |
|
Kraus, Matthias | University of Augsburg |
Betancourt, Diana Lucia | Ulm University |
Minker, Wolfgang | Ulm University |
Keywords: Multimodal Interaction and Conversational Skills, Robots in Education, Therapy and Rehabilitation, Degrees of Autonomy and Teleoperation
Abstract: Robotic technology has proven to be advantageous for student learning and social development in educational settings. However, in order to enhance their effectiveness and provide a more human-like tutoring experience, robots must be capable of adapting to the user and exhibiting proactivity. By acting proactively, these intelligent robotic tutors can anticipate potential obstacles and take preventative measures to avoid negative outcomes. However, determining when and how to behave proactively remains an open question. This study investigates how a robotic tutor can utilize a student's cognitive-affective states to trigger proactive tutoring dialogue and improve the learning experience. Specifically, we observed a concept learning task scenario where a robotic assistant proactively assisted the user when negative states, such as frustration and confusion, were detected. In an empirical study involving 40 undergraduate and doctoral students, we evaluated whether the initiation of proactive behavior after the detection of signs of confusion and frustration improves the student's concentration and trust in the robot. We also examined which level of proactive dialogue is most effective for promoting concentration and trust. The results indicate that high levels of proactive behavior can harm trust, especially when triggered during negative cognitive-affective states. However, this behavior does contribute to keeping the student focused on the task when triggered during these states. Based on our findings, we discuss potential future steps for improving the proactive assistance of robotic tutoring systems.
|
|
17:10-17:20, Paper TuET3.4 | |
The Perception of Agency: Scale Reduction and Construct Validity |
|
Trafton, Greg | Naval Research Laboratory |
Frazier, Chelsea | West Point |
Zish, Kevin | Global Systems Technology |
Bio, Branden | National Research Council |
McCurry, J. Malcolm | Peraton |
Keywords: Evaluation Methods, Social Presence for Robots and Virtual Humans, Human Factors and Ergonomics
Abstract: The perception of agency in robots and AI characters has become increasingly important as different agents increase their capabilities. Experiment 1 took an existing measure of perceived agency and created reduced versions using established Rasch item-reduction methods; eight- and five-item scales were created. Experiment 2 showed that all three scales (PA, PA8, PA5) were able to capture differences in perceived agency between a cheating robot (higher PA) and a non-cheating robot (lower PA). Experiment 3 showed that all three scales were able to show the predicted positive relationship between perceived agency and perceived moral agency. All three scales also showed high internal validity. Suggestions for the usage of the scales were also discussed.
|
|
17:20-17:30, Paper TuET3.5 | |
Assessing a Virtual Platform's Effectiveness in Exploring Mental Models of Robot Design |
|
Haring, Kerstin Sophie | University of Denver |
Pittman, Daniel | University of Denver |
Train, Nicole | Metropolitan State University |
Dossett, Benjamin | University of Denver |
Laity, Weston | University of Denver |
Toczek, Maisey | University of Denver |
Sinclair, Jordan | University of Denver |
Mamo, Robel | University of Denver |
Keywords: User-centered Design of Robots, Innovative Robot Designs, Novel Interfaces and Interaction Modalities
Abstract: This work presents our strategy for investigating the fundamental guidelines and theories related to robot mind perception and for establishing a metric for mental models, using our web-based tool, Build-A-Bot. We also discuss the effectiveness and efficiency of our platform by virtue of its inclusive design and its ability to visualize the user's intended representation of a mental model for a robot through a 3D game-like interface. We conducted an observational user test study to assess whether the website and the embedded robot-building tool are effective and efficient for users. We found that the design of the robot creation platform and its associated website are considered intuitive and effective by a majority of our survey population. The Build-A-Bot platform successfully allows users to visualize their ideal representation of their mental model through an interactive game. Based on the obtained data, we propose further steps to optimize the Build-A-Bot platform for universal usability.
|
|
17:30-17:40, Paper TuET3.6 | |
Human or AI? The Brain Knows It! A Brain-Based Turing Test to Discriminate between Human and Artificial Agents
|
Pischedda, Doris | University of Pavia |
Kaufmann, Vanessa | Universität Potsdam |
Wudarczyk, Olga | Department of Psychology, Humboldt-Universität Zu Berlin, Berlin |
Abdel Rahman, Rasha | Department of Psychology, Humboldt-Universität Zu Berlin, Berlin |
Hafner, Verena Vanessa | Humboldt-Universität Zu Berlin |
Kuhlen, Anna | Department of Psychology, Humboldt-Universität Zu Berlin, Berlin |
Haynes, John-Dylan | Charité Universitätsmedizin |
Keywords: Evaluation Methods, Linguistic Communication and Dialogue, Multimodal Interaction and Conversational Skills
Abstract: Since the introduction of the Turing Test to measure machine intelligence, more and more sophisticated artificial systems have been developed to pass the test. These systems revealed some limitations of the Turing Test and new versions of the test have been developed over time in an attempt to overcome these shortcomings. Yet, all these variants still rely on the subjective judgments of human interrogators which are subject to biases. Here, we propose the brain-based Turing Test, a novel version of the test that uses implicit information encoded in the human brain to discriminate between human and artificial agents. We highlight multiple benefits of the brain-based Turing Test, outline its possible outcomes, present an empirical test using robot and human interactive communication, and explain how research in human-robot interaction can profit from it.
|
|
17:40-17:50, Paper TuET3.7 | |
Assessing Perceived Discomfort and Proxemic Behavior towards Robots: A Comparative Study between Real and Augmented Reality Presentations |
|
Herzog, Olivia | Technical University of Munich |
Nertinger, Simone | Technical University of Munich |
Wenzel, Katharina Valeska | Technical University of Munich |
Naceri, Abdeldjallil | Technical University of Munich |
Haddadin, Sami | Technical University of Munich |
Bengler, Klaus | Technical University of Munich |
Keywords: Evaluation Methods, Motion Planning and Navigation in Human-Centered Environments, User-centered Design of Robots
Abstract: This paper assesses the usefulness of immersive technology to evaluate perceived discomfort and proxemic behavior towards a robot depending on its size. Therefore, we compared a real and an augmented reality (AR) robot presentation. In a within-subject design, a service humanoid approached participants (N = 32) in four trials in a counterbalanced order. One trial presented a real robot, another showed a same-sized AR version, and two trials showed down-scaled AR versions of the robot. The perceived discomfort and the comfort distance were measured. For the presentation mode comparison, the distance estimation error was measured additionally. The results show that the comfort distance was greater for the AR robot than for the real one. The comfort distance was also greater for the largest robot compared to the smaller sizes, while there was no difference when comparing the smaller ones. There was no difference in perceived discomfort between the presentation modes or the robot sizes. The distance estimation error was greater in AR. The study indicates that results obtained with the AR and real robot are comparable relative to each other. Therefore, utilizing AR could effectively evaluate various versions of robots in terms of the discomfort they induce — a critical prerequisite prior to the manufacturing process. Finally, AR might be more feasible for the evaluation of subjective measures.
|
|
17:50-18:00, Paper TuET3.8 | |
Evaluating the Effectiveness of Iconography for Representing Robot Mental States in the Build-A-Bot Platform |
|
Haring, Kerstin Sophie | University of Denver |
Pittman, Daniel | University of Denver |
Train, Nicole | Metropolitan State University |
Dossett, Benjamin | University of Denver |
Laity, Weston | University of Denver |
Toczek, Maisey | University of Denver |
Sinclair, Jordan | University of Denver |
Mamo, Robel | University of Denver |
Keywords: User-centered Design of Robots, Innovative Robot Designs, Novel Interfaces and Interaction Modalities
Abstract: Robot designers and Human-Robot Interaction (HRI) practitioners can face challenges when people form a mental model of a robot that is not appropriate. Although the field of robotics would benefit significantly from a broad representation of designers, there is currently no comprehensive method of including many people in the design process and no theory of what expectations a robot design feature might elicit. We seek to address these challenges through the creation of a robot design platform, an online tool similar to a character creation interface in a video game, where users create a robot design. By collecting a large number of robot designs from users, we seek to be able to identify aspects of a robot's design that influence the mental models humans ascribe to the robot. To maximize the universal usability of the platform, we conducted a three-part survey to assess which icons should be used to visually represent the mental states ascribed to the robots created by users on the platform. In our assessment, we found nine icons that met our criteria for use in the platform and others that should be further evaluated.
|
|
18:00-18:10, Paper TuET3.9 | |
Participatory Design of a Social Robot and Robot-Mediated Storytelling Activity to Raise Awareness of Gender Inequality among Children |
|
Maure, Romain | Karlsruhe Institute of Technology |
Bruno, Barbara | Karlsruhe Institute of Technology (KIT) |
Keywords: Child-Robot Interaction, User-centered Design of Robots, Storytelling in HRI
Abstract: Gender inequality is a widespread problem in our society. It can manifest itself in many ways and contexts, starting as early as primary school. While an increasing number of initiatives aim at tackling gender biases and inequalities, few of them are aimed at raising awareness of gender (in)equality among young children, i.e., at the age at which such inequalities appear in their lives. The potential shown by social robots in teaching non-curricular topics is a promising motivation for exploring their use in this context. Indeed, a social robot could offer children the possibility to discuss gender (in)equality with an intelligent entity that is neither male nor female, but rather a credible outsider with respect to humankind. In this article we present the design process of a social robot, named PixelBot, and an associated robot-mediated storytelling activity aimed at raising awareness of gender (in)equality among children. We used a participatory design approach involving 20 children aged 10-13 to acquire (i) their opinions on what a robot should look like and (ii) stories featuring robots and gender (in)equality. Finally, we conducted a study involving 8 children aged 9-10 to test the co-designed robot and robot-based storytelling activity. Results suggest that social robots are a promising avenue for promoting gender equality and respect among children.
|
|
TuET4 |
Room T4 |
Applications of Social Robots III |
Regular Session |
Chair: Liu, Baisong | Eindhoven University of Technology |
|
16:40-16:50, Paper TuET4.1 | |
Human Security Robot Interaction and Anthropomorphism: An Examination of Pepper, RAMSEE, and Knightscope Robots |
|
Ye, Xin | University of Michigan |
Robert, Lionel | University of Michigan |
Keywords: Anthropomorphic Robots and Virtual Humans, Applications of Social Robots, Social Presence for Robots and Virtual Humans
Abstract: The rapid growth in the use of security robots makes it critical to better understand their interactions with humans. The impacts of anthropomorphism and interaction scenario were examined via a 3 x 2 between-subjects experiment. Sixty participants were randomly assigned to interact with one of three security robots (Knightscope, RAMSEE, or Pepper) in either an indoor hallway or an outdoor parking lot scenario in a virtual reality cave. Significant differences emerged only between Pepper and Knightscope: Pepper was rated higher than Knightscope in anthropomorphism, ability, integrity, and desire to use, while the interaction scenario had no effect.
|
|
16:50-17:00, Paper TuET4.2 | |
Human-Robot Co-Creativity: A Scoping Review - Informing a Research Agenda for Human-Robot Co-Creativity with Older Adults |
|
Bossema, Marianne | University of Applied Sciences Amsterdam |
Ben Allouch, Somaya | Amsterdam University |
Plaat, Aske | Leiden University |
Saunders, Rob | Leiden University |
Keywords: Applications of Social Robots, Curiosity, Intentionality and Initiative in Interaction, Robots in Education, Therapy and Rehabilitation
Abstract: This review is the first step in a long-term research project exploring how social robotics and AI-generated content can contribute to the creative experiences of older adults, with a focus on collaborative drawing and painting. We systematically searched and selected literature on human-robot co-creativity and analyzed the articles to identify methods and strategies for researching co-creative robotics. We found that none of the studies involved older adults, revealing a gap in the literature for a participant group that is otherwise frequently involved in robotics research. The analyzed literature provides valuable insights into the design of human-robot co-creativity and informs a research agenda for further investigating the topic with older adults. We argue that future research should focus on ecological and developmental perspectives on creativity, on aligning system behavior with the values of older adults, and on the system structures that best support this.
|
|
17:00-17:10, Paper TuET4.3 | |
What Do People Think of Social Robots and Voice Agents As Public Speaking Coaches? |
|
Forghani, Delara | University of Waterloo |
Ghafurian, Moojan | University of Waterloo |
Rasouli, Samira | Department of Electrical and Computer Engineering, University Of |
Nehaniv, Chrystopher | University of Waterloo |
Dautenhahn, Kerstin | University of Waterloo |
Keywords: Applications of Social Robots, Non-verbal Cues and Expressiveness, Human Factors and Ergonomics
Abstract: Social robots have the potential to serve as coaches for public speaking training. To design successful social robots, it is important to understand the expectations and perceptions of prospective users of such robots. In this paper, we present thematic analyses of comments made by 168 participants in an online study where participants watched videos of agents in the role of a public speaking coach. The study had a between-participant design with three conditions: two conditions with a humanoid social robot in either (1) active listening mode, i.e., using non-verbal backchanneling, or (2) passive listening mode, and (3) a voice assistant agent. The themes identified and discussed can contribute to the development of social robots and other agents as public speaking coaches.
|
|
17:10-17:20, Paper TuET4.4 | |
3 Key Challenges in Designing Advanced Social Robotic Applications |
|
Liu, Baisong | Eindhoven University of Technology |
Tetteroo, Daniel | Eindhoven University of Technology |
Markopoulos, Panos | Eindhoven University of Technology |
Keywords: User-centered Design of Robots, Robot Companions and Social Robots, Ethical Issues in Human-robot Interaction Research
Abstract: Social robotic (SR) applications are expected to advance in the near future, as indicated by the increasing level of autonomy of SR and supported by the maturing of key technologies. However, technical, research-practice communication, and ethical challenges remain in the design of such advanced SR applications. In this paper, we start with our research cases, through which we encountered and explored the challenges facing the design practice of advanced SR applications. We then present an overview of recent design research endeavors in the HRI field, alongside our research cases, to demonstrate design research's characteristics, focus, and approaches. Finally, we discuss how the eliciting, speculative, and communicative approaches of design research can support user investigations despite technical limitations, communicate between basic and applied research and design practice, and engage in ethical explorations of advanced SR applications.
|
|
17:20-17:30, Paper TuET4.5 | |
PePUT: A Unity Toolkit for the Social Robot Pepper |
|
Ganal, Elisabeth | University of Würzburg |
Siol, Lenny | Julius Maximilians Universität Würzburg |
Lugrin, Birgit | University of Wuerzburg |
Keywords: Applications of Social Robots, Virtual and Augmented Tele-presence Environments, Storytelling in HRI
Abstract: This paper introduces the Pepper Python Unity Toolkit (PePUT), a toolkit for controlling and using the social robot Pepper via Unity and Python. As toolkit components, we present implementations for speech and tablet control, as well as animation and navigation, which can be used directly within Unity. The toolkit also provides a virtual testbed for the social robot Pepper. In addition, we highlight potential use cases for the components, in particular in a smart environment, as well as the use of PePUT as a research tool. The toolkit with the presented components and the source files is publicly available as an open-source project (https://gitlab2.informatik.uni-wuerzburg.de/mi-development/peput) under the MIT license via GitLab for usage, replication, and extension.
|
|
17:30-17:40, Paper TuET4.6 | |
Dreaming up Smart Home Futures: A Story Completion Study |
|
Reig, Samantha | Carnegie Mellon University |
Carter, Elizabeth | Carnegie Mellon University |
Kirabo, Lynn | Carnegie Mellon University |
Fong, Terrence | NASA Ames Research Center (ARC) |
Steinfeld, Aaron | Carnegie Mellon University |
Forlizzi, Jodi | Carnegie Mellon University |
Keywords: Applications of Social Robots
Abstract: Virtual assistants, vacuum robots, security systems, and other smart home technologies are rapidly advancing, evolving, and gaining popularity. This raises questions of how people envision future interactions with smart home systems and how they imagine the future roles of such technologies in society. We deployed an online study that collected fictional short stories from 60 participants about smart home interactions. We identified themes regarding the roles of smart home technologies, social interactions with AI, and concerns about data privacy in the context of the home. We describe our method, discuss insights from the stories that explicitly reflect possible futures and implicitly reflect the present, and make design recommendations based on our findings.
|
|
17:40-17:50, Paper TuET4.7 | |
We All Make Mistakes: Terminal, Non-Critical, Recoverable, and Favorable Interaction Failures between People and a Social Robot |
|
Kamino, Waki | Indiana University Bloomington |
Randall, Natasha | Indiana University |
Saga, Tanya | University of Tsukuba |
Hsu, Long-Jing | Indiana University Bloomington |
Tsui, Katherine | Toyota Research Institute |
Sabanovic, Selma | Indiana University Bloomington |
Nagata, Shinichi | University of Tsukuba |
Keywords: Applications of Social Robots, Multimodal Interaction and Conversational Skills, Detecting and Understanding Human Activity
Abstract: In this paper, we present an in-depth illustration of interaction failures relatively unexplored in the field of human-robot interaction (HRI). Our qualitative analysis of interactions between a social robot and 12 participants sheds light on different types of erroneous interactions initiated by human and robot actors and their outcomes. Our findings show that only a small portion of observed failures had fatal impacts on interactions. In most cases, they had little negative effect on interactions or even led to favorable outcomes, for example eliciting laughter and giggling from participants. Overall, our study calls for further examination of the roles of failures and the contextual factors that influence the consequences of failures in HRI.
|
|
17:50-18:00, Paper TuET4.8 | |
Realizing a Life Well Lived: The Design of a Home Robot to Assist Older Adults with Self-Reflection and Intentional Living |
|
Randall, Natasha | Indiana University |
Saga, Tanya | University of Tsukuba |
Kamino, Waki | Indiana University Bloomington |
Tsui, Kate | Toyota Research Institute |
Sabanovic, Selma | Indiana University Bloomington |
Nagata, Shinichi | University of Tsukuba |
Keywords: Applications of Social Robots, User-centered Design of Robots, Assistive Robotics
Abstract: Previous work suggests that older adults' meaning and happiness may be increased simply by having them engage in self-reflection exercises. Therefore, we design four modules to promote self-reflection, delivered by the QT robot. These modules were created with reference to three different time orientations --- past, present, and future --- and by incorporating aspects of life satisfaction and the PERMA model of well-being. Results show that about half of our older adult participants experienced subjective changes in meaning, happiness, and desire to make positive life changes during each module, with 11 of 15 participants experiencing changes to one of these measures after engaging in all four interactions. We make several suggestions for updating these modules for autonomous and longer-term deployment using the robot.
|
|
18:00-18:10, Paper TuET4.9 | |
High-Speed, High-Quality Robotic Portrait Drawing System |
|
Nasrat, Shady | Pusan National University, Busan, South Korea |
Kang, Taewoong | Pusan National University |
Jinwoo, Park | Pusan National University, Busan, South Korea |
Kim, Joonyoung | Pusan National University |
Yi, Seung-Joon | Pusan National University |
Keywords: Creating Human-Robot Relationships, Curiosity, Intentionality and Initiative in Interaction, Art pieces supported by robotics
Abstract: Although robotic portrait drawing has been a recurring topic in robotics, most robotic portrait drawing systems have focused on either speed or quality of the drawing due to various technical difficulties in pursuing both goals. In this work, we propose a novel robotic portrait drawing system that uses advanced machine-learning techniques and a variable line width Chinese calligraphy pen to draw a high-quality portrait in a short time. Our approach first detects the human keypoints from the incoming video stream and extracts the dominant human face from the video, and then uses a CycleGAN based algorithm to convert the image style into a black-and-white line drawing. After a number of optimization steps, we use a 6-DOF robotic arm and a calligraphy pen to quickly draw the portrait. The system has been openly demonstrated to the general public at the RoboWorld 2022 exhibition, where the system has drawn portraits of more than 40 visitors with a satisfaction rate of 95%.
|
|
TuET5 |
Room T5 |
Motion Planning and Navigation in Human-Centered Environments III |
Regular Session |
Chair: Nam, Changjoo | Sogang University |
|
16:40-16:50, Paper TuET5.1 | |
Optimal Robot Path Planning in a Collaborative Human-Robot Team with Intermittent Human Availability |
|
Dahiya, Abhinav | University of Waterloo |
Smith, Stephen L. | University of Waterloo |
Keywords: Cooperation and Collaboration in Human-Robot Teams, Motion Planning and Navigation in Human-Centered Environments, Degrees of Autonomy and Teleoperation
Abstract: This paper presents a solution for the problem of optimal planning for a robot in a collaborative human-robot team, where the human supervisor is intermittently available to assist the robot in completing tasks more quickly. Specifically, we address the challenge of computing the fastest path between two configurations in an environment with time constraints on how long the robot can wait for assistance. To solve this problem, we propose a novel approach that utilizes the concepts of budget and critical departure times, which enables us to obtain an optimal solution while scaling to larger problem instances than existing methods. We demonstrate the effectiveness of our approach by comparing it with several baseline algorithms on a city road network and analyzing the quality of the obtained solutions. Our work contributes to the field of robot planning by addressing the critical issue of incorporating human assistance and environmental restrictions, which has significant implications for real-world applications.
|
|
16:50-17:00, Paper TuET5.2 | |
Human-Multi-Robot Task Allocation in Agricultural Settings: A Mixed Integer Linear Programming Approach |
|
Lippi, Martina | University of Roma Tre |
Gallou, Jorand | Roma Tre University |
Palmieri, Jozsef | University of Cassino and Southern Lazio |
Gasparri, Andrea | Università Degli Studi Roma Tre |
Marino, Alessandro | University of Cassino and Southern Lazio |
Keywords: Cooperation and Collaboration in Human-Robot Teams, Assistive Robotics
Abstract: The use of heterogeneous human-multi-robot teams enables the combination of the complementary skills of these two different types of agents. For an effective collaboration, it is necessary to define a strategy for allocating and scheduling tasks among them. In this work, we distinguish between working robots and service robots: working robots and human operators can perform similar tasks in the environment, and both are assisted by service robots. We propose a Mixed-Integer Linear Programming approach that aims to minimize the waiting times of the working agents, the energy consumption of the service robots, and the makespan, while ensuring that the velocity constraints of the robots are met and the task ordering is correct. Furthermore, we propose an online updating strategy that tackles changes in the parameters of working agents and adapts the plan accordingly based on a heuristic algorithm. To validate our framework, we analyze a precision agriculture harvesting application with two human operators, two working robots, and two service robots.
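The paper's MILP formulation is not given in the program, so as a rough intuition for the allocation objective, here is a brute-force stand-in in plain Python. This is illustrative only: the function name `best_assignment` and the makespan-only objective are assumptions, not the authors' model, which additionally weights waiting times, energy consumption, and ordering constraints.

```python
from itertools import product

def best_assignment(task_times, agents):
    """Exhaustive stand-in for the MILP: assign each task to exactly one
    agent, minimising the makespan (finish time of the busiest agent)."""
    tasks = list(task_times)
    best, best_makespan = None, float("inf")
    # Enumerate every task-to-agent mapping (only viable for tiny instances;
    # the MILP solver handles this combinatorial search at scale).
    for choice in product(range(agents), repeat=len(tasks)):
        load = [0.0] * agents
        for task, agent in zip(tasks, choice):
            load[agent] += task_times[task]
        if max(load) < best_makespan:
            best_makespan, best = max(load), dict(zip(tasks, choice))
    return best, best_makespan
```

For example, with tasks of durations 3, 2, and 1 split over two agents, the optimum isolates the longest task on one agent for a makespan of 3.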
|
|
17:00-17:10, Paper TuET5.3 | |
Multi-Floor Danger and Responsiveness Assessment with Autonomous Legged Robots in Catastrophic Scenarios |
|
Betta, Zoe | University of Genova |
Paneri, Serena | University of Genova |
Gaudino, Alessandro | University of Perugia |
Benini, Alessandro | ANPAS |
Recchiuto, Carmine Tommaso | University of Genova |
Sgorbissa, Antonio | University of Genova |
Keywords: Motion Planning and Navigation in Human-Centered Environments, Detecting and Understanding Human Activity
Abstract: In this work, we propose a strategy to implement the first two steps of the DRABC paradigm (Danger, Response, Airway, Breathing, Circulation) used by rescuers in Search and Rescue (SAR) with a mobile quadruped robot. The robot is programmed to autonomously explore and create a map of the environment, with the main objective of identifying areas of danger and reporting them to rescuers (first step of DRABC). While completing this first goal, the robot must also identify people still inside the building, mark their positions, and evaluate each person's health state, in particular their response (second step of DRABC). Specifically, we propose new strategies for SAR that account for autonomous behaviour being particularly relevant before the human rescuers arrive: the adopted policy should therefore privilege covering a broader area in the available time rather than exploring a smaller area in depth. The strategies have been tested with the Spot robot from Boston Dynamics for both exploration and health assessment. The software developed and the tests to validate it are thoroughly described and explained.
|
|
17:10-17:20, Paper TuET5.4 | |
ISS/JEM Crew-Task Analysis to Support Astronauts with Intra-Vehicular Robotics |
|
Yamaguchi, Seiko Piotr | Japan Aerospace Exploration Agency (JAXA) |
Itakura, Riichi | Japan Aerospace Exploration Agency (JAXA) |
Inagaki, Tetsuya | Japan Aerospace Exploration Agency (JAXA) |
Keywords: Detecting and Understanding Human Activity, Cooperation and Collaboration in Human-Robot Teams, User-centered Design of Robots
Abstract: This paper proposes employing robotics and automation technology in manned spaceflight to manage repetitive activities for the crew. Current operations on the International Space Station (ISS) were analyzed to better determine target tasks for automation. Crew tasks on the ISS are precisely planned by the ground control and planning teams and then monitored and documented. In this study, astronauts' tasks related to JAXA's Japanese Experiment Module were analyzed based on their task names and categorized to determine consistently occurring, repetitive types of work. Based on the categorized tasks' durations, three categories were identified as candidates for future automation: sample and equipment retrieval (swapping), logistics (cargo handling), and monitoring. This paper discusses automation methods for each category based on JAXA's ground and in-orbit robotic research and development.
|
|
17:20-17:30, Paper TuET5.5 | |
Context Based Echo State Networks for Robot Movement Primitives |
|
Amirshirzad, Negin | Ozyegin University |
Asada, Minoru | Open and Transdisciplinary Research Initiatives, Osaka University |
Oztop, Erhan | Osaka University / Ozyegin University |
Keywords: Affective Computing, Programming by Demonstration, Multi-modal Situation Awareness and Spatial Cognition
Abstract: Reservoir Computing, in particular Echo State Networks (ESNs), offers a lightweight solution for time series representation and prediction. An ESN is based on a discrete-time random dynamical system that is used to output a desired time series through the application of a learned linear read-out weight vector. The simplicity of this learning suggests that an ESN can serve as a lightweight alternative for movement primitive representation in robotics. In this study, we explore this possibility by developing Context-based Echo State Networks (CESNs) and demonstrate their applicability to robot movement generation. CESNs are designed for generating joint or Cartesian trajectories based on a user-definable context input. The context modulates the dynamics represented by the underlying ESN, and the linear read-out weights can then pick up the context-dependent dynamics to generate different movement patterns for different contexts. To achieve robust movement execution and generalization over unseen contexts, we introduce a novel data augmentation mechanism for ESN training. We show the effectiveness of our approach in a learning-from-demonstration setting: concretely, we teach the robot reaching and obstacle avoidance tasks in simulation and in the real world. The results show that the developed system, CESN, provides a lightweight movement primitive representation that facilitates robust task execution and generalizes to unseen contexts, including extrapolated ones.
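The ESN recipe named in the abstract — a fixed random reservoir, a context signal appended to the input, and a trained linear read-out — can be sketched as follows. This is a generic textbook ESN illustration under the assumption that the context is simply concatenated with the input; it is not the authors' CESN code, and all function names are invented.

```python
import numpy as np

def make_reservoir(n_res, n_in, spectral_radius=0.9, seed=0):
    """Fixed random reservoir, rescaled so the echo state property holds."""
    rng = np.random.default_rng(seed)
    W = rng.uniform(-0.5, 0.5, (n_res, n_res))
    W *= spectral_radius / max(abs(np.linalg.eigvals(W)))
    W_in = rng.uniform(-1.0, 1.0, (n_res, n_in))
    return W, W_in

def run_reservoir(W, W_in, inputs, context):
    """Drive the reservoir with [input; context] at each step; the context
    modulates the dynamics, as described for CESNs. Returns all states."""
    x = np.zeros(W.shape[0])
    states = []
    for u in inputs:
        u_ctx = np.concatenate([u, context])
        x = np.tanh(W @ x + W_in @ u_ctx)
        states.append(x.copy())
    return np.array(states)

def train_readout(states, targets, ridge=1e-6):
    """Linear read-out via ridge regression -- the only trained part."""
    S = states
    return np.linalg.solve(S.T @ S + ridge * np.eye(S.shape[1]), S.T @ targets)
```

In use, one would train separate read-outs (or one read-out over context-modulated states) for each demonstrated movement, discarding an initial washout of states before regression.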
|
|
17:30-17:40, Paper TuET5.6 | |
Low-Cost Simultaneous Localization and Mapping Using Occupancy Grid, Place Recognition and Semantic Priors |
|
Kenye, Lhilo | Indian Institute of Information Technology Allahabad, India; Nav |
Kala, Rahul | Indian Institute of Information Technology, Allahabad, India |
Keywords: Motion Planning and Navigation in Human-Centered Environments
Abstract: Visual Simultaneous Localization and Mapping (VSLAM) tends to face challenges with low-framerate and low-resolution data, as well as in congested environments with moving objects and occlusions, leading to increased drift. To enhance the robustness of VSLAM, semantics are increasingly being employed to address specific limitations. This work adopts a comprehensive approach, incorporating sparse features from semantics for indoor environments. During training, a prior semantic map is built, while some known places relative to the semantic map are recorded and stored in a database. An active place recognition module matches detected semantics with those in the database. Additionally, a hybrid optimization module estimates the robot's pose by minimizing semantic reprojection errors, ensuring pose proximity to detected places, preserving semantic size consistency, incorporating live feature maps, and ensuring that the pose lies within navigable space using an occupancy map made a priori. Experimental comparison demonstrates superior performance over conventional VSLAM, yielding lower error values.
|
|
17:40-17:50, Paper TuET5.7 | |
Automating Real-World Benchmarking of Navigation Approaches in Crowded Environments Using Virtual Laser Scans (withdrawn from program) |
|
Kästner, Linh | T-Mobile, TU Berlin |
Kmiecik, Jacek | Technical University Berlin |
Khorsandi, Niloufar | Technical University Berlin |
Lambrecht, Jens | Technische Universität Berlin |
|
17:50-18:00, Paper TuET5.8 | |
Costmap-Based Local Motion Planning Using Deep Reinforcement Learning |
|
Garrote, Luís Carlos | Institute of Systems and Robotics, University of Coimbra |
Perdiz, João | University of Coimbra |
Nunes, Urbano J. | Instituto De Sistemas E Robotica |
Keywords: Motion Planning and Navigation in Human-Centered Environments, Machine Learning and Adaptation
Abstract: Local motion planning is an essential component of autonomous robot navigation systems as it involves generating collision-free trajectories for the robot in real-time, given its current position, the map of the environment and a goal. Considering an a priori goal path, computed by a global planner or as the output of a mission planning approach, this paper proposes a Two-Stream Deep Reinforcement Learning strategy for local motion planning that takes as inputs a local costmap representing the robot's surrounding obstacles and a local costmap representing the nearest goal path. The proposed approach uses a Double Dueling Deep Q-Network and a new reward model to avoid obstacles while trying to maintain the lateral error between the robot and the goal path close to zero. Our approach enables the robot to navigate through complex environments, including cluttered spaces and narrow passages, while avoiding collisions with obstacles. Evaluation of the proposed approach was carried out in an in-house simulation environment, in five scenarios. Double and Double Dueling architectures were evaluated; the presented results show that the proposed strategy can correctly follow the desired goal path and, when needed, avoid obstacles ahead and recover back to following the goal path.
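The two components named in the abstract above — the dueling value/advantage decomposition and the Double (DQN) target — follow standard formulas, sketched here in numpy. These are generic textbook definitions, not the authors' costmap network or reward model; all names are invented for illustration.

```python
import numpy as np

def dueling_q(features, w_v, b_v, w_a, b_a):
    """Dueling head: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a).
    Subtracting the mean advantage makes the V/A split identifiable."""
    v = features @ w_v + b_v   # scalar state value
    a = features @ w_a + b_a   # per-action advantages
    return v + a - a.mean()

def double_dqn_target(r, gamma, q_online_next, q_target_next, done):
    """Double DQN: the online network selects the next action, the target
    network evaluates it, reducing overestimation bias."""
    if done:
        return r
    best = int(np.argmax(q_online_next))
    return r + gamma * q_target_next[best]
```

In the paper's setting, `features` would come from a convolutional encoder over the two local costmaps (obstacles and goal path), and the reward would penalize lateral error to the goal path.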
|
|
18:00-18:10, Paper TuET5.9 | |
Situating Robots in the Organizational Dynamics of the Gas Energy Industry: A Collaborative Design Study |
|
Lee, Hee Rin | Michigan State University |
Tan, Xiaobo | Michigan State University |
Zhang, Wenlong | Arizona State University |
Deng, Yiming | Michigan State University |
Liu, Yongming | Arizona State University |
Keywords: User-centered Design of Robots, Ethical Issues in Human-robot Interaction Research, Philosophical Issues in Human-Robot Coexistence
Abstract: Human-robot collaboration has been an important topic in the HRI communities. In this paper, we explore how robots can contribute to gas pipeline inspection work, and how they can support one of the most important elements of energy transportation infrastructure. To situate robots in the gas energy industry, we conducted a collaborative design study, where our co-designers were diverse stakeholders: from pipeline researchers to utility workers. The contribution of this paper is threefold: First, we explore gas pipeline work settings as a new context where robots can provide significant benefit, considering that public infrastructure is vast but understudied. Second, we collaboratively envisioned the design and use cases together with workers who are not often invited to human-robot collaboration research. Lastly, we address the importance of viewing humans in human-robot collaboration as ``workers'' whose roles and expertise are shaped within organizational dynamics. This study aims to shed light on the importance of a more nuanced understanding of work contexts and the positionality of robots within organizations.
|
|
TuET6 |
Room T6 |
Robot Perception for Interaction and Communication |
Regular Session |
Chair: Shakeel, Muhammad | Honda Research Institute Japan Co., Ltd |
|
16:40-16:50, Paper TuET6.1 | |
Controllable Motion Synthesis and Reconstruction with Autoregressive Diffusion Models |
|
Yin, Wenjie | KTH |
Tu, Ruibo | KTH Royal Institute of Technology |
Yin, Hang | KTH |
Kragic, Danica | KTH |
Kjellstrom, Hedvig | KTH |
Björkman, Mårten | KTH |
Keywords: Detecting and Understanding Human Activity, Machine Learning and Adaptation, Programming by Demonstration
Abstract: Data-driven and controllable human motion synthesis and prediction are active research areas with various applications in interactive media and social robotics. Challenges remain in these fields for generating diverse motions given past observations and dealing with imperfect poses. This paper introduces MoDiff, an autoregressive probabilistic diffusion model over motion sequences conditioned on control contexts of other modalities. Our model integrates a cross-modal Transformer encoder and a Transformer-based decoder, which are found effective in capturing temporal correlations in motion and control modalities. We also introduce a new data dropout method based on the diffusion forward process to provide richer data representations and robust generation. We demonstrate the superior performance of MoDiff in controllable motion synthesis for locomotion with respect to two baselines and show the benefits of diffusion data dropout for robust synthesis and reconstruction of high-fidelity motion close to recorded data.
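The "data dropout based on the diffusion forward process" mentioned above is not specified in the abstract; one plausible reading is that training sequences are partially noised via the standard forward process q(x_t | x_0) and fed back as corrupted inputs, so the model sees imperfect poses during training. A minimal numpy sketch under that assumption (function names invented, schedule arbitrary):

```python
import numpy as np

def forward_diffuse(x0, t, betas, rng):
    """Sample x_t ~ q(x_t | x_0):
    x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps."""
    alpha_bar = np.cumprod(1.0 - betas)[t]
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps

def diffusion_dropout(batch, betas, max_t, rng):
    """Augment a batch of motion sequences by noising each one to a random
    diffusion step, yielding 'imperfect pose' training inputs."""
    return np.stack([forward_diffuse(x, int(rng.integers(0, max_t)), betas, rng)
                     for x in batch])
```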
|
|
16:50-17:00, Paper TuET6.2 | |
Human-Centered Local Planning for Mobile Robots with 2D Laser Via Pedestrian Behavior Prediction |
|
Hu, Wenfei | Peking University |
Fang, Shuai | Peking University |
Wang, Yi | Peking University |
Luo, Dingsheng | Peking University |
Keywords: Motion Planning and Navigation in Human-Centered Environments, Social Touch in Human–Robot Interaction
Abstract: Mobile robots often encounter moving pedestrians in their work environment, which poses significant challenges for flexible obstacle avoidance. Traditional local planning methods based on velocity optimization have limitations in dynamic and complex scenarios. Recent advancements utilizing reinforcement learning have exhibited potential for adapting to changing environments. However, most existing policy-based local planning methods for mobile robots with 2D laser underestimate the importance of modeling the behavior of pedestrians, including their historical and future trajectories. It is non-trivial to acquire a local planning policy that respects common sense and the conventions of social interaction. In this paper, we propose a human-centered local planning reinforcement learning framework for mobile robots with 2D laser. The proposed framework comprises a behavior learner, a local planner, and a policy learner. Both historical and future trajectories of pedestrians obtained from the behavior learner are incorporated into the local planner. Leveraging environmental and social information, the policy learner enables the local planner to acquire a human-centered planning policy. Experimental results demonstrate that our method achieves more flexible and human-centered local planning for mobile robots in dynamic environments with pedestrians.
|
|
17:00-17:10, Paper TuET6.3 | |
Action-Conditioned Deep Visual Prediction with RoAM, a New Indoor Human Motion Dataset for Autonomous Robots |
|
Sarkar, Meenakshi | Indian Institute of Science |
Honkote, Vinayak | Intel Corporation |
Das, Dibyendu | Intel |
Ghose, Debasish | Indian Institute of Science |
Keywords: Motion Planning and Navigation in Human-Centered Environments, Detecting and Understanding Human Activity, HRI and Collaboration in Manufacturing Environments
Abstract: With the increasing adoption of robots across industries, it is crucial to develop advanced algorithms that enable robots to anticipate, comprehend, and plan their actions effectively in collaboration with humans. We introduce the Robot Autonomous Motion (RoAM) video dataset, collected with a custom-made TurtleBot3 Burger robot in a variety of indoor environments, recording various human motions from the robot's ego-vision. The dataset also includes synchronized records of the LiDAR scan and all control actions taken by the robot as it navigates around static and moving human agents. This unique dataset provides an opportunity to develop and benchmark new visual prediction frameworks that can predict future image frames based on the action taken by the recording agent, in partially observable scenarios or cases where the imaging sensor is mounted on a moving platform. We benchmark the dataset on our novel deep visual prediction framework, ACPNet, in which the approximated future image frames are also conditioned on the action taken by the robot, and demonstrate its potential for incorporating robot dynamics into the video prediction paradigm for mobile robotics and autonomous navigation research.
|
|
17:10-17:20, Paper TuET6.4 | |
Feel the Point Clouds: Traversability Prediction and Tactile Terrain Detection Information for an Improved Human-Robot Interaction |
|
Edlinger, Raimund | University of Applied Sciences Upper Austria |
Nuechter, Andreas | University of Würzburg |
Keywords: Multi-modal Situation Awareness and Spatial Cognition, Innovative Robot Designs, Assistive Robotics
Abstract: The field of human-robot interaction has been advancing rapidly in recent years, as robots are increasingly being integrated into various aspects of human life. However, for robots to collaborate effectively with humans, it is crucial that they have a deep understanding of the environment in which they operate. In particular, the ability to predict traversability and detect tactile information is crucial for enhancing the safety and efficiency of human-robot interactions. To address this challenge, this paper proposes a method called "Feel the Point Clouds" that uses point clouds to predict traversability and detect tactile terrain information for a tracked rescue robot. This information can be used to adjust the robot's behavior and movements in real time, allowing it to interact with the environment in a more intuitive and safe manner. The proposed method is evaluated in various scenarios, and the experimental results demonstrate its effectiveness in improving human-robot interaction and in providing visualization for a more accurate and intuitive understanding of the environment.
|
|
17:20-17:30, Paper TuET6.5 | |
S2Net: Accurate Panorama Depth Estimation on Spherical Surface |
|
Li, Meng | Alibaba Group |
Wang, Senbo | Alibaba Group |
Yuan, Weihao | Hong Kong University of Science and Technology |
Shen, Weichao | Alibaba Group |
Sheng, Zhe | Alibaba Group |
Dong, Zilong | Company |
Keywords: Deep Learning for Visual Perception, Omnidirectional Vision, Deep Learning Methods
Abstract: Monocular depth estimation is an ambiguous problem; thus global structural cues play an important role in current data-driven single-view depth estimation methods. Panorama images capture the complete spatial information of their surroundings using the equirectangular projection, which introduces large distortion. This requires the depth estimation method to be able to handle the distortion and extract global context information from the image. In this paper, we propose an end-to-end deep network for monocular panorama depth estimation on a unit spherical surface. Specifically, we project the feature maps extracted from equirectangular images onto a unit spherical surface sampled by uniformly distributed grids, where the decoder network can aggregate information from the distortion-reduced feature maps. Meanwhile, we propose a global cross-attention-based fusion module to fuse the feature maps from the skip connections and enhance the ability to obtain global context. Experiments are conducted on five panorama depth estimation datasets, and the results demonstrate that the proposed method substantially outperforms previous state-of-the-art methods.
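The projection step described above — sampling equirectangular feature maps at uniformly distributed points on the unit sphere — reduces to mapping unit direction vectors to image coordinates. A minimal numpy sketch of that mapping follows; the axis conventions and the function name are assumptions, since the paper's exact parameterization is not given in the abstract.

```python
import numpy as np

def sphere_to_equirect(dirs, H, W):
    """Map unit direction vectors (N, 3) to (row, col) coordinates on an
    H x W equirectangular image: longitude -> column, latitude -> row.
    Feature maps can then be bilinearly sampled at these coordinates."""
    x, y, z = dirs[:, 0], dirs[:, 1], dirs[:, 2]
    lon = np.arctan2(x, z)                # [-pi, pi], around the vertical axis
    lat = np.arcsin(np.clip(y, -1, 1))    # [-pi/2, pi/2], toward the poles
    col = (lon / (2 * np.pi) + 0.5) * (W - 1)
    row = (0.5 - lat / np.pi) * (H - 1)
    return row, col
```

Sampling at sphere-uniform points (rather than pixel-uniform ones) is what reduces the polar oversampling distortion the abstract refers to.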
|
|
17:30-17:40, Paper TuET6.6 | |
Signs of Language: Embodied Sign Language Fingerspelling Acquisition from Demonstrations for Human-Robot Interaction |
|
Tavella, Federico | The University of Manchester |
Galata, Aphrodite | University of Manchester |
Cangelosi, Angelo | University of Manchester |
Keywords: Social Learning and Skill Acquisition Via Teaching and Imitation, Anthropomorphic Robots and Virtual Humans, Assistive Robotics
Abstract: Learning fine-grained movements is a challenging topic in robotics, particularly in the context of robotic hands. One specific instance of this challenge is the acquisition of fingerspelling sign language in robots. In this paper, we propose an approach for learning dexterous motor imitation from video examples without additional information. To achieve this, we first build a URDF model of a robotic hand with a single actuator for each joint. We then leverage pre-trained deep vision models to extract the 3D pose of the hand from RGB videos. Next, using state-of-the-art reinforcement learning algorithms for motion imitation (namely, proximal policy optimization and soft actor-critic), we train a policy to reproduce the movement extracted from the demonstrations. We identify the optimal set of hyperparameters for imitation based on a reference motion. Finally, we demonstrate the generalizability of our approach by testing it on six different tasks, corresponding to fingerspelled letters. Our results show that our approach is able to successfully imitate these fine-grained movements without additional information, highlighting its potential for real-world applications in robotics.
|
|
17:40-17:50, Paper TuET6.7 | |
Learning Clear Class Separation for Open-Set 3D Detector in Autonomous Vehicle Via Selective Forgetting |
|
Hu, Wenfei | Peking University |
Lin, Weikai | Peking University |
Fang, Hongyu | Peking University, Beijing, China |
Wang, Yi | Peking University |
Luo, Dingsheng | Peking University |
Keywords: Machine Learning and Adaptation, Detecting and Understanding Human Activity, Evaluation Methods
Abstract: A trustworthy 3D detector is essential in the perception system of autonomous vehicles, ensuring accurate detection of their surroundings. However, autonomous vehicles have to operate in ever-changing real-world driving scenes, where unknown objects that do not belong to the training set are commonly encountered. Confusion between known and unknown objects can have severe and dangerous consequences for road safety. To address this problem, we improve the reliability of autonomous driving systems by formulating the open-set 3D object detection task. An Open-set 3D Detector (Open3Det) is proposed to reject unknown instances while maintaining performance on known categories. Distinct from 2D objects, clear spatial separation exists between 3D instances. Motivated by this, we propose selective forgetting, a novel method capable of filtering out misleading predictions. Given a closed-set teacher model, knowledge distillation is used to build an open-set student model. The student model preserves its predictions for known objects, whereas predictions for backgrounds and unknown instances are discarded to minimize misleading results. Extensive experiments and visualizations reveal the efficacy of the proposed method.
|
|
17:50-18:00, Paper TuET6.8 | |
A Behavioural Transformer for Effective Collaboration between a Robot and a Non-Stationary Human |
|
Mon-Williams, Ruaridh | The University of Edinburgh |
Stouraitis, Theodoros | Honda Research Institute, University of Edinburgh and RoboPhren |
Vijayakumar, Sethu | University of Edinburgh |
Keywords: Machine Learning and Adaptation, Cooperation and Collaboration in Human-Robot Teams, Monitoring of Behaviour and Internal States of Humans
Abstract: A key challenge in human-robot collaboration is the non-stationarity created by humans due to changes in their behaviour, which alters environmental transitions and hinders human-robot collaboration. We propose a principled meta-learning framework to explore how robots could better predict human behaviour and thereby deal with issues of non-stationarity. On the basis of this framework, we developed Behaviour-Transform (BeTrans), a conditional transformer that enables a robot agent to adapt quickly to new human agents with non-stationary behaviours, owing to transformers' notable performance on sequential data. We trained BeTrans on simulated human agents with different systematic biases in collaborative settings. Using an original customisable environment, we show that BeTrans collaborates effectively with simulated human agents and adapts to non-stationary simulated human agents faster than SOTA techniques.
|
|
18:00-18:10, Paper TuET6.9 | |
Recognizing Football Game Events: Handball Based on Computer Vision |
|
Hassan, Mohammad Mehedi | Tokushima University |
Karungaru, Stephen | University of Tokushima |
Terada, Kenji | Tokushima University |
Keywords: Cooperation and Collaboration in Human-Robot Teams, Machine Learning and Adaptation, Detecting and Understanding Human Activity
Abstract: Football, or soccer, is one of the most popular games in the world. The "handball" event is one of the most controversial and important decisions a single referee must make. This paper proposes a method to detect a "handball" event in a football game by recognizing the hand, the ball, and their interaction from a single camera. We trained a model to identify hands and balls using Detectron2, then used the vertices of the two detected objects to determine whether they overlap, and thereby whether a handball event has occurred. The training and test detection results were satisfactory, with 96% accuracy for the hand and 100% for the ball, respectively.
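The overlap test described in the abstract — checking whether the detected hand and ball boxes intersect — can be sketched in plain Python as follows. The (x1, y1, x2, y2) box format is an assumption, and this does not reproduce the authors' Detectron2 detection pipeline; it only illustrates the geometric check.

```python
def boxes_overlap(hand, ball):
    """Axis-aligned overlap test between two (x1, y1, x2, y2) boxes, as a
    simple proxy for a hand-ball contact ('handball') event."""
    ax1, ay1, ax2, ay2 = hand
    bx1, by1, bx2, by2 = ball
    iw = min(ax2, bx2) - max(ax1, bx1)   # width of the intersection
    ih = min(ay2, by2) - max(ay1, by1)   # height of the intersection
    return iw > 0 and ih > 0

def iou(a, b):
    """Intersection over union of two (x1, y1, x2, y2) boxes; a threshold
    on IoU gives a tunable, less noisy version of the overlap test."""
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0
```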
|
| |