Last updated on August 29, 2025. This conference program is tentative and subject to change.
Technical Program for Thursday August 28, 2025
ThIT1 Regular Session, Auditorium 1
Cooperation and Collaboration in Human-Robot Teams V

Chair: Shiomi, Masahiro | ATR
Co-Chair: Bangalore Raghu, Srikrishna | University of Colorado Boulder
09:30-09:42, Paper ThIT1.1
Assistant Robots with an Agenda Foster Uncooperative Behaviors
Brito, Joana (INESC-ID, Instituto Superior Técnico, Universidade De Lisboa), de Brito Duarte, Regina (INESC-ID, Instituto Superior Técnico, Universidade De Lisboa), Correia de Fonseca, Henrique (INESC-ID, Instituto Superior Técnico, Universidade De Lisboa), Campos, Joana (Disney Research), Correia, Filipa (INESC-ID and Instituto Superior Técnico, Technical University Of), Paiva, Ana (INESC-ID and Instituto Superior Técnico, Technical University Of)
Keywords: Motivations and Emotions in Robotics, Applications of Social Robots, Affective Computing
Abstract: Although research often explores how human-robot interaction influences cooperation in social dilemma scenarios such as the Public Goods Game, most research focuses on robots taking active roles in the game, i.e., opponents or team-players. Considering the potential of assistant robots to influence human decision-making, this study explores how an assistant robot with prosocial or individualistic goals influences cooperation in a Public Goods Game. In a between-subjects study (N=60), participants interacted with the robot in one of three conditions: Prosocial, where the robot supported cooperative behavior; Individualistic, where it encouraged self-serving actions; and Control, where the robot provided feedback on the game state without expressing individual goals. Results revealed that participants in the Prosocial or Individualistic conditions contributed less to the public good compared to those in the Control condition. The Prosocial and Individualistic robots were perceived as warmer but as evoking more discomfort than the Control robot. Notably, participants who played with the Prosocial robot reported increased trust, not only in the robot but also in their fellow players. These findings suggest that alignment between an assistant robot’s goals and expected social norms plays a key role in trust perception and also shapes group dynamics. We discuss important considerations for designing assistant robots that provide moral recommendations.

09:42-09:54, Paper ThIT1.2
Model Cards for AI Teammates: Comparing Human-AI Team Familiarization Methods for High-Stakes Environments
Bowers, Ryan (Georgia Institute of Technology), Agbeyibor, Richard (Georgia Institute of Technology), Kolb, Jack (Georgia Institute of Technology), Feigh, Karen (Georgia Institute of Technology)
Keywords: Cooperation and Collaboration in Human-Robot Teams, User-centered Design of Robots, Human Factors and Ergonomics
Abstract: We compare three methods of familiarizing a human with an artificial intelligence (AI) teammate ("agent") prior to operation in a collaborative, fast-paced intelligence, surveillance, and reconnaissance (ISR) environment. In a between-subjects user study (n=60), participants either read documentation about the agent, trained alongside the agent prior to the mission, or were given no familiarization. Results showed that the most valuable information about the agent included details of its decision-making algorithms and its relative strengths and weaknesses compared to the human. This information allowed the familiarization groups to form sophisticated team strategies more quickly than the control group. Documentation-based familiarization led to the fastest adoption of these strategies, but also biased participants towards risk-averse behavior that prevented high scores. Participants familiarized through direct interaction were able to infer much of the same information through observation, and were more willing to take risks and experiment with different control modes, but reported weaker understanding of the agent's internal processes. Individual participants differed significantly in risk tolerance and methods of AI interaction, which should be considered when designing human-AI control interfaces. Based on our findings, we recommend a human-AI team familiarization method that combines AI documentation, structured in-situ training, and exploratory interaction.

09:54-10:06, Paper ThIT1.3
Shared Learning Effects in Evaluations of Machine Teammates
Erven, Loes Wilhelmina Anita Thecla (Radboud University), Thill, Serge (Radboud University), Solaki, Anthia (TNO)
Keywords: Cooperation and Collaboration in Human-Robot Teams, Social Intelligence for Robots, Creating Human-Robot Relationships
Abstract: In teams of humans and several robots, communication within the robot sub-group may occur without the human being necessarily aware of this, e.g., to update each other about task-relevant aspects of the environment. Nonetheless, since this affects their subsequent actions and visible behaviour, it has consequences for teamwork and the humans’ perception of the team, which are not always well-understood. Intra-robot communication and coordination can be beneficial, but may also be experienced negatively due to unexpected group dynamics. In this study, we designed three robot teams, with varying levels of shared learning: one where robots share information about the environment and other robots acknowledge receiving this information, one where this information is shared but receipt is not acknowledged, and one where there is no communication. In each case, robots assisted a human participant in repairing pipes in a simulated environment. We measured perceived entitativity, trust, and attribution of mind to the robots. Overall, our results illustrate that improving the skills of robots (in this case, shared learning) is not sufficient to also improve the human experience of being a member of such a team. How humans perceive artificial agents can be more important than their actual abilities. We thus explore implications for improving the human teammate’s understanding of robotic (social) abilities in hybrid teams.

10:06-10:18, Paper ThIT1.4
Are We Generalizing from the Exception? An In-The-Wild Study on Group-Sensitive Conversation Design in Human-Agent Interactions
Müller, Ana (University of Applied Sciences Cologne), Jeschke, Sabina (FAU - Friedrich-Alexander University of Erlangen-Nuremberg), Richert, Anja (University of Applied Sciences Cologne)
Keywords: Social Intelligence for Robots, Linguistic Communication and Dialogue, Multimodal Interaction and Conversational Skills
Abstract: This paper investigates the impact of a group-adaptive conversation design in two socially interactive agents (SIAs) through two real-world studies. Both SIAs – Furhat, a social robot, and MetaHuman, a virtual agent – were equipped with a conversational artificial intelligence (CAI) backend combining hybrid retrieval and generative models. The studies were carried out in an in-the-wild setting with a total of N=188 participants who interacted with the SIAs – in dyads, triads, or larger groups – at a German museum. Although the results did not reveal a significant effect of the group-sensitive conversation design on perceived satisfaction, the findings provide valuable insights into the challenges of adapting CAI for multi-party interactions and across different embodiments (robot vs. virtual agent), highlighting the need for multimodal strategies beyond linguistic pluralization. These insights contribute to the fields of Human-Agent Interaction (HAI), Human-Robot Interaction (HRI), and broader Human-Machine Interaction (HMI), informing future research on effective dialogue adaptation in group settings.

10:18-10:30, Paper ThIT1.5
One Human and Multi-Robot Collaboration: Evaluating the Impact of Attention-Guided On-Screen Recommendations on Human Multitasking Efficiency, Cognitive Load, and Trust
Kumar, Rajul (George Mason University), Wang, Renke (George Mason University), Yao, Ningshi (George Mason University)
Keywords: Multi-modal Situation Awareness and Spatial Cognition, Human Factors and Ergonomics, Cooperation and Collaboration in Human-Robot Teams
Abstract: The effectiveness of human-multirobot collaboration depends on the automation capabilities of the robots, the human's cognitive load and multitasking efficiency, the human's trust in the robots, and the supporting technological system through which human and robots interact. To improve humans' multitasking operations, automated decision-support recommendation systems have been developed to alleviate cognitive load, maintain the human's attentional involvement, and potentially improve the overall performance of the human-robot team. However, these recommendation systems can produce unintended behavioral consequences related to human attention allocation, which require experimental validation. In this paper, we present an optimization-based recommendation system, extending our previous theoretical work, that guides human operators in selecting which robot to help in a challenging simulated multitasking environment, where a human participant monitors and manages multiple robots navigating obstacle-rich terrain. In a controlled experimental study involving 40 human participants, we compared two conditions: optimal recommendations provided continuously versus recommendations presented only during critical scenarios. Our findings indicate that while the recommendation system had minimal or even negative effects on actual task performance, it significantly improved participants' perceived performance and trust. Further analysis of user feedback revealed substantial correlations between increased perceived performance, increased trust, reduced cognitive demands, and decreased frustration during multitasking. These insights show that recommendation systems enhance user experience, which can be key to adoption even when objective gains are limited.

ThIT2 Regular Session, Auditorium 2
Social Intelligence of Robots V

Chair: Niitsuma, Mihoko | Chuo University
Co-Chair: Akinrintoyo, Emmanuel | Imperial College London

09:30-09:42, Paper ThIT2.1
Fool Me Once, Shame on You: Investigating Human Reactions to Robots That Deceive
Rogers, Kantwon (Massachusetts Institute of Technology), Chang, Jinhee (Georgia Tech), Gorostiaga Zubizarreta, Geronimo (Georgia Institute of Technology), Plate Zelaschi, Octavio (Georgia Institute of Technology), Varakantam, Varish (Georgia Institute of Technology)
Keywords: Creating Human-Robot Relationships, Robot Companions and Social Robots, Social Intelligence for Robots
Abstract: How do people react to a robot that lies to them? Current robot deception work only explores the effects of deception within the same context in which the deception occurs. However, how does deception in one scenario influence how a robot will be predicted to act in a different situation? This work presents a large-scale (N=1296), online, text-based scenario experiment that described an agent tasked with helping with a business deal. We examine how factors of agent embodiment, agent truthfulness, and the outcomes of the agent’s decisions influence people’s trust in the system and how likely participants are to predict that the agent will act maliciously in another scenario. This work finds that when an agent lies to a participant, even when the participant benefits from this deception, the lie decreases trust and results in an increased prediction that the agent will act with malicious intent in a different scenario. Our results also show that for our scenario, participants evaluate a human more harshly than they do a robot or an AI system without an embodiment. These results add further nuance to evaluating the advantages and disadvantages of designing systems that deceive and, hopefully, encourage more investigations into an otherwise understudied area.

09:42-09:54, Paper ThIT2.2
Basic Psychological Need Fulfillment in HRI: The Role of Control and Ownership in Shaping Robot Perception
Figge, Jana (Ruhr West University), Straßmann, Carolin (University of Applied Sciences Ruhr West)
Keywords: Motivations and Emotions in Robotics, Creating Human-Robot Relationships
Abstract: As robots become increasingly autonomous, they may evoke feelings of discomfort, particularly when users lack control over the interaction. One potential approach to mitigating these negative perceptions is granting users control over the robot (e.g., through an external interface for direct operation) and fostering a sense of psychological ownership (PO). We assume that control and ownership of a robot support the fulfillment of the Basic Psychological Needs—autonomy, competence, and relatedness—according to Self-Determination Theory, which play a key role in technology acceptance and a positive user experience. Additionally, we examine whether subjective perceptions of control and ownership differ from their objective counterparts and how these differences impact Basic Psychological Need fulfillment and robot perceptions. A between-subjects design laboratory study (N = 64) was conducted with three experimental conditions: (1) no control or ownership, where the robot autonomously completed a task (Wizard-of-Oz); (2) control, where participants directly operated the robot to perform the task; and (3) control and ownership, where participants controlled the robot and temporarily assumed ownership of it during the task. Results indicate that the combination of control and ownership significantly reduced feelings of discomfort towards the robot and enhanced the fulfillment of participants’ psychological need for competence. Moreover, subjective perceptions of control and ownership played a crucial role in confirming the hypothesized relationships with Basic Psychological Needs.

09:54-10:06, Paper ThIT2.3
Model Based Human Behaviour and Intention Estimation in Physical Human-Robot Interaction Scenarios
Ozkara, Efe (Middle East Technical University), Ankarali, Mustafa Mert (Middle East Technical University)
Keywords: Curiosity, Intentionality and Initiative in Interaction, Monitoring of Behaviour and Internal States of Humans, Human Factors and Ergonomics
Abstract: This research introduces a framework for physical Human-Robot Interaction (pHRI) scenarios that estimates human behavior and intention. By incorporating this estimation template, velocity-based adaptive controllers can enhance both the effectiveness and comfort of pHRI. Existing methods in the literature primarily rely on sensor readings from the robot, overlooking the dynamics and relationships between the human arm and the robot. In our approach, we model these dynamics and relationships as a spring-damper system and assume that human velocity is the system's input. This formulation enables the use of a Kalman filter to estimate human velocity from the robot's sensor measurements in real-time. With an accurate estimation of human velocity, our approach provides a more precise understanding of human intention, enabling smoother and more adaptive interactions. The real-time prediction of human velocity and intent allows for more intuitive and responsive robot behavior, leading to significant improvements in both the effectiveness and comfort of pHRI scenarios. We conducted systematic experiments using the Franka Emika Panda collaborative manipulator, demonstrating that our approach enhances pHRI performance compared to alternative methods, particularly in terms of responsiveness and user experience.

10:06-10:18, Paper ThIT2.4
Self-Disclosure Themes and Semantics across Human, Robotic, and Disembodied Conversational Partners
Chiang, Sophie (Department of Computer Science and Technology, University of Cam), Laban, Guy (University of Cambridge), Cross, Emily S (ETH Zurich), Gunes, Hatice (University of Cambridge)
Keywords: Linguistic Communication and Dialogue, Embodiment, Empathy and Intersubjectivity, Monitoring of Behaviour and Internal States of Humans
Abstract: As social robots and other artificial agents become more conversationally capable, it is important to understand whether the content and meaning of self-disclosure towards these agents changes depending on the agent’s embodiment. In this study, we analysed conversational data from three controlled experiments in which participants self-disclosed to a human, a humanoid social robot, and a disembodied conversational agent. Using sentence embeddings and clustering, we identified themes in participants’ disclosures, which were then labelled and explained by a large language model. We subsequently assessed whether these themes and the underlying semantic structure of the disclosures varied by agent embodiment. Our findings reveal strong consistency: thematic distributions did not significantly differ across embodiments, and semantic similarity analyses showed that disclosures were expressed in highly comparable ways. These results suggest that while embodiment may influence human behaviour in human–robot and human–agent interactions, people tend to maintain a consistent thematic focus and semantic structure in their disclosures, whether speaking to humans or artificial interlocutors.

ThIT3 Special Session, Auditorium 3
SS: Social Human-Robot Interaction of Human-Care Service Robots I

Chair: Ahn, Ho Seok | The University of Auckland, Auckland
09:30-09:42, Paper ThIT3.1
Optimization of Two-Stage Facial Expression Mimicking Systems: Enhancing Emotional Representation in the EveR-4 H22 Robot (I)
Lee, Minwoo (University of Auckland), Yoo, Chan (University of Auckland), MacDonald, Bruce (University of Auckland), Ahn, Ho Seok (The University of Auckland, Auckland)
Keywords: Robot Companions and Social Robots, Multi-modal Situation Awareness and Spatial Cognition
Abstract: This paper presents a two-stage facial expression mimicking system using the EveR-4 H22 robot, designed to improve Human-Robot Interaction (HRI) by accurately replicating human emotions. The system follows a two-step process: blendshape extraction, followed by optimized mapping functions that translate human expressions into the robot’s parameters. The parameter training employed Gradient Descent with Regularization, using the Adam Optimizer for 1 million iterations on custom-labeled data covering emotional categories such as Sad, Fearful, and Anger. Experimental results show considerable improvements in emotional accuracy: training reduced the regularized L2 loss by a factor of 1,182, and the system can accurately mimic the facial emotional expressions of an unseen individual. It holds potential applications in domains such as healthcare and customer service, as well as in automating the generation of a demographically broader spectrum of emotional expressions.

09:42-09:54, Paper ThIT3.2
Exploring the Impact of Nudge Design in Robotic Plate on Enhancing Children's Dietary Habits (I)
Kim, Sangmin (Samsung Electronics Co. Ltd), Choi, Jongsuk (Korea Inst. of Sci. and Tech), Kwak, Sonya Sona (Korea Institute of Science and Technology (KIST))
Keywords: User-centered Design of Robots, Child-Robot Interaction
Abstract: This study investigates how nudge design in robotic plates can improve children's eating habits. Establishing healthy dietary habits in childhood is crucial for their growth and development and can decrease the risk of developing diseases later in life. Traditional coercive methods for dietary change have been largely ineffective. In contrast, nudge design, which respects user autonomy and reduces negative reactions, appears to be a promising strategy for positively influencing children's dietary behaviors. We developed a robotic plate based on nudge design principles, incorporating rainbow metaphors, to encourage children to explore a variety of foods and mitigate picky eating tendencies. The study included comparative experiments with two control groups. The findings indicated that the robotic plate with rainbow metaphors attracted more interest, received better evaluations, and was deemed more appropriate and useful than the control plates. Although there were no significant differences in how the plates were used according to observational data, interviews with participants showed a preference for the robotic plate and an increased willingness to try new, healthy foods. The study concludes that nudge design, particularly when using rainbow metaphors, is more effective with younger children, potentially improving their dietary habits. Nonetheless, the influence of such designs on older children might be restricted due to their advanced cognitive development and greater susceptibility to external influences.

09:54-10:06, Paper ThIT3.3
Converging in Companionship, Diverging in Interaction: A Comparative Study of Korean and Japanese Social Robots (I)
Kim, Dongseon (Sookmyung Women's University), Tan, Cheng Kian (Singapore University of Social Sciences), Takahashi, Akemi (Bunkyo Gakuin University)
Keywords: Robot Companions and Social Robots, Social Touch in Human–Robot Interaction
Abstract: With the emergence of advanced social robots, there is a need for a systematic examination of their purpose and various applications. It is also important to understand the cultural elements embedded in them and the effects of these elements on their acceptance in the global market. This study explores the design and human–robot interactions of two social robots, one from South Korea and one from Japan, with contrasting characteristics. Using a phenomenological approach, we analyse Korean users’ interactions and experts’ evaluations of these social robots and identify key differences: intimate touch vs maintaining a safe distance; function-oriented verbal communication vs non-verbal communication for emotional bonding; static vs dynamic attachment formation; user as a care recipient vs user as an active caregiver; and safety-centred vs self-help-centred caregiving approaches. These findings demonstrate how the design and technological capacities of social robots are related to their intended purpose and can affect user experience, although with limited generalizability. They also provide insights into differences in the eldercare cultures of South Korea and Japan. Based on this case study, we suggest that systematic classifications of social robots should consider their intended purpose, scope of use and relevance to specific caregiving environments.

10:06-10:18, Paper ThIT3.4
What Do Older Adults Want? Exploring Experiences in Multi-Session Personalized Conversations with Companion Robots (I)
Yoo, Yae Rin (Korea Institute of Science and Technology), Yang, Eui Jeong (Korea Institute of Science and Technology), Sung, Jee Eun (Ewha Womans University), Lim, Yoonseob (Korea Institute of Science and Technology)
Keywords: Robot Companions and Social Robots, Applications of Social Robots, Long-term Experience and Longitudinal HRI Studies
Abstract: With advancements in Large Language Models, conversational companion robots are increasingly recognized for their potential in supporting and caring for older adults. This study examines older adults’ experiences and functional preferences for a conversational robot designed to enable personalized interactions across multiple sessions. The robot engages users in open-ended conversations by recalling previous dialogues, integrating personal information, and facilitating memory-stimulating conversations. Thirty healthy older adults participated in three one-hour interaction sessions with the robot, followed by post-interaction surveys. Results showed that participants found the interactions engaging and meaningful, expressing high satisfaction with the robot’s ability to remember them and sustain personalized conversations. Among the robot’s functions, health monitoring and retrospective memory support were identified as highly valued by older adults. These findings provide insights for developing more adaptive and socially engaging robots, emphasizing the importance of personalized memory retention and user-tailored interactions in enhancing social support and cognitive engagement for older adults.

10:18-10:30, Paper ThIT3.5
DeepSign: Pretrained Vision Transformer for Isolated Sign Language Recognition (I)
Liu, Edmond (University of Auckland), MacDonald, Bruce (University of Auckland), Ahn, Ho Seok (The University of Auckland, Auckland)
Keywords: Detecting and Understanding Human Activity
Abstract: Isolated sign language recognition is a challenging task involving the learning of complex relationships between spatial and temporal features. Due to the high complexity and relatively small datasets available, state-of-the-art methods often adopt language modeling and convolutional neural network based multimodal designs, achieving high accuracy at the cost of significant architectural complexity. Although conceptually simpler, transformers have gained widespread adoption in related computer vision tasks, outperforming 3D convolutional network competitors. However, due to a lack of training data, video transformers struggle with sign language recognition and have not demonstrated competitive accuracy compared to state-of-the-art 3D convolutional neural network designs. We introduce DeepSign, a family of vision transformer based sign language recognition models with superior performance to 3D convolutional neural network designs. Through careful model ablation we select the UniFormerV2 and VideoMAE V2 architectures and perform mixture of dataset pretraining. Our strongest model DeepSign UniFormerV2 large achieves state-of-the-art on the WLASL100 and MSASL100 benchmarks, producing 92.64% and 94% top-1 accuracies respectively. Armed with VideoMAE V2’s powerful pretrained backbone, DeepSign ViT base offers greater efficiency for a small accuracy tradeoff. We hope DeepSign will help advance future sign language research by providing strong foundational models to kickstart experiments.

ThIT4 Regular Session, Blauwe Zaal
Applications of Social Robots IX

Chair: Foster, Mary Ellen | University of Glasgow
09:30-09:42, Paper ThIT4.1
Hyperdimensional Gesture Recognition for Underwater Human Robot Interaction
Tran, Tyler (Naval Research Laboratory), Gyory, Nathaniel (Naval Research Laboratory), Thompson, Hunter (Naval Research Laboratory), Harrison, Anthony (Naval Research Laboratory), Saad, Laura (US Naval Research Laboratory), Trafton, Greg (Naval Research Laboratory), Lawson, Wallace (US Naval Research Laboratory)
Keywords: Machine Learning and Adaptation, Novel Interfaces and Interaction Modalities, Non-verbal Cues and Expressiveness
Abstract: In this paper, we study the problem of gesture recognition as a method for divers to communicate with an underwater robot. Gesture is a common method of communication between divers, and yet autonomous underwater vehicles have very limited capacity to understand gesture given lighting and visibility constraints (e.g., from water turbidity and diver depth). Traditional deep learning methods are limited in this domain because of a lack of sufficient training data. We show that it is not enough to learn a gesture in a laboratory setting, because the appearance changes dramatically underwater. We show how hyperdimensional computing can solve this problem by permitting hypervectors to serve as abstract representations of gestures, yielding rapid adaptation to new environments and new gestures. We experimentally verify this approach using a novel dataset of 6 diving relevant gestures. We show that we can accurately adapt a gesture learned in a laboratory setting to work with the same gesture observed underwater. Our approach compares favorably to a ResNet-18, which performs well in laboratory conditions (91.9% accuracy), but performs poorly underwater (53.9% accuracy). Our proposed approach is capable of rapid adaptation, resulting in an accuracy of 83.8% on underwater gestures with just one additional example from each class added to the support set. Finally, we also show the ability to adapt to new gestures not present in our original training set. We use hypervectors to learn new gestures from the Sign Language MNIST dataset, providing a high level of accuracy with a limited amount of training data.

09:42-09:54, Paper ThIT4.2
Analysis of Pre-Handover Peak Speed Timing and Patterns for Human Givers and Receivers
Megyeri, Ava (Wright State University), Banerjee, Sean (Wright State University), Kyrarini, Maria (Santa Clara University), Banerjee, Natasha Kholgade (Wright State University)
Keywords: Detecting and Understanding Human Activity, Human Factors and Ergonomics, Cooperation and Collaboration in Human-Robot Teams
Abstract: In this work, we study patterns in the movement times of human participants engaging in object handover during the pre-handover phase, to inform on conducting fluent human-robot handovers. Fluency of object handover between two agents is critical to ensure success of shared overall collaborative goals in the context of larger tasks involving the transferred objects. Human givers and receivers often coordinate their pre-handover movements to ensure fluent transfer, with receivers showing anticipatory behavior and proactive response. Since typical timing patterns of human and robot movements for short-range tasks demonstrate a speedup to a peak speed, and a slowdown to the end point, we analyze relationships between time differences of peak speed attainment relative to start and end (i.e., reach) times, across the giver and receiver. Our work helps inform the design of motion planning algorithms for robots that embody timing patterns during movement that are predictable to the partner.

09:54-10:06, Paper ThIT4.3
Light Everywhere: Three Studies Investigating a Wall-And-Ceiling Climbing Robot Shedding Light on the Flexible Home
Chao, Hsin-Ming (Cornell University), Shrotri, Shivani (Cornell University), White, Eleanor (Cornell University), Tassari, Bruno Dantas da Silva (Cornell University), Zhang, Cheng (Cornell University), Green, Keith Evan (Cornell University)
Keywords: User-centered Design of Robots, Innovative Robot Designs, Novel Interfaces and Interaction Modalities
Abstract: This paper presents Light Everywhere, a robotic lighting system that enhances flexibility in domestic spaces by traversing walls and ceilings to provide real-time, task-based illumination. We report a field investigation, an online study, and a between-subjects experiment (N=26) using the WoZ approach comparing Light Everywhere with a conventional desk lamp, evaluating perceived comfort, control, and spatial utilization. Results show the robot supports adaptive behaviors and dynamic space usage. Findings highlight the potential of robotic lighting to redefine housing flexibility and user-driven environmental control, “shedding light” on a novel HRI and smart home design.

10:06-10:18, Paper ThIT4.4
Assessing the Impact of a Passive Exoskeleton on Firefighter Performance and Physiological Response
Mabulu, Katiso (Northeastern University), Jawed, Rida (Northeastern University), Lin, Albert (Northeastern University), Raine, Lauren (Northeastern University), Padir, Taskin (Northeastern University)
Keywords: Assistive Robotics, Human Factors and Ergonomics, Monitoring of Behaviour and Internal States of Humans
Abstract: Firefighters operate in hazardous environments with limited ergonomic support, often leading to significant physical strain. While robotics research has explored drones and quadrupeds for firefighting assistance, exoskeletons remain underutilized. This study evaluates the effects of the BackX passive exoskeleton during firefighter search and rescue tasks. Five professional firefighters performed rescue and equipment carry tasks with and without the exoskeleton, while physiological metrics were recorded using the COSMED K5 metabolic analyzer. Results showed a reduction in cardiovascular strain and anaerobic demand when using the exoskeleton; however, energy expenditure increased, likely due to restricted movement and inefficiencies. Post-task surveys indicated reduced perceived exertion and fatigue. These findings suggest that passive exoskeletons may alleviate physical demands but require further development to improve energy efficiency and usability in dynamic emergency scenarios. Continued research is necessary to optimize exoskeleton design for fire service applications and to assess long-term operational benefits.
|
|
10:18-10:30, Paper ThIT4.5 | Add to My Program |
Are You an Expert? Instruction Adaptation Using Multi-Modal Affect Detections with Thermal Imaging and Context |
|
Mohamed, Youssef (KTH Royal Institute of Technology), Lemaignan, Séverin (PAL Robotics), Guneysu, Arzu (Bogazici University), Jensfelt, Patric (KTH - Royal Institute of Technology), Smith, Claes Christian (KTH Royal Institute of Technology) |
Keywords: Motivations and Emotions in Robotics, Multimodal Interaction and Conversational Skills, Multi-modal Situation Awareness and Spatial Cognition
Abstract: Human-robot interactions increasingly require adaptive instruction delivery, yet robots struggle to calibrate instruction detail levels without explicit user input. We present a system that automatically modulates instruction granularity using real-time affect detection through multi-modal fusion of thermal imaging, facial expressions, and contextual information. Our transformer-based architecture integrates these signals to enable decisions about instruction delivery based on detected user states. In a between-subjects study (N=40), participants completed assembly tasks under either manual adjustment or automatic adaptation conditions. Results showed significantly fewer manual adjustments in the adaptive condition (0.7 vs 2.0 per session), with comparable user satisfaction across conditions. This work shows the effectiveness of affect-driven adaptive instruction in human-robot interaction, contributing to more responsive robotic interfaces while providing guidelines for balancing automation with user control.
|
|
ThIT5 Regular Session, Auditorium 5 |
Add to My Program |
Motion and Navigation IV |
|
|
Chair: van den Brandt, Gijs | Eindhoven University of Technology |
Co-Chair: Ishii, Kazuo | Kyushu Institute of Technology |
|
09:30-09:42, Paper ThIT5.1 | Add to My Program |
YCB-Handovers Dataset: Analyzing Object Weight Impact on Human Handovers to Adapt Robotic Handover Motion |
|
Khanna, Parag (KTH Royal Institute of Technology), Dsouza, Karen Jane (KTH Royal Institute of Technology), Wang, Chunyu (KTH Royal Institute of Technology), Björkman, Mårten (KTH), Smith, Claes Christian (KTH Royal Institute of Technology) |
Keywords: Human Factors and Ergonomics, Detecting and Understanding Human Activity, Cooperation and Collaboration in Human-Robot Teams
Abstract: This paper introduces the YCB-Handovers dataset, capturing motion data of 2771 human-human handovers with varying object weights. The dataset aims to bridge a gap in human-robot collaboration research, providing insights into the impact of object weight in human handovers and readiness cues for intuitive robotic motion planning. The underlying dataset for object recognition and tracking is the YCB (Yale-CMU-Berkeley) Object and Model Set, which is an established standard dataset used in algorithms for robotic manipulation, including grasping and carrying objects. The YCB-Handovers dataset incorporates human motion patterns in handovers, making it applicable for data-driven, human-inspired models aimed at weight-sensitive motion planning and adaptive robotic behaviors. This dataset covers an extensive range of weights, allowing for a more robust study of handover behavior and weight variation. Some objects also require careful handovers, highlighting contrasts with standard handovers. We also provide a detailed analysis of the impact of object weight on the human reaching motion in these handovers.
|
|
09:42-09:54, Paper ThIT5.2 | Add to My Program |
Finding the Easy Way through - the Probabilistic Gap Planner for Social Robot Navigation |
|
Probst, Malte (Honda Research Institute Europe GmbH), Wenzel, Raphael (Honda Research Institute Europe GmbH), Puphal, Tim (Honda Research Institute Europe GmbH), Dasi, Monica (Honda Research Institute EU), Steinhardt, Nico Andreas (Honda Research Institute EU), Matsuzaki, Sango (Honda R&D Co., Ltd), Komuro, Misa (Honda R&D Co., Ltd) |
Keywords: Motion Planning and Navigation in Human-Centered Environments, Cooperation and Collaboration in Human-Robot Teams, Computational Architectures
Abstract: In Social Robot Navigation, autonomous agents need to resolve many sequential interactions with other agents. State-of-the-art planners can efficiently resolve the next, imminent interaction cooperatively but do not focus on longer planning horizons. This makes it hard to maneuver scenarios where the agent needs to select a good strategy to find gaps or channels in the crowd. We propose to decompose trajectory planning into two separate steps: Conflict avoidance for finding good, macroscopic trajectories, and cooperative collision avoidance (CCA) for resolving the next interaction optimally. We propose the Probabilistic Gap Planner (PGP) as a conflict avoidance planner. PGP modifies an established probabilistic collision risk model to include a general assumption of cooperativity. PGP biases the short-term CCA planner to head towards gaps in the crowd. In extensive simulations with crowds of varying density, we show that using PGP in addition to state-of-the-art CCA planners improves the agents' performance: On average, agents keep more space to others, create less tension, and cause fewer collisions. This typically comes at the expense of slightly longer paths. PGP runs in real-time on the WaPOCHI mobile robot by Honda R&D.
|
|
09:54-10:06, Paper ThIT5.3 | Add to My Program |
Towards Achieving a Safety-Efficiency Balance in Social Robot Navigation through Safe and Configurable Path Following |
|
Frering, Laurent (Graz University of Technology), Steinbauer-Wagner, Gerald (Graz University of Technology) |
Keywords: Motion Planning and Navigation in Human-Centered Environments
Abstract: Robots deployed in human spaces are required to achieve their goals safely and efficiently. Those two objectives can interfere with each other, as safety often requires slowing down, stopping, or taking detours. This is complicated even more by the fact that safety is not only a physical consideration, but also a psychological one: even if a robot is able to stop reliably before any collision, driving at high speeds towards humans will lead to low perceived safety, reducing comfort and increasing the risk of unpredictable reactions to the robot. To tackle this problem, we propose to build on an existing technique for safe path following in dynamic environments, and extend it with safety constraints suitable for social robotics environments. We also focus on making those constraints easily configurable to adapt to different use-cases. We evaluate our proposed method against baselines in simulated environments, integrating established metrics for performance, safety, and comfort.
|
|
10:06-10:18, Paper ThIT5.4 | Add to My Program |
DynaTra: A Dynamic Framework for Realistic and Scalable Trajectory Simulation in AV Platforms |
|
Wei, Yige (The University of Edinburgh), Deng, Ziyang (Peking University), Wang, Yi (Tsinghua University), Luo, Dingsheng (Peking University) |
Keywords: Motion Planning and Navigation in Human-Centered Environments, Machine Learning and Adaptation, Social Intelligence for Robots
Abstract: Realistic and scalable trajectory simulation is a critical component in autonomous vehicle (AV) research. Existing methods trade off between computational efficiency and behavioral realism—either relying on full-stack autonomous driving systems with high overhead or simplified rule-based models lacking fidelity. In this paper, we propose DynaTra, a hybrid trajectory generation framework that dynamically switches between structured-scenario models—the Enhanced Intelligent Driver Model (EIDM) for autonomous vehicles and the Human Driver Model (HDM) for human drivers—and a full-stack system (Apollo 6.0) for unstructured or complex driving situations. A real-time model selection mechanism, informed by unstructured scenario detection, traffic density, and complex agent behavior recognition, enables adaptive and context-aware switching. Experimental evaluations in realistic simulation environments demonstrate that DynaTra significantly outperforms baseline IDM in both accuracy and efficiency and achieves comparable realism to Apollo 6.0 while maintaining substantially higher simulation speed. The proposed framework offers a principled path toward efficient and realistic AV simulation at scale.
|
|
10:18-10:30, Paper ThIT5.5 | Add to My Program |
HI-Grasp: Human-Inspired Grasp Network for Intuitive and Stable Robotic Grasp |
|
Song, Xinchao (Rochester Institute of Technology), Megyeri, Ava (Wright State University), Wiederhold, Noah (Clarkson University), Banerjee, Sean (Wright State University), Banerjee, Natasha Kholgade (Wright State University) |
Keywords: Machine Learning and Adaptation, Assistive Robotics, Cooperation and Collaboration in Human-Robot Teams
Abstract: As robots increasingly integrate into human environments, they must interact safely, intuitively, and effectively. We present HI-Grasp, a novel human-inspired grasp network that generates robotic grasps that closely mimic human grasp behaviors observed in human-to-human object handovers while ensuring stability. Our approach combines a deep grasp prediction network with Transformer-based modules to predict human-like, stable grasps. We contribute the dataset HOH-Grasps to train and evaluate HI-Grasp. HOH-Grasps consists of interaction data from the HOH handover dataset annotated with grasp labels, and is available at https://huggingface.co/datasets/tars-home/HOH-Grasps. Using HOH-Grasps, HI-Grasp learns and reproduces human grasp preferences while maintaining grasp stability. Extensive experiments on HOH-Grasps, along with real-world robot tests, show that HI-Grasp outperforms baselines and ablated variants in terms of human alignment and stability. We propose HI-Grasp-Lift, a robot-to-human object handover strategy built on HI-Grasp's predictions, to showcase the practical applicability of our approach.
|
|
ThIT6 Regular Session, Auditorium 6 |
Add to My Program |
Robots in Families, Education, Therapeutic Contexts & Arts VII |
|
|
Chair: Lim, Angelica | Simon Fraser University |
|
09:30-09:42, Paper ThIT6.1 | Add to My Program |
Practitioner Insights on Working with Robots in Autism Therapy: Findings from a Year-Long Interaction in an Autism Center |
|
Amir, Aida (Nazarbayev University), Oralbayeva, Nurziya (Nazarbayev University), Tungatarova, Aida (Nazarbayev University), Telisheva, Zhansaule (Nazarbayev University), Sandygulova, Anara (Nazarbayev University) |
Keywords: Robots in Education, Therapy and Rehabilitation, Applications of Social Robots, Child-Robot Interaction
Abstract: Robot-Mediated Interventions (RMIs) promise to help autism specialists support children with Autism Spectrum Conditions (ASC). This practice focuses on developing robot-enhanced support systems in which social robots act as assistants and mediators in autism therapy. Understanding the perspectives and experiences of autism specialists is key to evaluating the added value of robots in educational and therapeutic settings. For this purpose, our team collaborated with an autism center, where over 100 children with ASC participated in RMI sessions one to two times per week for one year. As part of the study, we conducted in-depth interviews with two practitioners, exploring their attitudes toward embedding robots into their daily work and practices. We analyzed data on four key themes: acceptance of RMI, evaluations of RMI effects, procedural outcomes, and potential improvements. The results highlight the positive acceptance of robots in autism therapy, although practitioners raised concerns about functional and technical limitations. The findings suggest practical considerations for researchers, practitioners, and robot developers in the design and implementation of RMIs.
|
|
09:42-09:54, Paper ThIT6.2 | Add to My Program |
Image-Driven Robot Drawing with Rapid Lognormal Movements |
|
Berio, Daniel (Goldsmiths College, University of London), Clivaz, Guillaume (Idiap Research Institute), Stroh, Michael (University of Konstanz), Deussen, Oliver (University of Konstanz), Plamondon, Réjean (Polytechnique Montréal), Calinon, Sylvain (Idiap Research Institute), Fol Leymarie, Frederic (Goldsmiths College, University of London) |
Keywords: Robots in art and entertainment
Abstract: Large image generation and vision models, combined with differentiable rendering technologies, have become powerful tools for generating paths that can be drawn or painted by a robot. However, these tools often overlook the intrinsic physicality of the human drawing/writing act, which is usually executed with skillful hand/arm gestures. Taking this into account is important for the visual aesthetics of the results and for the development of closer and more intuitive artist-robot collaboration scenarios. We present a method that bridges this gap by enabling gradient-based optimization of natural human-like motions guided by cost functions defined in image space. To this end, we use the sigma-lognormal model of human hand/arm movements, with an adaptation that enables its use in conjunction with a differentiable vector graphics (DiffVG) renderer. We demonstrate how this pipeline can be used to generate feasible trajectories for a robot by combining image-driven objectives with a minimum-time smoothing criterion. We demonstrate applications with generation and robotic reproduction of synthetic graffiti as well as image abstraction.
|
|
09:54-10:06, Paper ThIT6.3 | Add to My Program |
"It Was Tragic": Exploring the Impact of a Robot's Shutdown |
|
Oberlender, Agam (Media Innovation Lab, School of Communication, Reichman Universi), Erel, Hadas (Media Innovation Lab, Interdisciplinary Center Herzliya) |
Keywords: Applications of Social Robots, Non-verbal Cues and Expressiveness, Robot Companions and Social Robots
Abstract: It is well established that people perceive robots as social entities, even when they are not designed for social interaction. We evaluated whether the social interpretation of robotic gestures should also be considered when turning off a robot. In the experiment, participants engaged in a brief preliminary neutral interaction while a robotic arm showed interest in their actions. At the end of the task, participants were asked to turn off the robotic arm under two conditions: (1) a Non-designed condition, where all of the robot's engines were immediately and simultaneously turned off, as robots typically shut down; (2) a Designed condition, where the robot’s engines gradually folded inward in a motion resembling ``falling asleep." Our findings revealed that all participants anthropomorphized the robot's movement when it was turned off. In the Non-designed condition, most participants interpreted the robot’s turn-off movement negatively as if the robot had ``died." In the Designed condition, most of the participants interpreted it more neutrally, stating that the robot: ``went to sleep." The robot's turn-off movement also impacted its perception, leading to higher likeability, perceived intelligence, and animacy in the Designed condition. We conclude that the impact of common edge interactions, such as turning off a robot, should be carefully designed while considering people's automatic tendency to perceive robots as social entities.
|
|
10:06-10:18, Paper ThIT6.4 | Add to My Program |
“Help Me, I’m Feeling Down!” - Neurotic Robots Increase Bystander Engagement |
|
O'Neill, Casey (University of Waterloo), Sewruttun, Toushal (University of New Brunswick), Law, Edith (University of Waterloo), Rea, Daniel J. (University of Manitoba) |
Keywords: Personalities for Robotic or Virtual Characters, Robot Companions and Social Robots, Affective Computing
Abstract: We investigate two approaches to designing neurotic behavior – the inability to deal with a stressful situation – and whether such behavior can encourage more care shown towards a domestic robot. Robots are generally designed to be polite and selfless when working in daily situations, striving to be prosocial whenever possible. While negative behaviors such as neuroticism may initially seem undesirable, they serve a purpose in human-human interaction and may still be usable by robots in a positive way, such as communicating frustration, doubt, or anger. In our experiment, participants do an unrelated task as a Roomba cleans the room. In a 2x2 (valence and escalation) experiment on neurotic behavior design, the robot collides with objects while the participant is doing their task and displays a positively or negatively valenced sound, and escalates (getting quicker or longer) or does not escalate that behavior. Our results showcase that designing neurotic behaviors is not simple, with escalation being a more effective control of perceived neuroticism, and that robot personality can indeed influence people, unprompted, to check on the robot. Further, we did not observe any qualitative indication that this was perceived as bothersome or negative by any of our participants. Our results demonstrate how neurotic behavior, stereotypically undesirable, can be useful in promoting positive interaction and prompts further investigation into non-prosocial robot behaviors.
|
|
ThJT1 Regular Session, Auditorium 1 |
Add to My Program |
Cooperation and Collaboration in Human-Robot Teams VI |
|
|
Chair: Hasegawa, Komei | Toyohashi University of Technology |
Co-Chair: Gucsi, Bálint | University of Southampton |
|
10:50-11:02, Paper ThJT1.1 | Add to My Program |
Stochastic Scheduling for Human-Robot Collaboration in Dynamic Manufacturing Environments |
|
Lager, Anders (ABB AB), Miloradovic, Branko (Mälardalen University), Spampinato, Giacomo (ABB Robotics), Nolte, Thomas (Mälardalen University), Papadopoulos, Alessandro Vittorio (Mälardalen University) |
Keywords: Cooperation and Collaboration in Human-Robot Teams, HRI and Collaboration in Manufacturing Environments
Abstract: Collaborative human-robot teams enhance efficiency and adaptability in manufacturing, but task scheduling in mixed-agent systems remains challenging due to the uncertainty of task execution times and the need for synchronization of agent actions. Existing task allocation models often rely on deterministic assumptions, limiting their effectiveness in dynamic environments. We propose a stochastic scheduling framework that models uncertainty through probabilistic makespan estimates, using convolutions and stochastic max operators for realistic performance evaluation. Our approach employs meta-heuristic optimization to generate executable schedules aligned with human preferences and system constraints. It features a novel deadlock detection and repair mechanism to manage cross-schedule dependencies and prevent execution failures. This framework offers a robust, scalable solution for real-world human-robot scheduling in uncertain, interdependent task environments.
|
|
11:02-11:14, Paper ThJT1.2 | Add to My Program |
Investigating the Role of Uncertainty in Scalability of Human Multi-Robot Teams |
|
Perkins, Lawrence Dale (University of West Florida), Sevil, Hakki Erhan (University of West Florida), Johnson, Matthew (Inst. for Human & Machine Cognition) |
Keywords: Cooperation and Collaboration in Human-Robot Teams, Evaluation Methods, User-centered Design of Robots
Abstract: Scalability of human multi-robot teams is quickly becoming a crucial area of research as autonomous systems become more capable and sophisticated. A key research challenge is developing predictive measures of scalability, such as fan-out. This paper presents the results from a study that confirms the improved accuracy of a novel fan-out model over two previous models. It utilizes a new test domain to assess scalability and investigate the role of uncertainty through a variety of complexities driven by environmental factors, robot behaviors, and human-robot interactions. Our analysis highlights potential enhancements to optimize model accuracy across all the models. Lastly, we show that when calibrating for measurement error, the new model is bounded, which sets it apart from previous models that are unbounded. The new model provides a more nuanced understanding of the dynamics at play and the factors involved in scaling human multi-robot teams under uncertainty.
|
|
11:14-11:26, Paper ThJT1.3 | Add to My Program |
A Multi-Modal Interaction Framework for Efficient Human-Robot Collaborative Shelf Picking |
|
Pathak, Abhinav (Robotics Lab, Dubai Future Labs, Dubai, UAE), venkatesan, kalaichelvi (Bits Pilani, Dubai Campus), Taha, Tarek (Dubai Future Labs), Muthusamy, Rajkumar (Dubai Future Foundation) |
Keywords: Cooperation and Collaboration in Human-Robot Teams, Multimodal Interaction and Conversational Skills, HRI and Collaboration in Manufacturing Environments
Abstract: The growing presence of service robots in human-centric environments, such as warehouses, demands seamless and intuitive human-robot collaboration. In this paper, we propose a collaborative shelf-picking framework that combines multimodal interaction, physics-based reasoning, and task division for enhanced human-robot teamwork. The framework enables the robot to recognize human pointing gestures, interpret verbal cues and voice commands, and communicate through visual and auditory feedback. Moreover, it is powered by a Large Language Model (LLM) which utilizes Chain of Thought (CoT) and a physics-based simulation engine for safely retrieving cluttered stacks of boxes on shelves, a relationship graph for sub-task generation, extraction sequence planning, and decision making. Furthermore, we validate the framework through real-world shelf-picking experiments: 1) Gesture-Guided Box Extraction, 2) Collaborative Shelf Clearing, and 3) Collaborative Stability Assistance. This work paves the way for more intuitive and effective human-robot collaboration in warehouse environments. A video demonstrating the real-world implementation of our proposed system is available at: https://youtu.be/353zmxMwESg?si=2_T-D7hUZnzl5522
|
|
11:26-11:38, Paper ThJT1.4 | Add to My Program |
Learning Human-To-Robot Handovers through 3D Scene Reconstruction |
|
Wu, Yuekun (Queen Mary University of London), Pang, Yik Lung (Queen Mary University of London), Cavallaro, Andrea (Idiap, EPFL), Oh, Changjae (Queen Mary University of London) |
Keywords: Machine Learning and Adaptation, Cooperation and Collaboration in Human-Robot Teams
Abstract: Learning robot manipulation policies from raw, real-world image data requires a large number of robot-action trials in the physical environment. Although training using simulations offers a cost-effective alternative, the visual domain gap between simulation and robot workspace remains a major limitation. Gaussian Splatting visual reconstruction methods have recently provided new directions for robot manipulation by generating realistic environments. In this paper, we propose the first method for learning supervised-based robot handovers solely from RGB images without the need of real-robot training or real-robot data collection. The proposed policy learner, Human-to-Robot Handover using Sparse-View Gaussian Splatting (H2RH-SGS), leverages sparse-view Gaussian Splatting reconstruction of human-to-robot handover scenes to generate robot demonstrations containing image-action pairs captured with a camera mounted on the robot gripper. As a result, the simulated camera pose changes in the reconstructed scene can be directly translated into gripper pose changes. We train a robot policy on demonstrations collected with 16 household objects and directly deploy this policy in the real environment. Experiments in both Gaussian Splatting reconstructed scene and real-world human-to-robot handover experiments demonstrate that H2RH-SGS serves as a new and effective representation for the human-to-robot handover task.
|
|
ThJT2 Regular Session, Auditorium 2 |
Add to My Program |
Methodological Issues in HRI |
|
|
Chair: Eyssel, Friederike | Bielefeld University |
|
10:50-11:02, Paper ThJT2.1 | Add to My Program |
Development of the Perceived Danger-Short Form (PD-SF) Scale: Scale Reduction and Validation |
|
Saad, Laura (US Naval Research Laboratory), Roesler, Eileen (George Mason University), McCurry, J. Malcolm (Peraton), Gyory, Nathaniel (Naval Research Laboratory), Trafton, Greg (Naval Research Laboratory) |
Keywords: Evaluation Methods, Monitoring of Behaviour and Internal States of Humans
Abstract: The perception of danger in HRI settings has become increasingly important as interactions between robots and humans become more commonplace. Previously, a perceived danger scale was developed and validated. Here, we shortened this scale to create the Perceived Danger-Short Form (PD-SF) scale. Experiment 1 used pre-existing data and standard procedures to shorten the scale from 12 items to 4. Experiment 2 validated the short form in a new experiment where participants observed images of robots holding kitchen items of varying levels of danger in close proximity to a human. PD-SF was able to capture differences across the kitchen items. Results from both experiments indicate that PD-SF is a reliable and psychometrically valid measure of perceived danger in HRI contexts.
|
|
11:02-11:14, Paper ThJT2.2 | Add to My Program |
Mind the Context! Questionnaire Design Can Affect the Attribution of Gender to Robots |
|
Perugia, Giulia (Eindhoven University of Technology), Kolmans, Anne Adriana Christina (Radboud Umc) |
Keywords: Anthropomorphic Robots and Virtual Humans, Evaluation Methods, Ethical Issues in Human-robot Interaction Research
Abstract: The field of Human-Robot Interaction (HRI) has seen growing interest in the topic of gendering robots. However, as this topic has gained momentum rapidly, the research methodologies used to study it have not yet undergone critical analysis and refinement. This study investigates whether multidimensional questionnaires are susceptible to context effects and how respondents' views on gender may amplify these effects. We conducted an online study using LimeSurvey, employing a mixed-model design with questionnaire design as a between-subjects variable (four conditions: three items together, three items with distractors, three items separately, and four items) and robot gender ambiguity as a within-subjects variable (gender ambiguous vs. gender non-ambiguous robots). A total of 160 participants were recruited via Prolific, with 40 assigned to each condition. Participants rated the perceived gender of 18 robots (nine gender ambiguous and nine gender non-ambiguous) using the different questionnaire designs. Results show that questionnaire design can alter the direction and strength of the relationships between masculinity, femininity, and gender neutrality. Moreover, they reveal that the questionnaire used to measure a robot's perceived gender does not influence the ratings of gender at an individual level but it does so at a group level. Finally, they disclose that benevolent sexism sensitizes participants to the attribution of masculinity, whereas non-binary gender makes participants more prone to attribute gender neutrality.
|
|
11:14-11:26, Paper ThJT2.3 | Add to My Program |
"I Know That Other Robot, You Can Turn Them Off": Ingroup Robots Elicit Lower Compliance to Instructions That Undermine Another Robot |
|
Wright, Lauren L. (University of Chicago), Dang, Andre K. (University of Chicago), Sebo, Sarah (University of Chicago) |
Keywords: Cooperation and Collaboration in Human-Robot Teams, Creating Human-Robot Relationships, Personalities for Robotic or Virtual Characters
Abstract: As robots become increasingly capable and widespread, they may be placed into roles where they are responsible for giving people instructions (e.g., directing human coworkers in a warehouse). It is important to better understand the factors that may influence human compliance to robot instructions, given that these instructions may undermine or invalidate the efforts of another person or robot. In this work, we investigate to what extent an established robot-robot relationship will impact a person's choice to comply with instructions from one robot to undermine another robot's contributions in a collaborative task. We ran a between-subjects study (N = 50) where participants collaborated with a partner robot to build a series of towers at the direction of a manager robot. These two robots were either presented as an ingroup with a shared history and preferential treatment of one another (ingroup condition) or as an outgroup without shared history and neutral treatment of one another (outgroup condition). During the experiment, the manager robot in both conditions gave the human participant instructions to undermine the efforts of the partner robot. We found that participants in the ingroup condition are significantly less likely to comply with these instructions and also view both robots more positively than those in the outgroup condition. Our results demonstrate that the presence of an ingroup relationship between robots can both lessen compliance with instructions that undermine partnerships and generate a more positive social atmosphere within a human-robot collaboration.
|
|
11:26-11:38, Paper ThJT2.4 | Add to My Program |
Perceive, React, Act – Exploring Bias Experiences, Blame Attributions, and Coping with Algorithmic Bias through Diverse Sampling |
|
Erle, Lukas (Ruhr West University of Applied Sciences), Timm, Lara (University of Applied Sciences Ruhr West), Eimler, Sabrina (Hochschule Ruhr West, University of Applied Sciences), Straßmann, Carolin (University of Applied Sciences Ruhr West) |
Keywords: Ethical Issues in Human-robot Interaction Research, Robot Companions and Social Robots, Human Factors and Ergonomics
Abstract: With the rising prevalence of social robots in public spaces, an increasingly heterogeneous audience of people with different characteristics (such as age or ethnicity) are possible users. In such diverse interactions, there is a risk of algorithmic bias, which results in a discrimination of certain user groups. Human-robot interaction (HRI) research has thus far predominantly focused on homogeneous samples to describe instances and consequences of algorithmic bias, which is prone to leading to evasive findings. We address this gap by conducting thirteen focus group interviews with a total of N = 92 participants and exploring if and how people have already experienced algorithmic bias in their daily lives. Additionally, we contrast the findings from our socially diversified sample with those obtained from a more homogeneous group of people. Our findings uncover various experiences and coping mechanisms regarding algorithmic bias and demonstrate that examining a more diverse sample reveals findings that would otherwise have remained unnoticed.
|
|
ThJT3 Special Session, Auditorium 3 |
Add to My Program |
SS: Social Human-Robot Interaction of Human-Care Service Robots II / SS: Social Robots for Mental Health and Well-Being |
|
|
Chair: Salam, Hanan Anna | New York University Abu Dhabi |
Co-Chair: Ahn, Ho Seok | The University of Auckland, Auckland |
|
10:50-11:02, Paper ThJT3.1 | Add to My Program |
Social Robots for Bed-Fall Detection in Hospitals (I) |
|
D'Arco, Luigi (University of Naples Federico II), Marotta, Vincenzo (University of Naples Federico II), Rossi, Silvia (Universita' Di Napoli Federico II), Rossi, Alessandra (University of Naples Federico II) |
Keywords: Assistive Robotics, Detecting and Understanding Human Activity, Robot Companions and Social Robots
Abstract: Patient falls from beds remain one of the major complications of hospital care. To address this issue, this research investigates the use of social robots to identify potential bed-related falls in hospitals. Using the robot's camera and human pose estimation techniques, the patient's position in the bed is extracted, and a threshold-based algorithm is used to identify any anomalies that could indicate a fall. Due to the absence of publicly available datasets, a synthetic dataset was created using a simulation environment to develop and tune the detection algorithm. A user study was conducted to validate the proposed approach and evaluate people's perception of the robot. The system achieved an accuracy of 90.9% in a controlled setting using real data. Participants rated the robot as significantly more trustworthy and behaviorally aware when it detected a possible fall, suggesting that timely and meaningful assistance improved the perceived social competence of the robot. These findings highlight the feasibility of deploying social robots as monitoring systems in sensitive clinical settings, offering a cost-effective and socially acceptable solution.
|
|
11:02-11:14, Paper ThJT3.2 | Add to My Program |
SignPepper: Machine Learning Powered Sign Language Teaching Robot with Dynamic Lesson Feedback (I) |
|
Liu, Edmond (University of Auckland), Tracey, Finn (The University of Auckland), MacDonald, Bruce (University of Auckland), Ahn, Ho Seok (The University of Auckland, Auckland) |
Keywords: Assistive Robotics, Multimodal Interaction and Conversational Skills, Detecting and Understanding Human Activity
Abstract: Sign languages are widely used forms of communication in deaf and hearing-impaired communities. However, qualified teachers are in short supply, so robot sign language teaching systems have been proposed to aid in sign language lessons. In this paper we conduct a study using SignPepper, a system we developed based on the humanoid Pepper robot, with capabilities in sign demonstration, verbal communication and sign language recognition. Specifically, SignPepper adopts a 3D convolutional neural network trained on 100 American Sign Language signs, Whisper turbo for speech recognition, ChatGPT 4o for context phrasing and Pepper's built-in text-to-speech functionality. Our study consisted of 33 participants split into two groups: 18 participants were taught sign language by SignPepper, whilst the other 15 were shown the same signs as videos. The same sign recognition neural network was used to evaluate the recall accuracy of students in both groups. Survey results showed the SignPepper group had higher sign recall accuracy, greater interest in learning more sign language and greater engagement. However, comfort during the lesson and comfort with robots as teaching platforms were lower than in the video group. The SignPepper group also rated instruction clarity slightly lower. Our results indicate that the physical dexterity limits of the Pepper robot platform are a major limitation, as its performance of signs may not match human experts exactly. Student comfort is also an area requiring future improvement; nevertheless, SignPepper demonstrates strong viability for the adoption of robotic sign language teaching systems.
|
|
11:14-11:26, Paper ThJT3.3 | Add to My Program |
Towards User-Friendly MR Solutions for Cognitive and Motor Stimulation in Active Ageing (I) |
|
Gabbi, Marta (University of Modena and Reggio Emilia), Villani, Valeria (University of Modena and Reggio Emilia), Sabattini, Lorenzo (University of Modena and Reggio Emilia) |
Keywords: Cognitive Skills and Mental Models, Creating Human-Robot Relationships, Virtual and Augmented Tele-presence Environments
Abstract: The global population is growing at an unprecedented pace, resulting in an increase in the number of people affected by cognitive decline. This paper presents an MR application for older adults with cognitive decline, aiming to stimulate cognitive and motor functioning and promote active ageing. Developed using Unity and deployed to HoloLens 2, the system allows users to interact with holograms through a series of engaging games. Before having elderly people with cognitive decline test the MR approach, preliminary studies were conducted to assess both its feasibility and usability. The results are encouraging, with participants reporting that the approach is easy to learn and perform. They also felt confident and successful in accomplishing what they were asked to do. The immersive nature of MR has the potential to transform ageing into an experience filled with opportunities rather than limitations.
|
|
ThJT4 Regular Session, Blauwe Zaal |
Add to My Program |
Applications of Social Robots X |
|
|
Chair: Zhang, Ruohan | Eindhoven University of Technology, Industrial Engineering and Innovation Science Department |
|
10:50-11:02, Paper ThJT4.1 | Add to My Program |
Intelligent Sampling for Predicting the Performance of Hub-Based Swarms |
|
Jain, Puneet (Brigham Young University), Dwivedi, Chaitanya (Amazon), Goodrich, Michael A. (Brigham Young University) |
Keywords: Machine Learning and Adaptation, Evaluation Methods
Abstract: This paper presents an inductive learning algorithm to predict the performance of hub-based swarms solving the best-of-N problem. Since a major constraint in learning swarm behavior is the high computational cost of obtaining sample data, it is desirable to ensure the right samples are used to train the models. The paper's main contribution is formulating and comparing various sampling techniques to improve performance prediction using manageable amounts of training data. We compare random sampling with in-distribution sampling and out-of-distribution sampling, and then apply the lessons learned to modify random sampling and improve its effectiveness. Results show that in-distribution sampling achieves the best F1 score among the sampling techniques for classifying slow versus fast convergence. Model performance indicates that an informed combination of in-distribution and out-of-distribution sampling produces the highest classification accuracy for the swarm's time-to-converge.
|
|
11:02-11:14, Paper ThJT4.2 | Add to My Program |
Pose-Dependent Dynamic Behaviour of a Machining Robot: Modal Analysis |
|
Denkena, Berend (University of Hannover), Buhl, Henning (University of Hannover), Araoud, Mohamed Taha (University of Hannover) |
Keywords: Innovative Robot Designs
Abstract: Industrial robots are increasingly used in machining due to their cost-effectiveness and flexibility. However, their structural compliance limits precision in high-material-removal-rate (MRR) operations, as lower stiffness leads to inaccuracies and increased vibrations. This paper presents a novel machining robot prototype featuring a hybrid drive system that achieves up to eight times higher stiffness than conventional robots. The second joint incorporates a torque motor added to the servo motor, enabling active stiffening and vibration compensation. Prior studies have demonstrated the pose-dependent dynamic behaviour of robotic structures, necessitating detailed analysis to enable active vibration compensation. Experimental modal analysis (EMA) and finite element analysis (FEA) were employed to characterize this pose-dependent behaviour. EMA revealed significant variations in natural frequencies across configurations, while FEA simulations achieved eigenfrequency predictions within 1–2% of experimental values. The findings highlight the importance of pose-dependent dynamic modelling and contribute to the development of strategies for enhancing machining precision with industrial robots.
|
|
11:14-11:26, Paper ThJT4.3 | Add to My Program |
Anticipatory Fall Detection in Humans with Hybrid Directed Graph Neural Networks and Long Short-Term Memory |
|
Cho, Younggeol (Istituto Italiano Di Tecnologia (IIT)), Solak, Gokhan (Italian Institute of Technology, Genoa), Nocentini, Olivia (Istituto Italiano Di Tecnologia), Lorenzini, Marta (Istituto Italiano Di Tecnologia), Fortuna, Andrea (Politecnico Di Milano), Ajoudani, Arash (Istituto Italiano Di Tecnologia) |
Keywords: Detecting and Understanding Human Activity, Monitoring of Behaviour and Internal States of Humans, Assistive Robotics
Abstract: Detecting and preventing falls in humans is a critical component of assistive robotic systems. While significant progress has been made in detecting falls, the prediction of falls before they happen, and analysis of the transient state between stability and an impending fall, remain unexplored. In this paper, we propose an anticipatory fall detection method that utilizes a hybrid model combining Dynamic Graph Neural Networks (DGNN) with Long Short-Term Memory (LSTM) networks, decoupling the motion prediction and gait classification tasks to anticipate falls with high accuracy. Our approach employs real-time skeletal features extracted from video sequences as input for the proposed model. The DGNN acts as a classifier, distinguishing between three gait states: stable, transient, and fall. The LSTM-based network then predicts human movement in subsequent time steps, enabling early detection of falls. The proposed model was trained and validated using the OUMVLP-Pose and URFD datasets, demonstrating superior performance in terms of prediction error and recognition accuracy compared to models relying solely on DGNN and models from the literature. The results indicate that decoupling prediction and classification improves performance compared to addressing the unified problem using only the DGNN. Furthermore, our method allows for the monitoring of the transient state, offering valuable insights that could enhance the functionality of advanced assistance systems.
|
|
11:26-11:38, Paper ThJT4.4 | Add to My Program |
Improving Tactile Gesture Recognition with Optical Flow |
|
Zhong, Shaohong (University of Oxford), Albini, Alessandro (University of Oxford), Caroleo, Giammarco (University of Oxford), Cannata, Giorgio (University of Genova), Maiolino, Perla (University of Oxford) |
Keywords: Social Touch in Human–Robot Interaction
Abstract: Tactile gesture recognition systems play a crucial role in Human-Robot Interaction (HRI) by enabling intuitive communication between humans and robots. The literature mainly addresses this problem by applying machine learning techniques to classify sequences of tactile images encoding the pressure distribution generated when executing the gestures. However, some gestures can be hard to differentiate based on the information provided by tactile images alone. In this paper, we present a simple yet effective way to improve the accuracy of a gesture recognition classifier. Our approach focuses solely on processing the tactile images used as input by the classifier. In particular, we propose to explicitly highlight the dynamics of the contact in the tactile image by computing the dense optical flow. This additional information makes it easier to distinguish between gestures that produce similar tactile images but exhibit different contact dynamics. We validate the proposed approach in a tactile gesture recognition task, showing that a classifier trained on tactile images augmented with optical flow information achieved a 9% improvement in gesture classification accuracy compared to one trained on standard tactile images.
|
|
11:38-11:50, Paper ThJT4.5 | Add to My Program |
Designing Multi-Touchpoint Privacy Conversations for Service Robots |
|
Grasso, Maria Antonietta (Naver Labs Europe), PARK, JISUN (Naver Labs Europe), Willamowski, Jutta (Naver Labs Europe) |
Keywords: Ethical Issues in Human-robot Interaction Research, Novel Interfaces and Interaction Modalities, Applications of Social Robots
Abstract: Robots are increasingly present in spaces inhabited by humans. From a privacy-sensitive design perspective, they present challenges, as they acquire data about their environment to act autonomously and interact with their users. This may raise privacy concerns among robot users and bystanders. To address these concerns, we propose a multi-touchpoint design enabling users and bystanders to investigate how their privacy is protected. These touchpoints include (1) embodied interaction with the robot, either directly, whenever encountering a robot, or (2) later, in a dedicated physical space, (3) interaction with a virtual AI chatbot through a website, and (4) interaction with a human Data Protection Officer. We evaluated this design and the usefulness of the proposed touchpoints in two studies. Our findings are threefold: first, all touchpoints are useful and complement each other; second, different people have different preferences; and third, the attributes of the situation (i.e., location, busyness, contextuality and sensitivity) impact the choice of touchpoint people would use to ask their questions.
|
|
ThJT5 Regular Session, Auditorium 5 |
Add to My Program |
TRUST Regular Session I |
|
|
Chair: Jain, Neera | Purdue University |
|
10:50-11:02, Paper ThJT5.1 | Add to My Program |
Towards Understanding the Impact of Swarm Motion on Human Trust |
|
Abu-Aisheh, Razanne (University of Bristol), Suneesh, Shyamli (Lancaster University), Didiot-Cook, Tom (University of Bristol), Jones, Simon (University of Bristol), Nunez Sardinha, Emanuel (Bristol Robotics Lab, University of the West of England), Munera, Marcela (University of West England), Hauert, Sabine (University of Bristol) |
Keywords: User-centered Design of Robots, Detecting and Understanding Human Activity
Abstract: Robot swarms are decentralised systems that use simple rules to achieve collective goals, yet their real-world deployment is limited by a lack of understanding of human trust and perception. This study examines how swarm motion affects the trust of novice users in a service-oriented swarm, using an automated cloakroom as a test case. We conducted 20 human trials, where participants interacted with a swarm exhibiting either structured (grid-like) or organic (adaptive) motion, with performance controlled across conditions. Trust and perception were assessed via self-report questionnaires and eye-tracking data. Results indicate that performance and reliability, rather than motion, are the key drivers of trust. However, motion influenced perceived predictability, highlighting its role in designing transparent and user-friendly swarm systems.
|
|
11:02-11:14, Paper ThJT5.2 | Add to My Program |
Enhancing Human-Robot Trust and Collaboration in Unmanned Surface Vehicles through Fault Diagnosis |
|
Hu, Yang (University College London), Aldhaheri, Sara (TII), Wang, Yanchao (University College London), Wu, Peng (University College London), Liu, Yuanchang (University College London) |
Keywords: Cooperation and Collaboration in Human-Robot Teams, Degrees of Autonomy and Teleoperation, Machine Learning and Adaptation
Abstract: Unmanned surface vehicles (USVs) require reliable fault diagnosis to ensure effective human-robot collaboration, yet operators often lack transparent, real-time insights into system failures, undermining trust. This paper presents a novel fault diagnosis system for USVs that enhances human-robot trust and collaboration by integrating model-based and data-driven approaches. The proposed method combines an Extended Kalman Filter (EKF) for physics-based state estimation with online logistic regression (OLR) for adaptive, real-time fault classification, detecting thruster faults in 0.5 seconds on average with 95% accuracy across 40 simulation runs per fault condition. The system demonstrates robust performance even under challenging environmental conditions, maintaining reliable detection and low false positive rates in strong ocean winds up to 15 m/s. A multimodal graphical user interface (GUI) communicates fault probabilities, disturbance trends, and vehicle status, making diagnostic reasoning transparent to operators. Results show the system identifies port and starboard thruster faults with high reliability and fast response. This system illustrates a practical step toward trustworthy, collaborative USV operations, aligning with the growing need for human-centric robotic autonomy.
|
|
11:14-11:26, Paper ThJT5.3 | Add to My Program |
Effects of Synchronous Movement on Human Trust in Robots |
|
Marji, Michelle (University of Wisconsin-Madison), Doshi, Megh Vipul (University of Wisconsin-Madison), Suresh, Siddharth (University of Wisconsin-Madison), Zinn, Michael (University of Wisconsin - Madison), Mutlu, Bilge (University of Wisconsin–Madison), Niedenthal, Paula M. (University of Wisconsin-Madison) |
Keywords: Cooperation and Collaboration in Human-Robot Teams
Abstract: Robot-human trust is an important concern as robots become integrated into human spaces. We tested a method grounded in psychological theory to increase human-robot trust: synchronous motion. Human participants completed a goal-oriented ball-moving task with a robotic arm, following sound cues that were synchronous or asynchronous with the robot's pacing. Participants were instructed to follow the sound cues without being informed about the synchrony manipulation. We found that participants in the synchrony condition trusted the robot to complete a new task comparable to the one they had completed significantly more than those in the asynchrony condition. However, this effect did not extend to harder tasks. Participants in the synchrony condition also believed that the robot had more influence on the outcomes of the new task compared to those in the asynchrony condition. On average, participants' trust in the robotic arm increased after completing the task, regardless of condition. We report findings from a thematic analysis demonstrating that participants in the synchrony condition found synchrony to be beneficial, while participants in the asynchrony condition found it cognitively taxing to be out-of-sync. Results from this work may be used to improve human-robot interactions in various contexts.
|
|
11:26-11:38, Paper ThJT5.4 | Add to My Program |
Beyond Failures: A Comparative Study of Two Different Trust Violations and Their Effect on Trust |
|
Söhngen, Yannic (University Duisburg-Essen), Guerra, Enrico (University of Duisburg-Essen), Prilla, Michael (University of Duisburg-Essen) |
Keywords: Cooperation and Collaboration in Human-Robot Teams, Creating Human-Robot Relationships
Abstract: As human-robot interaction (HRI) grows in importance both in practice and research, the emphasis often lies on close collaboration and the complex dynamics of humans and robots working together. This development necessitates a deep understanding of crucial aspects of that interaction. One of these aspects is trust. The dynamics of trust appear to be diverse, and several factors can have an impact. To deepen our understanding of the multiple facets of trust in HRI, this research investigates the effects of different types of trust violations on trust during a handover task. Contrary to previous works focusing primarily on robotic failures, this paper emphasizes trust violations that represent more realistic cases. Therefore, we compare a violation that can be seen as a clear failure with one that can be seen as a deviation from what a user would expect from the robot. Moreover, we implemented these two violations on two types of robots, a technical robotic arm and a humanoid robot, and compared their effects on trust. The empirical study was conducted with 40 participants. The results revealed no significant differences regarding the violations' impact on trust; nor did the comparison of the robot types yield any significant differences. Potential reasons for this, along with its implications, are discussed.
|
|
11:38-11:50, Paper ThJT5.5 | Add to My Program |
Trust-ACT: Integrating Trust in Imitation Learning |
|
Lingg, Nico (Imperial College London), Demiris, Yiannis (Imperial College London) |
Keywords: Machine Learning and Adaptation
Abstract: In human-robot interaction, trust traditionally serves as a performance metric rather than a control variable, limiting its potential to shape robot behavior. In this paper, we present Trust-ACT, a framework that integrates trust labels into a conditional variational autoencoder (CVAE) policy for imitation learning. Our approach trains on demonstrations where operators explicitly executed movements with varying characteristics—fast, smooth, and confident for high-trust, moderate with occasional pauses for medium-trust, and deliberately hesitant, jerky, and error-prone for low-trust—enabling the robot to generate trajectories that reflect these distinct trust-specific behaviors. We address a significant challenge in CVAE architectures, posterior collapse, through a partial input masking technique that preserves a meaningful and diverse latent space. Furthermore, we develop a trust prediction model that acts as a reward function, enabling a best-of-n strategy to identify trajectories that maintain trust characteristics while maximizing reliability. Experiments show our trust-conditioned policies maintain distinct motion characteristics across trust levels while our best-of-n sampling approach consistently improves success rates in all trust conditions. Our results demonstrate the promise of trust conditioning as a pathway to more controllable and human-aligned policy generation.
|
|
ThJT6 Special Session, Auditorium 6 |
Add to My Program |
SS: Bridging Trust and Context: Dynamic Interactions in HAI I |
|
|
Chair: Fukuchi, Yosuke | Tokyo Metropolitan University |
Co-Chair: Imai, Michita | Keio University |
|
10:50-11:02, Paper ThJT6.1 | Add to My Program |
Shaping Attitudes with a Multi-Attribute Utility Model in Personalized Human-Agent Persuasion (I) |
|
LYU, SIQI (Gifu University), Terada, Kazunori (Gifu University) |
Keywords: Linguistic Communication and Dialogue, Social Intelligence for Robots, Anthropomorphic Robots and Virtual Humans
Abstract: Attitude formation involves both rational beliefs (“should do”) and personal desires (“want to do”). We developed and evaluated an AI dialogue system for personalized persuasion addressing both rational and desire dimensions by integrating the Multi-Attribute Utility Model and the Elaboration Likelihood Model. Our system identifies each individual’s high-priority perspectives and adapts persuasive messages accordingly, leveraging both central-route and peripheral-route cues. Using nuclear power plant restarts in Japan as a test case, our experiment (N=148) compared three strategies: Positive, Negative, and Neutral. The Positive condition significantly increased both rational “Should” and subjective “Want” dimensions, while the Negative condition decreased them; the Neutral condition produced no notable changes. Results indicate that tailoring messages to core values is critical for effective persuasion, though overly forceful approaches risk psychological reactance. Future work should address dynamic persuasive route selection, balance persuasive intensity with user autonomy, and consider broader demographic diversity.
|
|
11:02-11:14, Paper ThJT6.2 | Add to My Program |
Exploring the Effect of Robot Assistance Costs on Trust and Prosocial Behavior through Video Stimuli (I) |
|
Hang, Chenlin (The Graduate University for Advanced Studies), Shiomi, Masahiro (ATR), Yamada, Seiji (National Institute of Informatics) |
Keywords: Cooperation and Collaboration in Human-Robot Teams, Robot Companions and Social Robots
Abstract: Understanding how different levels of robotic assistance influence human perception, trust, and prosocial behavior is critical in human-robot interaction (HRI) research. This study investigates how the cost of help provided by a robot affects human perception, trust, and prosocial behavior by presenting participants with a video-based experiment. In the experiment, participants observed a humanoid robot, Sota, providing assistance under two conditions: high-cost help, where the robot shared power from its own battery, and low-cost help, where the robot facilitated power transfer from an external mobile battery. Results showed that participants perceived the robot as more anthropomorphic and intelligent in the high-cost condition, with increased trust ratings in both performance and moral trust dimensions. However, no significant difference was observed in participants’ prosocial behavior towards the robot. These findings suggest that while higher-cost robotic assistance enhances perception and trust, it does not necessarily lead to greater prosocial responses from humans. This study contributes to the broader understanding of how varying levels of robotic assistance impact human social responses and has implications for designing socially interactive robots in cooperative settings.
|
|
11:14-11:26, Paper ThJT6.3 | Add to My Program |
Trust between Humans and Robots: Do People Perceive That Robots Trust Them? (I) |
|
Tsumura, Takahiro (Toyo University), Yamada, Seiji (National Institute of Informatics) |
Keywords: Creating Human-Robot Relationships, Embodiment, Empathy and Intersubjectivity, Social Intelligence for Robots
Abstract: As AI and robots become increasingly integrated into daily life, fostering trust in robots is essential for establishing long-term human-robot relationships. Enhancing people's trust in robots can help mitigate anxiety and aversion toward them. While previous research has primarily focused on trust in robots based on their performance and achievements, this study explores the impact of robots appearing to trust humans on human decision-making. In this study, a robot performed the Prisoner's Dilemma task three times with participants. We examined whether participants' choices in the game were influenced by the robot's eye color (blue, red), the robot's behavior (available, not available), and before/after the task using a three-factor mixed design. The first experiment assessed whether the robot appeared to trust participants using a questionnaire. The second measured participants' trust in the robot after the interaction. Analysis results indicated that as the number of Prisoner's Dilemma interactions increased, participants were more likely to betray the robot. However, trust in the robot increased after the task, suggesting that participants felt more trusted by the robot, which in turn enhanced their own trust in it. This study introduces a novel perspective on human-robot relationships, highlighting how making people feel trusted by a robot can foster greater trust toward it.
|
|
11:26-11:38, Paper ThJT6.4 | Add to My Program |
How Is Your Smile Being Interpreted? The Smiling Agents and Their Perceived Sincerity, Trustworthiness, and Friendliness (I) |
|
Hnin, Thiri Ko (Shizuoka University), Takeuchi, Yugo (Shizuoka University) |
Keywords: Non-verbal Cues and Expressiveness, Monitoring of Behaviour and Internal States of Humans, Social Presence for Robots and Virtual Humans
Abstract: Although a smile is usually interpreted positively in social settings, this interpretation does not always hold when background knowledge about the situation changes. This is especially true when one believes that the community one is in is corrupt and might cause harm in some way. We believe the same reasoning can be applied to HCI. But does that mean it is better not to smile in a corrupt community? To investigate, we designed an experiment with two conditions, Low-Risk and High-Risk. Our findings show a trend suggesting that friendliness is valued more in low-risk conditions, whereas in high-risk situations this effect appears diminished. This study contributes to understanding the functionality of a smile in HCI when risk is involved.
|
|
11:38-11:50, Paper ThJT6.5 | Add to My Program |
Performance Trust in High-Stakes Heterogeneous Human-Machine Teams: Insights from Interviews with Team CoSTAR and Mars 2020 (I) |
|
Kim, Boyoung (George Mason University Korea), Jannone, Jordan (California State University, Northridge), Lopez Rodriguez, Zulma E (California State University, Northridge), Huerta, Rachel (California State University, Northridge), Sagaran, Elijah (California State University, Northridge), Ochoa, Nathalie (California State University Northridge), Shubina, Daria (ARCS), Morrell, Benjamin (Jet Propulsion Laboratory, California Institute of Technology), Milano, Michael (NASA Jet Propulsion Laboratory, California Institute of Technology), Kaufmann, Marcel (California Institute of Technology), HO, NHUT (California State University, Northridge) |
Keywords: Cooperation and Collaboration in Human-Robot Teams, Creating Human-Robot Relationships, Ethical Issues in Human-robot Interaction Research
Abstract: We examined how trust is conceptualized toward both humans and robots in heterogeneous human-machine teams. Based on interviews from Team CoSTAR (DARPA Subterranean Challenge) and the Mars 2020 mission, we employed a mixed-methods approach combining content analysis, thematic analysis, and text mining. Performance trust emerged as the dominant dimension across both teammate types, though distinct factors shaped how it was applied to humans versus robots. Moral trust was limited, particularly toward robots. This work contributes to understanding how trust is formed and differentiated in high-stakes contexts involving real machines and real human teammates operating in environments with real consequences.
|
|
ThKT1 Regular Session, Auditorium 1 |
Add to My Program |
Virtual and Telepresence II |
|
|
Chair: Goodrich, Michael A. | Brigham Young University |
Co-Chair: de Heuvel, Jorge | University of Bonn |
|
12:50-13:02, Paper ThKT1.1 | Add to My Program |
Exploring Social Presence in Long-Distance Family Communication with Telepresence Robots in the Wild |
|
Seo, Jiyeon Amy (University of Michigan), Cho, Hyungjun (University of Florida), Cao, Huajie (Michigan State University), Lee, Hee Rin (Michigan State University) |
Keywords: Virtual and Augmented Tele-presence Environments, Social Presence for Robots and Virtual Humans
Abstract: As families increasingly live apart due to reasons such as education, employment, or personal independence, they rely on computer-mediated communication (CMC) tools to stay connected. While tools like audio and video calls help bridge the distance, they often fall short in fostering social presence. This study explores how telepresence robots, with their physicality and mobility, may enhance social presence in remote family communication. We hypothesize that telepresence robots will increase perceived social presence compared to current CMC tools. To test this, we conducted a within-subjects study with eight families, deploying telepresence robots in their homes for two weeks. The results show that, among the five factors encompassing social presence—perceived presence, psychological closeness, seamless communication, conversational involvement, and communication privacy—the first four increased significantly after using telepresence robots, while communication privacy showed no effect.
|
|
13:02-13:14, Paper ThKT1.2 | Add to My Program |
Bot Appetit! Exploring How Robot Morphology Shapes Perceived Affordances Via a Mise En Place Scenario in a VR Kitchen |
|
Ringe, Rachel (University of Bremen), Thiele, Leandra (University of Bremen), Pomarlan, Mihai (Universitatea Politehnica Timisoara), Zargham, Nima (University of Bremen, Digital Media Lab), Nolte, Marc Robin (University of Bremen), Hurrelbrink, Lars (University of Bremen), Malaka, Rainer (University of Bremen) |
Keywords: Assistive Robotics, User-centered Design of Robots
Abstract: This study explores which factors of the visual design of a robot may influence how humans would place it in a collaborative cooking scenario as well as how these features may influence task delegation. Human participants were asked to set up a kitchen for cooking alongside a robot companion while considering the robot's appearance. We collected multimodal data for the arrangements created by the participants, transcripts of their think-alouds as they were performing the task, and transcripts of their answers to structured post-task questionnaires. Based on analyzing this data, we formulate several hypotheses: humans prefer to collaborate with biomorphic robots; human beliefs about sensory capabilities of robots are less influenced by the visual aspect of the robot than beliefs about action capabilities; and humans will implement fewer avoidance strategies when sharing space with gracile robots. We intend to verify these hypotheses in follow-up studies.
|
|
13:14-13:26, Paper ThKT1.3 | Add to My Program |
To Physically Embody or Not? A Comparison of Virtual vs. Physical Robots As Exercise Coaches for Older Adults |
|
Lehocki, Fedor (Slovak University of Technology in Bratislava), Dudasko, Stefan (Slovak University of Technology in Bratislava), Vrins, Anita (Vrije Universiteit Amsterdam), Tirpakova, Veronika (Faculty of Physical Education and Sport, Comenius University Bra), Discantiny, Imrich (Faculty of Informatics and Information Technologies, Slovak Univ), Putekova, Silvia (Faculty of Health Care and Social Work, Trnava University), Alimardani, Maryam (Tilburg University) |
Keywords: Applications of Social Robots, Embodiment, Empathy and Intersubjectivity
Abstract: As social robots gain prominence in supporting older adults’ health and well-being, understanding their effectiveness compared to virtual agents remains critical. This study investigated older adults’ perceptions of a physically embodied robot versus its virtual counterpart when taking on the role of an exercise coach. We recruited 25 healthy older adults, each of whom performed a series of exercises with both the physical NAO robot and its virtual simulation displayed on a computer screen. Participants’ experiences were assessed using the Unified Theory of Acceptance and Use of Technology (UTAUT) and the User Engagement Scale (UES) questionnaires collected after each condition. Results indicated that the Perceived Sociability of the NAO robot was significantly higher in the physically embodied condition compared to the virtual condition. However, no significant differences were found in Anxiety, Attitude, Perceived Enjoyment, Perceived Usefulness, Social Intelligence, or Trust. Similarly, the physically embodied NAO scored higher in Perceived Usability and Aesthetic Elements, but no significant differences were observed in Focused Attention and Reward Factor. These results suggest that physical embodiment could enhance perceptions of sociability and usability; however, it does not necessarily impact all engagement-related factors. Our findings contribute to the design of future socially assistive technologies in eldercare.
|
|
13:26-13:38, Paper ThKT1.4 | Add to My Program |
Enhancing Social Presence in Dyadic Text-Chatting with a Robot Avatar Expressing Users' Actions |
|
Nakamura, Yasutaka (Nagoya Institute of Technology), Harata, Seiichi (Nagoya Institute of Technology), Sakuma, Takuto (Nagoya Institute of Technology), Tanaka, Yoshihiro (Nagoya Institute of Technology), Nankaku, Yoshihiko (Nagoya Institute of Technology), Kato, Shohei (Nagoya Institute of Technology) |
Keywords: Social Presence for Robots and Virtual Humans, Non-verbal Cues and Expressiveness, Applications of Social Robots
Abstract: Text-based Computer-Mediated Communication (CMC) diminishes the social presence of the conversational partner owing to the lack of nonverbal cues exchanged in face-to-face (FtF) communication. This study aims to enhance social presence in a dyadic text chat by employing a robot avatar that supplements nonverbal cues. In particular, we propose a chat system with a robot avatar that can express gestures in response to the user's actions (typing, sending messages) and read the user's messages aloud. In the evaluation experiment, nine pairs of participants (18 participants in total) used two types of chat system: the proposed system and a baseline system in which the robot avatar only reads messages aloud. As a result, the proposed robot avatar significantly increased social presence compared to the baseline system. Furthermore, the proposed gesture expressions significantly improved the ease of chatting. Specifically, our results suggest that robot gestures in response to typing might have an effect similar to nonverbal cues in FtF communication.
|
|
13:38-13:50, Paper ThKT1.5 | Add to My Program |
Training of GUI-Based Avatar Robot Operation through Sharing Operation with Expert |
|
Ota, Tomoyuki (Nagoya Institute of Technology), Nishimura, Takumi (Nagoya Institute of Technology), Takeuchi, Kazuaki (Ory Laboratory), Yoshifuji, Ory (Ory Laboratory), Hatada, Yuji (The University of Tokyo), Tanaka, Yoshihiro (Nagoya Institute of Technology) |
Keywords: Social Learning and Skill Acquisition Via Teaching and Imitation, Virtual and Augmented Tele-presence Environments, Embodiment, Empathy and Intersubjectivity
Abstract: This study aims to investigate the effects of sharing operations on the training of an avatar robot. In this study, we employed an operational platform for the avatar robot called OriHime-T, which integrates head and hand movements with wheel-based mobility. This platform allows both an expert and a learner to share the same control screen during training. Two tasks were conducted: the Time Attack task and the Candy Delivered task, in which candy is delivered to a customer. The experiments showed that, compared to the conventional method in which experts observe learners' operations and provide advice, our proposed method, which incorporates immediate intervention, led to a reduction in Time Attack completion times and an improvement in the success rate of the Candy Delivered task. These results suggest that sharing operations effectively facilitates the transmission of abstract judgment criteria and operational sensations.
|
|
ThKT2 Special Session, Auditorium 2 |
Add to My Program |
SS: Theory of Mind in Human-Robot Interaction |
|
|
Chair: Holthaus, Patrick | University of Hertfordshire |
|
12:50-13:02, Paper ThKT2.1 | Add to My Program |
Leash As a Cue: Visual Indicators for Third-Party Acceptance across Resistance Levels (I) |
|
Hanawa, Momo (The University of Tokyo), Tokida, Satomi (The University of Tokyo), Ishiguro, Yoshio (The University of Tokyo) |
Keywords: User-centered Design of Robots, Non-verbal Cues and Expressiveness, Robot Companions and Social Robots
Abstract: Unexpected robot encounters in public spaces can cause discomfort for third parties, yet the acceptability of accompanying robots is not yet fully understood. Visual cues strongly influence robot impressions; however, the interaction between explicit visual indicators and individual robot resistance in determining acceptability remains unexplored. We investigated how visual relationship indicators—connection visibility (the physical linkage between the robot and handler) and control visibility (the evident authority of the handler)—influence acceptance based on individuals' levels of robot resistance. In our experiment, 23 participants encountered a mobile robot under three operation methods: Autonomous (no visual indicators), Joystick (only control visibility), and Leash (both connection and control visibility), with participants divided into high-resistance (n=12) and low-resistance (n=11) groups based on their NARS scores. Results indicate that Leash had the highest acceptability, with high-resistance participants showing significant differences across methods and benefiting from explicit visual indicators, unlike low-resistance participants who were largely unaffected. These findings offer important design implications for accompanying robots in public spaces, suggesting that employing visually explicit relationship indicators is an effective strategy for enhancing acceptability, particularly among individuals with robot resistance.
|
|
13:02-13:14, Paper ThKT2.2 | Add to My Program |
"Once Upon a Time...": An Adaptive Robotic Behavior for Engaging Cooperative Storytelling (I) |
|
Barbato, Mario (University of Naples "Federico II"), Raggioli, Luca (University of Naples Federico II), Rossi, Silvia (Universita' Di Napoli Federico II) |
Keywords: Storytelling in HRI, Multimodal Interaction and Conversational Skills, Machine Learning and Adaptation
Abstract: Assistive robots can be valuable conversational partners for cooperative tasks, such as storytelling, fostering creativity and social bonding. Through the use of foundational models, such as LLMs, robots can more effectively and naturally generate story narrations that are enjoyed by humans. In such a scenario, however, it is fundamental to consider the users' feedback and reactions to adapt the story and the interaction in a way that actively sustains their interest. In this work, we propose an LLM-assisted storytelling generation method that employs different robot communication modalities to stimulate the user's behavioral, affective, and cognitive engagement during the interaction and affect the narration of the story. Moreover, we investigated the introduction of an adaptive interaction policy to choose the most suitable actions based on the user's observed engagement. We conducted a user study with 36 participants to assess our proposed approach, and demonstrated that it manages to effectively assist participants in an engaging way, with the robot being perceived as friendly and trustworthy. Moreover, policy adaptation resulted in the robot being perceived with higher arousal, while a more interactive approach led to better perceived social intelligence.
|
|
13:14-13:26, Paper ThKT2.3 | Add to My Program |
Perceptions of a Robot's Emotional Expressions Are Influenced by Users' Emotional States (I) |
|
Shenoy, Sudhir (University of Virginia), Clark, Matthew (University of Virginia), Islam, Ariful (University of Virginia), Doryab, Afsaneh (University of Virginia) |
Keywords: Affective Computing, Cognitive Skills and Mental Models
Abstract: Humans tend to project their own emotions onto others, perceiving others’ emotional states as more similar to their own. Yet, no study has investigated this emotional projection onto robots, despite robots frequently being anthropomorphized as having emotions. In this work, we demonstrate how participants’ emotional state influences their perception of a robot’s emotions. We conducted a user study in which participants felt varying emotions and stress levels. In each state, they observed a robot perform an emotionally ambiguous action and indicated the emotion they believed the robot was communicating. Our results indicate that our participants projected their own emotional state onto the robot. Increased stress manifested in participants as increased uncertainty about the robot’s emotions. As participants became more familiar with the robot, their felt and perceived emotions remained partially aligned; however, the effect of stress diminished. Our findings suggest future social robotic studies must consider participants’ emotional states in their evaluations.
|
|
13:26-13:38, Paper ThKT2.4 | Add to My Program |
ToMCAT: Benchmark for Socially Assistive Robots with Theory of Mind of Children Assembling Tangram Puzzles (I) |
|
Wilson, Jason (Franklin & Marshall College), Rabkina, Irina (Barnstorm Research Corp), Roberts, Mark (Naval Research Laboratory), Hiatt, Laura M. (Naval Research Laboratory) |
Keywords: Social Intelligence for Robots, Monitoring of Behaviour and Internal States of Humans, Assistive Robotics
Abstract: Assistive robots will be more effective if they can accurately reason about the intentions and beliefs of the user (i.e., have Theory of Mind (ToM)). ToM benchmarks allow us to examine how well an artificial agent (e.g., robot) is able to do ToM reasoning in a given scenario. However, there is a need for ToM benchmarks that are more representative of the challenges faced in assistive robotics. Existing benchmarks from AI and HRI make simplifying assumptions, such as simply defined goals, plans that are indicative of goals, and no user errors. To address the challenges from relaxing these assumptions, we propose the Theory of Mind of Children Assembling Tangrams (ToMCAT) dataset. The data is derived from videos of children building tangram puzzles while being assisted by a social robot. As a baseline benchmark, we evaluated how well two approaches can recognize which puzzle the child is building based on a single observation. Analogical reasoning correctly recognized the puzzle more than 75% of the time and had perfect accuracy for puzzle states that were close to complete. However, an out-of-the-box commercial LLM correctly recognized the puzzle only 60% of the time and was accurate on less than 80% of the completed puzzles. Our results suggest that the ToMCAT dataset provides challenges for recognizing the intended puzzle of a child. Furthermore, the dataset provides opportunities to examine additional ToM reasoning capabilities. ToMCAT serves as a useful benchmark to facilitate the advancement of ToM reasoning for assistive robotics.
|
|
13:38-13:50, Paper ThKT2.5 | Add to My Program |
Behavioral Variability and Mental State Attribution: Exploring Human Perceptions of Robot Theory of Mind in an Inverted Paradigm |
|
Cimafonte, Martina (University of Naples Parthenope), D'Errico, Lorenzo (University of Naples Federico II), Matarese, Marco (Italian Institute of Technology), Staffa, Mariacarla (University of Naples Parthenope) |
Keywords: Creating Human-Robot Relationships, Embodiment, Empathy and Intersubjectivity, Robot Companions and Social Robots
Abstract: Understanding and ascribing intentions and beliefs to others based on observed behavior is a key aspect of people’s everyday social lives. This is also crucial in Human-Robot Interaction (HRI), because both humans and robots need to make sense of each other’s behavior in collaborative settings. Such a complex mechanism is known as the Theory of Mind (ToM), and it still holds secrets, although it has been investigated in HRI for several years. This study focuses on second-order ToM attributions using an inverted Sally-Anne paradigm, a well-established False Belief task. The humanoid robot Pepper, equipped with vision algorithms, assumes the role of Anne and predicts where Sally (a human researcher) will search for a ball, contingent on Sally’s presence or absence during its relocation by a neutral experimenter. Two scenarios are tested: Sally exits the room (false belief) or observes the relocation (true belief). We had two experimental conditions, in which the Pepper robot exhibited passive (monotonic voice, rigid gestures) and active (dynamic voice, fluid gestures) behaviors, respectively. Participants, as external observers, watched video recordings of the interactions and answered structured questions to assess how behavioral cues influence ToM attributions to the robot. Results showed that people tended to ascribe high-level ToM skills to the active robot rather than to the passive one, highlighting the importance of designing robots with appropriate expressive behaviors. By examining how humans interpret a robot’s capacity for second-order ToM, this work advances our understanding of the cognitive assumptions people make about artificial agents and offers a foundation for developing socially intelligent systems that can seamlessly integrate into collaborative environments.
|
|
13:50-14:02, Paper ThKT2.6 | Add to My Program |
Cognitive Agentic AI: Probabilistic Novelty Detection for Continual Adaptation in HRI (I) |
|
Ghamati, Khashayar (University of Hertfordshire), Amirabdollahian, Farshid (The University of Hertfordshire), Faria, Diego (University of Hertfordshire), Zaraki, Abolfazl (University of Hertfordshire) |
Keywords: Machine Learning and Adaptation, Robot Companions and Social Robots, Cognitive Skills and Mental Models
Abstract: Adapting to novel tasks in human-robot interaction (HRI) is crucial for long-term autonomy, yet remains a major challenge for autonomous agents deployed in unpredictable open-world settings. This paper introduces CAPA-AI, a novel framework that integrates probabilistic novelty detection with continual post-deployment adaptation achieved via transfer learning to address this challenge. The framework’s novelty detection component employs conditional probability and the Jaccard Index to identify unfamiliar tasks by quantifying their deviation from the agent’s knowledge base of previously learned tasks. Upon detecting a novel task, the agent utilises transfer learning to repurpose prior knowledge and update its models without retraining from scratch. We detail the design of CAPA-AI, including an isolated learning phase for initial skill acquisition and the construction of a dynamic knowledge base. The complete system was deployed on a social robot in real-world HRI scenarios to evaluate its performance. Experimental results demonstrated that the agent accurately detects novel tasks and adapts to them, achieving adaptation and novelty detection accuracies of 80% and 89%, respectively. These findings underscore the efficacy of the proposed approach and highlight a significant step towards robust open-world deployment of AI agents in HRI, where continuous adaptation and the safe handling of unforeseen tasks are essential.
|
|
ThKT3 Special Session, Auditorium 3 |
Add to My Program |
SS: Adaptive and Adaptable Robots in Social Interactions |
|
|
Chair: Andriella, Antonio | Institut De Robòtica I Informàtica Industrial |
Co-Chair: Louie, Wing-Yue Geoffrey | Oakland University |
|
12:50-13:02, Paper ThKT3.1 | Add to My Program |
Effects of Interpretability Methods for Understanding Failures in Social Robot Learning (I) |
|
Tyshka, Alexander (Oakland University), Louie, Wing-Yue Geoffrey (Oakland University) |
Keywords: Social Learning and Skill Acquisition Via Teaching and Imitation, User-centered Design of Robots, Robot Companions and Social Robots
Abstract: Learning from demonstration (LfD) is a common method for teaching novel tasks to social robots, but non-experts can struggle to teach optimally without understanding the robot's failures. Interpretability techniques present a possible solution to this problem, but research on interpretability in social robotics has been limited and focuses on improving users' perceptions of robots rather than helping them understand the robot's internal model and the true causes of its failures. We address this gap with an online study evaluating how well non-experts can diagnose a social robot's failures during learning using causal explanations and a novel visual transparency interface. While neither method improved performance for all users, participants who displayed high interaction with the visual interface showed a significantly improved ability to diagnose errors. Our findings suggest visual interfaces may be a promising alternative to causal explanations for teaching social robots and highlight the challenges that remain in helping non-experts understand social robot failures during LfD.
|
|
13:02-13:14, Paper ThKT3.2 | Add to My Program |
Can You Handle the Truth? The Effects of Robots Correcting Users' Misalignment on Trust and Perceived Social Competence (I) |
|
Hellou, Mehdi (University of Manchester), Angelopoulos, Georgios (Interdepartmental Center for Advances in Robotic Surgery - ICARO), Vinanzi, Samuele (Sheffield Hallam University), Rossi, Alessandra (University of Naples Federico II), Rossi, Silvia (Universita' Di Napoli Federico II), Cangelosi, Angelo (University of Manchester) |
Keywords: Cooperation and Collaboration in Human-Robot Teams, Assistive Robotics, Cognitive Skills and Mental Models
Abstract: For social robots to collaborate effectively, they must infer and correct false human beliefs, especially when misconceptions directly impact task outcomes or pose safety risks to humans. In this work, we investigated whether a robot's ability to detect and rectify users' false beliefs improves trust and perceived social competence. In an in-person between-subject study, 98 participants collaborated with a robot to solve a task. Participants interacted with either a robot that actively corrected their false beliefs by using Theory of Mind or one that complied with their incorrect instructions. Contrary to expectations, trust, mental state attribution, and perceived warmth or competence did not differ between groups. The results also showed that human reluctance to trust the robot's input persisted, suggesting that belief correction alone cannot overcome relational barriers. In addition, the study showed that participants who trusted the robot's corrections perceived it as more socially attuned.
|
|
13:14-13:26, Paper ThKT3.3 | Add to My Program |
Gender Differences in Learning-By-Teaching a Social Robot: Insights from a Primary School Study (I) |
|
Tarakli, Imene (Sheffield Hallam University), Vinanzi, Samuele (Sheffield Hallam University), Di Nuovo, Alessandro (Sheffield Hallam University) |
Keywords: Robots in Education, Therapy and Rehabilitation, Child-Robot Interaction
Abstract: Social robots hold great promise for supporting children’s learning, yet their effectiveness may depend on how well they align with individual learner characteristics. This study investigates the role of gender in shaping the outcomes of Learning-by-Teaching (LbT) with a social robot. In a primary school setting, 53 children (aged 8–9) participated in French language tasks under either a robot-assisted LbT condition or a self-practice condition. While no significant effects were found, girls consistently showed higher learning and retention gains when teaching the robot compared to practicing alone, an effect not observed in boys. Contrary to expectations, girls and boys spent similar time on task, used help equally, and reported comparable perceptions of the learning activity. Exploratory analyses revealed that girls who found the task more difficult learnt more, aligning with theories of desirable difficulty, and that higher learning gains were linked to lower perceived competence, suggesting possible signs of Imposter Syndrome. These findings highlight the complex interplay between cognitive and emotional factors in robot-assisted learning and emphasise the need for personalised educational technologies that adapt not only to performance but also to learner identity and psychological experience.
|
|
13:26-13:38, Paper ThKT3.4 | Add to My Program |
A Robotic Assistant for Personalised Diet Recommendation (I) |
|
Raggioli, Luca (University of Naples Federico II), Ciccarelli, Francesco (University of Naples Federico II), Rossi, Silvia (Universita' Di Napoli Federico II), Rossi, Alessandra (University of Naples Federico II) |
Keywords: Applications of Social Robots, Assistive Robotics, Machine Learning and Adaptation
Abstract: Food recommender systems have become valuable tools across various domains, including health-oriented applications that provide personalised dietary advice. Recent studies have shown the potential of integrating recommender systems with assistive robots to promote healthy eating habits, especially among older adults. While transformers and Large Language Models have shown advanced reasoning capability for effective recommendation systems, they might have limited knowledge and understanding of the users' personal preferences and requirements. This lack of information can negatively affect their effectiveness and users' satisfaction. We present a novel transformer-assisted, multi-interface recommendation system for generating food recommendations based on user profiles, using a custom dataset including dietary and nutritional information. We conducted a user study with 40 participants to evaluate whether a robot is able to persuade users to accept its food recommendations. Our study found that participants responded positively to the interactions with the robot, showing high satisfaction and trust in the recommendations.
|
|
13:38-13:50, Paper ThKT3.5 | Add to My Program |
Exploring the Potential of Robotic Coaching in eSports: A Pilot Study on Social Robots for Gaming Performance Enhancement (I) |
|
Pallonetto, Luca (University of Naples Federico II), D'Arco, Luigi (University of Naples Federico II), Rossi, Silvia (Universita' Di Napoli Federico II) |
Keywords: Robots in art and entertainment, Social Intelligence for Robots, Machine Learning and Adaptation
Abstract: Social robots have gained significant importance in Human-Robot Interaction, particularly in domains requiring personalized support, such as education, healthcare, and entertainment. The rise of e-sports has created a demand for effective coaching systems that can provide tailored guidance to players, paving the way for the integration of social robots as e-coaches. This pilot study explores the role of Furhat, a social robot, as an e-coach in a football video game. Thanks to computer vision techniques, Furhat analyzes the human player's performance in real-time and provides adaptive feedback tailored to individual gameplay styles. The study investigates the effectiveness of the robot in providing both technical guidance and emotional support. Forty participants, divided into casual and hardcore gamers, engaged with Furhat in a controlled experimental setting. Results revealed that casual gamers sought general guidance and linguistic clarity, while hardcore gamers prioritized context-relevant, well-timed feedback. Although robot gender had a minimal overall impact, a statistically significant interaction was observed between robot gender and gamer type on adaptability perception (p = 0.046). Performance data showed that 85% of participants either maintained or improved their gameplay, with a majority reporting positive comfort and engagement levels during robotic interaction.
|
|
13:50-14:02, Paper ThKT3.6 | Add to My Program |
The Impact of Adaptive Emotional Alignment on Mental State Attribution and User Empathy in HRI (I) |
|
Buracchio, Giorgia (University of Turin), Callegari, Ariele (University of Turin), Donini, Massimo (University of Turin), Gena, Cristina (Università Di Torino), Lieto, Antonio (Università Di Torino), Lillo, Alberto (University of Turin), Mattutino, Claudio (Università Di Torino), Mazzei, Alessandro (Università Di Torino - Dipartimento Di Informatica), Pigureddu, Linda (University of Turin), Striani, Manuel (DiSIT - University of Piemonte Orientale), Vernero, Fabiana (Università Degli Studi Di Torino) |
Keywords: Affective Computing, Motivations and Emotions in Robotics, Robot Companions and Social Robots
Abstract: The paper presents an experiment on the effects of adaptive emotional alignment between agents, considered a prerequisite for empathic communication, in Human-Robot Interaction (HRI). Using the NAO robot, we investigate the impact of an emotionally aligned, empathic dialogue on these aspects: (i) the robot's persuasive effectiveness, (ii) the user's communication style, and (iii) the attribution of mental states and empathy to the robot. In an experiment with 42 participants, two conditions were compared: one with neutral communication and another where the robot provided responses adapted to the emotions expressed by the users. The results show that emotional alignment does not influence users' communication styles or have a persuasive effect. However, it significantly influences the attribution of mental states to the robot and its perceived empathy.
|
|
ThKT4 Regular Session, Blauwe Zaal |
Add to My Program |
Applications of Social Robots XI |
|
|
Chair: Brandao, Martim | King's College London |
|
12:50-13:02, Paper ThKT4.1 | Add to My Program |
Kinesthetic vs Imitation: Analysis of Usability and Workload of Programming by Demonstration Methods |
|
Maric, Bruno (University of Zagreb, Faculty of Electrical Engineering and Comp), Zoric, Filip (University of Zagreb, Faculty of Electrical Engineering and Comp), Petric, Frano (University of Zagreb, Faculty of Electrical Engineering and Comp), Orsag, Matko (University of Zagreb, Faculty of Electrical Engineering and Comp) |
Keywords: Programming by Demonstration, HRI and Collaboration in Manufacturing Environments, Human Factors and Ergonomics
Abstract: Programming by Demonstration (PbD) is a simple and efficient way to program robots without coding. PbD enables unskilled operators to demonstrate and guide robots to execute even the most complex tasks. This work aims to compare two approaches to PbD with a comprehensive user study focusing on a common human skill. Each participant had to demonstrate to a robot how to draw a simple pattern using both a virtual marker and kinesthetic teaching. To evaluate differences between these demonstration approaches, we conducted a user study with 24 participants, benchmarking programmed trajectories, NASA raw task load index (rTLX), and system usability scale (SUS). We evaluated the similarity of the executed trajectories by measuring the difference between the demonstrated and ideal trajectories. Our results show that human demonstration using a virtual marker is on average 8 times faster, superior in terms of quality, and imposes 2 times less overall workload than kinesthetic teaching.
|
|
13:02-13:14, Paper ThKT4.2 | Add to My Program |
Leveraging Interface Force: A Potential Alternative Metric for Evaluating Back Support Exoskeletons |
|
Leong, Joshua Wei Ren (National University of Singapore), Kwok, Thomas M. (University of Waterloo) |
Keywords: Assistive Robotics, Evaluation Methods, Human Factors and Ergonomics
Abstract: This study presents a novel and practical method for evaluating back support exoskeletons (BSEs) in real-world scenarios where electromyography (EMG) and metabolic cost (MC) evaluations would be challenging. We propose using interface force as a potential alternative metric for exoskeleton evaluation. To measure the interface force between the user and the BSE, we integrated a compact load cell into the exoskeleton’s thigh cuff. This small load cell allows for precise force measurements without significantly affecting the BSE's kinematics or inertia. Unlike EMG and MC evaluations, interface force is unaffected by sweat and other human factors, and measuring it does not hinder users’ movements. This enables real-time assessment of the BSE's assistance in both laboratory and real-world workplace environments. Experimental data showed a statistically significant, strong correlation between peak interface force and peak EMG performance reduction during a repetitive lifting task. This innovative sensing interface offers a promising alternative to EMG measurements, facilitating more reliable and practical evaluation of BSE performance in field tests in real-world workplaces.
|
|
13:14-13:26, Paper ThKT4.3 | Add to My Program |
Evaluating Appearance-Based Gaze Pattern for Human-Robot Interaction |
|
Cheng, Linlin (Vrije Universiteit Amsterdam), Hindriks, Koen (Vrije Universiteit Amsterdam), de Bruijn, Mark (University of Massachusetts Lowell), Belopolsky, Artem (Vrije Universiteit Amsterdam) |
Keywords: Evaluation Methods, Detecting and Understanding Human Activity, Non-verbal Cues and Expressiveness
Abstract: Appearance-based gaze estimation, an accessible and unobtrusive alternative to eye tracking, has advanced significantly, yet its adoption in human-robot interaction (HRI) remains limited. A key barrier is the lack of clarity on how it compares to high-precision eye trackers. To address this, we evaluate this method against eye-tracker glasses in an HRI setting using calibration and attention detection tasks. We assess performance across different cameras (4K and the robot's built-in camera) and participant conditions (with and without glasses). Results show that the 4K camera and participants without glasses yield higher accuracy and precision. With a simple offset correction, this method achieves performance comparable to eye-tracker glasses for average gaze patterns but struggles to detect gaze patterns over time. It also demonstrates potential for real-time robot attention detection. We conclude that appearance-based gaze estimation is a viable, cost-effective alternative to traditional eye tracking in HRI, particularly for average gaze pattern detection.
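The "simple offset correction" could look like the following minimal sketch (an assumption, not the paper's implementation): fit a constant 2D offset from calibration points where the true gaze target is known, then apply it to subsequent estimates.

```python
# Minimal sketch (assumed): constant offset correction for gaze estimates,
# fitted from calibration points with known targets. Values are illustrative.

def fit_offset(estimates, targets):
    """Mean (dx, dy) offset between estimated and true gaze points."""
    n = len(estimates)
    dx = sum(t[0] - e[0] for e, t in zip(estimates, targets)) / n
    dy = sum(t[1] - e[1] for e, t in zip(estimates, targets)) / n
    return dx, dy

def correct(point, offset):
    """Apply a fitted constant offset to a raw gaze estimate."""
    return (point[0] + offset[0], point[1] + offset[1])

# Calibration: raw estimates are consistently shifted by (-0.05, +0.02).
est = [(0.15, 0.08), (0.55, 0.48), (0.95, 0.88)]
tgt = [(0.20, 0.06), (0.60, 0.46), (1.00, 0.86)]
off = fit_offset(est, tgt)
print(correct((0.35, 0.28), off))  # approximately (0.40, 0.26)
```

A constant offset removes systematic bias in average gaze position, which is consistent with the abstract's finding that the method matches eye trackers on average patterns but not on fine-grained temporal dynamics.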
|
|
13:26-13:38, Paper ThKT4.4 | Add to My Program |
Human-Inspired Compliance Discrimination with a Multi-Degrees of Freedom Robotic Manipulator |
|
Zinelli, Lucia (University of Pisa), Pagnanelli, Giulia (University of Pisa), Bianchi, Matteo (University of Pisa) |
Keywords: Degrees of Autonomy and Teleoperation
Abstract: In humans, touch-mediated compliance perception integrates sensory feedback with adaptive motor control strategies that regulate internal muscle co-contraction. This mechanism enables the extraction of meaningful information from contact with objects, allowing for precise compliance discrimination. Inspired by this capability, in [1] we developed a biomimetic approach that combines a soft optical tactile sensor, the TacTip (which mimics the main structure of the human fingertip), with a computational model of human touch (tactile flow) and a single degree of freedom (dof) Variable Stiffness Actuator (VSA) to infer the compliance of the explored specimen. By mapping human muscular co-contraction patterns to the control of the VSA, which emulates the agonist-antagonist behaviour of human muscles, we demonstrated that our model-based estimation approach achieved high accuracy. In this work, we demonstrate the effectiveness of our method on multi-dof robotic platforms. The goal is to contribute towards the deployment of robots with advanced perceptual and motor capabilities, working alongside and with humans. We considered a 7-dof robotic manipulator, the Franka Emika Panda, and mapped human co-contraction profiles through Cartesian impedance regulation. We achieved a maximum compliance estimation error of 6%, with no statistically significant differences compared to the results obtained with the single-dof VSA, confirming the robustness and generalizability of our technique on more complex robotic systems.
|
|
13:38-13:50, Paper ThKT4.5 | Add to My Program |
Assessing Pedestrian Behavior Around Autonomous Cleaning Robots in Public Spaces: Findings from a Field Observation |
|
Raab, Maren (Ulm University), Miller, Linda (Ulm University), Zeng, Zhe (Ulm University), Jansen, Pascal (Ulm University, Institute of Media Informatics), Baumann, Martin (Ulm University), Kraus, Johannes (Johannes-Gutenberg University of Mainz) |
Keywords: Human Factors and Ergonomics, Interaction Kinesics, User-centered Design of Robots
Abstract: As autonomous robots become more common in public spaces, spontaneous encounters with laypersons are more frequent. To manage these encounters, robots need to be equipped with communication strategies that enhance momentary transparency and reduce the probability of critical situations. Adapting these robotic strategies requires consideration of robot movements, environmental conditions, and user characteristics and states. While numerous studies have investigated the impact of distraction on pedestrians' movement behavior, limited research has examined this behavior in the presence of autonomous robots. This research addresses the impact of robot type and robot movement pattern on distracted and undistracted pedestrians' movement behavior. In a field setting, unaware pedestrians were videotaped while moving past two working, autonomous cleaning robots. Out of N = 498 observed pedestrians, approximately 8% were distracted by smartphones. Distracted and undistracted pedestrians did not exhibit significant differences in their movement behaviors around the robots. Instead, both the larger sweeping robot and the off-set rectangular movement pattern significantly increased the number of lateral adaptations compared to the smaller cleaning robot and the circular movement pattern. The off-set rectangular movement pattern also led to significantly more close lateral adaptations. Depending on the robot type, the movement patterns led to differences in the distances of lateral adaptations. The study provides initial insights into pedestrian movement behavior around autonomous cleaning robots in public spaces, contributing to the growing body of human-robot interaction research in the field.
|
|
ThKT5 Regular Session, Auditorium 5 |
Add to My Program |
TRUST Regular Session II |
|
|
Chair: Fant-Male, James | Tampere University |
Co-Chair: Radka, Basia | University of Washington |
|
12:50-13:02, Paper ThKT5.1 | Add to My Program |
Anthropomorphic Robots: Form Matters Especially in Case of Failure |
|
Söhngen, Yannic (University of Duisburg-Essen), Alsagara, Mohaimn, Layth, Abbas (University of Duisburg-Essen), Prilla, Michael (University of Duisburg-Essen) |
Keywords: Anthropomorphic Robots and Virtual Humans, HRI and Collaboration in Manufacturing Environments, User-centered Design of Robots
Abstract: In human-robot interaction, the effect of anthropomorphism on trust is an interesting topic for researchers. However, there is still no clear understanding of how an anthropomorphic design affects trust. To address this, we designed a comparative study with a human-shaped robot and a technically shaped robot that could perform the same cooperative task. The robots first performed the task correctly and then with a failure. In the correct condition, both robots were trusted similarly by the participants. However, in the failure condition, the participants trusted the anthropomorphically shaped robot more. This indicates that anthropomorphism affects trust only in the case of failure, suggesting that anthropomorphism dampens the erosion of trust after a failure. Future research should consider how different types of manipulations for anthropomorphism, or combinations of manipulations, affect trust.
|
|
13:02-13:14, Paper ThKT5.2 | Add to My Program |
Trust Dynamics in Augmented Reality-Mediated Human-Robot Teams: Impact of Performance, Feedback, and Error Severity |
|
Dossett, Benjamin (Parsons Corporation), Sharma, Janamejay (University of Denver), Gregory, Jason M. (US Army Research Laboratory), Haring, Kerstin Sophie (University of Denver), Reardon, Christopher M. (MITRE) |
Keywords: Virtual and Augmented Tele-presence Environments, Cooperation and Collaboration in Human-Robot Teams
Abstract: As robots evolve into collaborators in human-robot teams, appropriately calibrated trust becomes crucial. This study investigates trust dynamics in an Augmented Reality-based human-robot teaming system, focusing on the interplay between robot performance and robot-to-human feedback. In an experiment with 32 participants, we examined how robot feedback influences user trust, particularly when it is mismatched with robot performance. The results show that while robot-to-human feedback does not significantly affect trust on its own, it positively affects user responses when matched to performance. Robot performance had a stronger influence on trust than feedback, and error severity significantly impacted trust levels. These findings contribute to understanding trust calibration in human-robot interactions and provide insight for designing effective trust-aware robotic systems, addressing critical gaps in existing research, and offering implications for improving human-robot collaboration across various domains.
|
|
13:14-13:26, Paper ThKT5.3 | Add to My Program |
The Role of Dispositional Trust in Adaptive Automation for Trust Calibration |
|
Wielatz, Margaret (Purdue University), Pandya, Maitri (Purdue University), Yuh, Madeleine (Purdue University), Jain, Neera (Purdue University) |
Keywords: Monitoring of Behaviour and Internal States of Humans, Detecting and Understanding Human Activity, User-centered Design of Robots
Abstract: A long-standing challenge in human-automation interaction is adapting automation to the individual characteristics of different humans. As such, “adaptive automation” is intended to be responsive, in real time, to the human. For human-automation interaction scenarios requiring trust calibration, this generally requires adaptations tailored to situational and learned trust factors, which change during the human's interaction. However, given that a human's dispositional trust factors also influence their dynamic trust interactions with automation, we aim to answer the question: do different dispositional trust characteristics (toward automation) warrant different types of adaptive automation? To do this, we build on prior work in which distinct trust dynamics were identified among humans interacting with an intelligent decision aid in a simulated reconnaissance mission. We design a new experiment that enables us to 1) classify participants into one of the two identified trust behaviors based upon a limited set of observations and 2) evaluate each participant's performance with two different adaptive automation schemes: one customized to their dispositional characteristic and one generalized to the broad population based on a single model of trust behavior. Based on data collected from 85 participants, we show that although the customized adaptive automation policies do not produce statistically significant differences in mission performance outcomes compared to the general one, identifying a participant's trust behavior using model-based classification is useful for determining which individuals may benefit most from assistance in calibrating their trust.
|
|
13:26-13:38, Paper ThKT5.4 | Add to My Program |
Beyond Scripted Apologies: Calibrating Trust with Dynamically Generated Responses |
|
Perkins, Russell (UMass Lowell), Robinette, Paul (University of Massachusetts Lowell) |
Keywords: Social Intelligence for Robots, Robot Companions and Social Robots
Abstract: Trust calibration is a challenge in human-robot interaction (HRI). Miscalibrated trust can result in overreliance on or distrust of robotic systems, ultimately reducing the effectiveness of collaboration. This paper presents a novel AI-driven approach to trust calibration that integrates adaptive verbal apologies by incorporating user feedback. A QT Robot dynamically adjusts its responses based on the user's identification of the specific error. The robot then generates an apology that incorporates this feedback using ChatGPT. We conducted a CAPTCHA-based assisted decision-making experiment with 40 participants to determine whether adaptive apologies improve trust more than static ones. Trust levels before and after the interaction were measured using the Multidimensional Measure of Trust (MDMT) survey. The results indicate that LLM-generated adaptive trust repair significantly improved user perceptions in the dimensions of reliability, transparency, and dependability. These findings demonstrate the effectiveness of personalized, real-time trust interventions and contribute to the growing body of research on trust calibration by introducing a dynamic, adaptive system that enhances collaboration and trust adaptation.
|
|
13:38-13:50, Paper ThKT5.5 | Add to My Program |
See What I Mean? Expressiveness and Clarity in Robot Display Design |
|
Ebisu, Matthew (Tufts University), YU, HANG (Tufts University), Aronson, Reuben (Tufts University), Short, Elaine Schaertl (Tufts University) |
Keywords: Non-verbal Cues and Expressiveness
Abstract: Non-verbal visual symbols and displays play an important role in communication when humans and robots work collaboratively. However, few studies have investigated how different types of non-verbal cues affect objective task performance, especially in a dynamic environment that requires real-time decision-making. In this work, we designed a collaborative navigation task where the user and the robot each had only partial information about the map, so the users were forced to communicate with the robot to complete the task. We conducted our study in a public space and recruited 37 participants who randomly passed by our setup. Each participant was shown one of two modality sets: animated anthropomorphic eyes with animated icons, or static anthropomorphic eyes with static icons. We found that participants interacting with a robot with animated expressions reported the greatest level of trust, while participants interpreted static icons the best, and participants with a robot with static eyes had the highest completion success. These results suggest that while animation can foster trust in robots, displays can still benefit from the addition of familiar static icons that are easier to interpret, to optimize communication. We published our code, designed symbols, and collected results online at: https://github.com/mattufts/huamn_Cozmo_interaction
|
|
ThKT6 Special Session, Auditorium 6 |
Add to My Program |
SS: Bridging Trust and Context: Dynamic Interactions in HAI II / SS: Fluidity in Human-Robot Interaction |
|
|
Chair: Terada, Kazunori | Gifu University |
|
12:50-13:02, Paper ThKT6.1 | Add to My Program |
Trust Estimation of Manipulator's Behaviors for Human-Robot Interaction (I) |
|
Kaneko, Sota (The Graduate University for Advanced Studies, SOKENDAI), Yun, Nungduk (The Graduate University for Advanced Studies, SOKENDAI), Yamada, Seiji (National Institute of Informatics) |
Keywords: Monitoring of Behaviour and Internal States of Humans, Cooperation and Collaboration in Human-Robot Teams
Abstract: Trust is the cornerstone of human-robot interaction and a key element in fostering a synergistic human-robot relationship. Trust facilitates the appropriate utilization of these systems, thereby optimizing their potential benefits. A failure to appropriately gauge the level of trust in a robot can have grave consequences, including potential misuse and accidents, underscoring the critical importance of accurate trust assessment in fostering harmonious and safe human-robot collaboration. To avert such issues, it is imperative to calibrate trust levels accurately. To address this need, we developed a novel trust estimation model, leveraging structural equation modeling (SEM) to address the challenges posed by latent variables. The proposed model demonstrated 70% accuracy in estimating trust during a manipulator's successful and failed behaviors under uncertainty. The outcomes demonstrate the efficacy of the proposed method in surpassing conventional approaches.
|
|
13:02-13:14, Paper ThKT6.2 | Add to My Program |
Comparing Agent-Based VR Stress Therapy: Single vs. Group Interventions (I) |
|
Islam, Monirul (Shizuoka University), Takeuchi, Yugo (Shizuoka University) |
Keywords: Detecting and Understanding Human Activity, Virtual and Augmented Tele-presence Environments, User-centered Design of Robots
Abstract: This study presents a comparative analysis of Virtual Reality (VR) based single and group stress reduction sessions using a Large Language Model (LLM) agent as a virtual therapist. A total of 42 participants were recruited: 21 attended single sessions, while the remaining participants were divided into 7 groups of 3 members each and joined group sessions. Participants engaged in immersive VR environments designed for stress relief with interactive support from the LLM agent. Quantitative results revealed that both session types effectively reduced stress, as measured by heart rate. However, group sessions demonstrated slightly better outcomes, including 14.7% greater STAI reduction (M=12.95, SD=2.4), 7.47% lower perceived workload (M=50.43, SD=4.2), and 20.53% higher engagement than single sessions, as evidenced by interaction points (M=22.9) and sentiment positivity (M=0.32). Qualitative feedback highlighted peer support and collaborative dynamics in group settings, while single sessions provided a more personalized experience. By exploring how LLM agents perform across these contexts, this research highlights their potential to enhance VR therapy's scalability, engagement, and effectiveness. These findings inform the design of accessible, impactful mental health interventions, advancing the integration of VR and AI to meet diverse stress management needs.
|
|
13:14-13:26, Paper ThKT6.3 | Add to My Program |
“Fluency in Failure”: The Impact of Interaction Modalities on Human-Robot Collaboration in Error Recovery (I) |
|
Kostolani, David (TU Wien), Brenter, Bernd (TU Wien), Schlund, Sebastian (TU Wien) |
Keywords: HRI and Collaboration in Manufacturing Environments, Cooperation and Collaboration in Human-Robot Teams, Novel Interfaces and Interaction Modalities
Abstract: In collaborative assembly, humans and robots must coordinate their actions efficiently to achieve a common goal. The quality of this coordination is often measured by fluency, which refers to interactions being perceived as mutually engaging and highly synchronized. While fluency has been studied in various task assignment models, from sequential turn-taking to reciprocal effort, most research focuses on predictable interactions. However, in-the-wild applications of human-robot collaboration also include workflow interruptions, such as when a robot performs an incorrect task. We investigated which interaction modality best facilitates both communicating errors to the robot and fluent workflow recovery. In our study (n=29), participants engaged in a collaborative assembly task with deliberate robot failures. They then had to communicate the failure to the robot, and we examined whether the interaction modality influenced perceived fluency and the time it took to complete the task after signalling the error. The three interaction modalities tested included a graphical user interface, haptic interactions, and detection of implicit cues via a camera-based system. Our results indicate that implicit communication led to the fastest task recovery, the highest perceived fluency, and the strongest sense of user-robot bonding.
|
|
13:26-13:38, Paper ThKT6.4 | Add to My Program |
Haptic Communication in Human-Human and Human-Robot Co-Manipulation (I) |
|
Allen, Katherine H. (Tufts University), Rogers, Chris (Tufts University), Short, Elaine Schaertl (Tufts University) |
Keywords: Motion Planning and Navigation in Human-Centered Environments, Interaction Kinesics, Detecting and Understanding Human Activity
Abstract: When a human dyad jointly manipulates an object, they must communicate about their intended motion plans. Some of that collaboration is achieved through the motion of the manipulated object itself, which we call “haptic communication.” In this work, we captured the motion of human-human dyads moving an object together, with one participant leading a motion plan about which the follower is uninformed. We then captured the same human participants manipulating the same object with a robot collaborator. By tracking the motion of the shared object using a low-cost IMU, we can directly compare human-human shared manipulation to the motion of those same participants interacting with the robot. Intra-study and post-study questionnaires provided participant feedback on the collaborations, indicating that the human-human collaborations are significantly more fluent, and analysis of the IMU data indicates that it captures objective differences in the motion profiles of the conditions. The differences in objective and subjective measures of accuracy and fluency between the human-human and human-robot trials motivate future research into improving robot assistants for physical tasks by enabling them to send and receive anthropomorphic haptic signals.
|
| |