Last updated on August 28, 2025. This conference program is tentative and subject to change.
Technical Program for Wednesday August 27, 2025

WeFT1 Regular Session, Auditorium 1
Cooperation and Collaboration in Human-Robot Teams II
Chair: Robert, Lionel | University of Michigan

08:30-08:42, Paper WeFT1.1
"I Can't See You, but I Trust You!": Exploring the Impact of Reduced Vision on Object Handover Times in Human-Robot Collaboration
Jaspaert, Lukas (Faculty of Technology, Bielefeld University), Hindemith, Lukas (Bielefeld University), Schneider, Sebastian (University of Twente)
Keywords: Assistive Robotics, Cooperation and Collaboration in Human-Robot Teams, Novel Interfaces and Interaction Modalities
Abstract: Despite the potential benefits of assistive robots for aging populations or individuals with special interactive requirements, such as reduced vision or hearing, there is limited research in this area. We present a repeated-measures study that included twenty-nine participants who interacted in two phases, either blindfolded first and then not blindfolded or vice versa. The study measured trust and negative attitudes towards robots (NARS) before and after handover interactions, as well as social robot attributes (RoSAS) and handover times. Results revealed no significant differences in trust levels or social attributes. However, negative attitudes and handover times were significantly lower when participants interacted first blindfolded. These findings highlight the importance of considering reduced vision capabilities in Human-Robot Collaboration and suggest potential strategies for enhancing interaction experiences in this context. Further research could explore additional factors influencing collaborative interactions and inform the development of more inclusive and effective assistive robot technologies.

08:42-08:54, Paper WeFT1.2
Estimating Situation Awareness for Human-Robot Teaming
Ali, Arsha (University of Michigan), Robert, Lionel (University of Michigan), Tilbury, Dawn (University of Michigan)
Keywords: Multi-modal Situation Awareness and Spatial Cognition, Human Factors and Ergonomics, Monitoring of Behaviour and Internal States of Humans
Abstract: When humans supervise multiple semi-autonomous robots while also attending to their own tasks simultaneously, they may lack the situation awareness needed to assist their robot teammates. There is a need to monitor the human's situation awareness in real-time, so interventions can be taken to improve poor situation awareness. While prior work has developed models to estimate human situation awareness, these models rely heavily on advanced machine learning and on eye-tracking as a single input source, which can pose operational challenges. We develop a real-time human situation awareness estimator based on data from a human-robot teaming experiment. The situation awareness estimator uses simple and interpretable logistic regression models that take inputs from both eye-tracking and behavioral measures. Cross-validation demonstrated the situation awareness estimator had an average accuracy of 74%. The estimator is robust to missing inputs and can monitor human situation awareness non-intrusively in real-time.
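A minimal sketch of such an estimator, using scikit-learn's logistic regression with mean imputation to tolerate missing inputs, is shown below. The features, labels, and imputation strategy are illustrative assumptions, not the authors' implementation:

```python
# Illustrative sketch only: interpretable SA estimation from eye-tracking and
# behavioral features; data, features, and labels are synthetic stand-ins.
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n = 200
# Hypothetical features: robot-panel fixation share, glance rate, response latency.
X = rng.normal(size=(n, 3))
y = (X[:, 0] - X[:, 2] > 0).astype(int)       # synthetic "good SA" label
X[rng.random((n, 3)) < 0.1] = np.nan          # simulate dropped eye-tracking samples

pipe = make_pipeline(SimpleImputer(strategy="mean"), LogisticRegression())
scores = cross_val_score(pipe, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
pipe.fit(X, y)
print("coefficients:", pipe[-1].coef_)        # interpretable per-feature weights
```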

08:54-09:06, Paper WeFT1.3
Negotiation of Assignation Plans in Human-Robot Team Task Scheduling
Fuster-Palà, Llum (Universitat Politècnica De Catalunya - BarcelonaTech (UPC)), Dalmasso Blanch, Marc (Institut De Robòtica I Informàtica Industrial), Aubach-Altes, Artur (Universitat Politècnica De Catalunya - BarcelonaTech (UPC)), Izquierdo-Badiola, Silvia (Eurecat), Sanfeliu, Alberto (Universitat Politècnica De Catalunya), Garrell, Anais (UPC-CSIC)
Keywords: Cooperation and Collaboration in Human-Robot Teams, Robot Companions and Social Robots, Applications of Social Robots
Abstract: In recent years, considerable attention has been given to improving human-robot collaboration. Despite advances in robotic capabilities and interaction techniques, achieving a fair distribution of tasks remains challenging due to the dynamic nature of human preferences and situational constraints. This paper presents a novel negotiation framework that enables robots to effectively communicate with humans to facilitate fair and adaptive task allocation. Our approach leverages automated planning techniques with the Planning Domain Definition Language (PDDL), explicitly encoding tasks, constraints, and preferences from both human and robotic perspectives. Task allocation is optimized based on three key criteria: the robot’s effort, the human’s effort, and overall task success. Additionally, we integrate a Natural Language Processing (NLP) model that interprets human preferences and informs the negotiation process, ensuring that the robot generates task proposals aligned with human input. The negotiation follows an alternating-offer protocol, with the robot employing a sigmoid conceder strategy to iteratively refine task allocation, leading to balanced and mutually acceptable plans. To evaluate our approach, we conduct a comprehensive user study with non-trained volunteers interacting with the robot, assessing the effectiveness, fairness, and adaptability of the proposed system in real-world scenarios.
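The sigmoid conceder strategy mentioned above admits a compact illustration. The sketch below shows the general shape of such a concession schedule; the gain, midpoint, and utility bounds are invented for illustration and are not taken from the paper:

```python
# Illustrative sketch only: a sigmoid conceder schedule for alternating offers.
# Gain k, midpoint t_mid, and utility bounds are invented parameters.
import math

def conceded_fraction(t, k=10.0, t_mid=0.4):
    """Fraction of the concession range given up at normalized time t in [0, 1]."""
    return 1.0 / (1.0 + math.exp(-k * (t - t_mid)))

def robot_demand(t, u_max=1.0, u_min=0.5):
    """Utility the robot demands for itself; it concedes from u_max toward u_min."""
    return u_max - (u_max - u_min) * conceded_fraction(t)

for step in range(6):                          # six alternating-offer rounds
    t = step / 5
    print(f"round {step}: robot proposes a plan worth {robot_demand(t):.2f} to itself")
```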

09:06-09:18, Paper WeFT1.4
Recognition and Anticipation of Human Actions in a Human-Robot Collaborative Assembly Scenario
Zoppi, Giorgio (Università Politecnica Delle Marche), Forlini, Matteo (Università Politecnica Delle Marche), Palmieri, Giacomo (Università Politecnica Delle Marche), Neto, Pedro (University of Coimbra)
Keywords: Cooperation and Collaboration in Human-Robot Teams, Detecting and Understanding Human Activity, Monitoring of Behaviour and Internal States of Humans
Abstract: The key challenge in expanding human-robot collaboration is enabling robots to understand and adapt to human needs, fostering seamless interaction. This research develops an open-source framework integrated with a collaborative robotic arm to recognize, predict, and anticipate human actions in an assembly scenario. Human action classification was performed by comparing various approaches, including Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) networks, combined with YOLO-based object recognition. The study utilized a pneumatic cylinder as an example component for the assembly, along with the KUKA LBR iiwa 7 R800 robotic arm. The action recognition model, developed from scratch with a custom dataset, achieved 95% accuracy offline and over 90% online. The action prediction model, trained on real assembly sequences from human demonstrations, suggests the next robot action. This approach improves flexibility and customization by allowing the assembly sequence to be learned directly by observing the operator performing the task, without prior knowledge (except for the objects to detect). This advancement enhances cobots' ability to recognize, predict, and anticipate human actions, improving intuitive and efficient collaboration in manufacturing.
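As a rough, hedged sketch of the model family named here, an LSTM classifier over per-frame feature vectors (for example, YOLO-derived object and hand cues) could look as follows; the dimensions, class count, and input encoding are assumptions, not the paper's architecture:

```python
# Illustrative sketch only: an LSTM over per-frame feature vectors for action
# classification. Feature size, sequence length, and class count are invented.
import torch
import torch.nn as nn

class ActionLSTM(nn.Module):
    def __init__(self, feat_dim=32, hidden=64, n_actions=6):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_actions)

    def forward(self, x):                 # x: (batch, time, feat_dim)
        _, (h, _) = self.lstm(x)          # h: (num_layers, batch, hidden)
        return self.head(h[-1])           # logits: (batch, n_actions)

model = ActionLSTM()
clip = torch.randn(2, 30, 32)             # 2 clips, 30 frames each
logits = model(clip)
print(logits.shape)                        # torch.Size([2, 6])
```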

09:18-09:30, Paper WeFT1.5
Human-Robot Collaborative Transport Personalization Via Dynamic Movement Primitives and Velocity Scaling
Franceschi, Paolo (SUPSI), Bussolan, Andrea (Scuola Universitaria Professionale Della Svizzera Italiana), Pomponi, Vincenzo (SUPSI-ISTePS), Avram, Oliver (SUPSI-ISTePS), Baraldo, Stefano (Scuola Universitaria Professionale Della Svizzera Italiana), Valente, Anna (SUPSI-ISTePS)
Keywords: Cooperation and Collaboration in Human-Robot Teams, HRI and Collaboration in Manufacturing Environments, Motion Planning and Navigation in Human-Centered Environments
Abstract: Nowadays, industries are showing a growing interest in human-robot collaboration, particularly for shared tasks. This requires intelligent strategies to plan a robot’s motions, considering both task constraints and human-specific factors such as height and movement preferences. This work introduces a novel approach to generate personalized trajectories using Dynamic Movement Primitives (DMPs), enhanced with real-time velocity scaling based on human feedback. The method was rigorously tested in industrial-grade experiments, focusing on the collaborative transport of an engine cowl lip section. Comparative analysis between DMP-generated trajectories and a state-of-the-art motion planner (BiTRRT) highlights their adaptability combined with velocity scaling. Subjective user feedback further demonstrates a clear preference for DMP-based interactions. Objective evaluations, including physiological measurements from brain and skin activity, reinforce these findings, showcasing the advantages of DMPs in enhancing human-robot interaction and improving user experience.
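The core DMP mechanism with online velocity scaling can be sketched in a few lines. The following one-dimensional rollout omits the learned forcing term and uses invented gains; it illustrates how scaling the time constant with a human feedback factor speeds up or slows down the whole motion:

```python
# Illustrative sketch only: a 1-D DMP-style rollout with online velocity
# scaling. Gains are invented; the learned forcing term is omitted for brevity.
import numpy as np

K, D = 100.0, 20.0            # spring-damper gains (D ~ 2*sqrt(K), critical damping)
g, tau, dt = 1.0, 1.0, 0.01   # goal, nominal time constant, integration step
x, v = 0.0, 0.0

for step in range(300):
    s = 1.2 if step < 150 else 0.6       # stand-in for human speed feedback
    tau_eff = tau / s                    # rescaling tau slows or speeds the motion
    a = (K * (g - x) - D * v) / tau_eff
    v += a * dt
    x += (v / tau_eff) * dt
print(round(x, 3))                        # converges toward the goal g = 1.0
```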

WeFT2 Regular Session, Auditorium 2
Social Intelligence of Robots II
Chair: Droog, Simone de | Amsterdam University of Applied Sciences
Co-Chair: Ashkenazi, Shaul | University of Glasgow

08:30-08:42, Paper WeFT2.1
I Can't Help Myself! "Asking for Help" through an Elicitation Study in the Wild
Liang, Claire Yilan (Massachusetts Institute of Technology), Ricci, Andy Elliot (Bates College), Jung, Malte (Cornell University), Kress-Gazit, Hadas (Cornell University)
Keywords: Detecting and Understanding Human Activity, Motion Planning and Navigation in Human-Centered Environments, Non-verbal Cues and Expressiveness
Abstract: In this work, we examine robots "asking for help" in unpredictable human spaces. We focus on an open question particularly relevant for robots deployed in public: "how do people help robots?" We present an elicitation study that shows how asking for help in a real-world field study yields valuable and sometimes unexpected information. From our study, we examine the responses that strangers have towards a robot asking for spatial directions and extract valuable themes that can inform future asking-for-help systems. Our analysis provides a wide range of information, from geometric and topological information in natural language to details about rejection during an interaction. Further, we also provide anecdotes of valuable outlier behavior that can only be captured through a study in a real public space. Through our work, we show an example of the importance of in-the-wild studies and discuss how the rich information they contribute will help robots effectively ask for help.

08:42-08:54, Paper WeFT2.2
Embodied AI As Companion: How Loneliness, Gender and Culture Shape Attitudes towards AI and Robots
Heck, Franziska Elisabeth (Edinburgh Napier University), Sobolewska, Emilia (Edinburgh Napier University), Meharg, Debbie (Napier University), Fabian, Khristin (Edinburgh Napier University)
Keywords: Robots in Education, Therapy and Rehabilitation, Creating Human-Robot Relationships, Robot Companions and Social Robots
Abstract: Loneliness among university students is a widespread problem, affecting mental health and academic performance. While AI-driven chatbots offer digital companionship, their lack of embodiment may limit their ability to foster meaningful connections. In contrast, social robots, a form of embodied artificial intelligence (EAI), offer physical presence and non-verbal interaction. This study examines how social (lack of a broad social network) and emotional loneliness (lack of close emotional bonds) influence students' attitudes towards AI and robots, considering gender and cultural background. A cross-sectional online survey (N = 250) was analysed using Partial Least Squares Structural Equation Modelling (PLS-SEM). Results show that social loneliness reduces negativity towards robots, but does not increase preference for their companionship, while emotional loneliness increases scepticism towards AI, but does not significantly affect attitudes towards robots. Gender moderates these effects, with emotionally lonely women expressing greater negativity towards robots and socially lonely women expressing greater openness. Cultural background does not moderate these relationships, but individualistic participants show less positivity towards AI and robots. Frequent interaction with AI and robots correlates with more positive attitudes, suggesting that familiarity promotes acceptance. These findings provide insights into how loneliness and demographics shape attitudes towards AI and inform the development of EAI interventions for student well-being.

08:54-09:06, Paper WeFT2.3
Robots Waiting for the Elevator: Integrating Social Norms in a Low-Data Regime Goal Selection Problem
Racca, Mattia (NAVER LABS Europe), Willamowski, Jutta (Naver Labs Europe), Colombino, Tommaso (Naver Labs Europe), Monaci, Gianluca (NAVER LABS Europe), Gallo, Danilo (Naver Labs Europe)
Keywords: Motion Planning and Navigation in Human-Centered Environments, Machine Learning and Adaptation, Social Intelligence for Robots
Abstract: As robots increasingly share spaces with people, it becomes important for them to behave according to our social norms. In this paper, we explore the problem of finding socially acceptable locations for a robot to wait for a shared elevator by learning from expert annotations. Access to relevant, unlabeled data is however scarce in this setting and annotations expensive to gather, as they require explicit knowledge about the social norms, the robot, and the service it carries out. We tackle this low-data regime as follows. First, we use Procedural Content Generation to generate plausible waiting scenes to be annotated. Second, we leverage available sociological studies and operationalize relevant social norms as feature maps. We train a variety of models with only 125 procedurally-generated expert-annotated scenes, testing the impact of the proposed feature maps. In our ablation study, the feature maps improve the models' performance and their generalization to non-synthetic, real scenes. We inspect the decisions taken by the best models, probing their strengths and weaknesses, identifying general issues, and discussing potential solutions.

09:06-09:18, Paper WeFT2.4
Privacy and Transparency in Human-Robot Conversations: Effects on Self-Disclosure
Zhong, Xiyu (Karlsruhe Institute of Technology (KIT)), Maure, Romain (Karlsruhe Institute of Technology), Bruno, Barbara (Karlsruhe Institute of Technology (KIT))
Keywords: Human Factors and Ergonomics, Robot Companions and Social Robots, Applications of Social Robots
Abstract: As social robots increasingly integrate into human society, their ability to sense our surroundings legitimately raises privacy concerns. The objective of this study is twofold. First, we explore the possibility of providing social robots with privacy-preserving sensing, i.e., the ability to extract necessary sensory information while preserving users' privacy. Second, we investigate whether the use of such privacy-preserving sensing, as well as transparency with respect to the robot's sensing capabilities, can encourage individuals to self-disclose during human-robot conversations. A 2x2 between-subject experiment was conducted with 28 participants, who engaged in a conversation with the PixelBot robot. The results suggest that conversational robots can perform effective privacy-preserving feature extraction during interactions with people, but neither the use of such privacy-preserving sensing nor the robot's transparency had a statistically significant effect on the breadth and depth of self-disclosure.

09:18-09:30, Paper WeFT2.5
Lost in Transparency? Exploring Uni and Multimodal Transparency Declarations in Human-Robot Interaction
Helgert, André (University of Applied Sciences Ruhr West), Erle, Lukas (Ruhr West University of Applied Sciences), Dittmann, Andre (Ruhr West University of Applied Sciences), Eimler, Sabrina (Hochschule Ruhr West, University of Applied Sciences), Straßmann, Carolin (University of Applied Sciences Ruhr West)
Keywords: Creating Human-Robot Relationships, Applications of Social Robots, Human Factors and Ergonomics
Abstract: Transparent communication about how social robots collect and process personal data is essential, especially as their presence in public spaces continues to grow and their applications become more widespread. While previous research has primarily focused on the content and creation of explainable transparency, less attention has been given to which modalities robots should use to convey transparency in the first place. To close this gap, we examined different single modality approaches for communicating transparency and compared them to various combined modalities for transparency explanation, since these have the potential to convey information more efficiently through multiple channels. We conducted a virtual reality (VR) two-part laboratory experiment in which N = 106 participants interacted with a virtual Pepper robot and had to disclose personal data to it. The study design consisted of six conditions: a control group without transparency communication, two multimodal conditions where transparency declarations were presented through multiple channels, and three unimodal conditions where a single channel was used for transparency communication. The results show that the unimodal group was more effective than both the multimodal and control groups in delivering clear and understandable transparency declarations. This suggests that unimodal approaches to transparency may be the preferable option. This study provides insights into transparency declarations in HRI and offers key takeaways on how transparency can be communicated most effectively.

WeFT3 Regular Session, Auditorium 3
Affective Artificial Agents II
Chair: Sugaya, Midori | Shibaura Institute of Technology
Co-Chair: Hofstede, Bob Matthias | Vilans

08:30-08:42, Paper WeFT3.1
Don't Throw Me, I Will Cry: Developing a Robot That Responds Emotionally to Rough Handling
Goto, Hayata (ATR), Iio, Takamasa (Doshisha University), Sumioka, Hidenobu (ATR), Shiomi, Masahiro (ATR)
Keywords: Detecting and Understanding Human Activity, Non-verbal Cues and Expressiveness, Social Touch in Human–Robot Interaction
Abstract: As societies age rapidly, baby-sized robots using non-verbal, infant-like interactions have gained attention for their potential emotional benefits, even for users with cognitive impairments. Although occasional rough handling can disrupt natural interactions with these robots, little attention has been paid to mechanisms that recognize such behaviors and respond appropriately. To address this issue, we implemented and evaluated a function that detects rough handling using onboard inertial sensors and triggers crying sounds in a baby-sized robot. The experimental results indicated that while this crying response effectively signaled the robot's discomfort (leading participants to perceive it as "experiencing unpleasantness"), it did not improve the impression ratings of the robot or significantly increase users' guilt. These findings are likely influenced by the short-term, task-driven nature of the experiment, necessitating further investigation in more natural, long-term interactions.
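One simple way to realize the detection step described above is a magnitude threshold on the onboard inertial readings. The thresholds and the audio trigger below are illustrative assumptions, not the authors' implementation:

```python
# Illustrative sketch only: thresholded rough-handling detection on IMU data.
# Threshold values and the triggered response are invented for illustration.
import math

def is_rough(accel, gyro, acc_thresh=25.0, gyro_thresh=6.0):
    """accel in m/s^2 (3-tuple), gyro in rad/s (3-tuple)."""
    a = math.sqrt(sum(c * c for c in accel))   # acceleration magnitude
    w = math.sqrt(sum(c * c for c in gyro))    # angular-rate magnitude
    return a > acc_thresh or w > gyro_thresh

if is_rough(accel=(3.0, -28.0, 9.8), gyro=(0.2, 0.1, 0.0)):
    print("trigger crying sound")              # stand-in for the robot's audio response
```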

08:42-08:54, Paper WeFT3.2
A Deep Learning-Based Emotion Recognition Pipeline for Public Speaking Anxiety Detection in Social Robotics
Boldo, Michele (University of Verona), Forghani, Delara (University of Waterloo), Bombieri, Nicola (University of Verona), Dautenhahn, Kerstin (University of Waterloo), Nehaniv, Chrystopher (University of Waterloo)
Keywords: Affective Computing, Machine Learning and Adaptation, Social Intelligence for Robots
Abstract: Social robots are increasingly employed as personalized coaches in educational settings, offering new opportunities for applications such as public speaking training. In this domain, emotional self-regulation plays a crucial role, especially for students presenting in a non-native language. This study proposes a novel pipeline for detecting public speaking anxiety (PSA) using multimodal emotion recognition. Unlike traditional datasets that typically rely on acted emotions, we consider spontaneous data from students interacting naturally with a social robot coach. Emotional labels are generated through knowledge distillation, enabling the creation of soft labels that reflect the emotional valence of each presentation. We introduce a lightweight multimodal model that integrates speech prosody and body posture to classify speakers by anxiety level, without relying on linguistic content. Evaluated on a collected dataset of student presentations, the system achieves 74.67% accuracy and an F1-score of 0.64. The model can operate completely disconnected from the transmission network on an NVIDIA Jetson board, safeguarding data privacy and demonstrating its feasibility for real-world deployment.
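The knowledge-distillation step described above, producing soft labels for the student model, can be sketched as follows; the temperature, batch shapes, and class count are illustrative assumptions rather than the paper's settings:

```python
# Illustrative sketch only: soft-label knowledge distillation for a classifier.
# Temperature, shapes, and the 3 anxiety levels are invented for illustration.
import torch
import torch.nn.functional as F

T = 2.0
teacher_logits = torch.randn(8, 3)             # teacher outputs, 3 anxiety levels
student_logits = torch.randn(8, 3, requires_grad=True)

soft_targets = F.softmax(teacher_logits / T, dim=-1)
log_probs = F.log_softmax(student_logits / T, dim=-1)
kd_loss = F.kl_div(log_probs, soft_targets, reduction="batchmean") * T * T
kd_loss.backward()                              # gradients flow to the student
print(float(kd_loss))
```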

08:54-09:06, Paper WeFT3.3
Situated Haptic Interaction: Exploring the Role of Context in Affective Perception of Robotic Touch
Ren, Qiaoqiao (AIRO - IDLab - University of Ghent - IMEC), Belpaeme, Tony (University of Ghent - IMEC)
Keywords: Social Touch in Human–Robot Interaction, Affective Computing, Social Intelligence for Robots
Abstract: Affective interaction is not merely about recognizing emotions; it is an embodied, situated process shaped by context and co-created through interaction. In affective computing, the role of haptic feedback within dynamic emotional exchanges remains underexplored. This study investigates how situational emotional cues influence the perception and interpretation of haptic signals given by a robot. In a controlled experiment, 32 participants watched video scenarios in which a robot experienced either positive actions (such as being kissed), negative actions (such as being slapped) or neutral actions. After each video, the robot conveyed its emotional response through haptic communication, delivered via a wearable vibration sleeve worn by the participant. Participants rated the robot’s emotional state—its valence (positive or negative) and arousal (intensity)—based on the video, the haptic feedback, and the combination of the two. The study reveals a dynamic interplay between visual context and touch. Participants’ interpretation of haptic feedback was strongly shaped by the emotional context of the video, with visual context often overriding the perceived valence of the haptic signal. Negative haptic cues amplified the perceived valence of the interaction, while positive cues softened it. Furthermore, haptic feedback overrode participants’ perception of the arousal of the video. Together, these results offer insights into how situated haptic feedback can enrich affective human-robot interaction, pointing toward more nuanced and embodied approaches to emotional communication with machines.

09:06-09:18, Paper WeFT3.4
Triggering Anthropomorphism or Depicting a Robot Character: The Effects of Human-Like Timing of Emotional Expression in Human-Robot Interactions
Jelinek, Matous (University of Southern Denmark), Asadi, Ali (University of Southern Denmark), Willum Bech, Caroline (University of Southern Denmark), Fischer, Kerstin (University of Southern Denmark)
Keywords: Anthropomorphic Robots and Virtual Humans, Non-verbal Cues and Expressiveness, Linguistic Communication and Dialogue
Abstract: Much work on human-robot interaction has shown that such interactions can profit from implementing human-like behaviors, in line with theoretical approaches that assume that human-like social cues ‘trigger’ or ‘evoke’ social behaviors towards the respective robot. However, there is also evidence that people treat interactions with robots in special ways, that they have different expectations and attend to different communicative tasks than in interactions with other humans; especially those features that are geared towards efficiency in interaction seem not to be relevant or even perceived positively in human-robot interaction. In this paper, we investigate the effects of the relative timing of emotional expression while speaking; in a controlled in-person interactive experiment with N=56, participants interacted with a simulated robot that either presented certain emotional behaviors after the respective utterance or timed with the main content units during speech, which had been determined empirically in a prior study of interactions between humans. Results show that even though the ill-timed emotional expressions cause interruptions and problems with respect to turn-taking, participants prefer the robot that plays emotional behaviors after the utterance – thus deprioritizing the efficiency and turn-taking requirements of human interaction. The results thus support a constructive perspective on human-robot interaction, where participants engage in sophisticated sense-making based on the character depicted and their own understanding of the interaction situation.

09:18-09:30, Paper WeFT3.5
Exploring the Role of Robot's Movements for a Transparent Affective Communication
Raggioli, Luca (University of Naples Federico II), Esposito, Raffaella (University of Naples Federico II), Rossi, Alessandra (University of Naples Federico II), Rossi, Silvia (Università di Napoli Federico II)
Keywords: Emotional Robotics, Multi-Modal Perception for HRI, Social HRI
Abstract: Robots operating in human-populated environments must be able to convey their intentions clearly. Displaying emotions can be an effective way for robots to express their internal state and a means to react to humans’ behaviors. While facial expressions provide an immediate representation of the robot’s “feelings”, there might be situations where only facial expressions are not enough to express the robot’s intent appropriately, and multi-modal affective modalities are required. However, the characterization of the robot’s movements has not been sufficiently and thoroughly investigated. In this work, we argue that transparent non-verbal behaviors, with particular attention to the robot’s movements (e.g., arms, head, velocity), can be crucial for effective communication between robots and humans. We collected responses from N=967 people observing the robot during a science fair. Our results outline how movements can contribute to conveying emotions transparently. This is especially possible when no conflicting signals are present. However, facial expression is still the most dominant modality when other modalities are not aligned with the movement’s intended emotion.

WeFT4 Regular Session, Blauwe Zaal
Applications of Social Robots VI
Chair: Rossi, Silvia | Università di Napoli Federico II
Co-Chair: Tanjim, Tauhid | Cornell University

08:30-08:42, Paper WeFT4.1
Accessible and Pedagogically-Grounded Explainability for Human–Robot Interaction: A Framework Based on UDL and Symbolic Interfaces
Rodríguez Lera, Francisco Javier (Universidad De León), Fernandez Hernandez, Raquel (CPEE Nuestra Señora Del Sagrado Corazón), Lopez Gonzalez, Sonia (CPEE Nuestra Señora Del Sagrado Corazón), González-Santamarta, Miguel Ángel (University of León), Rodríguez, Francisco Jesús (Universidad De León), Fernández Llamas, Camino (University of León)
Keywords: Robots in Education, Therapy and Rehabilitation, Assistive Robotics, Cognitive Skills and Mental Models
Abstract: This paper presents a novel framework for accessible and pedagogically-grounded robot explainability, designed to support human–robot interaction (HRI) with users who have diverse cognitive, communicative, or learning needs. We combine principles from Universal Design for Learning (UDL) and Universal Design (UD) with symbolic communication strategies to facilitate the alignment of mental models between humans and robots. Our approach employs Asterics Grid and ARASAAC pictograms as a multimodal, interpretable front-end, integrated with a lightweight HTTP-to-ROS 2 bridge that enables real-time interaction and explanation triggering. We emphasize that explainability is not a one-way function but a bidirectional process, where human understanding and robot transparency must co-evolve. We further argue that in educational or assistive contexts, the role of a human mediator (e.g., a teacher) may be essential to support shared understanding. We validate our framework with examples of multimodal explanation boards and discuss how it can be extended to different scenarios in education, assistive robotics, and inclusive AI.
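A lightweight HTTP-to-ROS 2 bridge of the kind described can be sketched in a few lines of rclpy; the topic name, port, and message format below are assumptions for illustration, not the authors' code, and the script requires a sourced ROS 2 environment to run:

```python
# Illustrative sketch only: an HTTP GET from a pictogram front-end publishes an
# explanation trigger on a ROS 2 topic. Topic, port, and payload are invented.
from http.server import BaseHTTPRequestHandler, HTTPServer

import rclpy
from rclpy.node import Node
from std_msgs.msg import String

rclpy.init()
node = Node("explainability_bridge")
pub = node.create_publisher(String, "/explanation_request", 10)

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        msg = String()
        msg.data = self.path.lstrip("/")   # e.g., a request like /why_did_you_stop
        pub.publish(msg)                   # relay the front-end request into ROS 2
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

HTTPServer(("0.0.0.0", 8000), Handler).serve_forever()
```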

08:42-08:54, Paper WeFT4.2
Crowdsourcing eHMI Designs: A Participatory Approach to Autonomous Vehicle-Pedestrian Communication
Cumbal, Ronald (Uppsala University), Gurdur Broo, Didem (Uppsala University), Castellano, Ginevra (Uppsala University)
Keywords: User-centered Design of Robots, Innovative Robot Designs, Human Factors and Ergonomics
Abstract: As autonomous vehicles become more integrated into shared human environments, effective communication with road users is essential for ensuring safety. While previous research has focused on developing external Human-Machine Interfaces (eHMIs) to facilitate these interactions, we argue that involving users in the early creative stages can help address key challenges in the development of this technology. To explore this, our study adopts a participatory, crowd-sourced approach to gather user-generated ideas for eHMI designs. Participants were first introduced to fundamental eHMI concepts, equipping them to sketch their own design ideas in response to scenarios with varying levels of perceived risk. An initial pre-study with 29 participants showed that while they actively engaged in the process, there was a need to refine task objectives and encourage deeper reflection. To address these challenges, a follow-up study with 50 participants was conducted. The results revealed a strong preference for autonomous vehicles to communicate their awareness and intentions using lights (LEDs and projections), symbols, and text. Participants' sketches prioritized multi-modal communication, directionality, and adaptability to enhance clarity, consistently integrating familiar vehicle elements to improve intuitiveness.

08:54-09:06, Paper WeFT4.3
Workspace Sharing with Proximity-Aware Robots: A Pilot on User Perspective
Borelli, Simone (University), Morandini, Sofia (Università Di Bologna), Giovinazzo, Francesco (University of Genoa), Grella, Francesco (University of Genova), Fraboni, Federico (Università Di Bologna), Cannata, Giorgio (University of Genova)
Keywords: HRI and Collaboration in Manufacturing Environments, Evaluation Methods, Human Factors and Ergonomics
Abstract: This paper presents a study on key Human Factors considered in a Human-Robot Interaction (HRI) manufacturing scenario. We investigate user-perceived trust in collaborative robots, targeting crucial aspects such as acceptance, interaction fluency, cognitive workload, and usability. The experimental study is focused on a car door inspection and assembly task, where a human operator and a cobot operate side by side within a small shared workspace. The second link of the robot platform is equipped with 30 distributed proximity sensors that map the surrounding environment and detect nearby obstacles. Two distinct control strategies are evaluated for generating collision avoidance motions. The first strategy, Sensor Mounting (SM), leverages the sensors’ mounting locations as control inputs to generate reactive avoidance motions, as described in [1]. The second approach, Whole-Body (WB), utilizes any point within the robot’s geometric model, enabling both sensorized and non-sensorized links to respond to unpredictable events, as detailed in [2]. 24 subjects were involved in the experimental trials, performing assembly actions alongside a UR10e robot. Without prior knowledge of the control strategies employed, participants completed an online survey to rate their overall experience in both robot operating conditions (SM and WB). Results suggested that the WB controller did not compromise the system’s perceived usability, trustworthiness, or efficiency. No statistically significant differences were observed among key subjective metrics (p > 0.05). Acceptance, usefulness, and satisfaction scores remained consistently high across both conditions. Finally, qualitative insights suggested users’ preference for the WB control strategy, often described as more adaptive and responsive.

09:06-09:18, Paper WeFT4.4
Nudging without Words: Movement-Only Cues from a Robot Manipulator Influence Human Decisions
Brscic, Drazen (Kyoto University), Scassellati, Brian (Yale)
Keywords: Non-verbal Cues and Expressiveness, Interaction Kinesics
Abstract: Robots are increasingly present in our everyday environments, offering services and products. But can they influence our choices through movement alone? This paper investigates whether a robot manipulator can nudge user decisions using only its arm movements, without speech, facial expressions, or physical contact. We first identified plausible nudging motions through a bodystorming session, then designed and implemented three composite nudges (positive, neutral, and negative) using a UR5 robot arm. In a video-based online study (N=35), participants more often chose positively nudged items and avoided negatively nudged ones. A small in-person study (N=9) confirmed the effect. These results demonstrate that movement-only nudges can influence decision-making and highlight the potential of subtle physical behaviors for nonverbal persuasion.

09:18-09:30, Paper WeFT4.5
Towards an AI-Driven Elderly Assistance Framework with Multi-Sensor Data for Real-Time Fall Detection
ALQASAMA, AMJAD (University of Bath), Assaf, Tareq (University of Bath), Martinez-Hernandez, Uriel (University of Bath)
Keywords: Detecting and Understanding Human Activity
Abstract: Wearable sensors enable continuous human activity monitoring for health, rehabilitation, and assistive applications. This study investigates the feasibility of a belt-mounted array of multi-placement Inertial Measurement Units (IMUs) for real-time fall detection and activity recognition. A deep learning framework based on Long Short-Term Memory (LSTM) networks is developed and compared against classical machine learning models, including Support Vector Machines (SVM), Random Forest, and XGBoost. The experimental setup employs a custom prototype integrating the Adafruit ICM-20948 IMU sensor across three different devices: a knee-mounted sensor and a waist-mounted sensor, along with the Huzzah32 microcontroller, utilizing Bluetooth Low Energy (BLE) for real-time data transmission. Experimental results show that the LSTM model achieves the highest recognition accuracy of 93.6% using data from a knee-mounted sensor, outperforming all traditional machine learning models such as Random Forest, SVM, and XGBoost. These findings underscore the potential of IMU-based wearable systems for reliable and portable fall detection, contributing to enhanced elderly home care and emergency response applications.
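A common front-end for both the LSTM and the classical baselines named above is sliding-window segmentation of the streamed IMU signal. A minimal sketch, with illustrative window length and overlap, follows:

```python
# Illustrative sketch only: sliding-window segmentation of a streamed IMU
# signal. Window length, step, and the synthetic stream are invented.
import numpy as np

def windows(signal, length=100, step=50):
    """signal: (T, channels) array -> (n_windows, length, channels)."""
    out = [signal[i:i + length] for i in range(0, len(signal) - length + 1, step)]
    return np.stack(out)

stream = np.random.randn(1000, 6)          # 6 channels: 3-axis accel + 3-axis gyro
X = windows(stream)
print(X.shape)                              # (19, 100, 6); flatten per window for SVM/RF
```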

WeFT5 Regular Session, Auditorium 5
Autonomy and Teleoperation I
Chair: Sugano, Shigeki | Waseda University
Co-Chair: Mészáros, Anna | Delft University of Technology

08:30-08:42, Paper WeFT5.1
Teleoperation of Pouring Work with Robotic Arm Using Interface for Cartesian Coordinate Manipulation
Kato, Fumiya (Institute of Science Tokyo), Miura, Satoshi (Institute of Science Tokyo)
Keywords: HRI and Collaboration in Manufacturing Environments, Novel Interfaces and Interaction Modalities, User-centered Design of Robots
Abstract: Pouring work in casting comes with risks because it involves handling high-temperature molten metal. Automation is in demand; however, owing to the specialized nature of pouring work, flexible operation remains challenging. This study developed a teleoperation system for pouring work using a robotic arm. A newly developed interface for Cartesian coordinate manipulation, the iFeel Desktop Haptic Device (IFHD), was applied. Additionally, a force feedback model was developed based on the amount remaining in a ladle to provide the operator with force feedback and enhance operability. Experiments were performed using three operation methods: IFHD, IFHD with force feedback, and a tablet device (conventional method). The results revealed that the completion time when using IFHD decreased significantly (p = 0.0185) compared with the tablet, without a decrease in accuracy. Moreover, when using IFHD with force feedback, the operator’s brain activity decreased significantly compared with IFHD without force feedback (p = 0.0341). This suggests that the proposed force feedback contributes to the reduction of the operator’s stress during pouring operations.

08:42-08:54, Paper WeFT5.2
Haptic-Based User Authentication for Tele-Robotic System
Yu, Rongyu (University of Glasgow), Chen, Kan (University of Glasgow), Deng, Zeyu (Southern Methodist University), Wang, Chen (Louisiana State University), Kizilkaya, Burak (University of Glasgow), Li, Liying Emma (University of Glasgow)
Keywords: Monitoring of Behaviour and Internal States of Humans, Detecting and Understanding Human Activity
Abstract: Teleoperated robots rely on real-time user behavior mapping for remote tasks, but ensuring secure authentication remains a challenge. Traditional methods, such as passwords and static biometrics, are vulnerable to spoofing and replay attacks, particularly in high-stakes, continuous interactions. This paper presents a novel anti-spoofing and anti-replay authentication approach that leverages distinctive user behavioral features extracted from haptic feedback during human–robot interactions. To evaluate our authentication approach, we collect a large-scale dataset from 15 participants performing seven tasks and develop a transformer-based deep learning model to extract time-series features from haptic signals. By analyzing user-specific force dynamics, our method achieves over 90% accuracy in both user identification and task classification, demonstrating its potential for enhancing access control and identity assurance in tele-robotic systems.

08:54-09:06, Paper WeFT5.3
A Depth-Assisted Teleoperation System: Automatic Alignment for Precision Assembly Tasks
KIM, DONGHYUN (University of Science and Technology), Lee, Hunjo (Korea University of Science and Technology, Korea Institute of I), Yang, Gi-Hun (KITECH)
Keywords: Degrees of Autonomy and Teleoperation, HRI and Collaboration in Manufacturing Environments, Virtual and Augmented Tele-presence Environments
Abstract: Teleoperation systems play a crucial role in environments that are challenging for direct human access. In conventional systems, 2D displays lack depth information, making it difficult to accurately determine the position and orientation of objects. Additionally, in precision assembly tasks such as peg-in-hole, task time varies significantly depending on the operator’s level of expertise. To address these challenges, this study proposes the Depth-Assisted Teleoperation System (DATS), which utilizes a Time-of-Flight (ToF) depth camera. DATS provides intuitive depth perception using Segment Anything Model 2 (SAM2)-based mouse interactions on a single-viewpoint monitor. Operators can intuitively recognize the position and orientation of objects through overlaid colors and coordinate values. The core function of DATS is automatic alignment, which moves and aligns the object to the tilted target surface until contact is established. Experimental results demonstrate that the use of DATS in peg-in-hole tasks reduces task completion time by approximately 51% compared to conventional manual operation. Notably, nonexpert operators (Beginners) exhibited more consistent task times across trials. Additionally, operators reported an average cognitive load reduction of 63.8% based on the NASA Task Load Index (NASA-TLX) assessment. These findings indicate that DATS significantly enhances the efficiency and consistency of precision assembly tasks, benefiting both beginners and experts in teleoperation systems.
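One building block an automatic-alignment step like this needs is an estimate of the target surface's tilt from depth points. A least-squares plane fit, sketched below on synthetic points, is one standard way to obtain it (the paper's actual alignment pipeline is not reproduced here):

```python
# Illustrative sketch only: estimate a surface normal and tilt angle from
# depth points via a least-squares plane fit. The points are synthetic.
import numpy as np

pts = np.random.rand(200, 3)                 # stand-in for ToF depth points
pts[:, 2] = 0.1 * pts[:, 0] + 0.05 * pts[:, 1] + 0.5   # a tilted plane z = ax + by + c

A = np.c_[pts[:, :2], np.ones(len(pts))]     # fit z = a*x + b*y + c
(a, b, c), *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
normal = np.array([-a, -b, 1.0])
normal /= np.linalg.norm(normal)
tilt_deg = np.degrees(np.arccos(normal @ np.array([0.0, 0.0, 1.0])))
print(f"surface tilt: {tilt_deg:.1f} deg")   # angle to align the tool axis against
```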

09:06-09:18, Paper WeFT5.4
Field Testing an Assistive Robot Teleoperation System for People Who Are Legally Blind
Thamaraiselvan, Vishwaak Chandran (University of Texas at Arlington), Salunkhe, Param Dhairyasheel (University of Texas at Arlington), Theofanidis, Michail (University of Texas at Arlington), Gans, Nicholas (Nick) (University of Texas at Arlington)
Keywords: User-centered Design of Robots, Degrees of Autonomy and Teleoperation, Multi-modal Situation Awareness and Spatial Cognition
Abstract: This paper presents our preliminary study on enabling individuals who are legally blind to safely operate mobile robots and vehicles. To achieve this, we developed a teleoperation system with accessibility at its core. The system incorporates features that enhance usability and situational awareness, including assistive control based on artificial potential fields to prevent collisions and ensure smooth navigation. It also provides multimodal feedback through (a) haptic vibrations on the gamepad controller, which convey the proximity of nearby objects detected by the robot's laser sensor, and (b) color-coded overlays that differentiate paths, obstacles, and people through semantic segmentation performed by a deep neural network on the robot’s camera feed. To evaluate its effectiveness, we partnered with the Austin Lighthouse to conduct experiments in which legally blind participants used the system to successfully guide the robot through a testing area with obstacles.
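The assistive-control idea, an attractive pull toward the goal plus a repulsive push from laser-detected obstacles, with vibration intensity growing as obstacles get closer, can be sketched as follows; gains, ranges, and the scene are illustrative assumptions:

```python
# Illustrative sketch only: an artificial-potential-field velocity command and
# a proximity-driven vibration intensity. Gains, ranges, and scene are invented.
import numpy as np

def potential_field_cmd(pos, goal, obstacles, k_att=1.0, k_rep=0.5, d0=1.0):
    v = k_att * (goal - pos)                        # attractive term
    for ob in obstacles:
        d = np.linalg.norm(pos - ob)
        if d < d0:                                   # repulse only inside range d0
            v += k_rep * (1.0 / d - 1.0 / d0) / d**2 * (pos - ob) / d
    return v

def vibration_intensity(pos, obstacles, d0=1.0):
    d = min(np.linalg.norm(pos - ob) for ob in obstacles)
    return float(np.clip(1.0 - d / d0, 0.0, 1.0))    # 0 = far, 1 = touching

pos, goal = np.array([0.0, 0.0]), np.array([3.0, 0.0])
obstacles = [np.array([0.8, 0.2])]
print(potential_field_cmd(pos, goal, obstacles), vibration_intensity(pos, obstacles))
```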

09:18-09:30, Paper WeFT5.5
Human-Autonomy Collaboration for Escaping Local Minima
Gilbert, Alia (University of Michigan), Kaur, Gurnoor (University of Michigan), Mendez, Kevin (University of Michigan), Xie, Yule (University of Michigan), Robert, Lionel (University of Michigan), Tilbury, Dawn (University of Michigan)
Keywords: Cooperation and Collaboration in Human-Robot Teams, Degrees of Autonomy and Teleoperation, Curiosity, Intentionality and Initiative in Interaction
Abstract: Effective human supervision of autonomous robots in high-stakes scenarios requires efficient intervention, particularly when unmanned ground vehicles (UGVs) encounter local minima problems. This study investigates user interface designs to support human intervention in resolving such issues without a complete system takeover. We conducted a human-subjects experiment comparing two intervention methods: direct waypoint selection via mouse input and directional commands via arrow keys. Participants supervised two UGVs while simultaneously performing a secondary task, simulating real-world multitasking scenarios. Results demonstrate that mouse-based waypoint selection led to significantly more efficient UGV paths than arrow key controls and was also preferred by participants. Our findings contribute to the design of human-autonomy interfaces.

WeFT6 Regular Session, Auditorium 6
Motion and Navigation I
Chair: Ogata, Tetsuya | Waseda University
Co-Chair: Garcia Goo, Hideki | University of Twente

08:30-08:42, Paper WeFT6.1
Advanced AI-Based Slip Detection on an ESP32 for Bionic Grippers
Khan, Mohammad Haziq (Reutlingen University), Radke, Mario Alexander (Yahata GmbH), Hanna, Majd (Hochschule Reutlingen), Danner, Michael (Bochum University of Applied Science), Liu, Hongbing (Shanghai University of Engineering Science), Raetsch, Matthias (University of Reutlingen)
Keywords: Assistive Robotics, Innovative Robot Designs, Machine Learning and Adaptation
Abstract: Slip detection is a critical aspect of robotic manipulation, enhancing the stability and dexterity of robotic grasping systems. Conventional slip detection approaches often rely on computationally intensive hardware, limiting their applicability in lightweight and mobile robotic platforms. In this work, we present an edge-computing solution for slip detection using an ESP32 microcontroller, enabling on-device inference without external dependencies. The developed system is implemented on a Fin Ray-inspired soft robotic gripper, mounted on a UR3 robotic arm. Experimental validation using diverse objects with varying material properties demonstrates that our ESP32-based solution achieves reliable slip detection in real-time, with the responsiveness required for dynamic robotic grasping applications. By leveraging lightweight AI models, our proposed method provides an efficient, cost-effective and scalable solution for slip detection in mobile and stand-alone robotic systems.

08:42-08:54, Paper WeFT6.2
Design and Experimental Evaluation of Extended Social-DSM for Social-Aware Navigation
Chen, Xiang (University of Kaiserslautern), Liu, Steven (University of Kaiserslautern)
Keywords: Motion Planning and Navigation in Human-Centered Environments, Social Intelligence for Robots, Evaluation Methods
Abstract: For deploying autonomous mobile robots in the proximity of people, the integration of social norms has been shown to contribute to comfort and socially-aware behavior. In our previous work, we have investigated socially aware collision avoidance based on Dynamical System Modulation (DSM). However, there is a lack of experimental research on how robots can avoid a person in a comfortable manner. In this paper, we extend our previously proposed Socially Aware Dynamical System Modulation (Social-DSM) for socially-aware robot navigation by incorporating speed consideration and proactive motion generation based on human intention estimation. The novel framework was implemented on a real robot and validated through a proof-of-concept experiment conducted in a controlled environment, supplemented by a participant survey.

08:54-09:06, Paper WeFT6.3
An Adaptive Social Medial Axis Framework for Efficient Navigation in Human-Centered Environments
dos Santos, Tamires (Federal University of ABC), G. Macharet, Douglas (Universidade Federal De Minas Gerais)
Keywords: Motion Planning and Navigation in Human-Centered Environments, Social Intelligence for Robots, Applications of Social Robots
Abstract: The navigation of mobile robots in social semi-structured environments, such as airports, poses significant challenges due to human presence and unpredictable crowd dynamics. Effective robot navigation requires not only collision avoidance but also adherence to socially acceptable movement patterns. Traditional path planning algorithms, such as Dijkstra and A*, are designed for static environments and do not account for social factors like human density or dynamic environmental changes, potentially leading to inefficient or unsafe navigation. This paper proposes an adaptive navigation framework that integrates social information into both global and local planning. The global planner constructs a medial axis-based navigation graph, dynamically adjusted according to a human density map, while the local planner employs the Elastic Band technique to refine trajectories in real time, responding to unforeseen obstacles. The proposed system was implemented in a simulated environment inspired by an airport and compared with a traditional, non-social planning method. The results showed superior adaptability, efficiency, and safety when navigating within human-centered environments.
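The density-aware global planning described above can be illustrated with a shortest-path search whose edge costs blend geometric length with a human-density penalty. The graph, weights, and blend factor below are invented for illustration:

```python
# Illustrative sketch only: Dijkstra over a navigation graph whose edge cost is
# length * (1 + alpha * human_density). Graph and parameters are invented.
import heapq

def dijkstra(graph, start, goal, alpha=2.0):
    # graph: node -> [(neighbor, length, density), ...]
    pq, seen = [(0.0, start, [start])], set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, length, density in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(pq, (cost + length * (1 + alpha * density), nxt, path + [nxt]))
    return float("inf"), []

graph = {"gate": [("hall", 5, 0.9), ("corridor", 7, 0.1)],
         "hall": [("exit", 2, 0.8)], "corridor": [("exit", 3, 0.0)]}
print(dijkstra(graph, "gate", "exit"))     # prefers the longer, less crowded corridor
```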

09:06-09:18, Paper WeFT6.4
Falconry-Like Palm Landing by a Flapping-Wing Drone Based on the Human Gesture Interaction and Distance-Aware Flight Planning
Numazato, Kazuki (The University of Tokyo), Kan, Keiichiro (The University of Tokyo), Kitagawa, Masaki (The University of Tokyo), Li, Yunong (The University of Tokyo), Kübel, Johannes (University of Tokyo), Zhao, Moju (The University of Tokyo)
Keywords: Social Touch in Human–Robot Interaction, Motion Planning and Navigation in Human-Centered Environments, Creating Human-Robot Relationships
Abstract: Flapping-wing drones have attracted significant attention due to their biomimetic flight. They are considered more human-friendly due to their characteristics such as low noise and flexible wings, making them suitable for human-drone interactions. However, few studies have explored the practical interaction between humans and flapping-wing drones. In establishing a physical interaction system with flapping-wing drones, we can draw inspiration from falconers, who guide birds of prey to land on their arms. This interaction interprets the human body as a dynamic landing platform, which can be utilized in various scenarios such as crowded or spatially constrained environments. Thus, in this study, we propose a falconry-like interaction system in which a flapping-wing drone performs a palm landing motion on a human hand. To achieve a safe approach toward humans, we design a motion planning method that considers both physical and psychological factors of human safety, such as the distance from the user, the altitude, the approach direction, and the drone’s velocity. We use a commercial flapping platform with the implemented motion planning and conduct experiments to evaluate the palm landing performance and safety. The results demonstrate that our approach enables safe and smooth hand landing interactions. To the best of our knowledge, this is the first contact-based interaction achieved between flapping-wing drones and humans.

09:18-09:30, Paper WeFT6.5
User Perception of Socially-Aware Robot Navigation with Engagement-Based Proxemics
YAMABATA, Yuta (The University of Tokyo), Venture, Gentiane (The University of Tokyo)
Keywords: Motion Planning and Navigation in Human-Centered Environments, Detecting and Understanding Human Activity, Human Factors and Ergonomics
Abstract: Autonomous mobile robots that operate in human environments are expected to navigate in ways that feel safe to people. Proxemics—a concept of interpersonal distance—has the potential to support this goal. Previous studies have proposed proxemic-based navigation using single human cues, such as emotion or posture. However, relying on a single cue may be insufficient to accurately estimate human intent, limiting the potential for socially acceptable robot navigation. In this study, we present an engagement-aware navigation framework that dynamically adapts proxemic distance by incorporating multiple cues, including facial expressions, body and head orientation, and gaze. In an experiment replicating a daily-life scenario, results indicated that our method enhances perceived intelligence compared to a constant-distance approach, particularly for users with little or no prior robot experience. Furthermore, experienced users exhibited different comfort responses, suggesting that prior robot experience significantly influences human-robot proxemics.
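A minimal sketch of the adaptation loop described above, fusing several normalized cues into an engagement score and mapping it to a proxemic distance, follows; the cue names, weights, and distance range are illustrative assumptions, not the paper's model:

```python
# Illustrative sketch only: weighted cue fusion into an engagement score and a
# linear mapping to a keep-out distance. Weights and bounds are invented.
def engagement(gaze_on_robot, facing, expression_valence, w=(0.4, 0.35, 0.25)):
    """Each cue normalized to [0, 1]; returns an engagement score in [0, 1]."""
    return w[0] * gaze_on_robot + w[1] * facing + w[2] * expression_valence

def proxemic_distance(e, d_min=0.8, d_max=2.5):
    """Higher engagement lets the robot approach closer (distances in meters)."""
    return d_max - e * (d_max - d_min)

e = engagement(gaze_on_robot=0.9, facing=0.8, expression_valence=0.6)
print(f"engagement {e:.2f} -> keep distance {proxemic_distance(e):.2f} m")
```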

WeFT7 Regular Session, Auditorium 7
Robots in Families, Education, Therapeutic Contexts & Arts IV
Chair: Matarese, Marco | Italian Institute of Technology

08:30-08:42, Paper WeFT7.1
Emotive Design of a Robot Study Companion to Support University Learning
Calafà, Miriam (University of Tartu), Baksh, Farnaz (University of Tartu), Zorec, Matevž Borjan (University of Tartu), Kruusamäe, Karl (University of Tartu)
Keywords: Robots in Education, Therapy and Rehabilitation, Non-verbal Cues and Expressiveness, Motivations and Emotions in Robotics
Abstract: University students often face unique challenges in self-regulated learning, such as low motivation, emotional fatigue, and academic stress. While socially interactive robots hold promise as study companions, many current systems lack emotional expressivity and contextual relevance at the university level. This paper presents the design and simulation of a multimodal emotional expression system for the Robot Study Companion (RSC), aimed at supporting students' emotional engagement and learning motivation. Six target emotions (joy, caring, pride, anger, fun, and surprise) were selected for their documented positive effects on learner behavior and mapped to common academic scenarios. Using a digital twin framework, emotional expressions were developed across motion, facial design, color, and voice. This simulation enabled rapid, iterative design refinement and supports culturally adaptive testing across geographic contexts. Rather than focusing solely on traditional learning outcomes, this work emphasizes students' affective responses, impressions of the robot, and motivational impact. The project lays the foundation for cross-cultural user studies, beginning in Guyana and Estonia, which will evaluate emotional recognition, user experience, and the effectiveness of expressive behavior in enhancing academic engagement. These insights aim to inform the future development of emotionally intelligent and culturally responsive robotic companions for higher education.

08:42-08:54, Paper WeFT7.2
"Who Should I Believe?": User Interpretation and Decision-Making When a Family Healthcare Robot Contradicts Human Memory
Wang, Hong (Uppsala University), Calvo-Barajas, Natalia (Uppsala University), Winkle, Katie (Uppsala University), Castellano, Ginevra (Uppsala University)
Keywords: Creating Human-Robot Relationships, Robots in Education, Therapy and Rehabilitation
Abstract: This paper presents a study that examines how varying a robot's level of transparency and sociability influences user interpretation, decision-making and perceived trust when faced with conflicting information from a robot. In a 2 × 2 between-subjects online study, 176 participants watched videos of a Furhat robot acting as a family healthcare assistant and suggesting that a fictional user take medication at a different time from the one the user remembered. Results indicate that robot transparency influenced users' interpretation of information discrepancies: with a low transparency robot, the most frequent assumption was that the user had not correctly remembered the time, while with the high transparency robot, participants were more likely to attribute the discrepancy to external factors, such as a partner or another household member modifying the robot’s information. Additionally, participants exhibited a tendency toward over-trust, often prioritizing the robot’s recommendations over the user's memory, even when suspecting system malfunctions or third-party interference. These findings highlight the impact of transparency mechanisms in robotic systems, the complexity and importance associated with system access control for multi-user robots deployed in home environments, and the potential risks of users' over-reliance on robots in sensitive domains such as healthcare.

08:54-09:06, Paper WeFT7.3
Supporting Productivity Skill Development in College Students through Social Robot Coaching: A Proof-Of-Concept
Lalwani, Himanshi (New York University Abu Dhabi), Salam, Hanan Anna (New York University Abu Dhabi)
Keywords: Applications of Social Robots, Robot Companions and Social Robots, Robots in Education, Therapy and Rehabilitation
Abstract: College students often face academic challenges that hamper their productivity and well-being. Although self-help books and productivity apps are popular, they often fall short. Books provide generalized, non-interactive guidance, and apps are not inherently educational and can hinder the development of key organizational skills. Traditional productivity coaching offers personalized support, but is resource-intensive and difficult to scale. In this study, we present a proof-of-concept for a socially assistive robot (SAR) as an educational coach and a potential solution to the limitations of existing productivity tools and coaching approaches. The SAR delivers six different lessons on time management and task prioritization. Users interact via a chat interface, while the SAR responds through speech (with a toggle option). An integrated dashboard monitors progress, mood, engagement, confidence per lesson, and time spent per lesson. It also offers personalized productivity insights to foster reflection and self-awareness. We evaluated the system with 15 college students, achieving a System Usability Score of 79.2 and high ratings for overall experience and engagement. Our findings suggest that SAR-based productivity coaching can offer an effective and scalable solution to improve productivity among college students.
|
|
09:06-09:18, Paper WeFT7.4 | Add to My Program |
Privacy Perceptions in Robot-Assisted Well-Being Coaching: Examining the Roles of Information Transparency, User Control, and Proactivity |
|
Nilgar, Atikkhan Faridkhan (University of Siegen), Dietrich, Manuel (Honda Research Institute Europe), Van Laerhoven, Kristof (University of Siegen) |
Keywords: Robot Companions and Social Robots, Robots in Education, Therapy and Rehabilitation, Creating Human-Robot Relationships
Abstract: Social robots are increasingly recognized as valuable supporters in the field of well-being coaching. They can function as independent coaches or provide support alongside human coaches and healthcare professionals. In coaching interactions, these robots often handle sensitive information shared by users, making privacy a relevant issue. Despite this, little is known about the factors that shape users' privacy perceptions. This research systematically examines three key factors: (1) transparency about information usage, (2) the level of specific user control over how the robot uses their information, and (3) the robot's behavioral approach, i.e., whether it acts proactively or only responds on demand. Our results from an online study (N = 200) show that even when users grant the robot general access to personal data, they additionally expect the ability to explicitly control how that information is interpreted and shared during sessions. Experimental conditions that provided such user control received significantly higher ratings for perceived privacy appropriateness and trust. Compared to user control, the effects of transparency and proactivity on perceived privacy appropriateness were small and not statistically significant. The results suggest that merely informing users, or sharing proactively, is insufficient without accompanying user control. These insights underscore the need for further research on mechanisms that allow users to manage robots' information processing and sharing, especially when social robots take on more proactive roles alongside humans.
|
|
09:18-09:30, Paper WeFT7.5 | Add to My Program |
Teaching Methods Shape Expectations, but Performance Determines Human Trust in Robot Learners |
|
Chi, Vivienne Bihe (Brown University), Malle, Bertram (Brown University) |
Keywords: Monitoring of Behaviour and Internal States of Humans, Social Learning and Skill Acquisition Via Teaching and Imitation, Human Factors and Ergonomics
Abstract: To learn the complex norms and behaviors of society, social robots will need human teachers. But teachers must trust their learners so they will continue teaching them. The present study experimentally assigned different teaching methods (instruction, evaluation, or free choice between them) to human teachers of virtual robots. Human trust formation (and recovery from initial trust loss) was robust over these methods as long as robots markedly improved over the course of their training. Teaching methods elicited different initial expectations in teachers, but in the end, robots’ improving performance made all teachers converge at high levels of trust.
|
|
WeGT1 Regular Session, Auditorium 1 |
Add to My Program |
Cooperation and Collaboration in Human-Robot Teams III |
|
|
Chair: Robinette, Paul | University of Massachusetts Lowell |
|
12:50-13:02, Paper WeGT1.1 | Add to My Program |
Evaluating Pointing Gestures for Target Selection in Human-Robot Collaboration |
|
Sassali, Noora M. K. (Tampere University), Pieters, Roel S. (Tampere University) |
Keywords: HRI and Collaboration in Manufacturing Environments
Abstract: Pointing gestures are a common interaction method used in Human-Robot Collaboration for various tasks, ranging from selecting targets to guiding industrial processes. This study introduces a method for localizing pointed targets within a planar workspace. The approach employs pose estimation to detect shoulder and wrist keypoints, and uses linear extrapolation to extract gesturing data from an RGB-D stream. The study proposes a rigorous methodology and comprehensive analysis for evaluating pointing gestures and target selection in typical robotic tasks. In addition to evaluating accuracy, the gesturing method is integrated into a proof-of-concept robotic system, which includes object detection, speech transcription, and speech synthesis to demonstrate the integration of multiple modalities in a collaborative application. Finally, a discussion of the method's limitations and performance is provided to clarify its role in multimodal robotic systems. All developments are available at: https://github.com/NMKsas/gesture_pointer.git.
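As an illustration of the linear-extrapolation step described above, the following minimal sketch intersects the shoulder-to-wrist ray with a planar workspace. It assumes 3D keypoints have already been extracted from the RGB-D stream; all names and values are illustrative, not taken from the linked repository.

```python
import numpy as np

def locate_pointed_target(shoulder, wrist, plane_point, plane_normal):
    """Extrapolate the shoulder->wrist ray onto a planar workspace.

    All arguments are 3D points/vectors in the camera frame. Returns the
    ray/plane intersection, or None if the ray is (near-)parallel to the
    plane or points away from it.
    """
    shoulder = np.asarray(shoulder, float)
    direction = np.asarray(wrist, float) - shoulder   # pointing ray direction
    denom = np.dot(plane_normal, direction)
    if abs(denom) < 1e-9:                             # ray parallel to plane
        return None
    t = np.dot(plane_normal, np.asarray(plane_point, float) - shoulder) / denom
    if t < 0:                                         # plane is behind the arm
        return None
    return shoulder + t * direction

# Example: table plane z = 0, arm pointing down and forward.
target = locate_pointed_target(shoulder=[0.0, 0.0, 1.4],
                               wrist=[0.2, 0.1, 1.1],
                               plane_point=[0.0, 0.0, 0.0],
                               plane_normal=[0.0, 0.0, 1.0])
print(target)  # approximate target on the table, here [0.933, 0.467, 0.0]
```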
|
|
13:02-13:14, Paper WeGT1.2 | Add to My Program |
Help or Hinderance: Understanding the Impact of Robot Communication in Action Teams |
|
Tanjim, Tauhid (Cornell University), St George, Jonathan (Weill Cornell Medical College), Ching, Kevin (Weill Cornell Medicine), Taylor, Angelique (Cornell Tech) |
Keywords: Assistive Robotics, Multimodal Interaction and Conversational Skills, Cooperation and Collaboration in Human-Robot Teams
Abstract: The human-robot interaction (HRI) field has recognized the importance of enabling robots to interact with teams. Human teams rely on effective communication for successful collaboration in time-sensitive environments. Robots can play a role in enhancing team coordination through real-time assistance. Despite significant progress in human-robot teaming research, there remains an essential gap in how robots can effectively communicate with action teams using multimodal interaction cues in time-sensitive environments. We address this knowledge gap with an experimental in-lab study investigating how multimodal robot communication in action teams affects workload and human perception of robots. We explore team collaboration in a medical training scenario where a robotic crash cart (RCC) provides verbal and non-verbal cues to help users remember to perform iterative tasks and search for supplies. Our findings show that verbal cues for object search tasks and visual cues for task reminders reduce team workload and increase perceived ease of use and perceived usefulness more effectively than a robot with no feedback. Our work contributes to multimodal interaction research in the HRI field, highlighting the need for more human-robot teaming research to understand best practices for integrating collaborative robots in time-sensitive environments such as hospitals, search and rescue, and manufacturing.
|
|
13:14-13:26, Paper WeGT1.3 | Add to My Program |
Minimization Method Comparison for Bi-Level Optimization in the Context of Physical Collaboration |
|
Hadj Sassi, Sonia-Laure (LAAS, CNRS, Université Toulouse 3), Benoussaad, Mourad (INP-ENI of Tarbes), Watier, Bruno (LAAS, CNRS, Université Toulouse 3) |
Keywords: Programming by Demonstration, User-centered Design of Robots
Abstract: Physical human-robot interaction (pHRI) requires adapted robot control to ensure both efficient task execution and human safety and comfort. In this regard, human-in-the-loop robot control has been shown to be a viable option. Accurately modeling human behavior for the task at hand therefore becomes necessary, and optimization methods have proven efficient for this purpose. In this study, we compare two minimization methods, the Nelder-Mead simplex method, frequently used for this purpose, and the genetic algorithm NSGA-II, to identify the weights of the cost functions in our optimal control problem (OCP). This problem was formulated to model the movement of a human performing a collaborative pick-and-place with a human partner, as we consider human-human collaboration to be the most natural form of collaboration for human comfort. The modeled movement is an average computed from data of 30 subjects, i.e., 15 pairs. The results demonstrate that NSGA-II found a solution better correlated with our measured data, achieved lower root-mean-square errors (RMSE) across all axes, and converged faster than the Nelder-Mead method.
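To make the weight-identification setup concrete, here is a minimal sketch of the Nelder-Mead side of the comparison using SciPy. The forward model and "measured" trajectory are toy placeholders, not the authors' OCP; the NSGA-II counterpart could be set up analogously with a multiobjective library such as pymoo.

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-in for the OCP: given candidate cost-function weights,
# simulate a trajectory and score it against measured data via RMSE.
measured = np.sin(np.linspace(0, np.pi, 50))      # placeholder "average" movement

def simulate(weights):
    # Hypothetical forward model: a weighted blend of basis trajectories.
    t = np.linspace(0, np.pi, 50)
    basis = np.stack([np.sin(t), t / np.pi, np.ones_like(t)])
    return weights @ basis

def rmse(weights):
    return np.sqrt(np.mean((simulate(weights) - measured) ** 2))

result = minimize(rmse, x0=np.array([0.5, 0.5, 0.5]), method="Nelder-Mead")
print(result.x, result.fun)   # identified weights and residual RMSE
```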
|
|
13:26-13:38, Paper WeGT1.4 | Add to My Program |
SENSE: A Force-Sensor-Free, Model-Based Framework for Estimating External Interaction Forces on Humanoid Robots |
|
Fedsi, Chouaib (IBISC Laboratory, University of Evry Paris-Saclay), Mallem, Malik (Université D'Evry), Guiatni, Mohamed (Ecole Militaire Polytechnique) |
Keywords: Cooperation and Collaboration in Human-Robot Teams, Assistive Robotics, Computational Architectures
Abstract: This study introduces SENSE, an innovative sensorless external force estimation framework for humanoid robots, built on a model-driven approach. Unlike conventional methods that depend on the simplified Linear Inverted Pendulum (LIP) model and offline data processing, SENSE enables real-time and online estimation. By leveraging centroidal dynamics and integrating angular momentum, our approach accurately infers external forces without the need for Force/Torque (F/T) sensors. The proposed method is validated in simulation using the NAO humanoid robot model within the qiBullet physics engine, under both static standing and realistic dynamic walking conditions. Robustness is evaluated using two external force profiles: constant and time-varying (e.g., sinusoidal). Results show that SENSE provides accurate and stable force estimation, even in challenging scenarios such as foot contact transitions where low-cost sensors like Force-Sensing Resistors (FSRs) typically become unreliable. With its low computational cost and reliance only on onboard sensors, SENSE offers a practical alternative to expensive F/T sensors for medium-sized humanoid platforms. To promote reproducibility and further research, the full implementation is publicly available.
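A minimal sketch of the centroidal force balance the abstract builds on (ignoring the angular-momentum term): the external force is what remains of Newton's law on the center of mass after gravity and the modeled contact forces are accounted for. The mass value and function names are illustrative, not from the released implementation.

```python
import numpy as np

GRAVITY = np.array([0.0, 0.0, -9.81])

def estimate_external_force(mass, com_acc, contact_forces):
    """Newton's law on the centroid: m*a = m*g + sum(F_contact) + F_ext.

    `com_acc` is the CoM acceleration from onboard kinematics/IMU, and
    `contact_forces` are modeled foot contact forces (no F/T sensor).
    """
    return mass * (np.asarray(com_acc, float) - GRAVITY) - np.sum(contact_forces, axis=0)

# Quiet standing: contacts carry the full weight, so F_ext should be ~0.
m = 5.3  # approximate NAO mass in kg (illustrative)
f_ext = estimate_external_force(m, com_acc=[0.0, 0.0, 0.0],
                                contact_forces=[np.array([0.0, 0.0, m * 9.81])])
print(f_ext)  # ~[0, 0, 0]
```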
|
|
13:38-13:50, Paper WeGT1.5 | Add to My Program |
Robustness to Object Occlusions in Human-Robot Collaborative Assembly Using Compact Prediction Trees |
|
Semeraro, Francesco (The University of Manchester), Pilato, Giovanni (CNR, National Research Council of Italy), Cangelosi, Angelo (University of Manchester) |
Keywords: HRI and Collaboration in Manufacturing Environments, Cooperation and Collaboration in Human-Robot Teams, Computational Architectures
Abstract: The advancement of collaborative robots emphasizes fast, efficient, and adaptive learning, particularly in environments with visual occlusions. While deep learning has proven effective in robotics, its reliance on large datasets and high computational resources, together with its limited interpretability, poses significant challenges. To address these issues, we explored the use of the Compact Prediction Tree as an efficient and explainable machine learning approach for sequential pattern recognition. We used this algorithm in a human-robot collaboration scenario, where a robot assisted a user in assembling a cubic scaffold by handing over the correct assembly pieces, even under partial visibility of the workspace. Experimental validations showed that the system completed the task perfectly when a single component of the assembly was occluded, and maintained an average task completion rate of at least 0.8 when two components were occluded, even when trained on only 20% of the dataset. The code repository of the robotic system is publicly available.
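For intuition, the following greatly simplified stand-in shows the flavor of this kind of sequence prediction: given the recently observed assembly steps, vote over what followed the same context in the training sequences. A real Compact Prediction Tree adds compressed trie, inverted-index, and lookup-table structures that this sketch omits.

```python
from collections import Counter

class SimplifiedSequencePredictor:
    """Greatly simplified stand-in for the Compact Prediction Tree:
    score candidate next steps by counting, over all training sequences
    containing the recent context, which symbol follows it."""

    def __init__(self, sequences):
        self.sequences = [list(s) for s in sequences]

    def predict(self, context):
        context = list(context)
        votes = Counter()
        for seq in self.sequences:
            # count every occurrence of the context and vote for the
            # symbol that immediately follows it
            for i in range(len(seq) - len(context)):
                if seq[i:i + len(context)] == context:
                    votes[seq[i + len(context)]] += 1
        return votes.most_common(1)[0][0] if votes else None

# Assembly-step sequences observed during training (hypothetical labels).
predictor = SimplifiedSequencePredictor(["ABCD", "ABCE", "XBCD"])
print(predictor.predict("BC"))  # -> 'D' (majority consequent), despite occlusion of 'A'
```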
|
|
WeGT2 Regular Session, Auditorium 2 |
Add to My Program |
Social Intelligence of Robots III |
|
|
Chair: Nakagawa, Satoshi | The University of Tokyo |
|
12:50-13:02, Paper WeGT2.1 | Add to My Program |
The Duration of Robot Gaze Affects People’s Attitudes towards Humanoid Robots |
|
Roselli, Cecilia (Italian Institute of Technology), Lombardi, Maria (Italian Institute of Technology), Natale, Lorenzo (Istituto Italiano Di Tecnologia), Wykowska, Agnieszka (Istituto Italiano Di Tecnologia) |
Keywords: Robot Companions and Social Robots, Non-verbal Cues and Expressiveness, User-centered Design of Robots
Abstract: Gaze plays a crucial role in human social behavior. Notably, the same applies to interactions between humans and robots, as gaze can communicate intentions and express interest or aversion similarly to what happens among humans. Besides the direction of gaze (direct vs. averted), its temporal characteristics, such as duration, significantly affect our perception and interpretation of the other's behavior. In the context of Human-Robot Interaction (HRI), this is still poorly investigated. Thus, the present study aimed to investigate whether, and how, the duration of a robot's direct gaze impacts participants' attitudes towards robots. To do so, participants observed the humanoid robot iCub, whose direct gaze varied in duration between 1 and 8 seconds. They then used three Likert scales to rate to what extent the robot's gaze made them feel i) comfortable, ii) trustful, and iii) threatened, with these ratings operationalizing their attitudes towards the robot. Results showed that, overall, a positive relationship emerged between the duration of the robot's gaze and participants' attitudes, i.e., longer gaze duration led to higher ratings on all three Likert scales.
|
|
13:02-13:14, Paper WeGT2.2 | Add to My Program |
An LLM-Based Architecture for Socially Intelligent Robot Navigation Based on Social Cues |
|
Ruo, Andrea (University of Modena and Reggio Emilia, Italy), Cacace, Jonathan (Eurecat), Dalmau-Moreno, Magí (Eurecat Technology Centre), Sabattini, Lorenzo (University of Modena and Reggio Emilia), Villani, Valeria (University of Modena and Reggio Emilia) |
Keywords: Motion Planning and Navigation in Human-Centered Environments, Social Intelligence for Robots, Non-verbal Cues and Expressiveness
Abstract: The integration of robots into human-shared environments has driven advancements in social robotics and natural Human-Robot Interaction. A key challenge in this field is to enable robots to interpret and respond to social cues, ensuring fluid, context-aware, and socially acceptable interactions. To address this, we propose a two-layer architecture for social navigation. The high-level reasoning layer utilizes Large Language Models to interpret contextual and environmental cues, generating constraints for navigation. These constraints are then enforced by the low-level layer, which employs Control Barrier Functions to ensure smooth and socially compliant robot movements. We validate our approach in a dynamic simulation environment, demonstrating effective constraint enforcement and socially acceptable navigation behavior.
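A scalar sketch of how the low-level layer can enforce an LLM-generated constraint with a control barrier function: with barrier h = distance − d_min and approach speed v, safety requires −v + αh ≥ 0, so the commanded speed is capped at αh. The single-integrator model and all values are illustrative, not the paper's controller.

```python
import numpy as np

def cbf_filter(velocity_cmd, distance, d_min, alpha=1.0):
    """Scalar control-barrier-function filter for approach speed.

    Barrier h = distance - d_min, where d_min is supplied by the
    high-level LLM layer (e.g., a wider margin around a person facing
    away). With d_dot = -v (v > 0 means approaching), the CBF condition
    h_dot + alpha*h >= 0 reduces to v <= alpha * (distance - d_min).
    """
    v_max = alpha * (distance - d_min)
    return float(np.clip(velocity_cmd, -np.inf, v_max))

print(cbf_filter(velocity_cmd=0.8, distance=1.5, d_min=1.2))  # capped at 0.3 m/s
print(cbf_filter(velocity_cmd=0.8, distance=3.0, d_min=1.2))  # 0.8 passes through
```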
|
|
13:14-13:26, Paper WeGT2.3 | Add to My Program |
Beyond the Plane: A 3D Representation of Human Personal Space for Socially-Aware Robotics |
|
Ribeiro, Caio (Universidade Federal De Minas Gerais), G. Macharet, Douglas (Universidade Federal De Minas Gerais) |
Keywords: Social Intelligence for Robots, Applications of Social Robots, Cognitive Skills and Mental Models
Abstract: As robots become increasingly present in human environments, they must exhibit socially appropriate behavior, especially by respecting personal space, a psychological boundary influencing comfort based on proximity. While most existing models focus on 2D representations, the vertical dimension is often overlooked. We propose a novel three-dimensional personal space model that integrates horizontal proximity (XY-plane) with vertical sensitivity (Z-axis). The Z-axis discomfort function is derived using Maximum Permissible Pressure (MPP) to identify sensitive body regions and is converted into a continuous function via a fuzzy system. This is combined with a traditional planar discomfort model using a geometric mean to produce a complete 3D discomfort representation. To our knowledge, this is the first method capable of evaluating discomfort in 3D space at any robot component's position, accounting for the person's configuration and height. Our results underscore the importance of vertical modeling and demonstrate adaptability across individuals of different heights.
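A minimal sketch of the combination step described above: a planar discomfort value and a vertical one fused by a geometric mean. The Gaussian vertical profile below is only a placeholder for the paper's fuzzy, MPP-derived function.

```python
import numpy as np

def discomfort_3d(xy_discomfort, z_discomfort):
    """Fuse the planar discomfort value with the vertical one via a
    geometric mean, as in the paper's formulation (inputs in [0, 1])."""
    return float(np.sqrt(xy_discomfort * z_discomfort))

def z_discomfort_placeholder(z, person_height):
    """Hypothetical stand-in for the fuzzy, MPP-derived vertical profile:
    sensitivity peaks near head height and tapers off below."""
    return float(np.exp(-((z - person_height) ** 2) / (2 * 0.2 ** 2)))

# Robot gripper at 1.6 m near a 1.7 m tall person, moderately close in XY.
print(discomfort_3d(0.5, z_discomfort_placeholder(1.6, 1.7)))
```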
|
|
13:26-13:38, Paper WeGT2.4 | Add to My Program |
Granting a Second Chance: How Recovery Strategies Shape Perceptions of Intelligent Agent Errors |
|
Miao, Xin (Tsinghua University), Tang, Jie (Tsinghua University), Peng, Kaiping (Tsinghua University), Wang, Fei (Tsinghua University) |
Keywords: Applications of Social Robots, Creating Human-Robot Relationships, Social Intelligence for Robots
Abstract: Intelligent agents (IAs) are increasingly integrated into daily life, performing tasks ranging from household assistance to supporting various industries. However, their inevitable errors present significant challenges in human-IA interactions, often leading to reduced user trust and satisfaction. Consequently, effective recovery strategies are crucial for mitigating the negative impacts of such errors. This work investigates how different recovery strategies and error types influence human perceptions across two experimental studies. The results show that recovery strategies significantly improve human perceptions of IAs following errors. While perceptions of warmth can recover to levels comparable to no-error conditions, perceptions of competence do not fully recover. Furthermore, the effectiveness of recovery strategies varies depending on the type of error. Specifically, for execution errors, strategies such as apologies, providing alternative options, and self-repair are effective; however, the impact of apology strategies is inconsistent across contexts. For planning errors, only self-repair strategies consistently yield positive effects, particularly improving perceptions of warmth. These findings offer valuable insights for the design and development of intelligent agents, emphasizing the importance of tailoring recovery strategies to specific error types. By adopting approaches focused on specific error types, designers can optimize human perceptions and foster more effective collaboration in human-IA interactions.
|
|
13:38-13:50, Paper WeGT2.5 | Add to My Program |
Fool Me Twice, Shame on Me: Being Deceived by a Robot Does Not Make People More Cautious |
|
Kimura, Yuki (Nara Women's University), Anzai, Emi (Nara Women's University), Saiwaki, Naoki (Nara Women's University), Shiomi, Masahiro (ATR) |
Keywords: Applications of Social Robots, Narrative and Story-telling in Interaction, Robot Companions and Social Robots
Abstract: Given the spread of deceptive techniques in our daily environments, we must consider how to teach interaction literacy, i.e., enabling trust calibration and appropriate responsive action through critical thinking and observation of interaction partners. For this purpose, we conducted an experiment in which a social robot deliberately deceives participants in low-risk, non-harmful tasks (such as consent form reading) in both face-to-face and online survey settings, by analogy with an evacuation drill or inoculation theory. We developed a semi-autonomous robot system in which the robot provides information with or without deceptive strategies, attempting either to mislead (with deception) or to encourage careful reading (without deception) of the consent form. After the tasks, the participants were informed of the robot's role and evaluated their impressions of it. Following the laboratory experiment, participants voluntarily completed a follow-up online survey with a similar consent form reading task. The experimental results showed that participants were indeed misled by the robot's deceptive strategies, but their negative impressions of the robot were limited. Moreover, participants who were deceived during the first task were also deceived in the second task. This suggests that our approach is a promising method for identifying individuals who are easily deceived in non-harmful situations, thus aiding the development of interaction literacy programs.
|
|
WeGT3 Regular Session, Auditorium 3 |
Add to My Program |
Storytelling in HRI |
|
|
Chair: Tsumura, Takahiro | Toyo University |
|
12:50-13:02, Paper WeGT3.1 | Add to My Program |
Introducing and Evaluating a System for an Automatic Multimodal Robotic Storyteller Featuring the Pepper Robot |
|
Steinhaeusser, Sophia C. (University of Würzburg), Maier, Sophia (University of Würzburg), Lugrin, Birgit (University of Wuerzburg) |
Keywords: Storytelling in HRI, Non-verbal Cues and Expressiveness, Affective Computing
Abstract: Storytelling has accompanied humans since the beginning of mankind, and the reception of stories has become one of the most popular leisure activities. With their multimodal abilities, social robots bear great potential as storytellers. In this contribution, we present the Automatic Multimodal Robotic Storyteller, a system implemented for the Pepper robot that automatically annotates a given story with emotions and has the robot perform it using emotional body language, emotion-inducing music, and colored lights. The system is empirically grounded in the results of several studies that are briefly summarized in this work. In an encompassing study, we compared the final Automatic Multimodal Robotic Storyteller to traditional storytelling media, namely text and audio book. While reading a story is preferred for the perceived control it offers, robotic storytellers are received as well as today's traditional storytelling media in most aspects of the storytelling experience, and even surpass the audio book in terms of emotion induction. Given this potential, our Automatic Multimodal Robotic Storyteller system could help robotic storytellers become a common storytelling medium by making a wide range of stories accessible to them: our pipeline can simply be applied to any given story in text format. The system will be made publicly available upon acceptance of this manuscript, and the approach can also serve as guidance for other robotic storytellers.
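To illustrate the annotate-then-perform pipeline, here is a toy sketch: classify each sentence's emotion, then look up matching body language, music, and light cues. The keyword classifier and cue tables are placeholders, not the authors' system or the Pepper API.

```python
# Hypothetical cue tables; the real system maps annotated emotions to
# Pepper's gestures, music, and lights through its own pipeline.
EMOTION_CUES = {
    "joy":     {"gesture": "open_arms", "music": "major_theme", "light": "yellow"},
    "sadness": {"gesture": "head_down", "music": "minor_theme", "light": "blue"},
    "fear":    {"gesture": "step_back", "music": "tremolo",     "light": "dim_white"},
}

def classify_emotion(sentence):
    """Placeholder keyword classifier; the real system would use a
    trained model for per-sentence emotion annotation."""
    lowered = sentence.lower()
    if any(w in lowered for w in ("happy", "laughed", "sun")):
        return "joy"
    if any(w in lowered for w in ("cried", "alone", "lost")):
        return "sadness"
    return "fear"

def annotate_story(sentences):
    return [(s, EMOTION_CUES[classify_emotion(s)]) for s in sentences]

for sentence, cues in annotate_story(["The sun rose and she laughed.",
                                      "At night she was alone."]):
    print(sentence, "->", cues)
```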
|
|
13:02-13:14, Paper WeGT3.2 | Add to My Program |
Design and Evaluation of Engaging Storytelling Experience through Interactive Scripted Performance with a Character Robot |
|
Nagata, Ayaha (University of Tokyo), Sawada, Tomoka (The University of Tokyo), Ichikura, Aiko (University of Tokyo), Obinata, Yoshiki (The University of Tokyo), Kanazawa, Naoaki (The University of Tokyo), Makabe, Tasuku (The University of Tokyo), Yanokura, Iori (University of Tokyo), Okada, Kei (The University of Tokyo)
Keywords: Storytelling in HRI, Applications of Social Robots
Abstract: This study proposes an interactive scripted performance with a character robot to create an engaging storytelling experience. By involving human participants as performers alongside the robot, we investigated whether this approach could facilitate interaction and foster engagement in the storytelling process. We developed a character robot system capable of executing predefined phrases and movements to reinforce immersion, thereby enhancing deeper engagement. We then conducted interactive scripted performance events at after-school care facilities. The results suggest that interactive scripted performance encouraged participants to perceive the robot as communicative and to engage in the play actively. Moreover, the findings also imply that the developed character robot enhanced participants' story immersion and induced physical engagement.
|
|
13:14-13:26, Paper WeGT3.3 | Add to My Program |
Teachable Social Robots: Managing Expectations in Highly Anthropomorphic Designs |
|
Majumder, Tanu (Department of Social Science, RPTU Kaiserslautern-Landau, Kaiser), Ashok, Ashita (University of Kaiserslautern-Landau), Rosén, Julia (McMaster University), Sevinc, Azra (University of Kaiserslautern-Landau), Berns, Karsten (University of Kaiserslautern) |
Keywords: Storytelling in HRI, Social Learning and Skill Acquisition Via Teaching and Imitation, Anthropomorphic Robots and Virtual Humans
Abstract: Highly anthropomorphic robots risk triggering expectation mismatches that can lead to disappointment when robot behavior falls short. This study investigates how actively teaching a social humanoid robot to narrate a story influences user expectations, negative attitudes, anxiety, and perceptions of storytelling quality, compared to passive observation. University students (N=40) were assigned to either a teaching or non-teaching condition. Teaching participants instructed the robot using speech and gestures, while the non-teaching group watched the resulting video of the robot narrating the story. Results showed that active teaching reduced expectation shifts, suggesting greater alignment between user beliefs and robot capability. However, robot-related anxiety increased in the teaching group, while the non-teaching group consistently reported higher negative attitudes. Storytelling quality was more strongly influenced by robot anthropomorphism in the non-teaching group. Participants who blamed the robot gave lower storytelling ratings, whereas those who blamed the AI model or programmer were more lenient. These findings highlight the importance of managing expectations through the interactive teaching of robot tutees.
|
|
13:26-13:38, Paper WeGT3.4 | Add to My Program |
RoboBuddy in the Classroom: Exploring LLM-Powered Social Robots for Storytelling in Learning and Integration Activities |
|
Tozadore, Daniel (University College London (UCL)), Ertug, Nur (EPFL), Chaker, Yasmine (École Polytechnique Fédérale De Lausanne), Abderrahim, Mortadha (École Polytechnique Fédérale De Lausanne) |
Keywords: Robots in Education, Therapy and Rehabilitation, Child-Robot Interaction, Storytelling in HRI
Abstract: Creating and improvising scenarios to introduce content is an enriching technique in education. However, it significantly increases planning time, all the more so when complex technologies such as social robots are involved. Furthermore, multicultural integration is commonly addressed within regular activities because of the already tight curriculum. Addressing these issues with a single solution, we implemented an intuitive interface that allows teachers to create scenario-based activities from their regular curriculum using LLMs and social robots. We co-designed different activity frameworks with 4 teachers and deployed the system in a one-week study with 27 students. Beyond validating the system's efficacy, our findings highlight the positive impact of integration policies as perceived by the children and demonstrate the importance of scenario-based activities for students' enjoyment, which was observed to be significantly higher when storytelling was applied. Additionally, several implications of using LLMs and social robots in long-term classroom activities are discussed.
|
|
13:38-13:50, Paper WeGT3.5 | Add to My Program |
EmojiVoice: Towards Long-Term Controllable Expressivity in Robot Speech |
|
Tuttösí, Paige (Simon Fraser University), Mehta, Shivam (KTH Royal Institute of Technology), Syvenky, Zachary (Simon Fraser University), Burkanova, Bermet (Simon Fraser University), Henter, Gustav Eje (KTH Royal Institute of Technology), Lim, Angelica (Simon Fraser University) |
Keywords: Sound design for robots, Non-verbal Cues and Expressiveness, Storytelling in HRI
Abstract: Humans vary their expressivity when speaking for extended periods to maintain engagement with their listener. Although social robots tend to be deployed with "expressive" joyful voices, they lack this long-term variation found in human speech. Foundation-model text-to-speech systems are beginning to mimic the expressivity of human speech, but they are difficult to deploy offline on robots. We present EmojiVoice, a free, customizable text-to-speech (TTS) toolkit that allows social roboticists to build temporally variable, expressive speech on social robots. We introduce emoji-prompting to allow fine-grained control of expressivity at the phrase level, and use the lightweight Matcha-TTS backbone to generate speech in real time. We explore three case studies: (1) a scripted conversation with a robot assistant, (2) a storytelling robot, and (3) an autonomous speech-to-speech interactive agent. We found that varied emoji prompting improved the perception and expressivity of speech over a long period in the storytelling task, but the expressive voice was not preferred in the assistant use case.
|
|
WeGT4 Regular Session, Blauwe Zaal |
Add to My Program |
Applications of Social Robots VII |
|
|
Chair: Tapus, Adriana | ENSTA Paris, Institut Polytechnique De Paris |
|
12:50-13:02, Paper WeGT4.1 | Add to My Program |
STREAK: Streaming Network for Continual Learning of Object Relocations under Household Context Drifts |
|
Bartoli, Ermanno (KTH Royal Institute of Technology), Dogan, Fethiye Irmak (KTH Royal Institute of Technology), Leite, Iolanda (KTH Royal Institute of Technology) |
Keywords: Machine Learning and Adaptation, Assistive Robotics
Abstract: In real-world settings, robots are expected to assist humans across diverse tasks and still continuously adapt to dynamic changes over time. For example, in domestic environments, robots can proactively help users by fetching needed objects based on learned routines, which they infer by observing how objects move over time. However, data from these interactions are inherently non-independent and non-identically distributed (non-i.i.d.); e.g., a robot assisting multiple users may encounter varying data distributions as individuals follow distinct habits. This creates a challenge: integrating new knowledge without catastrophic forgetting. To address this, we propose STREAK (Spatio Temporal RElocation with Adaptive Knowledge retention), a continual learning framework for real-world robotic learning. It leverages a streaming graph neural network with regularization and rehearsal techniques to mitigate context drifts while retaining past knowledge. Our method is time- and memory-efficient, enabling long-term learning without retraining on all past data, which becomes infeasible as data grow in real-world interactions. We evaluate STREAK on the task of incrementally predicting human routines over 50+ days across different households. Results show that it effectively prevents catastrophic forgetting while maintaining generalization, making it a scalable solution for long-term human-robot interactions.
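One ingredient of rehearsal-based continual learning of the kind described above is a replay memory sampled alongside new observations. A minimal reservoir-sampling sketch follows; the buffer size, example format, and training hook are illustrative, not STREAK's implementation.

```python
import random

class RehearsalBuffer:
    """Reservoir-style memory for rehearsal, one common ingredient of
    continual learners that mitigate catastrophic forgetting."""

    def __init__(self, capacity=500):
        self.capacity = capacity
        self.buffer = []
        self.seen = 0

    def add(self, example):
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(example)
        else:                                  # reservoir sampling keeps a
            j = random.randrange(self.seen)    # uniform sample of the stream
            if j < self.capacity:
                self.buffer[j] = example

    def sample(self, k):
        return random.sample(self.buffer, min(k, len(self.buffer)))

# Each day, train on today's observations plus a replayed mini-batch.
memory = RehearsalBuffer()
for day, observations in enumerate([["cup->sink"], ["cup->shelf", "keys->bowl"]]):
    for obs in observations:
        memory.add(obs)
    batch = observations + memory.sample(4)
    # model.train_step(batch)  # hypothetical GNN update
    print(f"day {day}: train on {batch}")
```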
|
|
13:02-13:14, Paper WeGT4.2 | Add to My Program |
Human Interactions with Autonomous Mobile Robots in Public Spaces: A Survey |
|
Karakaya, Rabia (University of York), Camara, Fanta (University of York), Perinpanayagam, Suresh (University of York) |
Keywords: Human Factors and Ergonomics, Motion Planning and Navigation in Human-Centered Environments, Multimodal Interaction and Conversational Skills
Abstract: As autonomous mobile robots (AMRs) are increasingly deployed in public spaces, understanding how they are designed to interact with humans is crucial. This paper reviews existing research on AMRs in real-world environments, analysing their development and deployment from a human-robot interaction (HRI) perspective. Through analysis of 46 selected studies from the Scopus and Web of Science databases, this study examines the interaction strategies employed in AMRs, the key design requirements, and the challenges they face in human environments. The findings highlight a growing emphasis on delivery, assistance, and guide robots, with interaction methods primarily relying on visual or explicit cues. This study also identifies challenges related to public perception, safety, and usability, emphasising the need for improved design strategies to enhance HRI effectiveness and ensure the seamless integration of AMRs into everyday environments.
|
|
13:14-13:26, Paper WeGT4.3 | Add to My Program |
Human-Robot Collaboration in Surgery: Advances and Challenges towards Autonomous Surgical Assistants |
|
Colan, Jacinto (Nagoya University), Davila, Ana (Nagoya University), Yamada, Yutaro (Nagoya University), Hasegawa, Yasuhisa (Nagoya University) |
Keywords: Medical and Surgical Applications, Cooperation and Collaboration in Human-Robot Teams, Degrees of Autonomy and Teleoperation
Abstract: Human-robot collaboration in surgery represents a significant area of research, driven by the increasing capability of autonomous robotic systems to assist surgeons in complex procedures. This systematic review examines the advancements and persistent challenges in the development of autonomous surgical robotic assistants (ASARs), focusing specifically on scenarios where robots provide meaningful and active support to human surgeons. Adhering to the PRISMA guidelines, a comprehensive literature search was conducted across the IEEE Xplore, Scopus, and Web of Science databases, resulting in the selection of 32 studies for detailed analysis. Two primary collaborative setups were identified: teleoperation-based assistance and direct hands-on interaction. The findings reveal a growing research emphasis on ASARs, with predominant applications currently in endoscope guidance, alongside emerging progress in autonomous tool manipulation. Several key challenges hinder wider adoption, including the alignment of robotic actions with human surgeon preferences, the necessity for procedural awareness within autonomous systems, the establishment of seamless human-robot information exchange, and the complexities of skill acquisition in shared workspaces. This review synthesizes current trends, identifies critical limitations, and outlines future research directions essential to improve the reliability, safety, and effectiveness of human-robot collaboration in surgical environments.
|
|
13:26-13:38, Paper WeGT4.4 | Add to My Program |
Context-Aware Risk Estimation in Home Environments: A Probabilistic Framework for Service Robots |
|
Ishii, Sena (Tohoku University), Chikhalikar, Akash (Tohoku University), Ravankar, Ankit A. (Tohoku University), Salazar Luces, Jose Victorio (Tohoku University), Hirata, Yasuhisa (Tohoku University)
Keywords: Multi-modal Situation Awareness and Spatial Cognition, Assistive Robotics, Detecting and Understanding Human Activity
Abstract: We present a novel framework for estimating accident-prone regions in everyday indoor scenes, aimed at improving real-time risk awareness in service robots operating in human-centric environments. As robots become integrated into daily life, particularly in homes, the ability to anticipate and respond to environmental hazards is crucial for ensuring user safety, trust, and effective human-robot interaction. Our approach models object-level risk and context through a semantic graph-based propagation algorithm. Each object is represented as a node with an associated risk score, and risk propagates asymmetrically from high-risk to low-risk objects based on spatial proximity and accident relationships. This enables the robot to infer potential hazards even when they are not explicitly visible or labeled. Designed for interpretability and lightweight onboard deployment, our method is validated on a dataset with human-annotated risk regions, achieving a binary risk detection accuracy of 75%. The system demonstrates strong alignment with human perception, particularly in scenes involving sharp or unstable objects. These results underline the potential of context-aware risk reasoning to enhance robotic scene understanding and proactive safety behaviors in shared human-robot spaces. This framework could serve as a foundation for future systems that make context-driven safety decisions, provide real-time alerts, or autonomously assist users in avoiding or mitigating hazards within home environments.
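A toy sketch of asymmetric risk propagation over a semantic scene graph: risk flows only from higher-risk to lower-risk objects and decays with distance. The scores, positions, and exponential decay are illustrative, not the paper's calibrated model.

```python
import math

# Illustrative object risk scores and 2D positions.
objects = {"knife": 0.9, "cutting_board": 0.2, "sofa": 0.1}
positions = {"knife": (0.0, 0.0), "cutting_board": (0.3, 0.0), "sofa": (3.0, 0.0)}

def propagate(scores, decay=1.5):
    updated = dict(scores)
    for src, r_src in scores.items():
        for dst, r_dst in scores.items():
            if src == dst or r_src <= r_dst:
                continue  # asymmetric: only high-risk -> low-risk
            d = math.dist(positions[src], positions[dst])
            contribution = r_src * math.exp(-decay * d)  # distance attenuation
            updated[dst] = max(updated[dst], contribution)
    return updated

print(propagate(objects))
# the nearby cutting board inherits risk from the knife; the distant sofa barely changes
```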
|
|
13:38-13:50, Paper WeGT4.5 | Add to My Program |
LLM-Based Ambiguity Detection in Natural Language Instructions for Collaborative Surgical Robots |
|
Davila, Ana (Nagoya University), Colan, Jacinto (Nagoya University), Hasegawa, Yasuhisa (Nagoya University) |
Keywords: Medical and Surgical Applications, Cooperation and Collaboration in Human-Robot Teams, Linguistic Communication and Dialogue
Abstract: Ambiguity in natural language instructions poses significant risks in safety-critical human-robot interaction, particularly in domains such as surgery. To address this, we propose a framework that uses Large Language Models (LLMs) for ambiguity detection specifically designed for collaborative surgical scenarios. Our method employs an ensemble of LLM evaluators, each configured with distinct prompting techniques to identify linguistic, contextual, procedural, and critical ambiguities. A chain-of-thought evaluator is included to systematically analyze instruction structure for potential issues. Individual evaluator assessments are synthesized through conformal prediction, which yields non-conformity scores based on comparison to a labeled calibration dataset. Evaluating Llama 3.2 11B and Gemma 3 12B, we observed classification accuracy exceeding 60% in differentiating ambiguous from unambiguous surgical instructions. Our approach improves the safety and reliability of human-robot collaboration in surgery by offering a mechanism to identify potentially ambiguous instructions before robot action.
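To make the conformal step concrete, a minimal split-conformal sketch follows: the threshold is the (1 − α)-adjusted quantile of nonconformity scores from the labeled calibration set, and a new instruction is flagged when its ensemble score exceeds it. The scores shown are invented for illustration, and reducing the ensemble to a single mean score is a simplification of the paper's synthesis.

```python
import numpy as np

def conformal_threshold(calibration_scores, alpha=0.1):
    """Split-conformal quantile: with nonconformity scores from a labeled
    calibration set, targets ~(1 - alpha) coverage on new inputs."""
    n = len(calibration_scores)
    q = np.ceil((n + 1) * (1 - alpha)) / n
    return np.quantile(calibration_scores, min(q, 1.0), method="higher")

# Hypothetical ensemble: each evaluator emits an ambiguity score in [0, 1];
# an instruction's nonconformity score is its mean ensemble score.
calibration = np.array([0.1, 0.15, 0.2, 0.22, 0.3, 0.35, 0.4, 0.5, 0.6, 0.8])
tau = conformal_threshold(calibration, alpha=0.2)

ensemble_scores = [0.8, 0.9, 0.85]   # e.g., linguistic, contextual, procedural evaluators
flagged = np.mean(ensemble_scores) > tau
print(tau, flagged)                  # if True, ask for clarification before acting
```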
|
|
WeGT5 Regular Session, Auditorium 5 |
Add to My Program |
Autonomy and Teleoperation II |
|
|
Chair: Sasaki, Yoko | National Institute of Advanced Industrial Science and Technology |
Co-Chair: Malik, Muhammad Abdul Basit | King's College London |
|
12:50-13:02, Paper WeGT5.1 | Add to My Program |
We-Information Can Facilitate Performance in Joint Teleoperation Over a Humanoid Robot |
|
Wozniak, Mateusz (Italian Institute of Technology), Ari, Ilkay (Italian Institute of Technology), De Tommaso, Davide (Istituto Italiano Di Tecnologia), Wykowska, Agnieszka (Istituto Italiano Di Tecnologia) |
Keywords: Embodiment, Empathy and Intersubjectivity, Cooperation and Collaboration in Human-Robot Teams, Applications of Social Robots
Abstract: In this study, we developed a setup allowing two participants (“operators”) to jointly control a single humanoid robot body. Specifically, each operator controlled one robot arm using their anatomically congruent hand (left hand controlling the left robot arm and vice versa). In our task, participants had to move each robot arm into one of two possible positions (arm raised or lowered). We used this setup to investigate (1) whether presenting prior information about the relationship between the movements performed by each participant (“We-information”) can facilitate performance in this task, (2) how joint control over a robot body affects the sense of control over the robot and the sense of joint agency with the other operator, and (3) how it influences the perceived boundaries between oneself and the others (the robot and the other operator), the so-called “self-other” overlap. We found that (1) “We-information” increased the speed of task performance, but only for simpler configurations, (2) participants experienced a high level of control over the robot, which increased throughout the task, and (3) a short session of joint control over a humanoid led to a pronounced increase in self-other overlap (blurring of boundaries) with both the robot and the co-operator. We discuss the implications of our results for the understanding of human body representation and how they can inform future applications, such as exoskeletons for individuals affected by hemiplegia.
|
|
13:02-13:14, Paper WeGT5.2 | Add to My Program |
Preserving Sense of Agency: User Preferences for Robot Autonomy and User Control across Household Tasks |
|
Yang, Claire (University of Washington), Patel, Heer (University of Washington), Kleiman-Weiner, Max (University of Washington), Cakmak, Maya (University of Washington) |
Keywords: Degrees of Autonomy and Teleoperation, Assistive Robotics
Abstract: Roboticists often design with the assumption that assistive robots should be fully autonomous. However, it remains unclear whether users prefer highly autonomous robots, as prior work in assistive robotics suggests otherwise. High robot autonomy can reduce the user's sense of agency, i.e., the feeling of being in control of one's environment. How much control do users, in fact, want over the actions of robots used for in-home assistance? We investigate how robot autonomy levels affect users' sense of agency and which autonomy level they prefer in contexts with varying risks. Our study asked participants to rate their sense of agency as robot users across four distinct autonomy levels and to rank their robot preferences with respect to various household tasks. Our findings revealed that participants' sense of agency was primarily influenced by two factors: (1) whether the robot acts autonomously, and (2) whether a third party is involved in the robot's programming or operation. Notably, an end-user-programmed robot highly preserved users' sense of agency, even though it acts autonomously. However, in high-risk settings, e.g., preparing a snack for a child with allergies, participants significantly preferred robots that prioritized their control. Additional contextual factors, such as trust in a third-party operator, also shaped their preferences.
|
|
13:14-13:26, Paper WeGT5.3 | Add to My Program |
Autonomous Robotic System for Power Cable Tracking and Mapping Using Hybrid Localization in GPS-Denied and GPS-Rich Environments |
|
Vohra, Mohit (Dubai Electricity and Water Authority Research and Development C), Althani, Thani (Dubai Electricity & Water Authority) |
Keywords: Motion Planning and Navigation in Human-Centered Environments, Multi-modal Situation Awareness and Spatial Cognition, User-centered Design of Robots
Abstract: Accurate underground power cable mapping is essential for infrastructure maintenance and fault detection. However, traditional GPS-based localization is unreliable in GPS-denied environments, necessitating alternative localization strategies. This paper presents an autonomous robotic system that employs a hybrid localization framework, integrating RTK-GPS for outdoor positioning and ORB-SLAM3 for visual-based localization in GPS-denied environments. The system features a cable locator sensor for real-time power cable detection, while a human-in-the-loop GUI enables manual GPS corrections to mitigate positioning errors. The ORB-SLAM3 based localization module is integrated with GPS-based localization to ensure mapping consistency across varying operational conditions. Real-world experiments validate the system’s capability to accurately track and map power cables in both GPS-rich outdoor areas and GPS-denied environments, demonstrating the effectiveness of this hybrid localization approach for robotic infrastructure inspection and maintenance.
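A simplified sketch of the hybrid hand-off logic: use the RTK-GPS pose when the fix is good and re-anchor the SLAM frame to it, otherwise project the ORB-SLAM3 pose through the last anchor. For brevity this 2D sketch treats the yaw offset independently of translation, which is only exact for small relative rotation; the field names are illustrative, not the system's interfaces.

```python
from dataclasses import dataclass

@dataclass
class Pose:
    x: float
    y: float
    yaw: float

class HybridLocalizer:
    def __init__(self):
        self.offset = Pose(0.0, 0.0, 0.0)   # SLAM-frame -> world-frame anchor

    def update(self, gps_pose, gps_fix_ok, slam_pose):
        if gps_fix_ok:
            # GPS-rich: re-anchor by recording where the SLAM frame sits
            # in the world, then trust the RTK-GPS pose directly.
            self.offset = Pose(gps_pose.x - slam_pose.x,
                               gps_pose.y - slam_pose.y,
                               gps_pose.yaw - slam_pose.yaw)
            return gps_pose
        # GPS-denied: project the SLAM pose through the stored anchor.
        return Pose(slam_pose.x + self.offset.x,
                    slam_pose.y + self.offset.y,
                    slam_pose.yaw + self.offset.yaw)

loc = HybridLocalizer()
print(loc.update(Pose(10.0, 5.0, 0.0), True,  Pose(1.0, 1.0, 0.0)))  # GPS-rich
print(loc.update(None,                 False, Pose(2.0, 1.5, 0.0)))  # GPS-denied
```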
|
|
13:26-13:38, Paper WeGT5.4 | Add to My Program |
Bi-LAT: Bilateral Control-Based Imitation Learning Via Natural Language and Action Chunking with Transformers |
|
Kobayashi, Takumi (Osaka University), Kobayashi, Masato (The University of Osaka / Kobe University), Buamanee, Thanpimon (Osaka University), Uranishi, Yuki (Osaka University) |
Keywords: Machine Learning and Adaptation, Social Learning and Skill Acquisition Via Teaching and Imitation, Degrees of Autonomy and Teleoperation
Abstract: We present Bi-LAT, a novel imitation learning framework that unifies bilateral control with natural language processing to achieve precise force modulation in robotic manipulation. Bi-LAT leverages joint position, velocity, and torque data from leader-follower teleoperation while also integrating visual and linguistic cues to dynamically adjust the applied force. By encoding human instructions such as “softly grasp the cup” or “strongly twist the sponge” through a multimodal Transformer-based model, Bi-LAT learns to distinguish nuanced force requirements in real-world tasks. We demonstrate Bi-LAT's performance in (1) a unimanual cup-stacking scenario where the robot accurately modulates grasp force based on language commands, and (2) a bimanual sponge-twisting task that requires coordinated force control. Experimental results show that Bi-LAT effectively reproduces the instructed force levels, particularly when incorporating SigLIP among the tested language encoders. Our findings demonstrate the potential of integrating natural language cues into imitation learning, paving the way for more intuitive and adaptive human-robot interaction. For additional material, please visit the website: https://mertcookimg.github.io/bi-lat/
|
|
13:38-13:50, Paper WeGT5.5 | Add to My Program |
Reinforcement Learning-Based Trust Dynamics Prediction Model for Teleoperated Human-Robot Interaction |
|
García Cárdenas, Juan José (ENSTA - Institut Polytechnique De Paris), Tapus, Adriana (ENSTA Paris, Institut Polytechnique De Paris)
Keywords: Cognitive Skills and Mental Models, HRI and Collaboration in Manufacturing Environments, Machine Learning and Adaptation
Abstract: Trust plays a crucial role in user performance during teleoperated human-robot interaction. This study presents a reinforcement learning (RL) model that adapts to dynamic trust levels using physiological data and task performance metrics. Participants completed a complex teleoperation task under three conditions: (C1) limited feedback, (C2) AI-generated verbal guidance, and (C3) AI guidance paired with real-time RViz visualization. Physiological indicators such as blink rate, galvanic skin response (GSR), and facial temperature were tracked, along with task performance metrics such as success rate and completion time. Statistical analyses revealed that the increased task complexity in C1 reduced trust and increased cognitive load, leading to poorer performance. AI-generated guidance in C2 improved task understanding and performance, supporting Hypothesis H2. In C3, combining AI guidance with RViz visualization further boosted trust and reduced cognitive load, partially confirming Hypothesis H3. The RL model successfully adapted guidance strategies based on real-time user states, and additional testing showed that the agent's adaptive strategies significantly increased user trust and improved performance. These results underscore the potential of adaptive RL models to enhance trust and efficiency in teleoperated human-robot systems.
|
|
WeGT6 Regular Session, Auditorium 6 |
Add to My Program |
Motion and Navigation II |
|
|
Chair: Sugiura, Hisashi | Yanmar Co., Ltd |
|
12:50-13:02, Paper WeGT6.1 | Add to My Program |
A Human-In-The-Loop Metaheuristic Approach to Multiobjective Path Planning |
|
Liang, Shiming (University of Pennsylvania), Manjanna, Sandeep (Plaksha University), Shipley, Thomas (Department of Psychology and Neuroscience at Temple University), Hsieh, M. Ani (University of Pennsylvania) |
Keywords: Cooperation and Collaboration in Human-Robot Teams, Degrees of Autonomy and Teleoperation, Novel Interfaces and Interaction Modalities
Abstract: This paper introduces a novel multiobjective human-in-the-loop planning algorithm for information-driven path planning. We formulate the path planning problem as a multiobjective orienteering problem, aiming to optimize multiple survey objectives under operational constraints. Inspired by Indicator-based Fitness Evaluation and Tabu Search, the algorithm efficiently predicts high-scoring paths, which are presented to a human expert for refinement of waypoints. Once data is collected at the next waypoint, the expert updates the objectives of the relevant points of interest, allowing for dynamic adjustments based on evolving survey requirements. The approach is tailored to autonomous geological surveys and validated with real-world data from Sage Hen, CA. The proposed solver outperforms existing MOOP solvers, achieving better results with lower variance and improved survey path coverage.
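The core filter in any such multiobjective solver is nondominated sorting. A minimal sketch of extracting the Pareto front presented to the human expert follows; it assumes maximization, and the score vectors are invented for illustration.

```python
import numpy as np

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (maximization)."""
    a, b = np.asarray(a), np.asarray(b)
    return bool(np.all(a >= b) and np.any(a > b))

def pareto_front(paths):
    """Filter candidate paths (one score vector per path) down to the
    nondominated set shown to the expert for waypoint refinement."""
    return [p for p in paths
            if not any(dominates(q["scores"], p["scores"])
                       for q in paths if q is not p)]

candidates = [
    {"id": 0, "scores": (8.0, 2.0)},   # e.g., (information gain, coverage)
    {"id": 1, "scores": (5.0, 5.0)},
    {"id": 2, "scores": (4.0, 4.0)},   # dominated by path 1
]
print([p["id"] for p in pareto_front(candidates)])  # -> [0, 1]
```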
|
|
13:02-13:14, Paper WeGT6.2 | Add to My Program |
Immersive Explainability: Visualizing Robot Navigation Decisions through XAI Semantic Scene Projections in Virtual Reality |
|
de Heuvel, Jorge (University of Bonn), Mueller, Sebastian (University of Bonn, Lamarr Institute), Wessels, Marlene (Johannes Gutenberg-University Mainz), Akhtar, Aftab (University of Bonn), Bauckhage, Christian (University of Bonn), Bennewitz, Maren (University of Bonn) |
Keywords: Novel Interfaces and Interaction Modalities, Motion Planning and Navigation in Human-Centered Environments
Abstract: End-to-end robot policies achieve high performance through neural networks trained via reinforcement learning (RL). Yet, their black-box nature and abstract reasoning pose challenges for human-robot interaction (HRI), because humans may have difficulty understanding and predicting the robot's navigation decisions, hindering trust development. We present a virtual reality (VR) interface that visualizes explainable AI (XAI) outputs and the robot's lidar perception to support intuitive interpretation of RL-based navigation behavior. By visually highlighting objects based on their attribution scores, the interface grounds abstract policy explanations in the scene context. This XAI visualization bridges the gap between obscure numerical XAI attribution scores and a human-centric, semantic level of explanation. A within-subjects study with 24 participants evaluated the effectiveness of our interface across four visualization conditions combining XAI and lidar. Participants ranked scene objects across navigation scenarios based on their importance to the robot, followed by a questionnaire assessing subjective understanding and predictability. Results show that semantic projection of attributions significantly enhances non-expert users' objective understanding and subjective awareness of robot behavior. In addition, lidar visualization further improves perceived predictability, underscoring the value of integrating XAI and sensor visualizations for transparent, trustworthy HRI.
|
|
13:14-13:26, Paper WeGT6.3 | Add to My Program |
A Robotic Walker with Self-Induced Adaptive Speed Control for Freezing of Gait in Parkinson's Disease |
|
Iwamoto, Kengo (Graduate School of Life Science and Systems Engineering, Kyushu Institute of Technology), Yamasaki, Kakeru (Kyushu Institute of Technology), Shibata, Tomohiro (Kyushu Institute of Technology)
Keywords: Assistive Robotics, Monitoring of Behaviour and Internal States of Humans, Robots in Education, Therapy and Rehabilitation
Abstract: Freezing of gait (FoG) is a common and debilitating symptom in patients with Parkinson’s disease (PD), often leading to falls and reduced quality of life. This study proposes a robotic walker system that provides rhythmic self-induced stimuli by adapting its speed based on the user’s gait phases. The walker utilizes pressure sensors embedded in insoles to detect stance and swing phases in real-time and adjusts its speed accordingly to promote more stable walking. Experimental results with both healthy older adults and PD patients suggest potential improvements in gait cadence and step length under the proposed control method. Challenges and future improvements are also discussed.
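A toy sketch of the sensing-to-speed loop described above: classify stance vs. swing from insole pressure with a threshold, then nudge the walker speed by phase. The thresholds, gains, and speed policy are illustrative placeholders, not the authors' controller.

```python
# Illustrative gait-phase detector: a foot is in stance when its insole
# pressure exceeds a threshold, and in swing otherwise.
STANCE_THRESHOLD = 50.0   # hypothetical sensor units

def gait_phase(pressure):
    return "stance" if pressure >= STANCE_THRESHOLD else "swing"

def adapt_speed(current_speed, phase, swing_gain=0.05, stance_gain=0.03):
    """Speed up slightly during swing to cue the next step and ease off
    in stance: one simple self-induced rhythmic stimulus policy."""
    if phase == "swing":
        return current_speed + swing_gain
    return max(0.0, current_speed - stance_gain)

speed = 0.4  # m/s
for pressure in [80, 75, 20, 15, 85]:   # simulated insole readings
    phase = gait_phase(pressure)
    speed = adapt_speed(speed, phase)
    print(f"pressure={pressure:5.1f} phase={phase:6s} speed={speed:.2f}")
```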
|
|
13:26-13:38, Paper WeGT6.4 | Add to My Program |
Transparent Social Navigation for Autonomous Mobile Robots Via Vision-Language Models |
|
Sotomi, Oluwadamilola (New York University), Kodi, Devika (NYU), Arab, Aliasghar (NYU) |
Keywords: Social Touch in Human–Robot Interaction, Social Intelligence for Robots, Motion Planning and Navigation in Human-Centered Environments
Abstract: Service and assistive robots are increasingly being deployed in dynamic social environments; however, ensuring transparent and explainable interactions remains a significant challenge. This paper presents a multimodal explainability module that integrates vision-language models and heat maps to improve transparency during navigation. The proposed system enables robots to perceive, analyze, and articulate their observations through natural language summaries. User studies (n=30) showed that a majority of participants preferred real-time explanations, indicating improved trust and understanding. Our experiments were validated through confusion matrix analysis to assess the level of agreement with human expectations. Our experimental and simulation results emphasize the effectiveness of explainability in autonomous navigation, enhancing trust and interpretability.
|
|
13:38-13:50, Paper WeGT6.5 | Add to My Program |
Threshold-Based Intended Forearm Motion Detection for Elbow-Forearm Exoskeletons Using Shear Force Sensors |
|
Cheng, Hiu Yee Hilary (National University of Singapore), Kwok, Thomas M. (University of Waterloo), Yu, Haoyong (National University of Singapore) |
Keywords: Assistive Robotics, Robots in Education, Therapy and Rehabilitation, Detecting and Understanding Human Activity
Abstract: Upper limb exoskeletons can greatly benefit individuals with arm weakness by helping them perform daily activities. However, existing designs do not account for forearm rotation, which is essential for many activities. As for intention control, existing exoskeletons commonly depend on skin-attached sensors such as electromyography (EMG) or force myography (FMG), which are sensitive to sensor placement and require large computational resources for signal processing. This paper explores the use of shear force sensors to detect users' intentions for a forearm exoskeleton. Alongside implementing the sensors in the exoskeleton, we developed a Finite State Machine (FSM) control framework with a threshold-based online classification method. We collected sensing data of five participants' forearm motions and set the classification threshold based on this dataset. The exoskeleton then executes joint motions with impedance control, based on the motion intention classified during daily activities. To evaluate this framework, we implemented the approach on a microcontroller and conducted a human experiment involving multi-joint motions with 15 healthy participants. The results show an over 97% success rate for robot motion execution, and no significant difference in joint coordination or forearm muscle activation with and without the exoskeleton. This FSM and threshold-based classification approach allows users to control the robot joints in real time according to their intentions while ensuring their safety through predefined control logic and impedance control.
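A minimal sketch of the threshold-plus-FSM idea: shear readings beyond calibrated thresholds move the controller into an assistance state, and relaxation returns it to idle. The thresholds, state names, and values are illustrative, not the paper's calibration.

```python
# Hypothetical per-user thresholds, set from a calibration dataset.
PRONATE_THRESHOLD = 1.5    # N
SUPINATE_THRESHOLD = -1.5  # N

class ForearmFSM:
    """Threshold-based intent classification wrapped in a small FSM."""

    def __init__(self):
        self.state = "IDLE"

    def step(self, shear_force):
        if self.state == "IDLE":
            if shear_force > PRONATE_THRESHOLD:
                self.state = "ASSIST_PRONATION"
            elif shear_force < SUPINATE_THRESHOLD:
                self.state = "ASSIST_SUPINATION"
        elif abs(shear_force) < 0.3:          # user relaxed: return to idle
            self.state = "IDLE"
        return self.state                     # downstream impedance control acts on this

fsm = ForearmFSM()
for f in [0.2, 1.8, 2.0, 0.1, -1.7, -0.1]:    # simulated shear readings
    print(f"shear={f:+.1f} N -> {fsm.step(f)}")
```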
|
|
WeGT7 Regular Session, Auditorium 7 |
Add to My Program |
Robots in Families, Education, Therapeutic Contexts & Arts V |
|
|
Chair: Fitter, Naomi T. | Oregon State University |
Co-Chair: Ye, Xin | University of Michigan |
|
12:50-13:02, Paper WeGT7.1 | Add to My Program |
Adaptive versus Non-Adaptive Mathematics Tutoring by Social Robots in Tanzanian Primary Schools |
|
Ntahomvukye, Elina C. (MZUMBE UNIVERSITY), Rutatola, Edger P. (Mzumbe University), Daudi, Morice (Mzumbe University), Komba, Mercy Mlay (Mzumbe University), Stroeken, Koen (Ghent University), Belpaeme, Tony (University of Ghent - IMEC) |
Keywords: Robots in Education, Therapy and Rehabilitation, Child-Robot Interaction, Social Learning and Skill Acquisition Via Teaching and Imitation
Abstract: The use of social robots in education is increasingly being explored as a way to enhance learner engagement and improve learning outcomes. However, most research to date has focused on one-to-one tutoring in high-resource settings, leaving open questions about how social robots perform in group learning contexts—especially in low-resource environments. This study is one of the first to investigate human-robot interaction (HRI) in a low-resource African context, specifically in Tanzanian primary schools. We examined how a social robot tutor can support group-based mathematics learning, comparing the effects of adaptive versus non-adaptive tutoring strategies. Through an experimental, mixed-methods research design, we evaluated pupils’ learning outcomes, engagement, and classroom interactions. Our findings show that social robot tutoring has a significant positive impact on learning outcomes, with adaptive tutoring leading to slightly higher knowledge gains than non-adaptive tutoring. Qualitative observations further reveal that the presence of the robot fostered motivation, engagement, and collaborative classroom dynamics. This work demonstrates the potential of social robots to support group learning in under-resourced educational settings and highlights the importance of extending HRI research beyond well-resourced contexts.
|
|
13:02-13:14, Paper WeGT7.2 | Add to My Program |
Brain-Robot Interface for Exercise Mimicry |
|
Bettosi, Carl (Heriot-Watt University), Nault, Emilyann (Heriot-Watt University & University of Edinburgh), Baillie, Lynne (Heriot-Watt University), Garschall, Markus (AIT Austrian Institute of Technology GmbH), Romeo, Marta (Heriot-Watt University), Wais-Zechmann, Beatrix (AIT Austrian Institute of Technology GmbH), Binderlehner, Nicole (AIT Austrian Institute of Technology GmbH), Georgiou, Theodoros (Heriot-Watt University) |
Keywords: Robots in Education, Therapy and Rehabilitation, Social Intelligence for Robots
Abstract: For social robots to maintain long-term engagement as exercise instructors, rapport-building is essential. Motor mimicry, the imitation of another's physical actions during social interaction, has long been recognized as a powerful tool for fostering rapport, and it is widely used in rehabilitation exercises where patients mirror a physiotherapist or video demonstration. We developed a novel Brain-Robot Interface (BRI) that allows a social robot instructor to mimic a patient’s exercise movements in real time, using mental commands derived from the patient’s intention. The system was evaluated in an exploratory study with 14 participants (3 physiotherapists and 11 hemiparetic patients recovering from stroke or other injuries). We found that our system successfully demonstrated exercise mimicry in 12 sessions; however, accuracy varied. Participants had positive perceptions of the robot instructor, with high trust and acceptance levels, which were not affected by the introduction of BRI technology.
|
|
13:14-13:26, Paper WeGT7.3 | Add to My Program |
Motivating Students' Self-Study with Goal Reminder and Emotional Support |
|
Cho, Hyung Chan (Purdue University), Cha, Go-Eum (Purdue University), Liu, Yanfu (Purdue University), Jeong, Sooyeon (Purdue University) |
Keywords: Robot Companions and Social Robots, Applications of Social Robots, User-centered Design of Robots
Abstract: While the efficacy of social robots in supporting people in learning tasks has been extensively investigated, their potential to assist students in self-studying contexts remains underexplored. This study explores how a social robot can act as a peer study companion for college students during self-study tasks by delivering task-oriented goal reminders and positive emotional support. We conducted an exploratory Wizard-of-Oz study to explore how these robotic support behaviors impacted students' perceived focus, productivity, and engagement in comparison to a robot that only provided physical presence (control). Our study results suggest that participants in the goal reminder and the emotional support conditions reported greater ease of use, with the goal reminder condition additionally showing a higher willingness to use the robot in future study sessions. Participants' satisfaction with the robot was correlated with their perception of the robot as a social other, and this perception was found to be a predictor of their level of goal achievement in the self-study task. These findings highlight the potential of socially assistive robots to support self-study through both functional and emotional engagement.
|
|
13:26-13:38, Paper WeGT7.4 | Add to My Program |
A Robot That Supports Collaborative Art Appreciation through Visual Thinking Strategies |
|
Iwata, Minori (Kobe University), Shiomi, Masahiro (ATR), Takiguchi, Tetsuya (Kobe University) |
Keywords: Applications of Social Robots, Linguistic Communication and Dialogue, Robots in art and entertainment
Abstract: Social robots are increasingly being used as interactive partners to facilitate people’s understanding and appreciation of art. Considering people's stages of aesthetic development is essential to a richer understanding of artworks, but such a viewpoint has received less focus in human-robot interaction contexts. Therefore, we developed a robot system for collaborative art appreciation that uses the Visual Thinking Strategies (VTS) method to enhance people’s engagement with art by considering their stages of aesthetic experience. We conducted an experiment to investigate the effectiveness of our system in supporting participants' art appreciation in a laboratory setting. We also investigated the effects of embodiment, i.e., the physical body of a robot, on art-appreciation support. The experimental results indicate that our system significantly increased the intention to use it, which is related to social acceptance, and that embodiment significantly influenced likeability, perceived intelligence, and perceived enjoyment.
|
|
13:38-13:50, Paper WeGT7.5 | Add to My Program |
Teachers Perceive Distinct Competency Profiles in Soft and Hard Social Robots for Supporting Learning |
|
Leisten, Luca M. (ETH Zurich), Caruana, Nathan (Flinders University), Cross, Emily S (ETH Zurich) |
Keywords: Robots in Education, Therapy and Rehabilitation, Applications of Social Robots, Robot Companions and Social Robots
Abstract: The promise of social robot applications for children’s education has attracted growing enthusiasm over the past decade, with the potential to augment and support diverse learning outcomes. However, the adoption of education robots and their expected benefits for children are yet to be realised, due to complexity, cost, and variability between robots. Soft robots offer a possible solution. However, a concern is that these robots may be seen as less competent, decreasing their adoption and utility in learning environments. In this preregistered, mixed-methods study, we investigated teachers’ (n = 120) perception of 12 hard and soft social robots along different dimensions, learning tasks, roles, and contexts. Teachers perceived hard robots as more competent, human-like, and familiar than soft robots. Soft robots were perceived as more physically/visually warm. Hard robots were also more likely to be perceived as suitable for “technical tasks” and adopting a teacher/tutor role for supporting the learning of adults or groups. Soft robots were more likely to be evaluated as suitable for use with younger learners in individual learning contexts and playing the role of a co-learner/novice. This study provides a detailed account of how soft and hard robot features influence teachers’ perceptions of robot suitability for education applications. The findings directly inform how to optimise the design and situation of social robots to maximize adoption, effectiveness, and accessibility across diverse learners and learning contexts. By highlighting the nuanced trade-offs between competence and warmth, this research challenges theoretical assumptions that complex hard robots are universally superior in educational settings.
|
|
WeHT1 Regular Session, Auditorium 1 |
Add to My Program |
Cooperation and Collaboration in Human-Robot Teams IV |
|
|
Chair: Lim, Yoonseob | Korea Institute of Science and Technology |
Co-Chair: Neef, Caterina | Karlsruhe Institute of Technology |
|
14:10-14:22, Paper WeHT1.1 | Add to My Program |
Say'n'Fly: An LLM-Modulo Online Planning Framework to Automate UAV Command and Control |
|
Döschl, Björn (University of the Bundeswehr Munich), Kiam, Jane Jean (University of the Bundeswehr Munich) |
Keywords: Cooperation and Collaboration in Human-Robot Teams, Degrees of Autonomy and Teleoperation
Abstract: Command and Control (C2) for Unmanned Aerial Vehicle (UAV) missions requires the dynamic coordination of task planning, environmental perception, and navigation strategies. While Large Language Model (LLM)-modulo planning frameworks have shown promise in addressing such complexity, thanks to their ability to understand natural language and apply common sense, their limited robustness remains a challenge. Recent adaptations of such frameworks have made online planning possible, but they do not scale well with the number of actions. This limits their applicability in robotic frameworks with large sets of parameterized actions, such as those encountered in UAV C2. We introduce Say'n'Fly, an LLM-modulo online planning framework that streamlines the action space by discarding infeasible actions using domain-specific knowledge, while leveraging online heuristic search to mitigate reward uncertainty and automate the C2 process for UAVs. Test results for our Search and Rescue (SAR) validation scenarios show that Say'n'Fly is 70% more efficient than existing frameworks while maintaining or exceeding success rates.
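To make the action-space streamlining concrete, here is a hedged toy sketch: domain-specific feasibility checks (hypothetical battery and airspace-ceiling rules, not the paper's actual constraints) discard infeasible parameterized actions before any search is run.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    altitude_m: float
    battery_cost: float

def feasible(a: Action, battery_left: float, ceiling_m: float) -> bool:
    # Domain knowledge: respect the airspace ceiling and remaining battery.
    return a.altitude_m <= ceiling_m and a.battery_cost <= battery_left

candidates = [
    Action("survey_grid", 120, 30.0),
    Action("climb_to_500m", 500, 12.0),   # violates a 150 m ceiling
    Action("return_home_long", 100, 45.0) # exceeds remaining battery
]
pruned = [a for a in candidates if feasible(a, battery_left=35.0, ceiling_m=150.0)]
print([a.name for a in pruned])  # only 'survey_grid' reaches the heuristic search
```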
|
|
14:22-14:34, Paper WeHT1.2 | Add to My Program |
Human-Robot Teaming Field Deployments: A Comparison between Verbal and Non-Verbal Communication |
|
Tanjim, Tauhid (Cornell University), Ekpo, Promise (Cornell University), Cao, Huajie (Michigan State University), St George, Jonathan (Weill Cornell Medical College), Ching, Kevin (Weill Cornell Medicine), Lee, Hee Rin (Michigan State University), Taylor, Angelique (Cornell Tech) |
Keywords: Assistive Robotics, Medical and Surgical Applications, Cooperation and Collaboration in Human-Robot Teams
Abstract: Healthcare workers (HCWs) encounter challenges in hospitals, such as retrieving medical supplies quickly from crash carts, which could potentially result in medical errors and delays in patient care. Robotic crash carts (RCCs) have shown promise in assisting healthcare teams during medical tasks through guided object searches and task reminders. However, little work has examined which communication modalities are most effective and least disruptive to patient care in real-world settings. To address this gap, we conducted a between-subjects experiment comparing the RCC’s verbal and non-verbal communication of object search with a standard crash cart in resuscitation scenarios, to understand the impact of robot communication on workload and attitudes toward using robots in the workplace. Our findings indicate that verbal communication significantly reduced mental demand and effort compared to visual cues and to a traditional crash cart. However, frustration levels were slightly higher during collaborations with the robot than with a traditional cart. These research insights provide valuable implications for human-robot teamwork in high-stakes environments.
|
|
14:34-14:46, Paper WeHT1.3 | Add to My Program |
Collaborative and Reproducible HRI Research through a Web-Based Wizard-Of-Oz Platform |
|
O'Connor, Sean (Bucknell University), Perrone, L. Felipe (Bucknell University) |
Keywords: Computational Architectures, User-centered Design of Robots
Abstract: Human-robot interaction (HRI) research plays a pivotal role in shaping how robots communicate and collaborate with humans. However, conducting HRI studies can be challenging, particularly those employing the Wizard-of-Oz (WoZ) technique. WoZ user studies can involve technical and methodological complexities that may render the results irreproducible. We propose to address these challenges with HRIStudio, a modular web-based platform designed to streamline the design, execution, and analysis of WoZ experiments. HRIStudio offers an intuitive interface for experiment creation, real-time control and monitoring during experimental runs, and comprehensive data logging and playback tools for analysis and reproducibility. By lowering technical barriers, promoting collaboration, and offering methodological guidelines, HRIStudio aims to make human-centered robotics research easier and to empower researchers to develop scientifically rigorous user studies.
|
|
14:46-14:58, Paper WeHT1.4 | Add to My Program |
Towards a Collaborative Robotic Surgical Assistant: Leveraging Gaussian Mixture Models for Synchronous Control and Visual-Haptic Proprioceptive Feedback |
|
Madera, Jonathan (University of Texas at Austin), Majewicz Fey, Ann (University of Texas at Austin) |
Keywords: Cooperation and Collaboration in Human-Robot Teams, Medical and Surgical Applications, Degrees of Autonomy and Teleoperation
Abstract: Communication between a human and an intelligent robotic system is an essential component for facilitating effective collaboration. In this paper, we describe a preliminary methodology for creating a collaborative surgical assistant. The proposed framework utilizes Gaussian Mixture Modeling (GMM) of surgical skills. Our approach leverages the GMM to allow a human to synchronously execute manipulation tasks with the robotic system. Key components of our approach are modified GMM regression techniques, which provide the synchronized control policy for the surgical assistant (automatic) manipulator and communicate trajectory information, in the form of augmented reality visual cues and haptic guidance proprioceptive feedback, to the human operator during task execution. A semi-autonomous surgical knot-tying experiment, in which the human operator collaborates with an automatic manipulator, was conducted to validate the proof of concept. We show that the variability introduced by the human user still allows the team to complete the task, and we provide insights into future system improvements.
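For readers unfamiliar with GMM regression, the conditioning step the abstract relies on can be sketched as follows. This is generic Gaussian mixture regression on toy data, not the authors' surgical-skill model; the input/output split and component count are illustrative.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from scipy.stats import multivariate_normal

# Fit a joint mixture over (input x, output y), then condition on x.
rng = np.random.default_rng(0)
x = rng.uniform(0, 2 * np.pi, 500)
y = np.sin(x) + 0.1 * rng.standard_normal(500)
gmm = GaussianMixture(n_components=5, random_state=0).fit(np.column_stack([x, y]))

def gmr_predict(x_star: float) -> float:
    """Conditional mean E[y | x]: responsibility-weighted per-component regressions."""
    mus, covs, w = gmm.means_, gmm.covariances_, gmm.weights_
    h = np.array([w[k] * multivariate_normal.pdf(x_star, mus[k, 0], covs[k, 0, 0])
                  for k in range(len(w))])
    h /= h.sum()
    cond = [mus[k, 1] + covs[k, 1, 0] / covs[k, 0, 0] * (x_star - mus[k, 0])
            for k in range(len(w))]
    return float(h @ np.array(cond))

print(round(gmr_predict(np.pi / 2), 2))  # close to sin(pi/2) = 1
```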
|
|
14:58-15:10, Paper WeHT1.5 | Add to My Program |
Co-Adaptation in Human-Robot Training Scenarios |
|
Pietras, Emilia Theresa Jeanne Seigneur (University of Southern Denmark), Kiefer, Bernd (DFKI), Hall, Stephanie (University of Bath), Dhanda, Mandeep (University of Bath), Zhao, Haoruo (University of Bath), Dhokia, Vimal (University of Bath), Borzone, Guglielmo (University of Southern Denmark, SDU), Krüger, Norbert (University of Southern Denmark), Bodenhagen, Leon (University of Southern Denmark) |
Keywords: HRI and Collaboration in Manufacturing Environments, Cooperation and Collaboration in Human-Robot Teams, Monitoring of Behaviour and Internal States of Humans
Abstract: In human-robot collaboration scenarios, mutual adaptation between the human and robot must occur to ensure high task performance. This requires robotic systems to be capable of reasoning based on a long-term history of interactions. In this paper, we present and evaluate a robot simulation system that facilitates adaptive robot behavior using ontology-based reasoning and behavior trees in an interactive robotic scanning task. A study with 38 participants compares our adaptive system with a static system in team performance and perceived system usability. Our results suggest that use of the adaptive system significantly reduced session time, leading users to perform the task 19.5% faster. Furthermore, participants reported significantly lower fatigue levels, while maintaining the same task performance as those using the static system.
|
|
WeHT2 Regular Session, Auditorium 2 |
Add to My Program |
Social Intelligence of Robots IV |
|
|
Chair: Thill, Serge | Radboud University |
Co-Chair: Bossema, Marianne | University of Applied Sciences Amsterdam |
|
14:10-14:22, Paper WeHT2.1 | Add to My Program |
RoboButler: Frustration-Aware Assistive User Localisation for Social Robots in Office Environments |
|
Gucsi, Bálint (University of Southampton), Tuyen, Nguyen Tan Viet (University of Southampton), Chu, Bing (University of Southampton), Tarapore, Danesh (University of Southampton), Tran-Thanh, Long (University of Warwick) |
Keywords: Assistive Robotics, Motivations and Emotions in Robotics
Abstract: In human-robot interactions (HRI), it is crucial for robots to be accepted by users and that users find robotic assistance attempts helpful rather than frustrating. Working towards this goal, we investigate the problem of frustration-aware robot behaviour planning in human-robot interaction contexts without continuous user contact or live feedback. Specifically, we address the question of how social robots can efficiently localise users and assist them with errands of various importance in office environments, while minimizing the frustration experienced by their human colleagues to enhance the overall interaction experience. To do so, we design a frustration-aware decision-making and learning framework building on multi-armed bandit approaches and knapsack algorithms, in addition to developing a psychology-based model of frustration tailored for HRI settings with limited user contact. We then evaluate our approach on realistic user behaviour datasets, simulating the interactions' robotic components in Gazebo with a TIAGo robot, and perform further scalability analysis in graph-based simulations. The experimental results demonstrate that the proposed framework achieves localisation success rates and travel times that converge towards oracle values (outperforming other structured learning benchmarks) while yielding up to an estimated 75% less frustration, indicating the proposed framework's suitability for advancing to user studies and deployment in real-world scenarios.
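The bandit component can be illustrated with a toy UCB1 loop in which each arm is a candidate user location and the reward subtracts a hypothetical frustration penalty; the paper's knapsack budgeting and psychology-based frustration model are not reproduced here.

```python
import math
import random

locations = ["office_A", "office_B", "kitchen"]
p_found = {"office_A": 0.7, "office_B": 0.4, "kitchen": 0.2}      # unknown to the robot
frustration = {"office_A": 0.3, "office_B": 0.1, "kitchen": 0.05} # hypothetical penalty

counts = {l: 0 for l in locations}
values = {l: 0.0 for l in locations}

for t in range(1, 1001):
    # UCB1 arm selection; try every arm once first.
    untried = [l for l in locations if counts[l] == 0]
    arm = untried[0] if untried else max(
        locations, key=lambda l: values[l] + math.sqrt(2 * math.log(t) / counts[l]))
    reward = (1.0 if random.random() < p_found[arm] else 0.0) - frustration[arm]
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean

print(max(locations, key=lambda l: values[l]))  # best frustration-adjusted location
```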
|
|
14:22-14:34, Paper WeHT2.2 | Add to My Program |
Graph-Based Loop Closure Detection for Interaction Mapping |
|
Warren, Philippe (Université De Sherbrooke), Maheux, Marc-Antoine (Université De Sherbrooke), Létourneau, Dominic (Université De Sherbrooke), Ferland, François (Université De Sherbrooke), Michaud, Francois (Universite De Sherbrooke) |
Keywords: Detecting and Understanding Human Activity, Multi-modal Situation Awareness and Spatial Cognition, Social Intelligence for Robots
Abstract: Interaction understanding and activity recognition require the ability to identify patterns in sequences of events. A robot recording what is going on in real-life settings could exploit these patterns using a representation similar to a map of the observed interactions. This paper examines the use of perceptual constructs, generated by deep neural networks that process visual and audio data, to build graph-based interaction maps from the observation of human activities. Patterns are detected using loop closure detection, similarly to what is done in simultaneous localization and mapping (SLAM) approaches. Results suggest that the graph-based interaction mapping approach is able to create condensed representations of interaction events and to find patterns in sequences of perceptual constructs that create loops. Such a technique could help design socially intelligent robots that derive an understanding of their environment from observing and learning from their interactions with people.
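As a rough intuition for loop-closure detection over event sequences, the sketch below flags a "loop" whenever a window of events exactly matches an earlier window. The event labels and the exact-match rule are illustrative stand-ins for the paper's learned perceptual constructs and similarity measure.

```python
# Toy loop-closure-style pattern detection over a sequence of events.
events = ["greet", "sit", "talk", "laugh", "greet", "sit", "talk"]

def loop_closures(seq, window=3):
    """Return (i, j) pairs where the window starting at j repeats the one at i."""
    hits = []
    for i in range(len(seq) - window + 1):
        for j in range(i + window, len(seq) - window + 1):
            if seq[i:i + window] == seq[j:j + window]:
                hits.append((i, j))
    return hits

print(loop_closures(events))  # [(0, 4)] -> 'greet, sit, talk' recurs, closing a loop
```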
|
|
14:34-14:46, Paper WeHT2.3 | Add to My Program |
A Robot That Listens: Enhancing Self-Disclosure and Engagement through Sentiment-Based Backchannels and Active Listening |
|
Tran, Hieu (Purdue University), Cha, Go-Eum (Purdue University), Jeong, Sooyeon (Purdue University) |
Keywords: Robot Companions and Social Robots, Social Intelligence for Robots, User-centered Design of Robots
Abstract: As social robots become more deeply integrated into our everyday lives, they will be expected to engage in meaningful conversations and exhibit socio-emotionally intelligent listening behaviors when interacting with people. Active listening and backchanneling could be one way to enhance robots' communicative capabilities, increasing their effectiveness in eliciting deeper self-disclosure, providing a sense of empathy, and forming positive rapport and relationships with people. Thus, we developed an LLM-powered social robot that can exhibit contextually appropriate sentiment-based backchanneling and active listening behaviors (active listening+backchanneling) and compared its efficacy in eliciting people's self-disclosure with robots that do not exhibit any of these listening behaviors (control) and a robot that only exhibits backchanneling behavior (backchanneling-only). Through our experimental study with sixty-five participants, we found that participants who conversed with the active listening robot perceived the interactions more positively, exhibited the highest levels of self-disclosure, and reported the strongest sense of being listened to. The results of our study suggest that implementing active listening behaviors in social robots has the potential to improve human-robot communication and could further contribute to building deeper human-robot relationships and rapport.
|
|
14:46-14:58, Paper WeHT2.4 | Add to My Program |
I Love Lemurs! What's Your Favorite Animal? Generating Personality-Driven Conversations for the Tabletop Robot Haru |
|
Cao, Lu (Honda Research Institute Japan), Nguyen, Tung (Honda Research Institute Japan, Co., Ltd), Reisert, Paul (Beyond Reason), Nichols, Eric (Honda Research Institute Japan), Maeda, Chikara (Honda Research Institute Japan), Lam, Darryl Sean (Honda Research Institute Japan), Siskind, Sarah Rose (Hello SciCom), Gomez, Randy (Honda Research Institute Japan Co., Ltd) |
Keywords: Personalities for Robotic or Virtual Characters, Linguistic Communication and Dialogue, Interaction with Believable Characters
Abstract: The use of social robots is rapidly expanding across various domains, including education and healthcare. To achieve human-like interactions, these robots should possess well-rounded personalities. A carefully designed personality enhances a social robot’s persuasiveness and increases its appeal during interactions with humans. This paper explores how personality traits can be effectively leveraged to generate conversational responses for social robots, making human-robot interactions more engaging. We introduce a knowledge base that profiles the multifaceted dimensions of the social robot Haru. To generate contextually appropriate responses, we employ a retrieval-augmented generation approach to retrieve relevant personality traits. Additionally, we propose a method that integrates result filtering and prompt engineering to ensure consistency in Haru’s responses. To evaluate the effectiveness of our approach, we conduct a preliminary annotation survey assessing the retrieved personality traits and generated responses. The results demonstrate that our method improves conversational flow and enhances response faithfulness to retrieved personality traits. A demo of our approach can be seen at https://www.youtube.com/watch?v=5wCQDBeSkG8.
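The retrieval-augmented generation step can be sketched with simple TF-IDF retrieval over a hypothetical trait knowledge base; the trait entries below are invented for illustration, and the actual system presumably uses richer retrieval plus the filtering and prompt engineering the abstract mentions.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical entries standing in for Haru's personality knowledge base.
traits = [
    "Haru loves animals, especially lemurs, and is curious about wildlife.",
    "Haru is playful and enjoys telling jokes.",
    "Haru dreams of travelling to space one day.",
]
query = "What's your favorite animal?"

vec = TfidfVectorizer().fit(traits + [query])
sims = cosine_similarity(vec.transform([query]), vec.transform(traits))[0]
retrieved = traits[sims.argmax()]  # most relevant trait for this turn
prompt = f"Answer in character, consistent with this trait: {retrieved}\nUser: {query}"
print(prompt)  # the prompt would then go to the response-generating LLM
```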
|
|
14:58-15:10, Paper WeHT2.5 | Add to My Program |
Conveying Emotion and Intention through Quadruped Robotic Motion: A Validation Study Using Canine-Inspired Movements |
|
Yang, Victoria Ya-Ting (Karlsruhe Institute of Technology (KIT)), Biernacka, Katharina (Karlsruhe Institute of Technology (KIT)), Bruno, Barbara (Karlsruhe Institute of Technology (KIT)) |
Keywords: User-centered Design of Robots, Motivations and Emotions in Robotics
Abstract: This work explores whether canine-inspired motions on a quadruped robot effectively convey emotion and intent, specifically investigating (i) participants' recognition rates of the intended emotions and intents; (ii) the emotions that the robot movements elicit in the participants; and (iii) the influence of prior experience with robots and dogs on the recognition rate. A user study involving 35 participants revealed that the movements designed to convey alert, neutral, and yes/agree exceeded the average human recognition rate for robotic emotional expressions through body gestures reported in prior work, while the analysis of the alignment between participants' emotional responses to the movements and their intended emotional content shed light on possible reasons for misinterpretations. Interestingly, prior experience with robots and dogs was found to have no significant impact on the recognition rates.
|
|
WeHT3 Regular Session, Auditorium 3 |
Add to My Program |
Physical and Virtual Robots in Aerial, Transportation and Manufacturing Applications |
|
|
Chair: Green, Keith Evan | Cornell University |
Co-Chair: Müller, Ana | University of Applied Sciences Cologne |
|
14:10-14:22, Paper WeHT3.1 | Add to My Program |
Energy Shaping Control in Underactuated Robot Systems with Underactuation Degree Two |
|
Salamat, Babak (AI Aided Aeronautical Engineering and Product Development), Elsbacher, Gerhard (University), Tonello, Andrea M. (Alpen-Adria Universitat) |
Keywords: Aerial Systems: Mechanics and Control, Engineering for Robotic Systems, Education Robotics
Abstract: Stabilizing a 6DOF underactuated mechanical system (ℝ³ × SO(3)) without a cascade structure while utilizing a full-state feedback controller presents a significant challenge. Furthermore, due to the complexities of its dynamics and the degree of underactuation, designing an energy-shaping controller for such a system has not been achieved until now. This paper introduces a solution to the potential energy shaping problem for 6DOF underactuated mechanical systems by employing the Interconnection and Damping Assignment Passivity-Based Control (IDA-PBC) approach. We extend the solution of partial differential equations (PDEs) from a 2D framework to a comprehensive 3D scenario. Simulation results using a quadrotor as a benchmark example demonstrate its effectiveness.
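As background, the potential-energy shaping problem mentioned above reduces, in the standard IDA-PBC formulation (Ortega et al.), to a matching PDE of roughly the following form; this is a sketch of the generic condition, not the paper's specific extension to the 3D, underactuation-degree-two case.

```latex
% Potential-energy matching PDE in standard IDA-PBC (sketch).
% G^{\perp} is a full-rank left annihilator of the input matrix G;
% M, M_d are the open-loop and desired mass matrices;
% V, V_d are the open-loop and desired potential energies.
G^{\perp}(q)\left\{ \nabla_q V(q) - M_d(q)\, M^{-1}(q)\, \nabla_q V_d(q) \right\} = 0
```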
|
|
14:22-14:34, Paper WeHT3.2 | Add to My Program |
Generative AI for Intelligent Manufacturing Virtual Assistants in the Semiconductor Industry |
|
Lin, Chin-Yi (University of Texas at El Paso), Tsai, Tsung-Han (National Taipei University of Business), Tseng, Tzu-Liang (Bill) (University of Texas at El Paso) |
Keywords: Intelligent and Flexible Manufacturing, Semiconductor Manufacturing
Abstract: As semiconductor manufacturing complexity escalates, the intricacy of the corresponding manufacturing systems intensifies. These extensive systems necessitate diverse engineering expertise for effective operation and analysis. For instance, yield engineers analyze yield systems, process engineers interpret FDC parameters, and equipment engineers monitor device equipment health. Traditional manufacturing systems, reliant on manual data analysis and fixed algorithms, suffer from slow decision-making and limited adaptability. They are susceptible to human error and reactive maintenance, and user interaction is confined to technical interfaces and business hours. Additionally, scalability and integration pose significant challenges, inflating operational costs and hampering resource efficiency. This paper introduces an Intelligent Manufacturing Virtual Assistant (IMVA) specifically designed for the semiconductor industry. By harnessing the power of Large Language Models (LLMs) and AI agents, IMVA enhances yield analysis and seamlessly integrates with existing systems and tools. It exhibits high accuracy in defect detection through advanced data analysis and report generation. Furthermore, IMVA facilitates natural language interaction, rendering it user-friendly and accessible to non-technical personnel. Consequently, IMVA markedly improves operational efficiency and cost-effectiveness compared to traditional manufacturing systems. The efficacy of IMVA is demonstrated through the Wide-bandgap (WBG) process, showcasing its capability to simplify root cause analysis and provide comprehensive yield reports.
|
|
14:34-14:46, Paper WeHT3.3 | Add to My Program |
Industrial Robots Energy Consumption Modeling, Identification and Optimization through Time-Scaling |
|
Wang, Zuoxue (Chongqing University), Jiang, Pei (Chongqing University), Li, Xiaobin (Chongqing University), Cao, Huajun (Chongqing University), Wang, Xi Vincent (KTH Royal Institute of Technology), Li, Xiangfei (Huazhong University of Science and Technology), Cheng, Min (Chongqing University) |
Keywords: Industrial Robots, Task Planning, Sustainable Production and Service Automation, Energy Modeling
Abstract: Industrial robots (IRs) have considerable energy-saving potential due to their vast deployment scale and wide range of applications. Although substantial work on the energy consumption (EC) optimization of IRs has emerged, most optimization approaches require prior knowledge of the IRs' dynamic characteristics and the electro-mechanical parameters of their drive systems, which are typically not provided by IR manufacturers. Therefore, this paper proposes an EC modeling and optimization method based on the time-scaling technique and custom identification experimental data, without joint torque information. Specifically, this paper develops an energy characteristic parameter sub-model (ECPSM) to formulate the EC resulting from configuration transitions. Additionally, a theoretical proof demonstrates that all coefficients in the proposed ECPSM can be identified from the data of a finite number of identification experiments. Building upon the proposed EC model, a bi-directional dynamic programming algorithm optimizes the IR's trajectory for energy saving, while parallel processing significantly reduces the time required for the optimization.
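The dynamic-programming idea can be illustrated with a one-directional toy version: discretize the path, choose a speed level per segment, and minimize a made-up energy model; the paper's ECPSM-based cost and its bi-directional pass are not shown here.

```python
import numpy as np

speeds = np.array([0.2, 0.5, 1.0])  # candidate speeds per segment (m/s)
seg_len = np.full(10, 0.1)          # a 10-segment path, 0.1 m each

def energy(v_prev, v, length):
    # Toy energy model: a friction-like term plus a penalty on speed changes.
    return 2.0 * length * v + 5.0 * (v - v_prev) ** 2

INF = float("inf")
cost = {0.0: 0.0}                   # DP state: (speed at segment end) -> best cost
for length in seg_len:
    nxt = {}
    for v_prev, c in cost.items():
        for v in speeds:
            nc = c + energy(v_prev, v, length)
            if nc < nxt.get(v, INF):
                nxt[v] = nc
    cost = nxt

print(round(min(cost.values()), 3))  # minimal modelled energy over the path
```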
|
|
14:46-14:58, Paper WeHT3.4 | Add to My Program |
Ultimate Passivity: Balancing Performance and Stability in Physical Human-Robot Interaction |
|
Guo, Xinliang (The University of Melbourne), Liu, Zheyu (The University of Melbourne), Crocher, Vincent (The University of Melbourne), Tan, Ying (The University of Melbourne), Oetomo, Denny (The University of Melbourne), Stienen, Arno H.A. (Delft University of Technology) |
Keywords: Physical Human-Robot Interaction, Haptics and Haptic Interfaces, Compliance and Impedance Control, Passivity
Abstract: Haptic interaction is critical in physical Human-Robot Interaction, given its wide applications in manufacturing, medical and healthcare, and various industry tasks. A stable haptic interface is always needed while the human operator interacts with the robot. Passivity-based approaches have been widely utilised in control design as a sufficient condition for stability. However, passivity is a conservative approach that sacrifices performance to maintain stability. This paper proposes a novel concept to characterise an ultimately passive system, one that achieves bounded energy in the steady state. A so-called Ultimately Passive Controller (UPC) is then proposed. This algorithm switches the system between a nominal mode, for keeping the desired performance, and a conservative mode, when needed, to remain stable. An experimental evaluation on two robotic systems, one admittance-based and one impedance-based, demonstrates the potential of the proposed framework compared to existing approaches. The results show that the UPC can strike a more aggressive trade-off between haptic performance and system stability, while still providing a stability guarantee.
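A minimal sketch of the switching idea, assuming a simple energy-tank observer and a scaling rule invented for illustration; the paper's actual UPC conditions and mode logic are more subtle than this toy.

```python
# Mode-switching sketch: an energy tank tracks how much active work the haptic
# device may still perform; when the tank empties, the output is scaled back
# (conservative mode). Tank size and sign conventions are hypothetical.
E_MAX = 5.0

def controller(force_cmd, velocity, tank, dt):
    work = force_cmd * velocity * dt          # > 0: controller injects energy
    if work <= 0.0:                           # dissipative step refills the tank
        return force_cmd, min(tank - work, E_MAX), "nominal"
    if work <= tank:
        return force_cmd, tank - work, "nominal"
    scale = tank / work                       # conservative: stay within budget
    return scale * force_cmd, 0.0, "conservative"

tank = E_MAX
for f, v in [(10.0, 0.2), (10.0, 0.4), (10.0, -0.3), (10.0, 0.5)]:
    f_out, tank, mode = controller(f, v, tank, dt=1.0)
    print(mode, round(f_out, 2), round(tank, 2))
```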
|
|
WeHT4 Regular Session, Blauwe Zaal |
Add to My Program |
Applications of Social Robots VIII |
|
|
Chair: Lim, Sharmayne | Cornell University |
Co-Chair: Seassau, Tilly | King's College London |
|
14:10-14:22, Paper WeHT4.1 | Add to My Program |
A New Perspective and Approach to Evaluating Human-Robot Interaction Safety Considering Human Pain Sensation and Skin Contact Conditions |
|
Li, Fengyu (Toyama Prefectural University) |
Keywords: HRI and Collaboration in Manufacturing Environments, Evaluation Methods, Ethical Issues in Human-robot Interaction Research
Abstract: The adoption of collaborative robots is rapidly advancing and enhancing efficiency. However, safety in human-machine interaction has long been a crucial concern in various fields, including manufacturing plants and product development. Existing standards, such as ISO 10218 and ISO/TS 15066, set force and pressure limits based on pain thresholds. Improving safety in collaborative tasks requires better pain assessment and monitoring of contact conditions. Given individual differences in pain perception, refined safety standards should consider lighter, non-harmful contact and integrate psychological and social pain factors to enhance human-robot interaction safety. Creating a human body dummy for interaction simulation experiments is an effective experimental method. In this study, mechanical stimulus experiments simulating pinch conditions were conducted on a human finger within a safe range. Using a pain assessment tool, the relationship between pain intensity, pain cycle, and contact force was analyzed, considering both sensory and emotional pain factors. Furthermore, a hand dummy equipped with phalange dummy modules containing built-in flexible sensors was designed. Its effectiveness in distinguishing different contact conditions was evaluated through a friction contact experiment.
|
|
14:22-14:34, Paper WeHT4.2 | Add to My Program |
Integrating Perceptions: A Human-Centered Physical Safety Model for Human-Robot Interaction |
|
Pandey, Pranav Kumar (University of Georgia), Parasuraman, Ramviyas (University of Georgia), Doshi, Prashant (University of Georgia) |
Keywords: Monitoring of Behaviour and Internal States of Humans, User-centered Design of Robots, Evaluation Methods
Abstract: Ensuring safety in human-robot interaction (HRI) is essential to foster user trust and enable the broader adoption of robotic systems. Traditional safety models primarily rely on sensor-based measures, such as relative distance and velocity, to assess physical safety. However, these models often fail to capture subjective safety perceptions, which are shaped by individual traits and contextual factors. In this paper, we introduce and analyze a parameterized general safety model that bridges the gap between physical and perceived safety by incorporating a personalization parameter, ρ, into the safety measurement framework to account for individual differences in safety perception. Through a series of hypothesis-driven human-subject studies in a simulated rescue scenario, we investigate how emotional state, trust, and robot behavior influence perceived safety. Our results show that ρ effectively captures meaningful inter-individual differences, driven by affective responses, trust in task consistency, and clustering into distinct user types. Specifically, our findings confirm that predictable and consistent robot behavior (H1, H3), as well as the elicitation of positive emotional states (H2), significantly enhance perceived safety. Moreover, responses cluster into a small number of user types (H6), supporting adaptive personalization based on shared safety models. Notably, participant role (H4) significantly shaped safety perception, and repeated exposure (H5) reduced perceived safety only for CAS participants, emphasizing the impact of physical interaction and experiential change. These findings highlight the importance of adaptive, human-centered safety models that integrate both psychological and behavioral dimensions, offering a pathway toward more trustworthy human-robot interaction.
|
|
14:34-14:46, Paper WeHT4.3 | Add to My Program |
Learning Secondary Tool Affordances from Human Actions Using the iCub Robot |
|
Ding, Bosong (Tilburg University), Oztop, Erhan (Osaka University / Ozyegin University), Spigler, Giacomo (Tilburg University), Kirtay, Murat (Tilburg University) |
Keywords: Detecting and Understanding Human Activity, Computational Architectures, Machine Learning and Adaptation
Abstract: Tools and other objects offer agents a range of potential actions, commonly referred to as affordances. Each tool is typically designed with a primary purpose in mind, like a hammer's function of driving nails. However, tools can also serve purposes beyond their original design. These alternative uses represent secondary affordances, extending the tool's utility beyond its primary intended function. While prior robotics research on affordance perception and learning has primarily focused on primary affordances, our work addresses the less-explored area of learning secondary tool affordances from human partners. Using the iCub robot equipped with three cameras, we observed humans performing actions on twenty objects using four different tools in ways that deviate from their primary purposes. For example, the iCub observed humans using rulers not for measuring but to push, pull, and move objects. In this setting, we constructed a dataset by taking pictures of objects before and after each action was executed. To model secondary affordance learning, we trained three neural networks (ResNet-18, ResNet-50, and ResNet-101) on three prediction tasks using these raw images as input: (1) identifying which tool was used to move an object, (2) predicting the tool with additional action category information, and (3) jointly predicting both the tool and the action performed. Our results demonstrate that deep learning architectures enable the iCub robot to successfully predict secondary tool affordances, thereby paving the road for human-robot collaborative object manipulation involving complex affordances.
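Task (1) can be sketched as fine-tuning a ResNet-18 on before/after image pairs; stacking the pair along the channel axis and the dummy tensors below are our illustrative choices, not necessarily the paper's input encoding.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

NUM_TOOLS = 4
model = resnet18(weights=None)
# Accept 6 input channels (two stacked RGB images) and predict one of 4 tools.
model.conv1 = nn.Conv2d(6, 64, kernel_size=7, stride=2, padding=3, bias=False)
model.fc = nn.Linear(model.fc.in_features, NUM_TOOLS)

before = torch.randn(8, 3, 224, 224)   # dummy batch standing in for real images
after = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_TOOLS, (8,))

optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
logits = model(torch.cat([before, after], dim=1))  # one training step
loss = nn.functional.cross_entropy(logits, labels)
loss.backward()
optimiser.step()
print(float(loss))
```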
|
|
14:46-14:58, Paper WeHT4.4 | Add to My Program |
Welcome to Aibo’s Hometown! Framing Social Robots As Cultural Resources for the Local Economy |
|
Kamino, Waki (Cornell University), Jung, Malte (Cornell University), Sabanovic, Selma (Indiana University Bloomington) |
Keywords: Creating Human-Robot Relationships, Applications of Social Robots, Long-term Experience and Longitudinal HRI Studies
Abstract: This paper explores the case study of Kota, Japan, as "aibo’s furusato" -- or hometown -- through ethnographic observation and interviews to investigate how diverse stakeholders, including local government, industry partners, and robot owners, collaboratively construct the town's identity as a tourist destination centered around the social robot aibo. Our findings reveal how new robotic technologies are incorporated into existing frameworks of local economic production while also creating new opportunities for social interaction, as well as cultural and economic production that connect robotics to traditional products and practices. This work more broadly supports an understanding of robot design that goes beyond the artifact itself to encompass the broader socio-technical infrastructure that supports robots' meaningful use.
|
|
14:58-15:10, Paper WeHT4.5 | Add to My Program |
Flat Tube Bending Actuator for Shape-Changing Wearable Technology |
|
Nipatphonsakun, Kawinna (Kanazawa University), Hayashi, Ikumi (Kanazawa University), Watanabe, Tetsuyou (Kanazawa University) |
Keywords: Assistive Robotics, Innovative Robot Designs, Robots in Education, Therapy and Rehabilitation
Abstract: Shape-changing garments have gained increasing attention in wearable technology due to their potential applications in adaptive wearables, assistive technology, and soft robotics. This paper presents the design, fabrication, and experimental evaluation of a flat tube bending actuator, focusing on air pressure, flat tube design, and the resulting deformation. The study investigates the relationship between applied pressure and bending angles, the curvature per length, and air leakage for each design. Additionally, the actuators were integrated into a shape-changing garment with optimized flat tube designs. Various air-supplying methods were analyzed to optimize the supply pressure and resist leakage through the fabric. Experimental results demonstrate controllable shape transformations by each type of actuator, highlighting the actuator's adaptability for wearable applications.
|
|
WeHT5 Regular Session, Auditorium 5 |
Add to My Program |
Ethical Issues |
|
|
Chair: Perugia, Giulia | Eindhoven University of Technology |
Co-Chair: Tanevska, Ana | Uppsala University |
|
14:10-14:22, Paper WeHT5.1 | Add to My Program |
Ethically-Aware Participatory Design of a Productivity Social Robot for College Students |
|
Lalwani, Himanshi (New York University Abu Dhabi), Salam, Hanan Anna (New York University Abu Dhabi) |
Keywords: User-centered Design of Robots, Robot Companions and Social Robots, Robots in Education, Therapy and Rehabilitation
Abstract: College students often face academic and life stressors affecting productivity, especially students with Attention Deficit Hyperactivity Disorder (ADHD) who experience executive functioning challenges. Conventional productivity tools typically demand sustained self-discipline and consistent use, which many students struggle with, leading to disruptive app-switching behaviors. Socially Assistive Robots (SARs), known for their intuitive and interactive nature, offer promising potential to support productivity in academic environments, having been successfully utilized in domains like education, cognitive development, and mental health. To leverage SARs effectively in addressing student productivity, this study employed a Participatory Design (PD) approach, directly involving college students and a Student Success and Well-Being Coach in the design process. Through interviews and a collaborative workshop, we gathered detailed insights on productivity challenges and identified desirable features for a productivity-focused SAR. Importantly, ethical considerations were integrated from the onset, facilitating responsible and user-aligned design choices. Our contributions include comprehensive insights into student productivity challenges, SAR design preferences, and actionable recommendations for effective robot characteristics. Additionally, we present stakeholder-derived ethical guidelines to inform responsible future implementations of productivity-focused SARs in higher education.
|
|
14:22-14:34, Paper WeHT5.2 | Add to My Program |
An Ethical Risk Assessment of a Social Robot in the Workplace |
|
Dowthwaite, Liz (University of Nottingham), Lancaster, Karen (University of Nottingham), Marsh, Elizabeth (University of Nottingham), McClaughlin, Emma (University of Nottingham), Barnard, Pepita (University of Nottingham), Caleb-Solly, Praminda (University of Nottingham), Cameron, Harriet (University of Nottingham), Craigon, Peter (University of Nottingham), Magassouba, Aly (University of Nottingham), Moir, Frederick (University of Nottingham), Webb, Helena (University of Nottingham) |
Keywords: Ethical Issues in Human-robot Interaction Research, Applications of Social Robots, Robot Companions and Social Robots
Abstract: This research study scopes the ethical implications associated with the deployment of social robots within workplace environments, in the context of an increased need for employee wellbeing. Progress in the domains of artificial intelligence (AI) and social robotics presents a potential avenue for improving workplace wellbeing. However, it is necessary to conduct a comprehensive evaluation of the impacts and ethical considerations associated with these emerging technologies. For this purpose, we carried out an ethical risk assessment for a telepresence robot programmed to function as a social robot in the workplace, which we named 'Cheerbot'. After introducing Cheerbot’s functions, the paper describes an ethical risk assessment process, which involves identifying potential hazards, the likelihood of each hazard occurring, potential consequences (harms), and a risk exposure rating for each hazard. Results are presented for three hypothetical scenarios, and potential mitigations for the highest-rated risks are suggested. The findings highlight the value of proactively identifying and mitigating ethical risks of harm, ensuring the responsible deployment of robotics aimed at supporting workplace wellbeing.
|
|
14:34-14:46, Paper WeHT5.3 | Add to My Program |
Reimagining Informed Consent in Human-Robot Interaction: Introducing the RoboConsent Framework |
|
Rosén, Julia (McMaster University), Geiskkovitch, Denise Y. (McMaster University) |
Keywords: Ethical Issues in Human-robot Interaction Research, User-centered Design of Robots, Embodiment, Empathy and Intersubjectivity
Abstract: Informed consent is an integral process in human-robot interaction (HRI); however, current practices have been criticized for overlooking the social, psychological, and embodied complexities of interacting with robots. Social robots’ embodied, human-like design and social behavior can lead to misaligned expectations that pose risks such as deception, overtrust, poor user experience, and psychological harm for users. Moreover, robots often collect personal data in ways that are not always visible or understood by users. Typically, informed consent does not address such issues, highlighting the need for consent processes tailored to HRI. In this paper, we reimagine informed consent and introduce the RoboConsent framework, drawing from previous HRI research highlighting these issues and feminist consent models that address power imbalances and move toward a user-centered process. The framework consists of five components that ensure meaningful informed consent and six principles that guide how it can be obtained. These work in tandem to create informed consent practices that address the unique dynamics of HRI.
|
|
14:46-14:58, Paper WeHT5.4 | Add to My Program |
Oh F**k! How Do People Feel about Robots That Leverage Profanity? |
|
Shippy, Madison (Oregon State University), Zhang, Brian John (Oregon State University), Fitter, Naomi T. (Oregon State University) |
Keywords: Sound design for robots, Linguistic Communication and Dialogue, Robotic Etiquette
Abstract: Profanity is nearly as old as language itself, and cursing has become particularly ubiquitous within the last century. At the same time, robots in personal and service applications are often overly polite, even though past work demonstrates the potential benefits of robot norm-breaking. Thus, we became curious about robots using curse words in error scenarios as a means for improving social perceptions by human users. We investigated this idea using three phases of exploratory work: an online video-based study (N = 76) with a student pool, an online video-based study (N = 98) in the general U.S. population, and an in-person proof-of-concept deployment (N = 52) in a campus space, each of which included the following conditions: no-speech, non-expletive error response, and expletive error response. A surprising result in the outcomes for all three studies was that although verbal acknowledgment of an error was typically beneficial (as expected based on prior work), few significant differences appeared between the non-expletive and expletive error acknowledgment conditions (counter to our expectations). Within the cultural context of our work, the U.S., it seems that many users would likely not mind if robots curse, and may even find it relatable and humorous. This work signals a promising and mischievous design space that challenges typical robot character design.
|
|
WeHT6 Regular Session, Auditorium 6 |
Add to My Program |
Motion and Navigation III |
|
|
Chair: Kato, Shohei | Nagoya Institute of Technology |
Co-Chair: Cañete, Raquel | Universidad De Sevilla |
|
14:10-14:22, Paper WeHT6.1 | Add to My Program |
Robot Arms Too Short? Explaining Motion Planning Failures Using Design Optimization |
|
Wu, Wenxi (King's College London), Brandao, Martim (King's College London) |
Keywords: User-centered Design of Robots, Novel Interfaces and Interaction Modalities, Motion Planning and Navigation in Human-Centered Environments
Abstract: Motion planning algorithms are a fundamental component of robotic systems. Unfortunately, as shown by recent literature, their lack of explainability makes it difficult to understand and diagnose planning failures. The feasibility of a motion planning problem depends heavily on the robot model, which can be a major reason for failure. We propose a method that automatically generates explanations of motion planner failures based on robot design. When a planner is not able to find a feasible solution to a problem, we compute a minimum modification to the robot's design that would enable the robot to complete the task. This modification then serves as an explanation of the type: "the planner could not solve the problem because robot links X are not long enough". We demonstrate how this explanation conveys what the robot is doing, why it fails, and how the failure could be recovered from if the robot had a different design. We evaluate our method through a user study, which shows that our explanations help users better understand robot intent, cause of failure, and recovery, compared to other methods. Moreover, users were more satisfied with our method's explanations and reported that they understood the capabilities of the robot better after exposure to the explanations.
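A toy instance of the explanation-by-design-optimization idea: for a planar two-link arm whose target lies beyond its reach, find the minimum-norm change to the link lengths that restores reachability. This is a drastic simplification of the paper's motion-planning setting, with all numbers invented.

```python
import numpy as np
from scipy.optimize import minimize

L0 = np.array([0.3, 0.3])   # current link lengths (m)
target_dist = 0.75          # target 0.75 m from the base: currently unreachable

def objective(L):
    return np.sum((L - L0) ** 2)  # smallest design modification

cons = [
    {"type": "ineq", "fun": lambda L: L.sum() - target_dist},  # reachability
    {"type": "ineq", "fun": lambda L: L - 0.05},               # minimum link length
]

res = minimize(objective, L0, constraints=cons)
print(np.round(res.x, 3))   # each link lengthened to ~0.375 m
# Explanation: "the planner failed because the links need to be ~0.075 m longer."
```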
|
|
14:22-14:34, Paper WeHT6.2 | Add to My Program |
MRHaD: Mixed Reality-Based Hand-Drawn Map Editing Interface for Mobile Robot Navigation |
|
Taki, Takumi (Osaka University), Kobayashi, Masato (The University of Osaka / Kobe University), Iglesius, Eduardo (Osaka University), Chiba, Naoya (Osaka University), Shirai, Shizuka (Osaka University), Uranishi, Yuki (Osaka University) |
Keywords: Novel Interfaces and Interaction Modalities, Motion Planning and Navigation in Human-Centered Environments, Virtual and Augmented Tele-presence Environments
Abstract: Mobile robot navigation systems are increasingly relied upon in dynamic and complex environments, yet they often struggle with map inaccuracies and the resulting inefficient path planning. This paper presents MRHaD, a Mixed Reality-based Hand-drawn Map Editing Interface that enables intuitive, real-time map modifications through natural hand gestures. By integrating the MR head-mounted display with the robotic navigation system, operators can directly create hand-drawn restricted zones (HRZ), thereby bridging the gap between 2D map representations and the real-world environment. Comparative experiments against conventional 2D editing methods demonstrate that MRHaD significantly improves editing efficiency, map accuracy, and overall usability, contributing to safer and more efficient mobile robot operations. The proposed approach provides a robust technical foundation for advancing human-robot collaboration and establishing innovative interaction models that enhance the hybrid future of robotics and human society. For additional material, please check: https://mertcookimg.github.io/mrhad/
|
|
14:34-14:46, Paper WeHT6.3 | Add to My Program |
Gait Rehabilitation for Individuals after Anterior Cruciate Ligament Reconstruction Using a Lightweight Unpowered Exoskeleton |
|
Bao, Bingsheng (Institute of Robotics & Intelligent Systems, Shaanxi Key Laborat), Zhu, Aibin (Xi'an Jiaotong University), Feng, Pengpeng (The Fourth Medical Center of Chinese PLA General Hospital), Wu, Xinyu (Xi'an Jiaotong University), Zhang, Jing (Xi'an Jiaotong University), Wang, Jing (Xi'an Jiaotong University), Zhang, Yu (Xi'an Jiaotong University), Li, Meng (Xi'an Jiaotong University), Li, Xiao (The Fourth Medical Center of Chinese PLA General Hospital), Guan, Zhenpeng (Peking University Shougang Hospital) |
Keywords: Robots in Education, Therapy and Rehabilitation, Creating Human-Robot Relationships, Social Touch in Human–Robot Interaction
Abstract: Currently, few unpowered exoskeletons are used in the actual rehabilitation of patients. Individuals after anterior cruciate ligament (ACL) reconstruction usually exhibit abnormal gait characteristics, and this abnormal gait increases the likelihood of osteoarthritis (OA) in the long term. For these individuals, considering that the knee joint may be unstable due to insufficient muscle strength, especially during the weight acceptance phase, this paper proposes a knee-extension-assisted unpowered exoskeleton for knee protection as well as gait rehabilitation training. This lightweight exoskeleton, weighing only 450 g, holds promise for enhancing gait stability and lowering the risk of joint osteoarthritis. Custom-designed miniature gas springs contribute to energy storage and provide specified stiffness support to the human knee joint. Gait experiments conducted on six subjects (nearly 1 month after ACL reconstruction) indicated that walking with this exoskeleton significantly reduced peak rectus femoris (RF) muscle activity by 36.4% in the weight acceptance phase. The results demonstrate the potential of this exoskeleton to relieve stress on the knee extensor muscles, safeguard patients during recovery, and assist in early gait training.
|
|
14:46-14:58, Paper WeHT6.4 | Add to My Program |
Bridging the Gap with PRoMo: What Users Expect from Robot Navigation in Shared Environments |
|
Nikolovska, Kristina (Constructor University), Maurelli, Francesco (Constructor University), Kappas, Arvid (Constructor University) |
Keywords: Motion Planning and Navigation in Human-Centered Environments, User-centered Design of Robots, Evaluation Methods
Abstract: Robot navigation plays a critical role in how people perceive, accept, and collaborate with robots in shared environments. This study presents PRoMo (Preference for Robot Motion Questionnaire), a user-centered questionnaire designed to capture human expectations of robot navigation behavior, independent of specific robot forms or tasks. The questionnaire consolidates 28 empirically grounded behaviors into five thematic categories: safety, predictability, proximity, speed and path selection, and responsiveness. Responses from 140 participants reveal strong preferences for navigation strategies that respect personal space, avoid blind spots, and signal awareness through subtle motion cues. Analyses of open-ended responses highlight additional concerns, including robot noise and emotional comfort, suggesting that movement is perceived not only as spatial but also as sensory and expressive. Importantly, subjective familiarity with robots showed stronger correlations with behavior preferences than objective experience. These findings offer a generalizable framework for designing socially appropriate robot navigation strategies in human-centered spaces.
|
|
14:58-15:10, Paper WeHT6.5 | Add to My Program |
Socially-Aware Mobile Robot Navigation: Pedestrian Behavior Modeling and Adaptive Motion Planning |
|
Han, Jinwoo (University of Tokyo), Sasaki, Yoko (National Institute of Advanced Industrial Science and Technology) |
Keywords: Detecting and Understanding Human Activity, Motion Planning and Navigation in Human-Centered Environments, Social Intelligence for Robots
Abstract: In crowded environments, such as downtown areas on weekends or office corridors during commute hours, it is challenging for an autonomous mobile robot to reach its destination when navigating among moving pedestrians. This challenge requires advanced techniques that enable robots to understand and interact with human behavior, leveraging a wide range of real-world data. We propose two key components to address this crowd navigation challenge. First, the behavior of pedestrians around the robot is modeled using data collected by the robot: K-means++ is applied to the trajectory data to identify distinct pedestrian behavior patterns, and a transformer-based VAE then reproduces pedestrians' reactive motions to robot actions. Second, a ResNet-18 + A3C-based deep reinforcement learning algorithm trains the robot to generate safe navigation actions in a simulation environment that reflects actual pedestrian behavior. We validate our approach through real-world experiments conducted at a science museum, demonstrating significant improvements over existing methods.
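The first component can be sketched as k-means++ clustering of per-trajectory features; the two synthetic behaviour patterns and the feature choice below are illustrative only, standing in for the robot-collected trajectory data.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two invented patterns, e.g. features = (lateral shift, walking speed).
passing = rng.normal([0.0, 1.0], 0.1, size=(50, 2))    # steady forward walkers
avoiding = rng.normal([0.8, 0.3], 0.1, size=(50, 2))   # sidestep-and-slow walkers
features = np.vstack([passing, avoiding])

km = KMeans(n_clusters=2, init="k-means++", n_init=10, random_state=0)
labels = km.fit_predict(features)        # behaviour pattern per trajectory
print(km.cluster_centers_.round(2))      # recovered pattern prototypes
```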
|
|
WeHT7 Regular Session, Auditorium 7 |
Add to My Program |
Robots in Families, Education, Therapeutic Contexts & Arts VI |
|
|
Chair: Laban, Guy | University of Cambridge |
|
14:10-14:22, Paper WeHT7.1 | Add to My Program |
"My Name Is Sonrie, and I Come from Afar!" – Co-Designing a Social Robot for Multicultural Early Education |
|
Bixio, Anna Allegra (Università Di Genova), Nardelli, Alice (University of Genoa), Stopponi, Alice (Università Di Perugia), Filomia, Maria (Università Di Perugia), Bartolini, Alessia (Università Di Perugia), Milella, Marco (Università Degli Studi Di Perugia), Sgorbissa, Antonio (University of Genova), Recchiuto, Carmine Tommaso (University of Genova) |
Keywords: Child-Robot Interaction, Robots in Education, Therapy and Rehabilitation, User-centered Design of Robots
Abstract: Co-design is widely used in educational contexts to involve stakeholders and make them active participants in the learning process. This study presents the co-design process conducted with teachers, educators, and families before introducing a social robot in four Italian preschools and a nursery. The robot is expected to promote intercultural awareness in a highly culturally diverse educational environment. We consider the co-design process an essential step, as teachers and educators, by knowing the social rules and pedagogical concepts of each specific educational context, can tailor the unique characteristics of the robot (being embodied and equipped with social behaviors) to effectively benefit their pedagogical reality. In addition, stakeholders, through co-design, can incorporate cultural awareness of children and their families into the robot design. The results obtained after the co-design process highlight that the co-creation of robotic applications and the robot’s imagery before its actual introduction into activities is fundamental for developing a framework tailored to a specific educational context, for reformulating the project’s prerogatives in such a way that it becomes part of the educational reality, and for giving teachers the opportunity to familiarize themselves with the robot, understand its capabilities, and exploit them according to their educational context.
|
|
14:22-14:34, Paper WeHT7.2 | Add to My Program |
Assessment of Cancer Patients' Well-Being through Electrodermal Activity |
|
Meijer, Anneloes L. (Utrecht University), Pivin-Bachler, Julie R. (Utrecht University), Alvarez-Benito, Gloria (University of Seville), Amores-Carredano, J. Gabriel (Universidad De Sevilla), Gomez, Randy (Honda Research Institute Japan Co., Ltd), van den Broek, Egon L. (Utrecht University) |
Keywords: Monitoring of Behaviour and Internal States of Humans, Child-Robot Interaction, Machine Learning and Adaptation
Abstract: Hospitalized pediatric cancer patients often experience anxiety. Social robots have been proposed as intelligent monitors, embodied mediators, and embodied companions to watch over and support children with such distress. To do so, we propose to augment social robots with biosensors. ElectroDermal Activity recordings of ±22 hours were collected from 8 in-hospital pediatric cancer patients and 6 survivors outside the hospital, together with their diaries and an anxiety questionnaire. To optimally exploit the limited data gathered, external datasets were used to build classification models, with the best-performing model achieving a cross-validated F1-score of 0.59 (SD=0.12) on the test set. The models delivered promising out-of-distribution predictions of high arousal on the pediatric recordings. The limited number of labels available did not allow us to validate all high-arousal segments, confirming the challenge of gathering reliable ground-truth labels in pediatric hospitals. We suggest data collection methods to foster the further development of augmented social robots.
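A minimal sketch of the kind of cross-validated F1 evaluation reported, with synthetic stand-in features and a generic classifier (both assumptions; the authors' feature set and model are not specified here):

```python
# Illustrative sketch of cross-validated F1 scoring for a binary arousal label.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
# Toy EDA windows: per-window features such as tonic level, phasic peaks, slope.
X = rng.normal(size=(300, 3))
y = rng.integers(0, 2, size=300)  # 1 = high arousal, 0 = low arousal

scores = cross_val_score(RandomForestClassifier(random_state=0), X, y,
                         cv=5, scoring="f1")
print(f"F1 = {scores.mean():.2f} (SD = {scores.std():.2f})")
```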
|
|
14:34-14:46, Paper WeHT7.3 | Add to My Program |
What People Share with a Robot When Feeling Lonely and Stressed and How It Helps Over Time |
|
Laban, Guy (University of Cambridge), Chiang, Sophie (Department of Computer Science and Technology, University of Cambridge), Gunes, Hatice (University of Cambridge) |
Keywords: Motivations and Emotions in Robotics, Linguistic Communication and Dialogue, Long-term Experience and Longitudinal HRI Studies
Abstract: Loneliness and stress are prevalent among young adults and are linked to significant psychological and health-related consequences. Social robots may offer a promising avenue for emotional support, especially when considering the ongoing advancements in conversational AI. This study investigates how repeated interactions with a social robot influence feelings of loneliness and perceived stress, and how such feelings are reflected in the themes of user disclosures towards the robot. Participants engaged in a five-session robot-led intervention, where an LLM-powered QTrobot facilitated structured conversations designed to support cognitive reappraisal. Results from linear mixed-effects models show significant reductions in both loneliness and perceived stress over time. Additionally, semantic clustering of 560 user disclosures towards the robot revealed six distinct conversational themes. Results from a Kruskal-Wallis H-test demonstrate that participants reporting higher loneliness and stress more frequently engaged in socially focused disclosures, such as friendship and connection, whereas lower distress was associated with introspective and goal-oriented themes (e.g., academic ambitions). By exploring both how the intervention affects well-being, as well as how well-being shapes the content of robot-directed conversations, we aim to capture the dynamic nature of emotional support in human–robot interaction.
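The reported trend analysis can be illustrated with a linear mixed-effects model over simulated session data; the formula below assumes random intercepts per participant, which is one plausible reading of the analysis:

```python
# Minimal sketch: loneliness modeled over sessions with random intercepts
# per participant (simulated data; not the study's dataset).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n, sessions = 30, 5
df = pd.DataFrame({
    "participant": np.repeat(np.arange(n), sessions),
    "session": np.tile(np.arange(1, sessions + 1), n),
})
# Simulate a downward trend plus participant-level offsets and noise.
df["loneliness"] = (5 - 0.3 * df["session"]
                    + rng.normal(0, 0.5, n)[df["participant"]]
                    + rng.normal(0, 0.5, len(df)))

model = smf.mixedlm("loneliness ~ session", df, groups=df["participant"]).fit()
print(model.summary())  # the 'session' coefficient tests change over time
```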
|
|
14:46-14:58, Paper WeHT7.4 | Add to My Program |
Improving Robot Learning Outcomes in Human-Robot Teaching: The Role of Human Teachers' Awareness of a Robot's Visual Constraints |
|
Aliasghari, Pourya (University of Waterloo), Nehaniv, Chrystopher (University of Waterloo), Ghafurian, Moojan (University of Waterloo), Dautenhahn, Kerstin (University of Waterloo) |
Keywords: Social Learning and Skill Acquisition Via Teaching and Imitation, Programming by Demonstration, Human Factors and Ergonomics
Abstract: To be able to learn effectively, robots sometimes will need to select more suitable human teachers. We propose an attribute in human teachers for robots that learn through visual observations, namely human teachers' awareness of and attention to the robot's visual capabilities and constraints, and explore how it affects robot learning outcomes. In an in-person experiment involving 72 participants who taught three physical tasks to an iCub humanoid robot, we manipulated teachers' awareness of the robot's visual constraints by offering the visual perspective of the robot in one of the experimental conditions. Participants who were able to see the robot's vision output paid increased attention to ensuring task objects were visible to the robot when providing demonstrations of physical tasks. This emphasis on attention to the robot's view resulted in better learning outcomes for the robot, as indicated by lower perception error rates and higher learning scores. This study contributes to understanding factors in human teachers that lead to better learning outcomes for robots.
|
|
14:58-15:10, Paper WeHT7.5 | Add to My Program |
AI-Gadget Kit: Integrating Swarm User Interfaces with LLM-Driven Agents for Tabletop Game Applications |
|
Guo, Yijie (Tsinghua University), Wang, Ruhan (Tsinghua University), Huang, Zhenhan (University of Tsukuba), Yao, Zhihao (Tsinghua University), Yu, Tianyu (University of California, Berkeley), Xu, Zhiling (Tsinghua University), Zhao, Xinyu (Tsinghua University), Li, Xueqing (Tsinghua University), Mi, Haipeng (Tsinghua University) |
Keywords: Robots in art and entertainment, Novel Interfaces and Interaction Modalities, Narrative and Story-telling in Interaction
Abstract: While Swarm User Interfaces (SUIs) have succeeded in enriching tangible interaction experiences, their limitations in autonomous action planning have hindered the potential for personalized and dynamic interaction generation in tabletop games. Based on the AI-Gadget Kit we developed, this paper explores how to integrate LLM-driven agents to enable SUIs to execute interaction tasks within tabletop games. After defining the design space of this kit, we elucidate the method for designing agents that can extend the meta-actions of SUIs to motion planning. Furthermore, we introduce an add-on prompt method that simplifies the design process for four interaction relationships in tabletop games. Lastly, we present an example that illustrates the potential of AI-Gadget Kit to construct personalized complex interactions in SUI tabletop games.
|
|
WeLBR Interactive Session, Senaatszaal/Voorhof |
Add to My Program |
Late Breaking Reports II (Same Papers As in LBR I), 12:50-15:10 |
|
|
Chair: Ricci, Andy Elliot | Bates College |
|
12:50-15:10, Paper WeLBR.1 | Add to My Program |
HRSP2mix: Human-Robot Speech Interruption Corpus |
|
Wang, Kangdi (Heriot-Watt University), Aylett, Matthew (Heriot-Watt University, CereProc Ltd.), Pidcock, Christopher John (CereProc Ltd.) |
|
12:50-15:10, Paper WeLBR.2 | Add to My Program |
Metaphysical Masks and Robots: Studying Movement, Intent, and Perception for Social Robotics |
|
Wallace, Benedikte (University of Oslo) |
|
12:50-15:10, Paper WeLBR.3 | Add to My Program |
A Preliminary Study on the Effectiveness of a Social Robot on Stress Reduction through Deep Breathing |
|
Rosenthal-von der Pütten, Astrid Marieke (RWTH Aachen University), Liu, Guiying (RWTH Aachen University), Alhabboub, Hani Alassiri (RWTH Aachen University), Song, Heqiu (RWTH Aachen University) |
|
12:50-15:10, Paper WeLBR.4 | Add to My Program |
The Dream Robot: Medical Hypnosis for Children by a Robot in a Hospital Setting |
|
Weda, Judith (University of Applied Sciences Utrecht), Droog, Simone de (Amsterdam University of Applied Sciences), Klompmaker, Elise Amke (University of Applied Sciences Utrecht), Ligthart, Mike (Vrije Universiteit Amsterdam), Ul Husan, Sobhaan Javaid (Vrije Universiteit Amsterdam), Veld, Sofie (Vrije Universiteit Amsterdam), Hendriks, Fleur (Vrije Universiteit Amsterdam), Vlieger, Arine (St Antonius Hospital), Smakman, Matthijs (Vrije Universiteit Amsterdam) |
|
12:50-15:10, Paper WeLBR.5 | Add to My Program |
PyiCub: Rapid Prototyping of iCub Applications for Human-Robot Interaction Scenarios |
|
De Tommaso, Davide (Istituto Italiano di Tecnologia), Piacenti, Enrico (Istituto Italiano di Tecnologia), Currie, Joel (University of Aberdeen), Migno, Gioele (Istituto Italiano di Tecnologia), Gharb, Mohammad (Istituto Italiano di Tecnologia), Wykowska, Agnieszka (Istituto Italiano di Tecnologia) |
|
12:50-15:10, Paper WeLBR.6 | Add to My Program |
Designing an LLM-Powered Social Robot for Supporting Emotion Regulation in Parent-Child Dyads |
|
Li, Jing (Eindhoven University of Technology), Li, Sheng (Institute of Science Tokyo), Barakova, Emilia I. (Eindhoven University of Technology), Schijve, Felix (Eindhoven University of Technology), Hu, Jun (Eindhoven University of Technology) |
|
12:50-15:10, Paper WeLBR.7 | Add to My Program |
On the Capabilities of LLMs for Classifying and Segmenting Time Series of Fruit Picking Motions into Primitive Actions |
|
Konstantinidou, Eleni (Hellenic Mediterranean University), Kounalakis, Nikolaos (Hellenic Mediterranean University), Efstathopoulos, Nikolaos (Hellenic Mediterranean University), Papageorgiou, Dimitrios (Hellenic Mediterranean University) |
Keywords: Linguistic Communication and Dialogue, Detecting and Understanding Human Activity, Programming by Demonstration
Abstract: Despite their recent introduction to human society, Large Language Models (LLMs) have significantly affected the way we tackle mental challenges in our everyday lives. From optimizing our linguistic communication to assisting us in making important decisions, LLMs, such as ChatGPT, are notably reducing our cognitive load by gradually taking on an increasing share of our mental activities. In the context of Learning by Demonstration (LbD), classifying and segmenting complex motions into primitive actions, such as pushing, pulling, and twisting, is considered a key step towards encoding a task. In this work, we investigate the capabilities of LLMs to undertake this task, considering a finite set of predefined primitive actions found in fruit-picking operations. By utilizing LLMs instead of simple supervised learning or analytic methods, we aim to make the method easily applicable and deployable in a real-life scenario. Three different fine-tuning approaches are investigated and compared on datasets captured kinesthetically, using a UR10e robot, during a fruit-picking scenario.
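One plausible shape for the LLM-based labeling step is sketched below with the OpenAI Python client; the prompt, feature summary, and model name are illustrative assumptions, not the paper's fine-tuned setup:

```python
# Hedged sketch: asking an LLM to label one motion segment with a primitive action.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PRIMITIVES = ["reach", "grasp", "pull", "twist", "push", "release"]

def classify_segment(mean_force_z, mean_speed):
    prompt = (
        "You label fruit-picking motion segments with one primitive action.\n"
        f"Allowed labels: {', '.join(PRIMITIVES)}.\n"
        f"Mean vertical force (N): {mean_force_z:.1f}; "
        f"mean end-effector speed (m/s): {mean_speed:.2f}.\n"
        "Answer with exactly one label."
    )
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.choices[0].message.content.strip()

print(classify_segment(mean_force_z=-8.2, mean_speed=0.05))
```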
|
|
12:50-15:10, Paper WeLBR.8 | Add to My Program |
A Robot of One’s Own: The Impact of User-Driven Customization on Human–Robot Interactions |
|
Voges, Amelie (University of Glasgow), Cross, Emily S (ETH Zurich), Foster, Mary Ellen (University of Glasgow) |
Keywords: Creating Human-Robot Relationships, User-centered Design of Robots
Abstract: Despite a strong shift towards user-centric design methodologies in the field of social robotics, little empirical research has directly investigated the impact of user-driven customization of a robot on the human–robot relationship. In this mixed-methods study, we investigated the effects of robot customization on first impressions of a humanoid robot. Participants across three experimental groups (N = 162) rated their perception of a robot before and after customizing it and were then qualitatively interviewed about their thoughts on the robot customization process. Contrary to our hypotheses, we found no evidence to suggest that customization influenced perceptual assessments of the robot. However, our qualitative findings highlight that participants mostly perceived customization as enjoyable and meaningful, and that it imbued the robot with a sense of identity and made it more pleasant to interact with. We argue that whilst customization alone may not reliably influence first impressions of social robots, it holds the potential to enhance the robots' relevance and appeal for users.
|
|
12:50-15:10, Paper WeLBR.9 | Add to My Program |
Multi-Material Pneumatic Linear Actuators Inspired by Facial Muscles: Design, Fabrication and Characterization |
|
Saini, Vijay (Indian Institute of Technology, Roorkee), Pathak, Pushparaj Mani (Indian Institute of Technology Roorkee) |
Keywords: Motivations and Emotions in Robotics, Non-verbal Cues and Expressiveness, Creating Human-Robot Relationships
Abstract: Biomimetic humanoid robots offer substantial potential in medical and social domains, particularly where lifelike human interaction is critical. This study introduces the design, fabrication, and performance analysis of a soft pneumatic actuator developed specifically for facial expression generation in humanoid robots. Replicating the biomechanics of human facial muscles presents significant complexity; to overcome this, we designed a multi-material, vacuum-driven soft actuator leveraging localized beam buckling. When subjected to negative pressure, the actuator’s internal cavities collapse directionally, producing controlled longitudinal contraction. The actuator is fabricated through a three-step process: (1) Fused deposition modeling (FDM) 3D printing of a Thermoplastic polyurethane (TPU)-based core, (2) silicone molding to form the compliant outer skin, and (3) final assembly. By adjusting geometric dimensions, the actuator’s deformation can be programmed for specific expressions. Experimental evaluation showed a peak contraction of about 10 mm under no load, and a blocked force of 3.81 N when the actuator’s contraction was fully restricted under 70% vacuum. A force of approximately 2 N was sufficient to produce a 10 mm displacement in a silicone facial skin simulant, validating the actuator’s effectiveness in generating facial expressions.
|
|
12:50-15:10, Paper WeLBR.10 | Add to My Program |
Towards Effective Sign Language-Based Communication in Human-Robot Interaction: Challenges and Considerations |
|
Tan, Sihan (Institute of Science Tokyo), Khan, Nabeela (Institute of Science Tokyo), Yen, Benjamin (Institute of Science Tokyo), Ashizawa, Takeshi (Institute of Science Tokyo), Nakadai, Kazuhiro (Institute of Science Tokyo) |
Keywords: Non-verbal Cues and Expressiveness, Multimodal Interaction and Conversational Skills, Assistive Robotics
Abstract: Sign language serves as the primary means of communication for deaf and hard-of-hearing individuals. While deep learning-based techniques have paved the way for inclusive and diverse sign language processing (SLP), this research seldom extends into the robotics field. The area of human-robot interaction using sign language remains largely unexplored. This position paper urges the robotics community to recognize the integration of sign language in human-robot communication as a research domain with significant social and scientific potential. In this paper, we first identify the research gap between deep learning-based SLP and robotics. We then provide detailed analyses of the gaps in each field that must be bridged to fully realize sign language-based human-robot communication.
|
|
12:50-15:10, Paper WeLBR.11 | Add to My Program |
Construction of a Cohabitative STEAM Learning Environment Using a Weak Robot “Toi” |
|
Honjo, Nen (Toyohashi University of Technology), Hasegawa, Komei (Toyohashi University of Technology), Okada, Michio (Toyohashi University of Technology) |
Keywords: Robots in Education, Therapy and Rehabilitation, Child-Robot Interaction, Robots in art and entertainment
Abstract: When a “social robot” with a sense of life and sociality enters children's classrooms, what kind of interactions and learning will be generated there? The authors have been studying “weak robots,” such as the “Sociable Trash Box,” which picks up trash and collects it, while successfully eliciting help from children. To apply the concept of weak robots to STEAM learning, we have developed “Toi,” a robot for “Cohabitative STEAM Learning,” which aims to grow together with children in their daily lives, not only in assembly and programming. In this paper, we conducted fieldwork using Toi and examined its potential and impact on children's learning through questionnaires and interviews with both students and teachers. In particular, by comparing the two fieldwork methods, it was confirmed that the Toi can create continuous learning through the formation of attachments and collaborative learning mediated by the Toi.
|
|
12:50-15:10, Paper WeLBR.12 | Add to My Program |
Towards Urgency Perception in HRI |
|
Halilovic, Amar (Ulm University), Chandrayan, Vanchha (Ulm University), Krivic, Senka (University of Sarajevo) |
Keywords: Ethical Issues in Human-robot Interaction Research, Applications of Social Robots, Evaluation Methods
Abstract: In time-sensitive human-robot interaction (HRI), conveying urgency is critical for eliciting timely and appropriate human responses. This paper presents an in-person user study that investigates how prosodic (voice pitch) and verbal (phrasing) cues affect urgency perception, compliance, satisfaction, and trust in a mobile robot during an unscripted hallway encounter. Participants, engaged in a fake delivery task, encountered a robot that issued context-appropriate and help-seeking requests under five different urgency conditions. Initial results from behavioral measures (reaction time, compliance) and subjective ratings (urgency perception, explanation satisfaction) indicate that both pitch and phrasing modulate urgency perception. However, further analysis is needed to obtain richer results and draw more confident conclusions. This work contributes to the growing body of research on socially intelligent robot behavior in dynamic, real-world settings.
|
|
12:50-15:10, Paper WeLBR.13 | Add to My Program |
Insights from Interviews with Teachers and Students on the Use of a Social Robot in Computer Science Class in Sixth Grade |
|
Schenk, Ann-Sophie (RWTH Aachen University), Schiffer, Stefan (RWTH Aachen University), Song, Heqiu (RWTH Aachen University) |
Keywords: Robot Companions and Social Robots, Robots in Education, Therapy and Rehabilitation, Applications of Social Robots
Abstract: In this paper we report on first insights from interviews with teachers and students on using social robots in computer science class in sixth grade. Our focus is on learning about requirements and potential applications. We are particularly interested in getting both perspectives, the teachers’ and the learners’ view on how robots could be used and what features they should or should not have. Results show that teachers as well as students are very open to robots in the classroom. However, requirements are partially quite heterogeneous among the groups. This leads to complex design challenges which we discuss at the end of this paper.
|
|
12:50-15:10, Paper WeLBR.14 | Add to My Program |
User-Centered Iterative Design of a Robotic Upper-Body Trainer |
|
Sznaidman, Yael (Ben Gurion University), Handelzalts, Shirley (Ben Gurion University), Edan, Yael (Ben-Gurion University of the Negev) |
Keywords: Robots in Education, Therapy and Rehabilitation, User-centered Design of Robots
Abstract: This paper presents the iterative, user-centered design process of a robotic trainer system for upper-body rehabilitation in individuals with lower-limb orthopedic injuries. The system, originally adapted from a robotic physical training platform designed for older adults, was refined through multiple development cycles, incorporating feedback from both physical therapists and patients. Initial improvements were guided by therapist questionnaires and focused on enhancing motivation, personalization, instructional clarity, and feedback. In subsequent iterations, interface enhancements, additional exercise options, and improved feedback mechanisms were introduced. Strategies were also developed to address the camera’s limited ability to accurately recognize patient movements, informed by therapist focus groups and a pilot study. Each development phase prioritized user engagement, safety, and personalization, with the overarching goal of enhancing the system’s effectiveness and applicability in real-world rehabilitation settings.
|
|
12:50-15:10, Paper WeLBR.15 | Add to My Program |
A Robot Repertoire for Assisting Teaching Children with Autism |
|
Schulz, Trenton (Norwegian Computing Center), Torrado Vidal, Juan Carlos (Norwegian Computing Center), Badescu, Claudia (University of Oslo), Fuglerud, Kristin Skeide (Norsk Regnesentral (Norwegian Computing Center)) |
|
|
12:50-15:10, Paper WeLBR.16 | Add to My Program |
Supporting Autism Therapies with Social Affective Robots |
|
Redondo, Alberto (Mathematical Sciences Institute, CSIC), Cooper, Sara (IIIA-CSIC), Mayoral-Macau, Arnau (Artificial Intelligence Institute, CSIC), Pascual, Alvaro (Mathematical Sciences Institute, CSIC), Pou, Tomeu (Artificial Intelligence Institute, CSIC), Rios, David (Mathematical Sciences Institute, CSIC), del Rio, Jose M. (Mathematical Sciences Institute, CSIC), Rodriguez-Soto, Manel (Artificial Intelligence Research Institute, IIIA-CSIC), Rodríguez-Aguilar, Juan Antonio (Artificial Intelligence Institute, CSIC), Ros, Raquel (IIIA-CSIC) |
Keywords: Child-Robot Interaction, Assistive Robotics, Applications of Social Robots
Abstract: This work presents a social emotional robot designed to support therapists in treating children with autism. Building on earlier experiences with the AIsoy robot, the EMOROBCARE project aims to build a low-cost robot that integrates advanced perception, autonomous decision-making (both under normal circumstances and under exceptions), and emotional interaction capabilities. For this, the robot integrates technologies from ASR, LLMs, TTS, computer vision, and affective decision making, adapted to run on low-cost SBC devices. Its design allows therapeutic games guided by the therapist to reinforce cognitive, emotional, and social skills in children with autism. A multiobjective utility model adjusts the robot’s behaviour depending on the child’s emotional state and environmental context. The system operates asynchronously and activates an intervention model during exceptions. In the medium term, the goal is for the robot to partly replicate therapy sessions at home with enhanced autonomous capacities, as well as to translate the experience to other assistive activities, such as with the elderly or as a teaching assistant.
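The multiobjective utility model can be pictured as a weighted scoring of candidate robot actions whose weights shift with the child's emotional state; the objectives and weights below are illustrative assumptions:

```python
# Minimal sketch of a multiobjective utility model for action selection.
def utility(action_scores, child_calm):
    # Objectives: therapeutic progress, engagement, emotional safety.
    weights = {"progress": 0.5, "engagement": 0.3, "safety": 0.2}
    if not child_calm:  # under distress, prioritize emotional safety
        weights = {"progress": 0.2, "engagement": 0.2, "safety": 0.6}
    return sum(weights[k] * action_scores[k] for k in weights)

actions = {
    "continue_game": {"progress": 0.9, "engagement": 0.7, "safety": 0.4},
    "soothing_talk": {"progress": 0.2, "engagement": 0.5, "safety": 0.9},
}
for calm in (True, False):
    best = max(actions, key=lambda a: utility(actions[a], calm))
    print(f"child_calm={calm} -> {best}")
```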
|
|
12:50-15:10, Paper WeLBR.17 | Add to My Program |
Into the Mind of AI: How Uncertainty and Sociality Motivation Shape Chatbot Anthropomorphism |
|
Stojnšek, Katja (Masaryk University) |
Keywords: Anthropomorphic Robots and Virtual Humans, Cognitive Skills and Mental Models, Monitoring of Behaviour and Internal States of Humans
Abstract: Anthropomorphism plays a crucial role in human-computer interaction (HCI), robotics, and, in my case of interest, artificial intelligence (AI) chatbots. People commonly anthropomorphize nonhuman agents such as pets and gods, imbuing them with humanlike capacities and mental experiences. According to prior research, there are three psychological determinants that underlie anthropomorphism when individuals try to comprehend such agents: elicited agent knowledge, effectance motivation, and sociality motivation. Since existing research on chatbot anthropomorphism has not kept pace with advancements in AI technology, particularly the increasing sophistication of LLMs, I will test whether chatbot predictability and users' levels of loneliness influence the anthropomorphization of AI chatbots using an experimental method. The experiment is not only relevant for obtaining new empirical results that support the cognitive and motivational determinants of anthropomorphism, but also contributes to the discussion on the impact of AI chatbot design.
|
|
12:50-15:10, Paper WeLBR.18 | Add to My Program |
Beyond Words: Designing Nonverbal Error Responses with Performers for Healthcare Robots |
|
Garcia Goo, Hideki (University of Twente), Evers, Vanessa (University of Twente) |
Keywords: Non-verbal Cues and Expressiveness, Social Touch in Human–Robot Interaction, Robotic Etiquette
Abstract: Robots operating in public spaces such as hospitals are bound to make social mistakes (e.g., invading personal space or failing to interpret social cues). Such errors can reduce trust and acceptance, especially if they are not properly addressed. While verbal strategies like apologies or explanations are common, they can raise unrealistic expectations of a robot’s capabilities. This paper explores nonverbal approaches to error mitigation, focusing on movement, posture, sound, and shape-change as expressive modalities. We conducted a qualitative study involving 26 performers from improv and dance backgrounds, who enacted social navigation error scenarios as either a healthcare robot (Harmony) or hospital stakeholders. Participants performed localization mistakes, calmness interruptions, and social expectation failures while using restricted robot-like modalities. Through video analysis, we identified recurring error behaviours, stakeholder responses, and a limited set of mitigation strategies including distancing and bowing. These findings offer insights into how robots might recover from social errors without relying on speech, and highlight the value of performer-informed methods in robot behaviour design. The results also support the development of a shape-changing robot platform and its potential as a communication tool in sensitive human-robot interaction contexts.
|
|
12:50-15:10, Paper WeLBR.19 | Add to My Program |
Do I See Myself in Them? Exploring the Effects of Robot-Human Gender Congruence on Perceived Anthropomorphism and Intelligence |
|
Moodley, Thosha (Hogeschool Utrecht), de Haas, Mirjam (HU University of Applied Sciences), Maas, Julia (Hogeschool Utrecht) |
Keywords: User-centered Design of Robots, Robot Companions and Social Robots, Social Intelligence for Robots
Abstract: Social robots are often gendered, which can influence user perceptions. In this between-subjects study, festival participants observed a gendered robot - male, female, or non-binary - presenting an ethical dilemma and then completed a perception questionnaire. Robots whose gender matched that of participants were initially perceived as more Anthropomorphic; however, this effect was no longer statistically significant when the non-binary condition was excluded, suggesting that the non-binary robot may have lowered Anthropomorphism in the gender-incongruent group. The non-binary robot was also least accurately identified and received the lowest Anthropomorphism scores, highlighting the complexity of representing gender fluidity in HRI. Perceived Intelligence was unaffected by robot gender, participant gender, or gender congruency, suggesting that participants evaluated intelligence without gender bias. A positive correlation between Anthropomorphism and Perceived Intelligence emerged, consistent with prior literature. Gendered response patterns were also observed, with women displaying greater empathy and more neutral attitudes. These findings underscore the importance of mindful gender design in robots and point to non-binary representation and the Anthropomorphism–Intelligence link as promising directions for future research.
|
|
12:50-15:10, Paper WeLBR.20 | Add to My Program |
Effective Recovery Strategies in Conversations with Older Adults |
|
Ashkenazi, Shaul (University of Glasgow), Webber, Bonnie (University of Edinburgh), Wolters, Maria (University of Edinburgh) |
Keywords: Linguistic Communication and Dialogue, Cognitive Skills and Mental Models, Ethical Issues in Human-robot Interaction Research
Abstract: Misunderstandings happen both in interactions between humans and in interactions between humans and voice assistants. Successful voice assistants know how to recover from such misunderstandings gracefully. We compared the effectiveness of two recovery strategies, AskRepeat (request user to repeat the sentence) and RepromptThenSay (repetition of prompt, followed by instruction) for younger and older users. The strategies were tested with 26 participants, 13 younger (aged 22-29) and 13 older (aged 66-81). Overall, users recovered successfully from problems they encountered with the system. Older and younger users performed equally well, and we found that RepromptThenSay was more effective for both age groups. Older users encountered more issues when using the system and were more likely to be annoyed with it, but found it as likable and habitable as younger users. We conclude that recovery strategies may need to be adapted to specific challenges and expectations instead of age.
|
|
12:50-15:10, Paper WeLBR.21 | Add to My Program |
Tactile Object Recognition Based on a Tactile Image with a Single Grasp of Robotic Hand |
|
Do, Hyunmin (Korea Institute of Machinery and Materials), Park, Jongwoo (Korea Institute of Machinery & Materials), Ahn, Jeongdo (Korea Institute of Machinery and Materials), Lee, Joonho (Korea Institute of Machinery & Materials (KIMM)), Jung, Hyunmok (Korea Institute of Machinery and Materials) |
Keywords: Machine Learning and Adaptation, Novel Interfaces and Interaction Modalities
Abstract: Recently, there has been growing interest in object recognition using tactile sensors. This paper proposes a tactile image-based object recognition method, wherein tactile images are constructed from sensor data acquired through a robotic hand. The proposed approach enables object recognition with a single grasp. Experimental results using the YCB object set demonstrate the effectiveness of the proposed method.
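A minimal sketch of the general idea, assuming per-finger taxel grids stitched into one tactile image and a generic classifier (the authors' sensor layout and model are not specified here):

```python
# Illustrative sketch: build a "tactile image" from one grasp and classify it.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(3)

def tactile_image(finger_readings):
    """Stack each finger's taxel grid side by side into one 2D image."""
    return np.hstack(finger_readings)

# Toy dataset: 200 grasps, 3 fingers with 4x4 taxel arrays each, 5 object classes.
X = np.array([tactile_image(rng.random((3, 4, 4))).ravel() for _ in range(200)])
y = rng.integers(0, 5, size=200)

clf = SVC(kernel="rbf").fit(X[:150], y[:150])
print("held-out accuracy:", clf.score(X[150:], y[150:]))
```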
|
|
12:50-15:10, Paper WeLBR.22 | Add to My Program |
Persona-Driven Design of Inclusive Robotic Workcells Using Value-Integrated LLMs |
|
Kim, Da-Young (Korea Institute of Robotics & Technology Convergence (KIRO)), Kim, Yongkuk (Korea Institute of Robotics & Technology Convergence (KIRO)), Lym, Hyo Jeong (Korea Institute of Robotics and Technology Convergence (KIRO)), Hwang, Dokyung (Korea Institute of Robotics & Technology Convergence), Kim, Min-Gyu (Korea Institute of Robotics and Technology Convergence), Jung, Eui-Jung (Korea Institute of Robot and Convergence) |
Keywords: User-centered Design of Robots, HRI and Collaboration in Manufacturing Environments, Assistive Robotics
Abstract: This study examines the use of Large Language Models combined with Schwartz’s Theory of Basic Human Values to develop personas for Human-Robot Interaction service design in robotic workcells for workers with upper-limb disabilities. Traditional approaches often fail to identify users’ latent needs, particularly due to communication challenges. By utilizing real interview data and value-based modeling, the resulting personas informed Customer Journey Maps and robot service strategies. Expert evaluations by UX designers and robotic engineers found that value-informed personas more accurately represented user contexts and provided more realistic, actionable insights than those based solely on demographic data.
|
|
12:50-15:10, Paper WeLBR.23 | Add to My Program |
Development of an Artificial Intelligence-Based Connector Assembly Status Prediction Algorithm |
|
Lee, Joonho (Korea Institute of Machinery & Materials (KIMM)), Ahn, Jeongdo (Korea Institute of Machinery and Materials), Lee, Young Hoon (University of Southern California), Park, Jongwoo (Korea Institute of Machinery & Materials), Kim, Hwi-su (Korea Institute of Machinery & Materials), Park, Dongil (Korea Institute of Machinery and Materials (KIMM)) |
Keywords: Machine Learning and Adaptation
Abstract: In this study, we present an AI-based approach for automating connector assembly by predicting the assembly state during the mating process. To generate training data, a series of mating experiments were conducted in which a robot sequentially attempted to mate connectors at 296 predefined XY coordinates (1 mm increments) under a vertical 10 N preload for 10 s, while two six-axis force/torque sensors recorded reaction forces at the end effector and connector base. Analysis of these force profiles revealed characteristic static and fluctuation patterns caused by connector material properties and robot joint stiffness, which can degrade state-estimation accuracy. Building on these insights, we trained a robot- and vision-agnostic model that leverages only relative position and force relationships to accurately infer connector contact and collision states. Experimental results demonstrate the model’s potential to enable precise assembly under tight tolerances—especially with collaborative robots of lower positional accuracy—and to eliminate the need for external vision systems by relying solely on F/T sensing.
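The state-prediction idea can be sketched as a classifier over relative position and force features; the synthetic data, features, and model below are illustrative assumptions, not the authors' trained model:

```python
# Hedged sketch: classify connector contact state from position and F/T features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
# Toy features per attempt: relative XY offset (mm) plus mean reaction force (N)
# at the end effector and at the connector base, echoing the dual-sensor setup.
X = np.column_stack([
    rng.uniform(-8, 8, 500),   # x offset
    rng.uniform(-8, 8, 500),   # y offset
    rng.normal(10, 2, 500),    # end-effector force
    rng.normal(9, 2, 500),     # base force
])
# Synthetic labels for illustration: 1 = mated/contact, 0 = collision/miss.
y = (np.hypot(X[:, 0], X[:, 1]) < 3).astype(int)

clf = LogisticRegression().fit(X[:400], y[:400])
print("state prediction accuracy:", clf.score(X[400:], y[400:]))
```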
|
|
12:50-15:10, Paper WeLBR.24 | Add to My Program |
Designing VR Environments for the Research and Development of Social Robots Supporting Older Adults’ Daily Activities: A Case Study |
|
Li, Yanzhe (Technical University of Delft), Dudzik, Bernd (Delft University of Technology), Neerincx, Mark (TNO) |
Keywords: Evaluation Methods, Virtual and Augmented Tele-presence Environments, Novel Interfaces and Interaction Modalities
Abstract: As the aging population grows, supporting older adults in daily life is becoming increasingly important. Social assistive robots offer promising solutions, but Human-Robot Interaction research often struggles to balance ecological validity with experimental control. We adopt an iterative co-design approach, involving older adults in formative and summative evaluations, and integrating domain knowledge, scientific insights on aging, and technological constraints. Virtual Reality provides a practical middle ground by enabling immersive and manageable simulations of real-world settings. This paper presents insights from a project in which we developed a VR environment to explore how a robot might support prospective memory in daily activities. While addressing this specific use case, we encountered challenges and solutions that we believe are broadly relevant. From this, we distill: (1) a case study demonstrating how VR can be used to study social assistive robotics in realistic settings, (2) a design process that emerged from the project to guide the systematic development of such environments and how it was applied, and (3) actionable guidance for using VR with older adults to design and evaluate social robots (e.g., mitigating motion sickness, controller complexity, and disorientation).
|
|
12:50-15:10, Paper WeLBR.25 | Add to My Program |
Special Educational Needs Teachers’ Perspectives on Social Robots in Supporting Children with Migration Backgrounds in Switzerland |
|
Tozadore, Daniel (University College London (UCL)), Seebohm, Leonie (PH Bern) |
Keywords: Robots in Education, Therapy and Rehabilitation, Child-Robot Interaction, Linguistic Communication and Dialogue
Abstract: Children with migrant backgrounds often face linguistic and cultural barriers that affect their integration into school environments. In Switzerland’s multilingual and culturally diverse educational landscape, Special Educational Needs (SEN) teachers play a crucial role in supporting these students. While social robots are gaining attention as tools for inclusive education, little is known about SEN teachers’ perspectives on their integration in this specific context. Addressing this gap, this qualitative study explored how SEN teachers in the Canton of Bern perceive the use of social robots to support migrant children's integration. Five female SEN teachers participated in semi-structured interviews informed by visual and video stimuli. Findings revealed generally cautious and mixed attitudes toward social robots. Teachers identified potential for supporting language learning and intercultural understanding, while expressing reservations about their use in social-emotional domains. Practical concerns such as cost, technical reliability, and ethical clarity also emerged. The study highlights the need for context-sensitive, teacher-informed approaches to technology integration in inclusive classrooms.
|
|
12:50-15:10, Paper WeLBR.26 | Add to My Program |
Introduction to Hybrid Type Cable-Driven Manipulator System with Vision Based Controller |
|
Noh, Kangmin (Korea University), Oh, YunChae (Korea University), Jeong, Hyunhwan (Korea University) |
Keywords: Innovative Robot Designs, User-centered Design of Robots, Motion Planning and Navigation in Human-Centered Environments
Abstract: In this paper, we present a hybrid cable-driven manipulator system that integrates a cable-driven serial manipulator with a cable-driven parallel manipulator platform. The proposed manipulator system benefits from the strengths of both serial and parallel cable-driven manipulator systems. As a result, the system offers a broad range of directional (orientation) capabilities from the serial manipulator, along with extensive positional movements derived from the parallel manipulator. The design, modeling, and both kinematic and static analyses of the proposed hybrid cable-driven manipulation system are presented. The validity and practicality of the proposed hybrid system are verified through numerical simulations and an experiment with a vision-based controller carried out on a prototype system.
|
|
12:50-15:10, Paper WeLBR.27 | Add to My Program |
Impact of Gaze-Based Interaction and Augmentation on Human-Robot Collaboration in Critical Tasks |
|
Jena, Ayesha (Lund University), Reitmann, Stefan (Chemnitz University of Technology), Topp, Elin Anna (Lund University - LTH) |
Keywords: Cooperation and Collaboration in Human-Robot Teams, Human Factors and Ergonomics, Detecting and Understanding Human Activity
Abstract: We present a user study analyzing head-gaze-based robot control and foveated visual augmentation in a simulated search-and-rescue task. Results show that foveated augmentation significantly improves task performance, reduces cognitive load by 38%, and shortens task time by over 60%. Head-gaze patterns analysed over both the entire task duration and shorter time segments show that near and far attention capture is essential to better understand user intention in critical scenarios. Our findings highlight the potential of foveation as an augmentation technique and the need to further study gaze measures to leverage them during critical tasks.
|
|
12:50-15:10, Paper WeLBR.28 | Add to My Program |
Quantifying Block Play Behavior in the Parent-Child Interaction Therapy Using Skeletal and Object Recognition |
|
Miyaji, Asahi (Chuo University), Sawada, Ryusei (Chuo University), Vincze, David (Chuo University), Niitsuma, Mihoko (Chuo University) |
Keywords: Detecting and Understanding Human Activity, Child-Robot Interaction, Robots in Education, Therapy and Rehabilitation
Abstract: Parent-Child Interaction Therapy (PCIT) is a therapeutic intervention that targets children with behavioral problems and their caregivers, emphasizing structured play sessions known as “special time.” However, both face-to-face and remote implementations of PCIT face challenges in objectively evaluating parent-child interactions and play activities in the absence of direct therapist supervision. In this study, we propose an autonomous system for quantitatively analyzing parent-child play situations during “special time.” The system implements parent-child identification based on skeletal information and play activity assessment using object detection. By employing 3D pose estimation, the system identifies parents and children and tracks their positions and movements. Simultaneously, instance segmentation and clustering are used to obtain quantitative indicators, including the number and arrangement of blocks, cluster ratios, and behavioral metrics. This framework aims to provide a comprehensive and objective evaluation of parent-child interactions. We also conducted an experiment with a real parent and a child using the proposed system.
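The block-arrangement metrics can be illustrated as clustering of detected block centroids; DBSCAN and the toy coordinates below are assumptions, not necessarily the authors' method:

```python
# Illustrative sketch: cluster block positions and compute count/ratio metrics.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(5)
# Toy block centroids (cm) from instance segmentation: two piles plus strays.
blocks = np.vstack([rng.normal([10, 10], 2, (8, 2)),
                    rng.normal([40, 30], 2, (5, 2)),
                    rng.uniform(0, 60, (3, 2))])

labels = DBSCAN(eps=6, min_samples=2).fit_predict(blocks)
n_clusters = len(set(labels)) - (1 if -1 in labels else 0)  # -1 marks strays
clustered_ratio = (labels != -1).mean()
print(f"blocks={len(blocks)}, clusters={n_clusters}, "
      f"clustered ratio={clustered_ratio:.2f}")
```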
|
|
12:50-15:10, Paper WeLBR.29 | Add to My Program |
Configuring Audio-Visual Segments for Real-Time Active Speaker Detection |
|
Lee, Woo-Jin (University of Science & Technology, KIST-School), Choi, Jongsuk (Korea Institute of Science and Technology) |
Keywords: Creating Human-Robot Relationships, Non-verbal Cues and Expressiveness, Social Touch in Human–Robot Interaction
Abstract: Active Speaker Detection (ASD) is a technology that identifies which individual is speaking in a video containing multiple people. When applied to mobile robots, ASD enables them to determine the active speaker in multiparty interactions, allowing the system to associate voice, facial expressions, and body movements. This integration enhances the robot’s ability to interpret non-verbal communication and understand the context of conversations more efficiently. In this study, we focused on real-time ASD and evaluated the real-time feasibility of several ASD models, including a state-of-the-art (SOTA) lightweight model. To evaluate three ASD models, we used the open-source AVA-ActiveSpeaker dataset and recorded real-time conversation video containing multiple people. We examined the performance variation and inference-time changes according to segment size for each model. Additionally, to simulate inference times in various computing environments, we compared results across three different setups, including a CPU-only environment. The lightweight version of the SOTA model demonstrated high performance in practice, and the other models also managed to detect active speakers in real time without issues in high-end computing environments. However, we found that it was difficult to utilize them effectively in relatively low-performance environments. As future work, we aim to explore methods and develop models that reduce inference time in low-performance environments, enabling real-time operation.
|
|
12:50-15:10, Paper WeLBR.30 | Add to My Program |
AZRA: Extending the Affective Capabilities of Zoomorphic Robots Using Augmented Reality |
|
Macdonald, Shaun (University of Glasgow), ElSayed, Salma (Abertay University), McGill, Mark (University of Glasgow) |
Keywords: Novel Interfaces and Interaction Modalities, Creating Human-Robot Relationships, Affective Computing
Abstract: Zoomorphic robots could serve as accessible and practical alternatives for users unable or unwilling to keep pets. However, their affective interactions are often simplistic and short-lived, limiting their potential for domestic adoption. In order to facilitate more dynamic and nuanced affective interactions and relationships between users and zoomorphic robots we present AZRA, a novel augmented reality (AR) framework that extends the affective capabilities of these robots without physical modifications. To demonstrate AZRA, we augment a zoomorphic robot, Petit Qoobo, with novel emotional displays (face, light, sound, thought bubbles) and interaction modalities (voice, touch, proximity, gaze). Additionally, AZRA features a computational model of emotion to calculate the robot's emotional responses, daily moods, evolving personality and needs. We highlight how AZRA can be used for rapid participatory prototyping and enhancing existing robots, then discuss implications on future zoomorphic robot development.
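A computational model of emotion of the kind described can be sketched as a valence–arousal state that decays toward a daily mood baseline between stimuli; the constants below are illustrative assumptions, not AZRA's actual model:

```python
# Hedged sketch: emotional state nudged by events, decaying toward a mood baseline.
import numpy as np

class EmotionState:
    def __init__(self, mood=(0.2, 0.0), decay=0.9):
        self.mood = np.array(mood)   # daily baseline (valence, arousal)
        self.state = self.mood.copy()
        self.decay = decay

    def step(self, stimulus=(0.0, 0.0)):
        """One update: decay toward the mood baseline, then add the event's effect."""
        self.state = self.mood + self.decay * (self.state - self.mood)
        self.state = np.clip(self.state + np.array(stimulus), -1, 1)
        return self.state

robot = EmotionState()
print("after petting:", robot.step(stimulus=(0.4, 0.3)))
print("left alone:   ", robot.step())
```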
|
|
12:50-15:10, Paper WeLBR.31 | Add to My Program |
Impact Analysis of Switching Pause Synchronization for Spoken Dialogue Systems |
|
Ujigawa, Yosuke (Keio Univ), Takashio, Kazunori (Keio University) |
Keywords: Non-verbal Cues and Expressiveness, Linguistic Communication and Dialogue, Evaluation Methods
Abstract: Each individual has a unique mental tempo (referred to as personal tempo), and the alignment of this tempo plays a crucial role in facilitating smooth interactions with spoken dialogue systems. This study focuses on the "switching pause," a key component of conversational tempo that is established during interaction. Using a dialogue corpus, we analyzed the impact of switching pauses on the dialogue and the process of synchronization. Through the analysis of synchronization between pairs, we examined dialogues with high similarity in switching pauses to elucidate the impact of this synchronization on goal achievement and cooperativity in dialogue. Furthermore, we conducted a time-series analysis within pairs to investigate the synchronization process and proposed a method for determining switching pauses for implementation in dialogue systems. This work contributes to the implementation and evaluation of a spoken dialogue system capable of adjusting its switching pauses. This methodology is essential for investigating individual differences among users and achieving effective dialogue with such systems, contributing significantly to the elucidation of personal factors that enable smooth communication.
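Given utterance timestamps, switching pauses and a simple pair-similarity index can be computed as sketched below (the synchrony index is an illustrative choice, not the paper's measure):

```python
# Minimal sketch, assuming utterances as (speaker, start_s, end_s) tuples.
import numpy as np

utterances = [("A", 0.0, 1.8), ("B", 2.1, 3.5), ("A", 3.9, 5.0), ("B", 5.6, 6.2)]

def switching_pauses(utts):
    """Gap between one speaker's utterance end and the other's next start."""
    return [utts[i + 1][1] - utts[i][2]
            for i in range(len(utts) - 1)
            if utts[i + 1][0] != utts[i][0]]

pauses = switching_pauses(utterances)
print("switching pauses (s):", pauses)
# One possible synchrony index: 1 / (1 + std), so stable pause lengths score high.
print("synchrony index:", 1.0 / (1.0 + np.std(pauses)))
```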
|
|
12:50-15:10, Paper WeLBR.32 | Add to My Program |
Can AI Express Emotion Accurately? a Study of Emotion Conveyance in AI-Generated Music |
|
Gao, Xinwei (Eindhoven University of Technology), Chen, Dengkai (Northwestern Polytechnical University), Cuijpers, Raymond (Eindhoven University of Technology), Gou, Zhiming (KU Leuven), Ham, Jaap (Eindhoven University of Technology) |
Keywords: Affective Computing, Evaluation Methods, Motivations and Emotions in Robotics
Abstract: This study investigates the accuracy of emotion conveyance in AI-generated music and the mechanisms involved in valence and arousal. In a mixed-subjects lab experiment (n = 24), participants from either Dutch or Chinese backgrounds listened to 16 music clips, each generated based on one of four representative emotion labels (excited, angry, depressed, and relaxed). After listening, participants selected the emotion label that best matched their perceived emotion and indicated their emotional perception on a visual analogue scale (VAS) emotion map. The results indicate that AI-generated music has the ability to convey emotions, with significant differences across emotion types. Positive-valence musical emotions convey better, particularly positive-valence, high-arousal emotions. Many participants misclassified angry music as excited. Furthermore, arousal was found to influence participants’ valence judgement: high arousal music increases the misjudgement of negative valence as more positive, while low-arousal music interferes with valence judgement. Additionally, emotion labels with opposite signs on the valence and arousal dimensions may enhance sensitivity to arousal cues. These findings provide valuable insights into the emotional design of AI music generation systems, contributing to their future development in the context of affective computing and human-AI interaction.
|
|
12:50-15:10, Paper WeLBR.33 | Add to My Program |
Investigation of the Feasibility of Large-Scale Dataset Construction and Automated Evaluation: Towards Effective Evaluation of LLM-Based Agents Understanding Implicature |
|
Iida, Ayu (Nihon University), Okuoka, Kohei (Nihon University), Omori, Takashi (Tamagawa University), Nakashima, Ryoichi (Kyoto University), Osawa, Masahiko (Nihon University) |
Keywords: Cognitive Skills and Mental Models, Creating Human-Robot Relationships, Cooperation and Collaboration in Human-Robot Teams
Abstract: Recently, large language models (LLMs) have made remarkable progress, yet they still struggle to perform adequately in communicative contexts involving implicature. Our previous study proposed LLM-based agents that integrate LLMs with cognitive models, and demonstrated that the agents could generate appropriate utterances, as if inferring the speaker’s intentions, in three dialogue scenarios. However, two major limitations remain in our previous study. First, evaluation in only a few scenarios makes it difficult to robustly assess the agents' performance. Second, the fact that the generated utterances were evaluated by an experienced experimenter familiar with the evaluation process limits the replicability of the evaluation by others. To rigorously assess the performance of the agents, this study constructs a large-scale dataset of dialogue scenarios involving implicature, and then examines whether LLMs can reliably evaluate the utterances generated by the agents. We show that our dataset is appropriate for examining the agents' performance and that evaluation by LLMs is feasible.
|
|
12:50-15:10, Paper WeLBR.34 | Add to My Program |
I-To-Te: Convivial Relationship between Human and Mobile Robot Via Tether |
|
Hasegawa, Komei (Toyohashi University of Technology), Ito, Daiyu (Toyohashi University of Technology), Okada, Michio (Toyohashi University of Technology) |
Keywords: Creating Human-Robot Relationships, Philosophical Issues in Human-Robot Coexistence, Innovative Robot Designs
Abstract: This study explores the possibility of achieving a walking experience with robots that feels as natural and considerate as human walking interactions. For instance, in a blind marathon, interactions mediated by a tether allow participants to gently constrain each other while maintaining their agency without force. This kind of relationship, where both parties can express their agency, is thought to align with the concept of conviviality. Building on this idea, this study proposes a robot, I-to-Te, that walks alongside a human via a tether, and discusses the nature of convivial relationships. We conducted an experiment where participants walked with I-to-Te under three conditions: Human-Agency, Robot-Agency, and Mutual-Agency conditions. The results revealed that under the Mutual-Agency condition, both the human and the robot maintained their agency, leading to increased likability of the robot and enhanced satisfaction of the interaction.
|
|
12:50-15:10, Paper WeLBR.35 | Add to My Program |
Adoption of AIBO at Home of an Elderly Couple: A Qualitative Case Report |
|
Kasuga, Haruka (Hokkaido University), Kasuga, Yuichiro (Hokkaido University) |
Keywords: Robot Companions and Social Robots
Abstract: In aging societies, measures are required to mitigate social security and caregiving burdens on younger generations while promoting the physical and mental well-being of older adults. Although companion animals have demonstrated health benefits for the elderly, care burden concerns often deter adoption. This study explores the challenges the elderly face in adopting companion robots and the functions they utilize often. We conducted a field study from April 2, 2020, to March 27, 2025, in the household of an elderly couple with AIBO, a dog-like robot. Leveraging interviews, photographic and video records, and AIBO’s health checkups, we discovered that i) AIBO facilitated conversations between the elderly couple; ii) they perceived AIBO as a living pet; iii) its presence reduced feelings of loneliness; iv) it required minimal care; and v) its surveillance functionality was not perceived as intrusive. The couple’s son also viewed AIBO positively. Notably, AIBO coexisted peacefully with a resident cat, highlighting its advantage over live animals in multi-pet households. However, the couple showed no interest in using AIBO’s linked app or participating in owner meetups, indicating the limited appeal of these functions among older adults.
|
|
12:50-15:10, Paper WeLBR.36 | Add to My Program |
Modeling Physical Perception in Virtual Interactions |
|
Chase, Elyse (Rice University), O'Malley, Marcia (Rice University) |
Keywords: Cognitive Skills and Mental Models, Multi-modal Situation Awareness and Spatial Cognition, Monitoring of Behaviour and Internal States of Humans
Abstract: Humans can interact effectively with complicated environments, seamlessly taking actions to learn about the objects around them and build individual cognitive world models. If robots of the future are to easily collaborate with humans on tasks in a range of dynamic environments, those robots must be able to learn from human interaction and understand personalized mental models in near real-time. These interactions are inherently multisensory, leading to layers of complexity. As a step towards understanding multisensory human mental models from interactions, we gathered pilot data from interactions and probed density judgments in virtual reality with pseudohaptic illusions. We then implemented a particle filtering workflow to estimate each individual's mental model. Future work could expand this to consider more sensory information in different tasks.
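A minimal particle-filtering sketch in the spirit described, assuming a one-dimensional density belief and Gaussian judgment noise (both assumptions):

```python
# Hedged sketch: estimate a user's believed density from noisy judgments.
import numpy as np

rng = np.random.default_rng(6)
n_particles, true_density = 1000, 2.4

particles = rng.uniform(0.5, 5.0, n_particles)   # candidate believed densities
weights = np.ones(n_particles) / n_particles

for _ in range(20):
    judgment = true_density + rng.normal(0, 0.4)  # one noisy user response
    # Reweight by the Gaussian likelihood of the observed judgment.
    weights *= np.exp(-0.5 * ((judgment - particles) / 0.4) ** 2)
    weights /= weights.sum()
    # Resample when the effective sample size collapses.
    if 1.0 / (weights ** 2).sum() < n_particles / 2:
        idx = rng.choice(n_particles, n_particles, p=weights)
        particles = particles[idx] + rng.normal(0, 0.05, n_particles)
        weights[:] = 1.0 / n_particles

print("estimated believed density:", np.average(particles, weights=weights))
```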
|
|
12:50-15:10, Paper WeLBR.37 | Add to My Program |
CabinetBot: A Context and Intention-Aware Robotic Cabinet System for Supporting Object Retrieval, Organization, and Storage |
|
Lee, Hansoo (Korea Institute of Science and Technology), Lee, Taewoon (Intelligence and Interaction Center, Korea Institute of Science and Technology), Lee, Jeongmin (Korea Institute of Science and Technology), Kwak, Sonya Sona (Korea Institute of Science and Technology (KIST)) |
Keywords: Assistive Robotics, Detecting and Understanding Human Activity, Multi-modal Situation Awareness and Spatial Cognition
Abstract: Managing personal belongings, such as retrieving, organizing, and storing objects, is a cognitively demanding task in daily life, especially for individuals with physical or mental limitations. We present CabinetBot, a context and intention-aware robotic cabinet system that supports object management through multimodal sensing and interaction. The system integrates computer vision, hand-object interaction recognition, and large language models (LLMs) to proactively detect user behavior and respond through automated drawer actuation. In our evaluation, CabinetBot demonstrated both high object detection and action recognition accuracy (over 90%) and reliable understanding of user voice commands. This work highlights a human-centered approach to robotic assistance in everyday environments by enabling natural, adaptive support for object management-related tasks.
|
|
12:50-15:10, Paper WeLBR.38 | Add to My Program |
A VR-Based Movement Training System with Real-Time Physical Load Feedback |
|
Iwami, Kouichi (Tamagawa-University), Inamura, Tetsunari (Tamagawa University) |
Keywords: Human Factors and Ergonomics, Monitoring of Behaviour and Internal States of Humans
Abstract: We present a VR-based movement training system that provides real-time feedback on physical load to support safe and adaptive skill acquisition. The system integrates two platforms: SIGVerse, a VR interaction environment for full-body avatar control, and DhaibaWorks, a biomechanics simulator that estimates joint torques based on user-specific body models. The system adapts to each user's physical characteristics, such as height and weight, allowing for personalized feedback that reflects individual body structure. The real-time integration of these platforms enables users to visualize internal states, such as joint torque, together with postural alignment through multiple feedback modalities. In this study, we focus on a simulated lifting task, where participants train without handling real objects. This pre-training setup allows users to rehearse safe and efficient body mechanics before engaging with physical loads—an important consideration in environments where early exposure to heavy objects may pose safety risks. Initial results indicate that real-time visualization of physical effort improves participants' subjective understanding; however, additional studies are required to determine whether combining multiple visual feedback modalities yields synergistic benefits. The proposed approach advances human-centered interactive system design. It may guide future applications such as exoskeleton training, in which force-based feedback is employed not only to assist movement but also to steer users toward optimal force-generation strategies.
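The joint-torque feedback can be illustrated with a static two-link estimate of the kind such a system might display; the link lengths and planar simplification are assumptions, not DhaibaWorks' body model:

```python
# Hedged sketch: static shoulder/elbow torques for a planar two-link arm
# holding a load (parameters are illustrative).
import numpy as np

def static_arm_torques(q_shoulder, q_elbow, load_kg,
                       upper=0.3, fore=0.25, g=9.81):
    """Torques (N·m) at shoulder and elbow from a hand-held load."""
    # Horizontal distances from each joint to the hand.
    x_elbow = fore * np.cos(q_shoulder + q_elbow)
    x_shoulder = upper * np.cos(q_shoulder) + x_elbow
    f = load_kg * g  # weight of the load
    return f * x_shoulder, f * x_elbow

tau_s, tau_e = static_arm_torques(np.deg2rad(30), np.deg2rad(45), load_kg=5)
print(f"shoulder {tau_s:.1f} N·m, elbow {tau_e:.1f} N·m")
```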
|
|
12:50-15:10, Paper WeLBR.39 | Add to My Program |
Creating and Evaluating a Centralized Augmented Reality MAV Path Planning Interface |
|
Ontiveros Rodriguez, Joe (ARISE Laboratory), Sharma, Janamejay (University of Denver), Haring, Kerstin Sophie (University of Denver), Reardon, Christopher M. (MITRE) |
Keywords: Virtual and Augmented Tele-presence Environments, User-centered Design of Robots, Degrees of Autonomy and Teleoperation
Abstract: With the increasing prevalence of autonomous micro-aerial vehicles (MAVs), their management, especially in the realm of path planning, has become a challenge for individual operators. Previous research explored augmented reality interfaces for MAV control and path planning, focusing primarily on manual waypoint placement and digital twin representations. However, integrating autonomous waypoint placement and newer autonomy-focused interaction methods could improve operator multitasking ability and performance, and reduce overall cognitive load. This paper seeks to identify the effectiveness of autonomous waypoint placement in an AR context compared to traditional manual placement.
|
|
12:50-15:10, Paper WeLBR.40 | Add to My Program |
Toward Intuitive and Adaptive Robot Command Systems: A Comparative Study Using Generative AI and Bodystorming |
|
Park, Soobin (Intelligent and Interactive Robotics, Korea Institute of Science), Lee, Hansoo (Korea Institute of Science and Technology), Seo, Changhee (Intelligence and Interaction Research Center, Korea Institute Of), Kim, Doik (KIST), Kwak, Sonya Sona (Korea Institute of Science and Technology (KIST)) |
Keywords: User-centered Design of Robots, Assistive Robotics
Abstract: As robotic technologies become increasingly integrated into everyday life, there is a growing need for intuitive natural language command systems that allow general users to easily control robots without specialized knowledge. This study investigates how such commands are actually constructed through two comparative experiments: one using a generative AI-based simulation, and the other using a bodystorming-based participatory session with a human surrogate. In particular, in the bodystorming experiment, a human acted in place of a humanoid robot to allow observation of real-time interactions and user expressions. Participants issued and refined commands using a generative AI model (ChatGPT-4o) in the generative AI-based simulation experiment, and employed both verbal and non-verbal expressions in the bodystorming session. Quantitative and qualitative analyses revealed that the generative AI struggled to interpret context-dependent commands, requiring users to overly formalize their expressions. In contrast, the human surrogate in the bodystorming method understood even loosely structured commands and allowed participants to construct more concise, intuitive, and adaptive instructions, especially when non-verbal behaviors were included. This study provides empirical insights for developing user-friendly robot command interfaces and natural, adaptable command systems that better reflect users’ intentions in everyday contexts.
|
|
12:50-15:10, Paper WeLBR.41 | Add to My Program |
Guiding Visual Attention through Predictive Robot Eyes |
|
Naendrup-Poell, Lara (Technical University Berlin), Onnasch, Linda (Technische Universität Berlin) |
Keywords: Non-verbal Cues and Expressiveness, Cooperation and Collaboration in Human-Robot Teams, HRI and Collaboration in Manufacturing Environments
Abstract: A key factor in successful human-robot interaction (HRI) is the predictability of a robot’s actions. Visual cues, such as eyes or arrows, can serve as directional indicators to enhance predictability, potentially improving performance and increasing trust. This laboratory study investigated the effects of predictive cues on performance, trust, and visual attention allocation in an industrial HRI setting. Using a 3 (predictive cues: abstract anthropomorphic eyes, directional arrows, no cues) × 3 (experience in three experimental blocks) mixed design, 42 participants were tasked with predicting a robot's movement target as quickly as possible. Contrary to our expectations, predictive cues did not significantly affect trust or prediction performance. However, eye-tracking revealed that participants exposed to anthropomorphic eyes identified the target earlier than those without cues. Interestingly, participants' self-reports showed infrequent use of the cues as directional guidance. Still, greater cue usage, as indicated by fixation data, was associated with faster predictions, suggesting that predictive cues, particularly anthropomorphic ones, guide visual attention and may improve efficiency.
|
|
12:50-15:10, Paper WeLBR.42 | Add to My Program |
Effects of Robot Expressing Achievement on Trust Dynamics |
|
Maehigashi, Akihiro (Shizuoka University), Kubo, Kenta (Mazda Motor Corp), Yamada, Seiji (National Institute of Informatics) |
Keywords: Cooperation and Collaboration in Human-Robot Teams, Non-verbal Cues and Expressiveness
Abstract: This study investigated the effects of robot motion in expressing achievement on trust dynamics in human-robot interaction (HRI). We conducted two experiments. In Experiment 1, we examined how various motion patterns induce different affective states in humans and classified them using the circumplex model of affect. Using the classifications of motion patterns, we conducted Experiment 2 to examine the effects of robot motion in expressing achievement on trust dynamics in HRI. The results indicated that a robot’s constant motion, which induced low-arousal and neutral pleasant emotions, decreased emotional trust in and reliance on the robot more than when it was motionless. The results highlight the importance of robot motion patterns in shaping trust in HRI.
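For readers unfamiliar with the circumplex model of affect used in Experiment 1, the toy Python function below shows the basic classification idea: an induced affective state is assigned to one of four quadrants by the signs of its valence and arousal. The zero thresholds and quadrant labels are illustrative; the paper classifies motion patterns empirically.

def circumplex_quadrant(valence: float, arousal: float) -> str:
    # Place an affective state on the circumplex model: valence spans
    # unpleasant (negative) to pleasant (positive), arousal low to high.
    if valence >= 0:
        return "high-arousal pleasant" if arousal >= 0 else "low-arousal pleasant"
    return "high-arousal unpleasant" if arousal >= 0 else "low-arousal unpleasant"

if __name__ == "__main__":
    # e.g., a slow constant motion rated mildly pleasant with low arousal
    print(circumplex_quadrant(valence=0.2, arousal=-0.6))  # low-arousal pleasant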
|
|
12:50-15:10, Paper WeLBR.43 | Add to My Program |
Socially Assistive Robot Hyodol for Depressive Symptoms of Older Adults in Medically Underserved Areas: A Preliminary Study |
|
Jung, Han Wool (Yongin Severance Hospital), Kim, Yujin (Yongin Severance Hospital), Kim, Hyojung (Yongin Severance Hospital), Kim, Min-kyeong (Yongin Severance Hospital), Lee, Hyejung (Yongin Severance Hospital), Park, Jin Young (Yongin Severance Hospital), Kim, Woo Jung (Yongin Severance Hospital), Kim, Jihee (Hyodol Co. Ltd), Do, Gangho (Digital Medic Co. Ltd), Park, Sehwan (Digital Medic Co. Ltd), Choi, Young-seop (Hyodol Co. Ltd), Park, Jaesub (Yongin Severance Hospital) |
Keywords: Robot Companions and Social Robots, Applications of Social Robots, Assistive Robotics
Abstract: Socially assistive robots are effective for elderly care if they employ personalization, person-centered principles, rich interactions, and careful role setting and psychosocial alignment. Hyodol is a socially assistive robot for elderly people that has the persona of a grandchild and mimics the relationship between grandparents and grandchildren. Based on the principles of behavioral activation and a human-centered approach, the robot provides continuous care for users’ emotional well-being, health management, and daily routines. The current study aims to evaluate the effect of Hyodol on depressive symptoms and other factors related to quality of life among older adults living in medically underserved areas. A total of 278 participants were assessed for depressive symptoms, loneliness, medication adherence, and user acceptance. After six months of use, participants’ overall depressive symptoms were significantly reduced, with the proportion of individuals categorized as high-risk for depression decreasing by 45%. Significant improvements were also observed in loneliness and medication adherence. Moreover, participants reported high levels of user acceptance and satisfaction, exceeding 70% of the total possible score. These results highlight Hyodol's potential as a valuable tool for supporting mental healthcare and overall well-being among older adults in medically underserved areas.
|
|
12:50-15:10, Paper WeLBR.44 | Add to My Program |
Do Androids Dream of Ethical Decisions? A Research Plan on Robot Influence in Ethical Decision-Making |
|
Matarese, Marco (Italian Institute of Technology), Guerrieri, Vittorio (University of Genoa), Kahya, Rabiya (KTO Karatay University), Rea, Francesco (Istituto Italiano Di Tecnologia), Sciutti, Alessandra (Italian Institute of Technology) |
Keywords: Ethical Issues in Human-robot Interaction Research, Creating Human-Robot Relationships, Philosophical Issues in Human-Robot Coexistence
Abstract: As robots become increasingly integrated into people's everyday lives, they may be required to engage in moral reasoning. Hence, we cannot ignore artificial agents' potential influence on people's ethical decision-making (EDM). In EDM, individuals use their own moral principles and conscience to solve dilemmas, which can be addressed in several ways. However, social influence still plays a role in such problems. For this reason, we aim to address the problem of robots' influence during EDM. First, we present results from a preliminary study in which participants were merely exposed to a robot's EDM. Furthermore, we describe our research plan to extend the work by including AI-generated justifications for the ethical decisions. Finally, we describe how we plan to use these justifications in a between-subjects user study to investigate whether robots perceived as competent are more persuasive than warm ones in EDM.
|
|
12:50-15:10, Paper WeLBR.45 | Add to My Program |
Empirical Evaluation of Healthcare Communication Robot Encouraging Self-Disclosure of Chronic Pain |
|
Shimada, Airi (Keio University), Takashio, Kazunori (Keio University) |
Keywords: Assistive Robotics, Monitoring of Behaviour and Internal States of Humans, Robots in Education, Therapy and Rehabilitation
Abstract: Because pain is a subjective sensation, self-disclosure is essential for communicating it to a third party. However, many elderly people, especially those with chronic pain, are hesitant to communicate their pain. As a result, many patients do not receive appropriate treatment at the right time. The ultimate goal of this study is to create a robot for people with chronic pain that detects the user’s discomfort through multiple modalities in daily interactions and reports the recorded information to a hospital or family if necessary. In this paper, we implemented a system that detects discomfort based on the user’s verbal expressions of pain and the action of rubbing, and asks detailed questions about the pain. We conducted a demonstration experiment with patients at Nichinan Hospital, and the content of the dialogue was evaluated by a physical therapist. The proposed method received significantly higher ratings for the naturalness of the conversation, the ease of use of the system, and the length of the conversation. The physical therapist’s evaluation suggested that the ability of the dialogue system to "detect" the user’s discomfort or abnormalities had a positive effect on facilitating pain communication and encouraging self-disclosure.
|
|
12:50-15:10, Paper WeLBR.46 | Add to My Program |
The Uncanny Valley in Virtual Reality: The Role of Virtual Agents’ Human-Like Appearance in Eeriness and Implicit Behaviours |
|
Barinal, Badel (Behavioural Science Institute, Radboud University), Heyselaar, Evelien (Behavioural Science Institute, Radboud University), Müller, Barbara (Behavioural Science Institute, Radboud University) |
Keywords: Anthropomorphic Robots and Virtual Humans, Social Presence for Robots and Virtual Humans, Evaluation Methods
Abstract: Advances in Virtual Reality (VR) have enabled the creation of increasingly realistic virtual agents, raising questions about how varying levels of human-like appearance influence user experience. Based on the uncanny valley hypothesis, this study investigated how three appearance conditions, i.e., mechanical robot, humanoid robot, and virtual human, affect feelings of eeriness and implicit behavioural responses in VR. Ninety-five participants completed an approach task with virtual agents, during which their minimum interpersonal distance and approach speed were continuously recorded. Subjective ratings collected after the task revealed an uncanny valley pattern: humanoid robots were perceived as significantly more eerie than the overall mean, whereas virtual humans were rated significantly less eerie. Behavioural analyses using linear mixed-effects models showed no significant differences in minimum distance or approach speed across appearance conditions. Moreover, no moderation by perceived agency or experience was observed for any of the variables. Approach speed significantly increased over time, suggesting habituation, but a similar pattern was not found for minimum distance. These findings extend theoretical understanding of the uncanny valley to three-dimensional embodied agents in immersive settings and offer practical insights for the design of virtual agents.
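For readers who want to run a comparable behavioural analysis, the Python sketch below fits a random-intercept linear mixed-effects model of minimum interpersonal distance on agent appearance with statsmodels, using synthetic data in place of the study's recordings. The variable names, participant count, and effect structure are assumptions for illustration only.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
conditions = ["mechanical", "humanoid", "virtual_human"]

# Synthetic repeated-measures data: each participant approaches each agent type.
rows = []
for pid in range(30):
    subject_offset = rng.normal(0, 0.1)   # random participant intercept
    for cond in conditions:
        rows.append({"participant": pid,
                     "condition": cond,
                     "min_distance": 0.9 + subject_offset + rng.normal(0, 0.15)})
df = pd.DataFrame(rows)

# Fixed effect of agent appearance, random intercept per participant.
model = smf.mixedlm("min_distance ~ C(condition)", data=df, groups=df["participant"])
print(model.fit().summary())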
|
|
12:50-15:10, Paper WeLBR.47 | Add to My Program |
Heirloom Table: Exploring Conversational Robots for Supporting Social Relationships for People with Dementia |
|
Raja, Adhityan (Eindhoven University of Technology), Khot, Rucha (Eindhoven University of Technology), van Marle, Diede (Eindhoven University of Technology), Schaefer, Peter (Eindhoven University of Technology), Fischer, Joel (University of Nottingham), Lee, Minha (Eindhoven University of Technology) |
Keywords: Anthropomorphic Robots and Virtual Humans, Robot Companions and Social Robots, Embodiment, Empathy and Intersubjectivity
Abstract: This study investigates how an AI-driven conversational robot can contribute to the act of remembering, particularly in the context of reminiscence for people with dementia. We introduce the Heirloom Table, a conversational robot prototype augmented with voice that converses and reminisces about its past owners, facilitating intergenerational memory sharing. We explored how the robot could support recollection, learning, familiarization, and social engagement in a study with duos: people with dementia and an acquaintance. Our findings suggest that the robot could structure personal histories into engaging narratives, prompting deeper reflection and discussion. Overall, participants valued the table as a conversational catalyst, but highlighted concerns around personalization, trust, and privacy. The study underscores the potential of anthropomorphized AI agents in reminiscence therapy, potentially strengthening interdependence between vulnerable people and members of their care network.
|
|
12:50-15:10, Paper WeLBR.48 | Add to My Program |
Grounding Word Meaning through Perception: Toward Compositional Language Understanding in Human-Robot Interaction |
|
Shaukat, Saima (University of Plymouth), Aly, Amir (University of Plymouth), Wennekers, Thomas (University of Plymouth), Cangelosi, Angelo (University of Manchester) |
Keywords: Linguistic Communication and Dialogue, Multimodal Interaction and Conversational Skills, Machine Learning and Adaptation
Abstract: For autonomous robots to interact naturally with humans, they must develop language understanding capabilities that connect linguistic expressions to multimodal perception. A key challenge arises when robots encounter lexical variations such as synonyms or novel phrases not observed during training. In this ongoing work, we present a multimodal word grounding framework that systematically integrates linguistic structures—including word indices, parts-of-speech tags, semantic word embeddings, and large language model representations—with perceptual features extracted from sensory data, including object geometry, color, and spatial positioning (centroids), where spatial relationships are learned through our Bayesian grounding model. We evaluate five experimental cases and demonstrate improved synonym generalization using semantic embeddings. While this framework effectively grounds individual words, it is limited to single-word grounding and cannot handle more complex linguistic structures such as phrases or full sentences. Therefore, we discuss extending the framework toward compositional language understanding, from the word to phrase to sentence levels, aiming to enable robots to build linguistic knowledge in an unsupervised bottom-up manner. This work contributes to advancing robot language understanding and generalization for natural human–robot interaction in dynamic environments.
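To give a flavour of the Bayesian grounding step, the minimal Python sketch below grounds a colour word in a scene by scoring candidate objects under a per-word Gaussian over a single perceptual feature, assuming a uniform prior so the posterior is proportional to the likelihood. The word models, the one-dimensional hue feature, and all values are illustrative assumptions; the paper learns such distributions from paired linguistic and sensory data.

import math

# Assumed per-word Gaussian models (mean, std) over a 1-D hue feature.
WORD_MODELS = {
    "red":  (0.00, 0.05),
    "blue": (0.60, 0.05),
}

def log_likelihood(x: float, mean: float, std: float) -> float:
    # Log density of a Gaussian, used to score word-percept fit.
    return -0.5 * ((x - mean) / std) ** 2 - math.log(std * math.sqrt(2 * math.pi))

def ground(word: str, candidates: dict[str, float]) -> str:
    # With a uniform prior over candidates, the posterior is proportional
    # to the likelihood, so pick the best-scoring object.
    mean, std = WORD_MODELS[word]
    return max(candidates, key=lambda obj: log_likelihood(candidates[obj], mean, std))

if __name__ == "__main__":
    scene = {"cup": 0.02, "ball": 0.58}   # hue features from perception
    print(ground("red", scene))    # -> cup
    print(ground("blue", scene))   # -> ball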
|
|
12:50-15:10, Paper WeLBR.49 | Add to My Program |
CoHEXist: Evaluating Three Interaction Strategies for a Holistic View of Human-Mobile Robot Coexistence |
|
Niessen, Nicolas (Technical University of Munich) |
Keywords: Evaluation Methods, Human Factors and Ergonomics, HRI and Collaboration in Manufacturing Environments
Abstract: This paper presents an evaluation of three different interaction strategies for autonomous mobile robots, focusing on their impact on efficiency, physical safety, and perceived safety in human-robot coexistence scenarios. We utilize the novel CoHEXist test setup to facilitate natural and quantifiable human-robot encounters in open spaces. Our findings indicate that a swift driving style significantly affects physical safety, while communication through trajectory projection shows no significant effect on the evaluated metrics. The study highlights the importance of considering order effects and the potential of motion data for more precise future analyses.
|
|
12:50-15:10, Paper WeLBR.50 | Add to My Program |
Fostering Human-Robot Teams in the Care for Older Adults |
|
Balalic, Sanja (Hanze University of Applied Sciences), van Doorn, Jenny (University of Groningen, the Netherlands) |
|
|
12:50-15:10, Paper WeLBR.51 | Add to My Program |
Designing Engagement-Based Adaptive Proactive Behaviors in Mobile Social Robots on User Experience |
|
Kwon, Minji (UNIST), Sung, Minjae (Ulsan National Institute of Science and Technology), Lee, Hui Sung (UNIST (Ulsan National Institute of Science and Technology)) |
Keywords: Assistive Robotics, Degrees of Autonomy and Teleoperation, Evaluation Methods
Abstract: This paper investigates how mobile social robots can enhance user experience through adaptive, proactive behaviors based on user engagement. We propose a behavioral model that modulates the robot’s proactivity level according to task involvement and compare it with fixed proactive and reactive behaviors in a user study. Results indicate that adaptively proactive robots provide more appropriate and useful assistance while better preserving user autonomy. These findings underscore that designing adaptive proactive behavior for mobile robots should take into account both the user’s ability related to the task and the type of assistance being provided, whether verbal or physical.
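A toy version of the engagement-based modulation idea is sketched below in Python: the robot's proactivity level is selected from the user's estimated task involvement, stepping from purely reactive behaviour through verbal offers to unprompted physical assistance. The thresholds and level names are illustrative assumptions, not the paper's behavioral model.

def proactivity_level(task_involvement: float) -> str:
    # The more involved the user is in their own task,
    # the less the robot interrupts.
    if task_involvement > 0.7:
        return "reactive"   # wait for an explicit request
    if task_involvement > 0.3:
        return "offer"      # verbally offer assistance
    return "act"            # physically assist without being asked

if __name__ == "__main__":
    for involvement in (0.9, 0.5, 0.1):
        print(involvement, "->", proactivity_level(involvement))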
|
| |