Last updated on August 21, 2023. This conference program is tentative and subject to change.
Technical Program for Thursday, August 31, 2023
ThAT1
Room T1
Cognition & Assistive Robots
Special Session
Chair: Ayub, Ali | University of Waterloo
Co-Chair: Holthaus, Patrick | University of Hertfordshire

10:30-10:40, Paper ThAT1.1
Optometrist’s Algorithm for Personalizing Robot-Human Handovers (I)
Gupte, Vivek (Birla Institute of Technology and Science - Pilani, Goa, India), Suissa, Dan Rouven (Ben-Gurion University of the Negev), Edan, Yael (Ben-Gurion University of the Negev)
Keywords: Assistive Robotics, Cooperation and Collaboration in Human-Robot Teams, Machine Learning and Adaptation
Abstract: With an increasing interest in human-robot collaboration, there is a need to develop robot behaviour while keeping the human user’s preferences in mind. Highly skilled human users doing delicate tasks require their robot partners to behave according to their own habits as well as task constraints. To achieve this, we present the use of the Optometrist’s Algorithm (OA) to interactively and intuitively personalize robot-human handovers. Using this algorithm, we tune controller parameters for speed, location and effort. We study the differences in the fluency of the handovers before and after tuning, as well as the subjective perception of this process, in a study evaluating the OA with N = 30 non-expert users of mixed backgrounds. The users rate the interaction on common trust, safety and workload scales, amongst other measures. They rate our tuning process as engaging and easy to use. The personalization leads to an increase in the fluency of the interaction. We observe that the average user prefers a quick, close and effortless handover. The participants are most sensitive to the speed of the robot.
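The abstract does not include an implementation, but the core loop of an Optometrist's-style tuning procedure can be sketched as pairwise preference queries over one parameter at a time. All parameter names, ranges, and step sizes below are illustrative assumptions, and ask_user is a stub for executing two candidate handovers and asking "A or B?"; this is a sketch, not the authors' code.

```python
import random

# Illustrative handover controller parameters from the abstract: speed,
# location (handover distance), and effort. Values/ranges are assumptions.
params = {"speed": 0.5, "distance": 0.6, "effort": 0.5}
step = {"speed": 0.1, "distance": 0.05, "effort": 0.1}

def ask_user(option_a: dict, option_b: dict) -> dict:
    """Stub: execute both candidate handovers and ask 'A or B?' (here: random)."""
    return random.choice([option_a, option_b])

# Optometrist's-style loop: perturb one parameter at a time and keep
# whichever of the two candidate settings the user prefers.
for _ in range(3):                      # a few refinement passes
    for name in params:
        a, b = dict(params), dict(params)
        a[name] = max(0.0, params[name] - step[name])
        b[name] = min(1.0, params[name] + step[name])
        params = ask_user(a, b)         # user picks the better-feeling handover
        step[name] *= 0.7               # shrink step as preferences converge

print("personalized handover parameters:", params)
```

Shrinking the step size mirrors how an optometrist narrows lens choices as the patient's answers converge.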

10:40-10:50, Paper ThAT1.2
A Case of Identity: Enacting Robot Identity with Belief Propagation for Decentralized Multi-Agent Task Allocation (I)
Berry, Jasmine (University of Michigan), Olson, Elizabeth (University of Michigan), Gilbert, Alia (University of Michigan), Jenkins, Odest Chadwicke (University of Michigan)
Keywords: Cognitive Skills and Mental Models, Cooperation and Collaboration in Human-Robot Teams, Embodiment, Empathy and Intersubjectivity
Abstract: Advancements in autonomous agents have led to an increasingly ubiquitous presence of robots in human environments where social and physical interaction is expected. Such environments are often composed of heterogeneous agents with disparate action capabilities, intentions, and motivations. Intra- and inter-agent dissimilarity often prevents enacting effective behavioral skills (e.g., collaboration, communication, coordination) towards dynamic task allocation objectives. We propose a Bayesian probabilistic inference approach, Multi-Robot Belief Propagation with Identity constraints (MRBP-I), for 1) decentralized task allocation in multi-agent systems and 2) modeling task affinity using personal identities. MRBP-I leverages competing costs of individual and group capabilities that result in less error-prone convergence to steady state, scalability without loss of accuracy, and sensitivity to environmental dynamics. An implementation of MRBP-I as a distributed algorithm that weighs factors of both individual and cooperative perception in an energy-minimizing task allocation scheme is presented.

10:50-11:00, Paper ThAT1.3
Bio-Inspired Cognitive Decision-Making to Personalize the Interaction and the Selection of Exercises of Social Assistive Robots in Elderly Care (I)
Maroto-Gómez, Marcos (Universidad Carlos III De Madrid), Carrasco-Martínez, Sara (Universidad Carlos III De Madrid), Marques Villarroya, Sara (Universidad Carlos III of Madrid), Malfaz, Maria (Universidad Carlos III De Madrid), Castro González, Álvaro (Universidad Carlos III De Madrid), Salichs, Miguel A. (University Carlos III of Madrid)
Keywords: Robots in Education, Therapy and Rehabilitation, Computational Architectures, Assistive Robotics
Abstract: Socially assistive robots in healthcare have reported positive results in recent years, for example, in reducing the impact of mild cognitive impairment in older adults. The lack of a qualified workforce and the increase in the older adult population in developed countries have encouraged designers to develop socially assistive robots that operate autonomously by bringing in cognitive and decision-making methods to facilitate the caregivers' tasks, select the most appropriate activities, and personalize the interaction. This paper presents the development of a cognitive, human-inspired decision-making system for autonomous social assistive robots that manages the personalized selection of exercises in cognitive stimulation and provides affective support to users. The decision-making system receives inputs from the robot's perceptions, user information stored in the robot's memory, events in an agenda, and information from a bio-inspired module. These inputs generate autonomous decisions that drive the robot's behavior depending on each situation. We show the system's capacity, integrated into our Mini social robot, to adapt the interaction, select tailored exercises based on the user's features, and execute exercises previously programmed by a caregiver to alleviate cognitive deterioration and accompany older people. In addition, the system generates natural robot behavior based on biologically inspired methods to personalize activities, engage the user, and increase the number of robot services.

11:00-11:10, Paper ThAT1.4
A Personalized Household Assistive Robot That Learns and Creates New Breakfast Options through Human-Robot Interaction (I)
Ayub, Ali (University of Waterloo), Nehaniv, Chrystopher (University of Waterloo), Dautenhahn, Kerstin (University of Waterloo)
Keywords: Computational Architectures, Assistive Robotics
Abstract: For robots to assist users with household tasks, they must first learn about the tasks from the users. Further, performing the same task every day, in the same way, can become boring for the robot's user(s); therefore, assistive robots must find creative ways to perform tasks in the household. In this paper, we present a cognitive architecture for a household assistive robot that can learn personalized breakfast options from its users and then use the learned knowledge to set up a table for breakfast. The architecture can also use the learned knowledge to create new breakfast options over a longer period of time. The proposed cognitive architecture combines state-of-the-art perceptual learning algorithms, computational implementations of cognitive models of memory encoding and learning, a task planner for picking and placing objects in the household, a graphical user interface (GUI) to interact with the user, and a novel approach for creating new breakfast options using the learned knowledge. The architecture is integrated with the Fetch mobile manipulator robot and validated as a proof-of-concept system evaluation in a large indoor environment with multiple kitchen objects. Experimental results demonstrate the effectiveness of our architecture in learning personalized breakfast options from the user and generating new breakfast options never learned by the robot.

11:10-11:20, Paper ThAT1.5
Evaluation of a Multimodal Sensory Feedback Device for Displaying Proprioceptive Data from a Robotic Grasper
Molina, Alicia (Georgia Tech Student Center), Kelly, Erin (Georgia Institute of Technology), Majditehran, Houriyeh (Georgia Institute of Technology), Hammond III, Frank L. (Georgia Institute of Technology)
Keywords: Assistive Robotics, Multimodal Interaction and Conversational Skills, Cognitive and Sensorimotor Development
Abstract: A wearable multimodal sensory feedback device (SFD) was developed to communicate proprioceptive information from a robotic grasper to the operator’s forearm. The robotic grasper could pick up objects through a pinching motion and was composed of two fingers, each able to open and close in a mirror of the other. The aperture of each of the grasper’s fingers was mapped onto the SFD, which uses two separate actuators that induce skin stretch and vibrotactile stimuli to communicate this information to the user. This paper evaluates the extent to which the SFD can effectively communicate the grasper’s proprioceptive information using skin-stretch feedback that is maintained with vibrations. Subjects were asked to manipulate the grasper blindly, guided by their natural proprioceptive feedback only (NO), their natural proprioceptive feedback and skin-stretch feedback (NS), skin-stretch feedback only (SS), and skin-stretch and vibrotactile stimulation (SV). The experimental results indicate that the SFD can effectively communicate proprioceptive sensory information and enhance the body’s natural proprioceptive sense.

11:20-11:30, Paper ThAT1.6
Developing Adaptive, Personalised, Autonomous Social Robots Using Physiological Signals: System Development and a Pilot Study
Chandra, Shruti (University of Waterloo), Sharma, Isha (University of Waterloo), Schnapp, Benjamin David (University of Waterloo), Dixon, Michael (University of Waterloo), Dautenhahn, Kerstin (University of Waterloo)
Keywords: Assistive Robotics, Monitoring of Behaviour and Internal States of Humans, Applications of Social Robots
Abstract: Maintaining physical, emotional and psychological health is vital for well-being. Social robots have been increasingly used in healthcare to support physical and mental health. Providing appropriate, adaptive and personalised feedback based on the user's internal states is crucial for effective and engaging human-robot interaction, especially in one-to-one interaction scenarios. In this research, we developed an adaptive and autonomous system, integrating a social robot, a wearable non-intrusive Polar chest sensor and algorithms to guide people in three application scenarios to promote physical, emotional, and psychological well-being. The social robot senses users' psychophysiological measures such as heart rate and heart-rate variability via the wearable sensor, monitors their stress responses, provides real-time feedback and guides them to perform activities. We detail the system development and a pilot study with fifteen participants to evaluate the system in the three scenarios. The findings suggest that the autonomous system could effectively guide participants through the activities by regulating their stress responses. Participants' physiological data also support these results. Moreover, the system was well-accepted by its users.
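As context for the physiological pipeline described above: heart-rate variability is commonly summarized as RMSSD over successive RR intervals from a chest-strap sensor. The sketch below is a generic illustration of that computation with an assumed stress threshold and an invented adaptation rule; the paper's actual algorithms and thresholds are not specified in the abstract.

```python
import math

def rmssd(rr_intervals_ms):
    """Root mean square of successive RR-interval differences (a standard
    short-term HRV measure; lower values often accompany higher stress)."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Hypothetical RR intervals (ms) streamed from a chest-strap sensor.
rr = [812, 798, 805, 840, 822, 810, 795, 801]
hr = 60000.0 / (sum(rr) / len(rr))       # mean heart rate in bpm

# Illustrative rule: flag a stress response and adapt the robot's feedback.
if rmssd(rr) < 30.0:                     # threshold is an assumption
    print(f"HR {hr:.0f} bpm, low HRV -> robot slows pace, offers breathing cue")
else:
    print(f"HR {hr:.0f} bpm, HRV normal -> robot continues the activity")
```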

11:30-11:40, Paper ThAT1.7
Realizing an Assist-As-Needed Robotic Dressing Support System through Analysis of Human Movements and Residual Abilities
Yamasaki, Kakeru (Kyushu Institute of Technology), Kajiwara, Takumi (Kyushu Institute of Technology), Fujita, Wataru (Kyushu Institute of Technology), Shibata, Tomohiro (Kyushu Institute of Technology)
Keywords: Assistive Robotics, Detecting and Understanding Human Activity, Curiosity, Intentionality and Initiative in Interaction
Abstract: This study proposes a robotic dressing assistance system that utilizes residual abilities following the Assist-As-Needed (AAN) principle. Human modeling is crucial for effective robot-assisted dressing, and this study focuses on incorporating the AAN principle into the field of robot-assisted dressing through the backward dressing method. Specifically, arm-swinging movements during the arm-through phase of robotic dressing were analyzed to determine whether a model of the human can be constructed. An experiment was conducted with 20 subjects to characterize the subjects' movements and the state of the robot during dressing. Results indicated periodicity in the movements of some subjects, suggesting the possibility of modeling humans. This study highlights the importance of incorporating the AAN principle in robot-assisted dressing and provides insight into future research aiming to realize this principle.
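One standard way to test movement traces for the kind of periodicity reported here is autocorrelation of a joint-angle signal. The following sketch uses synthetic data and is an assumption about method; the paper's actual analysis pipeline is not given in the abstract.

```python
import numpy as np

# Hypothetical arm-swing trace: shoulder angle sampled at 30 Hz.
fs = 30.0
t = np.arange(0, 10, 1 / fs)
angle = 0.4 * np.sin(2 * np.pi * 0.8 * t) + 0.05 * np.random.randn(t.size)

# Autocorrelation of the zero-mean signal; a strong secondary peak
# indicates a repeating (periodic) movement pattern.
x = angle - angle.mean()
ac = np.correlate(x, x, mode="full")[x.size - 1:]
ac /= ac[0]

lag = np.argmax(ac[int(fs * 0.5):]) + int(fs * 0.5)   # skip near-zero lags
print(f"dominant period ~{lag / fs:.2f} s, autocorrelation {ac[lag]:.2f}")
```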

11:40-11:50, Paper ThAT1.8
Is a Robot Trustworthy Enough to Delegate Your Control?
Shin, Soomin (KIST), Kang, Dahyun (Korea Institute of Science and Technology), Kwak, Sonya Sona (Korea Institute of Science and Technology (KIST))
Keywords: Assistive Robotics, User-centered Design of Robots
Abstract: The aim of our study was to investigate people's preferred interaction with a robot: whether they prefer a robot that offers assistance based on its own judgment or a robot that follows specific verbal commands given by the user. To achieve this, we presented two types of sound-based interfaces. The first interface was a robot that judges the user's needs based on the sound the user makes, without requiring any verbal commands. For example, if the user slaps a table with a pile of paper to organize it, the robot opens the drawer where the stapler is located. The second interface was a robot that follows the user's verbal commands. In this interface, the user specifically asks the robot to find the stapler, and the robot opens the drawer accordingly. Our results showed that people preferred the verbal command interface in terms of usefulness, intelligence, appropriateness, and service evaluation. This indicates that individuals prefer to interact with a robot that follows their specific commands rather than one that infers people’s intentions and reacts accordingly. Our findings suggest that designing robots to follow specific verbal commands may lead to more positive user experiences and perceptions of the robot's intelligence and usefulness.

ThAT2
Room T2
Ethical Issues in Human-Robot Interaction Research
Regular Session
Chair: Kim, Boyoung | George Mason University Korea

10:30-10:40, Paper ThAT2.1
What's at Stake? Robot Explanations Matter for High but Not Low Stake Scenarios
Melsion, Gaspar Isaac (KTH Royal Institute of Technology), Stower, Rebecca (KTH), Winkle, Katie (Uppsala University), Leite, Iolanda (KTH Royal Institute of Technology)
Keywords: Ethical Issues in Human-robot Interaction Research, Machine Learning and Adaptation, Social Intelligence for Robots
Abstract: Although the field of Explainable Artificial Intelligence (XAI) in Human-Robot Interaction is gathering increasing attention, how well different explanations compare across HRI scenarios is still not well understood. We conducted an exploratory online study with 335 participants analysing the interaction between type of explanation (counterfactual, feature-based, and no explanation), the stake of the scenario (high, low) and the application scenario (healthcare, industry). Participants viewed one of 12 different vignettes depicting a combination of these three factors and rated their system understanding and trust in the robot. Compared to no explanation, both counterfactual and feature-based explanations improved system understanding and performance trust (but not moral trust). Additionally, when no explanation was present, high-stake scenarios led to significantly worse performance trust and system understanding. These findings suggest that explanations can be used to calibrate users' perceptions of the robot in high-stake scenarios.

10:40-10:50, Paper ThAT2.2
Ethical Design for Privacy-Related Communication in Human-Robot Interaction
Weng, Yueh-Hsuan (Tohoku University), Francesconi, Enrico (IGSG-CNR)
Keywords: Ethical Issues in Human-robot Interaction Research, User-centered Design of Robots, Robot Companions and Social Robots
Abstract: To realize human-robot co-existence, the impacts of Ethical, Legal, and Social Implications (ELSI) on social robots have received wide attention from the public in recent years. However, the traditional law-centered approach is often questioned due to its inability to overcome the AI Pacing Problem when applied to robot governance. In this paper, we propose an alternative design-centered approach for robot governance called “Ethical Design” via a case study of the humanoid companion robot “LOVOT”, focusing on its privacy-related communication in private spaces and the issue of consent in human-robot interaction.

10:50-11:00, Paper ThAT2.3
The Impact of Different Ethical Frameworks Underlying a Robot's Advice on Charitable Donations
Kim, Boyoung (George Mason University Korea), Wen, Ruchen (Colorado School of Mines), Zhu, Qin (Virginia Tech), Williams, Tom (Colorado School of Mines), Phillips, Elizabeth (George Mason University)
Keywords: Ethical Issues in Human-robot Interaction Research, Linguistic Communication and Dialogue, Applications of Social Robots
Abstract: The current work explored to what extent a robot could persuade people to participate in charitable giving by offering moral advice grounded in different ethical theories. In a laboratory, participants, all university students, first performed a task to acquire lottery tickets and then received from a robot information about a charity event organized for students at their university. The robot also offered them moral advice, the underlying framework of which was grounded in either deontological or Confucian role ethics, to encourage donating their lottery tickets to the event. We found advice grounded in Confucian role ethics to be more effective in inducing donations than advice grounded in deontological ethics. We also found that the more strongly participants felt close to other students at their university, the fewer donations they made after receiving advice grounded in deontological ethics. These findings suggest the benefits of framing robots' moral messages upon theories of Confucian role ethics in promoting prosocial behavior. We discuss potential explanations for the negative relationship between participants' sense of closeness with other students and their donation behavior when the robot's advice draws on deontological ethics.

11:00-11:10, Paper ThAT2.4
Victims and Observers: How Gender, Victimization Experience, and Biases Shape Perceptions of Robot Abuse
Garcia Goo, Hideki (University of Twente), Winkle, Katie (Uppsala University), Williams, Tom (Colorado School of Mines), Strait, Megan (The University of Texas Rio Grande Valley)
Keywords: Ethical Issues in Human-robot Interaction Research, Embodiment, Empathy and Intersubjectivity, Social Intelligence for Robots
Abstract: With the deployment of robots in public realms, researchers are seeing more and more cases of abusive disinhibition towards robots. Because robots embody gendered identities, poor navigation of antisocial dynamics may reinforce or exacerbate gender-based violence. Robots deployed in social settings must recognize and respond to abuse in a way that minimises ethical risk. This will require designers to first understand the risk posed by abuse of robots, and how humans perceive robot-directed abuse. To that end, we conducted an exploratory study of reactions to a physically abusive interaction between a human perpetrator and a victimized agent. Given extensions of gendered biases to robotic agents, as well as associations between an agent's human likeness and the experiential capacity attributed to it, we quasi-manipulated the victim's humanness (via use of a human actor vs. NAO robot) and gendering (via inclusion of stereotypically masculine vs. feminine cues in their presentation) across four video-recorded reproductions of the interaction. Analysis of data from 417 participants, each of whom watched one of the four videos, indicates that the intensity of emotional distress felt by an observer is associated with their gender identification, previous experience with victimization, hostile sexism, and support for social stratification, as well as the victim's gendering.

11:10-11:20, Paper ThAT2.5
Trust in Robot Self-Defense: People Would Prefer a Competent, Tele-Operated Robot That Tries to Help
Kochenborger Duarte, Eduardo (Halmstad University), Shiomi, Masahiro (ATR), Vinel, Alexey (Halmstad University), Cooney, Martin (Halmstad University)
Keywords: Ethical Issues in Human-robot Interaction Research, Social Learning and Skill Acquisition Via Teaching and Imitation, Degrees of Autonomy and Teleoperation
Abstract: Motivated by the expectation that robot presence at crime scenes will become increasingly prevalent, the question arises of how robots can protect humans in their care or vicinity. The current paper delves into the concept of “robot self-defense” and explores whether a robot should be tele-operated or autonomous, and how humans perceive imperfections in robot performance. To gain insight into how people feel, an online survey was conducted with 180 participants, who watched six videos of a robot defending a victim. The study provides insights into trust in human-robot interactions and sheds light on the complex dynamics involved in robot self-defense. The results indicate that people found a tele-operated robot more acceptable, and that attempting to help but failing is more acceptable than merely observing.

11:20-11:30, Paper ThAT2.6
Ethical Participatory Design of Social Robots through Co-Construction of Participatory Design Protocols
Datey, Isha (Oakland University), Soper, Hunter (Oakland University), Hossain, Khadeejah (Oakland University), Louie, Wing-Yue Geoffrey (Oakland University), Zytko, Douglas (Oakland University)
Keywords: Ethical Issues in Human-robot Interaction Research, User-centered Design of Robots, Robot Companions and Social Robots
Abstract: Ethics have become a core consideration in human-robot interaction (HRI) due to ample opportunity for both positive and negative impact on humans. HRI literature has expounded on ways to produce ethical social robots, especially participatory design (PD) that integrates anticipated users and other stakeholders as designers themselves to ensure their values are integrated into robot design. We draw attention to the ethics of participation in robot design, distinct from the ethics of the robot ultimately designed. We propose an approach to foregrounding ethics in PD processes through co-construction of robot PD protocols with stakeholders. We call this “pre-PD” because it entails expanding the boundaries of PD beyond the product of design (the robot) to also include the participatory activities that enable design. Contributions of the paper include: (1) a case study of pre-PD for sexual violence mitigation robots to demonstrate feasibility of stakeholders co-constructing robot PD protocols, and (2) an actionable framework for HRI researchers to use when constructing their own PD protocols with stakeholders, informed by reflection on the case study.

11:30-11:40, Paper ThAT2.7
The Invisible Labor of Authoring Dialogue for Teleoperated Socially Assistive Robots
Elbeleidy, Saad (Colorado School of Mines), Reddy, Elizabeth (Colorado School of Mines), Williams, Tom (Colorado School of Mines)
Keywords: Ethical Issues in Human-robot Interaction Research, Assistive Robotics, Robots in Education, Therapy and Rehabilitation
Abstract: Some labor, while necessary within the context of paid employment, is overlooked or devalued. This is "invisible labor". Invisible labor is often performed by minoritized groups and is typically invisible to those in power. Novel technologies can introduce new sociotechnical labor paradigms that reduce labor visibility. In this paper, we consider how invisible labor might manifest for teleoperated Socially Assistive Robots (SARs). By combining an analysis of the labor context of teleoperated SAR use with insights from interviews with SAR teleoperators, we demonstrate how invisible labor manifests in the practical deployment of teleoperated SARs. Finally, we provide recommendations for developers and policymakers to remedy this labor invisibility.

11:40-11:50, Paper ThAT2.8
Grounding Robot Navigation in Self-Defense Law
Zhu, James (Carnegie Mellon University), Shrivastava, Anoushka (Carnegie Mellon University), Johnson, Aaron M. (Carnegie Mellon University)
Keywords: Motion Planning and Navigation in Human-Centered Environments, Philosophical Issues in Human-Robot Coexistence, Ethical Issues in Human-robot Interaction Research
Abstract: Robots operating in close proximity to humans rely heavily on human trust to successfully complete their tasks. But what are the real outcomes when this trust is violated? Self-defense law provides a framework for analyzing tangible failure scenarios that can inform the design of robots and their algorithms. Studying self-defense is particularly important for ground robots since they operate within public environments, where they can pose a legitimate threat to the safety of nearby humans. Moreover, even if ground robots can guarantee human safety, the perception of a physical threat is sufficient to justify human self-defense against robots. In this paper, we synthesize works in law, engineering, and social science to present four actionable recommendations for how the robotics community can craft robots to mitigate the likelihood of self-defense situations arising. We establish how current U.S. self-defense law can justify a human protecting themselves against a robot, discuss the current literature on human attitudes toward robots, and analyze methods that have been produced to allow robots to operate close to humans. Finally, we present hypothetical scenarios that underscore how current robot navigation methods can fail to sufficiently consider self-defense concerns and the need for the recommendations to guide improvements in the field.

ThAT3
Room T3
Robot Companions and Social Robots
Regular Session
Chair: Rossi, Silvia | Università di Napoli Federico II

10:30-10:40, Paper ThAT3.1
I = Robot: An Investigation of How Perspective Switching Can Support People’s Acceptance of AI-Powered Social Robots
Wittmann, Maximilian (Friedrich-Alexander-Universität Erlangen-Nürnberg), Köhler, Lena (FAU Erlangen-Nuremberg), Morschheuser, Benedikt (Gamification Research Group, Friedrich-Alexander-Universität Erlangen-Nürnberg)
Keywords: Creating Human-Robot Relationships, Cooperation and Collaboration in Human-Robot Teams, Applications of Social Robots
Abstract: Digital companions and social robots empowered with social capabilities by artificial intelligence (AI) are increasingly finding their way into our everyday lives. Even though these technologies benefit individuals seeking companionship and social interaction, adoption of AI-powered social robots and companions remains a challenge. Little is known about effective approaches to overcoming barriers between humans and such systems that prevent their acceptance in the field. Applying the approach of perspective switching, we examine whether taking the perspective of an AI-powered social robot can improve users’ perceptions of social robots and increase relevant antecedents of technology acceptance. Our study demonstrates that instruction-based perspective switching is a promising approach to influencing the perceived usefulness, perceived enjoyment, and perceived sociability of an AI-powered social robot, and thus proves to be a useful means of supporting robot acceptance in society.

10:40-10:50, Paper ThAT3.2
Human Perception on Social Robot's Face and Color Expression Using Computational Emotion Model
Dzhoroev, Temirlan (Ulsan National Institute of Science & Technology), Park, Haeun (Ulsan National Institute of Science and Technology (UNIST)), Lee, Jiyeon (Ulsan National Institute of Science and Technology), Kim, Byounghern (Ulsan National Institute of Science and Technology), Lee, Hui Sung (UNIST (Ulsan National Institute of Science and Technology))
Keywords: Robot Companions and Social Robots, User-centered Design of Robots, Multimodal Interaction and Conversational Skills
Abstract: Researchers have explored the effects of expressing emotions using various modalities in the field of social robots. Prior studies have demonstrated that the use of color in emotional expressions can enhance user acceptance, relationship building, and communication effectiveness. This study aims to validate the effectiveness of different modalities in expressing Ekman's six basic emotions. Specifically, four modalities were compared: face expression (F), face and LED color expression (FL), face and LED color expression with blinking (FLB), and face and eye color expression (FE). To accomplish this, we developed a small social robot prototype and used a computational emotion model to improve robot dynamics and interactivity. The findings revealed that, although the F modality effectively expressed emotions, ambiguous emotions were better perceived when color or blinking was incorporated. Emotions such as anger, sadness, disgust, surprise, and fear were better conveyed to the participants when the FL, FLB, and FE modalities were utilized. For happiness, F alone was sufficient for recognition. This study provides empirical evidence on the effectiveness of different modalities for expressing emotions in social robots and offers valuable insights gathered from participants' feedback and reflections.

10:50-11:00, Paper ThAT3.3
Add-If-Silent Rule-Based Growing Neural Gas for High-Density Topological Structure of Unknown Objects
Shoji, Masaya (Robotis Corporation / Graduate University of Industrial Technology), Obo, Takenori (Tokyo Metropolitan University), Kubota, Naoyuki (Tokyo Metropolitan University)
Keywords: Robot Companions and Social Robots, Social Intelligence for Robots, Computational Architectures
Abstract: To realize a super-smart society (Society 5.0) in which humans and robots coexist, a perceptual system is needed that can recognize unknown objects in various unknown environments quickly and flexibly. In unknown environments, the characteristics of objects cannot be known in advance, so recognition methods based on prior learning, such as deep reinforcement learning, cannot fully cover the problem. There have been many studies on environment recognition (clustering, etc.) that combine RGB images and distance images, but their recognition performance is unstable because it depends strongly on the lighting conditions of the environment. In this study, we therefore construct a 3D topological map of the environment in real time using Growing Neural Gas (GNG), which can learn 3D topological structures even for unlearned objects, using only 3D point cloud data as input. In the real world, due to the characteristics of RGB-D cameras, sample density decreases for more distant objects and only sparse depth information can be obtained, so conventional GNG cannot generate high-density topological structures of unknown objects. If the object category labels of the winner nodes (nearest nodes) for the input vector (a 3D point cloud) match the unknown object and lie within a predefined tolerance area, the input is judged to be useful for learning the topological structure of the unknown object. Based on this rule, we propose Add-if-Silent rule-based GNG (AiS-GNG), which can generate high-density topological structures for distant objects by directly adding input data as reference vectors. We verify the effectiveness of the proposed method through experiments using a 3D dynamics simulator.
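The decision rule described in the abstract can be sketched as a small branch inside a GNG update loop: if the winner node already carries the input's category label and lies within a tolerance radius, the input itself is inserted as a new reference vector. Node structure, learning rate, and tolerance value below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

nodes = [{"w": np.array([0.0, 0.0, 0.5]), "label": "cup"}]
TOL = 0.05   # predefined tolerance radius (assumption)

def ais_gng_update(p: np.ndarray, label: str, eps: float = 0.05):
    """One Add-if-Silent step for input point p with an object-category label."""
    winner = min(nodes, key=lambda n: np.linalg.norm(n["w"] - p))
    dist = np.linalg.norm(winner["w"] - p)
    if winner["label"] == label and dist < TOL:
        # Add-if-Silent: adopt the input itself as a new reference vector,
        # densifying the topological structure of sparse, distant objects.
        nodes.append({"w": p.copy(), "label": label})
    else:
        # Standard GNG adaptation: move the winner toward the input.
        winner["w"] += eps * (p - winner["w"])

for point in np.random.normal([0.0, 0.0, 0.5], 0.02, size=(50, 3)):
    ais_gng_update(point, "cup")
print(f"{len(nodes)} nodes after Add-if-Silent updates")
```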

11:00-11:10, Paper ThAT3.4
Social Robot Dressing Style: An Evaluation of Interlocutor Preference for University Setting
Ashok, Ashita (RPTU Kaiserslautern-Landau), Paplu, Sarwar (Technische Universität Kaiserslautern), Berns, Karsten (University of Kaiserslautern)
Keywords: Creating Human-Robot Relationships, Robot Companions and Social Robots, User-centered Design of Robots
Abstract: The presented study investigated the dressing style preference of human interlocutors for the social robot ROBIN in a university setting. Through an online questionnaire, the research examined the impact of robot attire on human-robot interaction (HRI) and the perception of social robots in a social context. A mixed-methods, within-subjects empirical study was conducted via an online questionnaire consisting of user demographics, social factors, constructs of robot usage, and two identical HRI videos. The videos featured ROBIN, a humanoid robot, dressed in formal vs. casual attire, acting as a representative from the student service center interacting with a human student. The findings indicated that 51.35% of prospective human interlocutors expressed a preference for the casual clothing style, while 48.65% preferred the formal style. Additionally, participants associated social traits such as friendly, helpful, comfortable, approachable, and interesting with the robot's casual attire. This study highlights the significance of robot clothing in personalized HRI and its impact on humans' perception of social robots. By analyzing interlocutor preferences for robot dressing style, it establishes clothing as an influential factor in the design of socially acceptable robots.

11:10-11:20, Paper ThAT3.5
Development and Evaluation of a Meal Partner Robot Platform
Fujii, Ayaka (National Institute of Advanced Industrial Science and Technology), Okada, Kei (The University of Tokyo), Inaba, Masayuki (The University of Tokyo)
Keywords: Creating Human-Robot Relationships, Robot Companions and Social Robots, User-centered Design of Robots
Abstract: Eating with others enriches our lives and has many positive effects. However, many people have been forced to eat alone during the COVID-19 pandemic. We believe that physically embodied robots that can be interacted with in real time can be good mealtime partners. In this study, we developed a meal partner robot platform called Mamoru'21, which can share eating behavior. We conducted an experiment to evaluate Mamoru'21. The results showed that Mamoru'21 could make eating more enjoyable than a conventional communication robot. We also investigated the preferable appearance of a meal partner robot and confirmed that Mamoru'21 met the requirements.

11:20-11:30, Paper ThAT3.6
Diversity-Aware Verbal Interaction between a Robot and People with Spinal Cord Injury
Grassi, Lucrezia (University of Genova), Canepa, Danilo (University of Genoa), Bellitto, Amy (University of Genoa), Casadio, Maura (University of Genoa), Massone, Antonino (S.C. Unità Spinale Unipolare, Santa Corona Hospital, ASL2 Savone), Recchiuto, Carmine Tommaso (University of Genova), Sgorbissa, Antonio (University of Genova)
Keywords: Robot Companions and Social Robots, User-centered Design of Robots, Applications of Social Robots
Abstract: This article explores the acceptance of a humanoid robot designed to engage in conversations with clinicians and individuals with spinal cord injuries in a hospital environment. Building upon prior research, we introduce the concept of "diversity-aware" robots, which possess the capability to interact with people while adapting to their culture, age, gender, preferences, and physical and mental conditions. These robots are connected to a cloud system specifically designed to consider these factors, enabling them to adapt to the context and individuals they interact with. Our experiments involved the NAO robot interacting with both clinicians and individuals with spinal cord injuries. Subsequent to the interaction, participants completed a questionnaire and underwent an interview. The collected data were analyzed to assess the system's acceptability and its persistence beyond the initial novelty effect. Furthermore, we investigated whether clinicians exhibited a lower predisposition towards the system and expressed greater concerns than end-users about using the robot, which could potentially hinder the adoption of the system.

11:30-11:40, Paper ThAT3.7
Social Robots As Companions for Lonely Hearts: The Role of Anthropomorphism and Robot Appearance
Jung, Yoonwon (Seoul National University), Hahn, Sowon (Seoul National University)
Keywords: Motivations and Emotions in Robotics, Robot Companions and Social Robots
Abstract: Loneliness is a distressing personal experience and a growing social issue. Social robots could alleviate the pain of loneliness, particularly for those who lack in-person interaction. This paper investigated how the effect of loneliness on the anthropomorphism of social robots differs by robot appearance, and how it influences purchase intention. Participants viewed a video of one of three robots (machine-like, animal-like, and human-like) moving and interacting with a human counterpart. Bootstrapped multiple regression results revealed that although animal-likeness had a larger unique effect on anthropomorphism than human-likeness, lonely individuals' tendency to anthropomorphize the animal-like robot was lower than their tendency to anthropomorphize the human-like robot. This moderating effect remained significant after covariates were included. Bootstrapped mediation analysis showed that anthropomorphism had both a positive direct effect on purchase intent and a positive indirect effect mediated by likability. Our results suggest that lonely individuals' tendency to anthropomorphize social robots should not be summarized as one unified inclination. Moreover, by extending the effect of loneliness on anthropomorphism to likability and purchase intent, this study explored the potential of social robots to be adopted as companions for lonely individuals in real life. Lastly, we discuss the practical implications of the study for designing social robots.

11:40-11:50, Paper ThAT3.8
Perceived Sociality and Persuasion: Investigating the Effects of Social and Technical Framing on Human-Robot Interaction
Boos, Annika (Technical University of Munich), Emmermann, Birte (Technical University of Munich), Reiner, Maximilian (Technical University of Munich), Bengler, Klaus (Technical University of Munich)
Keywords: Personalities for Robotic or Virtual Characters, Narrative and Story-telling in Interaction, Social Intelligence for Robots
Abstract: This study investigated the effects of two text-based framings (‘social’ vs. ‘technical’) of a robot. We aimed to discover whether a robot framed as ‘social’ is perceived as more social and whether it is more persuasive. We used the Desert Survival Problem to create a task that the robot and participant solved together. We found no significant differences in post-interaction robot perception or compliance. Self-reported technical knowledge was not significantly correlated with social attributions to the robot. In line with our hypotheses, perceived competence, perceived helpfulness of the robot, and willingness to work with it again were positively correlated. Furthermore, perceived warmth of the robot was positively correlated with the willingness to work with the robot again.

ThAT4
Room T4
Robots in Education, Therapy and Rehabilitation
Regular Session
Chair: Tapus, Adriana | ENSTA Paris, Institut Polytechnique De Paris

10:30-10:40, Paper ThAT4.1
MoveToCode: An Embodied Augmented Reality Visual Programming Language with an Autonomous Robot Tutor for Promoting Student Programming Curiosity
Groechel, Thomas (University of Southern California), Ipek, Goktan (University of Southern California), Ly, Karen (University of Southern California), Velentza, Anna-Maria (University of Southern California), Mataric, Maja (University of Southern California)
Keywords: Curiosity, Intentionality and Initiative in Interaction, Child-Robot Interaction, Virtual and Augmented Tele-presence Environments
Abstract: Virtual, augmented, and mixed reality for human-robot interaction (VAM-HRI) is a new and rapidly growing field of research. The field of socially assistive robotics (SAR) has made impactful advances in educational settings, but has not yet benefited from VAM-HRI advances. We developed MoveToCode, an open-source, embodied (i.e., kinesthetic) learning visual programming language that aims to increase student (ages 8-12) curiosity during programming. MoveToCode uses an augmented reality (AR) autonomous robot tutor named Kuri that models the students' kinesthetic curiosity and acts to promote their curiosity in programming. The MoveToCode design was informed by a pilot study conducted in a real elementary school classroom environment (n=15) and then tested in Los Angeles elementary classrooms (n=21). Relative to the pilot study, the main study validated our design decisions, showing an improvement in perceived robot helpfulness (median Δ = +1.25 out of 5) and in the number of completed exercises (median Δ = +1, out of a maximum of 11). While no significant changes were found in pre/post student curiosity or intention to program later in life, students wrote more open-ended questions post-study on topics related to robots, programming, research, and whether they would like to do the activity again. This work demonstrates the potential of using VAM-HRI in a kinesthetic context for SAR tutors, and highlights the existing conventions and new design considerations for creating AR applications for SAR.

10:40-10:50, Paper ThAT4.2
Adapting a Teachable Robot's Dialog Responses Using Reinforcement Learning in Teaching Conversation
Love, Rachel (Monash University), Law, Edith (University of Waterloo), Cohen, Philip R (Openstream Inc., Monash University), Kulic, Dana (Monash University)
Keywords: Robots in Education, Therapy and Rehabilitation, Machine Learning and Adaptation, Multimodal Interaction and Conversational Skills
Abstract: Teachable robots can offer benefits to students through the use of social behaviours, such as speech, gaze, and gestures, to promote engagement and learning. Adapting these behaviours can deliver personalised interactions to better suit each individual. There is a growing body of research utilising reinforcement learning in social robotics; however, there is limited research on the use of adaptive dialog behaviours for social robots. We propose an adaptive response-selection algorithm for a teachable robot which aims to improve user engagement in the teaching task. The proposed approach uses Q-learning to learn an individualised policy. The algorithm is rewarded according to the time taken per teaching input and the amount of paraphrasing in the user's response. A user study was conducted to evaluate the algorithm against a method of random response-selection. The results indicate that the adaptive approach learns to select more rewarding actions over time and personalises to the individual user.
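A minimal sketch of the described idea, assuming a small discrete engagement state and a handful of dialog-act actions; the reward combines time-per-teaching-input and paraphrasing amount as stated in the abstract, but all names, weights, and the epsilon-greedy policy are illustrative, not the authors' implementation.

```python
import random
from collections import defaultdict

ACTIONS = ["ask_question", "acknowledge", "rephrase_request"]  # dialog acts (assumed)
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2

Q = defaultdict(float)  # Q[(state, action)]

def choose(state):
    """Epsilon-greedy selection of the robot's next dialog response type."""
    if random.random() < EPS:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def reward(time_per_input_s, paraphrase_ratio):
    """Faster teaching inputs and more paraphrasing in the user's
    response are rewarded (weights are assumptions)."""
    return -0.1 * time_per_input_s + 1.0 * paraphrase_ratio

def update(s, a, r, s_next):
    """Standard one-step Q-learning update."""
    best_next = max(Q[(s_next, a2)] for a2 in ACTIONS)
    Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])

# One simulated teaching turn:
s = "low_engagement"
a = choose(s)
update(s, a, reward(time_per_input_s=8.0, paraphrase_ratio=0.6), "high_engagement")
print(a, dict(Q))
```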

10:50-11:00, Paper ThAT4.3
Dancing in a Tutu: Using a Ballet Robot to Encourage Young Girls into Robotics
Gong, Jiayong (The University of Auckland), Yu, Stephy (The University of Auckland), Fowler, Allan (The University of Auckland), Sutherland, Craig (University of Auckland)
Keywords: Robots in Education, Therapy and Rehabilitation, Narrative and Story-telling in Interaction, Curiosity, Intentionality and Initiative in Interaction
Abstract: Despite recent initiatives, the number of women in programming and software engineering has continued to drop over recent years. One potential reason for this drop is that girls “learn” early on that programming is a boys' activity and that they are not good at it. Therefore, they are less likely to choose topics relating to programming as they progress through their studies. Thus, one approach to increasing women in programming is to target girls at an early age and show them that programming is both doable and fun. This paper presents a pilot study involving a robotic programming system for girls aged five to ten. The children used a tangible-based programming system to program ballet dances on a Nao robot in a real-world classroom. The results indicate that the participants enjoyed the system and could build programs without many issues. While this study only includes a small data set, it does validate the overall idea and suggests that further research in the area is beneficial.

11:00-11:10, Paper ThAT4.4
Benefits, Challenges and Research Recommendations for Social Robots in Education and Learning: A Meta-Review
Barakova, Emilia I. (Eindhoven University of Technology), Vaananen, Kaisa (Tampere University), Kaipainen, Kirsikka (Tampere University), Markopoulos, Panos (Eindhoven University of Technology)
Keywords: Robots in Education, Therapy and Rehabilitation, Robot Companions and Social Robots
Abstract: Social robots exist in various forms but are still not spreading widely to the societal contexts they are envisioned for, such as educational settings. There appears to be a gap between research and practice: a lot of research already demonstrates promising results for using social robots in various domains, but this promise has yet to be realized in “the real world”. The aim of this paper is to form a systematic understanding of the potential and challenges of social robots in the domain of education and learning, which is an intensively researched application domain for social robots. We conducted a meta-review of recent literature reviews (published in 2018-2022), using the PRISMA method. We analyzed 12 review papers that met the defined inclusion criteria by extracting the potential benefits and challenges presented in these reviews. We identify six benefits, four challenges, and six recommendations for future research on social robots in education and learning; the recommendations emphasize the developments and translational research needed to realize the potential of social robots and to understand how they can be implemented in actual educational contexts.

11:10-11:20, Paper ThAT4.5
Robots in Education: Influence of Regulatory Focus Theory
Hei, Xiaoxuan (ENSTA Paris, Institut Polytechnique De Paris), Zhang, Heng (ENSTA Paris, Institut Polytechnique De Paris), Tapus, Adriana (ENSTA Paris, Institut Polytechnique De Paris)
Keywords: Robots in Education, Therapy and Rehabilitation, Applications of Social Robots
Abstract: The Covid-19 pandemic has greatly expanded the use of distance learning. The limits of this practice have gradually come to light, both for students and for teachers. It is now crucial to design alternative solutions to overcome the shortcomings of videoconferencing in terms of involvement, concentration, learning, and equity. Social robots are increasingly used as tutors in the educational context and help improve teaching efficiency. Many psychology-based principles have been applied in education to guide instructional strategies, motivate students, and create a positive and productive learning environment. In this work, we use Regulatory Focus Theory (RFT), which categorizes an individual's motivation into two types: promotion and prevention. Promotion-focused individuals are motivated by the potential for growth and achievement, whereas prevention-focused individuals are motivated by the potential for avoiding negative outcomes. Based on RFT, we explore if and how the regulatory-focused behavior of a tutor robot can affect participants' learning outcomes. A language learning scenario was designed with two conditions: (1) a robot tutor with promotion-focused behavior, and (2) a robot tutor with prevention-focused behavior. The results are encouraging and support that a promotion-focused robot tutor can increase the learning efficiency of promotion-focused participants, while a prevention-focused robot tutor can enhance the learning interest of prevention-focused participants.

11:20-11:30, Paper ThAT4.6
A Study of Demonstration-Based Learning of Upper-Body Motions in the Context of Robot-Assisted Therapy
Quiroga, Natalia (Hochschule Bonn-Rhein-Sieg (H-BRS)), Mitrevski, Alex (Hochschule Bonn-Rhein-Sieg), Plöger, Paul G. (Hochschule Bonn Rhein Sieg)
Keywords: Programming by Demonstration, Robots in Education, Therapy and Rehabilitation, Assistive Robotics
Abstract: In therapeutic scenarios, robots are sometimes used for imitation activities in which the robot demonstrates a motion and the individual under therapy needs to repeat it. To allow incorporating new types of motions in such activities, the robot should have an ability to learn motions by observing demonstrations from a human, such as a therapist. In this paper, we investigate an approach for acquiring motions from skeleton observations of a human, which are collected by a robot-centric RGB-D camera. Human body gestures are transferred to robot movements by mapping the joint angle positions onto the robot's body, with angles re-estimated into a safe angular range so that self-collisions of the end effector are prevented. We performed both a quantitative and a qualitative evaluation of the method: we (i) quantitatively evaluated the motion reproduction error of the procedure in a study with QTrobot, in which the robot acquired different upper-body dance moves from multiple participants, and (ii) performed a qualitative user study to evaluate how the robot's reproduction is perceived. The quantitative evaluation demonstrates the method's overall feasibility, although the reproduction quality is affected by noise in the skeleton observations, while the qualitative evaluation suggests generally high satisfaction with the robot's motion, except for motions that are likely to lead to self-collisions and which were therefore reproduced less accurately.
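The safety-aware mapping step can be pictured as clamping observed human joint angles into per-joint safe ranges before execution, so poses that would drive the end effector into the body are re-estimated. Joint names and limits below are invented for illustration; they are not QTrobot's actual limits.

```python
# Safe angular ranges per robot joint, in degrees (illustrative values only).
SAFE_RANGE = {
    "shoulder_pitch": (-140.0, 140.0),
    "shoulder_roll": (-75.0, 75.0),
    "elbow_roll": (-90.0, -7.0),   # lower bound keeps the arm off the torso
}

def map_human_to_robot(human_angles: dict) -> dict:
    """Clamp observed human joint angles into the robot's safe ranges,
    re-estimating any out-of-range angle at the nearest safe position."""
    robot_angles = {}
    for joint, angle in human_angles.items():
        lo, hi = SAFE_RANGE[joint]
        robot_angles[joint] = min(max(angle, lo), hi)
    return robot_angles

# Skeleton observation from the RGB-D camera (hypothetical, noisy values).
observed = {"shoulder_pitch": 155.0, "shoulder_roll": 20.0, "elbow_roll": 3.0}
print(map_human_to_robot(observed))
# -> elbow_roll is re-estimated to -7.0, avoiding an end-effector self-collision
```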

11:30-11:40, Paper ThAT4.7
The Impact of Robot Co-Location on Student Learning Experiences When Reasoning about Geometry
Grosso, Veronica (University of Illinois Chicago), Michaelis, Joseph (University of Illinois Chicago)
Keywords: Social Presence for Robots and Virtual Humans, Robots in Education, Therapy and Rehabilitation, Robot Companions and Social Robots
Abstract: The application of social robots in education is an emerging field that has the potential to transform the way we teach and learn. In this work, we compare the effects of a physically co-located social robot and a virtual social robot on learning experiences as students reason about geometry conjectures. Our thematic analysis of interactions and interviews shows that the co-located robot was treated more socially and improved learning experiences compared to a virtual robot. Specifically, it increased the perceived effectiveness of the interaction at lowering anxiety, providing companionship, and supporting reasoning and comprehension. These results support the use of co-located, physically present robots to complement learning activities that benefit from social interaction, including reasoning about complex problems and the use of gestures. This research contributes to the growing body of literature on the use of social robots in education and highlights the potential for further research on this subject.

11:40-11:50, Paper ThAT4.8
A Companion for Aphasia Training: Development and Early Stakeholder Evaluation of a Robot-Assisted Speech Training App
Linden, Katharina Friederike (TH Köln - University of Applied Sciences), Arndt, Julia (TH Köln - University of Applied Sciences), Neef, Caterina (TH Köln - University of Applied Sciences), Richert, Anja (University of Applied Sciences Cologne)
Keywords: Robots in Education, Therapy and Rehabilitation, Applications of Social Robots, User-centered Design of Robots
Abstract: Aphasia is a common symptom of stroke. Patients affected may experience difficulties in all aspects of language. To complement traditional logopedic therapy, we developed a speech training application for a social robot, allowing affected individuals to perform additional training independently. In an early evaluation, a representative of each main stakeholder group we identified - namely persons with aphasia, care staff, and speech therapists - evaluated our application in terms of perceived usefulness, ease of use, and overall user experience. The robot guided the participants through the training session autonomously and most exercises were completed without help, thus proving the feasibility of our concept. The participants rated the application overall as positive and we achieved promising results in terms of attitude towards and intention to use the system. Furthermore, the social component of the training with the robot was very well received among the participants. In the future, the training content will be revised with the help of a linguist, to adequately support the training needs of persons with aphasia.

ThAT5
Room T5
Social Intelligence for Robots II
Regular Session
Chair: Senaratne, Hashini Hiranya | CSIRO

10:30-10:40, Paper ThAT5.1
Robot Self-Recognition Via Facial Expression Sensorimotor Learning
Zhegong, Shangguan (ENSTA-Paris), Ding, Mengyuan (Xi'an Jiaotong University), Yu, Chuang (University of Manchester), Chen, Chaona (University of Glasgow), Tapus, Adriana (ENSTA Paris, Institut Polytechnique De Paris)
Keywords: Cognitive and Sensorimotor Development, Social Intelligence for Robots, Social Learning and Skill Acquisition Via Teaching and Imitation
Abstract: To develop robots that can show cognitive functions, we must learn from the knowledge of human cognition. Existing biological and psychological evidence suggests that self-face perception and sensorimotor learning mechanisms play a crucial role in self-recognition. However, one of the most important self-identity cues – facial information – has not been extensively studied in the robot self-recognition task. Current research on robot self-recognition primarily relies on the recognition of high-precision targets and tracking of manipulator motions, where the self-perception of facial information is not well studied. In this work, we propose a novel approach to achieve self-recognition via self-perception of facial expressions. Specifically, we developed a Conditional Generative Adversarial Network (CGAN) model using knowledge of human cognitive and sensorimotor functions, which allows the robot to be aware of its own face (an off-line model). By observing visual variations in a mirror and comparing them to self-perceptive information, the robot can recognize itself through online Bayesian learning regression. The results of our first experiment show that the robot can recognize itself in a mirror. The results of the second experiment show that our algorithm could be tricked by a similar robot making the same facial expressions, an effect similar to the rubber hand illusion (RHI).
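The final recognition step can be illustrated as an online Bayesian update of the belief that the agent seen in the mirror is the self, driven by whether observed facial changes match self-generated ones. The likelihood values and the per-frame matching signal below are assumptions, not the paper's model.

```python
def update_self_belief(prior: float, match: bool,
                       p_match_if_self: float = 0.9,
                       p_match_if_other: float = 0.3) -> float:
    """One Bayesian update of P(self | observations so far)."""
    likelihood_self = p_match_if_self if match else 1 - p_match_if_self
    likelihood_other = p_match_if_other if match else 1 - p_match_if_other
    evidence = likelihood_self * prior + likelihood_other * (1 - prior)
    return likelihood_self * prior / evidence

# Per-frame comparison of mirror-observed vs. self-generated expression
# (True = the observed facial change matches the commanded one).
belief = 0.5
for frame_matches in [True, True, False, True, True]:
    belief = update_self_belief(belief, frame_matches)
print(f"P(the agent in the mirror is me) = {belief:.2f}")
```

Note that a second robot mimicking the same expressions would also drive this belief upward, which is consistent with the RHI-like confusion reported in the abstract.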

10:40-10:50, Paper ThAT5.2
What Properties of Norms Can We Implement in Robots?
Malle, Bertram (Brown University), Rosen, Eric (Brown University), Chi, Vivienne Bihe (Brown University), Ramesh, Dev (Brown University)
Keywords: Robotic Etiquette, Social Learning and Skill Acquisition Via Teaching and Imitation, Ethical Issues in Human-robot Interaction Research
Abstract: Norms are indispensable for human communities, and so they will be for robot-human communities. We analyze some of the requirements for a robot to represent norms and conform its actions to them. These requirements include both cognitive and social properties that human norms instantiate. We examine which of these properties can be implemented in a robot's architecture and review some previous computational approaches. We then introduce an approach using behavior trees, argue for its promise to implement properties of norms, and discuss unsolved challenges.

10:50-11:00, Paper ThAT5.3
CLIPGraphs: Multimodal Graph Networks to Infer Object-Room Affinities
Agrawal, Ayush (Robotics Research Center, IIIT Hyderabad), Arora, Raghav (IIIT Hyderabad), Datta, Ahana (International Institute of Information Technology, Hyderabad), Banerjee, Snehasis (IIIT-H / TCS), Bhowmick, Brojeshwar (Tata Consultancy Services), Jatavallabhula, Krishna Murthy (MIT), Sridharan, Mohan (University of Birmingham), Krishna, Madhava (IIIT Hyderabad)
Keywords: Multi-modal Situation Awareness and Spatial Cognition, Motion Planning and Navigation in Human-Centered Environments, Assistive Robotics
Abstract: This paper introduces a novel method for determining the best room to place an object in, for embodied scene rearrangement. While state-of-the-art approaches rely on large language models (LLMs) or reinforcement-learned (RL) policies for this task, our approach, CLIPGraphs, efficiently combines commonsense domain knowledge, data-driven methods, and recent advances in multimodal learning. Specifically, it (a) encodes a knowledge graph of prior human preferences about the room location of different objects in home environments, (b) incorporates vision-language features to support multimodal queries based on images or text, and (c) uses a graph network to learn object-room affinities based on embeddings of the prior knowledge and the vision-language features. We demonstrate that our approach provides better estimates of the most appropriate location of objects from a benchmark set of object categories in comparison with state-of-the-art baselines. Supplementary material and code: https://clipgraphs.github.io
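The affinity scoring at the end of the pipeline can be pictured as nearest-neighbour matching between object and room embeddings. The sketch below substitutes tiny hard-coded vectors for the real CLIP and graph-network embeddings, so it conveys only the flavour of the approach, not the authors' pipeline.

```python
import numpy as np

# Stand-ins for embeddings that CLIPGraphs would produce with a graph
# network over vision-language features (values are made up).
object_emb = {"toothbrush": np.array([0.9, 0.1, 0.0]),
              "skillet":    np.array([0.1, 0.9, 0.1])}
room_emb = {"bathroom": np.array([1.0, 0.0, 0.1]),
            "kitchen":  np.array([0.0, 1.0, 0.0]),
            "bedroom":  np.array([0.2, 0.1, 1.0])}

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Pick the room with the highest object-room affinity for each object.
for obj, e in object_emb.items():
    scores = {room: cosine(e, r) for room, r in room_emb.items()}
    best = max(scores, key=scores.get)
    print(obj, "->", best, {k: round(v, 2) for k, v in scores.items()})
```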
|
|
11:00-11:10, Paper ThAT5.4 | |
Come Closer: The Effects of Robot Personality on Human Proxemics Behaviours |
|
Moujahid, Meriam (Heriot-Watt University), Robb, David A. (Heriot-Watt University), Dondrup, Christian (Heriot-Watt University), Hastie, Helen (School of Mathematical and Computer Sciences, Heriot-Watt University)
Keywords: Multi-modal Situation Awareness and Spatial Cognition, Social Intelligence for Robots, Multimodal Interaction and Conversational Skills
Abstract: Social robots in human environments need to be able to reason about their physical surroundings while interacting with people. Furthermore, human proxemics behaviours around robots can indicate how people perceive these robots and can inform design. Here, we introduce Charlie, a situated robot receptionist that can interact with people using verbal and non-verbal communication in a dynamic environment, where users might enter or leave the scene at any time. The robot receptionist is stationary and cannot navigate; therefore, people have full control over their personal space, as they are the ones approaching the robot to a comfortable distance. We investigated the influence of different apparent robot personalities on the proxemics behaviours of the humans. The results indicate that different types of robot personalities, specifically introversion and extroversion, can influence human proxemics behaviours. Subjects maintained shorter distances with the introvert robot receptionist than with the extrovert robot. Interestingly, these distances were not identical to typical human-human interpersonal distances.
|
|
11:10-11:20, Paper ThAT5.5 | |
Exophora Resolution of Linguistic Instructions with a Demonstrative Based on Real-World Multimodal Information |
|
Oyama, Akira (Ritsumeikan University), Hasegawa, Shoichi (Ritsumeikan University), Nakagawa, Hikaru (Ritsumeikan University), Taniguchi, Akira (Ritsumeikan University), Hagiwara, Yoshinobu (Ritsumeikan University), Taniguchi, Tadahiro (Ritsumeikan University) |
Keywords: Multi-modal Situation Awareness and Spatial Cognition, Multimodal Interaction and Conversational Skills, Non-verbal Cues and Expressiveness
Abstract: To enable a robot to provide support in a home environment through human-robot interaction, exophora resolution is crucial for accurately identifying the target of ambiguous linguistic instructions, which may include a demonstrative, such as “Take that one.” Unlike endophora resolution, which involves predicting the corresponding word from given sentences, exophora resolution necessitates comprehensive utilization of external real-world information to identify and disambiguate the target from the on-site environment. This study aims to resolve ambiguity in language instructions containing a demonstrative through exophora resolution, utilizing real-world multimodal information. The robot accomplishes this by using three types of information: 1) object categories, 2) demonstratives, and 3) pointing, as well as knowledge about objects obtained from the robot's pre-exploration of the environment. We evaluated the accuracy of object identification under multiple conditions by identifying a user-indicated object in a field that mimics a home environment. Our results demonstrate that our proposed method of exophora resolution using multimodal information can identify the target with 2 to 3 times higher accuracy than baseline methods in cases where information is missing.
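One plausible way to fuse the three cues the abstract lists is a multiplicative score per candidate object; the sketch below is an illustration under assumed priors (an exponential distance prior for the demonstrative and a Gaussian pointing cone), not the authors' exact model:

    import numpy as np

    def exophora_scores(candidates, category_probs, origin, direction, demonstrative):
        # Multiplicative fusion of three cues: category match, a
        # demonstrative-dependent distance prior, and a pointing cone.
        near = demonstrative in ("this", "these")
        scores = {}
        for name, pos in candidates.items():
            d = float(np.linalg.norm(pos - origin))
            dist_prior = np.exp(-d) if near else 1.0 - np.exp(-d)
            v = (pos - origin) / (d + 1e-9)
            angle = np.arccos(np.clip(v @ direction, -1.0, 1.0))
            point_lik = np.exp(-(angle / 0.3) ** 2)  # ~0.3 rad pointing noise
            scores[name] = category_probs.get(name, 0.1) * dist_prior * point_lik
        return scores

    objs = {"cup_on_table": np.array([1.0, 0.2, 0.0]),
            "cup_on_shelf": np.array([3.0, 0.1, 1.0])}
    point_dir = objs["cup_on_shelf"] / np.linalg.norm(objs["cup_on_shelf"])
    s = exophora_scores(objs, {"cup_on_table": 0.9, "cup_on_shelf": 0.9},
                        origin=np.zeros(3), direction=point_dir,
                        demonstrative="that")
    print(max(s, key=s.get))  # -> cup_on_shelf ("that one", far and pointed at)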
|
|
11:20-11:30, Paper ThAT5.6 | |
Measuring Situational Awareness Latency in Human-Robot Teaming Experiments |
|
Senaratne, Hashini Hiranya (CSIRO), Pitt, Alex (CSIRO), Talbot, Fletcher (CSIRO), Moghadam, Peyman (CSIRO), Sikka, Pavan (CSIRO), Howard, David (CSIRO), Williams, Jason (CSIRO), Kulic, Dana (Monash University), Paris, Cecile (CSIRO) |
Keywords: Multi-modal Situation Awareness and Spatial Cognition, Monitoring of Behaviour and Internal States of Humans, Cooperation and Collaboration in Human-Robot Teams
Abstract: A human supervisor's Situational Awareness (SA) is critical for successful Human-Robot Teaming (HRT). SA has been estimated using different techniques; however, many of these are associated with various biases, including recall and overgeneralisation biases. A key SA metric in HRT experiments is latency: the delay between the time the robotic system requires supervisor assistance and the time the supervisor identifies that need. Eye movements are increasingly used to assess SA across a range of domains, enabling objective and continuous SA assessment. However, to date, only a small number of features have been evaluated for estimating different types of SA latencies. In this paper, we investigated how two types of SA latencies (perceptual and comprehending) correlate with eye movement data collected during a remote field experiment, where a human supervisor directed a team of robots in a smart farming context. We identified 39 instances of SA latencies (13 perceptual and 26 comprehending). These instances were used to identify how a human supervisor's SA is affected by task context, and to evaluate correlations between five eye movement features and SA latencies. Two eye movement features related to fixation duration and saccade duration demonstrated very strong correlations (r ≈ -0.8 and r ≈ 0.85). Our findings can be extended to estimate the real-time likelihood of the human experiencing SA latency.
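The reported feature-latency relationships are plain Pearson correlations; with scipy this is a one-liner, shown here on fabricated toy numbers (not the study's data):

    import numpy as np
    from scipy.stats import pearsonr

    # Toy stand-ins: SA latency (s) per instance and the mean fixation
    # duration (ms) in the window preceding each instance.
    latency = np.array([1.2, 0.8, 2.5, 3.1, 0.5, 1.9, 2.2, 0.9])
    fixation_ms = np.array([310, 340, 250, 230, 370, 270, 260, 330])

    r, p = pearsonr(fixation_ms, latency)
    print(f"r = {r:.2f}, p = {p:.3f}")  # strongly negative on this toy data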
|
|
11:30-11:40, Paper ThAT5.7 | |
SoGrIn: A Non-Verbal Dataset of Social Group-Level Interactions |
|
Webb, Nicola (University of the West of England), Giuliani, Manuel (University of the West of England, Bristol), Lemaignan, Séverin (PAL Robotics)
Keywords: Multi-modal Situation Awareness and Spatial Cognition, Non-verbal Cues and Expressiveness, Robot Companions and Social Robots
Abstract: We present the Social Group Interactions (SoGrIn) dataset, which captures the non-verbal signals of groups as they complete socially collaborative and formation-provoking tasks. The dataset comprises precise proxemics (motion capture) and facial features (facial landmarks, gaze direction, facial action units), encompassing a total duration of 60 minutes and involving 30 individuals divided into six groups. Also included are basic demographic information and responses to the Big 5 personality questionnaire. The Social Group Interactions dataset is publicly available at https://doi.org/10.5281/zenodo.7778123.
|
|
11:40-11:50, Paper ThAT5.8 | |
Towards a System That Allows Robots to Use Commitments in Joint Action with Humans |
|
Repiso, Ely (LAAS-CNRS, Toulouse), Sarthou, Guillaume (LAAS-CNRS), Clodic, Aurélie (LAAS-CNRS)
Keywords: Social Intelligence for Robots, Cooperation and Collaboration in Human-Robot Teams, Robot Companions and Social Robots
Abstract: In collaborative tasks, expectations for achieving shared goals arise at all hierarchical plan levels, including plans, tasks, subtasks, and actions. However, these expectations also generate uncertainties for individuals executing the joint plan. If left unresolved, these uncertainties can impede successful task completion. Uncertainties may relate to the agents' motivation to initiate, continue, or complete their plan (motivational uncertainty), the best way to execute their shared plan (instrumental uncertainty), and their knowledge of other agents and the environment (common ground uncertainty). These expectations can be either normative or descriptive, but only normative expectations trigger reactions from agents to resolve the aforementioned types of uncertainties. Thus, this paper introduces a theoretical model that enables a robot to consider all agents' expectations and take actions that reduce the uncertainties associated with their shared plan. By doing so, we aim to enhance the likelihood of success in joint plans between robots and humans. To demonstrate the effectiveness of our theoretical commitment model, we have implemented a proof of concept for a client service use case in a food shop.
|
|
ThAT6 |
Room T6 |
Visual and Haptic Cues for Physical Human-Robot Interaction and Co-Manipulation |
Special Session |
Chair: Pierri, Francesco | Università Della Basilicata |
|
10:30-10:40, Paper ThAT6.1 | |
Assistive Force Control in Collaborative Human-Robot Transportation (I) |
|
Cavalcante Lima, Bruno Gabriel (University of Salerno), Ferrentino, Enrico (University of Salerno), Chiacchio, Pasquale (Università di Salerno), Vento, Mario (University of Salerno)
Keywords: Cooperation and Collaboration in Human-Robot Teams, Assistive Robotics, HRI and Collaboration in Manufacturing Environments
Abstract: Collaborative robotics has gained significant traction in industrial scenarios due to its ability to merge human cognitive abilities with robot strength and dexterity. One specific area where this technology is promising is the transportation of heavy and/or bulky objects. In scenarios where the human leads, physical human-robot interaction triggers cognitive human-robot interaction, by which the robot is called to adapt its behavior to the collaborator's intention. Based on this principle, this paper introduces a novel control architecture, namely assistive force control (AFC), by which the robot's purpose is to alleviate the human collaborator's effort during transportation. Instead of acting on the robot's motion, the AFC acts on its causes, by intuitively defining assistive forces, which are input to a lower-level direct force controller. We validate the proposed architecture in two real transportation scenarios involving an industrial robot collaboratively carrying objects with different subjects. Our preliminary results show that little effort is required for human operators to manipulate heavy objects, confirming that the proposed architecture is well suited for collaborative transportation in real-world scenarios.
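The abstract describes assistive forces as the input to a lower-level direct force controller; the mapping below is a guess at the simplest such rule (proportional sharing of the sensed human force, with saturation; the gain and limit are invented values):

    import numpy as np

    def assistive_force(f_human, alpha=0.7, f_max=30.0):
        # The robot supplies a fraction alpha of the sensed interaction
        # force (saturated at f_max newtons), leaving the remainder to
        # the human operator.
        f_assist = alpha * np.asarray(f_human)
        norm = np.linalg.norm(f_assist)
        if norm > f_max:
            f_assist = f_assist * (f_max / norm)
        return f_assist

    print(assistive_force([10.0, 0.0, -25.0]))  # reference for the force controller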
|
|
10:40-10:50, Paper ThAT6.2 | |
Enhancing Contact Stability in Admittance-Type Haptic Interaction Using Bidirectional Time-Domain Passivity Control (I) |
|
Park, Seong-Su (Korea Advanced Institute of Science and Technology), Dinc, Huseyin Tugcan (Korea Advanced Institute of Science and Technology (KAIST)), Lee, Kwang-Hyun (Korea Advanced Institute of Science and Technology), Ryu, Jee-Hwan (Korea Advanced Institute of Science and Technology) |
Keywords: HRI and Collaboration in Manufacturing Environments
Abstract: This paper proposes a novel strategy to enhance admittance-type haptic interaction using bidirectional time-domain passivity control. While admittance-type haptic interaction is widely employed in human-robot collaboration, its ability to render low virtual inertia can be limited, leading to unstable interactions with rigid environments. The proposed strategy seeks to stabilize a lower range of virtual inertia while maintaining responsive behavior in free space. A Franka Emika collaborative robot was used in various experiments to test the approach, and the results indicate that the bidirectional time-domain passivity controller improves interaction performance relative to the conventional unidirectional time-domain passivity approach. This technique may help reduce operator fatigue during the manipulation of heavy, non-backdrivable industrial robots while preserving a lightweight feel.
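A time-domain passivity controller pairs a passivity observer (energy bookkeeping at the interaction port) with a corrective element that dissipates any observed energy surplus. Below is an admittance-type, single-direction sketch; the paper's bidirectional variant monitors both power-flow directions and is not reproduced here:

    def po_pc_step(E, f, v, dt):
        # Passivity observer: accumulate the port energy sum(f * v * dt).
        E += f * v * dt
        dv = 0.0
        # Passivity controller: if the port has generated energy (E < 0),
        # correct the commanded velocity so the correction dissipates -E,
        # restoring E to zero (f * dv * dt == -E).
        if E < 0.0 and abs(f) > 1e-6:
            dv = -E / (f * dt)
            E = 0.0
        return E, v + dv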
|
|
10:50-11:00, Paper ThAT6.3 | |
Depth Image-Based Deformation Estimation of Deformable Objects for Collaborative Mobile Transportation (I) |
|
Nicola, Giorgio (CNR), Mutti, Stefano (CNR STIIMA), Villagrossi, Enrico (Italian National Research Council), Pedrocchi, Nicola (National Research Council of Italy (CNR)) |
Keywords: HRI and Collaboration in Manufacturing Environments, Machine Learning and Adaptation
Abstract: Human-robot collaborative transportation is a promising technology that combines the strengths of humans and robots. The most common approaches rely on methodologies that exploit force sensing. However, such approaches have several drawbacks. First, the magnitude of the applied force may have to be limited to avoid damage. Moreover, force measurements may be unidirectional depending on the material properties; e.g., compression forces are not measurable for fabrics. This paper proposes an approach based on estimating the deformation state of the manipulated object from depth images. Specifically, segmented depth images of the manipulated object are fed to a Convolutional Neural Network (CNN) model to estimate the current deformation state, which is compared with the desired deformation to generate the robot's twist command. The methodology is demonstrated in a mobile robot application in which carbon-fiber fabrics are transported. A comparison with the state of the art shows that the proposed method is more accurate and more repeatable.
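The abstract maps a deformation error to a twist command; a minimal proportional stand-in for that last step (the gain, saturation limit, and sign convention are assumptions, not the authors' controller):

    import numpy as np

    def deformation_to_twist(d_est, d_des, k=0.5, v_max=0.2):
        # Proportional law on the deformation error (estimated by the CNN
        # minus desired), saturated to a safe linear-velocity range.
        err = np.asarray(d_est) - np.asarray(d_des)
        return np.clip(k * err, -v_max, v_max)

    print(deformation_to_twist([0.08, 0.0], [0.02, 0.0]))  # -> [0.03 0.  ]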
|
|
11:00-11:10, Paper ThAT6.4 | |
HRI-Based Gaze-Contingent Eye Tracking for Autism Spectrum Disorder Treatment: A Preliminary Study Using a NAO Robot (I) |
|
Brienza, Michele (University of Basilicata), Laus, Francesco (University of Basilicata), Guglielmi, Vito (University of Basilicata), Carriero, Graziano (University of Basilicata), Sileo, Monica (University of Basilicata), Grisolia, Mariantonietta (IRCCS Fondazione Stella Maris Mediterraneo), Palermo, Giuseppina (IRCCS Fondazione Stella Maris Mediterraneo), Bloisi, Domenico (University of Basilicata), Pierri, Francesco (Università Della Basilicata), Turi, Marco (IRCCS Fondazione Stella Maris Mediterraneo), Muratori, Filippo (IRCCS, Scientific Institute Stella Maris, Pisa)
Keywords: Applications of Social Robots, Assistive Robotics, Child-Robot Interaction
Abstract: Social robots can be used to assist children in managing chronic illness through education and encouragement. In this paper, we present a study on the use of a NAO robot in therapy with children diagnosed with an autism spectrum disorder (ASD). In particular, we propose an approach to track the gaze of the child while she/he is interacting with the robot. We adopt a two-level architecture, in which the high-level tasks in the treatment protocol are decided by the therapist and the robot autonomously performs the low-level tasks. We carried out a preliminary evaluation of the proposed approach involving neurotypical and autistic children.
|
|
11:10-11:20, Paper ThAT6.5 | |
Redundant Multi-DoF Robot Arm Co-Operation through the Body Integration System (I) |
|
Suzuki, Hyuga (Nagoya Institute of Technology), Yukawa, Hikari (Nagoya Institute of Technology), Minamizawa, Kouta (Keio University), Tanaka, Yoshihiro (Nagoya Institute of Technology) |
Keywords: Cooperation and Collaboration in Human-Robot Teams, Embodiment, Empathy and Intersubjectivity, Creating Human-Robot Relationships
Abstract: The human arm has seven DoF; if it were more redundant, workability would improve. However, intuitive manipulation of redundant arms is difficult due to the complexity of the input system and the limitations of the human arm. In this study, we constructed a system in which multiple operators integrate their bodies to intuitively operate a redundant multi-DoF robotic arm. The robotic arm was assembled by connecting two arms, and the roles of each section (distal arm and proximal arm) were assigned to different operators. First, vibrotactile feedback methods based on the operation movement were investigated to compensate for the recognition of the partner's movement. Three types of feedback (the actions of both oneself and one's partner, the actions of the partner only, and none) were compared in order to improve body awareness. Feedback of the movement of the partner operating the proximal robotic arm significantly enhanced the sense of body ownership. Then, the proposed co-operation of the redundant robotic arm was evaluated in terms of workability. Two manipulation methods (a single-operator system that switched the target robotic arm and a multi-DoF body-integration system used by two people) were compared in an object-manipulation task involving obstacle avoidance. The two-person co-operation through the body-integration system was shown to be faster than single-person operation.
|
|
11:20-11:30, Paper ThAT6.6 | |
Visual and Haptic Cues for Human-Robot Handover (I) |
|
Costanzo, Marco (Università degli Studi della Campania "Luigi Vanvitelli"), Natale, Ciro (Università degli Studi della Campania "Luigi Vanvitelli"), Selvaggio, Mario (Università degli Studi di Napoli Federico II)
Keywords: Non-verbal Cues and Expressiveness, HRI and Collaboration in Manufacturing Environments, Cooperation and Collaboration in Human-Robot Teams
Abstract: The adoption of robots outside their cages in conventional industrial scenarios requires not only safe human-robot interaction but also intuitive human-robot communication. In human-robot collaborative tasks, the objective is to help humans perform their job with less physical and cognitive effort. A collaborative task can involve the exchange of objects between the robot and the operator. However, the handover operation should be sufficiently intuitive, fluid, and natural to be accepted by the humans involved. Naturalness strongly depends on the speed of the object exchange and on the way of communicating. For the latter aspect, this paper proposes multi-modal communication based on visual and haptic cues. Concerning the handover speed requirement, the paper proposes high-performance visual servoing based on an Extended Kalman Filter (EKF) that estimates object speed during the handover, together with homography-based object tracking. Object safety is ensured by proper control of the robot grasp force through a model-based approach exploiting tactile measurements. The same perception modality is also used as a source of haptic cues that make the handover intuitive and natural. Experiments on human-robot handovers with haptic and visual cue communication demonstrate the effectiveness of the proposed approach.
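With a constant-velocity motion model, the speed-estimating filter reduces to a standard Kalman recursion; the sketch below is a generic one-axis version fed by the tracked object position (the paper's actual EKF and measurement model are not reproduced, and the noise values are assumptions):

    import numpy as np

    def kf_cv_step(x, P, z, dt, q=1e-3, r=1e-4):
        # Constant-velocity model: state x = [position, velocity];
        # z is the position measured by the homography-based tracker.
        F = np.array([[1.0, dt], [0.0, 1.0]])
        H = np.array([[1.0, 0.0]])
        Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
        x = F @ x                      # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + r            # update
        K = P @ H.T / S
        x = x + (K * (z - H @ x)).ravel()
        P = (np.eye(2) - K @ H) @ P
        return x, P

    x, P = np.array([0.0, 0.0]), np.eye(2)
    for k in range(50):                # object moving at 0.1 m/s, sampled at 50 Hz
        x, P = kf_cv_step(x, P, np.array([0.1 * k * 0.02]), dt=0.02)
    print(f"estimated speed: {x[1]:.3f} m/s")  # -> close to 0.1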
|
|
ThWT1 |
Room T1 |
Trust, Acceptance and Social Cues in Human-Robot Interaction - SCRITA |
Workshop or Tutorial Session |
|
Paper ThWT1 | |
Trust, Acceptance and Social Cues in Human-Robot Interaction - SCRITA |
|
Rossi, Alessandra (University of Naples Federico II), Holthaus, Patrick (University of Hertfordshire), Lakatos, Gabriella (University of Hertfordshire), Moros, Sílvia (University of Hertfordshire), Riches, Lewis (University of Hertfordshire) |
|
ThWT2 |
Room T2 |
7th Workshop on Behavior Adaptation, Interaction and Learning for Assistive Robotics (BAILAR) |
Workshop or Tutorial Session |
|
Paper ThWT2 | |
7th Workshop on Behavior Adaptation, Interaction and Learning for Assistive Robotics (BAILAR) |
|
Staffa, Mariacarla (University of Naples Parthenope), Rossi, Silvia (Università di Napoli Federico II), Sciutti, Alessandra (Italian Institute of Technology), Winkle, Katie (Uppsala University)
|
ThWT3 |
Room T3 |
Robots for Learning (R4L): AI to Power Robots |
Workshop or Tutorial Session |
|
Paper ThWT3 | |
Robots for Learning (R4L): AI to Power Robots |
|
Carnieto Tozadore, Daniel (École Polytechnique Fédérale de Lausanne (EPFL)), Nasir, Jauwairia (University of Augsburg), Kanero, Junko (Sabanci University), Neumann, Michelle (Southern Cross University, Gold Coast), Neerincx, Mark (TNO), Johal, Wafa (University of New South Wales) |
|
ThWT4 |
Room T4 |
Researching Diversity and Inclusion in Human-Robot Interaction: Methodological, Technical and Ethical Considerations (divHRI) |
Workshop or Tutorial Session |
|
Paper ThWT4 | |
Researching Diversity and Inclusion in Human-Robot Interaction: Methodological, Technical and Ethical Considerations (divHRI) |
|
Straßmann, Carolin (University of Applied Sciences Ruhr West), Eimler, Sabrina C. (Hochschule Ruhr West, University of Applied Sciences), Arntz, Alexander (University of Applied Sciences Ruhr West), Helgert, André (University of Applied Sciences Ruhr West), Timm, Lara (University of Applied Sciences Ruhr West)
|
ThWT5 |
Room T5 |
Multidisciplinary Perspectives on Context-Aware Embodied Spoken Interactions (MP-COSIN) |
Workshop or Tutorial Session |
|
Paper ThWT5 | |
Multidisciplinary Perspectives on Context-Aware Embodied Spoken Interactions (MP-COSIN) |
|
Cumbal, Ronald (KTH Royal Institute of Technology), Axelsson, Agnes (KTH Royal Institute of Technology), Pelikan, Hannah (Linköping University), Lala, Divesh (Kyoto University), Reimann, Merle (Vrije Universiteit Amsterdam), Gervits, Felix (DEVCOM Army Research Laboratory), Engwall, Olov (KTH Royal Institute of Technology) |
| |