Last updated on August 21, 2022. This conference program is tentative and subject to change.

Technical Program for Thursday, September 1, 2022

Th101
Auditorium
HRI and Collaboration in Manufacturing Environments
Regular Session
Chair: Iovino, Matteo | ABB Corporate Research
Co-Chair: Orlandini, Andrea | National Research Council of Italy

08:40-08:52, Paper Th101.1
CoCo Games: Graphical Game-Theoretic Swarm Control for Communication-Aware Coverage (I)
Fernando, Malintha (Indiana University), Senanayake, Ransalu (Stanford University), Swany, Martin (Indiana University) |

08:52-09:04, Paper Th101.2
A Contact-Adaptive Control Framework for Co-Manipulation Tasks with Application to Collaborative Screwing
Villa, Nicola (Istituto Italiano Di Tecnologia), Mobedi, Emir (Istituto Italiano Di Tecnologia), Ajoudani, Arash (Istituto Italiano Di Tecnologia) |
Keywords: HRI and Collaboration in Manufacturing Environments, Cooperation and Collaboration in Human-Robot Teams, Degrees of Autonomy and Teleoperation
Abstract: This paper proposes a novel framework for robotic manipulation tasks, exploiting the Human-Robot Collaboration (HRC) potential. The framework integrates two adaptive controllers to i) modulate robot compliance in contact with the environment along constrained directions, and to ii) enable human guidance through touch when a manual intervention is needed. To demonstrate the potential of the proposed framework, we consider a collaborative screwing task. In this example application, the operator is in charge of placing the screws on the table and following the instructions on a graphical user interface. The robot, after identifying the position of the screws through an online human pose-tracking system, performs the screwing using the proposed controller. The human operator can adjust the screwing position of the robot using the adaptive interface at any time if the position accuracy through vision is insufficient. We first experimentally evaluate the operation of the proposed controller and demonstrate its performance in comparison to classical impedance control. Next, the overall system is evaluated in a collaborative (human and robot) setting.
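To make the control idea concrete, here is a minimal sketch of a classical Cartesian impedance law plus a hypothetical stiffness-adaptation step along a constrained direction; the gains, the 3-DoF state, and the adapt_stiffness helper are illustrative assumptions, not the paper's actual controller.

```python
import numpy as np

def impedance_force(x, x_des, v, v_des, K, D):
    """Classical Cartesian impedance law: F = K (x_des - x) + D (v_des - v)."""
    return K @ (x_des - x) + D @ (v_des - v)

def adapt_stiffness(K, n, soft=50.0):
    """Hypothetical adaptation: soften stiffness along a detected
    constrained direction n while keeping K along the other axes."""
    n = n / np.linalg.norm(n)
    P = np.outer(n, n)                     # projector onto the constrained axis
    return soft * P + K @ (np.eye(3) - P)
```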

09:04-09:16, Paper Th101.3
Fast Contact Detection and Classification for Kinesthetic Teaching in Robots Using Only Embedded Sensors
Salt Ducaju, Julian Mauricio (LTH, Lund University), Olofsson, Bjorn (Lund University), Robertsson, Anders (LTH, Lund University), Johansson, Rolf (Lund University) |
Keywords: HRI and Collaboration in Manufacturing Environments, Cooperation and Collaboration in Human-Robot Teams, Programming by Demonstration
Abstract: Collaborative robots have been designed to perform tasks where human cooperation may occur. Additionally, undesired collisions can happen in the robot's environment. A contact classifier may be needed if robot trajectory recalculation is to be activated depending on the source of robot-environment contact. For this reason, we have evaluated a fast contact detection and classification method, and we propose the modifications and extensions necessary for it to detect a contact in any direction and to distinguish whether the contact was caused by voluntary human cooperation or by accidental collision with a static obstacle in kinesthetic teaching applications. Robot compliance control is used for trajectory following as an active strategy to ensure safety of the robot and its environment. Only sensor data that are conventionally available in commercial collaborative robots, such as joint-torque sensors and joint-position encoders/resolvers, are used in our method. Moreover, fast contact detection is ensured by using the frequency content of the estimated external forces, whereas the external force direction and sense relative to the robot's motion are used to classify the contact source. Our method has been experimentally proven successful in a collaborative assembly task for a number of different experimentally recorded trajectories and with the intervention of different operators.
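A minimal sketch of this two-stage idea, assuming a windowed FFT for the detection stage and a force-velocity alignment test for classification; the frequency band, threshold, and decision rule below are illustrative, not the paper's tuned values.

```python
import numpy as np

def detect_and_classify_contact(f_ext, v, fs, band=(10.0, 80.0), thresh=2.0):
    """Sketch: flag a contact when the band-limited energy of the estimated
    external force exceeds a threshold, then classify it by the force
    direction relative to the robot's motion (aligned -> guidance,
    opposing -> collision). f_ext and v are (N, 3) windows sampled at fs."""
    spectrum = np.fft.rfft(f_ext, axis=0)
    freqs = np.fft.rfftfreq(f_ext.shape[0], d=1.0 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    energy = np.abs(spectrum[mask]).sum()      # high-frequency force content
    if energy < thresh:
        return "no contact"
    alignment = np.mean(np.sum(f_ext * v, axis=1))   # mean force-velocity dot product
    return "human guidance" if alignment > 0 else "accidental collision"
```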

09:16-09:28, Paper Th101.4
Dirichlet-Based Dynamic Movement Primitives for Encoding Periodic Motions with Predefined Accuracy
Papageorgiou, Dimitrios (Aristotle University of Thessaloniki), Argiropoulos, Despina-Ekaterini (Institute of Computer Science, Foundation for Research and Technology - Hellas (FORTH)), Doulgeri, Zoe (Aristotle University of Thessaloniki)
Keywords: Programming by Demonstration
Abstract: In this work the utilization of Dirichlet (periodic sinc) basis functions in DMPs for encoding periodic motions is proposed. By utilizing such kernels, we are able to analytically compute the minimum required number of kernels based only on the predefined accuracy, which is a hyperparameter that can be intuitively selected. The computation of the minimum required number of kernels is based on the frequency content of the demonstrated motion. The learning procedure essentially consists of the sampling of the demonstrated trajectory. The approach is validated through simulations and experiments with the KUKA LWR4+ robot, which show that, utilizing the automatically calculated number of basis functions, the predefined accuracy is achieved by the proposed DMP model.
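The kernel-count idea can be sketched as follows; the residual-energy criterion and eps are our illustrative reading of "predefined accuracy" (and assume eps is attainable), not the paper's exact derivation.

```python
import numpy as np

def min_kernels(y_demo, eps):
    """Sketch: pick the number of Dirichlet (periodic-sinc) kernels from the
    demonstration's frequency content so that the energy of the discarded
    harmonics stays below eps**2."""
    c = np.fft.rfft(y_demo) / len(y_demo)   # Fourier coefficients of one period
    energy = np.abs(c) ** 2
    tail = energy[::-1].cumsum()[::-1]      # tail[k] = energy of harmonics >= k
    K = int(np.argmax(tail < eps ** 2))     # first harmonic with a small tail
    return 2 * K + 1                        # DC term plus a sin/cos pair per harmonic
```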

09:28-09:40, Paper Th101.5
Combining Context Awareness and Planning to Learn Behavior Trees from Demonstration
Gustavsson, Oscar (KTH Royal Institute of Technology), Iovino, Matteo (ABB Corporate Research), Styrud, Jonathan (ABB), Smith, Claes Christian (KTH Royal Institute of Technology) |
Keywords: Programming by Demonstration, HRI and Collaboration in Manufacturing Environments, Cooperation and Collaboration in Human-Robot Teams
Abstract: Fast-changing tasks in unpredictable, collaborative environments are typical for small and medium-sized companies, where robotised applications are increasing. Thus, robot programs should be generated in a short time with small effort, and the robot should be able to react dynamically to the environment. To address this, we propose a method that combines context awareness and planning to learn Behavior Trees (BTs), a reactive policy representation that is becoming more popular in robotics and has been used successfully in many collaborative scenarios. Context awareness allows for inferring from the demonstration the frames in which actions are executed and for capturing relevant aspects of the task, while a planner is used to automatically generate the BT from the sequence of actions in the demonstration. The learned BT is shown to solve non-trivial manipulation tasks where learning the context is fundamental to achieving the goal. Moreover, we collected non-expert demonstrations to study the performance of the algorithm in industrial scenarios.

09:40-09:52, Paper Th101.6
Towards Transferring Human Preferences from Canonical to Actual Assembly Tasks
Nemlekar, Heramb (University of Southern California), Guan, Runyu (University of Southern California), Luo, Guanyang (University of Southern California), Gupta, Satyandra K. (University of Southern California), Nikolaidis, Stefanos (University of Southern California) |
Keywords: Programming by Demonstration, HRI and Collaboration in Manufacturing Environments, Machine Learning and Adaptation
Abstract: To assist human users according to their individual preference in assembly tasks, robots typically require user demonstrations in the given task. However, providing demonstrations in actual assembly tasks can be tedious and time-consuming. Our thesis is that we can learn the preference of users in actual assembly tasks from their demonstrations in a representative canonical task. Inspired by prior work in economy of human movement, we propose to represent user preferences as a linear reward function over abstract task-agnostic features, such as movement and physical and mental effort required by the user. For each user, we learn the weights of the reward function from their demonstrations in a canonical task and use the learned weights to anticipate their actions in the actual assembly task, without any user demonstrations in the actual task. We evaluate our proposed method in a model-airplane assembly study and show that preferences can be effectively transferred from canonical to actual assembly tasks, enabling robots to anticipate user actions.
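One simple way to fit such linear reward weights from demonstrated action choices is a perceptron-style update that pushes each demonstrated action's features above the alternatives available at that step; this is an illustrative learner under that assumption, not the authors' exact method.

```python
import numpy as np

def learn_weights(demo_feats, alt_feats, lr=0.1, epochs=50):
    """demo_feats: (T, d) features of the chosen actions; alt_feats: list of
    (m_t, d) feature arrays for the alternatives at each step."""
    w = np.zeros(demo_feats.shape[1])
    for _ in range(epochs):
        for phi_demo, phi_alts in zip(demo_feats, alt_feats):
            best_alt = phi_alts[np.argmax(phi_alts @ w)]
            if best_alt @ w >= phi_demo @ w:        # demo not preferred yet
                w += lr * (phi_demo - best_alt)     # perceptron-style correction
    return w
```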

09:52-10:04, Paper Th101.7
Multi-Perspective Human Robot Interaction through an Augmented Video Interface Supported by Deep Learning
da Silva Filho, José Grimaldo (University Grenoble Alpes - INRIA), Rekik, Khansa (ZeMA gGmbH), Kanso, Ali (ZeMA gGmbH), Schnitman, Leizer (Universidade Federal Da Bahia)
Keywords: HRI and Collaboration in Manufacturing Environments, Machine Learning and Adaptation, Motion Planning and Navigation in Human-Centered Environments
Abstract: As the world surpasses a billion cameras and their coverage of public and private spaces increases, the possibility of using their visual feed not just to observe, but to command robots through their video becomes an ever more interesting prospect. Our work deals with multi-perspective interaction, where a robot autonomously maps image pixels from reachable cameras to positions in its global coordinate space. This enables an operator to send the robot to specific positions seen in a camera image with no manual calibration. Furthermore, robot information, such as planned paths, can be used to augment all affected camera images with an overlaid projection of this visual information. The robustness of this approach has been validated in both simulated and real-world experiments.
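The pixel-to-workspace mapping can be illustrated with a ground-plane homography, assuming at least four pixel/world correspondences on the floor; the point values below are placeholders (the paper obtains its correspondences autonomously).

```python
import numpy as np
import cv2

# Placeholder correspondences: image pixels and matching floor positions (m)
px = np.array([[100, 420], [520, 415], [310, 220], [60, 250]], dtype=np.float32)
world = np.array([[0.0, 0.0], [2.0, 0.0], [1.0, 3.0], [-0.5, 2.5]], dtype=np.float32)
H, _ = cv2.findHomography(px, world)

def pixel_to_world(u, v):
    """Map a clicked camera pixel to ground-plane robot coordinates."""
    p = H @ np.array([u, v, 1.0])
    return p[:2] / p[2]          # normalize homogeneous coordinates
```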

10:04-10:15, Paper Th101.8
Virtual Barriers in Augmented Reality for Safe and Effective Human-Robot Cooperation in Manufacturing
Hoang, Khoa Cong (Monash University), Chan, Wesley Patrick (Monash University), Lay, Steven (Monash University), Cosgun, Akansel (Monash University), Croft, Elizabeth (Monash University) |
Keywords: HRI and Collaboration in Manufacturing Environments, Virtual and Augmented Tele-presence Environments, Cooperation and Collaboration in Human-Robot Teams
Abstract: Safety is a fundamental requirement in any human-robot collaboration scenario. To ensure the physical and psychological safety of users in such scenarios, we propose a novel Virtual Barrier system facilitated by an augmented reality interface. Our system provides two types of Virtual Barriers to ensure safety: 1) a Virtual Person Barrier which encapsulates and follows the user to protect them from collisions with the robot, and 2) Virtual Obstacle Barriers which users can spawn to protect objects or regions that the robot should not enter. Our system utilizes augmented reality to visually display these protective barriers to the user during operation. To enable effective human-robot collaboration, our system automatically replans the robot's motion when potential collisions are detected as a result of a barrier intersecting the robot's planned path. We compared our novel system with a standard 2D display interface through a user study, where participants performed a task mimicking an industrial manufacturing procedure. Results show that our system increases both physical and psychological safety and task efficiency, and makes the interaction more intuitive.

Th102
Aragonese/Catalana
Nonverbal Communication Skills in Humans and Robots
Special Session
Chair: Celiktutan, Oya | King's College London
Co-Chair: Tuyen, Nguyen Tan Viet | King's College London
Organizer: Celiktutan, Oya | King's College London
Organizer: Tuyen, Nguyen Tan Viet | King's College London
Organizer: Chamoux, Marine | SoftBank Robotics Europe
Organizer: Georgescu, Alexandra | King's College London
Organizer: Cukurova, Mutlu | University College London
Organizer: Lison, Pierre | Norwegian Computing Center (NR)
Organizer: Ahn, Ho Seok | The University of Auckland, Auckland

08:40-08:52, Paper Th102.1
Nonverbal Cues Expressing Robot Personality - a Movement Analysts Perspective (I)
van Otterdijk, Maria Theodorus Henricus (University of Oslo), Song, Heqiu (Eindhoven University of Technology), Tsiakas, Konstantinos (Delft University of Technology), van Zeijl, Ilka Talin (Eindhoven University of Technology, Department of Industrial Design), Barakova, Emilia I. (Eindhoven University of Technology)
Keywords: Innovative Robot Designs, Personalities for Robotic or Virtual Characters, Non-verbal Cues and Expressiveness
Abstract: In social robotics, where people and robots interact in a social context, robot personality design is critical. Through voice, words, gestures, and nonverbal cues, social robots with expressive behaviors can display human-like actions, and the robot's personality will ensure consistency. This research aims to create robot personalities expressed only by nonverbal cues. Unlike existing studies that test expressive behaviors with non-specialized participants, we look at how and why human movement analysts perceive distinct personalities in robots (introvert vs. extrovert) based on the robot's movement and other dynamic features, such as joint position, head and torso position, voice pitch, speed, and so on. We report the findings of a thematic analysis of the data obtained during a focus group with movement analysis experts who watched Pepper robot behaviors designed to be extrovert and introvert. Our findings lead to new guidelines for designing different robot movement features, including body symmetry, personality trait consistency, and social cue congruence during an interaction, all emphasized by the movement analysts. Finally, we summarize the design principles for extrovert and introvert robot behaviors based on the combined findings of the focus group data analysis and literature review.

08:52-09:04, Paper Th102.2
Familiar Acoustic Cues for Legible Service Robots (I)
Angelopoulos, Georgios (Interdepartmental Center for Advances in Robotic Surgery - ICAROS), Vigni, Francesco (University of Naples Federico II), Rossi, Alessandra (University of Naples Federico II), Russo, Giuseppina (University of Naples Federico II), Turco, Mario (University of Naples Federico II), Rossi, Silvia (University of Naples Federico II)
Keywords: Non-verbal Cues and Expressiveness, Sound design for robots, Curiosity, Intentionality and Initiative in Interaction
Abstract: When navigating in a shared environment, the extent to which robots are able to effectively use signals for coordinating with human behaviors can ameliorate dissatisfaction and increase acceptance. In this paper, we present an online video study to investigate whether familiar acoustic signals can improve the legibility of a robot's navigation behavior. We collected the responses of 120 participants to evaluate their perceptions of a robot that communicates with one of three non-verbal navigational cues (an acoustic signal, an acoustic signal paired with a visual signal, and an acoustic signal of dissimilar frequency). Our results showed a significant legibility improvement when the robot used both light and acoustic signals to communicate its intentions compared to using only the same acoustic sound. Additionally, our findings highlighted that people perceived the robot's intentions differently when they were expressed through the two frequencies of sound alone. The results of this work suggest a paradigm that can help the development of mobile service robots in public spaces.

09:04-09:16, Paper Th102.3
From Message to Expression: Exploring Non-Verbal Communication for Appearance-Constrained Robots (I)
Sanoubari, Elaheh (Google), David, Byron (Google), Kew, J. Chase (Google Robotics), Cunningham, Corbin (Google), Caluwaerts, Ken (Google) |
Keywords: Non-verbal Cues and Expressiveness, Innovative Robot Designs, HRI and Collaboration in Manufacturing Environments
Abstract: Human-robot communication is key to establishing transparency of robot states, and promoting psychological safety of the people interacting with a robot. Such communication is especially challenging for appearance-constrained robots, as they cannot employ commonly-used anthropomorphic modalities such as facial expressions. We explore how an appearance-constrained quadruped robot can use LEDs and gestures to generate affective expressions in order to communicate non-verbally. Traditionally, affect has been used as a signalling paradigm in Human-Robot Interaction where expressions are designed with the goal of communicating a particular emotion such as happiness. Instead, this work explores a generalizable mapping from informative messages (e.g., "I am listening") to abstract affective expressions. We modulate the expressions by a model of affect, taking inspiration from Affect Control Theory (ACT). To explore designing the expressions, we first conducted pilot semi-structured interviews consulting stakeholders who regularly operate appearance-constrained robots (N=12) to understand the communication requirements. Then, we designed a set of six affective expressions and evaluated them in a crowdsourced study (N=450). Findings suggest that the expressions can significantly improve effective communication of a robot's awareness and intent, and promote psychological safety of people interacting with it.

09:16-09:28, Paper Th102.4
Knowing Where to Look: A Planning-Based Architecture to Automate the Gaze Behavior of Social Robots (I)
Mishra, Chinmaya (Max Planck Institute for Psycholinguistics), Skantze, Gabriel (KTH) |
Keywords: Non-verbal Cues and Expressiveness, Social Intelligence for Robots, Computational Architectures
Abstract: Gaze cues play an important role in human communication and are used to coordinate turn-taking and joint attention, as well as to regulate intimacy. In order to have fluent conversations with people, social robots need to exhibit human-like gaze behavior. Previous Gaze Control Systems (GCS) in HRI have automated robot gaze using data-driven or heuristic approaches. However, these systems tend to be mainly reactive in nature. Planning the robot gaze ahead of time could help in achieving more realistic gaze behavior and better eye-head coordination. In this paper, we propose and implement a novel planning-based GCS. We evaluate our system in a comparative within-subjects user study (N=26) between a reactive system and our proposed system. The results show that the users preferred the proposed system and that it was significantly more interpretable and better at regulating intimacy.

09:28-09:40, Paper Th102.5
Towards an Automatic Generation of Natural Gestures for a Storyteller Robot (I)
Zabala Cristobal, Unai (University of the Basque Country (UPV/EHU)), Rodriguez, Igor (University of Basque Country), Lazkano, Elena (University of Basque Country) |
Keywords: Storytelling in HRI, Non-verbal Cues and Expressiveness, Applications of Social Robots
Abstract: Natural gesturing is very important for the credibility of social robots. It is even more crucial for storytelling robots, since expression, emotion and emphasis must be highlighted. In this paper we propose a hybrid gesture generation approach for a storytelling robot that combines beats automatically generated by a GAN with a probabilistic semantic-related gesture insertion system. Beats are executed according to a probability based on the duration of the sentences, and semantic gesture insertions are dependent on the previous occurrences of the gestures associated with the words. The polarity of the text is extracted and affects several features of the motion to arouse emotion. A qualitative evaluation of robot behavior is conducted and confirms the approach as a promising one for a storytelling system.

Th103
Sveva/Normanna
Cooperation and Collaboration in Human-Robot Teams II
Regular Session
Chair: Lalitharatne, Thilina Dulantha | Imperial College London
Co-Chair: Lorenzini, Marta | Istituto Italiano Di Tecnologia

08:40-08:52, Paper Th103.1
Continuous and Incremental Learning in Physical Human-Robot Cooperation Using Probabilistic Movement Primitives
Schäle, Daniel (Western Norway University of Applied Sciences), Stoelen, Martin F. (University of Plymouth), Kyrkjebø, Erik (Western Norway University of Applied Sciences) |
Keywords: Cooperation and Collaboration in Human-Robot Teams, Programming by Demonstration, HRI and Collaboration in Manufacturing Environments
Abstract: For a successful deployment of physical Human-Robot Cooperation (pHRC), humans need to be able to teach robots new motor skills quickly. Probabilistic movement primitives (ProMPs) are a promising method to encode a robot’s motor skills learned from human demonstrations in pHRC settings. However, most algorithms to learn ProMPs from human demonstrations operate in batch mode, which is not ideal in pHRC when we want humans and robots to work together from even the first demonstration. In this paper, we propose a new learning algorithm to learn ProMPs incrementally and continuously in pHRC settings. Our algorithm incorporates new demonstrations sequentially as they arrive, allowing humans to observe the robot’s learning progress and incrementally shape the robot’s motor skill. A built-in forgetting factor allows for corrective demonstrations resulting from the human’s learning curve or changes in task constraints. We compare the performance of our algorithm to existing batch ProMP algorithms on reference data generated from a pick-and-place task at our lab. Furthermore, we demonstrate how the forgetting factor allows us to adapt to changes in the task. The incremental learning algorithm presented in this paper has the potential to lead to a more intuitive learning progress and to establish a successful cooperation between human and robot faster than training in batch mode.
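A minimal sketch of sequentially updating a Gaussian over ProMP weight vectors with exponential forgetting; the update rules below are an illustrative online mean/covariance estimator under that assumption, not the paper's exact algorithm.

```python
import numpy as np

class IncrementalProMPWeights:
    """Running Gaussian over ProMP weight vectors with forgetting factor
    lam in (0, 1]; smaller lam discounts old demonstrations faster."""
    def __init__(self, dim, lam=0.95):
        self.lam, self.n = lam, 0.0
        self.mu = np.zeros(dim)
        self.cov = 1e-2 * np.eye(dim)   # small prior keeps the covariance usable

    def add_demo(self, w):
        self.n = self.lam * self.n + 1.0   # effective number of (discounted) demos
        eta = 1.0 / self.n                 # step size shrinks as demos accumulate
        self.mu = (1.0 - eta) * self.mu + eta * w
        d = w - self.mu
        self.cov = (1.0 - eta) * self.cov + eta * np.outer(d, d)
```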

08:52-09:04, Paper Th103.2
Reimagining RViz: Multidimensional Augmented Reality Robot Signal Design
Groechel, Thomas (University of Southern California), O'Connell, Allison (University of Southern California), Nigro, Massimiliano (Politecnico Di Milano), Mataric, Maja (University of Southern California)
Keywords: Virtual and Augmented Tele-presence Environments, User-centered Design of Robots, Anthropomorphic Robots and Virtual Humans
Abstract: From RViz to augmented reality (AR), a wide variety of robot signal visualizations exist for conveying robot capabilities. Many of the visualizations designed for AR, however, have not isolated multiple salient Virtual Design Elements (VDEs) for a given signal and comparatively evaluated combinations of those VDEs. To address this, we identify multiple VDEs for AR signaling of the following core robot capabilities: navigation, light detection and ranging (LiDAR), camera, face detection, audio localization, and natural language processing. We evaluated each signal's VDE combinations with an Amazon Mechanical Turk study (n=150) where participants watched 4 videos for each signal (consisting of 2 independent VDE choices) and rated the clarity and visual appeal of each signal. The results define a set of the most clear and visually appealing signal visualization designs and inform about interaction effects among VDEs. The resulting VDEs offer design insights and a baseline for continued research into AR robot capability signalling.

09:04-09:16, Paper Th103.3
Detachable Smart Teaching Device for the Easy and Safe Operation of Robot Manipulator
Do, Hyun Min (Korea Institute of Machinery and Materials), Kim, Hwi-su (Korea Institute of Machinery & Materials), Kim, Uikyum (Ajou University), Choi, Taeyong (KIMM), Park, Jongwoo (Korea Institute of Machinery & Materials)
Keywords: Cooperation and Collaboration in Human-Robot Teams, HRI and Collaboration in Manufacturing Environments, Human Factors and Ergonomics
Abstract: Increasing demand for automation and recent contactless trends have accelerated the employment of robots in manufacturing and service areas. Thus, there is a growing demand for easier and safer operation of robots, both by experts and non-professionals. This paper proposes a detachable smart robot teaching device with collision prediction to meet such demands. The proposed device is composed of two modules. One is for easy teaching based on a 6D mouse, and the other is for safe operation by measuring the distance to the surrounding obstacles and predicting collision situations. The teaching module is used only in the teaching phase and can be separated afterward. A prototype is implemented, and its performance is verified through experiments with a collaborative robot.

09:16-09:28, Paper Th103.4
Perceptions of a Robot’s Mental States Influence Performance in a Collaborative Task for Males and Females Differently
Siri, Giulia (Istituto Italiano Di Tecnologia), Abubshait, Abdulaziz (Italian Institute of Technology), De Tommaso, Davide (Istituto Italiano Di Tecnologia), Cardellicchio, Pasquale (Istituto Italiano Di Tecnologia), D'Ausilio, Alessandro (University of Ferrara & CTNSC - Italian Institute of Technology), Wykowska, Agnieszka (Istituto Italiano Di Tecnologia) |
Keywords: Cooperation and Collaboration in Human-Robot Teams, Curiosity, Intentionality and Initiative in Interaction, Embodiment, Empathy and Intersubjectivity
Abstract: With the increasing use of social robots and automated machines in our daily lives, roboticists need to design robots that are suitable for human-robot collaboration. Prior work suggests that robots that are perceived to be intentional (i.e., are able to experience mental life capacities) can, in most cases, positively affect human-robot collaboration. Given studies highlighting the importance of individual differences and how they drive our perception, we aimed to investigate how gender moderates the relationship between subjective perceptions of robots and behavioral performance in a human-robot collaborative task. Participants rated a humanoid robot (i.e., iCub) on whether it can experience mental life capacities and completed a collaborative task with it. We correlated their subjective ratings with the completion time of the collaborative task and found a positive correlation between perceiving iCub to experience basic and social emotion and their performance (i.e., movement times). This relationship, however, was evident for males but not females. The results of this study suggest that perceiving humanoid robots as capable of experiencing mental states influences collaborative performance differently depending on gender. These findings can be relevant for the field of social robotics and for successfully designing robot interaction partners for workplaces.

09:28-09:40, Paper Th103.5
Task Selection and Planning in Human-Robot Collaborative Processes: To Be a Leader or a Follower?
Noormohammadi-Asl, Ali (University of Waterloo), Ayub, Ali (University of Waterloo), Smith, Stephen L. (University of Waterloo), Dautenhahn, Kerstin (University of Waterloo) |
Keywords: Cooperation and Collaboration in Human-Robot Teams, Machine Learning and Adaptation, Detecting and Understanding Human Activity
Abstract: Recent advances in collaborative robots have provided an opportunity for the close collaboration of humans and robots in a shared workspace. To exploit this collaboration, robots need to plan for optimal team performance while considering human presence and preference. This paper studies the problem of task selection and planning in a collaborative, simulated scenario. In contrast to existing approaches, which mainly involve assigning tasks to agents by a task allocation unit and informing them through a communication interface, we give the human and robot the agency to be the leader or follower. This allows them to select their own tasks or even assign tasks to each other. We propose a task selection and planning algorithm that enables the robot to consider the human's preference to lead, as well as the team and the human's performance, and adapts itself accordingly by taking or giving the lead. The effectiveness of this algorithm has been validated through a simulation study with different combinations of human accuracy levels and preferences for leading.

09:40-09:52, Paper Th103.6
Explainability in Collaborative Robotics: The Effect of Informing the User on Task Performance and Trust
Adamik, Mark (Aalborg University), Madsen, Asger Printz (Aalborg University), Rehm, Matthias (Aalborg University) |
Keywords: Cooperation and Collaboration in Human-Robot Teams, HRI and Collaboration in Manufacturing Environments
Abstract: In order to test how explanations affect a user working together with a collaborative robot, we created a test scenario in which a user sorts trash together with the robot. Sometimes the robot is not able to fulfill its part of the task. Different modalities (textual, graphical, both) for explaining this error to the user are tested in a between-subjects design, and the effects on task performance, cognitive load and trust are analyzed.

09:52-10:04, Paper Th103.7
Perceptual Deadband for Haptic Data Compression: Symmetric or Asymmetric?
Bhardwaj, Amit (Indian Institute of Technology Jodhpur, India) |
Keywords: Degrees of Autonomy and Teleoperation, Virtual and Augmented Tele-presence Environments, Machine Learning and Adaptation
Abstract: In the literature, the perceptual deadband approach based on Weber's law of perception has been employed to reduce the haptic data (i.e., force) rate for a typical teleoperation application. The approach selects for transmission only those samples which lie outside the perceptual deadband. The existing structure of the deadband has linear decision boundaries and assumes that the just noticeable differences (JNDs) for increasing and decreasing changes in a reference force stimulus are similar. This paper questions this assumption and searches for an asymmetric perceptual deadband using a data-driven approach. For this purpose, we design an experimental setup for collecting the haptic responses (perceived and non-perceived) of several users for a force range of [3, 5] N. A machine learning classifier inspired by Weber's law is trained to predict the labels of the responses and define a generalized linear perceptual deadband for each user. The results show that the generalized deadband does provide different increasing and decreasing JNDs (i.e., pointing towards asymmetry in force perception), but fails to improve data reduction significantly compared to the existing symmetric one.
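The symmetric baseline the paper starts from can be sketched in a few lines; k_up and k_down are the Weber fractions for increasing and decreasing forces, and setting them unequal gives the asymmetric variant the paper investigates (the 10% values are illustrative, not the paper's JNDs).

```python
def deadband_transmit(samples, k_up=0.10, k_down=0.10):
    """Weber-law perceptual deadband: transmit a force sample only when it
    deviates from the last transmitted sample by more than a fraction of it.
    k_up == k_down gives the classical symmetric deadband."""
    sent = [samples[0]]
    for f in samples[1:]:
        ref = sent[-1]
        if f > ref * (1 + k_up) or f < ref * (1 - k_down):
            sent.append(f)       # perceptually significant change: transmit
    return sent
```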

10:04-10:15, Paper Th103.8
Visualizing Robot Intent for Object Handovers with Augmented Reality
Newbury, Rhys (Monash University), Cosgun, Akansel (Monash University), Crowley-Davis, Tysha (Monash University), Chan, Wesley Patrick (Monash University), Drummond, Tom (Monash University), Croft, Elizabeth (Monash University) |
Keywords: Virtual and Augmented Tele-presence Environments, Cooperation and Collaboration in Human-Robot Teams
Abstract: Humans are highly skilled in communicating their intent for when and where a handover will occur. However, even state-of-the-art robotic implementations of handovers typically lack such communication skills. This study investigates visualization of the robot’s internal state and intent for Human-to-Robot Handovers using Augmented Reality. Specifically, we explore the use of visualized 3D models of the object and the robotic gripper to communicate the robot’s estimation of where the object is and the pose in which the robot intends to grasp it. We tested this design via a user study with 16 participants, in which each participant handed over a cube-shaped object to the robot 12 times. Results show that communicating robot intent via augmented reality substantially improves the perceived experience of the users for handovers. Results also indicate that the effectiveness of augmented reality is even more pronounced for the perceived safety and fluency of the interaction when the robot makes errors in localizing the object.

Th401
Auditorium
HRI and Collaboration in Manufacturing Environments II
Regular Session
Chair: Zanchettin, Andrea Maria | Politecnico Di Milano
Co-Chair: Saliba, Michael A. | University of Malta

10:45-10:57, Paper Th401.1
A Mixed Capability-Based and Optimization Methodology for Human-Robot Task Allocation and Scheduling
Monguzzi, Andrea (Politecnico Di Milano), Badawi, Mahmoud (Politecnico Di Milano), Zanchettin, Andrea Maria (Politecnico Di Milano), Rocco, Paolo (Politecnico Di Milano) |
Keywords: HRI and Collaboration in Manufacturing Environments, Human Factors and Ergonomics, Cooperation and Collaboration in Human-Robot Teams
Abstract: In this work, we address two crucial issues that arise in the design of a human-robot collaborative station for the assembly of products: the optimal task allocation and the scheduling problem. We propose an offline method to solve the two issues in sequence, considering a static allocation and taking into account several features such as the minimization of postural discomfort, operation processing times, idle times and hence the total cycle time. Our methodology consists of a mixed approach that combines a capability-based method, where the agents’ capabilities are tested against a list of predefined criteria, with optimization. In particular, we formulate a modified version of the Hungarian Algorithm to solve also unbalanced assignment problems, where the number of tasks differs from the number of agents. The scheduling policy is obtained by means of a Mixed Integer Linear Programming (MILP) formulation with a multi-objective optimization. Moreover, the concepts of operation, assembly tree and precedence graph are formalized, since they represent the inputs to our method, together with the information on the workstation layout and on the selected kind of robot. Finally, the proposed solution is applied to a case study to define the optimal task allocation and scheduling for two different workstation layouts: the results are compared and the best layout is accordingly selected.
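For context, a standard workaround for unbalanced assignment problems (shown here with SciPy's Hungarian-style solver) is to pad the cost matrix with zero-cost dummy rows or columns; the paper instead derives its own modified Hungarian Algorithm.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def unbalanced_assignment(cost):
    """cost: (n_agents, n_tasks) matrix with n_agents != n_tasks allowed.
    Dummy agents/tasks absorb the surplus at zero cost."""
    n_agents, n_tasks = cost.shape
    size = max(n_agents, n_tasks)
    padded = np.zeros((size, size))
    padded[:n_agents, :n_tasks] = cost
    rows, cols = linear_sum_assignment(padded)
    # Drop assignments that involve a dummy agent or dummy task
    return [(r, c) for r, c in zip(rows, cols) if r < n_agents and c < n_tasks]
```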

10:57-11:09, Paper Th401.2
SM-EXO: Shape Memory Alloy-Based Hand EXOskeleton for Cobotic Application in Cognitive Development Investigation
Srivastava, Rupal (Technological University of the Shannon), Singh, Maulshree (Technological University of the Shannon: Midlands Midwest), Gomes, Guilherme Daniel (Technological University of the Shannon), Murray, Niall (Technological University of the Shannon: Midlands Midwest), Devine, Declan (Technological University of the Shannon: Midlands Midwest)
Keywords: HRI and Collaboration in Manufacturing Environments, Detecting and Understanding Human Activity, Child-Robot Interaction
Abstract: Conventional smart gloves present a challenge regarding portability, as most rely on gesture recognition techniques based on vision sensing and image processing. The multiple algorithms and signal filtering further make the overall process cumbersome. This work proposes a Shape Memory Alloy (SMA) integrated sensing mechanism in a smart glove for autonomous control. A novel hand gesture recognition technology is developed using kinaesthetic feedback from the finger joint movements. The paper presents a smart glove with an external SMA-embedded tubing attachment for the thumb, index, and middle fingers. The motion of the SMA wires is constrained between a fixed end on the tip of the fingers, and the other end is connected to a linear position sensor with spring feedback. The SMA wires in this design exist in their Austenite phase at room temperature, thus exhibiting superelastic or pseudoelastic behavior. The tension in the SMA wire is observed and measured upon bending the fingers, corresponding to the mechanical travel in the linear position sensor. The individual and combined position sensor readings are then used as commands for actuating interactive toys. Using a three-finger approach, one can extract seven commands depending upon single or multiple finger movements. This data is further used to actuate the toys, and a use-case for cobotic application is proposed to help better understand interactive play, hand-eye coordination, and thus early cognitive development in children with Autism Spectrum Disorder (ASD). The discrete binary data output is independent of other devices or heavy data processing requirements, thus making the proposed novel SM-EXO a better alternative to non-portable and complex smart gloves.
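The "seven commands" follow directly from three binary finger states (2^3 - 1 non-idle combinations); a toy mapping, with an encoding of our own choosing:

```python
# Toy sketch: three binary finger states give 2**3 - 1 = 7 non-idle commands.
def fingers_to_command(thumb: bool, index: bool, middle: bool) -> int:
    """Return 0 for 'no finger bent' (idle) and 1..7 for the commands."""
    return (int(thumb) << 2) | (int(index) << 1) | int(middle)

# e.g., index + middle bent -> command 3; all three bent -> command 7
assert fingers_to_command(False, True, True) == 3
assert fingers_to_command(True, True, True) == 7
```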

11:09-11:21, Paper Th401.3
Benchmarking Deep Neural Networks for Gesture Recognition on Embedded Devices
Bini, Stefano (University of Salerno), Greco, Antonio (University of Salerno), Saggese, Alessia (Universita' Degli Studi Di Salerno), Vento, Mario (University of Salerno) |
Keywords: HRI and Collaboration in Manufacturing Environments, Applications of Social Robots, Detecting and Understanding Human Activity
Abstract: The gesture is one of the most used forms of communication between humans; in recent years, given the new trend of adapting factories to the Industry 4.0 paradigm, the scientific community has shown a growing interest in the design of Gesture Recognition (GR) algorithms for Human-Robot Interaction (HRI) applications. Within this context, the GR algorithm needs to work in real time and on embedded platforms with limited resources. However, when looking at the available scientific literature, the aim of the different proposed neural networks (i.e. 2D and 3D) and of the different modalities used for feeding the network (i.e. RGB, RGB-D, optical flow) is typically the optimization of accuracy, without paying much attention to feasibility on low-power hardware devices. The analysis of the trade-off between accuracy and computational burden (for both networks and modalities) is therefore important to allow GR algorithms to work in industrial robotics applications. In this paper, we perform a wide benchmarking focusing not only on accuracy but also on computational burden, involving two different architectures (2D and 3D), with two different backbones (MobileNet, ResNeXt) and four types of input modalities (RGB, Depth, Optical Flow, Motion History Image) and their combinations.

11:21-11:33, Paper Th401.4
Model-Based Design of a Collaborative Human-Robot Workspace
Rahmayanti, Rifa (Universidad De Oviedo), Alvarez, Juan Carlos (Universidad De Oviedo), Alvarez Prieto, Diego (University of Oviedo), Lopez Rodriguez, Antonio Miguel (University of Oviedo) |
Keywords: HRI and Collaboration in Manufacturing Environments, Evaluation Methods, Motion Planning and Navigation in Human-Centered Environments
Abstract: This paper presents a complete design and validation of a specific case of a Human-Robot collaborative workspace consisting of a robot arm and a human operator working in a pick & place task in close proximity. The human motion monitoring is made with IMU-based wearables and optical markers, and the robot motion planning method considers safety distance criteria as prescribed in ISO/TS 15066. This case study is meant to illustrate the design of collaborative environments by using the Model-based Design approach. The goal is to systematically address the problems of safety assurance and performance optimization, allowing the use of optimization and machine learning approaches.
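For reference, the ISO/TS 15066 safety-distance criterion mentioned here is the speed-and-separation-monitoring formula, commonly summarized as below (our paraphrase of the standard, not the paper's formulation):

```latex
% Protective separation distance under speed and separation monitoring:
\begin{equation}
  S_p(t_0) = S_h + S_r + S_s + C + Z_d + Z_r
\end{equation}
% S_h: distance covered by the human while the robot reacts and stops,
% S_r: distance covered by the robot during its reaction time,
% S_s: robot stopping distance, C: intrusion distance,
% Z_d, Z_r: position uncertainty of the human and of the robot.
```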

11:33-11:45, Paper Th401.5
ROSIE: A ROS Adapter for a Modular Digital Twinning Framework
Pisanelli, Gianmarco (University of Sheffield, AMRC), Tymczuk, Mariusz (The University of Sheffield Advanced Manufacturing Research Centre), Douthwaite, James (University of Sheffield), Aitken, Jonathan Maxwell (University of Sheffield), Law, James (The University of Sheffield)
Keywords: HRI and Collaboration in Manufacturing Environments, Novel Interfaces and Interaction Modalities, Degrees of Autonomy and Teleoperation
Abstract: As robotic systems become more interactive and complex, there is a need to standardise interfaces and simplify development processes. This is particularly pertinent in the field of manufacturing, where human-robot collaboration is on the increase, but where standards and proprietary software are key barriers to deployment and adoption. In this article we present the ROSIE Adapter, a general-purpose, modular adapter developed in ROS designed to support the creation and connection of industry-ready digital twins. Together with our previous work on the modular CSI digital-twin framework, we demonstrate how the ROSIE Adapter creates a versatile ``plug-and-play'' interface that simplifies the development of new robotic processes and improves accessibility for novice users. Furthermore, the adapter supports integration of intuitive interface devices, such as speech and augmented reality interfaces, which enable more natural collaboration. We describe the adapter and its use in two real-world applications, demonstrate its ease of use via a three-day hackathon event, and provide results showing the faithfulness of the arising digital twins to their connected physical systems.
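In the spirit of such a plug-and-play bridge, a minimal ROS relay node might look like this; the node and topic names are hypothetical illustrations, not the ROSIE Adapter's actual API.

```python
#!/usr/bin/env python
# Minimal sketch: mirror joint states from a physical robot namespace into
# a digital-twin namespace. Topic names are placeholders.
import rospy
from sensor_msgs.msg import JointState

def main():
    rospy.init_node("twin_relay")
    pub = rospy.Publisher("/twin/joint_states", JointState, queue_size=10)
    rospy.Subscriber("/robot/joint_states", JointState, pub.publish)
    rospy.spin()   # relay messages until shutdown

if __name__ == "__main__":
    main()
```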

11:45-11:57, Paper Th401.6
Human-Guided Goal Assignment to Effectively Manage Workload for a Smart Robotic Assistant
Dhanaraj, Neel (University of Southern California), Malhan, Rishi (University of Southern California), Nemlekar, Heramb (University of Southern California), Nikolaidis, Stefanos (University of Southern California), Gupta, Satyandra K. (University of Southern California) |
Keywords: HRI and Collaboration in Manufacturing Environments, Cooperation and Collaboration in Human-Robot Teams
Abstract: Managing robot workloads in human-robot teams is critical for efficient team operation. If robots are overloaded with work, they will miss deadlines and force humans to take on extra work. This paper presents a framework for a robot to assess its own workload based on an initial goal assignment. The robot does this by generating task and motion plans and computing the probability of missing deadlines due to possible delays in task execution. A branch-and-bound-based search is used to generate task and motion plans by minimizing task execution effort. The robot presents a diverse set of task and motion plans to the humans to offer multiple different options. Humans can either approve a plan or provide guidance to reduce the workload by relaxing deadlines or removing goal(s) assigned to the robots.
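The deadline-risk computation can be approximated by Monte Carlo sampling over uncertain task durations; the Gaussian delay model below is an illustrative assumption, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

def prob_missing_deadline(task_means, task_stds, deadline, n=10_000):
    """Estimate the probability that the summed (delayed) task durations
    of a candidate plan overrun the deadline."""
    durations = rng.normal(task_means, task_stds, size=(n, len(task_means)))
    return float(np.mean(durations.sum(axis=1) > deadline))

# e.g., three tasks of ~20 s each against a 70 s deadline
print(prob_missing_deadline([20.0, 20.0, 20.0], [3.0, 3.0, 3.0], 70.0))
```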

11:57-12:09, Paper Th401.7
Timing-Specified Controllers with Feedback for Human-Robot Handovers
Kshirsagar, Alap (Cornell University), Ravi, Rahul Kumar (Cornell University), Kress-Gazit, Hadas (Cornell University), Hoffman, Guy (Cornell University) |
Keywords: HRI and Collaboration in Manufacturing Environments, Motion Planning and Navigation in Human-Centered Environments
Abstract: We develop and evaluate two human-robot handover controllers that allow end-users to specify timing parameters for the robot reach motion, and that provide feedback if the robot cannot satisfy those constraints. End-user tuning with feedback is a useful controller feature in settings where robots have to be re-programmed for varying task requirements but end-users do not have programming knowledge. The two controllers we propose are both receding-horizon controllers that differ in their objective function, their user-specified parameters, and consequently their user interface: one controller uses a minimum cumulative jerk (MCJ) objective function, and the other a minimum cumulative error (MCE) objective function. We implemented the controllers on a collaborative robot and conducted two controlled experiments to compare the user experience and performance of these controllers vis-à-vis a baseline proportional velocity (PV) controller. In each experiment, participants (n=30) interactively tuned the controller parameters and collaborated with a robot to perform a time-constrained repetitive task. We found that the timing controller with the MCE implementation can provide a better user experience, both while setting the parameters (p=0.011) and while performing the handovers with the robot (p<0.001), and fewer failures (p=0.016) compared to the PV controller; however, the MCJ implementation did not provide a better user experience than the PV controller and also resulted in more failures. These results could inform the design of usable and effective end-user configurable controllers for human-robot interaction.
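Our reading of the two objectives, in discrete time over a receding horizon of N steps (an illustrative paraphrase, not the authors' exact formulation):

```latex
% Illustrative discrete-time forms of the two receding-horizon objectives:
\begin{align}
  J_{\mathrm{MCJ}} &= \sum_{k=0}^{N-1} \lVert \dddot{x}_k \rVert^{2}, &
  J_{\mathrm{MCE}} &= \sum_{k=0}^{N-1} \lVert x_k - x_k^{\mathrm{ref}} \rVert^{2},
\end{align}
% where x_k is the planned end-effector position, \dddot{x}_k its jerk, and
% x^{ref}_k the user-timed reference trajectory.
```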

12:09-12:21, Paper Th401.8
An AI-Powered Hierarchical Communication Framework for Robust Human-Robot Collaboration in Industrial Settings
Mukherjee, Debasmita (The University of British Columbia), Gupta, Kashish (University of British Columbia), Najjaran, Homayoun (University of Victoria) |
Keywords: HRI and Collaboration in Manufacturing Environments, Cooperation and Collaboration in Human-Robot Teams, Multimodal Interaction and Conversational Skills
Abstract: Cohesive human-robot collaboration (HRC) for carrying out an industrial task requires an intelligent robot capable of functioning in uncertain and noisy environments. This can be achieved through seamless and natural communication between human and robot partners. Introducing naturalness in communication is highly complex due to both aleatoric variability and epistemic uncertainty originating from the components of the HRC system, including the human, sensors, robot(s), and the environment. The presented work proposes the artificial intelligence (AI)-powered multimodal, robust fusion (AI-MRF) architecture that combines communication modalities from the human for more natural communication. The proposed architecture utilizes fuzzy inferencing and Dempster-Shafer theory to deal with different manifestations of uncertainty. AI-MRF is scalable and modular. An evaluation of AI-MRF for safety and robustness under case studies mimicking real-world conditions is showcased. While the architecture has been evaluated for HRC in industrial settings, it can be readily implemented in any human-machine communication scenario.
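The Dempster-Shafer ingredient is standard; combining two mass functions by Dempster's rule of combination looks like this (the hypothesis sets and masses in the usage example are illustrative, not the paper's):

```python
def dempster_combine(m1, m2):
    """Dempster's rule over two mass functions whose focal elements are
    frozensets of hypotheses; mass on conflicting (disjoint) pairs is
    discarded and the rest renormalized."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# e.g., fusing speech and gesture evidence about the commanded action
speech = {frozenset({"pick"}): 0.7, frozenset({"pick", "place"}): 0.3}
gesture = {frozenset({"pick"}): 0.5, frozenset({"place"}): 0.2,
           frozenset({"pick", "place"}): 0.3}
fused = dempster_combine(speech, gesture)
```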

12:21-12:33, Paper Th401.9
Task-Oriented Robot-To-Human Handovers in Collaborative Tool-Use Tasks
Qin, Meiying (Yale University), Brawer, Jake (Yale University), Scassellati, Brian (Yale) |
Keywords: Cooperation and Collaboration in Human-Robot Teams, HRI and Collaboration in Manufacturing Environments, Assistive Robotics
Abstract: Robot-to-Human handovers are common exercises in many robotics application domains. The requirements of handovers may vary across these different domains. In this paper, we first devised a taxonomy to organize the diverse and sometimes contradictory requirements. Among these, task-oriented handovers were not well-studied but important because the purpose of the handovers in human-robot collaboration (HRC) is not merely to pass an object from a robot to a human receiver, but to enable the human receiver to use it in a subsequent tool-use task. A successful task-oriented handover should incorporate task-related information - orienting the tool such that the human can grasp it in a way that is suitable for the task. We identified multiple difficulty levels of task-oriented handovers, and implemented a system to generate task-oriented handovers with novel tools on a physical robot. Unlike previous studies on task-oriented handovers, we trained the robot with tool-use demonstrations rather than handover demonstrations, since task-oriented handovers are dependent on the tool usages in the subsequent task. We demonstrated that our method can adapt to all difficulty levels of task-oriented handovers, including tasks that matched the typical usage of the tool (level I), tasks that required an improvised and unusual usage of the tool (level II), and tasks where the handover was adapted to the pose of a manipulandum (level III). We evaluated the generated handovers with online surveys. Participants rated our handovers to appear more comfortable for the human receiver and more appropriate for subsequent tasks when compared with typical handovers from prior work.

Th402
Aragonese/Catalana
Social Human-Robot Interaction of Human-Care Service Robots
Special Session
Chair: Ahn, Ho Seok | The University of Auckland, Auckland
Organizer: Ahn, Ho Seok | The University of Auckland, Auckland
Organizer: Jang, Minsu | Electronics & Telecommunications Research Institute
Organizer: Choi, Jongsuk | Korea Inst. of Sci. and Tech
Organizer: Lee, Dong-Wook | Korea Institute of Industrial Technology
Organizer: Kim, Jaehong | ETRI
Organizer: Lim, Yoonseob | Korea Institute of Science and Technology

10:45-10:57, Paper Th402.1
Living with a Telepresence Robot: Results from a Field-Trial (I)
Fiorini, Laura (University of Florence), Sorrentino, Alessandra (University of Florence), Becchimanzi, Claudia (Università degli Studi di Firenze), Mattia, Pistolesi (Università degli Studi di Firenze), Tosi, Francesca (Università degli Studi di Firenze), Cavallo, Filippo (University of Florence) |

10:57-11:09, Paper Th402.2
Hot or Not? Exploring User Perceptions of Thermal Human-Robot Interaction (I)
Borgstedt, Jacqueline (University of Glasgow), Pollick, Frank (University of Glasgow), Brewster, Stephen (University of Glasgow) |
Keywords: Robot Companions and Social Robots, Social Touch in Human–Robot Interaction, Novel Interfaces and Interaction Modalities
Abstract: Haptics is an essential element of interaction between humans and socially assistive robots. However, it is often limited to movements or vibrations and misses key aspects such as temperature. This mixed-methods study explores the potential of enhancing human-robot interaction (HRI) through thermal stimulation to regulate affect during a stress-inducing task. Participants were exposed to thermal stimulation while completing the Mannheim Multicomponent Stress Test (MMST). Findings showed that human-robot emotional touch may induce comfort and relaxation during exposure to acute stressors. User affect may be further enhanced through thermal stimulation, which was experienced as comforting and de-stressing, and altered participants’ perception of the robot to be more life-like. Allowing participants to calibrate a temperature they perceived as calming provided novel insights into the temperature ranges suitable for interaction. While neutral temperatures were the most popular amongst participants, findings suggest that cool (4–29 °C), neutral (30–32 °C), and warm (33–36 °C) temperatures can all induce comforting effects during exposure to stress. The results highlight the potential of thermal HRI in general and, more specifically, the advantages of personalized temperature calibration.

11:09-11:21, Paper Th402.3
Real-World Validation Study of Daily Activity Detection for the Elderly (I)
Cho, Miyoung (Electronics and Telecommunications Research Institute), Jang, Jinhyeok (ETRI), Lee, Jaeyeon (ETRI), Jang, Minsu (Electronics & Telecommunications Research Institute), Kim, DoHyung (Electronics and Telecommunications Research Institute), Kim, Jaehong (ETRI) |
Keywords: Detecting and Understanding Human Activity, Applications of Social Robots, Evaluation Methods
Abstract: The development of artificial intelligence has led to significant progress in activity detection; however, a proper evaluation of the activity detection performance is not guaranteed and is still lacking in real-life environments. In this study, to verify the stability and usefulness of activity detection in a real-world setting, we analyzed the activity detection performance for 40 elderly people using a human care robot.

11:21-11:33, Paper Th402.4
TODS: Thermal Outdoor Image Dataset for Spoofing Detection (I)
Lee, DongGuw (Seoul National University (SNU)), Song, KyuSeob (KAIST (Korea Advanced Institute of Science and Technology)), Kwon, Dong-Soo (KAIST) |
Keywords: Social Intelligence for Robots, Machine Learning and Adaptation, Applications of Social Robots
Abstract: Face spoofing detection could be used in conjunction with face recognition or facial expression recognition in social robots for better interaction with users. Recent works have employed thermal image-based spoofing detection due to its competitive detection performance. However, existing methods utilized indoor thermal images, which limits the use of thermal image-based spoofing detection in outdoor environments. Moreover, several thermal image-based spoofing detection datasets consist of facial images at a frontal viewpoint, which would limit their usage in social robots with short heights. In this paper, we present our novel thermal image dataset for spoofing detection in outdoor environments. Unlike most existing datasets, our dataset is collected in outdoor environments with the camera facing the upper view to simulate the viewpoint of a social robot. To cover diverse outdoor environments, the dataset was collected at varying locations, times, and weather. Furthermore, we trained an event recognition-based spoofing detector on our collected dataset and provide benchmarking results for several baselines. Our spoofing detection method based on MobileNetV2 yielded a spoofing detection accuracy of 96.04% at 0.88 GFLOPs. Our dataset will be open-sourced for users pursuing academic research and will be available upon request to the authors.

11:33-11:45, Paper Th402.5
Developing a Social Conversational Robot for the Hospital Waiting Room (I)
Gunson, Nancie (Heriot-Watt University), Hernández García, Daniel (Heriot-Watt University), Sieińska, Weronika (Heriot-Watt University), Dondrup, Christian (Heriot-Watt University), Lemon, Oliver (Heriot-Watt University) |
Keywords: Multimodal Interaction and Conversational Skills, Robot Companions and Social Robots, Applications of Social Robots
Abstract: Possible applications for Social Robots in healthcare settings that could have a tremendous social impact in helping to alleviate staff workload are those of a patient-facing role, such as a robot receptionist providing assistance to patients and visitors. Examples of functions that such robots would need to be able to execute are greeting visitors, reception check-in/out of patients, answering common questions they may have, showing them where to sit, helping them locate missing objects, providing directions to facilities, guiding them to different locations, etc. In this paper we describe current progress towards developing a multimodal conversational AI system integrated in a Social Conversational Robot (an ARI robot) that will act as a receptionist in a hospital waiting room. We present the developed architecture of the system and report on an initial experimental validation study carried out in laboratory conditions with the ARI robot.

11:45-11:57, Paper Th402.6
Let Me Introduce Myself – Using Self-Disclosure As a Social Cue for Health Care Robots (I)
Herzog, Olivia (Technical University of Munich), Rögner, Katharina (Technical University of Munich) |
Keywords: Creating Human-Robot Relationships, Robots in Education, Therapy and Rehabilitation, Robot Companions and Social Robots
Abstract: In health care, social care robots might approach humans more closely and sensitively than ever before. It is, therefore, essential to ensure that patients trust and accept them. Since hospital stays often offer only little time to gain experience interacting with a robot, it is important to identify strategies to strengthen the building of trust right from the first contact. One construct promoting trust in human-human interaction is self-disclosure. The aim of this study is to examine whether it also has a trust-promoting effect in human-robot interaction. Therefore, a study was conducted in which subjects experienced a hospital scenario: a social care robot introduced itself, either just listing its tasks or additionally sharing personal information about itself. It was investigated whether a self-disclosing care robot is perceived as more trustworthy, anthropomorphic, likeable, and mindful, as well as more likely to be accepted, than a robot that did not do so. A content analysis of the reasons for acceptance was conducted, as well as a semantic analysis of the adjectives used to describe the robot. The quantitative analysis revealed no significant difference between the introductions for the variables trust, acceptance, anthropomorphism, likeability, and mind perception. Descriptive data indicate a possible positive impact of self-disclosure on acceptance and likeability. In the self-disclosure condition, trust in technology was most often cited as a reason for accepting the robot. In the low self-disclosure condition, concerns about possible errors of the robot were mentioned most often. The most frequent reason for acceptance was a potential relief for nursing staff.
|
|
11:57-12:09, Paper Th402.7 | |
Moving Away from 'robotic' Interactions: Evaluation of Empathy, Emotion and Sentiment Expressed and Detected by Computer Systems (I) |
|
Gasteiger, Norina (The University of Manchester; the University of Auckland), Lim, JongYoon (University of Auckland), Hellou, Mehdi (The University of Auckland), MacDonald, Bruce (University of Auckland), Ahn, Ho Seok (The University of Auckland, Auckland) |
Keywords: Affective Computing, Creating Human-Robot Relationships, User-centered Design of Robots
Abstract: Social robots are often critiqued as being too ‘robotic’ and unemotional. For affective human-robot interaction (HRI), robots must detect sentiment and express emotion and empathy in return. We explored the extent to which people can detect emotions, empathy, and sentiment from speech expressed by a computer system, with a focus on changes in prosody (pitch, tone, volume), and how people identify sentiment from written text compared to a sentiment analyzer. 89 participants identified empathy, emotion, and sentiment from audio and text embedded in a survey. Empathy and sentiment were best expressed in the audio, while emotions were the most difficult to detect (75%, 67% and 42% respectively). We found moderate agreement (70%) between the sentiment identified by the participants and by the analyzer. There is potential for computer systems to express affect using changes in prosody, as well as to identify sentiment by analyzing text. This may help to further develop affective capabilities and appropriate responses in social robots, in order to avoid ‘robotic’ interactions. Future research should explore how to better express negative sentiment and emotions, while leveraging multi-modal approaches to HRI.
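A minimal sketch of the human-analyzer agreement computation the abstract reports (70%); the labels below are invented for illustration, and the specific sentiment analyzer used in the paper is not named in the abstract.

from sklearn.metrics import cohen_kappa_score

human    = ["pos", "neg", "neu", "pos", "neg", "pos", "neu", "neg", "pos", "neu"]
analyzer = ["pos", "neg", "pos", "pos", "neu", "pos", "neu", "neg", "pos", "neg"]

# Raw percent agreement, as reported in the abstract.
agreement = sum(h == a for h, a in zip(human, analyzer)) / len(human)
# Cohen's kappa additionally corrects for chance agreement.
kappa = cohen_kappa_score(human, analyzer)
print(f"raw agreement: {agreement:.0%}, Cohen's kappa: {kappa:.2f}")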
|
|
12:09-12:21, Paper Th402.8 | |
Exploring ICT for the Elderly: By Analyzing the Elders’ Needs for Information Acquisition and Delivery (I) |
|
Kang, Dahyun (Korea Institute of Science and Technology), Choi, Jongsuk (Korea Inst. of Sci. and Tech), Kwak, Sonya Sona (Korea Institute of Science and Technology (KIST)) |
Keywords: User-centered Design of Robots, Assistive Robotics
Abstract: As society ages rapidly, the digital divide among the elderly is expected to emerge as a social problem. In order to bridge the digital gap between the elderly and the general public, it is necessary to find out what difficulties the elderly have in using information and communication technology (ICT) and what technologies and services they need. To investigate these issues, we conducted a three-phase study: an ICT literacy survey, interviews, and a design workshop (N=10). The study has three major findings. First, from a physical perspective, the elderly found ICT that reduces the distance they must move attractive. Second, from a cognitive perspective, services are needed that reduce the gap between the actual physical abilities of the elderly and their self-perceived abilities. Lastly, from a psychological perspective, an intuitive and natural interface is needed for the elderly who are afraid of learning new technology.
|
|
12:21-12:33, Paper Th402.9 | |
Who’s on First? The Impact of the Proactive Interaction on User Acceptance of the Robotized Object (I) |
|
Kim, Sangmin (Korea Institute of Science and Technology), Kang, Dahyun (Korea Institute of Science and Technology), Choi, Jongsuk (Korea Inst. of Sci. and Tech), Kwak, Sonya Sona (Korea Institute of Science and Technology (KIST)) |
Keywords: Curiosity, Intentionality and Initiative in Interaction, Novel Interfaces and Interaction Modalities, User-centered Design of Robots
Abstract: Due to advances in technology, various types of existing products can be robotized. The design of the interaction between these robotized objects and users has become an emerging issue, since the mode of interaction affects user acceptance of new products. Because robots can perceive, recognize, and assist users, a robotized object can either help proactively, by reading the context and guessing the user's intentions in advance, or reactively, in response to the user's explicit requests. To explore effective interaction strategies for robotized objects, we conducted a 2 × 2 mixed-design experiment. Participants were more surprised by and felt more positive emotion toward robotized objects with proactive interaction than with reactive interaction. In addition, the robotized object with proactive interaction was evaluated as more intelligent and easier to use than the reactive one. The influence of interaction strategy on satisfaction with robotized objects was mediated by surprise and positive emotion. Furthermore, the proactively interacting robotized object was evaluated more positively when the user's intention was expressed only by an action cue than when it was expressed by both a verbal cue and an action cue.
|
|
Th403 |
Sveva/Normanna |
Mental Models of the Human User in Social HRI |
Special Session |
Chair: Staffa, Mariacarla | University of Naples Parthenope |
Co-Chair: Alimardani, Maryam | Tilburg University |
Organizer: Staffa, Mariacarla | University of Naples Parthenope |
Organizer: Alimardani, Maryam | Tilburg University |
Organizer: Anzalone, Salvatore Maria | Université Paris 8 |
|
10:45-10:57, Paper Th403.1 | |
Questioning Wizard of Oz: Effects of Revealing the Wizard behind the Robot (I) |
|
Nasir, Jauwairia (EPFL), Oppliger, Pierre (EPFL), Bruno, Barbara (Swiss Federal Institute of Technology in Lausanne (EPFL)), Dillenbourg, Pierre (EPFL) |
Keywords: Robot Companions and Social Robots, User-centered Design of Robots, Monitoring of Behaviour and Internal States of Humans
Abstract: Wizard of Oz, a very commonly employed technique in human-robot interaction, faces the criticism of being deceptive, as the humans interacting with the robot are told, if at all, only at the end of their interaction that there was in fact a human behind the robot. What if the robot reveals the wizard behind itself very early in the interaction? We built a deep Wizard of Oz setup that allows a robot to play together with a human against a computer AI in the context of the Connect 4 game. This cooperative game against a common opponent is followed by a conversation between the human and the robot. We conducted an exploratory user study with 29 adults with three conditions, in which the robot reveals the wizard, lies about the wizard, or says nothing, respectively. We also split the data based on how the participants perceive the robot in terms of autonomy. Using different metrics, we evaluate how the users interact with and perceive the robot in both the experimental and perceived conditions. We find that while there is indeed a significant difference between the experimental conditions in participants' willingness to follow the robot's suggestions, as well as in the effort they put into proving themselves human (a reverse Turing test), there is no significant difference in their perception of the robot. Additionally, how humans perceive whether the robot is tele-operated or autonomous seems to be indifferent to the robot revealing its identity, i.e., pre-conceived notions may remain uninfluenced even if the robot explicitly states otherwise. Lastly, and interestingly, in the perception-based conditions the absence of statistical significance may suggest that, in certain contexts, Wizard of Oz may not require hiding the wizard after all.
|
|
10:57-11:09, Paper Th403.2 | |
Motivational Gestures in Robot-Assisted Language Learning: A Study of Cognitive Engagement Using EEG Brain Activity (I) |
|
Alimardani, Maryam (Tilburg University), Harinandansingh, Jishnu (Tilburg University), Ravin, Lindsey (Tilburg University), de Haas, Mirjam (Tilburg University) |
Keywords: Robots in Education, Therapy and Rehabilitation, Non-verbal Cues and Expressiveness, Motivations and Emotions in Robotics
Abstract: Social robots have been shown to be effective in pedagogical settings due to their embodiment and social behavior, which can improve a learner’s motivation and engagement. In this study, the impact of a social robot’s motivational gestures in robot-assisted language learning (RALL) was investigated. Twenty-five university students participated in a language learning task tutored by a NAO robot under two conditions (within-subjects design); in one condition the robot provided positive and negative feedback on the participant’s performance using both verbal and non-verbal behavior (Gesture condition), while in the other the robot employed only verbal feedback (No-Gesture condition). To assess cognitive engagement and learning in each condition, we collected EEG brain activity from the participants during the interaction and evaluated their word knowledge in an immediate and a delayed post-test. No significant difference was found in cognitive engagement, as quantified by the EEG Engagement Index, during the practice phase. Similarly, the word test results indicated an overall high performance in both conditions, suggesting similar learning gains regardless of the robot’s gestures. These findings do not provide evidence in favor of a robot’s motivational gestures during language learning tasks, but at the same time indicate challenges in the design of effective social behavior for pedagogical robots.
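The EEG Engagement Index the abstract refers to is commonly computed as beta / (alpha + theta) band power (following Pope et al.); below is a minimal sketch of that computation, with the sampling rate, bands, and single-channel signal all being assumptions rather than the paper's exact setup.

import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid

fs = 256                        # assumed sampling rate (Hz)
eeg = np.random.randn(fs * 10)  # placeholder for a 10 s single-channel recording

freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)

def band_power(lo, hi):
    mask = (freqs >= lo) & (freqs < hi)
    return trapezoid(psd[mask], freqs[mask])

theta = band_power(4, 8)
alpha = band_power(8, 13)
beta = band_power(13, 30)
engagement = beta / (alpha + theta)   # higher values = more engaged
print(f"engagement index: {engagement:.3f}")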
|
|
11:09-11:21, Paper Th403.3 | |
Generating Emotional Gestures for Handling Social Failures in HRI (I) |
|
Rossi, Alessandra (University of Naples Federico II), John, Nitha Elizabeth (University of Naples Federico II), Taglialatela, Giuliano (University of Naples Federico II), Rossi, Silvia (Universita' Di Napoli Federico II) |
Keywords: Cognitive Skills and Mental Models, Applications of Social Robots, Social Intelligence for Robots
Abstract: As people become more used to interacting with social robots, their expectations of these robots also increase. However, robots are not always able to meet such expectations, due to hardware and software limitations, or because they are simply unable to correctly process information about the agents, environment, and context, and as a consequence produce erroneous behaviours. For example, a robot might receive incongruous responses about people from its multimodal observation systems. In such cases, a technique humans use to recover from the situation is verbal irony, of which sarcasm is a form. To this end, we present a two-part study in which we aimed to endow a robot with sarcasm. Results showed that socially interactive behaviours, such as paying attention during a conversation and being transparent about the process of thinking and elaborating a response, lead people to perceive a robot as higher in anthropomorphism and animacy. Moreover, a robot's failure recovery mechanisms are more easily recognised by people when the robot uses verbal incongruence.
|
|
11:21-11:33, Paper Th403.4 | |
The Robot Olympics: Estimating and Influencing Beliefs about a Robot's Perceptual Capabilities (I) |
|
Rueben, Matthew (University of Southern California), Rothberg, Eitan (The Ohio State University), Tang, Matthew (University of California Berkeley), Inzerillo, Sarah (Clarkson University), Kshirsagar, Saurabh (University of Southern California), Manchanda, Maansi (University of Southern California), Dudley, Ginger (University of Southern California), Fraune, Marlena (New Mexico State University), Mataric, Maja (University of Southern California) |
Keywords: Monitoring of Behaviour and Internal States of Humans, Cognitive Skills and Mental Models, Cooperation and Collaboration in Human-Robot Teams
Abstract: People often hold inaccurate mental models of robots. When such misconceptions concern a robot's perceptual capabilities, they can lead to issues with safety, privacy, and interaction efficiency. This work is the first attempt to model users' beliefs about a robot's perceptual capabilities and to plan actions that improve their accuracy, i.e., to perform belief repair. We designed a new domain called the Robot Olympics, implemented it as a web-based game platform for collecting data about users' beliefs, and developed an approach to estimating and influencing users' beliefs about a virtual robot in that domain. We then conducted a study that collected behavior and belief data from 240 online participants who played the game. Results revealed shortcomings in modeling participants' interpretations of the robot's actions, as well as the decision-making process behind their own actions. The insights from this work provide recommendations for designing further studies and improving user models to support belief repair in human-robot interaction.
|
|
11:33-11:45, Paper Th403.5 | |
Efficacy of a 'Misconceiving' Robot to Improve Computational Thinking in a Collaborative Problem Solving Activity: A Pilot Study (I) |
|
Norman, Utku (Swiss Federal Institute of Technology in Lausanne (EPFL)), Chin, Alexandra (Wellesley College), Bruno, Barbara (Swiss Federal Institute of Technology in Lausanne (EPFL)), Dillenbourg, Pierre (EPFL) |
Keywords: Robots in Education, Therapy and Rehabilitation, Monitoring of Behaviour and Internal States of Humans, Cooperation and Collaboration in Human-Robot Teams
Abstract: Robot-mediated learning activities are often designed as collaborative exercises in which children work together to achieve the activity objectives. Although miscommunications and misunderstandings occur frequently, humans, unlike robots, are very good at overcoming them and converging to a shared solution. With the aim of equipping a robot with these abilities and exploring their effects, in this article we investigate how a humanoid robot can collaborate with a human learner to construct a shared solution to a problem by suggesting actions and (dis)agreeing with each other. Concretely, we designed a learning activity aimed at improving children's computational thinking skills, in which the robot makes suggestions on what to do that may or may not be in line with what the human thinks. Furthermore, the robot may suggest wrong actions that could prevent them from finding a correct solution. Via a pilot study conducted remotely with 9 school children, we investigate whether the interaction results in positive learning outcomes, how the collaboration evolves, and how these relate to each other. The results show positive learning outcomes for the participants in terms of finding better solutions, suggesting that the collaboration with the robot might have helped trigger the learning mechanisms.
|
|
11:45-11:57, Paper Th403.6 | |
SLOT-V: Supervised Learning of Observer Models for Legible Robot Motion Planning in Manipulation (I) |
|
Wallkötter, Sebastian (Uppsala University), Chetouani, Mohamed (Sorbonne University), Castellano, Ginevra (Uppsala University) |
Keywords: Motion Planning and Navigation in Human-Centered Environments, Machine Learning and Adaptation
Abstract: We present SLOT-V, a novel supervised learning framework that learns observer models (human preferences) from robot motion trajectories in a legibility context. Legibility measures how easily a (human) observer can infer the robot's goal from a robot motion trajectory. When generating such trajectories, existing planners often rely on an observer model that estimates the quality of trajectory candidates. These observer models are frequently hand-crafted or, occasionally, learned from demonstrations. Here, we propose to learn them in a supervised manner, using the same data format that is frequently used during the evaluation of the aforementioned approaches. We then demonstrate the generality of SLOT-V using a Franka Emika robot in a simulated manipulation environment. First, we show that it can learn to closely predict various hand-crafted observer models, i.e., that SLOT-V's hypothesis space encompasses existing hand-crafted models. Next, we showcase SLOT-V's ability to generalize by showing that a trained model continues to perform well in environments with unseen goal configurations and/or goal counts. Finally, we benchmark SLOT-V's sample efficiency (and performance) against an existing IRL approach and show that SLOT-V learns better observer models with less data. Combined, these results suggest that SLOT-V can learn viable observer models. Better observer models imply more legible trajectories, which may in turn lead to better and more transparent human-robot interaction.
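A minimal sketch of the supervised observer-model idea, under the assumption that trajectories are flattened into fixed-length feature vectors and labelled with the goal a human observer inferred; SLOT-V's actual architecture and loss are not specified in the abstract.

import torch
import torch.nn as nn

class ObserverModel(nn.Module):
    # Scores how strongly a partial trajectory suggests each candidate goal.
    def __init__(self, traj_dim=20, n_goals=3):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(traj_dim, 64), nn.ReLU(),
                                 nn.Linear(64, n_goals))  # logits over goals

    def forward(self, traj):
        return self.net(traj)

model = ObserverModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Invented supervision: trajectory features and the goal a human reported.
traj = torch.randn(32, 20)
inferred_goal = torch.randint(0, 3, (32,))
loss = nn.functional.cross_entropy(model(traj), inferred_goal)
loss.backward()
opt.step()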
|
|
11:57-12:09, Paper Th403.7 | |
Does What Users Say Match What They Do? Comparing Self-Reported Attitudes and Behaviours towards a Social Robot (I) |
|
Stower, Rebecca (KTH), Tatarian, Karen (SoftBank Robotics and Sorbonne University), RUDAZ, Damien (Telecom Paris), Chamoux, Marine (SoftBank Robotics Europe), Chetouani, Mohamed (Sorbonne University), Kappas, Arvid (Jacobs University Bremen) |
Keywords: Monitoring of Behaviour and Internal States of Humans, Detecting and Understanding Human Activity, Evaluation Methods
Abstract: Constructs intended to capture social attitudes and behaviour towards social robots are incredibly varied, with little overlap or consistency in how they may be related. In this study we conduct exploratory analyses of the relationship between participants' self-reported attitudes and their behaviour towards a social robot. We designed an autonomous interaction in which 102 participants interacted with a social robot (Pepper) in a hypothetical travel planning scenario, during which the robot displayed various multi-modal social behaviours. Several behavioural measures were embedded throughout the interaction, followed by a self-report questionnaire targeting participants' social attitudes towards the robot (social trust, liking, rapport, competency trust, technology acceptance, mind perception, social presence, and social information processing). Several relationships were identified between participants' behaviour and their self-reported attitudes towards the robot. Implications for how to conceptualise and measure interactions with social robots are discussed.
|
|
12:09-12:21, Paper Th403.8 | |
Assessing Emotions in Human-Robot Interaction Based on the Appraisal Theory (I) |
|
Demutti, Marco (University of Genoa), D'Amato, Vincenzo (University of Genoa), Recchiuto, Carmine Tommaso (University of Genova), Oneto, Luca (University of Genoa), Sgorbissa, Antonio (University of Genova) |
Keywords: Motivations and Emotions in Robotics, Affective Computing, Applications of Social Robots
Abstract: Emotions have always played a crucial role in human evolution, improving not only social contact but also the ability to adapt and react to a changing environment. In the field of social robotics, providing robots with the ability to recognize human emotions through the interpretation of non-verbal signals may be the key to more effective and engaging interaction. However, the problem of emotion recognition has usually been addressed in limited and static scenarios, by classifying emotions using sensory data such as facial expressions, body postures, and voice. This work proposes a novel emotion recognition framework based on the appraisal theory of emotion. According to the theory, a person's expected appraisal of a given situation, depending on their needs and goals (henceforth referred to as "appraisal information"), is combined with sensory data. A pilot experiment was designed and conducted: participants engaged in spontaneous verbal interaction with the humanoid robot Pepper, programmed to elicit different emotions at various moments. A Random Forest classifier was then trained to classify positive and negative emotions using (i) sensor data only and (ii) sensor data supplemented by appraisal information. Preliminary results confirm a performance improvement in emotion classification when appraisal information is considered.
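A minimal sketch of the comparison the abstract describes, i.e., a Random Forest trained on sensor features alone versus sensor features plus appraisal information; the feature dimensions and data below are invented placeholders.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
sensor = rng.normal(size=(200, 8))     # e.g., face/posture/voice features
appraisal = rng.normal(size=(200, 2))  # e.g., fit of the situation to needs/goals
y = rng.integers(0, 2, 200)            # negative (0) vs. positive (1) emotion

for name, X in [("sensor only", sensor),
                ("sensor + appraisal", np.hstack([sensor, appraisal]))]:
    acc = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=5).mean()
    print(f"{name}: mean CV accuracy = {acc:.2f}")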
|
|
Th601 |
Auditorium |
Child-Robot Interaction II |
Regular Session |
Chair: Gunes, Hatice | University of Cambridge |
Co-Chair: Baillie, Lynne | Heriot-Watt University |
|
14:00-14:12, Paper Th601.1 | |
Influence of Animallike Affective Non-Verbal Behavior on Children’s Perceptions of a Zoomorphic Robot |
|
Voysey, Isobel (University of Edinburgh), Baillie, Lynne (Heriot-Watt University), Williams, Joanne (University of Edinburgh), Herrmann, J. Michael (University of Edinburgh) |
Keywords: Child-Robot Interaction, Non-verbal Cues and Expressiveness, Robot Companions and Social Robots
Abstract: Zoomorphic robots are a promising tool for animal welfare education and could be used to teach children that animals have minds and emotions, thereby reducing acceptance of cruelty towards animals. This study investigated the influence of animallike affective non-verbal behavior on children's perceptions of the attributes and mental abilities of a zoomorphic robot, as well as their acceptance of cruelty towards it. Children who interacted with a robot that displayed animallike affective non-verbal behavior ascribed significantly higher levels of mental abilities to it. Higher levels of perceived mental abilities were not generally correlated with lower acceptance of cruelty, but higher levels of perceived social attributes were. A post-hoc analysis of the reasoning given for the unacceptability of cruelty found that the children who made moral judgments about the cruelty had rated the zoomorphic robot as significantly more animate.
|
|
14:12-14:24, Paper Th601.2 | |
Understanding Children's Trust Development through Repeated Interactions with a Virtual Social Robot |
|
Calvo-Barajas, Natalia (Uppsala University), Castellano, Ginevra (Uppsala University) |
Keywords: Child-Robot Interaction, Long-term Experience and Longitudinal HRI Studies, Creating Human-Robot Relationships
Abstract: Studies in Child-Robot Interaction have shown that children form first impressions of a robot's trustworthiness that might influence how they interact with social robots in long-term interactions. However, how children's trust in robots evolves and how it relates to relationship formation is not well understood. This study investigates the effects of repeated encounters with a virtual social robot on children's social and competency trust in social robots and their relationship formation. We developed an online storytelling game with the Furhat robot, where 25 children (9-12 years old) played with the robot over two sessions with seven days of zero exposure in between. Results show that children's competency trust improved with time. We also found empirical evidence that children felt closer to the robot in the second encounter. This work enriches the scientific understanding of children's trust development in social robots over extended periods of time in child-robot collaborative interactions.
|
|
14:24-14:36, Paper Th601.3 | |
Can Robots Help in the Evaluation of Mental Wellbeing in Children? an Empirical Study |
|
Abbasi, Nida Itrat (University of Cambridge), Spitale, Micol (University of Cambridge), Anderson, Joanna (University of Cambridge), Ford, Tamsin (University of Cambridge), Jones, Peter B. (University of Cambridge), Gunes, Hatice (University of Cambridge) |
Keywords: Child-Robot Interaction, Evaluation Methods
Abstract: Socially Assistive Robots (SARs) show promise in helping children during therapeutic and clinical interventions. However, using SARs to evaluate the mental wellbeing of children has not yet been explored. This paper therefore presents an empirical study in which 28 children aged 8-13 interacted with a Nao robot in a 45-minute session where the robot administered (robotised) the Short Mood and Feelings Questionnaire (SMFQ) and the Revised Child Anxiety and Depression Scale (RCADS). Prior to the experimental session, we also evaluated children’s wellbeing using established standardised approaches, via online RCADS questionnaires filled in by the children (self-report) and their parents (parent-report). We clustered the participants into three groups (lower, medium, and higher tertile) based on their SMFQ scores, and analysed the questionnaire responses across the three clusters and across the different modes of administration (self-report, parent-report, and robotised). Our results show that the robotised evaluation appears to be the most suitable mode for identifying wellbeing-related anomalies in children across the three clusters, compared with the self-report and parent-report modes. Further, children with different levels of wellbeing exhibit different response patterns: children in the higher tertile are more negative in their responses to the robot, while those in the lower tertile are more positive. These findings show that SARs can be a promising tool for evaluating mental wellbeing-related concerns in children.
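A minimal sketch of the tertile split described in the abstract, using invented SMFQ scores (higher scores indicate more mood difficulties):

import pandas as pd

df = pd.DataFrame({"participant": range(1, 13),
                   "smfq": [2, 5, 1, 9, 14, 7, 3, 11, 6, 0, 17, 8]})
df["tertile"] = pd.qcut(df["smfq"], q=3, labels=["lower", "medium", "higher"])
print(df.groupby("tertile", observed=True)["smfq"].agg(["count", "mean"]))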
|
|
14:36-14:48, Paper Th601.4 | |
A Game-Based Approach for Evaluating and Customizing Handwriting Training Using an Autonomous Social Robot |
|
Carnieto Tozadore, Daniel (École Polytechnique Fédérale De Lausanne (EPFL)), Wang, Chenyang (ETH Zurich), Marchesi, Giorgia (University of Genoa), Bruno, Barbara (Swiss Federal Institute of Technology in Lausanne (EPFL)), Dillenbourg, Pierre (EPFL) |
Keywords: Child-Robot Interaction, Robots in Education, Therapy and Rehabilitation, Applications of Social Robots
Abstract: Handwriting learning is a long and complex process that takes about ten years to be fully mastered. Nearly one-third of all children aged 4-12 experience handwriting difficulties and, sadly, most of them are left to fight these on their own, due to the scarcity of tools for the detection and remediation of such difficulties. Building on state-of-the-art digital solutions for automated handwriting assessment and the training of specific handwriting-related skills, in this article we discuss the requirements, rationale, and architecture of a system for handwriting training that relies on a social robot as a mediator agent, offering personalized training and suggestions. The system is envisioned to operate autonomously and to support long-term interactions via personalization. Preliminary validation of the system in an experiment with 31 children showed its potential not only for autonomously guiding handwriting training sessions, but also for inclusion in teachers' practice.
|
|
14:48-15:00, Paper Th601.5 | |
Ice-Breakers, Turn-Takers and Fun-Makers: Exploring Robots for Groups with Teenagers |
|
Gillet, Sarah (KTH Royal Institute of Technology), Winkle, Katie (KTH Royal Institute of Technology), Belgiovine, Giulia (Istituto Italiano Di Tecnologia), Leite, Iolanda (KTH Royal Institute of Technology) |
Keywords: Child-Robot Interaction, Robot Companions and Social Robots, Applications of Social Robots
Abstract: Successful, enjoyable group interactions are important in public and personal contexts, especially for teenagers whose peer groups are important for self-identity and self-esteem. Social robots seemingly have the potential to positively shape group interactions, but it seems difficult to effect such impact by designing robot behaviors solely based on related (human interaction) literature. In this article, we take a user-centered approach to explore how teenagers envisage a social robot ``group assistant''. We engaged 16 teenagers in focus groups, interviews, and robot testing to capture their views and reflections about robots for groups. Over the course of a two-week summer school, participants co-designed the action space for such a robot and experienced working with/wizarding it for 10+ hours. This experience further altered and deepened their insights into using robots as group assistants. We report results regarding teenagers' views on the applicability and use of a robot group assistant, how these expectations evolved throughout the study, and their repeat interactions with the robot. Our results indicate that each group moves on a spectrum of need for the robot, reflected in use of the robot more (or less) for ice-breaking, turn-taking, and fun-making as the situation demanded.
|
|
15:00-15:12, Paper Th601.6 | |
Motivating Children to Practice Perspective-Taking through Playing Games with Cozmo |
|
Yadollahi, Elmira (KTH), Couto, Marta (INESC-ID), Dillenbourg, Pierre (EPFL), Paiva, Ana (INESC-ID and Instituto Superior Técnico, TechnicalUniversity Of) |
Keywords: Child-Robot Interaction, Cognitive Skills and Mental Models, Robots in Education, Therapy and Rehabilitation
Abstract: Recent studies with children have pointed out the importance of spatial thinking as an essential factor in determining later success in STEM-related fields. The current study explores the potential of using embodied activities with robots to aid the development of children's spatial perspective-taking abilities. This research focuses on evaluating children's spatial perspective-taking abilities and assessing the potential of the designed activity for practicing perspective-taking. The activity design is inspired by the dynamic and mental processes involved in remote-controlled cars and racing games; it is developed with a Cozmo robot and involves guiding the robot through a maze while considering the robot's point of view. We evaluated the activity in a user study with 22 elementary school children between the ages of 8 and 9. The findings showed that children's performance at different angular disparities was aligned with previous research in developmental psychology. Additionally, most children made fewer mistakes in guiding the robot as they played more. Finally, while we did not observe any performance improvement in the group of children who had access to the robot's point of view during the game, we gained new insights into how children perceived seeing the maze through the robot's eyes.
|
|
15:12-15:24, Paper Th601.7 | |
An Initial Investigation into the Use of Social Robots within an Existing Educational Program for Students with Learning Disabilities |
|
Azizi, Negin (University of Waterloo), chandra, shruti (University of Waterloo), Gray, Michael (Learning Disabilities Society), Sager, Melissa Sager (Learning Disabilities Society), Fane, Jennifer Fane (Learning Disabilities Society), Dautenhahn, Kerstin (University of Waterloo) |
Keywords: Child-Robot Interaction, Assistive Robotics, Robots in Education, Therapy and Rehabilitation
Abstract: Students with a learning disability (LD) generally require supplementary one-to-one instruction and support to acquire the foundational academic skills learned at school. Because learning is more difficult for students with LD, they can frequently display off-task behaviours to avoid attempting or completing challenging learning tasks. Re-directing students back to their learning task is a frequent strategy used by educators to support them. However, there have been few studies investigating the use of assistive technology to support student re-direction, specifically in a ``real-world'' educational setting. In this in situ study, we investigate the impact of integrating a socially assistive robot to provide re-direction strategies to students. A social robot, QT, was employed within an existing learning program during one-to-one remedial instruction sessions. The study comprised two phases, ``Instruction as usual'' (IAU) and ``Robot-mediated instructions'' (RMI). Both followed the students' one-to-one instructional program, in which students receive personalised learning support from their instructors, except that the RMI phase included a social robot as a tool. We investigated the impact of the robot on students' on-task behaviours and progress towards learning goals. The results of our mixed-methods analysis suggest that the robotic intervention supported students in staying on-task and completing their learning goals.
|
|
15:24-15:36, Paper Th601.8 | |
A Sample Efficiency Improved Method Via Hierarchical Reinforcement Learning Networks |
|
Chen, Qinghua (Oakland University), dallas, evan (Oakland University), Shahverdi, Pourya (Oakland University, Michigan, USA), Korneder, Jessica (Oakland University,), Rawashdeh, Osamah (Oakland University), Louie, Wing-Yue Geoffrey (Oakland University) |
Keywords: Applications of Social Robots, Machine Learning and Adaptation, Child-Robot Interaction
Abstract: Learning from demonstration (LfD) approaches have garnered significant interest for teaching social robots a variety of tasks in healthcare, educational, and service domains after they have been deployed. These LfD approaches often require a significant number of demonstrations for a robot to learn a performant model from task demonstrations. However, requiring non-experts to provide numerous demonstrations for a social robot to learn a task is impractical in real-world applications. In this paper, we propose a method to improve the sample efficiency of existing learning from demonstration approaches via data augmentation, dynamic experience replay sizes, and hierarchical Deep Q-Networks (DQN). After validating our methods on two different datasets, results suggest that our proposed hierarchical DQN is effective for improving sample efficiency when learning tasks from demonstration. In the future, such a sample-efficient approach has the potential to improve our ability to apply LfD approaches for social robots to learn tasks in domains where demonstration data is limited, sparse, and imbalanced.
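A minimal sketch of one ingredient the abstract lists, a dynamically resizable experience-replay buffer; the resizing schedule and the hierarchical DQN around it are simplified assumptions, not the paper's implementation.

import random
from collections import deque

class DynamicReplayBuffer:
    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)

    def push(self, transition):  # transition = (s, a, r, s_next, done)
        self.buffer.append(transition)

    def resize(self, new_capacity):
        # Rebuilding the deque keeps only the most recent transitions
        # when the capacity shrinks.
        self.buffer = deque(self.buffer, maxlen=new_capacity)

    def sample(self, batch_size):
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))

buf = DynamicReplayBuffer(capacity=1000)
for t in range(1500):
    buf.push((t, 0, 0.0, t + 1, False))
buf.resize(500)          # e.g., tighten replay once early data grows stale
batch = buf.sample(32)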
|
|
15:36-15:48, Paper Th601.9 | |
Robot-Mediated Group Instruction for Children with ASD: A Pilot Study |
|
Trombly, Madeline (Oakland University), Shahverdi, Pourya (Oakland University, Michigan, USA), Huang, Nathan (Oakland University), Chen, Qinghua (Oakland University), Korneder, Jessica (Oakland University,), Louie, Wing-Yue Geoffrey (Oakland University) |
Keywords: Robots in Education, Therapy and Rehabilitation, Child-Robot Interaction, Applications of Social Robots
Abstract: Children diagnosed with autism spectrum disorder (ASD) typically work towards acquiring the skills needed to participate in a regular classroom setting, such as attending and appropriately responding to an instructor’s requests. Social robots have the potential to support children with ASD in learning group-interaction skills. However, the majority of studies targeting the interactions of children with ASD with social robots have been limited to one-on-one settings. Group interaction sessions present unique challenges, such as the unpredictable behaviors of the other children participating in the session and attention shared with the instructor. We present the design of a robot-mediated group interaction intervention for children with ASD that enables them to practice the skills required to participate in a classroom. We also present a study investigating differences in children's learning behaviors during robot-led and human-led group interventions over multiple sessions. The results suggest that children with ASD exhibit similar learning behaviors during human and robot instruction. Furthermore, preliminary results suggest that no novelty effect was observed when children interacted with the robot over multiple sessions.
|
|
15:48-16:00, Paper Th601.10 | |
Closed-Loop Position Control of a Pediatric Soft Robotic Wearable Device for Upper Extremity Assistance |
|
Mucchiani, Caio (University of California Riverside), Liu, Zhichao (University of California, Riverside), Sahin, Ipsita (University of California, Riverside), Dube, Jared (UC Riverside), Vu, Linh (UC Riverside), Kokkoni, Elena (University of California, Riverside), Karydis, Konstantinos (University of California, Riverside) |
Keywords: Assistive Robotics, Robots in Education, Therapy and Rehabilitation, Child-Robot Interaction
Abstract: This work focuses on closed-loop control based on proprioceptive feedback for a pneumatically-actuated soft wearable device aimed at future support of infant reaching tasks. The device comprises two soft pneumatic actuators (one textile-based and one silicone-casted) actively controlling two degrees-of-freedom per arm (shoulder adduction/abduction and elbow flexion/extension, respectively). Inertial measurement units (IMUs) attached to the wearable device provide real-time joint angle feedback. Device kinematics analysis is informed by anthropometric data from infants (arm lengths) reported in the literature. Range of motion and muscle co-activation patterns in infant reaching are considered to derive desired trajectories for the device's end-effector. Then, a proportional-derivative controller is developed to regulate the pressure inside the actuators and in turn move the arm along desired setpoints within the reachable workspace. Experimental results on tracking desired arm trajectories using an engineered mannequin are presented, demonstrating that the proposed controller can help guide the mannequin's wrist to the desired setpoints.
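A minimal sketch of the proportional-derivative loop the abstract describes, regulating actuator pressure from IMU joint-angle feedback; the gains, loop rate, and the read_imu_angle/set_pressure interfaces are hypothetical placeholders, not the paper's implementation.

import time

KP, KD = 2.0, 0.15   # assumed PD gains
DT = 0.02            # 50 Hz control loop

def read_imu_angle():
    return 0.0       # placeholder for the IMU joint-angle estimate (deg)

def set_pressure(command):
    pass             # placeholder for the pneumatic valve interface

def pd_step(setpoint, angle, prev_error):
    error = setpoint - angle
    d_error = (error - prev_error) / DT
    return KP * error + KD * d_error, error

prev_error = 0.0
for setpoint in [10.0, 20.0, 30.0]:   # desired elbow flexion angles (deg)
    for _ in range(100):              # hold each setpoint for 2 s
        command, prev_error = pd_step(setpoint, read_imu_angle(), prev_error)
        set_pressure(command)
        time.sleep(DT)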
|
|
Th602 |
Aragonese/Catalana |
Non-Verbal Cues and Expressiveness III |
Regular Session |
Chair: Tanaka, Fumihide | University of Tsukuba |
Co-Chair: Lawson, Wallace | US Naval Research Laboratory |
|
14:00-14:12, Paper Th602.1 | |
A Non-Humanoid Robotic Object for Providing a Sense of Security |
|
Manor, Adi (Reichman University), Todress, Etay (Reichman University), Megidish, Benny (Media Innovation Lab, the Interdisciplinary Center (IDC) Herzliy), Mikulincer, Mario (Reichman University), Erel, Hadas (Media Innovation Lab, Interdisciplinary Center Herzliya) |
Keywords: Non-verbal Cues and Expressiveness, Motivations and Emotions in Robotics, Creating Human-Robot Relationships
Abstract: Having a sense of security is considered a basic human emotional need. It increases confidence, encourages exploration, and enhances relationships with others. In this study we tested the possibility of leveraging the interaction with a simple non-humanoid robot for increasing participants’ sense of security. The robotic behavior was designed with a psychologist expert in attachment theory and was translated into the robot’s morphology by an animator. Specifically, the robot was designed to be attentive and responsive using lean, gaze and nodding gestures. We compared participants’ experience in the secure condition to the experience of participants who interacted with a non-responsive robot. We further compared the participants’ implicit sense of security between the robotic conditions and an additional baseline condition in which participants did not interact with the robot. Our findings indicate the potential in leveraging a simple non-humanoid robot for enhancing humans' sense of security.
|
|
14:12-14:24, Paper Th602.2 | |
Eye Design of Social Robots Inspired by the Difference of Gaze Clarity in Canid Species |
|
Ouchi, Yuri (University of Tsukuba), Tanaka, Fumihide (University of Tsukuba) |
Keywords: Non-verbal Cues and Expressiveness, Creating Human-Robot Relationships
Abstract: The clarity of the position of the eyes and pupils in canid species varies depending on their sociality. Existing social robots, on the other hand, serve a wide range of applications, including education, medical care, and customer service, yet most have a simple eye design with a white base and a black iris. However, as in canid species, it is conceivable that the appropriate communication intensity varies depending on the purpose of a robot, and that there is a suitable eye design for each use. In this study, we propose appropriate levels of gaze clarity for a robot in three situations: when the avatar speaks to a human, when it listens to a human, and when it is near a human performing a task that requires concentration.
|
|
14:24-14:36, Paper Th602.3 | |
Expression of Personality by Gaze Movements of an Android Robot in Multi-Party Dialogues |
|
Shintani, Taiken (Osaka University), Ishi, Carlos Toshinori (RIKEN), Ishiguro, Hiroshi (Osaka University) |
Keywords: Non-verbal Cues and Expressiveness, Personalities for Robotic or Virtual Characters, Multimodal Interaction and Conversational Skills
Abstract: In this study, we describe an improved version of our model for generating the gaze movements (eye and head movements) of a dialogue robot in multi-party dialogue situations, and investigate how impressions change for models created from the data of speakers with different personalities. For this purpose, we used multimodal three-party dialogue data and first analyzed the distributions of (1) the gaze target (towards dialogue partners or gaze aversion), (2) the gaze duration, and (3) the eyeball direction during gaze aversion. We then generated gaze behaviors in an android robot using the data of two people who were found to have distinctive personalities, and conducted subjective evaluation experiments. Results showed a significant difference in the perceived personalities of the motions generated by the two models.
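A minimal sketch of how gaze behaviour could be sampled from the three analyzed distributions; the probabilities and durations below are invented placeholders, not the paper's measured values.

import random

target_probs = {"addressee": 0.45, "side_participant": 0.25, "aversion": 0.30}
duration_s = {"addressee": (1.0, 3.0),
              "side_participant": (0.5, 1.5),
              "aversion": (0.3, 1.0)}

def next_gaze():
    target = random.choices(list(target_probs),
                            weights=list(target_probs.values()))[0]
    lo, hi = duration_s[target]
    return target, random.uniform(lo, hi)

for _ in range(5):
    target, dur = next_gaze()
    print(f"gaze at {target} for {dur:.1f} s")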
|
|
14:36-14:48, Paper Th602.4 | |
Agree or Disagree? Generating Body Gestures from Affective Contextual Cues During Dyadic Interactions |
|
Tuyen, Nguyen Tan Viet (King's College London), Celiktutan, Oya (King's College London) |
Keywords: Non-verbal Cues and Expressiveness, Multimodal Interaction and Conversational Skills, Machine Learning and Adaptation
Abstract: Humans naturally produce nonverbal signals such as facial expressions, body movements, hand gestures, and tone of voice, along with words, to communicate their messages, opinions, and feelings. Considering robots are progressively moving out from research laboratories into human environments, it is increasingly desirable that they develop a similar social intelligence. Therefore, equipping social robots with nonverbal communication skills has been an active research area for decades, where data-driven, end-to-end learning approaches have become predominant in recent years, offering scalability and generalisability. However, most of these approaches consider a single character, modelling intrapersonal dynamics only. In this paper, we propose a method based on conditional Generative Adversarial Networks, intending to generate behaviours for a robot in affective dyadic interactions. Our method takes as an input the audio of a target person together with the nonverbal signals of their interacting partner, modelled by a novel Context Encoder, to generate appropriate body gestures. We evaluate our method on the multimodal JESTKOD dataset that comprises dyadic interactions under agreement and disagreement scenarios. The experimental results show that Context Encoder can better contribute to the prediction of co-speech gestures in agreement situations.
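A minimal sketch of the generator side of such a model, with a recurrent Context Encoder summarizing the partner's nonverbal signals and conditioning a gesture generator on the target speaker's audio; all shapes and layers are illustrative assumptions, and the adversarial discriminator is omitted for brevity.

import torch
import torch.nn as nn

class ContextEncoder(nn.Module):
    def __init__(self, partner_dim=16, ctx_dim=32):
        super().__init__()
        self.gru = nn.GRU(partner_dim, ctx_dim, batch_first=True)

    def forward(self, partner_seq):        # (B, T, partner_dim)
        _, h = self.gru(partner_seq)
        return h[-1]                       # (B, ctx_dim) context summary

class GestureGenerator(nn.Module):
    def __init__(self, audio_dim=40, ctx_dim=32, noise_dim=8, pose_dim=24):
        super().__init__()
        self.gru = nn.GRU(audio_dim + ctx_dim + noise_dim, 128, batch_first=True)
        self.head = nn.Linear(128, pose_dim)

    def forward(self, audio_seq, ctx, z):
        T = audio_seq.size(1)
        cond = torch.cat([ctx, z], dim=-1).unsqueeze(1).expand(-1, T, -1)
        out, _ = self.gru(torch.cat([audio_seq, cond], dim=-1))
        return self.head(out)              # (B, T, pose_dim) body poses

enc, gen = ContextEncoder(), GestureGenerator()
audio = torch.randn(2, 100, 40)            # target speaker's audio features
partner = torch.randn(2, 100, 16)          # partner's nonverbal signals
poses = gen(audio, enc(partner), torch.randn(2, 8))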
|
|
14:48-15:00, Paper Th602.5 | |
Evaluation of Expressive Motions Based on the Framework of Laban Effort Features for Social Attributes of Robots |
|
Emir, Ebru (University of Waterloo), Burns, Catherine Marie (University of Waterloo) |
Keywords: Personalities for Robotic or Virtual Characters, Non-verbal Cues and Expressiveness, Human Factors and Ergonomics
Abstract: In today's world, it is not uncommon to see robots adopted in various domains and environments. Robots take over several roles and tasks from manufacturing facilities to households and offices. It is crucial to measure people's judgment of robots' social attributes since the findings can shape the future design for social robots. Using only a simple and mono-functional robotic vacuum cleaner, this paper investigates the impact of expressive motions on how people perceive the social attributes of the robot. The Laban Effort Features, a framework for movement analysis that emerged from dance, was modified to design expressive motions for a simple cleaning task. Participants were asked to rate the social attributes of the robot under several treatment conditions using a video-based online survey. The results indicated that velocity influenced people's ratings of the robot's warmth and competence, while path planning behavior influenced people's ratings of the robot's competence and discomfort. Limitations of this study include the kinematic constraints of the robot, potential issues with survey design, and technical constraints related to the open interface provided by the robot’s developer. The findings should be considered when incorporating expressive motions into domestic service robots operating in social settings.
|
|
15:00-15:12, Paper Th602.6 | |
Salient Keypoints for Interactive Meta-Learning (SKIML) |
|
Lawson, Wallace (US Naval Research Laboratory), Harrison, Anthony (Naval Research Laboratory), Chang, Mai Lee (University of Texas at Austin), Adams, William (US Naval Research Laboratory), Trafton, Greg (Naval Research Laboratory) |
Keywords: Machine Learning and Adaptation, Non-verbal Cues and Expressiveness, Detecting and Understanding Human Activity
Abstract: Learning to recognize new objects in real time in unconstrained environments presents significant challenges for robotic platforms. We present a meta-learning solution to this problem as well as a registered image and events dataset to facilitate work in this domain. Our solution uses interactive motion to isolate the object, and motion-based saliency (from events) to select relevant keypoints from a high-resolution RGB image. Salient keypoints are then passed to a meta-learner to classify the object type. We show that using our interactive isolation and keypoint selection approach, we outperform existing techniques by 6-20%.
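A minimal sketch of the keypoint-selection step, keeping only RGB keypoints that fall inside an event-based motion-saliency mask; the synthetic image, mask, and use of ORB are assumptions for illustration, not the paper's pipeline.

import cv2
import numpy as np

rgb = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)
event_saliency = np.zeros((480, 640), dtype=np.uint8)
event_saliency[180:300, 260:420] = 1   # region where events indicate motion

orb = cv2.ORB_create(nfeatures=500)
keypoints = orb.detect(cv2.cvtColor(rgb, cv2.COLOR_BGR2GRAY), None)
salient = [kp for kp in keypoints
           if event_saliency[int(kp.pt[1]), int(kp.pt[0])] == 1]
print(f"kept {len(salient)} of {len(keypoints)} keypoints")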
|
|
15:12-15:24, Paper Th602.7 | |
Seeing Is Not Feeling the Touch from a Robot |
|
Kunold, Laura (Ruhr University Bochum) |
Keywords: Social Touch in Human–Robot Interaction, Non-verbal Cues and Expressiveness
Abstract: A pre-registered, conceptual, video-based replication of a laboratory experiment was conducted to test whether the impact of a robot’s non-functional touch on a human can be studied from observation (online). To this end, n=92 participants watched a video recording of the same human–robot interaction either with or without touch. Their interpretations, evaluations, and emotional as well as behavioral responses were collected by means of an online survey. The results show that the observation of touch affects observers’ emotional state: contrary to what was hypothesized, observers felt significantly better when no touch was visible, and they evaluated the robot’s touch as inappropriate. The findings are compared to results from a laboratory experiment to raise awareness of the different perspectives involved in observing versus experiencing touch.
|
|
15:24-15:36, Paper Th602.8 | |
Don't Get into Trouble! Risk-Aware Decision-Making for Autonomous Vehicles |
|
Mokhtari, Kasra (Indi EVq), Wagner, Alan Richard (Penn State University) |
Keywords: Social Touch in Human–Robot Interaction, Social Intelligence for Robots, Motion Planning and Navigation in Human-Centered Environments
Abstract: Risk is traditionally described as the expected likelihood of an undesirable outcome, such as a collision for an autonomous vehicle. Accurately predicting risk or potentially risky situations is critical for the safe operation of an autonomous vehicle. This work combines a controller trained to navigate around individuals in a crowd with a risk-based decision-making framework for an autonomous vehicle that integrates high-level risk-based path planning with reinforcement learning-based low-level control. We evaluated our method in a high-fidelity simulation environment. Our method resulted in zero collisions with pedestrians and predicted the least risky path, travel time, or day to travel in approximately 72% of traversals. This work can improve safety by allowing an autonomous vehicle to one day avoid and react to risky situations.
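A minimal sketch of risk-based route selection in the abstract's sense, scoring each candidate by expected risk (incident probability times cost); the routes and numbers are invented placeholders, and the paper's framework is considerably richer.

routes = {
    "downtown":  {"p_incident": 0.08, "cost": 10.0, "travel_min": 18},
    "ring_road": {"p_incident": 0.02, "cost": 10.0, "travel_min": 26},
    "campus":    {"p_incident": 0.05, "cost":  6.0, "travel_min": 21},
}

def expected_risk(route):
    return route["p_incident"] * route["cost"]

best = min(routes, key=lambda name: expected_risk(routes[name]))
for name, r in routes.items():
    print(f"{name}: expected risk = {expected_risk(r):.2f}, {r['travel_min']} min")
print("least risky route:", best)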
|
|
15:36-15:48, Paper Th602.9 | |
Physical Touch from a Robot Caregiver: Examining Factors That Shape Patient Experience |
|
Mazursky, Alex (University of Chicago), DeVoe, Madeleine (University of Chicago), Sebo, Sarah (University of Chicago) |
Keywords: Social Touch in Human–Robot Interaction, Medical and Surgical Applications, Applications of Social Robots
Abstract: Robot-initiated touch is a promising mode of expression that would allow robot caregivers to perform physical tasks (instrumental touch) and provide comfort (affective touch) in healthcare settings. To understand the factors that shape how people respond to touch from a robotic caregiver, we conducted a crowdsourced study (N=163) examining how robot-initiated touch (present or absent), the robot’s intent (instrumental or affective), robot appearance (Nao or Stretch), and robot tone (empathetic or serious) impact the perceived quality of care. Results show that participants prefer instrumental to affective touch, view the robot as having greater social attributes (higher warmth, higher competence, and lower discomfort) after robot-initiated touch, are more comfortable interacting with the human-like Nao than the more machine-like Stretch, and favor consistent robot tone and appearance. From these results, we derived three design guidelines for caregiving robots in healthcare settings.
|
|
15:48-16:00, Paper Th602.10 | |
Public Perception, Privacy, Safety, and Ethical Considerations of Communication Robots in Law Enforcement |
|
Salehzadeh, Roya (The University of Alabama), Bordbar, Fareed (The University of Alabama), Griffin, Darrin (The University of Alabama), Cousin, Christian (The University of Alabama), Jalili, Nader (The University of Alabama) |
Keywords: Social Touch in Human–Robot Interaction, Social Presence for Robots and Virtual Humans, User-centered Design of Robots
Abstract: To assist and protect citizen communities and police officers, robots have been developed for situational responses (e.g., explosive ordnance disposal). However, the robots used by law enforcement are typically expensive, can be difficult to operate, and do not readily facilitate communication between individuals. Recent research by the authors examined how communication impacts trust between robots and humans in the context of law enforcement. Using a mobile communication robot, law enforcement officers (LEOs) reported high levels of trust because the robot provided near face-to-face interaction using screens, microphones, and speakers. This paper seeks to expand upon those findings by discussing public perception, privacy, safety, and ethical considerations as they pertain to communication robots in law enforcement. In the following, the authors explain their primary research thrusts and provide a plan for expanded stakeholder involvement in future research. In future work, the authors will collaborate with stakeholders to develop ethically grounded communication robots and accompanying education programs that enhance communication, trust, transparency, and accessibility between LEOs and citizen communities.
|
|
Th603 |
Sveva/Normanna |
Ethical Issues in Human-Robot Interaction Research II |
Regular Session |
Co-Chair: Mansouri, Masoumeh | Birmingham University |
|
14:00-14:12, Paper Th603.1 | |
Investigating Gender-Stereotyped Interactions with Virtual Agents in Public Spaces |
|
Müller, Ana (University of Applied Sciences Cologne), Penkert, Lydia (University of Applied Sciences Cologne), Schneider, Sebastian (Technische Hochschule Köln), Richert, Anja (University of Applied Sciences Cologne) |
Keywords: Ethical Issues in Human-robot Interaction Research, Robotic Etiquette
Abstract: Current research on the impact of gender appearance in virtual agents and social robots highlights the danger of transmitting and solidifying existing gender stereotypes. To investigate gender-stereotyped interactions in public spaces as a function of a virtual agent's gender appearance, we varied the gender of a virtual agent at a metro station. We used an ethnographic study approach, combining a two-day behavior observation and semi-structured interviews with a descriptive and qualitative analysis of four weeks of system logs. Our results show that conversation topics differ depending on the virtual agent's gender: the male virtual agent was asked about topics such as brothels, drugs, and alcohol and was insulted frequently, while the female agent was asked about her relationship status or flirted with.
|
|
14:12-14:24, Paper Th603.2 | |
AI Bias in Human-Robot Interaction: An Evaluation of the Risk in Gender Biased Robots |
|
Hitron, Tom (Media Innovation Lab, Reichman University, Herzliya, Israel), Megidish, Benny (Media Innovation Lab, the Interdisciplinary Center (IDC) Herzliy), Todress, Etay (Reichman University), Morag Yaar, Noa (Media Innovation Lab, Reichman University), Erel, Hadas (Media Innovation Lab, Interdisciplinary Center Herzliya) |
Keywords: Ethical Issues in Human-robot Interaction Research, Social Presence for Robots and Virtual Humans, Applications of Social Robots
Abstract: With recent advancements in AI, there are growing concerns about human biases implemented in AI decisions. Threats posed by AI bias may be even more drastic when applied to robots that are perceived as independent entities and are not mediated by humans. Furthermore, technology is typically perceived as objective and there is a risk that people will embrace its decisions without considering possible biases. In order to understand the extent of threats brought about by such biases, we evaluated participants' responses to a gender-biased robot mediating a debate between two participants (male and female). The vast majority of participants did not associate the robot's behavior with a bias, despite being informed that the robot's algorithm is based on human examples. Participants attributed the robot's decisions to their own performance and used explanations involving gender stereotypes. Our findings suggest that robots' biased behaviors can serve as validation for common human stereotypes.
|
|
14:24-14:36, Paper Th603.3 | |
Robot Self-Defense: Robots Can Use Force on Human Attackers to Defend Victims |
|
Kochenborger Duarte, Eduardo (Halmstad University), Shiomi, Masahiro (ATR), Vinel, Alexey (Halmstad University), Cooney, Martin (Halmstad University) |
Keywords: Ethical Issues in Human-robot Interaction Research, Philosophical Issues in Human-Robot Coexistence, Applications of Social Robots
Abstract: Could a social robot use force to prevent violence directed toward humans in its care? Might crime be eradicated, or conversely could excessive use of force proliferate and human dignity be trampled beneath cold robotic wheels? Such speculation is part of a larger, increasingly important question of how social robots will be expected to behave in our societies as robotic technologies develop and become increasingly widespread. Here, to gain insight into this topic of “robot self-defense”, we proposed a simplified heuristic based on the perceived risk of loss to predict acceptability, and conducted a user survey with 304 participants, who watched eight animated videos of robots and humans in a violent altercation. The results indicated that people largely accept the idea that a humanoid robot can use force on attackers to help others. Furthermore, self-defense was perceived as more acceptable when the appearance of the defender was humanoid rather than mechanical, and when the force disparity between attacker and defender was high. The immediate suggestion is that it could be beneficial to re-examine common assumptions that a robot should never harm or risk harming humans, and to discuss and consider the possibilities for robot self-defense.
|
|
14:36-14:48, Paper Th603.4 | |
Nothing about Us without Us: A Participatory Design for an Inclusive Signing Tiago Robot |
|
Antonioni, Emanuele (Sapienza University of Rome), Sanalitro, Cristiana (International Telematic University UNINETTUNO), Capirci, Olga (Consiglio Nazionale Delle Ricerche (Istituto ISTC)), Di Renzo, Alessio (Institute for Cognitive Sciences and Technologies - National Res), Maria Beatrice, D'aversa (School of LIS - SILIS Group), Bloisi, Domenico (University of Basilicata), Wang, Lun (Sapienza University of Rome), Bartoli, Ermanno (Sapienza, University of Rome), Diaco, Lorenzo (Sapienza University of Rome), Nardi, Daniele (Sapienza University of Rome), Presutti, Valentina (University of Bologna) |
Keywords: Ethical Issues in Human-robot Interaction Research, Creating Human-Robot Relationships, Linguistic Communication and Dialogue
Abstract: Successful interaction between the robotics community and the users of its services is of considerable importance when drafting the development plan of any technology. This aspect becomes even more relevant when dealing with sensitive services and issues, such as those related to interaction with specific subgroups of a population. Over the years, there have been few successes in integrating and proposing technologies related to deafness and sign language. In this paper, by contrast, we give an account of a successful interaction between a signing robot and the Italian deaf community, which occurred during the Smart City Robotics Challenge (SciRoc) 2021 competition. Thanks to the use of participatory design and the involvement of experts belonging to the deaf community from the early stages of the project, it was possible to create a technology that achieved significant results in terms of acceptance by the community itself and that could lead to significant advances in the technology's development as well.
|
|
14:48-15:00, Paper Th603.5 | |
Polite and Unambiguous Requests Facilitate Willingness to Help an Autonomous Delivery Robot and Favourable Social Attributions |
|
Boos, Annika (Technical University of Munich), Zimmermann, Markus (Starship Technologies), Zych, Monika (Starship Technologies), Bengler, Klaus (Technische Universitaet Muenchen) |
Keywords: Robotic Etiquette, Linguistic Communication and Dialogue, Human Factors and Ergonomics
Abstract: Robots are increasingly involved in tasks that require them to navigate social spaces shared with humans. Following social norms is considered a key requirement for such robots to ensure their acceptance and long-term use. This paper focuses on delivery robots, as these often encounter problems in their operational areas (in this case, a busy university campus) when they find their way blocked by people and cannot move on towards their goal destination. We explored automated cue triggering to resolve this situation autonomously, without the help of remote operators. Eighty-three pedestrians participated in a real-world study using a delivery robot. Four different cues were tested for their perceived politeness and ambiguity. The cues differed in the presence or absence of an instruction, the presence or absence of a justification for the request to let the robot pass, and the source orientation within the justification, which was either internally (self-) directed or externally (user-) directed. The results reveal a complex picture. Overall, verbal instructions, in comparison to staying mute, had a positive effect on social attributions to the robot. Contrary to our expectations, there was no significant difference in politeness between the different requests. Participants' willingness to let the robot pass was positively correlated with the perceived politeness of the requests and negatively correlated with their ambiguity.
|
|
15:00-15:12, Paper Th603.6 | |
A Review and Recommendations on Reporting Recruitment and Compensation Information in HRI Research Papers |
|
Cordero, Julia (Interaction Lab), Groechel, Thomas (University of Southern California), Mataric, Maja (University of Southern California) |
Keywords: Ethical Issues in Human-robot Interaction Research
Abstract: Study reproducibility and generalizability of results to broadly inclusive populations are crucial in any research. Previous meta-analyses in HRI have focused on the consistency of reported information from papers in various categories. However, members of the HRI community have noted that much of the information needed for reproducible and generalizable studies is not found in published papers. We address this issue by surveying the reported study metadata in the main proceedings of the 2021 IEEE International Conference on Robot & Human Interactive Communication (RO-MAN) and the past three years (2019 through 2021) of the main proceedings of the International Conference on Human-Robot Interaction (HRI) and alt.HRI. Based on the analysis results, we propose a set of recommendations for the HRI community that follow the longer-standing reporting guidelines of human-computer interaction (HCI), psychology, and other fields closely related to HRI. Finally, we examine two key areas for user study reproducibility: recruitment details and participant compensation. We find a lack of reporting in both of these study metadata categories: of the 414 studies across both conferences and all years, 258 failed to report the recruitment method and 255 failed to report compensation. This work provides guidance about the specific reporting improvements needed in the field of HRI.
|
|
15:12-15:24, Paper Th603.7 | |
Norm Learning with Reward Models from Instructive and Evaluative Feedback |
|
Rosen, Eric (Brown University), Hsiung, Eric (Brown University), Chi, Vivienne Bihe (Brown University), Malle, Bertram (Brown University) |
Keywords: Robotic Etiquette, Social Intelligence for Robots, Applications of Social Robots
Abstract: People are increasingly interacting with artificial agents in social settings, and as these agents become more sophisticated, people will have to teach them social norms. Two prominent teaching methods are instructing the learner how to act and giving evaluative feedback on the learner’s actions. Our empirical findings indicate that people naturally adopt both methods when teaching norms to a simulated robot, and they use the methods selectively as a function of the robot’s perceived expertise and learning progress. In our algorithmic work, we conceptualize a set of context-specific norms as a reward function and integrate learning from the two teaching methods under a single likelihood-based algorithm, which estimates a reward function that induces policies maximally likely to satisfy the teacher’s intended norms. We compare robot learning under various teacher models and demonstrate that a robot responsive to both teaching methods can learn to reach its goal and minimize norm violations in a grid-world navigation task. We improve the robot’s learning speed and performance by enabling teachers to give feedback at an abstract level (which rooms are acceptable to navigate) rather than at a low level (how to navigate any particular room).
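As a rough illustration of the likelihood-based estimation the abstract describes, the sketch below scores candidate reward functions against both feedback channels (instructive demonstrations via a Boltzmann action likelihood, and evaluative feedback treated as noisy approval of the learner's action) and returns the maximum-likelihood candidate. The grid world, likelihood forms, temperature, and candidate set are all assumptions for illustration, not the authors' algorithm.

```python
# Minimal sketch: maximum-likelihood reward estimation from instructive
# demonstrations and evaluative feedback in a toy grid world. All modeling
# choices below are illustrative assumptions, not the paper's method.
import itertools
import math

ROWS, COLS = 3, 3
ACTIONS = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}
GAMMA, TEMP = 0.9, 1.0  # discount factor and Boltzmann temperature (assumed)

def step(state, action):
    """Deterministic transition, clamped at the grid boundary."""
    r, c = state
    dr, dc = ACTIONS[action]
    return (max(0, min(ROWS - 1, r + dr)), max(0, min(COLS - 1, c + dc)))

def q_values(reward):
    """Q-values for a candidate reward (dict: state -> float) via value iteration."""
    states = list(itertools.product(range(ROWS), range(COLS)))
    V = {s: 0.0 for s in states}
    for _ in range(100):
        V = {s: max(reward[step(s, a)] + GAMMA * V[step(s, a)]
                    for a in ACTIONS) for s in states}
    return {(s, a): reward[step(s, a)] + GAMMA * V[step(s, a)]
            for s in states for a in ACTIONS}

def log_likelihood(reward, demos, evals):
    """Score a candidate reward against both teaching channels.

    demos: instructive feedback, a list of (state, action) the teacher showed.
    evals: evaluative feedback, a list of (state, action, approved) where
        approved is True/False for the learner's own action.
    """
    Q = q_values(reward)
    ll = 0.0
    for s, a in demos:  # Boltzmann likelihood of each demonstrated action
        z = sum(math.exp(Q[(s, b)] / TEMP) for b in ACTIONS)
        ll += Q[(s, a)] / TEMP - math.log(z)
    for s, a, approved in evals:  # approval modeled as the action being near-greedy
        p_good = math.exp(Q[(s, a)] / TEMP) / sum(
            math.exp(Q[(s, b)] / TEMP) for b in ACTIONS)
        ll += math.log(p_good if approved else 1.0 - p_good)
    return ll

def make_reward(goal, bad, penalty):
    """Candidate reward: +1 at the goal, `penalty` at a norm-violating cell."""
    return {s: 1.0 if s == goal else (penalty if s == bad else 0.0)
            for s in itertools.product(range(ROWS), range(COLS))}

candidates = [make_reward((2, 2), (1, 1), p) for p in (0.0, -0.5, -1.0)]
demos = [((0, 0), "right"), ((0, 1), "right")]       # teacher routes around (1, 1)
evals = [((0, 2), "down", True), ((1, 2), "left", False)]
best = max(candidates, key=lambda r: log_likelihood(r, demos, evals))
print("inferred penalty for the norm-violating cell:", best[(1, 1)])
```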
|
|
15:24-15:36, Paper Th603.8 | |
A Literature Review of Trust Repair in HRI |
|
Esterwood, Connor (University of Michigan), Robert, Lionel (University of Michigan) |
Keywords: Social Intelligence for Robots, Cooperation and Collaboration in Human-Robot Teams, Robotic Etiquette
Abstract: Trust is vital for effective human-robot teams. Trust is unstable, however, and it changes over time, with decreases in trust occurring when robots make mistakes. In such cases, certain strategies identified in the human–human literature can be deployed to repair trust, including apologies, denials, explanations, and promises. Whether these strategies work in the human–robot domain, however, remains largely unknown. This is primarily because of the fragmented and dispersed state of the current literature on trust repair in HRI. As a result, this paper brings together studies on trust repair in HRI and presents a more cohesive view of when apologies, denials, explanations, and promises have been seen to repair trust. In doing so, this paper also highlights possible gaps and proposes future work. This contributes to the literature in several ways but primarily provides a starting point for future research and recommendations for studies seeking to determine how trust can be repaired in HRI.
|
|
15:36-15:48, Paper Th603.9 | |
Privacy Expectations for Human-Autonomous Vehicle Interactions |
|
Bloom, Cara (MITRE), Emery, Josiah (MITRE) |
Keywords: Ethical Issues in Human-robot Interaction Research, Philosophical Issues in Human-Robot Coexistence
Abstract: Robots operating in public spaces, such as autonomous vehicles, will necessarily collect images and other data concerning the people and vehicles in their vicinity, raising privacy concerns. Common conceptions of privacy in robotics do not address the challenges of many-to-many surveillance, in which fleets of individual robots collect data on many people during operation. Technologists, legal scholars, and privacy researchers recommend that such technologies fulfill the reasonable privacy expectations of society, but there is no standard method for measuring privacy expectations. We propose a method informed by Contextual Integrity Theory for identifying societal privacy expectations for autonomous vehicle-collected data and codifying the contextual expectations as norms. We present a study (n=600) that identifies twelve distinct norms, each made up of contextual factors such as the subject of data collection and the data use. In a model of tolerance for autonomous vehicle data collection, we find that both contextual factors related to the data processing and factors related to the individual are significant predictors.
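The abstract does not name the statistical model, so the following is a hypothetical sketch of how such a tolerance model could be fit from vignette-style survey responses, here using logistic regression on synthetic stand-in data; the factor names, coding, and choice of model family are assumptions, not the authors' analysis.

```python
# Hypothetical sketch: fitting a tolerance model for AV data collection from
# vignette-style survey responses. The data are synthetic and the factor
# names/coding are assumptions; the abstract does not specify the model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 600  # matches the study's sample size; the responses below are fabricated

df = pd.DataFrame({
    # contextual factors related to the data processing (assumed levels)
    "data_use": rng.choice(["navigation", "advertising", "safety"], n),
    "subject": rng.choice(["pedestrian", "vehicle"], n),
    # a factor related to the individual respondent
    "age": rng.integers(18, 80, n),
})
# synthetic binary response: tolerance as a noisy function of the factors
logit_p = (0.8 * (df["data_use"] == "safety")
           - 1.2 * (df["data_use"] == "advertising")
           + 0.5 * (df["subject"] == "vehicle")
           - 0.01 * (df["age"] - 40))
df["tolerant"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

# logistic regression: which contextual and individual factors predict tolerance?
model = smf.logit("tolerant ~ C(data_use) + C(subject) + age", data=df).fit()
print(model.summary())
```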
|
|
15:48-16:00, Paper Th603.10 | |
Evaluating the Impact of Emotional Apology on Human-Robot Trust |
|
Xu, Jin (Georgia Institute of Technology), Howard, Ayanna (Georgia Institute of Technology) |
Keywords: Ethical Issues in Human-robot Interaction Research, Creating Human-Robot Relationships
Abstract: Previous research has shown that robot mistakes or malfunctions have a significant negative impact on people’s trust. One way to mitigate the negative impact of a trust violation is through trust repair. Although trust repair has been studied extensively, it is still not known which strategies are effective at repairing trust in a time-sensitive driving scenario. Additionally, prior research on trust repair has not dealt with the effects of expressing emotion when attempting trust repair. In this paper, we present the development of a variety of trust repair methods for a time-sensitive scenario, using a simulated driving environment as a testbed for validation. These trust repair methods included a baseline apology, an emotional apology, and an explanation. We conducted an experiment to compare the impact of these methods on human-robot trust. Experimental results indicated that the emotional apology positively affected more participants than no repair, the baseline apology, or the explanation. This study thus identified the emotional apology as the most effective method for the time-sensitive driving scenario.
|
| |