Last updated on August 21, 2022. This conference program is tentative and subject to change.
Technical Program for Wednesday August 31, 2022
|
We101 |
Auditorium |
Creating Human-Robot Relationships |
Regular Session |
Chair: de Graaf, Maartje | Utrecht University |
Co-Chair: Afyouni, Alia | Strate Design School |
|
08:30-08:42, Paper We101.1 | |
We Are All Individuals: The Role of Robot Personality and Human Traits in Trustworthy Interaction |
|
Lim, Meiyii (Heriot-Watt University), Lopes, Jose David (Heriot-Watt University), Robb, David A. (Heriot-Watt University), Wilson, Bruce W (Heriot-Watt University), Moujahid, Meriam (Heriot-Watt University), De Pellegrin, Emanuele (Heriot-Watt University), Hastie, Helen (School of Mathematical and Computer Sciences, Heriot-Watt University) |
Keywords: Personalities for Robotic or Virtual Characters, Creating Human-Robot Relationships, Applications of Social Robots
Abstract: As robots take on roles in our society, it is important that their appearance, behaviour and personality are appropriate for the job they are given and are perceived favourably by the people with whom they interact. Here, we provide an extensive quantitative and qualitative study exploring robot personality but, importantly, with respect to individual human traits. Firstly, we show that we can accurately portray personality in a social robot, in terms of extroversion-introversion using vocal cues and linguistic features. Secondly, through garnering preferences and trust ratings for these different robot personalities, we establish that, for a Robo-Barista, an extrovert robot is preferred and trusted more than an introvert robot, regardless of the subject's own personality. Thirdly, we find that individual attitudes and predispositions towards robots do impact trust in the Robo-Baristas, and are therefore important considerations in addition to robot personality, roles and interaction context when designing any human-robot interaction study.
|
|
08:42-08:54, Paper We101.2 | |
How Users, Facility Managers, and Bystanders Perceive and Accept a Navigation Robot for Visually Impaired People in Public Buildings |
|
Kayukawa, Seita (Waseda University), Sato, Daisuke (Carnegie Mellon University), Murata, Masayuki (IBM Research - Tokyo), Ishihara, Tatsuya (IBM Research - Tokyo), Kosugi, Akihiro (IBM Research - Tokyo), Takagi, Hironobu (IBM Research - Tokyo), Morishima, Shigeo (Waseda University), Asakawa, Chieko (Carnegie Mellon University) |
Keywords: Assistive Robotics, Creating Human-Robot Relationships
Abstract: Autonomous navigation robots have considerable potential to offer a new form of mobility aid to people with visual impairments. However, to deploy such robots in public buildings, it is imperative to gain acceptance not only from robot users but also from the people who use the buildings and the managers of those facilities. Therefore, we conducted three studies to investigate the acceptance of, and concerns about, our prototype robot, which looks like a regular suitcase. First, an online survey revealed that people could accept the robot navigating blind users. Second, in interviews with facility managers, they were cautious about the robot's camera and the privacy of their customers. Finally, focus group sessions with legally blind participants who experienced the robot navigation revealed that the robot may cause trouble when it collides with people who may not be aware of the user's blindness. Still, many participants liked the design of the robot, which assimilated into the surroundings.
|
|
08:54-09:06, Paper We101.3 | |
Living One Week with an Autonomous Pepper in a Rehabilitation Center: Lessons from the Field |
|
Afyouni, Alia (Strate Design School), Ocnarescu, Ioana (Strate), Cossin, Isabelle (Strate Research - Strate Ecole De Design), Kamoun, Emna (Strate School of Design), Mazel, Alexandre (Aldebaran-Robotics), Fattal, Charles (Centre Bouffard Vercelli Ussap) |
Keywords: Assistive Robotics, Creating Human-Robot Relationships, Social Presence for Robots and Virtual Humans
Abstract: Socially assistive robots (SARs) are conceived to improve quality of life by better assisting people in need, whether in rehabilitation, convalescence, or learning. In this paper we explore how patients could cohabit with a socially assistive robot in rehabilitation centers. We present a qualitative analysis of eight patients living with the Pepper robot for a week. The data come from videos of daily interactions between the patients and Pepper, patients' journal diaries, and reports written by the medical staff. Our analysis proposes an understanding of what worked in terms of a potential relationship between patients and Pepper, while the robot was still under development. Our study shows that notions like patience, politeness, and a general positive appreciation of the design of the robot create a social presence for the residents of the rehabilitation center.
|
|
09:06-09:18, Paper We101.4 | |
Ankle Intention Detection Algorithm with HD-EMG Sensor |
|
Kim, Inwoo (Yonsei University), Jung, Hyunjin (Yonsei University), Kim, Jongkyu (Yonsei University), Kim, Sihwan (Yonsei University), Park, Jonhyuk (Yonsei University), Lee, Soo-hong (Yonsei University) |
Keywords: Creating Human-Robot Relationships, Monitoring of Behaviour and Internal States of Humans, Machine Learning and Adaptation
Abstract: The ankle plays a very large role as an end effector in gait and upright posture. As the number of people with reduced ankle-joint mobility due to aging and nerve damage increases, rehabilitation and related research are steadily increasing. However, most studies overlook the eversion action, which plays an important role in stability. In this study, an intention detection algorithm including the eversion motion was developed, and a multi-channel EMG sensor module was developed and utilized. By moving the ankle in a specific direction, 36 channels of EMG signals were measured to determine the correlation between ankle motion and EMG signals. A CNN trained with the Adam optimizer was used, and ankle motion was estimated with high accuracy.
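The abstract names the ingredients of a standard deep-learning pipeline: a CNN classifier trained with the Adam optimizer on 36-channel EMG windows. A minimal PyTorch sketch of that kind of pipeline follows; the network shape, window length, and the five-class motion set are illustrative assumptions, not the authors' architecture.

```python
# Hypothetical sketch: CNN classifier for 36-channel EMG windows, trained
# with Adam, as the abstract describes. Shapes and class count are assumed.
import torch
import torch.nn as nn

class EMGNet(nn.Module):
    def __init__(self, n_channels=36, n_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(64, 128, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(128, n_classes)

    def forward(self, x):  # x: (batch, 36, window_length)
        return self.classifier(self.features(x).squeeze(-1))

model = EMGNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_step(emg_batch, labels):
    """One Adam update on a batch of labeled EMG windows."""
    optimizer.zero_grad()
    loss = loss_fn(model(emg_batch), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```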
|
|
09:18-09:30, Paper We101.5 | |
Humans and Robot Arm: Laban Movement Theory to Create Emotional Connection |
|
La Viola, Carlo (UNIFI), Fiorini, Laura (University of Florence), Mancioppi, Gianmaria (Scuola Superiore Sant'Anna), Kim, Jaeseok (Scuola Superiore Sant'Anna), Cavallo, Filippo (University of Florence) |
Keywords: Creating Human-Robot Relationships, Motivations and Emotions in Robotics
Abstract: Movement is one of the basic tools that humans use to convey emotional states. Body language and movement are also relevant to the perception that we have of other people. Many studies have been done on movement and emotion sharing, involving humans on one side and robots or automated agents on the other. These studies provide evidence that robots need to improve their social capabilities and develop more effective social interaction. This work aims to embed social movements in a robot manipulator that elicit an emotional response from users. Laban Movement Analysis is used to do so, and social movements are developed on a robot arm and then shown to participants in an online study with a questionnaire. The results show that it is possible to elicit emotions through movement alone and that the perception is not affected by personal experiences.
|
|
09:30-09:42, Paper We101.6 | |
Why Do You Think This Joke Told by Robot Is Funny? The Humor Style Matters |
|
Zhang, Heng (ENSTA Paris, Institut Polytechnique De Paris), Yu, Chuang (University of Manchester), Tapus, Adriana (ENSTA Paris, Institut Polytechnique De Paris) |
Keywords: Creating Human-Robot Relationships, Novel Interfaces and Interaction Modalities, Robot Companions and Social Robots
Abstract: Humor usually plays a positive role in social activities. We posit that endowing a social robot with a humor ability can enhance expressive human-robot interaction. People's perceptions of humor differ, and therefore making the robot express humor in an appropriate way is a challenge. The main aim of this paper is to explore the correlation between people's perception of different types of jokes and their humor styles (Affiliative, Self-enhancing, Self-defeating, Aggressive). In the experiment, we used the humanoid robot Pepper to perform different types of jokes. Both subjective (joke ratings) and objective measures (RGB and thermal images) were used. The latter were employed to extract facial features (facial action units and facial temperature). After extracting and analyzing the data from both measurement methods, we found that the Self-defeating humor style positively affects people's ratings of all types of jokes. In addition, there is also a positive correlation between people's humor style scores and their degree of happiness.
|
|
09:42-09:54, Paper We101.7 | |
Robots for Connection: A Co-Design Study with Adolescents |
|
Alves-Oliveira, Patrícia (University of Washington), Björling, Elin (University of Washington), Wiesmann, Patriya (University of Washington), Dwikat, Heba (University of Washington), Bhatia, Simran (University of Washington), Mihata, Kai (University of Washington), Cakmak, Maya (University of Washington) |
Keywords: Creating Human-Robot Relationships, Embodiment, Empathy and Intersubjectivity, Innovative Robot Designs
Abstract: Adolescents isolated at home during the COVID-19 pandemic lockdown are more likely to feel lonely and in need of social connection. Social robots may provide a much needed social interaction without the risk of contracting an infection. In this paper, we detail our co-design process used to engage adolescents in the design of a social robot prototype intended to broadly support their mental health. Data gathered from our four-week design study of nine remote sessions and interviews with 16 adolescents suggested the following design requirements for a home robot: (1) be able to enact a set of roles including a coach, companion, and confidant; (2) amplify human-to-human connection by supporting peer relationships; (3) account for data privacy and device ownership. Design materials are available in open access, contributing to best practices for the field of Human-Robot Interaction.
|
|
09:54-10:06, Paper We101.8 | |
Self-Disclosure to a Robot “In-The-Wild”: Category, Human Personality and Robot Identity |
|
Neerincx, Anouk (Utrecht University), Edens, Chantal (Utrecht University), Broz, Frank (TU Delft), Li, Yanzhe (Technical University of Delft), Neerincx, Mark (TNO) |
Keywords: Creating Human-Robot Relationships, Personalities for Robotic or Virtual Characters, Robot Companions and Social Robots
Abstract: Self-disclosures can be valuable and sensitive parts of human-robot interaction. This paper investigates how far humans' tendency to self-disclose depends on the topic of interaction, the individual's personality, and the perceived robot identity (i.e., human-, robot- or animal-like). The robot's (Pepper) identity was shown in its self-disclosure, interaction behaviors (gestures, sound and voice), and “clothing”. In an “in-the-wild” study at a science festival, 80 visitors interacted with one of these robot identities. When questioned by the robot, they disclosed more about their attitudes and opinions than about other categories. Significant correlations appeared between personality characteristics and the degree of self-disclosure, as well as differences in self-disclosure categories. The different robot identities showed no effects on disclosures.
|
|
10:06-10:18, Paper We101.9 | |
Exploring First Impressions of the Perceived Social Intelligence and Construal Level of Robots That Disclose Their Ability to Deceive |
|
Rogers, Kantwon (Georgia Institute of Technology), Howard, Ayanna (Georgia Institute of Technology) |
Keywords: Applications of Social Robots, Robot Companions and Social Robots, Creating Human-Robot Relationships
Abstract: If a robot tells you it can lie for your benefit, how would that change how you perceive it? This paper presents a mixed-methods empirical study that investigates how disclosure of deceptive or honest capabilities influences the perceived social intelligence and construal level of a robot. We first conduct a study with 198 Mechanical Turk participants, and then a replication of it with 15 undergraduate students in order to gain qualitative data. Our results show that how a robot introduces itself can have noticeable effects on how it is perceived--even from just one exposure. In particular, when a robot reveals that it has the ability to lie when it believes doing so is in the best interest of a human, people find it noticeably less trustworthy than a robot that conceals any honesty aspects or reveals total truthfulness. Moreover, robots that are forthcoming with their truthful abilities are seen in a lower construal than those that are transparent about their deceptive abilities. These results add much needed knowledge to the understudied area of robot deception and could inform designers and policy makers of future practices when considering deploying robots that deceive.
|
|
10:18-10:30, Paper We101.10 | |
Instructive Interaction for Redirection of Customer Attention from Robot to Service |
|
Baba, Jun (CyberAgent, Inc), Song, Sichao (CyberAgent Inc), Nakanishi, Junya (Osaka Univ), Yoshikawa, Yuichiro (Osaka University), Ishiguro, Hiroshi (Osaka University) |
Keywords: Curiosity, Intentionality and Initiative in Interaction, Applications of Social Robots, Creating Human-Robot Relationships
Abstract: Recommendations by social robots have been studied for a long time, and many existing studies have addressed in-store recommendations using robots. However, it has been pointed out that many studies conducted in “in-the-wild” field environments have only focused on the initial stages of customer purchase behavior, such as stopping and engaging in a conversation, and few have been able to induce interest in the product, let alone a purchase. One cause is that the robot itself attracts most of the customer's attention, making it difficult for customers to become interested in the robot's recommendations. To solve this problem, this study examines the inclusion of clear and specific instructions to customers in interactions in which the robot recommends services and products. We conducted a field experiment to confirm the effectiveness of such instructive interactions, and found that through them customers are more likely to be interested in the content recommended by the robot, rather than in the robot itself.
|
|
We102 |
Aragonese/Catalana |
Detecting and Understanding Human Activity |
Regular Session |
Chair: Sanfeliu, Alberto | Universitat Politècnica De Catalunya |
Co-Chair: Chan, Wesley Patrick | Monash University |
|
08:30-08:42, Paper We102.1 | |
Drowsiness Prevention Using a Social Robot |
|
Hara, Koki (Kyoto University), Takemoto, Ayumi (University of Latvia), Nakazawa, Atsushi (Kyoto University) |
Keywords: Applications of Social Robots, Detecting and Understanding Human Activity, Human Factors and Ergonomics
Abstract: Drowsiness is one of the major causes of accidents in driving and other tasks. Therefore, finding effective ways to prevent drowsiness and maintain an alert state is an important subject of study in human-machine systems such as autonomous driving. In this paper, we show the potential of using a social robot to prevent participants from becoming drowsy and to keep them alert. Twenty-five participants were asked to perform Sustained Attention to Response Tasks (SART) and report their subjective drowsiness levels. When the system detected drowsiness from the task reaction time or the self-reports, one of the following awakening alarms was triggered: (1) sound and the robot's movement (SRM), (2) sound from a (motionless) robot (SR), (3) sound only (no robot present) (SO), or (4) no stimulus (NS), as a control. The participants' task performance and self-reported drowsiness were continuously recorded to evaluate the effectiveness of each alarm condition. The experimental results showed a significant difference in the self-reported drowsiness scores between the SRM and SR conditions and the NS condition, while no significant difference was found between the SO and NS conditions. In addition, the response time was shorter for SRM. Since the only difference between SR and SO is the presence of the social robot, these results indicate that the presence of the social robot increases the participants' alertness level.
|
|
08:42-08:54, Paper We102.2 | |
Head Pose for Object Deixis in VR-Based Human-Robot Interaction |
|
Higgins, Padraig (University of Maryland, Baltimore County), Barron, Ryan (University of Maryland Baltimore County), Matuszek, Cynthia (University of Maryland, Baltimore County) |
Keywords: Virtual and Augmented Tele-presence Environments, Multimodal Interaction and Conversational Skills, Machine Learning and Adaptation
Abstract: Modern robotics heavily relies on machine learning and has a growing need for training data. Advances and commercialization of virtual reality (VR) present an opportunity to use VR as a tool to gather such data for human-robot interactions. We present the Robot Interaction in VR simulator, which allows human participants to interact with simulated robots and environments in real-time. We are particularly interested in spoken interactions between the human and robot, which can be combined with the robot's sensory data for language grounding. To demonstrate the utility of the simulator, we describe a study which investigates whether a user's head pose can serve as a proxy for gaze in a VR object selection task. Participants were asked to describe a series of known objects, providing approximate labels for the focus of attention. We demonstrate that using a concept of gaze derived from head pose can be used to effectively narrow the set of objects that are the target of participants' attention and linguistic descriptions.
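The study above uses head pose as a proxy for gaze to narrow the set of candidate objects. A minimal sketch of one way such narrowing could work follows; the cone threshold and the object layout are illustrative assumptions, not the paper's method.

```python
# Hypothetical sketch: narrow candidate objects to those lying within an
# angular cone around the head-pose ray. The 15-degree cone is an assumption.
import numpy as np

def candidate_objects(head_pos, head_forward, objects, max_angle_deg=15.0):
    """Return object names within a cone around the head-pose ray, nearest first."""
    forward = head_forward / np.linalg.norm(head_forward)
    candidates = []
    for name, pos in objects.items():
        to_obj = np.asarray(pos, dtype=float) - np.asarray(head_pos, dtype=float)
        to_obj /= np.linalg.norm(to_obj)
        angle = np.degrees(np.arccos(np.clip(forward @ to_obj, -1.0, 1.0)))
        if angle <= max_angle_deg:
            candidates.append((angle, name))
    return [name for _, name in sorted(candidates)]

objects = {"mug": (1.0, 0.2, 0.0), "book": (0.5, -1.0, 0.1)}
print(candidate_objects((0, 0, 0), np.array([1.0, 0.0, 0.0]), objects))  # ['mug']
```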
|
|
08:54-09:06, Paper We102.3 | |
Impacts of Teaching towards Training Gesture Recognizers for Human-Robot Interaction |
|
Tan, Jia Chuan Albert (Monash University), Chan, Wesley Patrick (Monash University), Robinson, Nicole Lee (Monash University), Kulic, Dana (Monash University), Croft, Elizabeth (Monash University) |
Keywords: Detecting and Understanding Human Activity, Machine Learning and Adaptation
Abstract: The use of hand-based gestures has been proposed as an intuitive way for people to communicate with robots. Typically the set of gestures is defined by the experimenter. However, existing works do not necessarily focus on gestures that are communicative, and it is unclear whether the selected gestures are actually intuitive to users. This paper investigates whether different people inherently use similar gestures to convey the same commands to robots, and how teaching gestures when collecting demonstrations for training recognizers can improve the resulting accuracy. We conducted this work in two stages. In Stage 1, we conducted an online user study (n=190) to investigate whether people use similar gestures to communicate the same set of given commands to a robot when no guidance or training is given. Results revealed large variations in the gestures used among individuals in the absence of training. Training a gesture recognizer using this dataset resulted in an accuracy of around 20%. In response to this, Stage 2 involved proposing a common set of gestures for the commands. We taught these gestures through demonstrations and collected ~7500 videos of gestures from study participants to train another gesture recognition model. Initial results showed improved accuracy, but a number of gestures had high confusion rates. After refining our gesture set and recognition model by removing those gestures, we achieved a final accuracy of 84.1 ± 2.4%. We integrated the gesture recognition model into the ROS framework and demonstrated a use case where a person commands a robot to perform a pick and place task using the gesture set.
|
|
09:06-09:18, Paper We102.4 | |
Preliminary Investigation of Collision Risk Assessment with Vision for Selecting Targets Paid Attention to by Mobile Robot |
|
Hayashi, Masaaki (Waseda University), Miyake, Tamon (Waseda University), Kamezaki, Mitsuhiro (Waseda University), Yamato, Junji (Kogakuin University), Saito, Kyosuke (Waseda University), Hamada, Taro (Waseda University), Sakurai, Eriko (Waseda University), Sugano, Shigeki (Waseda University), Ohya, Jun (Waseda University) |
Keywords: Detecting and Understanding Human Activity, Motion Planning and Navigation in Human-Centered Environments
Abstract: Vision plays an important role in motion planning for mobile robots that coexist with humans. Because camera-based pedestrian path prediction involves a trade-off between calculation speed and accuracy, such methods are not good at instantaneously detecting multiple people at a distance. In this study, we therefore present a method that uses visual recognition and prediction of transitions in human action states to assess collision risk when selecting the avoidance target. The proposed system calculates a risk assessment score based on recognition of human body direction, human walking patterns with an object, and face orientation, as well as prediction of transitions in human action states. First, we validated each recognition model and confirmed that the proposed system can recognize and predict human actions with high accuracy from 3 m ahead. Then, we compared the risk assessment score against video interviews in which humans indicated whom a mobile robot should pay attention to, and we found that the proposed system could capture the features of human states that people pay attention to when visually avoiding collisions with other people.
|
|
09:18-09:30, Paper We102.5 | |
Context and Intention for 3D Human Motion Prediction: Experimentation and User Study in Handover Tasks |
|
Laplaza, Javier (Universitat Politècnica De Catalunya), Garrell, Anais (UPC-CSIC), Moreno-Noguer, Francesc (CSIC), Sanfeliu, Alberto (Universitat Politècnica De Catalunya) |
Keywords: Machine Learning and Adaptation, Detecting and Understanding Human Activity, Curiosity, Intentionality and Initiative in Interaction
Abstract: In this work we present a novel attention-based deep learning model that uses context and human intention for 3D human body motion prediction in human-robot handover tasks. The model uses a multi-head attention architecture that takes as inputs the human motion, the robot end effector, and the position of the obstacles. The outputs of the model are the predicted motion of the human body and the predicted human intention. We use this model to analyze a collaborative handover task with a robot, where the robot is able to predict the future motion of the human and use this information in its planner. We perform several experiments and ask the human volunteers to complete a standard questionnaire rating different features of the task when the robot uses the prediction versus when it does not.
|
|
09:30-09:42, Paper We102.6 | |
The Atlas Benchmark: An Automated Evaluation Framework for Human Motion Prediction |
|
Rudenko, Andrey (Robert Bosch GmbH), Palmieri, Luigi (Robert Bosch GmbH), Huang, Wanting (Robert Bosch), Lilienthal, Achim J. (Orebro University), Arras, Kai Oliver (Bosch Research) |
Keywords: Evaluation Methods, Detecting and Understanding Human Activity, Motion Planning and Navigation in Human-Centered Environments
Abstract: Human motion trajectory prediction, an essential task for autonomous systems in many domains, has been on the rise in recent years. With a multitude of new methods proposed by different communities, the lack of standardized benchmarks and objective comparisons is increasingly becoming a major limitation to assess progress and guide further research. Existing benchmarks are limited in their scope and flexibility to conduct relevant experiments and to account for contextual cues of agents and environments. In this paper we present Atlas, a benchmark to systematically evaluate human motion trajectory prediction algorithms in a unified framework. Atlas offers data preprocessing functions and hyperparameter optimization, comes with popular datasets, and has the flexibility to set up and conduct underexplored yet relevant experiments to analyze a method's accuracy and robustness. In an example application of Atlas, we compare five popular model- and learning-based predictors and find that, when properly applied, early physics-based approaches are still remarkably competitive. Such results confirm the necessity of benchmarks like Atlas.
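The abstract does not name the accuracy measures Atlas reports, but trajectory-prediction benchmarks conventionally use average and final displacement error (ADE/FDE). A minimal illustrative sketch, under that assumption:

```python
# Hypothetical sketch: ADE/FDE, the accuracy metrics commonly used when
# benchmarking trajectory predictors. Illustrative only; the abstract does
# not state which metrics Atlas implements.
import numpy as np

def ade_fde(pred, truth):
    """pred, truth: (timesteps, 2) arrays of predicted/ground-truth positions."""
    errors = np.linalg.norm(pred - truth, axis=1)
    return errors.mean(), errors[-1]

pred = np.array([[0.0, 0.0], [1.0, 0.1], [2.0, 0.3]])
truth = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
print(ade_fde(pred, truth))  # (approx. 0.133, 0.3)
```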
|
|
09:42-09:54, Paper We102.7 | |
Proactive Robot Movements in a Crowd by Predicting and Considering the Social Influence |
|
Moder, Martin (University Duisburg-Essen), Pauli, Josef (Universität Duisburg-Essen) |
Keywords: Machine Learning and Adaptation, Social Learning and Skill Acquisition Via Teaching and Imitation, Multi-modal Situation Awareness and Spatial Cognition
Abstract: Dense crowds are challenging scenes for an autonomous mobile robot. Planning in such an interactive environment requires predicting uncertain human intentions and reactions to future robot actions. Concerning these capabilities, we propose a probabilistic forecasting model which factorizes the human motion uncertainty as follows: 1) A (conditioned) normalizing flow (CNF) estimates the densities of human goals. 2) The density of trajectories toward goals is predicted autoregressively (AR), where the density of individual social actions is inferred simultaneously for a dynamic number of humans. The underlying Gaussian AR framework is extended with our SocialSampling to counteract collisions during sampling. The model allows us to determine a crowd prediction conditional on a particular robot plan and a crowd prediction independent of it for the same goals. We demonstrate that the divergence between the two probabilistic predictions can be efficiently determined and we derive our Social Influence (SI) objective from it. Finally, a model-predictive policy for robot crowd navigation is proposed that minimizes the SI objective. Thus, the robot reflects its future movement in order not to disturb humans in their movement if possible. The experiments on real datasets show that the model achieves state-of-the-art accuracy in predicting pedestrian movements. Furthermore, our evaluations show that robot policy with our SI objective produces safe and proactive behaviors, such as taking evasive action at the right time to avoid conflicts.
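The abstract derives the Social Influence objective from the divergence between a robot-conditioned and an unconditioned crowd prediction. As a rough sketch of that idea, the following sums a KL divergence over pedestrians and timesteps, treating each predicted position as an isotropic Gaussian; the Gaussian form and the variance value are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch: an SI-style objective as summed KL divergence between
# robot-conditioned and unconditioned Gaussian position predictions.
import numpy as np

def kl_gaussian(mu_p, var_p, mu_q, var_q, d=2):
    """KL(p || q) for isotropic d-dimensional Gaussians."""
    return 0.5 * (d * var_p / var_q
                  + np.sum((mu_q - mu_p) ** 2) / var_q
                  - d + d * np.log(var_q / var_p))

def social_influence(cond_means, uncond_means, var=0.1):
    """Sum divergence over predicted pedestrian positions (all timesteps)."""
    return sum(kl_gaussian(mc, var, mu, var)
               for mc, mu in zip(cond_means, uncond_means))

# One pedestrian, two timesteps: conditioned vs. unconditioned predictions.
cond = [np.array([1.0, 0.0]), np.array([1.5, 0.2])]
uncond = [np.array([1.0, 0.0]), np.array([1.4, 0.0])]
print(social_influence(cond, uncond))  # larger when the robot plan perturbs the crowd more
```

A policy minimizing this quantity prefers plans under which the crowd is predicted to move as it would have without the robot, which matches the proactive, non-disturbing behavior the abstract describes.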
|
|
09:54-10:06, Paper We102.8 | |
Automatic Pathological Gait Recognition by a Mobile Robot Using Ultrawideband-Based Localization and a Depth Camera |
|
Jun, Kooksung (Gwangju Institute of Science and Technology), Oh, Sojin (Gwangju Institute of Science and Technology), Lee, Sanghyub (Gwangju Institute of Science and Technology), Lee, Deok-Won (Korea Institute of Science and Technology), Kim, Mun Sang (GIST) |
Keywords: Applications of Social Robots, Machine Learning and Adaptation, Detecting and Understanding Human Activity
Abstract: Human gaits constitute significant health information because they indicate body function conditions, such as sensory, motor, and cognitive functions. Monitoring gait patterns helps to find weakened body parts and to prevent diseases before they become serious. In this paper, we propose a novel method to monitor human gaits in indoor environments by using a mobile robot and artificial intelligence. A mobile robot periodically collects 3D skeleton data by using an installed depth camera, and the collected data are used to extract gait parameters and to classify normal, antalgic, stiff-legged, steppage, lurching, and Trendelenburg gaits via the proposed system. Ultrawideband (UWB)-based localization with Kalman filtering and odometry-based state estimation is adopted to allow a mobile robot to rapidly and accurately reach the target human. We use both 3D skeletons and joint-based angle data to achieve improved pathological gait classification performance. A bidirectional gated recurrent unit (Bi-GRU) architecture is adopted for fusion at the feature level. An accuracy of 97.64% is achieved by using the proposed multiple-input model, whereas 96.63% and 94.17% accuracies are achieved by the skeleton- and joint-based angle input models, respectively. The proposed automatic gait analysis method can contribute to improving contactless smart home care and quality of life.
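The abstract describes feature-level fusion of skeleton and joint-angle sequences with bidirectional GRUs. A minimal two-stream PyTorch sketch of such a model follows; the input dimensions (e.g., 25 joints x 3 coordinates, 12 joint angles) and layer sizes are assumptions, while the six gait classes mirror the abstract.

```python
# Hypothetical sketch: two-stream Bi-GRU fusing skeleton and joint-angle
# sequences at the feature level, classifying six gait types.
import torch
import torch.nn as nn

class GaitBiGRU(nn.Module):
    def __init__(self, skel_dim=75, angle_dim=12, hidden=64, n_classes=6):
        super().__init__()
        self.skel_gru = nn.GRU(skel_dim, hidden, batch_first=True, bidirectional=True)
        self.angle_gru = nn.GRU(angle_dim, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(4 * hidden, n_classes)  # 2 streams x 2 directions

    def forward(self, skel_seq, angle_seq):  # both: (batch, time, features)
        s, _ = self.skel_gru(skel_seq)
        a, _ = self.angle_gru(angle_seq)
        fused = torch.cat([s[:, -1], a[:, -1]], dim=-1)  # feature-level fusion
        return self.head(fused)

model = GaitBiGRU()
logits = model(torch.randn(8, 60, 75), torch.randn(8, 60, 12))  # (8, 6)
```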
|
|
10:06-10:18, Paper We102.9 | |
Listen and Tell Me Who the User Is Talking To: Automatic Detection of the Interlocutor’s Type During a Conversation |
|
Hmamouche, Youssef (International Artificial Intelligence Center of Morocco, Univers), Ochs, Magalie (LSIS Laboratory), Chaminade, Thierry (CNRS), Prevot, Laurent (Laboratoire Parole Et Langage, Aix-Marseille Universite, CNRS, U) |
Keywords: Computational Architectures, Detecting and Understanding Human Activity, Evaluation Methods
Abstract: In the well-known Turing test, humans have to judge whether they are writing to another human or to a chatbot. In this article, we propose a reversed Turing test adapted to live conversations: based on the speech of the human, we have developed a model that automatically detects whether she/he is speaking to an artificial agent or to a human. We propose a prediction methodology combining a step that extracts specific features from behaviour with a deep learning model based on recurrent neural networks. The prediction results show that our approach, and more particularly the considered features, significantly improves predictions compared to the traditional approach in the field of automatic speech recognition, which is based on spectral features such as Mel-frequency Cepstral Coefficients (MFCCs). Our approach allows the type of conversational agent, human or artificial, to be evaluated automatically, solely based on the speech of the human interlocutor. Most importantly, this model provides a novel and very promising approach to weighing the importance of the behavioural cues used to correctly recognize the nature of the interlocutor, in other words, which aspects of human behaviour adapt to the nature of the interlocutor.
|
|
10:18-10:30, Paper We102.10 | |
Recognizing Motion Onset During Robot-Assisted Body-Weight Unloading Is Challenging but Seems Feasible |
|
Haji Hassani, Roushanak (University of Basel), Bolliger, Marc (Balgrist University Hospital), Rauter, Georg (University of Basel) |
Keywords: Detecting and Understanding Human Activity, Assistive Robotics, Machine Learning and Adaptation
Abstract: Patients with neurological gait impairments fear falling while performing locomotor tasks such as sit-to-stand, stand-to-sit, and level-ground walking without assistance. This prevents them from participating in daily life. Multi-directional Body-Weight Support (BWS) systems aim to assist training in a safe environment to overcome this limitation. To ensure safety and comfort during training, BWS systems should effectively and automatically assist patients with forces that support the desired locomotor task. Instead of manual switching of controllers by the therapist, we propose a machine learning-based motion onset recognition model that aims at automatic controller switching (four gait-related task classes in this paper). In addition to the data provided by the gait rehabilitation robot FLOAT, data from three Inertial Measurement Units (IMUs) attached to the sternum and the middle of the outer thighs of 6 healthy participants were used to predict the motion onset of these four gait-related tasks. Four training data sets were built from the synchronously obtained IMU and BWS data by dividing them into observation windows with sizes of 100 ms and 200 ms and overlap factors of 50% and 90%. From each training data set, 108 time-domain features were extracted, ranked, and reduced using the Minimum Redundancy Maximum Relevance (MRMR) method before being applied to the classifiers. The dominant features were used to train and compare four different classifiers. The performance of the classifiers was evaluated using leave-one-participant-out cross-validation. The ensemble classifier trained on the data set with a window size of 100 ms and 90% overlap achieved the best performance, with an F1 score of 83.7%.
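The windowing scheme above (100/200 ms observation windows with 50% or 90% overlap, 108 time-domain features per window) translates naturally into code. A minimal sketch follows; the sampling rate, channel layout, and the particular features shown are illustrative assumptions, not the paper's full feature set.

```python
# Hypothetical sketch: overlapping observation windows over synchronized
# IMU/BWS streams, with a few example time-domain features per window.
import numpy as np

def sliding_windows(signal, fs, win_ms=100, overlap=0.9):
    """Yield windows of a (samples, channels) array."""
    win = int(fs * win_ms / 1000)
    step = max(1, int(win * (1 - overlap)))
    for start in range(0, len(signal) - win + 1, step):
        yield signal[start:start + win]

def time_domain_features(window):
    """A few of the kinds of features one might extract per window (illustrative)."""
    return np.concatenate([
        window.mean(axis=0),
        window.std(axis=0),
        np.abs(np.diff(window, axis=0)).sum(axis=0),  # waveform length
    ])

fs = 200  # Hz, assumed sampling rate
stream = np.random.randn(fs * 5, 9)  # 5 s of 3 IMUs x 3 axes (assumed layout)
X = np.array([time_domain_features(w) for w in sliding_windows(stream, fs)])
```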
|
|
We103 |
Sveva/Normanna |
Assistive Robotics |
Regular Session |
Chair: Edan, Yael | Ben-Gurion University of the Negev |
Co-Chair: Broz, Frank | TU Delft |
|
08:30-08:42, Paper We103.1 | |
The Robot Screener Will See You Now: A Socially Assistive Robot for COVID-19 Screening in Long-Term Care Homes |
|
Getson, Cristina (University of Toronto), Nejat, Goldie (University of Toronto) |
Keywords: Applications of Social Robots, Assistive Robotics, Multimodal Interaction and Conversational Skills
Abstract: The rapid spread of COVID-19 around the globe has increased the need to adopt autonomous social robots within our healthcare systems. In particular, socially assistive robots can help to improve the day-to-day functioning of our healthcare facilities including long-term care, while keeping residents and staff safe by performing repetitive tasks such as health screening. In this paper, we present the first human-robot interaction study with an autonomous multi-task socially assistive robot used for non-contact screening in long-term care homes. The robot monitors temperature, checks for face masks, and asks screening questions to minimize human-to-human contact. We investigated staff perceptions of 7 attributes: screening experience without and with the robot, efficiency, cognitive attitude, freeing up staff, safety, affective attitude, and intent to use the robot. Furthermore, we investigated the influence of demographics on these attributes. Study results show that, overall, staff rated these attributes high for the screening robot, with a statistically significant increase in cognitive attitude and safety after interacting with the robot. Differences between gender and occupation were also determined. Our study highlights the potential application of an autonomous screening robot for long-term care homes.
|
|
08:42-08:54, Paper We103.2 | |
Usability of an Immersive Control System for a Humanoid Robot Surrogate |
|
Mackey, Bethany (Bristol Robotics Laboratory), Bremner, Paul (University of the West of England), Giuliani, Manuel (University of the West of England, Bristol) |
Keywords: Social Presence for Robots and Virtual Humans, Applications of Social Robots, Assistive Robotics
Abstract: Social isolation is an issue that affects many people, especially those from ethnic minorities, LGBTQIA+ communities, the elderly, those in long-term healthcare, and those living with life-limiting illnesses. It has become increasingly evident during the pandemic, when mental health issues have soared and the importance of interacting with loved ones has been highlighted. While telecommunication software helped a great deal in these unprecedented circumstances, it does not allow for navigation in remote environments and lacks the high-level interactions found in face-to-face communication. This system has therefore been developed to address these issues, and this study was conducted to test its technical usability with healthy participants. It was found that 720p is the highest resolution that can be applied before the camera delay becomes unusable, though participants suggested that they would like the option to switch to 2K resolution should they be looking close up without moving. In addition, it became apparent that overall the participants were positive about the system but would prefer a less bulky head-mounted display, and that the choice of which robot to use with the system (Nao or Pepper) was entirely down to individual preference based on the task being completed.
|
|
08:54-09:06, Paper We103.3 | |
Assist Effectiveness Study Based on Viscosity: Comparison of Assumed Command Signal and Actual Command Signal |
|
Shimoda, Yusuke (Chuo University), Fujita, Tetsuhito (Chuo University), Machida, Katsuki (Chuo University), Okui, Manabu (Chuo University), Nishihama, Rie (Chuo University), Nakamura, Taro (Chuo University) |
Keywords: Assistive Robotics
Abstract: In wearable assist devices, the assumed and actual command timings differ. This discrepancy causes problems such as a decrease in the assist effect. In this study, we focus on viscosity, which is a characteristic of human muscles, and propose viscosity assistance to solve these problems. This method outputs a torque based on a predetermined viscosity coefficient and the actual angular velocity whenever the torque and angular velocity of the human joint are in opposite directions. The feasibility of viscosity assist is demonstrated by analyzing the stand-to-sit motion, and an assist device using a magnetorheological fluid brake is developed. The assist device is driven using three command inputs: time-based, joint angle-based, and constant viscosity coefficient command inputs. Under the time- and angle-based conditions, the actual output deviates from the assumed value when the motion differs from the expected motion. With the viscosity command, when only a constant command is used throughout the entire motion, some subjects showed results almost identical to those expected. Even when a deviation from this assumption occurs, the degree of agreement can be improved by switching between several viscosity coefficients during motion.
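The viscosity-assist rule described above reduces to a simple control law: produce a resistive torque proportional to angular velocity, but only when the estimated joint torque and the angular velocity oppose each other. A minimal sketch, with an assumed coefficient value:

```python
# Hypothetical sketch of the viscosity-assist rule described in the abstract:
# brake torque tau = -b * omega, applied only when joint torque and angular
# velocity are in opposite directions. The coefficient b is an assumption.
def viscosity_assist_torque(joint_torque, angular_velocity, b=0.8):
    """Return resistive torque when torque opposes velocity, else zero."""
    if joint_torque * angular_velocity < 0.0:  # opposite directions
        return -b * angular_velocity
    return 0.0

print(viscosity_assist_torque(joint_torque=-5.0, angular_velocity=1.2))  # -0.96
```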
|
|
09:06-09:18, Paper We103.4 | |
Push and Pull Feedback in Mobile Robotic Telepresence - A Telecare Case Study |
|
Keidar, Omer (Ben-Gurion University of the Negev, Beer Sheva), Olatunji, Samuel (Ben Gurion University of the Negev), Edan, Yael (Ben-Gurion University of the Negev) |
Keywords: Assistive Robotics, Degrees of Autonomy and Teleoperation, Cooperation and Collaboration in Human-Robot Teams
Abstract: Mobile robotic telepresence (MRP) has emerged as a possible solution for supporting health caregivers in a multitude of tasks such as monitoring, pre-diagnosis, and delivery of items. Well-designed interaction is an important part of using such MRP systems. The current study aimed to evaluate whether it is beneficial for the user to have control over the information being presented. An experimental system representing a hospital environment was developed. A user teleoperated a mobile robot to deliver medication supplies to a patient and receive samples from the patient while attending to a secondary task involving medical records. Two interaction modes were examined in the MRP system: proactive (termed 'push' mode) and reactive (termed 'pull' mode). The influence of the interaction modes on different aspects of performance and user perception was investigated. User studies were performed with 20 participants drawn from two groups: users with and without technological backgrounds. Results revealed that for both user types, the proactive interaction enhances performance, situation awareness, and satisfaction compared to the reactive interaction. The study highlights the potential of improving the telecare experience with MRPs through different interaction modes.
|
|
09:18-09:30, Paper We103.5 | |
Simple User Assistant Driving Algorithm for Cost Effective Smart Electric Wheelchair |
|
Erturk, Esranur (UST-KIST), Lee, Dongyoung (Korea Institute of Science and Technology), Kim, Soonkyum (Korea Institute of Science and Technology) |
Keywords: Assistive Robotics, Motion Planning and Navigation in Human-Centered Environments, Creating Human-Robot Relationships
Abstract: Wheelchairs are among the most important equipment for facilitating the lives of people with walking disabilities. In particular, electric wheelchairs allow people with walking disabilities to move comfortably without excessively straining their bodies. However, a person who uses an electric wheelchair may have difficulties driving it or may experience unintended collisions with obstacles outside the line of sight. To prevent such negative events, it is necessary to develop an electric wheelchair that is produced at minimum cost, can be easily made available to all kinds of people with disabilities, and is not difficult to use. In addition, this equipment must include a system designed to provide wheelchair control and driving comfort with maximum efficiency for areas outside the line of sight, a feature required by all people with walking disabilities. This article explains the design of a prototype that can detect blind spots using sensors and provide driving comfort for a wheelchair driver with walking disabilities. The aim is to create a smart electric wheelchair that anyone with a walking disability can use while keeping costs to a minimum.
|
|
09:30-09:42, Paper We103.6 | |
Performance Analysis of Vibrotactile and Slide-And-Squeeze Haptic Feedback Devices for Limbs Postural Adjustment |
|
Lorenzini, Marta (Istituto Italiano Di Tecnologia), Ciotti, Simone (University of Pisa), Gandarias, Juan M. (Istituto Italiano Di Tecnologia), Fani, Simone (University of Pisa - IT00286820501), Bianchi, Matteo (University of Pisa), Ajoudani, Arash (Istituto Italiano Di Tecnologia) |
Keywords: Assistive Robotics, Multi-modal Situation Awareness and Spatial Cognition, Human Factors and Ergonomics
Abstract: Recurrent or sustained awkward body postures are among the most frequently cited risk factors to the development of work-related musculoskeletal disorders (MSDs). To prevent workers from adopting harmful configurations but also to guide them toward more ergonomic ones, wearable haptic devices may be the ideal solution. In this paper, a vibrotactile unit, called ErgoTac, and a slide-and-squeeze unit, called CUFF, were evaluated in a limbs postural correction setting. Their capability of providing single-joint (shoulder or knee) and multi-joint (shoulder and knee at once) guidance was compared in twelve healthy subjects, using quantitative task-related metrics and subjective quantitative evaluation. An integrated environment was also built to ease communication and data sharing between the involved sensor and feedback systems. Results show good acceptability and intuitiveness for both devices. ErgoTac appeared as the suitable feedback device for the shoulder, while the CUFF may be the effective solution for the knee. This comparative study, although preliminary, was propaedeutic to the potential integration of the two devices for effective whole-body postural corrections, with the aim to develop a feedback and assistive apparatus to increase workers' awareness about risky working conditions and therefore to prevent MSDs.
|
|
09:42-09:54, Paper We103.7 | |
EvaSIM: A Software Simulator for the EVA Open-Source Robotics Platform |
|
Rocha, Marcelo (Fluminense Federal University), Cruz-Sandoval, Dagoberto (UCSD), Favela, Jesus (CICESE), Muchaluat-Saade, Debora (UFF) |
Keywords: Assistive Robotics, Robots in Education, Therapy and Rehabilitation, Applications of Social Robots
Abstract: Socially Assistive Robots (SARs) have successfully been used in various types of health therapies as non-pharmacological interventions. EVA (Embodied Voice Assistant) is a SAR built as an open-source robotics platform intended to serve as a tool to support research in Human-Robot Interaction. The EVA robot was originally developed to assist in non-pharmacological interventions for people with Dementia and has more recently been applied to children with Autism Spectrum Disorder. EVA provides multimodal interactions such as verbal and non-verbal communication, facial recognition, and light sensory effects. Although EVA uses low-cost hardware and open-source software, it is not always possible, or practical, to have a physical robot at hand, particularly during rapid iterative cycles of design and evaluation of therapies. This motivated us to develop a simulator that allows therapy scripts to be tested before being enacted by the EVA robot. This work proposes EvaSIM (EVA Robot Simulator), a simulator that can interpret EVA script code and emulate the multimodal interaction capabilities of the physical robot, such as Text-To-Speech, facial expression recognition, and control of light sensory effects. Several EVA scripts were run using the simulator, confirming that they produce the same behaviour as on the physical robot. EvaSIM can serve as a support tool in the teaching/learning process of the robot's scripting language, enabling the training of technicians and therapists in script development and testing for the EVA robot.
|
|
09:54-10:06, Paper We103.8 | |
Comparison of a VR-Based and a Rule-Based Robot Control Method for Assistance in a Physical Human-Robot Collaboration Scenario |
|
Kowalski, Christian (University of Oldenburg), Brinkmann, Anna (Carl Von Ossietzky University Oldenburg), Hellmers, Sandra (University of Oldenburg), Fifelski, Conrad (Carl Von Ossietzky University Oldenburg), Hein, Andreas (University of Oldenburg) |
Keywords: Assistive Robotics, Cooperation and Collaboration in Human-Robot Teams
Abstract: Many physically demanding tasks require the help of a second person to avoid the otherwise harmful effects of musculoskeletal disorders from performing the task alone. The workforce in many domains, such as nursing care, is rapidly declining, and complex repositioning tasks during patient care make it all the more difficult to find a simple solution. Robotic assistance could therefore be a perfect fit for this scenario. To this end, we conducted a study in which 12 caregivers were given the task of repositioning a patient simulator (80 kg) within a nursing bed with the help of a robotic assistant. We measured the physical relief potential using muscle activity and force data while examining this human-robot collaboration problem with two different control approaches, namely a virtual reality-based and a rule-based robot control method. The results show that both methods effectively supported the chosen task, with varied strengths and weaknesses. The main advantage of VR-based control is the ability to adapt to the situation and execute quickly (6.61 s), but this also results in higher robot acting forces (35.70 N). The rule-based control variant is slower on average (10.84 s) but produces lower robot interaction forces (14.95 N) in return.
|
|
10:06-10:18, Paper We103.9 | |
On-The-Go Robot-To-Human Handovers with a Mobile Manipulator |
|
He, Kerry (Monash University), Simini, Pradeepsundar (Monash University), Chan, Wesley Patrick (Monash University), Kulic, Dana (Monash University), Croft, Elizabeth (Monash University), Cosgun, Akansel (Monash University) |
Keywords: Assistive Robotics, Motion Planning and Navigation in Human-Centered Environments
Abstract: Existing approaches to direct robot-to-human handovers are typically implemented on fixed-base robot arms, or on mobile manipulators that come to a full stop before performing the handover. We propose "on-the-go" handovers which permit a moving mobile manipulator to hand over an object to a human without stopping. The on-the-go handover motion is generated with a reactive controller that allows simultaneous control of the base and the arm. In a user study, human receivers subjectively assessed on-the-go handovers to be more efficient, predictable, natural, better timed and safer than handovers that implemented a "stop-and-deliver" behavior.
|
|
10:18-10:30, Paper We103.10 | |
Investigating the Usability of a Socially Assistive Robotic Cognitive Training Task with Augmented Sensory Feedback Modalities for Older Adults |
|
Nault, Emilyann (Heriot-Watt University & University of Edinburgh), Baillie, Lynne (Heriot-Watt University), Broz, Frank (TU Delft) |
Keywords: Assistive Robotics, Robots in Education, Therapy and Rehabilitation, Novel Interfaces and Interaction Modalities
Abstract: Cognitive training is effective at retaining cognitive function and delaying decline for typically ageing older adults, individuals with mild cognitive impairment, and persons with dementia. Technological resources can address limiting factors that inhibit engagement with and access to this treatment. We investigated how a socially assistive robot-facilitated memory task with sensory feedback was received by older adults. The impact of unimodal and multimodal administration of auditory and haptic feedback using two robot embodiments (Pepper and Nao) was evaluated in terms of user performance, usability, and workload. Contrary to earlier sensory feedback research, auditory feedback resulted in significantly higher task accuracy; this finding is, however, supported by previous work in the neurological literature. Auditory feedback also received significantly higher usability ratings, and this preference was validated by qualitative feedback from participants. Regardless of robotic embodiment, this study demonstrates an advantage for auditory feedback (over haptic and multimodal) in cognitive training activities for older adults.
|
|
We501 |
Auditorium |
Social Intelligence for Robots |
Regular Session |
Chair: Ishiguro, Hiroshi | Osaka University |
Co-Chair: Pynadath, David V. | University of Southern California |
|
12:10-12:22, Paper We501.1 | |
An Autonomous Conversational Android That Acquires Human-Item Co-Occurrence in the Real World |
|
Sakamoto, Yuki (Osaka University), Uchida, Takahisa (Osaka University), Ishiguro, Hiroshi (Osaka University) |
Keywords: Social Intelligence for Robots, Linguistic Communication and Dialogue, Androids
Abstract: The goal of this study is to develop an autonomous conversational robot that acquires knowledge about human society in the real world. In this study, we define experience data as image data obtained from a camera (visual information) and dialogue data obtained from a microphone (auditory information). From experience data, the robot acquires knowledge about the co-occurrence between humans and items, that is, whether or not it is usual for humans to have those items. The proposed system not only acquires knowledge but also generates utterances based on the acquired knowledge. We have implemented the proposed method on an android, a human-like robot. We conducted an experiment in which the android was placed in a real-world environment (a shopping mall) and interacted with visitors. The percentage of positive responses to the robot's questions based on the acquired knowledge suggests that this system can acquire knowledge from experience data.
|
|
12:22-12:34, Paper We501.2 | |
Explainable Reinforcement Learning in Human-Robot Teams: The Impact of Decision-Tree Explanations on Transparency |
|
Pynadath, David V. (University of Southern California), Gurney, Nikolos (University of Southern California), Wang, Ning (University of Southern California) |
Keywords: Social Intelligence for Robots, Cooperation and Collaboration in Human-Robot Teams, Machine Learning and Adaptation
Abstract: Understanding the decisions of AI-driven systems and the rationale behind such decisions is key to the success of the human-robot team. However, the complexity and the "black-box" nature of many AI algorithms create a barrier for establishing such understanding within their human counterparts. Reinforcement Learning (RL), a machine-learning algorithm based on the simple idea of action-reward mappings, has a rich quantitative representation and a complex iterative reasoning process that present a significant obstacle to human understanding of, for example, how value functions are constructed, how the algorithms update the value functions, and how such updates impact the action/policy chosen by the robot. In this paper, we discuss our work to address this challenge by developing a decision-tree based explainable model for RL to make a robot's decision-making process more transparent. Set in a human-robot virtual teaming testbed, we conducted a study to assess the impact of the explanations, generated using decision trees, on building transparency, calibrating trust, and improving the overall human-robot team's performance. We discuss the design of the explainable model and the positive impact of the explanations on outcome measures.
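One common way to realize decision-tree explanations of an RL policy, in the spirit of the abstract, is to distill the learned value function's greedy policy into a shallow tree and print its rules. A minimal scikit-learn sketch follows; the state features, Q-values, and action set are illustrative assumptions, not the authors' testbed.

```python
# Hypothetical sketch: distilling a learned Q-table into a decision tree so
# the greedy policy can be rendered as human-readable rules. All data here
# is synthetic and illustrative.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
states = rng.random((500, 3))        # assumed features: distance, threat, health
q_values = rng.random((500, 2))      # Q(s, a) for assumed actions {wait, advance}
greedy_actions = q_values.argmax(axis=1)

tree = DecisionTreeClassifier(max_depth=3).fit(states, greedy_actions)
print(export_text(tree, feature_names=["distance", "threat", "health"]))
```

Capping the tree depth trades fidelity to the policy for readability, which is the central design tension in explanation methods of this kind.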
|
|
12:34-12:46, Paper We501.3 | |
Measuring Visual Social Engagement from Proxemics and Gaze |
|
Webb, Nicola (University of the West of England), Giuliani, Manuel (University of the West of England, Bristol), Lemaignan, Séverin (PAL Robotics) |
Keywords: Social Intelligence for Robots, Multi-modal Situation Awareness and Spatial Cognition
Abstract: When we approach a group, we exchange a multitude of verbal and non-verbal social signals to indicate that we are looking to interact. We continue to share these signals throughout the interaction to portray our thoughts and motivations. We define an interaction by the signals we send; sending different signals evokes a different response. Given knowledge of group social interaction, social robots will be able to participate more effectively in such interactions in the real world. In this paper, we present the results of an online data collection study looking at social group dynamics. We collected a dataset of social behaviours in a group using a socially interactive game played online by 88 participants. We also introduce a novel visual social engagement metric, which is derived from two social signals: proxemics (distance between interaction participants) and mutual gaze. We propose a mathematical formulation of mutual gaze as the product of the mutual distances to the optical axis, and of visual social engagement as mutual gaze divided by the distance between participants. Additionally, we investigate the influence of personality traits on the resulting interaction patterns. Using the metric, we create unique interaction profiles which suggest that participants have an interaction 'style'. No clear correlation between personality and interaction patterns was found.
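Read literally, the metric described above can be written as follows; since the abstract does not give the exact functional form, the scoring function g (presumably decreasing in the distance to the optical axis) is an assumption:

```latex
% Hedged reconstruction: mutual gaze G_{ij} as the product of the mutual
% distances to the optical axes, engagement E_{ij} as gaze over distance.
% \ell_i is participant i's optical axis; g is an assumed scoring function.
G_{ij} = g\big(d(p_j, \ell_i)\big) \cdot g\big(d(p_i, \ell_j)\big),
\qquad
E_{ij} = \frac{G_{ij}}{\lVert p_i - p_j \rVert}
```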
|
|
12:46-12:58, Paper We501.4 | |
Cooperative and Uncooperative Behaviour in Task-Oriented Dialogues with Social Robots |
|
Wilcock, Graham (CDM Interact, Helsinki, Finland), Jokinen, Kristiina (AIRC, AIST, Japan and University of Helsinki, Finland) |
Keywords: Social Intelligence for Robots, Linguistic Communication and Dialogue, Cooperation and Collaboration in Human-Robot Teams
Abstract: The paper addresses aspects of cooperative and uncooperative behaviour in natural language dialogue between humans and social robots. The principles of cooperation in human-human dialogues are taken as the basis for cooperative behaviour in human-robot dialogues. Several approaches are described that can improve human-robot cooperation. These include more flexible recognition of user intents, more flexible searches using knowledge graphs, generating more cooperative responses using semantic metadata, and generating more human-friendly responses using Wikipedia. These approaches are demonstrated in a series of videos.
|
|
12:58-13:10, Paper We501.5 | |
Shaping Haru's Affective Behavior with Valence and Arousal Based Implicit Facial Feedback |
|
Wang, Hui (Ocean University of China), Chen, Guodong (Shandong Jiaotong University), Gomez, Randy (Honda Research Institute Japan Co., Ltd), Nakamura, Keisuke (Honda Research Institute Japan Co., Ltd), He, Bo (Ocean University of China), Li, Guangliang (Ocean University of China) |
Keywords: Social Learning and Skill Acquisition Via Teaching and Imitation, Social Intelligence for Robots, Assistive Robotics
Abstract: Social robots that are able to express emotions can potentially improve humans' well-being. Whether and how they can learn naturally from interactions with human beings will be key to their success and acceptance by ordinary people. In this paper, we propose shaping the social robot Haru's affective behaviors with predicted continuous rewards based on received implicit facial feedback via human-centered reinforcement learning. The implicit facial feedback was estimated from the valence and arousal of the received facial expressions using Russell's circumplex model, which can provide a more accurate estimation of the subtle psychological changes of the human user, resulting in more effective robot behavior learning. The whole experiment was conducted on the desktop robot Haru, which is primarily used to study emotional interactions with humans in different scenarios. Our experimental results show that with our proposed method, Haru can obtain a performance similar to learning from explicit feedback, eliminating the need for human users to become familiar with a training interface in advance and resulting in an unobtrusive learning process.
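A minimal sketch of the kind of reward mapping the abstract implies, turning estimated valence and arousal from Russell's circumplex model into a continuous scalar reward; the linear weighting is an illustrative assumption, not the paper's actual mapping:

```python
# Hypothetical sketch: scalar reward from circumplex coordinates estimated
# from implicit facial feedback. Weights are illustrative assumptions.
def facial_feedback_reward(valence, arousal, w_valence=1.0, w_arousal=0.5):
    """Scalar reward from valence/arousal values, each assumed in [-1, 1]."""
    return w_valence * valence + w_arousal * arousal

print(facial_feedback_reward(valence=0.6, arousal=0.4))  # 0.8
```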
|
|
We502 |
Aragonese/Catalana |
Cooperation and Collaboration in Human-Robot Teams |
Regular Session |
Chair: Zanchettin, Andrea Maria | Politecnico Di Milano |
Co-Chair: Mayima, Amandine | LAAS-CNRS |
|
12:10-12:22, Paper We502.1 | |
JAHRVIS, a Supervision System for Human-Robot Collaboration |
|
Mayima, Amandine (LAAS-CNRS), Clodic, Aurélie (Laas - Cnrs), Alami, Rachid (CNRS) |
Keywords: Cooperation and Collaboration in Human-Robot Teams, Cognitive Skills and Mental Models, Monitoring of Behaviour and Internal States of Humans
Abstract: The supervision component is the binder of a robotic architecture. Without it, there is no task and no interaction: it conducts the other components of the architecture towards the achievement of a goal, which means, in the context of a collaboration with a human, bringing changes to the physical environment and updating the human partner's mental state. However, little work focuses on this component in charge of the robot's decision-making and control, even though it is the robot's puppeteer. Most often, tasks are either simply scripted or the supervisor is built for a specific task. Thus, we propose JAHRVIS, a Joint Action-based Human-aware supeRVISor. It aims to be task-independent while implementing a set of key joint action and collaboration mechanisms. With this contribution, we intend to move the deployment of autonomous collaborative robots forward, accompanying this paper with our open-source code.
|
|
12:22-12:34, Paper We502.2 | |
Learning Human Actions Semantics in Virtual Reality for a Better Human-Robot Collaboration |
|
Lucci, Niccolò (Politecnico Di Milano), Preziosa, Giuseppe Fabio (Politecnico Di Milano), Zanchettin, Andrea Maria (Politecnico Di Milano) |
Keywords: Cooperation and Collaboration in Human-Robot Teams, Programming by Demonstration, Detecting and Understanding Human Activity
Abstract: The advent of collaborative robots has revolutionised the work concept for Small and Medium Enterprises (SMEs), introducing quick line configuration changes and allowing a human operator to work together with a robot. Several problems remain regarding the need to reprogram the robotic collaborator. This paper exploits Virtual Reality to instruct the robotic co-worker, allowing a non-skilled operator to teach a task effortlessly and straightforwardly. Each Skill composing the task is classified and characterised through its pre- and postconditions, letting the robot stop and notify the operator if it is unable to complete the scheduled action. Moreover, the system allows the operator to interact with the robot and be proactive during task execution.
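As an illustration of the skill representation the abstract describes (pre- and postconditions gating execution), a minimal Python sketch follows; the dict-based world model and the `pick_part` example are hypothetical, not the paper's actual data structures:

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Skill:
    name: str
    precondition: Callable[[Dict], bool]
    effect: Callable[[Dict], None]
    postcondition: Callable[[Dict], bool]

    def execute(self, world: Dict) -> bool:
        """Run the skill only if its precondition holds; verify the
        postcondition so the robot can stop and notify the operator."""
        if not self.precondition(world):
            return False
        self.effect(world)
        return self.postcondition(world)

# Hypothetical pick skill over a dict-based world model:
pick = Skill(
    name="pick_part",
    precondition=lambda w: w.get("part_on_table", False),
    effect=lambda w: w.update(part_on_table=False, part_in_gripper=True),
    postcondition=lambda w: w.get("part_in_gripper", False),
)
world = {"part_on_table": True}
print(pick.execute(world))  # True; False would trigger operator notification
```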
|
|
12:34-12:46, Paper We502.3 | |
Leveraging Cognitive States in Human-Robot Teaming |
|
Kolb, Jack (Georgia Institute of Technology), Ravichandar, Harish (Georgia Institute of Technology), Chernova, Sonia (Georgia Institute of Technology) |
Keywords: Cooperation and Collaboration in Human-Robot Teams
Abstract: Mixed human-robot teams (HRTs) have the potential to perform complex tasks by leveraging the diverse and complementary capabilities within the team. However, assigning humans to operator roles in HRTs is challenging due to the significant variation in user capabilities. While much prior work in role assignment treats humans as interchangeable (either generally or within a category), we investigate the utility of personalized models of operator capabilities, based on relevant human factors, in an effort to improve overall team performance. We call this approach individualized role assignment (IRA) and provide a formal definition. A key challenge for IRA is that the factors affecting human performance are not static (e.g., one's ability to track multiple objects can change during or between tasks). Instead of relying on time-consuming and highly intrusive measurements taken during the execution of tasks, we propose the use of short cognitive tests, taken before engaging in human-robot tasks, together with predictive models of individual performance, to perform IRA. Results from a comprehensive user study conclusively demonstrate that IRA leads to significantly better team performance than a baseline method that assumes human operators are interchangeable, even when we control for the influence of the robots' performance. Further, our results point to the possibility that the relative benefits of IRA will increase as the number of operators (i.e., choices) increases for a fixed number of tasks.
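A minimal sketch of the assignment step behind individualized role assignment: given predicted per-operator, per-role performance (e.g., regressed from pre-task cognitive test scores), choose the assignment that maximizes the predicted team score. The matrix values are invented, and the Hungarian-algorithm formulation is our assumption about one reasonable implementation:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# predicted_perf[i, j]: predicted performance of operator i in role j,
# e.g., output of a model trained on short cognitive test scores.
predicted_perf = np.array([
    [0.82, 0.40, 0.55],
    [0.35, 0.90, 0.60],
    [0.50, 0.45, 0.75],
])

# linear_sum_assignment minimizes cost, so negate to maximize performance.
rows, cols = linear_sum_assignment(-predicted_perf)
for op, role in zip(rows, cols):
    print(f"operator {op} -> role {role} "
          f"(predicted {predicted_perf[op, role]:.2f})")
```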
|
|
12:46-12:58, Paper We502.4 | |
AugRE: Augmented Robot Environment to Facilitate Human-Robot Teaming and Communication |
|
Regal, Frank (The University of Texas at Austin), Petlowany, Christina (The University of Texas at Austin), Pehlivanturk, Can (The University of Texas at Austin), Van Sice, Corrie (University of Texas at Austin), Suarez, Christopher (University of Texas at Austin), Anderson, Robert (The University of Texas at Austin), Pryor, Mitchell (University of Texas) |
Keywords: Cooperation and Collaboration in Human-Robot Teams, Virtual and Augmented Tele-presence Environments, Novel Interfaces and Interaction Modalities
Abstract: Augmented Reality (AR) provides a method to superimpose real-time information on the physical world. AR is well-suited for complex robotic systems to help users understand robot behavior, status, and intent. This paper presents an AR system, Augmented Robot Environment (AugRE), that combines ROS-based robotic systems with Microsoft HoloLens 2 AR headsets to form a scalable multi-agent human-robot teaming system for indoor and outdoor exploration. The system allows multiple users to simultaneously localize, supervise, and receive labeled images from robotic clients. An overview of AugRE and details of the novel system architecture that allows for large-scale human-robot teaming are presented, along with studies showcasing system performance with multiple robotic clients. Results show that AugRE can scale to 50 robotic clients with minimal performance degradation, due in part to key components that leverage a recent advancement in robotic client-to-client communication called Robofleet. Finally, we discuss new capabilities enabled by AugRE.
|
|
12:58-13:10, Paper We502.5 | |
A Benchmark Toolkit for Collaborative Human-Robot Interaction |
|
Riedelbauch, Dominik (University of Bayreuth), Hümmer, Jonathan (University of Bayreuth) |
Keywords: Cooperation and Collaboration in Human-Robot Teams, Evaluation Methods
Abstract: Novel human-robot collaboration (HRC) methods need careful validation. This is often achieved with user studies in laboratory environments, which mostly rely on highly individual, complex prototype setups in early design stages. Lately, the lack of replicability of such experiments has been actively discussed. In this paper, we contribute a benchmark toolkit for composing scalable, synthetic tasks as a unified basis for future efforts towards more replicable research. To this end, we propose modular task boards that cover different domains and HRC scenarios. The design of benchmark tasks with these task boards is supported by a software application that generates structural task models and materials for reproducing benchmark problems by 3D printing. Our experiments show that these tasks can be carried out robustly by robots, preventing unintended robot failure and providing a controllable, reliably reproducible setting for user studies.
|
|
We503 |
Sveva/Normanna |
Robot Companions and Social Robots |
Regular Session |
Chair: Güneysu Özgür, Arzu | KTH |
Co-Chair: Hannibal, Glenda | Ulm University |
|
12:10-12:22, Paper We503.1 | |
Domestic Social Robots As Companions or Assistants? The Effects of the Robot Positioning on the Consumer Purchase Intentions |
|
Kim, Jun San (KB Financial Group), Kang, Dahyun (Korea Institute of Science and Technology), Choi, Jongsuk (Korea Inst. of Sci. and Tech), Kwak, Sonya Sona (Korea Institute of Science and Technology (KIST)) |
Keywords: Robot Companions and Social Robots, Assistive Robotics, User-centered Design of Robots
Abstract: This study explores the effects of the positioning strategy of domestic social robots on consumers' purchase intention. Specifically, the authors investigate the effects of positioning robots as companions compared with positioning them as assistants and as appliances. The study results showed that participants preferred domestic social robots positioned as assistants over those positioned as companions. Moreover, for male participants, domestic social robots positioned as appliances were also preferred over robots positioned as companions. The results further showed that the effect of positioning on purchase intention was mediated by the participants' perceived usefulness of the robot.
|
|
12:22-12:34, Paper We503.2 | |
Tolerating Untrustworthy Robots: Studying Human Vulnerability Experience within a Privacy Scenario for Trust in Robots |
|
Hannibal, Glenda (Ulm University), Dobrosovestnova, Anna (TU Wien), Weiss, Astrid (TU Wien) |
Keywords: Robot Companions and Social Robots, Applications of Social Robots, Philosophical Issues in Human-Robot Coexistence
Abstract: Focusing on the human experience of vulnerability in everyday-life interaction scenarios is still a novel approach. So far, only a proof-of-concept online study has been conducted; to extend this work, we present a follow-up online study. We consider in more detail how the human experience of vulnerability caused by a trust violation through a privacy breach affects trust ratings in an interaction scenario with the PEPPER robot assisting with clothes shopping. We report the results from 32 survey responses and 11 semi-structured interviews. Our findings reveal that the privacy paradox, a common observation describing the discrepancy between people's stated privacy concerns and their behavior to safeguard privacy, also arises when studying trust in HRI. Moreover, we note that participants considered only the added value of utility and entertainment when deciding whether or not to interact with the robot again, but not the privacy breach. We conclude that people might tolerate an untrustworthy robot even when they are feeling vulnerable in the everyday-life situation of clothes shopping.
|
|
12:34-12:46, Paper We503.3 | |
Socially Assistive Robotics and Wearable Sensors for Intelligent User Dressing Assistance |
|
Robinson, Fraser (University of Toronto), Cen, Zinan (University of Toronto), Naguib, Hani E. (University of Toronto), Nejat, Goldie (University of Toronto) |
Keywords: Robot Companions and Social Robots, Detecting and Understanding Human Activity, Assistive Robotics
Abstract: Individuals living with cognitive impairments face unique challenges in completing important activities of daily living such as dressing. In this paper, we present the first socially assistive robot-wearable sensors system to provide dressing assistance through social human-robot interactions. A novel robot-wearable architecture is developed to classify, prompt, and provide feedback on user dressing actions. Namely, strain-sensor-based smart clothing worn by the user is used for joint angle mapping, and the mapped joint angles are then classified into different dressing steps. The robot uses a MAXQ hierarchical learning method to learn assistive behaviors to aid a user with the sequence of dressing steps. Experiments validated the performance of the joint angle mapping model, the dressing action classifier, and the behavior adaptation modules, as well as the overall system for dressing assistance.
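One plausible shape for the dressing-step classification stage, sketched in Python with placeholder data; the actual joint-angle mapping, features, and classifier in the paper may differ:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical dressing steps; the paper's label set may differ.
STEPS = ["grab_sleeve", "arm_in_sleeve", "pull_over_shoulder", "adjust"]

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 8))          # 8 mapped joint angles per time window
y = rng.integers(0, len(STEPS), 200)   # placeholder step labels

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(STEPS[clf.predict(X[:1])[0]])    # predicted dressing step for a window
```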
|
|
12:46-12:58, Paper We503.4 | |
Mitigating Judgmental Fallacies with Social Robot Advisors |
|
Polakow, Torr (Tel Aviv University), Teodorescu, Andrei (Tel-Hai Academic College), Busemeyer, Jerome (Indiana University), Gordon, Goren (Tel Aviv University) |
Keywords: Applications of Social Robots, Robot Companions and Social Robots, Creating Human-Robot Relationships
Abstract: The role of social robots as advisors for decision making is investigated. It has been consistently shown that, when asked to rank options, people often make fallacious judgements. Furthermore, such fallacies can be sensitive to presentation mode. We study whether having social robot advisors present options can mitigate and reduce participants' fallacy rates. For this purpose we explored a novel presentation mode for options subject to the conjunction fallacy, namely choosing among different rank-orders, as opposed to ranking the options themselves. We first show that the mere presentation mode has a significant mitigating effect on fallacy rates. We then show that when social robot advisors present the rank-orders, participants' fallacy rates decrease significantly further. Moreover, participants perceive the fallacious robot as more likeable and intelligent, but assign the non-fallacious robot to trustworthy roles, such as jury and analyst. These results suggest that social robot advisors may be used to influence and mitigate human fallacious judgmental decision making.
|
|
12:58-13:10, Paper We503.5 | |
Positive Facial and Verbal Sentiments of a Social Robot Mitigate Negative User Perceptions in Attitudinally Dissimilar Interactions |
|
Gittens, Curtis (The University of the West Indies Cave Hill Campus), Jiang, Ying (Ontario Tech University), Hung, Patrick, C.K (OntarioTech University) |
Keywords: Robot Companions and Social Robots, Social Intelligence for Robots, Creating Human-Robot Relationships
Abstract: Social robots are increasingly used in services such as education, hospitality, healthcare, and elderly care. These robots are often adopted to provide people with information and guidance, which can sometimes be quite different from people's expectations. It is therefore important to understand whether similar or dissimilar opinions or attitudes towards a focal subject, held by the user and the robot, affect the user's perception of the robot and future adoption intentions. We propose that robot design (positive facial expressions and verbal sentiments) may moderate the effect of attitude dissimilarity on robot perception. It is well documented in the psychology literature that attitude similarity affects human-human relationships; however, no such study has been undertaken in human-robot interaction. Our results showed that when the robot expressed its views with a neutral facial expression and verbal sentiment, attitude similarity influenced robot perception, i.e., participants whose attitudes were similar (vs. dissimilar) to the robot's perceived the robot as positive (vs. negative). However, when the robot expressed its opinion with a positive facial expression and verbal sentiment, participants judged the robot as positive regardless of attitude similarity between the user and the robot. These results indicate that the positive facial expressions and verbal sentiment of a robot can nullify the negative effect that arises when a user and a robot hold very different opinions.
|
|
We601 |
Auditorium |
User-Centered Design of Robots II |
Regular Session |
Chair: Maria Joseph, Felix Orlando | Indian Institute of Technology Roorkee |
Co-Chair: Trovato, Gabriele | Shibaura Institute of Technology |
|
14:30-14:42, Paper We601.1 | |
Fuzzy Based Control of a Flexible Bevel-Tip Needle for Percutaneous Interventions |
|
Maria Joseph, Felix Orlando (Indian Institute of Technology Roorkee) |
Keywords: Medical and Surgical Applications, Machine Learning and Adaptation
Abstract: In Minimally Invasive Surgical procedures, flexible bevel-tip needles are widely used for percutaneous interventions because they enhance target-reaching accuracy. However, this accuracy suffers due to tissue inhomogeneity, deformation of the tissue domain, and improper steering techniques. The main objectives of percutaneous interventional procedures are ensuring patient safety and reaching the desired target position accurately. Several researchers have developed approaches to control needle steering for precise target reaching. To avoid the complexity of existing controllers, we propose a fuzzy-based controller to regulate the needle in a specified plane. Our method combines the needle's non-holonomic kinematics inside the tissue domain with a Lyapunov-analysis-based fuzzy rule base for the fuzzy inference system, which ensures the closed-loop stability of the needling system for percutaneous interventional procedures. We validated our control scheme through extensive simulations and experimentation in biological tissue.
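For orientation, a widely used planar simplification of bevel-tip needle kinematics is the unicycle-type model below; it is offered here only as background from the needle-steering literature and is not necessarily the exact non-holonomic model adopted in the paper:

```latex
% Planar bevel-tip needle, unicycle-type model:
% (x, z): tip position in the insertion plane, \theta: tip heading,
% v: insertion speed, \kappa_{\max} = 1/r: maximum curvature set by the bevel,
% u \in [-1, 1]: steering input (e.g., duty-cycled shaft rotation).
\[
  \dot{x} = v \sin\theta, \qquad
  \dot{z} = v \cos\theta, \qquad
  \dot{\theta} = \kappa_{\max}\, v\, u .
\]
```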
|
|
14:42-14:54, Paper We601.2 | |
User Requirements for a Robot Teleoperation System for General Medical Examination |
|
Chirapornchai, Chatchai (Bristol Robotics Laboratory, University of the West of England), Bremner, Paul (University of the West of England), Giuliani, Manuel (University of the West of England, Bristol), Niyi-Odumosu, Faatihah (University of the West of England) |
Keywords: User-centered Design of Robots, Assistive Robotics, Medical and Surgical Applications
Abstract: Thailand, like many other countries worldwide, is facing a shortage of medical staff. We propose a solution to improve medical services in health centres: a robot teleoperation system that allows patients to consult with doctors from public hospitals, and doctors to examine them and make decisions about their required care. To develop such a system, a user-centred design (UCD) process is followed. Here we present an important first step in this process: establishing the user requirements for such a system. We conducted a focus group with Thai medical staff from Banphaeo General Hospital and an online survey with potential patients. An online collaborative board was set up to facilitate running the focus group virtually and to provide an effective tool for gathering data. The qualitative data were then analysed using a framework analysis. Based on this work, we present a list of user requirements for doctors, patients, and assistants, and discuss how the collected requirements can be transferred into technical specifications for the system. Our study found that communication among the different user groups is the most important requirement.
|
|
14:54-15:06, Paper We601.3 | |
Contributions of User Tests in a Living Lab in the Co-Design Process of Human-Robot Interactions |
|
Olivier, Marion (UTT), REY, Stéphanie (Berger-Levrault), VOILMY, DIMITRI (Troyes University of Technology), Ganascia, Jean-Gabriel (Sorbonne University and CNRS) |
Keywords: User-centered Design of Robots, Creating Human-Robot Relationships, Social Intelligence for Robots
Abstract: In the context of an aging population, socially assistive robots (SARs) have appeared as technological tools to support the work practices of care teams in contact with the elderly. We want to show the contribution of user tests in a Living Lab to the process of co-designing such tools. These tests allow technical problems to be highlighted and Human-Robot Interactions (HRI) to be studied in an iterative way. We conducted tests with elderly users as well as teenagers to analyze the strategies used by users of the system. This also allowed us to observe the first emotional reaction and the first interaction modality (tactile or vocal) provoked by the robot. Based on these observations, we propose perspectives for a better HRI.
|
|
15:06-15:18, Paper We601.4 | |
Designing the Mobile Robot Kevin for a Life Science Laboratory |
|
Kleine-Wechelmann, Sarah (Fraunhofer Institute for Manufacturing Engineering and Automation), Bastiaanse, Kim (Hochschule Der Medien Stuttgart), Freundel, Matthias (Fraunhofer IPA), Becker-Asano, Christian (Stuttgart Media University) |
Keywords: User-centered Design of Robots, Assistive Robotics, Robot Companions and Social Robots
Abstract: Laboratories are being increasingly automated. In small laboratories individual processes can be fully automated, but this is usually not economically viable. Nevertheless, individual process steps can be performed by flexible, mobile robots to relieve the laboratory staff. As a contribution to the requirements of a life science laboratory, the mobile, dextrous robot Kevin was designed by the Fraunhofer IPA research institute in Stuttgart, Germany. Kevin is a mobile service robot able to fulfill non-value-adding activities such as the transportation of labware. This paper gives an overview of Kevin's functionalities and development process, and presents a preliminary study on how its lights and sounds improve user interaction.
|
|
15:18-15:30, Paper We601.5 | |
From Task Analysis to Wireframe Design: An Approach to User-Centered Design of a GUI for Mobile HRI at Assembly Workplaces |
|
Colceriu, Christian (Krones AG), Leichtmann, Benedikt (Johannes Kepler University Linz), Brell-Cokcan, Sigrid (RWTH Aachen University), Jonas, Wolfgang (Braunschweig University of Art), Nitsch, Verena (Universität Der Bundeswehr München) |
Keywords: User-centered Design of Robots, HRI and Collaboration in Manufacturing Environments, Assistive Robotics
Abstract: While a user-centered design philosophy and corresponding design recommendations are central pillars of human-robot interaction (HRI) research, the process of moving from such abstract and generalized design recommendations to concrete, context-specific design implementations remains under-researched and vague in the literature. The goal of this paper is therefore to show an approach for moving from abstract design recommendations to a concrete interface, illustrating a design process that is rarely described in concrete terms in HRI. This is done using a real-world use case: designing a possible user-centered interface for mobile cooperative manufacturing robots for assembly work in a medium-sized company. A study is presented to conceptualize and test a Research-through-Design approach, which combines transdisciplinary methods to determine the relevant information that should be displayed on a graphical user interface (GUI) for HRI. Based on the use case, a Goal-Directed Task Analysis (GDTA) was conducted, consisting of a participatory observation and interviews with subject matter experts, to analyze an assembly task from the work objective down to the information units. The acquired information was transferred to a physical model. A wireframe was created to show how the results of the GDTA and the physical model can be applied to a GUI. The wireframe design was evaluated through qualitative interviews with end users (n = 12) to obtain first estimates of its relevance. To validate the applied methods, design and engineering students (n = 10) repeated the process in stages, followed by interviews. The results indicate that the method mix shows potential and leads to supportive user interfaces.
|
|
We602 |
Aragonese/Catalana |
Short and Long-Term Personalisation in Human-Robot Interaction |
Special Session |
Chair: Andriella, Antonio | Pal Robotics |
Co-Chair: Louie, Wing-Yue Geoffrey | Oakland University |
Organizer: Andriella, Antonio | Pal Robotics |
Organizer: Louie, Wing-Yue Geoffrey | Oakland University |
Organizer: Irfan, Bahar | KTH Royal Institute of Technology |
Organizer: Di Nuovo, Alessandro | Sheffield Hallam University |
Organizer: Rossi, Silvia | Universita' Di Napoli Federico II |
|
14:30-14:42, Paper We602.1 | |
Enhancing Affective Robotics Via Human Internal State Monitoring (I) |
|
Staffa, Mariacarla (University of Naples Parthenope), Rossi, Silvia (Universita' Di Napoli Federico II) |
Keywords: Applications of Social Robots, Detecting and Understanding Human Activity, Motivations and Emotions in Robotics
Abstract: In recent years, many solutions have been proposed to achieve natural Human-Robot Interaction (HRI) and communication, paving the way to new paradigms of understanding and adaptation based on mutual affective perception. Especially in human-robot social interaction, it is helpful not only that people can understand the robot's behavioral state, but also that robots possess the ability to detect, interpret, and adaptively react to human affective responses. Typical approaches assess humans' affective responses from the observation of overt behavior. However, there are cases in which overt observable behaviors do not match internal states (e.g., people with conditions compromising normal emotional responses). In such cases, having an objective measure of the user's state from 'inside' is of paramount importance. This work presents an affect detection model able to provide a measure of the human affective state, with a particular focus on stress, from the analysis of users' EEG activity during interaction with a social humanoid robot endowed with diverse affective elicitation behaviors. We argue that monitoring the stress state of a human during HRI is necessary to adapt the robot's behavior so as to avoid possible counterproductive effects of its use.
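For intuition, a common EEG-based stress proxy uses spectral band powers, e.g., a beta/alpha power ratio from frontal channels. The sketch below is our illustration of that generic idea, not the affect detection model developed in the paper:

```python
import numpy as np
from scipy.signal import welch

def band_power(x: np.ndarray, fs: int, lo: float, hi: float) -> float:
    """Mean power spectral density of signal x in the [lo, hi) Hz band."""
    f, pxx = welch(x, fs=fs, nperseg=fs * 2)
    return pxx[(f >= lo) & (f < hi)].mean()

def stress_index(eeg_channel: np.ndarray, fs: int = 256) -> float:
    alpha = band_power(eeg_channel, fs, 8, 13)
    beta = band_power(eeg_channel, fs, 13, 30)
    # A higher beta/alpha ratio is often read as higher arousal/stress;
    # this is a heuristic, not the paper's validated measure.
    return beta / (alpha + 1e-9)

print(stress_index(np.random.default_rng(3).standard_normal(256 * 10)))
```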
|
|
14:42-14:54, Paper We602.2 | |
Transparent Learning from Demonstration for Robot-Mediated Therapy (I) |
|
Tyshka, Alexander (Oakland University), Louie, Wing-Yue Geoffrey (Oakland University) |
Keywords: Social Learning and Skill Acquisition Via Teaching and Imitation, Robots in Education, Therapy and Rehabilitation, User-centered Design of Robots
Abstract: Robot-mediated therapy is an emerging field of research seeking to improve therapy for children with Autism Spectrum Disorder (ASD). Current approaches to autonomous robot-mediated therapy often focus on having a robot teach a single skill to children with ASD and lack a personalized approach to each individual. More recently, Learning from Demonstration (LfD) approaches are being explored to teach socially assistive robots to deliver personalized interventions after they have been deployed, but these approaches require large amounts of demonstrations and use learning models that cannot be easily interpreted. In this work, we present a LfD system capable of learning the delivery of autism therapies in a data-efficient manner using learning models that are inherently interpretable. The LfD system learns a behavioral model of the task with minimal supervision via hierarchical clustering and then learns an interpretable policy to determine when to execute the learned behaviors. The system is able to learn from less than an hour of demonstrations and, for each of its predictions, can identify the demonstrated instances that contributed to its decision. The system performs well under unsupervised conditions and achieves even better performance with a low-effort human correction process that is enabled by the interpretable model.
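The two-stage idea (cluster demonstrated behaviors with minimal supervision, then fit an interpretable policy over session state) can be sketched as follows; the features, cluster count, and tree depth are placeholder assumptions of ours:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(4)
demo_actions = rng.normal(size=(60, 5))      # feature vectors of demo segments

# Stage 1: hierarchical clustering groups demonstrations into behaviors.
Z = linkage(demo_actions, method="ward")
behaviors = fcluster(Z, t=4, criterion="maxclust")  # 4 behavior clusters

# Stage 2: an interpretable (shallow tree) policy maps session state to
# the behavior to execute; its paths double as human-readable rules.
states = rng.normal(size=(60, 3))            # session state at each demo
policy = DecisionTreeClassifier(max_depth=3).fit(states, behaviors)
print(policy.predict(states[:1]))            # which behavior to execute next
```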
|
|
14:54-15:06, Paper We602.3 | |
Learning Personalized Human-Aware Robot Navigation Using Virtual Reality Demonstrations from a User Study (I) |
|
de Heuvel, Jorge (University of Bonn), Corral, Nathan (University of Bonn), Bruckschen, Lilli (University of Bonn), Bennewitz, Maren (University of Bonn) |
Keywords: Motion Planning and Navigation in Human-Centered Environments, Programming by Demonstration, Novel Interfaces and Interaction Modalities
Abstract: For the most comfortable, human-aware robot navigation, subjective user preferences need to be taken into account. This paper presents a novel reinforcement learning framework to train a personalized navigation controller along with an intuitive virtual reality demonstration interface. The conducted user study provides evidence that our personalized approach significantly outperforms classical approaches, yielding more comfortable human-robot experiences. We achieve these results using only a few demonstration trajectories from non-expert users, who predominantly appreciate the intuitive demonstration setup. As we show in the experiments, the learned controller generalizes well to states not covered in the demonstration data, while still reflecting user preferences during navigation. Finally, we transfer the navigation controller to a real robot without loss in performance.
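A hedged sketch of what a personalized navigation reward might look like, mixing goal progress, similarity to a user's demonstrated trajectories, and human proximity; the weights and terms are our assumptions, not the paper's actual reward function:

```python
import numpy as np

def personalized_reward(pos, prev_pos, goal, human_pos, demo_points,
                        w_goal=1.0, w_demo=0.5, w_prox=0.8):
    """Reward = progress toward goal, minus distance from the nearest
    demonstrated waypoint, minus a penalty for crowding the human."""
    progress = np.linalg.norm(prev_pos - goal) - np.linalg.norm(pos - goal)
    demo_dist = min(np.linalg.norm(pos - d) for d in demo_points)
    proximity_penalty = max(0.0, 1.0 - np.linalg.norm(pos - human_pos))
    return w_goal * progress - w_demo * demo_dist - w_prox * proximity_penalty

pos, prev = np.array([1.0, 2.0]), np.array([0.8, 2.1])
goal, human = np.array([3.0, 3.0]), np.array([1.5, 2.2])
demos = [np.array([1.1, 2.0]), np.array([2.0, 2.5])]
print(personalized_reward(pos, prev, goal, human, demos))
```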
|
|
15:06-15:18, Paper We602.4 | |
ClassMate Robot: A Robot to Support Teaching and Learning Activities in Schools (I) |
|
Cucciniello, Ilenia (Università Degli Studi Di Napoli Federico II), L'Arco, Gianluca (University of Naples Federico II), Rossi, Alessandra (University of Naples Federico II), Autorino, Claudio (Protom Group S.p.a), Santoro, Giuseppe (Protom Group S.p.a), Rossi, Silvia (Universita' Di Napoli Federico II) |
Keywords: Robots in Education, Therapy and Rehabilitation, User-centered Design of Robots, Multimodal Interaction and Conversational Skills
Abstract: Educational robotics is an innovative approach based on the use of robots in schools to support teaching and learning activities. While several robotic solutions exist in support of STEM teaching activities, in this work we present the "Classmate Robot", a new social robot to be used in classrooms to support the learning experience through interaction. Classmate Robot has been designed and developed to improve the effectiveness of these activities by providing a framework in which the robot's behaviors can be personalized and learning applications can be easily integrated on top of the robot's interaction capabilities. This approach aims to increase learner engagement. We introduce the ROS-based architecture we developed, which is divided into three main layers plus an application layer. As a social robot, it combines several multimodal social cues to interact and communicate with students and teachers. Moreover, the robot is endowed with a set of behaviors designed to be compliant with its role of "classmate" in the interaction with the students.
|
|
15:18-15:30, Paper We602.5 | |
A Self Learning System for Emotion Awareness and Adaptation in Humanoid Robots (I) |
|
Shenoy, Sudhir (University of Virginia), Jiang, Yusheng (University of Virginia), Lynch, Tyler (University of Virginia), Manuel, Lauren Isabelle (University of Virginia), Doryab, Afsaneh (Carnegie Mellon University) |
Keywords: Motivations and Emotions in Robotics, Machine Learning and Adaptation, Social Intelligence for Robots
Abstract: Humanoid robots provide a unique opportunity for personalized interaction using emotion recognition. However, emotion recognition performed by humanoid robots in complex social interactions is limited in the flexibility of interaction as well as in the personalization and adaptation of responses. We designed an adaptive learning system for real-time emotion recognition that elicits its own ground-truth data and updates individualized models to improve performance over time. Convolutional Neural Networks based on off-the-shelf ResNet50 and Inception v3 are assembled into an ensemble model used for real-time emotion recognition through facial expressions. Two sets of robot behaviors, general and personalized, were developed to evoke different emotional responses. The personalized behaviors are adapted based on user preferences collected through a pre-test survey. The performance of the proposed system is verified through a two-stage user study and tested for the accuracy of the self-supervised retraining. We also evaluate the effectiveness of the personalized robot behavior in evoking intended emotions between stages using trust, empathy, and engagement scales. The participants were divided into two groups based on their familiarity and previous interactions with the robot. The emotion recognition results indicate a 12% increase in the F1 score for 7 emotions in stage 2 compared to the pre-trained model. Higher mean scores for trust, engagement, and empathy were observed in both participant groups. The average similarity score for both stages was 82%, and the average success rate of eliciting the intended emotion increased by 8.28% between stages despite the groups' differences in familiarity, thus offering a way to mitigate novelty effects in user interactions.
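A minimal Keras sketch of the described ensemble, averaging softmax outputs of ResNet50- and Inception v3-based branches over 7 emotion classes; the head design and untrained weights are illustrative only, not the authors' trained models:

```python
import tensorflow as tf

NUM_EMOTIONS = 7  # the paper reports F1 over 7 emotions

def build_branch(backbone_fn, input_shape=(224, 224, 3)):
    """Backbone without its classification top, plus a softmax emotion head."""
    backbone = backbone_fn(include_top=False, weights=None,
                           input_shape=input_shape, pooling="avg")
    out = tf.keras.layers.Dense(NUM_EMOTIONS, activation="softmax")(backbone.output)
    return tf.keras.Model(backbone.input, out)

resnet_branch = build_branch(tf.keras.applications.ResNet50)
inception_branch = build_branch(tf.keras.applications.InceptionV3)

# Ensemble: average the two branches' class probabilities.
inp = tf.keras.Input(shape=(224, 224, 3))
avg = tf.keras.layers.Average()([resnet_branch(inp), inception_branch(inp)])
ensemble = tf.keras.Model(inp, avg)  # argmax gives the predicted emotion
```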
|
|
We603 |
Sveva/Normanna |
Hand-Object Interaction: From Human Demonstrations to Robot Manipulation |
Special Session |
Chair: Carfì, Alessandro | University of Genoa |
Co-Chair: Pozzi, Maria | University of Siena |
Organizer: Carfì, Alessandro | University of Genoa |
Organizer: Mastrogiovanni, Fulvio | University of Genoa |
Organizer: Faria, Diego | Aston University |
Organizer: Perdereau, Véronique | Sorbonne University |
Organizer: Patten, Timothy | University of Technology Sydney |
Organizer: Vincze, Markus | Vienna University of Technology |
|
14:30-14:42, Paper We603.1 | |
An Affordable System for the Teleoperation of Dexterous Robotic Hands Using Leap Motion Hand Tracking and Vibrotactile Feedback (I) |
|
Coppola, Claudio (Queen Mary University of London), Solak, Gokhan (Queen Mary University of London), Jamone, Lorenzo (Queen Mary University London) |
Keywords: Degrees of Autonomy and Teleoperation, Social Learning and Skill Acquisition Via Teaching and Imitation, HRI and Collaboration in Manufacturing Environments
Abstract: Using robot manipulators in contexts where it is undesirable or impractical for humans to physically intervene is crucial for several applications, from manufacturing to extreme environments. However, robots require a high degree of intelligence to operate in those environments, especially if they are not fully structured. Teleoperation compensates for this limitation by connecting the human operator to the robot using human-robot interfaces. The remotely operated sessions can also be used as demonstrations to program more powerful autonomous agents. In this article, we report a thorough user study to characterise the effect of simple vibrotactile feedback on the performance and cognitive load of the human user in performing teleoperated grasping and manipulation tasks. The experiments are performed using a portable and affordable bilateral teleoperation system that we designed, composed of a Leap Motion sensor and a custom-designed vibrotactile haptic glove to operate a 4-fingered robot hand equipped with 3-axis force sensors on the fingertips; the software packages we developed are open-source and publicly available. Our results show that vibrotactile feedback improves teleoperation and reduces cognitive load, especially for complex in-hand manipulation tasks.
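For flavor, one simple way to drive vibrotactile feedback from 3-axis fingertip force sensors is to map force magnitude to vibration amplitude; the saturation constant and linear mapping below are our assumptions, not the glove's actual firmware:

```python
import numpy as np

F_MAX = 5.0  # assumed saturation force in newtons, not from the paper

def vibration_amplitudes(forces: np.ndarray) -> np.ndarray:
    """forces: (n_fingers, 3) force vectors -> (n_fingers,) amplitudes in [0, 1]."""
    magnitudes = np.linalg.norm(forces, axis=1)
    return np.clip(magnitudes / F_MAX, 0.0, 1.0)

# Two fingertips: a light touch and a near-saturating grasp force.
print(vibration_amplitudes(np.array([[0.0, 0.0, 1.2],
                                     [0.5, 0.5, 4.8]])))
```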
|
|
14:42-14:54, Paper We603.2 | |
In-Hand Manipulation Planning Using Human Motion Dictionary (I) |
|
HAMMOUD, Ali (Sorbonne University), Belcamino, Valerio (Università Degli Studi Di Genova), Carfì, Alessandro (University of Genoa), Perdereau, Véronique (Sorbonne University), Mastrogiovanni, Fulvio (University of Genoa) |
Keywords: Programming by Demonstration, Motion Planning and Navigation in Human-Centered Environments, Androids
Abstract: Dexterous in-hand manipulation is a distinctive and useful human skill. This ability requires the coordination of many senses and hand motions to adhere to many constraints. These constraints vary and can be influenced by the object's characteristics or the specific application. One of the key elements for a robotic platform to implement reliable in-hand manipulation skills is the ability to integrate those constraints into its motion generation. These constraints can be implicitly modelled or learned through experience or human demonstrations. We propose a method based on dictionaries of motion primitives to learn and reproduce in-hand manipulation skills. In particular, we focus on fingertip motions during manipulation, and we define an optimization process that combines motion primitives to reach specific fingertip configurations. The results of this work show that the proposed approach can generate manipulation motions coherent with human ones and that manipulation constraints are inherited even without an explicit formalization.
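The optimization described (combining dictionary primitives to reach a fingertip configuration) can be sketched, in its simplest least-squares form, as follows; the dimensions and random dictionary are placeholders, not the learned human-motion dictionary:

```python
import numpy as np

rng = np.random.default_rng(0)
D = rng.standard_normal((15, 6))   # dictionary: 6 primitives, 15-D fingertip state
target = rng.standard_normal(15)   # desired fingertip configuration

# Find primitive weights w minimizing ||D @ w - target||_2.
w, *_ = np.linalg.lstsq(D, target, rcond=None)
reconstruction = D @ w
print("residual:", np.linalg.norm(reconstruction - target))
```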
|
|
14:54-15:06, Paper We603.3 | |
Learning Grasping Strategies for a Soft Non-Anthropomorphic Hand from Human Demonstrations (I) |
|
Turco, Enrico (Istituto Italiano Di Tecnologia), Bo, Valerio (Istituto Italiano Di Tecnologia), Tavassoli, Mehrdad (Istituto Italiano Di Tecnologia), Pozzi, Maria (University of Siena), Prattichizzo, Domenico (University of Siena) |
Keywords: Programming by Demonstration, Innovative Robot Designs
Abstract: Finding effective grasp strategies constitutes one of the main challenges in robotic manipulation, especially when dealing with soft, underactuated, and non-anthropomorphic hands. This work presents a Learning from Demonstration approach to extract grasp primitives using a novel reconfigurable soft hand, the Soft ScoopGripper (SSG). Starting from human demonstrations, we derived Gaussian models through which we were able to devise different grasping strategies, exploiting the SSG features. As the grasping strategies are tightly related to the characteristics of the object to be grasped, we tested two different ways of modeling objects in the training dataset and we comparatively evaluated the resulting primitives. Experimental grasping trials on unknown test objects confirmed the effectiveness of the learned primitives and showed how assuming different levels of knowledge about the object representation in the training phase influences the grasp success.
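A sketch of the Gaussian-model idea using scikit-learn: fit a mixture to demonstrated grasp parameters and score candidate grasps by likelihood. The feature layout and component count are invented for illustration:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Each row: a demonstrated grasp, here as hypothetical parameters
# [approach_x, approach_y, approach_z, aperture, scoop_angle].
demos = rng.normal(loc=[0.0, 0.0, 0.2, 0.06, 0.4], scale=0.02, size=(50, 5))

gmm = GaussianMixture(n_components=2, random_state=0).fit(demos)

# Candidate grasps near the demonstrated modes score higher.
candidate = np.array([[0.01, -0.01, 0.21, 0.055, 0.42]])
print("log-likelihood of candidate grasp:", gmm.score(candidate))
```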
|
|
15:06-15:18, Paper We603.4 | |
From Handheld to Unconstrained Object Detection: A Weakly-Supervised On-Line Learning Approach (I) |
|
Maiettini, Elisa (Humanoid Sensing and Perception, Istituto Italiano Di Tecnologia), Maracani, Andrea (Istituto Italiano Di Tecnologia and University of Genoa), Camoriano, Raffaello (Istituto Italiano Di Tecnologia), Pasquale, Giulia (Istituto Italiano Di Tecnologia), Tikhanoff, Vadim (Italian Institute of Technology), Rosasco, Lorenzo (Istituto Italiano Di Tecnologia & MassachusettsInstitute OfTechn), Natale, Lorenzo (Istituto Italiano Di Tecnologia) |
Keywords: Machine Learning and Adaptation, Social Learning and Skill Acquisition Via Teaching and Imitation
Abstract: Deep Learning (DL) based methods for object detection achieve remarkable performance at the cost of computationally expensive training and extensive data labeling. Robot embodiment can be exploited to mitigate this burden by acquiring automatically annotated training data via a natural interaction with a human showing the object of interest, handheld. However, learning solely from this data may introduce biases (the so-called domain shift) and prevent adaptation to novel tasks. While Weakly-supervised Learning (WSL) offers a well-established set of techniques to cope with these problems in general-purpose Computer Vision, its adoption in challenging robotic domains is still at a preliminary stage. In this work, we target the scenario of a robot trained in a teacher-learner setting to detect handheld objects. The aim is to improve detection performance in different settings by letting the robot explore the environment with a limited human labeling budget. We compare several techniques for WSL in detection pipelines to reduce model re-training costs without compromising accuracy, proposing solutions that target the considered robotic scenario. We show that the robot can improve adaptation to novel domains, either by interacting with a human teacher (Active Learning) or with an autonomous supervision (Semi-supervised Learning). We integrate our strategies into an on-line detection method, achieving efficient model update capabilities with few labels. We experimentally benchmark our method on challenging robotic object detection tasks under domain shift.
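The active-learning side of such a pipeline often reduces to spending a small labeling budget on the detections the model is least confident about; a toy sketch (our construction, not the paper's selection criterion):

```python
import numpy as np

def select_for_labeling(confidences: np.ndarray, budget: int) -> np.ndarray:
    """Return indices of the `budget` images whose detector confidence is
    closest to 0.5, i.e., where the model is most uncertain."""
    uncertainty = -np.abs(confidences - 0.5)
    return np.argsort(uncertainty)[-budget:]

# Placeholder per-image detector confidences:
scores = np.array([0.97, 0.52, 0.10, 0.49, 0.88, 0.61])
print(select_for_labeling(scores, budget=2))  # -> indices 1 and 3
```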
|
|
We801 |
Auditorium |
Applications of Social Robots II |
Regular Session |
Chair: Ostrowski, Anastasia K. | Massachusetts Institute of Technology |
Co-Chair: Schwartz, Tim | German Research Center for Artificial Intelligence (DFKI GmbH) |
|
16:00-16:12, Paper We801.1 | |
Creating Informative and Successful Human Robot Interaction in a Real-World Service Environment |
|
Mikkelsen, Rikke Vivi (Aalborg University), Rehm, Matthias (Aalborg University) |
Keywords: Applications of Social Robots, User-centered Design of Robots, Child-Robot Interaction
Abstract: This study explores how to best utilize service robots when deploying them in customer-facing contexts for information requests. Many companies investing in such robots unfortunately fail to integrate them in a way that facilitates meaningful interactions with users. Copenhagen Visitor Service is one such company: it tried to integrate a Pepper robot into its service environment but failed due to several factors relating to the environment, the robot, and the lack of a strategic implementation. Through an exploratory field study with in-situ feedback from participants, it became evident that the robot in this specific setting needed a more focused purpose and an interaction flow not reliant on verbal communication. A prototype was developed consisting of an informative quiz targeting families. A subsequent field deployment demonstrated a better and more meaningful user experience, indicating that careful consideration of the use context, together with a strategic implementation, is paramount for successful integration.
|
|
16:12-16:24, Paper We801.2 | |
Should AI Systems in Nuclear Facilities Explain Decisions the Way Humans Do? An Interview Study |
|
Taylor, Hazel (The University of Manchester), Jay, Caroline (University of Manchester), Lennox, Barry (The University of Manchester), Cangelosi, Angelo (University of Manchester), Dennis, Louise (University of Manchester) |
Keywords: User-centered Design of Robots, Motion Planning and Navigation in Human-Centered Environments, Cognitive Skills and Mental Models
Abstract: There is a growing interest in the use of robotics and AI in the nuclear industry; however, it is important to ensure these systems are ethically grounded, trustworthy, and safe. An emerging technique to address these concerns is the use of explainability. In this paper we present the results of an interview study with nuclear industry experts exploring the use of explainable intelligent systems within the field. We interviewed 16 participants with varying backgrounds of expertise and presented two potential use cases for evaluation: a navigation scenario and a task scheduling scenario. Through an inductive thematic analysis we identified the aspects of a deployment that experts want explainable systems to cover, and we outline how these relate to the folk conceptual theory of explanation, a framework describing how people explain behaviours. We established that, during decision making in nuclear deployments, an intelligent system should explain its reasons for an action, its expectations of itself, changes in the environment that impact decision making, probabilities and the elements within them, safety implications and mitigation strategies, and robot health and component failures. We determine that these factors could be explained with cause, reason, and enabling factor explanations.
|
|
16:24-16:36, Paper We801.3 | |
Iterative User-Centric Development of Mobile Robotic Systems with Intuitive Multimodal Human-Robot Interaction in a Clinic Environment |
|
Horn, Hanns-Peter (HFC Human-Factors-Consult GmbH), Nadig, Matthias (German Research Center for Artificial Intelligence (DFKI)), Hackbarth, Johannes (DFKI GmbH), Willms, Christian (DFKI), Jacob, Caspar (German Research Center for Artificial Intelligence), Autexier, Serge (German Research Centre for Artificial Intelligence (DFKI)), Schwartz, Tim (German Research Center for Artificial Intelligence (DFKI GmbH)), Kruijff-Korbayova, Ivana (DFKI) |
Keywords: User-centered Design of Robots, Applications of Social Robots, Multimodal Interaction and Conversational Skills
Abstract: Deploying robots with the ability to autonomously navigate in the environment of a clinic necessitates carefully designing their verbal and non-verbal behaviour in order to avoid irritating, worrying or even endangering users, bystanders and passersby. We describe a user-centred approach used to design, implement and evaluate an interaction concept for mobile robotic assistants for a clinic environment, resulting in two robotic assistant prototypes and a generalized multimodal interaction framework applicable to a variety of autonomously navigating robotic agents. We present the steps of the design process and intermediate findings, the final system components, and experiment results showing good acceptance of the interaction concept. A clip of the two robots in action can be found at https://short.dfki.de/ro-man.
|
|
16:36-16:48, Paper We801.4 | |
Ethics, Equity, & Justice in Human-Robot Interaction: A Review and Future Directions |
|
Ostrowski, Anastasia K. (Massachusetts Institute of Technology), Walker, Raechel (Massachusetts Institute of Technology), Das, Madhurima (Massachusetts Institute of Technology), Yang, Maria (Massachusetts Institute of Technology), Breazeal, Cynthia (MIT), Park, Hae Won (MIT), Verma, Aditi (University of Michigan) |
Keywords: Ethical Issues in Human-robot Interaction Research, User-centered Design of Robots
Abstract: As social robots rapidly become mainstream technologies, it is critical for HRI researchers and practitioners to consider their societal and ethical impacts as well as their ability to perpetuate or mitigate intersectional social inequities and hierarchies relating to race, class, gender, disability, and other social axes. Through an equity, ethics, and justice-centered audit of human-robot interaction (HRI) scholarship, we reveal how the HRI community has engaged with these topics over the past two decades. We use the five senses ethical framework that has been proposed specifically for use in HRI contexts to perform the review paired with an analysis of equity and justice. We then expand the Design Justice framework (a framework for analyzing how design impacts society and distributes benefits and burdens to society through the lenses of equity, values, scope, ownership, and accountability) to HRI contexts through the inclusion of HRI-specific topics such as autonomy, transparency, deception, and policies. We invite researchers and practitioners to explore the HRI Equitable Design framework to work towards designing equitable and inclusive HRI research studies and technologies.
|
|
16:48-17:00, Paper We801.5 | |
Practical Considerations for Deploying Robot Teleoperation in Therapy and Telehealth |
|
Elbeleidy, Saad (Colorado School of Mines), Mott, Terran (Colorado School of Mines), Liu, Dan (ATLAS Institute, University of Colorado Boulder), Williams, Tom (Colorado School of Mines) |
Keywords: Assistive Robotics, User-centered Design of Robots, Robots in Education, Therapy and Rehabilitation
Abstract: Socially Assistive Robots (SARs) have shown promise, but there are still practical challenges to their widespread adoption. Recent research has demonstrated the advantages of teleoperated systems in this space and called for better guidelines for teleoperation interfaces. We ran group usability tests with therapists who had no experience with robots to learn more about the challenges they face. We found that robot-novice therapists understand how robots can be effective in therapy. However, learning to use a robot interface can be challenging for new users. These challenges include the unfamiliar metaphors used for robot connection and the need to create, acquire, or share robot interaction content. We also identify user needs that are perhaps non-obvious in a research context, such as the privacy of client health information and professional boundaries with client families when using electronic tools. We make several recommendations based on the analysis of our group usability tests: (1) developing dedicated interfaces for content authoring that account for caregiver technical expertise, (2) implementing content organization and sharing tools, (3) using connection metaphors that non-technical users may be more familiar with, such as phone calls or web URLs, and (4) considering user privacy in the connection methods chosen, especially within telehealth. Most importantly, we encourage further research in SAR teleoperation that focuses on caregivers as teleoperators.
|
|
We802 |
Aragonese/Catalana |
Social Intelligence for Robots II |
Regular Session |
Chair: Rehm, Matthias | Aalborg University |
Co-Chair: Ben Allouch, Somaya | Amsterdam University |
|
16:00-16:12, Paper We802.1 | |
Giving Social Robots a Conversational Memory for Motivational Experience Sharing |
|
Saravanan, Avinash (TU Delft), Tsfasman, Maria (TU Delft), Neerincx, Mark (TNO), Oertel, Catharine (Delft University of Technology) |
Keywords: Social Intelligence for Robots, Creating Human-Robot Relationships, Motivations and Emotions in Robotics
Abstract: In ongoing and consecutive conversations with people, a social robot has to determine which aspects to remember and how to address them in the conversation. In the health domain, important aspects concern health-related goals, the experienced progress (expressed sentiment), and the ongoing motivation to pursue them. Despite the progress in speech technology and conversational agents, most social robots lack a memory for such experience sharing. This paper presents the design and evaluation of a conversational memory for personalized behavior-change support conversations on healthy nutrition via memory-based motivational rephrasing. The main hypothesis is that referring to previous sessions improves motivation and goal attainment, particularly when the references vary. In addition, the paper explores how far motivational rephrasing affects users' perception of the conversational agent (the virtual Furhat). An experiment with 79 participants, consisting of three conversation sessions, was conducted via Zoom. The results showed a significant increase in participants' change in motivation when multiple references to previous sessions were provided.
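A toy sketch of memory-based motivational rephrasing: store per-session goals and sentiment, then vary how previous sessions are referenced. The phrasings and data layout are invented for illustration, not the study's dialog system:

```python
import random

memory = []  # one dict per completed session

def end_session(goal: str, sentiment: str) -> None:
    memory.append({"goal": goal, "sentiment": sentiment})

def motivational_opening() -> str:
    """Open a session with a varied reference to the previous one."""
    if not memory:
        return "Nice to meet you! What would you like to work on?"
    last = memory[-1]
    templates = [
        f"Last time you felt {last['sentiment']} about '{last['goal']}'. How did it go?",
        f"You mentioned working on '{last['goal']}'. Shall we build on that?",
    ]
    return random.choice(templates)  # varied references, per the hypothesis

end_session("eat two portions of vegetables daily", "hopeful")
print(motivational_opening())
```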
|
|
16:12-16:24, Paper We802.2 | |
Robots with Theory of Mind for Humans: A Survey |
|
Gurney, Nikolos (University of Southern California), Pynadath, David V. (University of Southern California) |
Keywords: Social Intelligence for Robots, Monitoring of Behaviour and Internal States of Humans, Cognitive Skills and Mental Models
Abstract: Theory of Mind (ToM) is a psychological construct that captures the ability to ascribe mental states to others and then use those representations to explain and predict behavior. We review recent progress in endowing artificially intelligent robots with ToM. A broad array of modeling, experimental, and benchmarking approaches and methods is present in the extant literature. Unlike other domains of human cognition for which research has achieved super-human capabilities, ToM for robots lacks a unified construct and is not consistently benchmarked or validated, realities which possibly hinder progress in this domain. We argue that this is, at least in part, due to inconsistent definitions of ToM, the lack of a unifying modeling construct, and the absence of a shared data resource. We believe remedying these gaps would improve the ability of the research community to compare the ToM abilities of different systems. We suggest that establishing a shared definition of ToM, creating a shared data resource that supports consistent benchmarking & validation, and developing a generalized modeling tool are critical steps towards giving robots ToM capabilities that lay observers will recognize as such.
|
|
16:24-16:36, Paper We802.3 | |
The Effects of Interaction Strategy and Robot Intent on Shopping Behavior |
|
Burg, Cedric (Lorenz Technology), Rehm, Matthias (Aalborg University), Gomez Cubero, Carlos (Aalborg University) |
Keywords: Assistive Robotics, Social Intelligence for Robots, Robotic Etiquette
Abstract: There is a growing interest in the retail industry to deploy service robots for customer interactions. Deploying such customer-facing robots raises the question of how we want to interact with these robots and reveals concerns that businesses and marketers could use robots to manipulate consumers. In this experiment, 67 study participants interacted with different virtual shopping robots that tried to influence the "shoppers'" purchasing decisions. The results indicate that a robot can increase consumer spending. The study exemplifies how a collaborative robot could be used as a customer-serving robot in a retail environment and investigates the impact on shopping behavior of (i) different interaction strategies (human vs. robot control) and (ii) dark patterns (manipulative vs. supportive robot).
|
|
16:36-16:48, Paper We802.4 | |
"This Bot Knows What I'm Talking About!" Human-Inspired Laughter Classification Methods for Adaptive Robotic Comedians |
|
Gray, Carson (Oregon State University), Webster, Trevor (Oregon State University), Ozarowicz, Brian (Oregon State University), Chen, Yuhang (Oregon State University), Bui, Timothy (Oregon State University), srivastava, ajitesh (University of Southern California), Fitter, Naomi T. (Oregon State University) |
Keywords: Robots in art and entertainment, Machine Learning and Adaptation, Social Intelligence for Robots
Abstract: Robotic comedians (and social robots generally) need to recognize and adapt to human responses during playful dialog. To support this ability, we determined design guidelines via a survey of 20 human comedians and developed a machine learning pipeline to support comedian-like behaviors in our robotic system. Based on comedian input, we identified that discerning laughter vs. no laughter during a joke setup, and big laugh vs. so-so response vs. no laugh after a punchline, were important skills for a comedian. To enable these abilities in a robotic system, we used an existing dataset of robot comedy performance audio to train classifiers for audience responses during the setup and after the punchline of jokes. Top-performing models for the above types of discernment performed similarly to human raters who completed the same classification task. Comparison of the current results to our past efforts of a similar nature reveals the repeatability of top-performing approaches and their generalizability to new parts of robot comedy routines. The social intelligence supported by this work can promote the likability and acceptance of robots.
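One common recipe for this kind of audience-response classification is MFCC statistics over short audio windows fed to a standard classifier; the sketch below is our generic illustration, not the authors' trained pipeline:

```python
import librosa
import numpy as np
from sklearn.svm import SVC

LABELS = ["no_laugh", "so_so", "big_laugh"]  # hypothetical label set

def mfcc_features(path: str) -> np.ndarray:
    """26-D feature vector: per-coefficient MFCC means and std devs."""
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Given hypothetical (path, label) pairs of post-punchline audio clips:
# X = np.stack([mfcc_features(p) for p, _ in clips])
# y = np.array([LABELS.index(l) for _, l in clips])
# clf = SVC(kernel="rbf").fit(X, y)
```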
|
|
16:48-17:00, Paper We802.5 | |
Advancing Socially-Aware Navigation for Public Spaces |
|
Salek Shahrezaie, Roya (University of Nevada, Reno), Manalo, Bethany (University of Nevada, Reno), Brantley, Aaron (University of Nevada, Reno), Lynch, Casey R. (University of Nevada, Reno), Feil-Seifer, David (University of Nevada, Reno) |
Keywords: Social Intelligence for Robots, Applications of Social Robots
Abstract: Mobile robots must navigate efficiently, reliably, and appropriately around people when acting in shared social environments. For robots to be accepted in such environments, we explore robot navigation tailored to the social context of each setting. Navigating through dynamic environments considering only a collision-free path has long been solved; in human-robot environments, the challenge is no longer simply navigating efficiently from one point to another. Autonomously detecting the context and adopting an appropriate social navigation strategy is vital for social robots' long-term applicability in dense human environments. As complex social environments, museums are suitable for studying such behavior because they contain many different navigation contexts in a small space. Our prior Socially-Aware Navigation model considered context classification, object detection, and pre-defined rules to define navigation behavior in specific contexts, such as a hallway or a queue. This work uses environmental context, object information, and more realistic interaction rules for complex social spaces. In the first part of the project, we convert real-world interactions into algorithmic rules for use in a robot's navigation system. Moreover, we use context recognition, object detection, and scene data for context-appropriate rule selection. We introduce our methodology for studying social behaviors in complex contexts, different analyses of our text corpus on museums, and the extracted social norms. Finally, we demonstrate the application of some of the rules in scenarios in the simulation environment.
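The context-appropriate rule selection described can be pictured as a lookup from (detected context, salient object) to a navigation rule; the contexts and rule names below are invented for illustration:

```python
from typing import Optional

# Hypothetical museum contexts and navigation rules:
NAV_RULES = {
    ("hallway", None): "keep_right_and_pass",
    ("queue", "person_line"): "join_end_of_queue",
    ("exhibit", "artwork"): "do_not_cross_viewing_zone",
    ("open_gallery", None): "maintain_social_distance",
}

def select_rule(context: str, salient_object: Optional[str]) -> str:
    """Prefer an object-specific rule, fall back to the context default."""
    return NAV_RULES.get((context, salient_object),
                         NAV_RULES.get((context, None), "default_polite_nav"))

print(select_rule("exhibit", "artwork"))  # -> do_not_cross_viewing_zone
```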
|
|
We803 |
Sveva/Normanna |
Ethical Issues in Human-Robot Interaction Research |
Regular Session |
Chair: Weiss, Astrid | TU Wien |
Co-Chair: Louie, Wing-Yue Geoffrey | Oakland University |
|
16:00-16:12, Paper We803.1 | |
With a Little Help of Humans. An Exploratory Study of Delivery Robots Stuck in Snow |
|
Dobrosovestnova, Anna (TU Wien), Schwaninger, Isabel (TU Wien), Weiss, Astrid (TU Wien) |
Keywords: Motivations and Emotions in Robotics, Curiosity, Intentionality and Initiative in Interaction, Ethical Issues in Human-robot Interaction Research
Abstract: People's willingness to help robots has been explored in the lab and in the wild in various settings. While previous studies relied on robotic prototypes, service robots are now already deployed in public spaces. This presents a novel and exciting opportunity for human-robot interaction (HRI) scholars to study robotic technologies in the context of their deployment. In this paper, we present the qualitative methodology and outcomes of an exploratory mixed-methods study (observations, autoethnography, and online content analysis) of people voluntarily helping commercially deployed delivery robots in Tallinn, Estonia. Based on the cumulative findings of the three methods, we discuss how spontaneous help towards robots manifested, the situational factors that may have contributed to the observed helping behaviors, and the role that perceptions of the robots as cute and helpful may have played in these interactions. While our findings support the assumption that human help is a reasonable mitigation strategy for overcoming the challenges service robots may face in uncontrolled environments, we also emphasize the importance of considering the ethical implications when commercial technology relies in part on passersby to succeed in its tasks.
|
|
16:12-16:24, Paper We803.2 | |
Verbal and Non-Verbal Conflict Resolution Strategies for Service Robots |
|
Babel, Franziska (Ulm University), Kraus, Johannes (Ulm University, Dept. Human Factors), Hock, Philipp (Ulm University), Baumann, Martin (Ulm University) |
Keywords: Multimodal Interaction and Conversational Skills, Robotic Etiquette, Non-verbal Cues and Expressiveness
Abstract: When service robots are employed in private and public spaces, conflicts in human-robot interaction (HRI) might arise. To gain priority and continue its tasks, a robot would benefit from conflict resolution strategies (CRS) that are both acceptable and effective. Previous studies have mainly investigated verbal or text-based CRS. As verbal interaction might not be suitable for every application context or robot type, we investigated movement-based CRS. First, four possible implementations were pre-tested in an online study (N = 101). Then two CRS varying in modality (verbal vs. motoric) and assertiveness (submissive vs. dominant) were tested for acceptance and compliance in a lab study (N = 31) and compared across three robot types (humanoid, zoomorphic, and mechanoid). The verbal appeal was the most effective strategy for achieving user compliance. The motoric dominant strategy (moving back and forth) was perceived as most assertive and least polite when applied by the mechanoid cleaning robot, but it was not more effective than the verbal strategy. These studies provide insights into how robot type influences the acceptability and effectiveness of robot conflict resolution behavior depending on modality.
|
|
16:24-16:36, Paper We803.3 | |
Exploring the Influence of Culture and Gender on Older Adults’ Perception of Polite Robots |
|
Kumar, Shikhar (Ben-Gurion University of the Negev), Halloun, Samer (Ben-Gurion University), Itzhak, Eliran (Ben-Gurion University), Tractinsky, Noam (Ben-Gurion University of the Negev), Nimrod, Galit (Ben-Gurion University of the Negev), Edan, Yael (Ben-Gurion University of the Negev) |
Keywords: Robotic Etiquette
Abstract: This study explored culture and gender differences among older adults interacting with polite non-humanoid robots. We used Lakoff's theory of polite conversational maxims. A within-subjects experiment was designed with robots exhibiting polite and non-polite, correct and erroneous behavior. The polite robot employed three sub-rules (“don’t impose,” “give options,” and “be friendly”). A user study was conducted with older adults from two cultural backgrounds: Israeli Jewish and Arab participants. The study revealed that participants could not differentiate between the polite and non-polite behaviors when the robot acted correctly. They were more annoyed with a polite robot making an error than with a non-polite erroneous robot. Whereas gender had no impact on participants’ evaluations, there were significant cultural differences between the groups: although the Arab participants were primarily more skeptical, they enjoyed the robot more, were more satisfied with it, and trusted it more than the Jewish participants did. In addition, they preferred the direct non-polite behavior, whereas the Jewish participants liked the polite behavior more.
|
|
16:36-16:48, Paper We803.4 | |
Parental Attitudes, Trust, and Comfort with Using Robots for Providing Care to Children with Developmental Disabilities |
|
Louie, Wing-Yue Geoffrey (Oakland University), Korneder, Jessica (Oakland University), Zeigler-Hill, Virgil (Oakland University) |
Keywords: Child-Robot Interaction, Ethical Issues in Human-robot Interaction Research, Robots in Education, Therapy and Rehabilitation
Abstract: Parents of children with developmental disabilities face significantly higher workloads than parents of neurotypical children due to higher caregiving demands. Consequently, parents of children with developmental disabilities often face emotional, physical, mental, and social health declines. There has been significant research and development of robots for providing care to children with developmental disabilities across a variety of caregiving scenarios. However, it is presently unclear whether parents would be comfortable with robots interacting with their children in these different child-robot interaction scenarios. In this paper, we investigate parental comfort with robots caring for children with developmental disabilities in a variety of interaction scenarios, and the influence of parental negative attitudes towards robots, as well as trust, on that comfort. Overall, our findings suggest that US parents' attitudes, trust, and comfort towards robots caring for children with developmental disabilities are neutral. Parents were most comfortable with a robot serving as a teaching assistant to a child with a developmental disability and least comfortable with it serving as a bus driver. Furthermore, trust in robots had a medium positive association with comfort with child-robot interactions, and negative attitudes toward robots had a medium negative association with that comfort.
|
|
16:48-17:00, Paper We803.5 | |
Perception of Physical and Virtual Agents: Exploration of Factors Influencing the Acceptance of Intrusive Domestic Agents |
|
Zehnder, Eloise (Université De Lorraine), Dinet, Jérôme (University of Lorraine), Charpillet, Francois (Université De Lorraine, CNRS, Inria, LORIA, F-54000 Nancy, France) |
Keywords: Human Factors and Ergonomics, Anthropomorphic Robots and Virtual Humans, Ethical Issues in Human-robot Interaction Research
Abstract: Domestic robots and agents are widely sold to the general public, raising ethical issues related to the data such machines harvest. While users show a general acceptance of these robots, concerns remain when it comes to information security and privacy. Current research indicates a privacy-security trade-off in exchange for better use, and a robot's anthropomorphic and social abilities are also known to modulate its acceptance and use. To explore and extend what the literature has already established on the subject, we examined how users perceived their robot (Replika, Roomba©, Amazon Echo©, Google Home©, or Cozmo©/Vector©) through an online questionnaire covering acceptance, perceived privacy and security, anthropomorphism, disclosure, perceived intimacy, and loneliness. The results support the literature regarding the potentially manipulative effects of a robot's anthropomorphism on acceptance, but also on information disclosure, perceived intimacy, security, and privacy.
|
|
We901 |
Auditorium |
Non-Verbal Cues and Expressiveness II |
Regular Session |
Chair: Sgorbissa, Antonio | University of Genova |
Co-Chair: Yamada, Seiji | National Institute of Informatics |
|
17:10-17:22, Paper We901.1 | |
Comparison of Vehicle-To-Bicyclist and Vehicle-To-Pedestrian Communication Feedback Module: A Study on Increasing Legibility, Public Acceptance and Trust |
|
Schmidt-Wolf, Melanie (University of Nevada, Reno), Feil-Seifer, David (University of Nevada, Reno) |
Keywords: Non-verbal Cues and Expressiveness, Applications of Social Robots, Creating Human-Robot Relationships
Abstract: Autonomous vehicles face a fundamental communication challenge: there is no human driver to signal the vehicle's intentions to vulnerable road users nearby. This presents an opportunity for a vehicle-to-vulnerable-road-user communication system, for instance for bicyclists. Enabling communication between bicyclists and autonomous vehicles would improve bicyclists' safety around autonomous driving. If a bicyclist wants to pass the autonomous vehicle, the vehicle should provide feedback to the human about what it is about to do and what it would like the person to do. The user study presented in this paper investigated several possible options for an external display for effective nonverbal communication between an autonomous vehicle and a bicyclist. The results were compared to our recent study concerning vehicle-to-pedestrian communication. In total, 208 participants were recruited for the vehicle-to-pedestrian and vehicle-to-bicyclist feedback module studies. The results did not show significant differences between the communication modalities presented. This paper shows and discusses differences between the vehicle-to-pedestrian and vehicle-to-bicyclist feedback modules. For economic reasons, it is plausible to use the same combination of interaction modes, symbols and text, as for the vehicle-to-pedestrian communication feedback module. This study shows the need for more immersive environments to study vehicle-to-bicyclist communication in more detail.
|
|
17:22-17:34, Paper We901.2 | |
Physical Embodiment vs. Smartphone: Which Influences Presence and Anthropomorphism Most in Telecommunication? |
|
Yun, Nungduk (The Graduate University for Advanced Studies, SOKENDAI), Yamada, Seiji (National Institute of Informatics) |
Keywords: Non-verbal Cues and Expressiveness, Social Presence for Robots and Virtual Humans, Virtual and Augmented Tele-presence Environments
Abstract: Today, people use teleconference systems like Zoom or Skype for social communication, even sharing drinks with others through a screen via the internet. In addition, some people have started using embodied systems called telepresence robots, such as the Beam robot, and some schools use telepresence robots so that students can attend school remotely. However, previous studies have not compared such systems in terms of social presence and anthropomorphism, for example robots compared with humans. We therefore wondered how the presence and anthropomorphism of such systems affect people, and carried out a web-based experiment analyzed with a one-way ANOVA across three conditions (smartphone vs. telepresence robot with motion vs. telepresence robot without motion). Some people feel that telepresence robots bring a feeling of presence to remote places. Ironically, the results showed that neither the smartphone video teleconference system nor the telepresence robot created a feeling of presence; regarding anthropomorphism, however, participants perceived more human-likeness in the video teleconference system.
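For readers unfamiliar with the analysis, a one-way ANOVA over the three conditions can be run in a few lines with SciPy. The ratings below are synthetic placeholders, not the study's data.

import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
# Synthetic presence ratings for the three conditions named in the abstract.
smartphone = rng.normal(4.2, 1.0, size=30)
robot_with_motion = rng.normal(4.0, 1.0, size=30)
robot_without_motion = rng.normal(3.8, 1.0, size=30)

f_stat, p_value = f_oneway(smartphone, robot_with_motion, robot_without_motion)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")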
|
|
17:34-17:46, Paper We901.3 | |
Action Unit Generation through Dimensional Emotion Recognition from Text |
|
Bucci, Benedetta (University of Naples Federico II), Rossi, Alessandra (University of Naples Federico II), Rossi, Silvia (Universita' Di Napoli Federico II) |
Keywords: Affective Computing, Non-verbal Cues and Expressiveness, Applications of Social Robots
Abstract: Expressiveness is a critical feature of communication between humans and robots, and it helps humans better understand and accept a robot. Emotions can be expressed through a variety of modalities: kinesthetic (facial expressions, body posture, and gestures), auditory (the acoustic features of speech), and semantic (the content of what is said). One of the most effective modalities for communicating emotions is facial expression. Social robots often show facial expressions as coded animations; however, a robot must be able to express emotional responses appropriate to its interaction with people. In this work, we consider verbal interactions between humans and robots and propose a system composed of two modules for generating facial emotions by recognising the arousal and valence values of a written sentence. The first module, based on Bidirectional Encoder Representations from Transformers (BERT), performs emotion recognition on a sentence. The second, an Auxiliary Classifier Generative Adversarial Network, generates the facial movements that express the recognised emotion in terms of valence and arousal.
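A rough sketch of the first module's shape, assuming a HuggingFace BERT checkpoint with a two-output regression head for (valence, arousal); the checkpoint name and the head configuration are assumptions, and the outputs are meaningful only after fine-tuning on a labeled corpus.

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumed off-the-shelf checkpoint; the paper's fine-tuned weights are not
# implied here.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2, problem_type="regression"
)

def valence_arousal(sentence):
    # Predict a (valence, arousal) pair for one sentence.
    inputs = tokenizer(sentence, return_tensors="pt", truncation=True)
    with torch.no_grad():
        valence, arousal = model(**inputs).logits[0].tolist()
    return valence, arousal  # useful only after fine-tuning

print(valence_arousal("I am so happy to see you!"))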
|
|
17:46-17:58, Paper We901.4 | |
Exploring Older Adults' Acceptance, Needs and Design Requirements towards Applying Social Robots in a Rehabilitation Context |
|
Liu, Baisong (Eindhoven University of Technology), Tetteroo, Daniel (Eindhoven University of Technology), Timmermans, Annick. A. A. (Rehabilitation Foundation Limburg (SRL)), Markopoulos, Panos (Eindhoven University of Technology) |
Keywords: Robots in Education, Therapy and Rehabilitation, Social Touch in Human–Robot Interaction, User-centered Design of Robots
Abstract: This paper presents a qualitative study that uses video prototypes and interviews to explore older adults' acceptance, needs, and design requirements for a social robotic application for physical rehabilitation. Our study identified the benefits of applying social robots (SR) in physical rehabilitation. Further, we discovered participants' preference for an anthropomorphic social robot design. The data revealed that a desire for social interaction could increase older adults' motivation to engage in an active lifestyle and their acceptance of social robots. However, participants showed low motivation for technology adoption and negatively anthropomorphized the social robot, which lowers acceptance of such applications. This work complements current user-centered explorations of SR in rehabilitation and provides considerations for SR design for rehabilitative applications.
|
|
17:58-18:10, Paper We901.5 | |
A Microsociological Approach to Understanding the Boundary between Robot Cooperativeness and Uncooperativeness in Human-Robot Collaboration |
|
Abe, Naoko (The University of Sydney), Rye, David (The University of Sydney), Loke, Lian (The University of Sydney) |
Keywords: Cooperation and Collaboration in Human-Robot Teams, Interaction Kinesics, Non-verbal Cues and Expressiveness
Abstract: While existing approaches to human-robot collaboration typically focus on how to build robots that can work safely and fluently with humans on collaborative tasks, our research focuses on how people experience interaction with a robot and interpret its behaviour as cooperative or uncooperative. A microsociological theory was used to analyse the process of interaction as it unfolds, aiming to examine human perception of the cooperativeness and uncooperativeness of a robot and to identify the boundary between them in the context of human-robot collaboration. Our hypothesis was that an unexpected robot movement during human-robot interaction would cause a negative perception of uncooperativeness. An experiment in which the interaction was ‘disrupted’ by the robot’s movement during a collaborative task was conducted with 21 participants. Our findings, obtained through qualitative analysis of semi-structured interviews and observations, show that the disruption indeed leads to a negative perception of the robot. The perception of robot cooperativeness or uncooperativeness, however, involves complex processes, and the boundary between them is not rigid but flexible and nuanced.
|
|
We902 |
Aragonese/Catalana |
Motion Planning and Navigation in Human-Centered Environments |
Regular Session |
Chair: Nicola, Giorgio | CNR |
Co-Chair: Caluwaerts, Ken | Google |
|
17:10-17:22, Paper We902.1 | |
Spatio-Temporal Action Order Representation for Mobile Manipulation Planning |
|
Kawasaki, Yosuke (Keio University), Takahashi, Masaki (Keio University) |
Keywords: Motion Planning and Navigation in Human-Centered Environments, Multi-modal Situation Awareness and Spatial Cognition, Assistive Robotics
Abstract: Social robots are used to perform mobile manipulation tasks, such as tidying up and carrying, based on instructions provided by humans. A mobile manipulation planner that exploits the robot's functions requires a good understanding of the feasible actions in real space, which depends on the robot's subsystem configuration and the object placement in the environment. This study aims to realize a mobile manipulation planner that considers the world state, which includes the robot state (the subsystem configuration and each subsystem's state) required to exploit the robot's functions. This paper proposes a novel environmental representation called a world state-dependent action graph (WDAG). The WDAG represents the spatial and temporal order of feasible actions based on the world state, adopting knowledge representation with scene graphs and a recursive multilayered graph structure. The study also proposes a mobile manipulation planning method using the WDAG. The planner can derive many effective action sequences for accomplishing given tasks, based on an exhaustive understanding of the spatial and temporal connections between actions. The effectiveness of the proposed method is evaluated through practical machine experiments; the results demonstrate that the method facilitates effective utilization of the robot's functions.
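As a loose illustration (not the paper's WDAG definition), feasible actions and their temporal ordering can be encoded as a directed graph whose nodes carry the world-state preconditions they require; the action and state names below are invented.

import networkx as nx

# Illustrative encoding only: nodes are feasible actions annotated with the
# world state they require; edges give the temporal order of actions.
wdag = nx.DiGraph()
wdag.add_node("approach_table", requires={"base": "free"})
wdag.add_node("grasp_cup", requires={"gripper": "empty", "near": "table"})
wdag.add_node("place_cup_on_shelf", requires={"gripper": "holding_cup"})
wdag.add_edge("approach_table", "grasp_cup")
wdag.add_edge("grasp_cup", "place_cup_on_shelf")

# Each root-to-goal path is a candidate action sequence for a planner.
print(list(nx.all_simple_paths(wdag, "approach_table", "place_cup_on_shelf")))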
|
|
17:22-17:34, Paper We902.2 | |
Providers-Clients-Robots: Framework for Spatial-Semantic Planning for Shared Understanding in Human-Robot Interaction |
|
Kathuria, Tribhi (University of Michigan, Ann Arbor), Xu, Yifan (University of Michigan), Chakhachiro, Theodor (American University of Beirut), Yang, X. Jessie (University of Michigan), Ghaffari, Maani (University of Michigan) |
Keywords: Motion Planning and Navigation in Human-Centered Environments, Applications of Social Robots, Creating Human-Robot Relationships
Abstract: This paper develops a novel framework for socially assistive robots, called Providers-Clients-Robots (PCR), that supports research on shared understanding in human-robot interactions. Providers, Clients, and Robots share an actionable and intuitive representation of the environment to create plans that best satisfy the combined needs of all parties. The plans are formed via interaction between the Client and the Robot based on a previously built multi-modal navigation graph. The explainable environmental representation, in the form of a navigation graph, is constructed collaboratively between Providers and Robots prior to interaction with Clients. We develop a realization of the proposed framework to autonomously create a spatial-semantic representation of an indoor environment. Moreover, we develop a planner that takes in constraints from Providers and Clients of the establishment and dynamically plans a sequence of visits to each area of interest. Evaluations show that the proposed realization of the PCR framework can successfully make plans that satisfy the specified time budget and sequence constraints while outperforming the greedy baseline.
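The planning problem can be pictured as selecting visits under a time budget with ordering constraints. The toy greedy pass below illustrates the baseline-style problem the paper evaluates against, not the authors' planner; all names and numbers are invented.

# Toy version of the visit-planning problem: choose areas under a time budget
# while honoring pairwise ordering constraints. A simple greedy pass, akin to
# the kind of baseline the paper compares against.
def plan_visits(areas, durations, budget, must_precede):
    plan, used = [], 0.0
    for area in sorted(areas, key=lambda a: durations[a]):
        if used + durations[area] > budget:
            continue
        # Honor ordering: skip an area whose prerequisite is not planned yet.
        if any(pre not in plan for pre, post in must_precede if post == area):
            continue
        plan.append(area)
        used += durations[area]
    return plan

areas = ["lobby", "gallery", "cafe"]
durations = {"lobby": 2.0, "gallery": 5.0, "cafe": 3.0}
print(plan_visits(areas, durations, budget=8.0,
                  must_precede=[("lobby", "gallery")]))  # ['lobby', 'cafe']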
|
|
17:34-17:46, Paper We902.3 | |
A Model for Determining Natural Pathways for Side-By-Side Companion Robots in Passing Pedestrian Flows Using Dynamic Density |
|
Nguyen, Vinh (Unitec Institute of Technology), Tran, Thang (IOIT), Kuo, I Han (Unitec Institute of Technology) |
Keywords: Motion Planning and Navigation in Human-Centered Environments, Robot Companions and Social Robots, Cooperation and Collaboration in Human-Robot Teams
Abstract: Passing pedestrians is a common task for pairs of people walking together. While pairs generally prefer the side-by-side walking mode, that mode occupies more of the pathway and leaves less space for pedestrians traveling in the opposite direction than the leader-follower mode, in which one walker follows the other. Humans therefore often intuitively seek to balance side-by-side walking against leaving moving space for others when passing. This is also a problem that designers of companion robots must solve. By discovering, modeling, and incorporating a new factor, the habit of moving with the flow and the density of movement (termed dynamic density), this work proposes a novel model for determining natural navigation pathways along which a companion robot can pass multiple pedestrians walking in the opposite direction, mimicking human passing behaviors. Based on two experimental observations and data collections, the model was developed and then validated by comparing the pathways it generated with the natural moving plans of pairs in the same situations. The simulation results show that the new model can determine moving plans for pairs in passing situations that are similar to humans' real decisions.
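A hypothetical numeric reading of the dynamic-density idea: estimate the density of the oncoming flow over a look-ahead window and switch the pair's formation when the corridor becomes too crowded. The formula and threshold are invented placeholders, not the paper's model.

# Invented placeholder formula for the "dynamic density" idea above.
def dynamic_density(n_oncoming, corridor_width_m, window_len_m):
    # Pedestrians per square meter in the look-ahead window.
    return n_oncoming / (corridor_width_m * window_len_m)

def choose_formation(density, threshold=0.3):
    # Give way by collapsing to leader-follower when the flow is dense.
    return "leader_follower" if density > threshold else "side_by_side"

d = dynamic_density(n_oncoming=4, corridor_width_m=3.0, window_len_m=5.0)
print(choose_formation(d))  # ~0.27 ped/m^2 -> 'side_by_side'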
|
|
17:46-17:58, Paper We902.4 | |
Observer-Aware Legibility for Social Navigation |
|
Taylor, Ada (Carnegie Mellon University), Mamantov, Ellie (Yale University), Admoni, Henny (Carnegie Mellon University) |
Keywords: Motion Planning and Navigation in Human-Centered Environments, Non-verbal Cues and Expressiveness, Robotic Etiquette
Abstract: We designed an observer-aware method for creating navigation paths that simultaneously indicate a robot’s goal while attempting to remain in view for a particular observer. Prior art in legible motion does not account for observers' limited fields of view, which can lead to wasted communication efforts that go unobserved by the intended audience. Our observer-aware legibility algorithm directly models the locations and perspectives of observers and places legible movements where they can be easily seen. To explore the effectiveness of this technique, we performed a 300-person online user study. Users viewed first-person videos of restaurant scenes with robot waiters moving along paths optimized for different observer perspectives, along with a baseline path that did not take any observer's field of view into account. Participants were asked to estimate, as the robot moved along each path, how likely it was that the robot was heading to their table versus the other goal table. We found that for observers with incomplete views of the restaurant, observer-aware legibility is effective at increasing the period of time during which observers correctly infer the robot's goal. Non-targeted observers perform worse on paths created for observers other than themselves, the natural drawback of personalizing legible motion to a particular observer. We also find that an observer's relationship to the environment (e.g., what is in their field of view) has more influence on their inferences than their position relative to the targeted observer, implying that knowledge of the environment is required to plan effectively for multiple observers at once.
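One ingredient such a planner needs is a field-of-view test for each observer. The sketch below checks path-point visibility and computes the fraction of a path an observer can see; it is a simplified stand-in under assumed geometry, not the authors' algorithm or their legibility objective.

import math

def in_field_of_view(observer_xy, heading_rad, fov_rad, point_xy):
    # True if point_xy lies within the observer's angular field of view.
    dx = point_xy[0] - observer_xy[0]
    dy = point_xy[1] - observer_xy[1]
    bearing = math.atan2(dy, dx)
    diff = (bearing - heading_rad + math.pi) % (2 * math.pi) - math.pi
    return abs(diff) <= fov_rad / 2

def visible_fraction(path, observer_xy, heading_rad, fov_rad=math.radians(120)):
    # Fraction of path points the observer can see: one quantity an
    # observer-aware planner could trade off against goal legibility.
    seen = sum(in_field_of_view(observer_xy, heading_rad, fov_rad, p) for p in path)
    return seen / len(path)

path = [(0, 0), (1, 1), (2, 2), (3, 3)]
print(visible_fraction(path, observer_xy=(0, 2), heading_rad=0.0))  # 0.75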
|
|
17:58-18:10, Paper We902.5 | |
Augmented Environment Representations with Complete Object Models |
|
Sivananda, Krishnananda Prabhu (Aalto University), Verdoja, Francesco (Aalto University), Kyrki, Ville (Aalto University) |
Keywords: Motion Planning and Navigation in Human-Centered Environments, Virtual and Augmented Tele-presence Environments, Cognitive Skills and Mental Models
Abstract: While the 2D occupancy maps commonly used in mobile robotics enable safe navigation in indoor environments, for robots to understand and interact with their environment and its inhabitants, representations of 3D geometry and semantic environment information are required. Semantic information is crucial for effectively interpreting the meanings humans attribute to different parts of a space, while 3D geometry is important for safety and high-level understanding. We propose a pipeline that can generate a multi-layer representation of indoor environments for robotic applications. The proposed representation includes 3D metric-semantic layers, a 2D occupancy layer, and an object instance layer in which known objects are replaced with approximate models obtained through a novel model-matching approach. The metric-semantic layer and the object instance layer are combined to form an augmented representation of the environment. Experiments show that the proposed shape-matching method outperforms a state-of-the-art deep learning method when tasked with completing unseen parts of objects in the scene. The pipeline's performance translates well from simulation to the real world, as shown by an F1-score analysis, with the semantic segmentation accuracy of Mask R-CNN acting as the major bottleneck. Finally, we demonstrate on a real robotic platform how the multi-layer map can be used to improve navigation safety.
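A schematic container for the layered map described above, with layer names following the abstract; the field types and the query method are illustrative guesses rather than the paper's data structures.

from dataclasses import dataclass, field
import numpy as np

@dataclass
class MultiLayerMap:
    occupancy_2d: np.ndarray                                 # 2D occupancy grid
    metric_semantic_3d: dict = field(default_factory=dict)   # voxel -> label
    object_instances: list = field(default_factory=list)     # matched models

    def is_free(self, i, j):
        # Navigation-safety query against the 2D occupancy layer.
        return self.occupancy_2d[i, j] == 0

m = MultiLayerMap(occupancy_2d=np.zeros((10, 10), dtype=int))
print(m.is_free(2, 3))  # True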
|
| |