Last updated on August 6, 2017. This conference program is tentative and subject to change.
Technical Program for Wednesday August 30, 2017

We1A Regular Session, Belem II
Human-Robot Interaction (I)

Chair: Sycara, Katia | Carnegie Mellon Univ |
Co-Chair: Peshkova, Ekaterina | Alpen-Adria-Univ. Klagenfurt |

09:00-09:15, Paper We1A.1
Investigating the Influence of Embodiment on Facial Mimicry in HRI Using Computer Vision-Based Measures
Paetzel, Maike | Uppsala Univ
Varni, Giovanna | ISIR - UPMC CNRS
Hupont, Isabelle | Pierre et Marie Curie Univ
Chetouani, Mohamed | Univ. Pierre et Marie Curie
Peters, Christopher | Royal Inst. of Tech
Castellano, Ginevra | Uppsala Univ
Keywords: Embodiment, Empathy and Intersubjectivity, Anthropomorphic Robots and Virtual Humans, Evaluation Methods and New Methodologies
Abstract: Mimicry plays an important role in social interaction. In human communication, it is used to establish rapport and bonding with other humans as well as with robots and virtual characters. However, little is known about the underlying factors that elicit mimicry in humans when interacting with a robot. In this work, we study the influence of embodiment on participants' ability to mimic a social character. Participants were asked to intentionally mimic the laughing behavior of the Furhat mixed-embodiment robotic head and a 2D virtual version of the same character. To explore the effect of embodiment, we present two novel approaches to automatically assess people's ability to mimic based solely on videos of their facial expressions. In contrast to participants' self-assessment, the analysis of video recordings suggests a better ability to mimic when people interact with the 2D embodiment.

09:15-09:30, Paper We1A.2
Natural Head Movement for HRI with a Muscular-Skeletal Head and Neck Robot
Barker, Steve | Oxford Brookes Univ
Izadi, Hooshang | Oxford Brookes Univ
Crook, Nigel | Oxford Brookes Univ
Hayatleh, Khaled | Oxford Brookes Univ
Rolf, Matthias | Oxford Brookes Univ
Hughes, Philip | Oxford Brookes Univ
Fellows, Neil | Oxford Brookes Univ
Keywords: Non-verbal Cues and Expressiveness, Anthropomorphic Robots and Virtual Humans
Abstract: This paper presents a study of the movements of a humanoid head-and-neck robot called Eddie. Eddie has a musculo-skeletal structure similar to that found in human necks, enabling it to perform head movements that are comparable with human head movements. This study compares the movements of Eddie with those of a more conventional robotic neck structure and with those of a human head. Results show that Eddie's movements are perceived as significantly more natural, and by trend more lifelike, than the conventional head's. No differences were found with respect to the impression of human-likeness, consciousness, and elegance.

09:30-09:45, Paper We1A.3
Hybrid Chat and Task Dialogue for More Engaging HRI Using Reinforcement Learning
Papaioannou, Ioannis | Heriot-Watt Univ
Dondrup, Christian | Heriot-Watt Univ
Novikova, Jekaterina | Heriot-Watt Univ
Lemon, Oliver | Heriot-Watt Univ
Keywords: Multimodal Interaction and Conversational Skills, Machine Learning and Adaptation, Linguistic Communication and Dialogue
Abstract: Most of today's task-based spoken dialogue systems perform poorly if the user goal is not within the system's task domain. On the other hand, chatbots cannot perform tasks involving robot actions but are able to deal with unforeseen user input. To overcome the limitations of each of these separate approaches and be able to exploit their strengths, we present and evaluate a fully autonomous robotic system using a novel combination of task-based and chat-style dialogue in order to enhance the user experience with human-robot dialogue systems. We employ Reinforcement Learning (RL) to create a scalable and extensible approach to combining chat and task-based dialogue for multimodal systems. In an evaluation with real users, the combined system was rated as significantly more "pleasant" and better met the users' expectations in a hybrid task+chat condition, compared to the task-only condition, without suffering any significant loss in task completion.

09:45-10:00, Paper We1A.4
Episodic Memory Formulation and Its Application in Long-Term HRI
Sigalas, Markos | Foundation for Res. and Tech. - Hellas (FORTH)
Maniadakis, Michail | Foundation for Res. and Tech. - Hellas (FORTH)
Trahanias, Panos | Foundation for Res. and Tech. - Hellas (FORTH)
Keywords: Long-term Experience and Longitudinal HRI Studies, Machine Learning and Adaptation, Assistive Robotics
Abstract: Efficiently storing and managing a robot's experiences is of utmost importance in long-term, recurring Human-Robot Interaction scenarios, where the volume of information increases constantly. To address these issues, a novel entity-based episodic memory is introduced in this work. Knowledge is represented by hierarchical multigraphs, enabling fast information retrieval. The constituent entities are asynchronously updated, while a time-correlated importance factor modulates the merging, forgetting, or refreshing of memories in order to facilitate search and management of the stored information. HMM-based probabilistic inference is employed to infer or predict the HRI state or to identify abnormal scenario unfolding and thus guide future robot activities. The performance of the employed memory schema is assessed on both simulated and real "breakfast preparation" scenarios. The results indicate that the proposed memory model is able to efficiently store and manage the acquired data without any loss of critical information. Moreover, our approach was also shown to be capable of successfully inferring a user's hidden preferences and thus guiding robot behavior accordingly in order to improve the user's HRI experience.

10:00-10:15, Paper We1A.5
A Wizard-Of-Oz Study of Curiosity in Human-Robot Interaction
Law, Edith | Univ. of Waterloo
Cai, Vicky | Univ. of Waterloo
Liu, Qi Feng | Univ. of Waterloo
Sasy, Sajin | Univ. of Waterloo
Goh, Joslin | Univ. of Waterloo
Blidaru, Alex | Univ. of Waterloo
Kulic, Dana | Univ. of Waterloo
Keywords: Curiosity, Intentionality and Initiative in Interaction, Assistive Robotics
Abstract: Service robots are becoming a widespread tool for assisting humans in scientific, industrial, and even domestic settings. Yet our understanding of how to motivate and sustain interactions between human users and robots remains limited. In this work, we conducted a study to investigate how surprising robot behaviour evokes curiosity and influences trust and engagement, in the context of participants interacting with Recyclo, a service robot for providing recycling recommendations. In a Wizard-of-Oz experiment, 36 participants were asked to interact with Recyclo to recognize and sort a variety of objects, and were given object recognition responses that were either unsurprising or surprising. Results show that surprise gave rise to information-seeking behavior indicative of curiosity, while having a positive influence on engagement and a negative influence on trust.

10:15-10:30, Paper We1A.6
Love at First Sight: Mere Exposure to Robot Appearance Leaves Impressions Similar to Interactions with Physical Robots
FakhrHosseini, Maryam | Michigan Tech. Univ
Barnes, Jaclyn | Michigan Tech. Univ
Hilliger, Samantha | Michigan Tech. Univ
Jeon, Myounghoon | Michigan Tech. Univ
Park, Chung Hyuk | George Washington Univ
Howard, Ayanna | Georgia Inst. of Tech
Keywords: Robot Companions and Social Robots, Human Factors and Ergonomics, Creating Human-Robot Relationships
Abstract: As the technology needed to make robots robust and affordable draws ever nearer, human-robot interaction (HRI) research to make robots more useful and accessible to the general population becomes more crucial. In this study, 58 college students filled out an online survey soliciting their judgments regarding seven social robots based solely on appearance. Results suggest that participants prefer robots that resemble animals or humans over those that are intended to represent an imaginary creature or do not resemble a creature at all. Results are discussed based on social robot application and design features.

We1B Regular Session, Ajuda II
Motivations and Emotions in Robotics

Chair: Bonarini, Andrea | Pol. Di Milano |
Co-Chair: Watanabe, Tomio | Okayama Prefectural Univ |

09:00-09:15, Paper We1B.1
Impression’s Predictive Models for Animated Robot
Izui, Takamune | Tokyo Univ. of Agriculture and Tech
Venture, Gentiane | Tokyo Univ. of Agriculture and Tech
Keywords: Embodiment, Empathy and Intersubjectivity, Personalities for Robotic or Virtual Characters, Evaluation Methods and New Methodologies
Abstract: Some studies in the field of HRI show that the impressions given by a robot depend on its motions. A robot that is developed for communication with humans needs to give the appropriate impression to its users. Yet robot motion gives a different impression to different observers. This means that robot designers and programmers have to use trial and error to generate behaviors conveying the appropriate impressions. This study developed a predictive model of users' qualitative impression scores, and evaluates the effectiveness of the model using experimental data. In the experiment, 2 kinds of humanoid robots presented 6 behaviors and participants rated the qualitative impression. We compared the obtained scores with the predictive scores calculated by our model. As a result, it was possible to predict users' impression scores for robot behaviors in 89.6% of the cases.

09:15-09:30, Paper We1B.2
Investigating the Real World Impact of Emotion Portrayal through Robot Voice and Motion
Winkle, Katie | Univ. of the West of England
Bremner, Paul | Univ. of the West of England
Keywords: Motivations and Emotions in Robotics, Applications of Social Robots, Assistive Robotics
Abstract: In this paper we investigate robot to human Interpersonal Emotion Transfer (IET) in a real world contextualised human-robot interaction (HRI). IET is an umbrella term which describes the impact of emotions in human-human interaction (HHI). This includes emotion contagion and social appraisal effects. These effects are particularly relevant in domains such as teaching, sports, exercise and healthy eating; domains increasingly targeted by socially assistive robotics. As such, we suggest socially assistive robots may benefit from affective communication in the same way as their human counterparts. We show that emotion recognition from robot voice and motion is possible in explicit validation experiments but does not hold in a socially assistive interaction. Our findings suggest that robot to human IET relies on the human having an expectation for, and hence recognition of, robot emotions; mimicry of valenced motion is not sufficient.

09:30-09:45, Paper We1B.3
A Framework for a Robot’s Emotions Engine
Salem, Ben | School of Engineering,
Keywords: Cognitive Skills and Mental Models, Motivations and Emotions in Robotics
Abstract: An Emotion Engine is a model and simplification of the brain circuitry that generates emotions. It should produce a variety of responses, including rapid reaction-like emotions as well as slower moods. We introduce such an engine and then propose a framework for its translated equivalent for a robot. We then define key issues that need addressing and provide guidelines, via the framework, for its implementation as an actual robot's Emotions Engine.

09:45-10:00, Paper We1B.4
A Robot at Home – How Affect, Technology Commitment, and Personality Traits Influence User Experience in an Intelligent Robotics Apartment
Bernotat, Jasmin | CITEC, Bielefeld Univ
Eyssel, Friederike | Bielefeld Univ
Keywords: Motivations and Emotions in Robotics, Monitoring of Behaviour and Internal States of Humans, Creating Human-Robot Relationships
Abstract: Previous research has shown that user features like affect, personality traits, user gender, technology commitment, perceived ease of technology use, and the feeling of being observed impact human-technology interaction (e.g., [1], [2]). To date, most studies have focused on the influence of user characteristics while interacting with single technical devices such as smart phones, audio players (e.g., [3]), or computers (e.g., [1]). To extend this work, we investigated the influence of individual user characteristics, the perceived ease of task completion, and the feeling of being observed on human-technology interaction and human-robot interaction (HRI) in particular. We explored how participants would solve seven tasks within a smart laboratory apartment. To do so, we collected video data and complemented this analysis with survey data to investigate naïve users’ attitudes towards the smart home and the robot. User characteristics such as agreeableness, low negative affect, technology acceptance, low perceived competence regarding technology use, and the perceived ease of task were predictors of positive user experiences within the intelligent robotics apartment. Regression analyses revealed that a positive evaluation of the robot was predicted by positive affect and, to a lesser extent, by technology acceptance. Actual interactions with the robot were predicted by a positive evaluation of the robot and, to a lesser degree, by technology acceptance. Moreover, our findings show that user characteristics and, by tendency, the ease of task impact HRI within an intelligent apartment. Implications for future research on how to investigate the interplay of user and further task characteristics to improve HRI are discussed.

10:00-10:15, Paper We1B.5
Study of Emotion Rendering Design for Humanoid Robots Compiled with Real-Time Music Mood Perception
Cheng, Stone | National Chiao Tung Univ
Keywords: Motivations and Emotions in Robotics, Robots in art and entertainment, Applications of Social Robots
Abstract: This study focuses on the role of anthropomorphic robots in rendering emotion and expressive behavior, in either entertainment or communicative scenarios. An integrated system is proposed to demonstrate the emotional movements of a humanoid robot inspired by real-time music emotions. The music emotion tracking system progressively extracts the features of music and characterizes music-induced emotions in an emotion plane to trace the real-time emotion locus of the music. Thayer's model of mood consists of four quadrants: (i) Contentment, (ii) Depression, (iii) Anxious, (iv) Exuberance. Each emotion is quantized to three levels to express the degree of mood. A humanoid robot, the Kondo KHR-3HV, is used as a base robot that allows for 17 Degree-Of-Freedom (DOF) movement. The motion designs for emotional expression are based on Laban Movement Analysis (LMA) to construct a quantifiable action description system. The system is capable of describing and interpreting many varieties of human movement. Furthermore, a questionnaire survey was conducted to evaluate the results of the proposed emotion rendering system as judged by the participants' experience. For the comparative emotion model checklist, 53 participants rated their felt emotional reaction to the emotional movements on all three motion-level checklists. Good agreement was obtained between the motion design expression and the questionnaire survey evaluations.

10:15-10:30, Paper We1B.6
Emotion Classification Using Linear Predictive Features on Wavelet-Decomposed EEG Data
Kraljević, Luka | FESB Univ. of Split
Russo, Mladen | FESB Univ. of Split
Sikora, Marjan | FESB Univ. of Split
Keywords: Motivations and Emotions in Robotics, Creating Human-Robot Relationships, Applications of Social Robots
Abstract: Emotions play a significant role in human communication and decision making. In order to bypass current limitations of human-robot interaction, a more natural, trustworthy, and nonverbal way of communication is needed. This requires robots to be able to perceive and interpret a person's emotions. Our work is based on the concept that each emotional state can be placed on a two-dimensional plane with arousal and valence as the axes. We propose a new feature set based on applying linear predictive coefficients to wavelet-decomposed EEG signals. Emotion classification is then performed using a support vector machine with a Gaussian kernel. The proposed approach is evaluated on EEG signals from the publicly available DEAP dataset, and the results show that our method is effective and outperforms some state-of-the-art methods.

We1C Regular Session, Ajuda III
Robots in Education

Chair: Kwon, Dong-Soo | KAIST |
Co-Chair: Paiva, Ana | INESC-ID and Inst. Superior Técnico, Tech. of Lisbon |

09:00-09:15, Paper We1C.1
Determining the Effect of Programming Language in Educational Robotic Activities
Angel-Fernandez, Julian M. | Vienna Univ. of Tech
Vincze, Markus | Vienna Univ. of Tech
Keywords: Robots in Education, Therapy and Rehabilitation
Abstract: Robotics has been suggested as a field of high potential in education, with high expectancy to impact teaching from kindergarten to university. This paper presents a study conducted to gain a better understanding of the impact of programming languages among participants of a workshop. An activity encompassing ten exercises was designed, and we used three different programming languages (visual, block-based, and textual) to program the Thymio robot. A total of six workshops were held using this activity, two for each programming language. Qualitative and quantitative data were collected in each workshop. The results suggest that, regardless of the programming language used, participants enjoyed working with robots. Moreover, participants with previous programming experience prefer more advanced programming languages.

09:15-09:30, Paper We1C.2
Wizard of Oz vs Autonomous: Children's Perception Changes According to Robot's Operation Condition
Tozadore, Daniel Carnieto | Univ. of São Paulo
Pinto, Adam Henrique | Univ. of São Paulo
Romero, Roseli Ap. Francelin | Univ. of São Paulo
Trovato, Gabriele | Waseda Univ
Keywords: Robots in Education, Therapy and Rehabilitation, Degrees of Autonomy and Teleoperation, Computational Architectures
Abstract: The presence of robots in everyday human life is no longer a distant reality, as robots are being employed in several fields, including education. However, most research in educational robotics does not use autonomous social behavior, but rather techniques like Wizard of Oz (WoZ). This paper presents the very first test, in a school environment, of a robotic architecture that controls an autonomous system for educational interactions, evaluated from the user's perspective in comparison with a teleoperated condition. The architecture aims to manage three main robot communication resources - speech, vision, and gesture - in an autonomous way and to provide an interaction as acceptable as when someone controls the robot. The experiment randomly assigned 82 students aged between 7 and 11 to interact with a NAO robot in two conditions of robot operation: autonomous and teleoperated. The results suggest that there is no significant difference between the conditions in users' enjoyment or system response time, but participants' perception of the robot's intelligence decreased after they learned about the teleoperation.

09:30-09:45, Paper We1C.3
Socially Assistive Child-Robot Interaction in Physical Exercise Coaching
Guneysu Ozgur, Arzu | EPFL
Arnrich, Bert | Bogazici Univ
Keywords: Robots in Education, Therapy and Rehabilitation, Detecting and Understanding Human Activity
Abstract: The main contribution of this study is the design and implementation of an autonomous human robot interaction system to engage children in performing several physical exercise motions by providing real-time feedback and guidance. The system is designed after several preliminary experiments with children and exercise coaches. In order to test the feasibility and the effectiveness of the exercise system across a variety of performance and evaluation measures, an experimental study was conducted with 19 healthy children. The results of the study validate the effectiveness of the system in motivating and helping children to complete physical exercises. The children engaged in physical exercise throughout the interaction sessions and rated the interaction highly in terms of enjoyableness, and rated the robot exercise coach highly in terms of social attraction, social presence, and companionship via a questionnaire answered after each session.

09:45-10:00, Paper We1C.4
Personalised Self-Explanation by Robots: The Role of Goals versus Beliefs in Robot-Action Explanation for Children and Adults
Kaptein, Frank | TU Delft
Broekens, Joost | TU Delft
Hindriks, Koen | Delft Univ. of Tech
Neerincx, Mark | TNO
Keywords: Robots in Education, Therapy and Rehabilitation, Social Intelligence for Robots, Cooperation and Collaboration in Human-Robot Teams
Abstract: A good explanation takes the user who is receiving the explanation into account. We aim to get a better understanding of user preferences and the differences between children and adults who receive explanations from a robot. We implemented a Nao-robot as a belief-desire-intention (BDI)-based agent and explained its actions using two different explanation styles. Both are based on how humans explain and justify their actions to each other. One explanation style communicates the beliefs that give context information on why the agent performed the action. The other explanation style communicates the goals that inform the user of the agent's desired state when performing the action. We conducted a user study (19 children, 19 adults) in which a Nao-robot performed actions to support type 1 diabetes mellitus management. We investigated the preference of children and adults for goal- versus belief-based action explanations. From this, we learned that adults have a significantly higher tendency to prefer goal-based action explanations. This work is a necessary step in addressing the challenge of providing personalised explanations in human-robot and human-agent interaction.

10:00-10:15, Paper We1C.5
Designing Telepresence Robots for K-12 Education
Cha, Elizabeth | Univ. of Southern California
Chen, Samantha | Univ. of Southern California
Mataric, Maja | Univ. of Southern California
Keywords: Robots in Education, Therapy and Rehabilitation, Social Presence for Robots and Virtual Humans, User-centered Design of Robots
Abstract: Telepresence robots have the potential to improve access to K-12 education for students who are unable to attend school for a variety of reasons. Since previous telepresence research has largely focused on the needs of adult users in workplace settings, it is unknown what challenges must be addressed for these robots to be effective tools in classrooms. In this paper, we seek to better understand how a telepresence robot should function in the classroom when operated by a remote student. Toward this goal, we conducted field sessions in which four designers operated a telepresence robot in a real K-12 classroom. Using the results, we identify key research challenges and present design insights meant to inform the HRI community in particular and robot designers in general.

10:15-10:30, Paper We1C.6
My Classroom Robot: Exploring Telepresence for K-12 Education in a Virtual Environment
Cha, Elizabeth | Univ. of Southern California
Greczek, Jillian | Univ. of Southern California
Song, Ao | Univ. of Southern California
Mataric, Maja | Univ. of Southern California
Keywords: Social Presence for Robots and Virtual Humans, Evaluation Methods and New Methodologies, Robots in Education, Therapy and Rehabilitation
Abstract: Telepresence robots have the potential to improve access to K-12 education. However, designing robots for classroom use presents unique challenges from both logistical and technological perspectives. To address these challenges, we created My Classroom Robot, an interactive game in which players can operate a virtual telepresence robot in a classroom environment. The virtual classroom environment allows us to collect data and prototype different designs prior to incurring the high overhead of going into the real classroom. In this work, we present the design of My Classroom Robot, an initial evaluation, and the lessons learned from its development.

We1D Regular Session, Belem I
Creating Human-Robot Relationships (I)

Chair: Okita, Sandra | Teachers Coll. Columbia Univ |
Co-Chair: Gonçalves, Paulo | Inst. Pol. De Castelo Branco |

09:00-09:15, Paper We1D.1
Communicating Spatial Knowledge in Japanese for Interaction with Autonomous Robots
Cao, Lu | Saitama Univ
Fukuda, Hisato | Saitama Univ
Lam, Antony | Saitama Univ
Kuno, Yoshinori | Saitama Univ
Keywords: Creating Human-Robot Relationships, Applications of Social Robots, Assistive Robotics
Abstract: In this work, our goal is to communicate spatial knowledge in Japanese with autonomous robots. We first describe the data collection scheme. We then conduct a study to investigate how Japanese speakers describe spatial relations and which relations they prefer. Based on the observations, we formalize the knowledge with ontologies. With the help of an inference mechanism, our knowledge base is able to store commonsense knowledge and discover implicit knowledge. We model 16 spatial relations using geometric information. At the language level, we present a natural language interface to execute commands and answer questions. The integrated robotic system unifies visual and spatial information in concert with parsing of the semantic interpretation of sentences. Finally, we describe two tasks for validation.

09:15-09:30, Paper We1D.2
Design of a Robot That Is Capable of High Fiving with Humans
Okamura, Erina | Univ. of Tsukuba
Tanaka, Fumihide | Univ. of Tsukuba
Keywords: Creating Human-Robot Relationships, Interaction Kinesics, Non-verbal Cues and Expressiveness
Abstract: High fiving enhances communication in human society. Therefore, a robot that is capable of high fiving could build a better relationship with humans. To design such a robot, it is necessary to determine the requirements of robotic high fives. The goal of this paper is to present such requirements that were identified from the analysis of human high fives, and to show the actual implementations on a humanoid robot. The process of high fiving is composed of two phases: people determine a high five motion according to the current occasion, and then they adjust the motion according to the situation surrounding them. In this paper, we particularly report these motion adjustment functions, which were tested with human participants. Feedback and other requirements for an effective robotic high five are reported.

09:30-09:45, Paper We1D.3
Readability of the Gaze and Expressions of a Robot Museum Visitor: Impact of the Low Level Sensory-Motor Control
Moualla, Aliaa | CNRS, ENSEA, Cergy Pontoise Univ. ETIS Lab
Karaouzene, Ali | CNRS UMR 8051, ENSEA, Cergy-Pontoise Univ
Boucenna, Sofiane | CNRS - Cergy-Pontoise Univ
Vidal, Denis | IRD (Paris)
Gaussier, Philippe | CNRS UMR 8051, ENSEA, Cergy-Pontoise Univ
Keywords: Creating Human-Robot Relationships, Motivations and Emotions in Robotics, Non-verbal Cues and Expressiveness
Abstract: In this paper we propose a neural network allowing a mobile robot to learn artwork appreciation. The learning is based on the social referencing approach. We specifically present and analyze the visual system and its impact on the robot's behavior, and finally we analyze the readability of our robot's behavior according to visitors' comments. The robot acquires its knowledge (artificial taste) from interaction with humans. We show that the low-level spatial competition between the values associated with areas of interest in the image is important for the coherence of the robot's object evaluation and the readability of its behavior.

09:45-10:00, Paper We1D.4
Keep on Dancing: Effects of Expressive Motion Mimicry
Simmons, Reid | Carnegie Mellon Univ
Knight, Heather | Carnegie Mellon Univ
Keywords: Creating Human-Robot Relationships, Motivations and Emotions in Robotics, Social Presence for Robots and Virtual Humans
Abstract: Expressive motion refers to movements that help convey an agent's attitude towards its task or environment. People frequently use expressive motion to indicate internal states such as emotion, confidence, and engagement. Robots can also exhibit expressive motion, and studies have shown that people can legibly interpret such expressive motion. Mimicry involves imitating the behaviors of others, and has been shown to increase rapport between people. The research question addressed in this study is how robots mimicking the expressive motion of children affects their interaction with dancing robots. The paper presents our approach to generating and characterizing expressive motion, based on the Laban Effort System, and the results of the study, which provide both significant and suggestive evidence that such mimicry has positive effects on the children's behaviors.

10:00-10:15, Paper We1D.5
Hey Robot, Why Don't You Talk to Me?
Ng, Hwei Geok | Univ. of Hamburg
Anton, Paul | Univ. of Hamburg
Brügger, Marc | Univ. of Hamburg
Churamani, Nikhil | Univ. of Hamburg
Fließwasser, Erik | Univ. of Hamburg
Hummel, Thomas | Univ. of Hamburg
Mayer, Julius | Univ. of Hamburg
Mustafa, Waleed | Univ. of Hamburg
Nguyen, Thi Linh Chi | Univ. of Hamburg
Nguyen, Quan | Univ. of Hamburg
Soll, Marcus | Univ. of Hamburg
Springenberg, Sebastian | Univ. of Hamburg
Griffiths, Sascha | Univ. of Hamburg
Heinrich, Stefan | Univ. of Hamburg
Navarro-Guerrero, Nicolás | Univ. of Hamburg
Strahl, Erik | Univ. of Hamburg
Twiefel, Johannes | Univ. of Hamburg, Department of Informatics, Knowledge Tech
Weber, Cornelius | Knowledge Tech. Group, Univ. of Hamburg
Wermter, Stefan | Univ. of Hamburg
Keywords: Robot Companions and Social Robots, Creating Human-Robot Relationships
Abstract: This paper describes the techniques used in the submitted video presenting an interaction scenario, realised using the Neuro-Inspired Companion (NICO) robot. NICO engages the users in a personalised conversation where the robot always tracks the users' face, remembers them and interacts with them using natural language. NICO can also learn to perform tasks such as remembering and recalling objects and thus can assist users in their daily chores. The interaction system helps the users to interact as naturally as possible with the robot, enriching their experience with the robot, making it more interesting and engaging.

10:15-10:30, Paper We1D.6
A Communal Perspective on Shared Robots As Social Catalysts
Joshi, Swapna | Indiana Univ. Bloomington
Sabanovic, Selma | Indiana Univ
Keywords: Evaluation Methods and New Methodologies, Applications of Social Robots, Creating Human-Robot Relationships
Abstract: Recent years have seen robust advancements in robotic platforms for multiple users, while HRI research is increasingly examining small-group interactions. However, there has been little consideration of appropriate methodologies for the design or development of human-robot interactions that foster and enhance context-specific shared goals, interactions, and experiences within larger communities. This paper presents a preliminary study using a community-centered approach to collective perceptions of shared social robots in a retirement village. It reveals novel aspects of people's sense of community, community roles, and the purposes of robots. The findings indicate the need for a framework for community robotics and for further studies using a community perspective to bring rich insight into goal-oriented and context-specific multi-user experiences and interactions in HRI.
|
|
We1E Regular Session, Ajuda I |
Add to My Program |
Virtual and Augmented Tele-Presence Environments |
|
|
Chair: Nakadai, Kazuhiro | Honda Res. Inst. Japan Co., Ltd |
Co-Chair: NAKAMURA, Akio | Tokyo Denki Univ |
|
09:00-09:15, Paper We1E.1 | Add to My Program |
A Mixed Reality for Virtual Assembly |
ZALDIVAR-COLADO, Ulises | Univ. of Versailles, France |
GARBAYA, Samir | ENSAM |
TAMAYO-SERRANO, Paul | Univ. De Versailles Saint-Quentin-En-Yvelines |
Zaldivar-Colado, Xiomara | Univ. of Sinaloa |
Blazevic, Pierre | Lab. D'ingénierie Des Systèmes De Versailles |
Keywords: Virtual and Augmented Tele-presence Environments
Abstract: Mixed reality (MR) is a hybrid reality where real and virtual objects are merged to produce an enriched interactive environment. Virtual reality (VR) has been used in the simulation of production processes such as product assembly and the execution of industrial tasks. Augmented reality (AR) has been widely used as an instructional tool to help the user perform tasks under real-world conditions. Most of these works focused on solving technical problems specific to the type of application, but they did not take advantage of the achievements realized in both VR and AR technologies. This paper presents a mixed reality system that integrates a virtual assembly environment with augmented reality. The approach is mainly based on the development of a hybrid tracking system for synchronizing the virtual and the real hand of the user. The evaluation of this mixed reality approach showed a statistically significant improvement in user performance in the assembly task, compared to the same task performed in a purely virtual environment.
|
|
09:15-09:30, Paper We1E.2 | Add to My Program |
Automatic Replication of Teleoperator Head Movements and Facial Expressions on a Humanoid Robot |
Ondras, Jan | Univ. of Cambridge |
Celiktutan, Oya | Imperial Coll. London |
Sariyanidi, Evangelos | Istanbul Tech. Univ |
Gunes, Hatice | Univ. of Cambridge |
Keywords: Virtual and Augmented Tele-presence Environments, Non-verbal Cues and Expressiveness
Abstract: Robotic telepresence aims to create a physical presence for a remotely located human (teleoperator) by reproducing their verbal and nonverbal behaviours (e.g. speech, gestures, facial expressions) on a robotic platform. In this work, we propose a novel teleoperation system that replicates facial expressions of emotions (neutral, disgust, happiness, and surprise) and head movements on the fly on the humanoid robot Nao. Robots' expression of emotions is constrained by their physical and behavioural capabilities. As the Nao robot has a static face, we use the LEDs located around its eyes to reproduce the teleoperator's expressions of emotions. Using a web camera, we computationally detect the facial action units and measure the head pose of the operator. The emotion to be replicated is inferred from the detected action units by a neural network. Simultaneously, the measured head motion is smoothed and bounded to the robot's physical limits by applying a constrained-state Kalman filter. To evaluate the proposed system, we conducted a user study in which 28 participants used the replication system, displaying facial expressions and head movements while being recorded by a web camera. Subsequently, 18 external observers viewed the recorded clips via an online survey and assessed the quality of the robot's replication of the participants' behaviours. Our results show that the proposed teleoperation system can successfully communicate emotions and head movements, resulting in high agreement among the external observers (ICC_E = 0.91, ICC_HP = 0.72).
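The constrained-state Kalman filter mentioned in this abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the constant-velocity model, the noise parameters, and the use of Nao's head-yaw range (roughly ±2.0857 rad) as the limit are all assumptions made for the example.

```python
import numpy as np

def smooth_head_pose(measurements, limit=2.0857, q=1e-3, r=1e-2):
    """Smooth a 1D head-angle signal with a constant-velocity Kalman
    filter, then project each estimate onto the robot's joint limits.
    (Illustrative sketch; parameters are assumptions, not the paper's.)"""
    x = np.zeros(2)                  # state: [angle, angular velocity]
    P = np.eye(2)                    # state covariance
    F = np.array([[1.0, 1.0],        # constant-velocity transition
                  [0.0, 1.0]])
    H = np.array([[1.0, 0.0]])       # only the angle is observed
    Q = q * np.eye(2)                # process noise
    R = np.array([[r]])              # measurement noise
    out = []
    for z in measurements:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with the new measurement
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([z]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        # enforce the physical joint limit (simple state projection)
        x[0] = np.clip(x[0], -limit, limit)
        out.append(float(x[0]))
    return out
```

A measurement stream that exceeds the limit is thus both smoothed and bounded, so the commanded angles never violate the robot's range of motion.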
|
|
09:30-09:45, Paper We1E.3 | Add to My Program |
Understanding Human-Robot Interaction in Virtual Reality |
Liu, Oliver Dayun | Univ. of Wisconsin - Madison |
Rakita, Daniel | Univ. of Wisconsin - Madison |
Mutlu, Bilge | Univ. of Wisconsin–Madison |
Gleicher, Michael | Univ. of Wisconsin - Madison |
Keywords: Virtual and Augmented Tele-presence Environments, Novel Interfaces and Interaction Modalities, Social Presence for Robots and Virtual Humans
Abstract: Interactions with simulated robots are typically presented on screens. Virtual reality (VR) offers an attractive alternative as it provides visual cues that are more similar to the real world. In this paper, we explore how virtual reality mediates human-robot interactions through two user studies. The first study shows that in situations where perception of the robot is challenging, a VR display provides significantly improved performance on a collaborative task. The second study shows that this improved performance is primarily due to stereo cues. Together, the findings of these studies suggest that VR displays can offer users unique perceptual benefits in simulated robotics applications.
|
|
09:45-10:00, Paper We1E.4 | Add to My Program |
A Fully Immersive VR-Based Haptic Feedback System for Size Measurement in Inspection Tasks Using 3D Point Clouds |
Loconsole, Claudio | Pol. Di Bari |
Tattoli, Giacomo | Scuola Superiore Sant'Anna |
Bortone, Ilaria | TeCIP Inst. Scuola Superiore Sant'Anna |
Tecchia, Franco | Scuola Superiore Sant' Anna |
Leonardis, Daniele | Scuola Superiore Sant'Anna - TeCIP Inst |
Frisoli, Antonio | TeCIP Inst. Scuola Superiore Sant'Anna |
Keywords: Cooperation and Collaboration in Human-Robot Teams, Virtual and Augmented Tele-presence Environments, Innovative Robot Designs
Abstract: This paper proposes a size measurement system for inspection tasks, based on the integration of fully immersive virtual reality visualization and haptic feedback at the fingertips. In particular, the index finger and thumb are used intuitively to perform metric measurements on 3D point clouds visualized through a stereoscopic head-mounted display. The haptic feedback, consisting of contact transitions and forces rendered at the fingertip, is used to assist the user in the measurement task. Experimental results show similar subject performance in the inspection task with and without haptic feedback, in terms of time and precision of measurements. However, qualitative questionnaire responses suggest a significant preference among subjects for the immersive environment enriched by haptic feedback.
|
|
10:00-10:15, Paper We1E.5 | Add to My Program |
Augmented Reality Dialog Interface for Multimodal Teleoperation |
Pereira, André | Disney Res |
Carter, Elizabeth | The Walt Disney Company |
Leite, Iolanda | KTH Royal Inst. of Tech |
Mars, John | Disney Res |
Lehman, Jill | Disney Res |
Keywords: Multimodal Interaction and Conversational Skills, Virtual and Augmented Tele-presence Environments, Novel Interfaces and Interaction Modalities
Abstract: We designed an augmented reality dialog interface that enables the control of multimodal behaviors in telepresence robot applications. This interface, when paired with a telepresence robot, enables a single operator to accurately control and coordinate a robot's verbal and nonverbal behaviors. However, depending on the complexity of the desired interaction, some applications might benefit from having multiple operators controlling different interaction modalities. As such, our interface can be used either by a single operator or by a pair of operators; in the paired-operator system, one operator controls verbal behaviors while the other controls nonverbal behaviors. A within-subjects user study was conducted to assess the usefulness and validity of our interface in both setups. When faced with hard tasks, coordination between verbal and nonverbal behavior improves in the single-operator condition. Although single operators were slower to produce verbal responses, verbal error rates were unaffected by condition. Finally, single operators who control both the verbal and nonverbal behaviors of a robot benefit from significantly improved presence measures, such as mental immersion, sensory engagement, the ability to view and understand the conversation partner, and the degree of emotional sensation.
|
|
10:15-10:30, Paper We1E.6 | Add to My Program |
Self-Reconfigurable Modular Robot Interface Using Virtual Reality: Arrangement of Furniture Made Out of Roombots Modules |
Nigolian, Valentin Zenon | EPFL |
Mutlu, Mehmet | École Pol. Fédérale De Lausanne (EPFL) |
Hauser, Simon | Biorob, EPFL |
Bernardino, Alexandre | IST - Técnico Lisboa |
Ijspeert, Auke | EPFL |
Keywords: Novel Interfaces and Interaction Modalities, Innovative Robot Designs, Virtual and Augmented Tele-presence Environments
Abstract: Self-reconfigurable modular robots (SRMR) offer high flexibility in task space by adopting different morphologies for different tasks. Using the same simple module, complex and more capable morphologies can be built. However, increasing the number of modules increases the degrees of freedom (DOF) of the system, making it harder to control as a whole: even a 10-DOF system is difficult to reason about and manipulate. Intuitive, easy-to-use interfaces are therefore needed, particularly when modular robots need to interact with humans. In this study we present an interface for assembling desired structures and placing them, with a focus on the assembly process. Roombots modules, a particular SRMR design, are used to demonstrate the proposed interface. Two non-conventional input/output devices, a head-mounted display and a hand-tracking system, are added to enhance the user experience. Finally, a user study was conducted to evaluate the interface. The results show that most users enjoyed their experience; however, they were not necessarily convinced by the gesture control, most likely for technical reasons.
|
|
We2A Regular Session, Belem II |
Add to My Program |
Human-Robot Interaction (II) |
|
|
Chair: Chong, Nak Young | Japan Advanced Inst. of Sci. and Tech |
Co-Chair: Barakova, Emilia I. | Eindhoven Univ. of Tech |
|
11:00-11:15, Paper We2A.1 | Add to My Program |
Decision-Theoretic Planning under Uncertainty for Multimodal Human-Robot Interaction |
Garcia, João A. | Inst. Superior Técnico - Inst. for Systems and Robotics |
Lima, Pedro U. | Inst. Superior Técnico - Inst. for Systems and Robotics |
Veiga, Tiago | Inst. Superior Técnico - Inst. for Systems and Robotics |
Keywords: Multimodal Interaction and Conversational Skills, Monitoring of Behaviour and Internal States of Humans, Detecting and Understanding Human Activity
Abstract: This paper proposes a decision-theoretic approach to problems involving interaction between robot systems and human users, which takes into account the latent aspects of human-robot interaction, e.g., the user's status. The presented approach is based on the Partially Observable Markov Decision Process (POMDP) framework, which handles uncertainty in planning problems, extended with information rewards to optimize the information-gathering capabilities of the system. The approach is formalized into a framework which considers: observable and latent state variables; gesture and speech observations; and action factors related either to the agent's actuators or to information-gain goals (information-reward actions). Under the proposed framework, the robot system is able to: actively gain information and react according to latent states inherent to human-robot interaction settings; effectively achieve the goals of the task in which the robot is employed; and follow a socially appealing behavior. Finally, the framework was thoroughly tested in a socially assistive scenario, in a realistic apartment testbed, using an autonomous mobile social robot. The experimental results validate the proposed approach for problems involving robot systems in human-robot interaction scenarios.
|
|
11:15-11:30, Paper We2A.2 | Add to My Program |
Crowd Sourcing ‘Approach Behavior’ Control Parameters for Human-Robot Interaction |
Ferland, François | ENSTA ParisTech |
Tapus, Adriana | ENSTA-ParisTech |
Keywords: Personalities for Robotic or Virtual Characters, Robot Companions and Social Robots, Social Presence for Robots and Virtual Humans
Abstract: For service robots to be well received in our daily lives, it is desirable that they appear as friendly as possible. While a robot's physical appearance influences this perception, its behavior also has an impact. To ensure that a specific robot behavior is perceived as intended, we propose letting its potential users shape the robot's behavior. In this paper, a specific behavior, "approaching a person", is evaluated with a Robosoft Kompaï robot. To avoid the logistical issues of having large groups of novice users perform demonstrations on a physical robot, a web-based approach built around a simulation of the actual robot is proposed. The relationship between the robot and the person is described by the two dimensions of the interpersonal circumplex: communion (hostile or friendly) and agency (submissive or dominant). Users can adjust three parameters of the approach behavior (i.e., distance, trajectory curvature, and deceleration) in the manner that best corresponds to the described relationship. An analysis of data from 69 users is presented, along with a verification experiment with 10 participants and the real robot. Results suggest that users associate hostile robots with straight trajectories, and submissive robots with smoother deceleration.
|
|
11:30-11:45, Paper We2A.3 | Add to My Program |
Learning Users’ and Personality-Gender Preferences in Close Human-Robot Interaction |
Cruz-Maya, Arturo | ENSTA-ParisTech |
Tapus, Adriana | ENSTA-ParisTech |
Keywords: Personalities for Robotic or Virtual Characters, Social Intelligence for Robots, Robot Companions and Social Robots
Abstract: Robots are expected to interact with people in their everyday activities and should learn the preferences of their users in order to deliver a more natural interaction. A memory system that remembers past events and uses them to generate an adapted robot behavior is a useful feature for robots to have. Nevertheless, robots will face unknown situations and must still behave appropriately. We propose using the user's personality (introversion/extroversion) to build a model that predicts the user's preferences, to be used when there are no past interactions for a given robot task. To this end, we propose a framework that combines an emotion system based on the OCC model with an episodic-like memory system. We ran an experiment in which a group of participants customized the robot's behavior according to their preferences (personal distance, gesture amplitude, gesture speed). We then tested the resulting model against preset behaviors based on the literature on extroversion preferences in interaction, with a different group of participants. Results show that our proposed model generated a behavior that participants preferred over the preset behaviors. Only the group of introverted female participants showed no significant difference between the behaviors.
|
|
11:45-12:00, Paper We2A.4 | Add to My Program |
Towards Reaction and Response Time Metrics for Real-World Human-Robot Interaction |
Adams, Julie | Oregon State Univ |
Harriott, Caroline | Draper |
Keywords: Evaluation Methods and New Methodologies, Human Factors and Ergonomics, Cooperation and Collaboration in Human-Robot Teams
Abstract: Reaction time and response time have been successfully measured in laboratory settings. As robots move into the real-world, such metrics are needed for human-robot team deployments when evaluating interaction naturalness, ability to maintain safety, and task performance. Potential real-world reaction and response time metrics for peer-based teams are presented. Primary and secondary task reaction and response times were measured via video and auditory coding for a subset of first response tasks. The successful application of the metrics showed that primary task reaction and response times were longer for human-robot teams.
|
|
12:00-12:15, Paper We2A.5 | Add to My Program |
Affective Facial Expressions Recognition for Human-Robot Interaction |
Faria, Diego | Aston Univ |
Vieira, Mário | Inst. of Systems and Robotics, DEEC, Univ. of Coimbra |
Faria, Fernanda da Cunha e Castro | Inst. of Systems and Robotics, Univ. of Coimbra |
Premebida, Cristiano | Univ. of Coimbra |
Keywords: Monitoring of Behaviour and Internal States of Humans, Motivations and Emotions in Robotics, Non-verbal Cues and Expressiveness
Abstract: Affective facial expression is a key feature of non-verbal behaviour and is considered a symptom of an internal emotional state. Emotion recognition plays an important role in social communication: human-to-human and also human-to-robot. Taking this as inspiration, this work aims at the development of a framework able to recognise human emotions from facial expressions for human-robot interaction. Features based on facial-landmark distances and angles are extracted to feed a dynamic probabilistic classification framework. The public online dataset Karolinska Directed Emotional Faces (KDEF) [1] is used to learn seven different emotions (i.e. angry, fearful, disgusted, happy, sad, surprised, and neutral) performed by seventy subjects. A new dataset was created to record stimulated affect: participants watched video sessions designed to awaken their emotions, unlike the KDEF dataset, in which participants are actors (i.e. performing expressions when asked to). Offline and on-the-fly tests were carried out: leave-one-out cross-validation tests on the datasets, and on-the-fly tests with human-robot interactions. Results show that the proposed framework can correctly recognise human facial expressions, with potential to be used in human-robot interaction scenarios.
|
|
12:15-12:30, Paper We2A.6 | Add to My Program |
Online Nod Detection in Human-Robot Interaction |
Wall, Eduard | Bielefeld Univ |
Schillingmann, Lars | Bielefeld Univ |
Kummert, Franz | Bielefeld Univ |
Keywords: Detecting and Understanding Human Activity, Monitoring of Behaviour and Internal States of Humans, Robot Companions and Social Robots
Abstract: Nodding is an important factor in human communication, providing a physical cue for socially communicative acts such as turn taking, backchanneling, and confirmation. In this article, we describe a vision-based online head-nod detector that works with monocular camera images. Using SVM regression, our system estimates the head pose from facial landmarks. Subsequence dynamic time warping is then used to compare head pose features against nod templates. In contrast to many previous implementations, our system was evaluated with study participants who were not instructed to reply by nodding, and it shows good results while maintaining a low false positive rate.
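The template-matching step in this abstract can be sketched with a toy example (this is not the authors' code; the head-pitch template values and signals below are invented for illustration). Subsequence dynamic time warping scores a nod template against a running head-pose signal, and a low alignment cost indicates a nod:

```python
import numpy as np

def subsequence_dtw_cost(template, stream):
    """Return the minimum alignment cost of `template` against any
    subsequence of `stream` (lower cost = better match)."""
    n, m = len(template), len(stream)
    D = np.full((n, m), np.inf)
    # the template may start anywhere in the stream: the first row is free
    D[0, :] = np.abs(template[0] - np.asarray(stream, float))
    for i in range(1, n):
        for j in range(1, m):
            c = abs(template[i] - stream[j])
            D[i, j] = c + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # the template may end anywhere in the stream: take the row minimum
    return D[-1].min()

# toy head-pitch signals (radians): a nod dips down and comes back up
template = [0.0, -0.2, -0.4, -0.2, 0.0]
nodding = [0.0] * 5 + [0.0, -0.2, -0.4, -0.2, 0.0] + [0.0] * 5
still = [0.0] * 15
```

A detection then amounts to comparing the cost against a threshold, e.g. one learned from labelled nod examples.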
|
|
We2B Regular Session, Ajuda II |
Add to My Program |
Collaboration in Manufacturing Environments |
|
|
Chair: Loconsole, Claudio | Pol. Di Bari |
Co-Chair: Lorenz, Tamara | Univ. of Cincinnati |
|
11:00-11:15, Paper We2B.1 | Add to My Program |
Vibrotactile Feedback for Aiding Robot Kinesthetic Teaching of Manipulation Tasks |
Ruffaldi, Emanuele | Scuola Superiore Sant'Anna |
Di Fava, Alessandro | Scuola Superiore Sant'Anna |
Loconsole, Claudio | Pol. Di Bari |
Frisoli, Antonio | TeCIP Inst. Scuola Superiore Sant'Anna |
Avizzano, Carlo Alberto | Scuola Superiore Sant'Anna |
Keywords: HRI and Collaboration in Manufacturing Environments
Abstract: Kinesthetic teaching is a viable solution for programming robots to execute new tasks, thanks to the human-mediated mapping between the task objectives and the robot joint space. Redundant designs and differences from human kinematics pose challenges for the efficient execution of the teaching task. In this work we employ vibrotactile feedback to let operators perceive specific kinematic constraints, such as approaching joint limits and singularities. Experiments with a Baxter robot and a four-motor vibrotactile bracelet are reported, showing the effectiveness of the proposed enhancement to the kinesthetic teaching task.
|
|
11:15-11:30, Paper We2B.2 | Add to My Program |
A User Study on Human-Robot-Interactive Recovery for Industrial Assembly Problems |
Muxfeldt, Arne | Tech. Univ. Braunschweig |
Gopinathan, Sugeeth | Univ. of Bielefeld |
Coenders, Thilo | Tech. Univ. Braunschweig |
Steil, Jochen J. | Tech. Univ. Braunschweig |
Keywords: HRI and Collaboration in Manufacturing Environments, Assistive Robotics, Cooperation and Collaboration in Human-Robot Teams
Abstract: This paper focuses on Human-Robot Interaction (HRI) in a scenario where a robot has to recover from an assembly problem. For this purpose, a human interacts with the robot so that the task-specific knowledge and experience of the human can be used for recovery. The question of which interaction method is most successful for this problem is answered by a user study with 31 participants, comparing four interaction methods: space mouse, keyboard, kinesthetic guidance, and kinesthetic guidance with adaptive stiffness. The results clearly show that the kinesthetic guidance methods yield superior performance and user satisfaction compared to the other methods considered.
|
|
11:30-11:45, Paper We2B.3 | Add to My Program |
A User Study on Personalized Adaptive Stiffness Control Modes for Human-Robot Interaction |
Gopinathan, Sugeeth | Univ. of Bielefeld |
Ötting, Sonja K. | Univ. of Bielefeld |
Steil, Jochen J. | Tech. Univ. Braunschweig |
Keywords: HRI and Collaboration in Manufacturing Environments, Assistive Robotics, Human Factors and Ergonomics
Abstract: This paper introduces a Personalized Adaptive Stiffness controller for physical human-robot interaction and validates its performance in an extensive user study with 49 participants. The controller is calibrated to the user's force profile to account for inter-user variance and individual differences. The user study compares the new scheme to conventional fixed-stiffness and gravity-compensation controllers on the 7-DOF KUKA LWR IVb in two typical joint-manipulation tasks. Somewhat surprisingly, the experiments suggest that for simpler tasks a standard fixed controller may perform sufficiently well, and that task dependency strongly prevails over individual differences. In the more complex task, quantitative and qualitative results clearly show differences between the control modes, with both a performance gain and a user preference for the Personalized Adaptive Stiffness controller.
|
|
11:45-12:00, Paper We2B.4 | Add to My Program |
Collision Detection, Localization and Classification for Industrial Robots with Joint Torque Sensors |
Popov, Dmitry | Innopolis Univ |
Mavridis, Nikolaos | Interactive Robots and Media Lab |
Klimchik, Alexandr | Innopolis Univ |
Keywords: HRI and Collaboration in Manufacturing Environments, Machine Learning and Adaptation
Abstract: The high dynamic capabilities of industrial robots make them dangerous for humans and the environment. To mitigate this risk and advance collaboration between humans and manipulators, a fast and reliable collision detection algorithm is required. We present an approach that detects a collision, localizes the contact point, and classifies the nature of the collision. Internal joint torque and encoder measurements are used to determine potential collisions with the robot links. This work proposes two ways of solving the problem: a classical analytical approach, and a learning approach implemented with a neural network. The suggested algorithms were examined on the industrial robotic arm KUKA LBR iiwa 14 R820; ground-truth information on the contact nature and its location was obtained with a 3D LIDAR and a camera.
|
|
12:00-12:15, Paper We2B.5 | Add to My Program |
Human-Centric Partitioning of the Environment |
Karaoguz, Hakan | Royal Inst. of Tech. KTH |
Bore, Nils | KTH Royal Inst. of Tech |
Folkesson, John | KTH |
Jensfelt, Patric | KTH - Royal Inst. of Tech |
Keywords: HRI and Collaboration in Manufacturing Environments, Motion Planning and Navigation in Human-Centered Environments, Detecting and Understanding Human Activity
Abstract: In this paper, we present an object-based approach for human-centric partitioning of the environment. Our approach to determining human-centric regions is to detect objects that are commonly associated with frequent human presence. To detect these objects, we employ state-of-the-art perception techniques. The detected objects are stored with their spatio-temporal information in the robot's memory, to be used later for generating the regions. The advantages of our method are that it is autonomous, requires only a small set of perceptual data, and does not require people to be present while the regions are generated. The generated regions are validated using a 1-month dataset collected in an indoor office environment. The experimental results show that, although only a small set of perceptual data is used, the regions are generated at densely occupied locations.
|
|
12:15-12:30, Paper We2B.6 | Add to My Program |
An Integrated Approach for Industrial Robot Control and Programming Combining Haptic and Non-Haptic Gestures |
Hügle, Johannes | Fraunhofer IPK |
Lambrecht, Jens | Berlin Inst. of Tech |
Krüger, Jörg | Fraunhofer Inst. for Production Systems and DesignTechnology |
Keywords: HRI and Collaboration in Manufacturing Environments, Programming by Demonstration, Novel Interfaces and Interaction Modalities
Abstract: We present a hybrid programming method for industrial robots that combines the advantages of manual haptic guidance of the end-effector with programming approaches using non-haptic pointing gestures for the spatial definition of poses and trajectories. Whereas bare-hand spatial interaction can be implemented and performed cost- and time-efficiently but lacks accuracy, haptic interaction is more time-consuming but is used in a reduced manner to enable highly accurate refinement of target working poses. Additionally, the user is supported by a mobile augmented reality simulation providing spatial validation of the robot program, program management, and transmission to the robot controller. The implementation is realized by compliance control based on a sensor mounted between the flange and the end-effector, combined with our previously introduced approach for spatial programming. We conducted a user study comparing the method with Teach-In and offline programming. The analysis shows a significant reduction in programming duration as well as a reduction in programming errors compared with Teach-In. Most participants favored the hybrid programming system. No significant differences in programming duration were found between experts and non-experts. Comparing haptic and non-haptic interaction, non-experts favored non-haptic interaction due to the higher intuitiveness of pointing gestures compared to direct physical interaction.
|
|
We2C Regular Session, Ajuda III |
Add to My Program |
Detecting and Understanding Human Activity (I) |
|
|
Chair: Simmons, Reid | Carnegie Mellon Univ |
Co-Chair: Filippeschi, Alessandro | Scuola Superiore Sant'Anna |
|
11:00-11:15, Paper We2C.1 | Add to My Program |
Multi-Sensor Activity Recognition Using 2DPCA and K-Means Clustering Based on Dual-Measure Distance |
He, Hong | Shanghai Normal Univ |
Huang, Jifeng | Shanghai Normal Univ |
Zhang, Wuxiong | SIMIT, Chinese Acad. of Sciences |
Keywords: Detecting and Understanding Human Activity, Cognitive Skills and Mental Models, Cognitive and Sensorimotor Development
Abstract: Activity recognition based on multiple wearable sensors remains a challenging task due to the diversity of human activities. Unsupervised classification is helpful for discovering new activity classes and improving the activity classification model. Therefore, a new multi-sensor activity recognition scheme using two-dimensional principal component analysis (2DPCA) and k-means clustering with a dual-measure distance (DMk-means) is proposed in this paper. The activity signals are first decomposed by wavelet packet decomposition. Then 2DPCA is applied to the wavelet feature matrices of the activity samples without changing the inherent data structure. In DMk-means, different activities are grouped into clusters by measuring their feature vectors with both the Euclidean distance and the Pearson correlation distance. The recognition performance of the proposed scheme is verified on the public WARD dataset. Clustering results show that more useful wavelet features are captured by 2DPCA than by PCA. The dual-measure distance captures both the shape variance and the magnitude difference of feature vectors, and the clustering indices of 2DPCA_DMk-means are superior to those of 2DPCA_k-means for activity recognition.
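The dual-measure distance described in this abstract can be illustrated with a short sketch (not the authors' implementation; the equal weighting of the two terms is an assumption for the example). It sums a Euclidean term, sensitive to magnitude differences, and a Pearson correlation term, sensitive to shape differences:

```python
import numpy as np

def dual_measure_distance(a, b, alpha=0.5):
    """Weighted combination of Euclidean distance (magnitude) and
    Pearson correlation distance (shape) between feature vectors.
    `alpha` is an assumed weighting, not the paper's formulation."""
    a = np.asarray(a, float)
    b = np.asarray(b, float)
    euclid = np.linalg.norm(a - b)
    # Pearson correlation distance: 1 - r, ranging over [0, 2]
    pearson = 1.0 - np.corrcoef(a, b)[0, 1]
    return alpha * euclid + (1.0 - alpha) * pearson
```

Such a distance can replace the standard Euclidean metric in the assignment step of k-means: two samples that rise and fall together score close even when their magnitudes differ, while shape-opposite samples are pushed apart.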
|
|
11:15-11:30, Paper We2C.2 | Add to My Program |
Ex-Amp Robot: Expressive Robotic Avatar with Multimodal Emotion Detection to Enhance Communication of Users with Motor Disabilities |
Kashii, Ai | Keio Univ |
Takashio, Kazunori | Keio Univ |
Tokuda, Hideyuki | Keio Univ |
Keywords: Detecting and Understanding Human Activity, Embodiment, Empathy and Intersubjectivity, Robot Companions and Social Robots
Abstract: In current society, there are numerous robots made for various purposes, including manufacturing, cleaning, therapy, and customer service; others are used to enhance human-to-human (H2H) communication. In this research, we propose a robotic system that detects the user's emotions and enacts them on a humanoid robot. Using this robotic avatar, users with motor disabilities can extend their methods of communication, as a physical form of expression is added to the conversation.
|
|
11:30-11:45, Paper We2C.3 | Add to My Program |
Automatic Detection of Human Interactions from RGB-D Data for Social Activity Classification |
Coppola, Claudio | Univ. of Lincoln |
Cosar, Serhan | Univ. of Lincoln |
Faria, Diego | Aston Univ |
Bellotto, Nicola | Univ. of Lincoln |
Keywords: Detecting and Understanding Human Activity, Machine Learning and Adaptation
Abstract: We present a system for the temporal detection of social interactions. Much work to date has succeeded in recognising activities from clipped videos in datasets, but for robotic applications it is important to move to more realistic data: the system must temporally detect the intervals in which humans are performing an individual or a social activity. Recognition of human activities is a key feature for analysing human behaviour; recognition of social activities in particular could be used to trigger human-robot interactions or to detect situations of potential danger. Based on this, the research has three goals: (1) define a new set of descriptors able to represent the phenomena; (2) develop a computational model able to discern the intervals in which a pair of people are interacting or performing individual activities; (3) provide a public dataset with RGB-D videos where social interactions and individual activities happen in a continuous stream. Results show that the proposed approach reaches good performance in the temporal segmentation of social activities.
|
|
11:45-12:00, Paper We2C.4 | Add to My Program |
Two Deep Approaches for ADL Recognition: A Multi-Scale LSTM and a CNN-LSTM with a 3D Matrix Skeleton Representation |
Ercolano, Giovanni | Univ. Degli Studi Di Napoli Federico II |
Riccio, Daniel | Univ. Degli Studi Di Napoli Federico II |
Rossi, Silvia | Univ. Di Napoli Federico II |
Keywords: Detecting and Understanding Human Activity, Machine Learning and Adaptation, Assistive Robotics
Abstract: In this work, we propose a deep learning approach for the detection of activities of daily living (ADL) in a home environment, starting from the skeleton data of an RGB-D camera. In this context, the combination of ad hoc feature extraction/selection algorithms with supervised classification approaches has reached excellent classification performance in the literature. Since recurrent neural networks (RNNs) can learn temporal dependencies from instances with a periodic pattern, we propose two deep learning architectures based on Long Short-Term Memory (LSTM) networks. The first (MT-LSTM) combines three LSTMs deployed to learn different time-scale dependencies from pre-processed skeleton data. The second (CNN-LSTM) exploits a Convolutional Neural Network (CNN) to automatically extract features from the correlation of the limbs in a skeleton 3D-grid representation. These models are tested on the CAD-60 dataset. Results show that the CNN-LSTM model outperforms the state of the art with 95.4% precision and 94.4% recall.
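As one way to picture the 3D-grid skeleton representation the CNN operates on, a limb-ordered grid can place correlated joints in adjacent cells so that convolution kernels see related limbs together. The sketch below is a hypothetical layout (the joint names and 5x3 arrangement are illustrative assumptions, not the authors' exact CAD-60 encoding):

```python
import numpy as np

# Hypothetical limb-ordered layout: each row groups the joints of one limb,
# so a small convolution kernel covers spatially correlated joints.
LIMB_LAYOUT = [
    ["head", "neck", "torso"],
    ["l_shoulder", "l_elbow", "l_hand"],
    ["r_shoulder", "r_elbow", "r_hand"],
    ["l_hip", "l_knee", "l_foot"],
    ["r_hip", "r_knee", "r_foot"],
]

def skeleton_to_grid(joints):
    """Turn a dict {joint_name: (x, y, z)} into a 5x3x3 matrix:
    one row per limb, one column per joint, 3 channels for x/y/z."""
    grid = np.zeros((len(LIMB_LAYOUT), 3, 3))
    for r, limb in enumerate(LIMB_LAYOUT):
        for c, name in enumerate(limb):
            grid[r, c, :] = joints[name]
    return grid
```

Stacking one such matrix per frame would yield the spatio-temporal input a CNN-LSTM pipeline could consume.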
|
|
12:00-12:15, Paper We2C.5 | Add to My Program |
Unsupervised Embrace Pose Recognition Using K-Means Clustering |
Kleawsirikul, Nutnaree | Tokyo Inst. of Tech |
Mitake, Hironori | Tokyo Inst. of Tech |
Hasegawa, Shoichi | Tokyo Inst. of Tech |
Keywords: Detecting and Understanding Human Activity, Machine Learning and Adaptation, Cognitive and Sensorimotor Development
Abstract: The embrace is an essential part of human social interaction and offers positive health benefits comparable to other touch gestures. In recent years, however, embrace recognition has been largely neglected compared to the recognition of touch gestures such as patting or rubbing. There are different kinds of embrace, with humans and with pets, which can express different meanings. As embraces can be as important as touches, we are interested in what kinds of embrace we can model, especially for human-robot interaction. In this paper, we investigate an unsupervised embrace pose recognition system based on a soft stuffed-robot platform. Our proposed method combines a hardware implementation of soft fabric-based capacitive touch sensors with a software algorithm comprising k-means clustering on locational features extracted from a sliding window. The results show that our proposed method can model embrace patterns as distinct clusters and can assign unseen data to the cluster with similar patterns, though there is a limitation when modeling two poses whose touches are similar but differ in their alignment on the robot. As a next step, we plan to improve and test the proposed method in a real-time environment and to adjust our sensing system to address the observed limitations. If successful, the proposed method will be integrated with a touch gesture recognizer into a gesture recognition system for creating interactive and affective responses with the stuffed-toy robot, which could serve as a medium for robot therapy or as a pet-like companion at home.
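The clustering stage described above can be illustrated with a minimal k-means (Lloyd's algorithm) sketch. The feature vectors would be the locational touch features extracted from the sliding window; here they are assumed to be plain numeric arrays, and the initialization and stopping rule are generic choices rather than the paper's exact settings:

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Minimal k-means (Lloyd's algorithm): cluster feature vectors X (n x d)
    into k groups; returns (centroids, labels)."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute centroids; keep the old one if a cluster went empty.
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return centroids, labels
```

Unseen embrace data would then be assigned to whichever learned centroid lies nearest in feature space.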
|
|
12:15-12:30, Paper We2C.6 | Add to My Program |
Classification of Gross Upper Limb Movements Using Upper Arm Electromyographic Features |
Thacham Poyil, Azeemsha | Univ. of Hertfordshire |
Amirabdollahian, Farshid | The Univ. of Hertfordshire |
Steuber, Volker | Univ. of Hertfordshire |
Keywords: Robots in Education, Therapy and Rehabilitation, Machine Learning and Adaptation, Detecting and Understanding Human Activity
Abstract: This research paper explores the possibility of using electromyogram (EMG) signals to classify point-to-point upper limb movements during dynamic muscle contraction in the context of human-robot interaction. Previous studies have mostly focused on classifiers for gesture recognition using steady-state EMG; only a few have used non-steady-state EMG classifiers while gross upper arm muscles are in motion. To investigate this further, our study took EMG measurements from 4 upper limb muscles of 10 participants while they interacted with the HapticMaster robot in assistive mode. The participants were asked to move the robotic arm along a rectangular path consisting of 4 segments, named S1 to S4. The EMG signals were analyzed by splitting them into non-overlapping windows of 100 ms width. Windows within the first second of each segment iteration were used to train and test a Support Vector Machine classifier. Various EMG features were calculated for different numbers of windows and used to classify the segments. Across the combinations of features and muscles, the near-the-body segments S1 and S4 showed the highest median accuracy (100% each) for the feature combination Waveform Length + Mean Absolute Value + Zero Crossing Count + Signal Slope Change. For the same feature combination, segments S2 and S3 had the lowest accuracy, 76.2% and 73.8% respectively, possibly due to the away-from-body movements. In general, accuracy was more stable and higher for segments S1 and S4. Using 700 ms (i.e., 7 windows) for classification provided the best accuracy, and the best muscle combination was Trapezius + Deltoid + Biceps Brachii + Triceps Brachii.
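The four time-domain features named in the abstract are standard in the EMG literature and could be computed per window as in the following minimal sketch (threshold handling varies across implementations; the zero defaults here are an assumption):

```python
def emg_features(window, zc_thresh=0.0, ssc_thresh=0.0):
    """Compute four common time-domain EMG features for one window (a list
    of samples): waveform length (WL), mean absolute value (MAV),
    zero-crossing count (ZC), and slope sign changes (SSC)."""
    n = len(window)
    # WL: total variation of the signal inside the window.
    wl = sum(abs(window[i] - window[i - 1]) for i in range(1, n))
    # MAV: average rectified amplitude.
    mav = sum(abs(x) for x in window) / n
    # ZC: sign changes between consecutive samples above a noise threshold.
    zc = sum(1 for i in range(1, n)
             if window[i] * window[i - 1] < 0
             and abs(window[i] - window[i - 1]) >= zc_thresh)
    # SSC: slope direction reversals above a noise threshold.
    ssc = sum(1 for i in range(1, n - 1)
              if (window[i] - window[i - 1]) * (window[i] - window[i + 1]) > 0
              and (abs(window[i] - window[i - 1]) >= ssc_thresh
                   or abs(window[i] - window[i + 1]) >= ssc_thresh))
    return {"WL": wl, "MAV": mav, "ZC": zc, "SSC": ssc}
```

Concatenating these values across the chosen muscles and windows would yield the feature vector fed to the SVM.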
|
|
We2D Regular Session, Belem I |
Add to My Program |
Creating Human-Robot Relationships (II) |
|
|
Chair: Jeon, Myounghoon | Michigan Tech. Univ |
Co-Chair: Gunes, Hatice | Univ. of Cambridge |
|
11:00-11:15, Paper We2D.1 | Add to My Program |
Keep on Moving! Exploring Anthropomorphic Effects of Motion During Idle Moments |
asselborn, thibault | EPFL |
Johal, Wafa | École Pol. Fédérale De Lausanne |
Dillenbourg, Pierre | EPFL |
Keywords: Anthropomorphic Robots and Virtual Humans, Non-verbal Cues and Expressiveness, Interaction with Believable Characters
Abstract: In this paper, we explore the effect of a robot's subconscious gestures made during idle moments (also called adaptor gestures) on the anthropomorphic perceptions of five-year-old children. We developed a set of adaptor motions and sorted them by intensity. We designed an experiment involving 20 children, in which they played a memory game with two robots. During moments of idleness, the first robot showed adaptor movements, while the second robot moved its head following basic face tracking. Results showed that the children perceived the robot displaying adaptor movements to be more human and friendly, and these traits were found to be proportional to the intensity of the adaptor movements. For the range of intensities tested, adaptor movements were also found not to disrupt the task. These findings corroborate the fact that adaptor movements improve the affective aspect of child-robot interaction (CRI) without interfering with the child's performance in the task, making them suitable for CRI in educational contexts.
|
|
11:15-11:30, Paper We2D.2 | Add to My Program |
Exploring Engagement with Robots among Persons with Neurodevelopmental Disorders |
Beccaluva, Eleonora Aida | Fraternità E Amicizia Cooperativa Sociale-ONLUS |
Cerabolini, Roberto | Fraternità E Amicizia Cooperativa Sociale ONLUS |
Garzotto, Franca | Pol. Di Milano |
Gelsomini, Mirko | Pol. Di Milano, MIT Media Lab |
Monaco, Francesco | Pol. Di Milano |
Viola, Leonardo | Pol. Di Milano |
Clasadonte, Francesco | Pol. Di Milano |
Bonarini, Andrea | Pol. Di Milano |
Iannelli, Vito Antonio | Fraternità E Amicizia Cooperativa Sociale Onlus |
Keywords: Assistive Robotics, Robots in Education, Therapy and Rehabilitation, Creating Human-Robot Relationships
Abstract: Our research explores social robots as learning tools for persons with Neurodevelopmental Disorder (NDD). This paper focuses on a specific aspect of the learning process: engagement. Engagement is acknowledged as a learning facilitator and, for our target group, is also a pre-condition for involvement in any learning activity. The paper reports the design and results of an empirical study performed at a therapeutic center to investigate engagement among persons with NDD interacting with social robots. The study involved 5 subjects with NDD and three robots (two research products developed at our lab and a commercial one), which were used in sequence during individual therapeutic sessions at a care center. The results contribute to our understanding of the engagement process that takes place among persons with NDD during robotic experiences and enable us to compare the engagement effects of different social robots on this target group.
|
|
11:30-11:45, Paper We2D.3 | Add to My Program |
Estimation of Child's Personality for Child-Robot Interaction |
Abe, Kasumi | The Univ. of Electro-Communications |
Hamada, Yuki | Univ. of Electro - Communications |
Nagai, Takayuki | Univ. of Electro-Communications |
Shiomi, Masahiro | ATR |
Omori, Takashi | Tamagawa Univ |
Keywords: Monitoring of Behaviour and Internal States of Humans, Creating Human-Robot Relationships, Social Intelligence for Robots
Abstract: We propose a technique to estimate a child's extraversion and agreeableness for social robots that interact with children. The proposed approach observes children's behavior using only the robot's sensors, without any sensor network in the environment. An RGB-D sensor was used to track children and identify their facial expressions. Children's interactions with the robot, such as their distance from the robot and the duration of their eye contact, were observed because such information provides clues for estimating their personality. Data were collected while a robot, tele-operated by preschool teachers, interacted with kindergarten children individually. The data from 29 children were used to estimate the children's personality with accuracy above chance rates.
|
|
11:45-12:00, Paper We2D.4 | Add to My Program |
Expectation Management in Child-Robot Interaction |
Ligthart, Mike | Delft Univ. of Tech |
Blanson Henkemans, Olivier Anne | TNO |
Hindriks, Koen | Delft Univ. of Tech |
Neerincx, Mark | TNO |
Keywords: Robot Companions and Social Robots, Robots in Education, Therapy and Rehabilitation, User-centered Design of Robots
Abstract: Children are eager to anthropomorphize (ascribe human attributes to) social robots. As a consequence, they expect a more unconstrained, substantive and useful interaction with the robot than is possible with the current state of the art. In this paper we reflect on several of our user studies and investigate the form and role of expectations in child-robot interaction. We have found that the effectiveness of the robot's social assistance is negatively influenced by misaligned expectations. We propose three strategies that have to be worked out for the management of expectations in child-robot interaction: 1) be aware of and analyze children's expectations, 2) educate children, and 3) acknowledge that robots are (perceived as) a new kind of `living' entity besides humans and animals, which we need to make responsible for managing expectations.
|
|
12:00-12:15, Paper We2D.5 | Add to My Program |
The Role of Self-Disclosure in Human-Robot Interaction |
Eyssel, Friederike | Bielefeld Univ |
Wullenkord, Ricarda | CITEC, Bielefeld Univ |
Nitsch, Verena | Univ. Der Bundeswehr München |
Keywords: Social Intelligence for Robots, Robot Companions and Social Robots, Creating Human-Robot Relationships
Abstract: The act of revealing personal information, thoughts, and feelings is known as self-disclosure. Self-disclosure represents an important determinant of liking and is central to the development of close relationships among humans. The present study aimed to investigate the role of self-disclosure in human-robot interaction (HRI). 81 participants were randomly assigned to one of four experimental conditions in which they interacted with the humanoid robot NAO. We manipulated whether the robot disclosed personal information or whether the robot asked personal questions to the human interaction partner, so that the participant had to self-disclose. In two control conditions the robot either made factual statements or the robot asked factual questions. Contrary to the hypotheses, the results indicated no immediate statistically significant effects of self-disclosure on the dependent variables robot likability, human-robot interaction quality, future contact intentions, and mind attribution. However, when taking into account participants’ tendency to anthropomorphize technology, nature, and the animal world as a covariate, self-disclosure was found to significantly affect participants’ tendency to attribute mind to NAO. Furthermore, the results indicate that the form of engagement and dominance in an interaction (i.e., being in the role of a passive listener vs. conversing actively) may affect perceived HRI more than does the content of the verbal exchange. Thus, the paper highlights the importance of considering covariates (i.e., interindividual differences in the tendency to anthropomorphize nonhuman entities) in HRI analyses and points out possible relevant moderators of self-disclosure in HRI.
|
|
12:15-12:30, Paper We2D.6 | Add to My Program |
Good Vibrations: How Consequential Sounds Affect Perception of Robotic Arms |
Tennent, Hamish | Cornell Univ |
Moore, Dylan | Stanford Univ |
Jung, Malte | Cornell Univ |
Ju, Wendy | Stanford Univ |
Keywords: Sound design for robots, Non-verbal Cues and Expressiveness, Creating Human-Robot Relationships
Abstract: How does the sound a robot generates shape our perception of it? We overlaid high-end and low-end audio on videos of the high-end KUKA youBot desktop robotic arm moving a small block in functional (working in isolation) and social (interacting with a human) contexts. The low-end audio was sourced from an inexpensive OWI arm. Crowdsourced participants watched the videos and rated the robot along competence, trust, aesthetic, and human-likeness dimensions. We found that the presence and quality of sound shape subjective perception of the KUKA arm. The presence of sound reduced human-likeness and aesthetic ratings; however, the high-end sound was rated better on some aesthetic measures than the low-end sound. The social context increased the perceived competence, trust, aesthetics and human-likeness of the robot. Based on motor sound's significant but mixed impact on the visual perception of robots, we discuss implications for sound design for interactive systems.
|
|
We2E Regular Session, Ajuda I |
Add to My Program |
Novel Interfaces and Interaction Modalities |
|
|
Chair: Hashimoto, Hiroshi | Advanced Inst. of Industrial Tech |
Co-Chair: Terada, Hidetsugu | Univ. of Yamanashi |
|
11:00-11:15, Paper We2E.1 | Add to My Program |
Hand in Air Tapping: A Wearable Input Technology to Type Wireless |
Meli, Leonardo | Univ. of Siena |
Barcelli, Davide | Univ. of Siena |
Lisini Baldi, Tommaso | Univ. of Siena |
Prattichizzo, Domenico | Univ. Di Siena |
Keywords: Novel Interfaces and Interaction Modalities
Abstract: We present Hand in Air Tapping (HAT), a wearable input interface that enables interaction through finger tapping. It consists of Bluetooth Low Energy rings that communicate wirelessly with any compatible device. Each ring is hardware-wise independent of the others, allowing full modularity, i.e., the number of devices can be chosen to meet each application's requirements. The proposed system was evaluated in two user studies, both on text input: (1) users' learning curve in terms of writing speed; (2) a comparison of text entry rates between the proposed interface and numpad-style keyboards. We associated each keystroke with a set of letters/symbols and compared two approaches: one based on the T9 technique and the other on the multi-tap input method. Results show comparable performance between HAT and numpad-style keyboards. HAT keeps the hands free, affecting neither hand movements nor human interaction with the surroundings. Moreover, as a general input technology, it has several potential applications in the field of computer-human interfaces.
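The multi-tap input method compared in the study can be sketched as follows; the key-to-letter grouping shown is a hypothetical phone-style mapping, not necessarily the one used by HAT:

```python
# Hypothetical phone-style grouping of letters onto a few keys.
KEYS = {1: "abc", 2: "def", 3: "ghi"}

def multitap(key, presses):
    """Multi-tap input: repeated presses of the same key cycle through
    the letters assigned to it."""
    letters = KEYS[key]
    return letters[(presses - 1) % len(letters)]
```

A T9-style approach would instead map one press per key and disambiguate the resulting key sequence against a dictionary.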
|
|
11:15-11:30, Paper We2E.2 | Add to My Program |
How to Teach Your Robot in 5 Minutes: Applying UX Paradigms to Human-Robot-Interaction |
Kraft, Martin | Fortiss, An-Inst. Tech. Univ. München |
Rickert, Markus | Fortiss, An-Inst. Tech. Univ. München |
Keywords: Novel Interfaces and Interaction Modalities
Abstract: When creating modern and visually appealing user experiences for interaction with industrial robots, well-known and universally applicable paradigms from app and web design can be utilized to increase the accessibility and usability of the service to be created. This is especially true when the expected user group consists of untrained and inexperienced users, so that the focus of system interaction lies more on an overview of build progress, safety for human and robot, and the overall simplification of complicated features. In this paper, we present four of the most important paradigms of modern graphical user experiences in web and app design that can be used to let users interact with an industrial robot without any experience-related thresholds. We apply these paradigms by redesigning an existing interaction concept of a working robot-cell system for assembly tasks in a small and medium-sized enterprise environment. The resulting improvements are then examined in a before-after user study to analyze how well the paradigms meet users' expectations of the redesigned service.
|
|
11:30-11:45, Paper We2E.3 | Add to My Program |
On the Recognition of Human Hand Touch from Robotic Skin Pressure Measurements Using Convolutional Neural Networks |
Denei, Simone | Univ. of Genova |
Albini, Alessandro | Univ. of Genova |
Cannata, Giorgio | Univ. of Genova |
Keywords: Machine Learning and Adaptation, Detecting and Understanding Human Activity, Novel Interfaces and Interaction Modalities
Abstract: This paper presents a novel approach for recognizing a human hand touch by processing pressure measurements generated by a robotic skin. Physical cooperation among humans is mainly based on the sense of touch and usually starts with hand contact. If a robot can distinguish a human touch from a generic contact, human-robot cooperation can be more natural and effective. The proposed approach consists of transforming the pressure measurements distributed on the robot surface into a convenient 2D representation of the contact shape, i.e., a contact image. This image-based representation allows the problem of human touch classification to be tackled with machine learning methods already developed for image classification. The experiments were performed using a robotic skin composed of 768 tactile elements placed on a Baxter robot forearm. The contact classification was performed using a Convolutional Neural Network, obtaining an accuracy higher than 97% and experimentally validating the proposed approach.
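The transformation from distributed taxel pressures to a 2D contact image could be sketched as a simple nearest-cell rasterization. The (u, v) surface parameterization and resolution below are illustrative assumptions, not the authors' exact mapping of the Baxter forearm skin:

```python
import numpy as np

def contact_image(taxel_uv, pressures, res=24):
    """Rasterize per-taxel pressure readings into a res x res contact image
    by nearest-cell binning of each taxel's (u, v) surface coordinate,
    with u, v normalized to [0, 1]."""
    img = np.zeros((res, res))
    for (u, v), p in zip(taxel_uv, pressures):
        r = min(int(v * res), res - 1)
        c = min(int(u * res), res - 1)
        img[r, c] = max(img[r, c], p)  # keep the strongest reading per cell
    return img
```

The resulting image can then be fed to any off-the-shelf image classifier, which is the key benefit of the representation.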
|
|
11:45-12:00, Paper We2E.4 | Add to My Program |
Tortoise and the Hare Robot: Slow and Steady Almost Wins the Race, but Finishes More Safely |
Rea, Daniel J. | Univ. of Manitoba |
Rahmani Hanzaki, Mahdi | Sharif Univ. of Tech |
Bruce, Neil | Univ. of Manitoba |
Young, James Everett | Univ. of Manitoba |
Keywords: Novel Interfaces and Interaction Modalities, Human Factors and Ergonomics
Abstract: We investigated the effects of changing the tele-operation feel of a robot by modifying its speed and acceleration profiles, and found that halving a robot's maximum speed can reduce collisions by 32% while increasing navigation task time by only 10%. Teleoperated robots are increasingly popular for enabling people to remotely attend meetings, explore dangerous areas, or view tourist destinations. As these robots are designed to work in crowded areas with people, obstacles, or even unpredictable debris, interfaces that support piloting them in a safe and controlled manner are important for successful teleoperation. We investigate the effect of a teleoperated robot's speed and acceleration profiles on an operator remotely navigating through an obstacle course. Our results indicate that lower maximum speeds result in lower operator workload and fewer collisions, and are only slightly slower than profiles with a higher maximum speed. Our results raise questions about how robot designers should think about physical robot capability design and default driving software settings, the robot control interface, and the relation of robot speed to control.
|
|
12:00-12:15, Paper We2E.5 | Add to My Program |
Monocle: Interactive Detail-In-Context Using Two Pan-And-Tilt Cameras to Improve Teleoperation Effectiveness |
Seo, Stela Hanbyeol | Univ. of Manitoba |
Rea, Daniel J. | Univ. of Manitoba |
Wiebe, Joel | Univ. of Manitoba |
Young, James Everett | Univ. of Manitoba |
Keywords: Novel Interfaces and Interaction Modalities, Multi-modal Situation Awareness and Spatial Cognition
Abstract: Robot teleoperation, such as for search and rescue, uses multiple specialized cameras (e.g., a wide environmental view and a sharp narrow view) to aid task awareness. Simple display techniques, such as tiling, require ongoing mental mapping between the views; cameras that pan or tilt exacerbate the problem as the inter-view relationship changes. The detail-in-context technique bypasses this mental mapping by providing a single integrated feed showing all cameras, with detail overlaid within the context. However, how this can be adapted for robot teleoperation with multiple pan-and-tilt cameras has not yet been demonstrated. We present Monocle, an interactive detail-in-context teleoperation interface that integrates a pan-and-tilt narrow-angle first-person view into a wide-angle behind-robot view; operators can move the Monocle around a scene to obtain more resolution when and where needed. Evaluation results demonstrate Monocle's feasibility and show that it can help operators complete search and rescue tasks more effectively than simple solutions.
|
|
12:15-12:30, Paper We2E.6 | Add to My Program |
Artificial Neural Networks Based Myoelectric Control System for Automatic Assistance in Hand Rehabilitation |
Amrani, Mohamed Zine El Abidine | USTHB |
Daoudi, Abdelaghani | USTHB |
Achour, Nouara | USTHB |
Keywords: Robots in Education, Therapy and Rehabilitation, Medical and Surgical Applications, Novel Interfaces and Interaction Modalities
Abstract: Myoelectric control uses the electromyogram (EMG) signal as a control source; with this technique, we can control any computer-based system, such as robots, devices, or even virtual objects. The tendon gliding exercise is one of the most common hand rehabilitation exercises. In this paper, we present a pattern-recognition-based myoelectric control system (MCS) for automatic assistance in the tendon gliding exercise. The user is assisted by visual indicators and demo videos. EMG pattern recognition is performed with EMG features and a multi-layer artificial neural network (ANN); the ANN classifier output is used to synchronize the demo video with the detected movement, and the transition between states occurs automatically when the current state's movement is correct and the required number of repetitions is reached. The ANN is trained with the back-propagation algorithm. We used only two sEMG electrodes and four commonly used time-domain EMG feature extraction methods; feature quality is evaluated by the average Rand index over eight unsupervised clustering algorithms. The efficacy of the proposed method is experimentally validated with five able-bodied subjects, reaching an average classification accuracy of 95.11% and a processing time of less than 300 ms.
|
|
WePoster1 Poster Session, Ajuda I |
Add to My Program |
Poster Session |
|
|
Chair: Benvenuto, Antonella | Univ. Campus Bio-Medico Di Roma |
Co-Chair: Barreto, João P. | Univ. of Coimbra |
|
15:30-16:15, Paper WePoster1.1 | Add to My Program |
Wearing Your Arm on Your Sleeve: Studying Usage Contexts for a Wearable Robotic Forearm |
Vatsal, Vighnesh | Cornell Univ |
Hoffman, Guy | Cornell Univ |
Keywords: User-centered Design of Robots, Innovative Robot Designs, Human Factors and Ergonomics
Abstract: This paper presents the design of a wearable robotic forearm that provides the user with an assistive third hand, along with a study of interaction schema for the design. Technical advances in sensors, actuators, and materials have made wearable robots feasible for personal use, but the interaction with such robots has not been sufficiently studied. We describe the development of a working prototype along with three usability studies. In an online survey we find that respondents presented with images and descriptions of the device see its use mainly as a functional tool in professional and military contexts. A subsequent contextual inquiry among building construction workers reveals three themes for user needs: extending a worker's reach, enhancing their safety and comfort through bracing and stabilization, and reducing their cognitive load in repetitive tasks. A subsequent laboratory study in which participants wear a working prototype of the robot finds that they prioritize lowered weight and enhanced dexterity, seek adjustable autonomy and transparency of the robot's intent, and prefer a robot that looks distinct from a human arm. These studies inform design implications for further development of wearable robotic arms.
|
|
15:30-16:15, Paper WePoster1.2 | Add to My Program |
The Role of Security in Human-Robot Shared Environments: A Case Study in ROS-Based Surveillance Robots |
Portugal, David | Ingeniarius, Ltd |
Pereira, Samuel | Ingeniarius, Ltd |
Couceiro, Micael | Univ. of Coimbra |
Keywords: Ethical Issues in Human-robot Interaction Research, Philosophical Issues in Human-Robot Coexistence
Abstract: With the growing proliferation of robots in our society comes a natural concern about security. However, this issue is often overlooked in robotic systems, as the focus is commonly placed on robot functionality and innovation. Unauthorized access to a robot, or to a multi-robot network, may seriously compromise the system, potentially leading to unacceptable consequences, such as endangering the humans who share the environment with the robot(s). In this paper, we take a deeper look at security in human-robot shared environments by surveying existing work, analyzing security issues in the widely used Robot Operating System (ROS), discussing the different layers of security in a robotic network architecture, and proposing several hierarchical security mechanisms, using the STOP project case study in surveillance robotics.
|
|
15:30-16:15, Paper WePoster1.3 | Add to My Program |
Evaluating the Usability and Users’ Acceptance of a Kitchen Assistant Robot in Household Environment |
Pham, Thi Xuan Ngan | Hanover Univ. of Applied Sciences and Arts |
Hayashi, Kotaro | Tokyo Univ. of Agriculture and Tech |
Becker-Asano, Christian | Robert Bosch GmbH |
Lacher, Sebastian | Bosch Corp. Japan |
Mizuuchi, Ikuo | Tokyo Univ. of Agriculture and Tech |
Keywords: Assistive Robotics, Cooperation and Collaboration in Human-Robot Teams
Abstract: The main characteristics of service robots are their human-centered working environment and their non-expert users. Therefore, service robots need to exhibit human-robot interaction that lets users feel comfortable, so that these robots are accepted at home. In this research, we propose the concept of a kitchen assistant robot that supports people in completing stressful cooking tasks. The stressful cooking tasks in this study are also examined to gain an understanding of the applicability of service robots in a wider sense. For the proposed concept, a prototype of the kitchen assistant robot was constructed, consisting of a robot arm and a Myo gesture armband as the control device. Eighteen university students experienced the prototype by completing the stressful tasks and gave feedback through questionnaires. This study investigates the usability and users' acceptance of the kitchen assistant robot.
|
|
15:30-16:15, Paper WePoster1.4 | Add to My Program |
Learning Generalizable Surface Cleaning Actions from Demonstration |
Elliott, Sarah | Univ. of Washington |
Xu, Zhe | Yale Univ |
Cakmak, Maya | Univ. of Washington |
Keywords: Programming by Demonstration
Abstract: When surveyed, potential users often report cleaning as a desired robot capability. Cleaning tasks, such as dusting, wiping, or scrubbing, involve applying a tool on a surface. A general-purpose robotic solution to household cleaning needs to address manipulation of the numerous cleaning tools made for different purposes. Finding a universal solution to this manipulation problem is extremely challenging and it is not feasible for developers to pre-program the robot to use every possible tool. Instead, our work seeks to allow end users to program robots by demonstration using their own specific tools. We propose a method to extract a compact representation of a cleaning action from a single demonstration, such that the tool can be applied on different surfaces. The method exploits key insights about tool directionality and constraints placed on the provided demonstration. We demonstrate that our method is able to reliably learn cleaning actions for six different tools and apply those actions on different testing surfaces, even ones smaller than the training surface. Our method reproduces the cleaning performance of the demonstrated trajectory when applied on the training surface and it captures different user preferences.
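One ingredient of such a method, extracting a dominant stroke direction from a single demonstrated trajectory, can be sketched as a principal-component fit. This is an illustrative reconstruction of how a tool-directionality constraint might be derived, not the paper's exact algorithm:

```python
import numpy as np

def principal_direction(points):
    """Estimate the dominant stroke direction of a demonstrated tool
    trajectory (n x 2 array of surface points) as the first principal
    component of the mean-centered points."""
    X = np.asarray(points, dtype=float)
    X -= X.mean(axis=0)          # center the trajectory
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    d = vt[0]                    # first right singular vector
    return d / np.linalg.norm(d)
```

The recovered direction (up to sign) could then constrain how the learned cleaning action is replayed on a new, differently sized surface.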
|
|
15:30-16:15, Paper WePoster1.5 | Add to My Program |
Coordinating Flexible Human-Robot Teams by Local World State Observation |
Riedelbauch, Dominik | Univ. Bayreuth |
Henrich, Dominik | Univ. of Bayreuth |
Keywords: Cooperation and Collaboration in Human-Robot Teams, Cognitive Skills and Mental Models
Abstract: We envision a system that treats humans and robots as equal partners in achieving a common goal. Equality of agents offers high flexibility through dynamic task allocation and flexible team composition, but it thereby complicates the coordination of robots with human actions. We contribute a formal framework and system architecture that integrates the planning of perception operations into dynamic plan execution to recognize whether subtasks have already been carried out by another agent or are ready to be executed by the robot. This is achieved by locally observing the world state with robot-mounted sensors to evaluate pre- and postconditions linked to every operation of the task. Our approach is prepared to handle arbitrary teams of multiple humans and robots. Experiments show that our prototype is able to coordinate a stationary robot arm with human actions in pick-and-place tasks formulated as precedence graphs.
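The core coordination idea, executing only those subtasks of a precedence graph whose preconditions have been observed as satisfied, can be sketched as follows (the graph encoding is an assumption for illustration):

```python
def ready_subtasks(precedence, done):
    """Given a precedence graph {task: set of prerequisite tasks} and the
    set of tasks already observed as completed (by any agent), return the
    tasks whose preconditions are all satisfied and that remain open."""
    return {t for t, pre in precedence.items()
            if t not in done and pre <= done}
```

In the envisioned system, `done` would be updated from local world-state observations, so a subtask already completed by a human simply disappears from the robot's candidate set.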
|
|
15:30-16:15, Paper WePoster1.6 | Add to My Program |
Design and Evaluation of P300 Visual Brain-Computer Interface Speller in Cyrillic Characters |
Abibullaev, Berdakh | Nazarbayev Univ |
Zhumadilova, Arailym | Nazarbayev Univ |
Tokmurzina, Dana | Nazarbayev Univ |
Akbay, Kuderbekov | Nazarbayev Univ |
Keywords: Novel Interfaces and Interaction Modalities, Human Factors and Ergonomics, Multimodal Interaction and Conversational Skills
Abstract: A visual Brain-Computer Interface (BCI) speller is a system that assists disabled persons with severe neuromuscular diseases in communicating with the external world. It acquires brain signals in response to visual stimuli shown on a screen and analyzes them in real time to predict the desired symbol on a single-trial basis. To date, most BCI design paradigms have focused on developing spellers for English or Latin-based languages. Given the lack of BCI spellers for patients speaking Cyrillic-based languages, this study presents the initial design and evaluation of a speller that contains Cyrillic alphanumeric characters. The visual BCI speller was evaluated on five healthy subjects, who showed encouraging results both in the offline training phases and in the real-time BCI spelling experiments. We discuss each step of the design in detail and share the challenges and limitations of such design approaches, with possible solutions.
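A standard building block of P300 spellers, averaging the EEG epochs that follow repeated stimulus presentations to raise the P300's signal-to-noise ratio, can be sketched like this (illustrative; the paper's exact pipeline is not specified in the abstract):

```python
import numpy as np

def average_epochs(eeg, stim_onsets, win):
    """Average fixed-length EEG epochs following each stimulus onset.
    eeg: 1-D sample array; stim_onsets: sample indices of the stimuli;
    win: epoch length in samples. Returns the grand-average epoch."""
    epochs = np.stack([eeg[s:s + win] for s in stim_onsets])
    return epochs.mean(axis=0)
```

Because the P300 is time-locked to the attended stimulus while background EEG is not, averaging over trials makes the response to the desired symbol stand out.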
|
|
15:30-16:15, Paper WePoster1.7 | Add to My Program |
Integrating Olfaction in a Robotic Telepresence Loop |
Monroy, Javier | Univ. of Málaga |
Melendez-Fernandez, Francisco | Univ. of Malaga |
Gongora, Andres | Univ. De Málaga |
González-Jiménez, Javier | Univ. of Málaga |
Keywords: Novel Interfaces and Interaction Modalities, Virtual and Augmented Tele-presence Environments
Abstract: In this work we propose enhancing a typical robotic telepresence architecture by considering olfactory and wind-flow information in addition to the common audio and video channels. The objective is to expand the range of applications where robotic telepresence can be applied, including those related to the detection of volatile chemical substances (e.g. land-mine detection, explosive deactivation, operations in noxious environments, etc.). Concretely, we analyze how the sense of smell can be integrated into the telepresence loop, covering the digitization of the gases and wind flow present in the remote environment, their transmission through the communication network, and their display at the user location. Experiments under different environmental conditions are presented to validate the proposed telepresence system when localizing a gas emission leak in the remote environment.
|
|
15:30-16:15, Paper WePoster1.8 | Add to My Program |
Multimodal Communication for Guiding a Person Following Robot |
Sarne-Fleischmann, Vardit | Ben-Gurion Univ. of the Negev |
Honig, Shanee | Ben-Gurion Univ. of the Negev |
Oron-Gilad, Tal | BGU |
Edan, Yael | Ben-Gurion Univ. of the Negev |
Keywords: Multimodal Interaction and Conversational Skills, Robot Companions and Social Robots, User-centered Design of Robots
Abstract: Robots that are designed to support people in different tasks at home and in public areas need to be able to recognize users' intentions and operate accordingly. To date, research has mostly concentrated on developing the technological capabilities of the robot and the mechanism of recognition. Still, little is known about the navigational commands that people would intuitively use to control a robot's movement. A two-part exploratory study was conducted to evaluate how people naturally guide the motion of a robot and whether an existing gesture vocabulary used for human-human communication can be applied to human-robot interaction. Fourteen participants were first asked to demonstrate ten different navigational commands while interacting with a Pioneer robot using a WoZ technique. In the second part of the study, participants were asked to identify eight predefined commands from the U.S. Army vocabulary. Results show that simple commands yielded higher consistency among participants in the commands they demonstrated. Voice commands were also more frequent than gestures, though a combination of both was sometimes dominant for certain commands. In the second part, an inconsistency in identification rates for opposite commands was observed. The results of this study could serve as a baseline for a future command vocabulary promoting a more natural and intuitive human-robot interaction style.
|
|
15:30-16:15, Paper WePoster1.9 | Add to My Program |
Deep Recurrent Q-Learning of Behavioral Intervention Delivery by a Robot from Demonstration Data |
Clark-Turner, Madison | Univ. of New Hampshire |
Begum, Momotaz | Univ. of New Hampshire |
Keywords: Assistive Robotics, Monitoring of Behaviour and Internal States of Humans, Degrees of Autonomy and Teleoperation
Abstract: We present a learning from demonstration (LfD) framework that uses a deep recurrent Q-network (DRQN) to learn how to deliver a behavioral intervention (BI) from demonstrations performed by a human. The trained DRQN enables a robot to deliver a similar BI in an autonomous manner. BIs are highly structured procedures wherein children with developmental delays/disorders (e.g. autism, ADHD, etc.) are trained to perform new behaviors and life skills. Mounting anecdotal evidence from human-robot interaction (HRI) research has shown that BI benefits from the use of robots as a delivery tool. Most HRI research on robot-based intervention relies on tele-operated robots. However, the need for autonomy has become increasingly evident, especially when it comes to the real-world deployment of these systems. The few studies that have used autonomy in robot-based BI relied on hand-picked features of the environment to trigger correct robot actions. Additionally, none of these automated architectures attempted to learn the BI from human demonstrations, though this appears to be the most natural way of learning. This paper represents the first attempt to design a robot that uses LfD to learn a BI. We generate a model that correctly predicts appropriate actions with greater than 80% accuracy. To the best of our knowledge, this is the first attempt to employ a DRQN within an LfD framework to learn high-level reasoning embedded in human actions and behaviors simply from observations.
|
|
15:30-16:15, Paper WePoster1.10 | Add to My Program |
Towards the Use of Consumer-Grade Electromyographic Armbands for Interactive, Artistic Robotics Performances |
Côté-Allard, Ulysse | Univ. Laval |
St-Onge, David | Ec. Pol. De Montreal |
Giguere, Philippe | Univ. Laval |
Laviolette, François | Univ. Laval |
Gosselin, Benoit | Univ. Laval |
Keywords: Robots in art and entertainment, Cooperation and Collaboration in Human-Robot Teams, Non-verbal Cues and Expressiveness
Abstract: In recent years, gesture-based interfaces have been explored in order to control robots in non-traditional ways. These require systems that are able to track human body movements in 3D space. Deploying Mo-cap or camera systems to perform this tracking tends to be costly, intrusive, or to require a clear line of sight, making them ill-adapted for artistic performances. In this paper, we explore the use of consumer-grade armbands (Myo armband), which capture orientation information (via an inertial measurement unit) and muscle activity (via electromyography), to guide a robotic device during live performances. To compensate for the drop in information quality, our approach relies heavily on machine learning and leverages the multimodality of the sensors. To speed up classification, dimensionality reduction was performed automatically via a method based on Random Forests (RF). Online classification achieved 88% accuracy over nine movements created by a dancer during a live performance, demonstrating the viability of our approach. The nine movements were then grouped by the dancer into three semantically meaningful moods for the purpose of an artistic performance, achieving 94% accuracy in real time. We believe that our technique opens the door to aesthetically pleasing sequences of body motions as a gestural interface, instead of traditional static arm poses.
|
|
15:30-16:15, Paper WePoster1.11 | Add to My Program |
Impact of Continuous Eye Contact of a Humanoid Robot on User Experience and Interactions with Professional User Background |
Kühnlenz, Barbara | Coburg Univ. of Applied Sciences and Arts |
Wang, Zhi Qiao | Coburg Univ |
Kühnlenz, Kolja | Coburg Univ. of Applied Sciences and Arts |
Keywords: Non-verbal Cues and Expressiveness, Social Presence for Robots and Virtual Humans, Anthropomorphic Robots and Virtual Humans
Abstract: This work investigates the impact of simply keeping eye-contact on users’ perceptions of the humanoid robot NAO during the carry and handover phases of a fetch-and-handover scenario. Two conditions of NAO looking either towards its planned path or towards the human test subject’s face are implemented in a within-subjects design. Evaluations of user experience and acceptance are conducted based on evaluated measures of human-robot interaction. The results of the user study reveal a significant increase of anthropomorphism, animacy, and social presence in the eye-contact condition even without an intelligent gaze control implementation. Additionally, significant interactions with the professional background of the test participants are found.
|
|
15:30-16:15, Paper WePoster1.12 | Add to My Program |
Formation Control Using GQ(λ) Reinforcement Learning |
Knopp, Martin | Tech. Univ. München |
Aykın, Can | Tech. Univ. München |
Feldmaier, Johannes | Tech. Univ. München |
Shen, Hao | Tech. Univ. München |
Keywords: Motion Planning and Navigation in Human-Centered Environments, Machine Learning and Adaptation, Cooperation and Collaboration in Human-Robot Teams
Abstract: Formation control is an important subtask for autonomous robots. From flying drones to swarm robotics, many applications need their agents to control their group behavior. Especially when moving autonomously in human-robot teams, motion and formation control of a group of agents is a critical and challenging task. In this work, we propose a method of applying the GQ(λ) reinforcement learning algorithm to a leader-follower formation control scenario on the e-puck robot platform. To allow control via classical reinforcement learning, we present how we modeled the formation control problem as a Markov decision process. This allows us to use the Greedy-GQ(λ) algorithm for learning a leader-follower control law. The applicability and performance of this control approach are investigated in simulation as well as on real robots. In both experiments, the followers are able to move behind the leader. Additionally, the algorithm improves the smoothness of the follower's path online, which is beneficial in the context of human-robot interaction.
|
|
15:30-16:15, Paper WePoster1.13 | Add to My Program |
Reducing the Gap between Cognitive and Robotic Systems |
Azevedo, Helio | CTI - Renato Archer and USP - São Carlos |
Romero, Roseli Ap. Francelin | Univ. De Sao Paulo |
Ribeiro Belo, José Pedro | Univ. of São Paulo |
Keywords: Robot Companions and Social Robots, Cognitive and Sensorimotor Development, Evaluation Methods and New Methodologies
Abstract: Service robots will gradually be present in residences, interacting with human beings in unstructured environments. Their acceptance is conditioned on the evolution of research in the area of social robotics, in particular on cognitive systems. One factor that hinders the rapid evolution of these studies is the difficulty of modeling cognitive systems, due to the volume and complexity of information produced by a chaotic world full of sensory information. In addition, validating results in real environments involving buildings and people entails a high cost of installation and maintenance. This article offers two strategies to speed up this process. The first involves the definition of the OntCog ontology, which models the senses captured by the robotic agent's sensors. This modeling facilitates the reproduction of experiments associated with cognitive models and the comparison among different implementations. The second is the development of the Robot House Simulator, which provides an environment where a robot and a human character can interact socially with increasing levels of cognitive processing. An unprecedented feature of this simulator is that it provides information about all the senses of the robot; existing robotic simulators have considered only the senses of vision and touch.
|
|
15:30-16:15, Paper WePoster1.14 | Add to My Program |
Developing Child-Robot Interaction Scenarios with a Humanoid Robot to Assist Children with Autism in Developing Visual Perspective Taking Skills |
Wood, Luke Jai | Univ. of Hertfordshire |
Dautenhahn, Kerstin | Univ. of Hertfordshire |
Robins, Ben | Univ. of Hertfordshire |
Zaraki, Abolfazl | Univ. of Hertfordshire |
Keywords: Robots in Education, Therapy and Rehabilitation, Applications of Social Robots, Assistive Robotics
Abstract: Children with autism often find it difficult to understand that other people might have perspectives, viewpoints, beliefs and knowledge that are different from their own. One fundamental aspect of this difficulty is Visual Perspective Taking (VPT): the ability to see the world from another person's perspective, taking into account what they see and how they see it, drawing upon both spatial and social information. In this paper, we outline the child-robot interaction scenarios that we have developed as part of the European BabyRobot project to assist children with autism in exploring elements that are important in developing VPT skills. Further to this, we describe the standard pre- and post-assessments that we will perform with the children in order to measure their progress. The games were implemented with the Kaspar robot. To our knowledge this is the first attempt to improve the VPT skills of children with autism through playing and interacting with a humanoid robot.
|
|
15:30-16:15, Paper WePoster1.15 | Add to My Program |
The Timing of Multimodal Robot Behaviors During Human-Robot Collaboration |
Jensen, Lars Christian | Univ. of Southern Denmark |
Fischer, Kerstin | Univ. of Southern Denmark |
Suvei, Daniel | Univ. of Southern Denmark |
Bodenhagen, Leon | Univ. of Southern Denmark |
Keywords: Multimodal Interaction and Conversational Skills, Non-verbal Cues and Expressiveness
Abstract: In this paper, we address issues of timing between robot behaviors in multimodal human-robot interaction. In particular, we study what effects sequential order and simultaneity of robot arm and body movement and verbal behavior have on the fluency of interactions. In a study with the Care-O-bot, a large service robot, in a medical measurement scenario, we compare the timing of the robot's behaviors in three between-subject conditions. The results show that the relative timing of robot behaviors has significant effects on the number of problems participants encounter, and that the robot's verbal output plays a special role because participants carry their expectations from human verbal interaction into the interactions with robots.
|
|
15:30-16:15, Paper WePoster1.16 | Add to My Program |
H-RRT-C: Haptic Motion Planning with Contact |
Blin, Nassime Michel | Laas-Cnrs, Lgp-Enit |
Taïx, Michel | LAAS-CNRS/Univ. Paul Sabatier |
Fillatreau, Philippe | ENIT Tarbes |
Fourquet, Jean-Yves | ENIT |
Keywords: Motion Planning and Navigation in Human-Centered Environments, HRI and Collaboration in Manufacturing Environments, Virtual and Augmented Tele-presence Environments
Abstract: This paper focuses on interactive motion planning processes intended to assist a human operator when simulating industrial tasks in Virtual Reality. Such applications need motion planning on surfaces. We propose an original haptic path planning algorithm with contact, H-RRT-C, based on an RRT planner and a real-time interactive approach involving a haptic device for computer-operator authority sharing. Force feedback allows the human operator to keep contact consistently and provides the feel of the contact, and the force applied by the operator on the haptic device is used to control the roadmap extension. Our approach has been validated through two experimental examples, and brings significant improvement over state-of-the-art methods in both free and contact space for solving path-planning queries and contact operations such as insertion or sliding in highly constrained environments.
|
|
15:30-16:15, Paper WePoster1.17 | Add to My Program |
Studying Human Haptic Communication in a Realistic Setting - Challenges and Opportunities |
Javaid, Maria | Jacksonville Univ |
Keywords: Novel Interfaces and Interaction Modalities, Detecting and Understanding Human Activity, Assistive Robotics
Abstract: This paper presents a study of human-to-human communication through haptics while performing collaborative tasks in a realistic setting. Research progress is most notable in two areas: classification of handover stages and identification of physical manipulation actions. Details about the experiments, data analysis and results have been presented in previously published papers. The main contribution of this paper is the identification of challenges encountered, thoughts about possibilities for improvement, and future extensions of the overall research conducted on human-to-human haptic communication in an unstructured realistic setting. The findings presented in this paper may be used by other researchers to advance work in the area of haptic interaction. This research was conducted as part of a multidisciplinary project, RoboHelper, whose ultimate goal is to develop a communication interface for assistive robots that can help the elderly to live independently in their homes.
|
|
15:30-16:15, Paper WePoster1.18 | Add to My Program |
Playing the Mirror Game with a Humanoid: Probing the Social Aspects of Switching Interaction Roles |
Sicat, Shelly | Univ. of Calgary |
Chopra, Shreya | Univ. of Calgary |
Li, Nico | Univ. of Calgary |
Sharlin, Ehud | Univ. of Calgary |
Keywords: Cooperation and Collaboration in Human-Robot Teams, Creating Human-Robot Relationships, Non-verbal Cues and Expressiveness
Abstract: Individuals can easily change interaction roles during everyday tasks, for example by shifting from following someone's lead to leading the task themselves. We are interested in how these existing social experiences scale to human-robot interaction (HRI): how would robots change their interaction roles when working with people? Would changes in interaction roles pose a challenge unique to robots? In this paper, we propose a testbed for changing interaction roles in HRI based on a drama exercise known as the Mirror Game, which enables close collaboration between two individuals, each closely following the other's movements. Utilizing the Mirror Game with a large humanoid robot allowed us to examine people's reactions to changes in the humanoid's interaction roles. We contribute: 1) the design of a human-robot interaction role-switching testbed based on the Mirror Game; 2) a prototype of our testbed realized with a Rethink Robotics humanoid, Baxter; and 3) the results of a preliminary study examining people's reactions to the robot's changing interaction roles.
|
|
15:30-16:15, Paper WePoster1.19 | Add to My Program |
Gait Measurement by a Mobile Humanoid Robot As a Walking Trainer |
Piezzo, Chiara | Univ. of Tsukuba |
Leme, Bruno | Univ. of Tsukuba |
Hirokawa, Masakazu | Univ. of Tsukuba |
Suzuki, Kenji | Univ. of Tsukuba |
Keywords: User-centered Design of Robots, Detecting and Understanding Human Activity, Assistive Robotics
Abstract: It is well known that walking offers many health benefits for everyone, especially for older people who need to maintain mobility and independence while coping with declining functional capacity. In this paper, we present the design of a humanoid walking trainer intended to monitor and encourage walking in the elderly. This design is based on our target users' preferences. We also present a preliminary walking experiment, carried out to test the accuracy of the gait data obtained during motion from the laser range sensor mounted on the robot.
|
|
15:30-16:15, Paper WePoster1.20 | Add to My Program |
Trajectory Adaptation Method for Fast Walking Control of an Exoskeleton |
Lee, Sang Hoon | ADD (Agency for Defense Development) |
Seo, Changhoon | Agency for Defense Development |
Choi, Byunghun | Agency for Defense Development |
Kim, Byungun | Agency for Defense Development |
Kim, Soohyun | KAIST (Korea Advanced Inst. of Science and Tech) |
Keywords: Assistive Robotics, Novel Interfaces and Interaction Modalities
Abstract: In this paper, we propose a novel gait trajectory adaptation method applicable to an exoskeleton for strength augmentation. The proposed method consists of an online positive-feedback trajectory adaptation method and a swing-phase duration adaptation method. Swing tests on an exoskeleton prototype showed that the proposed method adapts better to changes in trajectory amplitude than the general trajectory control method mainly used for rehabilitation, and that it can adjust the swing-phase duration as the walking speed changes. This method can be utilized to design fast walking control of an exoskeleton in the swing phase.
|
|
15:30-16:15, Paper WePoster1.21 | Add to My Program |
Above Knee Prosthesis for Ascending/descending Stairs with No External Energy Source |
Fujino, Ryota | Tokai Univ |
Kikuchi, Takayuki | Tokai Univ |
Koganezawa, Koichi | Tokai Univ |
Keywords: Robots in Education, Therapy and Rehabilitation, Assistive Robotics, Human Factors and Ergonomics
Abstract: The study deals with an above-knee prosthesis that allows amputees to ascend and descend stairs with no external energy source. Our previous study showed that the employed hydraulic system, propelled by the antagonistic actions of the knee and ankle joints, enables stair ascent/descent as well as level walking. This paper deals with subsequent developments to bring the amputee's walking gait closer to a normal one. We incorporated a flow control valve (FCV) into the hydraulic system that is automatically driven during walking. Walking experiments showed that the FCV provides the double knee action that normally appears in healthy persons' walking gait, and that it also provides a smooth transition from level-ground walking to stair ascent. However, some instability was found during stair descent.
|
|
15:30-16:15, Paper WePoster1.22 | Add to My Program |
The Gap between Human’s Attitude towards Robots in General and Human’s Expectation of an Ideal Everyday Life Robot |
Kuhnert, Barbara | Univ. of Freiburg |
Ragni, Marco | Univ. of Freiburg |
Lindner, Felix | Univ. of Freiburg |
Keywords: User-centered Design of Robots, Creating Human-Robot Relationships, Evaluation Methods and New Methodologies
Abstract: Acceptance, trust and the successful deployment of robots in humans' everyday lives depend both on technical implementation and on psychological aspects. The current work identifies essential features and characteristics that have the potential to increase the acceptance of robots, reduce prejudices and improve the development of appropriate and suitable robots with target-group-specific features and characteristics. We present the first part of a major research project that aims to develop a valid and reliable toolkit for an encompassing measurement of humans' attitudes towards robots. By means of the Semantic Differential Scale, we compared humans' attitudes towards robots in general with their expectations of an ideal, personal everyday-life robot. The results reveal major differences between these two concepts and a demand for robots adapted to humans' requirements.
|
|
15:30-16:15, Paper WePoster1.23 | Add to My Program |
Unintentional Entrainment Effect in a Context of Human Robot Interaction: An Experimental Study |
Ansermin, Eva | CNRS, ENSEA, Univ. De Cergy Pontoise |
Mostafaoui, Ghiles | CNRS, Univ. of CergyPontoise, ENSEA |
Sargentini, Xavier | CNRS ENSEA Univ. De Cergy Pontoise |
Gaussier, Philippe | CNRS UMR 8051, ENSEA, Cergy-Pontoise Univ |
Keywords: Non-verbal Cues and Expressiveness, Applications of Social Robots
Abstract: Modelling nonverbal communication in robotics is a crucial issue for improving human-robot interactions (HRI). Among several nonverbal behaviors, we focus in this article on unintentional rhythmic entrainment and synchronization, which have been proven to be highly important in intuitive and natural human-human communication. Hence, the rising question is whether this phenomenon can be reproduced in the context of HRI, and what the prerequisites are for ensuring its emergence. In this paper, we study rhythmical interactions during imitation games between a NAO robot and naive subjects. We analysed two main types of interactions: a first in which NAO performs movements at a fixed rhythm (unidirectional), and a second in which the robot is able to adopt the human's motion dynamics (bidirectional) using a neural model of the entrainment effect based on dynamical systems. We show that using such a model allows us to reach synchronization during the interactions, and that both partners (robot and human) adapt their frequency as observed in natural human-human interaction. This puts forward the importance of bidirectionality for HRI. Moreover, the participants shifted their motion dynamics during the interaction without noticing it, demonstrating the presence of such unintentional rhythmic entrainment in HRI.
|
|
15:30-16:15, Paper WePoster1.24 | Add to My Program |
Gesture Recognition for Humanoid Robot Teleoperation |
Ajili, Insaf | IBISC |
Keywords: Detecting and Understanding Human Activity, Machine Learning and Adaptation, Creating Human-Robot Relationships
Abstract: Interactive robotics is a vast and expanding research field. Interactions must be sufficiently natural, with robots exhibiting behavior that humans find socially acceptable and that adapts to user expectations, thus allowing easy integration into our daily lives in various fields (science, industry, health, etc.). Natural interaction during human-robot collaborative action needs suitable interaction techniques. In this paper we develop a gesture recognition system for natural and intuitive communication between a human and the NAO robot. However, recognizing meaningful gesture patterns from whole-body gestures is a complex task, which is why we used the Laban Movement Analysis technique to describe high-level gestures for NAO tele-operation. The major contributions of the present work are: (1) an efficient preprocessing step based on view-invariant human motion; (2) a robust descriptor vector based on the Laban Movement Analysis technique, generating compact and informative representations of human movement; and (3) a gesture recognition system based on the Hidden Markov Model, applied to teleoperate NAO using our own database dedicated to NAO tele-operation. Our approach was evaluated on two challenging datasets, Microsoft Research Cambridge-12 (MSRC-12) and UTKinect-Action. Experimental results show that our approach outperforms state-of-the-art methods.
|
|
15:30-16:15, Paper WePoster1.25 | Add to My Program |
Evaluation of a Robot Programming Framework for Non-Experts Using Symbolic Planning Representations |
Liang, Ying Siu | Univ. Grenoble Alpes |
Pellier, Damien | Lab. D'informatique De Grenoble - CNRS |
Fiorino, Humbert | Univ. of Grenoble-Alps |
Pesty, Sylvie | Univ. of Grenoble-Alps |
Keywords: Programming by Demonstration, Evaluation Methods and New Methodologies, HRI and Collaboration in Manufacturing Environments
Abstract: Cobots (collaborative robots) are revolutionising industries by allowing robots to work in close collaboration with humans. But many companies hesitate to adopt them, due to the lack of programming experts. In this work, we evaluate a robot programming framework for non-expert users that requires users to teach action models expressed in a symbolic planning language (PDDL). These action models allow the robot to leverage modern automated planners to achieve any user-defined goal. We conducted qualitative user experiments with a Baxter robot to evaluate non-expert users' understanding of the symbolic planning language and the usability of the framework. We showed that users with little to no programming experience can adopt the symbolic planning language and use the framework.
|
|
15:30-16:15, Paper WePoster1.26 | Add to My Program |
Evaluation of Deviation Detection and Isolation in Robot Task Execution |
Orendt, Eric M. | Univ. of Bayreuth |
Henrich, Dominik | Univ. of Bayreuth |
Keywords: Innovative Robot Designs, User-centered Design of Robots, Computational Architectures
Abstract: Robot Programming by Demonstration (PbD) allows the use of a general-purpose robot without specific programming skills. Most approaches in this field use machine learning methods to learn a robot behavior or skill. Besides these, there are a few One-Shot PbD approaches, which reduce the programming effort to a single demonstration and therefore allow non-experts to program a robot quickly and intuitively. However, a common drawback of One-Shot approaches is their decreased robustness during program execution compared to Multi-Shot approaches. This leads to our main motivation: making One-Shot PbD programs more robust. An important aspect of robustness is the ability to detect and handle unexpected events, e.g. a dropped object, or a storage place that should be free but is already occupied. We call such events deviations. In our previous work we proposed an approach that provides detection and isolation of deviations through a monitoring concept using entity-based resources. The advantages of this approach include a unified detection and complete classification principle. In this paper we evaluate our concept in a user study and share several lessons learned for improving the usability and comparability of our approach.
|
|
15:30-16:15, Paper WePoster1.27 | Add to My Program |
Investigation of Joint Action: Eye Blinking Behavior Improving Human-Robot Collaboration |
Hayashi, Kotaro | Tokyo Univ. of Agriculture and Tech |
Mizuuchi, Ikuo | Tokyo Univ. of Agriculture and Tech |
Keywords: Creating Human-Robot Relationships, Non-verbal Cues and Expressiveness, Detecting and Understanding Human Activity
Abstract: Robots are increasingly required to collaborate with people in real environments. In such collaboration, robots should develop teamwork with people as quickly as a capable human worker would. Non-verbal behavior plays a crucial role in establishing good team interactions, and it is associated with the influence of facial components such as the eyes. However, the facial components a robot needs for good human-robot collaboration have not yet been determined; many robots are designed based on a designer's ideas. This study therefore focused on blinking behavior and investigated its effect on teamwork. To tackle this issue, we drew on an idea from cognitive science, "joint action", in which interacting agents share what is on their minds. We conducted a Go/Nogo test with 29 participants to determine whether interacting participants share their minds with the robot as part of a team, using action space as a representation of their minds. The participants performed a task with robots under four eye conditions: blinking in sync, blinking independently, not blinking, and without eyes. The results indicated that the robots exhibiting the two blinking behaviors shifted participants' action space toward the robot: the area in which response times were shorter than in the far area moved from the participants' near area to the middle area (the center) between the participant and the robot.
|
|
15:30-16:15, Paper WePoster1.28 | Add to My Program |
Assessing the Social Criteria for Human-Robot Collaborative Navigation: A Comparison of Human-Aware Navigation Planners |
Khambhaita, Harmish | Lab. D’analyse Et D’architecture Des Système, Univ |
Alami, Rachid | CNRS |
Keywords: Motion Planning and Navigation in Human-Centered Environments, Social Intelligence for Robots, Cooperation and Collaboration in Human-Robot Teams
Abstract: This paper focuses on requirements for effective human-robot collaboration in interactive navigation scenarios. We designed several use-cases in which humans and a robot had to move in the same environment, resembling canonical path-crossing situations; these use-cases include open as well as constrained spaces. Three different state-of-the-art human-aware navigation planners were used for planning the robot paths in all selected use-cases. We compare the results of simulation experiments with these human-aware planners in terms of the quality of the generated trajectories, together with a discussion of the planners' capabilities and limitations. The results show that the human-robot collaborative planner performs better in everyday path-crossing configurations. This suggests that the criteria used by the human-robot collaborative planner (safety, time-to-collision, directional costs) are good candidate measures for designing acceptable human-aware navigation planners. Consequently, we analyze the effects of these social criteria and draw perspectives on the future evolution of human-aware navigation planning methods.
|
|
15:30-16:15, Paper WePoster1.29 | Add to My Program |
Efficient Programming of Manipulation Tasks by Demonstration and Adaptation |
Elliott, Sarah | Univ. of Washington |
Toris, Russell | Fetch Robotics |
Cakmak, Maya | Univ. of Washington |
Keywords: Programming by Demonstration, Novel Interfaces and Interaction Modalities
Abstract: Programming by Demonstration (PbD) is a promising technique for programming mobile manipulators to perform complex tasks, such as stocking shelves in retail environments. However, programming such tasks purely by demonstration can be cumbersome and time-consuming, as they involve many steps and differ for each item being manipulated. We propose a system that allows new tasks to be programmed with a combination of demonstration and adaptation. This approach eliminates the need to demonstrate repetitions within one task or variations of a task for different items, replacing those demonstrations with a much more time-efficient adaptation procedure. We develop a Graphical User Interface (GUI) that enables the adaptation procedure by allowing grouping, duplicating, removing, reordering, and repositioning parts of a demonstration to adapt and extend it. We implement our approach on a single-armed mobile manipulator and evaluate our system on several test scenarios with one expert user and four novice users. We demonstrate that the combination of demonstration and adaptation requires substantially less programming time than demonstration alone.
|
|
15:30-16:15, Paper WePoster1.30 | Add to My Program |
Exploring Data Augmentation Methods in Reverberant Human-Robot Voice Communication |
Gomez, Randy | Honda Res. Inst. Japan Co., Ltd |
Nakamura, Keisuke | Honda Res. Inst. Japan Co., Ltd |
Keywords: Multimodal Interaction and Conversational Skills, Sound design for robots
Abstract: Collecting training data is not an easy task, especially in situations involving robots, where it requires tremendous physical effort. The ability to augment data through synthetic means is a convenient tool to solve this problem; it is therefore important to evaluate how useful augmented data are. In this paper, we explore data augmentation schemes for reverberant environments and investigate a method for effectively selecting data. We experiment under real reverberant environment conditions and investigate both a traditional automatic speech recognition (ASR) system based on the Gaussian mixture model-hidden Markov model (GMM-HMM) and a more current system based on deep neural networks (HMM-DNN). Our results show that the combination of data augmentation and data selection further improves system performance. In our experiments, we used real test data from a reverberant, hands-free human-robot communication scenario.
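The abstract does not detail the augmentation schemes. As a hedged sketch of the general idea only, reverberant training copies are commonly synthesized by convolving clean utterances with room impulse responses (RIRs); the helper names below are illustrative assumptions, not the authors' code:

```python
import numpy as np

def reverberate(clean, rir):
    """Simulate a reverberant recording by convolving a clean signal
    with a room impulse response, truncated to the input length."""
    wet = np.convolve(clean, rir)[: len(clean)]
    # Rescale so the augmented copy keeps the clean signal's energy.
    norm = np.linalg.norm(wet)
    if norm > 0:
        wet *= np.linalg.norm(clean) / norm
    return wet

def augment(dataset, rirs):
    """One augmented copy per (utterance, RIR) pair."""
    return [reverberate(x, r) for x in dataset for r in rirs]
```

Data selection would then filter this augmented pool, e.g. by similarity to the target acoustic condition.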
|
|
15:30-16:15, Paper WePoster1.31 | Add to My Program |
Development of Easily Wearable Assistive Device with Elastic Exoskeleton for Paralyzed Hand |
Kawashimo, Josuke | Yokohama National Univ |
Yamanoi, Yusuke | Yokohama National Univ |
Kato, Ryu | Yokohama National Univ |
Keywords: Assistive Robotics, Robots in Education, Therapy and Rehabilitation, Medical and Surgical Applications
Abstract: Numerous robotic devices have been developed to assist hand rehabilitation; however, a majority of these are difficult for stroke survivors to wear. The purpose of this study was to develop an assistive device for treating flexion contracture, which supports the extension of each finger and may easily be worn on a paralyzed hand. To facilitate ease of use, we suggested a new wearing method for this wire-driven device with an elastic skeleton, allowing users to extend the device from the back of the hand onto the fingertip. The functional capacity of this device was measured through fingertip contact force and estimations of supporting torque. Results showed the device provides sufficient torque for finger extension with controlled wire tension. Moreover, experimental results confirmed that the novel design significantly decreased the time it took users to don the device compared to other designs.
|
|
15:30-16:15, Paper WePoster1.32 | Add to My Program |
Spike Response Threshold Model for Task Allocation in Multi-Agent Systems |
Lee, Wonki | Yonsei Univ |
Keywords: Social Intelligence for Robots
Abstract: This paper focuses on the task allocation problem in multi-agent systems, and we consider how to regulate the proportion of agents performing each task so that it matches the proportion of task demands. The response threshold model, inspired by the division of labor observed in insect societies, is applied. Each agent converts information from the surrounding environment into spikes and decides its task based on the response threshold model. The decision of an agent has only partial control over its environment, but the overall system shows the desired performance. The algorithm is implemented using simulated robots and demonstrates its adaptivity to changes in task demands and in the number of agents in a group.
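The paper builds on the response threshold model from insect societies. Below is a minimal sketch of the canonical fixed-threshold rule only; the paper's spike-based variant is not reproduced, and the function names and exponent are illustrative assumptions:

```python
import random

def engage_probability(stimulus, threshold, n=2):
    """Classic response-threshold rule: the probability that an agent
    takes up a task grows with the task stimulus and falls with the
    agent's threshold."""
    s, th = stimulus ** n, threshold ** n
    return s / (s + th) if s + th > 0 else 0.0

def step(agents, stimulus, rng=random):
    """One decision round: each agent (given by its threshold)
    independently decides whether to engage the task."""
    return [rng.random() < engage_probability(stimulus, th) for th in agents]
```

With heterogeneous thresholds, low-threshold agents engage first, and the engaged proportion tracks the task demand as the stimulus rises or falls.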
|
|
15:30-16:15, Paper WePoster1.33 | Add to My Program |
A Long Time Ago in a Galaxy Far, Far Away…the Effects of Narration and Appearance on the Perception of Robots |
Rosenthal-von der Pütten, Astrid Marieke | Univ. Duisburg-Essen |
Straßmann, Carolin | Univ. of Duisburg-Essen |
Mara, Martina | Ars Electronica Center |
Keywords: Narrative and Story-telling in Interaction, Storytelling in HRI, Applications of Social Robots
Abstract: First evidence suggests that introducing robots by means of a narrative story can lead to more positive interactions and evaluations. It is unclear whether this positive framing of robots by narratives works equally well for different robot design approaches and appearances. To address this open question we conducted a 2x6 between-subjects online experiment, varying the introduction (narrative vs. instruction manual) and the appearance of the robot (six different robot appearances). We replicated previous results on evaluation effects for different robot appearances. Results indicate that robots introduced by a narrative story were evaluated as more likable, intelligent, autonomous, and humanlike. They were also perceived as less mechanical and less uncanny. However, there were no interaction effects between narration and robot appearance, suggesting that narration is beneficial for robots regardless of their appearance and is hence a strong mechanism for shaping positive expectations before actually interacting with a robot.
|
|
We3A Regular Session, Belem I |
Add to My Program |
Human-Centered Motion Planning and Navigation (I) |
|
|
Chair: Shibata, Tomohiro | Kyushu Inst. of Tech |
Co-Chair: Crick, Christopher | Oklahoma State Univ |
|
16:15-16:30, Paper We3A.1 | Add to My Program |
A Framework for Interactive Teaching of Virtual Borders to Mobile Robots |
Sprute, Dennis | Bielefeld Univ. of Applied Sciences |
Rasch, Robin | Bielefeld Univ. of Applied Sciences |
Tönnies, Klaus | Otto-Von-Guericke Univ. Magdeburg |
König, Matthias | Bielefeld Univ. of Applied Sciences |
Keywords: Motion Planning and Navigation in Human-Centered Environments
Abstract: The increasing number of robots in home environments leads to an emerging coexistence between humans and robots. Robots undertake common tasks and support the residents in their everyday life. People appreciate the presence of robots in their environment as long as they retain control over them. One important aspect is control of a robot's workspace. We therefore introduce virtual borders to precisely and flexibly define the workspace of mobile robots. First, we propose a novel framework that allows a person to interactively restrict a mobile robot's workspace. To show the validity of this framework, we provide a concrete implementation based on visual markers. Afterwards, the mobile robot is capable of performing its tasks while respecting the new virtual borders. The approach is accurate, flexible and less time-consuming than explicit robot programming. Hence, even non-experts are able to teach virtual borders to their robots, which is especially interesting in domains such as vacuum-cleaning or service robots in home environments.
|
|
16:30-16:45, Paper We3A.2 | Add to My Program |
Socially Acceptable Robot Navigation Over Groups of People |
Vega, Araceli | Univ. of Extremadura |
Manso, Luis J. | Univ. of Extremadura |
Bustos, Pablo | Univ. De Extremadura |
Núñez Trujillo, Pedro | Univ. De Extremadura |
Guimarães Macharet, Douglas | Univ. Federal De Minas Gerais |
Keywords: Motion Planning and Navigation in Human-Centered Environments
Abstract: Considering the widespread use of mobile robots in different parts of society, it is important to provide them with the capability to behave in a socially acceptable manner. Human-Robot Interaction has therefore recently become a research topic of great importance. Autonomous navigation is a fundamental task in Robotics, and the literature offers several strategies that produce paths optimized for length or time. However, considering the recent use of mobile robots in a more social context, the use of such classical techniques is restricted. In this article we therefore present a social navigation approach for environments containing groups of people. The proposal uses a density function to efficiently represent groups of people and modifies the navigation architecture to include the social behaviour of the robot during its motion. This architecture is based on the combined use of the Probabilistic Roadmap (PRM) and Rapidly-exploring Random Tree (RRT) path planners and an adaptation of the elastic band algorithm. Experimental evaluation was carried out in different simulated environments, providing insight into the performance of the proposed technique, which surpasses classical techniques with no proxemics awareness in terms of social impact.
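The abstract mentions a density function for representing groups of people without giving its form. One common choice — offered here purely as an assumed sketch, not the authors' exact function — is a sum of 2D Gaussians centred on each person, which a planner can add to its path cost so trajectories keep a proxemic distance from groups:

```python
import math

def group_cost(x, y, people, sigma=1.2):
    """Social cost at (x, y): a sum of unnormalized 2D Gaussians
    centred on each person, with spread `sigma` (an assumed value)."""
    cost = 0.0
    for px, py in people:
        d2 = (x - px) ** 2 + (y - py) ** 2
        cost += math.exp(-d2 / (2.0 * sigma ** 2))
    return cost
```

A PRM/RRT planner could then weight edges by this cost, and an elastic band could be pushed away from its gradient.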
|
|
16:45-17:00, Paper We3A.3 | Add to My Program |
Generating 3D Fundamental Map by Large-Scale SLAM and Graph-Based Optimization Focused on Road Center Line |
Niijima, Shun | Tokyo Univ. of Science, National Inst. of Advanced Indu |
Nitta, Jirou | Tokyo Univ. of Science |
Sasaki, Yoko | National Inst. of Advanced Industrial Science and Tech |
Mizoguchi, Hiroshi | Tokyo Univ. of Science |
Keywords: Applications of Social Robots, Evaluation Methods and New Methodologies, Motion Planning and Navigation in Human-Centered Environments
Abstract: This paper presents a method to generate a large-scale 3D fundamental map from a running vehicle. To create an easy-to-use approach suited to frequent updates, we propose a system that utilizes simultaneous localization and mapping (SLAM), a robot mapping technology. Traditional methods incur high mapping costs because they require special machines or extensive manual operations; the existing mobile mapping system (MMS) requires manual anchor-point measurement to ensure accuracy. To solve this problem, we propose a 3D map optimization method that uses road information from the standard map issued by the Geospatial Information Authority of Japan. From the SLAM result, the road center line of the 3D shape map is estimated under the assumption that the car runs on the road. Pose graph optimization between the estimated road center line and that of the standard map corrects the cumulative distortion of the SLAM result. Experimental results with on-vehicle 3D LIDAR observation show that the proposed system can correct the cumulative distortion of the SLAM results and automatically generate a large-scale 3D map with assured reference accuracy.
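The core correction step aligns the SLAM-estimated road center line with that of the standard map via pose graph optimization. As a much-simplified stand-in — a single rigid translation rather than a full pose graph, with illustrative names — the alignment idea can be sketched as:

```python
def align_offset(slam_pts, map_pts):
    """Least-squares translation that best aligns corresponding
    SLAM road-centre-line points to the reference-map line. A real
    pose-graph optimizer would instead distribute such corrections
    along the whole trajectory to remove cumulative drift."""
    n = len(slam_pts)
    dx = sum(m[0] - s[0] for s, m in zip(slam_pts, map_pts)) / n
    dy = sum(m[1] - s[1] for s, m in zip(slam_pts, map_pts)) / n
    return dx, dy
```

Applying the returned offset to the SLAM points minimizes the summed squared distance to their reference correspondences.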
|
|
17:00-17:15, Paper We3A.4 | Add to My Program |
Progressive Stochastic Motion Planning for Human-Robot Interaction |
Oguz, Ozgur Salih | Tech. Univ. of Munich |
Sari, Omer Can | Tech. Univ. Muenchen |
Hoang Dinh, Khoi | Tech. Univ. München |
Wollherr, Dirk | Tech. Univ. München |
Keywords: Motion Planning and Navigation in Human-Centered Environments, HRI and Collaboration in Manufacturing Environments, Cooperation and Collaboration in Human-Robot Teams
Abstract: This paper introduces a new approach to optimal online motion planning for human-robot interaction scenarios. For a safe, comfortable, and efficient interaction between a human and a robot working in close proximity, robot motion has to be agile and perceived as natural by the human partner. The robot has to be aware of its environment, including human motions, in order to proactively take actions while ensuring safety and task fulfillment. Human motion prediction constitutes the fundamental perception input for the motion planner. The prediction system, based on probabilistic movement primitives, generates a prediction of human motion as a trajectory distribution learned in an offline phase. The proposed stochastic optimization-based planning algorithm then progressively finds feasible optimization parameters to re-plan the motion online, ensuring collision avoidance while minimizing the task-related trajectory cost. Our simulation results show that the proposed approach produces collision-free trajectories while still reaching the goal successfully. We also highlight the performance of our planner in comparison to previous methods in stochastic motion planning.
|
|
17:15-17:30, Paper We3A.5 | Add to My Program |
Climbing Over Large Obstacles with a Humanoid Robot Via Multi-Contact Motion Planning |
Kanajar, Pavan | Istituto Italiano Di Tecnologia |
Caldwell, Darwin G. | Istituto Italiano Di Tecnologia |
Kormushev, Petar | Imperial Coll. London |
Keywords: Motion Planning and Navigation in Human-Centered Environments, Programming by Demonstration, Novel Interfaces and Interaction Modalities
Abstract: Incremental progress in humanoid robot locomotion over the years has achieved important capabilities such as navigation over flat or uneven terrain, stepping over small obstacles and climbing stairs. However, the locomotion research has mostly been limited to using only bipedal gait and only foot contacts with the environment, using the upper body for balancing without considering additional external contacts. As a result, challenging locomotion tasks like climbing over large obstacles relative to the size of the robot have remained unsolved. In this paper, we address this class of open problems with an approach based on multi-body contact motion planning guided through physical human demonstrations. Our goal is to make the humanoid locomotion problem more tractable by taking advantage of objects in the surrounding environment instead of avoiding them. We propose a multi-contact motion planning algorithm for humanoid robot locomotion which exploits the whole-body motion and multi-body contacts including both the upper and lower body limbs. The proposed motion planning algorithm is applied to a challenging task of climbing over a large obstacle. We demonstrate successful execution of the climbing task in simulation using our multi-contact motion planning algorithm initialized via a transfer from real-world human demonstrations of the task and further optimized.
|
|
We3B Special Session, Ajuda II |
Add to My Program |
Human-Assistive Technologies in the "Real World" |
|
|
Chair: Chugo, Daisuke | Kwansei Gakuin Univ |
Co-Chair: Yokota, Sho | Toyo Univ |
Organizer: Chugo, Daisuke | Kwansei Gakuin Univ |
Organizer: Yokota, Sho | Toyo Univ |
Organizer: Makino, Koji | Univ. of Yamanashi |
Organizer: Hashimoto, Hiroshi | Advanced Inst. of Industrial Tech |
|
16:15-16:30, Paper We3B.1 | Add to My Program |
Development of a Concavo-Convex Non-Woven Cloth to Reduce Shock to Fruit (I) |
Makino, Koji | Univ. of Yamanashi |
Ishida, Kazuyoshi | Univ. of Yamanashi |
Watanabe, Hiromi | Univ. of Yamanashi |
SUZUKI, Yutaka | Univ. of Yamanashi |
Kotani, Shinji | Univ. of Yamanashi |
Terada, Hidetsugu | Univ. of Yamanashi |
Keywords: Assistive Robotics, Innovative Robot Designs, Novel Interfaces and Interaction Modalities
Abstract: This paper describes a unique sheet developed from non-woven fabric that can reduce the transportation shock experienced by easily damaged fruits, such as peaches or strawberries, when they are transported to foreign countries where there is significant demand for them. The proposed sheet comprises two non-woven fabric cloths suitable for packing these types of fruit. The non-woven cloth has useful characteristics such as air permeability, sealing properties, and X-ray transmission properties that make it suitable for transporting easily damaged fruit. Although the proposed sheet is applied here to fruit packing, its air permeability and shock resistance also make it useful for assistive robots and human-interaction robots. At present the sheet is made manually, since no production method has yet been established; the human workload would be reduced if a sheet-making mechanism could be realized. This paper introduces the properties of the sheet, investigates its performance experimentally, and finally describes a mechanism for producing the sheet, which is investigated using an experimental prototype.
|
|
16:30-16:45, Paper We3B.2 | Add to My Program |
Pattern Based Standing Assistance Adapted to Individual Subjects on a Robotic Walker (I) |
Chugo, Daisuke | Kwansei Gakuin Univ |
Kawazoe, Shohei | Kwansei Gakuin Univ |
Yokota, Sho | Toyo Univ |
Hashimoto, Hiroshi | Advanced Inst. of Industrial Tech |
Katayama, Takahiro | RT.WORKS |
Mizuta, Yasuhide | RT.WORKS |
Koujina, Atsushi | RT.WORKS |
Keywords: Assistive Robotics
Abstract: This study proposes pattern-based standing assistance on a robotic walker with a standing and sitting assistance function. In our previous work, we developed a standing assistance system consisting of a powered walker and a standing assistance manipulator, small enough to be used in a typically narrow room such as a bathroom. Our system focuses on domestic use by elderly people who need nursing in their day-to-day lives. The body conditions of elderly users differ from person to person, and so do the standing assistance schemes they require. However, it is difficult and costly to design a suitable standing assistance scheme for each elderly user's symptoms in real time. The proposed walker therefore provides several standing assistance patterns, each designed for users with different symptoms, and selects the suitable one for its user. This approach is practical to implement at low cost. The performance of our proposed system was verified through experiments using our prototype with elderly and handicapped subjects.
|
|
16:45-17:00, Paper We3B.3 | Add to My Program |
Proposal of Non-Rotating Joint Drive Type High Output Power Assist Suit for Squat Lifting (I) |
Mohri, Shun | Chuo Univ |
Inose, Hiroki | Chuo Univ |
Arakawa, Hirokazu | Chuo Univ |
yokoyama, kazuya | Nabtesco Corp |
Yamada, Yasuyuki | Chuo Univ |
Kikutani, Isao | Nabtesco Corp |
Nakamura, Taro | Chuo Univ |
Keywords: Assistive Robotics, User-centered Design of Robots, Innovative Robot Designs
Abstract: Lower back pain is a major health concern worldwide. One cause of lower back pain is the burden placed on the lumbar region by handling heavy objects. To reduce this burden, the Ministry of Health, Labour and Welfare in Japan has recommended “squat lifting.” However, this technique, which requires the lower limbs to support a large force, is not widely practiced. We therefore aimed to develop a power assist suit for squat lifting. In this paper, we propose a gastrocnemius-reinforcing mechanism. Next, we discuss the estimation of joint torque from motion analysis of squat lifting in order to construct a prototype. Finally, we describe the performance of the prototype mounted on a human body. The %MVC of the gastrocnemius during squat lifting was reduced by 40% using the prototype assist suit compared with the value without the suit.
|
|
17:00-17:15, Paper We3B.4 | Add to My Program |
Liquid Feeding System Using Cooperative Towing by Multiple Drones (I) |
Suzuki, Masaya | Toyo Univ |
Yokota, Sho | Toyo Univ |
Imadu, Atsushi | Osaka City Univ |
Matsumoto, Akihiro | Toyo Univ |
Chugo, Daisuke | Kwansei Gakuin Univ |
Hashimoto, Hiroshi | Advanced Inst. of Industrial Tech |
Keywords: Applications of Social Robots, Evaluation Methods and New Methodologies, Cooperation and Collaboration in Human-Robot Teams
Abstract: Recently, spraying systems have been proposed that use a drone to tow a liquid tube. However, the towable tube length is limited by the drone's payload, and if liquid leaks from the tube the drone may fall. To address these problems, this research proposes a liquid feeding system in which the tube and cables are towed cooperatively by multiple drones. Using multiple drones allows a longer towable tube and an enlarged working space. In addition, a power cable is towed together with the tube, which extends working time by removing the limitation of battery capacity. In particular, this paper describes the movable range of a drone in consideration of tube tension.
|
|
17:15-17:30, Paper We3B.5 | Add to My Program |
Basic Study on Appearance-Based Proficiency Evaluation of the Football Inside Kick (I) |
Kobayashi, Naomichi | Tokyo Denki Univ |
Sato, Shin'ichi | Tokyo Denki Univ |
Matsuzaki, Yuta | Tokyo Denki Univ |
NAKAMURA, Akio | Tokyo Denki Univ |
Keywords: Detecting and Understanding Human Activity, Monitoring of Behaviour and Internal States of Humans, Evaluation Methods and New Methodologies
Abstract: We are developing an appearance-based proficiency evaluation system for individual fundamental football skills. As a basic stage of the research, we propose a method to discriminate between beginners and experts based on features of the inside-kick motion, which is a fundamental skill in football. In addition, we propose a method for scoring the inside-kick motion. To provide ground truth, a football coach (former professional football player) scored a series of inside-kick trials based on five criteria. Each criterion was scored out of 20 points, resulting in an overall score of up to 100 points for each trial. We utilized dense trajectories to extract features from the inside-kick motion and bag of features for feature vectorization. A random forest was then adopted to remove the effects of individual techniques from the feature vectors. Finally, a support vector machine was utilized to discriminate between beginners and experts. Each trial was evaluated based on five criteria to score the inside-kick motion using support vector regression. Total score estimates were calculated as the sum of the scores for each criterion. The results reveal that the accuracy of beginner/expert discrimination reached a maximum value of 91%. The average difference between the scores estimated by the proposed method and those rated by the football coach was approximately 0.17 points. Overall, these experimental results confirm the effectiveness of the appearance-based proficiency evaluation system.
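The pipeline scores each of the five criteria separately (each out of 20) and sums them into a total out of 100. As a sketch of that aggregation only — deliberately replacing the paper's support vector regression with a trivial 1-nearest-neighbour stand-in, since the actual features and trained models are not given here:

```python
def predict_criterion(train, query):
    """1-nearest-neighbour stand-in for a per-criterion regressor:
    return the score of the closest training feature vector.
    `train` is a list of (feature_vector, score) pairs."""
    best = min(
        train,
        key=lambda pair: sum((a - b) ** 2 for a, b in zip(pair[0], query)),
    )
    return best[1]

def total_score(criterion_models, query):
    """Total score = sum of the five criterion scores (each 0-20)."""
    return sum(predict_criterion(train, query) for train in criterion_models)
```

In the paper's setup each criterion would instead have its own support vector regressor trained on bag-of-features vectors from dense trajectories.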
|
|
We3C Special Session, Ajuda III |
Add to My Program |
Social and Affective Robots |
|
|
Chair: Lee, Jaeryoung | Chubu Univ |
Co-Chair: Barakova, Emilia I. | Eindhoven Univ. of Tech |
Organizer: Lee, Jaeryoung | Chubu Univ |
Organizer: Rudovic, Ognjen | MIT Media Lab |
Organizer: Picard, Rosalind W. | MIT Media Lab |
|
16:15-16:30, Paper We3C.1 | Add to My Program |
User Experience of Conveying Emotions by Touch (I) |
Alenljung, Beatrice | Univ. of Skövde |
Andreasson, Rebecca | Uppsala Univ |
Billing, Erik Alexander | Univ. of Skövde |
Lindblom, Jessica | Univ. of Skövde |
Lowe, Robert | Univ. of Skövde |
Keywords: User-centered Design of Robots, Social Intelligence for Robots, Creating Human-Robot Relationships
Abstract: In the present study, 64 users were asked to convey eight distinct emotions to a humanoid Nao robot via touch, and were then asked to evaluate their experiences of performing that task. Large differences between emotions were revealed. Users perceived conveying positive/pro-social emotions as significantly easier than negative emotions, with love and disgust as the two extremes. When asked whether they would act differently towards a human, compared to the robot, the users’ replies varied. A content analysis of interviews revealed a generally positive user experience (UX) while interacting with the robot, but users also found the task challenging in several ways. Three major themes with impact on the UX emerged: responsiveness, robustness, and trickiness. The results are discussed in relation to a study of human-human affective tactile interaction, with implications for human-robot interaction (HRI) and the design of social and affective robotics in particular.
|
|
16:30-16:45, Paper We3C.2 | Add to My Program |
Electrodermal Activity: Explorations in the Psychophysiology of Engagement with Social Robots in Dementia (I) |
Perugia, Giulia | Eindhoven Univ. of Tech |
Rodríguez-Martín, Daniel | CETpD-UPC |
Díaz-Boladeras, Marta | Res. Center for Dependency Care and Autonomous Living, UPC |
Català, Andreu | Univ. Pol. De Catalunya |
Barakova, Emilia I. | Eindhoven Univ. of Tech |
Rauterberg, Matthias | Eindhoven Univ. of Tech |
Keywords: Monitoring of Behaviour and Internal States of Humans, Robots in Education, Therapy and Rehabilitation, Motivations and Emotions in Robotics
Abstract: The study of engagement is central to improve the quality of care and provide people with dementia with meaningful activities. Current assessment techniques of engagement for people with dementia rely exclusively on behavior observation. However, novel unobtrusive sensing technologies, capable of tracking psychological states during activities, can provide us with a deeper layer of knowledge about engagement. We compared the engagement of persons with dementia involved in two playful activities, a game-based cognitive stimulation and a robot-based free play, using observational rating scales and electrodermal activity (EDA). Results highlight significant differences in observational rating scales and EDA between the two activities and several significant correlations between the items of observational rating scales of engagement and affect, and EDA features.
|
|
16:45-17:00, Paper We3C.3 | Add to My Program |
Method and Improvisation: Theatre Arts Performance Techniques to Further HRI in Social and Affective Robots (I) |
Greer, Julienne | Univ. of Texas at Arlington |
Keywords: Embodiment, Empathy and Intersubjectivity, Narrative and Story-telling in Interaction, Personalities for Robotic or Virtual Characters
Abstract: Theatre Arts methodologies of improvisational humor and method performance techniques are combined for a social robot study to further the affective, relational, and social interaction between humans and robots for positive effect. The complexity of performance methodology develops authentic human social interaction and is thereby well-suited to a positive human-robot model. The analysis of this pilot research study incorporating the proposed theatre methodology approach allows innovative solutions in affective communication to be studied from a multimodal sensory perspective.
|
|
17:00-17:15, Paper We3C.4 | Add to My Program |
A Pre-Investigation for Social Robotics for Older Adults Based on User Expectations (I) |
Ho, YiHsin | Tokyo Metropolitan Univ |
Sato-Shimokawara, Eri | Tokyo Metropolitan Univ |
Yamaguchi, Toru | Tokyo Metropolitan Univ |
Tagawa, Norio | Tokyo Metropolitan Univ |
Keywords: User-centered Design of Robots, Creating Human-Robot Relationships, Social Intelligence for Robots
Abstract: This paper draws on users’ questionnaire data to discuss and suggest the appearance and functions of robots for older adults. It starts from people’s familiar viewpoint, in particular the principle that a robot should help people without making them feel disturbed. Questionnaire data are collected as basic data, and data analysis and data mining are applied to distinguish people’s different needs and situations. The questionnaire covers people’s background and their patterns of electronic equipment use, which helps construct different clusters and robot images. Based on users’ opinions, an image of the ideal robot can be formed, and a useful robot and its supporting background system can potentially be created. This paper is not only a critical review of developed robots and robotic systems, but also aims to provide a new direction for research on developing better assistant robots for older adults.
|
|
17:15-17:30, Paper We3C.5 | Add to My Program |
The Effects of the Robot's Information Delivery Types on Users' Perception Toward the Robot (I) |
Kang, Dahyun | Ewha Womans Univ |
Kim, Min-Gyu | Korea Inst. of Robot and Convergence |
Kwak, Sonya Sona | Ewha Womans Univ |
Keywords: User-centered Design of Robots, Creating Human-Robot Relationships, Robot Companions and Social Robots
Abstract: This study aims to investigate the effects of information delivery types on users’ perception of a robot. We conducted two experiments to explore the appropriate information delivery type in each situation. In the first study, we compared which type of information delivery suits the situation in which the robot conveys environmental states. In the second study, we examined the proper information delivery type for the situation in which the robot conveys its internal states. In both studies we conducted a 3 (information delivery type: speech vs. reflexive cue vs. none) within-participants experiment (N=24). The results showed that participants perceived a robot with the reflexive cue as more anthropomorphic and animate than one with speech or one without a response. In addition, in both studies participants evaluated the service of the robot with the reflexive cue more positively than that of the robot with speech or the robot without a response.
|
|
We3D Special Session, Belem II |
Add to My Program |
Cloud Technologies: Empowering Robots to Connect Society |
|
|
Chair: Luis, Santos | Univ. of Coimbra |
Co-Chair: Sgorbissa, Antonio | Univ. of Genova |
Organizer: Samaras, George | Univ. of Cyprus |
Organizer: Andreou, Panayiotis | Univ. of Central Lancashire, Cyprus |
Organizer: Luis, Santos | Univ. of Coimbra |
|
16:15-16:30, Paper We3D.1 | Add to My Program |
Learning through Sharing and Distributing Knowledge with Application to Object Recognition and Information Retrieval (I) |
Mignon, Alexis | ProbaYes |
Le Hy, Ronan | ProbaYes |
Bronisz, Alban | ProbaYes |
Mekhnacha, Kamel | Probayes |
Luis, Santos | Univ. of Coimbra |
Keywords: Robot Companions and Social Robots, Applications of Social Robots, Machine Learning and Adaptation
Abstract: The GrowMeUp project builds an assisted living environment based on a service robotics platform. The platform is able to learn the needs and habits of elderly persons; its functionalities evolve to help them to stay active, independent and socially involved longer. Following the recent interest in cloud-enhanced robotics, we present a general framework used to learn models by sharing and distributing knowledge between a cloud platform and a robots network. We also provide two concrete example services that take advantage of the cloud structure in order to enhance their performance.
|
|
16:30-16:45, Paper We3D.2 | Add to My Program |
BUM: Bayesian User Model for Distributed Social Robots (I) |
Martins, Gonçalo S. | Univ. of Coimbra |
Luis, Santos | Univ. of Coimbra |
Dias, Jorge | Univ. of Coimbra |
Keywords: Long-term Experience and Longitudinal HRI Studies, Computational Architectures, Monitoring of Behaviour and Internal States of Humans
Abstract: In this work we present a Bayesian User Model for inferring the characteristics and inter-user patterns of a population of users. The model can receive evidence gathered by various interactive devices, such as social robots or wearable devices. The system is modular, with each module responsible for gathering information and observations from persons present in the system's operating scenario. This information enables each module to determine a single characteristic of the person. New observations and measurements received by the system are fused with previous knowledge by a sub-process based on an information-theoretic technique. This allows the system to be implemented in diverse heterogeneous distributed system topologies, extending beyond robotics. We conducted experiments involving a team of social robots and a simulated user population. Our experiments show that the system is able to learn and classify persons' characteristics, and to find relevant user groups via clustering. This system can potentially be used to gather information on a large set of persons, as well as serve as an information source for user-adaptive applications in areas such as Robotics, Ambient Assisted Living (AAL) and the Internet of Things.
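The abstract describes fusing new observations with previous knowledge about each user characteristic. As a minimal generic sketch of such an update over a discrete characteristic — the paper's information-theoretic sub-process is not specified here, so this is plain Bayesian updating offered as a stand-in:

```python
def fuse(prior, likelihood):
    """Fuse an observation's likelihood over a discrete user
    characteristic with the prior belief (Bayes rule + normalization).
    Both arguments map characteristic values to non-negative weights."""
    post = {c: prior[c] * likelihood.get(c, 0.0) for c in prior}
    z = sum(post.values())
    if z == 0.0:
        return dict(prior)  # uninformative observation: keep the prior
    return {c: p / z for c, p in post.items()}
```

Repeated fusion of consistent evidence sharpens the belief, which is what lets distributed modules incrementally refine a shared user profile.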
|
|
16:45-17:00, Paper We3D.3 | Add to My Program |
Speaking Robots: The Challenges of Acceptance by the Ageing Society (I) |
Oliveira, José | Univ. of Coimbra |
Martins, Gonçalo S. | Univ. of Coimbra |
Jegundo, Ana | Cáritas Diocesana De Coimbra |
Dantas, Carina | Cáritas Diocesana De Coimbra |
Wings, Cindy | Zuyderland, Sittard-Geleen, Netherlands |
Luis, Santos | Univ. of Coimbra |
Dias, Jorge | Univ. of Coimbra |
Perdigão, Fernando | Univ. of Coimbra |
Keywords: Creating Human-Robot Relationships, Applications of Social Robots
Abstract: The ability of robots to dialogue with humans is a critical Human-Machine Interaction feature when it comes to transferring robots into society. This ability gains additional importance for elderly people, who find it more comfortable and natural to interact using voice, since physical impairments may hinder the use of other interaction modalities (e.g. touch screens). Challenges such as recognition accuracy, distant speech, the idiosyncrasies of elderly voices (fading, muffled pronunciation, etc.), the effects of surrounding environmental noise, and the expressiveness of the robot when speaking become highly relevant to the acceptance and usability of service robots by the ageing population. In this paper, we present the results, challenges and solutions developed during a nine-month iterative evaluation process that took place within the GrowMeUp project, with a focus on speech recognition and synthesis. The paper concludes with an identification of open scientific and technological problems, based on our interpretation of the results, which we identify as critical for the acceptance and usability of robots by an ageing society.
|
|
17:00-17:15, Paper We3D.4 | Add to My Program |
Interoperability in Cloud Robotics - Developing and Matching Knowledge Information Models for Heterogeneous Multi-Robot (I) |
Quintas, João | Inst. Pedro Nunes |
Menezes, Paulo | Inst. of Systems and Robotics |
Dias, Jorge | Univ. of Coimbra |
Keywords: Computational Architectures, Machine Learning and Adaptation, Robot Companions and Social Robots
Abstract: Every file, document, database and piece of digital information is now going through the Cloud. Leveraged by developments in information systems, Cloud Robotics is evolving at a steady pace and has attracted attention over the past 5 years. This recent field of Robotics allows engineers to envisage new and exciting applications for robots in the near future. This work proposes Cloud Robotics as a means to integrate semantic reasoning in a multi-robot system, using self-created knowledge bases in each robot, in order to coordinate complex task allocation. An auction-based coordination method and a knowledge matching algorithm were implemented to study this subject. The obtained results demonstrate that the coordination of a large multi-robot system and the knowledge matching process can be computationally demanding, thus making them perfect candidate features to be "cloudyfied".
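The abstract mentions an auction-based coordination method for task allocation without detailing it. A minimal sketch of the general idea, assuming a greedy sequential single-item auction (the paper's actual protocol and cost function are not reproduced here):

```python
def auction(tasks, robots, cost_fn):
    """Greedy sequential single-item auction: each task is announced
    in turn, every robot submits a bid (its estimated cost), and the
    lowest bidder wins. Later bids reflect the winner's growing
    workload, spreading tasks across the team."""
    assignment = {r: [] for r in robots}
    for task in tasks:
        bids = {r: cost_fn(r, task, assignment[r]) for r in robots}
        winner = min(bids, key=bids.get)
        assignment[winner].append(task)
    return assignment
```

For example, with a hypothetical cost function combining travel distance and current workload, two robots at opposite ends of a corridor each win the task nearest to them.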
|
|
17:15-17:30, Paper We3D.5 | Add to My Program |
A Cloud-Based Scene Recognition Framework for In-Home Assistive Robots (I) |
Menicatti, Roberto | Univ. Di Genova |
Sgorbissa, Antonio | Univ. of Genova |
Keywords: Assistive Robotics, Detecting and Understanding Human Activity
Abstract: The rapidly increasing number of elderly people has led to the development of in-home assistive robots for assisting and monitoring elderly people in their daily life. To this end, indoor scene and human activity recognition is fundamental. However, image processing is expensive in terms of computation, energy, storage and cost, which can be problematic for consumer robots. For this reason, we propose the use of computer vision cloud services and a Naive Bayes model to perform indoor scene and human daily activity recognition. We implement the developed method on the telepresence robot Double to make it autonomously find and approach the person in the environment as well as detect the performed activity.
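The abstract combines a cloud vision service with a Naive Bayes model but gives no formulation. The following is a minimal sketch, assuming the cloud service returns descriptive tags per image (e.g. "kitchen", "cup") and activities are inferred with a multinomial Naive Bayes over those tags with Laplace smoothing; the class and method names are illustrative, not the paper's implementation.

```python
from collections import defaultdict
import math

class NaiveBayesActivity:
    """Naive Bayes over tags returned by a cloud vision service:
    P(activity | tags) is proportional to
    P(activity) * prod_t P(t | activity), with Laplace smoothing."""

    def __init__(self):
        self.tag_counts = defaultdict(lambda: defaultdict(int))
        self.act_counts = defaultdict(int)
        self.vocab = set()

    def fit(self, samples):
        """samples: list of (tags, activity) training pairs."""
        for tags, act in samples:
            self.act_counts[act] += 1
            for t in tags:
                self.tag_counts[act][t] += 1
                self.vocab.add(t)

    def predict(self, tags):
        """Return the most probable activity for the observed tags."""
        total = sum(self.act_counts.values())
        best, best_lp = None, -math.inf
        for act, count in self.act_counts.items():
            lp = math.log(count / total)  # log prior
            denom = sum(self.tag_counts[act].values()) + len(self.vocab)
            for t in tags:
                lp += math.log((self.tag_counts[act][t] + 1) / denom)
            if lp > best_lp:
                best, best_lp = act, lp
        return best
```

Offloading the image-to-tags step to the cloud keeps the on-robot model to simple count tables, which suits the low-cost consumer platforms the abstract targets.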
|
| |