Last updated on August 6, 2017. This conference program is tentative and subject to change.

Technical Program for Tuesday, August 29, 2017
Tu1A Special Session, Ajuda I
Cognitive Interaction Design (I)
Chair: Terada, Kazunori | Gifu Univ |
Co-Chair: Eyssel, Friederike | Bielefeld Univ |
Organizer: Terada, Kazunori | Gifu Univ |
Organizer: Yamada, Seiji | National Inst. of Informatics |
11:00-11:15, Paper Tu1A.1
Designing Robot Faces Suited to Specific Tasks That These Robots Are Good At (I)
Komatsu, Takanori | Meiji Univ |
Kamide, Masahiro | Meiji Univ
Keywords: Innovative Robot Designs, Human Factors and Ergonomics
Abstract: The purpose of this paper is to comprehend which kinds of robot faces are suited to the specific tasks that these robots are good at. Specifically, we focused on 64 design elements of robot faces (two different sizes and two different positions each for the eyes, ears, and mouth) and conducted a questionnaire-based investigation to clarify the relationship between these elements and five different tasks that robots engage in. As a result of this investigation, we could comprehend which kinds of robot faces cause users to judge robots as being good at particular tasks, such as education.
11:15-11:30, Paper Tu1A.2
Video Conference Environment Using Representative Eye-Gaze Motion of Remote Participants (I)
Takeuchi, Yugo | Shizuoka Univ |
Takahashi, Genki | NEC Facilities, Ltd. / Shizuoka Univ |
Keywords: Novel Interfaces and Interaction Modalities, Linguistic Communication and Dialogue, Detecting and Understanding Human Activity
Abstract: In multi-participant conversation, the speaker and listeners pay attention to all participants to carry out smooth turn-taking. In video conversation, however, it is difficult to perceive the timing of when to take turns talking because participants cannot perceive each other's eye gaze. Consequently, one participant often misses the sign that another person wants to speak and continues to speak longer than appropriate. Considering this problem, we propose a video-conference environment in which a robot symbolizes and unifies the eye-gaze motion of several participants. An experiment investigating its efficacy and usefulness suggested that the proposed video-conference environment suppresses inappropriate speech compared with a general video-conversation environment.
11:30-11:45, Paper Tu1A.3
Projection Mapping of Behavioral Expressions Onto Manufactured Figures for Speech Interaction (I)
Ishihara, Yoshihisa | Shinshu Univ |
Kobayashi, Kazuki | Shinshu Univ |
Keywords: Creating Human-Robot Relationships, Non-verbal Cues and Expressiveness, Personalities for Robotic or Virtual Characters
Abstract: Natural language user interfaces, such as Apple Siri and Google Voice Search, have been embedded in consumer devices; however, speaking to objects can feel awkward. Use of these interfaces should feel natural, like speaking to a real listener. This paper proposes a method for manufactured objects, such as anime figures, to exhibit highly realistic behavioral expressions to improve speech interaction between a user and an object. Using a projection mapping technique, an anime figure provides back-channel feedback to a user by appearing to nod or shake its head. We developed a listener agent based on the anime figure that listens to a user give directions to a specific location. We performed experiments to investigate users' impressions of the speech interaction and compared them across four conditions. The experimental results suggested that the anime figure with projection mapping made the agent seem more realistic.
11:45-12:00, Paper Tu1A.4
Investigating Effects of Light Animations on Perceptions of a Computer: Preliminary Results (I)
Song, Sichao | The Graduate Univ. for Advanced Studies (SOKENDAI) |
Yamada, Seiji | National Inst. of Informatics |
Keywords: Motivations and Emotions in Robotics, Non-verbal Cues and Expressiveness
Abstract: A preliminary experiment is carried out to investigate the effects of LED light animations on a user's perception of a computer. As anthropomorphism has become an important factor in interaction design, current research tends to add human-like expression abilities to interactive devices. Such methods, however, have limitations: they are complex and not applicable to many appearance-constrained devices already in use, such as personal computers. Thus, in this work we investigate an alternative method: expressive light. We attached a programmable RGB LED strip to the front-bottom of a monitor and developed a ping-pong game for the experiment. We collected both game logs and questionnaire data from participants. Our results show that participants who played the game with LED light animations liked the game more and perceived the computer as better and more human-like. In addition, there was no evidence that the light animations negatively affected users' task performance or led to additional workload.
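As an illustration of the kind of expressive light animation described above, the sketch below generates a sine-based "breathing" pulse as RGB frames for an addressable LED strip. The strip length, colour, frame rate, and period are invented for the example; the paper does not specify its animations at this level of detail.

```python
import math
import time

NUM_LEDS = 30               # assumed strip length
BASE_COLOR = (0, 80, 255)   # arbitrary blue-ish hue
PERIOD_S = 2.0              # one breath cycle in seconds

def breathing_frame(t):
    """Return a list of (r, g, b) tuples at time t, pulsing in brightness."""
    # Brightness follows a raised sine, so the strip never goes fully dark.
    level = 0.5 * (1.0 + math.sin(2.0 * math.pi * t / PERIOD_S))
    return [tuple(int(c * level) for c in BASE_COLOR)] * NUM_LEDS

if __name__ == "__main__":
    t0 = time.time()
    for _ in range(100):
        frame = breathing_frame(time.time() - t0)
        # A real setup would push `frame` to the strip driver here.
        time.sleep(1.0 / 30.0)  # ~30 FPS
```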
12:00-12:15, Paper Tu1A.5
Entropy-Based Eye-Tracking Analysis When a User Watches a PRVA’s Recommendations (I)
Matsui, Tetsuya | National Inst. of Informatics |
Yamada, Seiji | National Inst. of Informatics |
Keywords: Creating Human-Robot Relationships, Personalities for Robotic or Virtual Characters, Human Factors and Ergonomics
Abstract: We conducted three experiments to discover the effect of a virtual agent's state transitions on a user's eye gaze. Many previous studies have shown that an agent's state transitions affect a user's state. We focused on two kinds of transitions: internal state transitions and appearance state transitions. In this research, we used a product recommendation virtual agent (PRVA) and aimed to discover the effect of its state transitions on users' eye gaze as it made recommendations. We used entropy-based analysis to visualise the deviation of a user's fixations. In experiment 1, the PRVA made recommendations without state transitions. In experiment 2, the amount of the PRVA's knowledge transitioned from low to high during the recommendations (an internal state transition). In experiment 3, the PRVA's facial expressions and gestures transitioned from a neutral to a positive emotion during the recommendations (an appearance state transition). Both the entropy-based analysis and the fixation-duration-based analysis showed significant differences in experiment 3. These results show that an agent's appearance state transitions affect a user's eye gaze.
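The entropy-based analysis can be pictured concretely: discretize the display into regions, histogram the fixations over the regions, and take the Shannon entropy, so that gaze concentrated on few regions yields low entropy. The sketch below follows that standard recipe; the grid size, screen resolution, and data are assumptions, not the authors' exact procedure.

```python
import numpy as np

def gaze_entropy(fixations, grid=(4, 4), screen=(1920, 1080)):
    """Shannon entropy (bits) of fixation counts over a grid of screen regions."""
    xs, ys = np.asarray(fixations, dtype=float).T
    col = np.clip((xs / screen[0] * grid[0]).astype(int), 0, grid[0] - 1)
    row = np.clip((ys / screen[1] * grid[1]).astype(int), 0, grid[1] - 1)
    counts = np.zeros(grid)
    np.add.at(counts, (col, row), 1)      # histogram fixations per region
    p = counts.ravel() / counts.sum()
    p = p[p > 0]                          # ignore empty regions
    return float(-(p * np.log2(p)).sum())

# Tightly clustered fixations yield lower entropy than scattered ones.
clustered = [(960 + dx, 540 + dy) for dx in (-5, 0, 5) for dy in (-5, 0, 5)]
scattered = np.random.default_rng(0).uniform((0, 0), (1920, 1080), (9, 2))
print(gaze_entropy(clustered), gaze_entropy(scattered))
```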
12:15-12:30, Paper Tu1A.6
A Pilot Study Investigating Self-Disclosure by Elderly Participants in Agent-Mediated Communication (I)
Noguchi, Yohei | Univ. of Tsukuba |
Tanaka, Fumihide | Univ. of Tsukuba |
Keywords: Creating Human-Robot Relationships, Robot Companions and Social Robots, Assistive Robotics
Abstract: A generation gap can make communication difficult, even within a family, as each family member has a preferred style of communication. To address this, we proposed a shared-agent system for encouraging remote communication between family members. This paper reports the results of a pilot study in which a prototype robot interface was evaluated and the acceptance of asynchronous communication by elderly participants was investigated. The effects on family communication were approached from the viewpoint of self-disclosure in old age. Feedback suggested new research hypotheses, for example, that establishing relationships between the shared agent and the family members may encourage deeper communication within the family.
Tu1B Regular Session, Belem II
Social Robotics (I)
Chair: Nomura, Tatsuya | Ryukoku Univ |
Co-Chair: Trovato, Gabriele | Waseda Univ |
11:00-11:15, Paper Tu1B.1
Puffy – an Inflatable Robotic Companion for Pre-Schoolers
Gelsomini, Mirko | Pol. Di Milano, MIT Media Lab |
Degiorgi, Marzia | Pol. Di Milano |
Garzotto, Franca | Pol. Di Milano |
Leonardi, Giulia | Pol. Di Milano |
Penati, Simone | Pol. Di Milano |
Ramuzat, Noëlie | ENSTA Bretagne |
Silvestri, Jacopo | Pol. Di Milano |
Clasadonte, Francesco | Pol. Di Milano |
Kinoe, Yosuke | Hosei Univ |
Keywords: Applications of Social Robots
Abstract: Puffy is a learning and playful robotic companion for pre-school children. Designed in cooperation with a team of educators, Puffy has a combination of features that makes it unique with respect to existing robots used in educational contexts. It is mobile and has an egg-shaped, inflatable, soft body; it supports multimodal interaction, reacting to children's gestures, movements, facial expressions, and emotions, and communicates using voice, lights, movements in space, and inside-out projections on its body. The paper describes the design of Puffy and discusses an exploratory study involving 79 children aged 4-5 at a local pre-school that investigated the likeability of the robot and how much children enjoyed, engaged with, and remembered specific design features and play activities.
11:15-11:30, Paper Tu1B.2
Would You Like to Sample? Robot Engagement in a Shopping Centre
Tonkin, Meg | Univ. of Tech. Sydney |
Vitale, Jonathan | Univ. of Tech. Sydney |
Ojha, Suman | Univ. of Tech. Sydney |
Williams, Mary-Anne | Univ. of Tech. Sydney |
Fuller, Paul | Stockland |
Judge, William | Commonwealth Bank |
Wang, Xun | Univ. of Tech. Sydney |
Keywords: Applications of Social Robots
Abstract: Nowadays, robots are gradually appearing in public spaces such as libraries, train stations, airports and shopping centres, yet only a limited share of the research literature explores robot applications in such spaces. Studying robot applications in the wild is particularly important for designing commercially viable applications able to meet a specific goal. Therefore, in this paper we conduct an experiment testing a robot application in a shopping centre, aiming to provide results relevant to today's technological capability and market. We compared the performance of a robot and a human in promoting food samples in a shopping centre, a well-known commercial application, and then analysed the effects of the type of engagement used to achieve this goal. Our results show that, as expected, the robot is able to engage customers similarly to a human. Unexpectedly, however, while an actively engaging human performed better than a passively engaging human, we found the opposite effect for the robot. We investigate this phenomenon and offer possible explanations to be explored and tested in subsequent research.
11:30-11:45, Paper Tu1B.3
Semantic-Based Interaction for Teaching Robot Behavior Compositions
Paléologue, Victor | SoftBank Robotics Europe |
Martin, Jocelyn | SoftBank Robotics Europe |
Coninx, Alexandre | UPMC |
Pandey, Amit Kumar | SoftBank Robotics |
Chetouani, Mohamed | Univ. Pierre Et Marie Curie |
Keywords: Applications of Social Robots, Cognitive Skills and Mental Models, Machine Learning and Adaptation
Abstract: Allowing humans to teach robot behaviors will facilitate acceptability as well as long-term interactions. Humans would mainly use speech to transfer knowledge or to teach high-level behaviors. In this paper, we propose a proof-of-concept application allowing a Pepper robot to learn behaviors from natural-language descriptions provided by naive human users. In our model, natural language input is provided by grammar-free speech recognition and is then processed to produce semantic knowledge, grounded in language and primitive behaviors. The same semantic knowledge is used to represent any kind of perceived input as well as the actions the robot can perform. The experiment shows that the system can work independently of the domain of application, but also that it has limitations. Progress in semantic extraction, behavior planning and interaction scenarios could push these limits.
11:45-12:00, Paper Tu1B.4
He Said, She Said, It Said: Effects of Robot Group Membership and Human Authority on People's Willingness to Follow Their Instructions
Sembroski, Catherine | Indiana Univ |
Fraune, Marlena | Indiana Univ |
Sabanovic, Selma | Indiana Univ |
Keywords: Applications of Social Robots, Human Factors and Ergonomics
Abstract: Research in HRI indicates that people follow a robot’s instructions even when they are incorrect. However, when a robot’s instructions or requests contradict those of a human (e.g., an authoritative experimenter), people obey the human instead. This might be due to the experimenter’s perceived ingroup status, or to their higher presumed authority compared to the robot. This study manipulated experimenter authority (high, low) and robot group membership (ingroup, neutral) to test how they affected responses to conflicting orders from the two agents depending on the request's importance (big, small). While there was no main effect of group membership and authority on most participant behavior, when experimenter authority was low and the robot was an ingroup member, participants defied the experimenter’s instructions to turn off an ingroup robot at the end of the experiment, following the robot’s instructions instead. Further, request importance affected participant behavior. Participants typically followed the robot’s low-importance requests (e.g., moving from one chair to another), but not high-importance requests (e.g., how to perform a simulated task of diagnosing and talking to patients).
12:00-12:15, Paper Tu1B.5
Effectiveness of Socially Assistive Robotics During Cognitive Stimulation Interventions: Impact on Caregivers
Shukla, Jainendra | Rovira I Virgili Univ |
Barreda-Ángeles, Miguel | Eurecat - Tech. Centre of Catalonia |
Oliver, Joan | Inst. De Robótica Para La Dependencia |
Puig, Domenec | Rovira I Virgili Univ |
Keywords: Applications of Social Robots, Creating Human-Robot Relationships, Assistive Robotics
Abstract: Executing cognitive stimulation interventions for the cognitive training of individuals in need places a significant burden on caregivers in time and labor costs. Recent advancements in Socially Assistive Robotics (SAR) research can be exploited to reduce caregivers' burden by sharing work with robots and by supplementing/complementing human resources in the execution of interventions. The current research evaluates the effectiveness of a SAR-empowered cognitive training activity, Bingo Musical, among thirty individuals with intellectual disabilities (ID) in multi-center trials. A multidimensional evaluation of caregivers' workload was conducted, including subjective workload, time spent on users' personalized interventions, and qualitative interviews with caregivers. The results confirm a significant reduction in caregivers' burden and raise a concern about the need for specific training of caregivers to take maximum advantage of SAR in health care.
12:15-12:30, Paper Tu1B.6
Evaluation of Experiments in Social Robotics: Insights from the MOnarCH Project
Sequeira, Joao | Inst. Superior Técnico - Inst. for Systems and Robotics |
Keywords: Evaluation Methods and New Methodologies
Abstract: The paper discusses the assessment of human-robot interaction (HRI) experiments in social robotics. Some of the MOnarCH project experiments are analyzed, illustrating key ideas on performance indicators based on activation rates of micro-behaviors and environment models. The consistency of the results obtained indicates that the ideas are fully applicable to other experiments in social robotics.
Tu1C Regular Session, Belem I
Rehabilitation and Assistive Robotics (I)
Chair: Iáñez, Eduardo | Univ. Miguel Hernandez De Elche |
Co-Chair: Filippeschi, Alessandro | Scuola Superiore Sant'Anna |
11:00-11:15, Paper Tu1C.1
I See You Lying on the Ground - Can I Help You? Fast Fallen Person Detection in 3D with a Mobile Robot
Lewandowski, Benjamin | Ilmenau Univ. of Tech |
Wengefeld, Tim | Ilmenau Univ. of Tech |
Schmiedel, Thomas | Ilmenau Univ. of Tech |
Gross, Horst-Michael | Ilmenau Univ. of Tech |
Keywords: Applications of Social Robots, Assistive Robotics
Abstract: One important function of assistive robotics for home applications is the detection of emergency cases, such as falls. In this paper, we present a new detection system that can run on a mobile robot to robustly detect persons after a fall event. The system is based on 3D Normal Distributions Transform (NDT) maps, on which a powerful segmentation is applied. Segments most likely belonging to a person lying on the ground are grouped into clusters. After extracting features with a soft-encoding approach, each cluster is classified separately. Our experiments show that the system reliably detects fallen persons in real-time and clearly outperforms other 3D state-of-the-art approaches. We show that our system can handle even very challenging situations, in which fallen persons are very close to other objects in the apartment; such complex fall events often occur in real-world applications.
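The NDT representation at the heart of the system is easy to picture: the point cloud is partitioned into voxels, and each occupied voxel is summarized by the mean and covariance of its points, on which segmentation and classification can then operate. A toy sketch of that summarization step, with an invented voxel size and point cloud:

```python
import numpy as np
from collections import defaultdict

def ndt_cells(points, cell_size=0.2):
    """Group 3D points into voxels; return {voxel_index: (mean, covariance)}."""
    buckets = defaultdict(list)
    for p in np.asarray(points, dtype=float):
        buckets[tuple(np.floor(p / cell_size).astype(int))].append(p)
    cells = {}
    for idx, pts in buckets.items():
        pts = np.asarray(pts)
        if len(pts) >= 4:  # need a few points for a stable covariance
            cells[idx] = (pts.mean(axis=0), np.cov(pts.T))
    return cells

# A flat, ground-level cluster of cells is a candidate for a lying person.
rng = np.random.default_rng(1)
cloud = rng.normal([1.0, 0.5, 0.1], [0.8, 0.2, 0.03], (500, 3))  # flat blob
print(len(ndt_cells(cloud)), "occupied cells")
```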
11:15-11:30, Paper Tu1C.2
Sit-To-Stand Assistance System Based on Using EMG to Predict Movement
Hiyama, Takahiro | Panasonic Corp |
Kato, Yusuke | Panasonic Corp |
Inoue, Tsuyoshi | Panasonic Corp |
Keywords: Assistive Robotics
Abstract: We propose herein a method to predict the sit-to-stand movement before a user leaves their seat. The proposed method is evaluated on sit-to-stand and noisy movements, and the sit-to-stand movement is predicted with an average accuracy of 99.5%. Furthermore, based on this method, we develop a prototype system to assist the sit-to-stand movement. To verify the effectiveness of this system, we tested it with seven subjects. The results show that, based on the predicted movement, the assistance starts about 114.3 ms before the user leaves the seat. In addition, the results confirm that, when using this assistance system, muscle activity is reduced by about 30% compared with the unassisted sit-to-stand movement.
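The abstract does not detail the predictor itself, but the underlying idea, detecting a rise in muscle activity before seat-off, can be sketched with a textbook EMG envelope and threshold test. Every parameter below is an illustrative assumption, not a value from the paper:

```python
import numpy as np

def emg_envelope(emg, fs=1000, win_ms=100):
    """Rectify the raw EMG and smooth it with a moving-average window."""
    rect = np.abs(emg - emg.mean())
    w = int(fs * win_ms / 1000)
    return np.convolve(rect, np.ones(w) / w, mode="same")

def predict_onset(emg, fs=1000, k=3.0, baseline_s=1.0):
    """First sample where the envelope exceeds baseline mean + k * std."""
    env = emg_envelope(emg, fs)
    base = env[: int(fs * baseline_s)]
    thresh = base.mean() + k * base.std()
    above = np.flatnonzero(env > thresh)
    return int(above[0]) if above.size else None

# Synthetic trace: 2 s of rest noise, then a burst mimicking movement onset.
rng = np.random.default_rng(2)
emg = rng.normal(0, 0.05, 3000)
emg[2000:] += rng.normal(0, 0.5, 1000)
print("onset at sample", predict_onset(emg))
```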
11:30-11:45, Paper Tu1C.3
A Vibrotactile Stimulation System for Improving Postural Control and Knee Joint Proprioception in Lower-Limb Amputees
Lauretti, Clemente | Univ. Campus Bio-Medico Di Roma |
Pinzari, Giulia | Univ. Campus Bio-Medico Di Roma |
Ciancio, Anna Lisa | Campus Bio-Medico Univ |
Davalli, Angelo | INAIL Prosthesis Center
Sacchetti, Rinaldo | INAIL Prosthesis Center |
Sterzi, Silvia | Univ. Campus Bio-Medico Di Roma |
Guglielmelli, Eugenio | Univ. Campus Bio-Medico |
Zollo, Loredana | Univ. Campus Bio-Medico |
Keywords: Detecting and Understanding Human Activity, Assistive Robotics, Novel Interfaces and Interaction Modalities
Abstract: The lack of sensory feedback in lower-limb amputees is the major cause of (i) amputees' difficulty keeping balance, (ii) suboptimal performance in gait functions, and (iii) increased energy consumption. It also hugely affects their participation in the activities of daily living. Improving postural control in people with lower-limb amputation aims to enhance their quality of life and is achieved through the restoration of their lost sensory feedback. The objective of this paper is to propose a stimulation system for restoring plantar pressure perception and knee-joint proprioception in lower-limb amputees. The proposed approach is based on the combined use of two FSRs, two accelerometers, and two or three vibrotactile actuators. To validate the system, two experimental sessions were carried out on sixteen healthy subjects and one lower-limb amputee. They aimed to (i) assess whether vibrotactile feedback can improve balance control in lower-limb amputees and (ii) investigate the potential of vibrotactile perception as a means to restore amputees' knee-joint proprioception. Quantitative indices describing users' performance were extracted from the processed data, and a statistical analysis was performed to compare different types of sensory feedback: (i) augmented visuo-proprioceptive feedback, no feedback, forearm vibrotactile feedback, and low-back vibrotactile feedback in the 1st experimental session, and (ii) forearm continuous vibrotactile feedback, low-back continuous vibrotactile feedback, forearm discrete vibrotactile feedback, and low-back discrete vibrotactile feedback in the 2nd experimental session. The results achieved with vibrotactile stimulation were encouraging for both applications.
11:45-12:00, Paper Tu1C.4
Empirical Mode Decomposition Use in Electroencephalography Signal Analysis for Detection of Starting and Stopping Intentions During Gait Cycle
Ortiz, Mario | Univ. Miguel Hernández |
Iáñez, Eduardo | Univ. Miguel Hernandez De Elche |
Rodríguez-Ugarte, Marisol | Miguel Hernandez Univ. of Elche |
Azorin, Jose M. | Univ. Miguel Hernandez De Elche |
Keywords: Assistive Robotics, Medical and Surgical Applications, Evaluation Methods and New Methodologies
Abstract: Electroencephalography signals can be used to detect the start and stop times of gait. This is useful for people who have lost lower-limb motor function or present severe lower-limb motor difficulties and who work in conjunction with an exoskeleton. Normally, the frequency bands used to detect gait or stop intentions are the mu and beta bands. However, to enhance electroencephalography signal quality, it is necessary to increase the signal-to-noise ratio. In this paper, a previous study is complemented with the use of different types of frequency and spatial filters. A multi-resolution analysis tool based on the Hilbert-Huang transform is also introduced as a new processing tool, and its results are discussed with the help of a recently developed comparison index.
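Empirical mode decomposition, the first stage of the Hilbert-Huang transform, extracts oscillatory components (intrinsic mode functions) by repeatedly subtracting the mean of the upper and lower extrema envelopes. A simplified sketch of that sifting loop, ignoring the boundary handling and stopping criteria a production implementation needs:

```python
import numpy as np
from scipy.signal import find_peaks
from scipy.interpolate import CubicSpline

def sift(x, n_iter=8):
    """Extract one intrinsic mode function (IMF) by simplified EMD sifting."""
    t = np.arange(len(x))
    h = x.astype(float)
    for _ in range(n_iter):
        maxima, _ = find_peaks(h)
        minima, _ = find_peaks(-h)
        if len(maxima) < 2 or len(minima) < 2:
            break  # not enough extrema to build envelopes
        upper = CubicSpline(maxima, h[maxima])(t)
        lower = CubicSpline(minima, h[minima])(t)
        h = h - (upper + lower) / 2.0  # remove the local mean
    return h

# Two-tone test signal: the first IMF should capture the faster oscillation;
# further IMFs come from repeating sift() on the residual.
t = np.linspace(0, 1, 1000)
x = np.sin(2 * np.pi * 25 * t) + 0.7 * np.sin(2 * np.pi * 4 * t)
imf1 = sift(x)
residual = x - imf1
```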
12:00-12:15, Paper Tu1C.5
Estimating Double Support in Pathological Gaits Using an HMM-Based Analyzer for an Intelligent Robotic Walker
Chalvatzaki, Georgia | National Tech. Univ. of Athens
Papageorgiou, Xanthi S. | National Tech. Univ. of Athens |
Tzafestas, Costas S. | ICCS - Inst. of Communication and Computer Systems |
Maragos, Petros | National Tech. Univ. of Athens |
Keywords: Assistive Robotics, Monitoring of Behaviour and Internal States of Humans, Detecting and Understanding Human Activity
Abstract: For a robotic walker designed to assist mobility-constrained people and improve their quality of life, it is important to take into account the broad spectrum of pathological walking patterns, which result in completely different needs to be covered for each specific user. A deployable intelligent assistant robot needs a precise gait analysis system providing real-time monitoring of the user and extracting specific gait parameters, which are associated with rehabilitation progress and the risk of falls. In this paper, we present a completely non-invasive framework for the on-line analysis of pathological human gait and the recognition of specific gait phases and events. The performance of this gait analysis system is assessed, in particular, as it relates to the estimation of double support phases, which are typically difficult to extract reliably, especially when applying non-wearable and non-intrusive technologies. Furthermore, the duration of double support phases constitutes an important gait parameter and a critical indicator of pathological gait patterns. The performance of this framework is assessed using real data collected from an ensemble of elderly persons with different pathologies. The estimated gait parameters are experimentally validated against ground truth data provided by a motion capture system. The results presented in this paper demonstrate that the proposed human data analysis (modeling, learning and inference) framework has the potential to support efficient detection and classification of specific walking pathologies, as needed to empower a cognitive robotic mobility-assistance device with user-adaptive and context-aware functionalities.
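The paper's analyzer is richer than can be shown here, but the core inference step of an HMM-based gait-phase recognizer, decoding the most likely phase sequence (including double support) from a discretized observation stream, can be sketched with Viterbi decoding. The states, transition and emission probabilities below are invented for illustration; in practice they are learned from data:

```python
import numpy as np

# Toy gait-phase HMM with a cyclic left/double/right structure (assumed).
STATES = ["left_single_support", "double_support", "right_single_support"]
A = np.array([[0.80, 0.20, 0.00],   # transition probabilities
              [0.15, 0.70, 0.15],
              [0.00, 0.20, 0.80]])
# B[s, o]: probability of observation o (e.g. binned inter-foot distance).
B = np.array([[0.70, 0.20, 0.10],
              [0.15, 0.70, 0.15],
              [0.10, 0.20, 0.70]])

def viterbi(obs, pi=np.full(3, 1 / 3)):
    """Most likely state sequence for a discrete observation sequence."""
    n, T = len(STATES), len(obs)
    logd = np.log(pi + 1e-12) + np.log(B[:, obs[0]] + 1e-12)
    back = np.zeros((T, n), dtype=int)
    for t in range(1, T):
        scores = logd[:, None] + np.log(A + 1e-12)  # scores[i, j]: i -> j
        back[t] = scores.argmax(axis=0)
        logd = scores.max(axis=0) + np.log(B[:, obs[t]] + 1e-12)
    path = [int(logd.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return [STATES[s] for s in reversed(path)]

print(viterbi([0, 0, 1, 2, 2, 1, 0]))  # expect L, L, double, R, R, double, L
```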
12:15-12:30, Paper Tu1C.6
Short-Range Gait Pattern Analysis for Potential Applications on Assistive Robotics
Paulo, João | Univ. of Coimbra
Garrote, Luís Carlos | Inst. of Systems and Robotics |
Asvadi, Alireza | Inst. of Systems and Robotics |
Premebida, Cristiano | Univ. of Coimbra |
Peixoto, Paulo | Univ. of Coimbra |
Keywords: Assistive Robotics, Robots in Education, Therapy and Rehabilitation, Human Factors and Ergonomics
Abstract: In this paper we propose a vision-based system, placed on board a robotic walker, that is able to perceive the user’s gait pattern even when the user is in close proximity to the platform. Mobility aids like walkers provide mobility by physically supporting the user and operate in close proximity to them. This work contributes a user-state monitoring system that enables more user-centered approaches, such as safer and adaptive HMIs, and provides a tool to help healthcare personnel in medical assessments. Taking advantage of a stereo vision-based sensor, the system models the user’s gait pattern by applying a weighted kernel-density estimator to the captured data; through a sliding temporal window, features are extracted and classified into one of the trained gait patterns. We performed experiments both to validate the discriminative potential of the proposed gait pattern classification system and to validate its usability by implementing an extension to our robotic walker’s HMI. The results of the different experiments evidenced satisfactory system performance.
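One way to picture the gait-modeling step: detected leg points are turned into a weighted kernel-density estimate, whose peaks serve as per-frame leg features that a sliding temporal window can then classify. A minimal sketch under assumed data and weights, using SciPy's gaussian_kde, which accepts per-sample weights:

```python
import numpy as np
from scipy.stats import gaussian_kde

def gait_density(leg_points, weights):
    """Weighted KDE over 2D leg points (x: lateral, y: depth from walker)."""
    return gaussian_kde(np.asarray(leg_points).T, weights=weights)

# Illustrative data: detected leg points, weighted by detection confidence.
rng = np.random.default_rng(3)
legs = np.concatenate([rng.normal([-0.15, 0.6], 0.03, (50, 2)),
                       rng.normal([0.15, 0.7], 0.03, (50, 2))])
conf = rng.uniform(0.5, 1.0, 100)
kde = gait_density(legs, conf)

# Evaluate on a grid; the density peak gives a per-frame leg-position
# feature that a sliding temporal window could classify.
xx, yy = np.meshgrid(np.linspace(-0.4, 0.4, 40), np.linspace(0.3, 1.0, 40))
density = kde(np.vstack([xx.ravel(), yy.ravel()])).reshape(xx.shape)
peak = np.unravel_index(density.argmax(), density.shape)
print("density peak at grid cell", peak)
```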
Tu1D Regular Session, Ajuda III
Robot Companions
Chair: Dominey, Peter Ford | INSERM Stem Cell & Brain Res. Inst |
Co-Chair: Crook, Nigel | Oxford Brookes Univ |
11:00-11:15, Paper Tu1D.1
NICO - Neuro-Inspired COmpanion: A Developmental Humanoid Robot Platform for Multimodal Interaction
Kerzel, Matthias | Uni Hamburg |
Strahl, Erik | Univ. Hamburg |
Magg, Sven | Univ. of Hamburg |
Navarro-Guerrero, Nicolás | Univ. of Hamburg |
Heinrich, Stefan | Univ. Hamburg |
Wermter, Stefan | Univ. of Hamburg |
Keywords: Robot Companions and Social Robots, Anthropomorphic Robots and Virtual Humans, Innovative Robot Designs
Abstract: Interdisciplinary research, drawing from robotics, artificial intelligence, neuroscience, psychology, and cognitive science, is a cornerstone to advance the state-of-the-art in multimodal human-robot interaction and neuro-cognitive modeling. Research on neuro-cognitive models benefits from the embodiment of these models into physical, humanoid agents that possess complex, human-like sensorimotor capabilities for multimodal interaction with the real world. For this purpose, we develop and introduce NICO (Neuro-Inspired COmpanion), a humanoid developmental robot that fills a gap between necessary sensing and interaction capabilities and flexible design. This combination makes it a novel neuro-cognitive research platform for embodied sensorimotor computational and cognitive models in the context of multimodal interaction, as shown in our results.
11:15-11:30, Paper Tu1D.2
Huggable: Impact of Embodiment on Promoting Verbal and Physical Engagement for Young Pediatric Inpatients
Jeong, Sooyeon | MIT |
Breazeal, Cynthia | MIT |
Logan, Deirdre | Boston Children's Hospital |
Weinstock, Peter | Boston Children's Hospital |
Keywords: Robot Companions and Social Robots, Applications of Social Robots, Medical and Surgical Applications
Abstract: Children and their parents may undergo challenging experiences when admitted for inpatient care at pediatric hospitals. While most pediatric hospitals make an effort to provide socio-emotional support for patients and their families during care, such as with child life services, gaps still exist between professional resource supply and patient demand. There is an opportunity to apply interactive companion-like technologies to augment and extend professional care teams. To explore the opportunity for social robots to augment child life services, we performed a randomized clinical trial at a local pediatric hospital investigating how three different companion-like interventions (a plush toy, a virtual character on a screen, and a social robot) affected child-patients' physical activity and social engagement, both linked to positive patient outcomes. We recorded video of patients, families and a certified child life specialist with each intervention to gather behavioral data. Our results suggest that, over time, children are more physically and verbally engaged when interacting with the physically co-present social robot than with the other two interventions. A post-study interview with child life specialists reveals their perspective on potential opportunities for social robots (and other companion-like interventions) to assist them with providing education, diversion, and companionship in the pediatric inpatient care context.
11:30-11:45, Paper Tu1D.3
Improving Quality of Life with a Narrative Companion
Dominey, Peter Ford | INSERM Stem Cell & Brain Res. Inst |
Paléologue, Victor | SoftBank Robotics Europe |
Pandey, Amit Kumar | SoftBank Robotics |
Ventre-Dominey, Jocelyne | INSERM |
Keywords: Robot Companions and Social Robots, Narrative and Story-telling in Interaction, Applications of Social Robots
Abstract: A central component of the human self is the narrative history of shared interactions with others, which provides the foundation for social relations that develop over extended time. The loss of this narrative self becomes successively catastrophic for aging subjects with degenerative disease of the memory system. A prosthetic device for narrative memory can provide an at least temporary solution to this problem. We identify requirements for a narrative memory capability that allows individuals with diminished memory to continue to interact socially with partners with whom they have shared experiences. A memory prosthetic should provide access to the subject's past memories and should accompany the subject in the formation, organization and retrieval of new memories. Based on these requirements, we have implemented the V1.0 narrative memory companion on the Pepper humanoid robot using the native Choregraphe and NAOqi system capabilities. We exploit principles developed in our research on autobiographical memory and the organization of experience in cooperative humanoid robots, and on the mapping of narrative structure onto this experience. In the narrative companion, past memories are first collected from the subject or members of their entourage via a template-based interview and a small number of photographs that illustrate important people and events in the subject's past. New memories, constructed via interaction with Pepper and from simple narratives told by the human partner, are stored in the Autobiographical Memory (ABM) implemented in the ALKnowledge base of the NAOqi system. Memories are then recalled and shared through narrative. Results from a naïve case study are presented, and future applications for improved quality of life are discussed.
11:45-12:00, Paper Tu1D.4
Robotic Companions in Stroke Therapy: A User Study on the Efficacy of Assistive Robotics among 30 Patients in Neurological Rehabilitation
Meyer, Sibylle | SIBIS Inst. for Social Res. Berlin |
Fricke, Christa | SIBIS Inst. for Social Res. Berlin |
Keywords: Assistive Robotics, Robot Companions and Social Robots, Ethical Issues in Human-robot Interaction Research
Abstract: This article summarizes and explains the results of our recently completed German research project ROREAS (Robotic Rehabilitation Assistant for Gait Training of Stroke Patients). The project combines medical, technical and sociological expertise to develop an autonomous robot companion to aid stroke patients’ recoveries. The robotic companion aims to bridge the gap between human assisted gait training and independent exercise. From the beginning, the project was carried out in the real surroundings of the users – the corridors of a rehabilitation clinic. N=12 stroke patients were included in the technical development and N=30 patients in the evaluation of the robot companion. The robotic platform and HRI (Human-Robot Interaction) have been developed specifically for the particular requirements of this study. The empirical results show that the majority of robot users accept the mobile robotic companion and would incorporate it into their gait training. Despite severe mobility and/or cognitive handicaps, all patients could easily handle the robot. The robotic assistance motivated patients to leave their room despite difficulties in spatial orientation and ultimately they were able to increase the length of their routes and the duration of their training units.
12:00-12:15, Paper Tu1D.5
Sociable Driving Agents to Maintain Driver’s Attention in Automatic Driving
Karatas, Nihan | Toyohashi Univ. of Tech |
Yoshikawa, Soshi | Toyohashi Univ. of Tech |
Tamura, Shintaro | Toyohashi Univ. of Tech |
Otaki, Sho | Toyota Motor Corp |
Funayama, Ryuji | Toyota Motor Corp |
Okada, Michio | Toyohashi Univ. of Tech |
Keywords: Robot Companions and Social Robots, Non-verbal Cues and Expressiveness, User-centered Design of Robots
Abstract: Recently, many studies have been conducted on increasing the automation level in cars to achieve safer and more efficient transportation. The increased automation level creates room for drivers to shift their attention to non-driving-related activities. However, there are cases that cannot be handled by automation, in which the driver must take over control. This pilot study investigates a paradigm for keeping drivers' situation awareness active during autonomous driving by utilizing a social robot system, NAMIDA. NAMIDA is an interface consisting of three sociable driving agents that can interact with the driver through eye-gaze behaviors. We analyzed the effectiveness of NAMIDA in maintaining drivers' attention to the road by evaluating their response time to a critical situation on the road. An experiment consisting of a take-over scenario was conducted in a dynamic driving simulator. The results showed that the presence of NAMIDA significantly reduced drivers' response time. Surprisingly, however, NAMIDA without eye-gaze behaviors was more effective in reducing response time than NAMIDA with eye-gaze behaviors. Additionally, the results revealed better subjective impressions of NAMIDA with eye-gaze behaviors.
12:15-12:30, Paper Tu1D.6
Both “Look and Feel” Matter: Essential Factors for Robotic Companionship
FakhrHosseini, Maryam | Michigan Tech. Univ |
Lettinga, Dylan | Michigan Tech. Univ |
Vasey, Eric | Michigan Tech. Univ |
Zheng, Zhi | Michigan Tech. Univ |
Jeon, Myounghoon | Michigan Tech. Univ |
Park, Chung Hyuk | George Washington Univ |
Howard, Ayanna | Georgia Inst. of Tech |
Keywords: Motivations and Emotions in Robotics, Cooperation and Collaboration in Human-Robot Teams, Creating Human-Robot Relationships
Abstract: The physical embodiment of robots provides users with a social environment. To design social robots that are accepted as companions, we need to understand the essential factors of companionship and implement them so that users bring robots into their personal environments. To this aim, we focused on two important factors in robotic companionship: robot appearance (look) and emotional expression (feel). Twenty-one participants played an online game with help from two humanoid robots, Nao (more human-like looking) and Darwin (less human-like looking). Participants interacted with each robot either with or without emotional words. Results show that only when the robot both looks more human-like and speaks with emotional expression do participants perceive it as their companion. Implications are discussed along with future work.
Tu1E Regular Session, Ajuda II
Tele-Operated and Autonomous Robots
Chair: Adams, Julie | Oregon State Univ |
Co-Chair: Kühnlenz, Kolja | Coburg Univ. of Applied Sciences and Arts |
11:00-11:15, Paper Tu1E.1
A Teleoperated Control Approach for Anthropomorphic Manipulator Using Magneto-Inertial Sensors
Noccaro, Alessia | Univ. Campus Bio-Medico Di Roma |
Cordella, Francesca | Univ. Campus Biomedico of Rome |
Zollo, Loredana | Univ. Campus Bio-Medico |
Di Pino, Giovanni | Univ. Campus Bio-Medico Di Roma |
Guglielmelli, Eugenio | Univ. Campus Bio-Medico |
Formica, Domenico | Univ. Campus Bio-Medico Di Roma |
Keywords: Degrees of Autonomy and Teleoperation, Anthropomorphic Robots and Virtual Humans, Evaluation Methods and New Methodologies
Abstract: In this paper we propose and validate a teleoperated control approach for an anthropomorphic redundant robotic manipulator using magneto-inertial sensors (IMUs). The proposed method maps the motion of the human arm (the master) onto the robot end-effector (the slave). We record arm movements using IMU sensors and calculate human forward kinematics to be mapped onto robot movements. To solve the robot's kinematic redundancy, we implemented different inverse kinematics algorithms that allow imposing anthropomorphism criteria on robot movements. The main objective is to let the user control the robotic platform in an easy and intuitive manner, providing the control input by freely moving his/her own arm, while exploiting redundancy and anthropomorphism criteria to achieve human-like behaviour on the robot arm. Three inverse kinematics algorithms were implemented: Damped Least Squares (DLS), Elastic Potential (EP) and Augmented Jacobian (AJ). To evaluate the performance of the algorithms, four healthy subjects were asked to control the motion of an anthropomorphic robot arm (the KUKA Lightweight Robot 4+) through four magneto-inertial sensors (Xsens Wireless Motion Tracking sensors - MTw) positioned on their arm. Anthropomorphism indices and position and orientation errors between the human hand pose and the robot end-effector pose were evaluated to assess the performance of our approach.
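Of the three redundancy-resolution algorithms, damped least squares has the most compact statement: instead of the plain pseudoinverse, each joint update uses J^T (J J^T + lambda^2 I)^(-1) e, which remains well-behaved near singularities. The sketch below applies it to a planar 3-link arm rather than the 7-DoF KUKA arm used in the paper; link lengths, damping, and gain are arbitrary:

```python
import numpy as np

def fk(q, lengths=(0.3, 0.3, 0.2)):
    """End-effector position of a planar 3-link arm."""
    angles = np.cumsum(q)
    return np.array([np.sum(lengths * np.cos(angles)),
                     np.sum(lengths * np.sin(angles))])

def jacobian(q, lengths=(0.3, 0.3, 0.2)):
    angles = np.cumsum(q)
    J = np.zeros((2, 3))
    for i in range(3):
        # Column i: effect of joint i on x and y (all links from i onward).
        J[0, i] = -np.sum(lengths[i:] * np.sin(angles[i:]))
        J[1, i] = np.sum(lengths[i:] * np.cos(angles[i:]))
    return J

def dls_step(q, target, damping=0.05, gain=0.5):
    """One damped-least-squares update toward the Cartesian target."""
    e = target - fk(q)
    J = jacobian(q)
    JJt = J @ J.T + (damping ** 2) * np.eye(2)
    return q + gain * (J.T @ np.linalg.solve(JJt, e))

q = np.array([0.3, 0.3, 0.3])
for _ in range(200):
    q = dls_step(q, np.array([0.5, 0.3]))
print(fk(q))  # should be close to the target
```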
11:15-11:30, Paper Tu1E.2
Blame My Telepresence Robot: Joint Effect of Proxemics and Attribution on Interpersonal Attraction
van Houwelingen-Snippe, Josca | Radboud Univ |
Vroon, Jered | Univ. of Twente |
Englebienne, Gwenn | Univ. of Twente |
Willem Haselager, Pim | Radboud Univ. Nijmegen |
Keywords: Degrees of Autonomy and Teleoperation, Social Presence for Robots and Virtual Humans
Abstract: When remote users share autonomy with a telepresence robot, questions arise as to how the behaviour of the robot is interpreted by local users. We investigated how a robot’s violations of social norms under shared autonomy influence the local user’s evaluation of the robot’s remote users. Specifically, we examined how attribution of such violations to either the robot or the remote user influences social perception of the remote user. Using personal space invasion as a salient social norm violation, we conducted a within-subject experiment (n=20) to investigate these questions. Participants saw several people introducing themselves through a telepresence robot, while personal space invasion and attribution were manipulated. We found a significant (p=0.007) joint effect of the manipulations on interpersonal attraction. After these first 20 participants our robot broke down, and we had to continue with another robot (n=20). We found a difference between the two robots, causing us to discard the second dataset from our main analysis. Subsequent video annotation and comparison of the two robots suggests that the accuracy of the followed trajectory modifies attribution. Our results offer insights into the mechanisms of attribution in interactions with a telepresence robot as a mediator.
11:30-11:45, Paper Tu1E.3
Exploring User-Defined Gestures to Control a Group of Four UAVs
Peshkova, Ekaterina | Alpen-Adria-Univ. Klagenfurt |
Hitz, Martin | Alpen-Adria-Univ. Klagenfurt |
Keywords: User-centered Design of Robots, Creating Human-Robot Relationships, Cognitive Skills and Mental Models
Abstract: We present the results of an elicitation study exploring gestures that novice users find intuitive for controlling a group of four Unmanned Aerial Vehicles (UAVs). Particularly, we focus on group commands for (1) spatial distribution of the operated UAVs, (2) selection of the required UAV(s), and (3) their formation control. To elicit user-defined gestures, we conducted interview sessions in which we animated the considered commands with a 3D simulator. We identified commonalities in users’ behavior and then used them to create the final input vocabulary. By using the concept of mental models we achieved coherence among the vocabulary entries.
11:45-12:00, Paper Tu1E.4
Exploring Intuitiveness of Metaphor-Based Gestures for UAV Navigation
Peshkova, Ekaterina | Alpen-Adria-Univ. Klagenfurt |
Hitz, Martin | Alpen-Adria-Univ. Klagenfurt |
Ahlström, David | Alpen-Adria-Univ. Klagenfurt |
Alexandrowicz, Rainer W. | Alpen-Adria-Univ. Klagenfurt |
Kopper, Alexander | Alpen-Adria-Univ. Klagenfurt |
Keywords: User-centered Design of Robots, Creating Human-Robot Relationships, Cognitive Skills and Mental Models
Abstract: We investigate how the use of metaphors supports the intuitiveness of gesture input vocabularies for Unmanned Aerial Vehicle (UAV) navigation. We compare gesture sets built on a single metaphor to gesture sets based on mixed metaphors in terms of their respective intuitiveness. To this end, we implemented a 3D simulator to check how well novice users steer a UAV without knowing the valid gestures, using only a hint about the underlying metaphor. We compared their task completion time (an indirect assessment of intuitiveness) with that achieved after studying a gesture set consisting of gestures from several metaphors. We analyzed users' feedback reflected in questionnaires (a direct assessment of intuitiveness) to further compare single-metaphor gesture sets with mixed-metaphor gesture sets. The results of the study support our hypothesis that a metaphor-based approach is an expedient means for gesture-based UAV navigation.
12:00-12:15, Paper Tu1E.5
Study Investigating the Ease of Talking Via a Robot Tele-Operated from Same or Different Rooms
Shimaya, Jiro | Osaka Univ |
Yoshikawa, Yuichiro | Osaka Univ |
Ishiguro, Hiroshi | Osaka Univ |
Keywords: Robot Companions and Social Robots, Robots in Education, Therapy and Rehabilitation, Assistive Robotics
Abstract: In this study, the ease of talking is compared across three conversation forms: face-to-face conversation (direct conversation) and conversation with a robot tele-operated from the same room or from a different room (semi-indirect and indirect conversation, respectively). We found that the silent time, which indicates how long participants make an interlocutor wait for a response, was longer in both semi-indirect and indirect conversation than in direct conversation. Additionally, the silent time in direct conversation following semi-indirect conversation exceeded that following direct conversation. The results indicate that a robot tele-operated from a different room, as well as from the same room, can provide conversation partners with time to think about their words, and that this effect is sustained in direct conversation following a conversation with a robot operated from the same room. These results indicate the potential of conversation systems mediated by a robot for easier counseling of clients who find it difficult to communicate directly with a counselor.
12:15-12:30, Paper Tu1E.6
Ontology for Autonomous Robotics
Li, Howard | Univ. of New Brunswick |
Gonçalves, Paulo | Inst. Pol. De Castelo Branco |
Fiorini, Sandro | Univ. Paris-Est Creteil |
Olszewska, Joanna Isabelle | Univ. of Gloucestershire, United Kingdom |
Keywords: Degrees of Autonomy and Teleoperation
Abstract: Creating a standard for knowledge representation and reasoning in autonomous robotics is an urgent task, given recent advances in robotics and predictions about the insertion of robots into human daily life. Such a standard will impact the way information is exchanged between multiple robots, or between robots and humans, and how they can all understand it without ambiguity. Human-Robot Interaction (HRI) represents the interaction of at least two cognition models (human and robot). Such interaction informs task composition, task assignment, communication, cooperation and coordination in a dynamic environment, requiring a flexible representation. Hence, this paper presents the IEEE RAS Autonomous Robotics (AuR) Study Group, a spin-off of the IEEE Ontologies for Robotics and Automation (ORA) Working Group, and its ongoing work to develop the first IEEE-RAS ontology standard for autonomous robotics. In particular, this paper reports on the current version of the ontology for autonomous robotics and on its first implementation, successfully validated in a human-robot interaction scenario, demonstrating the ontology's strengths, which include semantic interoperability and the capability to relate ontologies from different fields for knowledge sharing and interactions.
Tu2A Special Session, Ajuda I
Cognitive Interaction Design (II)
Chair: Terada, Kazunori | Gifu Univ |
Co-Chair: Dias, Jorge | Univ. of Coimbra |
Organizer: Terada, Kazunori | Gifu Univ |
Organizer: Yamada, Seiji | National Inst. of Informatics |
14:00-14:15, Paper Tu2A.1
Investigating How People Deal with Silence in a Human-Robot Conversation (I)
Oto, Kiyona | Keio Univ |
Feng, Jianmei | Keio Univ |
Imai, Michita | Keio Univ |
Keywords: Non-verbal Cues and Expressiveness, Multimodal Interaction and Conversational Skills, Curiosity, Intentionality and Initiative in Interaction
Abstract: In this paper, we focus on "silence," which appears as a gap or delay in giving a response during a conversation and is one of the most important factors for achieving more natural conversations with robots. In conversation between a human and a robot, silence can be divided into two parts: the silence a human uses for a robot, and the silence a robot takes for a human. We therefore conducted a conversation test between a human and a robot to clarify two points: whether humans use silence for a robot, and how silence used by a robot is interpreted by humans. The results of the experiment indicate that humans certainly use silence for a robot for various reasons. Participants were asked to label the silences as one of four types: Semantic Silence, Syntactical and Grammatical Silence, Interactive Silence, and Robotic Silence. This classification revealed cases where humans used Interactive Silence out of consideration for a robot, similar to the case of a human conversation partner. It is now clear that, while conversing with a communication robot, humans use and regard silence in a form closer to that with a human conversation partner than with a machine. In particular, we found that humans sometimes use silence in a social sense, such as Interactive Silence, out of consciousness of the conversation partner.
14:15-14:30, Paper Tu2A.2
Agent Auto-Generation System: Interact with Your Favorite Things (I)
Sawada, Shiori | Keio Univ |
Sono, Taichi | Keio Univ |
Imai, Michita | Keio Univ |
Keywords: Applications of Social Robots, Novel Interfaces and Interaction Modalities, Personalities for Robotic or Virtual Characters
Abstract: This paper proposes a framework for an Agent Auto-Generation System (AAGS), which obtains information from sensor devices and improvises a conversational agent from arbitrary things. AAGS has agent-types that prepare a perception for an improvised agent according to the shape of the agentization target. It also has virtual-input, which generates knowledge representations from information extracted from the networked sensor devices. The virtual-input employs a viewpoint based on the agent-type to generate a knowledge representation and gives it to the improvised agent. Experiments to evaluate AAGS revealed that it is necessary to consider the perception of the improvised agents based on the agent-types. In addition, the agent's viewpoint for perception affects how people recognize the improvised agent.
14:30-14:45, Paper Tu2A.3
A Robot Counseling System - What Kinds of Topics Do We Prefer to Disclose to Robots? (I)
Uchida, Takahisa | ATR, Osaka Univ |
Takahashi, Hideyuki | Osaka Univ |
Ban, Midori | Doshisha Univ |
Shimaya, Jiro | Osaka Univ |
Yoshikawa, Yuichiro | Osaka Univ |
Ishiguro, Hiroshi | Osaka Univ |
Keywords: Robots in Education, Therapy and Rehabilitation, Applications of Social Robots, Robot Companions and Social Robots
Abstract: Our research goal is to develop a robot counseling system. It is important for a counselor to promote clients' self-disclosure in order to reduce their feelings of anxiety. However, when the counselor is human, clients sometimes hesitate to disclose intrusive topics due to embarrassment and self-esteem issues. We hypothesized that a robot counselor, on account of its unique kind of agency, could remove mental barriers between counselor and client and promote in-depth self-disclosure about negative topics. In this study, we prepared two robots (an android and a desktop robot) as robot counselors. In a preliminary experiment, we first confirmed from the number of spoken words about self-disclosure that subjects eagerly self-disclosed to these robots. We then conducted an experiment to verify whether subjects expose more of their weaknesses to robots than to humans. The experimental results suggested that robots can draw out more of subjects' self-disclosure about negative topics than a human counselor.
14:45-15:00, Paper Tu2A.4
Telepresence Robot with Behavior Synchrony: Merging the Emotions and Behaviors of Users (I)
Yonezu, Soji | Univ. of Tsukuba |
Osawa, Hirotaka | Univ. of Tsukuba |
Keywords: Virtual and Augmented Tele-presence Environments, Embodiment, Empathy and Intersubjectivity, Social Presence for Robots and Virtual Humans
Abstract: The authors propose a new telepresence robot that merges the behaviors of two users and realizes behavior synchrony automatically. The robot synchronizes facial expressions, gaze, face orientation, and neck inclination with its client users. This behavior synchrony gave the client user a feeling of receiving consent from the remote user. We investigated how behavior synchrony with the client partner improves interactions with the telepresence robot and examined how this system improves the quality of telepresence. Our evaluation indicates that the synchrony exhibited by the telepresence robot increases the feeling of consent during a discussion.
15:00-15:15, Paper Tu2A.5
Analysis of Robot Hotel: Reconstruction of Works with Robots (I)
Osawa, Hirotaka | Univ. of Tsukuba |
Ema, Arisa | The Univ. of Tokyo |
Hattori, Hiromitsu | Ritsumeikan Univ |
Akiya, Naonori | Yamaguchi Univ |
Kanzaki, Nobotsugu | Nanzan Univ |
Kubo, Akinori | Hitotsubashi Univ |
Koyama, Tora | Osaka Univ |
Ichise, Ryutaro | National Inst. of Informatics |
Keywords: Philosophical Issues in Human-Robot Coexistence, Robot Companions and Social Robots, User-centered Design of Robots
Abstract: Due to the rise of artificial intelligence (AI) technology, discussions are progressing on how robots could replace human labor. Conventional surveys have suggested that human labor will gradually be replaced as tasks become automated. We conducted a survey at the world's first robot hotel, recently opened in Japan and called the Henn-na hotel ("strange/change hotel" in Japanese), which already uses robots for most of its work. We discovered that human labor is divided into small tasks and that robot actions affect human emotional control. However, the hotel not only divides human work but also reconstructs it from tasks. Moreover, the purpose of this reconstruction is not simply the replacement of work. Such task modification is often observed in human-system interactions. It is an extremely creative process of labor emerging in this area.
Tu2B Regular Session, Belem II
Social Robotics (II)
Chair: Nomura, Tatsuya | Ryukoku Univ |
Co-Chair: Shiomi, Masahiro | ATR |
14:00-14:15, Paper Tu2B.1
Stopping Distance for a Robot Approaching Two Conversating Persons
Ruijten, Peter | Eindhoven Univ. of Tech |
Cuijpers, Raymond | Eindhoven Univ. of Tech |
Keywords: Applications of Social Robots, Monitoring of Behaviour and Internal States of Humans, Creating Human-Robot Relationships
Abstract: In recent years, much attention has been given to developing robots with various social skills. An important social skill is navigation in the presence of people. Earlier research has indicated preferred approach angles and stopping distances for a robot approaching people who are interacting with each other. However, an experimental validation of user experiences with such a robot is largely missing. The current study investigates the shape and size of a shared interaction space and evaluations of a robot approaching from various angles. Results show the expected pattern of stopping distances, but only when the robot approaches the middle point between two persons. Additionally, more positive evaluations were found when the robot approached on the participant's side rather than on the other participant's side. These findings highlight the importance of using a smart path-planning method.
14:15-14:30, Paper Tu2B.2
Security and Guidance: Two Roles for a Humanoid Robot in an Interaction Experiment
Trovato, Gabriele | Waseda Univ |
Lopez Manrique, Jose Alexander | Pontificia Univ. Catolica Del Peru |
Paredes, Renato | Pontificia Univ. Católica Del Perú |
Cuellar, Francisco | Pontificia Univ. Catolica Del Peru |
Keywords: Applications of Social Robots, Non-verbal Cues and Expressiveness, Personalities for Robotic or Virtual Characters
Abstract: Security is one of the fields in human society in which robotics can be applied. Human guards perform a range of tasks in which a robot can provide help. A security company collaborated with us in the design and development of a robot intended to patrol large indoor areas, interact with humans, welcome visitors, provide information, and serve as a telepresence platform for the human security guards. In this paper we present a preliminary experiment involving this new robot in two roles: security and guidance. The former is important especially at night, while the latter is common in the daytime, when guards usually interact with people asking for information. The results of the experiment with 45 participants showed how the perception of the robot's appearance and its effectiveness are influenced by its behaviour and by whether it is perceived as more authoritative or kinder. These results provide useful indications for the employment of robot guards in real-world situations.
14:30-14:45, Paper Tu2B.3
Understanding Social Interactions with Socially Assistive Robotics in Intergenerational Family Groups
Short, Elaine Schaertl | Univ. of Southern California |
Swift-Spong, Katelyn | Univ. of Southern California |
Shim, Hyunju | Univ. of Southern California |
Wisniewski, Kristi M. | Univ. of Southern California |
Zak, Deanah Kim | Univ. of Southern California |
Wu, Shinyi | Univ. of Southern California |
Zelinski, Elizabeth | Univ. of Southern California |
Mataric, Maja | Univ. of Southern California |
Keywords: Applications of Social Robots, Robot Companions and Social Robots, Assistive Robotics
Abstract: We present a pilot study of a socially assistive robot interacting with intergenerational groups. The system is designed to improve the social well-being of older adults by supporting interactions within families. Six intergenerational family groups interacted with the robot in four different tablet-based games. Users' behavior during the sessions was used to compare the games and understand how members of different generations and different families interact with the robot. Interviews with users provide insight into users' priorities for in-home deployment of socially assistive robots, as well as preferences about the activities, appearance, and behavior of the robot.
|
|
14:45-15:00, Paper Tu2B.4 | Add to My Program |
He Can Read Your Mind: Perceptions of a Character-Guessing Robot |
Henkel, Zachary | Mississippi State Univ |
Bethel, Cindy L. | Mississippi State Univ |
Kelly, John | Mississippi State Univ |
Jones, Alexis | Mississippi State Univ |
Stives, Kristen | Mississippi State Univ |
Buchanan, Zach | Mississippi State Univ |
Eakin, Deborah | Mississippi State Univ |
May, David C. | Mississippi State Univ |
Pilkinton, Melinda | Mississippi State Univ |
Keywords: Applications of Social Robots, Robot Companions and Social Robots, User-centered Design of Robots
Abstract: After playing a five to seven minute character guessing game with a Nao robot, children answered questions about their perceptions of the robot's abilities. Responses from interactions with 30 children, ages eight to twelve, showed that when the robot made an attempt at guessing the participant's character, rather than being stumped and unable to guess, the robot was more likely to be perceived as being able to understand the participant's feelings and able to provide advice. Regardless of their game experience, boys were more likely than girls to feel they could have discussions with the robot about things they could not talk to other people about. This article provides details associated with the implementation of a game used to guess a character the children selected; a twelve question verbally-administered survey that examined their perceptions of the robot; quantitative and qualitative results from the study; and a discussion of the implications, limitations, and future directions of this research.
|
|
15:00-15:15, Paper Tu2B.5 | Add to My Program |
A Robotic Couples Counselor for Promoting Positive Communication |
Utami, Dina | Northeastern Univ |
Bickmore, Timothy | Northeastern Univ |
Kruger, Louis | Northeastern Univ |
Keywords: Applications of Social Robots, Robots in Education, Therapy and Rehabilitation
Abstract: Intimate relationships are crucially important in all human societies, yet many relationships are in some degree of distress. Couple psychotherapy has been demonstrated to be effective at reducing relationship distress, yet most couples do not seek help from professionals. Automated couples counselors could provide help to many couples who avoid professional help due to cost, logistics, or discomfort disclosing personal problems. We explore reactions to and acceptance of a humanoid robot that takes the role of a couples counselor in promoting positive communication skills among asymptomatic intimate couples. Couples were comfortable with the robot in this role, displaying intimate behavior during the counseling session. They followed the directions of the robot in practicing interpersonal communication skills, were largely satisfied with the experience, and described several advantages to working with a robot compared to human counselors or self-help materials.
|
|
15:15-15:30, Paper Tu2B.6 | Add to My Program |
The Influence of Individual Social Traits on Robot Learning in a Human-Robot Interaction |
Guedjou, Hakim | UPMC Univ |
Boucenna, Sofiane | CNRS - Cergy-Pontoise Univ |
Xavier, Jean | CHU Pitié-Salpêtrière |
Cohen, David | APHP, Department of Child and Adolescent Psychiatry |
Chetouani, Mohamed | Univ. Pierre Et Marie Curie |
Keywords: Applications of Social Robots, Social Intelligence for Robots, Machine Learning and Adaptation
Abstract: Interactive Machine Learning considers that a robot learns with and/or from a human. In this paper, we investigate the impact of human social traits on robot learning. We explore social traits such as age (children vs. adults) and pathology (typically developing children vs. children with autism spectrum disorders). In particular, we consider learning to recognize both the postures and the identity of a human partner. A human-robot posture imitation learning architecture, based on a neural network, is used to develop a multi-task learning framework. This architecture exploits three learning levels: 1) visual feature representation, 2) posture classification and 3) human partner identification. During the experiment the robot interacts with children with autism spectrum disorders (ASD), typically developing (TD) children and healthy adults. Previous work assessed the impact of these social traits on learning at the group level. In this paper, we focus on analyzing individuals separately. The results show that the robot is affected by the social traits of the individuals in these different groups. First, the architecture needs to learn more visual features when interacting with a child with ASD (compared to a TD child) or with a TD child (compared to an adult). However, this surplus in the number of neurons helped the robot to improve posture recognition for TD children but not for children with ASD. Second, preliminary results show that this need for a surplus of neurons when interacting with children with ASD also generalizes to the identity recognition task.
|
|
Tu2C Regular Session, Belem I |
Add to My Program |
Rehabilitation and Assistive Robotics (II) |
|
|
Chair: Chugo, Daisuke | Kwansei Gakuin Univ |
Co-Chair: Papageorgiou, Xanthi S. | National Tech. Univ. of Athens |
|
14:00-14:15, Paper Tu2C.1 | Add to My Program |
Measuring Multimodal Deformations in Soft Inflatable Actuators Using Embedded Strain Sensors |
Hart, Alexander | Georgia Inst. of Tech |
Cahoon, Thomas | Georgia Inst. of Tech |
Hammond III, Frank L. | Georgia Inst. of Tech |
Keywords: Multi-modal Situation Awareness and Spatial Cognition, Assistive Robotics, Robots in Education, Therapy and Rehabilitation
Abstract: The intrinsic mechanical compliance that makes soft robotic systems ideal for safe, adaptive physical interactions in the presence of uncertainty also poses significant challenges in control. The highly-nonlinear deformations that soft robots undergo are difficult to model and predict, and are even harder to measure and modulate when the soft structures are exposed to complex mechanical loads. This paper presents a method for tracking the deformation modes of soft inflatable bending actuators used in grasp assist devices. Soft strain sensors are designed and strategically placed within the body of a bending actuator to allow measurement of the primary deflection mode – flexion/extension – and measurement of axial twist and lateral deflection modes which are induced by grasping forces. Multimodal bending sensor prototypes were tested independently and embedded in pneumatic bending actuators. Results demonstrated varying levels of measurement accuracy and highlighted the challenges of tracking motion in soft devices. Multimodal sensors were also evaluated in a virtual proprioception experiment and demonstrated efficacy in providing sensory capabilities for human augmentation devices.
|
|
14:15-14:30, Paper Tu2C.2 | Add to My Program |
The Wheelie - a Facial Expression Controlled Wheelchair Using 3D Technology |
Pinheiro, Paulo Gurgel | HOOBOX Robotics |
Gurgel Pinheiro, Cláudio | HOOBOX Robotics |
Cardozo, Eleri | UNICAMP |
Keywords: Assistive Robotics, Human Factors and Ergonomics, Novel Interfaces and Interaction Modalities
Abstract: This work presents the Wheelie, a computer program capable of detecting and translating facial expressions into commands to control equipment, such as wheelchairs or assistive robotic vehicles, using 3D technology. Every year, degenerative diseases and traumas put thousands of people into situations that prevent them from controlling the joystick of a wheelchair with their hands. Most current technologies, such as those requiring the user to wear body sensors to control the wheelchair, are considered invasive and uncomfortable. The Wheelie is a solution that does not require the user to wear body sensors, using instead a 3D camera pointed at the user's face. We call this solution "the mathematics behind the smile": it classifies nine facial expressions in real time, such as smiles, kisses, and raised eyebrows, which are translated into steering commands for the wheelchair (turn right, go forward, and so on). This work evaluates the use of facial expressions to drive commercially available wheelchairs in real-life situations. A series of experiments was conducted in order to assess the efficiency of the command acquisition process and the user experience of driving a wheelchair through facial expressions.
|
|
14:30-14:45, Paper Tu2C.3 | Add to My Program |
Multimodal Sensory Feedback for Virtual Proprioception in Powered Upper-Limb Prostheses |
Lee, Joshua | Georgia Inst. of Tech |
Choi, Mi Hyun | Georgia Inst. of Tech |
Jung, Ji Hwan | Georgia Inst. of Tech |
Hammond III, Frank L. | Georgia Inst. of Tech |
Keywords: Assistive Robotics, Cognitive and Sensorimotor Development, Multi-modal Situation Awareness and Spatial Cognition
Abstract: This paper demonstrates the use of mechanotactile feedback to provide humans with virtual proprioception of their prosthetic devices. Traditional prostheses provide little or no sensory feedback, requiring the user to visually inspect many tasks performed with the device. Virtual proprioception can allow humans to incorporate the kinematic and kinetic states of an external device into their body image, leading to greater physical intuition of device activity, lower cognitive load, more reliable usage models, and more dexterous manipulation. Vibrotactile stimuli are used to display sensory information about grasp aperture, grasp force, and object surface texture through a powered split-hook prosthesis. Experimental evaluation of manipulation with mechanotactile-based virtual proprioception demonstrated a strong capability to accurately determine object properties (85.4% success) without the need for visual inspection.
|
|
14:45-15:00, Paper Tu2C.4 | Add to My Program |
What’s “up”? - Resolving Interaction Ambiguity through Non-Visual Cues for a Robotic Dressing Assistant |
Chance, Gregory | Univ. of the West of England |
Caleb-Solly, Praminda | Univ. of the West of England |
Jevtić, Aleksandar | Inst. of Robotics and Industrial Informatics, CSIC-UPC |
Dogramadzi, Sanja | Univ. of the West of England |
Keywords: Assistive Robotics, Non-verbal Cues and Expressiveness
Abstract: Robots that can assist in activities of daily living (ADL), such as dressing assistance, need to be capable of intuitive and safe interaction. Vision systems are often used to provide information on the position and movement of the robot and user. However, in a dressing context, technical complexity, occlusion and concerns over user privacy push research to investigate other approaches for human-robot interaction (HRI). We analysed verbal, proprioceptive and force feedback from 18 participants during a human-human dressing experiment in which users received dressing assistance from a researcher mimicking robot behaviour. This paper investigates the occurrence of contextually specific deictic speech in an assisted dressing task and how any ambiguity could be resolved to ensure safe and reliable HRI. We focus on one of the most frequently occurring deictic words, “up”, which was captured over 300 times during the experiments and is used as an example of an ambiguous command. We attempt to resolve the ambiguity of these commands through predictive models. These models were used to predict end-effector choice and the direction in which the garment should move. The model for predicting end-effector choice achieved 70.4% accuracy based on the user’s head orientation. For predicting garment direction, the model used the angle of the user’s arm and achieved 87.8% accuracy. We also found that additional categories such as the starting position of the user’s arms and end-effector height may improve the accuracy of a predictive model. We present suggestions on how these inputs may be attained through non-visual means, for example through haptic perception of end-effector position, proximity sensors and acoustic source localisation.
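As an illustration of the kind of predictive model the abstract describes, the following minimal Python sketch trains two toy classifiers: one mapping head yaw to end-effector choice and one mapping elbow angle to garment direction. All feature names, data points and labels are invented for illustration; this is not the authors' model or data.

```python
# Illustrative sketch only: a toy predictor for resolving the deictic
# command "up" from two hypothetical scalar features.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: head yaw (degrees) -> intended end effector
# (0 = left arm, 1 = right arm).
head_yaw = np.array([[-30.0], [-22.0], [-5.0], [8.0], [20.0], [35.0]])
end_effector = np.array([0, 0, 0, 1, 1, 1])
effector_model = LogisticRegression().fit(head_yaw, end_effector)

# Hypothetical training data: elbow angle (degrees) -> garment direction
# (0 = move along forearm, 1 = move along upper arm).
elbow_angle = np.array([[160.0], [150.0], [120.0], [95.0], [80.0], [60.0]])
direction = np.array([0, 0, 0, 1, 1, 1])
direction_model = LogisticRegression().fit(elbow_angle, direction)

def resolve_up(yaw_deg: float, elbow_deg: float) -> tuple:
    """Map an ambiguous 'up' to an end effector and a garment direction."""
    arm = "right" if effector_model.predict([[yaw_deg]])[0] else "left"
    along = "upper arm" if direction_model.predict([[elbow_deg]])[0] else "forearm"
    return arm, along

print(resolve_up(12.0, 85.0))  # e.g. ('right', 'upper arm')
```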
|
|
15:00-15:15, Paper Tu2C.5 | Add to My Program |
A Taxonomy of Preferences for Physically Assistive Robots |
Canal, Gerard | CSIC-UPC |
Alenyà, Guillem | CSIC-UPC |
Torras, Carme | Csic - Upc |
Keywords: Assistive Robotics, Personalities for Robotic or Virtual Characters
Abstract: Assistive devices and technologies are becoming common and some commercial products are starting to become available. However, the deployment of robots able to physically interact with a person in an assistive manner is still a challenging problem. Apart from the design and control, the robot must be able to adapt to the user it is attending to in order to become a useful tool for caregivers. This robot behavior adaptation comes through the definition of user preferences for the task, such that the robot can act in the user's desired way. This article presents a taxonomy of user preferences for assistive scenarios, including physical interactions, that may be used to improve robot decision-making algorithms. The taxonomy categorizes the preferences based on their semantics and possible uses. We propose a categorization into two levels of application (global and specific) as well as two types (primary and modifier). Examples of real preference classifications are presented for three assistive tasks: feeding, shoe fitting and coat dressing.
|
|
15:15-15:30, Paper Tu2C.6 | Add to My Program |
Development of an Upper Limb Neuroprosthesis to Voluntarily Control Elbow and Hand |
Ogiri, Yosuke | Yokohama National Univ |
Yamanoi, Yusuke | Yokohama National Univ |
Nishino, Wataru | Yokohama National Univ |
Kato, Ryu | Yokohama National Univ |
Takagi, Takehiko | Tokai Univ |
Yokoi, Hiroshi | The Univ. of Electro-Communications |
Keywords: Robots in Education, Therapy and Rehabilitation, Assistive Robotics
Abstract: This work reports the research and development of a lightweight neuroprosthesis that allows control of impaired motion using voluntary biological signals. The total weight of the upper limb neuroprosthesis is 900 g, which is 40% lighter than commercially available ones. For a trans-humeral amputee who had undergone targeted muscle reinnervation (TMR) surgery, pattern classification of five motions was possible using surface electromyograms (s-EMG) extracted from four dry electrodes.
|
|
Tu2D Regular Session, Ajuda II |
Add to My Program |
Linguistic Communication and Dialogue |
|
|
Chair: Wada, Kazuyoshi | Tokyo Metropolitan Univ |
Co-Chair: Rosenthal-von der Pütten, Astrid Marieke | Univ. Duisburg-Essen |
|
14:00-14:15, Paper Tu2D.1 | Add to My Program |
Learning to Understand Questions on the Task History of a Service Robot |
Perera, Vittorio | Carnegie Mellon Univ |
Veloso, Manuela | Carnegie Mellon Univ |
Keywords: Linguistic Communication and Dialogue
Abstract: We present a novel approach to enable a mobile service robot to understand questions about the history of tasks it has executed. We frame the problem of understanding such questions as grounding an input sentence to a query that can be executed on the logs recorded by the robot during its runs. We define a query as an operation followed by a set of filters. In order to ground a sentence to a query we introduce a joint probabilistic model. The model is composed of a shallow semantic parser and a knowledge base that stores and re-uses the groundings of sentences. The Knowledge Base and its predicates are designed to match the structure of a query. Our results show that, by using such a Knowledge Base, the proposed approach requires fewer and fewer corrections as users interact with the system.
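The "operation plus filters" structure of a query lends itself to a compact illustration. The sketch below is hypothetical, not the authors' parser or Knowledge Base: a naive keyword heuristic stands in for the shallow semantic parser, and a plain dictionary caches groundings so repeated questions need no re-parsing.

```python
# Hypothetical sketch of grounding a sentence to a query over a task log.
from dataclasses import dataclass, field

LOG = [  # toy task log the robot recorded during its runs
    {"task": "deliver", "room": "7101", "duration_s": 95},
    {"task": "deliver", "room": "7412", "duration_s": 140},
    {"task": "escort",  "room": "7101", "duration_s": 310},
]

@dataclass
class Query:
    operation: str                       # e.g. "count" or "sum_duration"
    filters: dict = field(default_factory=dict)

    def execute(self, log):
        rows = [r for r in log
                if all(r.get(k) == v for k, v in self.filters.items())]
        if self.operation == "count":
            return len(rows)
        return sum(r["duration_s"] for r in rows)

KB = {}  # sentence -> grounded query, re-used on later interactions

def ground(sentence: str) -> Query:
    if sentence in KB:                   # re-use an earlier grounding
        return KB[sentence]
    q = Query("count")                   # naive keyword stand-in for a parser
    if "how long" in sentence:
        q.operation = "sum_duration"
    if "deliver" in sentence:
        q.filters["task"] = "deliver"
    KB[sentence] = q
    return q

print(ground("how many times did you deliver?").execute(LOG))    # -> 2
print(ground("how long did you spend delivering?").execute(LOG)) # -> 235
```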
|
|
14:15-14:30, Paper Tu2D.2 | Add to My Program |
Reprompts As Error Handling Strategy in Human-Agent-Dialog? User Responses to a System's Display of Non-Understanding |
Opfermann, Christiane Silke | Univ. of Duisburg-Essen |
Pitsch, Karola | Univ. of Duisburg-Essen |
Keywords: Linguistic Communication and Dialogue, Anthropomorphic Robots and Virtual Humans, Detecting and Understanding Human Activity
Abstract: In speech-based technical systems, a ‘reprompt’ can be deployed as a verbally non-explicit and semantically unspecific practice of making a failure-to-understand transparent. Users’ repeats or rephrasings of their previous answers might lead to further non-understandings, resulting in further reprompts by the system. On the basis of a Wizard-of-Oz video corpus in a schedule management setting with an embodied conversational agent and the special user groups of elderly and mildly cognitively impaired persons, we investigate, with a conversation-analytic approach, the interactional impact of threefold reprompts on subsequent user actions following an appointment suggestion. We focus especially on the types of user actions during the course of multiple reprompts in a confirmation/disconfirmation context. Analysis reveals fine-grained user response types, showing that all users ratify the first reprompt. After the second and third ones, users tend either to add problem manifestations or initiations of the relevant next move, or to substitute their previous answer with these types of actions. While additional or substituting problem manifestations call for more specific and linguistically restricting error handling practices, the user-initiated next moves are technically exploitable as implicit cues for confirmation in the presented special yes/no context.
|
|
14:30-14:45, Paper Tu2D.3 | Add to My Program |
Context-Aware Selection of Multi-Modal Conversational Fillers in Human-Robot Dialogues |
Gallé, Matthias | Xerox Res. Centre Europe |
Kynev, Ekaterina | Xerox Res. Centre Europe |
Monet, Nicolas | Xerox Res. Centre Europe |
Legras, Christophe | Xerox Res. Centre Europe |
Keywords: Linguistic Communication and Dialogue, Multimodal Interaction and Conversational Skills, Machine Learning and Adaptation
Abstract: We study the problem of handling the inter-turn pauses in a human-robot dialogue. In order to reduce the impression of elapsed time while the robot transcribes, understands and starts uttering a response, we propose to automatically generate conversational fillers to fill the silences. These fillers combine verbal utterances with body movements. We propose a Bayesian model that samples fillers whose production duration is close to the expected computation time needed by the robot. To increase the sensation of engagement, the fillers also include contextual information gathered during the dialogue (such as the name of the interlocutor), if this information is present with high confidence. We evaluate this approach with an indirect user study measuring time perception, comparing three different strategies to bridge the inter-turn time (silence, static filler and our approach). The results show that users prefer the dynamic fillers, even when the conversation is objectively shorter with one of the other strategies.
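The duration-matching idea can be illustrated in a few lines of Python. The filler inventory, durations and Gaussian weighting below are invented; the sketch only shows the shape of sampling a filler whose production time is likely to be close to the robot's expected response latency, not the paper's Bayesian model.

```python
# Hedged illustration: sample a filler whose duration matches the
# robot's predicted computation time. Inventory and sigma are invented.
import math
import random

FILLERS = [  # (utterance + gesture, production duration in seconds)
    ("Hmm...", 0.8),
    ("Let me think about that,", 1.6),
    ("That's a good question, <nod>", 2.4),
    ("One moment while I check, <look away>", 3.5),
]

def sample_filler(expected_latency_s: float, sigma: float = 0.6) -> str:
    # Weight each filler by a Gaussian likelihood that its duration
    # matches the predicted latency, then sample proportionally.
    weights = [math.exp(-((d - expected_latency_s) ** 2) / (2 * sigma ** 2))
               for _, d in FILLERS]
    utterance, _ = random.choices(FILLERS, weights=weights, k=1)[0]
    return utterance

print(sample_filler(expected_latency_s=2.2))
```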
|
|
14:45-15:00, Paper Tu2D.4 | Add to My Program |
Strategies and Mechanisms to Enable Dialogue Agents to Respond Appropriately to Indirect Speech Acts |
Briggs, Gordon | NAVAL Res. Lab |
Scheutz, Matthias | Tufts Univ |
Keywords: Linguistic Communication and Dialogue, Social Intelligence for Robots, Robotic Etiquette
Abstract: Humans often use indirect speech acts (ISAs) when issuing directives. Much of the work in handling ISAs in computational dialogue architectures has focused on correctly identifying and handling the underlying non-literal meaning. There has been less attention devoted to how linguistic responses to ISAs might differ from those given to literal directives and how to enable different response forms in these computational dialogue systems. In this paper, we present ongoing work toward developing dialogue mechanisms within a cognitive, robotic architecture that enables a richer set of response strategies to non-literal directives.
|
|
15:00-15:15, Paper Tu2D.5 | Add to My Program |
Dealing with 'Long Turns' Produced by Users of an Assistive System: How Missing Uptake and Recipiency Lead to Turn Increments |
Cyra, Katharina | Univ. of Duisburg-Essen |
Pitsch, Karola | Univ. of Duisburg-Essen |
Keywords: Multimodal Interaction and Conversational Skills, Linguistic Communication and Dialogue, Assistive Robotics
Abstract: Based on a user study, we start from the observation that ‘long turns’ uttered by users towards an assistive system constitute a challenge for the dialog management of a voice-operated system. Assuming an interactional perspective, we address the question of how ‘long turns’ emerge in interaction. We suggest conceiving of these utterances as being co-constructed by both the user and the multimodal conduct of the technical system. In this paper, we examine how such ‘long turns’ emerge step by step, in terms of an initial utterance being expanded by so-called ‘increments’, as well as their specific structure. Analysis shows that such utterance expansions (causing ‘long turns’) react to the user facing problems with a lack of uptake or display of recipiency by the technical system. Combining qualitative micro-analysis with quantification, we discuss specific interactional contexts of turn increments, the different actions performed by them and the role of uptake resources in the light of designing autonomous speech-based systems.
|
|
15:15-15:30, Paper Tu2D.6 | Add to My Program |
Recommendation Dialogue System through Pragmatic Argumentation |
Cheng, Ching-Ying | NTU |
Qian, Xiaobei | National Taiwan Univ |
Tseng, Shih-Huan | National Taiwan Univ |
Fu, Li-Chen | National Taiwan Univ |
Keywords: Applications of Social Robots, Linguistic Communication and Dialogue, Robot Companions and Social Robots
Abstract: In an ageing society, we expect that a robotic caregiver is able to persuade the elderly to adopt healthier behavior. In this work, pragmatic argument is adopted to make the elderly realize that a choice beneficial for health is really worthwhile, such as eating suitable fruits. Based on this concept, an adaptive recommendation dialogue system through pragmatic argumentation is proposed. There are three objectives in this system. First, a knowledge base for pragmatic argument construction is built, which concerns not only the effect of a decision but also the reason for the effect. Second, the robot is endowed with the ability to make recommendations that adapt to different states of the elder, and the recommendation is determined by integrating both the robot's and the elder's preferences for different perspectives, so that the robot knows how to reach a compromise with the elder. Lastly, by learning about the elder's preference for perspectives in conversation, the robot tries to select a perspective for constructing arguments such that the elder can be more easily convinced to accept its recommendation. We invited 21 volunteers to interact with the robot. The experimental results showed that the recommendation system has the potential to affect the decision making of the elderly and help them pursue a healthier life.
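The preference-integration step can be pictured as a weighted combination of the robot's and the elder's preferences over argument perspectives. The perspectives and weights below are invented placeholders, not the paper's knowledge base.

```python
# Hypothetical sketch: combine two preference sets over perspectives and
# pick the best compromise for building a pragmatic argument.
robot_pref = {"health": 0.9, "taste": 0.2, "cost": 0.4}  # robot's priorities
elder_pref = {"health": 0.3, "taste": 0.8, "cost": 0.6}  # learned in dialogue

def choose_perspective(w_robot: float = 0.5) -> str:
    """Pick the perspective that best compromises both preference sets."""
    combined = {p: w_robot * robot_pref[p] + (1 - w_robot) * elder_pref[p]
                for p in robot_pref}
    return max(combined, key=combined.get)

print(choose_perspective())             # -> 'health' (balanced compromise)
print(choose_perspective(w_robot=0.2))  # -> 'taste' (defer to the elder)
```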
|
|
Tu2E Regular Session, Ajuda III |
Add to My Program |
Human-Robot Collaboration and Cooperation (I) |
|
|
Chair: Eyssel, Friederike | Bielefeld Univ |
Co-Chair: Kuhnert, Barbara | Univ. of Freiburg |
|
14:00-14:15, Paper Tu2E.1 | Add to My Program |
Co-Representation of Human-Generated Actions vs. Machine-Generated Actions: Impact on Our Sense of We-Agency? |
Sahaï, Aïsha | ENS & ONERA |
Pacherie, Elisabeth | Inst. Jean Nicod, ENS |
Grynszpan, Ouriel | Inst. Des Systèmes Intelligents Et De Robotique |
Berberian, Bruno | ONERA |
Keywords: Cooperation and Collaboration in Human-Robot Teams
Abstract: Many studies suggest that individuals are not able to build a sense of we-agency during joint actions with automated artificial systems. We sought to examine whether or not this lack of a sense of control is linked to individuals’ inability to represent automaton-generated actions in their own cognitive system. Indeed, during human interactions, we automatically represent our partner’s actions in our own sensorimotor system. This might sustain our capacity to build a sense of we-agency over our partner-generated actions. However, to our knowledge, no studies have investigated the potential relation between our ability to use our sensorimotor system to represent a partner’s action and we-agency. Our approach consisted of a pilot study coupling a Simon target detection task, wherein reaction times (RTs) served as an index of action co-representation, with an implicit measure of one’s sense of agency as indicated by the intentional binding phenomenon. The preliminary observations suggested that individuals could represent other-generated actions and have a sense of agency over these actions provided that their partner was another human and not an artificial automated system.
|
|
14:15-14:30, Paper Tu2E.2 | Add to My Program |
Proactive, Incremental Learning of Gesture-Action Associations for Human-Robot Collaboration |
Shukla, Dadhichi | Univ. of Innsbruck |
Erkent, Ozgur | Univ. of Innsbruck |
Piater, Justus | Univ. of Innsbruck |
Keywords: Cooperation and Collaboration in Human-Robot Teams, Detecting and Understanding Human Activity, HRI and Collaboration in Manufacturing Environments
Abstract: Identifying an object of interest, grasping it, and handing it over are key capabilities of collaborative robots. In this context we propose a fast, supervised learning framework for learning associations between human hand gestures and the intended robotic manipulation actions. This framework enables the robot to learn associations on the fly while performing a task with the user. We consider a domestic scenario of assembling a kid’s table where the role of the robot is to assist the user. To facilitate the collaboration we incorporate the robot’s gaze into the framework. The proposed approach is evaluated in simulation as well as in a real environment. We study the effect of accurate gesture detection on the number of interactions required to complete the task. Moreover, our quantitative analysis shows how purposeful gaze can significantly reduce the amount of time required to achieve the goal.
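A minimal sketch of on-the-fly gesture-action association learning might look like the following; the gesture and action labels are invented, and the robot-gaze component of the framework is omitted.

```python
# Illustrative sketch: incrementally learn which robot action each human
# hand gesture is associated with, reinforced by user confirmations.
from collections import defaultdict

counts = defaultdict(lambda: defaultdict(int))  # gesture -> action -> count

def update(gesture: str, confirmed_action: str) -> None:
    """Reinforce an association after the user confirms the action."""
    counts[gesture][confirmed_action] += 1

def predict(gesture: str):
    """Return the most frequently associated action, or None if unseen."""
    actions = counts[gesture]
    return max(actions, key=actions.get) if actions else None

# During the assembly task the user corrects the robot once, after which
# the pointing gesture is reliably mapped to a handover.
update("point_at_leg", "grasp")
update("point_at_leg", "hand_over")
update("point_at_leg", "hand_over")
print(predict("point_at_leg"))  # -> 'hand_over'
```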
|
|
14:30-14:45, Paper Tu2E.3 | Add to My Program |
Legible Action Selection in Human-Robot Collaboration |
Zhu, Huaijiang | Tech. Univ. of Munich |
Gabler, Volker | Tech. Univ. München |
Wollherr, Dirk | Tech. Univ. München |
Keywords: Cooperation and Collaboration in Human-Robot Teams, HRI and Collaboration in Manufacturing Environments, Detecting and Understanding Human Activity
Abstract: Humans are error-prone in the presence of multiple similar tasks. While Human-Robot Collaboration (HRC) brings the advantage of combining the respective strengths of humans and robots, it also requires the robot to communicate the task goal clearly to the human collaborator. We formalize such problems in interactive assembly tasks with hidden-goal Markov decision processes (HGMDPs) to enable the symbiosis of human intention recognition and robot intention expression. In order to avoid prohibitive computational requirements, we provide a myopic heuristic along with a feature-based state abstraction method for assembly tasks to approximate the solution of the resulting HGMDP. A user study with human subjects in round-based LEGO assembly tasks shows that our algorithm improves HRC and helps human collaborators when the task goal is unclear to them.
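A toy version of a myopic legibility heuristic in the spirit of the HGMDP approximation is sketched below; the goals, actions and probabilities are invented rather than taken from the paper.

```python
# Hedged sketch: greedily pick the assembly action after which an
# observer's posterior belief in the robot's true goal is highest.
GOALS = ["tower", "bridge"]
# P(action | goal): how typical each candidate next action is per goal
# (invented numbers).
P_ACTION_GIVEN_GOAL = {
    "tower":  {"stack_red": 0.7, "place_blue_left": 0.3},
    "bridge": {"stack_red": 0.4, "place_blue_left": 0.6},
}

def legible_action(true_goal: str) -> str:
    """Myopic choice under a uniform prior over goals."""
    prior = {g: 1.0 / len(GOALS) for g in GOALS}
    best_action, best_posterior = None, -1.0
    for action in P_ACTION_GIVEN_GOAL[true_goal]:
        # Bayesian posterior the human would assign to the true goal
        # after observing this action.
        evidence = sum(P_ACTION_GIVEN_GOAL[g][action] * prior[g] for g in GOALS)
        posterior = (P_ACTION_GIVEN_GOAL[true_goal][action]
                     * prior[true_goal] / evidence)
        if posterior > best_posterior:
            best_action, best_posterior = action, posterior
    return best_action

print(legible_action("tower"))  # -> 'stack_red', the most goal-revealing move
```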
|
|
14:45-15:00, Paper Tu2E.4 | Add to My Program |
Adaptation to a Humanoid Robot in a Collaborative Joint Task |
Vannucci, Fabio | Istituto Italiano Di Tecnologia |
Sciutti, Alessandra | Istituto Italiano Di Tecnologia |
Jacono, Marco | Istituto Italiano Di Tecnologia |
Rea, Francesco | Istituto Italiano Di Tecnologia |
Sandini, Giulio | Italian Inst. of Tech |
Keywords: Cooperation and Collaboration in Human-Robot Teams, Human Factors and Ergonomics, HRI and Collaboration in Manufacturing Environments
Abstract: Mutual synchronization plays a decisive role in effective collaboration in human joint tasks. Interaction between humans and robots needs to show similar emergent coordination. To this aim, models of human synchronization have recently been ported to collaborative robots with success [1]. However, it is also important to consider under which conditions the human partner is willing to adapt to the robot while performing a joint task. The main research goal of this study is to understand whether the temporal adaptation usually observed during human-human interaction also occurs during human-robot cooperation. We present a collaborative joint task engaging both a human subject and the humanoid robot iCub in pursuing an identical common goal: putting blocks into a box. We examine human action timing, evinced from motion capture data, to investigate whether humans adapt their behavior to the robot. We compare a quantitative measure of such adaptation with the subjective evaluation extracted from questionnaires. We observe that on average participants tend to adapt to their robotic partner. However, looking at individual behaviors, although the vast majority of subjects reported having been influenced by the robot, only a few showed a clear adaptation to its timing. We conclude by discussing the potential factors influencing human adaptability, suggesting that the robot's speed of execution is a determinant of the coordination.
|
|
15:00-15:15, Paper Tu2E.5 | Add to My Program |
A Human Workload Assessment Algorithm for Collaborative Human-Machine Teams |
Heard, Jamison | Vanderbilt Univ |
Harriott, Caroline | Draper |
Adams, Julie | Oregon State Univ |
Keywords: Cooperation and Collaboration in Human-Robot Teams, Monitoring of Behaviour and Internal States of Humans, Machine Learning and Adaptation
Abstract: Mass casualty events caused by a biological weapon require fully capable first response teams. However, human first responders are equipped with protective gear, which limits their ability to complete tasks. Robots can be employed to work collaboratively with the first responders in order to augment the human’s reduced abilities. The robot needs to understand and adapt to the human’s workload level in order for the human-machine team to complete tasks effectively. The automatic detection of human workload levels can provide valuable insight into the human’s capabilities, as workload has a direct relationship with task performance. The robot can monitor objective metrics of the human’s workload in order to accurately estimate it via a workload assessment algorithm. The algorithm must be able to assess overall workload and the components of workload, in order for the robot to correctly adapt its interactions or reallocate tasks among the team. A novel workload assessment algorithm that provides an accurate estimate of overall workload and each workload component is presented and evaluated. The algorithm is capable of distinguishing between high and low workload conditions; however, the algorithm’s workload values correlate poorly with a generated workload model. Modifications to enhance the algorithm’s capabilities are discussed and will be investigated in future work.
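As a purely illustrative sketch of mapping objective metrics to overall workload and its components, the snippet below uses invented features and fixed linear weights; the paper's actual algorithm is not reproduced here.

```python
# Hypothetical sketch: estimate workload components from normalized
# objective metrics via per-component linear models (invented weights).
import numpy as np

FEATURES = ["heart_rate", "speech_rate", "step_count"]  # normalized to [0, 1]
WEIGHTS = {  # one placeholder weight vector per workload component
    "cognitive": np.array([0.5, 0.4, 0.1]),
    "physical":  np.array([0.3, 0.0, 0.7]),
    "overall":   np.array([0.4, 0.2, 0.4]),
}

def assess(sample: dict) -> dict:
    """Return a score per workload component for one metric sample."""
    x = np.array([sample[f] for f in FEATURES])
    return {name: float(w @ x) for name, w in WEIGHTS.items()}

estimate = assess({"heart_rate": 0.8, "speech_rate": 0.3, "step_count": 0.9})
print(estimate)  # a robot teammate might reallocate tasks when 'overall' is high
```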
|
|
15:15-15:30, Paper Tu2E.6 | Add to My Program |
A Robust Multimodal Fusion Framework for Command Interpretation in Human-Robot Cooperation |
Cacace, Jonathan | Univ. of Naples |
Finzi, Alberto | Univ. of Naples |
Lippiello, Vincenzo | Univ. of Naples FEDERICO II |
Keywords: Cooperation and Collaboration in Human-Robot Teams, Multimodal Interaction and Conversational Skills, HRI and Collaboration in Manufacturing Environments
Abstract: In this work, we present a novel multimodal interaction framework supporting robust human-robot communication. We consider a scenario where a human operator can exploit multiple communication channels to interact with one or more robots in order to accomplish shared tasks. Moreover, we assume that the human is not fully dedicated to the robot control, but also involved in other activities, hence only able to interact with the robotic system in a sparse and incomplete manner. In this context, several human or environmental factors could bring the operator to involuntarily generate multimodal inputs causing a wrong interpretation of the commands. The main goal of this work is to improve the robustness of human-robot interaction systems in such situations. In particular, we propose a multimodal fusion method based on the following steps: for each communication channel, unimodal classifiers are first deployed in order to generate unimodal interpretations of the human inputs; the unimodal outcomes are then grouped into different multimodal recognition lines, each representing a possible interpretation of a sequence of multimodal inputs; these lines are finally assessed in order to recognize the human commands. We discuss the system at work in a case study in which a human rescuer interacts with a team of flying robots during Search & Rescue missions. In this scenario, we present and discuss a series of real-world experiments in order to demonstrate the effectiveness of the proposed framework.
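The pipeline shape described above, unimodal classifiers whose outcomes are grouped into candidate recognition lines before a final assessment, can be sketched as follows; the channels, commands and scoring rule are invented stand-ins, not the paper's method.

```python
# Hedged sketch of multimodal fusion: per-channel hypotheses are paired
# into candidate "lines" and the best-scoring agreeing line is selected.
from itertools import product

def speech_classifier(audio):   # stand-in for a real unimodal classifier
    return [("take_off", 0.6), ("land", 0.3)]

def gesture_classifier(video):  # stand-in for a real unimodal classifier
    return [("take_off", 0.7), ("go_left", 0.2)]

def fuse(audio, video):
    lines = []
    for (cmd_s, p_s), (cmd_g, p_g) in product(
            speech_classifier(audio), gesture_classifier(video)):
        if cmd_s == cmd_g:  # agreeing channels reinforce a line
            lines.append((cmd_s, p_s * p_g))
    return max(lines, key=lambda line: line[1]) if lines else (None, 0.0)

print(fuse(audio=None, video=None))  # -> ('take_off', 0.42)
```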
|
|
Tu3B Regular Session, Belem II |
Add to My Program |
Social Robotics (III) |
|
|
Chair: Chetouani, Mohamed | Univ. Pierre Et Marie Curie |
Co-Chair: Crick, Christopher | Oklahoma State Univ |
|
16:00-16:15, Paper Tu3B.1 | Add to My Program |
Adapting a Robot's Linguistic Style Based on Socially-Aware Reinforcement Learning |
Ritschel, Hannes | Augsburg Univ |
Baur, Tobias | Augsburg Univ |
Andre, Elisabeth | Augsburg Univ |
Keywords: Social Intelligence for Robots, Multimodal Interaction and Conversational Skills, Robot Companions and Social Robots
Abstract: When looking at Socially Interactive Robots, adaptation to the user's preferences plays an important role in today's Human-Robot Interaction in keeping interaction interesting and engaging over a long period of time. Findings indicate an increase in user engagement for robots with adaptive behavior and personality, but also that it depends on the task context whether a similar or opposing robot personality is preferred. We present an approach based on Reinforcement Learning, which obtains its reward directly from social signals in real time during the interaction, to quickly learn about and dynamically address individual human preferences. Our scenario involves a Reeti robot in the role of a storyteller talking about the main characters in the novel "Alice's Adventures in Wonderland" by generating descriptions with varying degrees of introversion/extraversion. After initial simulation results, an interactive prototype is presented which allows exploring the learning process as it adapts to the human interaction partner's engagement.
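The adaptation loop can be illustrated as a simple bandit over linguistic styles whose reward is an engagement signal; the styles, reward source and epsilon-greedy strategy below are assumptions for the sketch, not the paper's learner.

```python
# Illustrative sketch: a two-armed bandit over linguistic styles,
# rewarded by a (simulated) real-time engagement signal.
import random

styles = {"introverted": 0.0, "extraverted": 0.0}  # running value estimates
counts = {s: 0 for s in styles}
EPSILON = 0.2  # exploration rate

def sense_engagement(style: str) -> float:
    # Stand-in for social-signal processing; this simulated user happens
    # to prefer the extraverted style.
    return random.gauss(0.8 if style == "extraverted" else 0.4, 0.1)

for _ in range(200):
    if random.random() < EPSILON:            # explore
        style = random.choice(list(styles))
    else:                                    # exploit the current best
        style = max(styles, key=styles.get)
    reward = sense_engagement(style)
    counts[style] += 1
    styles[style] += (reward - styles[style]) / counts[style]  # running mean

print(max(styles, key=styles.get))  # converges to 'extraverted'
```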
|
|
16:15-16:30, Paper Tu3B.2 | Add to My Program |
Robot Moderation of a Collaborative Game: Towards Socially Assistive Robotics in Group Interactions |
Short, Elaine Schaertl | Univ. of Southern California |
Mataric, Maja | Univ. of Southern California |
Keywords: Social Intelligence for Robots, Robot Companions and Social Robots, Assistive Robotics
Abstract: This paper presents an algorithm for enabling a robot to act as the moderator in a group interaction centered around a tablet-based assembly game. The algorithm uses one of two different objective functions: one intended to be "performance equalizing", wherein the robot attempts to equalize scoring among users, and another intended to be "performance reinforcing", wherein the robot attempts to help the group score as many points as possible. In an evaluation study with ten groups of three participants, we found that the "performance equalizing" algorithm improved task performance and reduced group cohesion, while the "performance reinforcing" algorithm improved group cohesion and reduced task performance.
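At their core, the two objective functions reduce to choosing whom to support next. The sketch below is a deliberately minimal, hypothetical rendering of that choice, not the paper's moderation algorithm.

```python
# Hedged sketch: pick which player the robot moderator supports next,
# given invented running scores.
scores = {"alice": 12, "bob": 4, "carol": 7}

def choose_target(objective: str) -> str:
    if objective == "performance_equalizing":
        return min(scores, key=scores.get)   # support the lowest scorer
    if objective == "performance_reinforcing":
        return max(scores, key=scores.get)   # amplify the strongest scorer
    raise ValueError(objective)

print(choose_target("performance_equalizing"))   # -> 'bob'
print(choose_target("performance_reinforcing"))  # -> 'alice'
```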
|
|
16:30-16:45, Paper Tu3B.3 | Add to My Program |
Semantic Structure for Robotic Teaching and Learning |
Roy, Sayanti | Oklahoma State Univ |
Kieson, Emily | Oklahoma State Univ |
Abramson, Charles | Oklahoma State Univ |
Crick, Christopher | Oklahoma State Univ |
Keywords: Social Learning and Skill Acquisition Via Teaching and Imitation, Cognitive Skills and Mental Models, Cooperation and Collaboration in Human-Robot Teams
Abstract: Instructing human novices on complex tasks in non-standardized environments is an underexplored potential use for social co-robots, since instruction and skill transfer involving human experts can require an enormous commitment of time and resources. In this paper, we enable a humanoid Baxter robot to build a semantically accessible framework for task learning, teaching and representation via active learning with human experts using hierarchical semantic labels. This process not only helps the robot to learn tasks from expert demonstrations, but later improves the ability of the robot to teach novice human operators. Our results show that the better-understood learning from demonstration (LfD) task is greatly enhanced by the active learning and mutual semantic structure building in an expert-robot partnership, while the robot's ability to teach novices is improved, though the results are suggestive rather than conclusive at this point. We discuss the important aspects and power of learning and teaching from demonstration and how both benefit from communication and joint human-robot creation of semantic hierarchies.
|
|
16:45-17:00, Paper Tu3B.4 | Add to My Program |
The Authority of Appearance: How Robot Features Influence Trait Inferences and Evaluative Responses |
Benitez, Jonathan | Disney Res |
Wyman, Alisa | Disney Res |
Carpinella, Colleen | Disney Res |
Stroessner, Steven | Disney Res |
Keywords: Social Presence for Robots and Virtual Humans, Evaluation Methods and New Methodologies, Non-verbal Cues and Expressiveness
Abstract: Recent research indicates that robots’ physical appearance affects inferences about their social traits, but little research has examined how features may influence evaluative responses (i.e., liking of and willingness to interact). The current studies investigated how robot appearance and function affect evaluative responses. Across four experiments, we assessed how robotic facial features (Study 1) and roles (e.g., companion vs. military; Study 2) influence trait inferences and evaluative responses, and the relationship between them. Study 3 examined the association between robot facial features and roles, and Study 4 examined both facial features and roles to see which drove social judgments. Results indicated that trait inferences and evaluative responses vary in response to both facial features and roles. Trait inferences predicted, with differing strength, evaluative responses towards robots. When pitting facial features against roles, features accounted more strongly for judgments. Implications for human-robot interaction and robot design are considered.
|
|
17:00-17:15, Paper Tu3B.5 | Add to My Program |
Socially-Aware Navigation Planner Using Models of Human-Human Interaction |
Sebastian, Meera | Univ. of Nevada, Reno |
Banisetty, Santosh Balajee | Univ. of Nevada, Reno |
Feil-Seifer, David | Univ. of Nevada, Reno |
Keywords: Robotic Etiquette, Social Learning and Skill Acquisition Via Teaching and Imitation, Social Intelligence for Robots
Abstract: In this paper, we revisit a real-time socially-aware navigation planner which helps a mobile robot to navigate alongside humans in a socially acceptable manner. This navigation planner is a modification of the nav_core package of the Robot Operating System (ROS) presented in earlier work, adapted to use only egocentric sensors. The planner can be utilized to provide safe as well as socially appropriate robot navigation. Primitive features, including the interpersonal distance between the robot and an interaction partner, the distances between these agents, and features of the environment (such as hallways detected in real time), are used to reason about the current state of an interaction. Gaussian Mixture Models (GMMs) are trained over these features from human-human demonstrations of various interaction scenarios. This model is used both to discriminate different human actions related to navigation behavior and to help in the trajectory selection process by providing a social-appropriateness score for a potential trajectory. This paper presents an evaluation done both in simulation and with real human interaction.
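The GMM-based scoring idea can be sketched with scikit-learn; the two features and the synthetic "demonstrations" below are stand-ins for the interpersonal and environment features the paper learns from real human-human data.

```python
# Hedged sketch: fit a GMM on demonstration features, then score candidate
# trajectories by their likelihood under the learned model.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic demonstrations: [interpersonal distance (m), distance to wall (m)]
demos = np.vstack([
    rng.normal([1.2, 0.8], 0.15, size=(100, 2)),   # walking alongside
    rng.normal([2.0, 0.5], 0.20, size=(100, 2)),   # passing in a hallway
])
gmm = GaussianMixture(n_components=2, random_state=0).fit(demos)

def social_score(trajectory: np.ndarray) -> float:
    """Mean log-likelihood of a trajectory's feature points under the GMM."""
    return float(gmm.score_samples(trajectory).mean())

polite = np.array([[1.25, 0.75], [1.30, 0.80]])
rude = np.array([[0.30, 0.10], [0.20, 0.10]])  # too close to person and wall
print(social_score(polite) > social_score(rude))  # -> True
```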
|
|
17:15-17:30, Paper Tu3B.6 | Add to My Program |
Pardon the Rude Robot: Social Cues Diminish Reactance to High Controlling Language |
Ghazali, Aimi Shazwani | Eindhoven Univ. of Tech |
Ham, Jaap | Eindhoven Univ. of Tech |
Barakova, Emilia I. | Eindhoven Univ. of Tech |
Markopoulos, Panos | Eindhoven Univ. of Tech |
Keywords: Interaction with Believable Characters, Linguistic Communication and Dialogue, Social Intelligence for Robots
Abstract: In many future social interactions between robots and humans, robots may need to convince people to change their behavior. People may dislike and resist such persuasive attempts, a phenomenon known as psychological reactance. This paper examines how reactance, measured in terms of negative cognitions and feelings of anger, is affected by the persuading agent’s social agency cues and the level of controlling language used. Participants played a decision-making game in which a persuasive agent attempted to influence their choices while exhibiting high or low controlling language and one of three different levels of social agency. Results suggest that controlling language leads to increased reactance when the persuasive agent does not exhibit social cues. Surprisingly, reactance is not affected by controlling language in the same way when the persuading agent is a social robot exhibiting social cues.
|
|
Tu3C Regular Session, Ajuda II |
Add to My Program |
Non-Verbal Cues and Expressiveness |
|
|
Chair: Kuno, Yoshinori | Saitama Univ |
Co-Chair: Kawamura, Kazuhiko | Vanderbilt Univ |
|
16:00-16:15, Paper Tu3C.1 | Add to My Program |
A Hug from a Robot Encourages Prosocial Behavior |
Shiomi, Masahiro | ATR |
Nakata, Aya | NAIST |
Kanbara, Masayuki | Nara Inst. of Science and Tech |
Hagita, Norihiro | ATR |
Keywords: Non-verbal Cues and Expressiveness, Applications of Social Robots, Robot Companions and Social Robots
Abstract: This paper presents the effects of being hugged by a robot on encouraging prosocial behaviors. In human-human interaction, touches, including hugs, are essential for communication with others. Touches also show interesting effects, including the “Midas touch,” which encourages prosocial behaviors from the people who have been touched. Previous research demonstrated that people who touched a robot formed positive impressions of it, but did not clarify whether being hugged by a robot causes the Midas touch effect, i.e., positively influences engagement in prosocial behaviors. We developed a huge, teddy-bear-like robot that can give reciprocal hugs to people and experimentally investigated its effects on their behaviors. In the experiment, the robot first asked participants to give it a hug and then asked them to make charitable donations in two conditions: with or without a reciprocated hug. Our experimental results with 38 participants showed that those who were hugged by the robot donated more money than those who only hugged the robot, i.e., without a reciprocated hug.
|
|
16:15-16:30, Paper Tu3C.2 | Add to My Program |
Enriching Robot's Actions with Affective Movements |
Angel-Fernandez, Julian M. | Vienna Univ. of Tech |
Bonarini, Andrea | Pol. Di Milano |
Keywords: Non-verbal Cues and Expressiveness, Computational Architectures
Abstract: Emotions are considered by many researchers to be beneficial elements in social robotics, since they can enrich human-robot interaction. Although there have been works studying emotion expression in robots, the mechanisms for expressing emotion are usually tightly integrated with the rest of the system. This limits the possibility of using these approaches in other applications. This paper presents a system that was initially created to facilitate the study of emotion projection, but has been designed to enable its adaptation to other fields. The emotional enrichment system has been envisioned to be used with any action decision system. A description of the system components and their characteristics is provided. The system has been adapted to two different platforms with different degrees of freedom: Keepon and Triskarino.
|
|
16:30-16:45, Paper Tu3C.3 | Add to My Program |
Gesture Mimicry in Social Human-Robot Interaction |
Stolzenwald, Schachar Janis Immanuel | Univ. of Bristol |
Bremner, Paul | Univ. of the West of England |
Keywords: Non-verbal Cues and Expressiveness, Creating Human-Robot Relationships
Abstract: Mimicry of social behaviours is an imitation behaviour among humans that benefits building rapport and teamwork and aids blending into social situations. As social robots become more popular, these aspects gain importance in robotic behaviour design. A key part of human communication to which mimicry might be applied is co-verbal gesture, which we have investigated. In our proposed system, dynamic cues are extracted from human gestures and adapted to a robot's gesture motion. We have conducted empirical studies to validate this mimicry approach. Our results support the concept of imitating gesture features as a successful method for robotic gesture mimicry, which can help support human-robot cooperation and build rapport.
|
|
16:45-17:00, Paper Tu3C.4 | Add to My Program |
A Speech-Driven Pupil Response Robot Synchronized with Burst-Pause of Utterance |
Sejima, Yoshihiro | Okayama Prefectural Univ |
Egawa, Shoichi | Okayama Prefectural Univ |
Maeda, Ryosuke | Okayama Prefectural Univ |
Sato, Yoichiro | Okayama Prefectural Univ |
Watanabe, Tomio | Okayama Prefectural Univ |
Keywords: Non-verbal Cues and Expressiveness, Motivations and Emotions in Robotics, Innovative Robot Designs
Abstract: We have developed a pupil response robot called “Pupiloid” that generates pupil responses, which are closely related to human emotions as well as gaze. Pupiloid can express human-like pupil responses using a mechanism that rotates feathers sterically. In this study, in order to create smooth interaction between human and robot, we analyzed pupil responses during utterance using a pupil measurement device. Based on the results, we propose a method in which the robot's pupil dilates in synchrony with the burst-pauses of utterance. We then developed an advanced communication robot incorporating the Pupiloid in order to enhance the robot's affect during utterance. This advanced robot generates a vivid pupil response via mechanical structures based on the burst-pause of utterance. We carried out a sensory evaluation experiment under the condition that the robot speaks. The results demonstrated that the developed robot effectively enhances affect.
|
|
17:00-17:15, Paper Tu3C.5 | Add to My Program |
I Get It Already! the Influence of ChairBot Motion Gestures on Bystander Response |
Knight, Heather | Carnegie Mellon Univ |
Lee, Timothy | Stanford Univ |
Brittany, Hallawell | Stanford Univ |
Ju, Wendy | Stanford Univ |
Keywords: Non-verbal Cues and Expressiveness, Robotic Etiquette, Curiosity, Intentionality and Initiative in Interaction
Abstract: How could a rearranging chair convince you to let it by? This paper explores how robotic chairs might negotiate passage in shared spaces with people, using motion as an expressive cue. The user study evaluates the efficacy of three gestures at convincing a busy participant to let the robot by. This within-participants study consisted of three subsequent trials in which a person completed a puzzle at a standing desk while a robotic chair approached to squeeze by. The measure was whether participants moved out of the robot’s way or not. People deferred to the robot in slightly less than half the trials, as they were engaged in the activity. The main finding, however, is that over-communication cues more blocking behaviors, perhaps because it is annoying or because people want chairs to know their place (socially speaking). The Forward-Back gesture that was most effective at negotiating passage in the first trial was least effective in the second and third trials. The more subtle Pause gesture and the slightly louder but less aggressive Side-to-Side gesture were much more likely to be deferred to in later trials, but not a single participant deferred to them in the first trial. The results demonstrate that the Forward-Back gesture was the clearest way to communicate the robot’s intent; however, they also give evidence of a communicative trade-off between clarity and politeness, particularly when direct communication is associated with aggression. The takeaway for robot design is: be informative initially, but avoid over-communicating later.
|
|
17:15-17:30, Paper Tu3C.6 | Add to My Program |
Robots Educate in Style: The Effect of Context and Non-Verbal Behaviour on Children's Perceptions of Warmth and Competence |
Peters, Rifca | Delft Univ. of Tech |
broekens, joost | TU Delft |
Neerincx, Mark | TNO |
Keywords: Non-verbal Cues and Expressiveness, Robots in Education, Therapy and Rehabilitation, Creating Human-Robot Relationships
Abstract: Social robots are entering the private and public domain where they engage in social interactions with non-technical users. This requires robots to be socially interactive and intelligent, including the ability to display appropriate social behaviour. Progress has been made in emotion modelling. However, research into behaviour style is less thorough; no comprehensive, validated model exists of non-verbal behaviours to express style in human-robot interactions. Based on a literature survey, we created a model of non-verbal behaviour to express high/low warmth and competence – two dimensions that contribute to teaching style. In a perception study, we evaluated this model applied to a NAO robot giving a lecture at primary schools and a diabetes camp in the Netherlands. For this, we developed, based on expert ratings, an instrument measuring perceived warmth, competence, dominance and affiliation. We show that even subtle manipulations of robot behaviour influence children's perceptions of the robot's level of warmth and competence.
|
|
Tu3D Regular Session, Belem I |
Add to My Program |
Medical Robotics |
|
|
Chair: Vanderborght, Bram | Vrije Univ. Brussel |
Co-Chair: Formica, Domenico | Univ. Campus Bio-Medico Di Roma |
|
16:00-16:15, Paper Tu3D.1 | Add to My Program |
Stiffness Perception During Pinching and Dissection with Teleoperated Haptic Forceps |
Ng, Canaan | Univ. of Calgary |
Zareinia, Kourosh | Univ. of Calgary |
Sun, Qiao | Univ. of Calgary |
Kuchenbecker, Katherine J. | Univ. of Pennsylvania |
Keywords: Human Factors and Ergonomics, Medical and Surgical Applications, Novel Interfaces and Interaction Modalities
Abstract: Robotic-assisted surgery requires an intuitive and effective human-machine interface. Providing haptic feedback for pinching and dissecting motions of bipolar forceps, a tool commonly used in neurosurgery, could potentially improve the surgeon's experience. Current haptic hand controllers have limited actuation and feedback capability, requiring surgeons to hold the handle differently compared to a conventional tool. This paper presents a new master design that provides 1-DOF force feedback by adding a Hall-effect sensor and a voice coil actuator directly onto a bipolar forceps. Twenty participants used this interface to perform a remote stiffness perception test that employed the method of constant stimuli. Ten participants pinched the samples, and the other ten dissected them. Each participant did two blocks of 35 trials with only visual feedback or with visual and haptic feedback in random order. Psychometric functions were created from the results to compare perceptual capabilities, metrics were calculated from the force and position data, and participant survey responses were analyzed. The results show that providing the force feedback made the task seem easier, increased the participant's confidence, and reduced the total tip distance traveled in the pinching task. The haptic feedback slightly improved stiffness perception in the pinching task but did not improve perception in the dissection task. These results support the utility of a force-feedback attachment to conventional forceps for pinching and motivate further investigation into the design for dissection.
|
|
16:15-16:30, Paper Tu3D.2 | Add to My Program |
Exploring the Effectiveness of Using Temporal Order Information for the Early-Recognition of Suture Surgery’s Six Steps Based on Video Image Analyses of Surgeons’ Hand Actions |
Tsubota, Miwa | Waseda Univ |
Li, Ye | Waseda Univ |
Ohya, Jun | Waseda Univ |
Keywords: Medical and Surgical Applications, Detecting and Understanding Human Activity
Abstract: To alleviate the recent shortage of nurses and increase the efficiency of surgery, a Robotic Scrub Nurse (RSN) that can autonomously judge the current step of the surgery and pass the surgical instruments needed for the next step to surgeons is desired. The authors previously developed a computer-vision-based algorithm that can early-recognize only two steps (surgeons’ hand actions) of suture surgery. Building on that work, this paper explores the effectiveness of utilizing the temporal order of the six steps of suture surgery for their early recognition. Our early-recognition algorithm consists of two modules: start point detection and hand action early recognition. Segments of the test video that start from each quasi-start point are compared with the training data, and their probabilities are calculated. According to the calculated probabilities, hand actions can be early-recognized. To improve early-recognition accuracy, temporal order information can be useful: this paper checks confusions among three of the steps’ early-recognition results and, if necessary, early-recognizes again after eliminating the wrong result; for the other three steps, temporal order information is not utilized. Experimental results show that our early-recognition method utilizing temporal order information achieves better performance than the method without it.
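One way to picture the use of temporal order is as a mask over the per-step probabilities, restricting early recognition to steps that may legally follow the last confirmed one; the step names, ordering and scores below are hypothetical, not the paper's data.

```python
# Hedged sketch: constrain early recognition by the known step order.
NEXT_ALLOWED = {  # which suture steps may follow each step (invented)
    "insert_needle": {"pull_thread"},
    "pull_thread": {"tie_knot"},
    "tie_knot": {"cut_thread", "insert_needle"},
}

def early_recognize(raw_probs: dict, last_step: str) -> str:
    """Pick the most probable step among those allowed to follow last_step."""
    allowed = NEXT_ALLOWED.get(last_step, set(raw_probs))
    masked = {s: p for s, p in raw_probs.items() if s in allowed}
    candidates = masked if masked else raw_probs  # fall back if mask is empty
    return max(candidates, key=candidates.get)

probs = {"insert_needle": 0.4, "pull_thread": 0.35, "tie_knot": 0.25}
print(early_recognize(probs, last_step="insert_needle"))  # -> 'pull_thread'
```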
|
|
16:30-16:45, Paper Tu3D.3 | Add to My Program |
Development of a Two DOF Needle Driver for CT-Guided Needle Insertion-Type Interventional Robotic System |
Kim, Ki-Young | Korea Inst. of Machinery and Materials |
Woo, Hyun Soo | KIMM |
Cho, Jang Ho | Korea Inst. of Machinery & Materials |
Lee, Yongkoo | Korea Inst. of Machinery and Materials |
Keywords: Medical and Surgical Applications, Innovative Robot Designs, Assistive Robotics
Abstract: We present a compact and lightweight two-degrees-of-freedom (DOF) needle driver to be applied to a teleoperated needle insertion-type interventional robotic system. The interventional slave manipulator is located beside the patient bed inside a CT scanner room. Physicians manipulate a master device, separately located in the control room, in order to control needle placement. The needle driver provides one translational motion and one rotational motion to a biopsy needle, and the needle is easily detachable from the driver. We performed several experiments measuring the repeatability and the insertion force of the needle to evaluate the basic performance of the proposed needle driver. The maximum repeatability error was 0.16 mm and the standard deviation was 0.0058 mm. The maximum insertion force of the needle was about 2.2 kgf, which meets the minimum insertion force required for needle insertion-type interventions. Therefore, we expect the suggested needle driver to be suitable for the clinical environment of CT-guided needle insertion-type interventions.
|
|
16:45-17:00, Paper Tu3D.4 | Add to My Program |
Teleoperated Multimodal Robotic Interface for Telemedicine: A Case Study on Remote Auscultation |
Falleni, Sara | Scuola Superiore Sant'Anna |
Filippeschi, Alessandro | Scuola Superiore Sant'Anna |
Ruffaldi, Emanuele | Scuola Superiore Sant'Anna |
Avizzano, Carlo Alberto | Scuola Superiore Sant'Anna |
Keywords: Novel Interfaces and Interaction Modalities, Medical and Surgical Applications, Virtual and Augmented Tele-presence Environments
Abstract: Remote examination is becoming increasingly important as the population ages and expert clinicians grow scarcer than ever. We propose a novel system suitable for remote examination, and in particular for remote auscultation. The system spans two sites: at the patient site, a robot holds a stethoscope that is placed on the patient while an RGB-D sensor streams video of the scene; at the doctor site, the doctor moves the stethoscope through a haptic interface, receiving haptic feedback whenever the stethoscope is in contact with the patient while viewing the remote scene on a screen. The doctor listens to the stethoscope's sound through a diaphragm and a headset playing the audio stream from the patient site. After presenting this novel system, we show its effectiveness by means of experiments involving auscultation-like tasks. We assess the usability of the system for placing the stethoscope and for correctly hearing heart sounds, as well as the overall quality of the streamed audio signal.
|
|
17:00-17:15, Paper Tu3D.5 | Add to My Program |
Integrating the Users in the Design of a Robot for Making Comprehensive Geriatric Assessments (CGA) to Elderly People in Care Centers |
Lan Hing Ting, Karine | Troyes Univ. of Tech |
Voilmy, Dimitri | Troyes Univ. of Tech |
Iglesias, Ana | Univ. Carlos III De Madrid |
Pulido Pascual, José Carlos | Univ. Carlos III De Madrid |
Garcia, Javier | Univ. Carlos III De Madrid |
Romero-Garces, Adrian | Univ. of Malaga |
Bandera Rubio, Juan Pedro | Univ. of Malaga |
Marfil, Rebeca | Univ. of Malaga |
Dueñas Ruiz, Álvaro | Hospital Univ. Virgen Del Rocío |
Keywords: User-centered Design of Robots, Medical and Surgical Applications
Abstract: Comprehensive Geriatric Assessment (CGA) is a multidimensional and multidisciplinary diagnostic instrument that helps provide personalized care to the elderly by evaluating their physical and mental state. In a social and economic context of growing ageing populations, medical experts can save time and effort if provided with interactive tools that efficiently assist them in performing CGAs, managing standardized tests, and collecting data. Recent research proposes the use of social robots as the central part of these tools. These robots must be able to provide all the functionalities that questionnaires or motion-based tests require, including natural language, face tracking and monitoring, human motion capture, and so on. Another issue is the robot's acceptability and trustworthiness to end-users, both patients (elderly people) and clinicians: the robot needs to engage with patients during the interaction sessions, and it must be perceived as a useful and efficient tool by clinicians. This paper presents the acquisition of new user requirements for CLARC, through a participatory, user-centered design approach, to inform the improvement of both interface and interaction. Thirty-eight people (elderly people, caregivers, and health professionals) were involved in the design process of CLARC, which was based on user-centered methods and techniques from the Human-Computer Interaction discipline.
|
|
17:15-17:30, Paper Tu3D.6 | Add to My Program |
Interactive Balance Rehabilitation Tool with Wearable Skin Stretch Device |
Pan, Yi-Tsen | Texas A&M Univ |
Hur, Pilwon | Texas A&M Univ |
Keywords: Robots in Education, Therapy and Rehabilitation, Medical and Surgical Applications, Assistive Robotics
Abstract: Physical interactions between human and machine are essential in facilitating effective physical therapy training programs. Nowadays, physical training largely involves robotic assistive devices or wearable haptics. In this study, we propose a lightweight wearable sensory augmentation device using skin stretch feedback to provide individuals with additional sensory cues during balance training. The goals of this study are i) to determine the effectiveness of the proposed novel system in improving the dynamic stability of healthy individuals and ii) to test the efficacy of additional cutaneous cues in substituting for missing visual feedback in healthy subjects. The system comprises a haptic wristband, a visual display, and a force platform. The haptic wristband provides real-time skin stretch feedback at the dorsal side of the wrist in response to the user's postural sway. The center of pressure (COP) was displayed on a screen, and users were asked to move the COP to a target position displayed on the screen by controlling their body posture in the sagittal plane. Results showed that subjects could complete the tasks by shifting their weight when they received both visual feedback and skin stretch feedback. When visual feedback was subsequently removed, subjects successfully interpreted the tactile cues at the wrist from the skin stretch device and completed the tasks. Larger sample sizes, more diverse groups, and longitudinal studies are needed to demonstrate the effectiveness of the proposed device as a balance rehabilitation tool.
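A minimal sketch of the feedback mapping, assuming a simple proportional law from COP error to stretch-motor angle (the gain and saturation values are invented), could look like this:

```python
# Hypothetical sketch of the feedback mapping described above: the user's
# anterior-posterior center-of-pressure (COP) error drives the direction and
# magnitude of a skin-stretch cue at the wrist. Gains and limits are invented.
def skin_stretch_command(cop_ap_mm, target_ap_mm, gain=0.5, max_deg=15.0):
    """Map sagittal-plane COP error (mm) to a stretch-motor angle (degrees).
    Positive output stretches the skin forward, cueing a forward lean."""
    error = target_ap_mm - cop_ap_mm
    angle = gain * error
    return max(-max_deg, min(max_deg, angle))  # saturate the actuator

# Example: COP 20 mm behind the target -> forward stretch cue of 10 degrees
print(skin_stretch_command(cop_ap_mm=-10.0, target_ap_mm=10.0))  # 10.0
```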
|
|
Tu3E Regular Session, Ajuda III |
Add to My Program |
Human-Robot Collaboration and Cooperation (II) |
|
|
Chair: Melhuish, Chris | BRL |
Co-Chair: Law, Edith | Univ. of Waterloo |
|
16:00-16:15, Paper Tu3E.1 | Add to My Program |
Contact Detection and Physical Interaction on Low Cost Personal Robots |
Flacco, Fabrizio | CNRS |
Kheddar, Abderrahmane | CNRS-AIST JRL (Joint Robotics Lab. UMI3218/CRT)
Keywords: Cooperation and Collaboration in Human-Robot Teams, HRI and Collaboration in Manufacturing Environments
Abstract: We present a methodology for estimating joint torques due to external forces applied to a robot with large joint backlash and friction. This undesired non-linearity is common in personal robots owing to the use of low-cost mechanical components and the way such robots are typically used. Our method enables contact detection and human-robot physical interaction capabilities without extra sensors. The effectiveness of our approach is shown in experiments on a Romeo robot arm from SoftBank Robotics.
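A rough sketch of the sensorless contact-detection idea is given below; this is a generic residual-with-deadband formulation under invented parameters, not the authors' estimator.

```python
# Hypothetical sketch: the external joint torque is estimated as the gap
# between the motor torque and the model-predicted torque, with a deadband
# absorbing backlash and friction so these nonlinearities do not trigger
# false contacts. Thresholds and values are invented.
def estimate_external_torque(tau_motor, tau_model, deadband=0.3):
    """Return the estimated external torque (N*m), zeroed inside the
    friction/backlash deadband."""
    residual = tau_motor - tau_model
    if abs(residual) < deadband:
        return 0.0
    return residual - deadband * (1 if residual > 0 else -1)

def contact_detected(tau_ext, threshold=0.5):
    return abs(tau_ext) > threshold

tau_ext = estimate_external_torque(tau_motor=1.4, tau_model=0.2)
print(tau_ext, contact_detected(tau_ext))  # 0.9 True
```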
|
|
16:15-16:30, Paper Tu3E.2 | Add to My Program |
Contextual Awareness: Understanding Monologic Natural Language Instructions for Autonomous Robots |
Arkin, Jacob | Univ. of Rochester |
Walter, Matthew | Toyota Tech. Inst. at Chicago |
Boteanu, Adrian | Cornell Univ |
Napoli, Michael | Univ. of Rochester |
Biggie, Harel | Univ. of Rochester |
Kress-Gazit, Hadas | Cornell Univ |
Howard, Thomas | Univ. of Rochester |
Keywords: Cooperation and Collaboration in Human-Robot Teams, Linguistic Communication and Dialogue, HRI and Collaboration in Manufacturing Environments
Abstract: Today, there are many examples of humans and robots regularly interacting in a variety of domains, such as manufacturing, coordinated assembly, and rehabilitation. A resulting demand for more generally accessible communication interfaces has motivated several recent independent research efforts focused on providing robotic systems with a robust natural language interface. Natural language interfaces enable intuitive interaction for untrained and non-expert users. However, achieving real-time performance is particularly challenging, yet essential, to enable flexible, efficient communication. The length of the language input directly impacts the run-time performance and quickly becomes a practical issue when the input is a sequence of multiple sentences, or a monologue. In this work, we propose a variant of a contemporary probabilistic graphical model for language understanding that introduces novel segmentation of the input into a sequence of sentences to be labeled in order. We introduce the notion of a continuously updated prior context that retains the meaning of previous sentences as the inference process proceeds. This prior context serves as evidence during future sentence evaluations. We evaluate our model on two natural language corpora and demonstrate its utility on a Clearpath Husky A200 mobile manipulator and a simulated Rethink Robotics Baxter Robot.
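A minimal sketch of the sequential-inference idea follows, assuming a stand-in ground_sentence function in place of the paper's graphical-model inference; the toy grounder and sentence format are invented.

```python
# Hypothetical sketch: a monologue is split into sentences, each sentence is
# grounded given the context accumulated so far, and the result is folded back
# into the prior context as evidence for later sentences.
def ground_monologue(monologue, ground_sentence):
    """monologue: full natural-language input string.
    ground_sentence(sentence, context) -> grounding result for that sentence."""
    context = {}                 # continuously updated prior context
    groundings = []
    for sentence in monologue.split(". "):
        result = ground_sentence(sentence, context)
        groundings.append(result)
        context.update(result)   # retain meaning for future sentences
    return groundings

# Toy stand-in: remember the most recent object a sentence mentions, so a
# later pronoun ("it") inherits the grounding from the prior context.
def toy_grounder(sentence, context):
    return {"object": "box"} if "box" in sentence else dict(context)

print(ground_monologue("Pick up the box. Bring it to me", toy_grounder))
```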
|
|
16:30-16:45, Paper Tu3E.3 | Add to My Program |
Towards Robot-Human Reliable Hand-Over: Continuous Detection of Object Perturbation Force Direction |
Gómez Eguíluz, Augusto | Univ. of Ulster |
Rano, Inaki | Ulster Univ |
Coleman, Sonya | Univ. of Ulster |
McGinnity, Martin | Univ. of Ulster |
Keywords: Cooperation and Collaboration in Human-Robot Teams, Machine Learning and Adaptation, Detecting and Understanding Human Activity
Abstract: A fundamental aspect of many human-robot collaborative tasks is object exchange, or handover. Several techniques have been proposed to decide when a robot hand or gripper should release an object for a human to receive. However, these techniques typically neglect the reliability of the handover, assuming the process will occur without issue. Building on the observation that humans apply pulling forces in specific directions when receiving an object, this paper presents a recursive procedure enabling a robot to release an object appropriately and in a timely manner during a handover. Experiments with naive users showed consistent, subject-specific pulling directions during robot-human handovers, highlighting the need for a system capable of detecting force directions relative to objects. We show that, using tactile sensing, the proposed approach can accurately classify five different actions impacting an object held by a robot hand. The recursive nature of the system also enables detection of sequences of different actions, allowing the robot to release the object safely only when the human pulls in the right direction.
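The recursive decision can be illustrated with a simple Bayesian accumulation of per-frame classifier likelihoods; the action names, likelihood values, and release threshold below are invented, not the authors'.

```python
# Hypothetical sketch: per-sample action likelihoods from a tactile classifier
# are accumulated into a posterior over five actions, and the gripper releases
# only when the correct pulling direction clearly dominates.
import numpy as np

ACTIONS = ["pull_toward_receiver", "pull_sideways", "push", "twist", "no_contact"]

def recursive_update(posterior, likelihood):
    """One recursive Bayesian step: posterior is proportional to
    likelihood * prior, renormalized."""
    posterior = posterior * likelihood
    return posterior / posterior.sum()

posterior = np.full(len(ACTIONS), 1.0 / len(ACTIONS))  # uniform prior
for likelihood in [np.array([0.5, 0.2, 0.1, 0.1, 0.1]),
                   np.array([0.6, 0.2, 0.1, 0.05, 0.05])]:  # tactile frames
    posterior = recursive_update(posterior, likelihood)

release = posterior[ACTIONS.index("pull_toward_receiver")] > 0.8
print(dict(zip(ACTIONS, posterior.round(3))), "release:", release)
```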
|
|
16:45-17:00, Paper Tu3E.4 | Add to My Program |
Towards Understanding User Preferences in Robot-Human Handovers: How Do We Decide? |
Martinson, Eric | Toyota InfoTechnology Center, USA |
Huaman, Ana | Georgia Inst. of Tech |
Keywords: Cooperation and Collaboration in Human-Robot Teams, Motion Planning and Navigation in Human-Centered Environments, Machine Learning and Adaptation
Abstract: Service robots are expected to provide assistance to users by performing useful tasks, such as handing over objects upon request. Most robot-human handover studies implicitly assume that the handover action being executed is the same every time. We postulate that for real scenarios, however, a robot should be capable of accomplishing the handover task using multiple styles of handover, selecting the action that will most likely result in the successful execution of the task and that best accommodates different user preferences. This paper addresses the human aspect of this theory, investigating: (1) Do users prefer the robot to execute more than one type of robot-human handover? and (2) What factors do users take into account to favor one handover over another? From a survey of 62 participants from two different countries, we conclude that having more than one handover action is important for the robot, and we identify two factors for selecting the best action autonomously.
|
|
17:00-17:15, Paper Tu3E.5 | Add to My Program |
Where Are the Robots? In-Feed Embedded Techniques for Visualizing Robot Team Member Locations |
Seo, Stela Hanbyeol | Univ. of Manitoba |
Young, James Everett | Univ. of Manitoba |
Irani, Pourang | Univ. of Manitoba |
Keywords: Novel Interfaces and Interaction Modalities, Cooperation and Collaboration in Human-Robot Teams
Abstract: We present a set of mini-map alternatives for indicating the relative locations of robot team members in a teleoperation interface, along with evaluation results showing that these can perform as well as mini-maps while being less intrusive. Teleoperators often work with a team of robots to improve task effectiveness. Maintaining awareness of where robot team members are, relative to oneself, is important for team effectiveness, such as for deciding which robot may help with a task, which may be best suited to investigate a point of interest, or where one should move next. We explore the use of established interface techniques from mobile computing for supporting teleoperators in maintaining peripheral awareness of robot team members' relative locations. We evaluate the non-trivial adaptation of these techniques to teleoperation, comparing them to an overview mini-map base case. Our results indicate that in-feed embedded indicators perform comparably to mini-maps while being less obtrusive, indicating that they are a viable alternative for teleoperation interfaces.
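As an illustration of an in-feed embedded indicator (one plausible variant, not necessarily the authors' exact technique), the sketch below projects an off-screen teammate's position onto the border of the operator's video feed.

```python
# Hypothetical sketch: place an arrow on the feed border pointing toward an
# off-screen robot teammate. View size and positions are invented.
import math

def edge_indicator(teammate_xy, view_w=640, view_h=480):
    """Return (x, y, angle_deg) for an off-screen teammate's border arrow;
    teammate_xy is the teammate's offset from the view center, in pixels.
    Returns None if the teammate is already visible on screen."""
    cx, cy = view_w / 2, view_h / 2
    dx, dy = teammate_xy
    if abs(dx) <= cx and abs(dy) <= cy:
        return None  # on-screen: no indicator needed
    scale = min(cx / abs(dx) if dx else float("inf"),
                cy / abs(dy) if dy else float("inf"))
    x, y = cx + dx * scale, cy + dy * scale
    return x, y, math.degrees(math.atan2(dy, dx))

print(edge_indicator((900, -100)))  # lands on the right edge, above center
```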
|
|
17:15-17:30, Paper Tu3E.6 | Add to My Program |
Predicting Trust in Human Control of Swarms Via Inverse Reinforcement Learning |
Nam, Changjoo | Carnegie Mellon Univ |
Walker, Phillip | Univ. of Pittsburgh |
Lewis, Michael | Univ. of Pittsburgh |
Sycara, Katia | Carnegie Mellon Univ |
Keywords: Detecting and Understanding Human Activity, Human Factors and Ergonomics, Cooperation and Collaboration in Human-Robot Teams
Abstract: In this paper, we study a model of human trust in a setting where an operator remotely controls a robotic swarm for a search mission. Existing trust models in human-in-the-loop systems are based on the task performance of robots. However, we find that humans tend to base their decisions on the physical characteristics of the swarm rather than on its performance, since the task performance of swarms is not clearly perceivable by humans. Based on this analysis, we formulate trust as a Markov decision process whose state space includes the physical parameters of the swarm, command inputs from the operator, and the operator's trust level. We employ an inverse reinforcement learning algorithm to learn the operator's behaviors from a single demonstration. The learned behaviors are used to predict the operator's trust level based on the current physical state of the swarm and the commands that the operator issues.
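One way to sketch the learning step is a perceptron-style IRL update that fits a linear reward over swarm physical features from demonstrated commands; the features, commands, and numbers below are invented stand-ins for the paper's formulation.

```python
# Hypothetical sketch: learn reward weights w so that each demonstrated
# (swarm state, operator command) pair outscores the alternative commands,
# then use w to score commands for new states. All values are invented.
import numpy as np

def features(state, action):
    """Couple swarm physical features (e.g., spread, speed) with the
    one-hot command via an outer product."""
    return np.outer(state, action).ravel()

def irl_update(w, demo, candidates, lr=0.1):
    """One perceptron-style pass over the demonstration."""
    for state, action in demo:
        best = max(candidates, key=lambda a: w @ features(state, a))
        if not np.array_equal(best, action):
            w += lr * (features(state, action) - features(state, best))
    return w

candidates = [np.array([1, 0]), np.array([0, 1])]   # two invented commands
demo = [(np.array([0.8, 0.2]), candidates[0]),      # single demonstration
        (np.array([0.1, 0.9]), candidates[1])]
w = np.zeros(4)
for _ in range(10):                                 # a few passes suffice here
    w = irl_update(w, demo, candidates)
print("learned weights:", w)
```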
|
|
Tu4A Special Session, Ajuda I |
Add to My Program |
Cultural Factors in Human-Robot Interactions |
|
|
Chair: Sgorbissa, Antonio | Univ. of Genova |
Co-Chair: Chong, Nak Young | Japan Advanced Inst. of Sci. and Tech |
Organizer: Sgorbissa, Antonio | Univ. of Genova |
Organizer: Chong, Nak Young | Japan Advanced Inst. of Sci. and Tech |
Organizer: Pandey, Amit Kumar | SoftBank Robotics |
Organizer: Saffiotti, Alessandro | Orebro Univ |
|
16:00-16:15, Paper Tu4A.1 | Add to My Program |
Cultural Differences in Social Acceptance of Robots (I) |
Nomura, Tatsuya | Ryukoku Univ |
Keywords: Creating Human-Robot Relationships, Ethical Issues in Human-robot Interaction Research, Applications of Social Robots
Abstract: The paper summarizes the results of questionnaire surveys conducted by the author's research group, covering 1) attitudes toward robots, 2) assumptions and images about robots, 3) anxiety and expectation toward humanoid robots based on the concept of the "Frankenstein Syndrome", and 4) ethical problems related to robots. The paper then discusses the future direction of research on cultural differences in the social acceptance of robots.
|
|
16:15-16:30, Paper Tu4A.2 | Add to My Program |
Ethical Considerations of Gendering Very Humanlike Androids from an Interdisciplinary Perspective (I) |
Knox, Elena | Waseda Univ |
Watanabe, Katsumi | Waseda Univ |
Keywords: Ethical Issues in Human-robot Interaction Research, Androids, Robots in art and entertainment
Abstract: A large proportion of "very humanlike" androids are assigned aesthetics typically associated with femininity. The ethical and discriminatory issues this raises have yet to be given in-depth attention in the procedural literature. This position paper suggests that implicitly viewing humanlike robots as agents that could, in future, substitute for "undesirable" and/or exploitable humans may affect not just the robots' design, but also which human demographics are considered replaceable. Such tendencies must be carefully considered by researchers, businesses, and policy makers. Interdisciplinary analysis may inform and expand social and cultural negotiations in the design of these androids.
|
|
16:30-16:45, Paper Tu4A.3 | Add to My Program |
Encoding Cultures in Robot Emotion Representation (I) |
Dang, Thi Le Quyen | Japan Advanced Inst. of Science and Tech |
Tuyen, Nguyen Tan Viet | Japan Advanced Inst. of Science and Tech |
Jeong, Sungmoon | Japan Advanced Inst. of Science and Tech |
Chong, Nak Young | Japan Advanced Inst. of Sci. and Tech |
Keywords: Motivations and Emotions in Robotics, Creating Human-Robot Relationships, Cognitive Skills and Mental Models
Abstract: Cultural differences may influence interactions between humans with different social norms and cultural traits, eliciting different emotional and behavioral responses. The same applies to human-robot interaction (HRI). We believe that controlling robot emotions based on the cultural context can help robots adapt to humans from culturally diverse backgrounds. Such culturally aligned robots are expected to be more easily accepted by humans as part of daily life. In this paper, we investigate the role of culture in representing robot emotions, which are instilled by humans during the robot's early stage of development and subject to change through the robot's own experience thereafter. Our experiments with the social humanoid robot Pepper show that robots can learn to behave socially in alignment with an individual's cultural background. Moreover, we demonstrate that robots under the effect of different cultures can generate different behavioral responses to the same stimuli, which is considered one of the most important issues in socially assistive robotics.
|
|
16:45-17:00, Paper Tu4A.4 | Add to My Program |
Paving the Way for Culturally Competent Robots: A Position Paper (I) |
Bruno, Barbara | Univ. of Genova |
Chong, Nak Young | Japan Advanced Inst. of Sci. and Tech |
Kamide, Hiroko | Nagoya Univ |
Kanoria, Sanjeev | Advinia Health Care Limited LTD |
Lee, Jaeryoung | Chubu Univ |
Lim, Yuto | Japan Advanced Inst. of Science and Tech |
Pandey, Amit Kumar | SoftBank Robotics |
Papadopoulos, Chris | Univ. of Bedfordshire |
Papadopoulos, Irena | Middlesex Univ. Higher Education Corp |
Pecora, Federico | Örebro Univ |
Saffiotti, Alessandro | Orebro Univ |
Sgorbissa, Antonio | Univ. of Genova |
Keywords: User-centered Design of Robots, Robot Companions and Social Robots, Assistive Robotics
Abstract: Cultural competence is a well-known requirement for effective healthcare, widely investigated in the nursing literature. We claim that personal assistive robots should likewise be culturally competent: aware of general cultural characteristics and of the different forms they take in different individuals, and sensitive to cultural differences while perceiving, reasoning, and acting. Drawing inspiration from existing guidelines for culturally competent healthcare and the state of the art in culturally competent robotics, we identify the key robot capabilities that enable culturally competent behaviours and discuss methodologies for their development and evaluation.
|
|
17:00-17:15, Paper Tu4A.5 | Add to My Program |
Mind Attribution to Androids: A Comparative Study with Italian and Japanese Adolescents (I) |
Trovato, Gabriele | Waseda Univ |
Eyssel, Friederike | Bielefeld Univ |
Keywords: Embodiment, Empathy and Intersubjectivity, Androids, Motivations and Emotions in Robotics
Abstract: The attribution of mental states to humanoid robots is a complex psychological phenomenon, as it depends on several factors, including the robot's appearance, its behaviour, its country of origin, as well as people's cultural background and exposure to robots. In this paper, we present a cross-cultural study on mind attribution to androids comparing evaluations by Italian and Japanese high school students. Results suggest that the cultural in-group bias does not necessarily apply to mind attribution, as other factors, such as anthropomorphism of nature and exposure to robot-related popular culture, can modify how androids are perceived.
|
|
17:15-17:30, Paper Tu4A.6 | Add to My Program |
Face Image-Based Age and Gender Estimation with Consideration of Ethnic Difference (I) |
Shin, Minchul | KAIST
Seo, JuHwan | KAIST (Korea Advanced Inst. of Science and Tech)
Kwon, Dong-Soo | KAIST |
Keywords: Machine Learning and Adaptation, Social Intelligence for Robots, Applications of Social Robots
Abstract: This study presents an age and gender estimation system that considers ethnic differences in face images using a Convolutional Neural Network (CNN) and Support Vector Machines (SVMs). Most age and gender estimation systems using face images are trained on ethnicity-biased databases. Therefore, these systems show limited performance on face images of ethnic groups occupying a small proportion of the training data. To resolve this problem, we propose an age and gender estimation system that considers the ethnic difference in face images. In the first stage of the system, the ethnicity of the face image is determined by a CNN trained on manually collected face images of Asian and non-Asian celebrities. Then, one of the SVM classifiers is selected according to the ethnicity for the final age and gender estimation. We compared the proposed system with an estimation system that does not consider ethnic difference. The results show improved performance for age estimation but no improvement for gender recognition.
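The two-stage routing described above can be sketched as follows, with toy stand-ins for the trained CNN and SVMs; names such as ethnicity_cnn and ToyEstimator are hypothetical.

```python
# Hypothetical sketch of the two-stage pipeline: an ethnicity classifier
# routes the face image to an ethnicity-specific age/gender estimator.
# The models here are stand-ins, not the authors' trained networks.
def estimate_age_gender(face_image, ethnicity_cnn, svms_by_group):
    """ethnicity_cnn(face) -> group label, e.g. "asian" or "non_asian".
    svms_by_group: dict mapping group label -> fitted (age, gender) estimator."""
    group = ethnicity_cnn(face_image)
    age, gender = svms_by_group[group].predict(face_image)
    return group, age, gender

# Toy stand-ins to make the routing runnable end to end.
class ToyEstimator:
    def __init__(self, age, gender):
        self.age, self.gender = age, gender
    def predict(self, face):
        return self.age, self.gender

svms = {"asian": ToyEstimator(31, "F"), "non_asian": ToyEstimator(42, "M")}
print(estimate_age_gender("face.png", lambda f: "asian", svms))
```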
|
|
17:30-17:45, Paper Tu4A.7 | Add to My Program |
Cross-Cultural Differences for Adaptive Strategies of Robots in Public Spaces (I) |
Mussakhojayeva, Saida | Nazarbayev Univ |
Sandygulova, Anara | Nazarbayev Univ |
Keywords: User-centered Design of Robots, Robot Companions and Social Robots, Applications of Social Robots
Abstract: Robots deployed in public spaces must necessarily deal with situations that demand they engage humans in a socially and culturally appropriate manner. However, social environments are often complex and ambiguous: many queries to the robot are collaborative (e.g. from a family), and in the case of conflicting queries, social robots need to participate in value decisions and negotiate multi-party interactions. Given the strong influence of people's demographic characteristics and of social schemas such as relationships and hierarchies, the focus of this research is to examine whether and how people exhibit socio-psychological effects with a shared robot deployed at international events or spaces (e.g. airports). With the aim of investigating whom robots should adapt to (children or adults) in multi-party human-robot interactions in public spaces, and whether this adaptation can be influenced by culture, this paper presents a cross-cultural study conducted online. The results include a number of interesting findings based on respondents' relationship to a child and their parental status. In addition, a number of cross-cultural differences were identified in respondents' attitudes towards the robot's multi-party adaptation in various public settings.
|
| |