Last updated on August 22, 2018. This conference program is tentative and subject to change.
Technical Program for Wednesday August 29, 2018
|
WeAT1 |
308 |
Computer Vision and Machine Learning: Foundations and Applications in Advanced |
Special Session |
Organizer: Wang, Shuihua | Nanjing Normal Univ |
Organizer: Han, Liangxiu | Manchester Metropolitan Univ |
Organizer: Dong, Zhengchao | Columbia Univ |
|
08:00-08:15, Paper WeAT1.1 | |
Renal Segmentation Algorithm Combined Low-Level Features with Deep Coding Feature |
Xia, Kaijian (Changshu Affiliated Hospital of Soochow Univ. (Changshu No) |
Keywords: Social Intelligence for Robots
Abstract: In the field of medical imaging research, renal segmentation is an important task that is tedious and error-prone when performed manually. Deep learning methods have been successfully applied to feature learning in medical applications. In this paper, we focus on achieving high classification accuracy, since it directly affects segmentation quality, and propose a renal segmentation algorithm that combines low-level features with deep coding features from medical images. First, we exploit stacked auto-encoder networks to automatically learn high-level features that capture the structured information and semantic context in the image. Several low-level features, which effectively capture contrast and spatial information in the renal regions, are extracted and incorporated to complement the learned high-level features at the output of the last fully connected layer. The concatenated feature vector is then fed into a least-squares SVM detector with a Morlet kernel to obtain classification results. We trained the deep network on a medical data set, and experiments show that the proposed method achieves high classification accuracy and can speed up the clinical task of renal segmentation.
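The feature-fusion step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: a nearest-centroid rule stands in for the paper's least-squares SVM with Morlet kernel, and all feature values, names, and class labels are hypothetical.

```python
# Hypothetical sketch: fuse hand-crafted low-level features with learned
# deep-coding features, then classify the fused vector. A nearest-centroid
# rule stands in for the paper's least-squares SVM detector.

def fuse_features(low_level, deep_coded):
    """Concatenate the two feature vectors into one descriptor."""
    return list(low_level) + list(deep_coded)

def nearest_centroid_predict(x, centroids):
    """Assign x to the class whose centroid is closest (squared Euclidean)."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda label: dist2(x, centroids[label]))

# Toy example: 2-D low-level + 2-D deep-coded features per image region.
centroids = {
    "renal":      [0.8, 0.7, 0.9, 0.6],
    "background": [0.1, 0.2, 0.1, 0.3],
}
sample = fuse_features([0.75, 0.65], [0.85, 0.55])
print(nearest_centroid_predict(sample, centroids))  # -> renal
```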
|
|
08:15-08:30, Paper WeAT1.2 | |
Breast Cancer Detection Via Wavelet Energy and Support Vector Machine |
Guo, Zewei (Kunming Univ. of Science and Tech), Jiang, Lin (Kunming Univ. of Science and Tech), Suchkov, Matben (Inst. of Management, Ec. and Finance, Kazan Federal Un), Yan, Lee-Ze (School of Information and Software Engineering, Univ. of El) |
Keywords: Medical and Surgical Applications
Abstract: Breast cancer is one of the most feared killers of women, and there are still no effective means of preventing or treating it, yet its popularity as a research topic continues to rise. Traditional medical diagnosis relies mainly on observing the patient's symptoms to confirm the disease, but its efficiency is undesirable and its scientific contribution is poor. Owing to the dramatic development of machine learning applications in data analysis, the application of computer technology to disease diagnosis has become a new and effective approach. This paper uses wavelet energy to extract features of breast cancer data and then establishes a breast cancer prediction model, reusing the data-grouping capability of the support vector machine (SVM) so that the algorithm can accurately distinguish benign from malignant tumors. As a result, the accuracy of intelligent breast cancer diagnosis has been improved and is shown to be better than two state-of-the-art approaches.
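To illustrate the wavelet-energy feature step mentioned above, here is a minimal 1-D sketch using a Haar transform; the paper presumably works on 2-D image data with a specific wavelet family, so the signal, function names, and level count here are illustrative assumptions.

```python
import math

def haar_step(signal):
    """One level of the 1-D Haar transform: approximation and detail bands."""
    approx = [(signal[i] + signal[i + 1]) / math.sqrt(2)
              for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / math.sqrt(2)
              for i in range(0, len(signal) - 1, 2)]
    return approx, detail

def wavelet_energy(signal, levels=2):
    """Energy of each detail band plus the final approximation band.
    By Parseval's relation the band energies sum to the signal energy."""
    energies = []
    band = list(signal)
    for _ in range(levels):
        band, detail = haar_step(band)
        energies.append(sum(d * d for d in detail))
    energies.append(sum(a * a for a in band))
    return energies

feats = wavelet_energy([1.0, 3.0, 2.0, 2.0, 4.0, 0.0, 1.0, 1.0], levels=2)
print(feats)  # three energy features for the classifier
```

The resulting low-dimensional energy vector is what a classifier such as an SVM would consume.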
|
|
08:30-08:45, Paper WeAT1.3 | |
SCA-RELM: A New Regularized Extreme Learning Machine Based on Sine Cosine Algorithm for Automated Detection of Pathological Brain |
Nayak, Deepak Ranjan (National Inst. of Tech. Rourkela), Dash, Ratnakar (National Inst. of Tech. Rourkela), Zhihai, Lu (Nanjing Normal Univ), Siyuan, Lu (Nanjing Normal Univ), Majhi, Banshidhar (National Inst. of Tech. Rourkela) |
Keywords: Machine Learning and Adaptation
Abstract: This paper aims at developing a new method for automated diagnosis of pathological brain using magnetic resonance imaging (MRI). The method derives features using the unequally-spaced-FFT-based fast discrete curvelet transform (FDCT-USFFT). Thereafter, a reduced feature set is obtained using a PCA+LDA algorithm. Finally, for classification, we hybridize the regularized extreme learning machine and the sine cosine algorithm (SCA-RELM), which aims at overcoming the drawbacks of conventional ELM and other classical learning algorithms. We evaluate the proposed scheme on three well-studied datasets and observe that it achieves significant improvements over existing methods. Moreover, the effectiveness of the proposed SCA-RELM paradigm is tested against other learning algorithms for single-layer feed-forward neural networks. Our system will aid clinicians in effectively diagnosing pathological brain conditions.
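The sine cosine algorithm used here as the RELM optimizer can be sketched in a few lines. This is a generic, minimal SCA on a toy sphere objective, not the paper's SCA-RELM hybrid; population size, schedule, and bounds are assumed values.

```python
import math, random

def sca_minimize(f, dim, bounds, n_agents=20, iters=200, a=2.0, seed=1):
    """Minimal Sine Cosine Algorithm: agents drift toward the best-so-far
    solution along sine/cosine-shaped trajectories whose amplitude (r1)
    decays linearly, shifting from exploration to exploitation."""
    rng = random.Random(seed)
    lo, hi = bounds
    agents = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_agents)]
    best = min(agents, key=f)[:]
    for t in range(iters):
        r1 = a - t * a / iters                  # decaying step amplitude
        for x in agents:
            for j in range(dim):
                r2 = rng.uniform(0, 2 * math.pi)
                r3 = rng.uniform(0, 2)
                r4 = rng.random()
                step = r1 * (math.sin(r2) if r4 < 0.5 else math.cos(r2))
                x[j] += step * abs(r3 * best[j] - x[j])
                x[j] = min(hi, max(lo, x[j]))   # keep agents inside bounds
            if f(x) < f(best):
                best = x[:]
    return best, f(best)

sphere = lambda x: sum(v * v for v in x)
best, val = sca_minimize(sphere, dim=3, bounds=(-5.0, 5.0))
print(val)  # best objective value found
```

In SCA-RELM, the objective would instead score a candidate set of RELM parameters by validation accuracy.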
|
|
08:45-09:00, Paper WeAT1.4 | |
Alcoholism Detection by Wavelet Entropy and Support Vector Machine Trained by Genetic Algorithm |
Chen, Yiyang (Nanjing Normal Univ) |
Keywords: Medical and Surgical Applications, Machine Learning and Adaptation
Abstract: Nowadays, alcoholism has become a serious social problem, and we propose a method to help doctors detect alcoholism patients. In our method, wavelet entropy (WE) was used to extract features, a support vector machine (SVM) was used to classify the samples, and a genetic algorithm (GA) was used to optimize the classifier. Our method achieves an average sensitivity of 88.42±1.74%, an average specificity of 88.93±1.62%, and an average accuracy of 88.68±0.30%, which is better than three state-of-the-art approaches: FA-PNN, HMI, and FRFT. Our method is effective in alcoholism detection and can help doctors reduce the massive detection workload.
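The wavelet-entropy feature can be illustrated with a one-level Haar split: the Shannon entropy of the normalized sub-band energy distribution. This is a minimal 1-D sketch under assumed inputs, not the paper's exact pipeline.

```python
import math

def band_energies(signal):
    """One-level Haar split: energies of the approximation and detail bands.
    The /2 factor matches the orthonormal (1/sqrt(2)) Haar normalization."""
    approx = [signal[i] + signal[i + 1] for i in range(0, len(signal) - 1, 2)]
    detail = [signal[i] - signal[i + 1] for i in range(0, len(signal) - 1, 2)]
    return [sum(v * v for v in approx) / 2.0, sum(v * v for v in detail) / 2.0]

def wavelet_entropy(signal):
    """Shannon entropy of the normalized sub-band energy distribution."""
    energies = band_energies(signal)
    total = sum(energies)
    probs = [e / total for e in energies if e > 0]
    return sum(-p * math.log(p) for p in probs)

# A locally constant signal puts all energy in one band -> entropy 0.
print(wavelet_entropy([1.0, 1.0, 1.0, 1.0]))  # 0.0
```

A signal with energy spread across bands yields a higher entropy, which is the scalar feature fed to the SVM.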
|
|
09:00-09:15, Paper WeAT1.5 | |
Probabilistic Methods for Analyzing and Measuring Tremor in Humans |
Li, Zhichao (Kita Tech), Yi, Yang (Nanjing Tech. Univ), Chellali, Ryad (Nanjing Forestry Univ) |
Keywords: Medical and Surgical Applications, Human Factors and Ergonomics, Monitoring of Behaviour and Internal States of Humans
Abstract: In this paper, a system for analyzing human body movements with tremors is proposed. We developed a latent-forces-based method to quantitatively evaluate tremors, as they are important indicators for diseases such as Parkinson's disease, essential tremor, and stroke. Our contribution is a data-driven method. It relies on a mechanistic model, from which latent mechanistic features are derived to explain human body movements in general and tremors in particular. To this end, we define two groups of forces: one group drives normal body movements and the other governs tremors. Moreover, in order to quantify tremors, we created an index based on distances between distributions that indicates the severity of symptoms. We applied this formalism to tremors in the human body and tested it on a collection of realistic data.
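A distribution-distance severity index of the kind described above can be sketched with the Hellinger distance between two discrete histograms. The choice of distance and the histogram values are assumptions for illustration; the paper does not specify which distance it uses.

```python
import math

def hellinger(p, q):
    """Hellinger distance between two discrete distributions (0 = identical,
    1 = disjoint support); here it serves as a tremor-severity index."""
    return math.sqrt(0.5 * sum((math.sqrt(pi) - math.sqrt(qi)) ** 2
                               for pi, qi in zip(p, q)))

def normalize(hist):
    """Turn raw bin counts into a probability distribution."""
    total = sum(hist)
    return [h / total for h in hist]

# Histograms of hypothetical latent-force magnitudes: a smooth reference
# movement vs. a recording with a strong oscillatory component.
reference = normalize([8, 4, 2, 1, 0])
tremor    = normalize([2, 2, 3, 4, 4])
index = hellinger(reference, tremor)
print(round(index, 3))  # larger index = movement further from the reference
```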
|
|
09:15-09:30, Paper WeAT1.6 | |
Studying Effects of Incorporating Automated Affect Perception with Spoken Dialog in Social Robots |
Mollahosseini, Ali (Univ. of Denver), Abdollahi, Hojjat (Univ. of Denver), Mahoor, Mohammad (Univ. of Denver) |
Keywords: Motivations and Emotions in Robotics, Embodiment, Empathy and Intersubjectivity, Creating Human-Robot Relationships
Abstract: Social robots are becoming an integrated part of our daily lives with the goal of understanding humans' social intentions and feelings, a capability often referred to as empathy. Despite significant progress towards the development of empathic social agents, current social robots have yet to reach full emotional and social capability. This paper presents our recent effort to incorporate an automated Facial Expression Recognition (FER) system based on deep neural networks into the spoken dialog of a social robot (Ryan), extending its capabilities beyond spoken dialog and integrating the user's affect state into the robot's responses. To evaluate whether this incorporation can improve Ryan's social capabilities, we conducted a series of Human-Robot-Interaction (HRI) experiments in which subjects watched videos while Ryan engaged them in a conversation driven by the user's facial expressions perceived by the robot. We measured the accuracy of the automated FER system on the robot when interacting with different human subjects, as well as three social/interactive aspects: task engagement, empathy, and likability of the robot. The results of our HRI study indicate that the subjects rated the empathy and likability of the affect-aware Ryan significantly higher than those of the non-empathic Ryan (the control condition). Interestingly, we found that the accuracy of the FER system is not a limiting factor, as subjects rated the affect-aware agent equipped with a low-accuracy FER system as empathic and likable as when facial expressions were recognized by a human observer.
|
|
09:30-09:45, Paper WeAT1.7 | |
A Pilot Study on Facial Expression Recognition Ability of Autistic Children Using Ryan, a Rear-Projected Humanoid Robot |
Askari, Farzaneh (Univ. of Denver), Feng, Howard (Univ. of Denver), Sweeny, Timothy (Univ. of Denver), Mahoor, Mohammad (Univ. of Denver) |
Keywords: Assistive Robotics, Robots in Education, Therapy and Rehabilitation, Non-verbal Cues and Expressiveness
Abstract: Rear-projected robots use computer graphics technology to create facial animations and project them on a mask to show the robot’s facial cues and expressions. These types of robots are becoming commercially available, though more research is required to understand how they can be effectively used as socially assistive robotic agents. This paper presents the results of a pilot study comparing the facial expression recognition abilities of children with Autism Spectrum Disorder (ASD) with those of typically developing (TD) children using a rear-projected humanoid robot called Ryan. Six children with ASD and six TD children participated in this research, where Ryan showed them six basic expressions (i.e., anger, disgust, fear, happiness, sadness, and surprise) at different intensity levels. Participants were asked to identify the expressions portrayed by Ryan. The results of our study show no general impairment in the expression recognition ability of the ASD group compared to the TD control group; however, both groups showed deficiencies in identifying disgust and fear. Increasing the intensity of Ryan’s facial expressions significantly improved expression recognition accuracy. Both groups successfully recognized the expressions demonstrated by Ryan with high average accuracy.
|
|
09:45-10:00, Paper WeAT1.13 | |
Mission Allocation and Execution for Human and Robot Agents in Industrial Environment |
Djezairi, Salim (Signals and System Lab, Inst. of Electronics and Electrical), Akli, Isma (CDTA), Boushaki Zamoum, Razika (Signals and System Lab, Inst. of Electronics and Electrical), Bouzouia, Brahim (Advanced Tech. Development Centre (CDTA)) |
Keywords: Cooperation and Collaboration in Human-Robot Teams, HRI and Collaboration in Manufacturing Environments
Abstract: This paper deals with the problem of mission allocation for robot agents cooperating with human agents in industrial applications. An RFID (radio frequency identification) sensor platform is exploited to describe the environment state. Relevant information is stored in RFID tags placed on objects and machines, along with agent (robot or human) types. The mission allocator receives the information from the RFID platform and calculates the adequate sub-mission (a set of sequential tasks) assigned to each agent. The robot agents execute the assigned tasks, organized through a task planning strategy. The human agents perform the assigned sub-mission plans introduced via Human Machine Interfaces (HMIs). After the execution of each task by the agents, the environment state is updated. A general framework allowing mission distribution for robot and human agents is proposed in this article. The effectiveness of the proposed solution is tested in simulation with mobile manipulator agents cooperating with a group of human agents.
|
|
WeAT2 |
309 |
Assistive Robotics for Elderly Care |
Special Session |
Chair: Rossi, Silvia | Univ. Di Napoli Federico II |
Organizer: Rossi, Silvia | Univ. Di Napoli Federico II |
|
08:00-08:15, Paper WeAT2.1 | |
Psychometric Evaluation Supported by a Social Robot: Personality Factors and Technology Acceptance |
Rossi, Silvia (Univ. Di Napoli Federico II), Santangelo, Gabriella (Univ. of Campania L. Vanvitelli), Staffa, Mariacarla (Univ. of Naples "Federico II"), Varrasi, Simone (Univ. of Catania), Conti, Daniela (Sheffield Hallam Univ), Di Nuovo, Alessandro (Sheffield Hallam Univ) |
Keywords: Robots in Education, Therapy and Rehabilitation, Robot Companions and Social Robots, Applications of Social Robots
Abstract: Robotic psychological assessment is a novel field of research that explores social robots as psychometric tools for providing quick and reliable screening exams. In this study, we involved elderly participants to compare the prototype of a robotic cognitive test with a traditional paper-and-pencil psychometric tool. Moreover, we explored the influence of personality factors and technology acceptance on the testing. Results demonstrate the validity of the robotic assessment conducted under professional supervision. Additionally, results show the positive influence of Openness to experience on the interaction with robot’s interfaces, and that some factors influencing technology acceptance, such as Anxiety, Trust, and Intention to use, correlate with the performance in the psychometric tests. Technical feasibility and user acceptance of the robotic platform are also discussed.
|
|
08:15-08:30, Paper WeAT2.2 | |
Seeking and Approaching Users in Domestic Environments: Testing a Reactive Approach on Two Commercial Robots |
Ercolano, Giovanni (Univ. Degli Studi Di Napoli Federico II), Raggioli, Luca (Univ. of Naples Federico II), Leone, Enrico (Univ. of Naples "Federico II"), Ruocco, Martina (Univ. of Naples Federico II), Savino, Emanuele (Univ. of Naples Federico II), Rossi, Silvia (Univ. Di Napoli Federico II) |
Keywords: Robot Companions and Social Robots, Detecting and Understanding Human Activity, Motion Planning and Navigation in Human-Centered Environments
Abstract: Socially assistive robots used for elderly care are required to determine the location of a person and to approach him/her in order to provide assistance. Human tracking systems can detect and track people already in the proximity of the robot, but the robot's limited field of view means the user is easily lost. Moreover, navigation algorithms typically require reliable sensors on the robot and the possibility of marking possible user locations. In this work, by contrast, we investigate the opportunity to use a reactive control mechanism for detecting and approaching people. Our approach is tested on two commercial mobile robots with different sensor configurations, using off-the-shelf algorithms for people localization and tracking. Results show the feasibility of the approach in the considered domain, which does not require precise positioning but calls for real deployment of such low-cost robots in the wild. Features of the considered robots and their impact on performance are also discussed.
|
|
08:30-08:45, Paper WeAT2.3 | |
The Outcome of a Week of Intensive Cognitive Stimulation in an Elderly Care Setup: A Pilot Test |
Agrigoroaie, Roxana (ENSTA-ParisTech), Tapus, Adriana (ENSTA-ParisTech) |
Keywords: Assistive Robotics
Abstract: In the context of a worldwide aging population, it is important to find solutions that help the elderly maintain their cognitive functions. This research was done in the context of the ENRICHME project. We investigate the outcome of a 5-day intensive cognitive stimulation program with an elderly individual. Each day was composed of two sessions (one at 11am and one at 3pm). During each session the participant played three cognitive games (i.e., digit cancellation, integer matrix task, Stroop game), two of them having two difficulty levels. The mood of the participant was also recorded before and after each interaction session. Evidence was found that even after a few sessions, the participant's performance increased for all games. The performance in each game and at each difficulty level was analyzed based on the interaction time (11am or 3pm) and the interaction day. A detailed analysis of the performance is presented together with a discussion of these results.
|
|
08:45-09:00, Paper WeAT2.4 | |
Towards a Robust Robotic Assistant for Comprehensive Geriatric Assessment Procedures: Updating the CLARC System |
Martínez, Jesús (Univ. of Málaga), Romero-Garces, Adrian (Univ. of Malaga), Suárez Mejias, Cristina (Fundación Pública Andaluza Para La Gestión De La Investigación D), Marfil, Rebeca (Univ. of Malaga), Lan Hing Ting, Karine (Troyes Univ. of Tech), Iglesias, Ana (Univ. Carlos III De Madrid), Garcia, Javier (Univ. Carlos III De Madrid), Fernandez, Fernando (Univ. Carlos III of Madrid), Dueñas Ruiz, Álvaro (Hospital Univ. Virgen Del Rocío), Calderita, Luis (Univ. of Málaga), Bandera, Antonio (Univ. De Málaga), Bandera Rubio, Juan Pedro (Univ. of Malaga) |
Keywords: Assistive Robotics, Applications of Social Robots, Detecting and Understanding Human Activity
Abstract: Socially assistive robots appear as a powerful tool in the upcoming silver society. They are among the technologies for Assisted Living, offering a natural interface with smart environments while helping people through social interaction. The CLARC project aims to develop a socially assistive robot to help clinicians perform Comprehensive Geriatric Assessment (CGA) procedures. This robot autonomously drives some tests and processes, saving time for the clinician to perform higher-value activities, such as designing care plans. The project has recently finished its first two phases and now faces its final one. This paper details the current prototype of the CLARC system and the main results collected so far during its evaluation. It then describes the updates and modifications planned for the next year, in which long-term extensive evaluations will be conducted to validate its acceptability and utility.
|
|
09:00-09:15, Paper WeAT2.5 | |
A Cognitive Loop for Assistive Robots: Connecting Reasoning on Sensed Data to Acting |
Cesta, Amedeo (CNR -- National Res. Council of Italy, ISTC), Cortellessa, Gabriella (CNR -- National Res. Council of Italy, ISTC), Orlandini, Andrea (National Res. Council of Italy), Umbrico, Alessandro (National Res. Council of Italy) |
Keywords: Assistive Robotics, Applications of Social Robots
Abstract: The deployment of assistive robots in everyday life scenarios and their capability of providing an effective and useful support for independent living is an open and challenging research problem. The development of suitable robot control systems requires effective solutions for addressing issues concerning performance, reliability, flexibility and proactivity. In this work, we propose an AI-based cognitive architecture aiming at integrating knowledge representation with automated planning and execution techniques in order to endow assistive robots with proactivity and self-configuration capabilities.
|
|
09:15-09:30, Paper WeAT2.6 | |
Design of a Sensory Augmentation Walker with a Skin Stretch Feedback Handle |
Pan, Yi-Tsen (Texas A&M Univ), Shih, Chin-Cheng (Texas A&M Univ), DeBuys, Christian (Texas A&M Univ), Hur, Pilwon (Texas A&M Univ) |
Keywords: Novel Interfaces and Interaction Modalities, Assistive Robotics, Robots in Education, Therapy and Rehabilitation
Abstract: Mobility aids such as canes, crutches, and walkers are widely used among the elderly and people with poor balance as a means of physical support to improve balance during walking. Advances in technology have led to the development of robotic walking aids that can provide active physical support and navigation by incorporating sensors and actuators into conventional walking aids. These devices have shown great potential in enhancing mobility; however, few studies have employed functionality to detect the user's posture or investigated feedback approaches to convey this information. It is important for those with impaired balance not just to be passively supported by mobility aids but also to be actively engaged in correcting their posture. In this paper, we introduce the concept of a sensory augmentation walker that can provide real-time directional information to the user via skin stretch feedback. The design and a user study on perceiving directions with a novel skin stretch handle are presented. Results show that the directional cues rendered by skin stretch feedback can be accurately perceived by all healthy young subjects (n = 8) at their fingertips, while the palm is shown to be a less effective location for perceiving this kind of feedback. Positive feedback about the benefits in helping people correct improper posture is also reported. Based on the results of this pilot study, a full system for improving balance performance in the elderly and people with impaired balance will be undertaken.
|
|
WeAT3 |
Theater |
Robots in Education, Therapy, and Rehabilitation |
Regular Session |
Chair: Barakova, Emilia I. | Eindhoven Univ. of Tech |
Co-Chair: McCarthy, Chris | Swinburne Univ. of Tech |
|
08:00-08:15, Paper WeAT3.1 | |
Socially-Assistive Robots to Enhance Learning for Secondary Students with Intellectual Disabilities and Autism |
Silvera-Tawil, David (CSIRO), Roberts-Yates, Christine (Murray Bridge High School) |
Keywords: Robots in Education, Therapy and Rehabilitation, Long-term Experience and Longitudinal HRI Studies, Applications of Social Robots
Abstract: For some time now, researchers have explored the use of social robots as tools to assist in therapy and education for children with intellectual disabilities and autism. Although encouraging results suggest that robots can be beneficial, there has been minimal progress in integrating this technology as a formal tool to supplement therapy and education. A significant reason is that the benefits of using robots have been demonstrated primarily in short-term case studies, with very few demonstrations over large samples. This study explores the impact of prolonged exposure to socially-assistive robots in an education context with secondary students with intellectual disabilities and autism. The evaluation was carried out 24 months after the introduction of two different robots into the disability unit of a public secondary school. Participants responded positively toward the use of robots at the school and would recommend them to other schools as well. Benefits, challenges, and limitations of the robots are discussed.
|
|
08:15-08:30, Paper WeAT3.2 | |
Design and System Validation of Rassle: A Novel Active Socially Assistive Robot for Elderly with Dementia |
Zheng, Zhaobo (Vanderbilt Univ), Fan, Jing (Vanderbilt Univ), Zhu, James (Vanderbilt Univ), Sarkar, Nilanjan (Vanderbilt Univ) |
Keywords: Robot Companions and Social Robots, Applications of Social Robots, Human Factors and Ergonomics
Abstract: The population around the globe is aging rapidly. People are living longer due to an increase in life expectancy, and fewer young people are available to help the elderly population. Elderly people therefore face functional and mental declines that affect their everyday activities and quality of life. The emergence of Socially Assistive Robots (SARs) in recent years, and the application of animal-like SARs in particular to elder care, have shown positive effects including reduced stress and improved communication and social interaction among older adults. However, existing animal-like SARs are generally passive and limited in terms of gesture-based interaction, while existing humanoid SARs have hard exteriors that prevent them from being in close enough proximity to people for touch-based interaction. In this paper, we design and develop a novel active SAR, Rassle, with whole-body tactile sensing and movable limbs to take full advantage of touch-based interactions with older adults. Touching can create social bonding, and it involves upper limb movement. We believe Rassle can encourage gross motor activity and deliver mental stimuli during interaction. In addition, we conducted a system validation study with twelve unimpaired adults. Experimental results demonstrate Rassle’s ability to deliver mental stimuli at a variety of difficulty levels and show that subjects enjoyed interacting with Rassle.
|
|
08:30-08:45, Paper WeAT3.3 | |
Physiotherapists' Acceptance of a Socially Assistive Robot in Ongoing Clinical Deployment |
Marti Carrillo, Felip (Swinburne Univ. of Tech), Butchart, Joanna (Royal Children's Hospital, Melbourne. Murdoch Children's Res), Kruse, Nicholas Jacob (The Univ. of Melbourne), Scheinberg, Adam (Murdoch Children's Res. Inst), Wise, Lisa Zafrina (Swinburne Univ. of Tech), McCarthy, Chris (Swinburne Univ. of Tech) |
Keywords: Robots in Education, Therapy and Rehabilitation, Robot Companions and Social Robots, Applications of Social Robots
Abstract: We report on physiotherapists' acceptance of a Socially Assistive Robot (SAR) as a therapeutic aid for paediatric rehabilitation. The SAR is undergoing in situ evaluation while being deployed as part of the clinical care of paediatric rehabilitation patients at the Royal Children's Hospital in Melbourne, Australia. The robot is equipped to lead rehabilitation sessions of up to 30 minutes under the guidance of a therapist, without technician support or Wizard-of-Oz operation. In this paper we report on quantitative and qualitative data collected from 8 therapists participating in our study across 19 rehabilitation sessions. Data was collected after each therapy session. Our results show our system achieves a high degree of acceptance, particularly with respect to its perceived usefulness, and ease-of-use. Moreover, multiple sessions operating the SAR appears to strengthen positive perceptions of our system.
|
|
08:45-09:00, Paper WeAT3.4 | |
Dyadic Gaze Patterns During Child-Robot Collaborative Gameplay in a Tutoring Interaction |
Mwangi, Eunice Njeri (Eindhoven Univ. of Tech), Barakova, Emilia I. (Eindhoven Univ. of Tech), Díaz-Boladeras, Marta (Res. Center for Dependency Care and Autonomous Living, UPC, Sp), Català, Andreu (Univ. Pol. De Catalunya), Rauterberg, Matthias (Eindhoven Univ. of Tech) |
Keywords: Robots in Education, Therapy and Rehabilitation
Abstract: This study examines patterns of coordinated gaze between a child and a robot (NAO) during a card matching game, 'Memory'. Dyadic gaze behaviors such as mutual gaze, gaze following, and joint attention are indications both of the child's engagement with the robot and of the quality of child-robot interaction. Eighteen children interacted with a robot tutor in two settings. In the first setting, the robot tutor gave clues to help children find the matching cards; in the other, the robot tutor only looked at the participants during play. We investigated the coordination between the child's and the robot's gaze behaviors. We found that more occurrences of mutual gaze and gaze following made the children aware of the gaze hints given by the robot and improved the efficacy of the robot tutor as a helping agent. This study therefore provides guidelines for gaze behavior design to enrich child-robot interaction in a tutoring context.
|
|
WeBT1 |
308 |
Motion Planning and Navigation in Human-Centered Environments |
Regular Session |
Chair: Seleem, Ibrahim | Egypt Japan Univ. of Science and Tech |
Co-Chair: Ullah, Sami | Shanghai Jiao Tong Univ |
|
11:30-11:45, Paper WeBT1.1 | |
Formalizing a Transient-Goal Driven Approach for Pedestrian-Aware Robot Navigation |
K. Narayanan, Vishnu (ATR), Miyashita, Takahiro (ATR), Hagita, Norihiro (ATR) |
Keywords: Motion Planning and Navigation in Human-Centered Environments
Abstract: In this paper, we lay the algorithmic foundations of a unifying strategy for pedestrian-aware navigation that is aimed at service/social robots deployed in large human-crowded environments. In order to accommodate both modeled and learned social navigation behaviors, we formalize an approach within which the robot traverses to a specific goal (or sub-goal) via a trajectory of optimal transient-goals or optimal short-term waypoints. We then evaluate an implementation of the navigation strategy, by utilizing an augmented Risk-based Rapidly Exploring Random Trees (RRT) planner, and demonstrate its efficacy for real-world deployment using discriminative simulations and by providing avenues for future work.
|
|
11:45-12:00, Paper WeBT1.2 | |
Motion Planning for Continuum Robots: A Learning from Demonstration Approach |
Seleem, Ibrahim (Egypt Japan Univ. of Science and Tech), El-Hussieny, Haitham (Faculty of Engineering(Shoubra), Benha Univ), Assal, Samy F. M. (Faculty of Engineering, Tanta Univ. Tanta, Egypt) |
Keywords: Motion Planning and Navigation in Human-Centered Environments, Programming by Demonstration, Medical and Surgical Applications
Abstract: Continuum robots have recently been used in the inspection of tight and confined spaces. State-of-the-art motion planning algorithms developed for rigid robots can be inadequate when applied to redundant and compliant continuum robots. In this research, a Demonstration-Guided Motion Planning (DGMP) framework is proposed to let continuum robots imitate a set of given demonstrations to plan and execute point-to-point spatial motions. A flexible interface allows humans to intuitively demonstrate motions for the robot via teleoperation. The Dynamic Movement Primitives (DMP) framework is adopted to learn, reproduce, and generalize the given demonstrations while avoiding novel moving obstacles in the environment. Meanwhile, a Model Reference Adaptive Controller (MRAC) is proposed to ensure robustness in tracking the motions generated by the DGMP. The developed approach is evaluated on a simulated model of a two-section continuum robot, and results show that the proposed DGMP is effective in generating and tracking spatial motions for continuum robots. This encourages further investigation toward planning complex motions for redundant continuum robots in the future.
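The DMP component adopted here can be sketched as a 1-D discrete movement primitive: a spring-damper transformation system driven by a phase-dependent forcing term. This is a generic textbook DMP under assumed gains, not the paper's DGMP implementation; with zero forcing it reduces to a damped pull toward the goal.

```python
def dmp_rollout(x0, goal, forcing, tau=1.0, dt=0.01, steps=300,
                k=100.0, d=20.0, alpha=4.0):
    """Integrate a 1-D discrete Dynamic Movement Primitive.
    `forcing(s)` is the learned nonlinearity shaping the trajectory;
    it is scaled by (goal - x0) so the motion generalizes to new goals."""
    x, v, s = x0, 0.0, 1.0
    traj = [x]
    for _ in range(steps):
        f = forcing(s) * (goal - x0)
        a = (k * (goal - x) - d * v + f) / tau   # transformation system
        v += a * dt
        x += v * dt / tau
        s += (-alpha * s / tau) * dt             # canonical phase decays 1 -> 0
        traj.append(x)
    return traj

# With zero forcing the primitive is a critically damped pull to the goal.
traj = dmp_rollout(x0=0.0, goal=1.0, forcing=lambda s: 0.0)
print(abs(traj[-1] - 1.0) < 1e-2)  # True: trajectory settles at the goal
```

Learning from demonstration amounts to fitting `forcing(s)` (e.g., with radial basis functions) so the rollout reproduces the demonstrated trajectory.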
|
|
12:00-12:15, Paper WeBT1.3 | |
3-D Visual Feedback for Automated Sorting of Cells with Ultra-Low Proportion under Dark Field |
Tan, Jieyu (Beijing Inst. of Tech), Wang, Huaping (Beijing Inst. of Tech), Shi, Qing (Beijing Inst. of Tech), Zheng, Zhiqiang (Beijing Inst. of Tech), Cui, Juan (Beijing Inst. of Tech), Sun, Tao (Beijing Inst. of Tech), Huang, Qiang (Beijing Inst. of Tech), Fukuda, Toshio (Meijo Univ) |
Keywords: Assistive Robotics, Cognitive and Sensorimotor Development
Abstract: The study of cellular behaviors, especially of ultra-rare cell types, can improve the accuracy of clinical diagnoses and advance bio-research engineering; hence the importance of isolating such cells from heterogeneous mixtures. However, current methods may fail in purity or versatility, or may contaminate the target cells, which is a fatal drawback for rare cells. To address this issue, we propose a versatile method to automatically select and capture fluorescently stained target cells with high purity and recovery rate, by developing a novel 3D image processing algorithm for dark-field conditions. With automated pick-and-place strategies, the micro-robotic system achieves cell screening even in an environment with ultra-sparse cells. In the proposed visual method, Markov Random Field (MRF) separation is adapted to the fluorescent environment to attain real-time planar localization of the micro-pipette and target cells. A reformulated method derived from Depth from Defocus (DFD) is introduced to acquire 3D information. The basic system consists of a camera mounted on a motorized fluorescence microscope and a micro-manipulator for cell capture. The fluorescent label helps screen out most of the undesired cells but also imposes extra constraints and requirements on our visual method. Finally, experiments collecting 3T3 cells verify the feasibility and validity of the designed method, achieving on average 98% purity and an 80% recovery rate within the time limits. This study indicates that the proposed visual processing method not only provides reliable location feedback for micro-manipulation in rare cell sorting, but can also be easily extended to other automated micro-robotic manipulation tasks.
|
|
12:15-12:30, Paper WeBT1.4 | |
EMoVI-SLAM: Embedded Monocular Visual Inertial SLAM with Scale Update for Large Scale Mapping and Localization |
ULLAH, SAMI (Shanghai Jiao Tong Univ), Song, Bowen (Shanghai Jiao Tong Univ), Chen, Weidong (Shanghai Jiao Tong Univ) |
Keywords: Robots in Education, Therapy and Rehabilitation, User-centered Design of Robots, Assistive Robotics
Abstract: In recent research, monocular simultaneous localization and mapping (SLAM) remains a well-known technique for ego-motion tracking; however, it suffers significantly from scale drift. Depth estimation in a monocular vision system, which remains a challenging problem, is closely related to this drift, and hence monocular SLAM remains unsuitable for large-scale mapping and localization. This paper presents a novel solution, a wearable and embedded EMoVI-SLAM system, that resolves scale drift through a multi-sensor fusion architecture integrating visual and inertial data, with monocular SLAM as the basis of the visual framework. Firstly, the unknown scale parameter of the monocular vision system is estimated from the IMU measurements, while the gravity direction and gyroscope bias are initialized. Secondly, the pose estimates from the monocular visual sensor and the IMU are fused using an Unscented Kalman Filter (UKF). Furthermore, to minimize scale drift, the scale is re-computed whenever the IMU bias errors exceed a safe threshold. Finally, experiments are carried out by mounting the embedded SLAM system on a head-gear in two different test environments, covering indoor and outdoor large-scale motion, as well as on the EuRoC dataset. Experimental results show that the proposed algorithm outperforms state-of-the-art visual inertial SLAM systems.
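The scale update described in the abstract, re-estimating the monocular scale from IMU data once bias error exceeds a safe threshold, can be sketched roughly as below. This is an illustrative least-squares ratio between metric IMU displacements and up-to-scale visual displacements; the function names and sample numbers are assumptions, not taken from the paper.

```python
def estimate_scale(visual_displacements, imu_displacements):
    """Least-squares scale fitting metric IMU displacements to the
    up-to-scale displacements reported by monocular SLAM."""
    num = sum(v * m for v, m in zip(visual_displacements, imu_displacements))
    den = sum(v * v for v in visual_displacements)
    return num / den

def maybe_update_scale(scale, bias_error, threshold, vis, imu):
    """Re-compute the scale only when the accumulated IMU bias error
    exceeds the safe threshold, as the abstract describes."""
    if bias_error > threshold:
        return estimate_scale(vis, imu)
    return scale

# Hypothetical up-to-scale visual displacements vs. metric IMU displacements
vis = [1.0, 2.0, 3.0]
imu = [2.1, 3.9, 6.0]
s = maybe_update_scale(1.0, bias_error=0.6, threshold=0.5, vis=vis, imu=imu)
```

When the bias error stays below the threshold, the previous scale is kept unchanged, which avoids needless re-estimation between updates.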
|
|
WeBT2 |
309 |
Social Intelligence in Kinesthetic, Personalized, Adaptive Human-Robot Interaction |
Special Session |
Chair: Solis, Jorge | Karlstad Univ. / Waseda Univ |
Co-Chair: Sørensen, Anders | Univ. of Southern Denmark
Organizer: Solis, Jorge | Karlstad Univ. / Waseda Univ |
Organizer: Sørensen, Anders | Univ. of Southern Denmark
Organizer: Rasmussen, Gitte | Univ. of Southern Denmark |
|
11:30-11:45, Paper WeBT2.1 | |
Towards Skills Evaluation of Elderly for Human-Robot Interaction |
Filippeschi, Alessandro (Scuola Superiore Sant'Anna), Peppoloni, Lorenzo (Scuola Superiore Sant'Anna), Kostavelis, Ioannis (Center for Res. and Tech. Hellas), Gerłowska, Justyna (Medical Univ. of Lublin, Inst. of Methodology and Psych), Ruffaldi, Emanuele (Scuola Superiore Sant'Anna), Giakoumis, Dimitris (Centre for Res. and Tech. Hellas), Tzovaras, Dimitrios (Centre for Res. and Tech. Hellas), Rejdak, Konrad (Medical Univ. of Lublin), Avizzano, Carlo Alberto (Scuola Superiore Sant'Anna) |
Keywords: Monitoring of Behaviour and Internal States of Humans, Robots in Education, Therapy and Rehabilitation, Robot Companions and Social Robots
Abstract: For a proactive and user-centered robotic assistance and communication, an assistive robot must make decisions about the level of assistance to be provided. Therefore, the robot must be aware of the preferences and the capabilities of the elderly. At the same time, relying on a sensing setup which is totally embedded in the assistive robot would increase its usability. In the framework of the RAMCIP project, a novel skills evaluation methodology has been developed to make the robot aware of the user’s perceptual, cognitive and motor skills. This paper presents such a methodology and its preliminary evaluation. Based on a task analysis of the activities for which the robot provides assistance, the user’s skills are given a score which is updated at different time scales based on the source of information. Highly reliable information is gathered from caregivers at a low rate by means of a graphical interface hosted by the robot. This information refers to standard medical examinations. Based on the modules for motion tracking, object and activity recognition, specific actions of ADL are selected to update motor skills score at a higher rate, which is typically twice per day. The two sources of information are then fused in a Kalman filter. Preliminary results on the illustrative example of arm precision show that the robot’s sensing and cognitive capabilities suffice to obtain a state-of-the-art evaluation of the arm precision skill.
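The two-rate fusion the abstract describes, in which highly reliable low-rate caregiver assessments and noisier higher-rate sensor-derived ADL scores are merged in a Kalman filter, can be sketched with a scalar measurement update. The variances and score values below are illustrative assumptions, not taken from the RAMCIP system.

```python
def kalman_update(x, p, z, r):
    """Scalar Kalman measurement update: fuse the current skill estimate
    (mean x, variance p) with a measurement z of variance r."""
    k = p / (p + r)  # Kalman gain: weight of the new measurement
    return x + k * (z - x), (1 - k) * p

# Skill score estimate starts at 0.5 with high uncertainty.
x, p = 0.5, 1.0
# Low-rate, highly reliable caregiver assessment (small variance r)...
x, p = kalman_update(x, p, z=0.8, r=0.05)
# ...then a higher-rate, noisier sensor-derived ADL score (large variance r).
x, p = kalman_update(x, p, z=0.6, r=0.4)
```

The reliable measurement pulls the estimate strongly and shrinks its variance; the noisy one only nudges it, which is exactly the behavior wanted when mixing sources of different reliability.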
|
|
11:45-12:00, Paper WeBT2.2 | |
RoBody Interaction: A New Approach at Kinesthetic Human Robot Interaction |
Sørensen, Anders (Univ. of Southern Denmark), Rasmussen, Gitte (Univ. of Southern Denmark) |
Keywords: Detecting and Understanding Human Activity, Novel Interfaces and Interaction Modalities, Robots in Education, Therapy and Rehabilitation
Abstract: This paper presents a novel method for analyzing and designing kinesthetic interaction between humans and robots. The method is based on a combination of Social Embodied Interaction (SEI) analysis and Finite State Machine design. The method is presented along with its application to the design and analysis of an interactive training program for the robotic training device RoboTrainer-Light, with the objective of programming the robot so that an optimal training pattern emerges in the interaction, using only the exchange of force and motion.
|
|
12:00-12:15, Paper WeBT2.3 | |
Trust in Medical Human-Robot Interactions Based on Kinesthetic Guidance |
Weigelin, Bente Charlotte (Univ. of Southern Denmark), Mathiesen, Mia (Univ. of Southern Denmark), Nielsen, Christina (Univ. of Southern Denmark), Fischer, Kerstin (Univ. of Southern Denmark), Nielsen, Jacob (Univ. of Southern Denmark) |
Keywords: Robots in Education, Therapy and Rehabilitation, Interaction Kinesics
Abstract: In medical human-robot interactions, trust plays an important role since for patients there may be more at stake than during other kinds of encounters with robots. In the current study, we address issues of trust in the interaction with a prototype of a therapeutic robot, the Universal RoboTrainer, in which the therapist records patient-specific tasks for the patient by means of kinesthetic guidance of the patient's arm, which is connected to the robot. We carried out a user study with twelve pairs of participants who collaborated on recording a training program on the robot. We examine a) the degree to which participants identify the situation as uncomfortable or distressing, b) participants' own strategies to mitigate that stress, c) the degree to which the robot is held responsible for the problems occurring and the amount of agency ascribed to it, and d) when usability issues arise, what effect these have on participants' trust. We find signs of distress mostly in contexts with usability issues, as well as many verbal and kinesthetic mitigation strategies intuitively employed by the participants. Recommendations for robots to increase users' trust in kinesthetic interactions include the timely production of verbal cues that continuously confirm that everything is alright, as well as increased contingency in the presentation of strategies for recovering from usability issues as they arise.
|
|
12:15-12:30, Paper WeBT2.4 | |
Quantitative Comfort Evaluation of Eating Assistive Devices Based on Interaction Forces Estimation Using an Accelerometer |
Garcia Ricardez, Gustavo Alfonso (Nara Inst. of Science and Tech. (NAIST)), Solis, Jorge (Karlstad Univ. / Waseda Univ), Takamatsu, Jun (Nara Inst. of Science and Tech), Ogasawara, Tsukasa (Nara Inst. of Science and Tech) |
Keywords: Assistive Robotics, Robots in Education, Therapy and Rehabilitation, Human Factors and Ergonomics
Abstract: Robot usage in the fields of human support and healthcare is expanding. Robotic devices to assist humans in the self-feeding task have been developed to help patients with limited mobility in the upper limbs but the acceptance of these robots has been limited. In this work, we investigate how to quantitatively evaluate the comfort of an eating assistive device by estimating the interaction forces between the human and the robot when eating. Rather than using expensive or commercially unavailable devices to directly measure the forces involved in feeding, we use an accelerometer to estimate these forces, which are calculated using a previously observed estimation of the system mass and the measured acceleration during the feeding process. We experimentally verify our concept with a commercially-available eating assistive device and a human subject. The evaluation results demonstrate the feasibility of our approach.
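The force estimation strategy the abstract outlines, computing interaction forces from a previously estimated system mass and the measured acceleration, reduces to Newton's second law. A minimal sketch, with a hypothetical mass and hypothetical acceleration samples rather than the study's actual data:

```python
def interaction_forces(mass_kg, accel_samples):
    """Estimate interaction forces F = m * a from accelerometer readings,
    using a previously estimated system mass (Newton's second law)."""
    return [mass_kg * a for a in accel_samples]

# Hypothetical effective mass (kg) and measured accelerations (m/s^2)
forces = interaction_forces(0.25, [0.0, 1.2, -0.8])
```

The appeal of this design, as the abstract notes, is that a cheap accelerometer replaces expensive or commercially unavailable force sensors during feeding.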
|
|
WeBT3 |
Theater |
Novel Interfaces, Interaction Modalities, and Ergonomics - 3 |
Regular Session |
Chair: Blazevic, Pierre | Univ. of Versailles |
Co-Chair: Gradmann, Michael | Univ. of Bayreuth |
|
11:30-11:45, Paper WeBT3.1 | |
Implementation Issues of EMG-Based Motion Intention Detection for Exoskeletal Robots |
Kyeong, Seulki (KAIST), Kim, Won Dong (KAIST), Feng, Jirou (Korea Advanced Inst. of Science and Tech), Kim, Jung (KAIST) |
Keywords: Assistive Robotics, Cognitive and Sensorimotor Development
Abstract: Despite the advantages of electromyography (EMG), which can capture intent before actual movement, there have not been many studies on the use of EMG in exoskeletons. In this paper, we conducted an experiment to analyze the characteristics of EMG signals when they are used in exoskeleton robots. We combined the advantages of EMG sensors and physical sensors to control exoskeleton robots using both EMG and physical signals. The walking environment was identified using the surface electromyography (sEMG) signal and the physical signal with an accuracy of 88% or more. To compensate for the limitations of physical sensors, we used sEMG to distinguish changes in load during walking. Moreover, sEMG and physical signals were used to distinguish external collisions and to identify other variables that could be discriminated. Finally, to examine the characteristics of muscular fatigue, which is a disadvantage of electromyography, we conducted a muscle fatigue experiment on the lower limb and summarized how the EMG characteristics change in relation to force level and muscle fatigue.
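A common first step in EMG-based intent detection, not spelled out in the abstract, is full-wave rectification of the raw sEMG signal followed by smoothing into an envelope that can be thresholded. The sketch below uses an illustrative window size, threshold, and sample values; it is a generic technique, not the authors' pipeline.

```python
def emg_envelope(samples, window=4):
    """Full-wave rectify the raw sEMG samples, then smooth them with a
    trailing moving average to obtain an activation envelope."""
    rect = [abs(s) for s in samples]
    env = []
    for i in range(len(rect)):
        lo = max(0, i - window + 1)
        env.append(sum(rect[lo:i + 1]) / (i + 1 - lo))
    return env

def onset_detected(envelope, threshold=0.5):
    """Flag motion intent once the envelope crosses the threshold."""
    return any(e > threshold for e in envelope)

env = emg_envelope([0.1, -0.2, 0.9, -1.1, 0.8])
```

Because the envelope rises as soon as muscle activity starts, a threshold on it can signal intent before the limb visibly moves, which is the EMG advantage the abstract highlights.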
|
|
11:45-12:00, Paper WeBT3.2 | |
Kinesthetic Robot Force Response Enhanced by the Laws of Physics |
Gradmann, Michael (Univ. of Bayreuth), Wölfel, Kim (Univ. of Bayreuth), Henrich, Dominik (Univ. of Bayreuth) |
Keywords: Novel Interfaces and Interaction Modalities, Programming by Demonstration, HRI and Collaboration in Manufacturing Environments
Abstract: A popular approach to intuitive robot programming is kinesthetic guiding, where the user leads the robot through a task, providing a trajectory for automated replication. While the flow of information from the user towards the robot is already well explored, the feedback from the robot to the user within the same modality is often neglected. In this work we present an innovative concept for how the robot can indicate an imminent collision based on an adaptive virtual kinetic friction force. Our approach features both obstacle indication and collision avoidance without additional output devices. A user study motivates certain requirements for the provided feedback, whose fulfillment is demonstrated by numerical experiments.
|
|
12:00-12:15, Paper WeBT3.3 | |
Forward Kinematics and Compatibility Equations of a Joystick Based on a 12-6 Stewart Redundant Parallel Mechanism |
YOU, Jingjing (Nanjing Forestry Univ), Ye, Pengda (Nanjing Forestry Univ), Chellali, Ryad (Nanjing Forestry Univ), LIU, Ying (Nanjing Forestry Univ), YU, Maolin (Nanjing Forestry Univ) |
Keywords: Innovative Robot Designs, Degrees of Autonomy and Teleoperation, Motion Planning and Navigation in Human-Centered Environments
Abstract: Remote control with force feedback is one of the key features of tele-robotics systems. In this paper we investigate the mechanical properties of a joystick based on a 12-6 Stewart platform. In most classical 6 DOF parallel mechanisms, the forward kinematics cannot be described with closed-form solutions, which prevents real-time control and straightforward implementations. In this contribution we address a 12-6 Stewart redundant parallel mechanism. We present the forward kinematics of this mechanism and analyze the corresponding compatibility equations. We introduce intermediate variables, which reveal scale relationships among feature points. Accordingly, we derive 15 quadratic isomorphic compatibility equations, and then convert the forward kinematics equations into 12 linear compatibility equations. Based on matrix algebra, we determine the final and unique solution of the redundant system. Furthermore, we study the influence of the initial values and the structural parameters. We give simulation results showing both the efficiency and the precision of the proposed method compared to classical approaches. Our method outperforms previous works, allowing better design and faster control of accurate and effective 6 DOF joysticks.
|
|
12:15-12:30, Paper WeBT3.4 | |
Sound Reduction of Vibration Feedback by Perceptually Similar Modulation |
Cao, Nan (Tohoku Univ), Nagano, Hikaru (Tohoku Univ), Konyo, Masashi (Tohoku Univ), Tadokoro, Satoshi (Tohoku Univ) |
Keywords: Human Factors and Ergonomics, Novel Interfaces and Interaction Modalities, Virtual and Augmented Tele-presence Environments
Abstract: The transmission of high-frequency collision vibration can effectively deliver tactile characteristics in the teleoperation of remote robots and in virtual environments. However, high-frequency vibrations also introduce audible noise. To address this issue, we modulated the frequency and amplitude of the collision vibration to keep it perceptually similar while reducing its sound. Our experimental results showed that the sound pressure level of the original collision vibrations (f = 675 Hz and 1012 Hz) is higher than that of the perceptually similar collision vibrations (f = 300 Hz and 450 Hz). These results suggest that our modulation method is able to reduce the sound level of collision vibrations while maintaining perceptual similarity.
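The modulation idea, shifting a collision vibration down in frequency while rescaling its amplitude so perceived intensity is preserved, might be sketched as follows. The equal-intensity rule used here (a power-law product of amplitude and frequency held constant) is an illustrative assumption, not the authors' actual psychophysical model.

```python
def modulate(freq_hz, amp, target_freq_hz, exponent=1.0):
    """Shift a sinusoidal vibration to a lower frequency and rescale its
    amplitude so that a simple power-law intensity model,
    amp * freq**exponent, stays constant. The model is an assumed
    stand-in for perceptual similarity."""
    new_amp = amp * (freq_hz / target_freq_hz) ** exponent
    return target_freq_hz, new_amp

# Move a 675 Hz collision vibration down to 300 Hz.
f, a = modulate(675.0, 1.0, 300.0)
```

Lowering the carrier frequency reduces audible noise (sound pressure level grows with frequency in this range), while the compensating amplitude increase is meant to keep the tactile sensation comparable.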
|
|
WeCT1 |
308 |
Applications of Social Robots |
Regular Session |
Chair: Mastrogiovanni, Fulvio | Univ. of Genoa |
Co-Chair: Nakadai, Kazuhiro | Honda Res. Inst. Japan Co., Ltd |
|
13:30-13:45, Paper WeCT1.1 | |
Poker Face Influence: Persuasive Robot with Minimal Social Cues Triggers Less Psychological Reactance |
Ghazali, Aimi Shazwani (Eindhoven Univ. of Tech), Ham, Jaap (Eindhoven Univ. of Tech), Barakova, Emilia I. (Eindhoven Univ. of Tech), Markopoulos, Panos (Eindhoven Univ. of Tech) |
Keywords: Applications of Social Robots, Creating Human-Robot Relationships, Non-verbal Cues and Expressiveness
Abstract: Applications of social robotics in different domains such as education, healthcare, or as companions to people living alone, often entail that robots will act as persuasive agents. However, persuasive attempts can give rise to psychological reactance, where people have negative thoughts and emotions that limit adherence to the persuader. To understand the phenomenon of reactance to robotic persuaders, we investigate the effect of the social cues of an artificial agent on psychological reactance and compliance. Participants in a laboratory experiment played a decision-making game in which persuasive attempts were delivered in one of three forms: as persuasive text, spoken by a social robot (the Socibot™) displaying minimal social cues, or by the same robot displaying enhanced social cues. Our results suggest that a social robot with minimal social cues invokes the lowest reactance. Remarkably, exploratory analyses indicate cross-gender effects (between robot and user) on psychological reactance, and that female participants showed higher compliance than male participants.
|
|
13:45-14:00, Paper WeCT1.2 | |
"How Was Your Stay?": Exploring the Use of Robots for Gathering Customer Feedback in the Hospitality Industry |
Chung, Michael Jae-Yoon (Univ. of Washington), Cakmak, Maya (Univ. of Washington) |
Keywords: Applications of Social Robots, User-centered Design of Robots, Long-term Experience and Longitudinal HRI Studies
Abstract: This paper presents four exploratory studies of the potential use of robots for gathering customer feedback in the hospitality industry. To account for the viewpoints of both hotels and guests, we administered need-finding interviews at five hotels and an online survey concerning hotel guest experiences with 60 participants. We then conducted two deployment studies, deploying software prototypes we designed for Savioke Relay robots to collect customer feedback: (i) a hotel deployment study (three hotels over three months) to explore the feasibility of robot use for gathering customer feedback as well as issues such deployment might pose, and (ii) a hotel kitchen deployment study (at Savioke headquarters over three weeks) to explore the role of different robot behaviors (mobility and social attributes) in gathering feedback and to understand customers' thought processes in the context in which they experience a service. We found that hotels want to collect customer feedback in real time, to disseminate positive feedback immediately and to respond to unhappy customers while they are still on-site. Guests want to inform the hotel staff about their experiences without compromising their convenience and privacy. We also found that the robot users, e.g. hotel staff, use their domain knowledge to increase the response rate to customer feedback surveys at the hotels. Finally, environmental factors, such as the robot's location in the building, influenced customer response rates more than altering the behaviors of the robot collecting the feedback.
|
|
14:00-14:15, Paper WeCT1.3 | |
Signal Restoration Based on Bi-Directional LSTM with Spectral Filtering for Robot Audition |
Taniguchi, Ryosuke (Tokyo Inst. of Tech), Hoshiba, Kotaro (Kanagawa Univ), Itoyama, Katsutoshi (Kyoto Univ), Nishida, Kenji (Tokyo Inst. of Tech), Nakadai, Kazuhiro (Honda Res. Inst. Japan Co., Ltd) |
Keywords: Applications of Social Robots, Machine Learning and Adaptation, Evaluation Methods and New Methodologies
Abstract: This paper addresses the restoration of acoustic signals for robot audition. A robot usually listens to target acoustic signals such as speech and music in noisy conditions. Acoustic information in such signals is inevitably contaminated with noise. Even when noise reduction techniques such as sound source separation are applied, the noise-reduced acoustic signals still contain some distortion and/or residual noise. The distortion and residual noise degrade the performance of recognition processes such as automatic speech recognition (ASR). We decided to use bidirectional long short-term memory (Bi-LSTM) for acoustic signal restoration since it can represent the dynamic behavior of a temporal sequence well in both the forward and backward directions. When applying Bi-LSTM to recover acoustic signals, there is an issue: acoustic signals tend to be sparse at high frequencies, and thus Bi-LSTM training becomes insufficient at such frequencies due to a lack of training data. Therefore, we propose a new restoration method based on Bi-LSTM with spectral filtering. A spectral filter and the corresponding inverse filter are introduced into the Bi-LSTM framework to accelerate training at high frequencies. Preliminary results showed that the proposed Bi-LSTM with spectral filtering can perform signal restoration even when only a small amount of training data is available.
|
|
14:15-14:30, Paper WeCT1.4 | |
Generation of Gestures During Presentation for Humanoid Robots |
Shimazu, Akihito (The Univ. of Electro Communications), Hieida, Chie (The Univ. of Electro-Communications), Nagai, Takayuki (Univ. of Electro-Communications), Nakamura, Tomoaki (The Univ. of Electro-Communications), Takeda, Yuki (Dai Nippon Printing Co., Ltd), Hara, Takenori (Dai Nippon Printing Co., Ltd), Nakagawa, Osamu (Dai Nippon Printing Co., Ltd), Maeda, Tsuyoshi (Dai Nippon Printing Co., Ltd) |
Keywords: Applications of Social Robots, Machine Learning and Adaptation, Non-verbal Cues and Expressiveness
Abstract: For presentation purposes, gestures play an exceptionally important role in improving the transmission of information. It has been demonstrated that body language expressing the enthusiasm and intention of the presenter affects the success of the presentation and the impression on the audience. For these reasons, presentation robots are required to perform such movements; however, the manual design of these movements is a difficult task. In this research, we propose a method to learn the relationship between speech prosodic information and motion using a sequence-to-sequence model, and to directly generate appropriate motions from prosodic information. This study also proposes a method for generating motions that convey the meaning of specific words. We implement the proposed method on the "Pepper" robot to evaluate its performance.
|
|
14:30-14:45, Paper WeCT1.5 | |
Development of a Semi-Autonomous Robotic System to Assist Children with Autism in Developing Visual Perspective Taking Skills |
Zaraki, Abolfazl (Univ. of Hertfordshire), Wood, Luke Jai (Univ. of Hertfordshire), Robins, Ben (Univ. of Hertfordshire), Dautenhahn, Kerstin (Univ. of Hertfordshire) |
Keywords: Applications of Social Robots, Assistive Robotics
Abstract: Robot-assisted therapy has been successfully used to help children with Autism Spectrum Condition (ASC) develop their social skills, but very often with the robot being fully controlled remotely by an adult operator. Although this method is reliable and allows the operator to conduct a therapy session in a customised, child-centred manner, it increases the cognitive workload on the human operator, since it requires them to divide their attention between the robot and the child to ensure that the robot is responding appropriately to the child's behaviour. In addition, a remote-controlled robot is not aware of information regarding the interaction with children and consequently does not have the ability to shape live HRIs. Further to this, a remote-controlled robot typically does not have the capacity to record this information, and additional effort is required to analyse the interaction data. For these reasons, using a remote-controlled robot in robot-assisted therapy may be unsustainable for long-term interactions. To lighten the cognitive burden on the human operator and to provide a consistent therapeutic experience, it is essential to create some degree of autonomy and enable the robot to perform some autonomous behaviours during interactions with children. This paper provides an overview of the design and implementation of a robotic system called Sense-Think-Act, which converts the remote-controlled scenarios of our humanoid robot into a semi-autonomous social agent with the capacity to play games autonomously (under human supervision) with children in real-world school settings. The developed system has been implemented on the humanoid robot Kaspar and evaluated in a trial with four children with ASC at a local specialist secondary school in the UK.
|
|
14:45-15:00, Paper WeCT1.6 | |
Dialogue-Based Supervision and Explanation of Robot Spatial Beliefs: A Software Architecture Perspective |
Buoncompagni, Luca (Univ. of Genoa), Mastrogiovanni, Fulvio (Univ. of Genoa) |
Keywords: Computational Architectures, Programming by Demonstration, Multi-modal Situation Awareness and Spatial Cognition
Abstract: The paper presents a software architecture allowing a robot to learn new compositions of objects in table-top scenarios from human demonstrations. The robot qualitatively represents those scenes, reasons about their similarity, and interacts with humans through dialogues about the represented scenes. We formalise the robot behaviour based on a Description Logic representation of scenes through spatial beliefs, i.e., learned logic predicates, on which the robot applies symbolic reasoning to recognise and explain the scene. We exploit the logical structure of predicates in a software architecture that enables the robot to expose its beliefs and, if required, allows a human supervisor to apply corrections in a form akin to robot active perception. The paper critically discusses the design of the software components and their interfaces, discriminating between knowledge representation and dialogue management. These components are developed for human-robot knowledge-sharing applications involving visual, verbal, and auditory modalities of interaction. Software components are treated as grey boxes managing an ontology-based formalisation of robot beliefs through four contextualised dialogues, for which we present a unique design pattern.
|
|
WeCT2 |
309 |
Cooperation and Collaboration in Human-Robot Teams |
Regular Session |
Co-Chair: Bader, Hayden | Duke Univ |
|
13:30-13:45, Paper WeCT2.1 | |
Considering Human Behavior in Motion Planning for Smooth Human-Robot Collaboration in Close Proximity |
Zhao, Xuan (City Univ. of Hong Kong), Pan, Jia (The City Univ. of Hong Kong) |
Keywords: Cooperation and Collaboration in Human-Robot Teams, HRI and Collaboration in Manufacturing Environments, Motion Planning and Navigation in Human-Centered Environments
Abstract: It is well known that a deep understanding of co-workers' behavior and preferences is important for collaboration effectiveness. In this work, we present a method to accomplish smooth human-robot collaboration in close proximity by taking the human's behavior into account while planning the robot's trajectory. In particular, we first use an occupancy map to summarize the human's movement preference over time, and this prior information is then considered in an optimization-based motion planner via two cost items: 1) avoidance of the workspace previously occupied by the human, to eliminate interruptions and increase the task success rate; 2) a tendency to keep a safe distance between the human and the robot, to improve safety. In the experiments, we compare the collaboration performance among planners using different combinations of human-aware cost items, including the avoidance factor only, both the avoidance and safe-distance factors, and a baseline in which no human-related factors are considered. The generated trajectories are tested in both simulated and real-world environments, and the results show that our method can significantly increase collaborative task success rates and is also human-friendly.
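The two human-aware cost items in the abstract can be sketched as a simple per-waypoint cost combining an occupancy penalty and a safe-distance penalty. All weights, the grid coordinates, and the occupancy values below are illustrative assumptions, not the paper's actual planner.

```python
import math

def human_aware_cost(waypoint, occupancy, human_pos,
                     w_occ=1.0, w_dist=1.0, safe_dist=0.5):
    """Cost item 1: penalty proportional to how often the human occupied
    this cell (from the occupancy map). Cost item 2: penalty for coming
    closer to the human than a safe distance."""
    occ = occupancy.get(waypoint, 0.0)
    d = math.dist(waypoint, human_pos)
    proximity = max(0.0, safe_dist - d)
    return w_occ * occ + w_dist * proximity

# Hypothetical occupancy map: cell (0, 0) is frequently used by the human.
occ_map = {(0, 0): 0.9, (1, 0): 0.1}
c_busy = human_aware_cost((0, 0), occ_map, human_pos=(0.2, 0.0))
c_free = human_aware_cost((1, 0), occ_map, human_pos=(0.2, 0.0))
```

An optimization-based planner would sum such costs along a candidate trajectory, steering the robot away from cells the human prefers and from the human's current position.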
|
|
13:45-14:00, Paper WeCT2.2 | |
TEAMMATE: A Scalable System for Measuring Affect in Human-Machine Teams |
Wen, James (United States Air Force Acad), Stewart, Amanda (United States Air Force Acad), Billinghurst, Mark (Univ. of Canterbury), Tossell, Chad (USAF Acad) |
Keywords: Cooperation and Collaboration in Human-Robot Teams, Embodiment, Empathy and Intersubjectivity, Creating Human-Robot Relationships
Abstract: Strong empathic bonding between members of a team can elevate team performance tremendously, but it is not clear how such bonding within human-machine teams may affect mission success. Prior work using self-report surveys and end-of-task metrics does not capture how such bonding may evolve over time and impact task fulfillment. Furthermore, sensor-based measures do not scale easily to meet the need for the substantial data required to measure potentially subtle effects. We introduce TEAMMATE, a system designed to provide insights into the emotional dynamics humans may form with machine teammates, which could critically impact the design of human-machine teams.
|
|
14:00-14:15, Paper WeCT2.3 | |
A Study of Human-Robot Copilot Systems for En-Route Destination Changing |
Jiang, Yu-Sian (Univ. of Texas at Austin), Warnell, Garrett (U.S. Army Res. Lab), Munera, Eduardo (Mindtronic AI), Stone, Peter (Univ. of Texas at Austin) |
Keywords: Cooperation and Collaboration in Human-Robot Teams, Evaluation Methods and New Methodologies, Motion Planning and Navigation in Human-Centered Environments
Abstract: In this paper, we introduce the problem of en-route destination changing for a self-driving car, and we study the effectiveness of human-robot copilot systems as a solution. The copilot system is one in which the autonomous vehicle not only handles low-level vehicle control, but also continually monitors the intent of the human passenger in order to respond to dynamic changes in desired destination. We specifically consider a vehicle parking task, where the vehicle must respond to the user's intent to drive to and park next to a particular roadside sign board, and we study a copilot system that detects the passenger's intended destination based on gaze. We conduct a human study to investigate, in the context of our parking task, (a) if there is benefit in using a copilot system over manual driving, and (b) if copilot systems that use eye tracking to detect the intended destination have any benefit compared to those that use a more traditional, keyboard-based system. We find that the answers to both of these questions are affirmative: our copilot systems can complete the autonomous parking task more efficiently than human drivers can, and our copilot system that utilizes gaze information enjoys an increased success rate over one that utilizes typed input.
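A gaze-based destination detector of the kind the study evaluates could, in its simplest form, pick the sign board that accumulates the most gaze samples, subject to a minimum dwell. This is a hypothetical sketch of the general idea, not the authors' implementation; the function name, dwell threshold, and sample data are all assumptions.

```python
from collections import Counter

def intended_destination(gaze_targets, min_samples=3):
    """Return the sign board the passenger looked at most, provided the
    dwell (number of gaze samples on it) meets a minimum; else None."""
    if not gaze_targets:
        return None
    target, count = Counter(gaze_targets).most_common(1)[0]
    return target if count >= min_samples else None

# Gaze samples mapped to sign boards over a short window
dest = intended_destination(["A", "B", "B", "B", "A"])
```

The dwell threshold guards against treating a passing glance as an en-route destination change, which a keyboard-based interface avoids by construction.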
|
|
14:15-14:30, Paper WeCT2.4 | |
Behavior Explanation As Intention Signaling in Human-Robot Teaming |
Gong, Ze (Arizona State Univ), Zhang, Yu (Tony) (Arizona State Univ) |
Keywords: Cooperation and Collaboration in Human-Robot Teams, Cognitive Skills and Mental Models, Assistive Robotics
Abstract: Facilitating a shared team understanding is an important task in human-robot teaming. It requires not only that the robot understand what the human is doing, but also that the robot's behavior be explainable to the human. We propose an approach to explaining robot behavior as intention signaling using natural language sentences. In contrast to recent approaches to generating explicable and legible plans, intention signaling does not require the robot to change its plan; our approach also does not force humans to update their knowledge, as is generally required for explanation generation. The key questions to be answered for intention signaling are its content (i.e., what) and timing (i.e., when). Based on our prior work, we formulate the human's interpretation of robot actions as a labeling process to be learned. To capture the dependencies between the interpretations of robot actions that are far apart, skip-chain Conditional Random Fields (CRFs) are used. The answers to the when and the what can then be converted into an inference problem in the skip-chain CRF. Potential timings and contents of signaling are explored by fixing certain labels in the CRF model; the configuration that maximizes the underlying probability of the labels being interpretable, which reflects the human's understanding of the robot's plan, is returned for signaling. For evaluation, we construct a synthetic domain to verify that intention signaling can help achieve better teaming by reducing criticism of robot behavior that may appear undesirable but is otherwise required. We use Amazon MTurk to assess robot behavior in two settings (i.e., with and without signaling). Results show that our approach achieves the desired effect.
|
|
14:30-14:45, Paper WeCT2.5 | |
Interactive Plan Explicability in Human-Robot Teaming |
Zakershahrak, Mehrdad (Arizona State Univ), Sonawane, Akshay (Arizona State Univ), Gong, Ze (Arizona State Univ), Zhang, Yu (Tony) (Arizona State Univ) |
Keywords: Cooperation and Collaboration in Human-Robot Teams, Cognitive Skills and Mental Models, Assistive Robotics
Abstract: Human-robot teaming is one of the most important applications of artificial intelligence in the fast-growing field of robotics. For effective teaming, a robot must not only maintain a behavioral model of its human teammates to project the team status, but also be aware of its human teammates' expectation of itself. Being aware of the human teammates' expectation leads to robot behaviors that better align with the human expectation, thus facilitating more efficient and potentially safer teams. Our work addresses the problem of human-robot interaction with the consideration of such teammate models in sequential domains by leveraging the concept of plan explicability. In plan explicability, however, the human is considered solely as an observer. In this paper, we extend plan explicability to consider interactive settings where the human and robot's behaviors can influence each other. We term this new measure Interactive Plan Explicability (IPE). We compare the joint plan generated by our approach with the consideration of this measure using the fast forward (FF) planner, with the plan generated by FF without such consideration, as well as with the plan created with human subjects interacting with a robot running an FF planner. Since the human subject is expected to adapt to the robot's behavior dynamically when it deviates from her expectation, the plan created with human subjects is expected to be more explicable than the FF plan, and comparable to the explicable plan generated by our approach. Results indicate that the explicability score of plans generated by our algorithm is indeed closer to the human interactive plan than the plan generated by FF, implying that the plans generated by our algorithms align better with the expected plans of the human during execution.
|
|
14:45-15:00, Paper WeCT2.6 | |
Designing Multimodal Intent Communication Strategies for Conflict Avoidance in Industrial Human-Robot Teams |
Aubert, Miles Clinton (Duke Univ), Bader, Hayden (Duke Univ), Hauser, Kris (Duke Univ) |
Keywords: HRI and Collaboration in Manufacturing Environments, Cooperation and Collaboration in Human-Robot Teams, Multi-modal Situation Awareness and Spatial Cognition
Abstract: Robot-to-human intent communication has been proposed as a method of enabling fluent coordination in human-robot teams. Prior research has focused on identifying modalities by which intent information can be accurately communicated, but has not yet studied whether intent communication enables fluent or safer coordination in human-robot teams in which intent communication is only supportive to the team’s primary task. To address this question, we conduct a study (N = 29) in a mock collaborative manufacturing scenario in which motion-based and display-based intent communication approaches are evaluated under varying penalties for failing to coordinate safely. Subjective and objective measures of team fluency suggest that although intent communication supports fluent coordination, using a purely motion-based or a purely display-based approach may not be the most effective strategy. Although multimodal intent communication did not significantly improve upon unimodal approaches, merging both motion-based and display-based intent communication seems to combine the strengths of both approaches. Interestingly, results also suggest that contrary to theoretical predictions, the positive effect of intent communication is generally robust to teaming scenarios that require members to operate concurrently.
|
|
WeCT3 |
Theater |
User-Centered Design of Robots |
Regular Session |
Chair: Chanseau, Adeline | Univ. of Hertfordshire |
Co-Chair: Kawamura, Kazuhiko | Vanderbilt Univ |
|
13:30-13:45, Paper WeCT3.1 | |
Experimental Evaluation of Cooperativeness and Collision Safety of a Wearable Robot Arm |
Nakabayashi, Koki (Waseda Univ), Iwasaki, Yukiko (Waseda Univ), Takahashi, Shota (Waseda Univ), Iwata, Hiroyasu (Waseda Univ) |
Keywords: User-centered Design of Robots, Assistive Robotics, Human Factors and Ergonomics
Abstract: This paper presents an experimental evaluation of a collaborative task for the design assessment of a wearable robot arm. Wearable robotic devices have recently been proposed as a new concept in human-robot collaboration. In particular, wearable robot arms are expected to complement our physical capabilities and perform multiple tasks simultaneously. However, a design principle for a wearable robot arm based on cooperativeness and collision safety has not been discussed sufficiently. The design factors of the wearable robot arm are considered to have a dominant influence on cooperativeness and collision safety. We therefore conducted an experiment to evaluate the time required for a collaborative task and the number of collisions with the robot arm device. Arm lengths and attachment positions were compared in the experiment as design factors of a wearable robot arm. The experimental results indicated a correlation between the evaluation indexes and the design factors. Based on these results, we recommend that the concept design of a wearable robot arm use an extended arm length without constraints on the attachment position.
|
|
13:45-14:00, Paper WeCT3.2 | |
The Effects of Eye Design on the Perception of Social Robots |
Luria, Michal (Carnegie Mellon Univ), Hodgins, Jessica (Carnegie Mellon Univ), Forlizzi, Jodi (Carnegie Mellon Univ) |
Keywords: User-centered Design of Robots, Robot Companions and Social Robots
Abstract: Engagement with social robots is influenced by their appearance and shape. While robots are designed with various features, almost all designs have some form of eyes. In this paper, we evaluate eye design variations for tabletop robots in a lab study, with the goal of learning how they influence participants' perception of the robots' personality and functionality. This evaluation is conducted with non-working "paper prototypes", a common design methodology which enables quick evaluation of a variety of designs. By comparing sixteen eye designs we found: (1) The more lifelike the design of the eyes was, the higher the robot was rated on personable qualities, and the more suitable it was perceived to be for the home; (2) Eye design did not affect how professional and how suitable for the office the robot was perceived to be. We suggest that designers can use paper prototypes as a design methodology to quickly evaluate variations of a particular feature for social robots.
|
|
14:00-14:15, Paper WeCT3.3 | |
A Framework for Affect-Based Natural Human-Robot Interaction |
Villani, Valeria (Univ. of Modena and Reggio Emilia), Sabattini, Lorenzo (Univ. of Modena and Reggio Emilia), Secchi, Cristian (Univ. of Modena & Reggio Emilia), Fantuzzi, Cesare (Univ. Di Modena E Reggio Emilia) |
Keywords: User-centered Design of Robots, Human Factors and Ergonomics, Monitoring of Behaviour and Internal States of Humans
Abstract: In this paper we present a general framework for affective human-robot interaction that allows users to interact intuitively with a robot and takes their mental fatigue into account, simplifying the task or providing assistance when the user feels stressed. Interaction with the robot is achieved by naturally mapping the user's forearm motion, detected with a smartwatch, into the robot's motion. High-level commands can be provided by means of gestures. An approach based on affective robotics is used to adapt the robot's level of autonomy to the cognitive workload of the user. The user's mental fatigue is detected from the analysis of heart rate, also measured by the smartwatch. The framework is general and can be applied to different robotic systems. In this paper, we consider its experimental validation on a wheeled mobile robot.
|
|
14:15-14:30, Paper WeCT3.4 | |
Project Fantom: Co-Designing a Robot for Demonstrating an Epileptic Seizure |
Zubrycki, Igor (Lodz Univ. of Tech), Szafarczyk, Izabela (Lodz Univ. of Tech), Granosik, Grzegorz (Lodz Univ. of Tech) |
Keywords: User-centered Design of Robots, Robots in Education, Therapy and Rehabilitation, Innovative Robot Designs
Abstract: In this paper, we present the methodology and results of designing a robot for demonstrating an epileptic seizure. The goal of the project was to create a prototype device to be used in a series of pilot workshops for improving teachers' reactions during an epileptic seizure and their attitudes towards students with epilepsy. Various design goals had to be accomplished to fit the needs of all of the stakeholders. We used a co-design (participatory design) approach through a series of workshops attended by members of the association for epileptic patients, students and faculty members of the biomedical engineering and robotics departments, teachers, psychologists, and medical specialists (an epileptologist and a neurologist). As a result of the co-design process, an inexpensive robot was created and used in a series of 10 pilot workshops with 217 participants, mainly teachers of primary and middle schools. During the workshops, teachers improved their understanding of epilepsy and suggested various improvements for future runs of the workshop. The co-creation strategy used during the project resulted in a prototype robot that combined the goals of the various stakeholders: an accurate presentation of an epileptic seizure, ease of use and control, and light weight, while preserving the dignity of persons with epilepsy.
|
|
14:30-14:45, Paper WeCT3.5 | |
“RoboQuin”: A Mannequin Robot with Natural Humanoid Movements |
Shidujaman, Mohammad (Tsinghua Univ), Zhang, Shenghua (Tsinghua Univ), Elder, Robert Romeo (New York Univ), Mi, Haipeng (Tsinghua Univ) |
Keywords: Innovative Robot Designs, Non-verbal Cues and Expressiveness, Robots in art and entertainment
Abstract: This paper presents the design, control and expressive capabilities of RoboQuin—a novel humanoid social robot. Although social robots have become more human-friendly, their capacity for naturalistic human behavior remains relatively limited. We implement an interdisciplinary design method to create a realistic and natural humanoid robot, with accurate body proportions and smooth motion coordination, and the ability to perform nonverbal communication. With this design, we aim to explore the impact of robot body design and dynamic robot postures on human perception towards robots.
|
|
14:45-15:00, Paper WeCT3.6 | |
Does the Appearance of a Robot Influence People's Perception of Task Criticality? |
Chanseau, Adeline (Univ. of Hertfordshire), Dautenhahn, Kerstin (Univ. of Hertfordshire), Walters, Michael Leonard (Univ. of Hertfordshire), Koay, Kheng Lee (Univ. of Hertfordshire), Lakatos, Gabriella (Univ. of Hertfordshire), Salem, Maha (Univ. of Hertfordshire) |
Keywords: Robot Companions and Social Robots
Abstract: As home robot companions become more common, it is important to understand what types of tasks are considered critical to perform correctly. This paper provides working definitions of task criticality and of physical and cognitive tasks with respect to robot task performance. Our research also suggests that although people's perceptions of task criticality are independent of a robot's appearance, their expectation that a robot performs tasks correctly is affected by its appearance.
|
|
WeDP |
Hall 3rd floor |
POSTERS_WEDNESDAY |
Poster Session |
|
16:30-18:00, Paper WeDP.1 | |
Improved Particle Swarm Optimization for Multi-Robot SLAM |
Zhao, Ye (Nanjing Tech. Univ), Wang, Ting (Nanjing Tech. Univ), Deng, Xin (Nanjing Tech. Univ), Qin, Wen (Nanjing Tech. Univ) |
Keywords: Cooperation and Collaboration in Human-Robot Teams, Motion Planning and Navigation in Human-Centered Environments, Robot Companions and Social Robots
Abstract: This paper addresses the problem of Simultaneous Localization and Mapping (SLAM) for a multi-robot system. The challenge in multi-robot SLAM is to develop an algorithm that manages the robots, making them simultaneously explore different regions and arrive at their final positions. The goal of this paper is to design a new method for improving the accuracy of localization. Firstly, by considering the need to avoid obstacles and to reach the final positions quickly, we apply an improved Particle Swarm Optimization (PSO) to the localization phase of multi-robot SLAM. Secondly, we design a new method to calculate the weights of the local optimum solution and the global optimum solution in the PSO algorithm. The proposed technique is further validated through simulations. The method can increase the explored area with low error within the same exploration time as its counterparts. Experimental results show that our new method can improve the accuracy of localization and the quality of mapping.
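The weighting of the local and global optimum solutions described above builds on the standard PSO velocity update. As an illustrative sketch only (a generic PSO on a toy localization objective, not the authors' improved variant; the function name and parameters are hypothetical):

```python
import random

def pso_localize(fitness, dim=2, n_particles=20, iters=100,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Generic PSO: each particle's velocity blends inertia, its personal
    best (local optimum solution), and the swarm best (global optimum
    solution); the paper's contribution is a new weighting of these terms."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [fitness(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            f = fitness(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

# Toy "localization" objective: squared distance to a known landmark at (1, 2).
best, err = pso_localize(lambda p: (p[0] - 1.0) ** 2 + (p[1] - 2.0) ** 2)
```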
|
|
16:30-18:00, Paper WeDP.2 | |
Predicting Response to Joint Attention Performance in Human-Human Interaction Based on Human-Robot Interaction for Young Children with Autism Spectrum Disorder |
Nie, Guangtao (Vanderbilt Univ), Zheng, Zhi (Univ. of Wisconsin Milwaukee), Johnson, Jazette (Vanderbilt Univ), Swanson, Amy (Vanderbilt Univ), Weitlauf, Amy (Vanderbilt Univ), Warren, Zachary (Vanderbilt Univ), Sarkar, Nilanjan (Vanderbilt Univ) |
Keywords: Machine Learning and Adaptation, Social Learning and Skill Acquisition Via Teaching and Imitation
Abstract: Autism Spectrum Disorders (ASD) are characterized by deficits in social communication skills, such as response to joint attention (RJA). Robotic systems have been designed and applied to help children with ASD improve their RJA skills. One of the most important goals of robot-assisted intervention is helping children generalize social interaction skills to interact with other people. Thus predicting children’s human-human interaction (HHI) performance based on their human-robot interaction (HRI) process is an important task. However, to the best of our knowledge, little research exists exploring this topic. The Early Social-Communication Scales (ESCS) test is a measurement of nonverbal social skills, including RJA, for young children. We conducted two longitudinal user studies with a robot-mediated RJA system in young children with ASD, followed by HHI sessions consisting of ESCS administration. In this paper, we present findings regarding how to predict participants’ RJA performance in HHI based on their head pose patterns in HRI, under a semi-supervised machine learning framework. As a three-class classification problem, we achieved a micro-averaged accuracy of 73.5%, which indicates the potential effectiveness of the proposed method.
|
|
16:30-18:00, Paper WeDP.3 | |
Transparent Robot Behavior by Adding Intuitive Visual and Acoustic Feedback to Motion Replanning |
Bolano, Gabriele (FZI Forschungszentrum Informatik), Roennau, Arne (FZI Forschungszentrum Informatik, Karlsruhe), Dillmann, Rüdiger (FZI - Forschungszentrum Informatik - Karlsruhe) |
Keywords: HRI and Collaboration in Manufacturing Environments, Cooperation and Collaboration in Human-Robot Teams, Multimodal Interaction and Conversational Skills
Abstract: Nowadays robots are able to work safely close to humans. They are lightweight, intrinsically safe, and capable of avoiding obstacles as well as understanding and predicting human motions. In this collaborative scenario, the communication between humans and robots is a fundamental aspect of achieving good efficiency and ergonomics in task execution. Much research has addressed robot understanding and prediction of human behavior, allowing the robot to replan its motion trajectories. This work focuses on the communication of the robot's intentions to the human, to make its goals and planned trajectories easily understandable. Visual and acoustic information has been added to give the human intuitive feedback for immediately understanding the robot's plan. This allows better interaction and makes humans feel more comfortable, without any anxiety related to the unpredictability of the robot's motion. Experiments have been conducted in a collaborative assembly scenario. The results of these tests were collected in questionnaires, in which the participants reported the differences and improvements they experienced using the feedback communication system.
|
|
16:30-18:00, Paper WeDP.4 | |
Adaptive Neural Control for Self-Organized Locomotion and Obstacle Negotiation of Quadruped Robots |
Sun, Tao (Nanjing Univ. of Aeronautics and Astronautics), Shao, Donghao (Nanjing Univ. of Aeronautics and Astronautics), Dai, Zhendong (Nanjing Univ. of Aeronautics and Astronautics), Manoonpong, Poramate (Univ. of Southern Denmark) |
Keywords: Machine Learning and Adaptation, Androids
Abstract: Many quadruped robots have been developed to imitate their biological counterparts, several of which show excellent performance. However, the biological neural control mechanisms responsible for self-organized adaptive quadruped locomotion remain elusive. By drawing lessons from biological findings and using an artificial neural approach, we simulated a mammal-like quadruped robot and used it as our simulation platform to investigate and develop neural control mechanisms. In this study, we proposed an adaptive neural control network that can autonomously generate self-organized emergent locomotion with adaptability for the robot. The control network consists of three main components: Decoupled neural central pattern generator circuits (one for each leg), sensory feedback adaptation with dual-rate learning, and multiple neural reflex mechanisms. Simulation results show that the robot can perform quadruped-like gaits in a self-organized manner and adapt its gait to negotiate an obstacle. In addition, this work also suggests that the tight combination of the body-environment interaction and adaptive neural control, guided by sensory feedback adaptation and neural reflexes, is a powerful approach to better understand and solve self-organized adaptive coordination problems in quadruped locomotion.
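A decoupled neural CPG circuit of the kind mentioned above is often built from a small recurrent network whose weight matrix resembles a rotation. A minimal sketch, assuming a two-neuron SO(2)-style oscillator (an illustration of the general idea, not the authors' controller; all parameters are assumptions):

```python
import math

def so2_cpg(phi=0.1 * math.pi, alpha=1.1, steps=300):
    """Two-neuron recurrent oscillator: the weight matrix is alpha times a
    rotation by phi, so with alpha slightly above 1 the tanh saturation
    yields a stable limit cycle whose frequency is set by phi."""
    w11, w12 = alpha * math.cos(phi), alpha * math.sin(phi)
    w21, w22 = -alpha * math.sin(phi), alpha * math.cos(phi)
    o1, o2 = 0.2, 0.0  # small initial kick starts the oscillation
    trace = []
    for _ in range(steps):
        # simultaneous update of both neuron outputs
        o1, o2 = (math.tanh(w11 * o1 + w12 * o2),
                  math.tanh(w21 * o1 + w22 * o2))
        trace.append((o1, o2))
    return trace

# The two outputs oscillate with a fixed phase lag and could drive,
# for example, the hip and knee joints of one leg.
signal = [o1 for o1, _ in so2_cpg()]
```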
|
|
16:30-18:00, Paper WeDP.5 | |
Smooth and Efficient Policy Exploration for Robot Trajectory Learning |
Li, Shidi (National Univ. of Singapore), Chew, Chee Meng (National Univ. of Singapore), Subramaniam, Velusamy (National Univ. of Singapore) |
Keywords: Machine Learning and Adaptation, Programming by Demonstration, Interaction Kinesics
Abstract: Many policy search algorithms have been proposed for robot learning and have proved practical in real robot applications. However, these algorithms still contain hyperparameters, such as the exploration rate, which require manual tuning. Existing methods for setting the exploration rate, whether manual or automatic, may not be general enough or may be hard to apply on a real robot. In this paper, we propose a learning model that updates the exploration rate adaptively. The overall algorithm is a combination of methods proposed by other researchers. The algorithm produces smooth trajectories for the robot, and the updated exploration rate maximizes the lower bound of the expected return. Our method is tested on the ball-in-cup problem. The results show that our method achieves the same learning outcome as previous methods but with fewer iterations.
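The idea of adapting the exploration rate from rollout data can be illustrated with a reward-weighted policy search on a toy one-parameter task, where the exploration variance is re-estimated each iteration from the weighted spread of sampled parameters (a sketch under assumed update rules, not the paper's algorithm):

```python
import math
import random

def adaptive_policy_search(reward, theta=0.0, sigma=1.0,
                           iters=60, pop=30, seed=1):
    """Reward-weighted policy search that also adapts the exploration rate:
    sigma is re-estimated from the reward-weighted spread of the sampled
    parameters, so exploration shrinks as learning converges."""
    rng = random.Random(seed)
    for _ in range(iters):
        samples = [theta + sigma * rng.gauss(0.0, 1.0) for _ in range(pop)]
        rewards = [reward(s) for s in samples]
        m = max(rewards)                      # subtract max for numerical safety
        w = [math.exp(r - m) for r in rewards]
        z = sum(w)
        theta = sum(wi * si for wi, si in zip(w, samples)) / z
        var = sum(wi * (si - theta) ** 2 for wi, si in zip(w, samples)) / z
        sigma = max(math.sqrt(var), 1e-3)     # floor keeps some exploration
    return theta, sigma

# Toy one-parameter task: reward peaks at theta = 2.
theta, sigma = adaptive_policy_search(lambda s: -(s - 2.0) ** 2)
```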
|
|
16:30-18:00, Paper WeDP.6 | |
Investigating Deep Learning Approaches for Human-Robot Proxemics |
Gao, Yuan (Uppsala Univ), Wallkötter, Sebastian (Uppsala Univ), Obaid, Mohammad (Uppsala Univ), Castellano, Ginevra (Uppsala Univ) |
Keywords: Machine Learning and Adaptation, Multi-modal Situation Awareness and Spatial Cognition, Social Learning and Skill Acquisition Via Teaching and Imitation
Abstract: In this paper, we investigate the applicability of deep learning methods to adapt and predict comfortable human-robot proxemics. Proposing a network architecture, we experiment with three different layer configurations, obtaining three different end-to-end trainable models. Using these, we compare their predictive performances on data obtained during a human-robot interaction study. We find that our long short-term memory based model outperforms a gated recurrent unit based model and a feed-forward model. Further, we demonstrate how the created model can be exploited to create customized comfort zones that can help create a personalized experience for individual users.
|
|
16:30-18:00, Paper WeDP.7 | |
Physical Human-Robot Interaction through a Jointly-Held Object Based on Kinesthetic Perception |
Jaberzadeh Ansari, Ramin (Chalmers Univ. of Tech), Karayiannidis, Yiannis (Chalmers Univ. of Tech. & KTH Royal Insitute of Tech), Sjöberg, Jonas (Chalmers Univ. of Tech) |
Keywords: HRI and Collaboration in Manufacturing Environments, Cooperation and Collaboration in Human-Robot Teams, Detecting and Understanding Human Activity
Abstract: This paper deals with the problem of human-robot cooperative object manipulation for cases where the grasp position of the operator can change during task execution, similar to human-human collaborative scenarios. State-of-the-art algorithms for cooperative object handling assume a constant grasping position for the operator. In order to accommodate changes of the human grasping point in the control design, we do not depend on sensors on the operator's hand or on the object; instead, we employ estimates obtained through a recursive least-squares estimator. The estimation algorithm uses only the wrenches measured by a force/torque sensor located at the end-effector of the manipulator. We also propose a switching strategy for a damping controller based on the online estimates. Simulation results are provided to demonstrate the proposed method.
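A recursive least-squares estimator of this kind can be sketched for a planar case, where the grasp offset r = (rx, ry) enters the torque measurement linearly as tau = rx*fy - ry*fx (an illustrative toy model, not the paper's implementation; the noise levels and forgetting factor are assumptions):

```python
import numpy as np

def rls_grasp_estimate(forces, torques, lam=0.98):
    """Recursive least squares for a planar grasp offset r = (rx, ry),
    using the linear model tau = rx*fy - ry*fx. The forgetting factor
    lam < 1 lets the estimate track a grasp point that moves mid-task."""
    theta = np.zeros(2)            # running estimate of (rx, ry)
    P = np.eye(2) * 100.0          # large initial covariance (uninformed prior)
    for (fx, fy), tau in zip(forces, torques):
        h = np.array([fy, -fx])    # regressor, so that tau = h @ theta
        k = P @ h / (lam + h @ P @ h)
        theta = theta + k * (tau - h @ theta)
        P = (P - np.outer(k, h) @ P) / lam
    return theta

# Simulated noisy wrenches for a true grasp offset r = (0.3, -0.1).
rng = np.random.default_rng(0)
f = rng.normal(0.0, 5.0, size=(200, 2))
tau = 0.3 * f[:, 1] + 0.1 * f[:, 0] + rng.normal(0.0, 0.01, 200)
r_hat = rls_grasp_estimate(f, tau)
```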
|
|
16:30-18:00, Paper WeDP.8 | |
Improving Adaptive Human-Robot Cooperation through Work Agreements |
Mioch, Tina (TNO), Peeters, Marieke M. M. (TNO), Neerincx, Mark (TNO) |
Keywords: Cooperation and Collaboration in Human-Robot Teams, Degrees of Autonomy and Teleoperation, User-centered Design of Robots
Abstract: Human-robot teams for disaster response need to dynamically adapt their task allocation and coordination to the momentary context, based on adequate trust stances and taking account of the relevant values and norms (such as safety, health, and privacy). This paper presents a Work Agreement framework that supports this capability. The research question is: Which minimal set of concepts, relations, and associated formalization can be used to model Work Agreements with adequate expressiveness and flexibility? An ontology has been developed that defines core concepts and their relations: creditor, debtor, antecedent, consequent, lifespan, acceptance. These concepts encompass the knowledge to specify, activate, monitor, and reason about work agreements. The framework was implemented and tested as part of the TRADR project. The TRADR system brought forward the desired adaptive team behavior of the robot concerned. The tests led to further refinements of the work agreements framework.
|
|
16:30-18:00, Paper WeDP.9 | |
Human Inspired Effort Distribution During Collision Avoidance in Human-Robot Motion |
da Silva Filho, José Grimaldo (Univ. Grenoble Alpes - INRIA), Olivier, Anne-Hélène (Univ. Rennes, M2S Lab, Inria, MimeTIC), Crétual, Armel (M2S Lab, Univ. Rennes 2), Pettre, Julien (Inria - Irisa), Fraichard, Thierry (INRIA) |
Keywords: Detecting and Understanding Human Activity, Cooperation and Collaboration in Human-Robot Teams, Monitoring of Behaviour and Internal States of Humans
Abstract: Recent works in the area of human-robot motion showed that behaving in a human-like manner allows a robot to reduce the global cognitive effort for people in the environment. Given that collision avoidance situations between people are solved cooperatively, this work models the manner in which this cooperation is done so that a robot can replicate the behavior. To that end, hundreds of situations where two walkers have crossing trajectories were analyzed. Based on these human trajectories involving a collision avoidance task, we determined how total effort is shared between each walker depending on several factors of the interaction, such as crossing angle, time to collision, and speed. To validate our approach, a proof of concept is integrated into ROS with Reciprocal Velocity Obstacles (RVO) in order to distribute collision avoidance effort in a human-like way.
|
|
16:30-18:00, Paper WeDP.10 | |
Use of Autobiographical Memory for Enhancing Adaptive HRI in Multi-User Domestic Environment |
Edirisinghe, Sachi Natasha (Univ. of Moratuwa), Jayasekara, A.G.B.P. (Univ. of Moratuwa) |
Keywords: Machine Learning and Adaptation, Robot Companions and Social Robots
Abstract: The use of social robots in the domestic environment has increased during the past few decades. These robots are intended to maintain long-term interactions with humans while engaging in a variety of tasks, including daily activities, entertainment, and assisting elderly or disabled people. The ability to learn users' preferences and adapt interaction accordingly is a must for such robots. As the domestic environment consists of non-experts, social robots must possess natural and human-friendly interaction capabilities. This paper presents an Autobiographical Memory (AM) based intelligent system that can learn user preferences through natural interactions and provide user-adaptive services for each user in a multi-user domestic environment. The system is capable of learning a user's preferences both from his/her own statements and from another person's statements. Furthermore, the system can easily adapt to a user's hidden preferences and changes in preferences. The robot's memory has been structured so that it can easily remember user groups and the relationships between users. This enables the robot to learn preferences that are common to a group of users. The system has been tested and validated using a snack and beverage suggestion scenario.
|
|
16:30-18:00, Paper WeDP.11 | |
Deep Reinforcement Learning for Formation Control |
Aykın, Can (Tech. Univ. München), Knopp, Martin (Tech. Univ. München), Diepold, Klaus (Tech. Univ. München) |
Keywords: Machine Learning and Adaptation, Computational Architectures
Abstract: Continuing our work on using reinforcement learning for formation control, we present an end-to-end deep learning system that uses only camera images to learn to control the individual system's correct position within the formation. Mnih et al. created AIs that play video games using the same visual input as a human player, employing convolutional neural networks for automatic feature extraction on images. This published work inspired us to employ a similar approach for processing the camera images and controlling the robot. We repeat the same experiment with two completely different camera positions. The results for both positions are very similar and thus demonstrate the flexibility of the presented approach.
|
|
16:30-18:00, Paper WeDP.12 | |
DeFatigue: Online Non-Intrusive Fatigue Detection by a Robot Co-Worker |
Pramanick, Pradip (TCS Res. and Innovation), Sarkar, Chayan (TCS Res. and Innovation) |
Keywords: Cooperation and Collaboration in Human-Robot Teams, Detecting and Understanding Human Activity, HRI and Collaboration in Manufacturing Environments
Abstract: A robot as a companion or co-worker is no longer an emerging concept, but a reality. However, one of the major barriers to this realization is seamless interaction with robots, which includes both explicit and implicit interaction. In this work, we assume a use case where a human and a robot together carry a heavy object in a co-habitat (home or workplace/factory). Two human beings doing such work understand each other without explicit (vocal) interaction. To realize such behavior, the robot must understand the fatigue state of its human co-worker to enable a seamless work experience and ensure safety. In this article, we present DeFatigue, a non-intrusive fatigue state detection mechanism. We assume that the robot's hand is equipped with a force sensor. Based on the change of force from the human side while carrying the object, DeFatigue is able to determine the fatigue state without instrumenting the human being with an additional sensor (internal or external). Moreover, it detects the fatigue state on the fly (online) and does not require any (user-specific) training. Based on our experiments with 18 test subjects, fatigue state detection by DeFatigue overlaps with the ground truth in 85.18% of the cases, and deviates by 4.09 s on average in the remaining cases.
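A training-free, online detector in the spirit of DeFatigue could monitor the human-side force for a sustained drop relative to an initial baseline. The following is a deliberately simplified sketch (the threshold, window, and decay model are all hypothetical, not taken from the paper):

```python
def detect_fatigue(force_stream, window=10, drop=0.7):
    """Flag fatigue when the short-term mean of the human-side force falls
    below `drop` times a baseline taken over the first `window` samples.
    Online and training-free, but purely illustrative."""
    buf, baseline = [], None
    for t, f in enumerate(force_stream):
        buf.append(f)
        if len(buf) > window:
            buf.pop(0)
        mean = sum(buf) / len(buf)
        if t + 1 == window:
            baseline = mean                    # calibration finished
        elif baseline is not None and mean < drop * baseline:
            return t                           # first sample flagged as fatigued
    return None

# Simulated human-side force: steady ~20 N, then an exponential fade-out
# as the human's contribution drops.
stream = [20.0] * 30 + [20.0 * 0.95 ** k for k in range(40)]
onset = detect_fatigue(stream)
```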
|
|
16:30-18:00, Paper WeDP.14 | |
Predicting Arm Movements: A Multivariate LSTM Based Approach for Human-Robot High Five Games |
Chellali, Ryad (Nanjing Forestry Univ), Li, Zhichao (Kita Tech) |
Keywords: Detecting and Understanding Human Activity, Computational Architectures, Programming by Demonstration
Abstract: Predicting arm movements is a key issue in physical human-robot interaction. It allows robots to prepare for action and meet human requirements and needs on time. Unlike human action recognition, the prediction of human movements relies on few samples, namely the first ones. In this paper, we explore the use of LSTM (Long Short-Term Memory) networks to derive the final position and timing of the human hand when performing a high-five game with a robot. In this context, the synchrony of human and robot movements must be achieved at an early stage of the human movement to meet the constraints of both real-time robot control and the realism of the robot's movement. The results we obtained are very encouraging and open new questions as well. Our solution predicts an acceptable final position and contact time regardless of people's morphology and positioning.
|
|
16:30-18:00, Paper WeDP.15 | |
Automatic Generation of Head Nods Using Utterance Texts |
Ishii, Ryo (NTT), Katayama, Taichi (NTT Media Intelligence Lab. NTT Corp), Higashinaka, ryuichiro (NTT), Tomita, Junji (NTT Media Intelligence Lab. NTT Corp) |
Keywords: Non-verbal Cues and Expressiveness, Multimodal Interaction and Conversational Skills, Linguistic Communication and Dialogue
Abstract: We propose a model that generates head nods accompanying an utterance from natural language. To the best of our knowledge, previous models generated simple nods from the final words at the end of an utterance, i.e., using a bag of words. We instead analyze utterance text using various types of linguistic information, such as dialog act, part of speech, a large-scale Japanese thesaurus, and word position in a sentence. We also generate detailed parameters of a speaker's nodding presence, frequency, and depth, which is the first attempt to do so. First, we compiled a Japanese corpus of 24 dialogues including utterance and nod information. Next, using the corpus, we constructed a generation model that estimates nodding presence, frequency, and depth during a phrase by using these various types of linguistic information as well as a bag of words. The results indicate that our model outperformed simple automatic nod-generation models using only a bag of words, as well as chance level. The results also indicate that dialog act, part of speech, the large-scale Japanese thesaurus, and word position are useful for generating nods. We also evaluated, through subjective evaluation, whether our nod-generation model is useful with conversational agents. The results show that the nodding generated with our model improves user impressions of the naturalness, humanness, likability, and reliability of a conversational agent.
|
|
16:30-18:00, Paper WeDP.16 | |
Developing a Deep Learning Agent for HRI: Dataset Collection and Training |
Romeo, Marta (Univ. of Plymouth), Jones, Ray (Mr), Cangelosi, Angelo (Univ. of Plymouth) |
Keywords: Social Intelligence for Robots, Robot Companions and Social Robots, Curiosity, Intentionality and Initiative in Interaction
Abstract: The world population is ageing at a dramatic rate, raising new challenges for social and health care systems. Sometimes, assistance can simply derive from a social interaction between a robotic platform and human users. In these cases, robots cannot rely on human operators. Therefore, they need to gain social intelligence in a fully autonomous way. The focus of this paper is on the initial steps needed to implement a completely autonomous robotic agent able to adapt itself to its users. For this reason, an interactive data collection was carried out to gather a dataset from which the robot could learn how to respond to its users in different situations. From these data, a first evaluation of the performance of the deep learning agent, embodied in the robot, has been completed. The agent was able to generalize to new sets of test data. The study explored how, using modern machine learning algorithms, a robot could learn to understand whether, and how, to interact with one or more people gathered in a room. This was done by training a robot to read the users' level of engagement at the initiation of the interaction.
|
|
16:30-18:00, Paper WeDP.17 | |
Interactive Reinforcement Learning from Demonstration and Human Evaluative Feedback |
Li, Guangliang (Ocean Univ. of China), Gomez, Randy (Honda Res. Inst. Japan Co., Ltd), Nakamura, Keisuke (Honda Res. Inst. Japan Co., Ltd), He, Bo (China Ocean Univ) |
Keywords: Social Learning and Skill Acquisition Via Teaching and Imitation, Machine Learning and Adaptation
Abstract: Programming robots to perform tasks is difficult in the real world because of its richness and uncertainty. For robots and agents to be more useful, they must be able to learn quickly from ordinary people via natural interactions. In this paper, we investigate how an agent can learn from demonstration and from positive and negative evaluative feedback provided by a human teacher. Specifically, we propose a model-based method---IRL-TAMER---combining learning from demonstration via inverse reinforcement learning (IRL) with learning from human reward via the TAMER framework. We tested our method in the Grid World domain and compared it with the TAMER framework using different discount factors on human reward. Our results suggest that although an agent learning via IRL can learn a useful value function indicating which states are good based on the demonstration, it cannot obtain an effective policy for navigating to the goal state from one demonstration. However, learning from demonstration can reduce the amount of human reward needed to obtain an optimal policy, especially the amount of negative feedback. That is to say, learning from demonstration can provide a jump-start for the agent's learning from human reward and reduce the number of mistakes---incorrect actions. Furthermore, our results show that learning from demonstration is only useful for the agent's learning from human reward when the discount factor is small, i.e., when learning from myopic human reward.
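The TAMER component of such a method, fitting a human-reward model H(s, a) to evaluative feedback and acting greedily on it, can be sketched on a toy 1-D chain with a simulated trainer (an illustration of the general TAMER update, not the authors' IRL-TAMER implementation; all names and parameters are hypothetical):

```python
import random

def tamer_chain(n_states=6, episodes=30, alpha=0.5, seed=2):
    """Tabular TAMER-style learning on a 1-D chain: the agent fits a
    human-reward model H(s, a) to evaluative feedback and acts greedily
    on H. Here the "trainer" is simulated: +1 for moving toward the
    goal, -1 otherwise."""
    rng = random.Random(seed)
    goal = n_states - 1
    H = {(s, a): 0.0 for s in range(n_states) for a in (-1, 1)}
    for _ in range(episodes):
        s = 0
        for _ in range(3 * n_states):
            # greedy on the learned human-reward model, random tie-break
            a = max((-1, 1), key=lambda x: (H[(s, x)], rng.random()))
            feedback = 1.0 if a == 1 else -1.0
            H[(s, a)] += alpha * (feedback - H[(s, a)])   # TAMER-style update
            s = min(max(s + a, 0), goal)
            if s == goal:
                break
    return {s: max((-1, 1), key=lambda a: H[(s, a)]) for s in range(n_states)}

policy = tamer_chain()  # greedy policy induced by the learned H
```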
|
|
16:30-18:00, Paper WeDP.18 | |
Subjective Experience of Interacting with a Social Robot at a Danish Airport |
Nielsen, Sara (Engineering Psychology Program, Aalborg Univ), Bonnerup, Emil (Engineering Psychology Program, Aalborg Univ), Hansen, Andreas Kornmaaler (Engineering Psychology Program, Aalborg Univ), Nilsson, Juliane (Engineering Psychology Program, Aalborg Univ), Nellemann, Lucca Julie (Engineering Psychology Program, Aalborg Univ), Hansen, Karl Damkjær (Aalborg Univ), Hammershøi, Dorte (Aalborg Univ) |
Keywords: User-centered Design of Robots, Evaluation Methods and New Methodologies, Creating Human-Robot Relationships
Abstract: This study investigates the subjective experience of interacting with a social robot at Aalborg Airport (AAL) through a field study in which 23 attributes of Human-Robot Interaction (HRI) were elicited. During two tests, Danish travellers were recruited by a remote-controlled Double robot, which offered four wayfinding options specific to AAL. In the first test, 30 subjects participated in a semi-structured interview about their experience. Observations and the subjects' statements were interpreted and coded using an affinity diagram, which yielded 10 superordinate categories from which the 23 attributes were elicited and developed into rating scales. The scales were used in the second test at AAL, where 43 subjects rated the HRI. The ratings were analysed with Principal Component Analysis (PCA). The developed scales presented in this paper might be used by robot designers for specific contexts and potentially for tailored user experience evaluations.
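The analysis step named in the abstract, PCA over subjects' ratings on the elicited attribute scales, can be sketched briefly. The matrix below is random stand-in data, not the study's airport ratings; the subject and attribute counts are taken from the abstract, everything else is an assumption.

```python
# Minimal PCA sketch via SVD: 43 subjects rating 23 HRI attributes
# (hypothetical 1-7 ratings, not the study's data).
import numpy as np

rng = np.random.default_rng(0)
ratings = rng.integers(1, 8, size=(43, 23)).astype(float)

X = ratings - ratings.mean(axis=0)       # center each attribute scale
U, sv, Vt = np.linalg.svd(X, full_matrices=False)
explained = sv**2 / np.sum(sv**2)        # variance ratio per component
scores = X @ Vt.T                        # subject scores on the components

print(explained[:3])  # leading components capture the most shared variance
```

In a study like this one, the loadings in `Vt` would show which attribute scales vary together across subjects, and the leading components would summarize the main dimensions of the rated experience.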
|
|
16:30-18:00, Paper WeDP.19 | |
Improving Teacher–Student Communication During Lectures Using a Robot and an Online Messaging/Voting System |
Palinko, Oskar (Osaka Univ), Shimaya, Jiro (Osaka Univ), Jinnai, Nobuhiro (Osaka Univ), Ogawa, Kohei (Osaka Univ), Yoshikawa, Yuichiro (Osaka Univ), Ishiguro, Hiroshi (Osaka Univ) |
Keywords: Robots in Education, Therapy and Rehabilitation, Applications of Social Robots, Social Intelligence for Robots
Abstract: In recent decades, classroom teaching has been influenced by the appearance of digital devices and the pervasiveness of the internet, and in recent years robots have started appearing in learning environments. Even though classroom presentations have improved considerably, interaction between students and teachers has not improved greatly. We propose a new way of using a desktop robot and an online voting/messaging system to make the classroom experience more interactive. Concretely, in this paper we report on a lecture that was assisted by a robot. The students posted questions and opinions on an online messaging platform, and the messages were then sent to the robot to be posed to the teacher, either directly or through a voting procedure. Students found this new mode of interaction efficient and useful for improving their communication with the teacher. At the same time, the number of interactions should be controlled so that it does not interfere with the teaching process.
|
|
16:30-18:00, Paper WeDP.20 | |
Deciding Shapes and Motions of a Robot Based on Personal Preferences |
Yamamoto, Natsumi (Tokyo Univ. of Agriculture and Tech), Mizuuchi, Ikuo (Tokyo Univ. of Agriculture and Tech) |
Keywords: Robot Companions and Social Robots, Creating Human-Robot Relationships, Machine Learning and Adaptation
Abstract: User preferences for robots differ according to the robot shape and how it moves. This paper proposes a method for generating motions that will be pleasing to a user of a modular robot that can be configured arbitrarily. We propose a preference estimator that uses supervised learning to estimate how well users will like a combination of a motion pattern and robot shape. We also propose a motion selector and a motion generator that select and generate motions in consideration of the balance of the robot’s shape and its motion. These methods were tested with a prototype modular robot system and a group of human test subjects who configured the robot as they wished. The proposed motion selector and generator, trained on a preprogrammed set of motion patterns, then presented a ranked set of motions to the subjects. The motion selector proved effective but the motion generator did not. Overall, the proposed system was received favorably by the subjects.
|
|
16:30-18:00, Paper WeDP.21 | |
Evaluating Child Patrons' Performance and Perception of Robotic Assistance in Library Book Locating |
Lin, Weijane (National Taiwan Univ), Yueh, Hsiu-Ping (National Taiwan Univ) |
Keywords: Robots in Education, Therapy and Rehabilitation, Robot Companions and Social Robots, Innovative Robot Designs
Abstract: This study explored the possibility of a service robot assisting children's book-locating activities in libraries. A comprehensive review of children's experiences of locating library resources was conducted to inform the design of the library robot and the corresponding evaluation criteria. With a working prototype of the library robot, this study examined effective and appropriate interface and information design for assisting children in book-locating activities by empirically collecting their performance and perceptions of robotic assistance in a genuine library setting. The results suggest that the robot's guidance effectively led children to find the assigned book in the library, and the robot received high appraisal for its efficiency in guiding children to the book. Child participants regarded the library robot as a favorable and friendly agent providing an interesting navigation experience in the library.
|
|
16:30-18:00, Paper WeDP.22 | |
Design Exploration on Home Robots to Support Single Women at Home in China |
Gao, Gege (Indiana Univ. Bloomington), Zhang, Yuxuan (Indiana Univ. Bloomington), Bu, Yi (Indiana Univ. Bloomington), Shih, Patrick C. (Indiana Univ. Bloomington) |
Keywords: Innovative Robot Designs, User-centered Design of Robots
Abstract: The widespread adoption of home robots shows a high demand for in-home assistance. Since single women account for a large proportion of the Chinese population, it is important to design home robots to support their lives at home. This study aims to explore the possible design features of home robots to support single women in China. Interviews and an online survey were used to gauge user perception and expectations of home robots. Our research reveals the unique lifestyle preferences of single women in China and how home robots could be designed to support their needs. We discuss our findings and design implications based on three aspects: lifestyle, intelligence, and sense, to inspire better robot design for women.
|
|
16:30-18:00, Paper WeDP.23 | |
Talk to Me: The Role of Human-Robot Interaction in Improving Verbal Communication Skills in Students with Autism or Intellectual Disability |
Silvera-Tawil, David (CSIRO), Roberts-Yates, Christine (Murray Bridge High School), Bradford, DanaKai (CSIRO) |
Keywords: Robots in Education, Therapy and Rehabilitation, Applications of Social Robots, Assistive Robotics
Abstract: Autism is a developmental condition that can cause significant social, communication, and behavioral challenges. Children on the autism spectrum may have difficulties developing verbal communication skills, understanding what others say, or communicating through non-verbal cues. Similar difficulties are experienced by children with developmental delay. A recent trend in robotics is the design and implementation of robots to assist in the therapy and education of children with learning difficulties. Although encouraging results suggest that robots can be beneficial, there has been limited work on the long-term impact of these tools on the verbal communication skills of children with autism or developmental delay. This paper explores the impact of robots on the verbal communication skills of secondary-school-aged students with moderate to severe intellectual disabilities and autism. A qualitative study was carried out, via focus groups and interviews with parents, carers and staff members, 24 months after the introduction of two humanoid robots into the disability unit of a public secondary school. Results show that humanoid robots can provide benefits in articulation, verbal participation and spontaneous conversation in these young adults. Three exemplars are presented.
|
| |