IROS 2015 Paper Abstract


Paper ThAT12.4

Quack, Benjamin (Drittes Physikalisches Institut / BCCN), Wörgötter, Florentin (University of Göttingen), Agostini, Alejandro (University of Göttingen)

Simultaneously Learning at Different Levels of Abstraction

Scheduled for presentation during the Regular session "Robot Learning 1" (ThAT12), Thursday, October 1, 2015, 09:15–09:30, Saal C3

2015 IEEE/RSJ International Conference on Intelligent Robots and Systems, Sept 28 - Oct 03, 2015, Congress Center Hamburg, Hamburg, Germany


Keywords: Robot Learning, Service Robots, Integrated Planning and Control

Abstract

Robotic applications in human environments are usually implemented using a cognitive architecture that integrates techniques at different levels of abstraction, ranging from artificial intelligence techniques for making decisions at a symbolic level to robotic techniques for grounding symbolic actions. In this work we address the problem of simultaneous learning at different levels of abstraction in such an architecture. This problem is important since human environments are highly variable, and many unexpected situations may arise during the execution of a task. The usual approach in these circumstances is to train each level individually to learn how to deal with the new situations. However, this approach is limited since it implies long task interruptions every time a new situation needs to be learned. We propose an architecture where learning takes place simultaneously at all levels of abstraction. To achieve this, we devise a method that permits higher levels to guide the learning at the levels below for the correct execution of the task. The architecture is instantiated with a logic-based planner and an online planning operator learner at the highest level, and with online reinforcement learning units that learn action policies for grounding the symbolic actions at the lowest level. A human teacher is involved in the decision-making loop to facilitate learning. The framework is tested in a physically realistic simulation of the Sokoban game.
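
The architecture described in the abstract can be read as two coupled learners that are both updated online after every executed action: a symbolic level that selects operators, and one reinforcement learning unit per operator that learns how to ground it. The following is a minimal Python sketch of that idea, not the authors' implementation; the GroundingUnit and SymbolicLevel classes, the toy execute() stub, the operator and motion names, and all update rules are illustrative assumptions.

# Minimal sketch (not the paper's code) of simultaneous learning at two
# levels of abstraction: a symbolic level that tracks which operator to
# apply, and a tabular Q-learning unit per symbolic action that learns
# how to ground it. Environment, names, and update rules are assumptions.

import random
from collections import defaultdict

class GroundingUnit:
    """Online Q-learning policy that grounds one symbolic action."""
    def __init__(self, motions, alpha=0.1, gamma=0.95, eps=0.2):
        self.q = defaultdict(float)   # (state, motion) -> estimated value
        self.motions, self.alpha, self.gamma, self.eps = motions, alpha, gamma, eps

    def act(self, state):
        # Epsilon-greedy selection of a low-level motion.
        if random.random() < self.eps:
            return random.choice(self.motions)
        return max(self.motions, key=lambda m: self.q[(state, m)])

    def update(self, s, m, r, s2):
        # Standard one-step Q-learning update.
        best = max(self.q[(s2, m2)] for m2 in self.motions)
        self.q[(s, m)] += self.alpha * (r + self.gamma * best - self.q[(s, m)])

class SymbolicLevel:
    """Crude placeholder for the planner plus planning-operator learner:
    keeps a running success estimate per operator."""
    def __init__(self, operators):
        self.success = {op: 0.5 for op in operators}

    def plan(self):
        # Prefer the operator currently believed most likely to succeed.
        return max(self.success, key=self.success.get)

    def update(self, op, succeeded):
        self.success[op] = 0.9 * self.success[op] + 0.1 * float(succeeded)

def execute(op, motion, state):
    """Toy environment stub; a real system would use the physical simulation."""
    reward = 1.0 if random.random() < 0.3 else -0.1
    return reward, (state + 1) % 5, reward > 0

# Simultaneous learning loop: every executed action updates both levels online.
operators = ["push_box", "move_to_box"]
symbolic = SymbolicLevel(operators)
units = {op: GroundingUnit(motions=["up", "down", "left", "right"]) for op in operators}

state = 0
for step in range(1000):
    op = symbolic.plan()                                  # higher level picks a symbolic action
    motion = units[op].act(state)                         # lower level grounds it
    reward, next_state, succeeded = execute(op, motion, state)
    units[op].update(state, motion, reward, next_state)   # low-level learning
    symbolic.update(op, succeeded)                        # high-level learning
    state = next_state

In the paper the high-level learner is a logic-based planning operator learner and a human teacher can intervene in the decision-making loop; the success-estimate placeholder above only illustrates that both levels learn from the same stream of executed actions without interrupting the task.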


Technical Content © IEEE Robotics & Automation Society

