Martínez, David (CSIC-UPC), Alenyà, Guillem (CSIC-UPC), Torras, Carme (CSIC-UPC)
Safe Robot Execution in Model-Based Reinforcement Learning
Scheduled for presentation during the Regular session "Robot Reinforcement Learning" (ThFT12), Thursday, October 1, 2015,
17:35−17:50, Saal C3
2015 IEEE/RSJ International Conference on Intelligent Robots and Systems, Sept 28 - Oct 03, 2015, Congress Center Hamburg, Hamburg, Germany
Keywords: AI Reasoning Methods, Robot Reinforcement Learning, Robot Learning
Task learning in robotics requires repeatedly executing the same actions in different states to learn the model of a task. However, in real-world domains there are often sequences of actions that, if executed, produce unrecoverable errors (e.g., breaking an object). A robot should avoid repeating such errors while learning, and thus explore the state space more intelligently. This requires identifying dangerous action effects so that such actions are excluded from generated plans, while at the same time ensuring that the learned models are complete enough for the planner not to fall into dead-ends.
We therefore propose a new learning method that allows a robot to reason about dead-ends and their causes. Some of these causes are dangerous action effects (i.e., effects that lead to unrecoverable errors if the action is executed in the given state), so the method lets the robot skip the exploration of risky actions and guarantees the safety of planned actions. If a plan might lead to a dead-end (e.g., it includes a dangerous action effect), the robot tries to find an alternative safe plan; if none is found, it actively asks a teacher whether the risky action should be executed.
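The safe-plan selection with a teacher fallback described above can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: the action names, the `choose_plan` function, and the representation of dangerous effects as a simple set are all hypothetical assumptions for the example.

```python
# Hypothetical sketch: prefer plans free of known dangerous action effects;
# if no safe alternative exists, defer the decision to a teacher.
# Action names and data structures are illustrative, not from the paper.

DANGEROUS = {"force_open", "pour_hot_liquid"}  # actions with known dangerous effects

def is_safe(plan, dangerous=DANGEROUS):
    """A plan is considered safe if none of its actions has a known dangerous effect."""
    return all(action not in dangerous for action in plan)

def choose_plan(candidate_plans, dangerous=DANGEROUS):
    """Return ("execute", plan) for the first safe plan; otherwise ask the teacher
    about the first risky candidate, or report that no plan exists."""
    for plan in candidate_plans:
        if is_safe(plan, dangerous):
            return ("execute", plan)
    if candidate_plans:
        return ("ask_teacher", candidate_plans[0])
    return ("no_plan", None)
```

For example, given candidates `[["grasp", "force_open"], ["grasp", "push_lid", "lift"]]`, the sketch skips the risky first plan and executes the safe second one; with only risky candidates it falls back to querying the teacher.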
This method permits learning safe policies while minimizing unrecoverable errors during the learning process. Experimental validation of the approach is provided in two scenarios: a robotic task and a simulated problem from the International Planning Competition. Our approach greatly increases success ratios in problems where previous approaches had a high probability of failing.