Volume 27 Issue 2 - Publication Date: 1 February 2008
 
Learning to Control in Operational Space
 
Jan Peters
Max Planck Institute for Biological Cybernetics, Spemannstrasse 38, 72076 Tübingen, Germany, and University of Southern California, 3641 Watt Way, Los Angeles, CA 90089, USA

Stefan Schaal
University of Southern California, 3641 Watt Way, Los Angeles, CA 90089, USA, and ATR Computational Neuroscience Laboratory, 2-2-2 Hikaridai, Seika-cho, Soraku-gun, Kyoto 619-0288, Japan
 
One of the most general frameworks for phrasing control problems for complex, redundant robots is operational-space control. However, while this framework is of essential importance for robotics and well understood from an analytical point of view, it can be prohibitively hard to achieve accurate control in the face of modeling errors, which are inevitable in complex robots (e.g., humanoid robots). In this paper, we suggest a learning approach that treats operational-space control as a direct inverse model learning problem. A first important insight of this paper is that a physically correct solution to the inverse problem with redundant degrees of freedom does exist when learning of the inverse map is performed in a suitable piecewise linear way. The second crucial component of our work is the insight that many operational-space controllers can be understood in terms of a constrained optimal control problem. The cost function associated with this optimal control problem allows us to formulate a learning algorithm that automatically synthesizes a globally consistent desired resolution of redundancy while learning the operational-space controller. From the machine learning point of view, this learning problem corresponds to a reinforcement learning problem that maximizes an immediate reward, which we solve with an expectation-maximization policy search algorithm. Evaluations on a three-degrees-of-freedom robot arm illustrate the suggested approach, and an application to a physically realistic simulator of the anthropomorphic SARCOS Master arm demonstrates feasibility for complex, high-degree-of-freedom robots. We also show that the proposed method works in the setting of learning resolved motion rate control on a real, physical Mitsubishi PA-10 medical robotics arm.
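An expectation-maximization policy search with an immediate reward can be instantiated as reward-weighted regression: each observed command is weighted by an exponential of its negative immediate cost, and the (locally linear) policy is refit by weighted least squares. The following is a minimal illustrative sketch of that idea, not code from the paper; the names `fit_rwr`, `Phi`, `U`, `costs`, and `beta` are assumptions made here for exposition.

```python
import numpy as np

def fit_rwr(Phi, U, costs, beta=1.0, reg=1e-8):
    """One M-step of reward-weighted regression (illustrative sketch).

    Phi   : (n, d) features of state and desired task-space command
    U     : (n, m) executed joint-space commands
    costs : (n,)   immediate cost incurred by each sample
    Returns theta with shape (d, m) so the policy predicts u = Phi @ theta.
    """
    # Turn costs into reward weights in (0, 1]; low-cost samples dominate.
    w = np.exp(-beta * (costs - costs.min()))
    # Weighted least squares: theta = (Phi^T W Phi)^-1 Phi^T W U,
    # with a small ridge term for numerical stability.
    PhiW = Phi * w[:, None]
    A = Phi.T @ PhiW + reg * np.eye(Phi.shape[1])
    theta = np.linalg.solve(A, PhiW.T @ U)
    return theta
```

In a learning loop of this kind, one would sample commands from the current local policy, observe the immediate cost defined by the operational-space optimal control criterion, refit the policy with this M-step, and iterate; the cost-derived weights are what steer all local models toward one globally consistent resolution of redundancy.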
 