ID 33049
Full-text URL
Authors
Ito, Kazuyuki Okayama University
Kamegawa, Tetsushi Tokyo Institute of Technology
Matsuno, Fumitoshi Tokyo Institute of Technology
Abstract

Reinforcement learning is very effective for robot learning because it requires no prior knowledge and supports reactive and adaptive behaviors. In our previous work, we proposed a new reinforcement learning algorithm, "Q-learning with dynamic structuring of exploration space based on genetic algorithm (QDSEGA)", designed for complicated systems with large state-action spaces, such as robots with many redundant degrees of freedom. However, the application of QDSEGA has been restricted to static systems. A snake-like robot has many redundant degrees of freedom, and the dynamics of the system are essential to the locomotion task, so applying conventional reinforcement learning is very difficult. In this paper, we extend the layered structure of QDSEGA so that it can be applied to real robots that have such complexities and dynamics. We apply it to the acquisition of locomotion patterns for a snake-like robot and demonstrate the effectiveness and validity of QDSEGA with the extended layered structure through simulation and experiment.
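As a rough illustration of the idea described in the abstract, the sketch below combines plain tabular Q-learning with a genetic algorithm that periodically restructures which subset of a large action set is explored. The environment interface (reset/step), all names, and all parameter values are assumptions for illustration only; this is not the authors' QDSEGA implementation or its layered structure.

```python
# Minimal sketch: Q-learning restricted to an action subset, with a GA
# that evolves which subset is explored.  All names and the env interface
# are illustrative assumptions, not the paper's code.
import random


def run_q_learning(env, actions, episodes=50, alpha=0.1, gamma=0.9, eps=0.1):
    """Tabular Q-learning over the given action subset.
    Returns the total reward collected, used below as a fitness signal."""
    Q, total_reward = {}, 0.0
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            if random.random() < eps:                       # epsilon-greedy exploration
                action = random.choice(actions)
            else:
                action = max(actions, key=lambda a: Q.get((state, a), 0.0))
            next_state, reward, done = env.step(action)     # assumed interface
            best_next = 0.0 if done else max(Q.get((next_state, a), 0.0) for a in actions)
            old = Q.get((state, action), 0.0)
            Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
            total_reward += reward
            state = next_state
    return total_reward


def evolve_action_subsets(env, all_actions, subset_size=8, pop_size=10, generations=20):
    """GA layer: each individual is a subset of the full action set; its
    fitness is the return Q-learning achieves when restricted to it.
    Assumes len(all_actions) is much larger than subset_size."""
    population = [random.sample(all_actions, subset_size) for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=lambda s: run_q_learning(env, s), reverse=True)
        parents = scored[: pop_size // 2]                    # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = list(set(a[: subset_size // 2] + b[subset_size // 2:]))  # crossover
            while len(child) < subset_size:                  # mutation: refill with new actions
                extra = random.choice(all_actions)
                if extra not in child:
                    child.append(extra)
            children.append(child[:subset_size])
        population = parents + children
    return max(population, key=lambda s: run_q_learning(env, s))
```

In this reading, the GA layer shapes the exploration space while the Q-learning layer learns a policy within it; the paper's contribution, per the abstract, is extending that layered structure so it also handles the dynamics of a real snake-like robot.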

Keywords
genetic algorithms
learning (artificial intelligence)
mobile robots
motion control
robot dynamics
robot kinematics
Notes
Published with permission from the copyright holder. This is the institute's copy, as published in Robotics and Automation, 2003. Proceedings. ICRA '03. IEEE International Conference on, 14-19 Sept. 2003, Volume 1, Pages 791-796.
Publisher URL:http://ieeexplore.ieee.org/search/wrapper.jsp?arnumber=1241690
Copyright © 2003 IEEE. All rights reserved.
Publication Date
2003-09-14
Publication Title
Robotics and Automation
Volume
1
Start Page
791
End Page
796
Resource Type
Journal Article
Language
English
Peer Review
Yes
DOI
Submission Path
mechanical_engineering/7