
JP Tutorial



Tutorial: Robot Learning

Presenter: Jan Peters

Slides

The slides are available in the following formats:

  1. PDF
  2. QuickTime movie



Reading Assignments


Overview: Peters, J.; Kober, J.; Muelling, K.; Kroemer, O.; Neumann, G. (accepted). Towards Robot Skill Learning: From Simple Skills to Table Tennis, Proceedings of the European Conference on Machine Learning (ECML), Nectar Track.

Model Learning

  1. Overview: Nguyen-Tuong, D.; Peters, J. (2011). Model Learning in Robotics: a Survey, Cognitive Processing, 12, 4.
  2. Success Story 1: Schaal, S.; Atkeson, C. G.; Vijayakumar, S. (2002). Scalable techniques from nonparametric statistics for real-time robot learning, Applied Intelligence, 17, 1, pp.49-60.
  3. Success Story 2: Peters, J.; Schaal, S. (2008). Learning to control in operational space, International Journal of Robotics Research, 27, pp.197-212.
  4. Success Story 3: Nguyen-Tuong, D.; Seeger, M.; Peters, J. (2009). Model Learning with Local Gaussian Process Regression, Advanced Robotics, 23, 15, pp.2015-2034.
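
As a rough, self-contained illustration of what "model learning" means in these readings, the sketch below learns the inverse dynamics of a toy 1-DoF pendulum from noisy samples using Gaussian-process-style kernel regression. The pendulum model, noise levels, and kernel length scale are illustrative assumptions and are not taken from any of the papers above.

# Minimal model-learning sketch (hypothetical toy example):
# learn inverse dynamics tau = f(q, dq, ddq) of a 1-DoF pendulum from data.
import numpy as np

rng = np.random.default_rng(0)

def true_inverse_dynamics(q, dq, ddq, m=1.0, l=1.0, d=0.1, g=9.81):
    # Idealized pendulum: tau = m*l^2*ddq + d*dq + m*g*l*sin(q)
    return m * l**2 * ddq + d * dq + m * g * l * np.sin(q)

# Collect noisy "measured" torques at random states (stand-in for robot data).
X = rng.uniform(-1.0, 1.0, size=(200, 3))          # columns: q, dq, ddq
y = true_inverse_dynamics(*X.T) + 0.01 * rng.standard_normal(200)

def rbf_kernel(A, B, length_scale=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale**2)

# GP mean prediction = kernel ridge regression with a small noise term.
K = rbf_kernel(X, X) + 1e-4 * np.eye(len(X))
alpha = np.linalg.solve(K, y)

X_test = rng.uniform(-1.0, 1.0, size=(50, 3))
tau_pred = rbf_kernel(X_test, X) @ alpha
print("RMSE:", np.sqrt(np.mean((tau_pred - true_inverse_dynamics(*X_test.T)) ** 2)))

In the cited work, such models are learned from real robot data, often online, and then used inside the control loop (e.g. for operational space control).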

Imitation Learning: Behavioral Cloning and Inverse Reinforcement Learning

  1. Overview: Argall, B. D.; Chernova, S.; Veloso, M.; Browning, B. (2009). A Survey of Robot Learning from Demonstration, Robotics and Autonomous Systems, 57, 5, pp.469-483.
  2. Success Story 1: Ratliff, N.; Ziebart, B.; Peterson, K.; Bagnell, J. A.; Hebert, M.; Dey, A. K.; Srinivasa, S. (2009). Inverse Optimal Heuristic Control for Imitation Learning, Proceedings of Artificial Intelligence and Statistics (AISTATS).
  3. Success Story 2: Boularias, A.; Kober, J.; Peters, J. (2011). Relative Entropy Inverse Reinforcement Learning, Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics (AISTATS 2011).
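
For a concrete, heavily simplified picture of the behavioral-cloning side of imitation learning, the sketch below fits a linear policy to state-action pairs produced by a hypothetical expert controller via ridge-regularized least squares. The expert gains, noise level, and regularizer are made-up toy values.

# Minimal behavioral-cloning sketch (hypothetical toy example):
# fit a linear policy a = W*s to demonstrated state-action pairs.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical expert: a simple linear feedback controller with noise.
W_expert = np.array([[-2.0, -0.5]])            # 1 action dim, 2 state dims
states = rng.uniform(-1.0, 1.0, size=(500, 2))
actions = states @ W_expert.T + 0.05 * rng.standard_normal((500, 1))

# Behavioral cloning = supervised regression from states to actions.
lam = 1e-3
W_clone = np.linalg.solve(states.T @ states + lam * np.eye(2),
                          states.T @ actions).T

print("expert gains:", W_expert)
print("cloned gains:", W_clone)

The inverse-reinforcement-learning papers above take a different route: rather than copying the actions directly, they recover a reward function that explains the demonstrations.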

Reinforcement Learning

  1. Overview: Kober, J.; Bagnell, D.; Peters, J. (accepted). Reinforcement Learning in Robotics: A Survey, International Journal of Robotics Research.
  2. Policy Search Survey: Deisenroth, M. P.; Neumann, G.; Peters, J. (conditionally accepted). A Survey on Policy Search for Robotics, Foundations and Trends in Robotics. The survey can be obtained by sending an email request to Marc Deisenroth.
  3. Success Story 1: Kober, J.; Peters, J. (2011). Policy Search for Motor Primitives in Robotics, Machine Learning, 84, 1-2, pp.171-203.
  4. Success Story 2: Ng, A. Y.; Coates, A.; Diel, M.; Ganapathi, V.; Schulte, J.; Tse, B.; Berger, E.; Liang, E. (2004). Inverted autonomous helicopter flight via reinforcement learning, International Symposium on Experimental Robotics.
  5. Success Story 3: Riedmiller, M.; Gabel, T.; Hafner, R.; Lange, S. (2009). Reinforcement Learning for Robot Soccer, Autonomous Robots, 27, 1, pp.55-74.
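
To make the episodic policy-search idea behind several of these readings concrete, here is a minimal REINFORCE-style likelihood-ratio gradient on a one-step toy task with a Gaussian policy. The task, batch size, and learning rate are illustrative assumptions, not a reproduction of any method cited above.

# Minimal episodic policy-search sketch (hypothetical toy example):
# likelihood-ratio policy gradient with a Gaussian exploration policy.
import numpy as np

rng = np.random.default_rng(2)

theta = 0.0          # policy mean (the only learnable parameter here)
sigma = 0.5          # fixed exploration noise
target = 2.0         # unknown to the learner; defines the reward
alpha = 0.05         # learning rate

for it in range(200):
    # Sample a batch of rollouts (here: single actions) and their rewards.
    a = theta + sigma * rng.standard_normal(20)
    r = -(a - target) ** 2
    # Likelihood-ratio gradient: grad log pi(a) = (a - theta) / sigma^2,
    # with the mean reward as a baseline to reduce variance.
    grad = np.mean((r - r.mean()) * (a - theta) / sigma**2)
    theta += alpha * grad

print("learned policy mean:", theta, "(target was", target, ")")

Running the loop drives the policy mean toward the hidden target; the methods in the readings above scale related episodic policy-search ideas to full motor-primitive rollouts on real robots.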