Reinforcement Learning in Robotics

Authors

  • Mani Manavalan, LTI
  • Apoorva Ganapathy, Adobe Systems

DOI:

https://doi.org/10.18034/ei.v2i2.572

Keywords:

Learning control, Robotic learning, Reinforcement learning

Abstract

Reinforcement learning offers robotics a set of tools and techniques for designing sophisticated and hard-to-engineer behaviors. At the same time, robotics poses challenges that expose the core problems of reinforcement learning and adds value to its continued development. This study traces the connections between the two fields and attempts to establish the links between the two research communities, providing a survey of reinforcement learning for behavior generation in robots. We highlight the key issues that arise in the robot learning process, along with the principal programming tools and methods it uses, and we discuss contributions that aim to tame the complexity of the domain through the choice of representations and goals. Particular attention is given to the goals of reinforcement learning, to value-function-based approaches, and to the challenges of robotic reinforcement learning. Throughout, the analysis strives to demonstrate the value of reinforcement learning as applied to different circumstances.
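The abstract refers to value-function-based approaches. As a purely illustrative aside (not a method taken from the article itself), the sketch below shows the core update that such methods share: a tabular Q-learning loop driven by a one-step temporal-difference error. The environment here is a hypothetical toy stand-in; the names reset, step, n_states, and n_actions are assumptions introduced only for this example.

# Minimal sketch of value-function-based RL: tabular Q-learning on a
# hypothetical toy environment (all environment details are assumed).
import random

n_states, n_actions = 16, 4              # toy problem size (assumed)
alpha, gamma, epsilon = 0.1, 0.95, 0.1   # step size, discount, exploration rate

# Q-table: estimated return for each (state, action) pair
Q = [[0.0] * n_actions for _ in range(n_states)]

def reset():
    """Hypothetical environment reset: always start in state 0."""
    return 0

def step(state, action):
    """Hypothetical transition: random next state, reward 1 at the goal state."""
    next_state = random.randrange(n_states)
    done = next_state == n_states - 1
    reward = 1.0 if done else 0.0
    return next_state, reward, done

for episode in range(500):
    s = reset()
    done = False
    while not done:
        # epsilon-greedy action selection from the current value estimates
        if random.random() < epsilon:
            a = random.randrange(n_actions)
        else:
            a = max(range(n_actions), key=lambda i: Q[s][i])
        s2, r, done = step(s, a)
        # one-step temporal-difference update toward the Bellman target
        target = r + gamma * max(Q[s2]) * (not done)
        Q[s][a] += alpha * (target - Q[s][a])
        s = s2

The tabular form is chosen only for brevity; on real robots, as the survey literature discusses, the state space is continuous and high-dimensional, which is precisely why function approximation and policy-search methods become necessary.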

 


Author Biographies

  • Mani Manavalan, LTI

    Technical Project Manager, Larsen & Toubro Infotech (LTI), Mumbai, INDIA

  • Apoorva Ganapathy, Adobe Systems

    Senior Developer, Adobe Systems, San Jose, California, USA



Published

2014-12-31

Issue

Vol. 2 No. 2 (2014)

Section

Peer Reviewed Articles

How to Cite

Reinforcement Learning in Robotics. (2014). Engineering International, 2(2), 113-124. https://doi.org/10.18034/ei.v2i2.572
