Learning robot stiffness for contact tasks using the natural actor-critic

Byungchan Kim, Byungduk Kang, Shin Suk Park, Sungchul Kang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

3 Citations (Scopus)

Abstract

This paper introduces a novel motor learning strategy for robotic contact tasks based on human motor control theory and machine learning schemes. Humans modulate their arm joint impedance parameters during contact tasks, and this modulation suggests a key feature of how humans successfully execute various contact tasks in variable environments. Our strategy for successful contact tasks is to find appropriate impedance parameters for optimal task execution by Reinforcement Learning (RL). In this study, a Recursive Least-Squares (RLS) filter-based episodic Natural Actor-Critic is employed to determine the optimal impedance parameters. Through dynamic simulations of contact tasks, this paper demonstrates the effectiveness of the proposed strategy. The simulation results show that the proposed method successfully optimizes contact-task performance and adapts to uncertain environmental conditions.
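The core mechanism the abstract describes, estimating a natural policy gradient from whole episodes and using it to tune an impedance (stiffness) parameter, can be sketched on a toy task. Everything below is illustrative and not from the paper: the one-dimensional stiffness-selection task, the reward shape, and all constants are assumptions, and a plain batch least-squares solve stands in for the paper's RLS-filter-based estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy contact task (not from the paper): the robot picks a
# stiffness k each episode; reward penalizes distance from an optimal
# stiffness that is unknown to the learner.
K_OPT = 300.0   # assumed optimal stiffness [N/m]
SIGMA = 30.0    # fixed exploration std of the Gaussian policy

def rollout(mu):
    """One single-step episode: sample a stiffness, observe its reward."""
    k = rng.normal(mu, SIGMA)
    reward = -((k - K_OPT) / 100.0) ** 2
    score = (k - mu) / SIGMA**2   # d/dmu of log N(k; mu, sigma^2)
    return score, reward

def enac_update(mu, n_episodes=100, lr=5.0):
    """Episodic natural actor-critic step: regress episode returns on the
    summed score features (plus a baseline column); the weight on the
    score feature is the natural-gradient estimate for the policy mean."""
    G = np.empty((n_episodes, 2))
    R = np.empty(n_episodes)
    for i in range(n_episodes):
        g, r = rollout(mu)
        G[i] = (g, 1.0)           # score feature + constant baseline
        R[i] = r
    w, *_ = np.linalg.lstsq(G, R, rcond=None)
    return mu + lr * w[0]         # w[0] ~ natural gradient w.r.t. mu

mu = 100.0                        # poor initial stiffness guess
for _ in range(30):
    mu = enac_update(mu)
print(round(mu, 1))               # converges near K_OPT
```

For a Gaussian policy with fixed variance, the Fisher information of the mean is 1/sigma^2, so the regression weight on the score feature equals the vanilla gradient scaled by sigma^2, i.e. the natural gradient; the paper's contribution is computing this estimate recursively with an RLS filter rather than in one batch solve.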

Original language: English
Title of host publication: Proceedings - IEEE International Conference on Robotics and Automation
Pages: 3832-3837
Number of pages: 6
DOIs: 10.1109/ROBOT.2008.4543799
Publication status: Published - 2008 Sep 18
Event: 2008 IEEE International Conference on Robotics and Automation, ICRA 2008 - Pasadena, CA, United States
Duration: 2008 May 19 – 2008 May 23

Other

Other: 2008 IEEE International Conference on Robotics and Automation, ICRA 2008
Country: United States
City: Pasadena, CA
Period: 08/5/19 – 08/5/23

Fingerprint

  • Robot learning
  • Stiffness
  • Reinforcement learning
  • Control theory
  • Learning systems
  • Robotics
  • Computer simulation

ASJC Scopus subject areas

  • Software
  • Control and Systems Engineering

Cite this

Kim, B., Kang, B., Park, S. S., & Kang, S. (2008). Learning robot stiffness for contact tasks using the natural actor-critic. In Proceedings - IEEE International Conference on Robotics and Automation (pp. 3832-3837). [4543799] https://doi.org/10.1109/ROBOT.2008.4543799

@inproceedings{0f85b976615d46938f34ee0dd7d3121d,
title = "Learning robot stiffness for contact tasks using the natural actor-critic",
abstract = "This paper introduces a novel motor learning strategy for robotic contact tasks based on human motor control theory and machine learning schemes. Humans modulate their arm joint impedance parameters during contact tasks, and this modulation suggests a key feature of how humans successfully execute various contact tasks in variable environments. Our strategy for successful contact tasks is to find appropriate impedance parameters for optimal task execution by Reinforcement Learning (RL). In this study, a Recursive Least-Squares (RLS) filter-based episodic Natural Actor-Critic is employed to determine the optimal impedance parameters. Through dynamic simulations of contact tasks, this paper demonstrates the effectiveness of the proposed strategy. The simulation results show that the proposed method successfully optimizes contact-task performance and adapts to uncertain environmental conditions.",
author = "Byungchan Kim and Byungduk Kang and Park, {Shin Suk} and Sungchul Kang",
year = "2008",
month = "9",
day = "18",
doi = "10.1109/ROBOT.2008.4543799",
language = "English",
isbn = "9781424416479",
pages = "3832--3837",
booktitle = "Proceedings - IEEE International Conference on Robotics and Automation",

}
