Sample-Efficient Reinforcement Learning for Pose Regulation of a Mobile Robot

Brescia, W.; De Cicco, L.; Mascolo, S.
2022-01-01

Abstract

Reinforcement Learning (RL) has gained interest in the control and automation communities thanks to its encouraging results in many challenging control problems, obtained without requiring a model of the system or of the environment. Yet, it is well known that employing such a learning-based approach in real scenarios may be problematic, as a prohibitive amount of data might be required to converge to an optimal control policy. In this work, we equip a popular RL algorithm with two tools that improve exploration effectiveness and sample efficiency: Episodic Noise, which helps useful subsets of actions emerge already in the first few training episodes, and the Difficulty Manager, which generates goals proportionate to the agent's current capabilities. We demonstrate the effectiveness of the proposed tools on a pose regulation task for a four-wheel-steering, four-wheel-driving robot suitable for a wide range of industrial scenarios. The resulting agent learns effective sets of actions in just a few hundred training epochs, reaching satisfactory performance during tests.
2022
ISBN: 978-1-6654-5248-9
Sample-Efficient Reinforcement Learning for Pose Regulation of a Mobile Robot / Brescia, W; De Cicco, L; Mascolo, S. - (2022), pp. 42-47. [10.1109/ICCAIS56082.2022.9990480]
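The record above describes the two tools only at a high level. As a purely illustrative reading of that description, and not the authors' implementation, the sketch below shows one way an episode-level exploration noise and a capability-scaled goal generator could look in Python; the class names EpisodicNoise and DifficultyManager, their parameters, and the goal parameterization (planar position plus heading) are assumptions introduced here for illustration only.

import numpy as np

class EpisodicNoise:
    # Hypothetical sketch: draw one noise vector per episode and hold it
    # fixed, so a consistent action bias can emerge in early training.
    def __init__(self, action_dim, sigma=0.2, rng=None):
        self.sigma = sigma
        self.rng = rng or np.random.default_rng()
        self.noise = np.zeros(action_dim)

    def reset(self):
        # Sample the perturbation once at the start of each episode.
        self.noise = self.rng.normal(0.0, self.sigma, self.noise.shape)

    def __call__(self, action):
        # Apply the same episode-level perturbation to every action.
        return np.clip(action + self.noise, -1.0, 1.0)

class DifficultyManager:
    # Hypothetical sketch: scale goal distance to the agent's recent
    # success rate, so goals stay proportionate to current capabilities.
    def __init__(self, min_radius=0.2, max_radius=3.0, window=50):
        self.min_radius = min_radius
        self.max_radius = max_radius
        self.window = window
        self.successes = []

    def record(self, success):
        self.successes.append(float(success))
        self.successes = self.successes[-self.window:]

    def sample_goal(self, rng):
        # Success rate in [0, 1] interpolates between easy and hard goals.
        rate = np.mean(self.successes) if self.successes else 0.0
        radius = self.min_radius + rate * (self.max_radius - self.min_radius)
        angle = rng.uniform(0.0, 2.0 * np.pi)
        heading = rng.uniform(-np.pi, np.pi)
        return np.array([radius * np.cos(angle), radius * np.sin(angle), heading])

In a hypothetical training loop, one would call noise.reset() and difficulty_manager.sample_goal(rng) at the start of each episode, wrap every policy action as noise(action) during the episode, and call difficulty_manager.record(success) when the episode ends.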


Use this identifier to cite or link to this document: https://hdl.handle.net/11589/248941