
Modeling, Positioning, and Deep Reinforcement Learning Path Following Control of Scaled Robotic Vehicles: Design and Experimental Validation / Caponio, C.; Stano, P.; Carli, R.; Olivieri, I.; Ragone, D.; Sorniotti, A.; Gruber, P.; Montanaro, U.. - In: IEEE TRANSACTIONS ON AUTOMATION SCIENCE AND ENGINEERING. - ISSN 1545-5955. - 22:(2025), pp. 9856-9871. [10.1109/TASE.2024.3513701]

Modeling, Positioning, and Deep Reinforcement Learning Path Following Control of Scaled Robotic Vehicles: Design and Experimental Validation

Carli R.;
2025-01-01

Abstract

Mobile robotic systems serve as versatile platforms for diverse indoor applications, ranging from warehousing and manufacturing to test benches dedicated to evaluating automated driving (AD) functions. In AD systems, the path following (PF) layer is responsible for defining the steering commands that track the reference path. Recently explored approaches involve artificial intelligence-based methods, such as Deep Reinforcement Learning (DRL). Despite their promising performance, these controllers still suffer from time-consuming training phases and may experience performance degradation when operating away from training conditions. To address these challenges, this paper proposes novel DRL controllers that tackle the simulation-to-reality gap in unknown scenarios through: (i) training via an expert demonstrator, which also speeds up the learning phase; and (ii) a weight adaptation strategy for the resulting neural network (NN) that strengthens controller robustness and enhances PF performance. In addition, an experimentally validated vehicle model is used both for training the proposed DRL algorithm and as the model of a federated extended Kalman filter (FEKF) employed for sensor fusion in vehicle localisation. The proposed DRL-based PF controllers are experimentally evaluated through key performance indicators across multiple maneuvers not considered during training, and are shown to outperform benchmark model-based controllers from the literature.
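To illustrate the sensor-fusion idea behind the FEKF mentioned in the abstract, the sketch below runs two independent scalar Kalman filters on different position sensors and fuses their estimates by information-weighted averaging in a master stage. This is a minimal linear, one-dimensional sketch under assumed noise values and variable names, not the paper's filter.

```python
import numpy as np

def kf_update(x, P, z, R):
    """Scalar Kalman measurement update (no process noise)."""
    K = P / (P + R)                       # Kalman gain
    return x + K * (z - x), (1.0 - K) * P

rng = np.random.default_rng(1)
true_pos = 2.0                            # assumed true vehicle position [m]

# Two local filters, e.g. one per sensor (encoder-based and camera-based).
x1, P1 = 0.0, 10.0
x2, P2 = 0.0, 10.0
R1, R2 = 0.04, 0.09                       # assumed sensor noise variances

for _ in range(50):
    x1, P1 = kf_update(x1, P1, true_pos + rng.normal(0.0, R1**0.5), R1)
    x2, P2 = kf_update(x2, P2, true_pos + rng.normal(0.0, R2**0.5), R2)

# Master fusion stage: combine local estimates weighted by inverse covariance.
P_fused = 1.0 / (1.0 / P1 + 1.0 / P2)
x_fused = P_fused * (x1 / P1 + x2 / P2)
```

The fused covariance is smaller than either local one, which is the motivation for running a federated architecture instead of a single filter per sensor.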
Note to Practitioners - This paper presents a comprehensive toolchain for controlling mobile robots, which includes: (i) a simple yet effective two-stage least-squares approach for parameter identification of the longitudinal and lateral dynamics of scaled robotic vehicles; (ii) the utilisation of a no-reset FEKF to enhance positioning by leveraging all sensors commonly available on scaled robotic vehicles; (iii) the inclusion of an expert demonstrator to expedite the training phase and address the simulation-to-reality gap resulting from discrepancies between simulation and experimental environments; and (iv) an adaptation strategy for dynamically adjusting the weights of the resulting NN to further improve robustness in scenarios not considered during training.
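The two-stage least-squares identification in item (i) can be sketched as follows. This is a hedged illustration, not the paper's implementation: it assumes simple first-order discrete-time models for the longitudinal speed and the yaw rate, with the speed identified in stage one entering the lateral regressor in stage two. Model structures, signals, and parameter names are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200

# --- Stage 1: longitudinal dynamics  v[k+1] = a*v[k] + b*u[k] ---
a_true, b_true = 0.95, 0.12               # "unknown" plant parameters
u = rng.uniform(-1.0, 1.0, N)             # logged throttle commands
v = np.zeros(N + 1)
for k in range(N):
    v[k + 1] = a_true * v[k] + b_true * u[k]

Phi_lon = np.column_stack([v[:-1], u])    # regressors [v_k, u_k]
a_hat, b_hat = np.linalg.lstsq(Phi_lon, v[1:], rcond=None)[0]

# --- Stage 2: lateral dynamics  r[k+1] = c*r[k] + d*v[k]*delta[k] ---
c_true, d_true = 0.90, 0.30
delta = rng.uniform(-0.3, 0.3, N)         # logged steering commands
r = np.zeros(N + 1)
for k in range(N):
    r[k + 1] = c_true * r[k] + d_true * v[k] * delta[k]

# The speed simulated with the stage-1 fit enters the lateral regressor,
# which is why the two stages are run in this order.
v_hat = np.zeros(N + 1)
for k in range(N):
    v_hat[k + 1] = a_hat * v_hat[k] + b_hat * u[k]

Phi_lat = np.column_stack([r[:-1], v_hat[:-1] * delta])
c_hat, d_hat = np.linalg.lstsq(Phi_lat, r[1:], rcond=None)[0]
```

On noiseless data both stages recover the parameters exactly; with real logs, the residuals of each stage indicate how well the assumed model structure fits.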
2025
Files for this record:
There are no files associated with this record.

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11589/281880
Citations
  • Scopus 0
  • Web of Science 0