
Point2Depth: a GAN-based Contrastive Learning Approach for mmWave Point Clouds to Depth Images Transformation / Brescia, Walter; Roberto, Giuseppe; Racanelli, Vito; Mascolo, Saverio; De Cicco, Luca. - PRINT. - (2023), pp. 529-534. (Paper presented at the 1st Mediterranean Conference on Control and Automation, MED 2023, held in Limassol, Cyprus, June 26-29, 2023) [10.1109/MED59994.2023.10185732].

Point2Depth: a GAN-based Contrastive Learning Approach for mmWave Point Clouds to Depth Images Transformation

Walter Brescia; Vito Racanelli; Saverio Mascolo; Luca De Cicco
2023-01-01

Abstract

The perception of the environment is essential in mobile robotics applications, as it enables the proper planning and execution of efficient navigation strategies. Optical sensors offer many advantages, ranging from precision to understandability, but they can be significantly impacted by lighting conditions and the composition of the surroundings. In contrast, millimeter wave (mmWave) radar sensors are not influenced by such adverse conditions and are capable of detecting partially or fully obstructed obstacles, resulting in more informative point clouds. However, such point clouds are often sparse and noisy. This work presents Point2Depth, a cross-modal contrastive learning approach based on Conditional Generative Adversarial Networks (cGANs) that transforms sparse point clouds from mmWave sensors into depth images, preserving the distance information while producing a more comprehensible representation. An extensive data collection phase was conducted to create a rich multimodal dataset in which each piece of information is associated with a timestamp and a pose. The experimental results demonstrate that the approach produces accurate depth images, even in challenging environmental conditions.
2023
1st Mediterranean Conference on Control and Automation, MED 2023
979-8-3503-1543-1
Files in this product:
No files are associated with this product.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11589/253263
Citations
  • Scopus 0