Seeing Through the Robot’s Eyes: Adaptive Point Cloud Streaming for Immersive Teleoperation / Barone, N.; Brescia, W.; Santangelo, G.; Maggio, A. P.; Cisternino, I.; Cicco, L. D.; Mascolo, S. - 16101:(2026), pp. 3-21. (22nd International Conference on Virtual Reality and Mixed Reality, EuroXR 2025, held in 2025) [10.1007/978-3-032-03805-0_1].
Seeing Through the Robot’s Eyes: Adaptive Point Cloud Streaming for Immersive Teleoperation
Barone N.; Brescia W.; Santangelo G.; Maggio A. P.; Cisternino I.; Cicco L. D.; Mascolo S.
2026
Abstract
Autonomous Mobile Robots (AMRs) are increasingly deployed in diverse scenarios to automate tedious and hazardous tasks. Nonetheless, challenges such as complex environments, sensor occlusions, and limitations of autonomous navigation systems often require human intervention. Teleoperation offers a viable solution, allowing operators to remotely control AMRs when autonomy fails, without requiring physical presence. A key requirement for effective teleoperation is the real-time delivery of rich sensory information to the operator. Head-Mounted Displays (HMDs), combined with volumetric videos, provide an immersive visualization of the robot’s surroundings, enabling natural viewpoint changes and improved spatial awareness compared to traditional 2D video streams. In this paper, we present a teleoperation framework to stream volumetric videos in real time, in the form of point clouds, to an operator wearing an HMD. The system includes a distance-based sampling strategy that dynamically adapts the point cloud bitrate to the estimated time-varying network bandwidth, addressing constraints imposed by limited computational resources on both the robot and the HMD. The framework is implemented on a real mobile robot and evaluated under various network conditions, including a 5G connection, demonstrating its effectiveness and robustness in supporting immersive remote teleoperation. Code is available at our GitHub repository (https://github.com/Diane-Spirit).
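The distance-based, bandwidth-adaptive sampling described in the abstract can be sketched roughly as follows. This is a minimal illustration only, not the paper's actual implementation: it assumes a NumPy point cloud, a fixed per-point byte cost, and a simple linear near/far weighting, and every function name and parameter here (e.g. `distance_based_sample`, `bytes_per_point`) is hypothetical.

```python
import numpy as np

def distance_based_sample(points, bandwidth_bps, bytes_per_point=9,
                          fps=30, near=1.0, far=10.0, rng=None):
    """Subsample a point cloud so one frame fits the estimated bandwidth,
    keeping points close to the sensor with higher probability.

    points        : (N, 3) array of XYZ coordinates in the sensor frame
    bandwidth_bps : estimated network bandwidth in bits per second
    """
    rng = rng or np.random.default_rng()
    # Per-frame point budget implied by the bandwidth estimate.
    budget = int(bandwidth_bps / 8 / fps / bytes_per_point)
    n = len(points)
    if n <= budget:
        return points  # everything fits, no sampling needed
    # Distance of each point from the sensor origin.
    d = np.linalg.norm(points, axis=1)
    # Linear weight: near points ~1.0, far points down to a small floor,
    # so distant geometry is thinned but never discarded entirely.
    w = np.clip((far - d) / (far - near), 0.05, 1.0)
    idx = rng.choice(n, size=budget, replace=False, p=w / w.sum())
    return points[idx]
```

For example, at an estimated 1 Mb/s and 30 fps with 9 bytes per point, the budget is 462 points per frame; the sketch then draws that many points, biased toward the near field where teleoperation typically needs the most detail.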

