Adding Object Manipulation Capabilities to Social Robots by using 3D and RGB Cameras Data

Mezzina, G.; De Venuto, D.

Abstract

This paper outlines the design and implementation of a novel object manipulation routine for a social robot, here Pepper by SoftBank Robotics. Pepper is primarily designed for verbal interaction and is therefore not equipped with object manipulation capabilities. The proposed routine exploits the robot's built-in RGB and 3D cameras. First, semantic segmentation based on the Mini-YOLOv3 neural network is run on the RGB image. Next, 3D sensor data are used to position the hand over the object, implementing a novel routine that grabs the object and scans it for recognition purposes. To preserve patient- and location-sensitive data, the proposed architecture operates automatically and offline, running on the robot's operating system. Experimental results on 370 grabbing trials show that the manipulation routine achieves a grabbing success rate of up to 96%. They also show that the success rate remains unaltered if the target object is positioned within a rectangular area of ±6 cm × ±3 cm centered on the nominal position provided by an initial positioning grid. The grabbing success rate remains above 80% even if the object to be grabbed is placed at an angle between 10° and 45° within the same area.
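The abstract describes a three-step pipeline: RGB detection, depth-guided hand positioning, and grasping. A minimal sketch of such a pipeline is given below, assuming the NAOqi Python SDK (qi), OpenCV, and NumPy; the model files (mini-yolov3.cfg/.weights), NAOqi camera constants, camera intrinsics, and the camera-to-torso mapping are illustrative assumptions, not the authors' implementation.

import time
import numpy as np
import cv2
import qi

# Hypothetical sketch: detect the object in the RGB frame with a
# YOLOv3-tiny-style network (standing in for the paper's Mini-YOLOv3),
# read the matching depth pixel, and drive Pepper's hand to the object.
session = qi.Session()
session.connect("tcp://pepper.local:9559")      # robot address: assumption

video  = session.service("ALVideoDevice")
motion = session.service("ALMotion")

# Top RGB camera = index 0, depth camera = index 2; resolution 1 = QVGA;
# color space 13 = BGR (8-bit), 17 = depth (16-bit values, assumed in mm).
rgb_sub   = video.subscribeCamera("rgb_grab", 0, 1, 13, 10)
depth_sub = video.subscribeCamera("depth_grab", 2, 1, 17, 10)

def get_frame(handle, dtype):
    """Fetch one ALVideoDevice frame and view it as a numpy array."""
    img = video.getImageRemote(handle)          # [width, height, ..., buffer]
    w, h, buf = img[0], img[1], img[6]
    return np.frombuffer(buf, dtype=dtype).reshape(h, w, -1)

frame = get_frame(rgb_sub, np.uint8)            # (h, w, 3) BGR image
depth = get_frame(depth_sub, np.uint16)         # (h, w, 1) depth map

# 1) Run the detector on the RGB image and keep the most confident box.
net = cv2.dnn.readNetFromDarknet("mini-yolov3.cfg", "mini-yolov3.weights")
blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True)
net.setInput(blob)
best, best_conf = None, 0.0
for out in net.forward(net.getUnconnectedOutLayersNames()):
    for det in out:                             # [cx, cy, w, h, obj, classes...]
        conf = det[4] * det[5:].max()
        if conf > best_conf:
            best, best_conf = det, conf
if best is None:
    raise RuntimeError("no object detected in the RGB frame")

# 2) Read the depth at the detection center and back-project it with a
#    pinhole model (fx, fy are assumed intrinsics, not calibrated values).
H, W = frame.shape[:2]
u, v = int(best[0] * W), int(best[1] * H)
dh, dw = depth.shape[:2]
z = float(depth[v * dh // H, u * dw // W, 0]) / 1000.0   # mm -> m
fx = fy = 525.0
x_cam = (u - W / 2.0) * z / fx
y_cam = (v - H / 2.0) * z / fy

# 3) Pre-open the hand and reach with the right arm. FRAME_TORSO = 0;
#    axis mask 7 = control position (x, y, z) only. The axis swap below
#    ignores the camera's offset and tilt: a placeholder, not the paper's
#    positioning routine.
motion.openHand("RHand")
motion.setPositions("RArm", 0, [z, -x_cam, -y_cam, 0.0, 0.0, 0.0], 0.2, 7)
time.sleep(2.0)
motion.closeHand("RHand")

video.unsubscribe(rgb_sub)
video.unsubscribe(depth_sub)

Since the paper's architecture runs offline on the robot's operating system to protect patient- and location-sensitive data, the sketch likewise keeps detection and control on-board and sends no frame off the device.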
Year: 2021
Conference: 20th IEEE Sensors 2021
ISBN: 978-1-7281-9501-8
Adding Object Manipulation Capabilities to Social Robots by using 3D and RGB Cameras Data / Mezzina, G.; De Venuto, D. - Electronic. - 2021-October (2021), pp. 1-4. (Paper presented at the 20th IEEE Sensors 2021 conference, held virtually, October 31 - November 4, 2021) [10.1109/SENSORS47087.2021.9639608].
Files in this product:
There are no files associated with this product.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11589/238340