
Deep Learning Strategies for Semantic Segmentation in Robot-Assisted Radical Prostatectomy / Sibilano, Elena; Delprete, Claudia; Marvulli, Pietro Maria; Brunetti, Antonio; Marino, Francescomaria; Lucarelli, Giuseppe; Battaglia, Michele; Bevilacqua, Vitoantonio. - In: APPLIED SCIENCES. - ISSN 2076-3417. - ELETTRONICO. - 15:19(2025). [10.3390/app151910665]

Deep Learning Strategies for Semantic Segmentation in Robot-Assisted Radical Prostatectomy

Sibilano, Elena; Delprete, Claudia; Marvulli, Pietro Maria; Brunetti, Antonio; Marino, Francescomaria; Bevilacqua, Vitoantonio
2025

Abstract

Robot-assisted radical prostatectomy (RARP) has become the most prevalent treatment for patients with organ-confined prostate cancer. Despite superior outcomes, suboptimal vesicourethral anastomosis (VUA) may lead to serious complications, including urinary leakage, prolonged catheterization, and extended hospitalization. Fine-grained assessment of this task requires precise localization of both the surgical needle and the surrounding vesical and urethral tissues to be coadapted. Nonetheless, identifying anatomical structures in endoscopic videos is difficult due to tissue distortion, changes in brightness, and instrument interference. In this paper, we propose and compare two Deep Learning (DL) pipelines for the automatic segmentation of the mucosal layers and the suturing needle in real RARP videos, exploiting different architectures and training strategies. To train the models, we introduce a novel, annotated dataset collected from four VUA procedures. Experimental results show that the nnU-Net 2D model achieved the highest class-specific metrics, with a Dice Score of 0.663 for the mucosa class and 0.866 for the needle class, outperforming both transformer-based and baseline convolutional approaches on external validation video sequences. This work paves the way for computer-assisted tools that can objectively evaluate surgical performance during the critical phase of suturing tasks.
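The Dice Score reported above measures the overlap between a predicted segmentation mask and its ground-truth annotation. As an illustration only (this is not the authors' evaluation code), a minimal NumPy sketch of the per-class Dice computation on binary masks:

```python
import numpy as np

def dice_score(pred, target):
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|) for binary masks."""
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    intersection = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    # Convention: both masks empty -> perfect agreement
    return 2.0 * intersection / denom if denom > 0 else 1.0

# Toy 4x4 example: ground truth has 3 positive pixels,
# the prediction recovers 2 of them with no false positives.
gt = np.zeros((4, 4), dtype=np.uint8)
gt[1, 1:4] = 1
pred = np.zeros((4, 4), dtype=np.uint8)
pred[1, 1:3] = 1
print(dice_score(pred, gt))  # → 0.8  (2*2 / (2+3))
```

In multi-class evaluation, as in the mucosa/needle results above, this is typically computed once per class on the binarized mask for that class and then averaged across frames.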
Files in this product:
File: 2025_Deep_Learning_Strategies_for_Semantic_Segmentation_in_Robot-Assisted_Radical_Prostatectomy_pdfeditoriale.pdf
Access: open access
Type: Publisher's version
License: Creative Commons
Size: 873.75 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11589/291282
Citations
  • Scopus: 0
  • Web of Science (ISI): 0