Cloud Detection on PRISMA Second Generation Using a Secondary RGB Forward-Looking Camera / Cratere, A.; Cannizzaro, I.; Carbone, A.; Guzman, M. A. D.; Sarvia, F.; Amici, S.; Ansalone, L.; Picchiani, M.; Dell'Olio, F.; Spiller, D. - In: IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING. - ISSN 0196-2892. - 63:(2025), pp. 5650516.1-5650516.16. [10.1109/TGRS.2025.3627469]
Cloud Detection on PRISMA Second Generation Using a Secondary RGB Forward-Looking Camera
Cratere A.; Dell'Olio F.
2025
Abstract
We present a proof-of-concept onboard cloud detection (CD) system for the precursore iperspettrale della missione applicativa (PRISMA) Second Generation (PSG) mission, which combines a secondary forward-looking RGB camera with deep learning (DL) models on a system-on-a-chip (SoC) field-programmable gate array (FPGA). The proposed system enables real-time cloud coverage assessment, optimizing primary hyperspectral (HS) payload data collection and supporting adaptive acquisition scheduling, offering an effective solution for enhancing onboard data processing in Earth observation (EO) missions. To support efficient model development, we derived the design specifications of the secondary camera and constructed a dedicated dataset using astronaut-captured images from the International Space Station—the CloudISS-RGB dataset—employing a self-training approach to generate high-quality pseudo-labels. Through an extensive architectural design exploration and systematic optimization, we tested two fully convolutional networks: U-Net, offering higher segmentation accuracy, and a lightweight convolutional autoencoder (CAE) designed for lower latency inference. The models were deployed on an AMD/Xilinx Zynq UltraScale+ MPSoC using the deep learning processor unit (DPU) IP core for hardware (HW) acceleration. The FPGA-deployed U-Net achieved 98.16% accuracy with a false positive rate (FPR) of 0.9%, providing robust segmentation even in challenging conditions and making it the preferred model for reliable inference onboard PSG. The CAE model maintained 97.02% accuracy while achieving over 2× faster inference (31.81 versus 74.29 ms per image). Both models enable accurate cloud segmentation with real-time inference, meeting the operational constraints derived from the secondary camera design, while operating at an average power consumption of 2.6 W (U-Net) and 2.4 W (CAE), well within the mission power constraint for the HW accelerator. Our results validate the feasibility of integrating DL-based cloud coverage assessment onboard PSG, contributing to the broader effort of advancing artificial intelligence (AI)-powered computing for EO missions to enable more autonomous data processing and decision-making.
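To illustrate the kind of lightweight fully convolutional model the abstract refers to, the sketch below builds a small convolutional autoencoder for per-pixel cloud segmentation of RGB tiles in Keras. It is a minimal example under assumed settings: the 256x256 input size, filter counts, and training configuration are illustrative choices, not the architecture or hyperparameters reported in the paper.

# Illustrative sketch only: a lightweight convolutional autoencoder (CAE) for
# binary cloud segmentation on RGB tiles. The 256x256 input resolution, layer
# counts, and filter sizes are assumptions for demonstration, not the
# architecture described in the paper.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cae(input_shape=(256, 256, 3)):
    inputs = layers.Input(shape=input_shape)

    # Encoder: progressively downsample the RGB image.
    x = layers.Conv2D(16, 3, padding="same", activation="relu")(inputs)
    x = layers.MaxPooling2D(2)(x)
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
    x = layers.MaxPooling2D(2)(x)
    x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)

    # Decoder: upsample back to the input resolution.
    x = layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(x)
    x = layers.Conv2DTranspose(16, 3, strides=2, padding="same", activation="relu")(x)

    # Single-channel sigmoid output: per-pixel cloud probability.
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(x)

    model = models.Model(inputs, outputs, name="cloud_cae")
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

if __name__ == "__main__":
    build_cae().summary()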
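Similarly, the following sketch shows how a per-pixel cloud probability map can be turned into the scene-level quantities mentioned in the abstract: a cloud-coverage fraction usable for acquisition scheduling, plus pixel-wise accuracy and false positive rate against a reference mask. The 0.5 decision threshold and the 70% coverage cutoff are hypothetical values chosen for the example, not mission parameters.

# Illustrative sketch only: scene-level cloud coverage and the accuracy/FPR
# figures of merit, computed from a predicted probability map and a reference
# (e.g., pseudo-label) mask. Threshold values are assumptions.
import numpy as np

def cloud_coverage(prob_map: np.ndarray, threshold: float = 0.5) -> float:
    """Fraction of pixels classified as cloud."""
    return float((prob_map >= threshold).mean())

def accuracy_and_fpr(pred_mask: np.ndarray, ref_mask: np.ndarray):
    """Pixel-wise accuracy and false positive rate against a reference mask."""
    pred, ref = pred_mask.astype(bool), ref_mask.astype(bool)
    tp = np.sum(pred & ref)
    tn = np.sum(~pred & ~ref)
    fp = np.sum(pred & ~ref)
    fn = np.sum(~pred & ref)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    fpr = fp / (fp + tn) if (fp + tn) > 0 else 0.0
    return accuracy, fpr

if __name__ == "__main__":
    prob_map = np.random.rand(256, 256)          # stand-in for a model output
    ref_mask = np.random.rand(256, 256) >= 0.5   # stand-in for a reference mask
    coverage = cloud_coverage(prob_map)
    acc, fpr = accuracy_and_fpr(prob_map >= 0.5, ref_mask)
    # Example scheduling rule: skip the hyperspectral acquisition if the scene
    # ahead is mostly cloudy (hypothetical 70% cutoff).
    acquire = coverage < 0.70
    print(f"coverage={coverage:.2%}, accuracy={acc:.2%}, FPR={fpr:.2%}, acquire={acquire}")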

