
Unsupervised Deep Learning-Based Pansharpening With Jointly Enhanced Spectral and Spatial Fidelity

Ciotola, Matteo (first author); Poggi, Giovanni (second author); Scarpa, Giuseppe (last author)
2023

Abstract

In recent years, deep learning (DL) has gained a leading role in the pansharpening of multiresolution images. Given the lack of ground-truth data, most DL-based methods carry out supervised training in a reduced-resolution domain. However, models trained on downsized images tend to perform poorly on high-resolution target images. For this reason, several research groups are now turning to unsupervised training in the full-resolution domain, through the definition of appropriate loss functions and training paradigms. In this context, we have recently proposed a full-resolution training framework that can be applied to many existing architectures. Here, we propose a new DL-based pansharpening model that fully exploits the potential of this approach and provides cutting-edge performance. Besides architectural improvements with respect to previous work, such as the use of residual attention modules, the proposed model features a novel loss function that jointly promotes the spectral and spatial quality of the pansharpened data. In addition, thanks to a new fine-tuning strategy, it improves inference-time adaptation to target images. Experiments on a large variety of test images, performed in challenging scenarios, demonstrate that the proposed method compares favorably with the state of the art in terms of both numerical results and visual output. The code is available online at https://github.com/matciotola/Lambda-PNN.
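To make the two quality terms mentioned in the abstract concrete, the sketch below gives a minimal, hypothetical PyTorch rendition of an unsupervised loss that jointly promotes spectral fidelity (consistency with the original multispectral bands after degradation) and spatial fidelity (correlation with the panchromatic image), plus a few-step target-adaptation loop. All names (spectral_loss, spatial_loss, joint_loss, target_adapt), the pooling-based degradation, the correlation form, and the weight lam are illustrative assumptions, not the paper's actual implementation.

import torch
import torch.nn.functional as F

def spectral_loss(fused, ms, scale=4):
    # Spectral term: the fused image, degraded back to the MS
    # resolution, should match the original MS bands (L1 distance).
    # Average pooling is a crude stand-in for an MTF-matched filter.
    degraded = F.avg_pool2d(fused, kernel_size=scale)
    return F.l1_loss(degraded, ms)

def spatial_loss(fused, pan, eps=1e-6):
    # Spatial term: each fused band should correlate with the PAN
    # image. A global per-band Pearson correlation stands in here
    # for a local-window version.
    pan = pan.expand_as(fused)
    f = fused - fused.mean(dim=(2, 3), keepdim=True)
    p = pan - pan.mean(dim=(2, 3), keepdim=True)
    corr = (f * p).mean(dim=(2, 3)) / (
        f.pow(2).mean(dim=(2, 3)).sqrt() * p.pow(2).mean(dim=(2, 3)).sqrt() + eps
    )
    return (1.0 - corr).mean()

def joint_loss(fused, ms, pan, lam=0.5):
    # lam trades off spectral vs. spatial fidelity (hypothetical weighting).
    return spectral_loss(fused, ms) + lam * spatial_loss(fused, pan)

def target_adapt(model, ms, pan, steps=50, lr=1e-5):
    # Inference-time adaptation: a few unsupervised gradient steps on
    # the target image itself before producing the final output.
    # The model is assumed to map (MS, PAN) to the pansharpened image.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        joint_loss(model(ms, pan), ms, pan).backward()
        opt.step()
    with torch.no_grad():
        return model(ms, pan)

The paper's actual terms likely differ (e.g., sensor-specific MTF filtering and local correlation windows); the repository linked above should be consulted for the authoritative definitions.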
Unsupervised Deep Learning-Based Pansharpening With Jointly Enhanced Spectral and Spatial Fidelity / Ciotola, Matteo; Poggi, Giovanni; Scarpa, Giuseppe. - In: IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING. - ISSN 0196-2892. - 61:(2023), pp. 1-17. [10.1109/tgrs.2023.3299356]
Files in this product:

File: Unsupervised_Deep_Learning-Based_Pansharpening_With_Jointly_Enhanced_Spectral_and_Spatial_Fidelity.pdf
Access: open access
Type: Publisher's version (PDF)
License: not specified
Size: 5.17 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11588/987815
Citations
  • PMC: N/A
  • Scopus: 19
  • Web of Science: 12