FLAMES—Federated Learning for Advanced MEdical Segmentation / Savoia, M.; Prezioso, E.; Piccialli, F.. - In: EXPERT SYSTEMS. - ISSN 0266-4720. - 42:8(2025). [10.1111/exsy.70090]

FLAMES—Federated Learning for Advanced MEdical Segmentation

Savoia M.;Prezioso E.;Piccialli F.
2025

Abstract

Federated learning (FL) is gaining traction across numerous fields for its ability to foster collaboration among multiple participants while preserving data privacy. In the medical domain, FL enables institutions to share knowledge while maintaining control over their data, which often vary in modality, source, and quantity. Institutions are often specialised in treating one or a few types of tumours, typically focusing on a specific organ. Hence, different institutions may contribute distinct types of medical imaging data of various organs, originating from diverse machines. Collaboration among these institutions enhances performance on shared tasks across different areas of the body. The proposed framework, FLAMES, employs modality-specific models hosted on the server, each dedicated to a particular imaging modality and designed to predict the presence of tumours in scans from that modality, regardless of the organ being imaged. Clients focus on their specific imaging modality, utilising knowledge derived from images contributed by institutions employing the same modality. This approach facilitates broader collaboration, extending beyond institutions specialising in the same organ to include those working within the same imaging modality. It also avoids introducing potential noise from clients with images of different modalities, which might hinder the model's ability to specialise and adapt to the data specific to each institution. Experiments showed that FLAMES achieves strong performance on server data, even when tested across different organs, demonstrating its ability to generalise effectively across diverse medical imaging datasets. Our code is available at https://github.com/MODAL-UNINA/FLAMES.
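The abstract describes server-side models aggregated per imaging modality, so that clients sharing a modality collaborate while clients of other modalities do not mix into their model. A minimal sketch of that grouping idea, assuming a standard sample-weighted FedAvg within each modality group (the function and data layout below are illustrative, not the paper's actual implementation):

```python
from collections import defaultdict

def aggregate_by_modality(client_updates):
    """Average model weights separately for each imaging modality
    (FedAvg within a modality group), so e.g. MRI clients never
    dilute the CT server model.

    client_updates: list of (modality, weights, n_samples) tuples,
    where weights maps layer names to lists of floats.
    Returns one aggregated weight dict per modality.
    """
    groups = defaultdict(list)
    for modality, weights, n_samples in client_updates:
        groups[modality].append((weights, n_samples))

    server_models = {}
    for modality, members in groups.items():
        total = sum(n for _, n in members)  # samples in this modality group
        merged = {}
        for layer in members[0][0]:
            merged[layer] = [
                sum(w[layer][i] * n / total for w, n in members)
                for i in range(len(members[0][0][layer]))
            ]
        server_models[modality] = merged
    return server_models
```

With two CT clients and one MRI client, the CT weights are averaged together while the MRI model stays untouched by them, mirroring the modality-specific server models described above.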

Use this identifier to cite or link to this record: https://hdl.handle.net/11588/1027541