Benchmarking Federated Learning on High-Performance Computing: Aggregation Methods and Their Impact

Annunziata D.; Canzaniello M.; Savoia M.; Cuomo S.; Piccialli F.
2024

Abstract

Federated Learning (FL) diverges from traditional Machine Learning (ML) by decentralizing data utilization, thereby addressing privacy concerns. The approach proceeds through iterative model updates: individual devices compute gradients on their local data, share the updates with a central server, and receive an improved global model in return. High-Performance Computing (HPC) systems can enhance FL efficiency by leveraging parallel processing. In this study, we explore FL efficiency using four aggregation methods on three datasets distributed across six clients, assess metrics such as global model accuracy and communication efficiency, and evaluate FL on HPC. We employ Flower, a versatile FL framework, in our experiments. The chosen datasets are MNIST, Digits, and Semeion Handwritten Digit, each distributed between two clients. We use NVIDIA GPUs for computation and compare the aggregation methods FedAvg, FedProx, FedOpt, and FedYogi. Metrics include convergence time, global model accuracy, communication efficiency, and HPC throughput. The results provide insights into FL performance, especially in HPC environments, with respect to convergence, communication, and resource utilization.
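To make the aggregation step concrete, the following is a minimal, hypothetical sketch (not the authors' implementation) of FedAvg's size-weighted parameter averaging in plain NumPy; the function name and the layer-list data layout are illustrative assumptions.

    import numpy as np

    def fedavg(client_params, num_examples):
        """FedAvg: average each layer's parameters across clients,
        weighted by the number of local training examples.

        client_params: one list of np.ndarray layers per client
        num_examples:  local dataset size for each client
        """
        total = sum(num_examples)
        return [
            sum(layer * (n / total) for layer, n in zip(layers, num_examples))
            for layers in zip(*client_params)  # iterate layer-wise across clients
        ]

In Flower, the aggregation method is selected by passing a strategy object to the server. The snippet below assumes Flower's documented flwr.server.strategy classes and flwr.server.start_server entry point; exact constructor arguments vary across Flower versions (for example, FedProx takes a proximal coefficient, and FedOpt/FedYogi expect initial global parameters).

    import flwr as fl

    # Swap in FedProx, FedOpt, or FedYogi here to change the aggregation
    # method (some strategies require extra constructor arguments).
    strategy = fl.server.strategy.FedAvg()

    fl.server.start_server(
        server_address="0.0.0.0:8080",
        config=fl.server.ServerConfig(num_rounds=10),
        strategy=strategy,
    )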
Annunziata, D.; Canzaniello, M.; Savoia, M.; Cuomo, S.; Piccialli, F. (2024). Benchmarking Federated Learning on High-Performance Computing: Aggregation Methods and Their Impact. In Proceedings of the 32nd Euromicro International Conference on Parallel, Distributed and Network-Based Processing (PDP 2024), pp. 207-214. DOI: 10.1109/PDP62718.2024.00036.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11588/959353
Citations
  • PMC: n/a
  • Scopus: 0
  • Web of Science: 1