Benchmarking Federated Learning Frameworks’ Scalability
Gianluca Mittone; Samuele Fonio; Robert Birke; Marco Aldinucci
2025-01-01
Abstract
Federated learning (FL) is a distributed machine learning paradigm that allows multiple parties to cooperatively train a model while keeping their data local. FL can be deployed at widely different scales, from thousands of low-end devices (e.g., smartphones) to a handful of high-performance infrastructures (e.g., HPC systems), raising critical concerns about the scalability of state-of-the-art FL frameworks. This preliminary study evaluates the scaling performance of a representative FL framework, Flower, in high-performance controlled environments, providing insights into such frameworks from a computational-performance point of view. Two public Top500 pre-exascale HPC infrastructures, Leonardo and MareNostrum5, are used to obtain reliable and comparable results. Our findings suggest that the design of current FL frameworks, and especially of their communication backends, may overlook computational performance, leading to poor scaling in large-scale scenarios.
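To make concrete the paradigm the abstract describes, the following is a minimal, self-contained sketch of federated averaging, the canonical FL aggregation scheme. It is a toy illustration under assumed names (`local_update`, `federated_average`), not the Flower API and not the paper's experimental setup: each party trains on its private data, and only model parameters, never the data, are shared and averaged.

```python
# Toy federated averaging on a 1-D linear model y = w * x.
# Hypothetical illustration of the FL paradigm; not the Flower API.

def local_update(w, data, lr=0.1):
    """One local gradient-descent step on a client's private data."""
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def federated_average(client_weights):
    """Server-side aggregation: plain average of the client models."""
    return sum(client_weights) / len(client_weights)

# Two parties, each holding a private dataset sampled from y = 2x.
clients = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0), (4.0, 8.0)]]

w = 0.0  # global model, initialized by the server
for _ in range(50):  # communication rounds
    updates = [local_update(w, data) for data in clients]
    w = federated_average(updates)

print(round(w, 2))  # converges to the true slope 2.0
```

Each communication round here costs one parameter exchange per client; at HPC scale this exchange is handled by the framework's communication backend, which is exactly the component the study identifies as a potential scaling bottleneck.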
| File | Size | Format |
|---|---|---|
| paper40-2.pdf (open access) | 220.09 kB | Adobe PDF |
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.



