
Efficient Federated Model Aggregation through Neural Velocity

Dalmasso G.; Fiandrotti A.; Grangetto M.
2025-01-01

Abstract

Aggregation methods in Federated Learning (FL) play a fundamental role in the performance and convergence of the global model. In this paper, we propose a novel aggregation strategy based on the concept of federated neural velocity. Neural velocity estimates the rate of change of a neuron's learned function, providing insights into model convergence. Leveraging this property, we design a dynamic training approach where the central model's aggregation is adapted according to the neural velocity of the clients participating in the training process. We validate our method on multiple datasets and under both Independent and Identically Distributed (IID) and non-IID data distributions, demonstrating its effectiveness in improving model performance and robustness while addressing challenges related to data heterogeneity and resource management on edge devices. Moreover, due to its modular nature, our approach can be seamlessly integrated into advanced federated learning frameworks, including client selection strategies, to further improve training efficiency.
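The abstract describes adapting the server-side aggregation according to each client's neural velocity, i.e., how fast its learned function is still changing. The paper's exact formulation is not given here, so the following is only an illustrative sketch under stated assumptions: velocity is approximated as the per-step parameter change magnitude, and clients with lower velocity (closer to convergence) are assumed to receive higher aggregation weight. Both the proxy and the weighting rule are hypothetical, not the authors' method.

```python
import numpy as np

def neural_velocity(prev_params, new_params, local_steps):
    """Hypothetical proxy for neural velocity: average per-step
    magnitude of a client's parameter change (illustrative only;
    the paper defines velocity at the level of a neuron's function)."""
    delta = np.concatenate([(n - p).ravel()
                            for p, n in zip(prev_params, new_params)])
    return np.linalg.norm(delta) / max(local_steps, 1)

def velocity_weighted_aggregate(global_params, client_updates, local_steps):
    """Aggregate client models with velocity-dependent weights.
    Assumption (not from the paper): lower velocity means closer to
    convergence, so such clients are weighted more heavily."""
    velocities = [neural_velocity(global_params, upd, s)
                  for upd, s in zip(client_updates, local_steps)]
    weights = np.array([1.0 / (1e-8 + v) for v in velocities])
    weights /= weights.sum()  # normalize to a convex combination
    return [sum(w * upd[i] for w, upd in zip(weights, client_updates))
            for i in range(len(global_params))]
```

Because the weights form a convex combination, the aggregated model always lies inside the hull of the client updates; only the relative influence of each client shifts with its estimated velocity.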
Year: 2025
Conference: 2025 Federated Learning and Edge AI for Privacy and Mobility, FLEdge-AI 2025
Published in: FLEdge-AI 2025 - Proceedings of the 2025 Federated Learning and Edge AI for Privacy and Mobility
Publisher: Association for Computing Machinery, Inc
Pages: 30-36
ISBN: 9798400719769
URL: https://dl.acm.org/doi/10.1145/3737899.3768519
Keywords: Artificial neural networks; Deep learning; Federated learning; Hyper-heuristics; Neural velocity
Authors: Dalmasso G.; De Gusmao P.P.B.; Fiandrotti A.; Grangetto M.
Files for this product:
No files are associated with this product.

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2318/2116811
Citations
  • PMC: n/a
  • Scopus: 0
  • Web of Science: n/a