Boosting the Federation: Cross-Silo Federated Learning without Gradient Descent

Mirko Polato; Roberto Esposito; Marco Aldinucci
2022-01-01

Abstract

Abstract—Federated Learning has been proposed to develop better AI systems without compromising the privacy of final users and the legitimate interests of private companies. Initially deployed by Google to predict text input on mobile devices, FL has been deployed in many other industries. Since its introduction, Federated Learning mainly exploited the inner working of neural networks and other gradient descent-based algorithms by either exchanging the weights of the model or the gradients computed during learning. While this approach has been very successful, it rules out applying FL in contexts where other models are preferred, e.g., easier to interpret or known to work better. This paper proposes FL algorithms that build federated models without relying on gradient descent-based methods. Specifically, we leverage distributed versions of the AdaBoost algorithm to acquire strong federated models. In contrast with previous approaches, our proposal does not put any constraint on the client-side learning models. We perform a large set of experiments on ten UCI datasets, comparing the algorithms in six non-iidness settings.
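The abstract describes building federated ensembles with distributed AdaBoost instead of exchanging gradients or weights. The paper's exact protocols are not reproduced here; what follows is a minimal, hypothetical sketch of one way such a round could look, where each silo proposes a weak hypothesis trained on its local shard and the server keeps the proposal with the lowest aggregate weighted error. Decision stumps stand in for the arbitrary client-side models the paper allows; all names are illustrative.

```python
import math

# Toy federated AdaBoost sketch (hypothetical, not the paper's exact algorithm).
# Data: 1-D points with labels in {-1, +1}, partitioned across client silos.

def make_stump(threshold, polarity):
    """Weak learner: predicts `polarity` when x >= threshold, else -polarity."""
    return lambda x: polarity if x >= threshold else -polarity

def weighted_error(h, data, weights):
    """Sum of weights of the examples that h misclassifies."""
    return sum(w for (x, y), w in zip(data, weights) if h(x) != y)

def client_best_stump(local_data, local_weights):
    """Client side: pick the stump with lowest weighted error on the local shard."""
    candidates = [make_stump(x, p) for x, _ in local_data for p in (-1, 1)]
    return min(candidates, key=lambda h: weighted_error(h, local_data, local_weights))

def federated_adaboost(shards, rounds=5):
    """Server side: coordinate boosting rounds across silos (simulated in-process)."""
    data = [pt for shard in shards for pt in shard]  # global view, demo only
    w = [1.0 / len(data)] * len(data)
    ensemble = []
    for _ in range(rounds):
        # Slice the global weight vector back into per-silo pieces;
        # each silo trains locally and sends one weak hypothesis.
        proposals, i = [], 0
        for shard in shards:
            proposals.append(client_best_stump(shard, w[i:i + len(shard)]))
            i += len(shard)
        # Server keeps the proposal with the lowest global weighted error.
        h = min(proposals, key=lambda g: weighted_error(g, data, w))
        err = max(weighted_error(h, data, w), 1e-10)
        if err >= 0.5:  # weak-learning condition violated: stop
            break
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, h))
        # Re-weight: boost misclassified examples, then normalize.
        w = [wi * math.exp(-alpha * y * h(x)) for wi, (x, y) in zip(w, data)]
        z = sum(w)
        w = [wi / z for wi in w]
    return ensemble

def predict(ensemble, x):
    """Weighted-majority vote of the federated ensemble."""
    return 1 if sum(alpha * h(x) for alpha, h in ensemble) >= 0 else -1
```

In a real deployment the weight slices and hypotheses would travel over the network rather than a shared list, and the local learner could be any model, which is the point the abstract makes about not constraining client-side models.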
2022
English
contribution
1 - Conference
International Joint Conference on Neural Networks (IJCNN)
Padova, Italy
18-23 July
Proceedings of the International Joint Conference on Neural Networks (IJCNN 2022)
Scientific committee
IEEE
Piscataway, NJ 08855-1331
UNITED STATES OF AMERICA
1
10
10
federated learning, cross-silo, boosting, adaboost, ensemble learning
no
   Third party CINI - "The European PILOT - Pilot using Independent Local & Open Technologies" (H2020-JTI-EuroHPC-2020-1)
   The European PILOT
   EUROPEAN COMMISSION
   H2020
   ESPOSITO R. - H2020 RIA G.A. n. 101034126
1 – product with file in Open Access version (the file will be attached at step 6 - Upload)
3
info:eu-repo/semantics/conferenceObject
04-CONTRIBUTO IN ATTI DI CONVEGNO::04A-Conference paper in volume
Mirko Polato, Roberto Esposito, Marco Aldinucci
273
partially_open
Files in this product:

ijcnn22-internal.pdf
Open access
File type: PREPRINT (FIRST DRAFT)
Size: 499.43 kB
Format: Adobe PDF
View/Open

Boosting_the_Federation_Cross-Silo_Federated_Learning_without_Gradient_Descent.pdf
Restricted access
File type: PUBLISHER'S PDF
Size: 1.07 MB
Format: Adobe PDF
View/Open   Request a copy

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2318/1857783
Citations
  • PMC: n/a
  • Scopus: 14
  • Web of Science: 7