SVF: Support Vector Federation

Polato M.; Esposito R.; Sciandra L.
2025-01-01

Abstract

In recent years, Federated Learning applied to neural networks has garnered significant attention, yet applying this approach to other machine learning algorithms remains underexplored. Support Vector Machines (SVMs), in particular, have seen limited exploration within the federated context, with existing techniques often constrained by the necessity to share the weight vector of the linear classifier. Unfortunately, this constraint severely limits the method's utility, restricting its application to linear feature spaces. This study addresses and overcomes this limitation by proposing an innovative approach: instead of sharing weight vectors, we advocate sharing support vectors while safeguarding client data privacy through vector perturbation. Simple random perturbation works remarkably well in practice, and we provide a bound on the approximation error of the learnt model that goes to zero as the number of input features grows. We also introduce a refined technique that strategically moves the support vectors along the margin of the decision function, which we empirically show slightly improves performance. Through extensive experimentation, we demonstrate that our proposed approach achieves state-of-the-art performance and consistently enables the federated classifier to match the performance of classifiers trained on the entire dataset.
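
The abstract only sketches the protocol at a high level; the Python snippet below is a minimal illustration of one plausible reading of it, assuming scikit-learn's SVC with an RBF kernel, Gaussian perturbation noise of scale noise_scale, and the hypothetical helper names client_update and server_aggregate, none of which are taken from the paper. The refined variant that moves support vectors along the margin is not shown.

import numpy as np
from sklearn.svm import SVC

def client_update(X, y, noise_scale=0.1, rng=None):
    # Train a local SVM, then share only its (perturbed) support vectors.
    rng = np.random.default_rng() if rng is None else rng
    local = SVC(kernel="rbf").fit(X, y)
    sv = local.support_vectors_
    sv_y = np.asarray(y)[local.support_]
    # Simple random perturbation for privacy; per the abstract, the induced
    # approximation error is bounded and vanishes as the feature count grows.
    return sv + rng.normal(scale=noise_scale, size=sv.shape), sv_y

def server_aggregate(payloads):
    # Fit the federated classifier on the union of all perturbed support vectors.
    X = np.vstack([sv for sv, _ in payloads])
    y = np.concatenate([sv_y for _, sv_y in payloads])
    return SVC(kernel="rbf").fit(X, y)

In use, each client would run client_update on its private data, and the server would fit the global model by calling server_aggregate on the collected payloads.
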
2025, vol. 13, pp. 77778–77789
https://ieeexplore.ieee.org/abstract/document/10967378
Federated learning; kernel method; support vector machines
File in this product:
Support Vector Federation.pdf (publisher's PDF, Adobe PDF, 1.79 MB, open access)

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2318/2075039
Citations
  • PMC: n/a
  • Scopus: 2
  • Web of Science: 1