
A convergent and fully distributable SVMs training algorithm / Manno, Andrea; Sagratella, Simone; Livi, Lorenzo. - 2016:(2016), pp. 3076-3080. (Paper presented at the 2016 International Joint Conference on Neural Networks, IJCNN 2016, held in Vancouver, British Columbia, Canada, 2016) [10.1109/IJCNN.2016.7727590].

A convergent and fully distributable SVMs training algorithm

MANNO, ANDREA; SAGRATELLA, SIMONE; LIVI, LORENZO
2016

Abstract

The Support Vector Machine (SVM) dual formulation has a non-separable structure that makes the design of a convergent distributed algorithm a very difficult task. Recently, some separable and distributable reformulations of the SVM training problem have been obtained by fixing one primal variable. While this strategy is effective for some applications, in other cases it can be weak, since it drastically reduces the overall final performance. In this work we present the first fully distributable algorithm for SVM training that globally converges to a solution of the original (non-separable) SVM dual formulation. Besides a detailed convergence analysis, we provide a simple demonstrative example showing the advantages of the original SVM dual formulation over the weaker separable one and highlighting the practical effectiveness of our method. We report further tests showing the practical convergence of the proposed method on real-world datasets.
2016
2016 International Joint Conference on Neural Networks, IJCNN 2016
Distributed learning; Kernel Machines; Supervised Learning; Support Vector Machines; Software; Artificial Intelligence
04 Publication in conference proceedings::04b Conference paper in volume
Files in this item
Manno_A-convergent-and-fully_2016.pdf
  Access: archive administrators only (contact the author)
  Type: Publisher's version (published with the publisher's layout)
  License: All rights reserved
  Size: 209.89 kB
  Format: Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/11573/944765
Citations
  • Scopus: 10
  • Web of Science: 10