
Distributed semi-supervised support vector machines

Scardapane, Simone; Di Lorenzo, P.; Panella, Massimo; Uncini, Aurelio
2016

Abstract

The semi-supervised support vector machine (S3VM) is a well-known algorithm for performing semi-supervised inference under the large-margin principle. In this paper, we are interested in the problem of training an S3VM when the labeled and unlabeled samples are distributed over a network of interconnected agents. In particular, the aim is to design a distributed training protocol over networks, where communication is restricted to neighboring agents and no coordinating authority is present. Using a standard relaxation of the original S3VM, we formulate the training problem as the distributed minimization of a non-convex social cost function. To find a (stationary) solution in a distributed manner, we employ two different strategies: (i) a distributed gradient descent algorithm; (ii) a recently developed framework for In-Network Nonconvex Optimization (NEXT), which is based on successive convexifications of the original problem, interleaved with state diffusion steps. Our experimental results show that the proposed distributed algorithms perform comparably to a centralized implementation, and we highlight the pros and cons of each solution. To date, this is the first work that paves the way toward the broad field of distributed semi-supervised learning over networks.
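As an illustrative aside (a sketch in the spirit of the abstract, not the paper's exact formulation): a common smooth relaxation of the S3VM replaces the non-differentiable hat loss on unlabeled points with an exponential penalty. For a linear classifier w shared across N agents, the resulting non-convex social cost takes a form like

$$
\min_{\mathbf{w}}\;\sum_{k=1}^{N}\left[\sum_{i\in\mathcal{L}_k}\max\!\left(0,\,1-y_i\,\mathbf{w}^{\top}\mathbf{x}_i\right)^{2}
+\lambda_U\sum_{j\in\mathcal{U}_k}\exp\!\left(-s\,(\mathbf{w}^{\top}\mathbf{x}_j)^{2}\right)\right]
+\lambda\,\lVert\mathbf{w}\rVert^{2},
$$

where agent $k$ privately holds its labeled set $\mathcal{L}_k$ and unlabeled set $\mathcal{U}_k$. The weights $\lambda$, $\lambda_U$ and the relaxation parameter $s$ are illustrative constants, and the paper's own formulation (e.g., a kernelized classifier or different loss scaling) may differ.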
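A minimal, hypothetical sketch of strategy (i), distributed gradient descent on a cost of this kind, is given below. The ring topology, mixing weights, step-size schedule, and all constants are assumptions made for illustration, not settings taken from the paper:

```python
import numpy as np

# Hypothetical sketch of distributed gradient descent for a relaxed S3VM
# social cost: squared hinge loss on labeled data + exponential penalty
# exp(-s*(w^T x)^2) on unlabeled data + L2 regularization. Topology,
# constants, and step sizes are assumptions, not the paper's settings.

rng = np.random.default_rng(0)

N, d = 5, 3                      # number of agents, feature dimension
lam, lam_u, s = 0.1, 0.5, 5.0    # regularization / relaxation constants (assumed)

# Synthetic local data: each agent holds a few labeled and unlabeled samples.
Xl = [rng.standard_normal((4, d)) for _ in range(N)]
yl = [np.sign(rng.standard_normal(4)) for _ in range(N)]
Xu = [rng.standard_normal((10, d)) for _ in range(N)]

# Simple doubly stochastic mixing matrix over a ring network (assumed topology).
W = np.zeros((N, N))
for k in range(N):
    W[k, k] = 0.5
    W[k, (k - 1) % N] = 0.25
    W[k, (k + 1) % N] = 0.25

def local_grad(w, X_lab, y_lab, X_unl):
    """Gradient of one agent's local cost: squared hinge on labeled data,
    smooth exponential relaxation on unlabeled data, L2 regularization."""
    margins = y_lab * (X_lab @ w)
    active = margins < 1                     # samples violating the margin
    g = -2 * ((1 - margins[active]) * y_lab[active]) @ X_lab[active]
    fu = X_unl @ w
    g += lam_u * (-2 * s * fu * np.exp(-s * fu**2)) @ X_unl
    return g + 2 * lam * w

w = np.zeros((N, d))              # one local estimate per agent
for t in range(200):
    step = 1.0 / (t + 10)         # diminishing step size (assumed schedule)
    w = W @ w                     # diffusion step: average with neighbors
    for k in range(N):            # local gradient step on private data
        w[k] -= step * local_grad(w[k], Xl[k], yl[k], Xu[k])

print("disagreement across agents:", np.max(np.abs(w - w.mean(axis=0))))
```

Under a diminishing step size and a connected network, diffusion schemes of this kind typically drive the local estimates toward a common stationary point of the social cost, consistent with the abstract's claim of performance comparable to a centralized implementation. Strategy (ii), NEXT, goes further by having each agent solve a convexified local subproblem and track the network-wide gradient via dynamic consensus, which this sketch does not cover.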
Semi-supervised learning; support vector machine; distributed learning; networks
01 Journal publication::01a Journal article
Distributed semi-supervised support vector machines / Scardapane, Simone; Fierimonte, R.; Di Lorenzo, P.; Panella, Massimo; Uncini, Aurelio. - In: NEURAL NETWORKS. - ISSN 0893-6080. - PRINT. - 80:(2016), pp. 43-52. [10.1016/j.neunet.2016.04.007]
Files attached to this record

File: Scardapane_Distributed-semi-supervised_2016.pdf
Type: Publisher's version (published with the publisher's layout)
License: All rights reserved
Size: 700.83 kB
Format: Adobe PDF
Access: authorized users only (contact the author)

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/873289
Citations
  • PMC: n/a
  • Scopus: 46
  • Web of Science: 40