Parallel and Distributed Methods for Constrained Nonconvex Optimization-Part II: Applications in Communications and Machine Learning / Scutari, Gesualdo; Facchinei, Francisco; Lampariello, Lorenzo; Sardellitti, Stefania; Song, Peiran. - In: IEEE TRANSACTIONS ON SIGNAL PROCESSING. - ISSN 1053-587X. - STAMPA. - 65:8(2017), pp. 1945-1960. [10.1109/TSP.2016.2637314]
Parallel and Distributed Methods for Constrained Nonconvex Optimization-Part II: Applications in Communications and Machine Learning
FACCHINEI, Francisco; SARDELLITTI, Stefania
2017
Abstract
In Part I of this paper, we proposed and analyzed a novel algorithmic framework for the minimization of a nonconvex objective function subject to nonconvex constraints, based on inner convex approximations. This Part II is devoted to the (nontrivial) application of the framework to the following relevant large-scale problems, ranging from communications to machine learning: 1) (generalizations of) the rate-profile maximization problem in MIMO interference broadcast networks; 2) the max–min fair multicast multigroup beamforming problem in a multicell environment; and 3) a general nonconvex constrained bi-criteria formulation for k-sparse variable selection in statistical learning. In 3), the two criteria are a nonconvex loss function, measuring the fit of the model to the data, and a nonconvex sparsity-inducing constraint in the general form of a difference-of-convex (DC) function, which allows one to accommodate, in a unified fashion, convex and nonconvex surrogates of the ℓ0 function. The proposed algorithms outperform current state-of-the-art schemes for 1)–3) both theoretically and numerically. For instance, they are the first distributed schemes for the classes of problems 1) and 2), and they lead to subproblems enjoying closed-form solutions.
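For concreteness regarding item 3), a minimal sketch of one such DC-constrained sparse formulation is given below; the symbols V, g, X, and the sparsity budget k are illustrative assumptions and not necessarily the paper's notation.

```latex
% Illustrative sketch of a k-sparse bi-criteria formulation (assumed notation,
% not necessarily the paper's): a nonconvex loss V is minimized over a convex
% set X, subject to a sparsity-inducing constraint built from a DC surrogate g
% of the l0 function, with g = g^+ - g^- and g^+, g^- convex.
\begin{equation*}
  \min_{\mathbf{x}\in\mathcal{X}} \; V(\mathbf{x})
  \quad \text{s.t.} \quad
  \sum_{i=1}^{n} g(x_i) \le k,
  \qquad g(x_i) \triangleq g^{+}(x_i) - g^{-}(x_i).
\end{equation*}
```

Common convex (ℓ1) and nonconvex (e.g., SCAD, MCP, exponential, logarithmic) surrogates of the ℓ0 function all admit a DC decomposition of this kind, which is what makes a unified treatment of convex and nonconvex penalties possible.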
File | Access | Type | License | Size | Format
---|---|---|---|---|---
Scutari_Parallel-and-Distributed_2017.pdf (Note: https://ieeexplore.ieee.org/abstract/document/7776965/) | Archive administrators only | Publisher's version (publisher's layout) | All rights reserved | 1.1 MB | Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.