Herty, Michael; Segala, Chiara; Visconti, Giuseppe. Controllability of Continuous Networks and a Kernel-Based Learning Approximation. In: Dynamic Modeling and Econometrics in Economics and Finance, 2025, pp. 135-155. DOI: 10.1007/978-3-031-85256-5_6.
Controllability of Continuous Networks and a Kernel-Based Learning Approximation
Herty, Michael; Segala, Chiara; Visconti, Giuseppe
2025
Abstract
Residual deep neural networks are formulated as interacting particle systems, leading to a description through neural differential equations and, in the case of large input data, through mean-field neural networks. The mean-field description also allows the training process to be recast as a controllability problem for the solution of the mean-field dynamics. We show theoretical results on the controllability of the linear microscopic and mean-field dynamics through the Hilbert Uniqueness Method and propose a computational approach based on kernel learning methods to solve the training problem numerically and efficiently. Further aspects of the structural properties of the mean-field equation are also reviewed.
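As a minimal sketch of the setting the abstract refers to (the notation below is an illustrative assumption and is not taken from the chapter): a residual layer update
\[
x^{k+1} = x^k + \Delta t\, f(x^k, \theta^k)
\]
can be read as an explicit Euler step of the neural differential equation
\[
\dot{x}(t) = f\big(x(t), \theta(t)\big), \qquad x(0) = x_0,
\]
and, for a large number of inputs, the distribution \(\mu(t,\cdot)\) of the states may be assumed to satisfy a mean-field transport equation of the form
\[
\partial_t \mu(t,x) + \nabla_x \cdot \big( f(x, \theta(t))\, \mu(t,x) \big) = 0,
\]
so that training corresponds to choosing the control \(\theta(t)\) that steers \(\mu(T,\cdot)\) toward a prescribed target distribution.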


