This paper presents a review of fast-learning algorithms for multilayer neural networks. Since the discovery of the back-propagation algorithm, several efforts have been made to improve the speed of convergence of learning. A general approach is to regard the training of a neural net as a nonlinear optimization problem; this makes available a number of techniques already tested and well known in other fields. Recently, some methods drawn from the signal processing field have been introduced; these solutions are closely connected to the viewpoint of optimization theory. In particular, we show the feasibility of Least Squares and Total Least Squares solutions for the learning problem; these approaches lead to fast and robust algorithms whose performance can be justified by recasting them in the optimization framework.
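As a rough illustration of the two estimators named in the abstract, the following sketch solves for the weights of a single linear layer with ordinary Least Squares and with Total Least Squares via the SVD of the augmented data matrix. The data, weights, and variable names here are hypothetical examples, not taken from the paper:

```python
import numpy as np

# Hypothetical setup: a linear layer y ≈ X @ w with known ground truth.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))         # inputs (100 samples, 3 features)
w_true = np.array([1.0, -2.0, 0.5])   # ground-truth weights
y = X @ w_true + 0.01 * rng.normal(size=100)  # slightly noisy targets

# Least Squares: minimizes ||X w - y||^2, assuming noise only in y.
w_ls, *_ = np.linalg.lstsq(X, y, rcond=None)

# Total Least Squares: allows noise in both X and y. Take the right
# singular vector of [X | y] with the smallest singular value and
# rescale it so the target component equals -1.
Z = np.column_stack([X, y])
_, _, Vt = np.linalg.svd(Z, full_matrices=False)
v = Vt[-1]                  # singular vector of the smallest singular value
w_tls = -v[:-1] / v[-1]
```

Both estimators should recover `w_true` closely here; TLS becomes preferable when the inputs themselves are noisy, which is one motivation for its use in learning problems.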
|Title:||Fast learning algorithms for feedforward neural networks|
|Publication date:||1995|
|Type:||04b Conference paper in proceedings|