This paper presents a review of fast-learning algorithms for multilayer neural networks. Since the introduction of the back-propagation algorithm, several efforts have been made to improve the speed of convergence of learning. A general approach is to treat the training of a neural network as a nonlinear optimization problem, which makes available a number of techniques already tested and well understood in other fields. Recently, some methods drawn from the signal processing field have been introduced; these solutions are closely connected to the optimization-theoretic point of view. In particular, we show the feasibility of Least Squares and Total Least Squares solutions to the learning problem; these approaches lead to fast and robust algorithms whose performance can be justified by recasting them in the optimization framework.
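To make the distinction between the two formulations concrete, the sketch below solves for the weights of a single linear stage both ways: Ordinary Least Squares assumes noise only in the targets, while Total Least Squares (via the classical SVD construction of Golub and Van Loan) allows noise in the activations as well. This is a minimal illustration of the general technique the abstract refers to, not the algorithm from the paper; the names H, T, and W are hypothetical.

```python
import numpy as np

# Illustrative setup: hidden activations H (n_samples x n_hidden) and
# targets T (n_samples x n_outputs) for one linear stage; we solve H W = T.
rng = np.random.default_rng(0)
H = rng.standard_normal((100, 8))
T = H @ rng.standard_normal((8, 3)) + 0.01 * rng.standard_normal((100, 3))

# Ordinary Least Squares: minimizes ||H W - T||_F, noise assumed only in T.
W_ls, *_ = np.linalg.lstsq(H, T, rcond=None)

# Total Least Squares: noise allowed in both H and T. Stack [H | T],
# take the SVD, and build W from the right singular vectors associated
# with the smallest singular values (standard multidimensional TLS).
n = H.shape[1]
_, _, Vt = np.linalg.svd(np.hstack([H, T]), full_matrices=False)
V = Vt.T
V12 = V[:n, n:]   # top-right block of V
V22 = V[n:, n:]   # bottom-right block of V
W_tls = -V12 @ np.linalg.inv(V22)

# Compare residuals of the two fits.
print(np.linalg.norm(H @ W_ls - T), np.linalg.norm(H @ W_tls - T))
```

In a multilayer network such a closed-form solve would apply layer by layer rather than to the whole nonlinear map; the robustness claim for TLS comes from its errors-in-variables model, which tolerates perturbations on the inputs as well as the targets.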