A generalized learning paradigm exploiting the structure of feedforward neural networks / Parisi, Raffaele; Di Claudio, Elio; Orlandi, Gianni; Rao, B. D. - In: IEEE TRANSACTIONS ON NEURAL NETWORKS. - ISSN 1045-9227. - PRINT. - 7:6(1996), pp. 1450-1460. [10.1109/72.548172]
A generalized learning paradigm exploiting the structure of feedforward neural networks
Parisi, Raffaele; Di Claudio, Elio; Orlandi, Gianni
1996
Abstract
In this paper a general class of fast learning algorithms for feedforward neural networks is introduced and described. The approach exploits the separability of each layer into linear and nonlinear blocks and consists of two steps. The first step is the descent of the error functional in the space of the outputs of the linear blocks (descent in the neuron space), which can be performed using any preferred optimization strategy. In the second step, each linear block is optimized separately by using a least squares (LS) criterion. To demonstrate the effectiveness of the new approach, a detailed treatment of a gradient descent in the neuron space is conducted. The main properties of this approach are a higher speed of convergence with respect to methods that employ an ordinary gradient descent in the weight space, such as backpropagation (BP), better numerical conditioning, and a lower computational cost compared to techniques based on the Hessian matrix. Numerical stability is ensured by the use of robust LS linear system solvers operating directly on the input data of each layer. Experimental results obtained on three problems are described, confirming the effectiveness of the new method.
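To make the two-step idea in the abstract concrete, the following is a minimal sketch, not the authors' exact algorithm: each layer is treated as a linear block followed by a nonlinearity, a gradient step is taken on the error with respect to the linear-block outputs (the "neuron space"), and each weight matrix is then refitted by least squares to those updated outputs using the layer's own input data. The network size, step size, toy data, and all variable names are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Toy regression problem (assumed for illustration): y = sin(x) on [-pi, pi].
X = rng.uniform(-np.pi, np.pi, size=(200, 1))
Y = np.sin(X)

def augment(A):
    # Append a constant column so biases are absorbed into the weight matrices.
    return np.hstack([A, np.ones((A.shape[0], 1))])

n_hidden = 20
W1 = rng.normal(scale=0.5, size=(2, n_hidden))        # input (+bias) -> hidden linear block
W2 = rng.normal(scale=0.5, size=(n_hidden + 1, 1))    # hidden (+bias) -> output linear block
eta = 0.1                                             # step size in the neuron space (assumed)

for epoch in range(200):
    # Forward pass, keeping the outputs of the linear blocks Z1, Z2.
    X1 = augment(X)
    Z1 = X1 @ W1                  # linear block of layer 1
    A1 = np.tanh(Z1)              # nonlinear block of layer 1
    X2 = augment(A1)
    Z2 = X2 @ W2                  # linear block of layer 2 (linear output layer)
    E = Z2 - Y                    # residual of the quadratic error functional

    # Step 1: gradient descent in the neuron space (w.r.t. Z2 and Z1).
    dZ2 = E                                    # dE/dZ2 for 0.5*||Z2 - Y||^2
    dZ1 = (dZ2 @ W2[:-1].T) * (1 - A1**2)      # gradient w.r.t. layer-1 linear outputs
    T2 = Z2 - eta * dZ2                        # target linear outputs, layer 2
    T1 = Z1 - eta * dZ1                        # target linear outputs, layer 1

    # Step 2: least-squares fit of each weight matrix to its target linear outputs,
    # operating directly on the input data of the layer.
    W1 = np.linalg.lstsq(X1, T1, rcond=None)[0]
    # Recompute layer-1 activations so layer 2 is fitted against consistent inputs.
    X2 = augment(np.tanh(X1 @ W1))
    W2 = np.linalg.lstsq(X2, T2, rcond=None)[0]

mse = np.mean((augment(np.tanh(augment(X) @ W1)) @ W2 - Y) ** 2)
print(f"final training MSE: {mse:.4f}")

In this sketch the LS solves stand in for the robust LS solvers mentioned in the abstract; any numerically stable solver (e.g., QR-based) could be substituted, and the neuron-space descent could likewise use a strategy other than plain gradient descent.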