Fast adaptive digital equalization by recurrent neural networks / Parisi, Raffaele; Di Claudio, Elio; Orlandi, Gianni; Rao, B. D. - In: IEEE TRANSACTIONS ON SIGNAL PROCESSING. - ISSN 1053-587X. - PRINT. - 45:11(1997), pp. 2731-2739. [10.1109/78.650099]
Fast adaptive digital equalization by recurrent neural networks
PARISI, Raffaele; DI CLAUDIO, Elio; ORLANDI, Gianni
1997
Abstract
In recent years, neural networks (NN's) have been extensively applied to many signal processing problems. In particular, due to their capacity to form complex decision regions, NN's have been successfully used in adaptive equalization of digital communication channels. The mean square error (MSE) criterion, which is usually adopted in neural learning, is not directly related to the minimization of the classification error, i.e., the bit error rate (BER), which is of interest in channel equalization. Moreover, common gradient-based learning techniques are often characterized by slow convergence and numerical ill conditioning. In this paper, we introduce a novel approach to learning in recurrent neural networks (RNN's) that exploits the principle of discriminative learning, minimizing an error functional that is a direct measure of the classification error. The proposed method extends to RNN's a technique successfully applied to fast learning of feedforward NN's and is based on the descent of the error functional in the space of the linear combinations of the neurons (the neuron space); its main features are higher speed of convergence and better numerical conditioning with respect to gradient-based approaches, while numerical stability is ensured by the use of robust least squares solvers. Experiments on the equalization of PAM signals in different transmission channels are described, demonstrating the effectiveness of the proposed approach.
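The core idea summarized in the abstract — replacing iterative gradient descent with a direct, numerically robust least-squares solve over the linear combinations feeding the neurons — can be illustrated with a toy equalization sketch. This is a deliberately simplified linear stand-in, not the paper's RNN algorithm: the channel taps, noise level, feature choice, and symbol alphabet below are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# BPSK symbols sent through a short ISI channel (hypothetical setup).
N = 2000
s = rng.choice([-1.0, 1.0], size=N)
h = np.array([1.0, 0.5])                 # assumed channel impulse response
x = np.convolve(s, h)[:N]
x += 0.05 * rng.standard_normal(N)       # mild additive noise

# Feature matrix: current and past received samples plus a bias term.
# Instead of iterating a gradient, the output weights are obtained in one
# shot by a least-squares solve -- mirroring the "descent in the space of
# the neurons' linear combinations" idea, here in its simplest linear form.
d = 3
A = np.column_stack([np.roll(x, k) for k in range(d)] + [np.ones(N)])
A, target = A[d:], s[d:]                 # drop edge samples affected by roll

w, *_ = np.linalg.lstsq(A, target, rcond=None)
decisions = np.sign(A @ w)
ber = np.mean(decisions != target)
print(f"BER: {ber:.4f}")
```

With a well-conditioned feature matrix the solve converges immediately, which is the practical appeal the abstract claims for the neuron-space approach over slow, ill-conditioned gradient iterations.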