When multilayer neural networks are implemented in digital hardware, which allows full exploitation of well-developed digital VLSI technologies, the multiply operations in each neuron between the weights and the inputs can create a bottleneck, because digital multipliers are very demanding in terms of time or chip area. For this reason, the use of weights constrained to powers of two has been proposed to reduce the computational requirements of such networks. In this case, because one of the two multiplier operands is a power of two, the multiply operation can be performed as a much simpler shift operation on the neuron input. While this approach greatly reduces the computational burden of the forward phase of the network, the learning phase, performed using the traditional backpropagation procedure, still requires many regular multiplications. In the paper, a new learning procedure based on the power-of-two approach is proposed that can be performed using only shift and add operations, so that both the forward and learning phases of the network can be easily implemented in digital hardware.
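The core idea of replacing a multiply by a shift can be illustrated with a minimal sketch. This is not the authors' procedure, only the general power-of-two trick: a weight of the form w = s * 2^e (with s = ±1) turns the product w * x, for integer x, into a shift and an optional negation. The function names `pow2_multiply` and `neuron_preactivation` are hypothetical, introduced here for illustration.

```python
def pow2_multiply(x: int, exponent: int, sign: int = 1) -> int:
    """Multiply integer input x by a power-of-two weight sign * 2**exponent
    using only a shift (and a negation), with no general multiplier."""
    # Positive exponents shift left (multiply), negative shift right (divide).
    shifted = x << exponent if exponent >= 0 else x >> -exponent
    return -shifted if sign < 0 else shifted

def neuron_preactivation(inputs, weights):
    """Weighted sum of a neuron whose weights are (sign, exponent) pairs,
    i.e. w = sign * 2**exponent: the sum reduces to shifts and adds."""
    return sum(pow2_multiply(x, e, s) for x, (s, e) in zip(inputs, weights))

# 5 * 2**3 = 40, computed as 5 << 3
print(pow2_multiply(5, 3))          # → 40
# 3*(+4) + 4*(-2) = 12 - 8 = 4
print(neuron_preactivation([3, 4], [(1, 2), (-1, 1)]))  # → 4
```

In hardware, the shift amount comes directly from the stored exponent, so the datapath needs only a barrel shifter and an adder per connection instead of a full multiplier.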
Backpropagation without multiplier for multilayer neural networks / Marchesi, M. L.; Piazza, F.; Uncini, A. - In: IEE PROCEEDINGS. CIRCUITS, DEVICES AND SYSTEMS. - ISSN 1350-2409. - 143:4 (1996), pp. 229-232.
Title: Backpropagation without multiplier for multilayer neural networks
Publication date: 1996
Citation: Backpropagation without multiplier for multilayer neural networks / Marchesi, M. L.; Piazza, F.; Uncini, A. - In: IEE PROCEEDINGS. CIRCUITS, DEVICES AND SYSTEMS. - ISSN 1350-2409. - 143:4 (1996), pp. 229-232.
Type: 01a Journal article