On-line learning of RVFL neural networks on finite precision hardware / Rosato, Antonello; Altilio, Rosa; Panella, Massimo. - (2018), pp. 1-5. (Paper presented at the IEEE International Symposium on Circuits and Systems (ISCAS 2018), held in Florence, Italy) [10.1109/ISCAS.2018.8351399].
On-line learning of RVFL neural networks on finite precision hardware
Rosato, Antonello; Altilio, Rosa; Panella, Massimo
2018
Abstract
In this paper, a new algorithm for on-line learning of Random Vector Functional-Link (RVFL) neural networks is proposed. It is specifically tailored to hardware implementations with finite precision arithmetic in distributed computing scenarios, where large numbers of low-cost hardware resources, such as sensor nodes or computing agents, are needed to cope with big data, IoT paradigms and multiple sources of information. The proposed algorithm does not require any dedicated DSP operation, such as matrix inversion or matrix multiplication, to be implemented in hardware for real-time learning. Nevertheless, experimental results show that it outperforms commonly adopted recursive least-squares algorithms optimized for hardware implementation, while the loss of performance with respect to batch training remains limited; the approach proves particularly efficient when only a small number of bits is available for the finite precision hardware implementation.
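The abstract does not detail the update rule, so the following is only a minimal sketch, not the paper's algorithm: a generic online RVFL trainer whose output weights are adjusted with an LMS-style gradient step (no matrix inversions or matrix multiplications, in contrast with recursive least-squares) and whose arithmetic is rounded to a fixed-point grid to mimic finite precision hardware. All names and parameters (OnlineRVFL, n_hidden, step, frac_bits) are illustrative assumptions.

```python
# Hedged sketch of online RVFL training under finite precision,
# assuming an LMS-style update of the output weights only.
import numpy as np

def quantize(x, frac_bits=8):
    """Round to a fixed-point grid with `frac_bits` fractional bits."""
    scale = 2.0 ** frac_bits
    return np.round(x * scale) / scale

class OnlineRVFL:
    def __init__(self, n_in, n_hidden=50, step=0.05, frac_bits=8, seed=0):
        rng = np.random.default_rng(seed)
        # Random, fixed input-to-hidden weights (the RVFL functional link)
        self.W = quantize(rng.uniform(-1, 1, size=(n_hidden, n_in)), frac_bits)
        self.b = quantize(rng.uniform(-1, 1, size=n_hidden), frac_bits)
        # Trainable output weights act on [hidden activations; direct input]
        self.beta = np.zeros(n_hidden + n_in)
        self.step = step
        self.frac_bits = frac_bits

    def _features(self, x):
        h = np.tanh(self.W @ x + self.b)  # enhancement nodes
        return quantize(np.concatenate([h, x]), self.frac_bits)

    def predict(self, x):
        return float(self._features(x) @ self.beta)

    def update(self, x, y):
        """One online step: LMS-style correction, quantized after the update."""
        z = self._features(x)
        err = y - z @ self.beta
        self.beta = quantize(self.beta + self.step * err * z, self.frac_bits)
        return err

# Toy usage: stream samples one at a time from a synthetic target.
model = OnlineRVFL(n_in=2)
for t in range(1000):
    x = np.random.uniform(-1, 1, 2)
    y = np.sin(x[0]) + 0.5 * x[1]
    model.update(x, y)
```

The point of such a gradient-style rule is that each sample costs only vector operations on the output weights, whereas a recursive least-squares update maintains and multiplies an inverse-correlation matrix at every step, which is considerably more demanding on low-cost fixed-point hardware.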
File | Access | Type | License | Size | Format
---|---|---|---|---|---
Rosato_On-line-learning_2018.pdf | Archive administrators only | Publisher's version (published with the publisher's layout) | All rights reserved | 461.79 kB | Adobe PDF