
Effective non-random extreme learning machine / De Canditiis, Daniela; Veglianti, Fabiano. - In: NEURAL COMPUTING & APPLICATIONS. - ISSN 1433-3058. - (2025). [10.1007/s00521-025-11519-5]

Effective non-random extreme learning machine

Daniela De Canditiis; Fabiano Veglianti
2025

Abstract

The extreme learning machine (ELM) is an increasingly popular statistical technique widely applied to regression problems. In essence, ELMs are single-layer neural networks whose hidden-layer weights are randomly sampled from a specific distribution, while the output-layer weights are learned from the data. Two key challenges with this approach are the architecture design, specifically determining the optimal number of neurons in the hidden layer, and the method's sensitivity to the random initialization of hidden-layer weights. This paper introduces a new and enhanced learning algorithm for regression tasks, the Effective Non-Random ELM (ENR-ELM), which simplifies the architecture design and eliminates the need for random hidden-layer weight selection. The proposed method incorporates concepts from signal processing, such as basis functions and projections, into the ELM framework. We introduce two versions of the ENR-ELM: the approximated ENR-ELM and the incremental ENR-ELM. Experimental results on both synthetic and real datasets demonstrate that our method overcomes the problems of traditional ELM while maintaining comparable predictive performance.
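The abstract describes the traditional ELM formulation that the paper improves upon: random, untrained hidden-layer weights followed by a least-squares fit of the output-layer weights. A minimal sketch of that baseline is shown below; the ENR-ELM itself replaces the random weights with basis-function projections, whose details are not given in this record, so only the classic formulation is illustrated. Function names, the activation choice (tanh), and the hidden-layer size are illustrative assumptions.

```python
import numpy as np

def fit_elm(X, y, n_hidden=30, seed=0):
    """Fit a traditional ELM: random hidden weights, least-squares output weights."""
    rng = np.random.default_rng(seed)
    # Hidden-layer weights and biases are sampled once and never trained --
    # the source of the initialization sensitivity the abstract mentions.
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)  # hidden-layer activations, shape (n_samples, n_hidden)
    # Only the output weights are learned: least-squares solution of H @ beta ~ y.
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return W, b, beta

def predict_elm(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy 1-D regression: recover y = sin(x) on [0, 3].
X = np.linspace(0.0, 3.0, 200).reshape(-1, 1)
y = np.sin(X).ravel()
W, b, beta = fit_elm(X, y)
y_hat = predict_elm(X, W, b, beta)
```

Note that the choice of `n_hidden` here is arbitrary; picking it well, and the dependence of the fit on `seed`, are exactly the two difficulties the ENR-ELM is designed to remove.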
ELM; kernel methods; random feature learning; nonparametric regression
01 Journal publication::01g Review article (Review)
Files attached to this product

File: DeCanditiis_preprint_Effective_2025.pdf
Access: open access
Note: DOI 10.1007/s00521-025-11519-5
Type: Pre-print (manuscript submitted to the publisher, prior to peer review)
License: Creative Commons
Size: 2.53 MB
Format: Adobe PDF

File: DeCanditiis_Effective_2025.pdf
Access: archive administrators only
Type: Publisher's version (published version with the publisher's layout)
License: All rights reserved
Size: 5.6 MB
Format: Adobe PDF (contact the author)

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1743796
Citations
  • Scopus: 0