
Convergence theorems for a class of learning algorithms with VLRPs

TIROZZI, Benedetto
1997

Abstract

We first consider the convergence of simple competitive learning with vanishing learning rate parameters (VLRPs). Examples show that even in this setting the learning fails to converge in general. This brings us to the following problem: find a family of VLRPs such that an algorithm with these VLRPs reaches the global minima with probability one. Here we present an approach different from stochastic approximation theory and determine a new family of VLRPs such that the corresponding learning algorithm escapes the metastable states with probability one. In the literature it is generally believed that a reasonable family of VLRPs is of the order of 1/t^α for 1/2 < α ≤ 1, where t is the time. However, we find that a family of VLRPs which drives the algorithm to the global minima should lie between 1/log t and 1/√(log t).
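The abstract's setting can be illustrated with a minimal sketch of simple (winner-take-all) competitive learning using a slowly vanishing learning rate of order 1/log t, the regime the paper identifies. This is not the paper's construction — the function name, constants, and the specific schedule η_t = c/log(t + e) are illustrative assumptions.

```python
import numpy as np

def competitive_learning(data, n_units=2, n_steps=5000, c=1.0, seed=0):
    """Winner-take-all competitive learning with a vanishing learning
    rate eta_t = c / log(t + e), an illustrative schedule in the
    1/log t regime discussed in the abstract."""
    rng = np.random.default_rng(seed)
    # Initialise the weight vectors from randomly chosen data points.
    w = data[rng.choice(len(data), n_units, replace=False)].astype(float)
    for t in range(n_steps):
        x = data[rng.integers(len(data))]          # draw a random input
        eta = c / np.log(t + np.e)                 # slowly vanishing rate
        winner = np.argmin(np.linalg.norm(w - x, axis=1))
        w[winner] += eta * (x - w[winner])         # move winner toward input
    return w

# Usage: two well-separated Gaussian clusters; each unit should settle
# near one cluster centre.
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0.0, 0.1, (200, 2)),
                  rng.normal(5.0, 0.1, (200, 2))])
weights = competitive_learning(data, n_units=2)
```

Because 1/log t decreases very slowly, the weights keep fluctuating late into training — the point of the paper is that this slow decay is what lets the dynamics escape metastable states, at the cost of slower final convergence than a 1/t^α schedule.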
1997
simple competitive learning; simulated annealing; stochastic approximation theory; vanishing learning rate parameters (vlrps)
01 Journal publication::01a Journal article
Convergence theorems for a class of learning algorithms with VLRPs / J. F., Feng; Tirozzi, Benedetto. - In: NEUROCOMPUTING. - ISSN 0925-2312. - 15:1(1997), pp. 45-68. [10.1016/s0925-2312(96)00043-4]
Files attached to this item
There are no files associated with this item.

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/11573/31067
Warning! The displayed data have not been validated by the university.

Citations
  • PMC: ND
  • Scopus: 0
  • ISI: 0