
Automatic Training of ANFIS Networks / Rizzi, Antonello; FRATTALE MASCIOLI, Fabio Massimo; Martinelli, Giuseppe. - STAMPA. - 3:(1999), pp. 1655-1660. (Paper presented at the International Conference on Fuzzy Systems (FUZZ-IEEE '99), held in Seoul, South Korea, 22-25 Aug. 1999) [10.1109/FUZZY.1999.790153].

Automatic Training of ANFIS Networks

RIZZI, Antonello; FRATTALE MASCIOLI, Fabio Massimo; MARTINELLI, Giuseppe
1999

Abstract

This paper presents an automatic training procedure for adaptive neuro-fuzzy inference system (ANFIS) networks. The network is initialized by the β-min-max fuzzy clustering procedure, a modified version of the original min-max technique by Simpson (1993). The parameter β affects the number, position and size of the resulting clusters. Since different β values yield different initializations, the optimal one is chosen by applying a well-known result of learning theory: among nets with the same performance on the training set, the one with the best generalization capability is the one with the lowest structural complexity. An automatic backpropagation-like procedure is finally used to fine-tune the optimal net. Simulation tests and comparisons with other non-automatic learning procedures are discussed.
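As a rough illustration of the selection rule described in the abstract, the Python sketch below sweeps a range of β values, builds a candidate rule base for each, and then keeps, among the candidates whose training error is within a tolerance of the best one, the structurally simplest (fewest rules). This is not the authors' method: hyperbox_clusters and training_error are hypothetical, simplified stand-ins (not the β-min-max clustering or a real ANFIS), and reading β as a bound on the hyperbox edge length is an assumption made only for illustration.

import numpy as np


def hyperbox_clusters(X, beta):
    # Greedy hyperbox covering: beta bounds the maximum edge length of a box.
    # Hypothetical stand-in for the beta-min-max fuzzy clustering step.
    boxes = []  # each box is a (min_corner, max_corner) pair of arrays
    for x in X:
        for lo, hi in boxes:
            cand_lo, cand_hi = np.minimum(lo, x), np.maximum(hi, x)
            if np.all(cand_hi - cand_lo <= beta):   # expansion keeps the box small enough
                lo[:], hi[:] = cand_lo, cand_hi
                break
        else:
            boxes.append((x.copy(), x.copy()))      # no box could absorb x: open a new one
    return boxes


def training_error(X, y, boxes):
    # Zero-order Sugeno-style surrogate: each box predicts the mean target of
    # the points nearest to its centre (simplified stand-in for an ANFIS net).
    centers = np.array([(lo + hi) / 2.0 for lo, hi in boxes])
    idx = np.argmin(((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
    rule_out = np.array([y[idx == k].mean() if np.any(idx == k) else y.mean()
                         for k in range(len(boxes))])
    return float(np.mean((rule_out[idx] - y) ** 2))


rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(200, 2))
y = np.sin(3.0 * X[:, 0]) + 0.1 * rng.normal(size=200)

candidates = []
for beta in np.linspace(0.1, 0.8, 8):               # sweep the initialization parameter
    boxes = hyperbox_clusters(X, beta)
    candidates.append((beta, len(boxes), training_error(X, y, boxes)))

# Occam's-razor selection: among candidates whose training error is within a
# tolerance of the best one, keep the structurally simplest (fewest rules).
best_err = min(err for _, _, err in candidates)
tolerable = [c for c in candidates if c[2] <= 1.10 * best_err]
beta_opt, n_rules, err = min(tolerable, key=lambda c: c[1])
print(f"selected beta={beta_opt:.2f}: {n_rules} rules, training MSE={err:.4f}")
# Backpropagation-like fine tuning of the selected net would follow here.

In the procedure described by the paper, the selected initialization would then undergo the automatic backpropagation-like fine tuning of the net's parameters.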
1999
ISBN: 0-7803-5406-0
Files attached to this record
No files are associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/243486
Warning: the displayed data have not been validated by the university.

Citations
  • PMC: N/A
  • Scopus: 18
  • Web of Science (ISI): N/A