Generalized Min-Max Classifier / Rizzi, Antonello; Frattale Mascioli, Fabio Massimo; Martinelli, Giuseppe. - Print. - Vol. 1 (2000), pp. 36-41. (Paper presented at the International Conference on Fuzzy Systems (FUZZ-IEEE 2000), held in San Antonio, Texas, USA, 7-10 May 2000) [DOI: 10.1109/FUZZY.2000.838630].
Generalized Min-Max Classifier
Antonello Rizzi; Fabio Massimo Frattale Mascioli; Giuseppe Martinelli
2000
Abstract
A new neuro-fuzzy classifier, inspired by the min-max neural model, is presented. The classification strategy of Simpson's min-max classifier consists of covering the training data with hyperboxes whose boundary surfaces are constrained to be parallel to the coordinate axes of the chosen reference system. To cover each data cluster more precisely, in the present work hyperboxes are rotated by a suitable local principal component analysis, so that their orientation can be arranged along any direction of the data space. The new training algorithm is based on the ARC/PARC technique, which overcomes some undesirable properties of Simpson's original algorithm: the training result does not depend on the presentation order of the patterns, and hyperbox expansion is not limited by a fixed maximum size, so that different covering resolutions are possible. A toy problem and two real-data benchmarks are considered for illustration.
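The rotated-hyperbox idea described in the abstract can be sketched in a few lines: fit a box to a local cluster in its principal-component frame, then evaluate a fuzzy membership that is 1 inside the box and decays with the distance beyond its faces. This is an illustrative reconstruction only, not the authors' ARC/PARC training procedure; the function names, the linear decay, and the slope parameter `gamma` are assumptions made for the sketch.

```python
import numpy as np

def fit_rotated_hyperbox(points):
    """Fit a hyperbox to a local cluster in its principal-component frame.

    Sketch of the paper's rotation idea: a local PCA of the cluster gives
    a rotation whose axes follow the cluster's principal directions; the
    box is then the min/max corners of the rotated (centered) points.
    """
    points = np.asarray(points, dtype=float)
    center = points.mean(axis=0)
    # Local PCA: eigenvectors of the cluster covariance form the rotation.
    cov = np.cov(points - center, rowvar=False)
    _, rotation = np.linalg.eigh(cov)          # columns = principal axes
    projected = (points - center) @ rotation   # cluster in rotated frame
    return center, rotation, projected.min(axis=0), projected.max(axis=0)

def membership(x, box, gamma=4.0):
    """Simpson-style fuzzy membership of pattern x in a rotated hyperbox.

    Inside the box the membership is 1; outside, it decreases linearly
    with the mean violation of the min/max corners (slope gamma is an
    assumed illustrative choice, not a value from the paper).
    """
    center, rotation, vmin, vmax = box
    z = (np.asarray(x, dtype=float) - center) @ rotation
    below = np.maximum(vmin - z, 0.0)   # distance under each lower face
    above = np.maximum(z - vmax, 0.0)   # distance over each upper face
    violation = np.mean(below + above)
    return max(0.0, 1.0 - gamma * violation)
```

For a cluster stretched along the main diagonal, an axis-parallel box would also cover large empty regions; the rotated box hugs the diagonal, so points on the diagonal keep membership 1 while points equally far from the centroid but off the diagonal fall outside.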