Results 1 - 7 of 7
1.
IEEE Trans Neural Netw; 21(4): 529-50, 2010 Apr.
Article in English | MEDLINE | ID: mdl-20172827

ABSTRACT

In this paper, we present the evolution of adaptive resonance theory (ART) neural network architectures (classifiers) using a multiobjective optimization approach. In particular, we propose the use of a multiobjective evolutionary approach to simultaneously evolve the weights and the topology of three well-known ART architectures: fuzzy ARTMAP (FAM), ellipsoidal ARTMAP (EAM), and Gaussian ARTMAP (GAM). We refer to the resulting architectures as MO-GFAM, MO-GEAM, and MO-GGAM, and collectively as MO-GART. The major advantage of MO-GART is that it produces a number of solutions for the classification problem at hand that offer different trade-offs between accuracy on unseen data (generalization) and size (number of categories created). MO-GART is shown to be more elegant (it does not require user intervention to define the network parameters), more effective (better accuracy and smaller size), and more efficient (faster to produce the solution networks) than other ART neural network architectures that have appeared in the literature. Furthermore, MO-GART is shown to be competitive with other popular classifiers, such as classification and regression trees (CART) and support vector machines (SVMs).
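As a minimal sketch of the kind of trade-off output MO-GART provides, the following Python snippet extracts the Pareto front from hypothetical (accuracy, category-count) pairs. It illustrates only the multiobjective selection idea, not the authors' evolutionary operators; all data are made up.

# Sketch: extract the Pareto front of candidate classifiers scored by
# (generalization accuracy, number of categories). Hypothetical values;
# MO-GART's actual evolutionary loop is described in the paper.

def pareto_front(candidates):
    """Return candidates not dominated in (higher accuracy, smaller size)."""
    front = []
    for acc, size in candidates:
        dominated = any(a >= acc and s <= size and (a > acc or s < size)
                        for a, s in candidates)
        if not dominated:
            front.append((acc, size))
    return sorted(front)

# Hypothetical (accuracy, category-count) pairs for evolved ART networks.
population = [(0.91, 40), (0.89, 12), (0.93, 85), (0.89, 30), (0.86, 9)]
print(pareto_front(population))  # [(0.86, 9), (0.89, 12), (0.91, 40), (0.93, 85)]

Each returned pair is a network no other network beats on both objectives at once, which is exactly the menu of accuracy/size trade-offs the abstract describes.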


Subjects
Algorithms, Artificial Intelligence, Neural Networks (Computer), Computer Simulation, Fuzzy Logic, Humans, Normal Distribution, Pattern Recognition (Automated)
2.
Neural Netw; 14(9): 1279-91, 2001 Nov.
Article in English | MEDLINE | ID: mdl-11718426

ABSTRACT

In this paper we examine the issue of overtraining in Fuzzy ARTMAP. Overtraining in Fuzzy ARTMAP manifests itself in two different ways: (a) it degrades the generalization performance of Fuzzy ARTMAP as training progresses; and (b) it creates unnecessarily large Fuzzy ARTMAP neural network architectures. In this work, we demonstrate that overtraining happens in Fuzzy ARTMAP and we propose an old remedy for its cure: cross-validation. In our experiments, we compare the performance of Fuzzy ARTMAP that is trained (i) until the completion of training, (ii) for one epoch, and (iii) until its performance on a validation set is maximized. The experiments were performed on artificial and real databases. The conclusion derived from these experiments is that cross-validation is a useful procedure in Fuzzy ARTMAP, because it produces smaller Fuzzy ARTMAP architectures with improved generalization performance. The trade-off is that cross-validation introduces additional computational complexity into the training phase of Fuzzy ARTMAP.
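A hedged sketch of the validation-based stopping rule described above: train one epoch at a time, snapshot the network whenever validation accuracy improves, and return the best snapshot. The model interface (train_epoch, accuracy) is a hypothetical stand-in for a Fuzzy ARTMAP implementation, not an API from the paper.

# Sketch of cross-validation-based stopping for an incremental learner.
# `model` is any object with train_epoch(data) and accuracy(data) methods;
# this is a hypothetical interface standing in for Fuzzy ARTMAP.
import copy

def train_with_validation(model, train_set, val_set, max_epochs=100):
    """Keep the snapshot with the best validation accuracy seen so far."""
    best_model, best_acc = copy.deepcopy(model), -1.0
    for _ in range(max_epochs):
        model.train_epoch(train_set)       # one pass over the training set
        acc = model.accuracy(val_set)      # generalization estimate
        if acc > best_acc:                 # improved: snapshot the network
            best_model, best_acc = copy.deepcopy(model), acc
    return best_model, best_acc

Because earlier snapshots have created fewer categories, stopping at the validation peak also yields the smaller architectures the abstract reports.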


Subjects
Databases as Topic, Fuzzy Logic, Neural Networks (Computer), Software Validation, Algorithms
3.
IEEE Trans Neural Netw; 12(5): 1023-36, 2001.
Article in English | MEDLINE | ID: mdl-18249930

ABSTRACT

This paper describes an approach to the classification of noisy signals using a technique based on the fuzzy ARTMAP neural network (FAMNN). The proposed method is a modification of the testing phase of fuzzy ARTMAP that exhibits superior generalization performance compared to the standard fuzzy ARTMAP in the presence of noise. An application to textured gray-scale image segmentation is presented. The superiority of the proposed modification over the standard fuzzy ARTMAP is established by a number of experiments using various texture sets, feature vectors, and noise types. The texture sets include various aerial photos as well as samples obtained from the Brodatz album. Furthermore, the classification performance of the standard and the modified fuzzy ARTMAP is compared for different network sizes. Classification results that illustrate the performance of the modified algorithm and the FAMNN are presented.

4.
Neural Netw; 12(6): 837-850, 1999 Jul.
Article in English | MEDLINE | ID: mdl-12662660

ABSTRACT

This paper discusses a variation of the Fuzzy ART algorithm referred to as the Fuzzy ART Variant. The Fuzzy ART Variant is a Fuzzy ART algorithm that uses a very large choice parameter value. Based on the geometrical interpretation of the weights in Fuzzy ART, useful properties of learning associated with the Fuzzy ART Variant are presented and proven. One of these properties establishes an upper bound on the number of list presentations required by the Fuzzy ART Variant to learn an arbitrary list of input patterns. This bound is small and demonstrates the short-training time property of the Fuzzy ART Variant. Through simulation, it is shown that the Fuzzy ART Variant is as good a clustering algorithm as a Fuzzy ART algorithm that uses typical (i.e. small) values for the choice parameter.
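For concreteness, here is the standard Fuzzy ART choice function together with the effect of the large choice parameter the Variant uses. The input pattern, weights, and alpha value below are illustrative assumptions, not values from the paper.

import numpy as np

def choice_values(x, weights, alpha):
    """Fuzzy ART choice function T_j = |x ^ w_j| / (alpha + |w_j|),
    where ^ is the component-wise minimum and |.| the L1 norm."""
    return np.array([np.minimum(x, w).sum() / (alpha + w.sum()) for w in weights])

# With alpha very large (the Variant), alpha dominates every denominator,
# so categories are ranked almost purely by |x ^ w_j|.
x = np.array([0.2, 0.8, 0.8, 0.2])            # complement-coded input (hypothetical)
W = [np.array([0.1, 0.7, 0.8, 0.1]),
     np.array([0.3, 0.3, 0.6, 0.6])]
print(np.argmax(choice_values(x, W, alpha=1e6)))   # winning category index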

5.
IEEE Trans Neural Netw; 10(4): 768-78, 1999.
Article in English | MEDLINE | ID: mdl-18252576

ABSTRACT

In this paper we introduce a procedure, based on the max-min clustering method, that identifies a fixed order of training pattern presentation for fuzzy adaptive resonance theory mapping (ARTMAP). This procedure is referred to as the ordering algorithm, and the combination of this procedure with fuzzy ARTMAP is referred to as ordered fuzzy ARTMAP. Experimental results demonstrate that ordered fuzzy ARTMAP exhibits generalization performance that is better than the average generalization performance of fuzzy ARTMAP, and in certain cases as good as, or better than, the best fuzzy ARTMAP generalization performance. We also calculate the number of operations required by the ordering algorithm and compare it to the number of operations required by the training phase of fuzzy ARTMAP. We show that, under mild assumptions, the number of operations required by the ordering algorithm is a fraction of the number of operations required by fuzzy ARTMAP.
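A sketch of a max-min (farthest-first) ordering, under the assumption that the core step is selecting, at each stage, the pattern whose minimum distance to the already-ordered patterns is maximal; the paper's exact procedure (e.g., how the first pattern and the cluster seeds are chosen) may differ in detail.

import numpy as np

def max_min_order(patterns, first=0):
    """Order patterns so each next one maximizes its minimum distance
    to the patterns already selected (farthest-first traversal)."""
    X = np.asarray(patterns, dtype=float)
    order = [first]
    remaining = set(range(len(X))) - {first}
    while remaining:
        nxt = max(remaining,
                  key=lambda i: min(np.linalg.norm(X[i] - X[j]) for j in order))
        order.append(nxt)
        remaining.remove(nxt)
    return order

X = [[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [0.5, 0.9]]   # hypothetical patterns
print(max_min_order(X))   # [0, 2, 3, 1]: spread-out patterns come first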

6.
IEEE Trans Neural Netw; 9(3): 560-70, 1998.
Article in English | MEDLINE | ID: mdl-18252479

ABSTRACT

A major problem associated with geometric hashing and the methods that have emerged from it is the nonuniform distribution of invariants over the hash space. This has two serious effects on the performance of the method. First, it can result in inefficient storage of data, which can increase recognition time. Second, given that geometric hashing is highly amenable to parallel implementation, a nonuniform distribution of data poses difficulties in tackling the load-balancing problem. Finding a "good" geometric hash function which redistributes the invariants uniformly over the hash space is not easy. Current approaches make assumptions about the statistical characteristics of the data and then use techniques from probability theory to calculate a transformation that maps the nonuniform distribution of invariants to a uniform one. In this paper, a new approach is proposed based on an elastic hash table. In contrast to existing approaches, which try to redistribute the invariants over the hash bins, we proceed in the opposite direction by distributing the hash bins over the invariants. The key idea is to associate the hash bins with the output nodes of a self-organizing feature map (SOFM) neural network which is trained using the invariants as training examples. In this way, the location of a hash bin in the space of invariants is determined by the weight vector of the node associated with that hash bin. During training, the SOFM spreads the hash bins in proportion to the distribution of invariants (i.e., more hash bins are assigned to higher-density areas while fewer hash bins are assigned to lower-density areas) and adjusts their size so that they eventually hold almost the same number of invariants. The advantage of the proposed approach is that it adapts to the invariants through learning. Hence, it makes absolutely no assumptions about the statistical characteristics of the invariants, and the geometric hash function is actually computed through learning. Furthermore, the SOFM's "topology preserving" property ensures that the computed geometric hash function is well behaved. The proposed approach was shown to perform well on both artificial and real data.
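A minimal one-dimensional SOFM sketch in the spirit of the elastic hash table: node weight vectors trained on invariants become adaptive bin centers, and hashing an invariant means finding its nearest node. The network size, learning schedule, and Gaussian data below are illustrative assumptions, not the paper's settings.

import numpy as np

rng = np.random.default_rng(0)

def train_sofm(invariants, n_nodes=16, epochs=20, lr0=0.5, sigma0=3.0):
    """Train a 1-D SOFM; node weights spread out in proportion to the
    density of the invariants, acting as adaptive hash-bin centers."""
    W = rng.random((n_nodes, invariants.shape[1]))
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)                      # decaying learning rate
        sigma = max(sigma0 * (1 - t / epochs), 0.5)      # shrinking neighborhood
        for x in rng.permutation(invariants):
            winner = np.argmin(((W - x) ** 2).sum(axis=1))
            dist = np.abs(np.arange(n_nodes) - winner)   # distance on the node grid
            h = np.exp(-dist ** 2 / (2 * sigma ** 2))    # neighborhood function
            W += lr * h[:, None] * (x - W)
    return W

def hash_bin(W, x):
    return int(np.argmin(((W - x) ** 2).sum(axis=1)))    # nearest node = hash bin

invariants = rng.normal(0.5, 0.15, size=(500, 2))        # hypothetical invariants
W = train_sofm(invariants)
print(hash_bin(W, np.array([0.5, 0.5])))

Because more nodes settle where the invariants are dense, the bins end up holding roughly equal numbers of entries, which is the load-balancing property the abstract emphasizes.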

7.
IEEE Trans Neural Netw; 5(6): 873-89, 1994.
Article in English | MEDLINE | ID: mdl-18267862

ABSTRACT

A set of nonlinear differential equations that describe the dynamics of the ART1 model are presented, along with the motivation for their use. These equations are extensions of those developed by Carpenter and Grossberg (1987). It is shown how these differential equations allow the ART1 model to be realized as a collective nonlinear dynamical system. Specifically, we present an ART1-based neural network model whose description requires no external control features. That is, the dynamics of the model are completely determined by the set of coupled differential equations that comprise the model. It is shown analytically how the parameters of this model can be selected so as to guarantee a behavior equivalent to that of ART1 in both fast and slow learning scenarios. Simulations are performed in which the trajectories of node and weight activities are determined using numerical approximation techniques.
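A forward-Euler sketch of the kind of numerical approximation mentioned above, applied to the standard shunting short-term-memory equation of Carpenter and Grossberg (1987); the constants, step size, and inputs are illustrative, not taken from the paper.

import numpy as np

def shunting_step(x, J_exc, J_inh, dt=0.001, A=1.0, B=1.0, C=1.0):
    """One forward-Euler step of the shunting STM equation
    dx/dt = -A*x + (B - x)*J_exc - (x + C)*J_inh,
    which keeps each node activity bounded in [-C, B]."""
    return x + dt * (-A * x + (B - x) * J_exc - (x + C) * J_inh)

x = np.zeros(5)                                # node activities
J_exc = np.array([2.0, 0.5, 0.0, 1.0, 0.1])    # illustrative excitatory inputs
J_inh = np.full(5, 0.3)                        # illustrative inhibitory input
for _ in range(5000):                          # integrate toward equilibrium
    x = shunting_step(x, J_exc, J_inh)
print(np.round(x, 3))   # approaches (B*J_exc - C*J_inh) / (A + J_exc + J_inh)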
