Results 1 - 4 of 4
1.
IEEE Trans Neural Netw ; 8(2): 349-59, 1997.
Article in English | MEDLINE | ID: mdl-18255638

ABSTRACT

Proper initialization is one of the most important prerequisites for fast convergence of feedforward neural networks such as high-order and multilayer perceptrons. This publication aims at determining the optimal variance (or range) of the initial weights and biases, which is the principal parameter of random initialization methods for both types of neural networks. An overview of random weight initialization methods for multilayer perceptrons is presented. These methods are extensively tested on eight real-world benchmark data sets over a broad range of initial weight variances, by means of more than 30,000 simulations, with the aim of finding the best weight initialization method for multilayer perceptrons. For high-order networks, a large number of experiments (more than 200,000 simulations) was performed, using three weight distributions, three activation functions, several network orders, and the same eight data sets. The results of these experiments are compared to weight initialization techniques for multilayer perceptrons, leading to the proposal of a suitable initialization method for high-order perceptrons. The conclusions on the initialization methods for both types of networks are justified by sufficiently small confidence intervals of the mean convergence times.
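The abstract above treats the variance of the initial weights as the key parameter of random initialization. As a minimal sketch of what such a scheme looks like (the uniform distribution, layer sizes, and variance value here are illustrative assumptions, not details taken from the paper), one can draw weights uniformly in [-r, r], where r is chosen so the draw has the requested variance: for U(-r, r), Var = r²/3, hence r = √(3·variance).

```python
import numpy as np

def init_layer(n_in, n_out, variance, rng=None):
    """Uniform random initialization with a prescribed weight variance.

    For U(-r, r) the variance is r^2 / 3, so r = sqrt(3 * variance).
    """
    rng = rng or np.random.default_rng(0)
    r = np.sqrt(3.0 * variance)
    weights = rng.uniform(-r, r, size=(n_in, n_out))
    biases = rng.uniform(-r, r, size=n_out)
    return weights, biases

# hypothetical layer: 4 inputs, 3 units, initial weight variance 0.1
W, b = init_layer(4, 3, variance=0.1)
```

Searching over the variance parameter, as the paper does across many simulations, then amounts to repeating such an initialization for a grid of variance values and comparing convergence times.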

2.
Int J Neural Syst ; 8(5-6): 535-58, 1997.
Article in English | MEDLINE | ID: mdl-10065835

ABSTRACT

Almost all artificial neural networks are by default fully connected, which often implies high redundancy and complexity. Little research has been devoted to partially connected neural networks, despite their potential advantages: reduced training and recall time, improved generalization, reduced hardware requirements, and a step closer to biological reality. This publication presents an extensive survey of the various kinds of partially connected neural networks, organized into a clear framework, followed by a detailed comparative discussion.
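One common way to realize partial connectivity (a sketch under assumed parameters; the survey covers many other schemes) is to fix a binary connectivity mask at initialization and apply it to the weight matrix, so that only the surviving connections participate in the forward pass:

```python
import numpy as np

rng = np.random.default_rng(42)
n_in, n_out = 8, 4
density = 0.25  # fraction of connections kept (illustrative choice)

# binary mask: 1 where a connection exists, 0 where it is removed
mask = (rng.random((n_in, n_out)) < density).astype(float)
W = rng.standard_normal((n_in, n_out)) * mask  # pruned at initialization

x = rng.standard_normal(n_in)
y = np.tanh(x @ W)  # forward pass uses only the surviving connections
```

During training, the weight-gradient would likewise be multiplied by `mask` so removed connections stay removed; the reduced number of multiplications is the source of the training- and recall-time savings the abstract mentions.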


Subjects
Artificial Intelligence; Neural Networks, Computer; Nerve Net; Nonlinear Dynamics; Stochastic Processes
3.
Neural Comput ; 8(2): 451-60, 1996 Feb 15.
Article in English | MEDLINE | ID: mdl-8581889

ABSTRACT

The backpropagation algorithm is widely used for training multilayer neural networks. In this publication the gain of its activation function(s) is investigated. Specifically, it is proven that changing the gain of the activation function is equivalent to changing the learning rate and the weights. This simplifies the backpropagation learning rule by eliminating one of its parameters. The theorem can be extended to hold for some well-known variations on the backpropagation algorithm, such as using a momentum term, flat-spot elimination, or adaptive gain. Furthermore, it is successfully applied to compensate for the nonstandard gain of optical sigmoids in optical neural networks.
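The forward-pass half of this equivalence is easy to check numerically for a single unit: a logistic sigmoid with gain g applied to w·x gives the same output as a gain-1 sigmoid applied to (g·w)·x, since g·(w·x) = (g·w)·x. The sketch below is illustrative (the vector size, seed, and gain value are arbitrary assumptions):

```python
import numpy as np

def sigmoid(x, gain=1.0):
    """Logistic sigmoid with an explicit gain parameter."""
    return 1.0 / (1.0 + np.exp(-gain * x))

rng = np.random.default_rng(0)
x = rng.standard_normal(5)   # inputs
w = rng.standard_normal(5)   # weights
g = 3.0                      # activation-function gain

out_gained = sigmoid(w @ x, gain=g)   # gain-g sigmoid, weights w
out_scaled = sigmoid((g * w) @ x)     # gain-1 sigmoid, weights g * w

print(np.isclose(out_gained, out_scaled))  # prints True
```

The full theorem also accounts for the backward pass, where the factor of g shows up in the derivative and is absorbed by a corresponding rescaling of the learning rate, which is why the gain can be eliminated as a parameter altogether.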


Subjects
Neural Networks, Computer; Algorithms; Kinetics
4.
Appl Opt ; 35(26): 5301-7, 1996 Sep 10.
Article in English | MEDLINE | ID: mdl-21127522

ABSTRACT

Sigmoid-like activation functions, as available in analog hardware, differ in various ways from the standard sigmoidal function: they are usually asymmetric, truncated, and have a nonstandard gain. We present an adaptation of the backpropagation learning rule that compensates for these nonstandard sigmoids. The method is applied to multilayer neural networks with all-optical forward propagation and liquid-crystal light valves (LCLVs) as optical thresholding devices. Results of simulations of a backpropagation neural network with five different LCLV response curves as activation functions are presented. Although LCLVs perform poorly with the standard backpropagation algorithm, it is shown that our adapted learning rule performs well with these LCLV curves.
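The core idea of such an adaptation is to use the true derivative of the hardware response curve in the delta rule, rather than the textbook shortcut f′ = f(1 − f), which only holds for the standard unit-gain logistic. The sketch below uses a hypothetical LCLV-like response, an asymmetric, rescaled logistic with output range [0.1, 0.8] and gain 2.5 (all parameters invented for illustration; the paper uses five measured LCLV curves):

```python
import numpy as np

# hypothetical LCLV-like response: asymmetric range [lo, hi], nonstandard gain
lo, hi, g = 0.1, 0.8, 2.5

def act(x):
    """Rescaled logistic standing in for a measured LCLV response curve."""
    return lo + (hi - lo) / (1.0 + np.exp(-g * x))

def act_prime(x):
    """True derivative of `act`, used instead of the standard f * (1 - f)."""
    s = 1.0 / (1.0 + np.exp(-g * x))
    return (hi - lo) * g * s * (1.0 - s)

# one gradient-descent step for a single unit with target t
rng = np.random.default_rng(1)
x = rng.standard_normal(3)
w = rng.standard_normal(3)
t, lr = 0.5, 0.1

net = w @ x
err = act(net) - t
w = w - lr * err * act_prime(net) * x  # delta rule with the adapted derivative
```

Because the correct slope of the device curve enters the weight update, the rule remains a valid gradient step even though the activation is asymmetric, truncated, and has nonstandard gain.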
