Results 1 - 6 of 6
1.
Neural Netw ; 54: 17-37, 2014 Jun.
Article in English | MEDLINE | ID: mdl-24637071

ABSTRACT

Determining good initial conditions for an algorithm used to train a neural network can be considered a parameter estimation problem dealing with uncertainty about the initial weights. Interval analysis approaches model uncertainty in parameter estimation problems using intervals and formulating tolerance problems. Solving a tolerance problem means defining lower and upper bounds on the intervals so that system functionality is guaranteed within predefined limits. The aim of this paper is to show how the problem of determining the initial weight intervals of a neural network can be cast as a linear interval tolerance problem. The proposed linear interval tolerance approach copes with uncertainty about the initial weights without any prior knowledge or specific assumptions about the input data, as required by approaches such as fuzzy sets or rough sets. The proposed method is tested on a number of well-known benchmarks for neural networks trained with the back-propagation family of algorithms. Its efficiency is evaluated with regard to standard performance measures, and the results are compared against those of a number of well-known, established initialization methods. These results provide credible evidence that the proposed method outperforms classical weight initialization methods.
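As a concrete illustration of the scheme the abstract describes, the sketch below samples initial weights uniformly from per-weight intervals. The bounds here are supplied directly and are purely hypothetical; in the paper they would come from solving the linear interval tolerance problem.

```python
import numpy as np

def init_weights_from_intervals(lower, upper, rng=None):
    """Sample initial weights uniformly from per-weight intervals.
    In the paper the bounds come from solving a linear interval
    tolerance problem; here they are supplied directly."""
    rng = np.random.default_rng() if rng is None else rng
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    assert np.all(lower <= upper), "each interval needs lower <= upper"
    return rng.uniform(lower, upper)

# Hypothetical bounds for the weights of a 3x2 layer.
lo = -0.5 * np.ones((3, 2))
hi = 0.5 * np.ones((3, 2))
W0 = init_weights_from_intervals(lo, hi)
assert W0.shape == (3, 2) and np.all((W0 >= lo) & (W0 <= hi))
```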


Subjects
Linear Models, Computer Neural Networks, Problem Solving, Algorithms, Humans, Theoretical Models, Regression Analysis
2.
Comput Methods Programs Biomed ; 81(3): 228-35, 2006 Mar.
Article in English | MEDLINE | ID: mdl-16476503

ABSTRACT

Distributed computing is a process through which a set of computers connected by a network is used collectively to solve a single problem. In this paper, we propose a distributed computing methodology for training neural networks to detect lesions in colonoscopy. Our approach is based on partitioning the training set across multiple processors using a parallel virtual machine. In this way, interconnected computers of varied architectures can be used for the distributed evaluation of the error function and gradient values, and thus for training neural networks with various learning methods. The proposed methodology has large granularity and low synchronization requirements, and has been implemented and tested. Our results indicate that the parallel virtual machine implementation of the training algorithms developed leads to considerable speedup, especially when large network architectures and training sets are used.
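The core idea of the abstract, evaluating the error gradient on disjoint partitions of the training set and combining the partial results, can be sketched as follows. A linear model with squared error stands in for the neural network, and the partitions are processed in one process here; in the paper's setting, each would run on a separate machine of the parallel virtual machine.

```python
import numpy as np

def partial_grad(w, X, y):
    """Error-gradient contribution of one training-set partition; a
    linear model with squared error stands in for the neural network."""
    return X.T @ (X @ w - y)

def distributed_gradient(w, partitions):
    """Sum the partial gradients.  Each call is independent, so in the
    paper's setting each partition is evaluated on its own machine."""
    return sum(partial_grad(w, X, y) for X, y in partitions)

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))
y = rng.standard_normal(100)
w = np.zeros(3)

# Split the training set across 4 hypothetical workers.
parts = list(zip(np.array_split(X, 4), np.array_split(y, 4)))
g_dist = distributed_gradient(w, parts)
assert np.allclose(g_dist, partial_grad(w, X, y))  # matches single-machine
```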


Subjects
Colonoscopy/methods, Computer Neural Networks, Algorithms, Artificial Intelligence, Computer Simulation, Computing Methodologies, Humans, Computer-Assisted Image Processing, Computer-Assisted Numerical Analysis, Automated Pattern Recognition, Programming Languages, Computer-Assisted Signal Processing, Software, Software Design, User-Computer Interface
3.
Comput Biol Med ; 36(10): 1126-42, 2006 Oct.
Article in English | MEDLINE | ID: mdl-16246320

ABSTRACT

The development of microarray technologies gives scientists the ability to examine, discover, and monitor the mRNA transcript levels of thousands of genes in a single experiment. Nonetheless, the tremendous amount of data that can be obtained from microarray studies presents a challenge for data analysis. The most commonly used computational approach for analyzing microarray data is cluster analysis, since the number of genes is usually very high compared to the number of samples. In this paper, we investigate the application of the recently proposed k-windows clustering algorithm to gene expression microarray data. Apart from identifying the clusters present in a data set, this algorithm also determines their number, and thus requires no special knowledge about the data. To improve the quality of the clustering, we employ various dimension reduction techniques and propose a hybrid one. The results obtained by applying the algorithm exhibit high classification success.
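k-windows itself is not available in standard libraries, so the sketch below illustrates only the general reduce-then-cluster pipeline the abstract describes, with SVD-based PCA for dimension reduction and plain k-means (with a fixed, assumed number of clusters) standing in for k-windows. The data are entirely synthetic.

```python
import numpy as np

def pca_reduce(X, k):
    """Project data onto its top-k principal components (via SVD)."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def kmeans(X, k, iters=50):
    """Plain k-means with deterministic farthest-point initialization,
    standing in for k-windows (which also infers k, not sketched here)."""
    centers = [X[0]]
    for _ in range(1, k):
        d = ((X[:, None, :] - np.asarray(centers)[None]) ** 2).sum(-1).min(axis=1)
        centers.append(X[int(np.argmax(d))])
    centers = np.asarray(centers)
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.asarray([X[labels == j].mean(axis=0) for j in range(k)])
    return labels

# Two well-separated synthetic "expression profiles" in 50 dimensions
# (made-up data, only to exercise the pipeline).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.2, (20, 50)),
               rng.normal(3.0, 0.2, (20, 50))])
Z = pca_reduce(X, 2)      # reduce dimension before clustering
labels = kmeans(Z, 2)
assert labels[0] != labels[20]  # the two groups get different labels
```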


Subjects
Artificial Intelligence, Cluster Analysis, Gene Expression Profiling/methods, Mathematical Computing, Neoplasms/genetics, Oligonucleotide Array Sequence Analysis/methods, Messenger RNA/genetics, Algorithms, Colonic Neoplasms/genetics, Female, Humans, Acute Myeloid Leukemia/genetics, Lymphoma/genetics, Male, Computer Neural Networks, Precursor Cell Lymphoblastic Leukemia-Lymphoma/genetics, Prostatic Neoplasms/genetics, Reproducibility of Results, Software
4.
IEEE Trans Neural Netw ; 13(3): 774-9, 2002.
Article in English | MEDLINE | ID: mdl-18244474

ABSTRACT

A novel generalized theoretical result is presented that underpins the development of globally convergent first-order batch training algorithms which employ local learning rates. This result allows us to equip algorithms of this class with a strategy for adapting the overall direction of search to a descent one. In this way, a decrease of the batch-error measure at each training iteration is ensured, and convergence of the sequence of weight iterates to a local minimizer of the batch error function is obtained from remote initial weights. The effectiveness of the theoretical result is illustrated in three application examples by comparing two well-known training algorithms with local learning rates to their globally convergent modifications.
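The kind of safeguard such strategies rely on can be sketched as follows: form a direction from per-weight learning rates, then test it and, if needed, correct it so it remains a descent direction. The angle test constant and the blending rule here are illustrative stand-ins, not the paper's exact theoretical conditions.

```python
import numpy as np

def safeguarded_direction(grad, local_rates, c=1e-4):
    """Build a search direction from per-weight learning rates and, if it
    fails an angle-based descent test, blend it toward steepest descent.
    The test constant and the blending rule are illustrative, not the
    paper's exact conditions."""
    d = -local_rates * grad
    gnorm, dnorm = np.linalg.norm(grad), np.linalg.norm(d)
    if dnorm == 0.0 or gnorm == 0.0:
        return -grad  # degenerate direction: fall back to steepest descent
    if d @ grad > -c * dnorm * gnorm:  # angle with -grad too close to 90 deg
        d = 0.5 * (d - grad)           # pull toward steepest descent
    return d

g = np.array([1.0, -2.0, 0.5])
rates = np.array([0.1, 0.01, 0.5])
d = safeguarded_direction(g, rates)
assert d @ g < 0  # guaranteed descent direction
```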

5.
IEEE Trans Neural Netw ; 13(6): 1268-84, 2002.
Article in English | MEDLINE | ID: mdl-18244526

ABSTRACT

We present deterministic nonmonotone learning strategies for multilayer perceptrons (MLPs), i.e., deterministic training algorithms in which error function values are allowed to increase at some epochs. To this end, we argue that the current error function value must satisfy a nonmonotone criterion with respect to the maximum error function value of the M previous epochs, and we propose a subprocedure to dynamically compute M. The nonmonotone strategy can be incorporated into any batch training algorithm and provides fast, stable, and reliable learning. Experimental results on different classes of problems show that this approach improves the convergence speed and success rate of first-order training algorithms and alleviates the need for fine-tuning problem-dependent heuristic parameters.
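The nonmonotone acceptance criterion can be sketched as a test of the new error value against the maximum error of the last M epochs. M is fixed here for simplicity; the paper computes it dynamically with a subprocedure.

```python
from collections import deque

def make_nonmonotone_test(M):
    """Return a test that accepts a new error value iff it does not
    exceed the maximum error of the previous M accepted epochs.
    M is fixed here; the paper computes it dynamically."""
    history = deque(maxlen=M)

    def accept(new_error):
        ok = not history or new_error <= max(history)
        if ok:
            history.append(new_error)
        # A rejected value would trigger, e.g., a step-size reduction.
        return ok

    return accept

accept = make_nonmonotone_test(M=5)
assert accept(2.0)
assert accept(1.0)
assert accept(1.5)       # increase over the previous epoch is allowed
assert not accept(2.5)   # but not above the M-epoch maximum
```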

6.
Neural Comput ; 11(7): 1769-96, 1999 Oct 01.
Article in English | MEDLINE | ID: mdl-10490946

ABSTRACT

This article focuses on gradient-based backpropagation algorithms that use either a common adaptive learning rate for all weights or an individual adaptive learning rate for each weight and apply the Goldstein/Armijo line search. The learning-rate adaptation is based on descent techniques and estimates of the local Lipschitz constant that are obtained without additional error function and gradient evaluations. The proposed algorithms improve the backpropagation training in terms of both convergence rate and convergence characteristics, such as stable learning and robustness to oscillations. Simulations are conducted to compare and evaluate the convergence behavior of these gradient-based training algorithms with several popular training methods.
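A secant-style estimate of the local Lipschitz constant uses only the weights and gradients of two consecutive iterations, so it costs no extra error-function or gradient evaluations. The sketch below pairs it with the step rule 1/(2L); the paper's exact adaptation schemes differ.

```python
import numpy as np

def lipschitz_step(w_prev, w_curr, g_prev, g_curr, eta_max=1.0):
    """Estimate the local Lipschitz constant L of the gradient from two
    consecutive iterates and set the learning rate to 1/(2L), using only
    quantities already computed during training."""
    dw = np.linalg.norm(w_curr - w_prev)
    dg = np.linalg.norm(g_curr - g_prev)
    if dw == 0.0 or dg == 0.0:
        return eta_max
    L = dg / dw
    return min(1.0 / (2.0 * L), eta_max)

# For f(w) = 0.5 * a * w**2 the gradient is a*w, so L equals a exactly.
a = 4.0
w0, w1 = np.array([1.0]), np.array([0.5])
eta = lipschitz_step(w0, w1, a * w0, a * w1)
assert abs(eta - 1.0 / (2.0 * a)) < 1e-12
```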


Subjects
Psychological Adaptation, Algorithms, Artificial Intelligence, Neurological Models