Results 1 - 2 of 2
1.
Neural Comput ; 26(11): 2493-2526, 2014 Nov.
Article in English | MEDLINE | ID: mdl-25149702

ABSTRACT

Recent advances in associative memory design through structured pattern sets and graph-based inference algorithms have allowed reliable learning and recall of an exponential number of patterns that satisfy certain subspace constraints. Although these designs correct external errors in recall, they assume neurons that compute noiselessly, in contrast to the highly variable neurons in brain regions thought to operate associatively, such as hippocampus and olfactory cortex. Here we consider associative memories with boundedly noisy internal computations and analytically characterize performance. As long as the internal noise level is below a specified threshold, the error probability in the recall phase can be made exceedingly small. More surprisingly, we show that internal noise improves the performance of the recall phase while the pattern retrieval capacity remains intact: the number of stored patterns does not decrease with noise (up to a threshold). Computational experiments lend additional support to our theoretical analysis. This work suggests a functional benefit to noisy neurons in biological neuronal networks.
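Editor's note: the setup in this abstract can be illustrated with a short sketch. The code below is a minimal, hypothetical rendering, not the authors' algorithm: stored patterns are assumed to satisfy a linear subspace constraint W @ x = 0, recall iteratively nudges neurons that see many violated constraints, and each neuron's update decision is jittered by bounded internal noise drawn from Uniform[-upsilon, +upsilon]. All names (noisy_recall, upsilon, threshold) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_recall(W, x_init, upsilon=0.1, threshold=0.5, max_iter=100):
    """Bit-flipping recall with bounded internal noise (illustrative sketch).

    W       : (m, n) constraint matrix; stored patterns satisfy W @ x = 0
    x_init  : corrupted pattern to clean up
    upsilon : internal-noise bound; each neuron's decision is perturbed by
              Uniform[-upsilon, +upsilon], mimicking noisy computation
    """
    x = x_init.astype(float).copy()
    deg = np.maximum(np.count_nonzero(W, axis=0), 1)  # constraints per neuron
    for _ in range(max_iter):
        syndrome = W @ x                    # zero iff all constraints hold
        if np.allclose(syndrome, 0.0):
            break                           # external errors corrected
        # signed fraction of violated constraints seen by each pattern neuron
        feedback = -(np.sign(W).T @ np.sign(syndrome)) / deg
        # bounded internal noise perturbs every neuron's update decision
        decision = feedback + rng.uniform(-upsilon, upsilon, size=x.shape)
        # neurons whose noisy feedback clears the threshold take a unit step
        x += np.sign(decision) * (np.abs(decision) > threshold)
    return x
```

In this toy picture, the abstract's claim corresponds to a small upsilon > 0 occasionally pushing updates past configurations where the noiseless feedback stalls just below the firing threshold, without changing the set of patterns the constraints encode.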


Subjects
Algorithms; Association Learning/physiology; Memory/physiology; Models, Neurological; Computer Simulation; Humans; Mental Recall; Neural Networks, Computer; Probability; Time Factors
2.
IEEE Trans Neural Netw Learn Syst ; 25(3): 557-570, 2014 Mar.
Article in English | MEDLINE | ID: mdl-24807451

ABSTRACT

We consider the problem of neural association for a network of nonbinary neurons. Here, the task is first to memorize a set of patterns using a network of neurons whose states take values from a finite number of integer levels. Later, the same network should be able to recall the previously memorized patterns from their noisy versions. Prior work in this area considers storing a finite number of purely random patterns and has shown that the pattern retrieval capacity (the maximum number of patterns that can be memorized) scales only linearly with the number of neurons in the network. In our formulation of the problem, we concentrate on exploiting the redundancy and internal structure of the patterns to improve the pattern retrieval capacity. Our first result shows that if the given patterns have a suitable linear-algebraic structure, i.e., form a subspace of the set of all possible patterns, then the pattern retrieval capacity is exponential in the number of neurons. The second result extends this finding to cases where the patterns have weak minor components, i.e., the smallest eigenvalues of the correlation matrix tend toward zero. We use these minor components (or the basis vectors of the pattern null space) to increase both the pattern retrieval capacity and the error correction capabilities. An iterative algorithm is proposed for the learning phase, and two simple algorithms are presented for the recall phase. Using analytical methods and simulations, we show that the proposed methods can tolerate a fair number of errors in the input while memorizing an exponentially large number of patterns.
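Editor's note: the paper's own iterative learning algorithm is not reproduced here. As a stand-in, the sketch below uses plain alternating projections (a Kaczmarz-style update) to learn one vector orthogonal to a set of subspace-structured integer patterns, i.e., one basis vector of the pattern null space. Function and variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def learn_constraint(patterns, alpha=1.0, epochs=200, tol=1e-9):
    """Find one vector (approximately) orthogonal to every stored pattern.

    Kaczmarz-style stand-in for a learning phase: each step removes (a
    fraction alpha of) the component of w along one pattern, so w drifts
    into the patterns' null space; normalizing keeps it from vanishing.
    """
    w = rng.standard_normal(patterns.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(epochs):
        worst = 0.0
        for x in patterns:
            xx = x @ x
            if xx == 0.0:
                continue                   # skip the all-zero pattern
            proj = x @ w
            w -= alpha * (proj / xx) * x   # project w off the pattern
            w /= np.linalg.norm(w)
            worst = max(worst, abs(proj))
        if worst < tol:                    # w is a dual (constraint) vector
            return w
    return w

# Toy usage: integer-valued patterns confined to a 2-D subspace of R^5,
# so the null space is 3-D and many constraint vectors exist.
basis = rng.integers(0, 4, size=(2, 5)).astype(float)
patterns = rng.integers(0, 3, size=(20, 2)).astype(float) @ basis
w = learn_constraint(patterns)
print(np.max(np.abs(patterns @ w)))        # ~0: learned constraint holds
```

In recall, each such w acts like a parity check: a nonzero w @ y flags a noisy input y, and thresholded feedback along the learned constraints can correct it. This is the sense in which subspace structure trades pattern redundancy for exponential capacity.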


Subjects
Association Learning/physiology; Memory/physiology; Models, Neurological; Neural Networks, Computer; Neurons/physiology; Algorithms; Computer Simulation; Humans; Mental Recall; Probability