Results 1 - 18 of 18
1.
ArXiv ; 2024 Jun 27.
Article in English | MEDLINE | ID: mdl-38979486

ABSTRACT

We propose a normative model for spatial representation in the hippocampal formation that combines optimality principles, such as maximizing coding range and spatial information per neuron, with an algebraic framework for computing in distributed representation. Spatial position is encoded in a residue number system, with individual residues represented by high-dimensional, complex-valued vectors. These are composed into a single vector representing position by a similarity-preserving, conjunctive vector-binding operation. Self-consistency between the representations of the overall position and of the individual residues is enforced by a modular attractor network whose modules correspond to the grid cell modules in entorhinal cortex. The vector binding operation can also associate different contexts to spatial representations, yielding a model for entorhinal cortex and hippocampus. We show that the model achieves normative desiderata including superlinear scaling of patterns with dimension, robust error correction, and hexagonal, carry-free encoding of spatial position. These properties in turn enable robust path integration and association with sensory inputs. More generally, the model formalizes how compositional computations could occur in the hippocampal formation and leads to testable experimental predictions.

2.
ArXiv ; 2023 Nov 08.
Article in English | MEDLINE | ID: mdl-37986727

ABSTRACT

We introduce Residue Hyperdimensional Computing, a computing framework that unifies residue number systems with an algebra defined over random, high-dimensional vectors. We show how residue numbers can be represented as high-dimensional vectors in a manner that allows algebraic operations to be performed with component-wise, parallelizable operations on the vector elements. The resulting framework, when combined with an efficient method for factorizing high-dimensional vectors, can represent and operate on numerical values over a large dynamic range using vastly fewer resources than previous methods, and it exhibits impressive robustness to noise. We demonstrate the potential for this framework to solve computationally difficult problems in visual perception and combinatorial optimization, showing improvement over baseline methods. More broadly, the framework provides a possible account for the computational operations of grid cells in the brain, and it suggests new machine learning architectures for representing and manipulating numerical data.
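The core idea can be sketched concretely: each modulus gets a random complex phasor vector whose phases are multiples of 2π/m, a residue is encoded by an elementwise power of that vector, and the residues are bound into a single value vector by elementwise multiplication. The following is a minimal illustrative sketch, not the authors' implementation; the dimensionality, moduli, and brute-force decoder are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 2048                      # vector dimensionality (illustrative)
moduli = (3, 5, 7)            # pairwise coprime -> range 0..104

# One random base phasor vector per modulus; phases are multiples of 2*pi/m,
# so each base vector raised to the power m equals 1 elementwise.
bases = {m: np.exp(2j * np.pi * rng.integers(0, m, D) / m) for m in moduli}

def encode(x):
    """Bind the residue representations into one vector (elementwise product)."""
    v = np.ones(D, dtype=complex)
    for m, b in bases.items():
        v *= b ** (x % m)
    return v

def decode(v):
    """Recover x by maximizing similarity over the full range (brute force)."""
    R = int(np.prod(moduli))
    sims = [np.real(np.vdot(encode(x), v)) / D for x in range(R)]
    return int(np.argmax(sims))

x = 42
noisy = encode(x) * np.exp(1j * rng.normal(0, 0.3, D))  # add phase noise
print(decode(noisy))  # recovers 42 despite the noise
```

The brute-force decoder stands in for the efficient resonator-style factorization mentioned in the abstract; the robustness to phase noise illustrates the framework's error tolerance.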

3.
Nat Commun ; 14(1): 6033, 2023 Sep 27.
Article in English | MEDLINE | ID: mdl-37758716

ABSTRACT

A prominent approach to solving combinatorial optimization problems on parallel hardware is Ising machines, i.e., hardware implementations of networks of interacting binary spin variables. Most Ising machines leverage second-order interactions, although important classes of optimization problems, such as satisfiability problems, map more seamlessly to Ising networks with higher-order interactions. Here, we demonstrate that higher-order Ising machines can solve satisfiability problems more resource-efficiently, in terms of the number of spin variables and their connections, than traditional second-order Ising machines. Further, on a benchmark dataset of Boolean k-satisfiability problems, higher-order Ising machines implemented with coupled oscillators rapidly find solutions that are better than those found by second-order Ising machines, thus improving the current state of the art for Ising machines.
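The mapping from satisfiability to higher-order Ising energies can be made concrete: with spins s ∈ {-1, +1}, each k-literal clause contributes one degree-k product term that equals 1 exactly when the clause is unsatisfied, so zero-energy ground states coincide with satisfying assignments. The toy sketch below (an illustrative instance, not the oscillator hardware from the paper) checks this by exhaustive enumeration.

```python
import itertools

# A toy 3-SAT instance: each clause is a list of (variable_index, sign),
# with sign=+1 for the positive literal and -1 for the negated one.
clauses = [[(0, +1), (1, -1), (2, +1)],
           [(0, -1), (1, +1), (2, +1)],
           [(1, -1), (2, -1), (0, -1)]]

def energy(spins):
    """Higher-order Ising energy: each clause contributes a degree-k product
    term that is 1 iff the clause is unsatisfied, 0 otherwise."""
    e = 0
    for clause in clauses:
        term = 1
        for var, sign in clause:
            term *= (1 - sign * spins[var]) / 2  # 0 if the literal is true
        e += term
    return e

# Exhaustive check: zero-energy spin states == satisfying assignments.
ground = [s for s in itertools.product((-1, 1), repeat=3) if energy(s) == 0]
print(len(ground), "satisfying assignments, e.g.", ground[0])
```

An actual higher-order Ising machine would minimize this energy with coupled dynamics rather than enumeration; the point here is only the clause-to-energy mapping.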

4.
Neural Comput ; 35(7): 1159-1186, 2023 Jun 12.
Article in English | MEDLINE | ID: mdl-37187162

ABSTRACT

We investigate the task of retrieving information from compositional distributed representations formed by hyperdimensional computing/vector symbolic architectures and present novel techniques that achieve new information rate bounds. First, we provide an overview of the decoding techniques that can be used to approach the retrieval task. The techniques are categorized into four groups. We then evaluate the considered techniques in several settings that involve, for example, inclusion of external noise and storage elements with reduced precision. In particular, we find that the decoding techniques from the sparse coding and compressed sensing literature (rarely used for hyperdimensional computing/vector symbolic architectures) are also well suited for decoding information from the compositional distributed representations. Combining these decoding techniques with interference cancellation ideas from communications improves previously reported bounds (Hersche et al., 2021) of the information rate of the distributed representations from 1.20 to 1.40 bits per dimension for smaller codebooks and from 0.60 to 1.26 bits per dimension for larger codebooks.
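The interference-cancellation idea borrowed from communications is easy to sketch: decode the strongest codeword with a matched filter, subtract it from the superposition, and repeat. This is a minimal toy version with a random bipolar codebook; the sizes are illustrative assumptions, not the settings evaluated in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
D, M, K = 1024, 64, 8          # dimension, codebook size, stored items
codebook = rng.choice([-1.0, 1.0], size=(M, D))
stored = list(range(K))        # indices of the superimposed codewords
s = codebook[stored].sum(axis=0)

def decode_ic(vec, k):
    """Greedy matched-filter decoding with successive interference
    cancellation: peel off the strongest codeword at each step."""
    residual = vec.copy()
    found = []
    for _ in range(k):
        scores = codebook @ residual
        best = int(np.argmax(scores))
        found.append(best)
        residual -= codebook[best]   # cancel that codeword's interference
    return sorted(found)

print(decode_ic(s, K))  # recovers the stored indices 0..7
```

Each subtraction removes one source of crosstalk noise, which is why cancellation improves the achievable information rate over one-shot matched filtering.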

5.
Article in English | MEDLINE | ID: mdl-37022402

ABSTRACT

Multilayer neural networks set the current state of the art for many technical classification problems. But these networks are still, essentially, black boxes when it comes to analyzing them and predicting their performance. Here, we develop a statistical theory for the one-layer perceptron and show that it can predict the performance of a surprisingly large variety of neural networks with different architectures. A general theory of classification with perceptrons is developed by generalizing an existing theory for analyzing reservoir computing models and connectionist models for symbolic reasoning known as vector symbolic architectures. Our statistical theory offers three formulas leveraging the signal statistics with increasing detail. The formulas are analytically intractable, but can be evaluated numerically. The description level that captures maximum detail requires stochastic sampling methods. Depending on the network model, the simpler formulas already yield high prediction accuracy. The quality of the theory's predictions is assessed in three experimental settings: a memorization task for echo state networks (ESNs) from the reservoir computing literature, a collection of classification datasets for shallow randomly connected networks, and the ImageNet dataset for deep convolutional neural networks. We find that the second description level of the perceptron theory can predict the performance of types of ESNs that could not be described previously. Furthermore, the theory can predict the performance of deep multilayer neural networks when applied to their output layer. While other methods for predicting neural network performance commonly require training an estimator model, the proposed theory requires only the first two moments of the distribution of the postsynaptic sums in the output neurons. Moreover, the perceptron theory compares favorably to other methods that do not rely on training an estimator model.
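The key input to the theory is just two moments of the postsynaptic sums. A hedged sketch of the Gaussian-approximation idea (not the authors' exact formulas): estimate accuracy as the probability that the correct output unit's Gaussian-distributed sum exceeds those of all competitors, evaluated by numerical integration. All parameter values below are illustrative assumptions.

```python
import numpy as np
from math import erf, sqrt, pi, exp

def predicted_accuracy(mu_c, sd_c, mu_o, sd_o, n_classes):
    """Sketch of a moment-based prediction: P(correct unit's Gaussian sum
    exceeds the sums of all n_classes - 1 competitors, assumed i.i.d.)."""
    x = np.linspace(mu_c - 8 * sd_c, mu_c + 8 * sd_c, 4001)
    dx = x[1] - x[0]
    pdf_c = np.exp(-((x - mu_c) / sd_c) ** 2 / 2) / (sd_c * sqrt(2 * pi))
    cdf_o = np.array([0.5 * (1 + erf((xi - mu_o) / (sd_o * sqrt(2))))
                      for xi in x])
    # Integrate pdf of the correct unit times P(all others are smaller).
    return float(np.sum(pdf_c * cdf_o ** (n_classes - 1)) * dx)

# A well-separated readout is predicted to classify almost perfectly:
print(predicted_accuracy(1.0, 0.1, 0.0, 0.1, 10))
```

Note how the prediction degrades as the number of competing classes grows or the two distributions overlap more, which matches the intuition behind the simpler description levels.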

6.
IEEE Trans Neural Netw Learn Syst ; 34(5): 2191-2204, 2023 05.
Article in English | MEDLINE | ID: mdl-34478381

ABSTRACT

Variable binding is a cornerstone of symbolic reasoning and cognition. But how binding can be implemented in connectionist models has puzzled neuroscientists, cognitive psychologists, and neural network researchers for many decades. One type of connectionist model that naturally includes a binding operation is vector symbolic architectures (VSAs). In contrast to other proposals for variable binding, the binding operation in VSAs is dimensionality-preserving, which enables representing complex hierarchical data structures, such as trees, while avoiding a combinatorial expansion of dimensionality. Classical VSAs encode symbols by dense randomized vectors, in which information is distributed throughout the entire neuron population. By contrast, in the brain, features are encoded more locally, by the activity of single neurons or small groups of neurons, often forming sparse vectors of neural activation. Following Laiho et al. (2015), we explore symbolic reasoning with a special case of sparse distributed representations. Using techniques from compressed sensing, we first show that variable binding in classical VSAs is mathematically equivalent to tensor product binding between sparse feature vectors, another well-known binding operation that increases dimensionality. This theoretical result motivates us to study two dimensionality-preserving binding methods that include a reduction of the tensor matrix into a single sparse vector. One binding method, based on random projections, works for general sparse vectors; the other, block-local circular convolution, is defined for sparse vectors with block structure, known as sparse block-codes. Our experiments reveal that block-local circular convolution binding has ideal properties, whereas random-projection-based binding also works, but is lossy. We demonstrate in example applications that a VSA with block-local circular convolution and sparse block-codes reaches performance similar to that of classical VSAs. Finally, we discuss our results in the context of neuroscience and neural networks.
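For one-hot blocks, block-local circular convolution has a particularly compact form: circular convolution of two one-hot blocks adds their active indices modulo the block length. A minimal sketch, representing each sparse block-code by its vector of active indices (block count and length are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
B, L = 32, 16                  # number of blocks, block length

def random_sbc():
    """Sparse block-code: one active unit per block, stored as its index."""
    return rng.integers(0, L, B)

def bind(x, y):
    """Block-local circular convolution: for one-hot blocks this reduces
    to adding the active indices modulo the block length."""
    return (x + y) % L

def unbind(z, y):
    """Exact inverse of bind (modular subtraction of indices)."""
    return (z - y) % L

a, b = random_sbc(), random_sbc()
c = bind(a, b)
# The bound code is dissimilar to both inputs (index overlap near 1/L):
print(np.mean(c == a), np.mean(c == b))
```

The exact invertibility without dimensionality growth is the "ideal property" the experiments in the paper highlight for this binding method.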


Subjects
Cognition; Neural Networks, Computer; Brain; Problem Solving; Neurons/physiology
7.
IEEE Trans Neural Netw Learn Syst ; 34(12): 10993-10998, 2023 Dec.
Article in English | MEDLINE | ID: mdl-35333724

ABSTRACT

Memory-augmented neural networks enhance a neural network with an external key-value (KV) memory whose complexity is typically dominated by the number of support vectors in the key memory. We propose a generalized KV memory that decouples its dimension from the number of support vectors by introducing a free parameter that can arbitrarily add or remove redundancy to the key memory representation. In effect, it provides an additional degree of freedom to flexibly control the tradeoff between robustness and the resources required to store and compute the generalized KV memory. This is particularly useful for realizing the key memory on in-memory computing hardware, where it exploits nonideal but extremely efficient nonvolatile memory devices for dense storage and computation. Experimental results show that adapting this parameter on demand effectively mitigates up to 44% of nonidealities, at equal accuracy and number of devices, without any need for neural network retraining.

8.
Article in English | MEDLINE | ID: mdl-36383581

ABSTRACT

Motivated by recent innovations in biologically inspired neuromorphic hardware, this article presents a novel unsupervised machine learning algorithm named Hyperseed that draws on the principles of vector symbolic architectures (VSAs) for fast learning of a topology-preserving feature map of unlabeled data. It relies on two major operations of VSA, binding and bundling. The algorithmic part of Hyperseed is expressed within the Fourier holographic reduced representations (FHRR) model, which is specifically suited for implementation on spiking neuromorphic hardware. The two primary contributions of the Hyperseed algorithm are few-shot learning and a learning rule based on a single vector operation. These properties are empirically evaluated on synthetic datasets and on illustrative benchmark use cases: IRIS classification and a language identification task using n-gram statistics. The results of these experiments confirm the capabilities of Hyperseed and its applications in neuromorphic hardware.

9.
IEEE Trans Neural Netw Learn Syst ; 33(4): 1688-1701, 2022 Apr.
Article in English | MEDLINE | ID: mdl-33351770

ABSTRACT

We propose an approximation of echo state networks (ESNs) that can be efficiently implemented on digital hardware based on the mathematics of hyperdimensional computing. The reservoir of the proposed integer ESN (intESN) is a vector containing only n-bit integers (a small n is normally sufficient for satisfactory performance). The recurrent matrix multiplication is replaced with an efficient cyclic shift operation. The proposed intESN approach is verified on typical tasks in reservoir computing: memorizing a sequence of inputs, classifying time series, and learning dynamic processes. Such an architecture results in dramatic improvements in memory footprint and computational efficiency, with minimal performance loss. Experiments on a field-programmable gate array confirm that the proposed intESN approach is much more energy efficient than the conventional ESN.
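The core intESN recurrence is small enough to sketch: the state is an integer vector, the recurrent update is a cyclic shift, inputs are bipolar item-memory vectors, and clipping keeps activations in a small integer range. The recall probe below is a simplified matched-filter readout for the memorization task, not the trained readouts used in the paper; sizes and the symbol alphabet are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
D, kappa = 1024, 7                      # reservoir size, clipping threshold
symbols = {c: rng.choice([-1, 1], D) for c in "abcd"}  # bipolar item memory

def update(state, sym):
    """intESN update: the recurrent matrix multiply of a classical ESN is
    replaced by a cyclic shift; clipping keeps activations n-bit."""
    return np.clip(np.roll(state, 1) + symbols[sym], -kappa, kappa)

state = np.zeros(D, dtype=int)
for s in "abcabd":
    state = update(state, s)

def recall(state, delay):
    """Which symbol entered the reservoir `delay` steps ago? The symbol's
    trace sits in the state cyclically shifted by `delay` positions."""
    scores = {c: int(np.roll(v, delay) @ state) for c, v in symbols.items()}
    return max(scores, key=scores.get)

print(recall(state, 0), recall(state, 1))  # the two most recent inputs
```

Because shifts of random bipolar vectors are quasi-orthogonal, each delayed input can be read out independently until crosstalk and clipping erode the trace.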

10.
Proc IEEE Inst Electr Electron Eng ; 110(10): 1538-1571, 2022 Oct.
Article in English | MEDLINE | ID: mdl-37868615

ABSTRACT

This article reviews recent progress in the development of the computing framework Vector Symbolic Architectures (also known as Hyperdimensional Computing). This framework is well suited for implementation in stochastic, emerging hardware, and it naturally expresses the types of cognitive operations required for Artificial Intelligence (AI). We demonstrate in this article that the field-like algebraic structure of Vector Symbolic Architectures offers simple but powerful operations on high-dimensional vectors that can support all data structures and manipulations relevant to modern computing. In addition, we illustrate the distinguishing feature of Vector Symbolic Architectures, "computing in superposition," which sets them apart from conventional computing. It also opens the door to efficient solutions of the difficult combinatorial search problems inherent in AI applications. We sketch ways of demonstrating that Vector Symbolic Architectures are computationally universal. We see them acting as a framework for computing with distributed representations that can play the role of an abstraction layer for emerging computing hardware. This article serves as a reference for computer architects by illustrating the philosophy behind Vector Symbolic Architectures, techniques of distributed computing with them, and their relevance to emerging computing hardware, such as neuromorphic computing.

11.
IEEE Trans Neural Netw Learn Syst ; 33(6): 2701-2713, 2022 Jun.
Article in English | MEDLINE | ID: mdl-34699370

ABSTRACT

Various nonclassical approaches to distributed information processing, such as neural networks, reservoir computing (RC), vector symbolic architectures (VSAs), and others, employ the principle of collective-state computing. In this type of computing, the variables relevant to the computation are superimposed into a single high-dimensional state vector, the collective state. The variable encoding uses a fixed set of random patterns, which has to be stored and kept available during the computation. In this article, we show that an elementary cellular automaton with rule 90 (CA90) enables a space-time tradeoff for collective-state computing models that use random dense binary representations, i.e., memory requirements can be traded off against computation by running CA90. We investigate the randomization behavior of CA90, in particular the relation between the length of the randomization period and the size of the grid, and how CA90 preserves similarity in the presence of initialization noise. Based on these analyses, we discuss how to optimize a collective-state computing model in which CA90 expands representations on the fly from short seed patterns, rather than storing the full set of random patterns. The CA90 expansion is applied and tested in concrete scenarios using RC and VSAs. Our experimental results show that collective-state computing with CA90 expansion performs comparably to traditional collective-state models, in which random patterns are generated initially by a pseudorandom number generator and then stored in a large memory.
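Rule 90 itself is one line: each cell becomes the XOR of its two neighbors on a circular grid. The sketch below expands a short seed into a long pseudorandom pattern by concatenating successive CA90 states; the grid size and step count are illustrative assumptions, and the paper's analysis of periods versus grid size is not reproduced here.

```python
import numpy as np

def ca90_expand(seed, steps):
    """Expand a short binary seed into a long pattern by iterating
    elementary cellular automaton rule 90 on a circular grid:
    each cell becomes the XOR of its left and right neighbors."""
    rows = [seed]
    for _ in range(steps):
        prev = rows[-1]
        rows.append(np.roll(prev, 1) ^ np.roll(prev, -1))
    return np.concatenate(rows)

rng = np.random.default_rng(4)
seed = rng.integers(0, 2, 37)          # grid size is an illustrative choice
a = ca90_expand(seed, 20)
b = ca90_expand(rng.integers(0, 2, 37), 20)
# Expansions of independent seeds are dissimilar (distance near 0.5):
print(len(a), np.mean(a != b))
```

Because the expansion is deterministic, a model only needs to store the short seeds; the long random-looking patterns are regenerated on the fly whenever they are needed.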

12.
Front Neurosci ; 16: 867568, 2022.
Article in English | MEDLINE | ID: mdl-36699525

ABSTRACT

Operations on high-dimensional, fixed-width vectors can be used to distribute information from several vectors over a single vector of the same width. For example, a set of key-value pairs can be encoded into a single vector with multiplication and addition of the corresponding key and value vectors: the keys are bound to their values with component-wise multiplication, and the key-value pairs are combined into a single superposition vector with component-wise addition. The superposition vector is, thus, a memory which can then be queried for the value of any of the keys, but the result of the query is approximate. The exact vector is retrieved from a codebook (a.k.a. item memory), which contains vectors defined in the system. To perform these operations, the item memory vectors and the superposition vector must be the same width. Increasing the capacity of the memory requires increasing the width of the superposition and item memory vectors. In this article, we demonstrate that in a regime where many (e.g., 1,000 or more) key-value pairs are stored, an associative memory which maps key vectors to value vectors requires less memory and less computing to obtain the same reliability of storage as a superposition vector. These advantages are obtained because the number of storage locations in an associative memory can be increased without increasing the width of the vectors in the item memory. An associative memory would not replace a superposition vector as a medium of storage, but could augment it, because data recalled from an associative memory could be used in algorithms that use a superposition vector. This would be analogous to how human working memory (which stores about seven items) uses information recalled from long-term memory (which is much larger than the working memory).
We demonstrate the advantages of an associative memory experimentally using the storage of large finite-state automata, which could model the storage and recall of state-dependent behavior by brains.
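The superposition key-value memory described above takes only a few lines with bipolar vectors, since a ±1 key is its own multiplicative inverse. A minimal sketch with illustrative sizes (far fewer pairs than the 1,000-plus regime the article analyzes):

```python
import numpy as np

rng = np.random.default_rng(5)
D, N = 4096, 50                         # vector width, key-value pairs
keys = rng.choice([-1, 1], (N, D))
values = rng.choice([-1, 1], (N, D))

# Bind each key to its value component-wise and superpose all pairs.
memory = (keys * values).sum(axis=0)

def query(key):
    """Unbind by multiplying with the key again (k * k = 1 component-wise),
    then clean up the approximate result against the value codebook."""
    noisy = memory * key
    return int(np.argmax(values @ noisy))

print(all(query(keys[i]) == i for i in range(N)))  # exact recall of all pairs
```

As N grows at fixed D, the crosstalk from the other pairs eventually overwhelms the signal, which is exactly the regime where the article argues an associative memory becomes the cheaper option.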

13.
IEEE Trans Neural Netw Learn Syst ; 32(8): 3777-3783, 2021 08.
Article in English | MEDLINE | ID: mdl-32833655

ABSTRACT

The deployment of machine learning algorithms on resource-constrained edge devices is an important challenge from both theoretical and applied points of view. In this brief, we focus on resource-efficient randomly connected neural networks known as random vector functional link (RVFL) networks, since their simple design and extremely fast training time make them very attractive for solving many applied classification tasks. We propose to represent input features via the density-based encoding known in the area of stochastic computing and use the operations of binding and bundling from the area of hyperdimensional computing for obtaining the activations of the hidden neurons. Using a collection of 121 real-world data sets from the UCI machine learning repository, we empirically show that the proposed approach demonstrates higher average accuracy than the conventional RVFL. We also demonstrate that it is possible to represent the readout matrix using only integers in a limited range with minimal loss in accuracy. In this case, the proposed approach operates only on small n-bit integers, which results in a computationally efficient architecture. Finally, through hardware field-programmable gate array (FPGA) implementations, we show that such an approach consumes approximately 11 times less energy than the conventional RVFL.
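Density-based encoding can be illustrated with its simplest variant, a thermometer code: quantize the feature to q of n levels and set the first q components positive. This is a hedged sketch of the encoding idea only, not the full RVFL pipeline from the paper, and the level count is an illustrative assumption.

```python
import numpy as np

def density_encode(x, n):
    """Thermometer variant of density-based encoding: a feature in [0, 1]
    quantized to q of n levels becomes a bipolar vector whose first q
    components are +1 and the rest -1."""
    q = int(round(float(np.clip(x, 0.0, 1.0)) * n))
    return np.concatenate([np.ones(q), -np.ones(n - q)])

# Nearby feature values get similar codes; distance grows with the gap.
a, b, c = (density_encode(v, 20) for v in (0.30, 0.35, 0.90))
print(int(np.sum(a != b)), int(np.sum(a != c)))  # 1 and 12 flipped components
```

This similarity-preserving property is what lets the subsequent binding and bundling operations produce hidden activations that respect the metric structure of the input features.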

14.
Biomed Phys Eng Express ; 6(2): 025010, 2020 02 18.
Article in English | MEDLINE | ID: mdl-33438636

ABSTRACT

OBJECTIVE: The 2017 PhysioNet/CinC Challenge focused on automatic classification of atrial fibrillation (AF) in short ECGs. This study aimed to evaluate the use of the data and results from the challenge for detection of AF in longer ECGs, taken from three other PhysioNet datasets. APPROACH: The data-driven models used were based on features extracted from ECG recordings, calculated according to three solutions from the challenge. A Random Forest classifier was trained with the data from the challenge. The performance was evaluated on all non-overlapping 30 s segments in all recordings from three MIT-BIH datasets. Fifty-six models were trained using different feature sets, both before and after applying three feature reduction techniques. MAIN RESULTS: Based on rhythm annotations, the AF proportion was 0.00 in the MIT-BIH Normal Sinus Rhythm (N = 46083 segments), 0.10 in the MIT-BIH Arrhythmia (N = 2880), and 0.41 in the MIT-BIH Atrial Fibrillation (N = 28104) dataset. For the best performing model, the corresponding detected proportions of AF were 0.00, 0.11 and 0.36 using all features, and 0.01, 0.10 and 0.38 when using the 15 best performing features. SIGNIFICANCE: The results obtained on the MIT-BIH datasets indicate that the training data and solutions from the 2017 PhysioNet/CinC Challenge can be useful tools for developing robust AF detectors in longer ECG recordings as well, even when using a low number of carefully selected features. Feature selection allows the number of features to be reduced significantly while preserving the classification performance, which can be important when building low-complexity AF classifiers on ECG devices with constrained computational and energy resources.


Subjects
Algorithms; Atrial Fibrillation/diagnosis; Cardiology/standards; Databases, Factual; Diagnosis, Computer-Assisted/methods; Electrocardiography/methods; Signal Processing, Computer-Assisted/instrumentation; Humans
15.
IEEE Trans Biomed Eng ; 65(10): 2248-2258, 2018 10.
Article in English | MEDLINE | ID: mdl-29993470

ABSTRACT

OBJECTIVE: Novel minimum-contact vital signs monitoring techniques like textile or capacitive electrocardiogram (ECG) provide new opportunities for health monitoring. These techniques are sensitive to artifacts and require handling of unstable signal quality. Spatio-temporal blind source separation (BSS) is capable of processing such multichannel signals. However, BSS's permutation indeterminacy requires the selection of the cardiac signal (i.e., the component resembling the electric cardiac activity) after its separation from artifacts. This study evaluates different concepts for solving permutation indeterminacy. METHODS: Novel automated component selection routines based on heartbeat detections are compared with standard concepts, such as using higher-order moments or frequency-domain features, for solving permutation indeterminacy in spatio-temporal BSS. BSS was applied to a textile and a capacitive ECG dataset of healthy subjects performing a motion protocol, and to the MIT-BIH Arrhythmia Database. The performance of the subsequent component selection was evaluated by means of the heartbeat detection accuracy (ACC) using an automatically selected single component. RESULTS: The proposed heartbeat-detection-based selection routines significantly outperformed the standard selectors based on skewness, kurtosis, and frequency-domain features, especially for datasets containing motion artifacts. For arrhythmia data, beat analysis by sparse coding outperformed simple periodicity tests of the detected heartbeats. CONCLUSION: Component selection routines based on heartbeat detections are capable of reliably selecting cardiac signals after spatio-temporal BSS in the presence of severe motion artifacts and arrhythmia. SIGNIFICANCE: The availability of robust cardiac component selectors for solving permutation indeterminacy facilitates the use of spatio-temporal BSS to extract cardiac signals in artifact-sensitive minimum-contact vital signs monitoring techniques.


Subjects
Electrocardiography/methods; Heart/physiology; Signal Processing, Computer-Assisted; Algorithms; Arrhythmias, Cardiac/diagnosis; Arrhythmias, Cardiac/physiopathology; Databases, Factual; Humans
16.
IEEE Trans Neural Netw Learn Syst ; 29(12): 5880-5898, 2018 12.
Article in English | MEDLINE | ID: mdl-29993669

ABSTRACT

Hyperdimensional (HD) computing is a promising paradigm for future intelligent electronic appliances operating at low power. This paper discusses the tradeoffs involved in selecting parameters of binary HD representations when applied to pattern recognition tasks. Particular design choices include the density of representations and strategies for mapping data from the original representation. It is demonstrated that for the considered pattern recognition tasks (using synthetic and real-world data) both sparse and dense representations behave nearly identically. This paper also discusses implementation peculiarities which may favor one type of representation over the other. Finally, the capacity of representations of various densities is discussed.

17.
Neural Comput ; 30(6): 1449-1513, 2018 06.
Article in English | MEDLINE | ID: mdl-29652585

ABSTRACT

To accommodate structured approaches to neural computation, we propose a class of recurrent neural networks for indexing and storing sequences of symbols or analog data vectors. These networks, with randomized input weights and orthogonal recurrent weights, implement coding principles previously described in vector symbolic architectures (VSA) and leverage properties of reservoir computing. In general, the storage in reservoir computing is lossy, and crosstalk noise limits the retrieval accuracy and information capacity. A novel theory to optimize memory performance in such networks is presented and compared with simulation experiments. The theory describes linear readout of analog data and readout with winner-take-all error correction of symbolic data, as proposed in VSA models. We find that diverse VSA models from the literature have universal performance properties, which are superior to what previous analyses predicted. Further, we propose novel VSA models with the statistically optimal Wiener filter in the readout that exhibit much higher information capacity, in particular for storing analog data. The theory we present also applies to memory buffers, networks with gradual forgetting, which can operate on infinite data streams without memory overflow. Interestingly, we find that different forgetting mechanisms, such as attenuating recurrent weights or neural nonlinearities, produce very similar behavior if the forgetting time constants are matched. Such models exhibit extensive capacity when their forgetting time constant is optimized for given noise conditions and network size. These results enable the design of new types of VSA models for the online processing of data streams.


Subjects
Memory, Short-Term/physiology; Models, Neurological; Neural Networks, Computer; Neurons/physiology; Algorithms; Computer Simulation; Humans
18.
IEEE Trans Neural Netw Learn Syst ; 28(6): 1250-1262, 2017 06.
Article in English | MEDLINE | ID: mdl-26978836

ABSTRACT

In this paper, we propose a new approach to implementing the hierarchical graph neuron (HGN), an architecture for memorizing patterns of generic sensor stimuli, through the use of vector symbolic architectures. The adoption of a vector symbolic representation ensures a single-layer design while retaining the existing performance characteristics of HGN. This approach significantly improves the noise resistance of the HGN architecture and enables a search for an arbitrary subpattern in time linear in the number of stored entries.
