Results 1 - 6 of 6
1.
Neural Comput ; 22(11): 2729-62, 2010 Nov.
Article in English | MEDLINE | ID: mdl-20804386

ABSTRACT

We compare 10 methods of classifying fMRI volumes by applying them to data from a longitudinal study of stroke recovery: adaptive Fisher's linear and quadratic discriminant; gaussian naive Bayes; support vector machines with linear, quadratic, and radial basis function (RBF) kernels; logistic regression; two novel methods based on pairs of restricted Boltzmann machines (RBM); and K-nearest neighbors. All methods were tested on three binary classification tasks, and their out-of-sample classification accuracies were compared. The relative performance of the methods varies considerably across subjects and classification tasks. The best overall performers were adaptive quadratic discriminant, support vector machines with RBF kernels, and generatively trained pairs of RBMs.
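Most of the standard classifiers in this comparison are available off the shelf. A minimal sketch (not the paper's code; the adaptive discriminants and RBM pairs are omitted, and synthetic data stands in for fMRI volumes) of such a comparison using scikit-learn:

```python
# Sketch: compare several of the listed classifiers by cross-validated
# accuracy on synthetic data standing in for fMRI feature vectors.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=50, n_informative=10,
                           random_state=0)  # stand-in for fMRI volumes

classifiers = {
    "LDA": LinearDiscriminantAnalysis(),
    "QDA": QuadraticDiscriminantAnalysis(reg_param=0.1),
    "Gaussian naive Bayes": GaussianNB(),
    "SVM (linear)": SVC(kernel="linear"),
    "SVM (quadratic)": SVC(kernel="poly", degree=2),
    "SVM (RBF)": SVC(kernel="rbf"),
    "Logistic regression": LogisticRegression(max_iter=1000),
    "K-nearest neighbors": KNeighborsClassifier(n_neighbors=5),
}

# Out-of-sample accuracy via 5-fold cross-validation, best first.
scores = {name: cross_val_score(clf, X, y, cv=5).mean()
          for name, clf in classifiers.items()}
for name, acc in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name:22s} {acc:.3f}")
```

As in the study, which classifier wins depends on the data; the point of the harness is that all methods see identical folds.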


Subject(s)
Image Interpretation, Computer-Assisted/methods; Magnetic Resonance Imaging; Pattern Recognition, Automated/methods; Stroke/pathology; Algorithms; Humans
2.
IEEE Trans Pattern Anal Mach Intell ; 30(8): 1415-26, 2008 Aug.
Article in English | MEDLINE | ID: mdl-18566495

ABSTRACT

Extending traditional models for discriminative labeling of structured data to include higher-order structure in the labels results in an undesirable exponential increase in model complexity. In this paper, we present a model that is capable of learning such structures using a random field of parameterized features. These features can be functions of arbitrary combinations of observations, labels and auxiliary hidden variables. We also present a simple induction scheme to learn these features, which can automatically determine the complexity needed for a given data set. We apply the model to two real-world tasks, information extraction and image labeling, and compare our results to several other methods for discriminative labeling.
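The induction scheme described (grow the feature set only as far as the data demand) can be illustrated on a much simpler model than the paper's random field. The toy sketch below, an assumption-laden stand-in rather than the paper's method, greedily adds candidate features to a log-linear classifier by gradient magnitude and stops when no candidate helps:

```python
# Toy sketch of greedy feature induction for a log-linear classifier:
# repeatedly add the candidate feature with the largest gradient
# magnitude at the current fit; stop when all remaining gradients are small.
import numpy as np

rng = np.random.default_rng(0)
n, d = 300, 20
X = rng.normal(size=(n, d))                 # candidate features
w_true = np.zeros(d); w_true[:3] = [2.0, -1.5, 1.0]
y = (X @ w_true + 0.1 * rng.normal(size=n) > 0).astype(float)

def fit_logistic(X, y, steps=500, lr=0.1):
    """Full-batch gradient ascent on the logistic log-likelihood."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w += lr * X.T @ (y - p) / len(y)
    return w

active = []                        # indices of induced features
for _ in range(5):                 # induce at most 5 features
    w = fit_logistic(X[:, active], y) if active else np.zeros(0)
    p = (1.0 / (1.0 + np.exp(-(X[:, active] @ w)))
         if active else np.full(n, 0.5))
    grad = X.T @ (y - p) / n       # gradient wrt every candidate weight
    grad[active] = 0.0             # already-active features are excluded
    best = int(np.argmax(np.abs(grad)))
    if np.abs(grad[best]) < 0.08:  # stop: complexity matches the data
        break
    active.append(best)

print(sorted(active))
```

The stopping rule is what "automatically determines the complexity": irrelevant candidates have near-zero gradient once the informative features are in the model.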


Subject(s)
Algorithms; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Pattern Recognition, Automated/methods; Artificial Intelligence; Computer Simulation; Models, Statistical
3.
Neural Comput ; 20(9): 2325-60, 2008 Sep.
Article in English | MEDLINE | ID: mdl-18386986

ABSTRACT

Naturally occurring sensory stimuli are dynamic. In this letter, we consider how spiking neural populations might transmit information about continuous dynamic stimulus variables. The combination of simple encoders and temporal stimulus correlations leads to a code in which information is not readily available to downstream neurons. Here, we explore a complex encoder that is paired with a simple decoder that allows representation and manipulation of the dynamic information in neural systems. The encoder we present takes the form of a biologically plausible recurrent spiking neural network where the output population recodes its inputs to produce spikes that are independently decodeable. We show that this network can be learned in a supervised manner by a simple local learning rule.


Subject(s)
Action Potentials/physiology; Models, Neurological; Neural Networks, Computer; Neurons/physiology; Nonlinear Dynamics; Computer Simulation
4.
Neural Comput ; 19(2): 404-41, 2007 Feb.
Article in English | MEDLINE | ID: mdl-17206870

ABSTRACT

Neural computations are plagued by uncertainty, arising both from noise in neurons and from the ill-posed nature of many tasks. Perhaps surprisingly, many studies show that the brain manipulates these forms of uncertainty in a probabilistically consistent and normative manner, and there is now a rich theoretical literature on the capabilities of populations of neurons to implement computations in the face of uncertainty. However, one major facet of uncertainty has received comparatively little attention: time. In a dynamic, rapidly changing world, data are only temporarily relevant. Here, we analyze the computational consequences of encoding stimulus trajectories in populations of neurons. For the most obvious, simple, instantaneous encoder, the correlations induced by natural, smooth stimuli engender a decoder that requires access to information that is nonlocal both in time and across neurons. This formally amounts to a ruinous representation. We show that there is an alternative encoder that is computationally and representationally powerful in which each spike contributes independent information; it is independently decodable, in other words. We suggest this as an appropriate foundation for understanding time-varying population codes. Furthermore, we show how adaptation to temporal stimulus statistics emerges directly from the demands of simple decoding.
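The problem with the "obvious" instantaneous encoder can be demonstrated numerically. In this sketch (an illustrative assumption, not the paper's model), a population of Poisson neurons with Gaussian tuning curves encodes a smooth stimulus trajectory; decoding each time bin independently is noisy, and pooling spikes across nearby bins, i.e., using information nonlocal in time, reduces the error:

```python
# Sketch: instantaneous Poisson population code for a smooth stimulus,
# decoded per bin vs. with a short temporal window.
import numpy as np

rng = np.random.default_rng(1)
T, dt = 200, 0.01                        # time bins, bin width (s)
t = np.arange(T) * dt
stimulus = np.sin(2 * np.pi * 0.5 * t)   # smooth, temporally correlated

prefs = np.linspace(-1.5, 1.5, 40)       # preferred stimulus values
width, peak_rate = 0.3, 100.0            # tuning width, peak rate (Hz)
rates = peak_rate * np.exp(-((stimulus[:, None] - prefs[None, :]) ** 2)
                           / (2 * width ** 2))
spikes = rng.poisson(rates * dt)         # (T, neurons) spike counts

# Per-bin decode: spike-count-weighted average of preferred stimuli.
counts = spikes.sum(axis=1)
decoded = np.where(counts > 0, spikes @ prefs / np.maximum(counts, 1), 0.0)
rmse = np.sqrt(np.mean((decoded - stimulus) ** 2))
print(f"per-bin RMSE:  {rmse:.3f}")

# Pooling spikes over a 5-bin window exploits the stimulus's smoothness.
k = 5
smooth = np.convolve(spikes @ prefs, np.ones(k), "same") / np.maximum(
    np.convolve(counts.astype(float), np.ones(k), "same"), 1)
rmse_s = np.sqrt(np.mean((smooth - stimulus) ** 2))
print(f"windowed RMSE: {rmse_s:.3f}")
```

The gap between the two errors is exactly the nonlocal information the abstract refers to; the paper's alternative encoder recodes the input so that each spike is independently decodable and no such pooling is needed.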


Subject(s)
Brain/cytology; Computer Simulation; Models, Neurological; Neurons/physiology; Action Potentials/physiology; Animals; Brain/physiology; Nonlinear Dynamics; Normal Distribution; Rats; Time Factors
5.
IEEE Trans Neural Netw ; 15(4): 838-49, 2004 Jul.
Article in English | MEDLINE | ID: mdl-15461077

ABSTRACT

Under-complete models, which derive lower dimensional representations of input data, are valuable in domains in which the number of input dimensions is very large, such as data consisting of a temporal sequence of images. This paper presents the under-complete product of experts (UPoE), where each expert models a one-dimensional projection of the data. Maximum-likelihood learning rules for this model constitute a tractable and exact algorithm for learning under-complete independent components. The learning rules for this model coincide with approximate learning rules proposed earlier for under-complete independent component analysis (UICA) models. This paper also derives an efficient sequential learning algorithm from this model and discusses its relationship to sequential independent component analysis (ICA), projection pursuit density estimation, and feature induction algorithms for additive random field models. This paper demonstrates the efficacy of these novel algorithms on high-dimensional continuous datasets.
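The UPoE model itself is not in standard libraries; as a hedged stand-in for the under-complete independent-component setting it addresses, this sketch recovers 2 independent sources embedded in 10 observed dimensions with scikit-learn's FastICA:

```python
# Sketch: under-complete ICA (10 observed dims -> 2 components) as a
# stand-in for the UPoE setting, using scikit-learn's FastICA.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n = 2000
S = rng.laplace(size=(n, 2))             # 2 heavy-tailed independent sources
A = rng.normal(size=(2, 10))             # embed them in 10 observed dims
X = S @ A + 0.01 * rng.normal(size=(n, 10))

ica = FastICA(n_components=2, random_state=0)  # under-complete projection
S_hat = ica.fit_transform(X)

# Up to permutation, sign, and scale, the recovered components should
# correlate strongly with the true sources.
C = np.abs(np.corrcoef(S.T, S_hat.T)[:2, 2:])
print(np.round(C, 2))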


Subject(s)
Algorithms; Artificial Intelligence; Decision Support Techniques; Information Theory; Models, Statistical; Neural Networks, Computer; Probability Learning; Computer Simulation; Expert Systems; Image Interpretation, Computer-Assisted/methods; Information Storage and Retrieval/methods; Pattern Recognition, Automated; Principal Component Analysis
6.
Annu Rev Neurosci ; 26: 381-410, 2003.
Article in English | MEDLINE | ID: mdl-12704222

ABSTRACT

In the vertebrate nervous system, sensory stimuli are typically encoded through the concerted activity of large populations of neurons. Classically, these patterns of activity have been treated as encoding the value of the stimulus (e.g., the orientation of a contour), and computation has been formalized in terms of function approximation. More recently, there have been several suggestions that neural computation is akin to a Bayesian inference process, with population activity patterns representing uncertainty about stimuli in the form of probability distributions (e.g., the probability density function over the orientation of a contour). This paper reviews both approaches, with a particular emphasis on the latter, which we see as a very promising framework for future modeling and experimental work.
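The Bayesian reading of population activity can be made concrete with a small worked example (an illustrative sketch assumed here, not taken from the review): given Poisson spike counts from orientation-tuned neurons, the activity pattern determines a full posterior density over orientation rather than a single decoded value.

```python
# Sketch: posterior over orientation from Poisson spike counts of a
# population of orientation-tuned neurons (flat prior).
import numpy as np

rng = np.random.default_rng(0)
thetas = np.linspace(0, 180, 181)              # candidate orientations (deg)
prefs = np.linspace(0, 180, 24, endpoint=False)

def tuning(theta):
    """Mean spike counts per neuron: circular (von Mises-like) tuning."""
    d = np.deg2rad(theta - prefs)
    return 5.0 * np.exp(2.0 * (np.cos(2 * d) - 1))

true_theta = 60.0
counts = rng.poisson(tuning(true_theta))       # observed population activity

# Poisson log-likelihood over the orientation grid, normalized to a posterior.
f = np.array([tuning(th) for th in thetas])    # (181, 24) mean counts
loglik = counts @ np.log(f).T - f.sum(axis=1)
post = np.exp(loglik - loglik.max())
post /= post.sum()

print("MAP estimate:", thetas[np.argmax(post)])
```

The width of `post` quantifies the population's uncertainty about the stimulus, which is the quantity the probabilistic framework treats as the object of neural computation.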


Subject(s)
Nervous System Physiological Phenomena; Neural Networks, Computer; Neurons/physiology; Animals; Bayes Theorem; Humans; Models, Neurological; Motivation; Nerve Net; Neurons/classification; Orientation; Psychophysics