Results 1 - 5 of 5
1.
Neural Comput ; 36(1): 128-150, 2023 Dec 12.
Article in English | MEDLINE | ID: mdl-38052077

ABSTRACT

A hypothesis in the study of the brain is that sparse coding is realized in the information representation of external stimuli, which has recently been confirmed experimentally for visual stimuli. However, unlike in specific functional regions, sparse coding in information processing across the whole brain has not been sufficiently clarified. In this study, we investigate the validity of sparse coding in the whole human brain by applying various matrix factorization (MF) methods to functional magnetic resonance imaging data of neural activity. The results support the sparse coding hypothesis for information representation in the whole human brain: features extracted by sparse MF methods, namely sparse principal component analysis (SparsePCA) or the method of optimal directions (MOD) under a high-sparsity setting, or by an approximately sparse MF method, fast independent component analysis (FastICA), classify external visual stimuli more accurately than features from nonsparse MF methods or from sparse MF methods under a low-sparsity setting.


Subject(s)
Algorithms , Brain Mapping , Humans , Brain Mapping/methods , Magnetic Resonance Imaging/methods , Brain , Principal Component Analysis
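
The comparison described in the abstract can be sketched in scikit-learn. Everything below is an assumption for illustration: the data are synthetic Gaussian "voxels" with a planted sparse class pattern, not the paper's fMRI recordings, and the components, penalty `alpha=1.0`, and logistic-regression classifier are generic defaults, not the authors' pipeline.

```python
import numpy as np
from sklearn.decomposition import SparsePCA, FastICA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Synthetic stand-in for fMRI data: 120 samples (volumes) x 200 voxels,
# two stimulus classes differing by a spatially sparse pattern.
n_samples, n_voxels = 120, 200
y = np.repeat([0, 1], n_samples // 2)
pattern = np.zeros(n_voxels)
pattern[:10] = 2.0                      # sparse discriminative signal
X = rng.normal(size=(n_samples, n_voxels)) + np.outer(y, pattern)

# Sparse MF features (SparsePCA) vs. approximately sparse features (FastICA),
# each scored by how well a linear classifier decodes the stimulus class.
accuracy = {}
for name, mf in [("SparsePCA", SparsePCA(n_components=5, alpha=1.0,
                                         random_state=0)),
                 ("FastICA", FastICA(n_components=5, random_state=0))]:
    Z = mf.fit_transform(X)             # extracted feature activations
    accuracy[name] = cross_val_score(
        LogisticRegression(max_iter=1000), Z, y, cv=5).mean()
print(accuracy)
```

With the planted sparse signal, both factorizations concentrate it into a few components and the cross-validated decoding accuracy is high, mirroring the kind of comparison the abstract reports.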
2.
PLoS One ; 18(6): e0287708, 2023.
Article in English | MEDLINE | ID: mdl-37368916

ABSTRACT

Various brain functions necessary to maintain life arise through the interaction of countless neurons, so it is important to analyze functional neuronal networks. To elucidate the mechanisms of brain function, many studies across neuroscience actively investigate functional neuronal ensembles and hubs, and recent work suggests that their existence contributes to the efficiency of information processing. For these reasons, methods are needed to infer functional neuronal ensembles from neuronal activity data, and methods based on Bayesian inference have been proposed. However, modeling the activity for Bayesian inference poses a problem: the features of each neuron's activity are non-stationary, depending on physiological experimental conditions. As a result, the stationarity assumption in Bayesian inference models impedes inference, destabilizing the results and degrading accuracy. In this study, we extend the range of the variable expressing the neuronal state and generalize the likelihood of the model to the extended variable. Compared with the previous study, our model can express the neuronal state in a larger space. This generalization, which removes the restriction to binary inputs, enables soft clustering and allows the method to be applied to non-stationary neural activity data. To demonstrate the effectiveness of the method, we apply it to multiple synthetic fluorescence datasets generated from membrane potential data in a leaky integrate-and-fire model.


Subject(s)
Neurons , Bayes Theorem , Probability , Neurons/physiology
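
The goal of soft clustering, i.e. assigning each neuron a membership probability for each ensemble rather than a hard binary label, can be illustrated with a generic Gaussian-mixture sketch. This is emphatically not the authors' Bayesian model: the data are synthetic (two ensembles of 30 neurons sharing a common driver plus private noise), and the spectral-feature-plus-GMM recipe is a stand-in chosen only because `GaussianMixture.predict_proba` exposes soft memberships.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Generic soft-clustering sketch (not the paper's method): two synthetic
# ensembles of 30 neurons each share a common driving signal plus noise.
rng = np.random.default_rng(2)
n_per, T = 30, 500
drivers = rng.normal(size=(2, T))
rates = np.vstack([0.8 * drivers[0] + 0.2 * rng.normal(size=(n_per, T)),
                   0.8 * drivers[1] + 0.2 * rng.normal(size=(n_per, T))])

C = np.corrcoef(rates)                 # neuron-by-neuron correlations
w, v = np.linalg.eigh(C)
feats = v[:, -2:]                      # leading spectral features per neuron
gmm = GaussianMixture(n_components=2, random_state=0).fit(feats)
membership = gmm.predict_proba(feats)  # soft ensemble memberships
print(membership.shape)                # (60, 2)
```

Each row of `membership` sums to one, so a neuron can sit between ensembles with graded probability, which is the qualitative behavior the abstract's generalization beyond binary states is after.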
3.
Neural Comput ; 35(6): 1086-1099, 2023 May 12.
Article in English | MEDLINE | ID: mdl-36944243

ABSTRACT

We study the problem of hyperparameter tuning in sparse matrix factorization under a Bayesian framework. In prior work, an analytical solution of sparse matrix factorization with a Laplace prior was obtained by a variational Bayes method under several approximations. Building on this solution, we propose a novel numerical method of hyperparameter tuning that evaluates the zero point of the normalization factor of the sparse matrix prior. We also verify that our method shows excellent performance in ground-truth sparse matrix reconstruction by comparing it with the widely used sparse principal component analysis algorithm.
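
The ground-truth reconstruction benchmark can be sketched as follows. Note the hedges: the planted sparse components, the noise level, and the naive grid over the sparsity penalty `alpha` are all assumptions for illustration; the paper's actual analytic tuning rule (the zero point of the prior's normalization factor) is not reproduced here, only the SparsePCA baseline it is compared against.

```python
import numpy as np
from sklearn.decomposition import SparsePCA

# Plant a sparse component matrix D, observe X = S @ D + noise, and
# measure how well SparsePCA reconstructs the low-rank structure for a
# few sparsity penalties (a naive stand-in for principled tuning).
rng = np.random.default_rng(4)
n, p, k = 100, 50, 4
S = rng.normal(size=(n, k))                  # dense activations
D = np.zeros((k, p))                         # planted sparse components
for j in range(k):
    idx = rng.choice(p, size=5, replace=False)
    D[j, idx] = rng.normal(size=5)
X = S @ D + 0.05 * rng.normal(size=(n, p))

errors = {}
Xc = X - X.mean(axis=0)                      # SparsePCA centers its input
for alpha in (0.1, 1.0, 5.0):
    model = SparsePCA(n_components=k, alpha=alpha, random_state=0).fit(X)
    Xhat = model.transform(X) @ model.components_
    errors[alpha] = np.linalg.norm(Xc - Xhat) / np.linalg.norm(Xc)
print(errors)
```

A well-tuned penalty keeps the relative reconstruction error near the noise floor, while too much sparsity shrinks the components away; an analytic criterion like the paper's replaces this grid with a single computation.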

4.
Phys Rev E Stat Nonlin Soft Matter Phys ; 80(6 Pt 1): 061124, 2009 Dec.
Article in English | MEDLINE | ID: mdl-20365135

ABSTRACT

The Kronecker channel model of wireless communication is analyzed using statistical mechanics methods. In the model, spatial proximity among transmission/reception antennas is taken into account through correlation matrices, which generally induce nontrivial dependence among the symbols to be estimated. This prevents accurate assessment of the communication performance by naive use of a previously developed analytical scheme based on a matrix integration formula. To resolve this difficulty, we develop a formalism that can formally handle the correlations in Kronecker models based on the known scheme. Unfortunately, direct application of the developed scheme is in general practically difficult. However, the formalism is still useful, indicating that the effect of the correlations generally enters only at fourth and higher orders in the correlation strength. Therefore, the known analytical scheme offers a good approximation for performance evaluation when the correlation strength is sufficiently small. For a specific class of correlations, we show that the performance analysis can be mapped to the problem of one-dimensional spin systems in random fields, which can be investigated without approximation by the belief propagation algorithm.


Subject(s)
Algorithms , Data Interpretation, Statistical , Models, Statistical , Signal Processing, Computer-Assisted , Telecommunications , Computer Simulation
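
The final mapping can be made concrete with a standard sketch: belief propagation on a one-dimensional Ising chain in random fields. The chain length, coupling `J`, inverse temperature `beta`, and Gaussian fields below are illustrative assumptions, not parameters from the paper; the point is that on a chain (a tree) BP is exact, so site magnetizations follow from left- and right-moving cavity messages.

```python
import numpy as np

# BP on a 1D Ising chain with quenched random fields h_i and uniform
# coupling J. The cavity recursion propagates an effective field along
# the chain; combining left and right messages gives exact marginals.
rng = np.random.default_rng(3)
N, J, beta = 50, 1.0, 0.7
h = rng.normal(scale=0.5, size=N)          # quenched random fields

def cavity_fields(h, J, beta):
    """Left-to-right cavity fields u[i] arriving at site i."""
    u = np.zeros(len(h))
    for i in range(1, len(h)):
        u[i] = np.arctanh(np.tanh(beta * J)
                          * np.tanh(beta * (h[i - 1] + u[i - 1]))) / beta
    return u

left = cavity_fields(h, J, beta)
right = cavity_fields(h[::-1], J, beta)[::-1]
marg = np.tanh(beta * (h + left + right))  # exact site magnetizations
print(marg[:5])
```

Because the factor graph of a chain has no loops, these marginals agree with exhaustive enumeration, which is what "without approximation" means in the abstract.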
5.
Phys Rev E Stat Nonlin Soft Matter Phys ; 78(3 Pt 1): 031116, 2008 Sep.
Article in English | MEDLINE | ID: mdl-18851002

ABSTRACT

The spectral density of various ensembles of sparse symmetric random matrices is analyzed using the cavity method. We consider two cases: matrices whose associated graphs are locally treelike, and sparse covariance matrices. We derive a closed set of equations from which the density of eigenvalues can be efficiently calculated. Within this approach, the Wigner semicircle law for Gaussian matrices and the Marcenko-Pastur law for covariance matrices are recovered easily. Our results are compared with numerical diagonalization, showing excellent agreement.
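
One of the recovered limits is easy to check numerically without implementing the cavity equations themselves. The sketch below (matrix size, bin count, and the GOE-style construction are all illustrative choices, not taken from the paper) diagonalizes a dense symmetric Gaussian matrix and compares its empirical spectral density with the semicircle prediction rho(x) = sqrt(4 - x^2) / (2*pi).

```python
import numpy as np

# Dense-limit sanity check of the Wigner semicircle law: entries of the
# symmetric matrix H have variance 1/N, so the spectrum fills [-2, 2].
rng = np.random.default_rng(1)
N = 2000
J = rng.normal(size=(N, N)) / np.sqrt(N)
H = (J + J.T) / np.sqrt(2)               # symmetric, entry variance 1/N
eigs = np.linalg.eigvalsh(H)

hist, edges = np.histogram(eigs, bins=40, range=(-2, 2), density=True)
centers = (edges[:-1] + edges[1:]) / 2
semicircle = np.sqrt(np.clip(4 - centers ** 2, 0, None)) / (2 * np.pi)
print("max deviation:", np.abs(hist - semicircle).max())
```

The histogram tracks the semicircle up to finite-size fluctuations; the paper's cavity approach reproduces this law analytically and extends the calculation to genuinely sparse ensembles, where the density deviates from the semicircle.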
