Results 1 - 3 of 3
1.
Neuroimage; 157: 157-172, 2017 Aug 15.
Article in English | MEDLINE | ID: mdl-28576413

ABSTRACT

Over the past decades, a multitude of brain source imaging algorithms have been developed to identify the neural generators underlying surface electroencephalography measurements. While most of these techniques focus on determining the source positions, only a small number of recently developed algorithms provide an indication of the spatial extent of the distributed sources. In a recent comparison of brain source imaging approaches, the VB-SCCD algorithm was shown to be one of the most promising of these methods. However, this technique suffers from several problems: it leads to amplitude-biased source estimates, it has difficulty separating close sources, and it has a high computational complexity due to its implementation using second-order cone programming. To overcome these problems, we propose to include an additional regularization term that imposes sparsity in the original source domain and to solve the resulting optimization problem using the alternating direction method of multipliers (ADMM). Furthermore, we show that the algorithm yields more robust solutions when the temporal structure of the data is taken into account. We also propose a new method to automatically threshold the estimated source distribution, which permits delineation of the active brain regions. The new algorithm, called Source Imaging based on Structured Sparsity (SISSY), is analyzed by means of realistic computer simulations and is validated on the clinical data of four patients.


Subjects
Brain Mapping/methods; Cerebral Cortex/physiology; Electroencephalography/methods; Image Processing, Computer-Assisted/methods; Signal Processing, Computer-Assisted; Humans; Models, Theoretical
2.
Neuroimage; 96: 143-157, 2014 Aug 01.
Article in English | MEDLINE | ID: mdl-24662577

ABSTRACT

The localization of brain sources based on EEG measurements is a topic that has attracted much attention in recent decades, and many different source localization algorithms have been proposed. However, their performance is limited in the case of several simultaneously active brain regions and low signal-to-noise ratios. To overcome these problems, tensor-based preprocessing can be applied, which consists of constructing a space-time-frequency (STF) or space-time-wave-vector (STWV) tensor and decomposing it using the Canonical Polyadic (CP) decomposition. In this paper, we present a new algorithm for the accurate localization of extended sources based on the results of the tensor decomposition. Furthermore, we conduct a detailed study of the tensor-based preprocessing methods, including an analysis of their theoretical foundation, their computational complexity, and their performance on realistic simulated data in comparison to conventional source localization algorithms such as sLORETA, cortical LORETA (cLORETA), and 4-ExSo-MUSIC. Our objective is, on the one hand, to demonstrate the gain in performance that can be achieved by tensor-based preprocessing and, on the other hand, to point out the limits and drawbacks of this method. Finally, we validate the STF and STWV techniques on real measurements to demonstrate their usefulness for practical applications.
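The CP decomposition at the heart of the preprocessing can be sketched with a plain alternating-least-squares (ALS) implementation in NumPy. This is a generic textbook CP-ALS for 3-way tensors, not the authors' code; the dimensions below merely stand in for a space × time × frequency array, and the rank-2 test tensor is synthetic.

```python
import numpy as np

def unfold(T, mode):
    # Mode-n matricization: the chosen mode becomes the rows
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(B, C):
    # Column-wise Kronecker product, shape (J*K, R)
    R = B.shape[1]
    return (B[:, None, :] * C[None, :, :]).reshape(-1, R)

def cp_als(T, rank, n_iter=200, seed=0):
    # Generic rank-`rank` CP decomposition of a 3-way tensor by ALS:
    # each factor is updated by least squares with the other two fixed
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((T.shape[0], rank))
    B = rng.standard_normal((T.shape[1], rank))
    C = rng.standard_normal((T.shape[2], rank))
    for _ in range(n_iter):
        # Normal equations use the Hadamard product of the Gram matrices
        A = unfold(T, 0) @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = unfold(T, 1) @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = unfold(T, 2) @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

# Synthetic rank-2 "space x time x frequency" tensor from known factors
rng = np.random.default_rng(1)
A0, B0, C0 = (rng.standard_normal((d, 2)) for d in (8, 10, 6))
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = cp_als(T, rank=2)
rel_err = np.linalg.norm(T - np.einsum('ir,jr,kr->ijk', A, B, C)) / np.linalg.norm(T)
```

For an exactly low-rank tensor the relative reconstruction error drops to near machine precision; on noisy STF/STWV tensors the recovered factors are only approximations, which is one source of the limits discussed in the paper.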


Subjects
Algorithms; Brain Mapping/methods; Brain/physiology; Electroencephalography/methods; Models, Neurological; Nerve Net/physiology; Computer Simulation; Humans; Reproducibility of Results; Sensitivity and Specificity; Signal-To-Noise Ratio
3.
IEEE Trans Neural Netw; 7(6): 1535-1537, 1996.
Article in English | MEDLINE | ID: mdl-18263551

ABSTRACT

Supervised learning of classifiers often resorts to the minimization of a quadratic error, even though this criterion is better matched to nonlinear regression problems. It is shown that the mapping built by quadratic error minimization (QEM) tends to output the Bayesian discriminant rules, even with nonuniform losses, provided the desired responses are chosen accordingly. This property is shared, for instance, by the multilayer perceptron (MLP). It is shown that the ultimate performance of such classifiers can be assessed on finite learning sets by establishing links with kernel estimators of density.
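The central claim — that minimizing a quadratic error against suitably chosen 0/1 targets drives the learned mapping toward the Bayes posterior — can be checked numerically. The toy example below (ours, not from the paper) uses piecewise-constant predictors on binned 1-D data: within each bin, the constant minimizing the squared error to 0/1 targets is their mean, i.e. the empirical class-1 frequency, which should track the true posterior.

```python
import numpy as np

# Two equiprobable classes with Gaussian class-conditionals N(+1, 1), N(-1, 1)
rng = np.random.default_rng(0)
n = 200_000
y = rng.integers(0, 2, n)                      # 0/1 "desired responses"
x = rng.standard_normal(n) + np.where(y == 1, 1.0, -1.0)

# Per-bin squared-error minimizer = mean of the 0/1 targets in the bin,
# i.e. the empirical posterior P(y = 1 | x in bin)
bins = np.linspace(-3.0, 3.0, 25)
idx = np.digitize(x, bins)
qem_estimate = np.array([y[idx == i].mean() for i in range(1, len(bins))])

# Exact Bayes posterior for this model: P(y = 1 | x) = 1 / (1 + exp(-2x))
centers = 0.5 * (bins[:-1] + bins[1:])
bayes = 1.0 / (1.0 + np.exp(-2.0 * centers))
max_dev = np.max(np.abs(qem_estimate - bayes))
```

With enough samples the QEM estimate and the Bayes posterior agree closely in every bin, illustrating why thresholding a squared-error-trained output (e.g. of an MLP) at 0.5 approximates the Bayes decision rule.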
