1.
Phys Med Biol; 69(5), 2024 Feb 26.
Article in English | MEDLINE | ID: mdl-38316044

ABSTRACT

Objective. Multimodal medical image fusion (MMIF) technology merges diverse medical images rich in information, boosting diagnostic efficiency and accuracy. Owing to its global optimization and single-valued nature, convolutional sparse representation (CSR) outperforms standard sparse representation (SR). To address sensitivity to highly redundant dictionaries and robustness to misregistration, an adaptive convolutional sparsity scheme that measures sub-band correlation in the non-subsampled contourlet transform (NSCT) domain is proposed for MMIF. Approach. The fusion scheme comprises four main components: two-scale image decomposition, fusion of the detail layers, fusion of the base layers, and two-scale reconstruction. A Tikhonov regularization optimization problem is solved on the source images to obtain the base and detail layers. The detail layers are then sparsely decomposed through CSR with pre-trained dictionary filters to obtain initial coefficient maps. Sub-band correlation in the NSCT domain is used to refine the fusion coefficient maps, and sparse reconstruction produces the fused detail layer, while the base layers are fused by averaging. The final fused image is obtained via two-scale reconstruction. Main results. Experimental validation on clinical image sets showed that the proposed fusion scheme not only effectively eliminates the interference of partial misregistration, but also outperforms representative state-of-the-art fusion schemes in preserving structural and textural details, according to both subjective visual evaluations and objective quality metrics. Significance. The proposed fusion scheme is competitive owing to its low-redundancy dictionary, robustness to misregistration, and better fusion performance. This is achieved by training the dictionary with minimal samples through CSR to adaptively preserve overcompleteness for the detail layers, and by constructing the fusion activity level from sub-band correlation in the NSCT domain to maintain the CSR attributes. In addition, ordering the NSCT for reverse sparse representation further enhances sub-band correlation and promotes the preservation of structural and textural details.
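The two-scale split described in the Approach, a Tikhonov-regularized base layer plus a residual detail layer with base layers merged by averaging, can be illustrated with a minimal NumPy sketch. The closed-form Fourier-domain solver and the regularization weight lam are assumptions for illustration; the paper's exact formulation may differ.

import numpy as np

def two_scale_decompose(img, lam=5.0):
    """Split an image into base and detail layers via Tikhonov regularization:
    b = argmin ||b - I||^2 + lam*(||dx*b||^2 + ||dy*b||^2),
    solved in closed form in the Fourier domain (periodic boundaries assumed)."""
    h, w = img.shape
    # Frequency responses of horizontal and vertical difference filters
    gx = np.zeros((h, w)); gx[0, 0], gx[0, -1] = 1.0, -1.0
    gy = np.zeros((h, w)); gy[0, 0], gy[-1, 0] = 1.0, -1.0
    denom = 1.0 + lam * (np.abs(np.fft.fft2(gx))**2 + np.abs(np.fft.fft2(gy))**2)
    base = np.real(np.fft.ifft2(np.fft.fft2(img) / denom))
    return base, img - base  # base layer, detail layer

def fuse_base_layers(base_a, base_b):
    # Base layers are merged by simple averaging, as stated in the abstract.
    return 0.5 * (base_a + base_b)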


Subjects
Algorithms; Magnetic Resonance Imaging; Magnetic Resonance Imaging/methods; Technology; Image Processing, Computer-Assisted/methods
2.
Math Biosci Eng; 20(12): 21537-21562, 2023 Dec 05.
Article in English | MEDLINE | ID: mdl-38124609

ABSTRACT

In recent years, with the continuous development of artificial intelligence and brain-computer interfaces, emotion recognition based on electroencephalogram (EEG) signals has become a thriving research direction. Motivated by saliency in brain cognition, we construct a new spatio-temporal convolutional attention network for emotion recognition, named BiTCAN. First, the original EEG signals are baseline-corrected, and a sequence of two-dimensional mapping matrices is constructed from the electrode positions. Second, from this mapping-matrix sequence, salience-related features of brain cognition are extracted with a bi-hemisphere discrepancy module, and the spatio-temporal features of the EEG signals are captured with a 3-D convolution module. Finally, the saliency and spatio-temporal features are fused in an attention module to further capture the spatial relationships between brain regions, and the result is fed into a classifier for emotion recognition. Extensive experiments on DEAP and SEED (two public datasets) show that the proposed algorithm achieves accuracies above 97% on both, outperforming most existing emotion recognition algorithms.
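As a rough illustration of the first step, building a two-dimensional mapping-matrix sequence from electrode positions, the sketch below places each channel's baseline-corrected signal on a grid. The 9x9 grid and the channel layout are hypothetical; the paper's actual mapping is not specified in the abstract.

import numpy as np

# Hypothetical subset of a 10-20 layout mapped onto a 9x9 grid;
# the paper's exact grid and channel set may differ.
ELECTRODE_GRID = {
    'Fp1': (0, 3), 'Fp2': (0, 5),
    'F7': (2, 0), 'F3': (2, 2), 'Fz': (2, 4), 'F4': (2, 6), 'F8': (2, 8),
    'T7': (4, 0), 'C3': (4, 2), 'Cz': (4, 4), 'C4': (4, 6), 'T8': (4, 8),
    'P7': (6, 0), 'P3': (6, 2), 'Pz': (6, 4), 'P4': (6, 6), 'P8': (6, 8),
    'O1': (8, 3), 'O2': (8, 5),
}

def to_mapping_sequence(eeg, channel_names, grid_size=9):
    """eeg: (n_channels, n_samples) baseline-corrected signal.
    Returns an (n_samples, grid_size, grid_size) 2D mapping-matrix sequence."""
    n_ch, n_t = eeg.shape
    frames = np.zeros((n_t, grid_size, grid_size), dtype=np.float32)
    for ch, name in enumerate(channel_names):
        if name in ELECTRODE_GRID:
            r, c = ELECTRODE_GRID[name]
            frames[:, r, c] = eeg[ch]
    return frames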


Subjects
Artificial Intelligence; Brain; Cognition; Emotions; Algorithms; Electroencephalography
3.
Entropy (Basel); 24(5), 2022 Apr 21.
Article in English | MEDLINE | ID: mdl-35626467

ABSTRACT

Methods based on convolutional neural networks have demonstrated powerful information-integration ability in image fusion. However, most existing neural-network-based methods are applied to only part of the fusion process. In this paper, an end-to-end multi-focus image fusion method based on a multi-scale generative adversarial network (MsGAN) is proposed that makes full use of image features by combining multi-scale decomposition with a convolutional neural network. Extensive qualitative and quantitative experiments on the synthetic and Lytro datasets demonstrate the effectiveness and superiority of the proposed MsGAN over state-of-the-art multi-focus image fusion methods.

4.
Article in English | MEDLINE | ID: mdl-35404819

ABSTRACT

As a non-invasive, low-cost, and readily available brain examination, EEG has become an important means of clinical epilepsy diagnosis. However, reading long-term EEG records places a heavy burden on neurologists and experts. Automatic EEG classification for epileptic patients therefore plays an essential role in epilepsy diagnosis and treatment. This paper proposes an attention-mechanism-based wavelet convolutional neural network for epileptic EEG classification. The network first applies multi-scale wavelet analysis to decompose the input EEG signals into components in different frequency bands. These decomposed multi-scale EEG signals are then input into a convolutional neural network with an attention mechanism for further feature extraction and classification. The proposed algorithm achieves 98.89% three-class accuracy on the Bonn EEG database and 99.70% binary classification accuracy on the Bern-Barcelona EEG database. Our experiments show that the proposed algorithm achieves state-of-the-art classification performance on epileptic EEG.
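The multi-scale wavelet decomposition feeding the network could look roughly like the following sketch, which reconstructs each sub-band back to the original length so the bands can be stacked as parallel CNN input channels. The wavelet ('db4') and the number of levels are assumptions, not values taken from the paper.

import numpy as np
import pywt

def wavelet_bands(segment, wavelet='db4', level=4):
    """Decompose a 1-D EEG segment into multi-scale components.
    Each sub-band is reconstructed to the original length so the
    components can be stacked as input channels for the CNN."""
    coeffs = pywt.wavedec(segment, wavelet, level=level)
    bands = []
    for i in range(len(coeffs)):
        # Keep only one coefficient level at a time and reconstruct it
        keep = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
        bands.append(pywt.waverec(keep, wavelet)[:len(segment)])
    return np.stack(bands)  # shape: (level + 1, n_samples)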


Subjects
Epilepsy; Signal Processing, Computer-Assisted; Algorithms; Electroencephalography; Epilepsy/diagnosis; Humans; Neural Networks, Computer; Wavelet Analysis
5.
Front Comput Neurosci; 15: 743426, 2021.
Article in English | MEDLINE | ID: mdl-34733148

ABSTRACT

As one of the key technologies of affective computing, emotion recognition has received great attention. Electroencephalogram (EEG) signals are spontaneous and difficult to camouflage, so they are used for emotion recognition in both academia and industry. To overcome the heavy reliance of traditional machine-learning-based emotion recognition on manual feature extraction, we propose an EEG emotion recognition algorithm based on 3D feature fusion and a convolutional autoencoder (CAE). First, the differential entropy (DE) features of different frequency bands of the EEG signals are fused to construct 3D features that retain the spatial information between channels. The constructed 3D features are then input into the proposed CAE for emotion recognition. Extensive experiments on the public DEAP dataset yield recognition accuracies of 89.49% and 90.76% for the valence and arousal dimensions, respectively, indicating that the proposed method is well suited to emotion recognition tasks.
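For reference, the differential entropy of a band-filtered EEG channel is commonly computed under a Gaussian assumption as 0.5*ln(2*pi*e*variance). A minimal sketch of per-band DE feature extraction is given below; the band edges and filter order are assumptions, and fs=128 reflects the downsampled DEAP sampling rate.

import numpy as np
from scipy.signal import butter, filtfilt

# Standard EEG bands (Hz); the paper's exact band edges are assumptions here.
BANDS = {'theta': (4, 8), 'alpha': (8, 14), 'beta': (14, 31), 'gamma': (31, 45)}

def differential_entropy(x):
    # For an (approximately) Gaussian signal, DE = 0.5 * ln(2*pi*e*var).
    return 0.5 * np.log(2 * np.pi * np.e * np.var(x))

def de_features(eeg, fs=128):
    """eeg: (n_channels, n_samples). Returns (n_bands, n_channels) DE features,
    one value per channel per frequency band."""
    feats = np.zeros((len(BANDS), eeg.shape[0]))
    for b, (lo, hi) in enumerate(BANDS.values()):
        bb, aa = butter(4, [lo, hi], btype='bandpass', fs=fs)
        filtered = filtfilt(bb, aa, eeg, axis=1)
        feats[b] = [differential_entropy(ch) for ch in filtered]
    return feats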

6.
Entropy (Basel); 23(4), 2021 Mar 30.
Article in English | MEDLINE | ID: mdl-33808436

ABSTRACT

The noise that is often unavoidable in synthetic aperture radar (SAR) images, such as speckle, negatively impacts subsequent SAR image processing. Moreover, because the human visual system is sensitive to color while SAR images are grayscale, it is not easy to find appropriate applications for them. This paper therefore presents a noisy SAR image fusion method based on nonlocal matching and generative adversarial networks. In the pre-processing step, a nonlocal matching method groups the source images into sets of similar blocks. Adversarial networks are then employed to generate the final noise-free fused SAR image block: the generator aims to produce a noise-free SAR image block with color information, while the discriminator tries to increase the spatial resolution of the generated block. This step ensures that the fused block contains both high resolution and color information. Finally, the fused image is obtained by aggregating all the image blocks. Extensive comparative experiments on the SEN1-2 datasets and source images show that the proposed method not only achieves better fusion results but is also robust to image noise, indicating its superiority over state-of-the-art noisy SAR image fusion methods.
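A minimal form of the nonlocal matching used in the pre-processing step, grouping the patches most similar to a reference patch within a local search window, could be sketched as follows. The patch size, search radius, and group size are illustrative values, not the paper's settings.

import numpy as np

def group_similar_patches(img, ref_top_left, patch=8, search=20, n_similar=16):
    """Collect the n_similar patches most similar (in L2 distance) to a
    reference patch within a local search window."""
    r0, c0 = ref_top_left
    ref = img[r0:r0 + patch, c0:c0 + patch]
    candidates = []
    for r in range(max(0, r0 - search), min(img.shape[0] - patch, r0 + search) + 1):
        for c in range(max(0, c0 - search), min(img.shape[1] - patch, c0 + search) + 1):
            cand = img[r:r + patch, c:c + patch]
            candidates.append((np.sum((cand - ref) ** 2), r, c))
    candidates.sort(key=lambda t: t[0])
    group = np.stack([img[r:r + patch, c:c + patch] for _, r, c in candidates[:n_similar]])
    return group  # shape: (n_similar, patch, patch)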

7.
Curr Med Imaging; 16(10): 1243-1258, 2020.
Article in English | MEDLINE | ID: mdl-32807062

ABSTRACT

BACKGROUND: Medical image fusion is very important for the diagnosis and treatment of diseases. In recent years, a number of multi-modal medical image fusion algorithms have been proposed that present diagnostic context more clearly and conveniently. Recently, nuclear norm minimization and deep learning have been used effectively in image processing. METHODS: A multi-modal medical image fusion method is proposed that combines a rolling guidance filter (RGF) with convolutional neural network (CNN) based feature mapping and nuclear norm minimization (NNM). First, the medical images are decomposed into base-layer and detail-layer components using the RGF. Next, a pre-trained CNN model extracts the salient characteristics of the base-layer components, and the activity-level measurement is computed from the regional energy of the CNN-based fusion maps to obtain the fused base image. The detail-layer components are then fused using NNM to obtain the fused detail image. Finally, the fused base and detail images are integrated into the fused result. RESULTS: Comparison with state-of-the-art fusion algorithms shows that the proposed algorithm performs best in both visual evaluation and objective metrics. CONCLUSION: The fusion algorithm combining RGF, CNN-based feature mapping, and NNM improves fusion quality and suppresses artifacts and blocking effects in the fused results.
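The NNM step used to fuse the detail-layer components typically reduces to singular value thresholding, the proximal operator of the nuclear norm. The sketch below shows that operator in isolation; how the detail patches are grouped and the value of the threshold tau are not specified in the abstract and are treated as assumptions here.

import numpy as np

def svt(matrix, tau):
    """Singular value thresholding: solves
    argmin_X 0.5*||X - M||_F^2 + tau*||X||_*
    by soft-thresholding the singular values of M."""
    u, s, vt = np.linalg.svd(matrix, full_matrices=False)
    s_thresh = np.maximum(s - tau, 0.0)
    return (u * s_thresh) @ vt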


Subjects
Image Processing, Computer-Assisted; Neural Networks, Computer; Algorithms; Artifacts; Cell Nucleus
8.
Sensors (Basel); 18(10), 2018 Oct 13.
Article in English | MEDLINE | ID: mdl-30322174

ABSTRACT

In this paper, we propose a boosting synthetic aperture radar (SAR) image despeckling method based on non-local weighted group low-rank representation (WGLRR). The spatial structure of SAR images leads to similarity among patches, and the data matrix formed by grouping similar patches of the noise-free SAR image is often low-rank. Based on this, we use low-rank representation (LRR) to recover the noise-free grouped data matrix. To maintain the fidelity of the recovered image, we integrate the corruption probability of each pixel into the group LRR model as a weight that constrains the fidelity of the recovered noise-free patches. Since each patch may belong to several groups, the different estimates of each patch are aggregated by weighted averaging. Because the residual image contains signal leftovers due to imperfect denoising, we strengthen the signal by leveraging the denoised image to further suppress noise. Experimental results on simulated and real SAR images show the superior performance of the proposed method in terms of both objective indicators and perceived image quality.
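The patch-aggregation step, combining the different estimates of each patch by weighted averaging, can be sketched as below. The structure of the estimates (top-left position, patch array, weight) is an illustrative convention; in the paper the weights derive from the per-pixel corruption probabilities.

import numpy as np

def aggregate_patches(shape, estimates):
    """Aggregate overlapping patch estimates into a full image by weighted
    averaging. `estimates` is a list of (top_left, patch_array, weight) tuples."""
    acc = np.zeros(shape)
    wsum = np.zeros(shape)
    for (r, c), patch, w in estimates:
        h, wdt = patch.shape
        acc[r:r + h, c:c + wdt] += w * patch
        wsum[r:r + h, c:c + wdt] += w
    # Avoid division by zero where no patch contributed
    return acc / np.maximum(wsum, 1e-12)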
