Results 1 - 6 of 6
1.
Sci Rep ; 12(1): 11773, 2022 07 11.
Article in English | MEDLINE | ID: mdl-35817814

ABSTRACT

In recent years, the processing of motor imagery (MI) electroencephalography (EEG) signals has attracted attention for developing brain-computer interface (BCI) applications, since feature extraction and classification of these signals are difficult owing to their inherent complexity and susceptibility to artifacts. BCI systems provide a direct interaction pathway between the brain and a peripheral device, so MI EEG-based BCI systems are crucial for allowing patients with motor disabilities to control external devices. The current study presents a semi-supervised model based on three-stage feature extraction and machine learning algorithms for MI EEG signal classification, aiming to improve classification accuracy with a smaller number of deep features when distinguishing right- and left-hand MI tasks. In the first stage of the proposed feature extraction method, the Stockwell transform generates two-dimensional time-frequency maps (TFMs) from one-dimensional EEG signals. Next, a convolutional neural network (CNN) extracts deep feature sets from the TFMs. Then, semi-supervised discriminant analysis (SDA) reduces the number of descriptors. Finally, the performance of five classifiers (support vector machine, discriminant analysis, k-nearest neighbor, decision tree, and random forest) and their fusion is compared. The hyperparameters of SDA and the classifiers are tuned by Bayesian optimization to maximize accuracy. The presented model is validated on BCI competition II dataset III and BCI competition IV dataset 2b. The performance metrics of the proposed method indicate its efficiency for classifying MI EEG signals.
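The three-stage pipeline described above (time-frequency maps, deep features, dimensionality reduction, classifier) can be sketched on synthetic data. This is an illustrative toy, not the authors' code: a short-time Fourier transform stands in for the Stockwell transform, flattened map values stand in for CNN deep features, and PCA stands in for SDA.

```python
import numpy as np
from scipy.signal import spectrogram
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
fs = 128  # assumed sampling rate (Hz)

# Synthetic 1-D EEG trials: 40 trials x 256 samples, two MI classes;
# class 1 carries an extra 10 Hz rhythm so the classes are separable.
X_raw = rng.standard_normal((40, 256))
y = np.repeat([0, 1], 20)
X_raw[y == 1] += np.sin(2 * np.pi * 10 * np.arange(256) / fs)

# Stage 1: 2-D time-frequency maps (STFT stand-in for the Stockwell transform)
tfms = np.array([np.abs(spectrogram(x, fs=fs, nperseg=64)[2]) for x in X_raw])
feats = tfms.reshape(len(tfms), -1)  # flattened maps stand in for CNN features

# Stages 2-3: dimensionality reduction + classifier (PCA stands in for SDA)
clf = make_pipeline(PCA(n_components=8), SVC(kernel="rbf"))
clf.fit(feats, y)
acc = clf.score(feats, y)
print(f"training accuracy: {acc:.2f}")
```

In the actual study, the PCA and SVC hyperparameters at this point would be selected by Bayesian optimization rather than left at defaults.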


Subjects
Brain-Computer Interfaces; Electroencephalography; Algorithms; Bayes Theorem; Electroencephalography/methods; Humans; Neural Networks, Computer; Support Vector Machine
2.
NPJ Digit Med ; 4(1): 88, 2021 Jun 01.
Article in English | MEDLINE | ID: mdl-34075194

ABSTRACT

Coronary artery disease (CAD), the most common manifestation of cardiovascular disease, remains the most common cause of mortality in the United States. Risk assessment is key for primary prevention of coronary events, and coronary artery calcium (CAC) scoring using computed tomography (CT) is one such non-invasive tool. Despite the proven clinical value of CAC, current clinical implementation has limitations such as the lack of insurance coverage for the test and the need for capital-intensive CT machines, specialized imaging protocols, and accredited 3D imaging labs for analysis (including personnel and software). Perhaps the greatest gap is the millions of patients who undergo routine chest CT exams and demonstrate coronary artery calcification that is often not reported or not feasible to quantify. We present two deep learning models that automate CAC scoring, demonstrating advantages for both dedicated gated coronary CT exams and routine non-gated chest CTs performed for other reasons, allowing opportunistic screening. First, we trained a gated coronary CT model for CAC scoring that showed near-perfect agreement (mean difference in scores = -2.86; Cohen's kappa = 0.89, P < 0.0001) with conventional manual scoring on a retrospective dataset of 79 patients and performed the task faster (average time for automated CAC scoring on a graphics processing unit (GPU) was 3.5 ± 2.1 s vs. 261 s for manual scoring) in a prospective trial of 55 patients, with little difference in scores compared to three technologists (mean difference in scores = 3.24, 5.12, and 5.48, respectively).
Then, using CAC scores from paired gated coronary CT as a reference standard, we trained a deep learning model on our internal data and a cohort from the Multi-Ethnic Study of Atherosclerosis (MESA) study (total training n = 341, Stanford test n = 42, MESA test n = 46) to perform CAC scoring on routine non-gated chest CT exams, with validation on external datasets (total n = 303) obtained from four geographically disparate health systems. For identifying patients with any CAC (i.e., CAC ≥ 1), sensitivity and PPV were high across all datasets (ranges: 80-100% and 87-100%, respectively). For CAC ≥ 100 on routine non-gated chest CTs, the latest recommended threshold to initiate statin therapy, our model showed sensitivities of 71-94% and positive predictive values of 88-100% across all sites. Adoption of this model could allow more patients to be screened with CAC scoring, potentially enabling opportunistic early preventive interventions.
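The agreement metrics quoted above (mean score difference and Cohen's kappa) can be reproduced on paired scores. A minimal sketch with hypothetical data, not the study's data: kappa is computed on conventional Agatston risk categories (0, 1-99, 100-399, ≥400), since kappa requires categorical labels.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

def cac_category(score):
    """Map an Agatston score to a conventional risk category (0..3)."""
    return int(np.digitize(score, [1, 100, 400]))  # bins: 0, 1-99, 100-399, >=400

# Hypothetical paired scores: manual reference vs. automated model
manual = np.array([0, 0, 15, 150, 420, 88, 300, 12, 0, 500])
auto = np.array([0, 3, 10, 160, 400, 95, 280, 0, 0, 510])

mean_diff = float(np.mean(auto - manual))  # signed bias of the model
kappa = cohen_kappa_score([cac_category(s) for s in manual],
                          [cac_category(s) for s in auto])
print(f"mean difference = {mean_diff:.2f}, Cohen's kappa = {kappa:.2f}")
```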

3.
NPJ Digit Med ; 3: 136, 2020.
Article in English | MEDLINE | ID: mdl-33083571

ABSTRACT

Advancements in deep learning techniques carry the potential to make significant contributions to healthcare, particularly in fields that use medical imaging for diagnosis, prognosis, and treatment decisions. The current state-of-the-art deep learning models for radiology applications consider only pixel-value information, without data informing clinical context. Yet in practice, pertinent and accurate non-imaging data from the clinical history and laboratory results enable physicians to interpret imaging findings in the appropriate clinical context, leading to higher diagnostic accuracy, better-informed clinical decision making, and improved patient outcomes. To achieve a similar goal with deep learning, pixel-based medical imaging models must also be able to process contextual data from electronic health records (EHR) in addition to pixel data. In this paper, we describe different data fusion techniques that can be applied to combine medical imaging with EHR data, and systematically review the medical data fusion literature published between 2012 and 2020. We conducted a systematic search on PubMed and Scopus for original research articles leveraging deep learning for the fusion of multimodal data. In total, we screened 985 studies and extracted data from 17 papers. Through this systematic review, we present current knowledge, summarize important results, and provide implementation guidelines to serve as a reference for researchers interested in applying multimodal fusion to medical imaging.
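The simplest fusion technique the review covers is feature-level (joint) fusion: concatenating an image embedding with an EHR feature vector before a shared classifier. A minimal sketch on synthetic data, where the embeddings and label rule are invented for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 200
img_emb = rng.standard_normal((n, 32))  # pretend CNN image embeddings
ehr = rng.standard_normal((n, 6))       # pretend EHR features (labs, history)
# Toy label depending on both modalities, so neither alone suffices
y = (img_emb[:, 0] + ehr[:, 0] > 0).astype(int)

# Joint fusion: concatenate modality vectors, then train one classifier
fused = np.concatenate([img_emb, ehr], axis=1)
clf = LogisticRegression().fit(fused, y)
acc = clf.score(fused, y)
print("fused training accuracy:", acc)
```

Late fusion would instead train one model per modality and combine their output probabilities; the review compares these strategies across the 17 included papers.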

4.
IEEE Trans Radiat Plasma Med Sci ; 2(3): 205-214, 2018 May.
Article in English | MEDLINE | ID: mdl-29785411

ABSTRACT

Liver CT perfusion (CTP) is used in the detection, staging, and treatment-response analysis of hepatic diseases. Unfortunately, CTP radiation exposure is significant, limiting more widespread use. Traditional CTP data processing reconstructs individual temporal samples independently, ignoring the large amount of anatomical information shared between temporal samples and suggesting opportunities for improved data processing. We adopt a prior-image-based reconstruction approach called Reconstruction of Difference (RoD) to enable low-exposure CTP acquisition. RoD differs from many algorithms by directly estimating the attenuation changes between the current patient state and a prior CT volume. We propose to use a high-fidelity unenhanced baseline CT image to integrate prior anatomical knowledge into subsequent data reconstructions. Using simulation studies based on a 4D digital anthropomorphic phantom with realistic time-attenuation curves, we compare RoD with conventional filtered backprojection, penalized-likelihood estimation, and prior-image penalized-likelihood estimation. We evaluate each method by comparing reconstructions at individual time points, the accuracy of estimated time-attenuation curves, and common perfusion metric maps including hepatic arterial perfusion, hepatic portal perfusion, perfusion index, and time-to-peak. Results suggest that RoD enables significant exposure reductions, outperforming both standard and more sophisticated model-based reconstructions, making RoD a potentially important tool for enabling low-dose liver CTP.
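The core RoD idea, reconstructing only the attenuation change relative to a prior volume, can be illustrated with a toy linear forward model. This is a heavily simplified sketch (a random matrix stands in for the CT projector, ridge regularization stands in for the paper's penalized-likelihood objective), not the authors' algorithm:

```python
import numpy as np

rng = np.random.default_rng(2)
n_pix, n_meas = 64, 96
A = rng.standard_normal((n_meas, n_pix)) / np.sqrt(n_meas)  # toy projector

mu_prior = rng.standard_normal(n_pix)  # baseline (unenhanced) image
delta_true = np.zeros(n_pix)
delta_true[10:20] = 1.0                # contrast-induced attenuation change
y = A @ (mu_prior + delta_true)        # noiseless measurements

# RoD-style estimation: solve for the *difference* from the known prior
delta = np.zeros(n_pix)
lam, step = 1e-4, 0.5
for _ in range(2000):
    grad = A.T @ (A @ (mu_prior + delta) - y) + lam * delta
    delta -= step * grad  # gradient descent on regularized least squares

err = np.linalg.norm(delta - delta_true) / np.linalg.norm(delta_true)
print(f"relative error in recovered difference: {err:.3f}")
```

Because the unknown is the (mostly zero) difference image rather than the full volume, far fewer measurements can suffice, which is the mechanism behind RoD's exposure reduction.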

5.
Article in English | MEDLINE | ID: mdl-25571377

ABSTRACT

This paper presents a compressed-sensing-based reconstruction method for 3D digital breast tomosynthesis (DBT) imaging. The algebraic reconstruction technique (ART) has been used in DBT imaging by minimizing the isotropic total variation (TV) of the reconstructed image. However, the resolution in DBT differs between the sagittal and axial directions, which should be accounted for during TV minimization. In this study we develop a 3D anisotropic TV (ATV) minimization that considers the different resolutions in different directions. A customized 3D Shepp-Logan phantom was generated to mimic a real DBT image, accounting for overlapping tissue and directional resolution issues. Results of ART, ART+3D TV, and ART+3D ATV are compared using the structural similarity (SSIM) index.
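An anisotropic TV term simply weights the finite-difference penalty differently along each axis. A minimal sketch on a toy phantom; the axis weights here are illustrative, not the paper's values:

```python
import numpy as np

def atv(vol, w=(1.0, 1.0, 0.25)):
    """Anisotropic 3-D total variation with per-axis weights.

    DBT resolution is poorer along the depth axis, so its
    finite-difference term is down-weighted (weights illustrative).
    """
    dx = np.abs(np.diff(vol, axis=0)).sum()
    dy = np.abs(np.diff(vol, axis=1)).sum()
    dz = np.abs(np.diff(vol, axis=2)).sum()
    return w[0] * dx + w[1] * dy + w[2] * dz

# Toy phantom: a 4x4x4 unit cube inside an 8x8x8 volume
vol = np.zeros((8, 8, 8))
vol[2:6, 2:6, 2:6] = 1.0
print("ATV of cube phantom:", atv(vol))
```

Setting all weights to 1 recovers the isotropic (L1, axis-separable) TV; in ART+ATV this value is what the reconstruction penalizes alongside data fidelity.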


Subjects
Imaging, Three-Dimensional; Radiographic Image Interpretation, Computer-Assisted; Algorithms; Anisotropy; Female; Humans; Mammary Glands, Human/pathology; Mammography; Phantoms, Imaging; Tomography, X-Ray Computed
6.
Comput Math Methods Med ; 2013: 250689, 2013.
Article in English | MEDLINE | ID: mdl-24371468

ABSTRACT

Digital breast tomosynthesis (DBT) is an innovative imaging modality that provides 3D reconstructed images of the breast for detecting breast cancer. Projections obtained with an X-ray source moving within a limited angular interval are used to reconstruct the 3D image of the breast. Several reconstruction algorithms are available for DBT imaging. The filtered back-projection algorithm has traditionally been used to reconstruct images from projections. Iterative reconstruction algorithms such as the algebraic reconstruction technique (ART) were later developed. Recently, compressed-sensing-based methods have been proposed for the tomosynthesis imaging problem. We have developed an object-oriented simulator for 3D digital breast tomosynthesis (DBT) imaging using the C++ programming language. The simulator can apply different iterative and compressed-sensing-based reconstruction methods to 3D digital tomosynthesis data sets and phantom models. A user-friendly graphical user interface (GUI) helps users select and run the desired methods on the designed phantom models or real data sets. The simulator has been tested on a phantom study that simulates the breast tomosynthesis imaging problem. Results obtained with various methods, including the algebraic reconstruction technique (ART) and total-variation-regularized reconstruction (ART+TV), are presented. The reconstruction results are compared both visually and quantitatively by evaluating the methods' mean structural similarity (MSSIM) values.
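The ART iteration at the heart of such a simulator is a sequence of Kaczmarz row projections. A minimal, self-contained sketch (the simulator itself is C++; a random matrix stands in here for the tomosynthesis projection geometry):

```python
import numpy as np

def art(A, b, n_sweeps=50, relax=0.5):
    """Algebraic reconstruction technique (Kaczmarz sweeps).

    Each update projects the current estimate toward the hyperplane
    of one measurement row, scaled by the relaxation factor.
    """
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            a = A[i]
            x += relax * (b[i] - a @ x) / (a @ a) * a
    return x

rng = np.random.default_rng(3)
A = rng.standard_normal((40, 16))  # toy projection matrix
x_true = rng.standard_normal(16)
b = A @ x_true                     # noiseless "projections"
x_hat = art(A, b)
err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print("relative error:", err)
```

ART+TV would interleave these sweeps with TV-reducing gradient steps, which is the compressed-sensing variant the simulator supports.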


Subjects
Breast/pathology; Imaging, Three-Dimensional/methods; Mammography/methods; Radiographic Image Enhancement/methods; Algorithms; Artifacts; Computer Graphics; Data Compression; Female; Humans; Image Processing, Computer-Assisted; Models, Theoretical; Phantoms, Imaging; Programming Languages; Radiographic Image Interpretation, Computer-Assisted/methods; User-Computer Interface; X-Rays