Results 1 - 4 of 4
1.
IEEE J Biomed Health Inform; 27(1): 457-468, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36279347

ABSTRACT

Deep learning approaches for medical image analysis are limited by small data set sizes, due to factors such as patient privacy and the difficulty of obtaining expert labels for each image. In medical imaging system development pipelines, work on the system and its classification algorithms often overlaps with data collection, creating small, disjoint data sets collected at numerous locations with differing protocols. In this setting, merging data from different collection centers increases the amount of training data; however, a direct combination of data sets will likely fail due to domain shifts between imaging centers. In contrast to previous approaches that focus on a single data set, we add a domain adaptation module to a neural network and train using multiple data sets. Our approach encourages domain invariance between two multispectral autofluorescence lifetime imaging (maFLIM) data sets of in vivo oral lesions collected with an imaging system currently in development. The two data sets differ in the sub-populations imaged and in the calibration procedures used during data collection. We mitigate these differences using a gradient reversal layer and a domain classifier, as sketched below. Our final model, trained with both data sets, substantially increases performance, including a significant increase in specificity. We also achieve a significant increase in average performance over the best baseline model trained with two domains (p = 0.0341). Our approach lays the foundation for faster development of computer-aided diagnostic systems and presents a feasible way to build a robust classifier that aligns images from multiple centers in the presence of domain shifts.
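
The gradient reversal layer named in the abstract is the standard domain-adversarial training construction: an identity map on the forward pass whose gradient is negated on the backward pass, so the shared features are pushed to confuse the domain classifier. A minimal PyTorch sketch follows; the two-head layout, layer sizes, and names are illustrative assumptions, not the paper's actual architecture.

import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    # Identity on the forward pass; negates (and scales) the gradient on the
    # backward pass, driving features toward domain invariance.
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class DomainAdaptiveNet(nn.Module):
    # Hypothetical two-head network: one head classifies lesions, the other
    # tries to identify the imaging center from the same shared features.
    def __init__(self, in_dim, feat_dim=128, n_classes=2, n_domains=2):
        super().__init__()
        self.features = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.label_head = nn.Linear(feat_dim, n_classes)   # lesion classification
        self.domain_head = nn.Linear(feat_dim, n_domains)  # imaging-center classification

    def forward(self, x, lambd=1.0):
        f = self.features(x)
        return self.label_head(f), self.domain_head(GradReverse.apply(f, lambd))

Training minimizes the lesion-classification loss plus the domain-classification loss; the reversed gradient means the feature extractor is simultaneously optimized to make the domains indistinguishable.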


Subjects
Mouth Neoplasms; Neural Networks, Computer; Humans; Algorithms; Diagnostic Imaging
2.
Article in English | MEDLINE | ID: mdl-36793655

ABSTRACT

Given the prevalence of cardiovascular diseases (CVDs), segmentation of the heart on cardiac computed tomography (CT) remains of great importance. Manual segmentation is time-consuming, and intra- and inter-observer variability yields inconsistent and inaccurate results. Computer-assisted approaches, and deep learning approaches in particular, offer a potentially accurate and efficient alternative to manual segmentation. However, fully automated methods for cardiac segmentation have yet to achieve results accurate enough to compete with expert segmentation. Thus, we focus on a semi-automated deep learning approach to cardiac segmentation that bridges the divide between the higher accuracy of manual segmentation and the higher efficiency of fully automated methods. In this approach, we selected a fixed number of points along the surface of the cardiac region to mimic user interaction. Point-distance maps were then generated from these point selections (see the sketch below), and a three-dimensional (3D) fully convolutional neural network (FCNN) was trained on the point-distance maps to produce a segmentation prediction. Testing our method with different numbers of selected points, we achieved Dice scores ranging from 0.742 to 0.917 across the four chambers. Specifically, Dice scores averaged 0.846 ± 0.059, 0.857 ± 0.052, 0.826 ± 0.062, and 0.824 ± 0.062 for the left atrium, left ventricle, right atrium, and right ventricle, respectively, across all point selections. This point-guided, image-independent deep learning segmentation approach shows promising performance for chamber-by-chamber delineation of the heart in CT images.
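
A minimal sketch of how a 3D point-distance map can be built from user-selected surface points. It assumes a plain Euclidean distance transform; the paper's exact map definition and point-selection scheme are not given in the abstract, and the function name and example coordinates are hypothetical.

import numpy as np
from scipy.ndimage import distance_transform_edt

def point_distance_map(shape, points):
    # shape: (D, H, W) of the CT volume; points: iterable of (z, y, x) voxels.
    # Returns, for every voxel, the Euclidean distance to the nearest
    # selected point, which the 3D FCNN can consume as an extra channel.
    seeds = np.ones(shape, dtype=bool)
    for z, y, x in points:
        seeds[z, y, x] = False  # zeros mark the clicked points
    return distance_transform_edt(seeds)

# Example: six clicks mimicking user interaction on a 64^3 volume.
dmap = point_distance_map((64, 64, 64),
                          [(10, 20, 30), (40, 40, 40), (12, 50, 9),
                           (55, 22, 31), (30, 8, 44), (20, 60, 50)])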

3.
Annu Int Conf IEEE Eng Med Biol Soc; 2021: 3894-3897, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34892083

ABSTRACT

In contrast to previous studies that focused on classical machine learning algorithms and hand-crafted features, we present an end-to-end neural network classification method able to accommodate lesion heterogeneity for improved oral cancer diagnosis using multispectral autofluorescence lifetime imaging (maFLIM) endoscopy. Our method uses an autoencoder framework jointly trained with a classifier, designed to handle the overfitting problems that arise with small databases, as is often the case in healthcare applications. The autoencoder guides the feature extraction process through its reconstruction loss and enables the potential use of unsupervised data for domain adaptation and improved generalization; the classifier ensures that the extracted features are task-specific, providing discriminative information for the classification task (see the sketch below). This data-driven feature extraction method automatically generates task-specific features directly from fluorescence decays, eliminating the need for iterative signal reconstruction. We validate our proposed neural network method against support vector machine (SVM) baselines, with our method showing a 6.5%-8.3% increase in sensitivity. Our results show that neural networks implementing data-driven feature extraction provide superior results and the capacity needed to target specific issues, such as inter-patient variability and the heterogeneity of oral lesions. Clinical relevance: We improve standard classification algorithms for the in vivo diagnosis of oral cancer lesions from maFLIM for clinical use in cancer screening, reducing unnecessary biopsies and facilitating early detection of oral cancer.
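
A minimal sketch of the jointly trained autoencoder/classifier objective described above, in PyTorch: a shared encoder feeds both a decoder (reconstruction loss) and a classification head (cross-entropy loss). Layer sizes, the loss weight alpha, and all names are illustrative assumptions, not the paper's architecture.

import torch.nn as nn
import torch.nn.functional as F

class JointAEClassifier(nn.Module):
    def __init__(self, in_dim, latent_dim=32, n_classes=2):
        super().__init__()
        # The encoder is shared, so its features are shaped jointly by
        # reconstruction (regularization) and classification (discrimination).
        self.encoder = nn.Sequential(nn.Linear(in_dim, latent_dim), nn.ReLU())
        self.decoder = nn.Linear(latent_dim, in_dim)
        self.classifier = nn.Linear(latent_dim, n_classes)

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), self.classifier(z)

def joint_loss(model, x, y, alpha=0.5):
    # Reconstruction term guards against overfitting on small data sets;
    # the cross-entropy term keeps the features task-specific.
    recon, logits = model(x)
    return F.mse_loss(recon, x) + alpha * F.cross_entropy(logits, y)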


Subjects
Neoplasms; Neural Networks, Computer; Algorithms; Humans; Machine Learning; Support Vector Machine
4.
Article in English | MEDLINE | ID: mdl-35755405

ABSTRACT

Accurate segmentation of the prostate on computed tomography (CT) has many diagnostic and therapeutic applications. However, manual segmentation is time-consuming and suffers from high inter- and intra-observer variability. Computer-assisted approaches are useful to speed up the process and increase the reproducibility of the segmentation. Deep learning-based segmentation methods have shown potential for quick and accurate segmentation of the prostate on CT images, but the difficulty of obtaining expert manual segmentations for a large quantity of images limits further progress. Thus, we propose an approach that trains a base model on a small, manually labeled data set and fine-tunes it using unannotated images from a large data set without any manual segmentation. The data sets used for pre-training and fine-tuning the base model were acquired at different centers with different CT scanners and imaging parameters. Our fine-tuning method increased the validation and testing Dice scores. A paired, two-tailed t-test shows a significant change in test score (p = 0.017), demonstrating that unannotated images can be used to increase the performance of automated segmentation models.
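
The abstract does not state how the unannotated images drive fine-tuning; one common pattern consistent with it is self-training with pseudo-labels, where the base model's own confident predictions on unlabeled scans serve as targets. The PyTorch sketch below is written under that assumption only; the confidence threshold and all names are hypothetical.

import torch

def pseudo_label_finetune(model, unlabeled_loader, optimizer,
                          conf_thresh=0.9, epochs=1):
    # Self-training loop: predict on unlabeled CT volumes, keep only the
    # confident voxels, and fine-tune against those pseudo-labels.
    ce = torch.nn.CrossEntropyLoss(reduction="none")
    for _ in range(epochs):
        for vols in unlabeled_loader:
            with torch.no_grad():
                probs = torch.softmax(model(vols), dim=1)  # (B, C, D, H, W)
                conf, pseudo = probs.max(dim=1)            # per-voxel confidence/label
            optimizer.zero_grad()
            loss_map = ce(model(vols), pseudo)             # (B, D, H, W)
            mask = (conf > conf_thresh).float()            # trust confident voxels only
            loss = (loss_map * mask).sum() / mask.sum().clamp(min=1)
            loss.backward()
            optimizer.step()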
