Results 1 - 8 of 8
1.
Med Image Anal; 95: 103145, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38615432

ABSTRACT

In recent years, deep learning (DL) has shown great potential in the field of dermatological image analysis. However, existing datasets in this domain have significant limitations, including a small number of image samples, limited disease conditions, insufficient annotations, and non-standardized image acquisitions. To address these shortcomings, we propose a novel framework called DermSynth3D. DermSynth3D blends skin disease patterns onto 3D textured meshes of human subjects using a differentiable renderer and generates 2D images from various camera viewpoints under chosen lighting conditions in diverse background scenes. Our method adheres to top-down rules that constrain the blending and rendering process to create 2D images with skin conditions that mimic in-the-wild acquisitions, ensuring more meaningful results. The framework generates photo-realistic 2D dermatological images and the corresponding dense annotations for semantic segmentation of the skin, skin conditions, body parts, bounding boxes around lesions, depth maps, and other 3D scene parameters, such as camera position and lighting conditions. DermSynth3D allows for the creation of custom datasets for various dermatology tasks. We demonstrate the effectiveness of data generated using DermSynth3D by training DL models on synthetic data and evaluating them on various dermatology tasks using real 2D dermatological images. We make our code publicly available at https://github.com/sfu-mial/DermSynth3D.
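The core of the blending step described above is compositing a lesion texture onto the skin texture of the mesh. As a rough illustration only, here is a minimal NumPy sketch of alpha-blending a lesion patch into a 2D texture image; the function name, array shapes, and mask semantics are illustrative assumptions, not the DermSynth3D implementation (which uses a differentiable renderer on 3D meshes):

```python
import numpy as np

def blend_lesion(texture, lesion, mask, top, left):
    """Alpha-blend a lesion patch onto a skin texture image.

    texture: (H, W, 3) float array in [0, 1] -- the skin texture
    lesion:  (h, w, 3) float array in [0, 1] -- the lesion patch
    mask:    (h, w) float array in [0, 1]    -- per-pixel blending weight
    top, left: where the patch is pasted
    """
    out = texture.copy()
    h, w, _ = lesion.shape
    region = out[top:top + h, left:left + w]
    alpha = mask[..., None]  # broadcast the mask over colour channels
    out[top:top + h, left:left + w] = alpha * lesion + (1 - alpha) * region
    return out

# Toy example: paste a dark 2x2 lesion onto a light 4x4 texture.
texture = np.full((4, 4, 3), 0.8)
lesion = np.full((2, 2, 3), 0.2)
mask = np.ones((2, 2))
blended = blend_lesion(texture, lesion, mask, top=1, left=1)
```

With a soft (fractional) mask, the same formula feathers the lesion boundary into the surrounding skin instead of pasting a hard edge.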


Subjects
Skin Diseases , Humans , Skin Diseases/diagnostic imaging , Imaging, Three-Dimensional/methods , Deep Learning , Image Interpretation, Computer-Assisted/methods
2.
Med Image Anal; 77: 102329, 2022 Apr.
Article in English | MEDLINE | ID: mdl-35144199

ABSTRACT

We present an automated approach to detect and longitudinally track skin lesions on 3D total-body skin surface scans. The acquired 3D mesh of the subject is unwrapped to a 2D texture image, where a trained object detection model, Faster R-CNN, localizes the lesions within the 2D domain. These detected skin lesions are mapped back to the 3D surface of the subject and, for subjects imaged multiple times, we construct a graph-based matching procedure to longitudinally track lesions that considers the anatomical correspondences between pairs of meshes, the geodesic proximity of corresponding lesions, and the inter-lesion geodesic distances. We evaluated the proposed approach using 3DBodyTex, a publicly available dataset composed of 3D scans imaging the coloured skin (textured meshes) of 200 human subjects. We manually annotated locations that appeared to the human eye to contain a pigmented skin lesion, and tracked a subset of lesions occurring on the same subject imaged in different poses. Our results, when compared to three human annotators, suggest that the trained Faster R-CNN detects lesions at a performance level similar to that of the human annotators. Our lesion tracking algorithm achieves an average matching accuracy of 88% on a set of detected corresponding pairs of prominent lesions of subjects imaged in different poses, and an average longitudinal accuracy of 71% when encompassing additional errors due to lesion detection. As there is currently no other large-scale publicly available dataset of 3D total-body skin lesions, we publicly release over 25,000 3DBodyTex manual annotations, which we hope will further research on total-body skin lesion analysis.
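The matching problem at the heart of the longitudinal tracking can be illustrated with a toy version: assign each lesion in one scan to a lesion in the other so that the total anatomical (e.g. geodesic) distance is minimised. The sketch below brute-forces the optimal assignment over permutations, which is feasible for the handful of prominent lesions matched per subject; the published procedure is a richer graph matching that also uses inter-lesion geodesic distances, so this is a simplified illustration with made-up distances:

```python
import itertools

def match_lesions(dist):
    """Assign scan-1 lesions to scan-2 lesions, minimising total distance.

    dist[i][j]: distance between lesion i in scan 1 and lesion j in scan 2.
    Brute force over permutations -- fine for a few lesions, not for large n.
    """
    n = len(dist)
    best_perm, best_cost = None, float("inf")
    for perm in itertools.permutations(range(n)):
        cost = sum(dist[i][perm[i]] for i in range(n))
        if cost < best_cost:
            best_perm, best_cost = perm, cost
    return best_perm, best_cost

# Three lesions per scan; small entries mark anatomically close pairs,
# so the optimal matching is 0 -> 1, 1 -> 0, 2 -> 2.
dist = [[5.0, 0.5, 9.0],
        [0.4, 6.0, 8.0],
        [7.0, 7.5, 0.3]]
matching, cost = match_lesions(dist)
```

For larger lesion counts one would swap the brute force for the Hungarian algorithm, which solves the same assignment problem in polynomial time.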


Assuntos
Algoritmos , Imagem Corporal Total , Humanos , Imagem Corporal Total/métodos
3.
Sci Rep; 11(1): 7769, 2021 Apr 8.
Article in English | MEDLINE | ID: mdl-33833293

ABSTRACT

Automated machine learning approaches to skin lesion diagnosis from images are approaching dermatologist-level performance. However, current machine learning approaches that suggest management decisions rely on predicting the underlying skin condition to infer a management decision without considering the variability of management decisions that may exist within a single condition. We present the first work to explore image-based prediction of clinical management decisions directly without explicitly predicting the diagnosis. In particular, we use clinical and dermoscopic images of skin lesions along with patient metadata from the Interactive Atlas of Dermoscopy dataset (1011 cases; 20 disease labels; 3 management decisions) and demonstrate that predicting management labels directly is more accurate than predicting the diagnosis and then inferring the management decision ([Formula: see text] and [Formula: see text] improvement in overall accuracy and AUROC respectively), statistically significant at [Formula: see text]. Directly predicting management decisions also considerably reduces the over-excision rate as compared to management decisions inferred from diagnosis predictions (24.56% fewer cases wrongly predicted to be excised). Furthermore, we show that training a model to also simultaneously predict the seven-point criteria and the diagnosis of skin lesions yields an even higher accuracy (improvements of [Formula: see text] and [Formula: see text] in overall accuracy and AUROC respectively) of management predictions. Finally, we demonstrate our model's generalizability by evaluating on the publicly available MClass-D dataset and show that our model agrees with the clinical management recommendations of 157 dermatologists as much as they agree with one another.
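Why direct management prediction can beat inference via diagnosis is easy to see on a toy example: if one diagnosis legitimately maps to several management decisions, any diagnosis-to-management lookup must collapse that variability. The cases and labels below are entirely hypothetical and chosen only to illustrate the point:

```python
from collections import Counter

# Hypothetical cases: (diagnosis, management actually recommended).
# The same diagnosis ("nevus") maps to different managements.
cases = [("nevus", "no action"), ("nevus", "no action"),
         ("nevus", "excision"), ("melanoma", "excision"),
         ("melanoma", "excision")]

# Inferred pipeline: assume an ideal diagnosis predictor, then map each
# diagnosis to its most common management decision in the data.
by_diag = {}
for diag, mgmt in cases:
    by_diag.setdefault(diag, []).append(mgmt)
lookup = {d: Counter(ms).most_common(1)[0][0] for d, ms in by_diag.items()}

# Even a perfect diagnosis cannot recover the within-diagnosis variability:
# the "nevus -> excision" case is always missed by the lookup, whereas a
# direct management classifier conditioned on the image could learn it.
inferred_correct = sum(lookup[d] == m for d, m in cases)
```

Here the inferred pipeline tops out at 4 of 5 cases correct no matter how good the diagnosis model is, which is the ceiling the direct approach removes.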


Assuntos
Aprendizado Profundo , Dermoscopia/métodos , Interpretação de Imagem Assistida por Computador/métodos , Dermatopatias/diagnóstico por imagem , Pele/diagnóstico por imagem , Humanos
4.
IEEE J Biomed Health Inform; 23(2): 578-585, 2019 Mar.
Article in English | MEDLINE | ID: mdl-29994053

ABSTRACT

The presence of certain clinical dermoscopic features within a skin lesion may indicate melanoma, and automatically detecting these features may lead to more quantitative and reproducible diagnoses. We reformulate the task of classifying clinical dermoscopic features within superpixels as a segmentation problem, and propose a fully convolutional neural network to detect clinical dermoscopic features from dermoscopy skin lesion images. Our neural network architecture uses interpolated feature maps from several intermediate network layers, and addresses imbalanced labels by minimizing a negative multi-label Dice-F1 score, where the score is computed across the minibatch for each label. Our approach ranked first in the 2017 ISIC-ISBI Part 2 (Dermoscopic Feature Classification Task) challenge over both the provided validation and test datasets, achieving an area under the receiver operating characteristic curve of 0.895. We show how simple baseline models can outrank state-of-the-art approaches when using the official metrics of the challenge, and propose to use a fuzzy Jaccard Index that ignores the empty set (i.e., masks devoid of positive pixels) when ranking models. Our results suggest that the classification of clinical dermoscopic features can be effectively approached as a segmentation problem, and that the current metrics used to rank models may not adequately capture model efficacy. We plan to make our trained model and code publicly available.
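The loss described above, a negative soft Dice score pooled across the minibatch for each label, can be sketched in NumPy as follows; the array layout and the smoothing constant are assumptions for illustration, not the paper's exact implementation:

```python
import numpy as np

def negative_soft_dice(probs, targets, smooth=1e-7):
    """Negative soft Dice score, computed across the minibatch per label.

    probs, targets: (batch, labels, pixels) arrays; probs in [0, 1],
    targets in {0, 1}. Pooling over the whole batch (rather than per
    image) keeps the score well defined even when a label is absent
    from some images in the batch.
    """
    # Sum over the batch and pixel axes, keeping the label axis.
    intersection = (probs * targets).sum(axis=(0, 2))
    denom = probs.sum(axis=(0, 2)) + targets.sum(axis=(0, 2))
    dice = (2.0 * intersection + smooth) / (denom + smooth)
    return -dice.mean()  # average the per-label scores, negate to minimise

# Perfect predictions give a Dice score of 1, i.e. a loss of -1;
# missing one of the two positive pixels raises the loss to about -2/3.
targets = np.array([[[1.0, 0.0, 1.0, 0.0]]])  # (batch=1, labels=1, pixels=4)
perfect_loss = negative_soft_dice(targets, targets)
partial = np.array([[[1.0, 0.0, 0.0, 0.0]]])
partial_loss = negative_soft_dice(partial, targets)
```

Because the score is a smooth function of the predicted probabilities, it can be minimised directly by gradient descent, unlike the hard Dice/F1 computed on thresholded masks.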


Assuntos
Dermoscopia/métodos , Interpretação de Imagem Assistida por Computador/métodos , Redes Neurais de Computação , Pele/diagnóstico por imagem , Bases de Dados Factuais , Humanos , Melanoma/diagnóstico por imagem , Curva ROC , Neoplasias Cutâneas/diagnóstico por imagem
5.
Article in English | MEDLINE | ID: mdl-29993994

ABSTRACT

We propose a multi-task deep convolutional neural network, trained on multi-modal data (clinical and dermoscopic images, and patient meta-data), to classify the 7-point melanoma checklist criteria and perform skin lesion diagnosis. Our neural network is trained using several multi-task loss functions, where each loss considers different combinations of the input modalities, which allows our model to be robust to missing data at inference time. Our final model classifies the 7-point checklist and skin condition diagnosis, produces multi-modal feature vectors suitable for image retrieval, and localizes clinically discriminant regions. We benchmark our approach using 1011 lesion cases, and report comprehensive results over all 7-point criteria and diagnosis. We also make our dataset (images and metadata) publicly available online at http://derm.cs.sfu.ca.

6.
Comput Methods Programs Biomed; 145: 85-93, 2017 Jul.
Article in English | MEDLINE | ID: mdl-28552129

ABSTRACT

BACKGROUND AND OBJECTIVE: Feature reduction is an essential stage in computer-aided breast cancer diagnosis systems. Multilayer neural networks can be trained to extract relevant features by encoding high-dimensional data into low-dimensional codes. Optimizing traditional auto-encoders works well only if the initial weights are close to a proper solution. They are also trained only to reduce the mean squared reconstruction error (MRE) between the encoder inputs and the decoder outputs, and do not address the classification error. The goal of the current work is to test the hypothesis that extending traditional auto-encoders (which only minimize reconstruction error) to multi-objective optimization for finding Pareto-optimal solutions provides more discriminative features that improve classification performance compared to single-objective and other multi-objective approaches (i.e., scalarized and sequential). METHODS: We introduce a novel multi-objective optimization of deep auto-encoder networks, in which the auto-encoder optimizes two objectives, MRE and mean classification error (MCE), for Pareto-optimal solutions, rather than MRE alone. These two objectives are optimized simultaneously by a non-dominated sorting genetic algorithm. RESULTS: We tested our method on 949 X-ray mammograms categorized into 12 classes. The features identified by the proposed algorithm allow a classification accuracy of up to 98.45%, comparing favourably with state-of-the-art results reported in the literature. CONCLUSIONS: We conclude that adding the classification objective to the traditional auto-encoder objective and optimizing for Pareto-optimal solutions, using evolutionary multi-objective optimization, produces more discriminative features.
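Pareto optimality over the two objectives (MRE and MCE) means keeping exactly those solutions that no other solution beats on both errors at once. The sketch below shows only this non-dominated filtering step with made-up error pairs; the full method evolves the candidate set with a non-dominated sorting genetic algorithm, which is not shown here:

```python
def pareto_front(points):
    """Return the non-dominated subset of (MRE, MCE) pairs.

    A point dominates another if it is no worse in both objectives and
    strictly better in at least one; both objectives are minimised.
    """
    front = []
    for p in points:
        dominated = any(
            q[0] <= p[0] and q[1] <= p[1] and q != p for q in points
        )
        if not dominated:
            front.append(p)
    return front

# Candidate auto-encoders as (reconstruction error, classification error).
# (0.12, 0.35) is dominated by (0.10, 0.30); the other three trade off
# reconstruction against classification and survive.
candidates = [(0.10, 0.30), (0.20, 0.10), (0.12, 0.35), (0.30, 0.05)]
front = pareto_front(candidates)
```

Returning a front rather than a single optimum is the point of the approach: the final feature extractor can then be picked from the front according to which error matters more downstream.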


Assuntos
Neoplasias da Mama/diagnóstico por imagem , Diagnóstico por Computador , Mamografia/métodos , Redes Neurais de Computação , Algoritmos , Humanos , Mamografia/classificação
7.
Neuroimage; 146: 1038-1049, 2017 Feb 1.
Article in English | MEDLINE | ID: mdl-27693612

ABSTRACT

We propose BrainNetCNN, a convolutional neural network (CNN) framework to predict clinical neurodevelopmental outcomes from brain networks. In contrast to the spatially local convolutions done in traditional image-based CNNs, our BrainNetCNN is composed of novel edge-to-edge, edge-to-node and node-to-graph convolutional filters that leverage the topological locality of structural brain networks. We apply the BrainNetCNN framework to predict cognitive and motor developmental outcome scores from structural brain networks of infants born preterm. Diffusion tensor images (DTI) of preterm infants, acquired between 27 and 46 weeks gestational age, were used to construct a dataset of structural brain connectivity networks. We first demonstrate the predictive capabilities of BrainNetCNN on synthetic phantom networks with simulated injury patterns and added noise. BrainNetCNN outperforms a fully connected neural network with the same number of model parameters on both phantoms with focal and diffuse injury patterns. We then apply our method to the task of joint prediction of Bayley-III cognitive and motor scores, assessed at 18 months of age, adjusted for prematurity. We show that our BrainNetCNN framework outperforms a variety of other methods on the same data. Furthermore, BrainNetCNN is able to identify an infant's postmenstrual age to within about 2 weeks. Finally, we explore the high-level features learned by BrainNetCNN by visualizing the importance of each connection in the brain with respect to predicting the outcome scores. These findings are then discussed in the context of the anatomy and function of the developing preterm infant brain.
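The edge-to-edge filter can be pictured as a cross-shaped convolution on the connectivity matrix: the response at edge (i, j) pools the edges incident to node i (row i) and node j (column j), rather than a square patch of spatially adjacent pixels. Below is a minimal NumPy sketch of a single such filter with illustrative weight vectors; the actual BrainNetCNN layers learn many filters and add biases and nonlinearities:

```python
import numpy as np

def edge_to_edge(A, w_row, w_col):
    """Apply one edge-to-edge filter to a connectivity matrix A.

    The response at edge (i, j) is a weighted sum of all edges in row i
    plus a weighted sum of all edges in column j -- a cross-shaped
    receptive field over the graph, exploiting topological locality.
    """
    row_part = A @ w_row      # row_part[i] = sum_k A[i, k] * w_row[k]
    col_part = w_col @ A      # col_part[j] = sum_k w_col[k] * A[k, j]
    # Broadcast: out[i, j] = row contribution of i + column contribution of j
    return row_part[:, None] + col_part[None, :]

# Tiny 2-node example with weights chosen to make the arithmetic visible.
A = np.array([[0.0, 2.0],
              [1.0, 0.0]])
w_row = np.array([1.0, 0.0])
w_col = np.array([0.0, 1.0])
out = edge_to_edge(A, w_row, w_col)
```

Summing the output over one axis collapses edges to nodes (the edge-to-node idea), and summing over both collapses the network to a scalar (node-to-graph), which is how the architecture funnels a connectivity matrix down to outcome scores.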


Assuntos
Mapeamento Encefálico/métodos , Encéfalo/diagnóstico por imagem , Redes Neurais de Computação , Transtornos do Neurodesenvolvimento/diagnóstico por imagem , Encéfalo/patologia , Imagem de Tensor de Difusão , Feminino , Humanos , Recém-Nascido , Recém-Nascido Prematuro , Doenças do Prematuro , Masculino , Vias Neurais/diagnóstico por imagem , Vias Neurais/patologia , Transtornos do Neurodesenvolvimento/patologia
8.
Med Image Comput Comput Assist Interv; 17(Pt 2): 676-83, 2014.
Article in English | MEDLINE | ID: mdl-25485438

ABSTRACT

Laparoscopic ultrasound (US) is often used during partial nephrectomy surgeries to identify tumour boundaries within the kidney. However, visual identification is challenging as tumour appearance varies across patients and US images exhibit significant noise levels. To address these challenges, we present the first fully automatic method for detecting the presence of kidney tumour in free-hand laparoscopic ultrasound sequences in near real-time. Our novel approach predicts the probability that a frame contains tumourous tissue using random forests and encodes this probability combined with a regularization term within a graph. Using Dijkstra's algorithm we find a globally optimal labelling (tumour vs. non-tumour) of each frame. We validate our method on a challenging clinical dataset composed of five patients, with a total of 2025 2D ultrasound frames, and demonstrate the ability to detect the presence of kidney tumour with a sensitivity and specificity of 0.774 and 0.916, respectively.
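The labelling step above, per-frame tumour probabilities regularized by a penalty on label switches and solved to global optimality, can be sketched as a dynamic program over the frame chain; on a chain this finds the same optimum as the paper's shortest-path (Dijkstra) formulation. The probabilities and penalty below are made up for illustration:

```python
import math

def label_frames(p_tumour, switch_penalty):
    """Globally optimal tumour/non-tumour labelling of a frame sequence.

    p_tumour[t] is a per-frame probability (e.g. from a random forest)
    that frame t contains tumour. A labelling's cost is the sum of
    negative log-likelihood unary terms plus switch_penalty per label
    change between consecutive frames.
    """
    eps = 1e-12

    def unary(t, lab):
        p = p_tumour[t]
        return -math.log(p + eps) if lab else -math.log(1.0 - p + eps)

    cost = [unary(0, 0), unary(0, 1)]
    back = []  # back[t][lab] = best label at frame t given lab at frame t+1
    for t in range(1, len(p_tumour)):
        choices, new_cost = [], []
        for lab in (0, 1):
            stay = cost[lab]
            switch = cost[1 - lab] + switch_penalty
            choices.append(lab if stay <= switch else 1 - lab)
            new_cost.append(min(stay, switch) + unary(t, lab))
        back.append(choices)
        cost = new_cost

    labels = [0 if cost[0] <= cost[1] else 1]
    for choices in reversed(back):  # trace the optimal path backwards
        labels.append(choices[labels[-1]])
    return labels[::-1]

# An isolated high-probability blip is smoothed away, while a sustained
# run of tumour frames is kept with the boundary placed correctly.
smoothed = label_frames([0.1, 0.1, 0.9, 0.1, 0.1], switch_penalty=2.0)
kept = label_frames([0.1, 0.1, 0.9, 0.9, 0.9], switch_penalty=2.0)
```

The switching penalty is what trades per-frame sensitivity against temporal consistency: a larger penalty suppresses more single-frame noise but delays the detected tumour boundary.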


Assuntos
Documentação/métodos , Neoplasias Renais/diagnóstico , Neoplasias Renais/cirurgia , Laparoscopia/métodos , Reconhecimento Automatizado de Padrão/métodos , Ultrassonografia de Intervenção/métodos , Gravação em Vídeo/métodos , Algoritmos , Humanos , Aumento da Imagem/métodos , Interpretação de Imagem Assistida por Computador/métodos , Nefrectomia/métodos , Sistemas de Informação em Radiologia , Reprodutibilidade dos Testes , Sensibilidade e Especificidade , Cirurgia Assistida por Computador/métodos , Interface Usuário-Computador