Results 1 - 4 of 4
1.
Comput Med Imaging Graph ; 112: 102325, 2024 03.
Article in English | MEDLINE | ID: mdl-38228021

ABSTRACT

Automatic brain segmentation of magnetic resonance images (MRIs) from severe traumatic brain injury (sTBI) patients is critical for brain abnormality assessment and brain network analysis. Building an sTBI brain segmentation model requires manually annotated MR scans of sTBI patients, which is challenging because it is impractical to obtain sufficient annotations for sTBI images with large deformations and lesion erosion. Data augmentation techniques can alleviate the shortage of training samples; however, conventional strategies such as spatial and intensity transformations cannot synthesize the deformations and lesions of traumatic brains, which limits the performance of the subsequent segmentation task. To address these issues, we propose a novel medical image inpainting model, sTBI-GAN, that synthesizes labeled sTBI MR scans by adversarial inpainting. The main strength of sTBI-GAN is that it generates sTBI images and the corresponding labels simultaneously, which previous inpainting methods for medical images have not achieved. We first generate the inpainted image under the guidance of edge information in a coarse-to-fine manner, and the synthesized MR image then serves as the prior for label inpainting. Furthermore, we introduce a registration-based template augmentation pipeline to increase the diversity of the synthesized image pairs and enhance the capacity of data augmentation. Experimental results show that sTBI-GAN synthesizes high-quality labeled sTBI images, which greatly improves 2D and 3D traumatic brain segmentation performance compared with the alternatives. Code is available at .
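The edge-guided, coarse-to-fine image-then-label inpainting described in the abstract can be sketched as a data-flow toy. This is a minimal stand-in, not the paper's GAN: `edge_map`, `coarse_inpaint`, `refine`, and `label_inpaint` are hypothetical placeholder functions that only illustrate the pipeline order (edge guidance, coarse fill, refinement, then label inpainting with the synthesized image as prior).

```python
import numpy as np

def edge_map(img):
    # crude gradient-magnitude edges (stand-in for the learned edge generator)
    gy, gx = np.gradient(img.astype(float))
    return (np.hypot(gx, gy) > 0.1).astype(float)

def coarse_inpaint(img, mask):
    # coarse stage: fill the masked region with the mean of the known pixels
    out = img.copy().astype(float)
    out[mask] = img[~mask].mean()
    return out

def refine(coarse, edges, mask):
    # fine stage: toy edge-weighted blend inside the hole
    out = coarse.copy()
    out[mask] = 0.5 * coarse[mask] + 0.5 * edges[mask] * coarse[mask]
    return out

def label_inpaint(image_prior, label, mask):
    # label stage: majority label of the unmasked region; image_prior is unused
    # in this toy, whereas the paper conditions the label generator on it
    out = label.copy()
    vals, counts = np.unique(label[~mask], return_counts=True)
    out[mask] = vals[np.argmax(counts)]
    return out

rng = np.random.default_rng(0)
img = rng.random((64, 64))
label = (img > 0.5).astype(int)
mask = np.zeros_like(label, dtype=bool)
mask[20:40, 20:40] = True  # simulated lesion region to inpaint

edges = edge_map(img)
synth = refine(coarse_inpaint(img, mask), edges, mask)
new_label = label_inpaint(synth, label, mask)
```

The point of the sketch is only the ordering of the stages; all numeric operations here are trivial placeholders for learned networks.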


Subjects
Brain Diseases , Brain Injuries, Traumatic , Humans , Learning , Brain Injuries, Traumatic/diagnostic imaging , Brain/diagnostic imaging , Image Processing, Computer-Assisted
2.
J Digit Imaging ; 36(5): 2138-2147, 2023 10.
Article in English | MEDLINE | ID: mdl-37407842

ABSTRACT

To develop a deep learning-based model for detecting rib fractures on chest X-rays and to evaluate its performance in a multicenter study. Chest digital radiography (DR) images from 18,631 subjects were used for training, testing, and validation of the deep learning fracture detection model. We first built a pretrained model using contrastive learning on the training set, following the simple framework for contrastive learning of visual representations (SimCLR). This pretrained network was then used as the backbone of a fully convolutional one-stage (FCOS) object detection network to identify rib fractures in chest X-ray images. Detection performance for four types of rib fractures was evaluated on the testing set: 127 images from Data-CZ and 109 images from Data-CH, annotated for the four fracture types. For Data-CZ, the sensitivities of the detection model with no pretraining, ImageNet pretraining, and DR pretraining were 0.465, 0.735, and 0.822, respectively, at an average of five false positives per scan. For the Data-CH test set, the corresponding sensitivities were 0.403, 0.655, and 0.748. Among the four fracture types, the model performed best on displaced fractures, with sensitivities of 0.873 and 0.774 on the Data-CZ and Data-CH test sets, respectively, at five false positives per scan, followed by nondisplaced fractures, buckle fractures, and old fractures. A pretrained model can significantly improve deep learning-based rib fracture detection on X-ray images, reducing missed diagnoses and improving diagnostic efficacy.
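SimCLR-style pretraining, as used above, optimizes a contrastive (NT-Xent) loss over paired augmented views of each image. A minimal NumPy sketch of that loss, assuming rows i and i+N of the embedding matrix hold the two views of sample i (the demo data and variable names are illustrative, not from the paper):

```python
import numpy as np

def nt_xent_loss(z, temperature=0.5):
    """NT-Xent loss: rows i and i+N of z are two augmented views of sample i."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # project onto unit sphere
    sim = z @ z.T / temperature                       # scaled cosine similarities
    np.fill_diagonal(sim, -np.inf)                    # a view is not its own negative
    n2 = z.shape[0]
    n = n2 // 2
    pos = np.r_[np.arange(n, n2), np.arange(0, n)]    # index of each row's positive
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    return float(np.mean(logsumexp - sim[np.arange(n2), pos]))

rng = np.random.default_rng(0)
views = rng.standard_normal((4, 8))
aligned = nt_xent_loss(np.vstack([views, views]))       # perfect positive pairs
random_pairs = nt_xent_loss(rng.standard_normal((8, 8)))
```

Perfectly aligned view pairs give a much lower loss than random embeddings, which is what drives the encoder to map augmentations of the same radiograph close together before the FCOS detection head is attached.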


Subjects
Rib Fractures , Humans , Rib Fractures/diagnostic imaging , Tomography, X-Ray Computed/methods , X-Rays , Radiography , Retrospective Studies
3.
IEEE/ACM Trans Comput Biol Bioinform ; 18(6): 2555-2565, 2021.
Article in English | MEDLINE | ID: mdl-32149651

ABSTRACT

Breast cancer is the most common invasive cancer in females. Handheld ultrasound is one of the most efficient ways to identify and diagnose breast cancer, and the area and shape of a lesion are very helpful to clinicians making diagnostic decisions. In this study, we propose a new deep learning scheme, the semi-pixel-wise cycle generative adversarial net (SPCGAN), for segmenting lesions in 2D ultrasound. The method takes advantage of a fully convolutional neural network (FCN) and a generative adversarial net to segment a lesion using prior knowledge. We compared the proposed method to an FCN and the level set segmentation method on a test dataset of 32 malignant lesions and 109 benign lesions. Our method achieved a Dice similarity coefficient (DSC) of 0.92, while the FCN and the level set method achieved 0.90 and 0.79, respectively. In particular, for malignant lesions, our method significantly increases the FCN's DSC from 0.90 to 0.93 (p < 0.001). The results show that SPCGAN obtains robust segmentation results, and its framework is particularly effective when sufficient training samples are not available, compared with the FCN. Our proposed method may help relieve radiologists' annotation burden.
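The Dice similarity coefficient reported above measures overlap between a predicted and a ground-truth binary mask, DSC = 2|A∩B| / (|A| + |B|). A minimal sketch (the mask data is illustrative):

```python
import numpy as np

def dice(pred, gt, eps=1e-7):
    """Dice similarity coefficient between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)

mask = np.zeros((32, 32), dtype=bool)
mask[8:20, 8:20] = True            # "ground truth" lesion
shifted = np.roll(mask, 4, axis=0)  # imperfect "prediction"
```

Identical masks score 1, disjoint masks score 0, and a partially overlapping prediction falls in between, which is why a rise from 0.90 to 0.93 on malignant lesions is a meaningful gain.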


Subjects
Breast Neoplasms/diagnostic imaging , Deep Learning , Image Interpretation, Computer-Assisted/methods , Ultrasonography, Mammary/methods , Algorithms , Breast/diagnostic imaging , Female , Humans
4.
Sheng Wu Yi Xue Gong Cheng Xue Za Zhi ; 31(6): 1400-4, 2014 Dec.
Article in Chinese | MEDLINE | ID: mdl-25868267

ABSTRACT

Online Mendelian Inheritance in Man (OMIM) is a knowledge base of human genetic diseases and related genes. Each OMIM entry includes a clinical synopsis, linkage analysis of candidate genes, chromosomal localization, and animal models, and OMIM has become an authoritative source of information for studying the relationship between genes and diseases. Because overlap of disease symptoms may reflect interactions at the molecular level, comparison of phenotypic similarity can suggest candidate genes and help discover functional connections between genes and proteins. However, OMIM describes disease phenotypes in free text, which is not well suited to computational analysis. Standardization of OMIM data therefore has important implications for large-scale comparison of disease phenotypes and prediction of phenotype-genotype correlations. Recently, standardized medical language systems, term frequency-inverse document frequency (TF-IDF) weighting, and cosine similarity for document classification have been applied to mining OMIM data; combined with Gene Ontology and various comparison methods, this has achieved substantial success. In this article, we review methods for the standardization and similarity comparison of OMIM data and discuss trends for future research in this direction.
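The TF-IDF weighting and cosine-similarity comparison mentioned above can be sketched as follows. The phenotype token lists are hypothetical examples; a real pipeline would first map OMIM free text onto standardized terms before comparing entries.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """docs: list of token lists. Returns one {term: tf-idf weight} dict per doc."""
    n = len(docs)
    df = Counter()
    for d in docs:
        df.update(set(d))                       # document frequency per term
    idf = {t: math.log(n / df[t]) for t in df}  # rarer terms weigh more
    return [{t: c * idf[t] for t, c in Counter(d).items()} for d in docs]

def cosine_sim(a, b):
    # cosine of the angle between two sparse term-weight vectors
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

phenotypes = [                      # hypothetical standardized clinical synopses
    ["seizures", "ataxia", "hypotonia"],
    ["seizures", "ataxia", "tremor"],
    ["cardiac", "septal", "defect"],
]
v = tfidf_vectors(phenotypes)
```

Entries sharing phenotype terms score higher than unrelated entries, which is the signal used to prioritize candidate genes for diseases with similar clinical pictures.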


Subjects
Databases, Genetic , Humans , Phenotype