Results 1 - 3 of 3
1.
Medicine (Baltimore) ; 100(7): e24756, 2021 Feb 19.
Article in English | MEDLINE | ID: mdl-33607821

ABSTRACT

This study was conducted to develop a convolutional neural network (CNN)-based model to predict the sex and age of patients by identifying unique unknown features from paranasal sinus (PNS) X-ray images. We employed a retrospective study design and used anonymized patient imaging data. Two CNN models, adopting ResNet-152 and DenseNet-169 architectures, were trained to predict sex and age groups (20-39, 40-59, 60+ years). The area under the curve (AUC), algorithm accuracy, sensitivity, and specificity were assessed. Class-activation maps (CAMs) were used to detect deterministic areas. A total of 4160 PNS X-ray images were collected from 4160 patients. The PNS X-ray images of patients aged ≥20 years were retrieved from the picture archiving and communication database system of our institution. The classification performances in predicting the sex (male vs female) and 3 age groups (20-39, 40-59, 60+ years) for each established CNN model were evaluated. For sex prediction, ResNet-152 performed slightly better (accuracy = 98.0%, sensitivity = 96.9%, specificity = 98.7%, and AUC = 0.939) than DenseNet-169. CAM indicated that maxillary sinuses (males) and ethmoid sinuses (females) were major factors in identifying sex. Meanwhile, for age prediction, the DenseNet-169 model was slightly more accurate in predicting age groups (77.6 ± 1.5% vs 76.3 ± 1.1%). CAM suggested that the maxillary sinus and the periodontal area were primary factors in identifying age groups. Our deep learning model could predict sex and age based on PNS X-ray images. Therefore, it can assist in reducing the risk of patient misidentification in clinics.


Subjects
Deep Learning/statistics & numerical data , Paranasal Sinuses/diagnostic imaging , Radiography/methods , Adult , Aged , Algorithms , Area Under Curve , Data Management , Databases, Factual , Female , Humans , Male , Maxillary Sinus/diagnostic imaging , Middle Aged , Neural Networks, Computer , Predictive Value of Tests , Retrospective Studies , Sensitivity and Specificity
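The first abstract reports accuracy, sensitivity, and specificity for the binary sex-prediction task. As a minimal sketch of how these metrics relate to a binary confusion matrix, the following helper (a hypothetical function, with invented counts, not the study's data) computes all three:

```python
def binary_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Compute accuracy, sensitivity (true-positive rate), and
    specificity (true-negative rate) from confusion-matrix counts."""
    total = tp + fp + tn + fn
    return {
        "accuracy": (tp + tn) / total,
        "sensitivity": tp / (tp + fn),   # of all actual positives, fraction found
        "specificity": tn / (tn + fp),   # of all actual negatives, fraction found
    }

# Example with made-up counts (200 images, 100 per class):
m = binary_metrics(tp=96, fp=2, tn=98, fn=4)
print({k: round(v, 3) for k, v in m.items()})
```

Note that accuracy alone can be misleading when the classes are imbalanced, which is why the abstract reports sensitivity and specificity alongside it.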
2.
Sci Rep ; 10(1): 13652, 2020 08 12.
Article in English | MEDLINE | ID: mdl-32788635

ABSTRACT

Colposcopy is widely used to detect cervical cancers, but experienced physicians who are needed for an accurate diagnosis are lacking in developing countries. Artificial intelligence (AI) has recently been used in computer-aided diagnosis, showing remarkable promise. In this study, we developed and validated deep learning models to automatically classify cervical neoplasms on colposcopic photographs. Pre-trained convolutional neural networks were fine-tuned for two grading systems: the cervical intraepithelial neoplasia (CIN) system and the lower anogenital squamous terminology (LAST) system. The multi-class classification accuracies of the networks for the CIN system in the test dataset were 48.6 ± 1.3% by Inception-ResNet-v2 and 51.7 ± 5.2% by ResNet-152. The accuracies for the LAST system were 71.8 ± 1.8% and 74.7 ± 1.8%, respectively. The area under the curve (AUC) for discriminating high-risk lesions from low-risk lesions by ResNet-152 was 0.781 ± 0.020 for the CIN system and 0.708 ± 0.024 for the LAST system. The lesions requiring biopsy were also detected efficiently (AUC, 0.947 ± 0.030 by ResNet-152), and presented meaningfully on attention maps. These results may indicate the potential of the application of AI for automated reading of colposcopic photographs.


Subjects
Colposcopy/methods , Deep Learning , Diagnosis, Computer-Assisted/methods , Neural Networks, Computer , Uterine Cervical Dysplasia/diagnosis , Uterine Cervical Neoplasms/classification , Uterine Cervical Neoplasms/diagnosis , Adolescent , Adult , Aged , Aged, 80 and over , Artificial Intelligence , Case-Control Studies , Female , Humans , Middle Aged , Retrospective Studies , Young Adult
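The second abstract reports AUC values for discriminating high-risk from low-risk lesions. As a hedged sketch (scores and labels below are invented, not the study's data), the AUC can be computed directly from model scores via the Mann-Whitney U formulation: the probability that a randomly chosen positive outranks a randomly chosen negative, with ties counted as one half:

```python
def auc(scores, labels):
    """AUC = P(score of a random positive > score of a random negative),
    with ties counted as 0.5. labels: 1 = high-risk, 0 = low-risk."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    # Count pairwise "wins" of positives over negatives.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: higher scores should indicate high-risk lesions.
print(auc([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 0]))  # perfect separation -> 1.0
```

This quadratic pairwise count is fine for illustration; production code would typically use a rank-based implementation such as `sklearn.metrics.roc_auc_score`.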
3.
J Clin Med ; 9(5)2020 May 24.
Article in English | MEDLINE | ID: mdl-32456309

ABSTRACT

Background: Classification of colorectal neoplasms during colonoscopic examination is important to avoid unnecessary endoscopic biopsy or resection. This study aimed to develop and validate deep learning models that automatically classify colorectal lesions histologically on white-light colonoscopy images. Methods: White-light colonoscopy images of colorectal lesions with available pathological results were collected and classified into seven categories: stage T1-4 colorectal cancer (CRC), high-grade dysplasia (HGD), tubular adenoma (TA), and non-neoplasms. The images were then re-classified into four categories: advanced CRC, early CRC/HGD, TA, and non-neoplasms. Two convolutional neural network models were trained, and their performances were evaluated on an internal test dataset and an external validation dataset. Results: In total, 3828 images were collected from 1339 patients. The mean accuracies of the ResNet-152 model for the seven-category and four-category classification were 60.2% and 67.3% in the internal test dataset, and 74.7% and 79.2% in the external validation dataset (240 images), respectively. In the external validation, ResNet-152 outperformed two endoscopists for four-category classification, and showed a higher mean area under the curve (AUC) for detecting TA+ lesions (0.818) compared to the worst-performing endoscopist. The mean AUC for detecting HGD+ lesions reached 0.876 with Inception-ResNet-v2. Conclusions: A deep learning model showed promising performance in classifying colorectal lesions on white-light colonoscopy images; this model could help endoscopists build optimal treatment strategies.
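The third abstract describes regrouping seven histological categories into four. The exact assignment of T stages is not stated in the abstract, so the mapping below is an assumption for illustration: T1 CRC is treated as "early CRC/HGD" (T1 is conventionally considered early CRC) and T2-T4 as "advanced CRC":

```python
# Assumed seven-to-four regrouping; the T-stage split is an
# illustration, not the paper's confirmed mapping.
SEVEN_TO_FOUR = {
    "T1 CRC": "early CRC/HGD",   # assumption: T1 counted as early CRC
    "T2 CRC": "advanced CRC",
    "T3 CRC": "advanced CRC",
    "T4 CRC": "advanced CRC",
    "HGD": "early CRC/HGD",
    "TA": "TA",
    "non-neoplasm": "non-neoplasm",
}

def regroup(label: str) -> str:
    """Map a seven-category histology label to its four-category group."""
    return SEVEN_TO_FOUR[label]

print(regroup("T3 CRC"))  # -> advanced CRC
```

Collapsing fine-grained labels this way is a common trick for comparing a model at several clinically meaningful granularities without retraining.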
