1.
Radiol Artif Intell; 6(2): e230137, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38323914

ABSTRACT

Purpose To evaluate performance improvements of general radiologists and breast imaging specialists when interpreting a set of diverse digital breast tomosynthesis (DBT) examinations with the aid of a custom-built categorical artificial intelligence (AI) system. Materials and Methods A fully balanced multireader, multicase reader study was conducted to compare the performance of 18 radiologists (nine general radiologists and nine breast imaging specialists) reading 240 retrospectively collected screening DBT mammograms (mean patient age, 59.8 years ± 11.3 [SD]; 100% women), acquired between August 2016 and March 2019, with and without the aid of a custom-built categorical AI system. The area under the receiver operating characteristic curve (AUC), sensitivity, and specificity across general radiologists and breast imaging specialists reading with versus without AI were assessed. Reader performance was also analyzed as a function of breast cancer characteristics and patient subgroups. Results Every radiologist demonstrated improved interpretation performance when reading with versus without AI, with an average AUC of 0.93 versus 0.87, demonstrating a difference in AUC of 0.06 (95% CI: 0.04, 0.08; P < .001). Improvement in AUC was observed for both general radiologists (difference of 0.08; P < .001) and breast imaging specialists (difference of 0.04; P < .001) and across all cancer characteristics (lesion type, lesion size, and pathology) and patient subgroups (race and ethnicity, age, and breast density) examined. Conclusion A categorical AI system helped improve overall radiologist interpretation performance of DBT screening mammograms for both general radiologists and breast imaging specialists and across various patient subgroups and breast cancer characteristics. Keywords: Computer-aided Diagnosis, Screening Mammography, Digital Breast Tomosynthesis, Breast Cancer, Screening, Convolutional Neural Network (CNN), Artificial Intelligence Supplemental material is available for this article. © RSNA, 2024.
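As an aside on the primary metric above, the AUC difference between aided and unaided reading can be computed from per-case suspicion scores. Below is a minimal sketch in Python using scikit-learn; the label and score arrays are hypothetical placeholders for one reader, not study data or the authors' analysis code.

```python
# Minimal sketch: one reader's AUC with vs. without AI assistance.
# The label and score arrays are hypothetical placeholders, not study data.
import numpy as np
from sklearn.metrics import roc_auc_score

y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])                          # cancer ground truth per case
scores_unaided = np.array([0.2, 0.4, 0.6, 0.5, 0.3, 0.7, 0.5, 0.8])  # suspicion without AI
scores_aided = np.array([0.1, 0.3, 0.8, 0.7, 0.2, 0.9, 0.4, 0.9])    # suspicion with AI

auc_unaided = roc_auc_score(y_true, scores_unaided)
auc_aided = roc_auc_score(y_true, scores_aided)
print(f"AUC without AI: {auc_unaided:.2f}; with AI: {auc_aided:.2f}; "
      f"difference: {auc_aided - auc_unaided:+.2f}")
```

In the study itself the difference was averaged across 18 readers with a multireader, multicase analysis; the sketch only illustrates the per-reader computation.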


Subjects
Breast Neoplasms , Female , Humans , Middle Aged , Breast Neoplasms/diagnostic imaging , Mammography/methods , Retrospective Studies , Artificial Intelligence , Early Detection of Cancer/methods , Radiologists
2.
Sci Rep; 9(1): 15540, 2019 Oct 29.
Article in English | MEDLINE | ID: mdl-31664075

ABSTRACT

Recent advancements in deep learning for automated image processing and classification have accelerated many new applications for medical image analysis. However, most deep learning algorithms have been developed using reconstructed, human-interpretable medical images. While image reconstruction from raw sensor data is required for the creation of medical images, the reconstruction process uses only a partial representation of all the data acquired. Here, we report the development of a system to directly process raw computed tomography (CT) data in sinogram-space, bypassing the intermediary step of image reconstruction. Two classification tasks were evaluated for their feasibility of sinogram-space machine learning: body region identification and intracranial hemorrhage (ICH) detection. Our proposed SinoNet, a convolutional neural network optimized for interpreting sinograms, performed favorably compared to conventional reconstructed image-space-based systems for both tasks, regardless of scanning geometry (number of projections or detectors). Further, SinoNet performed significantly better than conventional networks operating in image-space when using sparsely sampled sinograms. As a result, sinogram-space algorithms could be used in field settings for triage (presence of ICH), especially where a low radiation dose is desired. These findings also demonstrate another strength of deep learning: the ability to analyze and interpret sinograms, a task that is virtually impossible for human experts.
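To make the sinogram-space idea concrete, the sketch below forward-projects an image into a sinogram with scikit-image's radon transform and passes it to a toy CNN classifier. This is an assumption-laden illustration: the tiny network is not SinoNet, and the random array stands in for a CT slice.

```python
# Minimal sketch: classify directly in sinogram space instead of image space.
# The random array stands in for a CT slice; the tiny CNN is not SinoNet.
import numpy as np
import torch
import torch.nn as nn
from skimage.transform import radon

ct_slice = np.random.rand(128, 128).astype(np.float32)        # placeholder CT slice
theta = np.linspace(0.0, 180.0, 180, endpoint=False)          # projection angles
sinogram = radon(ct_slice, theta=theta, circle=False)         # detectors x projections

toy_net = nn.Sequential(                                       # stand-in classifier
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 2),                                          # e.g., ICH present vs. absent
)
x = torch.from_numpy(sinogram.astype(np.float32)).unsqueeze(0).unsqueeze(0)
print(toy_net(x).shape)                                        # torch.Size([1, 2])
```

Sparse sampling, as discussed in the abstract, would correspond to using fewer angles in `theta`.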

3.
Nat Biomed Eng; 3(3): 173-182, 2019 Mar.
Article in English | MEDLINE | ID: mdl-30948806

ABSTRACT

Owing to improvements in image recognition via deep learning, machine-learning algorithms could eventually be applied to automated medical diagnoses that can guide clinical decision-making. However, these algorithms remain a 'black box' in terms of how they generate the predictions from the input data. Also, high-performance deep learning requires large, high-quality training datasets. Here, we report the development of an explainable deep-learning system that detects acute intracranial haemorrhage (ICH) and classifies five ICH subtypes from unenhanced head computed-tomography scans. By using a dataset of only 904 cases for algorithm training, the system achieved a performance similar to that of expert radiologists in two independent test datasets containing 200 cases (sensitivity of 98% and specificity of 95%) and 196 cases (sensitivity of 92% and specificity of 95%). The system includes an attention map and a prediction basis retrieved from training data to enhance explainability, and an iterative process that mimics the workflow of radiologists. Our approach to algorithm development can facilitate the development of deep-learning systems for a variety of clinical applications and accelerate their adoption into clinical practice.
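The "prediction basis retrieved from training data" can be thought of as nearest-neighbour retrieval in a learned feature space. The sketch below assumes feature vectors have already been extracted for the 904 training cases and for the query scan; the random embeddings and the top-5 cutoff are hypothetical, not the published method's details.

```python
# Minimal sketch: retrieve the most similar training cases as a "prediction basis".
# Embeddings are random placeholders; extraction of real features is assumed elsewhere.
import numpy as np

rng = np.random.default_rng(0)
train_features = rng.normal(size=(904, 512))   # one embedding per training case
query_feature = rng.normal(size=(512,))        # embedding of the scan being explained

# cosine similarity between the query and every training case
sims = train_features @ query_feature / (
    np.linalg.norm(train_features, axis=1) * np.linalg.norm(query_feature))
basis = np.argsort(sims)[::-1][:5]             # indices of the top-5 most similar cases
print("Prediction basis (training case indices):", basis)
```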


Subjects
Algorithms , Databases as Topic , Deep Learning , Intracranial Hemorrhages/diagnosis , Acute Disease , Intracranial Hemorrhages/diagnostic imaging
4.
Radiol Artif Intell; 1(4): e180066, 2019 Jul.
Article in English | MEDLINE | ID: mdl-33937795

ABSTRACT

PURPOSE: To investigate the diagnostic accuracy of a cascading convolutional neural network (CNN) for urinary stone detection on unenhanced CT images and to evaluate the performance of pretrained models enriched with labeled CT images across different scanners. MATERIALS AND METHODS: This HIPAA-compliant, institutional review board-approved, retrospective clinical study used unenhanced abdominopelvic CT scans from 535 adults suspected of having urolithiasis. The scans were obtained on two scanners (scanner 1 [hereafter S1] and scanner 2 [hereafter S2]). A radiologist reviewed clinical reports and labeled cases to establish the reference standard. Stones were present on 279 (S1, 131; S2, 148) and absent on 256 (S1, 158; S2, 98) scans. One hundred scans (50 from each scanner) were randomly reserved as the test dataset, and the rest were used for developing a cascade of two CNNs: the first CNN identified the extent of the urinary tract, and the second CNN detected the presence of stones. Nine model variations were developed through the combination of different training data sources (S1, S2, or both [hereafter SB]) with (ImageNet, GrayNet) and without (Random) pretrained CNNs. First, models were compared for generalizability at the section level. Second, models were assessed by using the area under the receiver operating characteristic curve (AUC) and accuracy at the patient level with the test dataset from both scanners (n = 100). RESULTS: The GrayNet-pretrained model showed higher classification accuracy than did the ImageNet-pretrained or Random-initialized models when tested with data from the same or a different scanner at the section level. At the patient level, the AUC for stone detection was 0.92-0.95, depending on the model. Accuracy of GrayNet-SB (95%) was higher than that of ImageNet-SB (91%) and Random-SB (88%). For stones larger than 4 mm, all models showed similar performance (false-negative results: two of 34). For stones smaller than 4 mm, the number of false-negative results for GrayNet-SB, ImageNet-SB, and Random-SB was one of 16, three of 16, and five of 16, respectively. GrayNet-SB identified stones in all 22 test cases that had obstructive uropathy. CONCLUSION: A cascading model of CNNs can detect urinary tract stones on unenhanced CT scans with high accuracy (AUC, 0.954). Performance and generalization of CNNs across scanners can be enhanced by using transfer learning with datasets enriched with labeled medical images. © RSNA, 2019. Supplemental material is available for this article. An earlier incorrect version appeared online; this article was corrected on August 6, 2019.
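The ImageNet/Random comparison described above amounts to initializing the same backbone with or without pretrained weights before fine-tuning. A minimal sketch with torchvision follows; the GrayNet weights are not publicly bundled here, so only the ImageNet and Random initializations are shown, and the ResNet-18 backbone with a two-class head is an illustrative assumption, not the paper's exact CNN.

```python
# Minimal sketch: ImageNet-pretrained vs. randomly initialized backbone for fine-tuning.
# ResNet-18 and the two-class head are illustrative choices, not the paper's exact CNN.
import torch.nn as nn
from torchvision import models

def build_stone_classifier(pretrained: bool) -> nn.Module:
    weights = models.ResNet18_Weights.IMAGENET1K_V1 if pretrained else None
    backbone = models.resnet18(weights=weights)
    backbone.fc = nn.Linear(backbone.fc.in_features, 2)    # stone present vs. absent
    return backbone

imagenet_model = build_stone_classifier(pretrained=True)   # "ImageNet" initialization
random_model = build_stone_classifier(pretrained=False)    # "Random" initialization
```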

5.
J Digit Imaging; 32(4): 665-671, 2019 Aug.
Article in English | MEDLINE | ID: mdl-30478479

ABSTRACT

Despite the well-established impact of sex and sex hormones on bone structure and density, sexual dimorphism of the hand and wrist has received limited description in the literature. We developed a deep convolutional neural network (CNN) model to predict sex based on hand radiographs of children and adults aged between 5 and 70 years. Of the 1531 radiographs tested, the algorithm predicted sex correctly in 95.9% (κ = 0.92) of the cases. Two human radiologists achieved 58% (κ = 0.15) and 46% (κ = -0.07) accuracy. The class activation maps (CAM) showed that the model mostly focused on the 2nd and 3rd metacarpal base or thumb sesamoid in women, and on the distal radioulnar joint, distal radial physis and epiphysis, or 3rd metacarpophalangeal joint in men. The radiologists reviewed 70 cases (35 females and 35 males) labeled with sex along with heat maps generated by CAM, but they could not find any patterns that distinguished the two sexes. A small sample of patients (n = 44) with disorders of sexual development or transgender identity was selected for a preliminary exploration of an application of the model. The model prediction agreed with phenotypic sex in only 77.8% (κ = 0.54) of these cases. To the best of our knowledge, this is the first study to demonstrate a machine learning model performing a task that human experts could not fulfill.
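For readers unfamiliar with the agreement statistic quoted above, the sketch below computes accuracy and Cohen's kappa for a binary sex prediction using scikit-learn; the labels are hypothetical.

```python
# Minimal sketch: accuracy and Cohen's kappa for sex prediction. Hypothetical labels.
from sklearn.metrics import accuracy_score, cohen_kappa_score

y_true = ["F", "M", "F", "M", "M", "F", "F", "M"]   # phenotypic sex
y_pred = ["F", "M", "F", "M", "F", "F", "F", "M"]   # model (or reader) prediction

print("accuracy:", accuracy_score(y_true, y_pred))
print("kappa:", cohen_kappa_score(y_true, y_pred))
```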


Subjects
Deep Learning , Hand/anatomy &amp; histology , Image Processing, Computer-Assisted/methods , Radiography/methods , Sex Characteristics , Wrist/anatomy &amp; histology , Adolescent , Adult , Aged , Child , Child, Preschool , Female , Humans , Male , Middle Aged , Young Adult
6.
Skeletal Radiol; 48(2): 275-283, 2019 Feb.
Article in English | MEDLINE | ID: mdl-30069585

ABSTRACT

OBJECTIVE: Radiographic bone age assessment (BAA) is used in the evaluation of pediatric endocrine and metabolic disorders. We previously developed an automated artificial intelligence (AI) deep learning algorithm to perform BAA using convolutional neural networks. We compared the BAA performance of a cohort of pediatric radiologists with and without AI assistance. MATERIALS AND METHODS: Six board-certified, subspecialty-trained pediatric radiologists interpreted 280 age- and gender-matched bone age radiographs ranging from 5 to 18 years. Three of those radiologists then performed BAA with AI assistance. Bone age accuracy and root mean squared error (RMSE) were used as measures of performance, and the intraclass correlation coefficient (ICC) was used to evaluate inter-rater variation. RESULTS: AI BAA accuracy was 68.2% overall and 98.6% within 1 year, while mean six-reader cohort accuracy was 63.6% overall and 97.4% within 1 year. AI RMSE was 0.601 years, while mean single-reader RMSE was 0.661 years. With AI assistance, pooled RMSE decreased from 0.661 to 0.508 years, with each reader's RMSE decreasing individually. ICC without AI was 0.9914 and with AI was 0.9951. CONCLUSIONS: AI improves radiologists' bone age assessment by increasing accuracy and decreasing variability and RMSE. The use of AI by radiologists improves performance compared with AI alone, a radiologist alone, or a pooled cohort of experts. This suggests that AI may be optimally utilized as an adjunct to radiologist interpretation of imaging studies to improve performance.
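The two performance measures above (RMSE and accuracy within a tolerance) are simple to compute from paired estimates; a minimal sketch with hypothetical bone ages follows.

```python
# Minimal sketch: RMSE and "accuracy within 1 year" for bone age estimates.
# The values are hypothetical, not study data.
import numpy as np

reference = np.array([10.0, 12.5, 8.0, 15.0, 6.5])    # reference bone ages (years)
estimates = np.array([10.4, 12.0, 8.9, 14.6, 6.4])    # reader or AI estimates (years)

rmse = np.sqrt(np.mean((estimates - reference) ** 2))
within_1yr = np.mean(np.abs(estimates - reference) <= 1.0)
print(f"RMSE: {rmse:.3f} years; accuracy within 1 year: {within_1yr:.1%}")
```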


Subjects
Age Determination by Skeleton/methods , Artificial Intelligence , Bone Diseases, Metabolic/diagnostic imaging , Radiographic Image Interpretation, Computer-Assisted/methods , Adolescent , Algorithms , Child , Child, Preschool , Deep Learning , Female , Humans , Male , Retrospective Studies
7.
J Digit Imaging; 31(4): 393-402, 2018 Aug.
Article in English | MEDLINE | ID: mdl-28983851

ABSTRACT

A peripherally inserted central catheter (PICC) is a thin catheter that is inserted via arm veins and threaded near the heart, providing intravenous access. The final catheter tip position is always confirmed on a chest radiograph (CXR) immediately after insertion, since malpositioned PICCs can cause potentially life-threatening complications. Although radiologists interpret PICC tip location with high accuracy, delays in interpretation can be significant. In this study, we proposed a fully automated deep-learning system with a cascading segmentation pipeline containing two fully convolutional neural networks for detecting a PICC line and its tip location. A preprocessing module performed image quality and dimension normalization, and a post-processing module located the PICC tip accurately by pruning false positives. Our best model, trained on 400 training cases and selectively tuned on 50 validation cases, obtained absolute distances from ground truth with a mean of 3.10 mm, a standard deviation of 2.03 mm, and a root mean square error (RMSE) of 3.71 mm on 150 held-out test cases. This system could help speed confirmation of PICC position and could be generalized to other types of vascular access and therapeutic support devices.
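The reported distance errors can be reproduced from predicted and annotated tip coordinates once the pixel spacing of the radiograph is known. The sketch below is illustrative; the coordinates and the 0.139 mm spacing are assumptions, not values from the paper.

```python
# Minimal sketch: PICC tip localization error in millimetres.
# Coordinates and pixel spacing are hypothetical, not values from the study.
import numpy as np

pixel_spacing_mm = 0.139                       # assumed isotropic CXR pixel spacing
tip_pred = np.array([1023.0, 1410.0])          # predicted tip (row, col) in pixels
tip_true = np.array([1030.0, 1395.0])          # annotated tip (row, col) in pixels

error_mm = np.linalg.norm(tip_pred - tip_true) * pixel_spacing_mm
print(f"tip localization error: {error_mm:.2f} mm")
```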


Subjects
Catheterization, Central Venous/methods , Catheterization, Peripheral/methods , Deep Learning , Pattern Recognition, Automated/methods , Radiography, Thoracic/methods , Central Venous Catheters , Databases, Factual , Electrocardiography/methods , Female , Humans , Male , Neural Networks, Computer , Patient Safety , Retrospective Studies
8.
J Digit Imaging; 30(4): 487-498, 2017 Aug.
Article in English | MEDLINE | ID: mdl-28653123

ABSTRACT

Pretreatment risk stratification is key for personalized medicine. While many physicians rely on an "eyeball test" to assess whether patients will tolerate major surgery or chemotherapy, "eyeballing" is inherently subjective and difficult to quantify. The concept of morphometric age derived from cross-sectional imaging has been found to correlate well with outcomes such as length of stay, morbidity, and mortality. However, the determination of morphometric age is time intensive and requires highly trained experts. In this study, we propose a fully automated deep learning system for the segmentation of skeletal muscle cross-sectional area (CSA) on an axial computed tomography image taken at the third lumbar vertebra. We utilized a fully automated deep segmentation model derived from an extended implementation of a fully convolutional network with weight initialization from an ImageNet pre-trained model, followed by post-processing to eliminate intramuscular fat for a more accurate analysis. This experiment was conducted by varying window level (WL), window width (WW), and bit resolution in order to better understand the effects of these parameters on model performance. Our best model, fine-tuned on 250 training images and ground truth labels, achieves 0.93 ± 0.02 Dice similarity coefficient (DSC) and 3.68 ± 2.29% difference between predicted and ground truth muscle CSA on 150 held-out test cases. Ultimately, the fully automated segmentation system can be embedded into the clinical environment to accelerate the quantification of muscle and extended to volume analysis of 3D datasets.
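The Dice similarity coefficient and the muscle CSA quoted above can both be derived from binary masks; the sketch below uses random placeholder masks and an assumed pixel spacing, purely for illustration.

```python
# Minimal sketch: Dice similarity coefficient and cross-sectional area from binary masks.
# Masks and pixel spacing are hypothetical placeholders, not study data.
import numpy as np

rng = np.random.default_rng(1)
pred_mask = rng.random((512, 512)) > 0.5       # predicted muscle mask
true_mask = rng.random((512, 512)) > 0.5       # manual reference mask
pixel_area_mm2 = 0.78 * 0.78                   # assumed in-plane pixel spacing (mm)

dice = 2 * np.logical_and(pred_mask, true_mask).sum() / (pred_mask.sum() + true_mask.sum())
csa_cm2 = pred_mask.sum() * pixel_area_mm2 / 100.0
print(f"DSC: {dice:.3f}; predicted CSA: {csa_cm2:.1f} cm^2")
```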


Subjects
Machine Learning , Muscle, Skeletal/diagnostic imaging , Tomography, X-Ray Computed/methods , Adipose Tissue/diagnostic imaging , Age Factors , Artificial Intelligence , Body Mass Index , Female , Humans , Length of Stay , Male , Middle Aged , Obesity , Sex Factors , Time Factors
9.
J Digit Imaging; 30(4): 427-441, 2017 Aug.
Article in English | MEDLINE | ID: mdl-28275919

ABSTRACT

Skeletal maturity progresses through discrete phases, a fact that is used routinely in pediatrics, where bone age assessments (BAAs) are compared to chronological age in the evaluation of endocrine and metabolic disorders. While central to many disease evaluations, this tedious process has changed little since its introduction in 1950. In this study, we propose a fully automated deep learning pipeline to segment a region of interest, standardize and preprocess input radiographs, and perform BAA. Our models use an ImageNet-pretrained, fine-tuned convolutional neural network (CNN) to achieve 57.32% and 61.40% accuracy for the female and male cohorts on our held-out test images. Female test radiographs were assigned a BAA within 1 year of the reference 90.39% of the time and within 2 years 98.11% of the time; male test radiographs were within 1 year 94.18% and within 2 years 99.00% of the time. Using the input occlusion method, attention maps were created that reveal which features the trained model uses to perform BAA; these correspond to what human experts look at when manually performing BAA. Finally, the fully automated BAA system was deployed in the clinical environment as a decision support system for more accurate and efficient BAAs at a much faster interpretation time (&lt;2 s) than the conventional method.
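The input occlusion method mentioned above slides a masking patch over the image and records how much the class score drops; the drop is largest over regions the model relies on. The sketch below uses a toy linear model and random input, not the published CNN.

```python
# Minimal sketch: an occlusion-based attention map. Toy model and random input only.
import numpy as np
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 2))    # stand-in classifier
image = torch.rand(1, 1, 64, 64)                               # placeholder radiograph
target_class, patch = 1, 8

with torch.no_grad():
    baseline = model(image)[0, target_class].item()
    heatmap = np.zeros((64 // patch, 64 // patch))
    for i in range(0, 64, patch):
        for j in range(0, 64, patch):
            occluded = image.clone()
            occluded[..., i:i + patch, j:j + patch] = 0.0      # mask out one patch
            score = model(occluded)[0, target_class].item()
            heatmap[i // patch, j // patch] = baseline - score # importance of the patch
print(np.round(heatmap, 3))
```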


Subjects
Age Determination by Skeleton/methods , Machine Learning , Neural Networks, Computer , Adolescent , Adult , Child , Decision Support Systems, Clinical , Female , Hand/diagnostic imaging , Humans , Male , Software