Results 1 - 20 of 118
1.
Eur Radiol ; 2024 May 17.
Article in English | MEDLINE | ID: mdl-38758252

ABSTRACT

INTRODUCTION: This study investigates the performance of a commercially available artificial intelligence (AI) system to identify normal chest radiographs and its potential to reduce radiologist workload. METHODS: Retrospective analysis included consecutive chest radiographs from two medical centers between Oct 1, 2016 and Oct 14, 2016. Exclusions comprised follow-up exams within the inclusion period, bedside radiographs, incomplete images, imported radiographs, and pediatric radiographs. Three chest radiologists categorized findings into normal, clinically irrelevant, clinically relevant, urgent, and critical. A commercial AI system processed all radiographs, scoring 10 chest abnormalities on a 0-100 confidence scale. AI system performance in detecting normal radiographs was evaluated using the area under the ROC curve (AUC). Sensitivity was calculated for the default and a conservative operating point. The negative predictive value (NPV) for urgent and critical findings, as well as the potential workload reduction, was also calculated. RESULTS: A total of 2603 radiographs were acquired in 2141 unique patients. Post-exclusion, 1670 radiographs were analyzed. Categories included 479 normal, 332 clinically irrelevant, 339 clinically relevant, 501 urgent, and 19 critical findings. The AI system achieved an AUC of 0.92. Sensitivity for normal radiographs was 92% at the default and 53% at the conservative operating point. At the conservative operating point, the NPV for urgent and critical findings was 98%, which could translate into a 15% workload reduction. CONCLUSION: A commercially available AI system effectively identifies normal chest radiographs and holds the potential to lessen radiologists' workload by omitting half of the normal exams from reporting.
CLINICAL RELEVANCE STATEMENT: The AI system is able to detect half of all normal chest radiographs at a clinically acceptable operating point, thereby potentially reducing the workload for radiologists by 15%. KEY POINTS: The AI system reached an AUC of 0.92 for the detection of normal chest radiographs. Fifty-three percent of normal chest radiographs were identified with an NPV of 98% for urgent findings. AI can reduce the workload of chest radiography reporting by 15%.
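The operating-point arithmetic in this abstract (sensitivity for normal exams, NPV for findings, workload reduction) follows directly from thresholding the AI confidence score. A minimal sketch on synthetic data (all scores, labels, and the threshold below are invented for illustration):

```python
import numpy as np

# Synthetic illustration: an AI "abnormality" confidence per radiograph
# on a 0-100 scale, and whether findings are actually present.
scores = np.array([5, 40, 8, 75, 15, 30, 2, 60, 85, 10])
abnormal = np.array([0, 0, 0, 1, 1, 1, 0, 1, 1, 0])  # 1 = any finding present

threshold = 20                      # operating point: below this => "normal"
called_normal = scores < threshold

# Sensitivity for normal exams: truly normal exams the AI flags as normal.
sens_normal = called_normal[abnormal == 0].mean()

# NPV for findings: among exams called normal, the fraction truly normal.
npv = (abnormal[called_normal] == 0).mean()

# Workload reduction: share of all exams omitted from the reading list.
workload_reduction = called_normal.mean()
```

Lowering the threshold makes the call more conservative: fewer exams are removed from the worklist, but the NPV for missed findings rises, which mirrors the trade-off between the default and conservative operating points in the study.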

2.
Eur Radiol ; 2024 May 09.
Article in English | MEDLINE | ID: mdl-38724765

ABSTRACT

OBJECTIVE: Deep learning (DL) MRI reconstruction enables fast scan acquisition with good visual quality, but the diagnostic impact is often not assessed because of large reader study requirements. This study used existing diagnostic DL to assess the diagnostic quality of reconstructed images. MATERIALS AND METHODS: A retrospective multisite study of 1535 patients assessed biparametric prostate MRI exams acquired between 2016 and 2020. Likely clinically significant prostate cancer (csPCa) lesions (PI-RADS ≥ 4) were delineated by expert radiologists. T2-weighted scans were retrospectively undersampled, simulating accelerated protocols. DL reconstruction (DLRecon) and diagnostic DL detection (DLDetect) models were developed. The effects on the partial area under the Free-Response Operating Characteristic (FROC) curve (pAUC) and on the structural similarity index (SSIM) were compared as metrics for diagnostic and visual quality, respectively. DLDetect was validated with a reader concordance analysis. Statistical analysis included Wilcoxon, permutation, and Cohen's kappa tests for visual quality, diagnostic performance, and reader concordance. RESULTS: DLRecon improved visual quality at 4- and 8-fold (R4, R8) subsampling rates, with SSIM (range: -1 to 1) improved to 0.78 ± 0.02 (p < 0.001) and 0.67 ± 0.03 (p < 0.001) from 0.68 ± 0.03 and 0.51 ± 0.03, respectively. However, diagnostic performance at R4 showed a pAUC FROC of 1.33 (CI 1.28-1.39) for DL and 1.29 (CI 1.23-1.35) for naive reconstructions, both significantly lower than the fully sampled pAUC of 1.58 (DL: p = 0.024, naive: p = 0.02). Similar trends were noted for R8. CONCLUSION: DL reconstruction produces visually appealing images but may reduce diagnostic accuracy. Incorporating diagnostic AI into the assessment framework offers a clinically relevant metric essential for adopting reconstruction models into clinical practice. CLINICAL RELEVANCE STATEMENT: In clinical settings, caution is warranted when using DL reconstruction for MRI scans.
While it recovered visual quality, it failed to match the prostate cancer detection rates observed in scans not subjected to acceleration and DL reconstruction.
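The SSIM metric used here to quantify visual quality can be sketched in a few lines. The published metric is usually computed over local windows and averaged; the single-window (global) form below keeps the formula itself visible. All images are synthetic:

```python
import numpy as np

def global_ssim(x, y, data_range=1.0):
    """Single-window SSIM over the whole image: a product of luminance,
    contrast, and structure comparisons with small stabilizing constants."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(0)
img = rng.random((64, 64))          # stand-in for a fully sampled image
noisy = np.clip(img + 0.1 * rng.standard_normal((64, 64)), 0, 1)
```

An image compared with itself yields SSIM = 1; degrading it lowers the score. As the abstract notes, a good SSIM does not by itself guarantee preserved diagnostic content.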

3.
Eur J Radiol Open ; 12: 100545, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38293282

ABSTRACT

Purpose: To evaluate artificial intelligence-based computer-aided diagnosis (AI-CAD) for screening mammography, we analyzed the diagnostic performance of radiologists by alternately providing and withholding AI-CAD results every month. Methods: This retrospective study was approved by the institutional review board with a waiver for informed consent. Between August 2020 and May 2022, 1819 consecutive women (mean age 50.8 ± 9.4 years) with 2061 screening mammography and same-day ultrasound examinations performed in a single institution were included. Radiologists interpreted screening mammography in clinical practice with AI-CAD results provided or withheld in alternating months. The AI-CAD results were retrospectively obtained for analysis even when withheld from radiologists. The diagnostic performances of radiologists and stand-alone AI-CAD were compared, and the performances of radiologists with and without AI-CAD assistance were also compared, by cancer detection rate, recall rate, sensitivity, specificity, accuracy, and area under the receiver-operating-characteristic curve (AUC). Results: Twenty-nine breast cancer patients and 1790 women without cancer were included. The diagnostic performances of the radiologists did not significantly differ with and without AI-CAD assistance. Radiologists with AI-CAD assistance showed the same sensitivity (76.5%) and similar specificity (92.3% vs 93.8%), AUC (0.844 vs 0.851), and recall rates (8.8% vs. 7.4%) compared to stand-alone AI-CAD. Radiologists without AI-CAD assistance showed lower specificity (91.9% vs 94.6%) and accuracy (91.5% vs 94.1%) and higher recall rates (8.6% vs 5.9%, all p < 0.05) compared to stand-alone AI-CAD. Conclusion: Radiologists showed no significant difference in diagnostic performance when both screening mammography and ultrasound were performed with or without AI-CAD assistance for mammography.
However, without AI-CAD assistance, radiologists showed lower specificity and accuracy and higher recall rates compared to stand-alone AI-CAD.
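The AUC reported throughout these studies has a simple rank-based interpretation: the probability that a randomly chosen positive case scores higher than a randomly chosen negative case. A minimal sketch on invented reader scores:

```python
import numpy as np

def auc_from_scores(scores, labels):
    """AUC via the Mann-Whitney statistic: the probability that a random
    positive case outscores a random negative case (ties count half)."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, int)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Invented suspicion scores: label 1 = cancer, 0 = no cancer.
auc = auc_from_scores([0.9, 0.8, 0.7, 0.6, 0.55, 0.5], [1, 1, 0, 1, 0, 0])
```

Here 8 of the 9 positive-negative pairs are ranked correctly, giving an AUC of 8/9 ≈ 0.89; perfect separation would give 1.0.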

4.
Jpn J Radiol ; 42(1): 69-77, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37561264

ABSTRACT

PURPOSE: Imaging diagnosis of stapes fixation (SF) is challenging owing to a lack of definite evidence. We developed a comprehensive machine learning (ML) model to identify SF on ultra-high-resolution CT. MATERIALS AND METHODS: We retrospectively enrolled 109 participants (143 ears) and divided them into a training set (115 ears) and a test set (28 ears). Stapes mobility (SF or non-SF) was determined by surgical inspection. In the ML analysis, rectangular regions of interest were placed on consecutive axial slices in the training set. Radiomic features were extracted and fed into the training session. The test set was analyzed using 7 ML models (support vector machine, k-nearest neighbors, decision tree, random forest, extra trees, eXtreme Gradient Boosting, and Light Gradient Boosting Machine) and by 2 dedicated neuroradiologists. Diagnostic performance (sensitivity, specificity, and accuracy, with surgical findings as the reference) was compared between the radiologists and the optimal ML model by using the McNemar test. RESULTS: The mean age of the participants was 42.3 ± 17.5 years. The Light Gradient Boosting Machine (LightGBM) model showed the highest sensitivity (0.83), specificity (0.81), accuracy (0.82), and area under the curve (0.88) for detecting SF among the 7 ML models. The neuroradiologists achieved good sensitivities (0.75 and 0.67), moderate-to-good specificities (0.63 and 0.56), and good accuracies (0.68 and 0.61). The model showed no statistically significant differences from the neuroradiologists (P values 0.289-1.000). CONCLUSIONS: Compared to the neuroradiologists, the LightGBM model achieved competitive diagnostic performance in identifying SF, and has the potential to be a supportive tool in clinical practice.


Subject(s)
Machine Learning , Stapes , Humans , Young Adult , Adult , Middle Aged , Retrospective Studies , Stapes/diagnostic imaging , Radiologists , Tomography, X-Ray Computed
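The McNemar test used to compare the LightGBM model with the neuroradiologists depends only on the discordant pairs: cases where exactly one of the two raters is correct. A minimal sketch on invented per-case correctness flags, using the continuity-corrected chi-square form (the study may have used a different variant of the test):

```python
import numpy as np
from math import erfc, sqrt

def mcnemar(correct_a, correct_b):
    """Continuity-corrected McNemar chi-square on paired correctness flags."""
    correct_a = np.asarray(correct_a, bool)
    correct_b = np.asarray(correct_b, bool)
    b = int(np.sum(correct_a & ~correct_b))   # A right, B wrong
    c = int(np.sum(~correct_a & correct_b))   # A wrong, B right
    stat = (abs(b - c) - 1) ** 2 / (b + c)
    p = erfc(sqrt(stat / 2))                  # chi-square (1 df) tail probability
    return stat, p

# Invented per-ear correctness for a model (first) and a radiologist (second).
stat, p = mcnemar([1, 1, 1, 1, 1, 1, 0, 0, 0, 0],
                  [1, 1, 1, 0, 0, 0, 1, 0, 0, 0])
```

Concordant cases (both right or both wrong) drop out of the statistic entirely, which is why paired designs like this one need far fewer cases than comparing two unpaired accuracy estimates.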
5.
Eur Radiol ; 33(11): 8241-8250, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37572190

ABSTRACT

OBJECTIVES: To assess whether a computer-aided detection (CADe) system could serve as a learning tool for radiology residents in chest X-ray (CXR) interpretation. METHODS: Eight radiology residents were asked to interpret 500 CXRs for the detection of five abnormalities, namely pneumothorax, pleural effusion, alveolar syndrome, lung nodule, and mediastinal mass. After interpreting 150 CXRs, the residents were divided into 2 groups of equivalent performance and experience. Subsequently, group 1 interpreted 200 CXRs from the "intervention dataset" using a CADe as a second reader, while group 2 served as a control by interpreting the same CXRs without the use of CADe. Finally, the 2 groups interpreted another 150 CXRs without the use of CADe. The sensitivity, specificity, and accuracy before, during, and after the intervention were compared. RESULTS: Before the intervention, the median individual sensitivity, specificity, and accuracy of the eight radiology residents were 43% (range: 35-57%), 90% (range: 82-96%), and 81% (range: 76-84%), respectively. With the use of CADe, residents from group 1 had a significantly higher overall sensitivity (53% [n = 431/816] vs 43% [n = 349/816], p < 0.001), specificity (94% [n = 3206/3428] vs 90% [n = 3127/3477], p < 0.001), and accuracy (86% [n = 3637/4244] vs 81% [n = 3476/4293], p < 0.001) compared to the control group. After the intervention, there were no significant differences between group 1 and group 2 regarding the overall sensitivity (44% [n = 309/696] vs 46% [n = 317/696], p = 0.666), specificity (90% [n = 2294/2541] vs 90% [n = 2285/2542], p = 0.642), or accuracy (80% [n = 2603/3237] vs 80% [n = 2602/3238], p = 0.955). CONCLUSIONS: Although it improved radiology residents' performance in interpreting CXRs, a CADe system alone did not appear to be an effective learning tool and should not replace teaching.
CLINICAL RELEVANCE STATEMENT: Although the use of artificial intelligence improves radiology residents' performance in chest X-ray interpretation, artificial intelligence cannot be used alone as a learning tool and should not replace dedicated teaching. KEY POINTS: • With CADe as a second reader, residents had a significantly higher sensitivity (53% vs 43%, p < 0.001), specificity (94% vs 90%, p < 0.001), and accuracy (86% vs 81%, p < 0.001) compared to residents without CADe. • After removing access to the CADe system, residents' sensitivity (44% vs 46%, p = 0.666), specificity (90% vs 90%, p = 0.642), and accuracy (80% vs 80%, p = 0.955) returned to the level of the group without CADe.


Subject(s)
Artificial Intelligence , Internship and Residency , Humans , X-Rays , Radiography, Thoracic , Radiography
6.
J Digit Imaging ; 36(5): 1965-1973, 2023 10.
Article in English | MEDLINE | ID: mdl-37326891

ABSTRACT

To evaluate the consistency in the performance of artificial intelligence (AI)-based diagnostic support software in short-term digital mammography reimaging after core needle biopsy. Of 276 women who underwent short-term (<3 mo) serial digital mammograms followed by breast cancer surgery from Jan. to Dec. 2017, 550 breasts were included. All core needle biopsies for breast lesions were performed between the serial exams. All mammography images were analyzed using commercially available AI-based software providing an abnormality score (0-100). Demographic data for age, interval between serial exams, biopsy, and final diagnosis were compiled. Mammograms were reviewed for mammographic density and findings. Statistical analysis was performed to evaluate the distribution of variables according to biopsy and to test the interaction effects of variables with the difference in AI-based score according to biopsy. The AI-based scores of 550 exams (benign or normal in 263 and malignant in 287) showed a significant difference between malignant and benign/normal exams (0.48 vs. 91.97 in the first exam and 0.62 vs. 87.13 in the second exam, P<0.0001). In a comparison of serial exams, no significant difference was found in the AI-based score. The AI-based score difference between serial exams differed significantly according to whether biopsy was performed (-0.25 vs. 0.07, P = 0.035). In linear regression analysis, there was no significant interaction effect of any clinical or mammographic characteristic with whether mammographic examinations were performed after biopsy. The results from the AI-based diagnostic support software for digital mammography were relatively consistent in short-term reimaging, even after core needle biopsy.


Subject(s)
Artificial Intelligence , Breast Neoplasms , Female , Humans , Biopsy, Large-Core Needle , Mammography/methods , Breast Neoplasms/diagnostic imaging , Breast Neoplasms/pathology , Software , Retrospective Studies
7.
Article in Chinese | MEDLINE | ID: mdl-37006142

ABSTRACT

Objective: To construct and verify a light-weighted convolutional neural network (CNN), and explore its application value for screening the early stage (subcategory 0/1 and stage Ⅰ of pneumoconiosis) of coal workers' pneumoconiosis (CWP) from digital chest radiography (DR). Methods: A total of 1225 DR images of coal workers who were examined at an Occupational Disease Prevention and Control Institute in Anhui Province from October 2018 to March 2021 were retrospectively collected. All DR images were collectively diagnosed by 3 radiologists with diagnostic qualifications, who gave the diagnostic results. There were 692 DR images with small opacity profusion 0/- or 0/0 and 533 DR images with small opacity profusion 0/1 to stage Ⅲ of pneumoconiosis. The original chest radiographs were preprocessed differently to generate four datasets, namely the 16-bit grayscale original image set (Origin16), 8-bit grayscale original image set (Origin8), 16-bit grayscale histogram-equalized image set (HE16), and 8-bit grayscale histogram-equalized image set (HE8). The light-weighted CNN, ShuffleNet, was applied to train a prediction model on each of the four datasets separately. The performance of the four models for pneumoconiosis prediction was evaluated on a test set containing 130 DR images using measures such as the receiver operating characteristic (ROC) curve, accuracy, sensitivity, specificity, and Youden index. The Kappa consistency test was used to compare the agreement between the model predictions and the physician-diagnosed pneumoconiosis results. Results: The Origin16 model achieved the highest ROC area under the curve (AUC = 0.958), accuracy (92.3%), specificity (92.9%), and Youden index (0.8452) for predicting pneumoconiosis, with a sensitivity of 91.7%. The highest consistency between model identification and physician diagnosis was also observed for the Origin16 model (Kappa value 0.845, 95%CI: 0.753-0.937, P < 0.001). The HE16 model had the highest sensitivity (98.3%).
Conclusion: The light-weighted CNN ShuffleNet model can efficiently identify the early stages of CWP, and its application in the early screening of CWP can effectively improve physicians' work efficiency.


Subject(s)
Anthracosis , Coal Mining , Pneumoconiosis , Humans , Retrospective Studies , Anthracosis/diagnostic imaging , Pneumoconiosis/diagnostic imaging , Neural Networks, Computer , Coal
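The Youden index and Cohen's kappa reported for the Origin16 model are both short computations. A sketch of each, with the kappa ratings invented for illustration:

```python
import numpy as np

def youden_j(sens, spec):
    """Youden index J = sensitivity + specificity - 1."""
    return sens + spec - 1

def cohens_kappa(a, b):
    """Cohen's kappa for two binary ratings: observed agreement corrected
    for the agreement expected by chance from the marginal rates."""
    a = np.asarray(a, int)
    b = np.asarray(b, int)
    po = (a == b).mean()
    pe = a.mean() * b.mean() + (1 - a.mean()) * (1 - b.mean())
    return (po - pe) / (1 - pe)

# Reported Origin16 sensitivity (91.7%) and specificity (92.9%) give
# J = 0.846, consistent with the reported 0.8452 up to rounding.
j = youden_j(0.917, 0.929)

# Invented model-vs-physician binary labels for the kappa illustration.
kappa = cohens_kappa([1, 1, 1, 0, 0, 0, 1, 0], [1, 1, 0, 0, 0, 0, 1, 1])
```

Kappa of 0 means chance-level agreement and 1 means perfect agreement, so the reported 0.845 indicates strong model-physician concordance.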
8.
Imaging Sci Dent ; 53(1): 43-51, 2023 Mar.
Article in English | MEDLINE | ID: mdl-37006790

ABSTRACT

Purpose: This study aimed to assess texture analysis (TA) of cone-beam computed tomography (CBCT) images as a quantitative tool for the differential diagnosis of odontogenic and non-odontogenic maxillary sinusitis (OS and NOS, respectively). Materials and Methods: CBCT images of 40 patients diagnosed with OS (N=20) and NOS (N=20) were evaluated. Gray-level co-occurrence matrix (GLCM) parameters and gray-level run length matrix (GLRLM) texture parameters were extracted using manually placed regions of interest on lesion images. Seven texture parameters were calculated using the GLCM and 4 parameters using the GLRLM. The Mann-Whitney test was used for comparisons between the groups, and the Levene test was performed to confirm the homogeneity of variance (α=5%). Results: The results showed statistically significant differences (P<0.05) between the OS and NOS patients regarding 3 TA parameters. NOS patients presented higher values for contrast, while OS patients presented higher values for correlation and inverse difference moment. Greater textural homogeneity was observed in the OS patients than in the NOS patients, with statistically significant differences in standard deviations between the groups for correlation, sum of squares, sum entropy, and entropy. Conclusion: TA enabled quantitative differentiation between OS and NOS on CBCT images by using the parameters of contrast, correlation, and inverse difference moment.
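The GLCM parameters compared here (contrast, inverse difference moment) are derived from a co-occurrence matrix of gray-level pairs at a fixed pixel offset. A minimal numpy sketch, where tiny synthetic images stand in for the CBCT regions of interest (libraries such as scikit-image provide optimized equivalents):

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Gray-level co-occurrence matrix for one pixel offset, normalized
    so entries are joint probabilities p(i, j)."""
    h, w = img.shape
    p = np.zeros((levels, levels))
    for y in range(h - dy):
        for x in range(w - dx):
            p[img[y, x], img[y + dy, x + dx]] += 1
    return p / p.sum()

def contrast_and_idm(p):
    """Contrast and inverse difference moment (homogeneity) from a GLCM."""
    i, j = np.indices(p.shape)
    contrast = (p * (i - j) ** 2).sum()
    idm = (p / (1 + (i - j) ** 2)).sum()
    return contrast, idm

flat = np.zeros((4, 4), int)           # perfectly homogeneous region
stripes = np.array([[0, 1], [0, 1]])   # alternating gray levels

c_flat, idm_flat = contrast_and_idm(glcm(flat, 2))
c_str, idm_str = contrast_and_idm(glcm(stripes, 2))
```

In this toy case the homogeneous region has zero contrast and maximal inverse difference moment, mirroring the greater textural homogeneity reported for the OS patients.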

9.
Eur Radiol ; 33(8): 5871-5881, 2023 Aug.
Article in English | MEDLINE | ID: mdl-36735040

ABSTRACT

OBJECTIVE: To develop and investigate a deep learning model with data integration of ultrasound contrast-enhanced micro-flow (CEMF) cines, B-mode images, and patients' clinical parameters to improve the diagnosis of significant liver fibrosis (≥ F2) in patients with chronic hepatitis B (CHB). METHODS: Of 682 CHB patients who underwent ultrasound and histopathological examinations between October 2016 and May 2020, 218 subjects were included in this retrospective study. We devised a data integration-based deep learning (DIDL) model for assessing ≥ F2 in CHB patients. The model contained three convolutional neural network branches to automatically extract features from ultrasound CEMF cines, B-mode images, and clinical data. The extracted features were fused at the backend of the model for decision-making. The diagnostic performance was evaluated across fivefold cross-validation and compared against the other methods in terms of the area under the receiver operating characteristic curve (AUC), with histopathological results as the reference standard. RESULTS: The mean AUC achieved by the DIDL model was 0.901 [95% CI, 0.857-0.939], which was significantly higher than those of the comparative methods, including the models trained by using only CEMF cines (0.850 [0.794-0.893]), B-mode images (0.813 [0.754-0.862]), or clinical data (0.757 [0.694-0.812]), as well as the conventional TIC method (0.752 [0.689-0.808]), APRI (0.792 [0.734-0.845]), FIB-4 (0.776 [0.714-0.829]), and visual assessments of two radiologists (0.812 [0.754-0.862], and 0.800 [0.739-0.849]), all ps < 0.01, DeLong test. CONCLUSION: The DIDL model with data integration of ultrasound CEMF cines, B-mode images, and clinical parameters showed promising performance in diagnosing significant liver fibrosis for CHB patients. 
KEY POINTS: • The combined use of ultrasound contrast-enhanced micro-flow cines, B-mode images, and clinical data in a deep learning model has the potential to improve the diagnosis of significant liver fibrosis. • The deep learning model with the fusion of features extracted from multimodality data outperformed the conventional methods, including mono-modality data-based models, the time-intensity curve-based recognizer, fibrosis biomarkers, and visual assessments by experienced radiologists. • The interpretation of the feature attention maps in the deep learning model may help radiologists gain a better understanding of liver fibrosis-related features, potentially enhancing their diagnostic capacities.


Subject(s)
Deep Learning , Hepatitis B, Chronic , Humans , Hepatitis B, Chronic/complications , Hepatitis B, Chronic/pathology , Retrospective Studies , Liver Cirrhosis/pathology , Ultrasonography , Contrast Media , Liver/diagnostic imaging
10.
Eur Radiol ; 33(7): 5077-5086, 2023 Jul.
Article in English | MEDLINE | ID: mdl-36729173

ABSTRACT

This statement from the European Society of Thoracic Imaging (ESTI) explains and summarises the essentials for understanding and implementing artificial intelligence (AI) in clinical practice in thoracic radiology departments. This document discusses the current AI scientific evidence in thoracic imaging, its potential clinical utility, implementation and costs, training requirements and validation, its effect on the training of new radiologists, post-implementation issues, and medico-legal and ethical issues. All these issues have to be addressed and overcome for AI to become clinically implemented in thoracic radiology. KEY POINTS: • Assessing the datasets used for training and validation of the AI system is essential. • A departmental strategy and business plan which includes continuing quality assurance of the AI system and a sustainable financial plan is important for successful implementation. • Awareness of the negative effect on the training of new radiologists is vital.


Subject(s)
Artificial Intelligence , Radiology , Humans , Radiology/methods , Radiologists , Radiography, Thoracic , Societies, Medical
11.
Eur Radiol ; 33(7): 5087-5096, 2023 Jul.
Article in English | MEDLINE | ID: mdl-36690774

ABSTRACT

OBJECTIVE: Automatic MR imaging segmentation of the prostate provides relevant clinical benefits for prostate cancer evaluation, such as calculation of automated PSA density and other critical imaging biomarkers. Further, automated T2-weighted image segmentation of the central-transition zone (CZ-TZ), peripheral zone (PZ), and seminal vesicles (SV) can help to evaluate clinically significant cancer following the PI-RADS v2.1 guidelines. Therefore, the main objective of this work was to develop a robust and reproducible CNN-based automatic prostate multi-regional segmentation model using an intercontinental cohort of prostate MRI. METHODS: A heterogeneous database of 243 T2-weighted prostate studies from 7 countries and 10 machines of 3 different vendors, with the CZ-TZ, PZ, and SV regions manually delineated by two experienced radiologists (ground truth), was used to train (n = 123) and test (n = 120) a U-Net-based model with deep supervision using a cyclical learning rate. The performance of the model was evaluated by means of the dice similarity coefficient (DSC), among other metrics. Segmentation results with a DSC above 0.7 were considered accurate. RESULTS: The proposed method obtained a DSC of 0.88 ± 0.01, 0.85 ± 0.02, 0.72 ± 0.02, and 0.72 ± 0.02 for the prostate gland, CZ-TZ, PZ, and SV, respectively, in the 120 studies of the test set when comparing the predicted segmentations with the ground truth. No statistically significant differences were found in the results obtained between manufacturers or continents. CONCLUSION: Automatic multi-regional segmentation of the prostate on T2-weighted MR images can be accurately achieved by a U-Net-like CNN that generalizes across a highly variable clinical environment with different equipment, acquisition configurations, and populations. KEY POINTS: • Deep learning techniques allow the accurate segmentation of the prostate in three different regions on T2w MR images.
• The multi-centric database demonstrated the generalization of the CNN model across institutions on different continents. • CNN models can be used to aid in the diagnosis and follow-up of patients with prostate cancer.


Subject(s)
Magnetic Resonance Imaging , Prostatic Neoplasms , Male , Humans , Magnetic Resonance Imaging/methods , Prostate/diagnostic imaging , Prostate/pathology , Prostatic Neoplasms/diagnostic imaging , Prostatic Neoplasms/pathology , Neural Networks, Computer , Magnetic Resonance Spectroscopy , Image Processing, Computer-Assisted/methods
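The dice similarity coefficient (DSC) used as the accuracy criterion above is a direct overlap measure between predicted and ground-truth masks. A minimal sketch on synthetic binary masks:

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity coefficient 2|A∩B| / (|A| + |B|) for binary masks."""
    pred = np.asarray(pred, bool)
    gt = np.asarray(gt, bool)
    inter = np.logical_and(pred, gt).sum()
    return 2 * inter / (pred.sum() + gt.sum())

# Synthetic masks: the prediction covers the whole 4x4 patch,
# the ground truth covers only the top half.
pred = np.ones((4, 4))
gt = np.zeros((4, 4))
gt[:2] = 1
d = dice(pred, gt)
```

Identical masks give a DSC of 1, disjoint masks give 0, and the half-overlap toy case above lands at 2/3, just below the study's 0.7 threshold for an "accurate" segmentation.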
12.
Eur Radiol ; 33(1): 360-367, 2023 Jan.
Article in English | MEDLINE | ID: mdl-35779087

ABSTRACT

OBJECTIVES: Content-based image retrieval systems (CBIRS) are a new and potentially impactful tool for radiological reporting, but their clinical evaluation is largely missing. This study aimed at assessing the effect of a CBIRS on the interpretation of chest CT scans from patients with suspected diffuse parenchymal lung disease (DPLD). MATERIALS AND METHODS: A total of 108 retrospectively included chest CT scans with 22 unique, clinically and/or histopathologically verified diagnoses were read by eight radiologists (four residents and four attendings, with a median of 2.1 ± 0.7 and 12 ± 1.8 years reading chest CT scans, respectively). The radiologists read and provided the suspected diagnosis at a certified radiological workstation to simulate clinical routine. Half of the readings were done without the CBIRS and half with the additional support of the CBIRS. The CBIRS retrieved the most likely of 19 lung-specific patterns from a large database of 6542 thin-section CT scans and provided relevant information (e.g., a list of potential differential diagnoses). RESULTS: Reading time decreased by 31.3% (p < 0.001) despite the radiologists searching for additional information more frequently when the CBIRS was available (154 [72%] vs. 95 [43%], p < 0.001). There was a trend towards higher overall diagnostic accuracy (42.2% vs 34.7%, p = 0.083) when the CBIRS was available. CONCLUSION: The use of the CBIRS had a beneficial impact on the reading time of chest CT scans in cases with DPLD. In addition, both resident and attending radiologists were more likely to consult informational resources if they had access to the CBIRS. Further studies are needed to confirm the observed trend towards increased diagnostic accuracy with the use of a CBIRS in practice. KEY POINTS: • A content-based image retrieval system for supporting the diagnostic process of reading chest CT scans can decrease reading time by 31.3% (p < 0.001).
• The decrease in reading time was present despite frequent usage of the content-based image retrieval system. • Additionally, a trend towards higher diagnostic accuracy was observed when using the content-based image retrieval system (42.2% vs 34.7%, p = 0.083).


Subject(s)
Lung Diseases, Interstitial , Lung Neoplasms , Humans , Retrospective Studies , Tomography, X-Ray Computed/methods , Thorax
13.
Eur Radiol ; 33(6): 4303-4312, 2023 Jun.
Article in English | MEDLINE | ID: mdl-36576543

ABSTRACT

OBJECTIVES: Lymph node (LN) metastasis is a common cause of recurrence in oral cancer; however, the accuracy of distinguishing positive and negative LNs is not ideal. Here, we aimed to develop a deep learning model that can identify, locate, and distinguish LNs in contrast-enhanced CT (CECT) images with a higher accuracy. METHODS: The preoperative CECT images and corresponding postoperative pathological diagnoses of 1466 patients with oral cancer from our hospital were retrospectively collected. In stage I, full-layer images (five common anatomical structures) were labeled; in stage II, negative and positive LNs were separately labeled. Following the idea of transfer learning (TL), the stage I model was employed to initialize stage II training and improve accuracy. The Mask R-CNN instance segmentation framework was selected for model construction and training. The accuracy of the model was compared with that of human observers. RESULTS: A total of 5412 images and 5601 images were labeled in stages I and II, respectively. The stage I model achieved an excellent segmentation effect in the test set (AP50 = 0.7249). The positive-LN accuracy of the stage II TL model was similar to that of the radiologist and much higher than that of the surgeons and students (0.7042 vs. 0.7647 (p = 0.243), 0.4216 (p < 0.001), and 0.3629 (p < 0.001)). The clinical accuracy of the model was the highest (0.8509 vs. 0.8000, 0.5500, 0.4500, and 0.6658 for the Radiology Department). CONCLUSIONS: The model was constructed using a deep neural network and had high accuracy in LN localization and metastasis discrimination, which could contribute to accurate diagnosis and customized treatment planning. KEY POINTS: • Lymph node metastasis is not well recognized with modern medical imaging tools. • Transfer learning can improve the accuracy of deep learning model prediction. • Deep learning can aid the accurate identification of lymph node metastasis.


Subject(s)
Deep Learning , Mouth Neoplasms , Humans , Retrospective Studies , Lymphatic Metastasis/diagnostic imaging , Mouth Neoplasms/diagnostic imaging , Tomography, X-Ray Computed/methods , Lymph Nodes/diagnostic imaging
14.
Ann Biomed Eng ; 51(3): 517-526, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36036857

ABSTRACT

This study proposes a new diagnostic tool for automatically extracting discriminative features and accurately detecting temporomandibular joint disc displacement (TMJDD) with artificial intelligence. We analyzed the structural magnetic resonance imaging (MRI) images of 52 patients with TMJDD and 32 healthy controls. The data were split into training and test sets, and only the training sets were used for model construction. U-Net was trained with 100 sagittal MRI images of the TMJ to detect the joint cavity between the temporal bone and the mandibular condyle, which was used as the region of interest, and the images were classified into binary categories using four convolutional neural networks: InceptionResNetV2, InceptionV3, DenseNet169, and VGG16. The best models were InceptionV3 and DenseNet169; the recall, precision, accuracy, and F1 score of InceptionV3 were 1, 0.81, 0.85, and 0.9, respectively, and the corresponding results of DenseNet169 were 0.92, 0.86, 0.85, and 0.89, respectively. Automated detection of TMJDD from sagittal MRI images using deep learning neural networks is a promising technique that can support clinicians in diagnosing TMJDD.


Subject(s)
Artificial Intelligence , Temporomandibular Joint Disorders , Humans , Temporomandibular Joint Disorders/diagnostic imaging , Temporomandibular Joint Disorders/pathology , Temporomandibular Joint/diagnostic imaging , Temporomandibular Joint/pathology , Mandibular Condyle/pathology , Magnetic Resonance Imaging/methods
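The recall, precision, and F1 figures quoted for InceptionV3 follow from the standard confusion-matrix definitions. A sketch with hypothetical counts chosen to be consistent with the reported rates (the actual test-set counts are not given in the abstract):

```python
def prf(tp, fp, fn):
    """Precision, recall, and F1 from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical counts (not from the paper) matching recall = 1 and
# precision ≈ 0.81: 13 true positives, 3 false positives, 0 false negatives.
precision, recall, f1 = prf(tp=13, fp=3, fn=0)
```

With these counts the harmonic mean of precision (0.8125) and recall (1.0) reproduces the reported F1 of 0.9 after rounding.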
15.
Article in Chinese | WPRIM (Western Pacific) | ID: wpr-970734

ABSTRACT

Objective: To construct and verify a light-weighted convolutional neural network (CNN), and explore its application value for screening the early stage (subcategory 0/1 and stage Ⅰ of pneumoconiosis) of coal workers' pneumoconiosis (CWP) from digital chest radiography (DR). Methods: A total of 1225 DR images of coal workers who were examined at an Occupational Disease Prevention and Control Institute in Anhui Province from October 2018 to March 2021 were retrospectively collected. All DR images were collectively diagnosed by 3 radiologists with diagnostic qualifications, who gave the diagnostic results. There were 692 DR images with small opacity profusion 0/- or 0/0 and 533 DR images with small opacity profusion 0/1 to stage Ⅲ of pneumoconiosis. The original chest radiographs were preprocessed differently to generate four datasets, namely the 16-bit grayscale original image set (Origin16), 8-bit grayscale original image set (Origin8), 16-bit grayscale histogram-equalized image set (HE16), and 8-bit grayscale histogram-equalized image set (HE8). The light-weighted CNN, ShuffleNet, was applied to train a prediction model on each of the four datasets separately. The performance of the four models for pneumoconiosis prediction was evaluated on a test set containing 130 DR images using measures such as the receiver operating characteristic (ROC) curve, accuracy, sensitivity, specificity, and Youden index. The Kappa consistency test was used to compare the agreement between the model predictions and the physician-diagnosed pneumoconiosis results. Results: The Origin16 model achieved the highest ROC area under the curve (AUC = 0.958), accuracy (92.3%), specificity (92.9%), and Youden index (0.8452) for predicting pneumoconiosis, with a sensitivity of 91.7%. The highest consistency between model identification and physician diagnosis was also observed for the Origin16 model (Kappa value 0.845, 95%CI: 0.753-0.937, P < 0.001). The HE16 model had the highest sensitivity (98.3%).
Conclusion: The lightweight CNN ShuffleNet model can efficiently identify the early stages of CWP, and its application in early CWP screening can effectively improve physicians' work efficiency.
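The HE16/HE8 preprocessing mentioned in the Methods amounts to histogram equalization over the image's full bit depth. A minimal NumPy sketch follows; the bin counts and remapping are assumptions, since the abstract does not give implementation details:

```python
import numpy as np

def equalize_histogram(img: np.ndarray, n_levels: int) -> np.ndarray:
    """Remap intensities through the normalized cumulative histogram
    (assumed stand-in for the HE16/HE8 preprocessing; details unstated)."""
    hist, _ = np.histogram(img.ravel(), bins=n_levels, range=(0, n_levels))
    cdf = hist.cumsum().astype(np.float64)
    cdf /= cdf[-1]                          # normalize CDF to [0, 1]
    equalized = cdf[img] * (n_levels - 1)   # spread intensities over full range
    return equalized.astype(img.dtype)

# Simulated 16-bit DR image -> HE16; drop to 8 bits first for HE8.
rng = np.random.default_rng(0)
dr16 = rng.integers(0, 65536, size=(64, 64), dtype=np.uint16)
he16 = equalize_histogram(dr16, 65536)
he8 = equalize_histogram((dr16 >> 8).astype(np.uint8), 256)
```

Equalizing before the 16-to-8-bit reduction (HE16) preserves more of the original dynamic range than equalizing after it (HE8), which is one plausible reason the 16-bit datasets performed better here.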


Subject(s)
Humans , Retrospective Studies , Anthracosis/diagnostic imaging , Pneumoconiosis/diagnostic imaging , Coal Mining , Neural Networks, Computer , Coal
16.
Article in Chinese | WPRIM (Western Pacific) | ID: wpr-995410

ABSTRACT

Objective: To evaluate deep learning for differentiating the invasion depth of colorectal adenomas under image-enhanced endoscopy (IEE). Methods: A total of 13 246 IEE images of 3 714 lesions acquired from November 2016 to June 2021 at Renmin Hospital of Wuhan University, Shenzhen Hospital of Southern Medical University, and the First Hospital of Yichang were retrospectively collected to construct a deep learning model differentiating submucosal deep-invasion lesions from non-deep-invasion lesions of colorectal adenomas. The performance of the deep learning model was validated on an independent test set and an external test set. The full test set was used to compare diagnostic performance between 5 endoscopists and the deep learning model. A total of 35 videos collected from January to June 2021 at Renmin Hospital of Wuhan University were used to validate the diagnostic performance of endoscopists assisted by the deep learning model. Results: The accuracy and Youden index of the deep learning model on the image test set were 93.08% (821/882) and 0.86, better than those of the endoscopists (highest: 91.72% (809/882) and 0.78). On the video test set, the accuracy and Youden index of the model were 97.14% (34/35) and 0.94. With the assistance of the model, the accuracy of the endoscopists improved significantly (highest: 97.14% (34/35)). Conclusion: The deep learning model obtained in this study accurately identified deep submucosal invasion in colorectal adenomas and improved the diagnostic accuracy of endoscopists.
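The Youden index quoted alongside accuracy above is simply sensitivity + specificity − 1; a one-line sketch with hypothetical confusion counts:

```python
def youden_index(tp: int, fn: int, tn: int, fp: int) -> float:
    """Youden's J = sensitivity + specificity - 1 (counts here are hypothetical)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity + specificity - 1

j = youden_index(tp=95, fn=5, tn=90, fp=10)  # 0.95 + 0.90 - 1 = 0.85
```

Unlike raw accuracy, J is insensitive to the benign/malignant class balance of the test set, which is why studies like this one report both.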

17.
Article in Chinese | WPRIM (Western Pacific) | ID: wpr-991045

ABSTRACT

Objective: To investigate the diagnostic value of CT-guided puncture biopsy combined with serum gamma-glutamyltransferase (GGT) and abnormal prothrombin (PIVKA-Ⅱ) in serum alpha-fetoprotein (AFP)-negative primary hepatic carcinoma (PHC). Methods: Eighty patients with AFP-negative PHC treated at Fuyang Women and Children's Hospital from January 2018 to March 2021 were retrospectively selected as the AFP-negative PHC group, and 85 patients diagnosed with benign liver tumors during the same period were selected as the control group. Patients in both groups underwent CT-guided biopsy, and serum GGT and PIVKA-Ⅱ levels were measured. The diagnostic value of each test alone and of the combination for AFP-negative PHC was analyzed with receiver operating characteristic (ROC) curves. Results: Seventy-five of the 80 patients in the AFP-negative PHC group were confirmed as malignant liver lesions by biopsy, a coincidence rate of 93.75%; 84 of the 85 patients in the control group were confirmed as benign liver lesions, a coincidence rate of 98.82%. The levels of AFP, GGT and PIVKA-Ⅱ in the AFP-negative PHC group were significantly higher than those in the control group: (175.67 ± 39.58) μg/L vs. (18.74 ± 7.42) μg/L, (1 245.37 ± 255.41) U/L vs. (486.63 ± 89.05) U/L, and (385.49 ± 30.27) AU/L vs. (25.84 ± 7.66) AU/L (P<0.05). Spearman correlation analysis showed that serum AFP was positively correlated with GGT and PIVKA-Ⅱ (r = 0.858 and 0.429, P<0.05). ROC analysis showed that the area under the curve of CT-guided biopsy combined with GGT and PIVKA-Ⅱ for diagnosing AFP-negative PHC was 0.877, with a sensitivity of 91.19% and a specificity of 87.34%. Conclusions: CT-guided biopsy combined with GGT and PIVKA-Ⅱ detection can effectively improve the diagnosis of AFP-negative PHC.
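A combined-marker ROC analysis of this kind can be illustrated by pooling the two serum markers into a single score and computing the empirical AUC. The z-score combination and the synthetic values below (drawn to resemble the reported group means and SDs) are assumptions for illustration, not the paper's actual model:

```python
import numpy as np

def empirical_auc(scores_pos, scores_neg):
    """Mann-Whitney form of AUC: P(positive case scores above a negative case)."""
    pos = np.asarray(scores_pos)[:, None]
    neg = np.asarray(scores_neg)[None, :]
    return (pos > neg).mean() + 0.5 * (pos == neg).mean()

rng = np.random.default_rng(42)
# Synthetic GGT / PIVKA-II levels shaped loosely like the reported means +/- SDs
ggt_pos, ggt_neg = rng.normal(1245, 255, 80), rng.normal(487, 89, 85)
piv_pos, piv_neg = rng.normal(385, 30, 80), rng.normal(26, 8, 85)

def zscore(x, ref):
    return (x - ref.mean()) / ref.std()

# Illustrative combination rule: sum of per-marker z-scores
ggt_all, piv_all = np.r_[ggt_pos, ggt_neg], np.r_[piv_pos, piv_neg]
score_pos = zscore(ggt_pos, ggt_all) + zscore(piv_pos, piv_all)
score_neg = zscore(ggt_neg, ggt_all) + zscore(piv_neg, piv_all)
auc = empirical_auc(score_pos, score_neg)
```

In practice the combination weights would come from a fitted model such as logistic regression rather than an unweighted z-score sum.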

18.
J Biomed Phys Eng ; 12(6): 551-558, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36569568

ABSTRACT

Healthcare in India suffers from a shortage of physicians and of diagnostic support systems. Physicians must treat large numbers of patients, and hospitals, particularly in rural areas, often lack radiologists; as a result, many cases are handled by a single physician, leading to frequent misdiagnoses. Computer-aided diagnostic systems are being developed to address this problem. The current study reviews the different neural-network methods for detecting pneumonia and compares their approaches and results. For the fairest comparison, only papers using the same dataset, ChestX-ray14, are studied.

19.
Ultrasonography ; 41(4): 718-727, 2022 Oct.
Article in English | MEDLINE | ID: mdl-35850498

ABSTRACT

PURPOSE: This study evaluated how artificial intelligence-based computer-assisted diagnosis (AI-CAD) for breast ultrasonography (US) influences diagnostic performance and agreement between radiologists with varying experience levels in different workflows. METHODS: Images of 492 breast lesions (200 malignant and 292 benign masses) in 472 women taken from April 2017 to June 2018 were included. Six radiologists (three inexperienced [<1 year of experience] and three experienced [10-15 years of experience]) individually reviewed US images with and without the aid of AI-CAD, first sequentially and then simultaneously. Diagnostic performance and interobserver agreement were calculated and compared between radiologists and AI-CAD. RESULTS: After implementing AI-CAD, the specificity, positive predictive value (PPV), and accuracy significantly improved, regardless of experience and workflow (all P<0.001). The overall area under the receiver operating characteristic curve significantly increased in simultaneous reading, but only for inexperienced radiologists. Agreement on Breast Imaging Reporting and Data System (BI-RADS) descriptors generally increased when AI-CAD was used (κ=0.29-0.63 to 0.35-0.73). Inexperienced radiologists tended to concede to AI-CAD results more readily than experienced radiologists, especially in simultaneous reading (P<0.001). The conversion rates for final assessment changes from BI-RADS 2 or 3 to BI-RADS 4a or higher, or vice versa, were also significantly higher in simultaneous than in sequential reading (overall, 15.8% and 6.2%, respectively; P<0.001) for both inexperienced and experienced radiologists. CONCLUSION: Using AI-CAD to interpret breast US improved the specificity, PPV, and accuracy of radiologists regardless of experience level. AI-CAD may work better in simultaneous reading to improve diagnostic performance and agreement between radiologists, especially for inexperienced radiologists.
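Interobserver agreement of the kind reported for the BI-RADS descriptors (the κ values above) is typically measured with Cohen's kappa, which corrects raw agreement for agreement expected by chance. A minimal sketch with two hypothetical readers:

```python
import numpy as np

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters over the same categorical labels."""
    a, b = np.asarray(rater_a), np.asarray(rater_b)
    categories = np.union1d(a, b)
    p_observed = (a == b).mean()                     # raw agreement
    p_expected = sum((a == c).mean() * (b == c).mean()
                     for c in categories)            # chance agreement
    return (p_observed - p_expected) / (1 - p_expected)

# Two hypothetical readers assigning BI-RADS-like categories to 10 lesions
reader1 = [2, 3, 3, 4, 4, 4, 5, 2, 3, 4]
reader2 = [2, 3, 4, 4, 4, 3, 5, 2, 3, 4]
kappa = cohens_kappa(reader1, reader2)
```

A κ near 0.71, as in this toy example, would fall in the "substantial agreement" band of the common Landis and Koch interpretation (0.61-0.80).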

20.
J Digit Imaging ; 35(6): 1699-1707, 2022 12.
Article in English | MEDLINE | ID: mdl-35902445

ABSTRACT

As thyroid and breast cancer have several US findings in common, we applied artificial intelligence computer-assisted diagnosis (AI-CAD) software originally developed for thyroid nodules to breast lesions on ultrasound (US) and evaluated its diagnostic performance. From January 2017 to December 2017, 1042 breast lesions (mean size 20.2 ± 11.8 mm) of 1001 patients (mean age 45.9 ± 12.9 years) who underwent US-guided core-needle biopsy were included. AI-CAD software previously trained and validated with thyroid nodules using a convolutional neural network was applied to the breast lesions. There were 665 benign breast lesions (63.0%) and 391 breast cancers (37.0%). The area under the receiver operating characteristic curve (AUROC) of AI-CAD for differentiating breast lesions was 0.678 (95% confidence interval: 0.649, 0.707). After fine-tuning AI-CAD with 1084 separate breast lesions, its diagnostic performance markedly improved (AUC 0.841). This was significantly higher than that of radiologists when the cutoff category was BI-RADS 4a (AUC 0.621, P < 0.001), but lower when the cutoff category was BI-RADS 4b (AUC 0.908, P < 0.001). When applied to breast lesions, AI-CAD software developed for differentiating malignant and benign thyroid nodules showed moderate diagnostic performance. However, an organ-specific approach yields better diagnostic performance despite the similar US features of thyroid and breast malignancies.
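The fine-tuning step that lifted the AUC can be sketched as standard transfer learning: freeze the backbone trained on the source organ and retrain only a new classification head on the target data. The tiny architecture and random tensors below are placeholders, not the paper's network:

```python
import torch
import torch.nn as nn

# Stand-in for a CNN pretrained on thyroid US (architecture is hypothetical)
backbone = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
head = nn.Linear(8, 2)  # new head: benign vs malignant on the target organ

# Fine-tuning: freeze the pretrained backbone, train only the new head
for p in backbone.parameters():
    p.requires_grad = False

model = nn.Sequential(backbone, head)
opt = torch.optim.Adam(head.parameters(), lr=1e-3)

x = torch.randn(4, 1, 32, 32)      # fake breast-US batch
y = torch.tensor([0, 1, 0, 1])     # fake biopsy labels
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()                    # gradients flow into the head only
opt.step()
```

Unfreezing some or all backbone layers at a lower learning rate is a common next step once the new head has stabilized.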


Subject(s)
Breast Neoplasms , Thyroid Nodule , Humans , Adult , Middle Aged , Female , Thyroid Nodule/diagnostic imaging , Thyroid Nodule/pathology , Artificial Intelligence , Sensitivity and Specificity , Ultrasonography , Diagnosis, Computer-Assisted , Breast Neoplasms/diagnostic imaging