Results 1 - 20 of 16,514
2.
Int J Chron Obstruct Pulmon Dis ; 19: 1167-1175, 2024.
Article in English | MEDLINE | ID: mdl-38826698

ABSTRACT

Purpose: To develop a novel method for calculating small airway resistance using computational fluid dynamics (CFD) based on CT data and to evaluate its value in identifying COPD. Patients and Methods: Twenty-four subjects who underwent chest CT scans and pulmonary function tests between August 2020 and December 2020 were enrolled retrospectively. Subjects were divided into three groups: normal (10), high-risk (6), and COPD (8). The airway from the trachea down to the sixth generation of bronchioles was reconstructed with 3D Slicer. The small airway resistance (RSA) and RSA as a percentage of total airway resistance (RSA%) were calculated by CFD combined with the airway resistance and FEV1 measured by pulmonary function testing. A correlation analysis was conducted between RSA and pulmonary function parameters, including FEV1/FVC, FEV1% predicted, MEF50% predicted, MEF75% predicted and MMEF75/25% predicted. Results: The RSA and RSA% were significantly different among the three groups (p<0.05) and related to FEV1/FVC (r = -0.70, p < 0.001; r = -0.67, p < 0.001), FEV1% predicted (r = -0.60, p = 0.002; r = -0.57, p = 0.004), MEF50% predicted (r = -0.64, p = 0.001; r = -0.64, p = 0.001), MEF75% predicted (r = -0.71, p < 0.001; r = -0.60, p = 0.002) and MMEF75/25% predicted (r = -0.64, p = 0.001; r = -0.64, p = 0.001). Conclusion: Airway CFD is a valuable method for estimating small airway resistance, and the derived RSA will aid in the early diagnosis of COPD.
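The two derived quantities in this abstract, RSA% and its correlation with spirometry, reduce to simple arithmetic once RSA and total resistance are available. A minimal sketch (all numbers fabricated for illustration, not taken from the study):

```python
import numpy as np

def rsa_percent(rsa, r_total):
    """Small airway resistance as a percentage of total airway resistance."""
    return 100.0 * np.asarray(rsa, dtype=float) / np.asarray(r_total, dtype=float)

def pearson_r(x, y):
    """Pearson correlation coefficient between two 1-D arrays."""
    return float(np.corrcoef(np.asarray(x, float), np.asarray(y, float))[0, 1])

# Fabricated example values: RSA (kPa·s/L), total resistance, and FEV1/FVC (%).
rsa = np.array([0.05, 0.09, 0.14, 0.22, 0.30])
r_total = np.array([0.20, 0.24, 0.28, 0.33, 0.38])
fev1_fvc = np.array([82.0, 75.0, 68.0, 60.0, 52.0])

rsa_pct = rsa_percent(rsa, r_total)
r = pearson_r(rsa, fev1_fvc)
```

With a monotonic toy cohort like this, the correlation comes out strongly negative, mirroring the direction of the associations the study reports.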


Subject(s)
Airway Resistance , Hydrodynamics , Lung , Predictive Value of Tests , Pulmonary Disease, Chronic Obstructive , Tomography, X-Ray Computed , Humans , Pulmonary Disease, Chronic Obstructive/physiopathology , Pulmonary Disease, Chronic Obstructive/diagnostic imaging , Male , Retrospective Studies , Female , Middle Aged , Aged , Forced Expiratory Volume , Lung/physiopathology , Lung/diagnostic imaging , Vital Capacity , Computer Simulation , Radiographic Image Interpretation, Computer-Assisted , Respiratory Function Tests/methods
3.
Cancer Imaging ; 24(1): 60, 2024 May 09.
Article in English | MEDLINE | ID: mdl-38720391

ABSTRACT

BACKGROUND: This study systematically compares the impact of innovative deep learning image reconstruction (DLIR, TrueFidelity) to conventionally used iterative reconstruction (IR) on nodule volumetry and subjective image quality (IQ) at highly reduced radiation doses. This is essential in the context of low-dose CT lung cancer screening, where accurate volumetry and characterization of pulmonary nodules in repeated CT scanning are indispensable. MATERIALS AND METHODS: A standardized CT dataset was established using an anthropomorphic chest phantom (Lungman, Kyoto Kagaku Inc., Kyoto, Japan) containing a set of 3D-printed lung nodules including six diameters (4 to 9 mm) and three morphology classes (lobular, spiculated, smooth), with an established ground truth. Images were acquired at varying radiation doses (6.04, 3.03, 1.54, 0.77, 0.41 and 0.20 mGy) and reconstructed with combinations of reconstruction kernels (soft and hard kernel) and reconstruction algorithms (ASIR-V and DLIR at low, medium and high strength). Semi-automatic volumetry measurements and subjective image quality scores recorded by five radiologists were analyzed with multiple linear regression and mixed-effect ordinal logistic regression models. RESULTS: Volumetric errors of nodules imaged with DLIR are up to 50% lower compared to ASIR-V, especially at radiation doses below 1 mGy and when reconstructed with a hard kernel. Also, across all nodule diameters and morphologies, volumetric errors are commonly lower with DLIR. Furthermore, DLIR renders higher subjective IQ, especially at the sub-mGy doses. Radiologists were up to nine times more likely to assign the highest IQ score to these images compared to those reconstructed with ASIR-V. Lung nodules with irregular margins and small diameters also had an increased likelihood (up to five times more likely) to be ascribed the best IQ scores when reconstructed with DLIR.
CONCLUSION: We observed that DLIR performs as well as or even outperforms conventionally used reconstruction algorithms in terms of volumetric accuracy and subjective IQ of nodules in an anthropomorphic chest phantom. As such, DLIR may allow lowering the radiation dose for participants in lung cancer screening without compromising accurate measurement and characterization of lung nodules.
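The volumetric-error metric at the core of this comparison is a percent deviation from the phantom's known nodule volume. A hedged sketch with invented measurements (the ground truth here is an ideal 6 mm sphere, not the study's printed nodules):

```python
import numpy as np

def volumetric_error_pct(measured_mm3, truth_mm3):
    """Absolute percentage error of a volumetry measurement."""
    return 100.0 * abs(measured_mm3 - truth_mm3) / truth_mm3

# A 6 mm sphere has volume (4/3)*pi*r^3 with r = 3 mm.
truth = (4.0 / 3.0) * np.pi * 3.0 ** 3
err_dlir = volumetric_error_pct(108.0, truth)   # hypothetical DLIR measurement
err_asirv = volumetric_error_pct(103.0, truth)  # hypothetical ASIR-V measurement
```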


Subject(s)
Deep Learning , Lung Neoplasms , Multiple Pulmonary Nodules , Phantoms, Imaging , Radiation Dosage , Tomography, X-Ray Computed , Humans , Tomography, X-Ray Computed/methods , Multiple Pulmonary Nodules/diagnostic imaging , Multiple Pulmonary Nodules/pathology , Lung Neoplasms/diagnostic imaging , Lung Neoplasms/pathology , Solitary Pulmonary Nodule/diagnostic imaging , Solitary Pulmonary Nodule/pathology , Radiographic Image Interpretation, Computer-Assisted/methods , Image Processing, Computer-Assisted/methods
4.
Eur Radiol Exp ; 8(1): 54, 2024 May 03.
Article in English | MEDLINE | ID: mdl-38698099

ABSTRACT

BACKGROUND: We aimed to improve the image quality (IQ) of sparse-view computed tomography (CT) images using a U-Net for lung metastasis detection and determine the best tradeoff between number of views, IQ, and diagnostic confidence. METHODS: CT images from 41 subjects aged 62.8 ± 10.6 years (mean ± standard deviation, 23 men), 34 with lung metastasis, 7 healthy, were retrospectively selected (2016-2018) and forward projected onto 2,048-view sinograms. Six corresponding sparse-view CT data subsets at varying levels of undersampling were reconstructed from sinograms using filtered backprojection with 16, 32, 64, 128, 256, and 512 views. A dual-frame U-Net was trained and evaluated for each subsampling level on 8,658 images from 22 diseased subjects. A representative image per scan was selected from 19 subjects (12 diseased, 7 healthy) for a single-blinded multireader study. These slices, for all levels of subsampling, with and without U-Net postprocessing, were presented to three readers. IQ and diagnostic confidence were ranked using predefined scales. Subjective nodule segmentation was evaluated using sensitivity and Dice similarity coefficient (DSC); clustered Wilcoxon signed-rank test was used. RESULTS: The 64-projection sparse-view images resulted in 0.89 sensitivity and 0.81 DSC, while their counterparts, postprocessed with the U-Net, had improved metrics (0.94 sensitivity and 0.85 DSC) (p = 0.400). Fewer views led to insufficient IQ for diagnosis. For increased views, no substantial discrepancies were noted between sparse-view and postprocessed images. CONCLUSIONS: Projection views can be reduced from 2,048 to 64 while maintaining IQ and radiologists' confidence at a satisfactory level. RELEVANCE STATEMENT: Our reader study demonstrates the benefit of U-Net postprocessing for regular CT screenings of patients with lung metastasis to increase the IQ and diagnostic confidence while reducing the dose.
KEY POINTS: • Sparse-projection-view streak artifacts reduce the quality and usability of sparse-view CT images. • U-Net-based postprocessing removes sparse-view artifacts while maintaining diagnostically accurate IQ. • Postprocessed sparse-view CTs drastically increase radiologists' confidence in diagnosing lung metastasis.
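The Dice similarity coefficient used above to score nodule segmentations can be sketched in a few lines (toy binary masks, not study data):

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    return 2.0 * intersection / total if total else 1.0

# Two offset 4x4 squares on an 8x8 grid: 16 pixels each, 9 overlapping.
pred = np.zeros((8, 8), dtype=bool); pred[2:6, 2:6] = True
truth = np.zeros((8, 8), dtype=bool); truth[3:7, 3:7] = True
dsc = dice(pred, truth)  # 2*9 / (16+16) = 0.5625
```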


Subject(s)
Lung Neoplasms , Tomography, X-Ray Computed , Humans , Lung Neoplasms/diagnostic imaging , Male , Middle Aged , Tomography, X-Ray Computed/methods , Female , Retrospective Studies , Radiographic Image Interpretation, Computer-Assisted/methods , Aged
5.
Radiology ; 311(2): e232178, 2024 May.
Article in English | MEDLINE | ID: mdl-38742970

ABSTRACT

Background Accurate characterization of suspicious small renal masses is crucial for optimized management. Deep learning (DL) algorithms may assist with this effort. Purpose To develop and validate a DL algorithm for identifying benign small renal masses at contrast-enhanced multiphase CT. Materials and Methods Surgically resected renal masses measuring 3 cm or less in diameter at contrast-enhanced CT were included. The DL algorithm was developed by using retrospective data from one hospital between 2009 and 2021, with patients randomly allocated to training and internal test sets in an 8:2 ratio. External testing was performed on data from five independent hospitals acquired between 2013 and 2021. A prospective test set was obtained between 2021 and 2022 from one hospital. Algorithm performance was evaluated by using the area under the receiver operating characteristic curve (AUC) and compared with the results of seven clinicians using the DeLong test. Results A total of 1703 patients (mean age, 56 years ± 12 [SD]; 619 female) with a single renal mass per patient were evaluated. The retrospective data set included 1063 lesions (874 in the training set, 189 in the internal test set); the multicenter external test set included 537 lesions (12.3%, 66 benign) with 89 subcentimeter (≤1 cm) lesions (16.6%); and the prospective test set included 103 lesions (13.6%, 14 benign) with 20 (19.4%) subcentimeter lesions. The DL algorithm performance was comparable to that of urological radiologists: for the external test set, AUC was 0.80 (95% CI: 0.75, 0.85) versus 0.84 (95% CI: 0.78, 0.88) (P = .61); for the prospective test set, AUC was 0.87 (95% CI: 0.79, 0.93) versus 0.92 (95% CI: 0.86, 0.96) (P = .70). For subcentimeter lesions in the external test set, the algorithm and urological radiologists had similar AUCs of 0.74 (95% CI: 0.63, 0.83) and 0.81 (95% CI: 0.68, 0.92) (P = .78), respectively.
Conclusion The multiphase CT-based DL algorithm showed performance comparable to that of radiologists for identifying benign small renal masses, including lesions of 1 cm or less. Published under a CC BY 4.0 license. Supplemental material is available for this article.
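The AUC values compared here with the DeLong test can be computed from raw scores via the Mann-Whitney formulation: AUC is the probability that a randomly chosen positive case scores higher than a randomly chosen negative one. A small sketch with invented scores and labels:

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve from scores and binary labels (1 = positive)."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    # Count pairwise wins of positives over negatives; ties contribute half.
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]  # hypothetical algorithm outputs
labels = [1, 1, 0, 1, 0, 0]              # hypothetical benign/malignant truth
value = auc(scores, labels)              # 8 of 9 positive-negative pairs ranked correctly
```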


Subject(s)
Contrast Media , Deep Learning , Kidney Neoplasms , Tomography, X-Ray Computed , Humans , Female , Male , Middle Aged , Kidney Neoplasms/diagnostic imaging , Kidney Neoplasms/pathology , Retrospective Studies , Tomography, X-Ray Computed/methods , Prospective Studies , Radiographic Image Interpretation, Computer-Assisted/methods , Aged , Algorithms , Kidney/diagnostic imaging , Adult
6.
PLoS One ; 19(5): e0302641, 2024.
Article in English | MEDLINE | ID: mdl-38753596

ABSTRACT

The development of automated tools using advanced technologies like deep learning holds great promise for improving the accuracy of lung nodule classification in computed tomography (CT) imaging, ultimately reducing lung cancer mortality rates. However, lung nodules can be difficult to detect and classify from CT images, since different imaging modalities may provide varying levels of detail and clarity. Moreover, existing convolutional neural networks may struggle to detect nodules that are small or located in difficult-to-detect regions of the lung. Therefore, the attention pyramid pooling network (APPN) is proposed to identify and classify lung nodules. First, a strong feature extractor, VGG16, is used to obtain features from CT images. Then, the attention primary pyramid module is proposed by combining the attention mechanism and pyramid pooling module, which allows for the fusion of features at different scales and focuses on the most important features for nodule classification. Finally, we use the gated spatial memory technique to decode the general features, which is able to extract more accurate features for classifying lung nodules. The experimental results on the LIDC-IDRI dataset show that the APPN is highly accurate and effective for classifying lung nodules, with a sensitivity of 87.59%, specificity of 90.46%, accuracy of 88.47%, positive predictive value of 95.41%, negative predictive value of 76.29%, and an area under the receiver operating characteristic curve of 0.914.
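The five threshold-based metrics reported here all derive from a single confusion matrix. A sketch with an invented confusion matrix (the counts are illustrative, not reconstructed from the paper):

```python
def classification_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, accuracy, PPV, and NPV from raw counts."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Hypothetical counts for a 200-case test set.
m = classification_metrics(tp=87, fp=9, tn=91, fn=13)
```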


Subject(s)
Lung Neoplasms , Neural Networks, Computer , Tomography, X-Ray Computed , Humans , Lung Neoplasms/diagnostic imaging , Lung Neoplasms/diagnosis , Tomography, X-Ray Computed/methods , Deep Learning , Solitary Pulmonary Nodule/diagnostic imaging , Solitary Pulmonary Nodule/diagnosis , Multiple Pulmonary Nodules/diagnostic imaging , Multiple Pulmonary Nodules/diagnosis , Algorithms , Lung/diagnostic imaging , Lung/pathology , Radiographic Image Interpretation, Computer-Assisted/methods
7.
BMC Med Inform Decis Mak ; 24(1): 126, 2024 May 16.
Article in English | MEDLINE | ID: mdl-38755563

ABSTRACT

BACKGROUND: Chest X-ray imaging based abnormality localization, essential in diagnosing various diseases, faces significant clinical challenges due to complex interpretations and the growing workload of radiologists. While recent advances in deep learning offer promising solutions, there is still a critical issue of domain inconsistency in cross-domain transfer learning, which hampers the efficiency and accuracy of diagnostic processes. This study aims to address the domain inconsistency problem and improve automatic abnormality localization performance in heterogeneous chest X-ray image analysis, particularly in detecting abnormalities, by developing a self-supervised learning strategy called "BarlowTwins-CXR". METHODS: We utilized two publicly available datasets: the NIH Chest X-ray Dataset and the VinDr-CXR. The BarlowTwins-CXR approach was conducted in a two-stage training process. Initially, self-supervised pre-training was performed using an adjusted Barlow Twins algorithm on the NIH dataset with a ResNet50 backbone pre-trained on ImageNet. This was followed by supervised fine-tuning on the VinDr-CXR dataset using Faster R-CNN with Feature Pyramid Network (FPN). The study employed mean Average Precision (mAP) at an Intersection over Union (IoU) of 50% and Area Under the Curve (AUC) for performance evaluation. RESULTS: Our experiments showed a significant improvement in model performance with BarlowTwins-CXR. The approach achieved a 3% increase in mAP50 accuracy compared to traditional ImageNet pre-trained models. In addition, the Ablation CAM method revealed enhanced precision in localizing chest abnormalities. The study involved 112,120 images from the NIH dataset and 18,000 images from the VinDr-CXR dataset, indicating robust training and testing samples.
CONCLUSION: BarlowTwins-CXR significantly enhances the efficiency and accuracy of chest X-ray image-based abnormality localization, outperforming traditional transfer learning methods and effectively overcoming domain inconsistency in cross-domain scenarios. Our experiment results demonstrate the potential of using self-supervised learning to improve the generalizability of models in medical settings with limited amounts of heterogeneous data. This approach can be instrumental in aiding radiologists, particularly in high-workload environments, offering a promising direction for future AI-driven healthcare solutions.
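The Barlow Twins objective used for pre-training pushes the cross-correlation matrix of two augmented views' embeddings toward the identity: invariance on the diagonal, redundancy reduction off it. A rough numpy sketch (batch shape and the λ weight are illustrative assumptions; the study applies this to ResNet50 embeddings, not raw arrays):

```python
import numpy as np

def barlow_twins_loss(z_a, z_b, lam=5e-3):
    """Barlow Twins loss for two batches of embeddings, each of shape (N, D)."""
    n = z_a.shape[0]
    # Standardize each embedding dimension across the batch.
    z_a = (z_a - z_a.mean(0)) / z_a.std(0)
    z_b = (z_b - z_b.mean(0)) / z_b.std(0)
    c = z_a.T @ z_b / n  # (D, D) cross-correlation matrix
    on_diag = ((np.diag(c) - 1.0) ** 2).sum()          # invariance term
    off_diag = (c ** 2).sum() - (np.diag(c) ** 2).sum()  # redundancy term
    return on_diag + lam * off_diag

rng = np.random.default_rng(0)
z = rng.normal(size=(64, 16))
loss_identical = barlow_twins_loss(z, z)  # identical views: near-zero loss
loss_random = barlow_twins_loss(z, rng.normal(size=(64, 16)))  # unrelated views: large loss
```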


Subject(s)
Radiography, Thoracic , Supervised Machine Learning , Humans , Deep Learning , Radiographic Image Interpretation, Computer-Assisted/methods , Datasets as Topic
8.
Eur Radiol Exp ; 8(1): 63, 2024 May 20.
Article in English | MEDLINE | ID: mdl-38764066

ABSTRACT

BACKGROUND: Emphysema influences the appearance of lung tissue in computed tomography (CT). We evaluated whether this affects lung nodule detection by artificial intelligence (AI) and human readers (HR). METHODS: Individuals who had undergone low-dose chest CT were selected from the "Lifelines" cohort. Nodules in individuals without emphysema were matched to similar-sized nodules in individuals with at least moderate emphysema. AI results for nodular findings of 30-100 mm3 and 101-300 mm3 were compared to those of HR; two expert radiologists blindly reviewed discrepancies. Sensitivity and false positives (FPs)/scan were compared for emphysema and non-emphysema groups. RESULTS: Thirty-nine participants with and 82 without emphysema were included (n = 121, aged 61 ± 8 years (mean ± standard deviation), 58/121 males (47.9%)). AI and HR detected 196 and 206 nodular findings, respectively, yielding 109 concordant nodules and 184 discrepancies, including 118 true nodules. For AI, sensitivity was 0.68 (95% confidence interval 0.57-0.77) in emphysema versus 0.71 (0.62-0.78) in non-emphysema, with FPs/scan of 0.51 and 0.22, respectively (p = 0.028). For HR, sensitivity was 0.76 (0.65-0.84) and 0.80 (0.72-0.86), with FPs/scan of 0.15 and 0.27 (p = 0.230). Overall sensitivity was slightly higher for HR than for AI, but this difference disappeared after the exclusion of benign lymph nodes. FPs/scan were higher for AI in emphysema than in non-emphysema (p = 0.028), while FPs/scan for HR were higher than for AI for 30-100 mm3 nodules in non-emphysema (p = 0.009). CONCLUSIONS: AI resulted in more FPs/scan in emphysema compared to non-emphysema, a difference not observed for HR. RELEVANCE STATEMENT: In the creation of a benchmark dataset to validate AI software for lung nodule detection, the inclusion of emphysema cases is important due to the additional number of FPs. KEY POINTS: • The sensitivity of nodule detection by AI was similar in emphysema and non-emphysema.
• AI had more FPs/scan in emphysema compared to non-emphysema. • Sensitivity and FPs/scan by the human reader were comparable for emphysema and non-emphysema. • Emphysema and non-emphysema representation in benchmark dataset is important for validating AI.


Subject(s)
Artificial Intelligence , Pulmonary Emphysema , Tomography, X-Ray Computed , Humans , Male , Middle Aged , Female , Tomography, X-Ray Computed/methods , Pulmonary Emphysema/diagnostic imaging , Software , Sensitivity and Specificity , Lung Neoplasms/diagnostic imaging , Aged , Radiation Dosage , Solitary Pulmonary Nodule/diagnostic imaging , Radiographic Image Interpretation, Computer-Assisted/methods
10.
Crit Rev Biomed Eng ; 52(4): 41-60, 2024.
Article in English | MEDLINE | ID: mdl-38780105

ABSTRACT

Breast cancer is a leading cause of mortality among women, both in India and globally. The prevalence of breast masses is notably common in women aged 20 to 60. These breast masses are classified, according to the Breast Imaging-Reporting and Data System (BI-RADS) standard, into categories such as fibroadenoma, breast cysts, and benign and malignant masses. To aid in the diagnosis of breast disorders, imaging plays a vital role, with mammography being the most widely used modality for detecting breast abnormalities over the years. However, the process of identifying breast diseases through mammograms can be time-consuming, requiring experienced radiologists to review a significant volume of images. Early detection of breast masses is crucial for effective disease management, ultimately reducing mortality rates. To address this challenge, advancements in image processing techniques, specifically utilizing artificial intelligence (AI) and machine learning (ML), have paved the way for the development of decision support systems. These systems assist radiologists in the accurate identification and classification of breast disorders. This paper presents a review of various studies where diverse machine learning approaches have been applied to digital mammograms. These approaches aim to identify breast masses and classify them into distinct subclasses such as normal, benign and malignant. Additionally, the paper highlights both the advantages and limitations of existing techniques, offering valuable insights for the benefit of future research endeavors in this critical area of medical imaging and breast health.


Subject(s)
Breast Neoplasms , Machine Learning , Mammography , Humans , Mammography/methods , Breast Neoplasms/diagnostic imaging , Female , Breast/diagnostic imaging , Radiographic Image Interpretation, Computer-Assisted/methods
11.
Sci Rep ; 14(1): 11810, 2024 05 23.
Article in English | MEDLINE | ID: mdl-38782976

ABSTRACT

In this retrospective study, we aimed to assess the objective and subjective image quality of different reconstruction techniques and a deep learning-based software on non-contrast head computed tomography (CT) images. In total, 152 adult head CT scans (77 female, 75 male; mean age 69.4 ± 18.3 years) obtained from three different CT scanners using different protocols between March and April 2021 were included. CT images were reconstructed using filtered-back projection (FBP), iterative reconstruction (IR), and post-processed using a deep learning-based algorithm (PS). Post-processing significantly reduced noise in FBP-reconstructed images (up to 15.4% reduction) depending on the protocol, leading to improvements in signal-to-noise ratio of up to 19.7%. However, when deep learning-based post-processing was applied to FBP images compared to IR alone, the differences were inconsistent and partly non-significant, which appeared to be protocol or site specific. Subjective assessments showed no significant overall improvement in image quality for all reconstructions and post-processing. Inter-rater reliability was low and preferences varied. Deep learning-based denoising software improved objective image quality compared to FBP in routine head CT. A significant difference compared to IR was observed for only one protocol. Subjective assessments did not indicate a significant clinical impact in terms of improved subjective image quality, likely due to the low noise levels in full-dose images.
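The objective measures behind these results, ROI noise, signal-to-noise ratio, and percent noise reduction, can be sketched as follows (ROI pixel values in HU are invented for illustration):

```python
import numpy as np

def snr(roi):
    """Signal-to-noise ratio: ROI mean divided by ROI standard deviation."""
    roi = np.asarray(roi, dtype=float)
    return roi.mean() / roi.std()

def noise_reduction_pct(std_before, std_after):
    """Percent reduction in image noise (ROI standard deviation)."""
    return 100.0 * (std_before - std_after) / std_before

roi_fbp = np.array([38.0, 42.0, 35.0, 45.0, 40.0])   # hypothetical noisy FBP ROI (HU)
roi_post = np.array([39.0, 41.0, 38.0, 42.0, 40.0])  # hypothetical post-processed ROI (HU)
reduction = noise_reduction_pct(roi_fbp.std(), roi_post.std())
```

Lower ROI standard deviation after denoising raises the SNR even though the mean signal is unchanged, which is exactly the pattern the study quantifies.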


Subject(s)
Deep Learning , Head , Software , Tomography, X-Ray Computed , Humans , Female , Tomography, X-Ray Computed/methods , Male , Aged , Head/diagnostic imaging , Retrospective Studies , Middle Aged , Aged, 80 and over , Image Processing, Computer-Assisted/methods , Signal-To-Noise Ratio , Adult , Algorithms , Radiographic Image Interpretation, Computer-Assisted/methods
12.
Clin Radiol ; 79(7): e957-e962, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38693034

ABSTRACT

AIM: The comparison between chest x-ray (CXR) and computed tomography (CT) images is commonly required in clinical practice to assess the evolution of chest pathological manifestations. Intrinsic differences between the two techniques, however, limit reader confidence in such a comparison. CT average intensity projection (AIP) reconstruction allows obtaining "synthetic" CXR (s-CXR) images, which are thought to have the potential to increase the accuracy of comparison between CXR and CT imaging. We aim to assess the diagnostic performance of s-CXR imaging in detecting common pleuro-parenchymal abnormalities. MATERIALS AND METHODS: A total of 142 patients who underwent chest CT examination and CXR within 24 hours were enrolled. CT was the standard of reference. Both conventional CXR (c-CXR) and s-CXR images were retrospectively reviewed for the presence of consolidation, nodule/mass, linear opacities, reticular opacities, and pleural effusion by three readers in two separate sessions. Sensitivity, specificity, accuracy, and their 95% confidence intervals were calculated for each reader and setting and tested by the McNemar test. Inter-observer agreement was tested by Cohen's K test and its 95% CI. RESULTS: Overall, s-CXR sensitivity ranged 45-67% for consolidation, 12-28% for nodule/mass, 17-33% for linear opacities, 2-61% for reticular opacities, and 33-58% for pleural effusion; specificity 65-83%, 83-94%, 94-98%, 93-100% and 79-86%; accuracy 66-68%, 74-79%, 89-91%, 61-65% and 68-72%, respectively. K values ranged 0.38-0.50, 0.05-0.25, -0.05-0.11, -0.01-0.15, and 0.40-0.66 for consolidation, nodule/mass, linear opacities, reticular opacities, and pleural effusion, respectively. CONCLUSION: S-CXR images, reconstructed with the AIP technique, can be compared with conventional images in clinical practice and for educational purposes.
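An average intensity projection like the one used to synthesize s-CXR images is a single mean-reduction along the projection axis of the CT volume. A toy sketch with a random stand-in volume (the axis order and HU range are assumptions for illustration):

```python
import numpy as np

# Stand-in CT volume with axes (anterior-posterior, cranio-caudal, lateral), in HU.
rng = np.random.default_rng(42)
ct_volume = rng.integers(-1000, 1000, size=(64, 128, 128))

# Average intensity projection: collapse the AP axis to get a 2-D
# frontal "synthetic radiograph".
s_cxr = ct_volume.mean(axis=0)
```

On real data, each pixel of `s_cxr` is the mean attenuation along one AP ray, which is what makes the result visually comparable to a conventional frontal CXR.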


Subject(s)
Radiography, Thoracic , Sensitivity and Specificity , Tomography, X-Ray Computed , Humans , Male , Female , Tomography, X-Ray Computed/methods , Middle Aged , Retrospective Studies , Aged , Radiography, Thoracic/methods , Adult , Aged, 80 and over , Radiographic Image Interpretation, Computer-Assisted/methods , Pleural Diseases/diagnostic imaging , Reproducibility of Results , Observer Variation
13.
BMC Med Imaging ; 24(1): 120, 2024 May 24.
Article in English | MEDLINE | ID: mdl-38789925

ABSTRACT

BACKGROUND: Lung cancer is the second most common cancer worldwide, with over two million new cases per year. Early identification would allow healthcare practitioners to handle it more effectively. The advancement of computer-aided detection systems significantly impacted clinical analysis and decision-making on human disease. Towards this, machine learning and deep learning techniques are successfully being applied. Due to several advantages, transfer learning has become popular for disease detection based on image data. METHODS: In this work, we build a novel transfer learning model (VER-Net) by stacking three different transfer learning models to detect lung cancer using lung CT scan images. The model is trained to map the CT scan images to four lung cancer classes. Various measures, such as image preprocessing, data augmentation, and hyperparameter tuning, are taken to improve the efficacy of VER-Net. All the models are trained and evaluated using multiclass chest CT images. RESULTS: The experimental results confirm that VER-Net outperformed the eight other transfer learning models it was compared with. VER-Net scored 91%, 92%, 91%, and 91.3% when tested for accuracy, precision, recall, and F1-score, respectively. Compared to the state-of-the-art, VER-Net has better accuracy. CONCLUSION: VER-Net is not only effective for lung cancer detection but may also be useful for other diseases for which CT scan images are available.


Subject(s)
Lung Neoplasms , Tomography, X-Ray Computed , Humans , Lung Neoplasms/diagnostic imaging , Tomography, X-Ray Computed/methods , Machine Learning , Deep Learning , Radiographic Image Interpretation, Computer-Assisted/methods
14.
Med Image Anal ; 95: 103185, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38718716

ABSTRACT

BACKGROUND & AIMS: Metabolic dysfunction-associated fatty liver disease (MAFLD) is highly prevalent and can lead to liver complications and comorbidities, with non-invasive tests such as vibration-controlled transient elastography (VCTE) and invasive liver biopsies being used for diagnosis. The aim of the present study was to develop a new fully automated method for quantifying the percentage of fat in the liver based on a voxel analysis of computed tomography (CT) images, addressing previously unresolved diagnostic deficiencies in both contrast-enhanced (CE) and non-contrast-enhanced (NCE) assessments. METHODS: Liver and spleen were segmented using nnU-Net on CE- and NCE-CT images. Radiodensity values were obtained for both organs to define the key benchmarks for fatty liver assessment: liver mean, liver-to-spleen ratio, liver-spleen difference, and their average. VCTE was used for validation. A classification task method was developed to select suitable patients, ensuring maximum reproducibility across cohorts and highlighting subjects with other potential radiodensity-related diseases. RESULTS: Best accuracy was attained using the average of all proposed benchmarks, with the liver-to-spleen ratio being especially useful for CE and the liver-spleen difference for NCE. The proposed whole-organ automatic segmentation displayed superior potential compared to the typically used manual region-of-interest drawing, as it allows accurate estimation of the percentage of fat in the liver, among other improvements. Atypical patients were successfully stratified through a function based on biochemical data. CONCLUSIONS: The developed method tackles current drawbacks, including the invasiveness of biopsy and CT-related weaknesses such as lack of automation, dependency on contrast agent, absence of quantification of the percentage of fat in the liver, and limited information on region-to-organ involvement.
We propose this tool as an alternative for individualized MAFLD evaluation through early detection of abnormal radiodensity-based CT patterns, while also identifying non-suitable patients to avoid unnecessary exposure to CT radiation. Furthermore, this work presents a surrogate aid for assessing fatty liver in a primary assessment of MAFLD alongside elastography data.
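The three radiodensity benchmarks follow directly from the mean HU of the segmented organs. A sketch with invented voxel values (real use would take all voxels inside the nnU-Net masks):

```python
import numpy as np

# Hypothetical HU samples from the segmented organs.
liver_hu = np.array([48.0, 52.0, 50.0, 46.0, 54.0])
spleen_hu = np.array([44.0, 46.0, 45.0, 43.0, 47.0])

liver_mean = liver_hu.mean()
spleen_mean = spleen_hu.mean()
benchmarks = {
    "liver_mean": liver_mean,
    "liver_spleen_ratio": liver_mean / spleen_mean,
    "liver_spleen_diff": liver_mean - spleen_mean,
}
# A liver hypodense relative to the spleen (ratio < 1, difference < 0) is a
# classic CT sign of hepatic steatosis; this toy liver is still denser.
```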


Subject(s)
Tomography, X-Ray Computed , Humans , Tomography, X-Ray Computed/methods , Reproducibility of Results , Male , Contrast Media , Middle Aged , Female , Radiographic Image Interpretation, Computer-Assisted/methods , Elasticity Imaging Techniques/methods , Aged , Fatty Liver/diagnostic imaging , Non-alcoholic Fatty Liver Disease/diagnostic imaging , Spleen/diagnostic imaging , Liver/diagnostic imaging , Adult
15.
Med Image Anal ; 95: 103194, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38749304

ABSTRACT

Real-time diagnosis of intracerebral hemorrhage after thrombectomy is crucial for follow-up treatment. However, this is difficult to achieve with standard single-energy CT (SECT) due to similar CT values of blood and contrast agents under a single energy spectrum. In contrast, dual-energy CT (DECT) scanners employ two different energy spectra, which allows for real-time differentiation between hemorrhage and contrast extravasation based on energy-related attenuation characteristics. Unfortunately, DECT scanners are not as widely used as SECT scanners due to their high costs. To address this dilemma, in this paper, we generate pseudo DECT images from a SECT image for real-time diagnosis of hemorrhage. More specifically, we propose a SECT-to-DECT Transformer-based Generative Adversarial Network (SDTGAN), which is a 3D transformer-based multi-task learning framework equipped with a shared attention mechanism. In this way, SDTGAN can be guided to focus more on high-density areas (crucial for hemorrhage diagnosis) during the generation. Meanwhile, the introduced multi-task learning strategy and the shared attention mechanism also enable SDTGAN to model dependencies between interconnected generation tasks, improving generation performance while significantly reducing model parameters and computational complexity. In the experiments, we approximate real SECT images using mixed 120kV images from DECT data to address the issue of not being able to obtain the true paired DECT and SECT data. Extensive experiments demonstrate that SDTGAN can generate DECT images better than state-of-the-art methods. The code of our implementation is available at https://github.com/jiang-cw/SDTGAN.


Subject(s)
Cerebral Hemorrhage , Tomography, X-Ray Computed , Cerebral Hemorrhage/diagnostic imaging , Humans , Tomography, X-Ray Computed/methods , Radiography, Dual-Energy Scanned Projection/methods , Radiographic Image Interpretation, Computer-Assisted/methods
16.
Med Image Anal ; 95: 103199, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38759258

ABSTRACT

Accurate diagnosis of pathological subtypes of lung cancer is of significant importance for follow-up treatment and prognosis management. In this paper, we propose the self-generating hybrid feature network (SGHF-Net) for accurately classifying lung cancer subtypes on computed tomography (CT) images. Inspired by studies stating that cross-scale associations exist in the image patterns between the same case's CT images and its pathological images, we developed a pathological feature synthetic module (PFSM), which quantitatively maps cross-modality associations through deep neural networks, to derive the "gold standard" information contained in the corresponding pathological images from CT images alone. Additionally, we designed a radiological feature extraction module (RFEM) to directly acquire CT image information and integrated it with the pathological priors under an effective feature fusion framework, enabling the entire classification model to generate more indicative and specific pathologically related features and eventually output more accurate predictions. The superiority of the proposed model lies in its ability to self-generate hybrid features that contain multi-modality image information based on a single-modality input. To evaluate the effectiveness, adaptability, and generalization ability of our model, we performed extensive experiments on a large-scale multi-center dataset (i.e., 829 cases from three hospitals) to compare our model with a series of state-of-the-art (SOTA) classification models. The experimental results demonstrated the superiority of our model for lung cancer subtype classification, with significant improvements in accuracy (ACC), area under the curve (AUC), positive predictive value (PPV) and F1-score.


Subject(s)
Lung Neoplasms , Tomography, X-Ray Computed , Humans , Lung Neoplasms/diagnostic imaging , Lung Neoplasms/classification , Tomography, X-Ray Computed/methods , Neural Networks, Computer , Radiographic Image Interpretation, Computer-Assisted/methods , Algorithms
17.
Biomed Phys Eng Express ; 10(4)2024 May 15.
Article in English | MEDLINE | ID: mdl-38701765

ABSTRACT

Purpose. To improve breast cancer risk prediction for young women, we have developed deep learning methods to estimate mammographic density from low dose mammograms taken at approximately 1/10th of the usual dose. We investigate the quality and reliability of the density scores produced on low dose mammograms, focusing on how image resolution and levels of training affect the low dose predictions. Methods. Deep learning models are developed and tested, with two feature extraction methods and an end-to-end trained method, on five different resolutions of 15,290 standard dose and simulated low dose mammograms with known labels. The models are further tested on a dataset with 296 matching standard and real low dose images, allowing performance on the low dose images to be ascertained. Results. Prediction quality on standard and simulated low dose images compared to labels is similar for all equivalent model training and image resolution versions. Increasing resolution improves the performance of both feature extraction methods on standard and simulated low dose images, while the end-to-end trained models show high performance across the resolutions. For the trained models, the Spearman rank correlation coefficient between predictions on standard and low dose images is 0.951 (0.937 to 0.960) at low resolution and 0.956 (0.942 to 0.965) at the highest resolution. If pairs of model predictions are averaged, similarity increases. Conclusions. Deep learning mammographic density predictions on low dose mammograms are highly correlated with standard dose equivalents for feature extraction and end-to-end approaches across multiple image resolutions. Deep learning models can reliably make high quality mammographic density predictions on low dose mammograms.
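The agreement metric this abstract reports, the Spearman rank correlation between paired standard- and low-dose predictions, is just the Pearson correlation of the ranks. A minimal self-contained sketch (no ties assumed; the paired density scores below are hypothetical, not the study's data):

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks.
    Assumes no ties (average-rank tie handling is omitted)."""
    rx = np.argsort(np.argsort(x)).astype(float)  # rank of each element
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry)))

# Hypothetical paired density scores for standard- vs low-dose images.
standard = np.array([12.1, 30.5, 22.0, 45.3, 8.7, 27.9])
low_dose = np.array([13.0, 29.1, 23.5, 44.0, 9.9, 26.2])
rho = spearman_rho(standard, low_dose)
```

Because Spearman's rho depends only on rank order, a low-dose model can score highly (as in the reported 0.95+ values) even if its raw density scores are systematically shifted relative to the standard-dose model.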


Subject(s)
Breast Density , Breast Neoplasms , Deep Learning , Mammography , Radiation Dosage , Humans , Mammography/methods , Female , Breast Neoplasms/diagnostic imaging , Breast/diagnostic imaging , Image Processing, Computer-Assisted/methods , Reproducibility of Results , Algorithms , Radiographic Image Interpretation, Computer-Assisted/methods
20.
Radiology ; 311(2): e232286, 2024 May.
Article in English | MEDLINE | ID: mdl-38771177

ABSTRACT

Background Artificial intelligence (AI) is increasingly used to manage radiologists' workloads. The impact of patient characteristics on AI performance has not been well studied. Purpose To understand the impact of patient characteristics (race and ethnicity, age, and breast density) on the performance of an AI algorithm interpreting negative screening digital breast tomosynthesis (DBT) examinations. Materials and Methods This retrospective cohort study identified negative screening DBT examinations from an academic institution from January 1, 2016, to December 31, 2019. All examinations had 2 years of follow-up without a diagnosis of atypia or breast malignancy and were therefore considered true negatives. A subset of unique patients was randomly selected to provide a broad distribution of race and ethnicity. DBT studies in this final cohort were interpreted by a U.S. Food and Drug Administration-approved AI algorithm, which generated case scores (malignancy certainty) and risk scores (1-year subsequent malignancy risk) for each mammogram. Positive examinations were classified based on vendor-provided thresholds for both scores. Multivariable logistic regression was used to understand relationships between the scores and patient characteristics. Results A total of 4855 patients (median age, 54 years [IQR, 46-63 years]) were included: 27% (1316 of 4855) White, 26% (1261 of 4855) Black, 28% (1351 of 4855) Asian, and 19% (927 of 4855) Hispanic patients. False-positive case scores were significantly more likely in Black patients (odds ratio [OR] = 1.5 [95% CI: 1.2, 1.8]) and less likely in Asian patients (OR = 0.7 [95% CI: 0.5, 0.9]) compared with White patients, and more likely in older patients (71-80 years; OR = 1.9 [95% CI: 1.5, 2.5]) and less likely in younger patients (41-50 years; OR = 0.6 [95% CI: 0.5, 0.7]) compared with patients aged 51-60 years. 
False-positive risk scores were more likely in Black patients (OR = 1.5 [95% CI: 1.0, 2.0]), patients aged 61-70 years (OR = 3.5 [95% CI: 2.4, 5.1]), and patients with extremely dense breasts (OR = 2.8 [95% CI: 1.3, 5.8]) compared with White patients, patients aged 51-60 years, and patients with fatty breast density, respectively. Conclusion Patient characteristics influenced the case and risk scores of a Food and Drug Administration-approved AI algorithm analyzing negative screening DBT examinations. © RSNA, 2024.
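The odds ratios with 95% CIs reported above come from multivariable logistic regression; the unadjusted calculation below only illustrates how an OR and its Wald confidence interval are formed from a 2x2 table. The counts are hypothetical, not the study's data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio and Wald 95% CI from a 2x2 table:
        a = exposed cases,    b = exposed non-cases,
        c = unexposed cases,  d = unexposed non-cases.
    SE of log(OR) is sqrt(1/a + 1/b + 1/c + 1/d)."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: false-positive vs non-false-positive case scores
# in two patient groups.
or_, lo, hi = odds_ratio_ci(a=90, b=500, c=60, d=500)
```

A CI whose lower bound sits above 1.0, as in several of the reported ORs, indicates the group difference is unlikely to be chance at the 5% level; regression adjustment (as in the study) additionally controls for the other patient characteristics.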


Subject(s)
Algorithms , Artificial Intelligence , Breast Neoplasms , Mammography , Humans , Female , Middle Aged , Retrospective Studies , Mammography/methods , Breast Neoplasms/diagnostic imaging , Breast/diagnostic imaging , Radiographic Image Interpretation, Computer-Assisted/methods , Aged , Adult , Breast Density