Results 1 - 20 of 8,750
2.
Eur Radiol Exp ; 8(1): 54, 2024 May 03.
Article in English | MEDLINE | ID: mdl-38698099

ABSTRACT

BACKGROUND: We aimed to improve the image quality (IQ) of sparse-view computed tomography (CT) images using a U-Net for lung metastasis detection and to determine the best tradeoff between the number of views, IQ, and diagnostic confidence. METHODS: CT images from 41 subjects aged 62.8 ± 10.6 years (mean ± standard deviation; 23 men), 34 with lung metastasis and 7 healthy, were retrospectively selected (2016-2018) and forward projected onto 2,048-view sinograms. Six corresponding sparse-view CT data subsets at varying levels of undersampling were reconstructed from sinograms using filtered backprojection with 16, 32, 64, 128, 256, and 512 views. A dual-frame U-Net was trained and evaluated for each subsampling level on 8,658 images from 22 diseased subjects. A representative image per scan was selected from 19 subjects (12 diseased, 7 healthy) for a single-blinded multireader study. These slices, for all levels of subsampling, with and without U-Net postprocessing, were presented to three readers. IQ and diagnostic confidence were ranked using predefined scales. Subjective nodule segmentation was evaluated using sensitivity and the Dice similarity coefficient (DSC); the clustered Wilcoxon signed-rank test was used. RESULTS: The 64-projection sparse-view images resulted in 0.89 sensitivity and 0.81 DSC, while their counterparts, postprocessed with the U-Net, had improved metrics (0.94 sensitivity and 0.85 DSC) (p = 0.400). Fewer views led to insufficient IQ for diagnosis. For increased views, no substantial discrepancies were noted between sparse-view and postprocessed images. CONCLUSIONS: Projection views can be reduced from 2,048 to 64 while maintaining IQ and radiologists' confidence at a satisfactory level. RELEVANCE STATEMENT: Our reader study demonstrates the benefit of U-Net postprocessing for regular CT screening of patients with lung metastasis, increasing IQ and diagnostic confidence while reducing the dose. 
KEY POINTS: • Sparse-projection-view streak artifacts reduce the quality and usability of sparse-view CT images. • U-Net-based postprocessing removes sparse-view artifacts while maintaining diagnostically accurate IQ. • Postprocessed sparse-view CTs drastically increase radiologists' confidence in diagnosing lung metastasis.
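The reader-study segmentation metrics above (sensitivity and DSC) reduce to simple overlap counts on binary masks. A minimal numpy sketch; the toy 8×8 masks are illustrative, not study data:

```python
import numpy as np

def seg_metrics(pred, truth):
    """Sensitivity and Dice similarity coefficient (DSC) for binary masks."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    tp = np.logical_and(pred, truth).sum()     # correctly segmented pixels
    fn = np.logical_and(~pred, truth).sum()    # missed pixels
    sensitivity = tp / (tp + fn)
    dsc = 2.0 * tp / (pred.sum() + truth.sum())
    return sensitivity, dsc

# Toy masks: a 16-pixel "nodule" and a prediction covering 12 of its pixels
truth = np.zeros((8, 8), dtype=bool); truth[2:6, 2:6] = True
pred = np.zeros((8, 8), dtype=bool); pred[2:6, 2:5] = True
sens, dsc = seg_metrics(pred, truth)   # sens = 0.75, dsc ≈ 0.857
```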


Subject(s)
Lung Neoplasms , Tomography, X-Ray Computed , Humans , Lung Neoplasms/diagnostic imaging , Male , Middle Aged , Tomography, X-Ray Computed/methods , Female , Retrospective Studies , Radiographic Image Interpretation, Computer-Assisted/methods , Aged
3.
Cancer Imaging ; 24(1): 60, 2024 May 09.
Article in English | MEDLINE | ID: mdl-38720391

ABSTRACT

BACKGROUND: This study systematically compares the impact of innovative deep learning image reconstruction (DLIR, TrueFidelity) with that of conventionally used iterative reconstruction (IR) on nodule volumetry and subjective image quality (IQ) at highly reduced radiation doses. This is essential in the context of low-dose CT lung cancer screening, where accurate volumetry and characterization of pulmonary nodules in repeated CT scanning are indispensable. MATERIALS AND METHODS: A standardized CT dataset was established using an anthropomorphic chest phantom (Lungman, Kyoto Kagaku Inc., Kyoto, Japan) containing a set of 3D-printed lung nodules covering six diameters (4 to 9 mm) and three morphology classes (lobular, spiculated, smooth), with an established ground truth. Images were acquired at varying radiation doses (6.04, 3.03, 1.54, 0.77, 0.41, and 0.20 mGy) and reconstructed with combinations of reconstruction kernels (soft and hard) and reconstruction algorithms (ASIR-V and DLIR at low, medium, and high strength). Semi-automatic volumetry measurements and subjective image quality scores recorded by five radiologists were analyzed with multiple linear regression and mixed-effect ordinal logistic regression models. RESULTS: Volumetric errors of nodules imaged with DLIR are up to 50% lower compared to ASIR-V, especially at radiation doses below 1 mGy and when reconstructed with a hard kernel. Across all nodule diameters and morphologies, volumetric errors are also commonly lower with DLIR. Furthermore, DLIR renders higher subjective IQ, especially at sub-mGy doses. Radiologists were up to nine times more likely to assign the highest IQ score to these images compared to those reconstructed with ASIR-V. Lung nodules with irregular margins and small diameters were also more likely (up to five times) to be ascribed the best IQ scores when reconstructed with DLIR. 
CONCLUSION: We observed that DLIR performs as well as, or even outperforms, conventionally used reconstruction algorithms in terms of volumetric accuracy and subjective IQ of nodules in an anthropomorphic chest phantom. As such, DLIR may allow lowering the radiation dose for lung cancer screening participants without compromising accurate measurement and characterization of lung nodules.
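The volumetric error against a phantom ground truth is a simple relative difference. A numpy sketch with a 6 mm nodule; the two "measured" volumes are hypothetical readings invented for illustration, not the study's data:

```python
import numpy as np

def volumetric_error_pct(measured_mm3, truth_mm3):
    """Absolute percentage volumetric error against the phantom ground truth."""
    return 100.0 * abs(measured_mm3 - truth_mm3) / truth_mm3

# Ground-truth volume of a 6 mm (radius 3 mm) sphere
truth = 4.0 / 3.0 * np.pi * 3.0 ** 3            # ≈ 113.1 mm^3
err_asirv = volumetric_error_pct(136.0, truth)  # hypothetical ASIR-V reading
err_dlir = volumetric_error_pct(124.0, truth)   # hypothetical DLIR reading
```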


Subject(s)
Deep Learning , Lung Neoplasms , Multiple Pulmonary Nodules , Phantoms, Imaging , Radiation Dosage , Tomography, X-Ray Computed , Humans , Tomography, X-Ray Computed/methods , Multiple Pulmonary Nodules/diagnostic imaging , Multiple Pulmonary Nodules/pathology , Lung Neoplasms/diagnostic imaging , Lung Neoplasms/pathology , Solitary Pulmonary Nodule/diagnostic imaging , Solitary Pulmonary Nodule/pathology , Radiographic Image Interpretation, Computer-Assisted/methods , Image Processing, Computer-Assisted/methods
4.
Radiology ; 311(2): e232178, 2024 May.
Article in English | MEDLINE | ID: mdl-38742970

ABSTRACT

Background Accurate characterization of suspicious small renal masses is crucial for optimized management. Deep learning (DL) algorithms may assist with this effort. Purpose To develop and validate a DL algorithm for identifying benign small renal masses at contrast-enhanced multiphase CT. Materials and Methods Surgically resected renal masses measuring 3 cm or less in diameter at contrast-enhanced CT were included. The DL algorithm was developed by using retrospective data from one hospital between 2009 and 2021, with patients randomly allocated to training and internal test sets at an 8:2 ratio. External testing was performed on data from five independent hospitals acquired between 2013 and 2021. A prospective test set was obtained between 2021 and 2022 from one hospital. Algorithm performance was evaluated by using the area under the receiver operating characteristic curve (AUC) and compared with the results of seven clinicians using the DeLong test. Results A total of 1703 patients (mean age, 56 years ± 12 [SD]; 619 female) with a single renal mass per patient were evaluated. The retrospective data set included 1063 lesions (874 in the training set, 189 in the internal test set); the multicenter external test set included 537 lesions (12.3% [66] benign), with 89 (16.6%) subcentimeter (≤1 cm) lesions; and the prospective test set included 103 lesions (13.6% [14] benign), with 20 (19.4%) subcentimeter lesions. The DL algorithm's performance was comparable with that of urological radiologists: for the external test set, AUC was 0.80 (95% CI: 0.75, 0.85) versus 0.84 (95% CI: 0.78, 0.88) (P = .61); for the prospective test set, AUC was 0.87 (95% CI: 0.79, 0.93) versus 0.92 (95% CI: 0.86, 0.96) (P = .70). For subcentimeter lesions in the external test set, the algorithm and urological radiologists had similar AUCs of 0.74 (95% CI: 0.63, 0.83) and 0.81 (95% CI: 0.68, 0.92) (P = .78), respectively. 
Conclusion The multiphase CT-based DL algorithm showed comparable performance with that of radiologists for identifying benign small renal masses, including lesions of 1 cm or less. Published under a CC BY 4.0 license. Supplemental material is available for this article.
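The AUC used to benchmark the algorithm can be computed directly as the Mann-Whitney probability that a randomly chosen positive case outscores a randomly chosen negative one. A minimal numpy sketch with toy scores:

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    # Count positive/negative pairs where the positive outscores the negative
    # (ties count as half a win).
    wins = (pos[:, None] > neg[None, :]).sum() \
        + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

labels = [0, 0, 1, 1]
scores = [0.1, 0.4, 0.35, 0.8]
value = auc(scores, labels)   # 3 of 4 pairs ranked correctly -> 0.75
```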


Subject(s)
Contrast Media , Deep Learning , Kidney Neoplasms , Tomography, X-Ray Computed , Humans , Female , Male , Middle Aged , Kidney Neoplasms/diagnostic imaging , Kidney Neoplasms/pathology , Retrospective Studies , Tomography, X-Ray Computed/methods , Prospective Studies , Radiographic Image Interpretation, Computer-Assisted/methods , Aged , Algorithms , Kidney/diagnostic imaging , Adult
5.
BMC Med Inform Decis Mak ; 24(1): 126, 2024 May 16.
Article in English | MEDLINE | ID: mdl-38755563

ABSTRACT

BACKGROUND: Chest X-ray imaging based abnormality localization, essential in diagnosing various diseases, faces significant clinical challenges due to complex interpretations and the growing workload of radiologists. While recent advances in deep learning offer promising solutions, there is still a critical issue of domain inconsistency in cross-domain transfer learning, which hampers the efficiency and accuracy of diagnostic processes. This study aims to address the domain inconsistency problem and improve automatic abnormality localization performance in heterogeneous chest X-ray image analysis, particularly in detecting abnormalities, by developing a self-supervised learning strategy called "BarlowTwins-CXR". METHODS: We utilized two publicly available datasets: the NIH Chest X-ray dataset and the VinDr-CXR dataset. The BarlowTwins-CXR approach was conducted in a two-stage training process. Initially, self-supervised pre-training was performed using an adjusted Barlow Twins algorithm on the NIH dataset with a ResNet50 backbone pre-trained on ImageNet. This was followed by supervised fine-tuning on the VinDr-CXR dataset using Faster R-CNN with a Feature Pyramid Network (FPN). The study employed mean Average Precision (mAP) at an Intersection over Union (IoU) of 50% and Area Under the Curve (AUC) for performance evaluation. RESULTS: Our experiments showed a significant improvement in model performance with BarlowTwins-CXR. The approach achieved a 3% increase in mAP50 accuracy compared to traditional ImageNet pre-trained models. In addition, the Ablation-CAM method revealed enhanced precision in localizing chest abnormalities. The study involved 112,120 images from the NIH dataset and 18,000 images from the VinDr-CXR dataset, indicating robust training and testing samples. 
CONCLUSION: BarlowTwins-CXR significantly enhances the efficiency and accuracy of chest X-ray image-based abnormality localization, outperforming traditional transfer learning methods and effectively overcoming domain inconsistency in cross-domain scenarios. Our experiment results demonstrate the potential of using self-supervised learning to improve the generalizability of models in medical settings with limited amounts of heterogeneous data. This approach can be instrumental in aiding radiologists, particularly in high-workload environments, offering a promising direction for future AI-driven healthcare solutions.
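The Barlow Twins objective at the core of this approach drives the cross-correlation matrix of two augmented views' embeddings toward the identity. A minimal numpy sketch of the loss; the batch size, embedding width, and λ weight are illustrative, and the paper's "adjusted" variant may differ in details:

```python
import numpy as np

def barlow_twins_loss(z_a, z_b, lam=5e-3):
    """Barlow Twins loss: invariance term on the diagonal of the
    cross-correlation matrix plus a redundancy-reduction term off it."""
    n, d = z_a.shape
    # Standardize each embedding dimension over the batch
    z_a = (z_a - z_a.mean(0)) / z_a.std(0)
    z_b = (z_b - z_b.mean(0)) / z_b.std(0)
    c = (z_a.T @ z_b) / n                        # d x d cross-correlation
    on_diag = ((np.diag(c) - 1.0) ** 2).sum()    # push diagonal toward 1
    off_diag = (c ** 2).sum() - (np.diag(c) ** 2).sum()  # decorrelate the rest
    return on_diag + lam * off_diag

rng = np.random.default_rng(0)
z = rng.normal(size=(32, 8))   # toy batch of 32 embeddings of width 8
```

Identical views give a near-zero loss (the diagonal is exactly 1), while uncorrelated views are penalized through the invariance term.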


Subject(s)
Radiography, Thoracic , Supervised Machine Learning , Humans , Deep Learning , Radiographic Image Interpretation, Computer-Assisted/methods , Datasets as Topic
6.
Crit Rev Biomed Eng ; 52(4): 41-60, 2024.
Article in English | MEDLINE | ID: mdl-38780105

ABSTRACT

Breast cancer is a leading cause of mortality among women, both in India and globally. The prevalence of breast masses is notably common in women aged 20 to 60. These breast masses are classified, according to the Breast Imaging-Reporting and Data System (BI-RADS) standard, into categories such as fibroadenoma, breast cysts, and benign and malignant masses. To aid in the diagnosis of breast disorders, imaging plays a vital role, with mammography being the most widely used modality for detecting breast abnormalities over the years. However, the process of identifying breast diseases through mammograms can be time-consuming, requiring experienced radiologists to review a significant volume of images. Early detection of breast masses is crucial for effective disease management, ultimately reducing mortality rates. To address this challenge, advancements in image processing techniques, specifically those utilizing artificial intelligence (AI) and machine learning (ML), have paved the way for the development of decision support systems. These systems assist radiologists in the accurate identification and classification of breast disorders. This paper presents a review of various studies in which diverse machine learning approaches have been applied to digital mammograms. These approaches aim to identify breast masses and classify them into distinct subclasses such as normal, benign, and malignant. Additionally, the paper highlights both the advantages and limitations of existing techniques, offering valuable insights for the benefit of future research endeavors in this critical area of medical imaging and breast health.


Subject(s)
Breast Neoplasms , Machine Learning , Mammography , Humans , Mammography/methods , Breast Neoplasms/diagnostic imaging , Female , Breast/diagnostic imaging , Radiographic Image Interpretation, Computer-Assisted/methods
7.
Sci Rep ; 14(1): 11810, 2024 05 23.
Article in English | MEDLINE | ID: mdl-38782976

ABSTRACT

In this retrospective study, we aimed to assess the objective and subjective image quality of different reconstruction techniques and a deep learning-based software on non-contrast head computed tomography (CT) images. In total, 152 adult head CT scans (77 female, 75 male; mean age 69.4 ± 18.3 years) obtained from three different CT scanners using different protocols between March and April 2021 were included. CT images were reconstructed using filtered-back projection (FBP), iterative reconstruction (IR), and post-processed using a deep learning-based algorithm (PS). Post-processing significantly reduced noise in FBP-reconstructed images (up to 15.4% reduction) depending on the protocol, leading to improvements in signal-to-noise ratio of up to 19.7%. However, when deep learning-based post-processing was applied to FBP images compared to IR alone, the differences were inconsistent and partly non-significant, which appeared to be protocol or site specific. Subjective assessments showed no significant overall improvement in image quality for all reconstructions and post-processing. Inter-rater reliability was low and preferences varied. Deep learning-based denoising software improved objective image quality compared to FBP in routine head CT. A significant difference compared to IR was observed for only one protocol. Subjective assessments did not indicate a significant clinical impact in terms of improved subjective image quality, likely due to the low noise levels in full-dose images.
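The objective measures used here (noise as the standard deviation within a homogeneous ROI, SNR as mean over SD) are straightforward to compute. A numpy sketch on synthetic ROIs; the HU level and noise figures are illustrative stand-ins, not the study's measurements:

```python
import numpy as np

def snr(roi):
    """Signal-to-noise ratio of a homogeneous ROI: mean HU over noise (SD)."""
    return roi.mean() / roi.std()

rng = np.random.default_rng(1)
# Synthetic homogeneous ROIs at ~40 HU: FBP noise, then ~15% less after denoising
fbp_roi = 40.0 + rng.normal(0.0, 5.0, size=(64, 64))
post_roi = 40.0 + rng.normal(0.0, 4.2, size=(64, 64))
noise_reduction = 100.0 * (fbp_roi.std() - post_roi.std()) / fbp_roi.std()
```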


Subject(s)
Deep Learning , Head , Software , Tomography, X-Ray Computed , Humans , Female , Tomography, X-Ray Computed/methods , Male , Aged , Head/diagnostic imaging , Retrospective Studies , Middle Aged , Aged, 80 and over , Image Processing, Computer-Assisted/methods , Signal-To-Noise Ratio , Adult , Algorithms , Radiographic Image Interpretation, Computer-Assisted/methods
9.
Biomed Phys Eng Express ; 10(4)2024 May 22.
Article in English | MEDLINE | ID: mdl-38744255

ABSTRACT

Purpose. To develop a method to extract statistical low-contrast detectability (LCD) and contrast-detail (C-D) curves from clinical patient images. Method. We used the region of air surrounding the patient as an alternative to a homogeneous region within the patient. A simple graphical user interface (GUI) was created to set the initial configuration for region of interest (ROI), ROI size, and minimum detectable contrast (MDC). The process was started by segmenting the air surrounding the patient with a threshold between -980 HU (Hounsfield units) and -1024 HU to get an air mask. The mask was trimmed using the patient center coordinates to avoid distortion from the patient table. It was used to automatically place square ROIs of a predetermined size. The mean pixel values in HU within each ROI were calculated, and the standard deviation (SD) of all the means was obtained. The MDC for a particular target size was generated by multiplying the SD by 3.29. A C-D curve was obtained by iterating this process for the other ROI sizes. This method was applied to the homogeneous area from the uniformity module of an ACR CT phantom to find the correlation between the parameters inside and outside the phantom, and to 30 thoracic, 26 abdominal, and 23 head images. Results. The phantom images showed a significant linear correlation between the LCDs obtained from outside and inside the phantom, with R² values of 0.67 and 0.99 for variations in tube current and tube voltage, respectively. This indicated that the air region outside the phantom can act as a surrogate for the homogeneous region inside the phantom to obtain the LCD and C-D curves. Conclusion. The C-D curves obtained from outside the ACR CT phantom show a strong linear correlation with those from inside the phantom. The proposed method can also be used to extract the LCD from patient images by using the region of air outside as a surrogate for a region inside the patient.
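The described procedure (segment the air, tile it with square ROIs, take 3.29 × the SD of the ROI means as the MDC, and iterate over ROI sizes) can be sketched directly. The synthetic air-only image and ROI sizes below are illustrative stand-ins for a patient scan:

```python
import numpy as np

def contrast_detail(image, roi_sizes=(5, 9, 15), air_range=(-1024, -980)):
    """Statistical LCD from the air region: for each ROI size,
    MDC = 3.29 * SD of the mean HU over all air-only ROIs."""
    lo, hi = air_range
    air = (image >= lo) & (image <= hi)     # threshold-based air mask
    curve = {}
    for s in roi_sizes:
        means = []
        for r in range(0, image.shape[0] - s, s):
            for c in range(0, image.shape[1] - s, s):
                if air[r:r + s, c:c + s].all():      # ROI fully inside air
                    means.append(image[r:r + s, c:c + s].mean())
        curve[s] = 3.29 * np.std(means)
    return curve

rng = np.random.default_rng(2)
scan = rng.normal(-1000.0, 6.0, size=(128, 128))   # synthetic "air" background
cd = contrast_detail(scan)
```

Larger ROIs average away more noise, so the MDC falls with target size, which is exactly the shape of a C-D curve.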


Subject(s)
Algorithms , Tomography, X-Ray Computed , Humans , Tomography, X-Ray Computed/methods , Phantoms, Imaging , Image Processing, Computer-Assisted/methods , User-Computer Interface , Radiographic Image Interpretation, Computer-Assisted/methods
10.
Biomed Phys Eng Express ; 10(4)2024 May 15.
Article in English | MEDLINE | ID: mdl-38701765

ABSTRACT

Purpose. To improve breast cancer risk prediction for young women, we have developed deep learning methods to estimate mammographic density from low dose mammograms taken at approximately 1/10th of the usual dose. We investigate the quality and reliability of the density scores produced on low dose mammograms focussing on how image resolution and levels of training affect the low dose predictions. Methods. Deep learning models are developed and tested, with two feature extraction methods and an end-to-end trained method, on five different resolutions of 15,290 standard dose and simulated low dose mammograms with known labels. The models are further tested on a dataset with 296 matching standard and real low dose images allowing performance on the low dose images to be ascertained. Results. Prediction quality on standard and simulated low dose images compared to labels is similar for all equivalent model training and image resolution versions. Increasing resolution results in improved performance of both feature extraction methods for standard and simulated low dose images, while the trained models show high performance across the resolutions. For the trained models the Spearman rank correlation coefficient between predictions of standard and low dose images at low resolution is 0.951 (0.937 to 0.960) and at the highest resolution 0.956 (0.942 to 0.965). If pairs of model predictions are averaged, similarity increases. Conclusions. Deep learning mammographic density predictions on low dose mammograms are highly correlated with standard dose equivalents for feature extraction and end-to-end approaches across multiple image resolutions. Deep learning models can reliably make high quality mammographic density predictions on low dose mammograms.
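The agreement between standard- and low-dose predictions is summarized with the Spearman rank correlation, i.e., the Pearson correlation of the ranks. A minimal numpy sketch on hypothetical density scores (no tie handling):

```python
import numpy as np

def spearman(x, y):
    """Spearman rank correlation for distinct values:
    Pearson correlation of the rank vectors."""
    rx = np.argsort(np.argsort(x)).astype(float)   # ranks of x
    ry = np.argsort(np.argsort(y)).astype(float)   # ranks of y
    return np.corrcoef(rx, ry)[0, 1]

# Hypothetical density scores from paired standard- and low-dose mammograms
standard = np.array([10.2, 35.1, 22.4, 8.9, 41.0])
low_dose = np.array([11.0, 33.8, 24.1, 9.5, 39.2])
rho = spearman(standard, low_dose)   # identical ordering -> 1.0
```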


Subject(s)
Breast Density , Breast Neoplasms , Deep Learning , Mammography , Radiation Dosage , Humans , Mammography/methods , Female , Breast Neoplasms/diagnostic imaging , Breast/diagnostic imaging , Image Processing, Computer-Assisted/methods , Reproducibility of Results , Algorithms , Radiographic Image Interpretation, Computer-Assisted/methods
12.
Radiology ; 311(2): e232286, 2024 May.
Article in English | MEDLINE | ID: mdl-38771177

ABSTRACT

Background Artificial intelligence (AI) is increasingly used to manage radiologists' workloads. The impact of patient characteristics on AI performance has not been well studied. Purpose To understand the impact of patient characteristics (race and ethnicity, age, and breast density) on the performance of an AI algorithm interpreting negative screening digital breast tomosynthesis (DBT) examinations. Materials and Methods This retrospective cohort study identified negative screening DBT examinations from an academic institution from January 1, 2016, to December 31, 2019. All examinations had 2 years of follow-up without a diagnosis of atypia or breast malignancy and were therefore considered true negatives. A subset of unique patients was randomly selected to provide a broad distribution of race and ethnicity. DBT studies in this final cohort were interpreted by a U.S. Food and Drug Administration-approved AI algorithm, which generated case scores (malignancy certainty) and risk scores (1-year subsequent malignancy risk) for each mammogram. Positive examinations were classified based on vendor-provided thresholds for both scores. Multivariable logistic regression was used to understand relationships between the scores and patient characteristics. Results A total of 4855 patients (median age, 54 years [IQR, 46-63 years]) were included: 27% (1316 of 4855) White, 26% (1261 of 4855) Black, 28% (1351 of 4855) Asian, and 19% (927 of 4855) Hispanic patients. False-positive case scores were significantly more likely in Black patients (odds ratio [OR] = 1.5 [95% CI: 1.2, 1.8]) and less likely in Asian patients (OR = 0.7 [95% CI: 0.5, 0.9]) compared with White patients, and more likely in older patients (71-80 years; OR = 1.9 [95% CI: 1.5, 2.5]) and less likely in younger patients (41-50 years; OR = 0.6 [95% CI: 0.5, 0.7]) compared with patients aged 51-60 years. 
False-positive risk scores were more likely in Black patients (OR = 1.5 [95% CI: 1.0, 2.0]), patients aged 61-70 years (OR = 3.5 [95% CI: 2.4, 5.1]), and patients with extremely dense breasts (OR = 2.8 [95% CI: 1.3, 5.8]) compared with White patients, patients aged 51-60 years, and patients with fatty density breasts, respectively. Conclusion Patient characteristics influenced the case and risk scores of a Food and Drug Administration-approved AI algorithm analyzing negative screening DBT examinations. © RSNA, 2024.
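Odds ratios with confidence intervals like those reported above come from exponentiating logistic-regression coefficients. A numpy sketch; the coefficient and standard error are hypothetical, not the study's values:

```python
import numpy as np

def odds_ratio_ci(beta, se, z=1.96):
    """Odds ratio and 95% CI from a logistic-regression coefficient
    and its standard error (normal approximation on the log-odds scale)."""
    return np.exp(beta), np.exp(beta - z * se), np.exp(beta + z * se)

# Hypothetical coefficient for one patient-group indicator variable
or_, lo, hi = odds_ratio_ci(beta=0.405, se=0.10)   # OR ≈ 1.5
```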


Subject(s)
Algorithms , Artificial Intelligence , Breast Neoplasms , Mammography , Humans , Female , Middle Aged , Retrospective Studies , Mammography/methods , Breast Neoplasms/diagnostic imaging , Breast/diagnostic imaging , Radiographic Image Interpretation, Computer-Assisted/methods , Aged , Adult , Breast Density
13.
Clin Radiol ; 79(7): e957-e962, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38693034

ABSTRACT

AIM: The comparison between chest x-ray (CXR) and computed tomography (CT) images is commonly required in clinical practice to assess the evolution of chest pathological manifestations. Intrinsic differences between the two techniques, however, limit reader confidence in such a comparison. CT average intensity projection (AIP) reconstruction allows obtaining "synthetic" CXR (s-CXR) images, which are thought to have the potential to increase the accuracy of comparison between CXR and CT imaging. We aimed to assess the diagnostic performance of s-CXR imaging in detecting common pleuro-parenchymal abnormalities. MATERIALS AND METHODS: 142 patients who underwent chest CT examination and CXR within 24 hours were enrolled. CT was the standard of reference. Both conventional CXR (c-CXR) and s-CXR images were retrospectively reviewed for the presence of consolidation, nodule/mass, linear opacities, reticular opacities, and pleural effusion by 3 readers in two separate sessions. Sensitivity, specificity, accuracy, and their 95% confidence intervals were calculated for each reader and setting and tested by the McNemar test. Inter-observer agreement was tested by Cohen's kappa and its 95% CI. RESULTS: Overall, s-CXR sensitivity ranged 45-67% for consolidation, 12-28% for nodule/mass, 17-33% for linear opacities, 2-61% for reticular opacities, and 33-58% for pleural effusion; specificity 65-83%, 83-94%, 94-98%, 93-100%, and 79-86%; accuracy 66-68%, 74-79%, 89-91%, 61-65%, and 68-72%, respectively. Kappa values ranged 0.38-0.50, 0.05-0.25, -0.05-0.11, -0.01-0.15, and 0.40-0.66 for consolidation, nodule/mass, linear opacities, reticular opacities, and pleural effusion, respectively. CONCLUSION: S-CXR images, reconstructed with the AIP technique, can be compared with conventional images in clinical practice and for educational purposes.
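An average intensity projection is simply the mean of the CT volume along the anteroposterior axis, which collapses it into a CXR-like image. A numpy sketch on a synthetic volume; the shape and HU values are illustrative:

```python
import numpy as np

def synthetic_cxr(ct_volume, axis=1):
    """Average intensity projection (AIP): mean attenuation along the
    chosen (here anteroposterior) axis of a CT volume."""
    return ct_volume.mean(axis=axis)

rng = np.random.default_rng(3)
# Synthetic lung-like volume, axes ordered (z, y=AP, x)
ct = rng.normal(-700.0, 50.0, size=(64, 32, 64))
s_cxr = synthetic_cxr(ct)   # 64 x 64 projection image
```

Averaging along the projection axis also suppresses per-voxel noise, which is one reason AIP images look smoother than any single slice.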


Subject(s)
Radiography, Thoracic , Sensitivity and Specificity , Tomography, X-Ray Computed , Humans , Male , Female , Tomography, X-Ray Computed/methods , Middle Aged , Retrospective Studies , Aged , Radiography, Thoracic/methods , Adult , Aged, 80 and over , Radiographic Image Interpretation, Computer-Assisted/methods , Pleural Diseases/diagnostic imaging , Reproducibility of Results , Observer Variation
14.
BMC Med Imaging ; 24(1): 120, 2024 May 24.
Article in English | MEDLINE | ID: mdl-38789925

ABSTRACT

BACKGROUND: Lung cancer is the second most common cancer worldwide, with over two million new cases per year. Early identification would allow healthcare practitioners to handle it more effectively. The advancement of computer-aided detection systems has significantly impacted clinical analysis and decision-making on human disease. Towards this, machine learning and deep learning techniques are successfully being applied. Due to several advantages, transfer learning has become popular for disease detection based on image data. METHODS: In this work, we build a novel transfer learning model (VER-Net) by stacking three different transfer learning models to detect lung cancer using lung CT scan images. The model is trained to map the CT scan images to four lung cancer classes. Various measures, such as image preprocessing, data augmentation, and hyperparameter tuning, are taken to improve the efficacy of VER-Net. All the models are trained and evaluated using multiclass classification of chest CT images. RESULTS: The experimental results confirm that VER-Net outperformed the eight other transfer learning models it was compared with. VER-Net scored 91%, 92%, 91%, and 91.3% when tested for accuracy, precision, recall, and F1-score, respectively. Compared to the state-of-the-art, VER-Net has better accuracy. CONCLUSION: VER-Net is not only effective for lung cancer detection but may also be useful for other diseases for which CT scan images are available.
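The accuracy, precision, recall, and F1 scores used to evaluate the model follow directly from confusion-matrix counts. A one-vs-rest numpy sketch on toy multiclass labels (not the study's predictions):

```python
import numpy as np

def prf(pred, truth, positive=1):
    """Accuracy (overall) plus precision, recall, and F1 for one class."""
    pred, truth = np.asarray(pred), np.asarray(truth)
    tp = np.sum((pred == positive) & (truth == positive))
    fp = np.sum((pred == positive) & (truth != positive))
    fn = np.sum((pred != positive) & (truth == positive))
    acc = np.mean(pred == truth)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return acc, precision, recall, f1

truth = [1, 1, 1, 0, 0, 2, 2, 1]   # toy three-class ground truth
pred = [1, 1, 0, 0, 0, 2, 1, 1]    # toy predictions
acc, p, r, f1 = prf(pred, truth)   # all 0.75 for this toy case
```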


Subject(s)
Lung Neoplasms , Tomography, X-Ray Computed , Humans , Lung Neoplasms/diagnostic imaging , Tomography, X-Ray Computed/methods , Machine Learning , Deep Learning , Radiographic Image Interpretation, Computer-Assisted/methods
15.
Comput Med Imaging Graph ; 115: 102397, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38735104

ABSTRACT

We address the problem of lung CT image registration, which underpins various diagnoses and treatments for lung diseases. The main crux of the problem is the large deformation that the lungs undergo during respiration. This physiological process imposes several challenges from a learning point of view. In this paper, we propose a novel training scheme, called stochastic decomposition, which enables deep networks to effectively learn such a difficult deformation field during lung CT image registration. The key idea is to stochastically decompose the deformation field, and supervise the registration with synthetic data that have the corresponding appearance discrepancy. The stochastic decomposition allows for revealing all possible decompositions of the deformation field. At the learning level, these decompositions can be seen as a prior that reduces the ill-posedness of the registration, thereby boosting performance. We demonstrate the effectiveness of our framework on lung CT data. We show, through extensive numerical and visual results, that our technique outperforms existing methods.


Subject(s)
Stochastic Processes , Tomography, X-Ray Computed , Tomography, X-Ray Computed/methods , Humans , Radiographic Image Interpretation, Computer-Assisted/methods , Lung/diagnostic imaging , Algorithms , Lung Diseases/diagnostic imaging , Lung Diseases/physiopathology
16.
Med Image Anal ; 95: 103185, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38718716

ABSTRACT

BACKGROUND & AIMS: Metabolic dysfunction-associated fatty liver disease (MAFLD) is highly prevalent and can lead to liver complications and comorbidities; non-invasive tests such as vibration-controlled transient elastography (VCTE) and invasive liver biopsies are used for diagnosis. The aim of the present study was to develop a new fully automated method for quantifying the percentage of fat in the liver based on a voxel analysis of computed tomography (CT) images, addressing previously unresolved diagnostic deficiencies in both contrast-enhanced (CE) and non-contrast-enhanced (NCE) assessments. METHODS: The liver and spleen were segmented using nnU-Net on CE- and NCE-CT images. Radiodensity values were obtained for both organs to define the key benchmarks for fatty liver assessment: liver mean, liver-to-spleen ratio, liver-spleen difference, and their average. VCTE was used for validation. A classification method was developed to detect suitable patients, ensuring maximum reproducibility across cohorts and highlighting subjects with other potential radiodensity-related diseases. RESULTS: The best accuracy was attained using the average of all proposed benchmarks, with the liver-to-spleen ratio being highly useful for CE and the liver-spleen difference for NCE. The proposed whole-organ automatic segmentation displayed superior potential compared to the typically used manual region-of-interest drawing, as it allows the percentage of fat in the liver to be obtained accurately, among other improvements. Atypical patients were successfully stratified through a function based on biochemical data. CONCLUSIONS: The developed method tackles the current drawbacks, including biopsy invasiveness and CT-related weaknesses such as lack of automation, dependency on contrast agent, no quantification of the percentage of fat in the liver, and limited information on region-to-organ affectation. 
We propose this tool as an alternative for individualized MAFLD evaluation through early detection of abnormal CT patterns based on radiodensity, while also screening out non-suitable patients to avoid unnecessary exposure to CT radiation. Furthermore, this work presents a surrogate aid for assessing fatty liver at a primary MAFLD assessment using elastography data.
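The radiodensity benchmarks named above (liver mean, liver-to-spleen ratio, liver-spleen difference, and their average) reduce to masked means over the segmented organs. A numpy sketch on a toy image; the masks and HU values are illustrative:

```python
import numpy as np

def fatty_liver_benchmarks(ct, liver_mask, spleen_mask):
    """Radiodensity benchmarks for fatty-liver assessment on CT."""
    liver = ct[liver_mask].mean()
    spleen = ct[spleen_mask].mean()
    marks = {
        "liver_mean": liver,
        "liver_to_spleen_ratio": liver / spleen,
        "liver_spleen_difference": liver - spleen,
    }
    # The abstract's fourth benchmark: the average of the other three
    marks["average"] = np.mean(list(marks.values()))
    return marks

# Toy 4x4 "scan": a fatty liver reads below the spleen in HU
ct = np.zeros((4, 4))
liver_mask = np.zeros((4, 4), dtype=bool); liver_mask[:2] = True
spleen_mask = ~liver_mask
ct[liver_mask] = 30.0
ct[spleen_mask] = 50.0
b = fatty_liver_benchmarks(ct, liver_mask, spleen_mask)
```

A ratio below 1 (or a negative difference) flags a liver darker than the spleen, the classic CT sign of hepatic steatosis.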


Subject(s)
Tomography, X-Ray Computed , Humans , Tomography, X-Ray Computed/methods , Reproducibility of Results , Male , Contrast Media , Middle Aged , Female , Radiographic Image Interpretation, Computer-Assisted/methods , Elasticity Imaging Techniques/methods , Aged , Fatty Liver/diagnostic imaging , Non-alcoholic Fatty Liver Disease/diagnostic imaging , Spleen/diagnostic imaging , Liver/diagnostic imaging , Adult
17.
Med Image Anal ; 95: 103194, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38749304

ABSTRACT

Real-time diagnosis of intracerebral hemorrhage after thrombectomy is crucial for follow-up treatment. However, this is difficult to achieve with standard single-energy CT (SECT) due to similar CT values of blood and contrast agents under a single energy spectrum. In contrast, dual-energy CT (DECT) scanners employ two different energy spectra, which allows for real-time differentiation between hemorrhage and contrast extravasation based on energy-related attenuation characteristics. Unfortunately, DECT scanners are not as widely used as SECT scanners due to their high costs. To address this dilemma, in this paper, we generate pseudo DECT images from a SECT image for real-time diagnosis of hemorrhage. More specifically, we propose a SECT-to-DECT Transformer-based Generative Adversarial Network (SDTGAN), which is a 3D transformer-based multi-task learning framework equipped with a shared attention mechanism. In this way, SDTGAN can be guided to focus more on high-density areas (crucial for hemorrhage diagnosis) during the generation. Meanwhile, the introduced multi-task learning strategy and the shared attention mechanism also enable SDTGAN to model dependencies between interconnected generation tasks, improving generation performance while significantly reducing model parameters and computational complexity. In the experiments, we approximate real SECT images using mixed 120kV images from DECT data to address the issue of not being able to obtain the true paired DECT and SECT data. Extensive experiments demonstrate that SDTGAN can generate DECT images better than state-of-the-art methods. The code of our implementation is available at https://github.com/jiang-cw/SDTGAN.
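The SECT approximation used in the experiments blends the two DECT energy images into a single mixed image. A numpy sketch of such a weighted blend; the blend weight and kV labels are illustrative assumptions, not the paper's exact mixing recipe:

```python
import numpy as np

def mixed_kv_image(low_kv, high_kv, w=0.5):
    """Approximate a single-energy (SECT) image as a weighted blend of the
    two DECT energy images (used when paired SECT/DECT data are unavailable)."""
    return w * low_kv + (1.0 - w) * high_kv

rng = np.random.default_rng(4)
low = rng.normal(60.0, 10.0, size=(32, 32))    # e.g. low-kV DECT image
high = rng.normal(40.0, 8.0, size=(32, 32))    # e.g. high-kV DECT image
mix = mixed_kv_image(low, high, w=0.4)         # pseudo single-energy image
```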


Subject(s)
Cerebral Hemorrhage , Tomography, X-Ray Computed , Cerebral Hemorrhage/diagnostic imaging , Humans , Tomography, X-Ray Computed/methods , Radiography, Dual-Energy Scanned Projection/methods , Radiographic Image Interpretation, Computer-Assisted/methods
18.
Med Image Anal ; 95: 103199, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38759258

ABSTRACT

Accurate diagnosis of pathological subtypes of lung cancer is of significant importance for follow-up treatment and prognosis management. In this paper, we propose the self-generating hybrid feature network (SGHF-Net) for accurately classifying lung cancer subtypes on computed tomography (CT) images. Inspired by studies showing that cross-scale associations exist between the image patterns of a case's CT images and its pathological images, we developed a pathological feature synthetic module (PFSM), which quantitatively maps cross-modality associations through deep neural networks to derive, from CT images, the "gold standard" information contained in the corresponding pathological images. Additionally, we designed a radiological feature extraction module (RFEM) to directly acquire CT image information and integrated it with the pathological priors under an effective feature fusion framework, enabling the classification model to generate more indicative, pathologically specific features and output more accurate predictions. The superiority of the proposed model lies in its ability to self-generate hybrid features containing multi-modality image information from a single-modality input. To evaluate the effectiveness, adaptability, and generalization ability of our model, we performed extensive experiments on a large-scale multi-center dataset (829 cases from three hospitals), comparing our model with a series of state-of-the-art (SOTA) classification models. The experimental results demonstrate the superiority of our model for lung cancer subtype classification, with significant improvements in accuracy (ACC), area under the curve (AUC), positive predictive value (PPV), and F1-score.
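The core idea above is fusing CT-derived (RFEM) features with synthesized pathological (PFSM) features into one hybrid representation. A minimal sketch of fusion by concatenation (the function name, toy feature vectors, and concatenation itself are illustrative assumptions; the paper's fusion framework is more elaborate):

```python
def fuse_features(radiological, pathological):
    """Concatenate a CT-derived feature vector with a synthesized
    pathological feature vector into a single hybrid feature vector
    that a downstream classifier head would consume."""
    return list(radiological) + list(pathological)

rfem_out = [0.2, 0.7]   # hypothetical RFEM (CT) features
pfsm_out = [0.9, 0.1]   # hypothetical PFSM (synthesized pathology) features
hybrid = fuse_features(rfem_out, pfsm_out)
print(hybrid)  # [0.2, 0.7, 0.9, 0.1]
```

The point of the design is that both halves of the hybrid vector are produced from a single CT input at inference time, so no pathological image is needed.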


Subject(s)
Lung Neoplasms , Tomography, X-Ray Computed , Humans , Lung Neoplasms/diagnostic imaging , Lung Neoplasms/classification , Tomography, X-Ray Computed/methods , Neural Networks, Computer , Radiographic Image Interpretation, Computer-Assisted/methods , Algorithms
19.
Comput Biol Med ; 176: 108554, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38744013

ABSTRACT

Kidney tumors are among the most common diseases worldwide, and the risk of kidney disease is increased by factors such as consumption of ready-made food and unhealthy habits. Early diagnosis of kidney tumors is essential for effective treatment, reducing side effects, and reducing the number of deaths. With the development of computer-aided diagnostic methods, the need for accurate renal tumor classification is also increasing. Because traditional methods based on manual detection are time-consuming, tedious, and costly, deep learning (DL) methods allow high-accuracy kidney tumor detection (KTD) to be performed faster and at lower cost. Among the current challenges of artificial intelligence-assisted KTD, obtaining more precise diagnostic information and classifying with high accuracy are vital for clinical decision-making and current treatment. This motivates us to propose a more effective DL model that can assist specialist physicians in the diagnosis of kidney tumors, alleviating the workload of radiologists and preventing clinical diagnostic errors that may arise from the complex structure of the kidney. Training such methods typically requires a large amount of data; although various studies have attempted to reduce the amount of data needed with feature selection techniques, these techniques provide little improvement in classification accuracy. In this paper, a masked autoencoder (MAE) is proposed for KTD that can produce effective results on datasets with limited samples and can be pre-trained and then directly fine-tuned. Self-supervised learning (SSL) is achieved through self-distillation (SD), in which masked patches are reintroduced into the loss calculation. In SSLSD-KTD, the SD loss is computed on the latent representations of the encoder and decoder outputs: the encoder attends locally, while the decoder contributes global attention to the loss. The SSLSD-KTD method reached 98.04 % classification accuracy on the KAUH-kidney dataset, comprising 8,400 samples, and 82.14 % on the CT-kidney dataset, containing 840 samples. By adding external information to SSLSD-KTD through transfer learning, accuracies of 99.82 % and 95.24 % were obtained on the same datasets. Experimental results show that SSLSD-KTD can effectively extract kidney tumor features from limited data and can aid, or even serve as an alternative to, radiologists in diagnostic decision-making.
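Masked-autoencoder pre-training, as described above, hides a random subset of image patches from the encoder and reintroduces them when computing the loss. A minimal sketch of the patch-masking step (the mask ratio, seed, and function name are illustrative assumptions; the paper does not specify them at this level of detail):

```python
import random

def mask_patches(num_patches, mask_ratio, seed=0):
    """Randomly select patch indices to hide from the encoder, as in
    masked-autoencoder pre-training. Returns (masked, visible) index
    lists; the decoder later reconstructs the masked patches."""
    rng = random.Random(seed)
    n_masked = int(round(num_patches * mask_ratio))
    masked = set(rng.sample(range(num_patches), n_masked))
    visible = [i for i in range(num_patches) if i not in masked]
    return sorted(masked), visible

# A 4x4 patch grid (16 patches) with a typical high mask ratio
masked, visible = mask_patches(num_patches=16, mask_ratio=0.75, seed=0)
print(len(masked), len(visible))  # 12 4
```

A high mask ratio forces the encoder to learn strong features from few visible patches, which is what makes the approach effective on small datasets.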


Subject(s)
Kidney Neoplasms , Tomography, X-Ray Computed , Humans , Kidney Neoplasms/diagnostic imaging , Kidney Neoplasms/classification , Tomography, X-Ray Computed/methods , Supervised Machine Learning , Deep Learning , Kidney/diagnostic imaging , Male , Female , Radiographic Image Interpretation, Computer-Assisted/methods
20.
Comput Methods Programs Biomed ; 251: 108211, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38744058

ABSTRACT

Mammography screening is instrumental in the early detection and diagnosis of breast cancer by identifying masses in mammograms. With the rapid development of deep learning, numerous deep learning-based object detection algorithms have been explored for mass detection. However, these methods often yield a high false-positive rate per image (FPPI) while achieving a high true positive rate (TPR). To maintain a high TPR while ensuring a lower FPPI, building on our previous work we improved the Probabilistic Anchor Assignment (PAA) algorithm to enhance its ability to detect mammographic characteristics, considering three dimensions: the backbone network, the feature fusion module, and the dense detection heads. The final experiment showed the effectiveness of the proposed method: the improved PAA algorithm achieved TPR/FPPI values of 0.96/0.56 on the INbreast dataset. Compared with other methods, ours is distinguished by its effectiveness in addressing the imbalance between positive and negative classes in single-lesion detection.
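The two metrics reported above trade off against each other: TPR counts detected lesions, while FPPI counts spurious detections per image. A minimal sketch of how they are computed from detection counts (the counts are hypothetical, chosen only to match the reported 0.96/0.56; this is not the paper's evaluation code):

```python
def tpr_fppi(true_positives, total_lesions, false_positives, num_images):
    """Compute the true positive rate (lesion-level sensitivity) and
    the number of false positives per image from raw detection counts."""
    tpr = true_positives / total_lesions
    fppi = false_positives / num_images
    return tpr, fppi

# Hypothetical counts roughly consistent with the reported 0.96 / 0.56
tpr, fppi = tpr_fppi(true_positives=96, total_lesions=100,
                     false_positives=56, num_images=100)
print(tpr, fppi)  # 0.96 0.56
```

Sweeping the detector's confidence threshold moves both numbers at once, which is why methods are compared at matched operating points.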


Subject(s)
Algorithms , Breast Neoplasms , Mammography , Humans , Mammography/methods , Breast Neoplasms/diagnostic imaging , Female , Deep Learning , Early Detection of Cancer/methods , False Positive Reactions , Probability , Radiographic Image Interpretation, Computer-Assisted/methods , Breast/diagnostic imaging , Databases, Factual