1.
Diagnostics (Basel) ; 13(16)2023 Aug 16.
Article in English | MEDLINE | ID: mdl-37627953

ABSTRACT

Breast density is an important risk factor for breast cancer development; however, imager inconsistency in density reporting can lead to patient and clinician confusion. A deep learning (DL) model for mammographic density grading was examined in a retrospective multi-reader multi-case study consisting of 928 image pairs and assessed for impact on inter- and intra-reader variability and reading time. Seven readers assigned density categories to the images, then re-read the test set aided by the model after a 4-week washout. To measure intra-reader agreement, 100 image pairs were blindly double read in both sessions. Linearly weighted Cohen's kappa (κ) and Student's t-test were used to assess the model and reader performance. The model achieved a κ of 0.87 (95% CI: 0.84, 0.89) for four-class density assessment and a κ of 0.91 (95% CI: 0.88, 0.93) for binary non-dense/dense assessment. Superiority tests showed significant reduction in inter-reader variability (κ improved from 0.70 to 0.88, p ≤ 0.001) and intra-reader variability (κ improved from 0.83 to 0.95, p ≤ 0.01) for four-class density, and significant reduction in inter-reader variability (κ improved from 0.77 to 0.96, p ≤ 0.001) and intra-reader variability (κ improved from 0.89 to 0.97, p ≤ 0.01) for binary non-dense/dense assessment when aided by DL. Mean reading time per image pair also decreased by 30% (0.86 s; 95% CI: 0.01, 1.71), with six of seven readers showing reduced reading times.
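The agreement statistic above can be reproduced in a few lines of pure Python. This is a minimal sketch of linearly weighted Cohen's kappa; the function name and toy inputs are illustrative, not taken from the study:

```python
def linear_weighted_kappa(r1, r2, n_cat):
    """Linearly weighted Cohen's kappa for two raters over n_cat ordinal categories."""
    n = len(r1)
    obs = [[0.0] * n_cat for _ in range(n_cat)]
    for a, b in zip(r1, r2):
        obs[a][b] += 1.0 / n                  # observed joint distribution
    row = [sum(obs[i]) for i in range(n_cat)]
    col = [sum(obs[i][j] for i in range(n_cat)) for j in range(n_cat)]
    num = den = 0.0
    for i in range(n_cat):
        for j in range(n_cat):
            w = abs(i - j) / (n_cat - 1)      # linear disagreement weight
            num += w * obs[i][j]              # observed weighted disagreement
            den += w * row[i] * col[j]        # chance-expected weighted disagreement
    return 1.0 - num / den
```

For real use, `sklearn.metrics.cohen_kappa_score(r1, r2, weights="linear")` computes the same quantity.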

2.
Diagnostics (Basel) ; 13(13)2023 Jun 21.
Article in English | MEDLINE | ID: mdl-37443526

ABSTRACT

Artificial intelligence (AI) applications in mammography have gained significant popular attention; however, AI has the potential to revolutionize aspects of breast imaging well beyond simple lesion detection. It can enhance risk assessment by combining conventional risk factors with imaging, and improve lesion detection through comparison with prior studies and consideration of symmetry. It also holds promise in ultrasound analysis and automated whole-breast ultrasound, areas marked by unique challenges. AI's utility further extends to administrative tasks such as MQSA compliance, scheduling, and protocoling, which can reduce radiologist workload. However, adoption in breast imaging faces limitations in data quality and standardization, generalizability, performance benchmarking, and integration into clinical workflows. Developing methods for radiologists to interpret AI decisions, and understanding patient perspectives to build trust in AI results, will be key future endeavors, with the ultimate aim of fostering more efficient radiology practices and better patient care.

5.
Ann Am Thorac Soc ; 19(12): 1993-2002, 2022 12.
Article in English | MEDLINE | ID: mdl-35830591

ABSTRACT

Rationale: Chronic obstructive pulmonary disease (COPD) is a heterogeneous syndrome with phenotypic manifestations that tend to be distributed along a continuum. Unsupervised machine learning based on a broad selection of imaging and clinical phenotypes may be used to identify primary variables that define disease axes and stratify patients with COPD. Objectives: To identify primary variables driving COPD heterogeneity using principal component analysis, define disease axes, and assess the prognostic value of these axes across three outcomes: progression, exacerbation, and mortality. Methods: We included 7,331 smokers between 39 and 85 years of age (40.3% Black, 45.8% female; mean 44.6 pack-years) from the COPDGene (Genetic Epidemiology of COPD) phase I cohort (2008-2011) in our analysis. Out of a total of 916 phenotypes, 147 continuous clinical, spirometric, and computed tomography (CT) features were selected. For each principal component (PC), we computed a PC score based on feature weights. We used PC score distributions to define disease axes along which we divided the patients into quartiles. To assess the prognostic value of these axes, we applied logistic regression analyses to estimate 5-year (n = 4,159) and 10-year (n = 1,487) odds of progression. Cox regression and Kaplan-Meier analyses were performed to estimate 5-year and 10-year risk of exacerbation (n = 6,532) and all-cause mortality (n = 7,331). Results: The first PC, accounting for 43.7% of variance, was defined by CT measures of air trapping and emphysema. The second PC, accounting for 13.7% of variance, was defined by spirometric and CT measures of vital capacity and lung volume. The third PC, accounting for 7.9% of the variance, was defined by CT measures of lung mass, airway thickening, and body habitus.
Stratification of patients across each disease axis revealed up to 3.2-fold (95% confidence interval [CI] 2.4, 4.3) greater odds of 5-year progression, 5.4-fold (95% CI 4.6, 6.3) greater risk of 5-year exacerbation, and 5.0-fold (95% CI 4.2, 6.0) greater risk of 10-year mortality between the highest and lowest quartiles. Conclusions: Unsupervised learning analysis of the COPDGene cohort reveals that CT measurements may bolster patient stratification along the continuum of COPD phenotypes. Each of the disease axes also individually demonstrates prognostic potential, predicting future decline in forced expiratory volume in 1 second, exacerbation, and mortality.
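The top-vs-bottom-quartile comparison above can be illustrated with an unadjusted 2×2 odds ratio. The study itself fitted logistic regression models; this sketch, with hypothetical scores and outcomes, shows only the raw quartile calculation:

```python
def quartile_odds_ratio(scores, outcomes):
    """Odds of a binary outcome in the top vs bottom quartile of a disease-axis score."""
    pairs = sorted(zip(scores, outcomes))      # order patients by PC score
    q = len(pairs) // 4
    bottom = [y for _, y in pairs[:q]]         # lowest quartile outcomes
    top = [y for _, y in pairs[-q:]]           # highest quartile outcomes
    a, b = sum(top), len(top) - sum(top)       # events / non-events, top quartile
    c, d = sum(bottom), len(bottom) - sum(bottom)
    return (a / b) / (c / d)                   # unadjusted odds ratio
```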


Subject(s)
Pulmonary Disease, Chronic Obstructive , Pulmonary Emphysema , Female , Male , Humans , Unsupervised Machine Learning , Forced Expiratory Volume , Tomography, X-Ray Computed/methods , Disease Progression
6.
Radiol Artif Intell ; 4(2): e210160, 2022 Mar.
Article in English | MEDLINE | ID: mdl-35391767

ABSTRACT

Quantitative imaging measurements can be facilitated by artificial intelligence (AI) algorithms, but how they might impact decision-making and be perceived by radiologists remains uncertain. After creation of a dedicated inspiratory-expiratory CT examination and concurrent deployment of a quantitative AI algorithm for assessing air trapping, five cardiothoracic radiologists retrospectively evaluated severity of air trapping on 17 examinations. Air trapping severity of each lobe was evaluated in three stages: qualitatively (visually); semiquantitatively, allowing manual region-of-interest measurements; and quantitatively, using results from an AI algorithm. Readers were surveyed on each case for their perceptions of the AI algorithm. The algorithm improved interreader agreement (intraclass correlation coefficients: visual, 0.28; semiquantitative, 0.40; quantitative, 0.84; P < .001) and improved correlation with pulmonary function testing (forced expiratory volume in 1 second-to-forced vital capacity ratio) (visual r = -0.26, semiquantitative r = -0.32, quantitative r = -0.44). Readers perceived moderate agreement with the AI algorithm (Likert scale average, 3.7 of 5), a mild impact on their final assessment (average, 2.6), and a neutral perception of overall utility (average, 3.5). Though the AI algorithm objectively improved interreader consistency and correlation with pulmonary function testing, individual readers did not immediately perceive this benefit, revealing a potential barrier to clinical adoption. Keywords: Technology Assessment, Quantification © RSNA, 2021.
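The interreader agreement metric above is the intraclass correlation coefficient. As a minimal illustration, here is a one-way random-effects ICC(1,1) in pure Python; the abstract does not state which ICC form was used, so this is an assumption for demonstration:

```python
def icc_oneway(ratings):
    """One-way random-effects ICC(1,1) for a table ratings[target][rater]."""
    n = len(ratings)                       # rated targets (e.g. lobes)
    k = len(ratings[0])                    # raters per target
    grand = sum(sum(row) for row in ratings) / (n * k)
    means = [sum(row) / k for row in ratings]
    # between-target and within-target mean squares
    msb = k * sum((m - grand) ** 2 for m in means) / (n - 1)
    msw = sum((x - m) ** 2 for row, m in zip(ratings, means) for x in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```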

7.
Radiol Artif Intell ; 4(1): e210211, 2022 Jan.
Article in English | MEDLINE | ID: mdl-35146437

ABSTRACT

PURPOSE: To develop a convolutional neural network (CNN)-based deformable lung registration algorithm to reduce computation time and assess its potential for lobar air trapping quantification. MATERIALS AND METHODS: In this retrospective study, a CNN algorithm was developed to perform deformable registration of lung CT (LungReg) using data on 9118 patients from the COPDGene Study (data collected between 2007 and 2012). Loss function constraints included cross-correlation, displacement field regularization, lobar segmentation overlap, and the Jacobian determinant. LungReg was compared with a standard diffeomorphic registration (SyN) for lobar Dice overlap, percentage voxels with nonpositive Jacobian determinants, and inference runtime using paired t tests. Landmark colocalization error (LCE) across 10 patients was compared using a random effects model. Agreement between LungReg and SyN air trapping measurements was assessed using intraclass correlation coefficient. The ability of LungReg versus SyN emphysema and air trapping measurements to predict Global Initiative for Chronic Obstructive Lung Disease (GOLD) stages was compared using area under the receiver operating characteristic curves. RESULTS: Average performance of LungReg versus SyN showed lobar Dice overlap score of 0.91-0.97 versus 0.89-0.95, respectively (P < .001); percentage voxels with nonpositive Jacobian determinant of 0.04 versus 0.10, respectively (P < .001); inference run time of 0.99 second (graphics processing unit) and 2.27 seconds (central processing unit) versus 418.46 seconds (central processing unit) (P < .001); and LCE of 7.21 mm versus 6.93 mm (P < .001). LungReg and SyN whole-lung and lobar air trapping measurements achieved excellent agreement (intraclass correlation coefficients > 0.98). LungReg versus SyN area under the receiver operating characteristic curves for predicting GOLD stage were not statistically different (range, 0.88-0.95 vs 0.88-0.95, respectively; P = .31-.95). 
CONCLUSION: CNN-based deformable lung registration is accurate and fully automated, with runtime feasible for clinical lobar air trapping quantification, and has potential to improve diagnosis of small airway diseases. Keywords: Air Trapping, Convolutional Neural Network, Deformable Registration, Small Airway Disease, CT, Lung, Semisupervised Learning, Unsupervised Learning. Supplemental material is available for this article. © RSNA, 2021. An earlier incorrect version of this article appeared online. This article was corrected on December 22, 2021.
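Lobar Dice overlap, the registration accuracy metric reported above, reduces to a short computation over binary masks. A minimal sketch; the flattened 0/1 mask representation is illustrative:

```python
def dice(mask_a, mask_b):
    """Dice overlap between two binary masks given as flattened 0/1 sequences."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))   # voxels in both masks
    return 2.0 * inter / (sum(mask_a) + sum(mask_b))
```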

8.
Radiol Artif Intell ; 4(1): e219003, 2022 Jan.
Article in English | MEDLINE | ID: mdl-35157746

ABSTRACT

[This corrects the article DOI: 10.1148/ryai.2021210211.].

9.
J Breast Imaging ; 4(5): 488-495, 2022 Oct 10.
Article in English | MEDLINE | ID: mdl-38416951

ABSTRACT

OBJECTIVE: Artificial intelligence (AI)-based triage algorithms may improve cancer detection and expedite radiologist workflow. To this end, the performance of a commercial AI-based triage algorithm on screening mammograms was evaluated across breast densities and lesion types. METHODS: This retrospective, IRB-exempt, multicenter, multivendor study examined 1255 screening 4-view mammograms (400 positive and 855 negative studies). Images were anonymized by providing institutions and analyzed by a commercially available AI algorithm (cmTriage, CureMetrix, La Jolla, CA) that performed retrospective triage at the study level by flagging exams as "suspicious" or not. Sensitivities and specificities with confidence intervals were derived from receiver operating characteristic (ROC) analysis. RESULTS: The algorithm demonstrated an area under the ROC curve (AUC) of 0.95 (95% CI: 0.94-0.96) for case identification. AUC held across densities (0.95) and lesion types (masses: 0.94 [95% CI: 0.92-0.96]; microcalcifications: 0.97 [95% CI: 0.96-0.99]). The algorithm has a default sensitivity of 93% (95% CI: 90.5%-95.6%) with specificity of 76.3% (95% CI: 73.4%-79.2%). To evaluate real-world performance, a sensitivity of 86.9% (95% CI: 83.6%-90.2%) was tested, as observed for practicing radiologists in the Breast Cancer Surveillance Consortium (BCSC) study. The resulting specificity was 88.5% (95% CI: 86.4%-90.7%), similar to the BCSC specificity of 88.9%, indicating performance comparable to real-world results. CONCLUSION: When tested for lesion detection, AI-based triage software can perform at the level of practicing radiologists. Drawing attention to suspicious exams may improve reader specificity and help streamline radiologist workflow, enabling faster turnaround times and improving care.
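Sensitivity and specificity at a chosen operating point, as reported above, can be sketched as follows. The scores, labels, and threshold are hypothetical, not cmTriage outputs:

```python
def operating_point(scores, labels, threshold):
    """Sensitivity and specificity when flagging exams with score >= threshold."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    tn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 0)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    return tp / (tp + fn), tn / (tn + fp)
```

Sweeping the threshold over all observed scores traces the full ROC curve from which the AUC is computed.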


Subject(s)
Artificial Intelligence , Mammography , Triage , Algorithms , Mammography/methods , Retrospective Studies , Triage/methods
10.
Radiol Cardiothorac Imaging ; 3(2): e200477, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33969307

ABSTRACT

PURPOSE: To develop a deep learning-based algorithm to stage the severity of chronic obstructive pulmonary disease (COPD) through quantification of emphysema and air trapping on CT images and to assess the ability of the proposed stages to prognosticate 5-year progression and mortality. MATERIALS AND METHODS: In this retrospective study, an algorithm using co-registration and lung segmentation was developed in-house to automate quantification of emphysema and air trapping from inspiratory and expiratory CT images. The algorithm was then tested in a separate group of 8951 patients from the COPD Genetic Epidemiology study (date range, 2007-2017). With measurements of emphysema and air trapping, bivariable thresholds were determined to define CT stages of severity (mild, moderate, severe, and very severe) and were evaluated for their ability to prognosticate disease progression and mortality using logistic regression and Cox regression. RESULTS: On the basis of CT stages, the odds of disease progression were greatest among patients with very severe disease (odds ratio [OR], 2.67; 95% CI: 2.02, 3.53; P < .001) and were elevated in patients with moderate disease (OR, 1.50; 95% CI: 1.22, 1.84; P = .001). The hazard ratio of mortality for very severe disease at CT was 2.23 (95% CI: 1.93, 2.58; P < .001). When combined with Global Initiative for Chronic Obstructive Lung Disease (GOLD) staging, patients with GOLD stage 2 disease had the greatest odds of disease progression when the CT stage was severe (OR, 4.48; 95% CI: 3.18, 6.31; P < .001) or very severe (OR, 4.72; 95% CI: 3.13, 7.13; P < .001). CONCLUSION: Automated CT algorithms can facilitate staging of COPD severity, have diagnostic performance comparable with that of spirometric GOLD staging, and provide further prognostic value when used in conjunction with GOLD staging. Supplemental material is available for this article. © RSNA, 2021. See also commentary by Kalra and Ebrahimian in this issue.
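Bivariable threshold staging of the kind described above can be sketched as a simple lookup. The cutoffs below are placeholders chosen for illustration only, not the thresholds fitted in the study:

```python
def ct_stage(emphysema_pct, air_trapping_pct):
    """Map quantitative emphysema / air-trapping percentages to a severity stage.

    Thresholds are hypothetical placeholders, not the study's fitted values.
    """
    if emphysema_pct < 5 and air_trapping_pct < 20:
        return "mild"
    if emphysema_pct < 10 and air_trapping_pct < 35:
        return "moderate"
    if emphysema_pct < 20 and air_trapping_pct < 50:
        return "severe"
    return "very severe"
```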

11.
J Magn Reson Imaging ; 53(6): 1841-1850, 2021 06.
Article in English | MEDLINE | ID: mdl-33354852

ABSTRACT

Stereotactic radiosurgery (SRS) is used to treat cerebral arteriovenous malformations (AVMs). However, early evaluation of efficacy is difficult as structural magnetic resonance imaging (MRI)/magnetic resonance angiography (MRA) often does not demonstrate appreciable changes within the first 6 months. The aim of this study was to evaluate the use of four-dimensional (4D) flow MRI to quantify hemodynamic changes after SRS as early as 2 months. This was a retrospective observational study, which included 14 patients with both pre-SRS and post-SRS imaging obtained at multiple time points from 1 to 27 months after SRS. A 3-T MRI scanner was used to obtain T2 single-shot fast spin echo, time-of-flight MRA, and postcontrast 4D flow with three-dimensional velocity encoding between 150 and 200 cm/s. Post hoc two-dimensional cross-sectional flow was measured for the dominant feeding artery, the draining vein, and the corresponding contralateral artery as a control. Measurements were performed by two independent observers, and reproducibility was assessed. Wilcoxon signed-rank tests were used to compare differences in flow, circumference, and pulsatility between the feeding artery and the contralateral artery both before and after SRS; and differences in nidus size and flow and circumference of the feeding artery and draining vein before and after SRS. Arterial flow (L/min) decreased in the primary feeding artery (mean: 0.1 ± 0.07 vs. 0.3 ± 0.2; p < 0.05) and normalized in comparison to the contralateral artery (mean: 0.1 ± 0.07 vs. 0.1 ± 0.07; p = 0.068). Flow decreased in the draining vein (mean: 0.1 ± 0.2 vs. 0.2 ± 0.2; p < 0.05), and the circumference of the draining vein also decreased (mean: 16.1 ± 8.3 vs. 15.7 ± 6.7; p < 0.05). AVM volume decreased after SRS (mean: 45.3 ± 84.8 vs. 38.1 ± 78.7; p < 0.05). However, circumference (mm) of the primary feeding artery remained similar after SRS (mean: 15.7 ± 2.7 vs. 16.1 ± 3.1; p = 0.600).
4D flow may be able to demonstrate early hemodynamic changes in AVMs treated with radiosurgery, and these changes appear to be more pronounced and occur earlier than the structural changes on standard MRI/MRA. Level of Evidence: 4 Technical Efficacy Stage: 1.
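One of the hemodynamic quantities compared above is pulsatility. The conventional pulsatility index is (Vmax − Vmin) / Vmean over one cardiac cycle; a minimal sketch, with illustrative velocity samples (the abstract does not specify its exact pulsatility definition):

```python
def pulsatility_index(velocities):
    """Pulsatility index over one cardiac cycle: (Vmax - Vmin) / Vmean."""
    vmean = sum(velocities) / len(velocities)
    return (max(velocities) - min(velocities)) / vmean
```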


Subject(s)
Intracranial Arteriovenous Malformations , Radiosurgery , Cross-Sectional Studies , Hemodynamics , Humans , Intracranial Arteriovenous Malformations/diagnostic imaging , Intracranial Arteriovenous Malformations/surgery , Magnetic Resonance Imaging , Reproducibility of Results , Retrospective Studies , Treatment Outcome
12.
Radiol Artif Intell ; 2(4): e190064, 2020 Jul 08.
Article in English | MEDLINE | ID: mdl-32797119

ABSTRACT

PURPOSE: To evaluate the performance of a deep learning (DL) algorithm for clinical measurement of right and left ventricular volume and function across cardiac MR images obtained for a range of clinical indications and pathologies. MATERIALS AND METHODS: A retrospective, Health Insurance Portability and Accountability Act-compliant study was conducted using the first 200 noncongenital clinical cardiac MRI examinations from June 2015 to June 2017 for which volumetry was available. Images were analyzed using commercially available software for automated DL-based and manual contouring of biventricular volumes. Fully automated measurements were compared with manual measurements using Pearson correlations, relative volume errors, and Bland-Altman analyses. Manual, automated, and expert revised contours for 50 MR images were examined by comparing regional Dice coefficients at the base, midventricle, and apex to further analyze the contour quality. RESULTS: Fully automated and manual left ventricular volumes were strongly correlated for end-systolic volume (ESV: Pearson r = 0.99, P < .001), end-diastolic volume (EDV: r = 0.97, P < .001), and ejection fraction (EF: r = 0.94, P < .001). Right ventricular measurements were also correlated for ESV (r = 0.93, P < .001), EDV (r = 0.92, P < .001), and EF (r = 0.73, P < .001). Visual inspection of segmentation quality showed most errors (73%) occurred at the cardiac base. Mean Dice coefficients between manual, automated, and expert revised contours ranged from 0.92 to 0.95, with greatest variance at the base and apex. CONCLUSION: Fully automated ventricular segmentation by the tested algorithm provides contours and ventricular volumes that could be used to aid expert segmentation, but can benefit from expert supervision, particularly to resolve errors at the basal and apical slices. Supplemental material is available for this article. © RSNA, 2020.
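The volume agreement analysis above relies on Pearson correlation between paired manual and automated measurements; a minimal pure-Python sketch with toy data, not study measurements:

```python
import math

def pearson_r(x, y):
    """Pearson correlation between paired measurements (e.g. manual vs automated EDV)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```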

13.
Clin Imaging ; 68: 121-123, 2020 Dec.
Article in English | MEDLINE | ID: mdl-32592972

ABSTRACT

Fat embolism in the subarachnoid space has a unique pathophysiology and clinical picture when compared to fat embolism syndrome. Lipid deposits in the subarachnoid space-most commonly the sequela of dermoid rupture in the neuraxis-can cause an inflammatory reaction leading to irritation of nearby neurovascular structures. Herein, we report the only case in the United States, to our knowledge, of a patient diagnosed with subarachnoid fat emboli secondary to sacral fracture who initially presented with a normal head CT and subsequently developed visual changes.


Subject(s)
Embolism, Fat , Spinal Fractures , Embolism, Fat/diagnostic imaging , Embolism, Fat/etiology , Humans , Magnetic Resonance Imaging , Subarachnoid Space , Tomography, X-Ray Computed
14.
Acad Radiol ; 26(12): 1695-1706, 2019 12.
Article in English | MEDLINE | ID: mdl-31405724

ABSTRACT

RATIONALE AND OBJECTIVES: The automated segmentation of organs and tissues throughout the body using computed tomography and magnetic resonance imaging has been rapidly increasing. Research into many medical conditions has benefited greatly from these approaches by allowing the development of more rapid and reproducible quantitative imaging markers. These markers have been used to help diagnose disease, determine prognosis, select patients for therapy, and follow responses to therapy. Because some of these tools are now transitioning from research environments to clinical practice, it is important for radiologists to become familiar with various methods used for automated segmentation. MATERIALS AND METHODS: The Radiology Research Alliance of the Association of University Radiologists convened an Automated Segmentation Task Force to conduct a systematic review of the peer-reviewed literature on this topic. RESULTS: The systematic review presented here includes 408 studies and discusses various approaches to automated segmentation using computed tomography and magnetic resonance imaging for neurologic, thoracic, abdominal, musculoskeletal, and breast imaging applications. CONCLUSION: These insights should help prepare radiologists to better evaluate automated segmentation tools and apply them not only to research, but eventually to clinical practice.


Subject(s)
Algorithms , Magnetic Resonance Imaging/methods , Tomography, X-Ray Computed/methods , Automation , Humans
15.
J Thorac Imaging ; 34(3): 192-201, 2019 May.
Article in English | MEDLINE | ID: mdl-31009397

ABSTRACT

Advances in technology have always had the potential and opportunity to shape the practice of medicine, and in no medical specialty has technology been more rapidly embraced and adopted than radiology. Machine learning and deep neural networks promise to transform the practice of medicine, and, in particular, the practice of diagnostic radiology. These technologies are evolving at a rapid pace due to innovations in computational hardware and novel neural network architectures. Several cutting-edge postprocessing analysis applications are actively being developed in the fields of thoracic and cardiovascular imaging, including applications for lesion detection and characterization, lung parenchymal characterization, coronary artery assessment, cardiac volumetry and function, and anatomic localization. Cardiothoracic and cardiovascular imaging lies at the technological forefront of radiology due to a confluence of technical advances. Enhanced equipment has enabled computed tomography and magnetic resonance imaging scanners that can safely capture images that freeze the motion of the heart to exquisitely delineate fine anatomic structures. Computing hardware developments have enabled an explosion in computational capabilities and in data storage. Progress in software and fluid mechanical models is enabling complex 3D and 4D reconstructions to not only visualize and assess the dynamic motion of the heart, but also quantify its blood flow and hemodynamics. And now, innovations in machine learning, particularly in the form of deep neural networks, are enabling us to leverage the increasingly massive data repositories that are prevalent in the field. Here, we discuss developments in machine learning techniques and deep neural networks to highlight their likely role in future radiologic practice, both in and outside of image interpretation and analysis. 
We discuss the concepts of validation, generalizability, and clinical utility, as they pertain to this and other new technologies, and we reflect upon the opportunities and challenges of bringing these into daily use.


Subject(s)
Cardiovascular Diseases/diagnostic imaging , Diagnostic Imaging/methods , Image Processing, Computer-Assisted/methods , Machine Learning , Thoracic Diseases/diagnostic imaging , Cardiovascular System/diagnostic imaging , Humans , Neural Networks, Computer , Thorax/diagnostic imaging
16.
Magn Reson Med ; 81(5): 3283-3291, 2019 05.
Article in English | MEDLINE | ID: mdl-30714197

ABSTRACT

PURPOSE: Delayed enhancement imaging is an essential component of cardiac MRI, used widely for the evaluation of myocardial scar and viability. It requires selection of an optimal inversion time (TI), or null point (TINP), to suppress the background myocardial signal. The purpose of this study was to assess the feasibility of automated selection of TINP using a convolutional neural network (CNN). We hypothesized that a CNN may use spatial and temporal imaging characteristics from an inversion-recovery scout to select TINP without the aid of a human observer. METHODS: We retrospectively collected 425 clinically acquired cardiac MRI exams performed at 1.5 T that included inversion-recovery scout acquisitions. We developed a VGG19 classifier ensembled with long short-term memory (LSTM) to identify TINP. We compared the performance of the ensemble CNN in predicting TINP against ground truth, defined as the expert physician annotation of the optimal TI, using linear regression analysis. In a backtrack approach, saliency maps were generated to interpret the classification outcome and to increase the model's transparency. RESULTS: Prediction of TINP from our ensemble VGG19-LSTM closely matched expert annotation (ρ = 0.88). Ninety-four percent of the predicted TINP were within ±36 ms, and 83% were at or after expert TI selection. CONCLUSION: In this study, we show that a CNN is capable of automated prediction of myocardial TI from an inversion-recovery experiment. Merging the spatial and temporal characteristics of the VGG19 and LSTM structures appears to be sufficient to predict myocardial TI from a TI scout.
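For context, the classical manual rule that the CNN automates is to pick the scout TI at which the myocardial signal magnitude is closest to zero. A minimal sketch; the TI values and signal samples are illustrative:

```python
def pick_null_point(ti_values, myocardial_signal):
    """Classical TI-scout rule: choose the TI whose myocardial signal is closest to zero."""
    best_ti, _ = min(zip(ti_values, myocardial_signal), key=lambda p: abs(p[1]))
    return best_ti
```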


Subject(s)
Heart/diagnostic imaging , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging , Myocardium/pathology , Neural Networks, Computer , Adolescent , Adult , Aged , Aged, 80 and over , Algorithms , Child , Contrast Media/administration & dosage , Female , Gadolinium/administration & dosage , Humans , Male , Memory, Short-Term , Middle Aged , Pattern Recognition, Automated , Retrospective Studies , Young Adult
17.
Radiol Artif Intell ; 1(2)2019 Mar.
Article in English | MEDLINE | ID: mdl-32582883

ABSTRACT

PURPOSE: To assess feasibility of training a convolutional neural network (CNN) to automate liver segmentation across different imaging modalities and techniques used in clinical practice and apply this to enable automation of liver biometry. METHODS: We trained a 2D U-Net CNN for liver segmentation in two stages using 330 abdominal MRI and CT exams acquired at our institution. First, we trained the neural network on non-contrast multi-echo spoiled gradient-echo (SPGR) images from 300 MRI exams to provide multiple signal weightings. Then, we used transfer learning to generalize the CNN with additional images from 30 contrast-enhanced MRI and CT exams. We assessed the performance of the CNN using a distinct multi-institutional data set curated from multiple sources (n = 498 subjects). Segmentation accuracy was evaluated by computing Dice scores. Utilizing these segmentations, we computed liver volume from CT and T1-weighted (T1w) MRI exams, and estimated hepatic proton density fat fraction (PDFF) from multi-echo T2*w MRI exams. We compared quantitative volumetry and PDFF estimates between automated and manual segmentation using Pearson correlation and Bland-Altman statistics. RESULTS: Dice scores were 0.94 ± 0.06 for CT (n = 230), 0.95 ± 0.03 for T1w MR (n = 100), and 0.92 ± 0.05 for T2*w MR (n = 169). Liver volume measured by manual and automated segmentation agreed closely for CT (95% limits of agreement (LoA) = [-298 mL, 180 mL]) and T1w MR (LoA = [-358 mL, 180 mL]). Hepatic PDFF measured by the two segmentations also agreed closely (LoA = [-0.62%, 0.80%]). CONCLUSIONS: Utilizing a transfer-learning strategy, we have demonstrated the feasibility of a CNN to be generalized to perform liver segmentations across different imaging techniques and modalities. With further refinement and validation, CNNs may have broad applicability for multimodal liver volumetry and hepatic tissue characterization.
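The 95% limits of agreement quoted above come from Bland-Altman analysis, which assumes approximately normally distributed paired differences. A minimal sketch; the paired values are illustrative:

```python
import math

def bland_altman_loa(a, b):
    """Bland-Altman bias and 95% limits of agreement for paired measurements."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))  # sample SD
    return bias - 1.96 * sd, bias + 1.96 * sd
```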

18.
Radiol Artif Intell ; 1(6): e180069, 2019 Nov 27.
Article in English | MEDLINE | ID: mdl-32090204

ABSTRACT

PURPOSE: To develop and evaluate a system to prescribe imaging planes for cardiac MRI based on deep learning (DL)-based localization of key anatomic landmarks. MATERIALS AND METHODS: Annotated landmarks on 892 long-axis (LAX) and 493 short-axis (SAX) cine steady-state free precession series from cardiac MR images were retrospectively collected between February 2012 and June 2017. U-Net-based heatmap regression was used for localization of cardiac landmarks, which were used to compute cardiac MRI planes. Performance was evaluated by comparing localization distances and plane angle differences between DL predictions and ground truth. The plane angulations from DL were compared with those prescribed by the technologist at the original time of acquisition. Data were split into 80% for training and 20% for testing, and results were confirmed with fivefold cross-validation. RESULTS: On LAX images, DL localized the apex within mean 12.56 mm ± 19.11 (standard deviation) and the mitral valve (MV) within 7.68 mm ± 6.91. On SAX images, DL localized the aortic valve within 5.78 mm ± 5.68, MV within 5.90 mm ± 5.24, pulmonary valve within 6.55 mm ± 6.39, and tricuspid valve within 6.39 mm ± 5.89. On the basis of these localizations, average angle bias and mean error of DL-predicted imaging planes relative to ground truth annotations were as follows: SAX, -1.27° ± 6.81 and 4.93° ± 4.86; four-chamber, 0.38° ± 6.45 and 5.16° ± 3.80; three-chamber, 0.13° ± 12.70 and 9.02° ± 8.83; and two-chamber, 0.25° ± 9.08 and 6.53° ± 6.28, respectively. CONCLUSION: DL-based anatomic localization is a feasible strategy for planning cardiac MRI planes. This approach can produce imaging planes comparable to those defined by ground truth landmarks. © RSNA, 2019. Supplemental material is available for this article.
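Once three landmarks are localized, the imaging plane they define can be computed from a cross product of two in-plane vectors. A minimal sketch; the landmark coordinates are illustrative, and the abstract does not detail the exact plane-fitting procedure:

```python
def plane_normal(p1, p2, p3):
    """Unit normal of the plane through three landmark points (e.g. apex and valve centers)."""
    u = [b - a for a, b in zip(p1, p2)]   # in-plane vector p1 -> p2
    v = [b - a for a, b in zip(p1, p3)]   # in-plane vector p1 -> p3
    n = [u[1] * v[2] - u[2] * v[1],       # cross product u x v
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    length = sum(c * c for c in n) ** 0.5
    return [c / length for c in n]
```

The angle between a predicted and a ground-truth plane is then the angle between their unit normals.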
