1.
Tomography; 9(3): 1041-1051, 2023 May 20.
Article in English | MEDLINE | ID: mdl-37218945

ABSTRACT

PURPOSE: Reliable and objective measures of abdominal fat distribution across imaging modalities are essential for various clinical and research scenarios, such as assessing cardiometabolic disease risk due to obesity. We aimed to compare quantitative measures of subcutaneous (SAT) and visceral (VAT) adipose tissue in the abdomen between computed tomography (CT) and Dixon-based magnetic resonance (MR) images using a unified computer-assisted software framework.

MATERIALS AND METHODS: This study included 21 subjects who underwent abdominal CT and Dixon MR imaging on the same day. For each subject, two matched axial CT and fat-only MR images at the L2-L3 and L4-L5 intervertebral levels were selected for fat quantification. For each image, outer and inner abdominal wall regions as well as SAT and VAT pixel masks were automatically generated by our software. The computer-generated results were then inspected and corrected by an expert reader.

RESULTS: There was excellent agreement in both abdominal wall segmentation and adipose tissue quantification between matched CT and MR images. Pearson coefficients were 0.97 for both outer and inner region segmentation, 0.99 for SAT quantification, and 0.97 for VAT quantification. Bland-Altman analyses indicated minimal bias in all comparisons.

CONCLUSION: We showed that abdominal adipose tissue can be reliably quantified from both CT and Dixon MR images using a unified computer-assisted software framework. This flexible framework offers a simple-to-use workflow for measuring SAT and VAT from both modalities to support various clinical research applications.


Subject(s)
Abdominal Fat, Magnetic Resonance Imaging, Humans, Abdominal Fat/diagnostic imaging, Abdominal Fat/pathology, Magnetic Resonance Imaging/methods, Tomography, X-Ray Computed/methods, Software, Computers
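
The quantification and agreement analyses described in this abstract can be illustrated with a short sketch. This is not the authors' software framework: the adipose Hounsfield-unit window of -190 to -30 HU, the precomputed body and inner-abdominal-wall masks, and all function names are assumptions for illustration only.

```python
# Minimal sketch of CT-based SAT/VAT area quantification plus CT-vs-MR
# agreement statistics. NOT the authors' software; the HU window and the
# precomputed masks are assumptions. Fat-only Dixon MR images would use an
# intensity threshold rather than HU, which is why this function is CT-only.
import numpy as np
from scipy import stats

def quantify_fat_ct(ct_slice, inner_wall_mask, body_mask,
                    pixel_area_mm2, hu_range=(-190, -30)):
    """Split adipose pixels into SAT (outside the inner abdominal wall)
    and VAT (inside it); return areas in cm^2."""
    fat = (ct_slice >= hu_range[0]) & (ct_slice <= hu_range[1]) & body_mask
    vat_mask = fat & inner_wall_mask
    sat_mask = fat & ~inner_wall_mask
    to_cm2 = pixel_area_mm2 / 100.0  # 1 cm^2 = 100 mm^2
    return sat_mask.sum() * to_cm2, vat_mask.sum() * to_cm2

def agreement(ct_values, mr_values):
    """Pearson correlation plus Bland-Altman bias and 95% limits of agreement
    between paired CT and MR measurements of the same subjects."""
    ct_values, mr_values = np.asarray(ct_values), np.asarray(mr_values)
    r, _ = stats.pearsonr(ct_values, mr_values)
    diff = ct_values - mr_values
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return r, bias, (bias - half_width, bias + half_width)
```

Pearson's r corresponds to the correlations reported above (0.97-0.99), while the Bland-Altman bias and limits of agreement correspond to the paper's bias analysis.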
2.
NPJ Digit Med; 3: 70, 2020.
Article in English | MEDLINE | ID: mdl-32435698

ABSTRACT

As one of the most ubiquitous diagnostic imaging tests in medical practice, chest radiography requires timely reporting of potential findings and diagnosis of diseases in the images. Automated, fast, and reliable detection of diseases based on chest radiography is a critical step in the radiology workflow. In this work, we developed and evaluated various deep convolutional neural networks (CNN) for differentiating between normal and abnormal frontal chest radiographs, to help alert radiologists and clinicians to potential abnormal findings as a means of worklist triaging and reporting prioritization. A CNN-based model achieved an AUC of 0.9824 ± 0.0043 (with an accuracy of 94.64 ± 0.45%, a sensitivity of 96.50 ± 0.36%, and a specificity of 92.86 ± 0.48%) for normal versus abnormal chest radiograph classification. The CNN model obtained an AUC of 0.9804 ± 0.0032 (with an accuracy of 94.71 ± 0.32%, a sensitivity of 92.20 ± 0.34%, and a specificity of 96.34 ± 0.31%) for normal versus lung opacity classification. Classification performance on the external dataset showed that the CNN model is likely to be highly generalizable, with an AUC of 0.9444 ± 0.0029. The CNN model pre-trained on cohorts of adult patients and fine-tuned on pediatric patients achieved an AUC of 0.9851 ± 0.0046 for normal versus pneumonia classification. Pretraining with natural images demonstrated a benefit for a moderate-sized training set of about 8500 images. The high diagnostic accuracy observed in this study shows that deep CNNs can accurately and effectively differentiate normal and abnormal chest radiographs, thereby providing potential benefits to radiology workflow and patient care.
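
The transfer-learning setup the abstract describes, natural-image pretraining followed by fine-tuning on radiographs with AUC as the endpoint, can be sketched as follows. The paper does not specify this exact architecture; the DenseNet-121 backbone, the single-logit head, the loss, and all hyperparameters here are illustrative assumptions.

```python
# Minimal sketch of fine-tuning an ImageNet-pretrained CNN for binary
# normal-vs-abnormal chest radiograph classification. Architecture and
# hyperparameters are assumptions, not the paper's reported configuration.
import torch
import torch.nn as nn
from torchvision import models
from sklearn.metrics import roc_auc_score

device = "cuda" if torch.cuda.is_available() else "cpu"

# Start from natural-image (ImageNet) pretraining, then fine-tune on
# radiographs; replace the 1000-class head with a single-logit output.
model = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)
model.classifier = nn.Linear(model.classifier.in_features, 1)
model = model.to(device)

criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_epoch(loader):
    """One fine-tuning pass; labels are 0 = normal, 1 = abnormal."""
    model.train()
    for images, labels in loader:
        images = images.to(device)
        labels = labels.float().unsqueeze(1).to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

@torch.no_grad()
def evaluate_auc(loader):
    """Compute the AUC used as the primary endpoint in the abstract."""
    model.eval()
    scores, targets = [], []
    for images, labels in loader:
        logits = model(images.to(device)).squeeze(1)
        scores.extend(torch.sigmoid(logits).cpu().tolist())
        targets.extend(labels.tolist())
    return roc_auc_score(targets, scores)
```

A single sigmoid logit with binary cross-entropy is a common choice for two-class triage tasks, and the same loop could in principle be reused to fine-tune an adult-pretrained model on pediatric data, as the abstract describes.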
