1.
J Med Imaging (Bellingham) ; 10(5): 054503, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37840849

ABSTRACT

Purpose: Generative adversarial networks (GANs) can synthesize various feasible-looking images. We showed that a GAN, specifically a conditional GAN (CGAN), can simulate breast mammograms with normal, healthy appearances and can help detect mammographically-occult (MO) cancer. However, like other GANs, CGANs can suffer from various artifacts, e.g., checkerboard artifacts, that may degrade the quality of the final synthesized image as well as the performance of MO cancer detection. We explored the types of GAN artifacts that arise in mammogram simulation and their effect on MO cancer detection. Approach: We first trained a CGAN using full-field digital mammograms (FFDMs) of 1366 women with normal, healthy breasts. Then, we tested the trained CGAN on an independent MO cancer dataset of 333 women with dense breasts (97 MO cancers). We trained a convolutional neural network (CNN) on the MO cancer dataset, in which real and simulated mammograms were fused, to identify women with MO cancer. A radiologist who was independent of the development of the CGAN algorithms then evaluated the entire MO cancer dataset to identify and annotate artifacts in the simulated mammograms. Results: We found four artifact types (checkerboard, breast boundary, nipple-areola complex, and black spots around calcifications), with an overall incidence rate above 69% (individual incidence rates ranged from 9% to 53%) across both normal and MO cancer samples. We then evaluated their potential impact on MO cancer detection. Even though various artifacts existed in the simulated mammograms, the simulated mammograms still provided complementary information for MO cancer detection when combined with the real mammograms. Conclusions: Artifacts were pervasive in the CGAN-simulated mammograms. However, they did not negatively affect our MO cancer detection algorithm: the simulated mammograms still provided complementary information for MO cancer detection when combined with real mammograms.
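
The checkerboard pattern mentioned above typically originates in the transposed-convolution layers of a GAN generator's decoder. As a hedged illustration (not the authors' published model; the channel counts, depth, and names are assumptions for the example), a conditional-GAN generator with a U-Net-style skip connection might look like this in PyTorch:

    # Minimal CGAN generator sketch (PyTorch). Illustrative only: the
    # architecture is assumed, not taken from the paper.
    import torch
    import torch.nn as nn

    class CGANGenerator(nn.Module):
        """Encoder-decoder mapping a mammogram to a simulated normal-looking
        counterpart. The ConvTranspose2d layers in the decoder are a known
        source of the checkerboard artifacts discussed above."""
        def __init__(self, in_ch=1, base=64):
            super().__init__()
            self.enc1 = nn.Sequential(nn.Conv2d(in_ch, base, 4, 2, 1),
                                      nn.LeakyReLU(0.2))
            self.enc2 = nn.Sequential(nn.Conv2d(base, base * 2, 4, 2, 1),
                                      nn.BatchNorm2d(base * 2), nn.LeakyReLU(0.2))
            self.dec1 = nn.Sequential(nn.ConvTranspose2d(base * 2, base, 4, 2, 1),
                                      nn.BatchNorm2d(base), nn.ReLU())
            self.dec2 = nn.Sequential(nn.ConvTranspose2d(base * 2, in_ch, 4, 2, 1),
                                      nn.Tanh())

        def forward(self, x):
            e1 = self.enc1(x)
            e2 = self.enc2(e1)
            d1 = self.dec1(e2)
            # U-Net-style skip connection: concatenate encoder features.
            return self.dec2(torch.cat([d1, e1], dim=1))

    gen = CGANGenerator()
    simulated = gen(torch.randn(1, 1, 256, 256))  # -> (1, 1, 256, 256)

A common mitigation for the checkerboard artifact is to replace each transposed convolution with nearest-neighbor upsampling followed by a plain convolution.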

2.
J Digit Imaging ; 36(3): 767-775, 2023 06.
Article in English | MEDLINE | ID: mdl-36622464

ABSTRACT

The workload of some radiologists has increased dramatically over the last several years, which can reduce the quality of diagnosis. It has been demonstrated that the diagnostic accuracy of radiologists decreases significantly toward the end of work shifts. This study investigates how radiologists cover chest X-rays with their gaze in the presence of different chest abnormalities and a high workload. We designed a randomized experiment to quantitatively assess how radiologists' image reading patterns change with radiological workload. Four radiologists read chest X-rays on a radiological workstation equipped with an eye tracker. The lung fields on the X-rays were automatically segmented with a U-Net neural network, which made it possible to measure the coverage of the lungs with the radiologists' gaze. The images were randomly split so that each image was shown at a different time to a different radiologist. Regression models were fit to the gaze data to calculate the trends in lung coverage for individual radiologists and chest abnormalities. For the study, a database of 400 chest X-rays with reference diagnoses was assembled. The average lung coverage with gaze ranged from 55% to 65% per radiologist. For every 100 X-rays read, the lung coverage decreased by 1.3% to 7.6%, depending on the radiologist. The coverage reduction trends were consistent across abnormalities, ranging from 3.4% per 100 X-rays for cardiomegaly to 4.1% per 100 X-rays for atelectasis. The more images radiologists read, the smaller the part of the lung fields they covered with their gaze. This pattern was stable across all abnormality types and was not affected by the order in which the abnormalities were viewed. The proposed randomized experiment captured and quantified consistent changes in X-ray reading for different lung abnormalities that occur under high workload.
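
To make the coverage metric concrete, below is a minimal sketch of how lung coverage with gaze could be computed from a segmentation mask and fixation coordinates. This is an assumed reimplementation, not the study's code; the visual-span radius in particular is an invented parameter.

    # Sketch: fraction of segmented lung pixels that fall within the gaze,
    # with each fixation dilated to a disk approximating useful vision.
    import numpy as np
    from scipy import ndimage

    def lung_gaze_coverage(lung_mask, gaze_xy, span_px=30):
        """lung_mask: HxW boolean array from the U-Net segmentation.
        gaze_xy: (N, 2) array of fixation (row, col) coordinates.
        span_px: assumed radius of useful vision around a fixation."""
        gaze_map = np.zeros_like(lung_mask, dtype=bool)
        rows = gaze_xy[:, 0].astype(int).clip(0, lung_mask.shape[0] - 1)
        cols = gaze_xy[:, 1].astype(int).clip(0, lung_mask.shape[1] - 1)
        gaze_map[rows, cols] = True
        # Dilate point fixations into disks of radius span_px.
        grid = np.mgrid[-span_px:span_px + 1, -span_px:span_px + 1]
        disk = np.hypot(grid[0], grid[1]) <= span_px
        gaze_map = ndimage.binary_dilation(gaze_map, structure=disk)
        covered = np.logical_and(lung_mask, gaze_map).sum()
        return covered / max(lung_mask.sum(), 1)

Fitting a linear regression of this per-image coverage against the number of images already read then yields the per-100-X-rays trends reported above.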


Subject(s)
Radiologists , Radiology , Humans , X-Rays , Radiography , Lung/diagnostic imaging
3.
Sci Rep ; 13(1): 1135, 2023 01 20.
Article in English | MEDLINE | ID: mdl-36670118

ABSTRACT

In 2020, an experiment testing AI solutions for lung X-ray analysis on a multi-hospital network was conducted. The network linked 178 Moscow state healthcare centers; all chest X-rays from the network were redirected to a research facility, analyzed with AI, and returned to the centers. The experiment was organized as a public competition with monetary awards for participating industrial and research teams. The task was the binary detection of abnormalities in chest X-rays. For an objective, real-life evaluation, no training X-rays were provided to the participants. This paper presents one of the top-performing AI frameworks from this experiment. First, the framework used two EfficientNets, histograms of oriented gradients, Haar feature ensembles, and local binary patterns to recognize whether an input image is an acceptable lung X-ray sample, meaning the X-ray is not grayscale-inverted, is a frontal chest X-ray, and completely captures both lung fields. Second, the framework extracted the region containing the lung fields and passed it to a multi-head DenseNet, whose heads recognized the patient's gender and age, predicted the potential presence of abnormalities, and generated a heatmap highlighting the abnormal regions. During one month of the experiment, from November 23 to December 25, 2020, the framework analyzed 17,888 cases, 11,902 of which had radiological reports with reference diagnoses that were unequivocally parsed by the experiment organizers. The performance, measured as the area under the receiver operating characteristic curve (AUC), was 0.77. The AUC for individual diseases ranged from 0.55 for herniation to 0.90 for pneumothorax.
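
The multi-head design described above can be sketched as a single shared DenseNet trunk with separate output heads. The following is an assumed illustration using torchvision; the head names and sizes are hypothetical, not the team's released code.

    # Sketch of a multi-head DenseNet: shared trunk, separate heads for
    # gender, age, and abnormality probability. Illustrative only.
    import torch
    import torch.nn as nn
    from torchvision import models

    class MultiHeadDenseNet(nn.Module):
        def __init__(self):
            super().__init__()
            backbone = models.densenet121(weights=None)  # torchvision >= 0.13
            self.features = backbone.features            # shared conv trunk
            feat_dim = backbone.classifier.in_features   # 1024 for DenseNet-121
            self.pool = nn.AdaptiveAvgPool2d(1)
            self.gender_head = nn.Linear(feat_dim, 2)    # gender logits
            self.age_head = nn.Linear(feat_dim, 1)       # age regression
            self.abn_head = nn.Linear(feat_dim, 1)       # abnormality logit

        def forward(self, x):
            f = self.pool(torch.relu(self.features(x))).flatten(1)
            return (self.gender_head(f), self.age_head(f),
                    torch.sigmoid(self.abn_head(f)))

    # Grayscale X-rays are replicated to 3 channels here for simplicity.
    model = MultiHeadDenseNet()
    gender, age, p_abnormal = model(torch.randn(2, 3, 224, 224))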


Subject(s)
Pneumothorax , Radiography, Thoracic , Humans , Radiography, Thoracic/methods , Lung/diagnostic imaging , Thorax , Artificial Intelligence
4.
IEEE J Biomed Health Inform ; 26(9): 4541-4550, 2022 09.
Article in English | MEDLINE | ID: mdl-35704540

ABSTRACT

Around 60-80% of radiological errors are attributed to overlooked abnormalities, the rate of which increases at the end of work shifts. In this study, we ran an experiment to investigate whether artificial intelligence (AI) can assist in detecting radiologists' gaze patterns that correlate with fatigue. A retrospective database of lung X-ray images with reference diagnoses was used. The X-ray images were acquired from 400 subjects with a mean age of 49 ± 17 years, 61% of them men. Four practicing radiologists read these images while their eye movements were recorded. The radiologists passed a series of concentration tests at prearranged breaks in the experiment. A U-Net neural network was adapted to annotate the lung anatomy on the X-rays and to calculate coverage and information-gain features from the radiologists' eye movements over the lung fields. The lung coverage, information gain, and eye tracker-based features were compared against a cumulative work done (CWD) label for each radiologist. The gaze-traveled distance, X-ray coverage, and lung coverage deteriorated statistically significantly (p < 0.01) with CWD for three out of four radiologists. The reading time and information gain over the lungs deteriorated statistically significantly for all four radiologists. We derived a novel AI-based metric blending reading time, speed, and organ coverage, which can be used to predict changes in fatigue-related image reading patterns.
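
As a sketch of the trend analysis (an assumption about the statistical machinery, not the study's exact code), the deterioration of a gaze feature with CWD can be tested by fitting a linear regression and inspecting the slope and its p-value:

    # Sketch: does a gaze feature (e.g., lung coverage) drift with CWD?
    import numpy as np
    from scipy import stats

    def fatigue_trend(cwd, feature):
        """cwd: 1-D array, cumulative images read when each X-ray was opened.
        feature: matching 1-D array, e.g. lung coverage or information gain.
        Returns the per-image slope and its p-value."""
        res = stats.linregress(cwd, feature)
        return res.slope, res.pvalue

    # Example with synthetic data: coverage drifting down over 400 readings.
    rng = np.random.default_rng(0)
    cwd = np.arange(400)
    coverage = 0.62 - 4e-4 * cwd + rng.normal(0, 0.05, 400)
    slope, p = fatigue_trend(cwd, coverage)
    print(f"slope per image: {slope:.2e}, p = {p:.1e}")  # expect p < 0.01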


Subject(s)
Artificial Intelligence , Workload , Adult , Aged , Fatigue , Female , Humans , Male , Middle Aged , Radiologists , Retrospective Studies
5.
Eur Spine J ; 31(8): 2115-2124, 2022 08.
Article in English | MEDLINE | ID: mdl-35596800

ABSTRACT

PURPOSE: To propose a fully automated deep learning (DL) framework for vertebral morphometry and Cobb angle measurement from three-dimensional (3D) computed tomography (CT) images of the spine, and to validate the proposed framework on an external database. METHODS: The vertebrae were first localized and segmented in each 3D CT image using a DL architecture based on an ensemble of U-Nets. Automated vertebral morphometry, in the form of vertebral body (VB) and intervertebral disk (IVD) heights, and spinal curvature measurements, in the form of coronal and sagittal Cobb angles (thoracic kyphosis and lumbar lordosis), were then performed using dedicated machine learning techniques. The framework was trained on 1725 vertebrae from 160 CT images and validated on an external database of 157 vertebrae from 15 CT images. RESULTS: The resulting mean absolute errors (± standard deviation) between the obtained DL and corresponding manual measurements were 1.17 ± 0.40 mm for VB heights, 0.54 ± 0.21 mm for IVD heights, and 3.42 ± 1.36° for coronal and sagittal Cobb angles, with respective maximal absolute errors of 2.51 mm, 1.64 mm, and 5.52°. Linear regression revealed excellent agreement, with Pearson's correlation coefficients of 0.943, 0.928, and 0.996, respectively. CONCLUSION: The obtained results are within the range of values reported by existing DL approaches without external validation. They therefore confirm the scalability of the proposed DL framework, both in terms of application to external data and in terms of the time and computational resources required for framework training.
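
For intuition on the measurements, the sketch below shows textbook geometry only (not the paper's pipeline): a Cobb angle taken as the angle between two endplate direction vectors, and a VB height taken as the distance between endplate midpoints.

    # Illustrative geometry for Cobb angles and VB heights from landmarks.
    import numpy as np

    def cobb_angle(upper_endplate_dir, lower_endplate_dir):
        """Angle in degrees between two endplate direction vectors."""
        u = upper_endplate_dir / np.linalg.norm(upper_endplate_dir)
        v = lower_endplate_dir / np.linalg.norm(lower_endplate_dir)
        return np.degrees(np.arccos(np.clip(abs(u @ v), -1.0, 1.0)))

    def vb_height(upper_mid, lower_mid):
        """VB height as the distance between endplate midpoints (mm)."""
        return float(np.linalg.norm(np.asarray(upper_mid) - np.asarray(lower_mid)))

    print(cobb_angle(np.array([1.0, 0.0]), np.array([0.94, 0.34])))  # ~20 deg
    print(vb_height([0, 0, 0], [0, 0, 22.5]))                        # 22.5 mm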


Subject(s)
Deep Learning , Kyphosis , Lordosis , Scoliosis , Humans , Lumbar Vertebrae/diagnostic imaging , Thoracic Vertebrae/diagnostic imaging
6.
Med Image Anal ; 78: 102417, 2022 05.
Article in English | MEDLINE | ID: mdl-35325712

ABSTRACT

Morphological abnormalities of the femoroacetabular (hip) joint are among the most common human musculoskeletal disorders and often develop asymptomatically at early, easily treatable stages. In this paper, we propose an automated framework for landmark-based detection and quantification of hip abnormalities from magnetic resonance (MR) images. The framework relies on a novel idea of multi-landmark environment analysis with reinforcement learning. In particular, we merge the concepts of the graphical lasso and Morris sensitivity analysis with deep neural networks to quantitatively estimate the contribution of individual landmark and landmark-subgroup locations to the remaining landmark locations. Convolutional neural networks for image segmentation are utilized to propose the initial landmark locations, and landmark detection is then formulated as a reinforcement learning (RL) problem in which each landmark agent can adjust its position by observing the local MR image neighborhood and the locations of the most contributive landmarks. The framework was validated on T1-, T2-, and proton density-weighted MR images of 260 patients with the aim of measuring the lateral center-edge angle (LCEA), femoral neck-shaft angle (NSA), and anterior and posterior acetabular sector angles (AASA and PASA) of the hip, and deriving quantitative abnormality metrics from these angles. The framework was successfully tested using the U-Net and feature pyramid network (FPN) segmentation architectures for landmark proposal generation, and the deep Q-network (DQN), deep deterministic policy gradient (DDPG), twin delayed deep deterministic policy gradient (TD3), and advantage actor-critic (A2C) RL networks for landmark position optimization. The resulting overall landmark detection error of 1.5 mm and angle measurement error of 1.4° indicate superior performance in comparison to existing methods. Moreover, the automatically estimated abnormality labels were in 95% agreement with those generated by an expert radiologist.
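
A schematic sketch of one landmark-agent refinement step is given below. The q_network callable, patch size, and action set are hypothetical placeholders; the actual framework trains DQN/DDPG/TD3/A2C agents as described above.

    # Schematic single step of a landmark agent: observe a local patch plus
    # the offsets of the most-contributive neighbor landmarks, then move.
    import numpy as np

    ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1), (0, 0)]  # up/down/left/right/stay

    def extract_patch(image, pos, half=16):
        r, c = pos
        return image[max(r - half, 0):r + half, max(c - half, 0):c + half]

    def agent_step(q_network, image, pos, neighbor_positions):
        """One refinement step for a single landmark agent."""
        state = (extract_patch(image, pos),
                 np.asarray(neighbor_positions) - np.asarray(pos))
        q_values = q_network(state)  # hypothetical: scores for each action
        move = ACTIONS[int(np.argmax(q_values))]
        return (pos[0] + move[0], pos[1] + move[1])

In practice the loop repeats until the agent oscillates or a step budget is exhausted, and the final positions feed the angle measurements.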


Subject(s)
Hip Joint/abnormalities , Neural Networks, Computer , Hip Joint/diagnostic imaging , Humans , Learning , Magnetic Resonance Imaging
7.
IEEE J Biomed Health Inform ; 25(5): 1660-1672, 2021 05.
Article in English | MEDLINE | ID: mdl-32956067

ABSTRACT

Pneumothorax is a potentially life-threatening condition that requires urgent diagnosis and treatment. The chest X-ray is the diagnostic modality of choice when pneumothorax is suspected. Computer-aided diagnosis of pneumothorax has received a dramatic boost in the last few years due to deep learning advances and the first public pneumothorax diagnosis competition, with 15,257 chest X-rays manually annotated by a team of 19 radiologists. This paper describes one of the top frameworks that participated in the competition. The framework investigates the benefits of combining the U-Net convolutional neural network with various backbones, namely ResNet34, SE-ResNeXt50, SE-ResNeXt101, and DenseNet121. The paper presents step-by-step instructions for applying the framework, including data augmentation and the different pre- and post-processing steps. The framework achieved a performance of 0.8574 measured in terms of the Dice coefficient. The second contribution of the paper is a comparison of the deep learning framework against three experienced radiologists on pneumothorax detection and segmentation in challenging X-rays. We also evaluated how the diagnostic confidence of radiologists affects the accuracy of the diagnosis and observed that the deep learning framework and the radiologists find the same X-rays easy or difficult to analyze (p < 1e-4). Finally, the methodology of all top-performing teams from the competition leaderboard was analyzed to find consistent methodological patterns of accurate pneumothorax detection and segmentation.
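
One common way to assemble U-Net models with interchangeable backbones, in the spirit of the framework above, is the segmentation_models_pytorch library; the sketch below shows that pattern together with the Dice coefficient used for scoring. This is an illustration of the pattern, not the competition code itself.

    # Sketch: U-Net with a swappable encoder backbone, plus the Dice metric.
    import torch
    import segmentation_models_pytorch as smp

    # Single-channel X-ray in, single-channel pneumothorax mask out. Swap
    # encoder_name for "se_resnext50_32x4d" etc. for the other variants.
    model = smp.Unet(encoder_name="resnet34", encoder_weights=None,
                     in_channels=1, classes=1)

    def dice_coefficient(pred_mask, true_mask, eps=1e-7):
        """Dice = 2 * |A & B| / (|A| + |B|), the ranking metric above."""
        inter = (pred_mask * true_mask).sum()
        return (2 * inter + eps) / (pred_mask.sum() + true_mask.sum() + eps)

    with torch.no_grad():
        logits = model(torch.randn(1, 1, 256, 256))
        pred = (torch.sigmoid(logits) > 0.5).float()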


Subject(s)
Deep Learning , Pneumothorax , Diagnosis, Computer-Assisted , Humans , Image Processing, Computer-Assisted , Neural Networks, Computer , Pneumothorax/diagnostic imaging , Radiologists