Results 1 - 5 of 5
1.
Diagnostics (Basel) ; 13(21)2023 Nov 03.
Article in English | MEDLINE | ID: mdl-37958277

ABSTRACT

T2-weighted magnetic resonance imaging (MRI) and diffusion-weighted imaging (DWI) are essential components of cervical cancer diagnosis. However, combining these channels for the training of deep learning models is challenging due to image misalignment. Here, we propose a novel multi-head framework that uses dilated convolutions and shared residual connections for the separate encoding of multiparametric MRI images. We employ a residual U-Net model as a baseline, and perform a series of architectural experiments to evaluate the tumor segmentation performance based on multiparametric input channels and different feature encoding configurations. All experiments were performed on a cohort of 207 patients with locally advanced cervical cancer. Our proposed multi-head model using separate dilated encoding for T2W MRI and combined b1000 DWI and apparent diffusion coefficient (ADC) maps achieved the best median Dice similarity coefficient (DSC) score, 0.823 (confidence interval (CI), 0.595-0.797), outperforming the conventional multi-channel model, DSC 0.788 (95% CI, 0.568-0.776), although the difference was not statistically significant (p > 0.05). We investigated channel sensitivity using 3D Grad-CAM and channel dropout, and highlighted the critical importance of the T2W and ADC channels for accurate tumor segmentation. However, our results showed that b1000 DWI had a minor impact on overall segmentation performance. We demonstrated that the use of separate dilated feature extractors and independent contextual learning improved the model's ability to reduce the boundary effects and distortion of DWI, leading to improved segmentation performance. Our findings could have significant implications for the development of robust and generalizable models that can extend to other multi-modal segmentation applications.
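The Dice similarity coefficient reported throughout this abstract is computed from overlapping binary segmentation masks. A minimal NumPy sketch (the function and variable names are illustrative, not taken from the paper's code):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks:
    DSC = 2|A intersect B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # convention: two empty masks agree perfectly
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Toy 2D example: half-overlapping masks
a = np.array([[1, 1], [0, 0]])
b = np.array([[1, 0], [1, 0]])
print(dice_coefficient(a, b))  # 0.5
```

With |A| = |B| = 2 and one overlapping pixel, the score is 2*1/(2+2) = 0.5; identical masks score 1.0, which is why median DSC values such as 0.823 indicate substantial but imperfect overlap with the ground-truth contour.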

2.
Sci Rep ; 13(1): 10568, 2023 06 29.
Article in English | MEDLINE | ID: mdl-37386097

ABSTRACT

Handcrafted and deep learning (DL) radiomics are popular techniques used to develop computed tomography (CT) imaging-based artificial intelligence models for COVID-19 research. However, contrast heterogeneity in real-world datasets may impair model performance. Contrast-homogeneous datasets present a potential solution. We developed a 3D patch-based cycle-consistent generative adversarial network (cycle-GAN) to synthesize non-contrast images from contrast CTs as a data homogenization tool. We used a multi-centre dataset of 2,078 scans from 1,650 patients with COVID-19. Few studies have previously evaluated GAN-generated images with handcrafted radiomics, DL and human assessment tasks; we evaluated the performance of our cycle-GAN with all three approaches. In a modified Turing test, human experts identified synthetic vs. acquired images with a false positive rate of 67% and a Fleiss' Kappa of 0.06, attesting to the photorealism of the synthetic images. However, when machine learning classifiers were tested with radiomic features, performance decreased with the use of synthetic images, and a marked percentage difference was noted in feature values between pre- and post-GAN non-contrast images. With DL classification, a deterioration in performance was likewise observed with synthetic images. Our results show that whilst GANs can produce images sufficient to pass human assessment, caution is advised before GAN-synthesized images are used in medical imaging applications.
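The Fleiss' Kappa of 0.06 quoted above measures inter-rater agreement beyond chance across the expert panel. A minimal sketch of the statistic, assuming the ratings are tabulated as a NumPy count matrix (rows = images, columns = votes per category, e.g. acquired/synthetic); the names are illustrative:

```python
import numpy as np

def fleiss_kappa(counts: np.ndarray) -> float:
    """Fleiss' kappa for an (n_items x n_categories) matrix of rating
    counts, each row summing to the same number of raters."""
    counts = np.asarray(counts, dtype=float)
    n_items, _ = counts.shape
    n_raters = counts[0].sum()
    # Per-item observed agreement among rater pairs
    p_i = (np.square(counts).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
    p_bar = p_i.mean()
    # Chance agreement from marginal category proportions
    p_j = counts.sum(axis=0) / (n_items * n_raters)
    p_e = np.square(p_j).sum()
    return (p_bar - p_e) / (1 - p_e)

# Perfect agreement: 4 images, 3 raters, 2 categories -> kappa = 1
perfect = np.array([[3, 0], [0, 3], [3, 0], [0, 3]])
print(fleiss_kappa(perfect))  # 1.0
```

Values near 0, such as the 0.06 reported here, indicate agreement no better than chance, i.e. the experts could not reliably distinguish synthetic from acquired scans.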


Subject(s)
COVID-19 , Deep Learning , Humans , Artificial Intelligence , COVID-19/diagnostic imaging , Tomography, X-Ray Computed , Machine Learning
3.
Diagnostics (Basel) ; 11(11)2021 Oct 22.
Article in English | MEDLINE | ID: mdl-34829310

ABSTRACT

The recent rise of deep learning (DL) and its promising capabilities in capturing non-explicit detail from large datasets have attracted substantial research attention in the field of medical image processing. DL provides grounds for technological development of computer-aided diagnosis and segmentation in radiology and radiation oncology. Amongst the anatomical locations where recent auto-segmentation algorithms have been employed, the pelvis remains one of the most challenging due to large intra- and inter-patient soft-tissue variability. This review provides a comprehensive, non-systematic and clinically oriented overview of 74 DL-based segmentation studies, published between January 2016 and December 2020, for bladder, prostate, cervical and rectal cancers on computed tomography (CT) and magnetic resonance imaging (MRI), highlighting the key findings, challenges and limitations.

4.
Front Oncol ; 11: 665807, 2021.
Article in English | MEDLINE | ID: mdl-34395244

ABSTRACT

BACKGROUND: Computed tomography (CT) and magnetic resonance imaging (MRI) are the mainstay imaging modalities in radiotherapy planning. In MR-Linac treatment, manual annotation of organs-at-risk (OARs) and clinical volumes requires significant clinician interaction and is a major challenge. Currently, there is a lack of available pre-annotated MRI data for training supervised segmentation algorithms. This study aimed to develop a deep learning (DL)-based framework to synthesize pelvic T1-weighted MRI from a pre-existing repository of clinical planning CTs. METHODS: MRI synthesis was performed using UNet++ and a cycle-consistent generative adversarial network (Cycle-GAN), and the predictions were compared qualitatively and quantitatively against a baseline UNet model using pixel-wise and perceptual loss functions. Additionally, the Cycle-GAN predictions were evaluated through qualitative expert testing (4 radiologists), and a pelvic bone segmentation routine based on a UNet architecture was trained on synthetic MRI using CT-propagated contours and subsequently tested on real pelvic T1-weighted MRI scans. RESULTS: In our experiments, Cycle-GAN generated sharp images for all pelvic slices, whilst UNet and UNet++ predictions suffered from poorer spatial resolution within deformable soft tissues (e.g. bladder, bowel). Qualitative radiologist assessment showed inter-expert variability in the test scores; the four radiologists correctly identified images as acquired/synthetic with 67%, 100%, 86% and 94% accuracy, respectively. Unsupervised segmentation of pelvic bone on T1-weighted images was successful in a number of test cases. CONCLUSION: Pelvic MRI synthesis is a challenging task due to the absence of soft-tissue contrast on CT. Our study showed the potential of deep learning models for synthesizing realistic MR images from CT and transferring cross-domain knowledge, which may help to expand training datasets for the development of MR-only segmentation models.
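The cycle-consistency constraint at the heart of the Cycle-GAN used above can be stated compactly: mapping CT to MRI and back should reproduce the input. A minimal NumPy sketch of the L1 cycle loss, with placeholder functions standing in for the trained generators (all names are illustrative, not from the study's code):

```python
import numpy as np

def cycle_consistency_loss(x, g_forward, g_backward):
    """L1 cycle loss: mean |G_backward(G_forward(x)) - x|.
    g_forward / g_backward stand in for the CT->MRI and MRI->CT
    generators of the Cycle-GAN."""
    return float(np.mean(np.abs(g_backward(g_forward(x)) - x)))

ct = np.random.default_rng(0).random((4, 4))  # toy 'CT' patch
ident = lambda v: v          # a perfect generator pair closes the cycle
shifted = lambda v: v + 0.5  # an imperfect inverse leaves a residual

print(cycle_consistency_loss(ct, ident, ident))               # 0.0
print(round(cycle_consistency_loss(ct, ident, shifted), 6))   # 0.5
```

During training this term is minimized alongside the adversarial losses; it is what lets Cycle-GAN learn the CT-to-MRI mapping from unpaired data, which matters here because planning CTs and T1-weighted MRIs are not voxel-aligned.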

5.
Int Orthop ; 44(8): 1539-1542, 2020 08.
Article in English | MEDLINE | ID: mdl-32462314

ABSTRACT

BACKGROUND: The accuracy of COVID-19 case detection is posing a conundrum for scientists, physicians, and policy-makers. As of April 23, 2020, 2.7 million cases had been confirmed, over 190,000 people were dead, and about 750,000 people were reported recovered. Yet there are no publicly available data on tests that could be missing infections. Complicating matters and furthering anxiety are specific instances of false-negative tests. METHODS: We developed a deep learning model to improve the accuracy of reported cases and to predict the disease from chest X-ray scans. Our model relied on convolutional neural networks (CNNs) to detect the structural abnormalities and disease categorization that were key to uncovering hidden patterns. To do so, a transfer learning approach was deployed to perform detections from the chest anterior-posterior radiographs of patients, using publicly available datasets. RESULTS: Our results show very high accuracy (96.3%) and low loss (0.151 binary cross-entropy) on a public dataset of patients from different countries worldwide. As the confusion matrix indicates, our model accurately identified true negatives (74) and true positives (32), with three false-positive and one false-negative finding among the healthy patient scans. CONCLUSIONS: Our COVID-19 detection model minimizes the manual interaction required of radiologists, as it automates the identification of structural abnormalities in patients' chest X-rays, and it is likely to detect true positives and true negatives and weed out false positives and false negatives with > 96.3% accuracy.
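The headline accuracy can be checked directly from the confusion-matrix counts given in the abstract (TP = 32, TN = 74, FP = 3, FN = 1); sensitivity and specificity below are derived here for illustration, not quoted by the authors:

```python
# Confusion-matrix counts reported in the abstract
tp, tn, fp, fn = 32, 74, 3, 1

accuracy = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)   # recall on COVID-19-positive scans
specificity = tn / (tn + fp)   # correct rejection of healthy scans

print(f"accuracy:    {accuracy:.1%}")     # 96.4% (reported as 96.3%)
print(f"sensitivity: {sensitivity:.1%}")  # 97.0%
print(f"specificity: {specificity:.1%}")  # 96.1%
```

Accuracy works out to 106/110 = 0.9636, consistent with the reported 96.3% allowing for truncation versus rounding of the last digit.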


Subject(s)
Betacoronavirus , Coronavirus Infections , Deep Learning , Pandemics , Pneumonia, Viral , Adolescent , Adult , Aged , Aged, 80 and over , Bias , COVID-19 , Child , Female , Humans , Male , Middle Aged , Neural Networks, Computer , SARS-CoV-2 , Young Adult