1.
Comput Biol Med ; 170: 107982, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38266466

ABSTRACT

Accurate brain tumour segmentation is critical for tasks such as surgical planning, diagnosis, and analysis, with magnetic resonance imaging (MRI) being the preferred modality due to its excellent visualisation of brain tissues. However, the wide intensity range of voxel values in MR scans often results in significant overlap between the density distributions of different tumour tissues, leading to reduced contrast and segmentation accuracy. This paper introduces a novel framework based on conditional generative adversarial networks (cGANs) aimed at enhancing the contrast of tumour subregions for both voxel-wise and region-wise segmentation approaches. We present two models: Enhancement and Segmentation GAN (ESGAN), which combines classifier loss with adversarial loss to predict central labels of input patches, and Enhancement GAN (EnhGAN), which generates high-contrast synthetic images with reduced inter-class overlap. These synthetic images are then fused with corresponding modalities to emphasise meaningful tissues while suppressing weaker ones. We also introduce a novel generator that adaptively calibrates voxel values within input patches, leveraging fully convolutional networks. Both models employ a multi-scale Markovian network as a GAN discriminator to capture local patch statistics and estimate the distribution of MR images in complex contexts. Experimental results on publicly available MR brain tumour datasets demonstrate the competitive accuracy of our models compared to current brain tumour segmentation techniques.
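The ESGAN objective described above, combining a classifier loss on the central label of each patch with an adversarial loss, can be sketched roughly as follows. This is an illustrative assumption of how the two terms compose, not the paper's exact formulation: the function name, the non-saturating adversarial form, and the weighting `lam` are all hypothetical.

```python
import numpy as np

def esgan_loss(disc_real, disc_fake, class_logits, center_label, lam=1.0):
    """Hypothetical sketch of an ESGAN-style objective: a standard
    adversarial term plus a classifier term on the central label of
    the input patch. `lam` (the balance weight) is an assumption."""
    eps = 1e-8
    # Adversarial term: discriminator scores on real vs synthesised patches
    adv = -np.mean(np.log(disc_real + eps) + np.log(1.0 - disc_fake + eps))
    # Classifier term: cross-entropy of the predicted central-voxel label
    probs = np.exp(class_logits - class_logits.max())
    probs = probs / probs.sum()
    cls = -np.log(probs[center_label] + eps)
    return adv + lam * cls
```

A multi-scale Markovian (PatchGAN-style) discriminator would supply `disc_real`/`disc_fake` as per-patch scores rather than a single scalar, which this sketch accommodates by averaging.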


Subject(s)
Brain Neoplasms , Neural Networks, Computer , Humans , Image Processing, Computer-Assisted/methods , Brain Neoplasms/diagnostic imaging , Magnetic Resonance Imaging/methods
2.
Can Assoc Radiol J ; : 8465371231221052, 2024 Jan 08.
Article in English | MEDLINE | ID: mdl-38189316

ABSTRACT

BACKGROUND: Multi-detector contrast-enhanced abdominal computed tomography (CT) allows for the accurate detection and classification of traumatic splenic injuries, leading to improved patient management. Its effective use requires rapid study interpretation, which can be a challenge on busy emergency radiology services. A machine learning system has the potential to automate the process, potentially leading to a faster clinical response. This study aimed to create such a system. METHOD: Using the American Association for the Surgery of Trauma (AAST) grading scale, spleen injuries were classified into 3 classes: normal, low-grade (AAST grades I-III) injuries, and high-grade (AAST grades IV and V) injuries. Employing a 2-stage machine learning strategy, spleens were initially segmented from input CT images and subsequently underwent classification via a 3D dense convolutional neural network (DenseNet). RESULTS: This single-centre retrospective study involved trauma protocol CT scans performed between January 1, 2005, and July 31, 2021, totaling 608 scans with splenic injuries and 608 without. Five board-certified fellowship-trained abdominal radiologists utilizing the AAST injury scoring scale established ground truth labels. The model achieved AUC values of 0.84, 0.69, and 0.90 for normal, low-grade, and high-grade splenic injuries, respectively. CONCLUSIONS: Our findings demonstrate the feasibility of automating spleen injury detection using our method, with potential applications in improving patient care through radiologist worklist prioritization and injury stratification. Future endeavours should concentrate on further enhancing and optimizing our approach and testing its use in a real-world clinical environment.
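The 2-stage strategy above (segment the spleen first, then classify the segmented region into three AAST-based classes) can be sketched as a pipeline. Everything here is a stand-in: the Hounsfield-unit thresholds are a crude placeholder for the learned segmentation model, and `model` stands in for the 3D DenseNet classifier.

```python
import numpy as np

AAST_CLASSES = ["normal", "low_grade", "high_grade"]  # grades I-III vs IV-V

def segment_spleen(ct_volume):
    """Stage 1 stand-in: the real system uses a learned segmentation
    model; here we crudely mask voxels in a plausible soft-tissue HU
    range purely for illustration."""
    return (ct_volume > 40) & (ct_volume < 120)

def classify_injury(ct_volume, mask, model):
    """Stage 2: restrict the volume to the spleen mask and run a
    3-class classifier (a 3D DenseNet in the study; `model` here is
    any callable returning 3 logits)."""
    if not mask.any():
        return "normal"  # no spleen found: assumed fallback
    roi = np.where(mask, ct_volume, 0.0)
    logits = model(roi)
    return AAST_CLASSES[int(np.argmax(logits))]
```

Separating segmentation from classification lets each stage be trained and validated independently, which is a common motivation for 2-stage designs.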

3.
Radiol Artif Intell ; 5(5): e230034, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37795143

ABSTRACT

This dataset is composed of cervical spine CT images with annotations related to fractures; it is available at https://www.kaggle.com/competitions/rsna-2022-cervical-spine-fracture-detection/.

4.
Neural Netw ; 132: 43-52, 2020 Dec.
Article in English | MEDLINE | ID: mdl-32861913

ABSTRACT

Magnetic resonance imaging (MRI) produces detailed images of the internal organs via a magnetic field. Although MRI has the non-invasive advantage of allowing repeated imaging, low contrast between tissues in the target area makes tissue segmentation a challenging problem. This study shows the potential advantages of synthetic high tissue contrast (HTC) images generated through image-to-image translation techniques. Specifically, we use a novel cycle generative adversarial network (Cycle-GAN) that provides an attention mechanism to increase the contrast within the tissue. The attention block and training on HTC images help our model enhance tissue visibility. We use a multistage architecture that concentrates on a single tissue per stage and filters out the irrelevant context at every stage in order to increase the resolution of the HTC images. The multistage architecture reduces the gap between source and target domains and alleviates artefacts in the synthetic images. We apply our HTC image synthesising method to two public datasets. To validate the effectiveness of these images, we use HTC MR images in both end-to-end and two-stage segmentation structures. Experiments with three segmentation baselines on BraTS'18 demonstrate that adding the synthetic HTC images to the multimodal segmentation framework improves the average Dice similarity scores (DSCs) by 0.8%, 0.6%, and 0.5% on the whole tumour (WT), tumour core (TC), and enhancing tumour (ET), respectively, while removing one real MRI channel from the segmentation pipeline. Moreover, segmenting infant brain tissue in T1w MR slices through our framework improves DSCs by approximately 1% on cerebrospinal fluid (CSF), grey matter (GM), and white matter (WM) compared to state-of-the-art segmentation techniques. The source code for synthesising HTC images is publicly available.
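The attention-gated, multistage design described above can be sketched as follows. How the stages actually compose is an assumption on our part: we model each stage as an attention map that decides, per pixel, whether to keep the source intensity or the synthesised HTC intensity, with `gen` and `attn_fn` standing in for the learned generator and attention block.

```python
import numpy as np

def attention_fuse(source, generated, attn):
    """Per-pixel gating used in attention-based CycleGAN variants:
    attn (assumed in [0, 1]) selects between the synthesised HTC
    intensity and the original source intensity."""
    return attn * generated + (1.0 - attn) * source

def multistage_htc(source, stages):
    """Apply each stage's (generator, attention) pair in sequence,
    enhancing one tissue at a time -- an illustrative assumption of
    how the multistage pipeline composes."""
    out = source
    for gen, attn_fn in stages:
        out = attention_fuse(out, gen(out), attn_fn(out))
    return out
```

Gating with an attention map rather than replacing the image wholesale is what lets each stage focus on a single tissue while leaving the irrelevant context untouched.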


Subject(s)
Brain Neoplasms/diagnostic imaging , Brain/diagnostic imaging , Image Enhancement/methods , Magnetic Resonance Imaging/methods , Neural Networks, Computer , Attention , Humans , Image Enhancement/standards , Infant , Magnetic Resonance Imaging/standards