Results 1 - 20 of 25
1.
Biomedicines ; 12(6)2024 May 31.
Article in English | MEDLINE | ID: mdl-38927428

ABSTRACT

Recent developments in AI, especially in machine learning and deep learning, have opened new avenues for research and clinical practice in neurology [...].

2.
J Neurointerv Surg ; 2024 Jun 24.
Article in English | MEDLINE | ID: mdl-38914461

ABSTRACT

BACKGROUND: Carotid web (CaW) is a risk factor for ischemic stroke, mainly in young patients with stroke of undetermined etiology. Its detection is challenging, especially for less experienced physicians. METHODS: We included patients with CaW from six international trials and registries of patients with acute ischemic stroke. Identification and manual segmentation of CaW were performed by three trained radiologists. We designed a two-stage segmentation strategy based on a convolutional neural network (CNN). In the first stage, the two carotid arteries were segmented using a U-shaped CNN. In the second stage, the segmentation of the CaW was first confined to the vicinity of the carotid arteries. Then, the carotid bifurcation region was localized by the proposed carotid bifurcation localization algorithm, followed by another U-shaped CNN. A volume threshold derived from the CaW manual segmentation statistics was then used to determine whether CaW was present. RESULTS: We included 58 patients (median (IQR) age 59 (50-75) years, 60% women). The Dice similarity coefficient and 95th percentile Hausdorff distance between manually segmented and algorithm-segmented CaW were 63.20±19.03% and 1.19±0.9 mm, respectively. Using a volume threshold of 5 mm3, binary classification detection metrics for CaW on a single artery were as follows: accuracy 92.2% (95% CI 87.93% to 96.55%), precision 94.83% (95% CI 88.68% to 100.00%), sensitivity 90.16% (95% CI 82.16% to 96.97%), specificity 94.55% (95% CI 88.0% to 100.0%), F1 measure 0.9244 (95% CI 0.8679 to 0.9692), and area under the curve 0.9235 (95% CI 0.8726 to 0.9688). CONCLUSIONS: The proposed two-stage method enables reliable segmentation and detection of CaW from head and neck CT angiography.
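As an illustration of the evaluation above, the sketch below re-implements a Dice similarity coefficient and the volume-threshold detection rule in plain NumPy. This is an assumed, simplified re-implementation for clarity, not the study's code; only the 5 mm3 threshold comes from the abstract, and the toy masks and voxel volume are made up.

```python
import numpy as np

def dice_coefficient(pred, gt):
    """Dice similarity coefficient between two binary masks, in percent."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 100.0  # both masks empty: perfect agreement by convention
    return 100.0 * 2.0 * np.logical_and(pred, gt).sum() / denom

def caw_present(pred_mask, voxel_volume_mm3, threshold_mm3=5.0):
    """Volume-threshold detection: CaW is called present when the segmented
    volume exceeds the threshold (5 mm3 in the abstract)."""
    return bool(pred_mask.sum() * voxel_volume_mm3 > threshold_mm3)

# Toy 2D example: the prediction covers 4 of the 6 ground-truth pixels.
pred = np.zeros((4, 4), dtype=bool); pred[1:3, 1:3] = True
gt = np.zeros((4, 4), dtype=bool); gt[1:3, 1:4] = True
print(dice_coefficient(pred, gt))  # 2*4 / (4+6) = 80.0
```

The same Dice formula underlies the segmentation scores reported throughout the entries below.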

3.
Biomedicines ; 12(3)2024 Mar 05.
Article in English | MEDLINE | ID: mdl-38540193

ABSTRACT

Differentiating between a salvageable Ischemic Penumbra (IP) and an irreversibly damaged Infarct Core (IC) is important in therapy decision making for acute ischemic stroke (AIS) patients. Existing methods rely on Computed Tomography Perfusion (CTP) or Diffusion-Weighted Imaging-Fluid Attenuated Inversion Recovery (DWI-FLAIR). We designed a novel Convolutional Neural Network named I2PC-Net, which relies solely on Non-Contrast Computed Tomography (NCCT) for the automatic and simultaneous segmentation of the IP and IC. In the encoder, Multi-Scale Convolution (MSC) blocks were proposed to capture effective features of ischemic lesions, and in the deep levels of the encoder, Symmetry Enhancement (SE) blocks were also designed to enhance anatomical symmetries. In the attention-based decoder, hierarchical deep supervision was introduced to address the challenge of differentiating between the IP and IC. We collected 197 NCCT scans from AIS patients to evaluate the proposed method. On the test set, I2PC-Net achieved Dice Similarity Scores of 42.76 ± 21.84%, 33.54 ± 24.13% and 65.67 ± 12.30% and lesion volume correlation coefficients of 0.95 (p < 0.001), 0.61 (p < 0.001) and 0.93 (p < 0.001) for the IP, IC and IP + IC, respectively. The results indicate that NCCT could potentially be used as a surrogate for CTP in the quantitative evaluation of the IP and IC.
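The symmetry-enhancement idea above rests on comparing each hemisphere with its left-right mirror, since ischemia tends to break the brain's near-symmetry. The paper's SE blocks operate on learned feature maps; the sketch below only illustrates the underlying mirror-difference operation on a midline-aligned slice, which is an assumption for clarity, not the published architecture.

```python
import numpy as np

def symmetry_difference(axial_slice):
    """Absolute difference between a midline-aligned axial slice and its
    left-right mirror; ischemic tissue tends to break this symmetry."""
    return np.abs(axial_slice - np.flip(axial_slice, axis=1))

# A perfectly symmetric slice yields an all-zero difference map.
sym = np.array([[1.0, 2.0, 1.0],
                [3.0, 0.0, 3.0]])
print(symmetry_difference(sym))  # all zeros
```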

4.
IEEE Trans Med Imaging ; 43(6): 2303-2316, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38319756

ABSTRACT

Lesion segmentation is a fundamental step in the diagnosis of acute ischemic stroke (AIS). Non-contrast CT (NCCT) is still a mainstream imaging modality for AIS lesion measurement. However, AIS lesion segmentation on NCCT is challenging due to low contrast, noise, and artifacts. To achieve accurate AIS lesion segmentation on NCCT, this study proposes a hybrid convolutional neural network (CNN) and Transformer network with circular feature interaction and bilateral difference learning. It consists of parallel CNN and Transformer encoders, a circular feature interaction module, and a shared CNN decoder with a bilateral difference learning module. A new Transformer block is specifically designed to address the weak inductive bias of the traditional Transformer. To effectively combine features from the CNN and Transformer encoders, we first design a multi-level feature aggregation module to combine multi-scale features in each encoder and then propose a novel feature interaction module containing circular CNN-to-Transformer and Transformer-to-CNN interaction blocks. In addition, a bilateral difference learning module is proposed at the bottom level of the decoder to learn the differences between the ischemic and contralateral sides of the brain. The proposed method is evaluated on three AIS datasets: the public AISD, a private dataset, and an external dataset. Experimental results show that the proposed method achieves Dice coefficients of 61.39% and 46.74% on the AISD and the private dataset, respectively, outperforming 17 state-of-the-art segmentation methods. Moreover, volumetric analysis of the segmented lesions and external validation results imply that the proposed method has the potential to provide supporting information for AIS diagnosis.


Subject(s)
Ischemic Stroke , Neural Networks, Computer , Tomography, X-Ray Computed , Humans , Ischemic Stroke/diagnostic imaging , Tomography, X-Ray Computed/methods , Brain/diagnostic imaging , Algorithms
5.
IEEE J Biomed Health Inform ; 27(10): 4828-4839, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37578920

ABSTRACT

Medical image segmentation is indispensable for the diagnosis and prognosis of many diseases. To improve segmentation performance, this study proposes a new 2D body- and edge-aware network with multi-scale short-term concatenation for medical image segmentation. Multi-scale short-term concatenation modules, which concatenate successive convolution layers with different receptive fields, are proposed for capturing multi-scale representations with fewer parameters. Body generation modules, with feature adjustment based on weight maps computed over enlarged receptive fields, and edge generation modules, with multi-scale convolutions using Sobel kernels for edge detection, are proposed to separately learn body and edge features from convolutional features in the decoders, making the proposed network body- and edge-aware. Based on the body and edge modules, we design parallel body and edge decoders whose outputs are fused to achieve the final segmentation. In addition, deep supervision from the body and edge decoders is applied to ensure the effectiveness of the generated body and edge features and further improve the final segmentation. The proposed method is trained and evaluated on six public medical image segmentation datasets to show its effectiveness and generality. Experimental results show that the proposed method achieves better average Dice similarity coefficients and 95% Hausdorff distances than several benchmarks on all datasets. Ablation studies validate the effectiveness of the proposed multi-scale representation learning modules, body and edge generation modules, and deep supervision.
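The Sobel kernels mentioned above are fixed 3x3 filters whose horizontal and vertical responses combine into a gradient magnitude that highlights edges. The sketch below shows that computation on a raw image in plain NumPy; this is an assumed standalone illustration, whereas in the paper the kernels act on learned feature maps inside the network.

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def cross_correlate2d(img, kernel):
    """Valid-mode 2D cross-correlation (what CNN 'convolutions' compute)."""
    kh, kw = kernel.shape
    out = np.empty((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def sobel_edge_map(img):
    """Gradient magnitude from horizontal and vertical Sobel responses."""
    gx = cross_correlate2d(img, SOBEL_X)
    gy = cross_correlate2d(img, SOBEL_Y)
    return np.hypot(gx, gy)

# A vertical step edge: strong response at the step, zero in flat regions.
img = np.zeros((5, 6)); img[:, 3:] = 1.0
print(sobel_edge_map(img))
```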

6.
J Biomed Inform ; 143: 104408, 2023 07.
Article in English | MEDLINE | ID: mdl-37295630

ABSTRACT

Predicting a patient's in-hospital mortality from historical Electronic Medical Records (EMRs) can assist physicians in making clinical decisions and allocating medical resources. In recent years, researchers have proposed many deep learning methods to predict in-hospital mortality by learning patient representations. However, most of these methods fail to comprehensively learn the temporal representations and do not sufficiently mine the contextual knowledge of demographic information. We propose a novel end-to-end approach based on Local and Global Temporal Representation Learning with Demographic Embedding (LGTRL-DE) to address these issues in in-hospital mortality prediction. LGTRL-DE is enabled by (1) a local temporal representation learning module that captures the temporal information and analyzes the health status from a local perspective through a recurrent neural network with demographic initialization and a local attention mechanism; (2) a Transformer-based global temporal representation learning module that extracts the interaction dependencies among clinical events; and (3) a multi-view representation fusion module that fuses temporal and static information and generates the patient's final health representations. We evaluate our proposed LGTRL-DE on two public real-world clinical datasets (MIMIC-III and e-ICU). Experimental results show that LGTRL-DE achieves areas under the receiver operating characteristic curve of 0.8685 and 0.8733 on the MIMIC-III and e-ICU datasets, respectively, outperforming several state-of-the-art approaches.


Subject(s)
Neural Networks, Computer , Humans , Hospital Mortality
7.
IEEE J Biomed Health Inform ; 27(8): 4086-4097, 2023 08.
Article in English | MEDLINE | ID: mdl-37192032

ABSTRACT

Cervical abnormal cell detection is a challenging task, as the morphological discrepancies between abnormal and normal cells are usually subtle. To determine whether a cervical cell is normal or abnormal, cytopathologists usually take surrounding cells as references to identify its abnormality. To mimic this behavior, we propose to explore contextual relationships to boost the performance of cervical abnormal cell detection. Specifically, both cell-to-cell and cell-to-global-image contextual relationships are exploited to enhance the features of each region of interest (RoI) proposal. Accordingly, two modules, dubbed the RoI-relationship attention module (RRAM) and the global RoI attention module (GRAM), are developed, and their combination strategies are also investigated. We establish a strong baseline by using Double-Head Faster R-CNN with a feature pyramid network (FPN) and integrate our RRAM and GRAM into it to validate the effectiveness of the proposed modules. Experiments conducted on a large cervical cell detection dataset reveal that introducing either RRAM or GRAM achieves better average precision (AP) than the baseline methods. Moreover, when cascading RRAM and GRAM, our method outperforms the state-of-the-art (SOTA) methods. Furthermore, we show that the proposed feature-enhancing scheme can facilitate image- and smear-level classification.


Subject(s)
Cervix Uteri , Cytological Techniques , Humans , Cervix Uteri/pathology , Female
8.
Comput Biol Med ; 160: 106953, 2023 06.
Article in English | MEDLINE | ID: mdl-37120987

ABSTRACT

The hippocampus strongly influences Alzheimer's disease (AD) research because of its essential role as a biomarker in the human brain. Thus, the performance of hippocampus segmentation influences the development of clinical research on brain disorders. Deep learning using U-Net-like networks has become prevalent in hippocampus segmentation on Magnetic Resonance Imaging (MRI) due to its efficiency and accuracy. However, current methods lose detailed information during pooling, which hinders the segmentation results. Moreover, weak supervision of details such as edges and positions results in fuzzy, coarse boundary segmentation, causing large discrepancies between the segmentation and the ground truth. In view of these drawbacks, we propose a Region-Boundary and Structure Net (RBS-Net), which consists of a primary net and an auxiliary net. (1) Our primary net focuses on the region distribution of the hippocampus and introduces a distance map for boundary supervision. Furthermore, the primary net adds a multi-layer feature learning module to compensate for the information loss during pooling and strengthen the differences between the foreground and background, improving the region and boundary segmentation. (2) The auxiliary net concentrates on structure similarity and also utilizes the multi-layer feature learning module; this parallel task can refine the encoders by making the structure of the segmentation similar to that of the ground truth. We train and test our network using 5-fold cross-validation on HarP, a publicly available hippocampus dataset. Experimental results demonstrate that our proposed RBS-Net achieves an average Dice of 89.76%, outperforming several state-of-the-art hippocampus segmentation methods. Furthermore, in few-shot settings, our proposed RBS-Net achieves better results in terms of a comprehensive evaluation compared to several state-of-the-art deep learning-based methods. Finally, we observe that visual segmentation results for the boundary and detailed regions are improved by our proposed RBS-Net.
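The distance map used for boundary supervision above can be illustrated with SciPy's Euclidean distance transform: assign every pixel its distance to the nearest pixel of the opposite class, so a loss can weight errors by proximity to the boundary. This is an assumed sketch of the general technique, not the paper's exact formulation.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def boundary_distance_map(mask):
    """For every pixel, the Euclidean distance to the nearest pixel of the
    opposite class; values are small near the mask boundary and grow away
    from it, letting a loss penalize boundary errors explicitly."""
    mask = mask.astype(bool)
    inside = distance_transform_edt(mask)    # foreground: distance to background
    outside = distance_transform_edt(~mask)  # background: distance to foreground
    return np.where(mask, inside, outside)

# Toy mask: a 3x3 square inside a 7x7 grid.
mask = np.zeros((7, 7), dtype=bool); mask[2:5, 2:5] = True
dmap = boundary_distance_map(mask)
print(dmap[3, 3], dmap[0, 0])  # 2.0 at the square's center, sqrt(8) at the far corner
```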


Subject(s)
Alzheimer Disease , Image Processing, Computer-Assisted , Humans , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Brain , Hippocampus/diagnostic imaging , Alzheimer Disease/diagnostic imaging
9.
Biomedicines ; 11(2)2023 Jan 17.
Article in English | MEDLINE | ID: mdl-36830780

ABSTRACT

Collateral scoring plays an important role in the diagnosis and treatment decisions of acute ischemic stroke (AIS). Most existing automated methods rely on vessel prominence and amount after vessel segmentation. The purpose of this study was to design a vessel-segmentation-free method for automating collateral scoring on CT angiography (CTA). We first processed the original CTA via maximum intensity projection (MIP) and middle cerebral artery (MCA) region segmentation. The obtained MIP images were fed into our proposed hybrid CNN and Transformer model (MPViT) to automatically determine the collateral scores. We collected 154 CTA scans of patients with AIS for evaluation using five-fold cross-validation. Results show that the proposed MPViT achieved an intraclass correlation coefficient of 0.767 (95% CI: 0.68-0.83) and a kappa of 0.6184 (95% CI: 0.4954-0.7414) for three-point collateral score classification. For dichotomized classification (good vs. non-good and poor vs. non-poor), it also achieved strong performance.
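The maximum intensity projection (MIP) preprocessing above collapses a 3D CTA volume into a 2D image by keeping the brightest voxel along each projection ray, so contrast-filled vessels dominate the result. A minimal NumPy sketch (the projection axis and toy volume are assumptions for illustration):

```python
import numpy as np

def maximum_intensity_projection(volume, axis=0):
    """MIP: keep the maximum voxel intensity along the projection axis,
    so contrast-filled vessels dominate the resulting 2D image."""
    return np.asarray(volume).max(axis=axis)

# Toy 2x2x2 volume: projecting along axis 0 keeps per-(y, x) maxima.
vol = np.array([[[1, 5], [2, 0]],
                [[4, 3], [0, 7]]])
print(maximum_intensity_projection(vol))  # [[4 5]
                                          #  [2 7]]
```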

10.
Comput Biol Med ; 151(Pt A): 106278, 2022 12.
Article in English | MEDLINE | ID: mdl-36371901

ABSTRACT

In healthcare, Intensive Care Unit (ICU) bed management is a necessary task because of limited budgets and resources. Predicting the remaining Length of Stay (LoS) in the ICU and mortality can assist clinicians in managing ICU beds efficiently. This study proposes a deep learning method based on several successive Temporal Dilated Separable Convolution with Context-Aware Feature Fusion (TDSC-CAFF) modules and multi-view, multi-scale feature fusion for predicting the remaining LoS and mortality risk for ICU patients. In each TDSC-CAFF module, temporal dilated separable convolution is used to encode each feature separately, and context-aware feature fusion is proposed to capture comprehensive and context-aware feature representations from the input time-series features, static demographics, and the output of the previous TDSC-CAFF module. The CAFF outputs of each module are accumulated to achieve multi-scale representations with different receptive fields. The outputs of TDSC and CAFF are concatenated with skip connections from the output of the last module and the original time-series input. The concatenated features are processed by the proposed Point-Wise convolution-based Attention (PWAtt), which captures inter-feature context to generate the final temporal features. Finally, the final temporal features, the accumulated multi-scale features, the encoded diagnosis, and static demographic features are fused and then processed by fully connected layers to obtain prediction results. We evaluate our proposed method on two publicly available datasets, eICU and MIMIC-IV v1.0, for LoS and mortality prediction tasks. Experimental results demonstrate that our proposed method achieves a mean squared log error of 0.07 and 0.08 for LoS prediction, and an Area Under the Receiver Operating Characteristic Curve of 0.909 and 0.926 for mortality prediction, on the eICU and MIMIC-IV v1.0 datasets, respectively, outperforming several state-of-the-art methods.


Subject(s)
Critical Care , Intensive Care Units , Humans , Length of Stay , Health Facilities , Time Factors
11.
Med Image Anal ; 80: 102521, 2022 08.
Article in English | MEDLINE | ID: mdl-35780594

ABSTRACT

In recent years, deep learning, as a state-of-the-art machine learning technique, has achieved great success in histopathological image classification. However, most deep learning approaches rely heavily on substantial task-specific annotations, which require experienced pathologists' manual labelling. As a result, they are laborious and time-consuming, and many unlabeled pathological images are difficult to use without experts' annotations. To mitigate the requirement for data annotation, we propose a self-supervised Deep Adaptive Regularized Clustering (DARC) framework to pre-train a neural network. DARC iteratively clusters the learned representations and utilizes the cluster assignments as pseudo-labels to learn the parameters of the network. To learn feasible representations and encourage them to become more discriminative, we design an objective function combining a network loss with a clustering loss using an adaptive regularization function, which is updated adaptively throughout the training process. The proposed DARC is evaluated on three public datasets: NCT-CRC-HE-100K, PCam, and LC25000. Compared to training from scratch, fine-tuning using the pre-trained weights of DARC substantially boosts the accuracy of neural networks on histopathological classification. A network trained from DARC pre-trained weights with only 10% labeled data is already comparable in accuracy to a network trained from scratch with 100% of the training data. The network using DARC pre-trained weights achieves the fastest convergence speed on the downstream classification task. Moreover, visualization through t-distributed stochastic neighbor embedding (t-SNE) shows that the learned representations are generalizable and discriminative.


Subject(s)
Machine Learning , Neural Networks, Computer , Algorithms , Cluster Analysis , Humans
12.
Med Image Anal ; 79: 102423, 2022 07.
Article in English | MEDLINE | ID: mdl-35429696

ABSTRACT

Accurate prediction of pathological complete response (pCR) after neoadjuvant chemoradiotherapy (nCRT) is essential for clinical precision treatment. However, existing methods for predicting pCR in esophageal cancer are based on single-stage data, which limits their performance. Effective fusion of longitudinal data has the potential to improve pCR prediction, thanks to the combination of complementary information. In this study, we propose a new multi-loss disentangled representation learning (MLDRL) approach to realize the effective fusion of complementary information in longitudinal data. Specifically, we first disentangle the latent variables of features in each stage into inherent and variational components. Then, we define a multi-loss function to ensure the effectiveness and structure of disentanglement, which consists of a cross-cycle reconstruction loss, an inherent-variational loss, and a supervised classification loss. Finally, an adaptive gradient normalization algorithm is applied to balance the training of the multiple loss terms by dynamically tuning the gradient magnitudes. Due to the cooperation of the multi-loss function and the adaptive gradient normalization algorithm, MLDRL effectively restrains potential interference and achieves effective information fusion. The proposed method is evaluated on multi-center datasets, and the experimental results show that our method not only outperforms several state-of-the-art methods in pCR prediction, but also achieves better performance in the prognostic analysis of multi-center unlabeled datasets.


Subject(s)
Esophageal Neoplasms , Neoadjuvant Therapy , Algorithms , Esophageal Neoplasms/diagnostic imaging , Esophageal Neoplasms/pathology , Esophageal Neoplasms/therapy , Humans , Neoadjuvant Therapy/methods , Prognosis , Tomography, X-Ray Computed
13.
IEEE Trans Med Imaging ; 41(6): 1520-1532, 2022 06.
Article in English | MEDLINE | ID: mdl-35020590

ABSTRACT

The accurate prediction of isocitrate dehydrogenase (IDH) mutation and glioma segmentation are important tasks for computer-aided diagnosis using preoperative multimodal magnetic resonance imaging (MRI). The two tasks are ongoing challenges due to the significant inter-tumor and intra-tumor heterogeneity. The existing methods to address them are mostly based on single-task approaches without considering the correlation between the two tasks. In addition, the acquisition of IDH genetic labels is costly and time-consuming, resulting in a limited amount of IDH mutation data for modeling. To comprehensively address these problems, we propose a fully automated multimodal MRI-based multi-task learning framework for simultaneous glioma segmentation and IDH genotyping. Specifically, the task correlation and heterogeneity are tackled with a hybrid CNN-Transformer encoder, consisting of a convolutional neural network and a transformer, that extracts shared spatial and global information, which is then passed to a decoder for glioma segmentation and a multi-scale classifier for IDH genotyping. Then, a multi-task learning loss is designed to balance the two tasks by combining the segmentation and classification loss functions with uncertain weights. Finally, an uncertainty-aware pseudo-label selection scheme is proposed to generate IDH pseudo-labels from a larger unlabeled dataset, improving the accuracy of IDH genotyping via semi-supervised learning. We evaluate our method on a multi-institutional public dataset. Experimental results show that our proposed multi-task network achieves promising performance and outperforms its single-task learning counterparts and other existing state-of-the-art methods. With the introduction of unlabeled data, the semi-supervised multi-task learning framework further improves the performance of glioma segmentation and IDH genotyping. The source code of our framework is publicly available at https://github.com/miacsu/MTTU-Net.git.


Subject(s)
Glioma , Isocitrate Dehydrogenase , Genotype , Glioma/diagnostic imaging , Glioma/genetics , Glioma/pathology , Humans , Image Processing, Computer-Assisted/methods , Isocitrate Dehydrogenase/genetics , Magnetic Resonance Imaging , Neural Networks, Computer
14.
IEEE J Biomed Health Inform ; 26(2): 673-684, 2022 02.
Article in English | MEDLINE | ID: mdl-34236971

ABSTRACT

Effective fusion of multimodal magnetic resonance imaging (MRI) is of great significance for boosting the accuracy of glioma grading, thanks to the complementary information provided by different imaging modalities. However, how to extract common and distinctive information from MRI to achieve complementarity is still an open problem in information fusion research. In this study, we propose a deep neural network model, termed the multimodal disentangled variational autoencoder (MMD-VAE), for glioma grading based on radiomics features extracted from preoperative multimodal MRI images. Specifically, the radiomics features are quantified and extracted from the region of interest for each modality. Then, the latent representations of the variational autoencoder for these features are disentangled into common and distinctive representations to obtain the shared and complementary data among modalities. Afterwards, a cross-modality reconstruction loss and a common-distinctive loss are designed to ensure the effectiveness of the disentangled representations. Finally, the disentangled common and distinctive representations are fused to predict glioma grades, and SHapley Additive exPlanations (SHAP) is adopted to quantitatively interpret and analyze the contribution of the important features to grading. Experimental results on two benchmark datasets demonstrate that the proposed MMD-VAE model achieves encouraging predictive performance (AUC: 0.9939) on a public dataset and good generalization performance (AUC: 0.9611) on a cross-institutional private dataset. These quantitative results and interpretations may help radiologists understand gliomas better and make better treatment decisions for improving clinical outcomes.


Subject(s)
Glioma , Glioma/diagnostic imaging , Glioma/pathology , Humans , Magnetic Resonance Imaging/methods , Neoplasm Grading , Neural Networks, Computer
15.
J Stroke ; 23(2): 234-243, 2021 May.
Article in English | MEDLINE | ID: mdl-34102758

ABSTRACT

BACKGROUND AND PURPOSE: Multiphase computed tomographic angiography (mCTA) provides time-variant images of the pial vasculature supplying the brain in patients with acute ischemic stroke (AIS). We aimed to develop a machine learning (ML) technique to predict tissue perfusion and infarction from mCTA source images. METHODS: A total of 284 patients with AIS were included from the Precise and Rapid assessment of collaterals using multi-phase CTA in the triage of patients with acute ischemic stroke for Intra-artery Therapy (Prove-IT) study. All patients had non-contrast computed tomography, mCTA, and computed tomographic perfusion (CTP) at baseline and follow-up magnetic resonance imaging/non-contrast-enhanced computed tomography. Of the 284 patients, 140 were randomly selected to train and validate three ML models to predict a pre-defined Tmax-thresholded perfusion abnormality, core, and penumbra on CTP. The remaining 144 patients were used to test the ML models. The predicted perfusion, core, and penumbra lesions from the ML models were compared to the CTP perfusion lesion and to follow-up infarct using Bland-Altman plots, concordance correlation coefficient (CCC), intra-class correlation coefficient (ICC), and Dice similarity coefficient. RESULTS: The mean difference between the mCTA-predicted perfusion volume and CTP perfusion volume was 4.6 mL (limit of agreement [LoA], -53 to 62.1 mL; P=0.56; CCC 0.63 [95% confidence interval [CI], 0.53 to 0.71; P<0.01], ICC 0.68 [95% CI, 0.58 to 0.78; P<0.001]). The mean difference between the mCTA-predicted infarct and follow-up infarct in the 100 patients with acute reperfusion (modified thrombolysis in cerebral infarction [mTICI] 2b/2c/3) was 21.7 mL, while it was 3.4 mL in the 44 patients not achieving reperfusion (mTICI 0/1). Among reperfused subjects, CCC was 0.4 (95% CI, 0.15 to 0.55; P<0.01) and ICC was 0.42 (95% CI, 0.18 to 0.50; P<0.01); in non-reperfused subjects, CCC was 0.52 (95% CI, 0.20 to 0.60; P<0.001) and ICC was 0.60 (95% CI, 0.37 to 0.76; P<0.001). No difference was observed between the mCTA- and CTP-predicted infarct volumes in the test cohort (P=0.67). CONCLUSIONS: An ML-based mCTA model can predict brain tissue perfusion abnormality and follow-up infarction, comparable to CTP.
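The Bland-Altman analysis used above summarizes agreement between two volume measurements by the mean of their paired differences and the 95% limits of agreement (mean ± 1.96 SD of the differences). A minimal sketch with made-up toy volumes, not study values:

```python
import numpy as np

def bland_altman(a, b):
    """Mean difference and 95% limits of agreement (mean +/- 1.96 SD of the
    paired differences) between two sets of measurements."""
    diff = np.asarray(a, float) - np.asarray(b, float)
    md, sd = diff.mean(), diff.std(ddof=1)
    return md, (md - 1.96 * sd, md + 1.96 * sd)

# Hypothetical volumes (mL) from two methods on the same four patients.
mcta = [30.0, 45.0, 12.0, 60.0]
ctp = [28.0, 47.0, 10.0, 59.0]
md, (lo, hi) = bland_altman(mcta, ctp)
```

Narrow limits of agreement around a near-zero mean difference indicate that the two methods can be used interchangeably.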

16.
Med Image Anal ; 70: 101984, 2021 05.
Article in English | MEDLINE | ID: mdl-33676101

ABSTRACT

Detecting early infarct (EI) plays an essential role in patient selection for reperfusion therapy in the management of acute ischemic stroke (AIS). EI volume at the acute or hyper-acute stage can be measured using advanced pre-treatment imaging, such as MRI and CT perfusion. In this study, a novel multi-task learning approach, EIS-Net, is proposed to segment EI and score the Alberta Stroke Program Early CT Score (ASPECTS) simultaneously on baseline non-contrast CT (NCCT) scans of AIS patients. EIS-Net comprises a 3D triplet convolutional neural network (T-CNN) for EI segmentation and a multi-region classification network for ASPECTS scoring. T-CNN has three encoders, with the original NCCT, mirrored NCCT, and an atlas as inputs, as well as one decoder. A comparison disparity block (CDB) is designed to extract and enhance image contexts. In the decoder, a multi-level attention gate module (MAGM) is developed to recalibrate the features of the decoder for both segmentation and classification tasks. Evaluations using a high-quality dataset of 260 patients with AIS, comprising baseline NCCT with concomitant diffusion-weighted MRI (DWI) as the reference standard, show that the proposed EIS-Net can accurately segment EI. The EIS-Net-segmented EI volume strongly correlates with EI volume on DWI (r=0.919), and the mean difference between the two volumes is 8.5 mL. For ASPECTS scoring, the proposed EIS-Net achieves an intraclass correlation coefficient of 0.78 for the total 10-point ASPECTS and a kappa of 0.75 for dichotomized ASPECTS (≤4 vs. >4). Both the EI segmentation and ASPECTS scoring tasks achieve state-of-the-art performance.


Subject(s)
Brain Ischemia , Ischemic Stroke , Stroke , Alberta , Brain Ischemia/diagnostic imaging , Humans , Infarction , Stroke/diagnostic imaging , Tomography, X-Ray Computed
17.
Int J Stroke ; 16(2): 192-199, 2021 02.
Article in English | MEDLINE | ID: mdl-31847733

ABSTRACT

BACKGROUND: Manual segmentations of intracranial hemorrhage on non-contrast CT images are the gold standard in measuring hematoma growth but are prone to rater variability. AIMS: We demonstrate that a convex optimization-based interactive segmentation approach can accurately and reliably measure intracranial hemorrhage growth. METHODS: Baseline and 16-h follow-up head non-contrast CT images of 46 subjects presenting with intracranial hemorrhage were selected randomly from the ANNEXA-4 trial imaging database. Three users semi-automatically segmented intracranial hemorrhage to measure hematoma volume for each timepoint using our proposed method. Segmentation accuracy was quantitatively evaluated against manual segmentations by using the Dice similarity coefficient, Pearson correlation, and Bland-Altman analysis. Intra- and inter-rater reliability of the Dice similarity coefficient, intracranial hemorrhage volumes, and volume change were assessed by the intraclass correlation coefficient and minimum detectable change. RESULTS: Among the three users, the mean Dice similarity coefficient, Pearson correlation, and mean difference ranged from 76.79% to 79.76%, 0.970 to 0.980 (p < 0.001), and -1.5 to -0.4 ml, respectively, for all intracranial hemorrhage segmentations. Inter-rater intraclass correlation coefficients between the three users for the Dice similarity coefficient and intracranial hemorrhage volume were 0.846 and 0.962, respectively, and the corresponding minimum detectable change was 2.51 ml. The inter-rater intraclass correlation coefficient for intracranial hemorrhage volume change ranged from 0.915 to 0.958 for each user compared to manual measurements, resulting in a minimum detectable change range of 2.14 to 4.26 ml. CONCLUSIONS: We spatially and volumetrically validate a novel interactive segmentation method for delineating intracranial hemorrhage on head non-contrast CT images. Good spatial overlap, excellent volume correlation, and good repeatability suggest its usefulness for measuring intracranial hemorrhage volume and volume change on non-contrast CT images.
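The minimum detectable change reported above is conventionally derived from the ICC via the standard error of measurement, MDC = 1.96 * sqrt(2) * SEM with SEM = SD * sqrt(1 - ICC). The sketch below implements that standard formula; the abstract does not state the study's exact computation, so this is the textbook version with illustrative inputs.

```python
import math

def minimum_detectable_change(sd, icc, z=1.96):
    """MDC = z * sqrt(2) * SEM, where SEM = SD * sqrt(1 - ICC): the smallest
    change exceeding measurement error at 95% confidence."""
    sem = sd * math.sqrt(1.0 - icc)
    return z * math.sqrt(2.0) * sem

# With SD = 1 mL and ICC = 0.5, sqrt(2) * SEM = 1, so MDC ~= 1.96 mL.
print(minimum_detectable_change(1.0, 0.5))
```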


Subject(s)
Stroke , Head , Humans , Intracranial Hemorrhages/diagnostic imaging , Reproducibility of Results , Stroke/diagnostic imaging , Tomography, X-Ray Computed
18.
Stroke ; 52(1): 223-231, 2021 01.
Article in English | MEDLINE | ID: mdl-33280549

ABSTRACT

BACKGROUND AND PURPOSE: Prediction of infarct extent among patients with acute ischemic stroke using computed tomography perfusion is defined by predefined discrete computed tomography perfusion thresholds. Our objective was to develop a threshold-free computed tomography perfusion-based machine learning (ML) model to predict follow-up infarct in patients with acute ischemic stroke. METHODS: Sixty-eight patients from the PRoveIT study (Measuring Collaterals With Multi-Phase CT Angiography in Patients With Ischemic Stroke) were used to derive an ML model using random forest to predict follow-up infarction voxel by voxel, and 137 patients from the HERMES study (Highly Effective Reperfusion Evaluated in Multiple Endovascular Stroke Trials) were used to test the derived ML model. Average map, Tmax, cerebral blood flow, cerebral blood volume, and time variables, including stroke onset-to-imaging and imaging-to-reperfusion time, were used as features to train the ML model. Spatial and volumetric agreement between the ML-model-predicted follow-up infarct and the actual follow-up infarct were assessed. Relative cerebral blood flow <0.3 threshold using RAPID software and time-dependent Tmax thresholds were compared with the ML model. RESULTS: In the test cohort (137 patients), the median follow-up infarct volume predicted by the ML model was 30.9 mL (interquartile range, 16.4-54.3 mL), compared with a median of 29.6 mL (interquartile range, 11.1-70.9 mL) for actual follow-up infarct volume. The Pearson correlation coefficient between the two measurements was 0.80 (95% CI, 0.74-0.86, P<0.001), while the volumetric difference was -3.2 mL (interquartile range, -16.7 to 6.1 mL). The volumetric difference with the ML model was smaller than with the relative cerebral blood flow <0.3 threshold and the time-dependent Tmax threshold (P<0.001). CONCLUSIONS: An ML model using computed tomography perfusion data and time variables estimates follow-up infarction in patients with acute ischemic stroke better than current methods.
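The voxel-wise random forest described above treats each voxel as one training row of perfusion and time features and predicts its follow-up infarct status. A hedged sketch with synthetic data: the feature list in the comment mirrors the abstract, but the random inputs, the labeling rule, and all hyperparameters are assumptions, not study data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# One row per voxel; columns stand in for features like Tmax, CBF, CBV,
# onset-to-imaging time, and imaging-to-reperfusion time.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
# Synthetic label standing in for "infarcted on follow-up imaging".
y = (X[:, 0] + 0.5 * X[:, 3] > 0.0).astype(int)

forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
infarct_prob = forest.predict_proba(X)[:, 1]   # per-voxel infarct probability
predicted_volume_voxels = int((infarct_prob > 0.5).sum())
```

Summing the thresholded per-voxel predictions (times the voxel volume) yields the predicted infarct volume compared against follow-up imaging in the study.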


Subject(s)
Cerebral Infarction/diagnostic imaging , Cerebral Infarction/etiology , Ischemic Stroke/complications , Ischemic Stroke/diagnostic imaging , Aged , Cerebrovascular Circulation , Collateral Circulation , Female , Follow-Up Studies , Humans , Image Processing, Computer-Assisted , Machine Learning , Male , Middle Aged , Perfusion Imaging , Predictive Value of Tests , Tomography, X-Ray Computed
19.
Phys Med Biol ; 65(21): 215013, 2020 11 05.
Article in English | MEDLINE | ID: mdl-32604080

ABSTRACT

Stroke lesion volume is a key radiologic measurement in assessing the prognosis of acute ischemic stroke (AIS) patients. The aim of this paper is to develop an automated segmentation method for accurately segmenting follow-up ischemic and hemorrhagic lesions from multislice non-contrast CT (NCCT) volumes of AIS patients. This paper proposes a 2D dense multi-path contextual generative adversarial network (MPC-GAN) in which a dense multi-path 2D U-Net is utilized as the generator and a discriminator network is applied to regularize the generator. Contextual information (i.e., bilateral intensity difference, distance map, and lesion location probability) is input into the generator and discriminator. The proposed method is validated separately on follow-up NCCT volumes of 60 patients with ischemic infarcts and NCCT volumes of 70 patients with hemorrhages. Quantitative results demonstrated that the proposed MPC-GAN method obtained a Dice coefficient (DC) of 70.6% for ischemic infarct segmentation and a DC of 76.5% for hemorrhage segmentation compared with manually segmented lesions, outperforming several benchmark methods. Additional volumetric analyses demonstrated that the MPC-GAN segmented lesion volume correlated well with manual measurements (Pearson correlation coefficients were 0.926 and 0.927 for ischemic infarcts and hemorrhages, respectively). The proposed MPC-GAN method can accurately segment ischemic infarcts and hemorrhages from NCCT volumes of AIS patients.
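The Dice coefficient (DC) reported above is the standard overlap metric for comparing an automated segmentation mask against a manual one: 2|A∩B| / (|A| + |B|). A minimal sketch, using small toy masks in place of real NCCT segmentations:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Toy 2D masks standing in for a predicted and a manual lesion contour
pred = np.zeros((10, 10), dtype=bool)
pred[2:6, 2:6] = True    # 16 voxels
manual = np.zeros((10, 10), dtype=bool)
manual[3:7, 3:7] = True  # 16 voxels, 9 of which overlap pred
# dice_coefficient(pred, manual) -> 2*9/(16+16) = 0.5625
```

A DC of 70.6%, as reported for infarct segmentation, thus means the predicted and manual masks share roughly 70% of their combined extent.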


Subject(s)
Image Processing, Computer-Assisted/methods , Neural Networks, Computer , Stroke/diagnostic imaging , Tomography, X-Ray Computed , Female , Humans , Male
20.
Radiology ; 294(3): 638-644, 2020 03.
Article in English | MEDLINE | ID: mdl-31990267

ABSTRACT

Background Identifying the presence and extent of infarcted brain tissue at baseline plays a crucial role in the treatment of patients with acute ischemic stroke (AIS). Patients with extensive infarction are unlikely to benefit from thrombolysis or thrombectomy procedures. Purpose To develop an automated approach to detect and quantitate infarction by using non-contrast-enhanced CT scans in patients with AIS. Materials and Methods Non-contrast-enhanced CT images in patients with AIS (<6 hours from symptom onset to CT) who also underwent diffusion-weighted (DW) MRI within 1 hour after AIS were obtained from May 2004 to July 2009 and were included in this retrospective study. Ischemic lesions manually contoured on DW MRI scans were used as the reference standard. An automatic segmentation approach involving machine learning (ML) was developed to detect infarction. Randomly selected nonenhanced CT images from 157 patients with the lesion labels manually contoured on DW MRI scans were used to train and validate the ML model; the remaining 100 patients independent of the derivation cohort were used for testing. The ML algorithm was quantitatively compared with the reference standard (DW MRI) by using Bland-Altman plots and Pearson correlation. Results In 100 patients in the testing data set (median age, 69 years; interquartile range [IQR]: 59-76 years; 59 men), baseline non-contrast-enhanced CT was performed within a median time of 48 minutes from symptom onset (IQR, 27-93 minutes); baseline MRI was performed a median of 38 minutes (IQR, 24-48 minutes) later. The algorithm-detected lesion volume correlated with the reference standard of expert-contoured lesion volume in acute DW MRI scans (r = 0.76, P < .001). The mean difference between the algorithm-segmented volume (median, 15 mL; IQR, 9-38 mL) and the DW MRI volume (median, 19 mL; IQR, 5-43 mL) was 11 mL (P = .89). Conclusion A machine learning approach for segmentation of infarction on non-contrast-enhanced CT images in patients with acute ischemic stroke showed good agreement with stroke volume on diffusion-weighted MRI scans. © RSNA, 2020 Online supplemental material is available for this article. See also the editorial by Nael in this issue.
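The agreement analysis above combines Pearson correlation with Bland-Altman statistics (mean difference and limits of agreement between the two volume measurements). A minimal sketch with made-up paired volumes; all numbers are hypothetical, not the study's data:

```python
import numpy as np

# Hypothetical paired lesion volumes (mL): algorithm on NCCT vs. DW MRI
alg_ml = np.array([15.0, 22.0, 9.0, 38.0, 50.0, 12.0, 27.0, 44.0])
dwi_ml = np.array([19.0, 20.0, 5.0, 43.0, 55.0, 10.0, 30.0, 40.0])

# Pearson correlation between the two measurements
r = np.corrcoef(alg_ml, dwi_ml)[0, 1]

# Bland-Altman statistics: mean difference and 95% limits of agreement
diff = alg_ml - dwi_ml
mean_diff = diff.mean()
sd_diff = diff.std(ddof=1)
loa = (mean_diff - 1.96 * sd_diff, mean_diff + 1.96 * sd_diff)
```

Plotting `diff` against the pairwise means, with horizontal lines at `mean_diff` and the two `loa` bounds, yields the Bland-Altman plot used in the study's comparison.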


Subject(s)
Brain Infarction/diagnostic imaging , Machine Learning , Stroke/diagnostic imaging , Tomography, X-Ray Computed/methods , Aged , Algorithms , Brain/diagnostic imaging , Female , Humans , Male , Middle Aged