Results 1 - 20 of 31
1.
IEEE Trans Med Imaging ; PP, 2024 Jun 26.
Article in English | MEDLINE | ID: mdl-38923479

ABSTRACT

Intrathoracic airway segmentation in computed tomography is a prerequisite for the analysis of various respiratory diseases such as chronic obstructive pulmonary disease, asthma and lung cancer. Due to the low imaging contrast and noise exacerbated at peripheral branches, the topological complexity and the intra-class imbalance of the airway tree, it remains challenging for deep learning-based methods to segment the complete airway tree (i.e., to extract the deeper branches). Unlike other organs with simpler shapes or topology, the airway's complex tree structure makes generating the "ground truth" label extremely burdensome (up to 7 hours of manual or 3 hours of semi-automatic annotation per case). Most existing airway datasets are incompletely labeled/annotated, thus limiting the completeness of computer-segmented airways. In this paper, we propose a new anatomy-aware multi-class airway segmentation method enhanced by topology-guided iterative self-learning. Based on the natural airway anatomy, we formulate a simple yet highly effective anatomy-aware multi-class segmentation task to intuitively handle the severe intra-class imbalance of the airway. To solve the incomplete labeling issue, we propose a tailored iterative self-learning scheme to segment toward the complete airway tree. To generate pseudo-labels with higher sensitivity (while retaining similar specificity), we introduce a novel breakage attention map and design a topology-guided pseudo-label refinement method that iteratively connects the broken branches commonly found in initial pseudo-labels. Extensive experiments have been conducted on four datasets including two public challenges. The proposed method achieves the top performance in both the EXACT'09 challenge (by average score) and the ATM'22 challenge (by weighted average score).
On a public BAS dataset and a private lung cancer dataset, our method significantly improves upon previous leading approaches, detecting at least 6.1% (absolute) more tree length and 5.2% more tree branches while maintaining comparable precision.
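The tree-length metric cited above is commonly computed as the fraction of reference-centerline voxels covered by the predicted segmentation; a minimal sketch (function name and the unit-voxel-spacing assumption are ours, not the paper's):

```python
import numpy as np

def tree_length_detected(pred_seg, ref_centerline):
    """Fraction of reference centerline voxels covered by the predicted
    segmentation (the 'tree length detected' metric, assuming unit voxel
    spacing so length is proportional to voxel count)."""
    ref = ref_centerline.astype(bool)
    return np.logical_and(pred_seg.astype(bool), ref).sum() / ref.sum()

# Toy 1-voxel-thick centerline partially covered by a prediction.
ref = np.zeros((1, 1, 10), bool); ref[0, 0, :10] = True
pred = np.zeros((1, 1, 10), bool); pred[0, 0, :8] = True
print(tree_length_detected(pred, ref))  # 0.8
```

With anisotropic voxels, each centerline voxel would instead be weighted by its physical step length.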

2.
Article in English | MEDLINE | ID: mdl-38687670

ABSTRACT

Automated colorectal cancer (CRC) segmentation in medical imaging is the key to achieving automation of CRC detection, staging, and treatment response monitoring. Compared with magnetic resonance imaging (MRI) and computed tomography colonography (CTC), conventional computed tomography (CT) has enormous potential because of its broad implementation, superiority for the hollow viscera (colon), and convenience without needing bowel preparation. However, the segmentation of CRC in conventional CT is more challenging due to the difficulties presented by the unprepared bowel, such as distinguishing the colorectum from other structures with similar appearance and distinguishing the CRC from the contents of the colorectum. To tackle these challenges, we introduce DeepCRC-SL, the first automated segmentation algorithm for CRC and colorectum in conventional contrast-enhanced CT scans. We propose a topology-aware deep learning-based approach, which builds a novel 1-D colorectal coordinate system and encodes each voxel of the colorectum with a relative position along the coordinate system. We then induce an auxiliary regression task to predict the colorectal coordinate value of each voxel, aiming to integrate global topology into the segmentation network and thus improve the colorectum's continuity. Self-attention layers are utilized to capture global contexts for the coordinate regression task and enhance the ability to differentiate CRC and colorectum tissues. Moreover, a coordinate-driven self-learning (SL) strategy is introduced to leverage a large amount of unlabeled data to improve segmentation performance. We validate the proposed approach on a dataset including 227 labeled and 585 unlabeled CRC cases by fivefold cross-validation.
Experimental results demonstrate that our method outperforms several recent related segmentation methods, achieving Dice similarity coefficients (DSC) of 0.669 for CRC and 0.892 for the colorectum, matching the performance of a medical resident with two years of specialized CRC imaging fellowship (0.639 and 0.890, respectively).
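The DSC figures quoted above follow the standard Dice overlap between binary masks, which can be sketched as:

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity coefficient between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom else 1.0

a = np.array([[1, 1, 0, 0]])
b = np.array([[1, 0, 0, 0]])
print(dice(a, b))  # 2*1/(2+1) = 0.666...
```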

3.
Ann Am Thorac Soc ; 21(7): 1022-1033, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38530051

ABSTRACT

Rationale: Rates of emphysema progression vary in chronic obstructive pulmonary disease (COPD), and the relationships with vascular and airway pathophysiology remain unclear. Objectives: We sought to determine if indices of peripheral (segmental and beyond) pulmonary arterial dilation measured on computed tomography (CT) are associated with a 1-year index of emphysema (EI; percentage of voxels <-950 Hounsfield units) progression. Methods: Five hundred ninety-nine former and never-smokers (Global Initiative for Chronic Obstructive Lung Disease stages 0-3) were evaluated from the SPIROMICS (Subpopulations and Intermediate Outcome Measures in COPD Study) cohort: rapid emphysema progressors (RPs; n = 188, 1-year ΔEI > 1%), nonprogressors (n = 301, 1-year ΔEI within ±0.5%), and never-smokers (n = 110). Segmental pulmonary arterial cross-sectional areas were standardized to associated airway luminal areas (segmental pulmonary artery-to-airway ratio [PAARseg]). Full-inspiratory CT scan-derived total (arteries and veins) pulmonary vascular volume (TPVV) was compared with small vessel volume (radius smaller than 0.75 mm). Ratios of airway to lung volume (an index of dysanapsis and COPD risk) were compared with ratios of TPVV to lung volume. Results: Compared with nonprogressors, RPs exhibited significantly larger PAARseg (0.73 ± 0.29 vs. 0.67 ± 0.23; P = 0.001), lower ratios of TPVV to lung volume (3.21 ± 0.42% vs. 3.48 ± 0.38%; P = 5.0 × 10⁻¹²), lower ratios of airway to lung volume (0.031 ± 0.003 vs. 0.034 ± 0.004; P = 6.1 × 10⁻¹³), and larger ratios of small vessel volume to TPVV (37.91 ± 4.26% vs. 35.53 ± 4.89%; P = 1.9 × 10⁻⁷). In adjusted analyses, an increment of 1 standard deviation in PAARseg was associated with a 98.4% higher rate of severe exacerbations (95% confidence interval, 29-206%; P = 0.002) and 79.3% higher odds of being in the RP group (95% confidence interval, 24-157%; P = 0.001).
At 2-year follow-up, the CT-defined RP group demonstrated a significant decline in postbronchodilator percentage predicted forced expiratory volume in 1 second. Conclusions: Rapid one-year progression of emphysema was associated with indices indicative of higher peripheral pulmonary vascular resistance and a possible role played by pulmonary vascular-airway dysanapsis.
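The emphysema index used above (EI; percentage of voxels below -950 HU) is a simple threshold statistic over lung voxels; a minimal sketch (function name ours):

```python
import numpy as np

def emphysema_index(ct_hu, lung_mask, threshold=-950):
    """Percentage of lung voxels below the HU threshold (the EI, %-950)."""
    lung = ct_hu[lung_mask.astype(bool)]
    return 100.0 * (lung < threshold).mean()

# Toy 2x2 CT slab in Hounsfield units, fully inside the lung mask.
ct = np.array([[-980.0, -960.0], [-900.0, -400.0]])
mask = np.ones_like(ct, bool)
print(emphysema_index(ct, mask))  # 50.0
```

The 1-year ΔEI used to define progressor groups is then just the difference of this value between the two scans.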


Subject(s)
Disease Progression , Pulmonary Artery , Pulmonary Emphysema , Tomography, X-Ray Computed , Humans , Male , Female , Pulmonary Emphysema/diagnostic imaging , Pulmonary Emphysema/physiopathology , Aged , Middle Aged , Pulmonary Artery/diagnostic imaging , Pulmonary Artery/physiopathology , Lung/diagnostic imaging , Lung/physiopathology , Forced Expiratory Volume , Pulmonary Disease, Chronic Obstructive/physiopathology , Pulmonary Disease, Chronic Obstructive/diagnostic imaging
4.
IEEE Trans Med Imaging ; 43(1): 96-107, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37399157

ABSTRACT

Deep learning has been widely used in medical image segmentation and related tasks. However, the performance of existing medical image segmentation models has been limited by the challenge of obtaining sufficient high-quality labeled data due to the prohibitive cost of data annotation. To alleviate this limitation, we propose a new text-augmented medical image segmentation model, LViT (Language meets Vision Transformer). In our LViT model, medical text annotation is incorporated to compensate for the quality deficiency in image data. In addition, the text information can guide the generation of higher-quality pseudo labels in semi-supervised learning. We also propose an Exponential Pseudo-label Iteration mechanism (EPI) to help the Pixel-Level Attention Module (PLAM) preserve local image features in the semi-supervised LViT setting. In our model, an LV (Language-Vision) loss is designed to supervise the training on unlabeled images using text information directly. For evaluation, we construct three multimodal medical segmentation datasets (image + text) containing X-ray and CT images. Experimental results show that our proposed LViT has superior segmentation performance in both fully-supervised and semi-supervised settings. The code and datasets are available at https://github.com/HUANGLIZI/LViT.
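The abstract does not spell out the EPI update rule; as a hedged illustration, an exponential moving average over per-pixel pseudo-label probabilities captures the general idea (the β value and the exact formulation here are our assumptions, not the paper's):

```python
import numpy as np

def update_pseudo_label(prev_prob, new_prob, beta=0.9):
    """Exponentially smooth per-pixel pseudo-label probabilities across
    training iterations, so that a single noisy prediction cannot flip
    the pseudo label (a sketch of the idea behind EPI)."""
    return beta * prev_prob + (1.0 - beta) * new_prob

prev = np.full((2, 2), 0.5)   # accumulated pseudo-label probabilities
new = np.full((2, 2), 1.0)    # current model prediction
print(update_pseudo_label(prev, new)[0, 0])  # 0.55
```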


Subject(s)
Language , Supervised Machine Learning , Image Processing, Computer-Assisted
5.
Med Image Anal ; 90: 102957, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37716199

ABSTRACT

Open international challenges are becoming the de facto standard for assessing computer vision and image analysis algorithms. In recent years, new methods have pushed pulmonary airway segmentation closer to the limit of image resolution. Since the EXACT'09 pulmonary airway segmentation challenge, however, limited effort has been directed to the quantitative comparison of newly emerged algorithms, despite the maturity of deep learning-based approaches and extensive clinical efforts toward resolving finer details of distal airways for early intervention in pulmonary diseases. Thus far, public annotated datasets are extremely limited, hindering the development of data-driven methods and detailed performance evaluation of new algorithms. To provide a benchmark for the medical imaging community, we organized the Multi-site, Multi-domain Airway Tree Modeling challenge (ATM'22), held as an official challenge event during the MICCAI 2022 conference. ATM'22 provides large-scale CT scans with detailed pulmonary airway annotation, comprising 500 CT scans (300 for training, 50 for validation, and 150 for testing). The dataset was collected from different sites and further includes a portion of noisy COVID-19 CTs with ground-glass opacity and consolidation. Twenty-three teams participated in the entire phase of the challenge, and the algorithms of the top ten teams are reviewed in this paper. Both quantitative and qualitative results revealed that deep learning models embedding topological continuity enhancement achieved superior performance in general. The ATM'22 challenge remains open for submissions; the training data and the gold-standard evaluation are available upon successful registration via its homepage (https://atm22.grand-challenge.org/).


Subject(s)
Lung Diseases , Trees , Humans , Tomography, X-Ray Computed/methods , Image Processing, Computer-Assisted/methods , Algorithms , Lung/diagnostic imaging
6.
Nat Commun ; 13(1): 6137, 2022 10 17.
Article in English | MEDLINE | ID: mdl-36253346

ABSTRACT

Accurate organ-at-risk (OAR) segmentation is critical to reduce radiotherapy complications. Consensus guidelines recommend delineating over 40 OARs in the head-and-neck (H&N). However, prohibitive labor costs cause most institutions to delineate a substantially smaller subset of OARs, neglecting the dose distributions of other OARs. Here, we present an automated and highly effective stratified OAR segmentation (SOARS) system using deep learning that precisely delineates a comprehensive set of 42 H&N OARs. We train SOARS using 176 patients from an internal institution and independently evaluate it on 1327 external patients across six different institutions. It consistently outperforms other state-of-the-art methods by at least 3-5% in Dice score for each institutional evaluation (up to 36% relative distance error reduction). Crucially, multi-user studies demonstrate that 98% of SOARS predictions need only minor or no revisions to achieve clinical acceptance (reducing workloads by 90%). Moreover, segmentation and dosimetric accuracy are within or smaller than the inter-user variation.


Subject(s)
Head and Neck Neoplasms , Organs at Risk , Head and Neck Neoplasms/radiotherapy , Humans , Image Processing, Computer-Assisted/methods , Neck , Radiometry
8.
IEEE Trans Med Imaging ; 41(10): 2658-2669, 2022 10.
Article in English | MEDLINE | ID: mdl-35442886

ABSTRACT

Radiological images such as computed tomography (CT) and X-rays render anatomy with intrinsic structures. Being able to reliably locate the same anatomical structure across varying images is a fundamental task in medical image analysis. In principle it is possible to use landmark detection or semantic segmentation for this task, but to work well these require large numbers of labeled data for each anatomical structure and sub-structure of interest. A more universal approach would learn the intrinsic structure from unlabeled images. We introduce such an approach, called Self-supervised Anatomical eMbedding (SAM). SAM generates semantic embeddings for each image pixel that describes its anatomical location or body part. To produce such embeddings, we propose a pixel-level contrastive learning framework. A coarse-to-fine strategy ensures both global and local anatomical information are encoded. Negative sample selection strategies are designed to enhance the embedding's discriminability. Using SAM, one can label any point of interest on a template image and then locate the same body part in other images by simple nearest neighbor searching. We demonstrate the effectiveness of SAM in multiple tasks with 2D and 3D image modalities. On a chest CT dataset with 19 landmarks, SAM outperforms widely-used registration algorithms while only taking 0.23 seconds for inference. On two X-ray datasets, SAM, with only one labeled template image, surpasses supervised methods trained on 50 labeled images. We also apply SAM on whole-body follow-up lesion matching in CT and obtain an accuracy of 91%. SAM can also be applied for improving image registration and initializing CNN weights.
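The nearest-neighbor lookup that SAM uses to locate a template point in a new image can be sketched with cosine similarity over per-pixel embeddings (the array shapes, function name, and cosine choice are our assumptions for illustration):

```python
import numpy as np

def locate_point(template_emb, query_emb, point):
    """Given per-pixel embeddings of shape (H, W, C), find the query pixel
    whose embedding is nearest (by cosine similarity) to the template
    pixel at `point`."""
    v = template_emb[point]                          # (C,) template vector
    q = query_emb.reshape(-1, query_emb.shape[-1])   # flatten to (H*W, C)
    v = v / np.linalg.norm(v)
    q = q / np.linalg.norm(q, axis=1, keepdims=True)
    idx = np.argmax(q @ v)                           # best cosine match
    return np.unravel_index(idx, query_emb.shape[:2])

rng = np.random.default_rng(0)
emb = rng.normal(size=(4, 4, 8))
# Identical images: the matched location should be the queried point itself.
print(locate_point(emb, emb, (2, 3)))  # (2, 3)
```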


Subject(s)
Imaging, Three-Dimensional , Tomography, X-Ray Computed , Algorithms , Image Processing, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Radiography , Supervised Machine Learning , Tomography, X-Ray Computed/methods
9.
Clin Imaging ; 77: 291-298, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34171743

ABSTRACT

PURPOSE: To investigate the diagnostic performance of a deep convolutional neural network for differentiation of clear cell renal cell carcinoma (ccRCC) from renal oncocytoma. METHODS: In this retrospective study, 74 patients (49 male, mean age 59.3) with 243 renal masses (203 ccRCC and 40 oncocytoma) that had undergone MR imaging 6 months prior to pathologic confirmation of the lesions were included. Segmentation using seed placement and bounding box selection was used to extract the lesion patches from T2-WI, and T1-WI pre-contrast, post-contrast arterial and venous phases. Then, a deep convolutional neural network (AlexNet) was fine-tuned to distinguish the ccRCC from oncocytoma. Five-fold cross validation was used to evaluate the AI algorithm performance. A subset of 80 lesions (40 ccRCC, 40 oncocytoma) were randomly selected to be classified by two radiologists and their performance was compared to the AI algorithm. Intra-class correlation coefficient was calculated using the Shrout-Fleiss method. RESULTS: Overall accuracy of the AI system was 91% for differentiation of ccRCC from oncocytoma with an area under the curve of 0.9. For the observer study on 80 randomly selected lesions, there was moderate agreement between the two radiologists and AI algorithm. In the comparison sub-dataset, classification accuracies were 81%, 78%, and 70% for AI, radiologist 1, and radiologist 2, respectively. CONCLUSION: The developed AI system in this study showed high diagnostic performance in differentiation of ccRCC versus oncocytoma on multi-phasic MRIs.
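The five-fold cross-validation protocol used to evaluate the AI algorithm can be sketched generically (the study's actual fold assignment and random seed are not specified; this is a standard split, not the authors' code):

```python
import numpy as np

def five_fold_indices(n, seed=0):
    """Shuffle sample indices and split them into 5 roughly equal folds,
    yielding (train_idx, test_idx) pairs for cross-validation."""
    idx = np.random.default_rng(seed).permutation(n)
    folds = np.array_split(idx, 5)
    for i in range(5):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(5) if j != i])
        yield train, test

# 243 renal masses, as in the study above.
sizes = [len(test) for _, test in five_fold_indices(243)]
print(sizes)  # [49, 49, 49, 48, 48]
```

In practice, splits for lesion-level data should be grouped by patient to avoid leakage between folds.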


Subject(s)
Adenoma, Oxyphilic , Carcinoma, Renal Cell , Deep Learning , Kidney Neoplasms , Adenoma, Oxyphilic/diagnostic imaging , Artificial Intelligence , Carcinoma, Renal Cell/diagnostic imaging , Cell Differentiation , Diagnosis, Differential , Humans , Kidney Neoplasms/diagnostic imaging , Magnetic Resonance Imaging , Male , Middle Aged , Retrospective Studies
10.
Med Image Anal ; 68: 101909, 2021 02.
Article in English | MEDLINE | ID: mdl-33341494

ABSTRACT

Gross tumor volume (GTV) and clinical target volume (CTV) delineation are two critical steps in cancer radiotherapy planning. GTV defines the primary treatment area of the gross tumor, while CTV outlines the sub-clinical malignant disease. Automatic GTV and CTV segmentation are both challenging for distinct reasons: GTV segmentation relies on the radiotherapy computed tomography (RTCT) image appearance, which suffers from poor contrast with the surrounding tissues, while CTV delineation relies on a mixture of predefined and judgement-based margins. High intra- and inter-user variability makes this a particularly difficult task. We develop tailored methods solving each task in esophageal cancer radiotherapy, together leading to a comprehensive solution for the target contouring task. Specifically, we integrate the RTCT and positron emission tomography (PET) modalities into a two-stream chained deep fusion framework that takes advantage of both modalities to facilitate more accurate GTV segmentation. For CTV segmentation, since it is highly context-dependent (it must encompass the GTV and involved lymph nodes while avoiding excessive exposure to the organs at risk), we formulate it as a deep contextual appearance-based problem using encoded spatial distances of these anatomical structures. This better emulates the margin- and appearance-based CTV delineation performed by oncologists. Adding to our contributions, for GTV segmentation we propose a simple yet effective progressive semantically-nested network (PSNN) backbone that outperforms more complicated models. Our work is the first to provide a comprehensive solution for esophageal GTV and CTV segmentation in radiotherapy planning. Extensive 4-fold cross-validation on 148 esophageal cancer patients, the largest analysis to date, was carried out for both tasks.
The results demonstrate that our GTV and CTV segmentation approaches significantly improve over previous state-of-the-art work, e.g., with an 8.7% increase in Dice score (DSC) and a 32.9 mm reduction in Hausdorff distance (HD) for GTV segmentation, and a 3.4% increase in DSC and a 29.4 mm reduction in HD for CTV segmentation.
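The "encoded spatial distances" used for CTV context can be illustrated with a Euclidean distance transform of each anatomical mask; stacking one such map per structure (GTV, lymph nodes, organs at risk) yields the spatial context channels. This is a sketch of the general technique (the paper's exact encoding may differ):

```python
import numpy as np
from scipy import ndimage

def distance_encoding(mask, spacing=(1.0, 1.0, 1.0)):
    """Euclidean distance (in physical units) from every voxel to a binary
    structure, e.g. the GTV or an organ at risk."""
    return ndimage.distance_transform_edt(~mask.astype(bool),
                                          sampling=spacing)

# A single GTV voxel in the center of a small volume.
gtv = np.zeros((1, 5, 5), bool)
gtv[0, 2, 2] = True
d = distance_encoding(gtv)
print(d[0, 2, 4])  # 2.0: two voxels away from the structure
```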


Subject(s)
Esophageal Neoplasms , Radiotherapy Planning, Computer-Assisted , Esophageal Neoplasms/diagnostic imaging , Esophageal Neoplasms/radiotherapy , Humans , Positron-Emission Tomography , Tomography, X-Ray Computed , Tumor Burden
11.
IEEE Trans Med Imaging ; 40(10): 2759-2770, 2021 10.
Article in English | MEDLINE | ID: mdl-33370236

ABSTRACT

Large-scale datasets with high-quality labels are desired for training accurate deep learning models. However, due to the annotation cost, datasets in medical imaging are often either partially-labeled or small. For example, DeepLesion is such a large-scale CT image dataset with lesions of various types, but it also has many unlabeled lesions (missing annotations). When training a lesion detector on a partially-labeled dataset, the missing annotations will generate incorrect negative signals and degrade the performance. Besides DeepLesion, there are several small single-type datasets, such as LUNA for lung nodules and LiTS for liver tumors. These datasets have heterogeneous label scopes, i.e., different lesion types are labeled in different datasets with other types ignored. In this work, we aim to develop a universal lesion detection algorithm to detect a variety of lesions. The problem of heterogeneous and partial labels is tackled. First, we build a simple yet effective lesion detection framework named Lesion ENSemble (LENS). LENS can efficiently learn from multiple heterogeneous lesion datasets in a multi-task fashion and leverage their synergy by proposal fusion. Next, we propose strategies to mine missing annotations from partially-labeled datasets by exploiting clinical prior knowledge and cross-dataset knowledge transfer. Finally, we train our framework on four public lesion datasets and evaluate it on 800 manually-labeled sub-volumes in DeepLesion. Our method brings a relative improvement of 49% compared to the current state-of-the-art approach in the metric of average sensitivity. We have publicly released our manual 3D annotations of DeepLesion online (https://github.com/viggin/DeepLesion_manual_test_set).
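The abstract does not detail how LENS fuses proposals; one common way to merge box proposals pooled from several detector heads is greedy non-maximum suppression, sketched here purely as an illustration (not the paper's exact fusion method):

```python
import numpy as np

def nms(boxes, scores, iou_thr=0.5):
    """Greedy non-maximum suppression over [x1, y1, x2, y2] boxes pooled
    from several detectors; returns indices of the kept boxes."""
    order = np.argsort(scores)[::-1]   # highest score first
    keep = []
    while order.size:
        i = order[0]
        keep.append(int(i))
        if order.size == 1:
            break
        rest = order[1:]
        # Intersection of the current box with all remaining boxes.
        x1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        y1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        y2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area = lambda b: (b[:, 2] - b[:, 0]) * (b[:, 3] - b[:, 1])
        iou = inter / (area(boxes[[i]])[0] + area(boxes[rest]) - inter)
        order = rest[iou <= iou_thr]   # drop heavily overlapping boxes
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [20, 20, 30, 30]], float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))  # [0, 2]
```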


Subject(s)
Algorithms , Tomography, X-Ray Computed , Radiography
12.
Front Oncol ; 11: 785788, 2021.
Article in English | MEDLINE | ID: mdl-35141147

ABSTRACT

BACKGROUND: The current clinical workflow for esophageal gross tumor volume (GTV) contouring relies on manual delineation, with high labor costs and inter-user variability. PURPOSE: To validate the clinical applicability of a deep learning multimodality esophageal GTV contouring model developed at one institution and tested at multiple institutions. MATERIALS AND METHODS: We retrospectively collected 606 patients with esophageal cancer from four institutions. Among them, 252 patients from institution 1 had both a treatment planning CT (pCT) and a pair of diagnostic FDG-PET/CT scans; 354 patients from the three other institutions had only pCT scans, acquired under different staging protocols or at sites lacking PET scanners. A two-stream deep learning model for GTV segmentation was developed using the pCT and PET/CT scans of a subset (148 patients) from institution 1. The built model has the flexibility to segment GTVs from pCT alone or from pCT+PET/CT combined when available. For independent evaluation, the remaining 104 patients from institution 1 served as an unseen internal testing set, and the 354 patients from the other three institutions were used for external testing. Degrees of manual revision were further evaluated by human experts to assess the contour-editing effort. Furthermore, the deep model's performance was compared against that of four radiation oncologists in a multi-user study using 20 randomly chosen external patients. Contouring accuracy and time were recorded for the pre- and post-deep-learning-assisted delineation processes.

13.
Front Radiol ; 1: 661237, 2021.
Article in English | MEDLINE | ID: mdl-37492171

ABSTRACT

Purpose: Computed tomography (CT) characteristics associated with critical outcomes of patients with coronavirus disease 2019 (COVID-19) have been reported. However, CT risk factors for mortality have not been directly reported. We aim to determine the CT-based quantitative predictors for COVID-19 mortality. Methods: In this retrospective study, laboratory-confirmed COVID-19 patients at Wuhan Central Hospital between December 9, 2019, and March 19, 2020, were included. A novel prognostic biomarker, V-HU score, depicting the volume (V) of total pneumonia infection and the average Hounsfield unit (HU) of consolidation areas was automatically quantified from CT by an artificial intelligence (AI) system. Cox proportional hazards models were used to investigate risk factors for mortality. Results: The study included 238 patients (women 136/238, 57%; median age, 65 years, IQR 51-74 years), 126 of whom were survivors. The V-HU score was an independent predictor (hazard ratio [HR] 2.78, 95% confidence interval [CI] 1.50-5.17; p = 0.001) after adjusting for several COVID-19 prognostic indicators significant in univariable analysis. The prognostic performance of the model containing clinical and outpatient laboratory factors was improved by integrating the V-HU score (c-index: 0.695 vs. 0.728; p < 0.001). Older patients (age ≥ 65 years; HR 3.56, 95% CI 1.64-7.71; p < 0.001) and younger patients (age < 65 years; HR 4.60, 95% CI 1.92-10.99; p < 0.001) could be further risk-stratified by the V-HU score. Conclusions: A combination of an increased volume of total pneumonia infection and high HU value of consolidation areas showed a strong correlation to COVID-19 mortality, as determined by AI quantified CT.

14.
Eur Radiol ; 30(12): 6828-6837, 2020 Dec.
Article in English | MEDLINE | ID: mdl-32683550

ABSTRACT

OBJECTIVE: To develop a fully automated AI system to quantitatively assess the disease severity and disease progression of COVID-19 using thick-section chest CT images. METHODS: In this retrospective study, an AI system was developed to automatically segment and quantify the COVID-19-infected lung regions on thick-section chest CT images. Five hundred thirty-one CT scans from 204 COVID-19 patients were collected from one appointed COVID-19 hospital. The automatically segmented lung abnormalities were compared with the manual segmentations of two experienced radiologists using the Dice coefficient on a randomly selected subset (30 CT scans). Two imaging biomarkers were automatically computed, i.e., the portion of infection (POI) and the average infection HU (iHU), to assess disease severity and disease progression. The assessments were compared with the patient status in diagnosis reports and key phrases extracted from radiology reports using the area under the receiver operating characteristic curve (AUC) and Cohen's kappa, respectively. RESULTS: The Dice coefficient between the segmentation of the AI system and that of each of the two experienced radiologists for the COVID-19-infected lung abnormalities was 0.74 ± 0.28 and 0.76 ± 0.29, respectively, close to the inter-observer agreement (0.79 ± 0.25). The two computed imaging biomarkers distinguished between the severe and non-severe stages with an AUC of 0.97 (p value < 0.001). Very good agreement (κ = 0.8220) between the AI system and the radiologists was achieved in evaluating the changes in infection volumes. CONCLUSIONS: A deep learning-based AI system built on thick-section CT imaging can accurately quantify the COVID-19-associated lung abnormalities and assess the disease severity and its progression. KEY POINTS: • A deep learning-based AI system was able to accurately segment the lung regions infected by COVID-19 using thick-section CT scans (Dice coefficient ≥ 0.74).
• The computed imaging biomarkers were able to distinguish between the non-severe and severe COVID-19 stages (area under the receiver operating characteristic curve 0.97). • The infection volume changes computed by the AI system were able to assess the COVID-19 progression (Cohen's kappa 0.8220).
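The two biomarkers, POI and iHU, are simple mask-based statistics over the CT volume; a minimal sketch (function name ours):

```python
import numpy as np

def poi_and_ihu(ct_hu, lung_mask, infection_mask):
    """Portion of infection (POI, % of lung volume infected) and average
    infection HU (iHU) from a CT volume and two binary masks."""
    lung = lung_mask.astype(bool)
    inf = infection_mask.astype(bool) & lung
    poi = 100.0 * inf.sum() / lung.sum()
    ihu = ct_hu[inf].mean() if inf.any() else float("nan")
    return poi, ihu

# Toy 2x2 slab: half the lung is infected.
ct = np.array([[-700.0, -100.0], [-800.0, -50.0]])
lung = np.ones_like(ct, bool)
inf = np.array([[False, True], [False, True]])
print(poi_and_ihu(ct, lung, inf))  # (50.0, -75.0)
```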


Subject(s)
Betacoronavirus , Community-Acquired Infections/diagnosis , Coronavirus Infections/diagnosis , Deep Learning , Lung/diagnostic imaging , Pneumonia, Viral/diagnosis , Pneumonia/diagnosis , Tomography, X-Ray Computed/methods , Artificial Intelligence , COVID-19 , China/epidemiology , Disease Progression , Female , Humans , Male , Middle Aged , Pandemics , ROC Curve , Retrospective Studies , SARS-CoV-2
15.
Sci Transl Med ; 11(495)2019 06 05.
Article in English | MEDLINE | ID: mdl-31167928

ABSTRACT

Autoimmune polyendocrinopathy-candidiasis-ectodermal dystrophy (APECED), a monogenic disorder caused by AIRE mutations, presents with several autoimmune diseases. Among these, endocrine organ failure is widely recognized, but the prevalence, immunopathogenesis, and treatment of non-endocrine manifestations such as pneumonitis remain poorly characterized. We enrolled 50 patients with APECED in a prospective observational study and comprehensively examined their clinical and radiographic findings, performed pulmonary function tests, and analyzed immunological characteristics in blood, bronchoalveolar lavage fluid, and endobronchial and lung biopsies. Pneumonitis was found in >40% of our patients, presented early in life, was misdiagnosed despite chronic respiratory symptoms and accompanying radiographic and pulmonary function abnormalities, and caused hypoxemic respiratory failure and death. Autoantibodies against BPIFB1 and KCNRG and the homozygous c.967_979del13 AIRE mutation are associated with pneumonitis development. APECED pneumonitis features compartmentalized immunopathology, with accumulation of activated neutrophils in the airways and lymphocytic infiltration in intraepithelial, submucosal, peribronchiolar, and interstitial areas. Beyond APECED, we extend these observations to lung disease seen in other conditions with secondary AIRE deficiency (thymoma and RAG deficiency). Aire-deficient mice had similar compartmentalized cellular immune responses in the airways and lung tissue, which was ameliorated by deficiency of T and B lymphocytes. Accordingly, T and B lymphocyte-directed immunomodulation controlled symptoms and radiographic abnormalities and improved pulmonary function in patients with APECED pneumonitis. 
Collectively, our findings unveil lung autoimmunity as a common, early, and unrecognized manifestation of APECED and provide insights into the immunopathogenesis and treatment of pulmonary autoimmunity associated with impaired central immune tolerance.


Subject(s)
Autoimmune Diseases/immunology , Autoimmune Diseases/pathology , Autoimmunity/physiology , Lymphocytes/immunology , Pneumonia/immunology , Pneumonia/pathology , Adolescent , Adult , Autoantibodies/immunology , Autoimmune Diseases/metabolism , B-Lymphocytes/immunology , B-Lymphocytes/metabolism , Child , Child, Preschool , Female , Humans , Infant , Infant, Newborn , Lymphocytes/metabolism , Male , Middle Aged , Pneumonia/metabolism , Prospective Studies , T-Lymphocytes/immunology , T-Lymphocytes/metabolism , Young Adult
16.
J Med Imaging (Bellingham) ; 6(2): 024007, 2019 Apr.
Article in English | MEDLINE | ID: mdl-31205977

ABSTRACT

Accurate and automated segmentation of the prostate whole gland and central gland on MR images is essential for aiding any prostate cancer diagnosis system. Our work presents a 2-D orthogonal deep learning method to automatically segment the whole prostate and central gland from axial-only T2-weighted MR images. The proposed method can generate high-density 3-D surfaces from low-resolution (z-axis) MR images. In the past, most methods have focused on axial images alone, e.g., 2-D-based segmentation of the prostate from each 2-D slice. Those methods suffer from over-segmenting or under-segmenting the prostate at the apex and base, which is a major source of error. The proposed method leverages the orthogonal context to effectively reduce the apex and base segmentation ambiguities. It also overcomes the jittering or stair-step surface artifacts that arise when constructing a 3-D surface from 2-D segmentations or from direct 3-D segmentation approaches, such as 3-D U-Net. The experimental results demonstrate that the proposed method achieves a Dice similarity coefficient (DSC) of 92.4% ± 3% for the prostate and 90.1% ± 4.6% for the central gland without trimming any ending contours at the apex and base. The experiments illustrate the feasibility and robustness of the 2-D-based holistically nested networks with short connections for MR prostate and central gland segmentation. The proposed method achieves segmentation results on par with the current literature.

17.
IEEE Trans Med Imaging ; 38(11): 2556-2568, 2019 11.
Article in English | MEDLINE | ID: mdl-30908194

ABSTRACT

Quantification of cerebral white matter hyperintensities (WMH) of presumed vascular origin is of key importance in many neurological research studies. Currently, measurements are often still obtained from manual segmentations on brain MR images, which is a laborious procedure. Automatic WMH segmentation methods exist, but a standardized comparison of their performance is lacking. We organized a scientific challenge in which developers could evaluate their methods on a standardized multi-center/-scanner image dataset, giving an objective comparison: the WMH Segmentation Challenge. Sixty T1 + FLAIR images from three MR scanners were released with manual WMH segmentations for training. A test set of 110 images from five MR scanners was used for evaluation. The segmentation methods had to be containerized and submitted to the challenge organizers. Five evaluation metrics were used to rank the methods: 1) Dice similarity coefficient; 2) modified Hausdorff distance (95th percentile); 3) absolute log-transformed volume difference; 4) sensitivity for detecting individual lesions; and 5) F1-score for individual lesions. In addition, the methods were ranked on their inter-scanner robustness. Twenty participants submitted their methods for evaluation. This paper provides a detailed analysis of the results. In brief, there is a cluster of four methods that rank significantly better than the others, with one clear winner. The inter-scanner robustness ranking shows that not all methods generalize to unseen scanners. The challenge remains open for future submissions and provides a public platform for method evaluation.
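The lesion-wise F1 metric (item 5 above) is typically computed over connected components; a sketch under the assumption that any voxel of overlap counts as a detection (the challenge's exact matching rule may differ):

```python
import numpy as np
from scipy import ndimage

def lesion_f1(pred, gt):
    """Lesion-wise F1: each connected component of the ground truth counts
    as detected if any predicted voxel overlaps it, and each predicted
    component with no ground-truth overlap counts as a false positive."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    gt_lab, n_gt = ndimage.label(gt)
    pr_lab, n_pr = ndimage.label(pred)
    tp = sum(1 for i in range(1, n_gt + 1) if pred[gt_lab == i].any())
    fp = sum(1 for j in range(1, n_pr + 1) if not gt[pr_lab == j].any())
    fn = n_gt - tp
    return 2 * tp / (2 * tp + fp + fn) if (tp + fp + fn) else 1.0

gt = np.array([[1, 1, 0, 0, 1]])    # two ground-truth lesions
pred = np.array([[0, 1, 0, 1, 0]])  # one hit, one false positive
print(lesion_f1(pred, gt))  # TP=1, FP=1, FN=1 -> 0.5
```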


Subject(s)
Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; White Matter/diagnostic imaging; Aged; Algorithms; Female; Humans; Male; Middle Aged
18.
IEEE Trans Vis Comput Graph ; 24(8): 2298-2314, 2018 08.
Article in English | MEDLINE | ID: mdl-28809701

ABSTRACT

Skeletonization offers a compact representation of an object while preserving important topological and geometrical features. The literature on skeletonization of binary objects is quite mature; however, the challenges involved in skeletonizing fuzzy objects remain largely unaddressed. This paper presents a new theory and algorithm of skeletonization for fuzzy objects, evaluates its performance, and demonstrates its applications. A formulation of fuzzy grassfire propagation is introduced; its relationships with fuzzy distance functions, level sets, and geodesics are discussed; and several new theoretical results are presented in the continuous space. A notion of collision-impact of fire-fronts at skeletal points is introduced, and its role in filtering noisy skeletal points is demonstrated. A fuzzy object skeletonization algorithm is developed using new notions of surface- and curve-skeletal voxels, digital collision-impact, filtering of noisy skeletal voxels, and continuity of skeletal surfaces. A skeletal noise pruning algorithm is presented using branch-level significance. Accuracy and robustness of the new algorithm are examined on computer-generated phantoms and on micro- and conventional CT imaging of trabecular bone specimens. An application of fuzzy object skeletonization to compute structure-width at low image resolution is demonstrated, and its ability to predict bone strength is examined. Finally, the performance of the new fuzzy object skeletonization algorithm is compared with two binary object skeletonization methods.
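The fuzzy grassfire propagation described above is closely related to the fuzzy distance transform, which can be approximated with Dijkstra-style propagation where each step costs the mean membership of its two endpoints times the step length. This is only a minimal 2-D sketch under that assumed cost model, not the paper's algorithm:

```python
import heapq
import numpy as np

def fuzzy_distance_transform(membership):
    """Approximate fuzzy distance transform on a 2-D membership grid.

    The fuzzy distance of a pixel is the minimum, over paths from the
    background (membership == 0), of summed edge costs, where an edge
    costs the mean membership of its endpoints times the step length.
    """
    m = np.asarray(membership, dtype=float)
    dist = np.full(m.shape, np.inf)
    heap = []
    # Seed the propagation: background pixels have distance 0.
    for idx in zip(*np.where(m == 0)):
        dist[idx] = 0.0
        heapq.heappush(heap, (0.0, idx))
    sq2 = 2 ** 0.5
    steps = [(-1, 0, 1.0), (1, 0, 1.0), (0, -1, 1.0), (0, 1, 1.0),
             (-1, -1, sq2), (-1, 1, sq2), (1, -1, sq2), (1, 1, sq2)]
    while heap:
        d, (i, j) = heapq.heappop(heap)
        if d > dist[i, j]:
            continue  # stale heap entry
        for di, dj, step in steps:
            ni, nj = i + di, j + dj
            if 0 <= ni < m.shape[0] and 0 <= nj < m.shape[1]:
                nd = d + step * 0.5 * (m[i, j] + m[ni, nj])
                if nd < dist[ni, nj]:
                    dist[ni, nj] = nd
                    heapq.heappush(heap, (nd, (ni, nj)))
    return dist
```

On a binary object (membership 0 or 1) this reduces to an ordinary chamfer-style distance transform; local maxima of the fuzzy distance are natural candidates for skeletal points.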


Subject(s)
Algorithms; Computer Graphics/statistics & numerical data; Fuzzy Logic; Animals; Bone and Bones/diagnostic imaging; Bone and Bones/physiology; Computer Simulation; Humans; Models, Anatomic; Models, Statistical; Phantoms, Imaging/statistics & numerical data; Tomography, X-Ray Computed/statistics & numerical data; X-Ray Microtomography/statistics & numerical data
19.
Med Phys ; 45(1): 236-249, 2018 Jan.
Article in English | MEDLINE | ID: mdl-29064579

ABSTRACT

PURPOSE: Osteoporosis, associated with reduced bone mineral density (BMD) and microarchitectural changes, puts patients at an elevated risk of fracture. Modern multidetector row CT (MDCT) technology, producing high spatial resolution at increasingly lower radiation dose, is emerging as a viable modality for trabecular bone (Tb) imaging. Wide variation among CT scanners raises concerns about data uniformity in multisite and longitudinal studies. A comprehensive cadaveric study was performed to evaluate MDCT-derived Tb microarchitectural measures. A human pilot study was performed comparing the continuity of Tb measures estimated from two MDCT scanners with significantly different image resolution features. METHODS: Micro-CT imaging of cadaveric ankle specimens (n = 25) was used to examine the validity of MDCT-derived Tb microarchitectural measures. Repeat-scan reproducibility of MDCT-based Tb measures and their ability to predict mechanical properties were examined. To assess multiscanner data continuity of Tb measures, the distal tibias of 20 volunteers (age: 26.2 ± 4.5 years; 10 female) were scanned using the Siemens SOMATOM Definition Flash and the higher resolution Siemens SOMATOM Force scanners, with an average 45-day gap between scans. The correlation of Tb measures derived from the two scanners over 30% and 60% peel regions at the 4% to 8% distal tibia was analyzed. RESULTS: MDCT-based Tb measures characterizing bone network area density, plate-rod microarchitecture, and transverse trabeculae showed good correlations (r ∈ [0.85, 0.92]) with the gold-standard micro-CT-derived values of matching Tb measures. However, other MDCT-derived Tb measures characterizing trabecular thickness and separation, erosion index, and structure model index produced weak correlations (r < 0.8) with their micro-CT-derived values. Most MDCT Tb measures were found repeatable (ICC ∈ [0.94, 0.98]).
The Tb plate-width measure showed a strong correlation (r = 0.89) with experimental yield stress, while the transverse trabecular measure produced the highest correlation (r = 0.81) with Young's modulus. The data continuity experiment showed that, despite significant differences in image resolution between the two scanners (10% MTF in the xy-plane and z-direction: Flash, 16.2 and 17.9 lp/cm; Force, 24.8 and 21.0 lp/cm), most Tb measures had high Pearson correlations (r > 0.95) between values estimated from the two scanners. Relatively lower correlation coefficients were observed for the bone network area density (r = 0.91) and Tb separation (r = 0.93) measures. CONCLUSION: Most MDCT-derived Tb microarchitectural measures are reproducible, and their values derived from two scanners correlate strongly with each other as well as with bone strength. This study has highlighted those MDCT-derived measures which show the greatest promise for characterization of bone network area density and plate-rod and transverse trabecular distributions, with good correlation (r ≥ 0.85) against their micro-CT-derived values. At the same time, other measures representing trabecular thickness and separation, erosion index, and structure model index produced weak correlations (r < 0.8) with their micro-CT-derived values, failing to accurately portray the targeted trabecular microarchitectural features. Strong correlations of Tb measures estimated from two scanners suggest that image data from different scanners can be used successfully in multisite and longitudinal studies, with linear calibration required for some measures. In summary, modern MDCT scanners are suitable for effective quantitative imaging of peripheral Tb microarchitecture if care is taken to focus on appropriate quantitative metrics.
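The cross-scanner continuity analysis, a Pearson correlation plus the linear calibration mentioned in the conclusion, can be sketched as follows; the function name and array layout are assumptions for illustration:

```python
import numpy as np

def cross_scanner_continuity(x, y):
    """Pearson correlation and linear calibration between two scanners.

    x, y: the same trabecular measure estimated for the same subjects
    on scanner A and scanner B, respectively.
    Returns (r, slope, intercept) such that y ≈ slope * x + intercept,
    i.e. the least-squares calibration mapping scanner A to scanner B.
    """
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    r = np.corrcoef(x, y)[0, 1]
    slope, intercept = np.polyfit(x, y, 1)
    return r, slope, intercept
```

A high r with slope far from 1 (or a nonzero intercept) is exactly the case where pooling multiscanner data would require the linear calibration the abstract refers to.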


Subject(s)
Bone and Bones/diagnostic imaging; X-Ray Microtomography/methods; Adult; Aged; Ankle/diagnostic imaging; Female; Humans; Male; Reproducibility of Results
20.
Phys Med Biol ; 61(18): N478-N496, 2016 09 21.
Article in English | MEDLINE | ID: mdl-27541945

ABSTRACT

Osteoporosis is associated with an increased risk of fractures and is clinically defined by low bone mineral density. Increasing evidence suggests that trabecular bone (TB) micro-architecture is an important determinant of bone strength and fracture risk. We present an improved volumetric topological analysis (VTA) algorithm based on fuzzy skeletonization, report results of its application to in vivo MR imaging, and compare its performance with digital topological analysis. The new VTA method eliminates data loss in the binarization step and yields accurate and robust measures of local plate-width for individual trabeculae, which allows classification of TB structures on the continuum between perfect plates and rods. The repeat-scan reproducibility of the method was evaluated on in vivo MRI of the distal femur and distal radius, and high intra-class correlation coefficients between 0.93 and 0.97 were observed. The method's ability to detect treatment effects on TB micro-architecture was examined in a 2-year testosterone study of hypogonadal men. Average plate-width and plate-to-rod ratio improved significantly after 6 months, and the improvement continued at 12 and 24 months. The bone density of plate-like trabeculae increased by 6.5% (p = 0.06), 7.2% (p = 0.07), and 16.2% (p = 0.003) at 6, 12, and 24 months, respectively, while the density of rod-like trabeculae did not change significantly, even at 24 months. A comparative study showed that VTA has an enhanced ability to detect treatment effects on TB micro-architecture, compared with the conventional digital topological analysis method for plate/rod characterization, in terms of both percent change and effect size.
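The intra-class correlation coefficient (ICC) used above to quantify repeat-scan reproducibility can be sketched with a one-way random-effects estimate, ICC(1,1); the abstract does not state which ICC form was used, so this particular choice is an assumption:

```python
import numpy as np

def icc_oneway(data):
    """One-way random-effects intra-class correlation, ICC(1,1).

    data: (n_subjects, k_repeats) array of repeat measurements.
    ICC = (MSB - MSW) / (MSB + (k - 1) * MSW), where MSB and MSW are
    the between- and within-subject mean squares.
    """
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    subj_means = data.mean(axis=1)
    grand_mean = data.mean()
    msb = k * ((subj_means - grand_mean) ** 2).sum() / (n - 1)
    msw = ((data - subj_means[:, None]) ** 2).sum() / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

Values near 1 indicate that repeat scans of the same subject agree almost as well as the measure discriminates between subjects, which is the sense in which ICCs of 0.93 to 0.97 indicate high reproducibility.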


Subject(s)
Algorithms; Cancellous Bone/pathology; Eunuchism/pathology; Magnetic Resonance Imaging/methods; Osteoporosis/pathology; Radiographic Image Interpretation, Computer-Assisted/methods; Adolescent; Adult; Aged; Aged, 80 and over; Bone Density; Computer Simulation; Female; Follow-Up Studies; Humans; Longitudinal Studies; Male; Middle Aged; Reproducibility of Results; Young Adult