1.
Front Cell Dev Biol ; 10: 1081285, 2022.
Article in English | MEDLINE | ID: mdl-36568975

ABSTRACT

Purpose: To assess alterations in the macular microvasculature of type 2 diabetic patients with peripheral neuropathy (DPN) and without peripheral neuropathy (NDPN) by optical coherence tomography angiography (OCTA), and to explore the correlation between retinal microvascular abnormalities and DPN.

Methods: Twenty-seven healthy controls (42 eyes), 36 NDPN patients (62 eyes), and 27 DPN patients (40 eyes) were included. OCTA was used to image the macula in the superficial vascular complex (SVC) and deep vascular complex (DVC). In addition, a state-of-the-art deep learning method was employed to quantify the microvasculature of the two capillary plexuses in all participants using vascular length density (VLD).

Results: Compared with the healthy control group, the average VLD values of patients with DPN were significantly lower in the SVC (p = 0.010) and DVC (p = 0.011). Compared with NDPN patients, DPN patients showed significantly reduced VLD values in the SVC (p = 0.006) and DVC (p = 0.001). DPN patients also showed lower VLD values (p < 0.05) in the nasal, superior, temporal, and inferior sectors of the inner ring of the SVC when compared with controls; VLD values in NDPN patients were lower in the nasal sector of the inner ring of the SVC (p < 0.05) compared with healthy controls. VLD values in the DVC (AUC = 0.736, p < 0.001) showed a higher ability to discriminate the microvascular damage of DPN than that of NDPN.

Conclusion: Deep-learning-based OCTA quantification could potentially be used in clinical practice as a new indicator in the early diagnosis of DM with and without DPN.

2.
IEEE Trans Med Imaging ; 41(12): 3969-3980, 2022 12.
Article in English | MEDLINE | ID: mdl-36044489

ABSTRACT

Automated detection of retinal structures, such as retinal vessels (RV), the foveal avascular zone (FAZ), and retinal vascular junctions (RVJ), is of great importance for understanding diseases of the eye and for clinical decision-making. In this paper, we propose a novel Voting-based Adaptive Feature Fusion multi-task network (VAFF-Net) for the joint segmentation, detection, and classification of RV, FAZ, and RVJ in optical coherence tomography angiography (OCTA). A task-specific voting gate module is proposed to adaptively extract and fuse different features for specific tasks at two levels: features at different spatial positions from a single encoder, and features from multiple encoders. In particular, since the complexity of the microvasculature in OCTA images makes the simultaneous precise localization and classification of retinal vascular junctions into bifurcations/crossings a challenging task, we specifically design a task head combining heatmap regression and grid classification. We take advantage of three different en face angiograms from various retinal layers, rather than following existing methods that use only a single en face image. We carry out extensive experiments on three OCTA datasets acquired with different imaging devices, and the results demonstrate that the proposed method performs better overall than either state-of-the-art single-purpose methods or existing multi-task learning solutions. We also demonstrate that our multi-task learning method generalizes to other imaging modalities, such as color fundus photography, and may potentially be used as a general multi-task learning tool. In addition, we construct three datasets for multiple structure detection; part of these datasets, together with the source code and an evaluation benchmark, has been released for public access.
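The task-specific voting gate described above can be sketched as a softmax-weighted fusion of per-encoder features. The following is a minimal numpy sketch, not the paper's implementation; the function name `voting_gate_fusion` and the simple linear gate form are assumptions for illustration only.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def voting_gate_fusion(features, gate_scores):
    """Fuse per-encoder features with task-specific softmax votes.

    features:    (n_encoders, d) feature vectors, one per encoder
                 (e.g., one per en face angiogram).
    gate_scores: (n_encoders,) raw task-specific gate scores.
    Returns the vote-weighted sum of features for one task head.
    """
    votes = softmax(gate_scores)             # normalized per-encoder votes
    return (votes[:, None] * features).sum(axis=0)

# three encoders, 4-dim features; the first encoder gets the largest vote
feats = np.array([[1.0, 0.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0, 0.0]])
fused = voting_gate_fusion(feats, np.array([2.0, 1.0, 0.5]))
```

In the actual network the gate scores would themselves be learned from the input, so the fusion adapts per task and per image.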


Subject(s)
Retinal Vessels , Tomography, Optical Coherence , Tomography, Optical Coherence/methods , Fluorescein Angiography/methods , Retinal Vessels/diagnostic imaging , Fundus Oculi , Retina/diagnostic imaging
3.
Med Image Anal ; 75: 102217, 2022 01.
Article in English | MEDLINE | ID: mdl-34775280

ABSTRACT

Parapneumonic effusion (PPE) is a common condition that causes death in patients hospitalized with pneumonia. Rapid distinction of complicated PPE (CPPE) from uncomplicated PPE (UPPE) in computed tomography (CT) scans is of great importance for the management and medical treatment of PPE. However, UPPE and CPPE display similar appearances in CT scans, and it is challenging to distinguish CPPE from UPPE from a single 2D CT image, whether by a human expert or by any of the existing disease classification approaches. 3D convolutional neural networks (CNNs) can utilize the entire 3D volume for classification; however, they typically suffer from the intrinsic defect of overfitting. Therefore, it is important to develop a method that not only overcomes the heavy memory and computational requirements of 3D CNNs, but also leverages the 3D information. In this paper, we propose an uncertainty-guided graph attention network (UG-GAT) that can automatically extract and integrate information from all CT slices in a 3D volume for classification into UPPE, CPPE, and normal control cases. Specifically, we frame the distinction of different cases as a graph classification problem. Each individual is represented as a directed graph with a topological structure, where vertices represent the image features of slices and edges encode the spatial relationships between them. To estimate the contribution of each slice, we first extract slice representations with uncertainty using a Bayesian CNN; we then use the uncertainty information to weight each slice during the graph prediction phase, enabling more reliable decision-making. We construct a dataset of 302 chest CT volumes from different subjects (99 UPPE, 99 CPPE, and 104 normal control cases); to the best of our knowledge, this is the first attempt to classify UPPE, CPPE, and normal cases using a deep learning method. Extensive experiments show that our approach is lightweight in its computational demands and outperforms accepted state-of-the-art methods by a large margin. Code is available at https://github.com/iMED-Lab/UG-GAT.
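The uncertainty-guided slice weighting described above can be sketched as follows: each slice embedding is down-weighted according to its estimated uncertainty before the volume-level prediction is made. This is a minimal numpy sketch under assumed forms; the exponential weighting and the name `uncertainty_weighted_pool` are illustrative, not taken from the paper.

```python
import numpy as np

def uncertainty_weighted_pool(slice_feats, slice_uncertainty):
    """Pool per-slice features, down-weighting uncertain slices.

    slice_feats:       (n_slices, d) slice embeddings (graph vertices).
    slice_uncertainty: (n_slices,) per-slice uncertainty estimates,
                       e.g., predictive variance from a Bayesian CNN
                       with Monte-Carlo dropout.
    Slices with low uncertainty contribute more to the volume-level
    representation used for the final classification.
    """
    w = np.exp(-np.asarray(slice_uncertainty, dtype=float))
    w = w / w.sum()                     # normalize weights to sum to 1
    return w @ slice_feats              # weighted average of embeddings

# two slices; the second is assigned very high uncertainty
feats = np.array([[1.0, 0.0],
                  [0.0, 1.0]])
pooled = uncertainty_weighted_pool(feats, [0.0, 10.0])
```

In the paper the weighting happens inside a graph attention phase rather than a plain average, but the principle, i.e. that more certain slices dominate the decision, is the same.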


Subject(s)
Pleural Effusion , Pneumonia , Bayes Theorem , Diagnosis, Differential , Humans , Pleural Effusion/diagnosis , Pneumonia/diagnosis , Uncertainty
4.
IEEE Trans Med Imaging ; 41(2): 254-265, 2022 02.
Article in English | MEDLINE | ID: mdl-34487491

ABSTRACT

Automatic angle-closure assessment in anterior segment OCT (AS-OCT) images is an important task for the screening and diagnosis of glaucoma, and the most recent computer-aided models focus on a binary classification of anterior chamber angles (ACA) in AS-OCT, i.e., open-angle and angle-closure. To assist clinicians in better understanding the development of the spectrum of glaucoma types, a more discriminating three-class classification scheme was suggested, i.e., the classification of ACA was expanded to include open, appositional, and synechial angles. However, appositional and synechial angles display similar appearances in an AS-OCT image, which makes classification models struggle to differentiate angle-closure subtypes based on static AS-OCT images. To tackle this issue, we propose a 2D-3D Hybrid Variation-aware Network (HV-Net) for open-appositional-synechial ACA classification from AS-OCT imagery. Specifically, taking clinical priors into account, we first reconstruct the 3D iris surface from an AS-OCT sequence and obtain the geometrical characteristics necessary to provide global shape information. 2D AS-OCT slices and 3D iris representations are then fed into our HV-Net to extract cross-sectional appearance features and iris morphological features, respectively. To achieve results similar to those of dynamic gonioscopy examination, the current gold standard for diagnostic angle assessment, paired AS-OCT images acquired under dark and light illumination conditions are used to obtain an accurate characterization of configurational changes in ACAs and iris shapes, using a Variation-aware Block. In addition, an annealing loss function is introduced to optimize our model, encouraging the sub-networks to map the inputs into spaces more conducive to extracting dark-to-light variation representations, while retaining the discriminative power of the learned features. The proposed model is evaluated on 1584 paired AS-OCT samples and demonstrates superior performance in classifying open, appositional, and synechial angles.
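The annealing loss is described only at a high level above. A common realization of annealing is a schedule that decays the weight of an auxiliary loss term over training, so early epochs emphasize the auxiliary objective (here, shaping the feature space for dark-to-light variation) while later epochs focus on the discriminative classification loss. The sketch below assumes a cosine schedule and the name `annealed_loss`; both are illustrative, not the paper's actual formulation.

```python
import numpy as np

def annealed_loss(ce_loss, variation_loss, epoch, total_epochs, lam0=1.0):
    """Total loss = classification cross-entropy + annealed auxiliary term.

    The auxiliary weight decays from lam0 at epoch 0 down to 0 at the
    final epoch via a cosine schedule (an assumed form), so the
    variation term gradually hands over to the classification term.
    """
    lam = lam0 * 0.5 * (1.0 + np.cos(np.pi * epoch / total_epochs))
    return float(ce_loss + lam * variation_loss)
```

At epoch 0 the auxiliary term is fully weighted; at the final epoch it vanishes and only the cross-entropy remains.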


Subject(s)
Glaucoma, Angle-Closure , Anterior Eye Segment , Cross-Sectional Studies , Glaucoma, Angle-Closure/diagnostic imaging , Gonioscopy , Humans , Intraocular Pressure , Tomography, Optical Coherence/methods
5.
Front Oncol ; 11: 781798, 2021.
Article in English | MEDLINE | ID: mdl-34926297

ABSTRACT

OBJECTIVE: To develop an accurate and rapid computed tomography (CT)-based interpretable AI system for the diagnosis of lung diseases.

BACKGROUND: Most existing AI systems focus only on viral pneumonia (e.g., COVID-19), ignoring other similar lung diseases, e.g., bacterial pneumonia (BP), which should also be detected during CT screening. In this paper, we propose a unified sequence-based pneumonia classification network, called SLP-Net, which utilizes consecutiveness information for the differential diagnosis of viral pneumonia (VP), BP, and normal control cases from chest CT volumes.

METHODS: By treating the consecutive images of a CT volume as a time-sequence input, our SLP-Net can effectively use the spatial information and, compared with previous 2D slice-based or 3D volume-based methods, does not need a large amount of training data to avoid overfitting. Specifically, sequential convolutional neural networks (CNNs) with multi-scale receptive fields are first utilized to extract a set of higher-level representations, which are then fed into a convolutional long short-term memory (ConvLSTM) module to construct axial-dimension feature maps. A novel adaptive-weighted cross-entropy loss (ACE) is introduced to optimize the output of the SLP-Net, with a view to ensuring that as many valid features as possible from the earlier images are encoded into the later CT images. In addition, we employ sequence attention maps for auxiliary classification to enhance the confidence level of the results and produce a case-level prediction.

RESULTS: For evaluation, we constructed a dataset of 258 chest CT volumes (153 VP, 42 BP, and 63 normal control cases), for a total of 43,421 slices. We carried out a comprehensive comparison between our SLP-Net and several state-of-the-art methods on this dataset. Our proposed method achieved strong performance without a large amount of training data, outperforming other slice-based and volume-based approaches. The superior performance achieved in the classification experiments demonstrates the ability of our model in the differential diagnosis of VP, BP, and normal cases.
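The abstract describes the adaptive-weighted cross-entropy (ACE) only at a high level. One plausible reading, sketched below in numpy, is a per-slice weighted cross-entropy in which later slices in the axial sequence receive larger weights, pressuring the recurrent features carried forward from earlier slices to remain informative at the end of the sequence. The linear weight ramp and the name `position_weighted_ce` are assumptions for illustration, not the paper's definition.

```python
import numpy as np

def position_weighted_ce(probs, labels, gamma=2.0):
    """Sequence-weighted cross-entropy over the slices of one CT volume.

    probs:  (T, C) per-slice class probabilities along the axial sequence.
    labels: (T,) integer class labels (one per slice).
    gamma:  weight of the last slice relative to the first (assumed
            linear ramp), so later slices contribute more to the loss.
    """
    T = len(labels)
    w = np.linspace(1.0, gamma, T)          # weights ramp up along the sequence
    w = w / w.sum()                         # normalize weights to sum to 1
    nll = -np.log(probs[np.arange(T), labels])  # per-slice negative log-likelihood
    return float(w @ nll)

# two slices, two classes, uniform predictions
probs = np.array([[0.5, 0.5],
                  [0.5, 0.5]])
loss = position_weighted_ce(probs, np.array([0, 1]))
```

With uniform predictions every slice contributes -log(0.5), so the weighted average equals log 2 regardless of the ramp; the weighting only matters once slice losses differ.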
