Results 1 - 7 of 7
1.
Comput Biol Med ; 151(Pt B): 106283, 2022 12.
Article in English | MEDLINE | ID: mdl-36442272

ABSTRACT

Glaucoma has become a major cause of vision loss. Early-stage diagnosis of glaucoma is critical for treatment planning to avoid irreversible vision damage. Meanwhile, interpreting the rapidly accumulating medical data from ophthalmic exams is cumbersome and resource-intensive. Therefore, automated methods are highly desirable to help ophthalmologists achieve fast and accurate glaucoma diagnosis. Deep learning has achieved great success in diagnosing glaucoma by analyzing data from different kinds of tests, such as peripapillary optical coherence tomography (OCT) and visual field (VF) testing. Nevertheless, applying these models in clinical practice is still challenging because of various limiting factors: OCT-only models perform worse at glaucoma diagnosis than OCT&VF-based models, whereas VF testing is time-consuming and highly variable, which restricts the wide deployment of OCT&VF models. To this end, we develop a novel deep learning framework that leverages the OCT&VF model to enhance the performance of the OCT model. To transfer the complementary knowledge from the structural and functional assessments to the OCT model, a cross-modal knowledge transfer method is designed by integrating a distillation loss and a proposed asynchronous feature regularization (AFR) module. We demonstrate the effectiveness of the proposed method for glaucoma diagnosis by utilizing a public OCT&VF dataset and evaluating on an external OCT dataset. Our final model with only OCT inputs achieves an accuracy of 87.4% (a 3.1% absolute improvement) and an AUC of 92.3%, on par with the OCT&VF joint model. Moreover, results on the external dataset further demonstrate the effectiveness and generalization capability of our model.
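The abstract does not give the exact form of the transfer objective. As a rough illustration of the general idea behind cross-modal knowledge distillation (a frozen OCT&VF teacher supervising an OCT-only student), a minimal PyTorch sketch follows; the temperature, weighting, and all names are illustrative assumptions and do not reproduce the paper's distillation loss or AFR module.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Generic distillation objective: hard-label cross-entropy blended with
    temperature-scaled KL divergence toward the teacher's soft targets."""
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),   # student soft predictions
        F.softmax(teacher_logits / T, dim=1),       # teacher soft targets
        reduction="batchmean",
    ) * (T * T)
    hard_loss = F.cross_entropy(student_logits, labels)  # ground-truth supervision
    return alpha * soft_loss + (1.0 - alpha) * hard_loss

# Usage with random tensors standing in for model outputs.
student_logits = torch.randn(8, 2)                  # OCT-only student
teacher_logits = torch.randn(8, 2).detach()         # frozen OCT&VF teacher
labels = torch.randint(0, 2, (8,))
loss = distillation_loss(student_logits, teacher_logits, labels)
```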


Subject(s)
Glaucoma , Tomography, Optical Coherence , Humans , Tomography, Optical Coherence/methods , Visual Fields , Distillation , Glaucoma/diagnostic imaging , Visual Field Tests/methods , Intraocular Pressure
3.
Ophthalmology ; 129(2): 171-180, 2022 02.
Article in English | MEDLINE | ID: mdl-34339778

ABSTRACT

PURPOSE: To develop and validate a multimodal artificial intelligence algorithm, FusionNet, using the pattern deviation probability plots from visual field (VF) reports and circular peripapillary OCT scans to detect glaucomatous optic neuropathy (GON). DESIGN: Cross-sectional study. SUBJECTS: Two thousand four hundred sixty-three pairs of VF and OCT images from 1083 patients. METHODS: FusionNet, based on bimodal input of paired VF and OCT data, was developed to detect GON. Visual field data were collected using the Humphrey Field Analyzer (HFA). OCT images were collected from 3 types of devices (DRI-OCT, Cirrus OCT, and Spectralis). The 2463 pairs of VF and OCT images were divided into 4 datasets: 1567 for training (HFA and DRI-OCT), 441 for primary validation (HFA and DRI-OCT), 255 for the internal test set (HFA and Cirrus OCT), and 200 for the external test set (HFA and Spectralis). GON was defined as retinal nerve fiber layer thinning with corresponding VF defects. MAIN OUTCOME MEASURES: Diagnostic performance of FusionNet compared with that of VFNet (with VF data as input) and OCTNet (with OCT data as input). RESULTS: FusionNet achieved an area under the receiver operating characteristic curve (AUC) of 0.950 (95% confidence interval [CI], 0.931-0.968) and outperformed VFNet (AUC, 0.868 [95% CI, 0.834-0.902]), OCTNet (AUC, 0.809 [95% CI, 0.768-0.850]), and 2 glaucoma specialists (glaucoma specialist 1: AUC, 0.882 [95% CI, 0.847-0.917]; glaucoma specialist 2: AUC, 0.883 [95% CI, 0.849-0.918]) in the primary validation set. In the internal and external test sets, the performance of FusionNet was also superior to that of VFNet and OCTNet (FusionNet vs VFNet vs OCTNet: internal test set, 0.917 vs 0.854 vs 0.811; external test set, 0.873 vs 0.772 vs 0.785). No significant difference was found between the 2 glaucoma specialists and FusionNet in the internal and external test sets, except for glaucoma specialist 2 (AUC, 0.858 [95% CI, 0.805-0.912]) in the internal test set. CONCLUSIONS: FusionNet, developed using paired VF and OCT data, demonstrated superior performance to both VFNet and OCTNet in detecting GON, suggesting that multimodal machine learning models are valuable in detecting GON.
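The abstract does not describe FusionNet's architecture. The sketch below shows one common way to realize a bimodal VF+OCT classifier: two convolutional branches whose features are concatenated before a classification head. Layer sizes, names, and the fusion strategy are assumptions for illustration, not the published design.

```python
import torch
import torch.nn as nn

class BimodalGONClassifier(nn.Module):
    """Illustrative late-fusion model: one small CNN branch per modality."""

    def __init__(self, num_classes=2):
        super().__init__()
        # VF branch over the pattern deviation probability plot.
        self.vf_branch = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(16 * 4 * 4, 64),
        )
        # OCT branch over the circular peripapillary scan.
        self.oct_branch = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(16 * 4 * 4, 64),
        )
        # Classifier over the concatenated modality features.
        self.head = nn.Linear(64 + 64, num_classes)

    def forward(self, vf_image, oct_image):
        fused = torch.cat([self.vf_branch(vf_image), self.oct_branch(oct_image)], dim=1)
        return self.head(fused)

model = BimodalGONClassifier()
logits = model(torch.randn(2, 1, 32, 32), torch.randn(2, 1, 64, 128))
```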


Subject(s)
Glaucoma, Open-Angle/diagnostic imaging , Machine Learning , Optic Nerve Diseases/diagnostic imaging , Tomography, Optical Coherence , Vision Disorders/physiopathology , Visual Fields/physiology , Adult , Aged , Algorithms , Area Under Curve , Cross-Sectional Studies , Female , Glaucoma, Open-Angle/physiopathology , Humans , Intraocular Pressure , Male , Middle Aged , Multimodal Imaging , Nerve Fibers/pathology , Optic Nerve Diseases/physiopathology , ROC Curve , Retinal Ganglion Cells/pathology , Visual Field Tests
4.
IEEE Trans Med Imaging ; 40(9): 2392-2402, 2021 09.
Article in English | MEDLINE | ID: mdl-33945474

ABSTRACT

Glaucoma is the leading cause of irreversible blindness. Early detection and timely treatment of glaucoma are essential for preventing visual field loss or even blindness. In clinical practice, Optical Coherence Tomography (OCT) and Visual Field (VF) exams are two widely used and complementary techniques for diagnosing glaucoma: OCT provides quantitative measurements of the optic nerve head (ONH) structure, while the VF test provides a functional assessment of peripheral vision. In this paper, we propose a Deep Relation Transformer (DRT) to perform glaucoma diagnosis with OCT and VF information combined. A novel deep reasoning mechanism is proposed to explore implicit pairwise relations between OCT and VF information in global and regional manners. Building on these pairwise relations, a carefully designed deep transformer mechanism enhances the representation of each modality with complementary information. Based on the reasoning and transformer mechanisms, three successive modules are designed to extract and collect valuable information for glaucoma diagnosis, namely the global relation module, the guided regional relation module, and the interaction transformer module. Moreover, we build a large dataset, the ZOC-OCT&VF dataset, which includes 1395 OCT-VF pairs, for developing and evaluating our DRT. We conduct extensive experiments to validate the effectiveness of the proposed method. Experimental results show that our method achieves 88.3% accuracy and outperforms existing single-modal approaches by a large margin. The code and dataset will be made publicly available.
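As a rough sketch of the kind of cross-modal interaction the relation and transformer modules build on, the block below lets OCT and VF token features attend to each other with standard multi-head cross-attention. It is a generic stand-in, not the paper's global/regional relation or interaction transformer modules; all dimensions and names are assumed.

```python
import torch
import torch.nn as nn

class CrossModalAttention(nn.Module):
    """Illustrative cross-attention: OCT tokens query VF tokens and vice versa."""

    def __init__(self, dim=64, num_heads=4):
        super().__init__()
        self.oct_from_vf = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.vf_from_oct = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, oct_tokens, vf_tokens):
        # Each modality is enhanced with complementary information from the other.
        oct_enh, _ = self.oct_from_vf(query=oct_tokens, key=vf_tokens, value=vf_tokens)
        vf_enh, _ = self.vf_from_oct(query=vf_tokens, key=oct_tokens, value=oct_tokens)
        return oct_tokens + oct_enh, vf_tokens + vf_enh

block = CrossModalAttention()
oct_feats, vf_feats = block(torch.randn(2, 12, 64), torch.randn(2, 52, 64))
```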


Subject(s)
Glaucoma , Optic Disk , Glaucoma/diagnostic imaging , Humans , Intraocular Pressure , Optic Disk/diagnostic imaging , Tomography, Optical Coherence , Visual Field Tests , Visual Fields
5.
NPJ Digit Med ; 3: 123, 2020.
Article in English | MEDLINE | ID: mdl-33043147

ABSTRACT

By 2040, ~100 million people will have glaucoma. To date, there is a lack of high-efficiency glaucoma diagnostic tools based on visual fields (VFs). Herein, we develop and evaluate the performance of 'iGlaucoma', a smartphone application-based deep learning system (DLS), in detecting glaucomatous VF changes. A total of 1,614,808 data points from 10,784 VFs (5542 patients) from seven centers in China were included in this study, divided over two phases. In Phase I, 1,581,060 data points from 10,135 VFs of 5105 patients were used to train (8424 VFs), validate (598 VFs) and test (3 independent test sets of 200, 406, and 507 samples) the diagnostic performance of the DLS. In Phase II, the same DLS, deployed as the iGlaucoma cloud-based application, was further tested on 33,748 data points from 649 VFs of 437 patients from three glaucoma clinics. With reference to three experienced glaucoma specialists, the diagnostic performance (area under the curve [AUC], sensitivity and specificity) of the DLS and of six ophthalmologists was evaluated in detecting glaucoma. In Phase I, the DLS outperformed all six ophthalmologists in the three test sets (AUC of 0.834-0.877, with a sensitivity of 0.831-0.922 and a specificity of 0.676-0.709). In Phase II, iGlaucoma achieved 0.99 accuracy in recognizing different patterns in the pattern deviation probability plot region, with a corresponding AUC, sensitivity and specificity of 0.966 (0.953-0.979), 0.954 (0.930-0.977), and 0.873 (0.838-0.908), respectively. 'iGlaucoma' is a clinically effective diagnostic tool for detecting glaucoma from Humphrey VFs, although the target population will need to be carefully identified with glaucoma expertise input.
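For readers reproducing this style of evaluation, the snippet below shows how AUC, sensitivity, and specificity are typically computed from per-VF scores with scikit-learn. The labels, scores, and 0.5 threshold are placeholders, not the study's data or operating point.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

# y_true: reference labels (1 = glaucomatous VF); y_score: model probabilities.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_score = np.array([0.91, 0.20, 0.65, 0.80, 0.35, 0.10, 0.55, 0.40])

auc = roc_auc_score(y_true, y_score)
y_pred = (y_score >= 0.5).astype(int)                  # fixed operating point
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"AUC={auc:.3f} sensitivity={sensitivity:.3f} specificity={specificity:.3f}")
```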

7.
BMC Med Imaging ; 18(1): 35, 2018 10 04.
Article in English | MEDLINE | ID: mdl-30286740

ABSTRACT

BACKGROUND: To develop a deep neural network able to differentiate glaucomatous from non-glaucomatous visual fields based on visual field (VF) test results, we collected VF tests from 3 different ophthalmic centers in mainland China. METHODS: Visual fields obtained with both the Humphrey 30-2 and 24-2 tests were collected. Reliability criteria were established as fixation losses of less than 2/13 and false-positive and false-negative rates of less than 15%. RESULTS: We split a total of 4012 pattern deviation (PD) images from 1352 patients into two sets: 3712 for training and another 300 for validation. There was no significant difference in the left-to-right eye ratio (P = 0.6211), while age (P = 0.0022), VFI (P = 0.0001), MD (P = 0.0039) and PSD (P = 0.0001) showed significant differences. On the validation set of 300 VFs, the CNN achieved an accuracy of 0.876, with a specificity of 0.826 and a sensitivity of 0.932. For ophthalmologists, the average accuracies were 0.607, 0.585 and 0.626 for resident ophthalmologists, attending ophthalmologists and glaucoma experts, respectively. AGIS and GSS2 achieved accuracies of 0.459 and 0.523, respectively. Three traditional machine learning algorithms, namely support vector machine (SVM), random forest (RF), and k-nearest neighbor (k-NN), were also implemented and evaluated in the experiments, achieving accuracies of 0.670, 0.644, and 0.591, respectively. CONCLUSIONS: Our CNN-based algorithm achieved higher accuracy than human ophthalmologists and traditional grading rules (AGIS and GSS2) in differentiating glaucomatous from non-glaucomatous VFs.
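The snippet below sketches the three traditional baselines (SVM, RF, k-NN) with scikit-learn on placeholder VF features; the feature construction from PD images and the train/validation split used in the study are not reproduced here.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier

# Placeholder features: e.g., flattened pattern-deviation values per VF.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 52))
y = rng.integers(0, 2, size=400)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.25, random_state=0)

for name, clf in [
    ("SVM", SVC(kernel="rbf")),
    ("Random forest", RandomForestClassifier(n_estimators=200, random_state=0)),
    ("k-NN", KNeighborsClassifier(n_neighbors=5)),
]:
    clf.fit(X_train, y_train)
    print(name, "accuracy:", clf.score(X_val, y_val))
```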


Subject(s)
Glaucoma/diagnosis , Visual Field Tests/methods , Adult , Aged , Female , Humans , Machine Learning , Middle Aged , Reproducibility of Results