Results 1 - 4 of 4
1.
Front Public Health ; 12: 1381284, 2024.
Article in English | MEDLINE | ID: mdl-38454986

ABSTRACT

[This corrects the article DOI: 10.3389/fpubh.2023.1252357.].

2.
Sci Rep ; 13(1): 11154, 2023 07 10.
Article in English | MEDLINE | ID: mdl-37429862

ABSTRACT

Although deep learning architectures have been used to process sequential data, only a few studies have explored the usefulness of deep learning algorithms for detecting glaucoma progression. Here, we proposed a bidirectional gated recurrent unit (Bi-GRU) algorithm to predict visual field loss. In total, 5413 eyes from 3321 patients were included in the training set, whereas 1272 eyes from 1272 patients were included in the test set. Data from five consecutive visual field examinations were used as input; the sixth visual field examination was compared with the prediction by the Bi-GRU. The performance of the Bi-GRU was compared with that of conventional linear regression (LR) and long short-term memory (LSTM) algorithms. The overall prediction error was significantly lower for the Bi-GRU than for the LR and LSTM algorithms. In pointwise prediction, the Bi-GRU showed the lowest prediction error among the three models at most test locations. Furthermore, the Bi-GRU was the model least affected by worsening reliability indices and glaucoma severity. Accurate prediction of visual field loss using the Bi-GRU algorithm may facilitate decision-making regarding the treatment of patients with glaucoma.
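As a concrete illustration, the forward pass of a bidirectional GRU over five visual field exams can be sketched in plain NumPy. This is a minimal sketch, not the authors' trained model: the hidden size, weight scale, linear output head, and 52-point VF vectors (as in 24-2 perimetry after excluding blind-spot points) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, W, U, b):
    """One GRU update: z = update gate, r = reset gate, h_tilde = candidate state."""
    z = sigmoid(W["z"] @ x + U["z"] @ h + b["z"])
    r = sigmoid(W["r"] @ x + U["r"] @ h + b["r"])
    h_tilde = np.tanh(W["h"] @ x + U["h"] @ (r * h) + b["h"])
    return (1.0 - z) * h + z * h_tilde

def init_params(n_in, n_hid):
    """Random (untrained) input, recurrent, and bias parameters per gate."""
    return (
        {k: rng.standard_normal((n_hid, n_in)) * 0.1 for k in "zrh"},
        {k: rng.standard_normal((n_hid, n_hid)) * 0.1 for k in "zrh"},
        {k: np.zeros(n_hid) for k in "zrh"},
    )

def bi_gru(seq, n_hid=16):
    """Run one GRU forward and one backward over the sequence,
    then concatenate the two final hidden states."""
    n_in = seq[0].shape[0]
    Wf, Uf, bf = init_params(n_in, n_hid)
    Wb, Ub, bb = init_params(n_in, n_hid)
    hf = np.zeros(n_hid)
    hb = np.zeros(n_hid)
    for x in seq:
        hf = gru_step(x, hf, Wf, Uf, bf)
    for x in reversed(seq):
        hb = gru_step(x, hb, Wb, Ub, bb)
    return np.concatenate([hf, hb])

# Five consecutive VF exams, each a 52-point sensitivity vector
seq = [rng.standard_normal(52) for _ in range(5)]
features = bi_gru(seq)                     # shape (32,): forward + backward states
# A linear head (untrained here) maps the summary to the predicted sixth VF
W_out = rng.standard_normal((52, features.shape[0])) * 0.1
pred_vf6 = W_out @ features                # shape (52,)
```

In a real system, the gate weights and output head would be learned by backpropagation on the training eyes; only the data flow is shown here.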


Subject(s)
Glaucoma , Visual Fields , Humans , Reproducibility of Results , Eye , Algorithms , Glaucoma/diagnosis
3.
Ophthalmic Res ; 66(1): 978-991, 2023.
Article in English | MEDLINE | ID: mdl-37231880

ABSTRACT

INTRODUCTION: The purpose of this study was to determine whether data preprocessing and augmentation could improve the visual field (VF) prediction of a recurrent neural network (RNN) trained on multi-center datasets. METHODS: This retrospective study collected data from five glaucoma services between June 2004 and January 2021. From an initial dataset of 331,691 VFs, we considered reliable VF tests with fixed intervals. Because the VF monitoring interval is highly variable, we applied data augmentation using multiple sets of data for patients with more than eight VFs. We obtained 5,430 VFs from 463 patients and 13,747 VFs from 1,076 patients by setting the fixed test interval to 365 ± 60 days (D = 365) and 180 ± 60 days (D = 180), respectively. Five consecutive VFs were provided to the constructed RNN as input, and the sixth VF was compared with the output of the RNN. The performance of the periodic RNN (D = 365) was compared with that of an aperiodic RNN. The performance of the RNN with six long short-term memory (LSTM) cells (D = 180) was compared with that of the RNN with five LSTM cells. To compare prediction performance, the root mean square error (RMSE) and mean absolute error (MAE) of the total deviation value (TDV) were calculated as accuracy metrics. RESULTS: The performance of the periodic model (D = 365) improved significantly over that of the aperiodic model. The overall prediction error (MAE) was 2.56 ± 0.46 dB versus 3.26 ± 0.41 dB (periodic vs. aperiodic) (p < 0.001). A higher perimetric frequency was better for predicting future VFs. The overall prediction error (RMSE) was 3.15 ± 2.29 dB versus 3.42 ± 2.25 dB (D = 180 vs. D = 365). Increasing the number of input VFs improved the performance of VF prediction in the D = 180 periodic model (3.15 ± 2.29 dB vs. 3.18 ± 2.34 dB, p < 0.001). The six-LSTM D = 180 periodic model was also more robust to worsening VF reliability and disease severity. The prediction accuracy worsened as the false-negative rate increased and the mean deviation decreased. CONCLUSION: Data preprocessing with augmentation improved the VF prediction of the RNN model using multi-center datasets. The periodic RNN model predicted future VFs significantly better than the aperiodic RNN model.
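The fixed-interval selection underlying the augmentation step can be sketched as follows. This is a hypothetical reimplementation of the criterion described in the abstract (the function name and tolerance handling are assumptions): it enumerates every run of six exams whose successive gaps fall within D ± 60 days, so a patient with more than six qualifying exams contributes multiple overlapping training sequences.

```python
from datetime import date, timedelta

def fixed_interval_runs(exam_dates, D=365, tol=60, length=6):
    """Enumerate runs of `length` consecutive exams whose successive
    gaps all fall within D ± tol days (5 inputs + 1 target VF)."""
    runs = []
    dates = sorted(exam_dates)
    for i in range(len(dates) - length + 1):
        window = dates[i:i + length]
        gaps = [(b - a).days for a, b in zip(window, window[1:])]
        if all(abs(g - D) <= tol for g in gaps):
            runs.append(window)
    return runs

# A patient with seven annual exams yields two overlapping 6-exam sequences
start = date(2010, 1, 1)
exams = [start + timedelta(days=365 * k) for k in range(7)]
runs = fixed_interval_runs(exams, D=365, tol=60, length=6)
print(len(runs))  # 2
```

The same routine with D=180 would build the denser semiannual dataset described in the abstract.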


Subject(s)
Intraocular Pressure , Visual Fields , Humans , Retrospective Studies , Reproducibility of Results , Visual Field Tests , Neural Networks, Computer , Disease Progression
4.
Front Public Health ; 11: 1252357, 2023.
Article in English | MEDLINE | ID: mdl-38174072

ABSTRACT

Background: The coronavirus disease (COVID-19) pandemic spread rapidly across the world, creating an urgent need for predictive models that can help healthcare providers prepare for and respond to outbreaks more quickly and effectively, and ultimately improve patient care. Early detection and warning systems are crucial for preventing and controlling epidemic spread. Objective: In this study, we aimed to propose a machine learning-based method to predict the transmission trend of COVID-19 and a new approach to detect the start time of new outbreaks by analyzing epidemiological data. Methods: We developed a risk index to measure the change in the transmission trend. We applied machine learning (ML) techniques to predict COVID-19 transmission trends, categorized into three labels: decrease (L0), maintain (L1), and increase (L2). We used Support Vector Machine (SVM), Random Forest (RF), and XGBoost (XGB) as ML models and employed grid search to determine the optimal hyperparameters for each. We proposed a new method to detect the start time of a new outbreak as the point at which label L2 was sustained for at least 14 days (the duration of maintenance). We compared the performance of the different ML models to identify the most accurate approach for outbreak detection and conducted a sensitivity analysis on the duration of maintenance between 7 and 28 days. Results: The ML methods demonstrated high accuracy (over 94%) in classifying the transmission trends. Our proposed method successfully predicted the start time of new outbreaks, detecting a total of seven estimated outbreaks, whereas five outbreaks were reported between March 2020 and October 2022 in Korea; this suggests that our method could also detect minor outbreaks. Among the ML models, the RF and XGB classifiers exhibited the highest accuracy in outbreak detection.
Conclusion: The study highlights the strength of our method in accurately predicting the timing of an outbreak using an interpretable and explainable approach. It could provide a standard for predicting the start time of new outbreaks and detecting future transmission trends. This method can contribute to the development of targeted prevention and control measures and enhance resource management during the pandemic.
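The sustained-label rule for declaring an outbreak start can be sketched as follows. The function name and label encoding (one predicted label per day, with 2 = increase/L2) are assumptions for illustration; the rule flags the first day of any run in which the "increase" label persists for the full duration of maintenance.

```python
def detect_outbreak_starts(labels, increase=2, duration=14):
    """Return the first day of each run where `increase` is sustained
    for at least `duration` consecutive days.

    labels   -- daily trend predictions (0 = decrease, 1 = maintain, 2 = increase)
    duration -- the 'duration of maintenance' (14 days in the study;
                7-28 days in the sensitivity analysis)
    """
    starts = []
    run = 0
    for t, lab in enumerate(labels):
        run = run + 1 if lab == increase else 0
        if run == duration:                  # fires once per sustained run
            starts.append(t - duration + 1)  # first day of that run
    return starts

# 14 sustained 'increase' days starting on day 5, then a second run on day 22
labels = [0] * 5 + [2] * 14 + [1] * 3 + [2] * 20
print(detect_outbreak_starts(labels))  # [5, 22]
```

Varying `duration` between 7 and 28 reproduces the kind of sensitivity analysis the abstract describes: shorter durations flag more (possibly minor) outbreaks, longer ones fewer.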


Subject(s)
COVID-19 , Humans , COVID-19/epidemiology , Disease Outbreaks/prevention & control , Pandemics/prevention & control , Health Personnel , Machine Learning