2.
Sci Rep ; 13(1): 15844, 2023 09 22.
Article in English | MEDLINE | ID: mdl-37739967

ABSTRACT

This study analyzes the impact of COVID-19 variants on cost-effectiveness across age groups, considering vaccination efforts and nonpharmaceutical interventions in the Republic of Korea. We aim to assess the costs needed to reduce COVID-19 cases and deaths using an age-structured model. The proposed age-structured model analyzes COVID-19 transmission dynamics, evaluates vaccination effectiveness, and assesses the impact of the Delta and Omicron variants. The model is fitted using data from the Republic of Korea between February 2021 and November 2022. The cost-effectiveness of interventions, medical costs, and the cost of death are evaluated for different age groups. The impact of different variants on cases and deaths is also analyzed, with the Omicron variant increasing transmission rates and decreasing case-fatality rates compared with the Delta variant. The cost of interventions and deaths is higher for older age groups during both outbreaks, with the Omicron outbreak resulting in a higher overall cost due to increased medical costs and interventions. This analysis shows that the daily cost per person for both the Delta and Omicron variants falls within a similar range of approximately $10-$35. This highlights the importance of conducting cost-effectiveness analyses when evaluating the impact of COVID-19 variants.


Subject(s)
COVID-19 , Cost-Effectiveness Analysis , Humans , Aged , SARS-CoV-2 , COVID-19/epidemiology , Costs and Cost Analysis
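
The abstract above describes an age-structured transmission model with age-specific intervention, medical, and death costs. As a purely illustrative sketch of that general structure (not the model or parameters fitted in the study), the Python snippet below steps a two-group SIR-type model forward one day at a time and tallies an assumed per-person daily cost; the contact matrix, rates, population sizes, and unit costs are all made-up placeholders.

import numpy as np

# Two illustrative age groups (e.g. under-60 and 60-plus); all numbers are assumptions.
beta = 0.25                              # transmission rate per contact (assumed)
gamma = 1 / 7                            # recovery rate, 1/days (assumed)
contact = np.array([[8.0, 3.0],          # assumed daily contact matrix between groups
                    [3.0, 4.0]])
N = np.array([40e6, 11e6])               # assumed group population sizes
unit_cost = np.array([15.0, 30.0])       # assumed daily cost per infected person ($)

I = np.array([100.0, 10.0])              # initial infections
S = N - I
R = np.zeros(2)
total_cost = 0.0

for day in range(180):                   # simple forward-Euler daily update
    force = beta * contact @ (I / N)     # force of infection on each group
    new_inf = force * S
    S, I, R = S - new_inf, I + new_inf - gamma * I, R + gamma * I
    total_cost += np.sum(unit_cost * I)  # accumulate the day's cost by age group

print(f"cumulative infections: {N.sum() - S.sum():,.0f}")
print(f"total assumed cost: ${total_cost:,.0f}")
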
3.
Sci Rep ; 13(1): 11154, 2023 07 10.
Article in English | MEDLINE | ID: mdl-37429862

ABSTRACT

Although deep learning architectures have been used to process sequential data, only a few studies have explored the usefulness of deep learning algorithms for detecting glaucoma progression. Here, we proposed a bidirectional gated recurrent unit (Bi-GRU) algorithm to predict visual field loss. In total, 5413 eyes from 3321 patients were included in the training set, and 1272 eyes from 1272 patients were included in the test set. Data from five consecutive visual field examinations were used as input; the sixth visual field examination was compared with the prediction made by the Bi-GRU. The performance of the Bi-GRU was compared with that of conventional linear regression (LR) and long short-term memory (LSTM) algorithms. Overall prediction error was significantly lower for the Bi-GRU than for the LR and LSTM algorithms. In pointwise prediction, the Bi-GRU showed the lowest prediction error among the three models at most test locations. Furthermore, the Bi-GRU was the model least affected by worsening reliability indices and glaucoma severity. Accurate prediction of visual field loss using the Bi-GRU algorithm may facilitate decision-making regarding the treatment of patients with glaucoma.


Subject(s)
Glaucoma , Visual Fields , Humans , Reproducibility of Results , Eye , Algorithms , Glaucoma/diagnosis
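
As a rough illustration of the architecture described above, the PyTorch sketch below maps five consecutive visual fields to a prediction of the sixth with a bidirectional GRU. The field size (52 test points), hidden width, and single-layer design are assumptions for illustration, not the study's configuration.

import torch
import torch.nn as nn

class BiGRUForecaster(nn.Module):
    def __init__(self, n_points: int = 52, hidden: int = 128):
        super().__init__()
        self.gru = nn.GRU(input_size=n_points, hidden_size=hidden,
                          batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_points)  # concatenated forward/backward states

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 5 visits, n_points) threshold sensitivities in dB
        out, _ = self.gru(x)
        return self.head(out[:, -1, :])              # predicted sixth field

model = BiGRUForecaster()
history = torch.randn(8, 5, 52)                      # dummy batch of five-visit series
predicted_sixth = model(history)                     # shape (8, 52)
loss = nn.functional.mse_loss(predicted_sixth, torch.randn(8, 52))
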
4.
Ophthalmic Res ; 66(1): 978-991, 2023.
Article in English | MEDLINE | ID: mdl-37231880

ABSTRACT

INTRODUCTION: The purpose of this study was to determine whether data preprocessing and augmentation could improve visual field (VF) prediction by a recurrent neural network (RNN) trained on multi-center datasets. METHODS: This retrospective study collected data from five glaucoma services between June 2004 and January 2021. From an initial dataset of 331,691 VFs, we considered reliable VF tests with fixed intervals. Because the VF monitoring interval is highly variable, we applied data augmentation using multiple sets of data for patients with more than eight VFs. We obtained 5,430 VFs from 463 patients and 13,747 VFs from 1,076 patients by setting the fixed test interval to 365 ± 60 days (D = 365) and 180 ± 60 days (D = 180), respectively. Five consecutive VFs were provided to the constructed RNN as input, and the sixth VF was compared with the output of the RNN. The performance of the periodic RNN (D = 365) was compared with that of an aperiodic RNN. The performance of the RNN with six long short-term memory (LSTM) cells (D = 180) was compared with that of the RNN with five LSTM cells. To compare prediction performance, the root mean square error (RMSE) and mean absolute error (MAE) of the total deviation value (TDV) were calculated as accuracy metrics. RESULTS: The performance of the periodic model (D = 365) was significantly better than that of the aperiodic model: overall prediction error (MAE) was 2.56 ± 0.46 dB versus 3.26 ± 0.41 dB (periodic vs. aperiodic, p < 0.001). A higher perimetric frequency was better for predicting future VFs: overall prediction error (RMSE) was 3.15 ± 2.29 dB versus 3.42 ± 2.25 dB (D = 180 vs. D = 365). Increasing the number of input VFs improved VF prediction in the D = 180 periodic model (3.15 ± 2.29 dB vs. 3.18 ± 2.34 dB, p < 0.001). The six-LSTM, D = 180 periodic model was more robust to worsening VF reliability and disease severity. Prediction accuracy worsened as the false-negative rate increased and the mean deviation decreased. CONCLUSION: Data preprocessing with augmentation improved the VF prediction of the RNN model using multi-center datasets. The periodic RNN model predicted future VFs significantly better than the aperiodic RNN model.


Subject(s)
Intraocular Pressure , Visual Fields , Humans , Retrospective Studies , Reproducibility of Results , Visual Field Tests , Neural Networks, Computer , Disease Progression
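
The preprocessing described above (keeping exams at a roughly fixed test interval, then augmenting long series with sliding windows of five inputs plus one target) can be sketched as follows. The helper names, field layout, and tolerance handling are hypothetical and only illustrate the general idea.

from typing import List, Tuple

def fixed_interval_series(exam_days: List[int], d: int = 180, tol: int = 60) -> List[int]:
    """Keep exams whose spacing from the previously kept exam is d +/- tol days."""
    kept = [0]
    for i in range(1, len(exam_days)):
        gap = exam_days[i] - exam_days[kept[-1]]
        if d - tol <= gap <= d + tol:
            kept.append(i)
    return kept

def augment_windows(series: List[List[float]], n_in: int = 5) -> List[Tuple[List[List[float]], List[float]]]:
    """Slide a window over a periodic series: each sample is (n_in input exams, next exam)."""
    return [(series[i:i + n_in], series[i + n_in]) for i in range(len(series) - n_in)]

# Toy example: exam dates in days since baseline and fake two-point fields.
days = [0, 170, 355, 540, 730, 900, 1085]
fields = [[30.0 - 0.5 * k, 28.0 - 0.4 * k] for k in range(len(days))]
idx = fixed_interval_series(days)
samples = augment_windows([fields[i] for i in idx])
print(len(idx), "exams kept,", len(samples), "training windows")
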
5.
PLoS One ; 17(11): e0277671, 2022.
Article in English | MEDLINE | ID: mdl-36383630

ABSTRACT

BACKGROUND: Norovirus is a major cause of acute gastroenteritis in all age groups, but it is particularly likely to affect children under the age of five. Because norovirus outbreaks in Korea are seasonal, it is important to try to predict the start and end of norovirus outbreaks. METHODS: We predicted weekly norovirus warnings with six machine learning algorithms, using training data from 2009 to 2016 and test data from 2017 to 2018. In addition, we proposed a novel method for the early detection of norovirus based on a calculated norovirus risk index. Further, feature importance was calculated to evaluate the contribution of each feature to the estimated weekly norovirus warnings. RESULTS: The long short-term memory (LSTM) algorithm proved to be the best algorithm for predicting weekly norovirus warnings, with 97.2% and 92.5% accuracy on the training and test data, respectively. The LSTM algorithm predicted the observed start and end weeks of norovirus outbreaks to within a three-week range. CONCLUSIONS: The results of this study show that early detection can provide important insights for the preparation for and control of norovirus outbreaks by the government. Our method provides indicators of high-risk weeks. In particular, the most recent norovirus detection rate, minimum temperature, and day length play critical roles in estimating weekly norovirus warnings.


Subject(s)
Caliciviridae Infections , Gastroenteritis , Norovirus , Child , Humans , Caliciviridae Infections/diagnosis , Caliciviridae Infections/epidemiology , Gastroenteritis/diagnosis , Gastroenteritis/epidemiology , Disease Outbreaks , Machine Learning
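
As a hedged illustration of the approach above, the PyTorch sketch below shows how an LSTM classifier could map a short window of weekly features (for example the most recent detection rate, minimum temperature, and day length) to a warning probability. The feature set, window length, and 0.5 decision threshold are assumptions, not the study's settings.

import torch
import torch.nn as nn

class WarningLSTM(nn.Module):
    def __init__(self, n_features: int = 3, hidden: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_features, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, weeks, n_features); classify from the last hidden state
        _, (h, _) = self.lstm(x)
        return torch.sigmoid(self.head(h[-1]))

model = WarningLSTM()
window = torch.randn(4, 8, 3)        # dummy batch: 4 series of 8 weeks x 3 features
p_warning = model(window)            # warning probabilities, shape (4, 1)
issue_warning = p_warning > 0.5      # assumed decision threshold
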