Results 1 - 20 of 34
1.
Clin Interv Aging ; 19: 1051-1063, 2024.
Article in English | MEDLINE | ID: mdl-38883992

ABSTRACT

Background: The global aging population presents a significant challenge, with older adults experiencing declining physical and cognitive abilities and increased vulnerability to chronic diseases and adverse health outcomes. This study aims to develop an interpretable deep learning (DL) model to predict adverse events in geriatric patients within 72 hours of hospitalization. Methods: The study used retrospective data (2017-2020) from a major medical center in Taiwan. It included non-trauma geriatric patients who visited the emergency department and were admitted to the general ward. Data preprocessing involved collecting prognostic factors like vital signs, lab results, medical history, and clinical management. A deep feedforward neural network was developed, and performance was evaluated using accuracy, sensitivity, specificity, positive predictive value (PPV), and area under the receiver operating characteristic curve (AUC). Model interpretation utilized the Shapley Additive Explanation (SHAP) technique. Results: The analysis included 127,268 patients, with 2.6% experiencing imminent intensive care unit transfer, respiratory failure, or death during hospitalization. The DL model achieved AUCs of 0.86 and 0.84 in the validation and test sets, respectively, outperforming the Sequential Organ Failure Assessment (SOFA) score. Sensitivity and specificity values ranged from 0.79 to 0.81. The SHAP technique provided insights into feature importance and interactions. Conclusion: The developed DL model demonstrated high accuracy in predicting serious adverse events in geriatric patients within 72 hours of hospitalization. It outperformed the SOFA score and provided valuable insights into the model's decision-making process.
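The abstract describes the deep feedforward model only at a high level. As a rough illustration of that kind of pipeline, here is a minimal Python sketch assuming TensorFlow/Keras and the shap package; the feature count, layer sizes, and synthetic data are placeholders, not the authors' implementation.

```python
# Minimal sketch only (not the authors' code): a feedforward network for
# 72-hour adverse-event prediction plus a SHAP explanation step.
import numpy as np
import tensorflow as tf
import shap

n_features = 20  # hypothetical count of prognostic variables (vitals, labs, history)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_features,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # P(ICU transfer / respiratory failure / death)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auc")])

X_train = np.random.rand(2000, n_features).astype("float32")   # placeholder EHR features
y_train = (np.random.rand(2000) < 0.026).astype("float32")     # ~2.6% event rate, as reported
model.fit(X_train, y_train, epochs=5, batch_size=256, verbose=0)

# Model-agnostic SHAP values attribute each prediction to the input features.
predict_fn = lambda a: model.predict(a, verbose=0).ravel()
explainer = shap.KernelExplainer(predict_fn, X_train[:50])
shap_values = explainer.shap_values(X_train[:3])
print(np.round(shap_values, 4))
```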


Subject(s)
Deep Learning , Hospitalization , Humans , Aged , Female , Male , Retrospective Studies , Hospitalization/statistics & numerical data , Aged, 80 and over , Taiwan , ROC Curve , Geriatric Assessment/methods , Prognosis , Intensive Care Units , Organ Dysfunction Scores , Area Under Curve , Emergency Service, Hospital , Risk Assessment
2.
Front Cardiovasc Med ; 10: 1195235, 2023.
Article in English | MEDLINE | ID: mdl-37600054

ABSTRACT

Objectives: The aim of this study was to develop a deep-learning pipeline for the measurement of pericardial effusion (PE) based on raw echocardiography clips, as current methods for PE measurement can be operator-dependent and present challenges in certain situations. Methods: The proposed pipeline consisted of three distinct steps: moving window view selection (MWVS), automated segmentation, and width calculation from a segmented mask. The MWVS model utilized the ResNet architecture to classify each frame of the extracted raw echocardiography files into selected view types. The automated segmentation step then generated a mask for the PE area from the extracted echocardiography clip, and a computer vision technique was used to calculate the largest width of the PE from the segmented mask. The pipeline was applied to a total of 995 echocardiographic examinations. Results: The proposed deep-learning pipeline exhibited high performance, as evidenced by intraclass correlation coefficient (ICC) values of 0.867 for internal validation and 0.801 for external validation. The pipeline demonstrated a high level of accuracy in detecting PE, with an area under the receiver operating characteristic curve (AUC) of 0.926 (95% CI: 0.902-0.951) for internal validation and 0.842 (95% CI: 0.794-0.889) for external validation. Conclusion: The machine-learning pipeline developed in this study can automatically calculate the width of PE from raw ultrasound clips. The novel concepts of moving window view selection for image quality control and computer vision techniques for maximal PE width calculation seem useful in the field of ultrasound. This pipeline could potentially provide a standardized and objective approach to the measurement of PE, reducing operator-dependency and improving accuracy.
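The abstract does not specify how the "largest width" is derived from the mask. One common computer-vision proxy, shown in this hedged sketch, is the diameter of the largest inscribed circle obtained from a distance transform; OpenCV is assumed, and `mm_per_pixel` is a hypothetical calibration factor rather than anything stated in the paper.

```python
import cv2
import numpy as np

def max_effusion_width_mm(mask: np.ndarray, mm_per_pixel: float) -> float:
    """Approximate the maximal width of a segmented pericardial-effusion mask.

    mask: binary array (0 = background, >0 = effusion) from the segmentation step.
    Twice the largest distance-transform value equals the diameter of the largest
    inscribed circle, a simple proxy for the maximal width of the region.
    """
    dist = cv2.distanceTransform((mask > 0).astype(np.uint8), cv2.DIST_L2, 5)
    return float(2.0 * dist.max() * mm_per_pixel)

# Example with a synthetic elliptical "effusion" region (semi-axes 60 x 15 px)
mask = np.zeros((256, 256), dtype=np.uint8)
cv2.ellipse(mask, (128, 128), (60, 15), 0, 0, 360, 255, thickness=-1)
print(max_effusion_width_mm(mask, mm_per_pixel=0.3))  # ~9 mm (2 * 15 px * 0.3 mm/px)
```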

3.
JAMA Netw Open ; 6(4): e237489, 2023 04 03.
Article in English | MEDLINE | ID: mdl-37040115

ABSTRACT

Importance: Early awareness of Kawasaki disease (KD) helps physicians administer appropriate therapy to prevent acquired heart disease in children. However, diagnosing KD is challenging and relies largely on subjective diagnosis criteria. Objective: To develop a prediction model using machine learning with objective parameters to differentiate children with KD from other febrile children. Design, Setting, and Participants: This diagnostic study included 74 641 febrile children younger than 5 years who were recruited from 4 hospitals, including 2 medical centers and 2 regional hospitals, between January 1, 2010, and December 31, 2019. Statistical analysis was performed from October 2021 to February 2023. Main Outcomes and Measures: Demographic data and laboratory values from electronic medical records, including complete blood cell count with differential, urinalysis, and biochemistry, were collected as possible parameters. The primary outcome was whether the febrile children fulfilled the diagnostic criteria of KD. The supervised eXtreme Gradient Boosting (XGBoost) machine learning method was applied to establish a prediction model. The confusion matrix and likelihood ratio were used to evaluate the performance of the prediction model. Results: This study included a total of 1142 patients with KD (mean [SD] age, 1.1 [0.8] years; 687 male patients [60.2%]) and 73 499 febrile children (mean [SD] age, 1.6 [1.4] years; 41 465 male patients [56.4%]) comprising the control group. The KD group was predominantly male (odds ratio, 1.79; 95% CI, 1.55-2.06) with younger age (mean difference, -0.6 years [95% CI, -0.6 to -0.5 years]) compared with the control group. The prediction model's best performance in the testing set was able to achieve 92.5% sensitivity, 97.3% specificity, 34.5% positive predictive value, 99.9% negative predictive value, and a positive likelihood ratio of 34.0, which indicates outstanding performance. The area under the receiver operating characteristic curve of the prediction model was 0.980 (95% CI, 0.974-0.987). Conclusions and Relevance: This diagnostic study suggests that results of objective laboratory tests had the potential to be predictors of KD. Furthermore, these findings suggested that machine learning with XGBoost can help physicians differentiate children with KD from other febrile children in pediatric emergency departments with excellent sensitivity, specificity, and accuracy.
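As an illustration of the evaluation the abstract describes (confusion matrix, sensitivity/specificity, and positive likelihood ratio around an XGBoost classifier), here is a hedged sketch on synthetic data; the feature set, hyperparameters, and prevalence are placeholders, not the study's.

```python
import numpy as np
from xgboost import XGBClassifier
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split

# X: laboratory features (CBC with differential, urinalysis, biochemistry); y: 1 = KD.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 20))
y = (rng.random(5000) < 0.015).astype(int)   # KD is rare among febrile children

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)
clf = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.1,
                    scale_pos_weight=(y_tr == 0).sum() / max((y_tr == 1).sum(), 1))
clf.fit(X_tr, y_tr)

pred = clf.predict(X_te)
tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
lr_positive = sensitivity / (1 - specificity) if specificity < 1 else float("inf")
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(sensitivity, specificity, lr_positive, auc)
```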


Subject(s)
Mucocutaneous Lymph Node Syndrome , Humans , Male , Child , Infant , Female , Fever , Emergency Service, Hospital , Predictive Value of Tests , Machine Learning
4.
Healthcare (Basel) ; 11(8)2023 Apr 15.
Article in English | MEDLINE | ID: mdl-37107975

ABSTRACT

Several risk factors are related to glycemic control in patients with type 2 diabetes mellitus (T2DM), including demographics, medical conditions, negative emotions, lipid profiles, and heart rate variability (HRV, which reflects cardiac autonomic activity). The interactions between these risk factors remain unclear. This study aimed to use machine learning methods of artificial intelligence to explore the relationships between various risk factors and glycemic control in T2DM patients. The study utilized a database from Lin et al. (2022) that included 647 T2DM patients. Regression tree analysis was conducted to identify the interactions among risk factors that contribute to glycated hemoglobin (HbA1c) values, and various machine learning methods were compared for their accuracy in classifying T2DM patients. The results of the regression tree analysis revealed that high depression scores may be a risk factor in one subgroup but not in others. When comparing different machine learning classification methods, the random forest algorithm emerged as the best-performing method with a small set of features. Specifically, the random forest algorithm achieved 84% accuracy, 95% area under the curve (AUC), 77% sensitivity, and 91% specificity. Machine learning methods can provide significant value in accurately classifying patients with T2DM when depression is considered as a risk factor.
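The two analyses described (a regression tree for HbA1c and a random forest classifier) can be sketched as follows; the column names and synthetic values are assumptions for illustration only, not the published dataset or code.

```python
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeRegressor, export_text
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "age": rng.integers(40, 80, 647),
    "depression_score": rng.integers(0, 21, 647),
    "ldl": rng.normal(110, 25, 647),
    "hrv_sdnn": rng.normal(35, 12, 647),
    "hba1c": rng.normal(7.5, 1.2, 647),
})
features = ["age", "depression_score", "ldl", "hrv_sdnn"]

# Regression tree: which feature interactions drive HbA1c?
tree = DecisionTreeRegressor(max_depth=3).fit(df[features], df["hba1c"])
print(export_text(tree, feature_names=features))

# Random forest: classify good vs. poor glycemic control (HbA1c >= 7%)
y = (df["hba1c"] >= 7.0).astype(int)
rf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(rf, df[features], y, cv=5, scoring="roc_auc").mean())
```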

5.
JAMA Netw Open ; 6(3): e235102, 2023 03 01.
Article in English | MEDLINE | ID: mdl-36976564

ABSTRACT

This quality improvement study compares the diagnostic quality and completion time between ultrasonography operators guided by artificial intelligence vs those without such assistance.


Subject(s)
Deep Learning , Humans , Ultrasonography , Algorithms
6.
Taiwan J Obstet Gynecol ; 62(2): 330-333, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36965903

ABSTRACT

OBJECTIVE: To identify a more efficient and safer protocol for controlled ovarian hyperstimulation (COH) in early-stage breast cancer patients seeking emergency fertility preservation before adjuvant chemo/radiotherapy. MATERIALS AND METHODS: This retrospective, case-series study involved two early-stage (Ia) breast cancer patients who requested fertility preservation within 3 weeks. Random-start/dual stimulation protocols with an aromatase inhibitor (AI) were used to maximize oocyte yield and suppress serum estradiol (E2) levels. RESULTS: E2 levels on trigger day during dual COH were between 112.0 and 407.0 pg/mL. The duration of COH could be shortened to only 17 days, and up to 41 oocytes were successfully retrieved across two retrievals. CONCLUSION: This remarkably efficient and safe protocol combines random-start/dual stimulation with a step-up AI dosage, which not only maximizes oocyte yield within the shortest possible timeframe but also keeps E2 levels low, avoiding over-stimulation of estrogen-sensitive cancer cells and decreasing the risk of developing ovarian hyperstimulation syndrome (OHSS).


Subject(s)
Breast Neoplasms , Fertility Preservation , Ovarian Hyperstimulation Syndrome , Female , Humans , Fertility Preservation/methods , Retrospective Studies , Aromatase Inhibitors/adverse effects , Estrogens , Ovulation Induction/methods , Ovarian Hyperstimulation Syndrome/prevention & control , Breast Neoplasms/therapy , Oocytes/physiology
7.
Int J Med Inform ; 172: 105007, 2023 04.
Article in English | MEDLINE | ID: mdl-36731394

ABSTRACT

BACKGROUND: Machine learning models have demonstrated superior performance in predicting invasive bacterial infection (IBI) in febrile infants compared to commonly used risk stratification criteria in recent studies. However, the black-box nature of these models can make them difficult to apply in clinical practice. In this study, we developed and validated an explainable deep learning model that can predict IBI in febrile infants ≤ 60 days of age visiting the emergency department. METHODS: We conducted a retrospective study of febrile infants aged ≤ 60 days who presented to the pediatric emergency department of a medical center in Taiwan between January 1, 2011 and December 31, 2019. Patients with uncertain test results and complex chronic health conditions were excluded. IBI was defined as the growth of a pathogen in the blood or cerebrospinal fluid. We used a deep neural network to develop a predictive model for IBI and compared its performance to the IBI score and step-by-step approach. The SHapley Additive exPlanations (SHAP) technique was used to explain the model's predictions at different levels. RESULTS: Our study included 1847 patients, 53 (2.7%) of whom had IBI. The deep learning model performed similarly to the IBI score and step-by-step approach in terms of sensitivity and negative predictive value, but provided better specificity (54%), positive predictive value (5%), and area under the receiver-operating characteristic curve (0.87). SHAP identified five influential predictive variables (absolute neutrophil count, body temperature, heart rate, age, and C-reactive protein). CONCLUSION: We have developed an explainable deep learning model that can predict IBI in febrile infants aged 0-60 days. The model not only performs better than previous scoring systems, but also provides insight into how it arrives at its predictions through individual features and cases.
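To show how SHAP can explain a single prediction using the five influential variables named above, here is a toy sketch in which the paper's deep network is stood in for by a small scikit-learn MLP; the data and feature scales are synthetic assumptions.

```python
import numpy as np
import shap
from sklearn.neural_network import MLPClassifier

feature_names = ["anc", "body_temp", "heart_rate", "age_days", "crp"]
rng = np.random.default_rng(7)
X = rng.normal(size=(1847, 5))
y = (rng.random(1847) < 0.027).astype(int)   # ~2.7% IBI prevalence, as reported

clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=300).fit(X, y)

# KernelExplainer is model-agnostic; it attributes one infant's predicted IBI
# probability to the five input features.
explainer = shap.KernelExplainer(lambda a: clf.predict_proba(a)[:, 1], X[:100])
shap_values = explainer.shap_values(X[:1])
print(dict(zip(feature_names, np.round(shap_values[0], 4))))
```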


Subject(s)
Bacterial Infections , Deep Learning , Child , Infant , Humans , Retrospective Studies , Fever/diagnosis , Fever/microbiology , Bacterial Infections/diagnosis , Body Temperature
8.
J Med Internet Res ; 24(12): e41163, 2022 12 05.
Article in English | MEDLINE | ID: mdl-36469396

ABSTRACT

BACKGROUND: Hyperkalemia is a critical condition, especially in intensive care units. So far, there have been no accurate and noninvasive methods for recognizing hyperkalemia events on ambulatory electrocardiogram monitors. OBJECTIVE: This study aimed to improve the accuracy of hyperkalemia predictions from ambulatory electrocardiogram (ECG) monitors using a personalized transfer learning method; this would be done by training a generic model and refining it with personal data. METHODS: This retrospective cohort study used open source data from the Waveform Database Matched Subset of the Medical Information Mart From Intensive Care III (MIMIC-III). We included patients with multiple serum potassium test results and matched ECG data from the MIMIC-III database. A 1D convolutional neural network-based deep learning model was first developed to predict hyperkalemia in a generic population. Once the model achieved a state-of-the-art performance, it was used in an active transfer learning process to perform patient-adaptive heartbeat classification tasks. RESULTS: The results show that by acquiring data from each new patient, the personalized model can improve the accuracy of hyperkalemia detection significantly, from an average of 0.604 (SD 0.211) to 0.980 (SD 0.078), when compared with the generic model. Moreover, the area under the receiver operating characteristic curve level improved from 0.729 (SD 0.240) to 0.945 (SD 0.094). CONCLUSIONS: By using the deep transfer learning method, we were able to build a clinical standard model for hyperkalemia detection using ambulatory ECG monitors. These findings could potentially be extended to applications that continuously monitor one's ECGs for early alerts of hyperkalemia and help avoid unnecessary blood tests.
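The personalization step can be pictured as freezing a generic feature extractor and fine-tuning the classification head on a small set of the new patient's labeled heartbeats. The sketch below, in Keras, makes that concrete; the architecture, beat length, and weight file name are assumptions, not the paper's model.

```python
import numpy as np
import tensorflow as tf

BEAT_LEN = 250  # samples per ECG heartbeat (assumed)

def build_generic_model() -> tf.keras.Model:
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(BEAT_LEN, 1)),
        tf.keras.layers.Conv1D(16, 7, activation="relu"),
        tf.keras.layers.MaxPooling1D(4),
        tf.keras.layers.Conv1D(32, 5, activation="relu"),
        tf.keras.layers.GlobalAveragePooling1D(),
        tf.keras.layers.Dense(16, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # P(hyperkalemia)
    ])

generic = build_generic_model()
# generic.load_weights("generic_hyperkalemia.h5")  # hypothetical weights trained on MIMIC-III

# Personalization: freeze the convolutional feature extractor, retrain the dense head.
for layer in generic.layers:
    if isinstance(layer, tf.keras.layers.Conv1D):
        layer.trainable = False
generic.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                loss="binary_crossentropy", metrics=[tf.keras.metrics.AUC()])

patient_beats = np.random.rand(64, BEAT_LEN, 1)   # newly acquired beats for this patient
patient_labels = np.random.randint(0, 2, 64)      # labels from matched potassium tests
generic.fit(patient_beats, patient_labels, epochs=10, batch_size=16, verbose=0)
```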


Subject(s)
Hyperkalemia , Humans , Hyperkalemia/diagnosis , Hyperkalemia/epidemiology , Retrospective Studies , Precision Medicine , Intensive Care Units , Electrocardiography , Machine Learning
9.
Front Med (Lausanne) ; 9: 964667, 2022.
Article in English | MEDLINE | ID: mdl-36341257

ABSTRACT

Purpose: To build machine learning models for predicting the risk of in-hospital death in patients with sepsis within 48 h, using only dynamic changes in the patient's vital signs. Methods: This retrospective observational cohort study enrolled septic patients from five emergency departments (ED) in Taiwan. We adopted seven variables, i.e., age, sex, systolic blood pressure, diastolic blood pressure, heart rate, respiratory rate, and body temperature. Results: Among all 353,253 visits, after excluding 159,607 visits (45%), the study group consisted of 193,646 ED visits. With a leading time of 6 h, the convolutional neural network (CNN), long short-term memory (LSTM), and random forest (RF) models had accuracy rates of 0.905, 0.817, and 0.835, respectively, and areas under the receiver operating characteristic curve (AUC) of 0.840, 0.761, and 0.770, respectively. With a leading time of 48 h, the CNN, LSTM, and RF achieved accuracy rates of 0.828, 0.759, and 0.805, respectively, and AUCs of 0.811, 0.734, and 0.776, respectively. Conclusion: By analyzing dynamic vital sign data, machine learning models can predict mortality in septic patients within 6 to 48 h of admission. Model performance was more accurate when the lead time was closer to the event.
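A minimal sketch of the sequence-model idea is given below: an LSTM that consumes a time series of the seven variables listed above and outputs a mortality risk. The number of time steps and the synthetic data are assumptions for illustration.

```python
import numpy as np
import tensorflow as tf

TIMESTEPS, N_FEATURES = 12, 7  # e.g., 12 repeated measurements of the 7 variables (assumed)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(TIMESTEPS, N_FEATURES)),
    tf.keras.layers.Masking(mask_value=0.0),   # zero-padded missing time points are skipped
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # P(in-hospital death)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])

X = np.random.rand(1000, TIMESTEPS, N_FEATURES).astype("float32")  # placeholder vital signs
y = np.random.randint(0, 2, 1000)
model.fit(X, y, epochs=3, batch_size=64, verbose=0)
```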

10.
Sensors (Basel) ; 22(6)2022 Mar 17.
Article in English | MEDLINE | ID: mdl-35336499

ABSTRACT

Future wireless networks promise immense increases in data rate and energy efficiency while overcoming the difficulty of charging wireless stations or Internet of Things (IoT) devices through simultaneous wireless information and power transfer (SWIPT). For such networks, jointly optimizing beamforming, power control, and energy harvesting to enhance communication performance from the base stations (BSs) (or access points (APs)) to the mobile nodes (MNs) they serve is a real challenge. In this work, we formulate the joint optimization as a mixed integer nonlinear programming (MINLP) problem, which can also be viewed as a complex multiple resource allocation (MRA) optimization problem subject to different allocation constraints. Using deep reinforcement learning to estimate the future rewards of actions based on information reported by the users served by the networks, we introduce single-layer MRA algorithms based on deep Q-learning (DQN) and deep deterministic policy gradient (DDPG), respectively, as the basis for downlink wireless transmissions. Moreover, by combining the data-driven DQN technique with a noncooperative game theory model, we propose a two-layer iterative approach to solve the NP-hard MRA problem, which further improves communication performance in terms of data rate, energy harvesting, and power consumption. For the two-layer approach, we also introduce a pricing strategy in which BSs or APs determine their power costs on the basis of social utility maximization to control transmit power. Finally, in a simulated environment based on realistic wireless networks, our numerical results show that the proposed two-layer MRA algorithm can achieve up to 2.3 times higher utility than the single-layer counterparts, i.e., the data-driven deep reinforcement learning-based algorithms extended to solve the problem, where the utilities are designed to reflect the trade-off among the performance metrics considered.
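The DQN building block mentioned above can be sketched very roughly as a Q-network over a discretized set of resource-allocation actions with an epsilon-greedy policy. The state and action dimensions, network sizes, and reward handling below are placeholders; this is not the paper's algorithm.

```python
import numpy as np
import tensorflow as tf

STATE_DIM, N_ACTIONS = 24, 32   # assumed: channel/energy state -> discrete allocation actions

q_net = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(STATE_DIM,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(N_ACTIONS),            # Q(s, a) for each allocation action
])
q_net.compile(optimizer="adam", loss="mse")

def select_action(state: np.ndarray, epsilon: float) -> int:
    """Epsilon-greedy action selection over the discrete allocation set."""
    if np.random.rand() < epsilon:
        return int(np.random.randint(N_ACTIONS))
    q_values = q_net(state[None, :], training=False).numpy()[0]
    return int(np.argmax(q_values))

def td_target(reward: float, next_state: np.ndarray, gamma: float = 0.95) -> float:
    """One-step TD target: r + gamma * max_a' Q(s', a')."""
    return reward + gamma * float(np.max(q_net(next_state[None, :]).numpy()))

state = np.random.rand(STATE_DIM).astype("float32")
action = select_action(state, epsilon=0.1)
```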

11.
Biosensors (Basel) ; 13(1)2022 Dec 25.
Article in English | MEDLINE | ID: mdl-36671857

ABSTRACT

Blood glucose (BG) monitoring is important for critically ill patients, as poor sugar control has been associated with increased mortality in hospitalized patients. However, constant BG monitoring can be resource-intensive and pose a healthcare burden in clinical practice. In this study, we aimed to develop a personalized machine-learning model to predict dysglycemia from electrocardiogram (ECG) data. We used the Medical Information Mart for Intensive Care III database as our source of data and obtained more than 20 ECG records from each included patient during a single hospital admission. We focused on lead II recordings, along with corresponding blood sugar data. We processed the data and used ECG features from each heartbeat as inputs to develop a one-class support vector machine algorithm to predict dysglycemia. The model was able to predict dysglycemia using a single heartbeat with an AUC of 0.92 ± 0.09, a sensitivity of 0.92 ± 0.10, and specificity of 0.84 ± 0.04. After applying 10 s majority voting, the AUC of the model's dysglycemia prediction increased to 0.97 ± 0.06. This study showed that a personalized machine-learning algorithm can accurately detect dysglycemia from a single-lead ECG.
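The core idea (a one-class SVM trained on a patient's normoglycemic beats, followed by majority voting over a 10-second window) can be sketched as below; the per-beat feature extraction is reduced to placeholder arrays, and the feature list in the comment is an assumption.

```python
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Per-beat ECG features (e.g., RR interval, QRS amplitude, T-wave metrics) - placeholders.
normal_beats = np.random.normal(0, 1, size=(500, 8))     # beats recorded under normal glucose
window_beats = np.random.normal(0.5, 1, size=(12, 8))    # ~10 s of newly recorded beats

model = make_pipeline(StandardScaler(), OneClassSVM(nu=0.05, kernel="rbf", gamma="scale"))
model.fit(normal_beats)

per_beat = model.predict(window_beats)          # +1 = looks normal, -1 = outlier (dysglycemia)
dysglycemia_votes = int((per_beat == -1).sum())
window_is_dysglycemic = dysglycemia_votes > len(per_beat) // 2   # 10-second majority vote
print(window_is_dysglycemic)
```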


Subject(s)
Blood Glucose Self-Monitoring , Blood Glucose , Humans , Machine Learning , Electrocardiography, Ambulatory , Electrocardiography
12.
Front Med (Lausanne) ; 8: 707437, 2021.
Article in English | MEDLINE | ID: mdl-34631730

ABSTRACT

Background: The use of focused assessment with sonography in trauma (FAST) enables clinicians to rapidly screen for injury at the bedsides of patients. Pre-hospital FAST improves diagnostic accuracy and streamlines patient care, leading to dispositions to appropriate treatment centers. In this study, we determine the accuracy of artificial intelligence model-assisted free-fluid detection in FAST examinations, and subsequently establish an automated feedback system, which can help inexperienced sonographers improve their interpretation ability and image acquisition skills. Methods: This is a single-center study of patients admitted to the emergency room from January 2020 to March 2021. We collected 324 patient records for the training model, 36 patient records for validation, and another 36 patient records for testing. We balanced positive and negative Morison's pouch free-fluid detection groups in a 1:1 ratio. The deep learning (DL) model Residual Networks 50-Version 2 (ResNet50-V2) was used for training and validation. Results: The accuracy, sensitivity, and specificity of the model performance for ascites prediction were 0.961, 0.976, and 0.947, respectively, in the validation set and 0.967, 0.985, and 0.913, respectively, in the test set. Regarding feedback prediction, the model correctly classified qualified and non-qualified images with an accuracy of 0.941 in both the validation and test sets. Conclusions: The DL algorithm in ResNet50-V2 is able to detect free fluid in Morison's pouch with high accuracy. The automated feedback and instruction system could help inexperienced sonographers improve their interpretation ability and image acquisition skills.
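As an illustration of how ResNet50-V2 is commonly repurposed for frame-level binary classification, here is a hedged Keras sketch; the input size, head, and training setup are assumptions rather than the study's configuration.

```python
import tensorflow as tf

IMG_SIZE = (224, 224)

base = tf.keras.applications.ResNet50V2(include_top=False, weights="imagenet",
                                        input_shape=IMG_SIZE + (3,))
base.trainable = False   # start by training only the new classification head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # P(free fluid in Morison's pouch)
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(), "accuracy"])

# Frames would be extracted from FAST clips, resized to 224x224, and scaled with
# tf.keras.applications.resnet_v2.preprocess_input before calling model.fit.
```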

13.
Biomed Res Int ; 2021: 9590131, 2021.
Article in English | MEDLINE | ID: mdl-34589553

ABSTRACT

BACKGROUND: Out-of-hospital cardiac arrest (OHCA) is a major health problem worldwide, and neurologic injury remains the leading cause of morbidity and mortality among survivors of OHCA. The purpose of this study was to investigate whether a machine learning algorithm could detect complex dependencies between clinical variables in emergency departments in OHCA survivors and perform reliable predictions of favorable neurologic outcomes. METHODS: This study included adults (≥18 years of age) with a sustained return of spontaneous circulation after successful resuscitation from OHCA between 1 January 2004 and 31 December 2014. We applied three machine learning algorithms, including logistic regression (LR), support vector machine (SVM), and extreme gradient boosting (XGB). The primary outcome was a favorable neurological outcome at hospital discharge, defined as a Glasgow-Pittsburgh cerebral performance category of 1 to 2. The secondary outcomes were the 30-day survival rate and survival-to-discharge rate. RESULTS: The final analysis included 1071 participants from the study period. For neurologic outcome prediction, the area under the receiver operating characteristic curve (AUC) was 0.819, 0.771, and 0.956 in LR, SVM, and XGB, respectively. The sensitivity and specificity were 0.875 and 0.751 in LR, 0.687 and 0.793 in SVM, and 0.875 and 0.904 in XGB. The AUC was 0.766 and 0.732 in LR, 0.749 and 0.725 in SVM, and 0.866 and 0.831 in XGB, for survival-to-discharge and 30-day survival, respectively. CONCLUSIONS: Prognostic models trained with ML techniques showed appropriate calibration and high discrimination for survival and neurologic outcome of OHCA without using prehospital data, with XGB exhibiting the best performance.
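A schematic version of the three-model comparison is shown below: the same learners scored by cross-validated ROC AUC on a binary outcome. The data are synthetic and the feature set is a placeholder; this is not the study's dataset or tuning.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from xgboost import XGBClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.normal(size=(1071, 15))            # ED clinical variables (placeholder)
y = (rng.random(1071) < 0.3).astype(int)   # favorable neurologic outcome (synthetic)

models = {
    "LR": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "SVM": make_pipeline(StandardScaler(), SVC(probability=True)),
    "XGB": XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.1),
}
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: AUC = {auc:.3f}")
```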


Subject(s)
Brain/pathology , Machine Learning , Models, Cardiovascular , Out-of-Hospital Cardiac Arrest/pathology , Aged , Algorithms , Area Under Curve , Female , Humans , Male , Predictive Value of Tests , ROC Curve , Sensitivity and Specificity , Survival Analysis
14.
Elife ; 10, 2021 08 05.
Article in English | MEDLINE | ID: mdl-34351275

ABSTRACT

Methadone maintenance treatment (MMT) can alleviate opioid dependence. However, MMT possibly increases the risk of motor vehicle collisions. The current study first provided a preliminary estimate of motor vehicle collision incidence rates. Then, in this population-based retrospective cohort study with frequency-matched controls, opiate adults receiving MMT (cases) and those not receiving MMT (controls) were identified at a 1:2 ratio by linking data from several nationwide administrative registry databases. From 2009 to 2016, the crude incidence rate of motor vehicle collisions was lowest in the general adult population, followed by that in opiate adults, and highest in adults receiving MMT. The incidence rates of motor vehicle collisions were significantly higher in opiate users receiving MMT than in those not receiving MMT. Kaplan-Meier curves of the incidence of motor vehicle collisions differed significantly between groups, with a significantly increased risk during the first 90 days of follow-up. In conclusion, among opiate users, drivers receiving MMT have a higher motor vehicle collision risk than those not receiving MMT, and road safety in these drivers deserves attention, particularly during the first 90 days of MMT.
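The time-to-event comparison described above can be sketched with the lifelines package; the follow-up data below are synthetic and the variable names are assumptions, so this only illustrates the shape of a Kaplan-Meier and log-rank analysis.

```python
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(3)
df = pd.DataFrame({
    "days_to_collision_or_censor": rng.exponential(400, 3000).clip(max=730),
    "collision": rng.integers(0, 2, 3000),
    "mmt": rng.integers(0, 2, 3000),   # 1 = receiving methadone maintenance treatment
})

kmf = KaplanMeierFitter()
for group, label in [(1, "MMT"), (0, "no MMT")]:
    sub = df[df["mmt"] == group]
    kmf.fit(sub["days_to_collision_or_censor"], event_observed=sub["collision"], label=label)
    print(label, kmf.median_survival_time_)

res = logrank_test(
    df.loc[df.mmt == 1, "days_to_collision_or_censor"],
    df.loc[df.mmt == 0, "days_to_collision_or_censor"],
    event_observed_A=df.loc[df.mmt == 1, "collision"],
    event_observed_B=df.loc[df.mmt == 0, "collision"],
)
print(res.p_value)
```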


In 2019, an estimated 58 million people used opioids, a group of substances that includes drugs like heroin and morphine. Dependence on opioids can be managed using a prescribed dose of an opioid called methadone, which is administered through a controlled treatment plan. This so-called methadone maintenance treatment manages withdrawal symptoms in opioid-dependent individuals and can reduce the occurrence of overdose, criminal activity and transmission of diseases such as HIV. However, methadone acts on the same brain receptors as other opioids, and individuals receiving methadone may experience impaired motor and cognitive functioning, including reduced driving ability. It is therefore important to know whether methadone maintenance treatment may increase an individual's risk of causing road accidents. To assess the motor vehicle collision risk associated with methadone maintenance treatment, Yang et al. analysed data from the Taiwan National Health Insurance Research Database and six Taiwanese administrative registries, including the ministries of health and welfare, interior and justice, and registries for substitution maintenance therapy, road accidents and the National Police Agency. Initial analyses found that individuals receiving treatment had a higher risk of being involved in car accidents than the general adult population or those without methadone maintenance treatment. Further tests showed that individuals receiving treatment were at a three times higher risk of collisions than individuals not receiving treatment, particularly in the first 90 days. These findings may help individuals undergoing methadone maintenance treatment manage their risk of motor vehicle collisions. Further investigation is needed to reveal the underlying mechanisms of methadone-related impairment of driving ability.


Subject(s)
Accidents, Traffic/statistics & numerical data , Analgesics, Opioid/adverse effects , Methadone/administration & dosage , Adult , Cohort Studies , Female , Humans , Male , Middle Aged , Motor Vehicles , Opiate Substitution Treatment/adverse effects , Retrospective Studies , Risk , Taiwan , Young Adult
15.
J Clin Med ; 10(9)2021 Apr 26.
Article in English | MEDLINE | ID: mdl-33925973

ABSTRACT

BACKGROUND: The aim of this study was to develop and evaluate a machine learning (ML) model to predict invasive bacterial infections (IBIs) in young febrile infants visiting the emergency department (ED). METHODS: This retrospective study was conducted in the EDs of three medical centers across Taiwan from 2011 to 2018. We included patients aged 0-60 days who visited the ED with clinical symptoms of fever. We developed three different ML algorithms, including logistic regression (LR), support vector machine (SVM), and extreme gradient boosting (XGBoost), and compared their performance at predicting IBIs with that of a previously validated scoring system (IBI score). RESULTS: During the study period, 4211 patients were included, of whom 126 (3.1%) had IBI. A total of eight, five, and seven features were selected for the LR, SVM, and XGBoost models, respectively, through the feature selection process. The ML models achieved better AUROC values when predicting IBIs in young infants than the IBI score (LR: 0.85 vs. SVM: 0.84 vs. XGBoost: 0.85 vs. IBI score: 0.70, p-value < 0.001). Using a cost-sensitive learning algorithm, all ML models showed better specificity in predicting IBIs at a 90% sensitivity level compared to an IBI score > 2 (LR: 0.59 vs. SVM: 0.60 vs. XGBoost: 0.57 vs. IBI score > 2: 0.43, p-value < 0.001). CONCLUSIONS: All ML models developed in this study outperformed the traditional scoring system in stratifying low-risk febrile infants at a standardized sensitivity level.
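The "fixed 90% sensitivity" comparison amounts to picking the probability threshold that reaches 90% sensitivity on a validation set and reading off the corresponding specificity. A hedged sketch with synthetic placeholders:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
X = rng.normal(size=(4211, 8))                 # selected clinical/lab features (placeholder)
y = (rng.random(4211) < 0.031).astype(int)     # ~3.1% IBI prevalence, as reported

X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
clf = LogisticRegression(max_iter=1000, class_weight="balanced").fit(X_tr, y_tr)

fpr, tpr, thresholds = roc_curve(y_val, clf.predict_proba(X_val)[:, 1])
idx = np.argmax(tpr >= 0.90)                   # first threshold reaching 90% sensitivity
print("threshold:", thresholds[idx],
      "sensitivity:", tpr[idx],
      "specificity:", 1 - fpr[idx])
```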

16.
Diagnostics (Basel) ; 11(1)2021 Jan 06.
Article in English | MEDLINE | ID: mdl-33419013

ABSTRACT

Prediction of functional outcome in ischemic stroke patients is useful for clinical decisions. Previous studies mostly elaborate on the prediction of favorable outcomes. Miserable outcomes, which are usually defined as modified Rankin Scale (mRS) 5-6, should be considered as well before further invasive intervention. By using a machine learning algorithm, we aimed to develop a multiclass classification model for outcome prediction in acute ischemic stroke patients requiring reperfusion therapy. This was a retrospective study performed at a stroke medical center in Taiwan. Patients with acute ischemic stroke who visited between January 2016 and December 2019 and who were candidates for reperfusion therapy were included. Clinical outcomes were classified as favorable outcome, intermediate outcome, and miserable outcome. We developed four different multiclass machine learning models (Logistic Regression, Support Vector Machine, Random Forest, and Extreme Gradient Boosting) to predict clinical outcomes and compared their performance to the DRAGON score. A sample of 590 patients was included in this study. Of them, 180 (30.5%) had favorable outcomes and 152 (25.8%) had miserable outcomes. All selected machine learning models outperformed the DRAGON score on accuracy of outcome prediction (Logistic Regression: 0.70, Support Vector Machine: 0.67, Random Forest: 0.69, and Extreme Gradient Boosting: 0.67, vs. DRAGON: 0.51, p < 0.001). Among all selected models, Logistic Regression also had a better performance than the DRAGON score on positive predictive value, sensitivity, and specificity. Compared with the DRAGON score, the multiclass machine learning approach showed better performance on the prediction of the 3-month functional outcome of acute ischemic stroke patients requiring reperfusion therapy.
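A toy sketch of the multiclass setup (favorable / intermediate / miserable) using multinomial logistic regression is shown below; the features, class proportions, and data are placeholders, not the study's variables.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)
X = rng.normal(size=(590, 10))                                # e.g., age, NIHSS, glucose, ... (assumed)
y = rng.choice([0, 1, 2], size=590, p=[0.305, 0.437, 0.258])  # favorable / intermediate / miserable

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)
# With the lbfgs solver, LogisticRegression handles the three classes multinomially.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X_tr, y_tr)
print(accuracy_score(y_te, clf.predict(X_te)))
```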

17.
PLoS One ; 15(7): e0236443, 2020.
Article in English | MEDLINE | ID: mdl-32716954

ABSTRACT

OBJECTIVES: Patients with Parkinson's disease (PD) have a higher prevalence of depression than the general population; however, the risk factors for depression in PD remain uncertain. METHODS/DESIGN: Using the 2000-2010 Taiwan National Health Insurance Research Database, we selected 1767 patients aged ≧ 40 years with new-onset PD during 2000-2009. Among them, 324 patients with a new incidence of depression were enrolled as cases and 972 patients without depression were randomly selected as controls. The groups were frequency-matched at a ratio of 1:3 by age, sex, and index year. Thus, this nested case-control study compared differences between the cases and the controls. Logistic regression models were used to identify risk factors for depression in PD. RESULTS: Compared with the controls, the odds ratio (OR) of anxiety disorders in the cases was 1.53 (95% confidence interval [95% CI], 1.16-2.02; P = 0.003), after adjusting for the confounding factors of age, sex, index year, geographic region, urban level, monthly income, and other coexisting medical conditions. The OR for sleep disturbances in the cases was 1.49 (95% CI, 1.14-1.96; P = 0.004) compared to the controls, after adjusting for these confounding factors. In contrast, physical comorbidities were not significantly associated with depression in PD. CONCLUSIONS: In the present study, depression in PD was significantly associated with anxiety disorders and sleep disturbances. Integrated care for early identification and treatment of neuropsychiatric comorbidities is crucial in patients with new-onset PD to prevent further deterioration.
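Adjusted odds ratios of the kind reported above are typically obtained from a logistic model with covariate adjustment, exponentiating the coefficients and their confidence limits. A minimal sketch with hypothetical variable names and synthetic data:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 1296  # 324 cases + 972 controls
df = pd.DataFrame({
    "depression": rng.integers(0, 2, n),        # outcome: incident depression in PD
    "anxiety": rng.integers(0, 2, n),
    "sleep_disturbance": rng.integers(0, 2, n),
    "age": rng.integers(40, 90, n),
    "male": rng.integers(0, 2, n),
})

X = sm.add_constant(df[["anxiety", "sleep_disturbance", "age", "male"]])
fit = sm.Logit(df["depression"], X).fit(disp=False)

odds_ratios = np.exp(fit.params)                # exponentiated coefficients = adjusted ORs
ci = np.exp(fit.conf_int())                     # 95% confidence intervals on the OR scale
print(pd.concat([odds_ratios.rename("OR"), ci], axis=1))
```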


Subject(s)
Depression/epidemiology , Parkinson Disease/psychology , Adult , Aged , Case-Control Studies , Depression/drug therapy , Female , Humans , Male , Middle Aged , Prescription Drugs , Risk Factors , Taiwan/epidemiology
18.
Article in English | MEDLINE | ID: mdl-32512940

ABSTRACT

In the real world, dynamic changes in air pollutants and meteorological factors coexist simultaneously. Studies identifying the effects of individual pollutants on acute exacerbation (AE) of asthma may overlook the health effects of the overall combination. A comprehensive study examining the influence of air pollution and meteorological factors is therefore required. Asthma AE data from emergency room (ER) visits were collected from the Taiwan National Health Insurance Research Database. Complete monitoring data for air pollutants (SO2; NO2; O3; CO; PM2.5; PM10) and meteorological factors were collected from the Environmental Protection Agency monitoring stations. A bi-directional case-crossover analysis was used to investigate the effects of air pollution and meteorological factors on asthma AE. Across age groups, a 1 °C temperature increase was protective against asthma ER visits, with ORs of 0.981 (95% CI, 0.971-0.991) and 0.985 (95% CI, 0.975-0.994) for pediatric and adult patients, respectively. Children, especially younger females, are more susceptible than adults to asthma AE induced by outdoor air pollution. Meteorological factors are important modulators of asthma AE in both asthmatic children and adults. When studying the effects of air pollution on asthma AE, meteorological factors should be considered.
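In a case-crossover design, each ER visit serves as its own stratum, with exposures in the hazard period compared against control periods via conditional logistic regression. The sketch below assumes statsmodels' ConditionalLogit class is available; the data, exposure names, and the choice of two control periods per visit are illustrative assumptions, not the study's specification.

```python
import numpy as np
import pandas as pd
from statsmodels.discrete.conditional_models import ConditionalLogit

rng = np.random.default_rng(8)
rows = []
for visit_id in range(500):
    for case in (1, 0, 0):                       # hazard period + two control periods (assumed)
        rows.append({
            "visit_id": visit_id,
            "case": case,
            "temperature_c": rng.normal(25, 5),
            "pm25": rng.normal(30, 10),
        })
df = pd.DataFrame(rows)

# Strata are defined by visit_id, so each visit is compared only with itself.
model = ConditionalLogit(df["case"], df[["temperature_c", "pm25"]], groups=df["visit_id"])
result = model.fit()
print(np.exp(result.params))                     # ORs per 1-unit increase in each exposure
```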


Subject(s)
Air Pollutants , Air Pollution , Asthma , Adolescent , Adult , Air Pollutants/toxicity , Asthma/etiology , Child , Female , Humans , Male , Meteorological Concepts , Particulate Matter/toxicity , Taiwan , Young Adult
19.
Diagnostics (Basel) ; 10(5)2020 May 15.
Article in English | MEDLINE | ID: mdl-32429293

ABSTRACT

Blood culture is frequently used to detect bacteremia in febrile children. However, a high rate of negative or false-positive blood culture results is common at the pediatric emergency department (PED). The aim of this study was to use machine learning to build a model that could predict bacteremia in febrile children. We conducted a retrospective case-control study of febrile children who presented to the PED from 2008 to 2015. We adopted machine learning methods and cost-sensitive learning to establish a predictive model of bacteremia. We enrolled 16,967 febrile children with blood culture tests during the eight-year study period. Only 146 febrile children had true bacteremia, and more than 99% of febrile children had a contaminant or negative blood culture result. The maximum areas under the curve for predicting bacteremia were 0.768 with logistic regression and 0.832 with support vector machines. Using the predictive model, we can categorize febrile children by risk value into five classes. Class 5 had the highest probability of having bacteremia, while class 1 had no risk. Obtaining blood cultures in febrile children at the PED rarely identifies a causative pathogen. Prediction models can help physicians determine whether patients have bacteremia and may reduce unnecessary expenses.
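The risk-class idea amounts to binning each child's predicted bacteremia probability into ordered classes. The sketch below uses logistic regression for brevity (the study also used SVM), and the cut points, features, and data are placeholders rather than the study's values.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(9)
X = rng.normal(size=(16967, 10))                 # CBC, CRP, vital signs, ... (placeholder)
y = (rng.random(16967) < 0.009).astype(int)      # ~146/16,967 true bacteremia

clf = make_pipeline(StandardScaler(),
                    LogisticRegression(max_iter=1000, class_weight="balanced"))
clf.fit(X, y)

prob = clf.predict_proba(X)[:, 1]
# Map predicted probabilities to five ordered risk classes (cut points are arbitrary here).
risk_class = pd.cut(prob, bins=[0, 0.01, 0.05, 0.2, 0.5, 1.0],
                    labels=[1, 2, 3, 4, 5], include_lowest=True)
print(pd.Series(risk_class).value_counts().sort_index())
```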
