Results 1 - 18 of 18
1.
Cureus ; 15(7): e42369, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37492036

ABSTRACT

BACKGROUND: Amidst the COVID-19 pandemic, nursing home residents have seen a significant increase in hospitalizations. However, there is a lack of published data on the healthcare provided to these individuals in community hospitals. This knowledge gap hinders our understanding and evaluation of the quality and outcomes of care received by nursing home residents when they are hospitalized for COVID-19 or other medical conditions. Furthermore, there are insufficient data to compare the clinical outcomes of COVID-19-related admissions from nursing facilities between small community hospitals and tertiary care facilities. Further research is essential to identify potential disparities, which may indicate an unequal burden of nursing facility referrals on less-resourced hospitals. OBJECTIVE: We examined the characteristics of COVID-19-related deaths in a community hospital during the first surge of COVID-19 and calculated the proportion of deceased patients who had been transferred from nearby nursing facilities. METHOD: We performed a retrospective review of all cases of COVID-19 admitted to a 160-bed community hospital in Connecticut from January 1, 2020, to August 1, 2020. One hundred seventy-seven patients with COVID-19 admitted to our hospital were included in this study, of whom 70 (70/177, 39.54%) were transferred from nearby nursing facilities. RESULTS: The overall mortality rate in our community hospital was 15.25% (27/177), and the majority of those who died were from nursing facilities (85.18%, 23/27). In contrast, mortality among the patients admitted from the community was 3.7% (4/107). The patients transferred from a nursing facility had 12.6 times higher odds of 30-day inpatient mortality or referral to hospice (95% CI, 4.1-38.5; p<0.001). CONCLUSION: The majority of COVID-19 deaths in our community hospital occurred among nursing facility referrals. We hypothesize that this high mortality may reflect healthcare inequality due to the unequal burden of nursing facility referrals on less-resourced hospitals.
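For readers checking these figures, a minimal sketch (not from the paper) reproduces the reported odds ratio and Wald 95% CI from the 2x2 cell counts implied by the abstract:

```python
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio with a Wald 95% CI from a 2x2 table:
    a/b = deaths/survivors in the exposed group,
    c/d = deaths/survivors in the unexposed group."""
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # standard error of log(OR)
    return or_, exp(log(or_) - z * se), exp(log(or_) + z * se)

# Counts implied by the abstract: 23 of 70 nursing facility patients died,
# versus 4 of 107 patients admitted from the community
print(odds_ratio_ci(23, 70 - 23, 4, 107 - 4))  # ~(12.6, 4.1, 38.5)
```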

3.
Can J Cardiol ; 36(4): 577-583, 2020 04.
Article in English | MEDLINE | ID: mdl-32220387

ABSTRACT

BACKGROUND: Machine learning (ML) encompasses a wide variety of methods by which artificial intelligence learns to perform tasks when exposed to data. Although detection of myocardial infarction has been facilitated by the introduction of troponin assays, the diagnosis of acute coronary syndromes (ACS) without myocardial damage (without elevation of serum troponin) remains subjective, and its accuracy remains highly dependent on the clinical skills of health care professionals. Application of an ML algorithm may expedite management of ACS for either early discharge or early initiation of ACS management. We aimed to summarize the published studies of ML for the diagnosis of ACS. METHODS: We searched electronic databases, including PubMed, Embase, and Web of Science, from inception up to January 13, 2019, for studies that evaluated ML algorithms for the diagnosis of ACS in patients presenting with chest pain. We then used random-effects bivariate meta-analysis models to summarize the studies. RESULTS: We retained 9 studies that evaluated ML in a total of 6292 patients. The prevalence of ACS in the evaluated cohorts ranged from relatively rare (7%) to common (57%). The pooled sensitivity and specificity were 0.95 and 0.90, respectively. The positive predictive values ranged from 0.64 to 1.0, and the negative predictive values ranged from 0.91 to 1.0. The positive and negative likelihood ratios ranged from 1.6 to 33.0 and 0.01 to 0.13, respectively. CONCLUSIONS: The excellent sensitivity, negative likelihood ratio, and negative predictive values suggest that ML may be useful as an initial triage tool for ruling out ACS.
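As a quick illustration of how the pooled estimates translate into likelihood ratios (standard definitions, not code from the review):

```python
def likelihood_ratios(sensitivity, specificity):
    """Standard definitions: LR+ = sens / (1 - spec); LR- = (1 - sens) / spec."""
    return sensitivity / (1 - specificity), (1 - sensitivity) / specificity

# Pooled estimates from the meta-analysis: sensitivity 0.95, specificity 0.90
lr_pos, lr_neg = likelihood_ratios(0.95, 0.90)
print(lr_pos, lr_neg)  # 9.5 and ~0.056, inside the reported study-level ranges
```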


Subject(s)
Acute Coronary Syndrome/diagnosis; Machine Learning; Artificial Intelligence; Humans
4.
MMWR Morb Mortal Wkly Rep ; 68(1): 1-5, 2019 Jan 11.
Article in English | MEDLINE | ID: mdl-30629574

ABSTRACT

The drug epidemic in the United States continues to evolve. The drug overdose death rate has rapidly increased among women (1,2), although within this demographic group, the increase in overdose death risk is not uniform. From 1999 to 2010, the largest percentage changes in the rates of overall drug overdose deaths were among women in the age groups 45-54 years and 55-64 years (1); however, this finding does not take into account trends in specific drugs or consider changes in age group distributions in drug-specific overdose death rates. To target prevention strategies to address the epidemic among women in these age groups, CDC examined overdose death rates among women aged 30-64 years during 1999-2017, overall and by drug subcategories (antidepressants, benzodiazepines, cocaine, heroin, prescription opioids, and synthetic opioids, excluding methadone). Age distribution changes in drug-specific overdose death rates were calculated. Among women aged 30-64 years, the unadjusted drug overdose death rate increased 260%, from 6.7 deaths per 100,000 population (4,314 total drug overdose deaths) in 1999 to 24.3 (18,110) in 2017. The number and rate of deaths involving antidepressants, benzodiazepines, cocaine, heroin, and synthetic opioids each increased during this period. Prescription opioid-related deaths increased between 1999 and 2017 among women aged 30-64 years, with the largest increases among those aged 55-64 years. Interventions to address the rise in drug overdose deaths include implementing the CDC Guideline for Prescribing Opioids for Chronic Pain (3), reviewing records of controlled substance prescribing (e.g., prescription drug monitoring programs, health insurance programs), and expanding the capacity of drug use disorder treatment and linkage to care, especially for middle-aged women with drug use disorders.
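A one-line check of the reported rate increase (recomputed from the published crude rates; the report rounds to 260%):

```python
def pct_increase(old_rate, new_rate):
    """Percent increase between two crude rates (per 100,000 population)."""
    return (new_rate - old_rate) / old_rate * 100

# 1999 vs. 2017 overdose death rates among women aged 30-64, from the report
print(round(pct_increase(6.7, 24.3)))  # ~263; reported rounded as 260%
```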


Subject(s)
Drug Overdose/mortality; Adult; Female; Humans; Middle Aged; United States/epidemiology
5.
Curr Neurol Neurosci Rep ; 18(6): 32, 2018 04 20.
Article in English | MEDLINE | ID: mdl-29679162

ABSTRACT

PURPOSE OF REVIEW: Cardiac troponin levels in the blood are an important biomarker of acute coronary events, but may also be elevated in the context of acute ischemic stroke without an obvious concurrent myocardial insult. The objective of this study and systematic review is to determine how high the circulating troponin I level can rise due to ischemic stroke. RECENT FINDINGS: Anonymized medical records from Vanderbilt University Medical Center were reviewed, identifying 151,972 unique acute ischemic stroke events, of which 1226 met criteria for inclusion in this study. Included patients had at least one measurement of troponin I level documented during the hospital visit when an acute ischemic stroke was diagnosed and were free of known cardiac/coronary disease, renal impairment, sepsis, or other confounders. In this group, 20.6% had a circulating troponin I level elevated over the reference range, but 99% were below 2.13 ng/mL. This is significantly lower than the distribution observed in a cohort of 89,423 unique cases of acute coronary syndrome (p < 2.2e-16). A systematic review of published literature further supported the conclusion that troponin I level may increase due to an acute ischemic stroke, but rarely rises above 2 ng/mL. Because of the shared risk factors between stroke and coronary artery disease, clinicians caring for patients with acute ischemic stroke should always have a high index of suspicion for comorbid cardiac and cardiovascular disease. In general, troponin I levels greater than 2 ng/mL should not be attributed to an acute ischemic stroke, but should prompt a thorough evaluation for coronary artery disease.
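As an illustration only, a sketch of the kind of distribution summary used here; the 0.04 ng/mL upper limit of normal is an assumed assay-specific reference value, not a figure from the study, and the values are synthetic:

```python
import numpy as np

def summarize_troponin(values, uln=0.04, cutoff=2.13):
    """Share of troponin I values above the upper limit of normal (ULN),
    the 99th percentile, and the share above a 2.13 ng/mL threshold."""
    v = np.asarray(values, dtype=float)
    return {"pct_above_uln": 100 * np.mean(v > uln),
            "p99_ng_per_ml": float(np.percentile(v, 99)),
            "pct_above_cutoff": 100 * np.mean(v > cutoff)}

# Demo on synthetic right-skewed values (ng/mL), sized like the stroke cohort
rng = np.random.default_rng(0)
print(summarize_troponin(rng.lognormal(mean=-4.0, sigma=1.5, size=1226)))
```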


Subject(s)
Brain Ischemia/blood; Stroke/blood; Troponin I/blood; Biomarkers/blood; Brain Ischemia/diagnosis; Comorbidity; Humans; Risk Factors; Stroke/diagnosis
6.
J Environ Manage ; 198(Pt 1): 213-220, 2017 Aug 01.
Article in English | MEDLINE | ID: mdl-28460328

ABSTRACT

Antimicrobial resistance genes (ARGs) present in the environment pose a risk to human health due to their potential for transfer to human pathogens. Surveillance is an integral part of mitigating environmental dissemination. Quantification of the mobile genetic element class 1 integron-integrase gene (intI1) has been proposed as a surrogate for measuring multiple ARGs. Measurement of such indicator genes can be further simplified by adopting emerging nucleic acid methods such as loop-mediated isothermal amplification (LAMP). In this study, LAMP assays were designed and tested for estimating relative abundance of the intI1 gene, which included design of a universal bacterial 16S rRNA gene assay. Following validation of sensitivity and specificity with known bacterial strains, the assays were tested using DNA extracted from river and lake samples. Results showed a significant Pearson correlation (R2 = 0.8) between the intI1 gene LAMP assay and ARG relative abundance (measured via qPCR). To demonstrate the ruggedness of the LAMP assays, experiments were also run by relatively "untrained" personnel, volunteer undergraduate students at a local community college, using a hand-held real-time DNA analysis device, the Gene-Z. Overall, results support use of the intI1 gene as an indicator of ARGs, and the LAMP assays offer an opportunity for volunteers to monitor environmental samples for anthropogenic pollution outside of a specialized laboratory.
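A minimal sketch of the correlation analysis described above, run on synthetic stand-in data (the study's measurements are not reproduced here):

```python
import numpy as np
from scipy import stats

# Synthetic paired measurements for 20 water samples: LAMP-estimated intI1
# relative abundance vs. qPCR-measured ARG relative abundance (log10 scale)
rng = np.random.default_rng(0)
lamp = rng.uniform(-4, -1, 20)
qpcr = lamp + rng.normal(0, 0.3, 20)

r, p = stats.pearsonr(lamp, qpcr)
print(f"r = {r:.2f}, R^2 = {r * r:.2f}, p = {p:.2g}")
```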


Subject(s)
Drug Resistance, Microbial; Environmental Monitoring; Integrases/genetics; RNA, Ribosomal, 16S; Humans; Integrons
7.
J Surg Oncol ; 115(3): 257-265, 2017 Mar.
Article in English | MEDLINE | ID: mdl-28105636

ABSTRACT

BACKGROUND: The most cost-effective reconstruction after resection of bone sarcoma is unknown. The goal of this study was to compare the cost-effectiveness of osteoarticular allograft to endoprosthetic reconstruction of the proximal tibia or distal femur. METHODS: A Markov model was used. Revision and complication rates were taken from existing studies. Costs were based on Medicare reimbursement rates and implant prices. Health-state utilities were derived from the Health Utilities Index 3 survey with additional assumptions. Incremental cost-effectiveness ratios (ICERs) were used, with less than $100,000 per quality-adjusted life year (QALY) considered cost-effective. Sensitivity analyses were performed for comparison over a range of costs, utilities, complication rates, and revision rates. RESULTS: Osteoarticular allografts and 30% price-discounted endoprostheses were cost-effective, with ICERs of $92.59 and $6,114.77 per QALY, respectively. One-way sensitivity analysis revealed that discounted endoprostheses were favored if allografts cost over $21,900 or endoprostheses cost less than $51,900. Allograft reconstruction was favored over discounted endoprosthetic reconstruction if the allograft complication rate was less than 1.3%. Allografts were more cost-effective than full-price endoprostheses. CONCLUSIONS: Osteoarticular allografts and price-discounted endoprosthetic reconstructions are cost-effective. Sensitivity analysis, using plausible complication and revision rates, favored the use of discounted endoprostheses over allografts. Allografts are more cost-effective than full-price endoprostheses.
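For reference, the ICER arithmetic underlying these comparisons; the example inputs are hypothetical, not outputs of the study's Markov model:

```python
def icer(cost_new, qaly_new, cost_ref, qaly_ref):
    """Incremental cost-effectiveness ratio: extra dollars per extra QALY."""
    return (cost_new - cost_ref) / (qaly_new - qaly_ref)

# Hypothetical strategies: $5,000 more expensive, 0.05 more QALYs gained
print(icer(105_000, 5.25, 100_000, 5.20))  # 100000.0, right at the threshold
```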


Subject(s)
Arthroplasty, Replacement, Knee/economics; Bone Neoplasms/surgery; Bone Transplantation/economics; Osteosarcoma/surgery; Plastic Surgery Procedures/economics; Arthroplasty, Replacement, Knee/methods; Bone Neoplasms/economics; Bone Transplantation/methods; Cost-Benefit Analysis; Femur/surgery; Humans; Knee Joint/surgery; Markov Chains; Osteosarcoma/economics; Plastic Surgery Procedures/methods; Tibia/surgery; Transplantation, Homologous
8.
J Am Med Inform Assoc ; 24(1): 162-171, 2017 01.
Article in English | MEDLINE | ID: mdl-27497800

ABSTRACT

OBJECTIVE: Phenotyping algorithms applied to electronic health record (EHR) data enable investigators to identify large cohorts for clinical and genomic research. Algorithm development is often iterative, depends on fallible investigator intuition, and is time- and labor-intensive. We developed and evaluated 4 types of phenotyping algorithms and categories of EHR information to identify hypertensive individuals and controls and provide a portable module for implementation at other sites. MATERIALS AND METHODS: We reviewed the EHRs of 631 individuals followed at Vanderbilt for hypertension status. We developed features and phenotyping algorithms of increasing complexity. Input categories included International Classification of Diseases, Ninth Revision (ICD9) codes, medications, vital signs, narrative-text search results, and Unified Medical Language System (UMLS) concepts extracted using natural language processing (NLP). We developed a module and tested portability by replicating 10 of the best-performing algorithms at the Marshfield Clinic. RESULTS: Random forests using billing codes, medications, vitals, and concepts had the best performance, with a median area under the receiver operating characteristic curve (AUC) of 0.976. Normalized sums of all 4 categories also performed well (0.959 AUC). The best non-NLP algorithm combined normalized ICD9 codes, medications, and blood pressure readings, with a median AUC of 0.948. Blood pressure cutoffs or ICD9 code counts alone had AUCs of 0.854 and 0.908, respectively. Marshfield Clinic results were similar. CONCLUSION: This work shows that billing codes or blood pressure readings alone yield good hypertension classification performance. However, even simple combinations of input categories improve performance. The most complex algorithms classified hypertension with excellent recall and precision.
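A minimal sketch of the best-performing approach, using synthetic stand-in features (the study's EHR inputs and gold-standard labels are not reproduced here):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

# Synthetic stand-ins for the four input categories (billing codes,
# medications, vitals, NLP-derived concepts) for 631 reviewed patients
rng = np.random.default_rng(0)
X = rng.normal(size=(631, 20))
y = (X[:, 0] + X[:, 1] + rng.normal(size=631) > 0).astype(int)

probs = cross_val_predict(
    RandomForestClassifier(n_estimators=500, random_state=0),
    X, y, cv=5, method="predict_proba")[:, 1]
print(f"AUC = {roc_auc_score(y, probs):.3f}")
```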


Subject(s)
Algorithms; Electronic Health Records; Hypertension/diagnosis; Machine Learning; Aged; Blood Pressure Determination; Clinical Coding; Female; Humans; Information Storage and Retrieval/methods; Male; Middle Aged; Natural Language Processing; Phenotype; ROC Curve
9.
J Orthop Trauma ; 31(1): 21-26, 2017 Jan.
Article in English | MEDLINE | ID: mdl-27611667

ABSTRACT

OBJECTIVES: The purpose of this study was to explore the relationship between preoperative Charlson Comorbidity Index (CCI) and postoperative length of stay (LOS) for lower extremity and hip/pelvis orthopaedic trauma patients. DESIGN: Retrospective. SETTING: Urban level 1 trauma center. PATIENTS/PARTICIPANTS: A total of 1561 patients treated for isolated lower extremity and pelvis fractures between 2000 and 2012. INTERVENTIONS: Surgical intervention for fractures. MAIN OUTCOME MEASUREMENTS: The main outcome metric was LOS. Negative binomial regression analysis was used to examine the association between CCI and LOS while controlling for significant confounders. RESULTS: One thousand five hundred sixty-one patients met the inclusion criteria, 1302 (83.4%) of whom had lower extremity injuries and 259 (16.6%) hip/pelvis trauma. A total of 1001 (64.1%) patients presented with a CCI score of 1 and stayed an average of 7.9 days. Patients with a CCI of 3 experienced a mean LOS 1.2 days longer than patients presenting with a CCI of 1, whereas patients presenting with a CCI score of 5 stayed an average of 4.6 days longer. After controlling for age, race, American Society of Anesthesiologists score, sex, anesthesia type, and anesthesia time, a higher preoperative CCI was found to be associated with longer LOS for patients with lower extremity fractures (incidence rate ratio: 1.04, P = 0.01). No significant association was found between CCI and LOS for patients with hip/pelvic fractures. CONCLUSIONS: This study demonstrated the potential utility of the CCI as a predictor of hospital LOS for lower extremity patients; however, the effect may be small given the modest incidence rate ratio. Further studies are needed to clarify the predictive value of the CCI for different types of orthopaedic injuries. LEVEL OF EVIDENCE: Prognostic Level III. See Instructions for Authors for a complete description of levels of evidence.
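A sketch of the negative binomial model described above, fit to synthetic data; the simulated coefficient roughly mirrors the reported incidence rate ratio of 1.04 but is illustrative only:

```python
import numpy as np
import statsmodels.api as sm

# Synthetic data: hospital LOS (days) as a function of preoperative CCI
rng = np.random.default_rng(0)
cci = rng.integers(1, 6, 1561).astype(float)
los = rng.poisson(np.exp(1.9 + 0.04 * cci))  # built-in IRR of exp(0.04) ~ 1.04

fit = sm.GLM(los, sm.add_constant(cci),
             family=sm.families.NegativeBinomial(alpha=1.0)).fit()
print(np.exp(fit.params[1]))  # incidence rate ratio per one-point CCI increase
```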


Subject(s)
Fractures, Bone/mortality; Fractures, Bone/surgery; Length of Stay/statistics & numerical data; Age Distribution; Comorbidity; Female; Humans; Incidence; Leg Injuries; Male; Middle Aged; New York/epidemiology; Prevalence; Retrospective Studies; Risk Factors; Sex Distribution; Survival Rate; Trauma Severity Indices; Utilization Review
10.
Pac Symp Biocomput ; 22: 545-556, 2017.
Article in English | MEDLINE | ID: mdl-27897005

ABSTRACT

The blood thinner warfarin has a narrow therapeutic range and high inter- and intra-patient variability in therapeutic doses. Several studies have shown that pharmacogenomic variants help predict stable warfarin dosing. However, retrospective and randomized controlled trials that employ dosing algorithms incorporating pharmacogenomic variants underperform in African Americans. This study sought to determine whether: 1) including additional variants associated with warfarin dose in African Americans, 2) predicting within single ancestry groups rather than a combined population, or 3) using percentage African ancestry rather than observed race would improve warfarin dosing algorithms in African Americans. Using BioVU, the Vanderbilt University Medical Center biobank linked to electronic medical records, we compared 25 modeling strategies to existing algorithms using a cohort of 2,181 warfarin users (1,928 whites, 253 blacks). We found that approaches incorporating additional variants increased model accuracy, but not in clinically significant ways. Race stratification increased model fidelity for African Americans, but the improvement was small and not likely to be clinically significant. Use of percent African ancestry improved model fit in the context of race misclassification.
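As illustration, a toy linear dosing model with genotype and ancestry covariates; the variables and effect sizes are hypothetical stand-ins, not any of the study's 25 strategies:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical covariates for stable weekly warfarin dose: age, variant-allele
# counts for VKORC1 and CYP2C9, and fraction African ancestry (all simulated)
rng = np.random.default_rng(0)
n = 500
X = np.column_stack([rng.uniform(30, 80, n),    # age, years
                     rng.integers(0, 3, n),     # VKORC1 -1639G>A alleles
                     rng.integers(0, 3, n),     # CYP2C9 *2/*3 alleles
                     rng.uniform(0, 1, n)])     # ancestry fraction
dose = (48 - 0.2 * X[:, 0] - 8 * X[:, 1] - 6 * X[:, 2] + 5 * X[:, 3]
        + rng.normal(0, 5, n))
print(LinearRegression().fit(X, dose).coef_)  # recovers the simulated effects
```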


Subject(s)
Pharmacogenomic Variants; Warfarin; Adult; Aged; Female; Humans; Male; Middle Aged; Algorithms; Anticoagulants/administration & dosage; Anticoagulants/pharmacokinetics; Black or African American; Cohort Studies; Computational Biology; Cytochrome P-450 CYP2C9/genetics; Gene Frequency; Models, Genetic; Polymorphism, Single Nucleotide; Vitamin K Epoxide Reductases/genetics; Warfarin/administration & dosage; Warfarin/pharmacokinetics; White
11.
Arch Trauma Res ; 5(1): e32915, 2016 Mar.
Article in English | MEDLINE | ID: mdl-27148502

ABSTRACT

BACKGROUND: Deep venous thrombosis (DVT) and pulmonary embolism (PE) are recognized as major causes of morbidity and mortality in orthopaedic trauma patients. Despite the high incidence of these complications following orthopaedic trauma, there is a paucity of literature investigating the clinical risk factors for DVT in this specific population. As our healthcare system increasingly emphasizes quality measures, it is critical for orthopaedic surgeons to understand the clinical factors that increase the risk of DVT following orthopaedic trauma. OBJECTIVES: Utilizing the ACS-NSQIP database, we sought to determine the incidence and identify independent risk factors for DVT following orthopaedic trauma. PATIENTS AND METHODS: Using current procedural terminology (CPT) codes for orthopaedic trauma procedures, we identified a prospective cohort of patients from the 2006 to 2013 ACS-NSQIP database. Using Wilcoxon-Mann-Whitney and chi-square tests where appropriate, patient demographics, comorbidities, and operative factors were compared between patients who developed a DVT within 30 days of surgery and those who did not. A multivariate logistic regression analysis was conducted to calculate odds ratios (ORs) and identify independent risk factors for DVT. Significance was set at P < 0.05. RESULTS: 56,299 orthopaedic trauma patients were included in the analysis, of which 473 (0.84%) developed a DVT within 30 days. In univariate analysis, twenty-five variables were significantly associated with the development of a DVT, including age (P < 0.0001), BMI (P = 0.037), diabetes (P = 0.01), ASA score (P < 0.0001) and anatomic region injured (P < 0.0001). Multivariate analysis identified several independent risk factors for development of a DVT including use of a ventilator (OR = 43.67, P = 0.039), ascites (OR = 41.61, P = 0.0038), steroid use (OR = 4.00, P < 0.001), and alcohol use (OR = 2.98, P = 0.0370). Compared to patients with upper extremity trauma, those with lower extremity injuries had significantly increased odds of developing a DVT (OR = 7.55, P = 0.006). The trend toward increased odds of DVT among patients with injuries to the hip/pelvis did not reach statistical significance (OR = 4.51, P = 0.22). Smoking was not found to be an independent risk factor for developing a DVT (P = 0.1217). CONCLUSIONS: This is the largest study to date using the NSQIP database to identify risk factors for DVT in orthopaedic trauma patients. Although the incidence of DVT was low in our cohort, the presence of certain risk factors significantly increased the odds of developing a DVT following orthopaedic trauma. These findings will enable orthopaedic surgeons to target at-risk patients and implement post-operative care protocols aimed at reducing the morbidity and mortality associated with DVT in orthopaedic trauma patients.
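A sketch of the multivariate logistic regression used to obtain odds ratios, on synthetic data (the NSQIP cohort is not reproduced here, and the simulated effects are arbitrary):

```python
import numpy as np
import statsmodels.api as sm

# Synthetic binary predictors: ventilator use, steroid use, lower-extremity
# injury; outcome is 30-day DVT (rare), loosely mirroring the analysis above
rng = np.random.default_rng(0)
n = 5000
X = rng.integers(0, 2, size=(n, 3)).astype(float)
lin = -5 + 1.5 * X[:, 0] + 1.4 * X[:, 1] + 1.1 * X[:, 2]
y = (rng.random(n) < 1 / (1 + np.exp(-lin))).astype(int)

res = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
print(np.exp(res.params[1:]))  # odds ratios for the three risk factors
```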

12.
J Orthop Trauma ; 30(3): e110, 2016 Mar.
Article in English | MEDLINE | ID: mdl-26894642
13.
J Orthop Trauma ; 30(2): 95-9, 2016 Feb.
Article in English | MEDLINE | ID: mdl-26371621

ABSTRACT

OBJECTIVES: The aim of our study was to determine the association between admitting service, medicine or orthopaedics, and length of stay (LOS) for geriatric hip fracture patients. DESIGN: Retrospective. SETTING: Urban level 1 trauma center. PATIENTS/PARTICIPANTS: Six hundred fourteen geriatric hip fracture patients from 2000 to 2009. INTERVENTIONS: Orthopaedic surgery for geriatric hip fracture. MAIN OUTCOME MEASUREMENTS: Patient demographics, medical comorbidities, hospitalization length, and admitting service. Negative binomial regression was used to determine the association between LOS and admitting service. RESULTS: Six hundred fourteen geriatric hip fracture patients were included in the analysis, of whom 49.2% (n = 302) were admitted to the orthopaedic service and 50.8% (n = 312) to the medicine service. The median LOS for patients admitted to orthopaedics was 4.5 days compared with 7 days for patients admitted to medicine (P < 0.0001). Readmission was also significantly higher for patients admitted to medicine (n = 92, 29.8%) than for those admitted to orthopaedics (n = 70, 23.1%). After controlling for important patient factors, it was determined that medicine patients are expected to stay about 1.5 times (incidence rate ratio: 1.48, P < 0.0001) longer in the hospital than orthopaedic patients. CONCLUSIONS: This is the largest study to demonstrate that admission to the medicine service compared with the orthopaedic service increases a geriatric hip fracture patient's expected LOS. Since LOS is a major driver of cost as well as a measure of quality care, it is important to understand the factors that lead to a longer hospital stay to better allocate hospital resources. Based on the results from our institution, orthopaedic surgeons should be aware that admission to medicine might increase a patient's expected LOS. LEVEL OF EVIDENCE: Therapeutic Level III. See Instructions for Authors for a complete description of levels of evidence.
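Using the readmission counts reported above, a two-proportion z-test (an assumed test choice; the paper does not specify which test it used) can be run as:

```python
from statsmodels.stats.proportion import proportions_ztest

# Readmission counts as reported: 92/312 medicine vs. 70/302 orthopaedics
z, p = proportions_ztest(count=[92, 70], nobs=[312, 302])
print(f"z = {z:.2f}, p = {p:.3f}")
```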


Subject(s)
Admitting Department, Hospital/statistics & numerical data; Hip Fractures/epidemiology; Hip Fractures/surgery; Length of Stay/statistics & numerical data; Orthopedics/statistics & numerical data; Patient Admission/statistics & numerical data; Age Distribution; Aged; Aged, 80 and over; Female; Health Services for the Aged; Humans; Male; Middle Aged; Prevalence; Sex Distribution; Tennessee/epidemiology
14.
Adv Health Sci Educ Theory Pract ; 21(1): 33-49, 2016 Mar.
Article in English | MEDLINE | ID: mdl-25952644

ABSTRACT

The Medical College Admission Test (MCAT) is a quantitative metric used by MD and MD-PhD programs to evaluate applicants for admission. This study assessed the validity of the MCAT in predicting training performance measures and career outcomes for MD-PhD students at a single institution. The study population consisted of 153 graduates of the Vanderbilt Medical Scientist Training Program (combined MD-PhD program) who matriculated between 1963 and 2003 and completed dual-degree training. This population was divided into three cohorts corresponding to the version of the MCAT taken at the time of application. Multivariable regression (logistic for binary outcomes and linear for continuous outcomes) was used to analyze factors associated with outcome measures. The MCAT score and undergraduate GPA (uGPA) were treated as independent variables; medical and graduate school grades, time-to-PhD defense, USMLE scores, publication number, and career outcome were dependent variables. For cohort 1 (1963-1977), MCAT score was not associated with any assessed outcome, although uGPA was associated with medical school preclinical GPA (mspGPA) and graduate school GPA (gsGPA). For cohort 2 (1978-1991), MCAT score was associated with USMLE Step II score and inversely correlated with publication number, and uGPA was associated with mspGPA and medical school clinical GPA (mscGPA). For cohort 3 (1992-2003), the MCAT score was associated with mscGPA, and uGPA was associated with gsGPA. Overall, MCAT score and uGPA were inconsistent or weak predictors of training metrics and career outcomes for this population of MD-PhD students.


Subject(s)
College Admission Test; Education, Medical, Graduate/trends; Canada; Educational Measurement; Forecasting; Humans; Schools, Medical; Students, Medical; United States
15.
J Am Med Inform Assoc ; 22(5): 1054-71, 2015 Sep.
Article in English | MEDLINE | ID: mdl-26104740

ABSTRACT

OBJECTIVE: Hospital-acquired acute kidney injury (HA-AKI) is a potentially preventable cause of morbidity and mortality. Identifying high-risk patients prior to the onset of kidney injury is a key step towards AKI prevention. MATERIALS AND METHODS: A national retrospective cohort of 1,620,898 patient hospitalizations from 116 Veterans Affairs hospitals was assembled from electronic health record (EHR) data collected from 2003 to 2012. HA-AKI was defined at stage 1+, stage 2+, and dialysis. EHR-based predictors were identified through logistic regression, least absolute shrinkage and selection operator (lasso) regression, and random forests, and pair-wise comparisons between each were made. Calibration and discrimination metrics were calculated using 50 bootstrap iterations. In the final models, we report odds ratios, 95% confidence intervals, and importance rankings for predictor variables to evaluate their significance. RESULTS: The area under the receiver operating characteristic curve (AUC) for the different model outcomes ranged from 0.746 to 0.758 in stage 1+, 0.714 to 0.720 in stage 2+, and 0.823 to 0.825 in dialysis. Logistic regression had the best AUC in stage 1+ and dialysis. Random forests had the best AUC in stage 2+ but the least favorable calibration plots. Multiple risk factors were significant in our models, including some nonsteroidal anti-inflammatory drugs, blood pressure medications, antibiotics, and intravenous fluids given during the first 48 h of admission. CONCLUSIONS: This study demonstrated that, although all the models tested had good discrimination, performance characteristics varied between methods, and the random forests models did not calibrate as well as the lasso or logistic regression models. In addition, novel modifiable risk factors were explored and found to be significant.
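A minimal sketch of an L1-penalized (lasso) logistic model with a held-out AUC, one of the three methods compared above, on synthetic stand-in data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic admission features (medications, vitals, labs from the first 48 h)
rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 30))
y = (X[:, :3].sum(axis=1) + rng.normal(size=10_000) > 2).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
lasso.fit(X_tr, y_tr)
print(roc_auc_score(y_te, lasso.predict_proba(X_te)[:, 1]))
```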


Subject(s)
Acute Kidney Injury; Models, Statistical; Aged; Female; Hospitalization; Hospitals, Veterans; Humans; Iatrogenic Disease; Logistic Models; Male; Middle Aged; Prognosis; ROC Curve; Retrospective Studies; Risk; United States; United States Department of Veterans Affairs
16.
Retina ; 34(10): 1997-2002, 2014 Oct.
Article in English | MEDLINE | ID: mdl-24936944

ABSTRACT

PURPOSE: To determine the 1-year and 2-year likelihood of vitrectomy in diabetic patients undergoing initial panretinal photocoagulation (PRP). METHODS: Diabetic eyes receiving initial PRP for proliferative diabetic retinopathy (PDR) were analyzed to determine their risk for vitrectomy based on clinical findings. RESULTS: In total, 374 eyes of 272 patients were analyzed. The percentage of eyes undergoing vitrectomy 1 year and 2 years following initial PRP was 19.1% and 26.2%, respectively. Of the eyes in Group 1 (PDR alone), Group 2 (PDR and vitreous hemorrhage), and Group 3 (PDR and iris neovascularization, vitreous hemorrhage with traction or fibrosis, or fibrosis alone), the percentage receiving pars plana vitrectomy at 1 year and 2 years was 9.73% (18/185) and 15.7% (29/185), 26.9% (43/160) and 34.4% (55/160), and 37.9% (11/29) and 48.3% (14/29), respectively. Eyes in Group 2 had 2.78 times greater likelihood (P < 0.0001) and eyes in Group 3 had 3.54 times higher likelihood (P < 0.0001) of requiring pars plana vitrectomy within 2 years than those with PDR alone. CONCLUSION: Eyes receiving PRP for PDR with associated hemorrhage or traction were more likely to undergo pars plana vitrectomy within 1 year and 2 years following initial PRP compared with eyes with only PDR, providing important prognostic information for PRP-naive patients.


Subject(s)
Diabetic Retinopathy/surgery; Iris/blood supply; Laser Coagulation; Neovascularization, Pathologic/surgery; Vitrectomy/statistics & numerical data; Vitreous Hemorrhage/surgery; Adult; Aged; Aged, 80 and over; Diabetic Retinopathy/diagnosis; Female; Humans; Male; Middle Aged; Neovascularization, Pathologic/diagnosis; Reoperation; Retrospective Studies; Risk Factors; Vitreous Hemorrhage/diagnosis; Young Adult
17.
AMIA Annu Symp Proc ; 2014: 1940-9, 2014.
Article in English | MEDLINE | ID: mdl-25954467

ABSTRACT

Acute coronary syndrome (ACS) accounts for 1.36 million hospitalizations and billions of dollars in costs in the United States alone. A major challenge to diagnosing and treating patients with suspected ACS is the significant symptom overlap between patients with and without ACS. There is a high cost to over- and under-treatment. Guidelines recommend early risk stratification of patients, but many tools lack sufficient accuracy for use in clinical practice. Prognostic indices often misrepresent clinical populations and rely on curated data. We used random forest and elastic net on 20,078 deidentified records containing substantial missing and noisy values to develop models that outperform existing ACS risk prediction tools. We found that the random forest (AUC = 0.848) significantly outperformed elastic net (AUC = 0.818), ridge regression (AUC = 0.810), and the TIMI (AUC = 0.745) and GRACE (AUC = 0.623) scores. Our findings show that random forest applied to noisy and sparse data can perform on par with previously developed scoring metrics.
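A sketch of one way to handle missing values ahead of random forest classification; median imputation is an assumed choice for illustration, and the paper's actual preprocessing may differ:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Synthetic noisy matrix with ~30% of entries missing at random
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 15))
y = (X[:, 0] + rng.normal(size=2000) > 0).astype(int)
X[rng.random(X.shape) < 0.3] = np.nan

clf = make_pipeline(SimpleImputer(strategy="median"),
                    RandomForestClassifier(n_estimators=300, random_state=0))
print(cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean())
```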


Subject(s)
Acute Coronary Syndrome/diagnosis; Algorithms; Artificial Intelligence; Risk Assessment/methods; Area Under Curve; Diagnostic Errors/prevention & control; Humans; Logistic Models; Prognosis; ROC Curve
18.
J Surg Res ; 174(2): 222-30, 2012 May 15.
Article in English | MEDLINE | ID: mdl-22079845

ABSTRACT

BACKGROUND: Optimal treatment for potentially resectable pancreatic cancer is controversial. Resection is considered the only curative treatment, but neoadjuvant chemoradiotherapy may offer significant advantages. MATERIALS AND METHODS: We developed a decision model for potentially resectable pancreatic cancer. Initial therapeutic choices were surgery, neoadjuvant chemoradiotherapy, or no treatment; subsequent decisions offered a second intervention if not prohibited by complications or death. Payoffs were calculated as the median expected survival. We gathered evidence for this model through a comprehensive MEDLINE search. One-way sensitivity analyses were performed. RESULTS: Neoadjuvant chemoradiation is favored over initial surgery, with expected values of 18.6 and 17.7 mo, respectively. The decision is sensitive to the probabilities of treatment mortality and tumor resectability. Threshold probabilities are 7.0% mortality for neoadjuvant chemoradiotherapy, 69.2% resectability on imaging after neoadjuvant therapy, 73.7% resectability at exploration after neoadjuvant therapy, 92.2% resectability at initial resection, and 9.9% surgical mortality following chemoradiotherapy. The decision is sensitive to the utility of time spent in chemoradiotherapy, with surgery favored for utilities less than 0.3 and -0.8 for uncomplicated and complicated chemoradiotherapy, respectively. CONCLUSIONS: The ideal treatment for potentially resectable pancreatic cancer remains controversial, but recent evidence supports a slight benefit for neoadjuvant therapy. Our model shows that the decision is sensitive to the probability of tumor resectability and chemoradiation mortality, but not to rates of other treatment complications. With minimal benefit of one treatment over another based on survival alone, patient preferences will likely play an important role in determining the best treatment.
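The expected-value calculation at the heart of such a decision model, with hypothetical branch payoffs chosen only to illustrate how the reported 18.6-month figure could arise from the 73.7% exploration resectability threshold:

```python
def expected_survival(p_resect, surv_resected, surv_unresected):
    """Expected payoff (months) for one branch of the decision tree."""
    return p_resect * surv_resected + (1 - p_resect) * surv_unresected

# Hypothetical payoffs: 73.7% resectability at exploration after neoadjuvant
# therapy, 23 mo median survival if resected, 6 mo if not (both assumed)
print(expected_survival(0.737, 23.0, 6.0))  # ~18.5, near the 18.6 mo reported
```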


Subject(s)
Decision Support Techniques; Pancreatic Neoplasms/therapy; Humans; Neoadjuvant Therapy