Results 1 - 3 of 3
1.
J Biomed Inform; 86: 109-119, 2018 Oct.
Article in English | MEDLINE | ID: mdl-30195660

ABSTRACT

OBJECTIVE: Evaluate the quality of clinical order practice patterns machine-learned from clinician cohorts stratified by patient mortality outcomes. MATERIALS AND METHODS: Inpatient electronic health records from 2010 to 2013 were extracted from a tertiary academic hospital. Clinicians (n = 1822) were stratified into low-mortality (21.8%, n = 397) and high-mortality (6.0%, n = 110) extremes using a two-sided P-value score quantifying deviation of observed vs. expected 30-day patient mortality rates. Three patient cohorts were assembled: patients seen by low-mortality clinicians, high-mortality clinicians, and an unfiltered crowd of all clinicians (n = 1046, 1046, and 5230 post-propensity score matching, respectively). Predicted order lists were automatically generated from recommender system algorithms trained on each patient cohort and evaluated against (i) real-world practice patterns reflected in patient cases with better-than-expected mortality outcomes and (ii) reference standards derived from clinical practice guidelines. RESULTS: Across six common admission diagnoses, order lists learned from the crowd demonstrated the greatest alignment with guideline references (AUROC range = 0.86-0.91), performing on par with or better than those learned from low-mortality clinicians (0.79-0.84, P < 10⁻⁵) or manually authored hospital order sets (0.65-0.77, P < 10⁻³). The same trend was observed in evaluating model predictions against better-than-expected patient cases, with the crowd model (AUROC mean = 0.91) outperforming the low-mortality model (0.87, P < 10⁻¹⁶) and order set benchmarks (0.78, P < 10⁻³⁵). DISCUSSION: Whether machine-learning models are trained on all clinicians or a subset of experts illustrates a bias-variance tradeoff in data usage. Defining robust metrics to assess quality based on internal (e.g. practice patterns from better-than-expected patient cases) or external reference standards (e.g. clinical practice guidelines) is critical to assess decision support content. CONCLUSION: Learning relevant decision support content from all clinicians is as, if not more, robust than learning from a select subgroup of clinicians favored by patient outcomes.
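The clinician stratification step above — a two-sided P-value score on observed vs. expected 30-day mortality — can be sketched with an exact binomial test. This is an illustrative reading, not the paper's code; the function names and the example counts are assumptions.

```python
# Illustrative sketch (not the paper's implementation): score a clinician's
# deviation of observed vs. expected 30-day mortality with a two-sided
# exact binomial test, then label the extreme as "low" or "high" mortality.
from math import comb


def binom_pmf(k: int, n: int, p: float) -> float:
    """Probability of exactly k deaths among n patients at base rate p."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)


def mortality_deviation(deaths: int, n_patients: int, expected_rate: float):
    """Two-sided P-value: sum probabilities of all outcomes no likelier
    than the observed one (the standard exact two-sided binomial test)."""
    p_obs = binom_pmf(deaths, n_patients, expected_rate)
    p_value = sum(
        binom_pmf(k, n_patients, expected_rate)
        for k in range(n_patients + 1)
        if binom_pmf(k, n_patients, expected_rate) <= p_obs + 1e-12
    )
    direction = "low" if deaths / n_patients < expected_rate else "high"
    return p_value, direction


# Hypothetical example: 3 deaths in 100 admissions vs. an expected 8% rate.
p, direction = mortality_deviation(3, 100, 0.08)
```

Ranking clinicians by this P-value and splitting at the two extremes would yield low- and high-mortality cohorts analogous to those described in the abstract.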


Subject(s)
Data Mining , Decision Support Systems, Clinical , Electronic Health Records , Mortality , Pattern Recognition, Automated , Algorithms , Area Under Curve , Decision Making , Evidence-Based Medicine , Hospitalization , Humans , Inpatients , Machine Learning , Practice Guidelines as Topic , Practice Patterns, Physicians' , ROC Curve , Regression Analysis , Treatment Outcome
2.
AMIA Jt Summits Transl Sci Proc; 2017: 226-235, 2018.
Article in English | MEDLINE | ID: mdl-29888077

ABSTRACT

Clinical order patterns derived from data-mining electronic health records can be a valuable source of decision support content. However, the quality of crowdsourcing such patterns may be suspect depending on the population learned from. For example, it is unclear whether inpatient practice patterns learned from a university teaching service, characterized by physician-trainee teams with an emphasis on medical education, will differ in quality from those learned from an attending-only medical service that focuses strictly on clinical care. Machine learning clinical order patterns by association rule episode mining from teaching versus attending-only inpatient medical services illustrated some practice variability, but the two converged toward similar top results. We further validated the automatically generated content by confirming alignment with external reference standards extracted from clinical practice guidelines.
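The association-rule mining described above can be sketched as counting order co-occurrences per encounter and ranking rules by confidence. The order names and encounter data below are purely illustrative; the paper's actual episode-mining procedure is more involved.

```python
# Illustrative sketch of association-rule mining over clinical orders:
# for each ordered pair (A, B), confidence = P(B in encounter | A in encounter).
# Encounter contents and order names are made up for demonstration.
from collections import Counter
from itertools import permutations

encounters = [
    ["cbc", "bmp", "chest_xray"],
    ["cbc", "bmp", "troponin"],
    ["cbc", "troponin", "ecg"],
]

pair_counts = Counter()   # how many encounters contain both A and B
item_counts = Counter()   # how many encounters contain A
for orders in encounters:
    unique = set(orders)
    item_counts.update(unique)
    pair_counts.update(permutations(unique, 2))

# Rank rules A -> B by confidence, highest first.
rules = sorted(
    ((a, b, pair_counts[(a, b)] / item_counts[a]) for (a, b) in pair_counts),
    key=lambda rule: -rule[2],
)
```

Here "cbc" appears in all three encounters and "bmp" in two of them, so the rule cbc → bmp has confidence 2/3, while bmp → cbc has confidence 1.0; comparing such ranked lists across teaching and attending-only cohorts mirrors the comparison the abstract reports.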

3.
Qual Life Res; 25(8): 1949-57, 2016 Aug.
Article in English | MEDLINE | ID: mdl-26886926

ABSTRACT

BACKGROUND: US veterans report lower health-related quality of life (HRQoL) relative to the general population. Identifying behavioral factors related to HRQoL that are malleable to change may inform interventions to improve well-being in this vulnerable group. PURPOSE: The current study sought to characterize HRQoL in a largely male sample of veterans in addictions treatment, both in relation to US norms and in association with five recommended health behavior practices: regularly exercising, managing stress, having good sleep hygiene, consuming fruits and vegetables, and being tobacco free. METHODS: We assessed HRQoL with 250 veterans in addictions treatment (96% male, mean age 53, range 24-77) using scales from four validated measures. Data reduction methods identified two principal components reflecting physical and mental HRQoL. Model testing of HRQoL associations with health behaviors adjusted for relevant demographic and treatment-related covariates. RESULTS: Compared to US norms, the sample had lower HRQoL scores. Better psychological HRQoL was associated with higher subjective social standing, absence of pain or trauma, lower alcohol severity, and monotonically with the sum of health behaviors (all p < 0.05). Specifically, psychological HRQoL was associated with regular exercise, stress management, and sleep hygiene. Regular exercise also related to better physical HRQoL. The models explained >40% of the variance in HRQoL. CONCLUSIONS: Exercise, sleep hygiene, and stress management are strongly associated with HRQoL among veterans in addictions treatment. Future research is needed to test the effect of interventions for improving well-being in this high-risk group.


Subject(s)
Health Behavior/drug effects , Sickness Impact Profile , Substance-Related Disorders/psychology , Veterans/psychology , Adult , Aged , Female , Humans , Male , Middle Aged