1.
JAMA Netw Open; 2(9): e1910967, 2019 Sep 04.
Article in English | MEDLINE | ID: mdl-31509205

ABSTRACT

Importance: Laboratory testing is an important target for high-value care initiatives, constituting the highest volume of medical procedures. Prior studies have found that up to half of all inpatient laboratory tests may be medically unnecessary, but a systematic method to identify these unnecessary tests in individual cases is lacking.

Objective: To systematically identify low-yield inpatient laboratory testing through personalized predictions.

Design, Setting, and Participants: In this retrospective diagnostic study with multivariable prediction models, 116 637 inpatients treated at Stanford University Hospital from January 1, 2008, to December 31, 2017, a total of 60 929 inpatients treated at the University of Michigan from January 1, 2015, to December 31, 2018, and 13 940 inpatients treated at the University of California, San Francisco, from January 1 to December 31, 2018, were assessed.

Main Outcomes and Measures: Diagnostic accuracy measures, including sensitivity, specificity, negative predictive values (NPVs), positive predictive values (PPVs), and area under the receiver operating characteristic curve (AUROC), of machine learning models when predicting whether inpatient laboratory tests yield a normal result as defined by local laboratory reference ranges.

Results: In the recent data sets (July 1, 2014, to June 30, 2017) from Stanford University Hospital (including 22 664 female inpatients with a mean [SD] age of 58.8 [19.0] years and 22 016 male inpatients with a mean [SD] age of 59.0 [18.1] years), among the top 20 highest-volume tests, 792 397 were repeats of orders within 24 hours, including tests that are physiologically unlikely to yield new information that quickly (eg, white blood cell differential, glycated hemoglobin, and serum albumin level). The best-performing machine learning models predicted normal results with an AUROC of 0.90 or greater for 12 stand-alone laboratory tests (eg, sodium AUROC, 0.92 [95% CI, 0.91-0.93]; sensitivity, 98%; specificity, 35%; PPV, 66%; NPV, 93%; lactate dehydrogenase AUROC, 0.93 [95% CI, 0.93-0.94]; sensitivity, 96%; specificity, 65%; PPV, 71%; NPV, 95%; and troponin I AUROC, 0.92 [95% CI, 0.91-0.93]; sensitivity, 88%; specificity, 79%; PPV, 67%; NPV, 93%) and 10 common laboratory test components (eg, hemoglobin AUROC, 0.94 [95% CI, 0.92-0.95]; sensitivity, 99%; specificity, 17%; PPV, 90%; NPV, 81%; creatinine AUROC, 0.96 [95% CI, 0.96-0.97]; sensitivity, 93%; specificity, 83%; PPV, 79%; NPV, 94%; and urea nitrogen AUROC, 0.95 [95% CI, 0.94-0.96]; sensitivity, 87%; specificity, 89%; PPV, 77%; NPV, 94%).

Conclusions and Relevance: The findings suggest that low-yield diagnostic testing is common and can be systematically identified through data-driven methods and patient context-aware predictions. Machine learning models appear able to explicitly quantify the uncertainty and expected information gain of a diagnostic test, with the potential to encourage useful testing and discourage low-value testing that incurs direct costs and indirect harms.
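
The diagnostic accuracy measures reported above (AUROC, sensitivity, specificity, PPV, NPV) are standard metrics for a binary "result will be normal" classifier. The sketch below shows how such measures are typically computed; the gradient-boosted model, the synthetic features (prior result, hours since last order, patient age), and the 0.5 decision threshold are illustrative assumptions, not the study's actual pipeline.

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000

# Synthetic stand-in features: most recent prior result (z-scored), hours since
# the previous order of this test, and patient age. Real models would draw on
# much richer EHR context (diagnoses, vitals, medications, prior trends).
X = np.column_stack([
    rng.normal(0.0, 1.0, n),
    rng.exponential(24.0, n),
    rng.integers(18, 90, n).astype(float),
])

# Toy label: a result is more likely to be "normal" when the prior result was
# within one standard deviation of the mean.
p_normal = 1.0 / (1.0 + np.exp(-(-0.5 + 1.5 * (np.abs(X[:, 0]) < 1.0))))
y = (rng.random(n) < p_normal).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
prob_normal = model.predict_proba(X_test)[:, 1]

# Diagnostic accuracy measures of the kind reported in the abstract.
auroc = roc_auc_score(y_test, prob_normal)
pred = (prob_normal >= 0.5).astype(int)      # illustrative decision threshold
tn, fp, fn, tp = confusion_matrix(y_test, pred).ravel()
sensitivity = tp / (tp + fn)   # normal results correctly predicted normal
specificity = tn / (tn + fp)   # abnormal results correctly predicted abnormal
ppv = tp / (tp + fp)
npv = tn / (tn + fn)
print(f"AUROC {auroc:.2f}  sensitivity {sensitivity:.2f}  "
      f"specificity {specificity:.2f}  PPV {ppv:.2f}  NPV {npv:.2f}")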


Subject(s)
Clinical Laboratory Techniques/statistics & numerical data; Hospitalization; Machine Learning; Adult; Aged; Area Under Curve; Blood Urea Nitrogen; Female; Glycated Hemoglobin; Hemoglobins; Humans; L-Lactate Dehydrogenase; Leukocyte Count; Male; Middle Aged; Predictive Value of Tests; ROC Curve; Retrospective Studies; Sensitivity and Specificity; Troponin I
2.
BMC Med Educ; 18(1): 269, 2018 Nov 20.
Article in English | MEDLINE | ID: mdl-30458759

ABSTRACT

BACKGROUND: Medical students and healthcare professionals can benefit from exposure to cross-disciplinary teamwork and core concepts of medical innovation. Indeed, to address complex challenges in patient care, diversity in collaboration across medicine, engineering, business, and design is critical. However, only a limited number of academic institutions have established cross-disciplinary opportunities for students and young professionals within these domains to work collaboratively on diverse healthcare needs.

METHODS: Drawing upon best practices from computer science and engineering, healthcare hackathons bring together interdisciplinary teams of students and professionals to collaborate, brainstorm, and build solutions to unmet clinical needs. Over the course of six months, a committee of 20 undergraduates, medical students, and physician advisors organized Stanford University's first healthcare hackathon (November 2016). Demographic data from initial applications were supplemented with responses from a post-hackathon survey gauging themes of diversity in collaboration, professional development, interest in medical innovation, and educational value. In designing and evaluating the event, the committee focused on measurable outcomes of diversity across participants (skillset, age, gender, academic degree), ideas (clinical needs), and innovations (projects).

RESULTS: Demographic data (n = 587 applicants, n = 257 participants) reveal that participants from diverse academic backgrounds, age groups, and domains of expertise were in attendance. From the 50 clinical needs presented, representing 19 academic fields, 40 teams ultimately formed and submitted projects spanning web (n = 13) and mobile applications (n = 13), artificial intelligence-based tools (n = 6), and medical devices (n = 3), among others. In post-hackathon survey responses (n = 111), medical students and healthcare professionals alike noted a positive impact on their ability to work in multidisciplinary teams, learn from individuals of different backgrounds, and address complex healthcare challenges.

CONCLUSIONS: Healthcare hackathons can encourage diversity across individuals, ideas, and projects to address clinical challenges. By providing an outline of Stanford's inaugural event, we hope more universities will adopt the healthcare hackathon model to promote diversity in collaboration in medicine.


Subject(s)
Academic Medical Centers; Health Personnel/psychology; Health Services/standards; Interdisciplinary Studies; Professional Competence/standards; Students, Medical/psychology; Adult; Biomedical Technology; Cooperative Behavior; Curriculum; Female; Health Personnel/education; Humans; Interprofessional Relations; Male
3.
AMIA Jt Summits Transl Sci Proc; 2017: 217-226, 2018.
Article in English | MEDLINE | ID: mdl-29888076

ABSTRACT

Escalating healthcare costs and inconsistent quality are exacerbated by clinical practice variability. Diagnostic testing is the highest-volume medical activity, but human intuition is typically unreliable for quantitative inferences about diagnostic performance characteristics. Electronic medical records from a tertiary academic hospital (2008-2014) allow us to systematically predict laboratory pre-test probabilities of a result being normal under different conditions. We find that low-yield laboratory tests are common (e.g., ~90% of blood cultures are normal). Clinical decision support could triage cases based on available data, such as consecutive use (e.g., lactate, potassium, and troponin are >90% normal given two previously normal results) or more complex patterns assimilated through common machine learning methods (nearly 100% precision for the top 1% of several example labs).
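
The "nearly 100% precision for the top 1%" figure refers to precision evaluated only on the cases a model ranks as most likely to be normal. A minimal sketch of that metric follows; the function name, the fraction parameter, and the toy arrays are illustrative assumptions rather than the paper's code.

import numpy as np

def precision_at_top_fraction(y_true, scores, fraction=0.01):
    """Precision among the fraction of cases scored as most likely to be normal."""
    k = max(1, int(round(len(scores) * fraction)))
    top_idx = np.argsort(scores)[::-1][:k]   # indices of the highest-scoring cases
    return float(np.asarray(y_true)[top_idx].mean())

# Toy example: y_true marks truly normal results, scores are predicted
# probabilities of a normal result from any classifier.
y_true = np.array([1, 0, 1, 1, 0, 1, 1, 0, 1, 1])
scores = np.array([0.99, 0.20, 0.95, 0.90, 0.40, 0.85, 0.70, 0.30, 0.97, 0.60])
print(precision_at_top_fraction(y_true, scores, fraction=0.3))  # 1.0 on this toy data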
