1.
J Hosp Med ; 16(7): 404-408, 2021 07.
Article in English | MEDLINE | ID: mdl-33929943

ABSTRACT

BACKGROUND: Medical training programs across the country are bound to a set of work hour regulations, generally monitored via self-report.
OBJECTIVE: We developed a computational method to automate measurement of intern and resident work hours, which we validated against self-report.
DESIGN, SETTING, AND PARTICIPANTS: We included all electronic health record (EHR) access log data between July 1, 2018, and June 30, 2019, for trainees enrolled in the internal medicine training program. We inferred the duration of continuous in-hospital work by linking EHR sessions that occurred within 5 hours of one another as "on-campus" work, and further accounted for "out-of-hospital" work that might take place at home.
MAIN OUTCOMES AND MEASURES: We compared daily work hours estimated through the computational method with self-report and calculated the mean absolute error between the two. We used the computational method to estimate average weekly work hours across each rotation and the percentage of rotations in which average work hours exceeded the 80-hour workweek.
RESULTS: The mean absolute errors between self-reported and EHR-derived daily work hours for first- (PGY-1), second- (PGY-2), and third-year (PGY-3) trainees were 1.27, 1.51, and 1.51 hours, respectively. Using this computational method, we estimated average (SD) weekly work hours of 57.0 (21.7), 69.9 (12.2), and 64.1 (16.3) for PGY-1, PGY-2, and PGY-3 residents, respectively.
CONCLUSION: EHR log data can be used to accurately approximate self-reported work hours, accounting for both in-hospital and out-of-hospital work. Automation will reduce trainees' clerical work, improve the consistency and comparability of data, and provide the more complete and timely data that training programs need.
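The session-linking heuristic described in this abstract (joining EHR access events separated by less than 5 hours into one continuous work period) can be sketched as below. This is a minimal illustration, not the authors' code; the function names and the event format (a list of access timestamps) are assumptions.

```python
from datetime import datetime, timedelta

# Illustrative sketch: link EHR access timestamps into continuous
# "on-campus" sessions when gaps are under 5 hours, then sum daily hours.
GAP = timedelta(hours=5)

def work_sessions(timestamps):
    """Group access times into [start, end] sessions separated by >= 5 h gaps."""
    sessions = []
    for t in sorted(timestamps):
        if sessions and t - sessions[-1][1] < GAP:
            sessions[-1][1] = t          # extend the current session
        else:
            sessions.append([t, t])      # a long gap starts a new session
    return sessions

def total_hours(timestamps):
    """Total in-hospital hours implied by the linked sessions."""
    return sum((end - start).total_seconds() / 3600
               for start, end in work_sessions(timestamps))
```

For example, accesses at 07:00, 09:00, and 18:00 on one day yield two sessions (the 9-hour gap breaks the chain), for 2.0 linked hours; a real implementation would also need the out-of-hospital adjustment the abstract mentions.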

2.
Ann Intern Med ; 172(11 Suppl): S85-S91, 2020 06 02.
Article in English | MEDLINE | ID: mdl-32479183

ABSTRACT

Electronic health record (EHR) systems can be configured to deliver novel EHR interventions that influence clinical decision making and to support efficient randomized controlled trials (RCTs) designed to evaluate the effectiveness, safety, and costs of those interventions. In designing RCTs of EHR interventions, one should carefully consider the unit of randomization (for example, patient, encounter, clinician, or clinical unit), balancing concerns about contamination of an intervention across randomization units within clusters (for example, patients within clinical units) against the superior control of measured and unmeasured confounders that comes with randomizing a larger number of units. One should also consider whether the key computational assessment components of the EHR intervention, such as a predictive algorithm used to target a subgroup for decision support, should occur before randomization (so that only 1 subgroup is randomized) or after randomization (including all subgroups). When these components are applied after randomization, one must consider expected heterogeneity in the effect of the differential decision support across subgroups, which has implications for overall impact potential, analytic approach, and sample size planning. Trials of EHR interventions should be reviewed by an institutional review board, but may not require patient-level informed consent when the interventions being tested can be considered minimal risk or quality improvement, and when clinical decision making is supported, rather than controlled, by an EHR intervention. Data and safety monitoring for RCTs of EHR interventions should be conducted to guide institutional pragmatic decision making about implementation and ensure that continuing randomization remains justified. Reporting should follow the CONSORT (Consolidated Standards of Reporting Trials) Statement, with extensions for pragmatic trials and cluster RCTs when applicable, and should include detailed materials to enhance reproducibility.


Subject(s)
Electronic Health Records/organization & administration; Randomized Controlled Trials as Topic/statistics & numerical data; Humans; Reproducibility of Results
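The choice of randomization unit discussed in entry 2 can be illustrated with a minimal sketch: deterministic, hash-based assignment of clusters (here, clinicians) to trial arms, so that all of a clinician's patients share one arm and cross-arm contamination within a clinician is avoided. The function name and seed are illustrative assumptions, not from the article.

```python
import hashlib

# Illustrative sketch: cluster randomization at the clinician level.
# Hashing (trial_seed, clinician_id) gives a stable, reproducible
# assignment without storing a lookup table.
def assign_arm(clinician_id: str, trial_seed: str = "ehr-rct-2020") -> str:
    digest = hashlib.sha256(f"{trial_seed}:{clinician_id}".encode()).hexdigest()
    return "intervention" if int(digest, 16) % 2 == 0 else "control"
```

Randomizing by clinician rather than by patient trades some statistical efficiency (fewer independent units) for protection against contamination, exactly the balance the abstract describes.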
5.
AMIA Annu Symp Proc ; : 876, 2007 Oct 11.
Article in English | MEDLINE | ID: mdl-18693977

ABSTRACT

During the phased transition from a paper-based record to an electronic health record (EHR), we found that clinicians had difficulty remembering where to find important clinical documents. We describe our experience with the design and use of a web-based map of the hybrid medical record. With 50 to 75 unique visits per day, the UCare Navigator has served as an important aid to clinicians practicing in the transitional environment of a large EHR implementation.


Subject(s)
Medical Records Systems, Computerized; Medical Records; User-Computer Interface; Clinical Medicine; Organizational Innovation
6.
AMIA Annu Symp Proc ; : 919, 2007 Oct 11.
Article in English | MEDLINE | ID: mdl-18694019

ABSTRACT

We report the rapid, low-cost development and implementation of an inpatient electronic physician documentation system built from off-the-shelf components. Within 9 months of deployment, over half of physician notes were electronic, and within 20 months, paper physician notes were eliminated. Our results suggest that institutions can prioritize conversion to inpatient electronic physician documentation without waiting for sophisticated software packages or large capital investments.


Subject(s)
Medical Records Systems, Computerized/statistics & numerical data; Academic Medical Centers; Documentation/methods; Hospital Information Systems; Organizational Innovation; San Francisco
7.
J Am Med Inform Assoc ; 12(3): 275-85, 2005.
Article in English | MEDLINE | ID: mdl-15684131

ABSTRACT

OBJECTIVE: The aim of this study was to develop and evaluate a method of extracting noun phrases with full phrase structures from a set of clinical radiology reports using natural language processing (NLP), and to investigate whether the UMLS Specialist Lexicon improves noun phrase identification within clinical radiology documents.
DESIGN: The noun phrase identification (NPI) module is composed of a sentence boundary detector, a statistical natural language parser trained on a nonmedical domain, and a noun phrase (NP) tagger. The NPI module processed a set of 100 XML-represented clinical radiology reports in a Health Level 7 (HL7) Clinical Document Architecture (CDA)-compatible format. Computed output was compared with manual markups made by four physicians and one author for maximal (longest) NPs, and with markups made by one author for base (simple) NPs. An extended lexicon of biomedical terms was created from the UMLS Specialist Lexicon and used to improve NPI performance.
RESULTS: The test set comprised 50 randomly selected reports. The sentence boundary detector achieved 99.0% precision and 98.6% recall. Overall maximal NPI precision and recall were 78.9% and 81.5% before using the UMLS Specialist Lexicon and 82.1% and 84.6% after. Overall base NPI precision and recall were 88.2% and 86.8% before and 93.1% and 92.6% after, reducing false positives by 31.1% and false negatives by 34.3%.
CONCLUSION: The sentence boundary detector performed excellently. After adaptation using the UMLS Specialist Lexicon, the statistical parser's NPI performance on radiology reports rose to levels comparable to the parser's native performance in its newswire training domain and to that reported by other researchers in the general nonmedical domain.


Subject(s)
Abstracting and Indexing/methods; Medical Records Systems, Computerized/classification; Natural Language Processing; Radiology Information Systems; Unified Medical Language System; Artificial Intelligence; Forms and Records Control; Humans; Medical Records Systems, Computerized/standards; Programming Languages
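The precision and recall figures in entry 7 come from comparing computed noun phrases against gold-standard manual markups. A minimal sketch of that style of evaluation follows; the set-based exact matching here is an assumption for illustration, and the article's actual matching criteria for maximal and base NPs may differ.

```python
# Illustrative sketch: precision/recall of predicted noun phrases
# against a manually marked gold standard, using exact-match sets.
def precision_recall(predicted: set, gold: set):
    tp = len(predicted & gold)                       # true positives
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    return precision, recall
```

For instance, if a parser proposes {"chest x-ray", "mild effusion", "the lungs"} and the gold markup is {"chest x-ray", "mild effusion", "right lower lobe"}, both precision and recall are 2/3.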
8.
J Am Med Inform Assoc ; 9(6): 637-52, 2002.
Article in English | MEDLINE | ID: mdl-12386114

ABSTRACT

OBJECTIVE: To evaluate a new system, ISAID (Internet-based Semi-automated Indexing of Documents), and to generate textbook indexes that are more detailed and more useful to readers.
DESIGN: Pilot evaluation: a simple, nonrandomized trial comparing ISAID with manual indexing methods. Methods evaluation: a randomized, crossover trial comparing three versions of ISAID, plus a usability survey.
PARTICIPANTS: Pilot evaluation: two physicians. Methods evaluation: twelve physicians, each of whom used three different versions of the system, for a total of 36 indexing sessions.
MEASUREMENTS: Total index term tuples generated per document per minute (TPM), with and without adjustment for concordance with other subjects; inter-indexer consistency; and ratings of the usability of the ISAID indexing system.
RESULTS: Compared with manual methods, ISAID greatly decreased indexing times. Across the three versions of ISAID, inter-indexer consistency ranged from 15% to 65%, with means of 41%, 31%, and 40% for the three documents. Subjects using the full version of ISAID were faster (average TPM: 5.6) and had higher rates of concordant index generation. There were substantial learning effects despite our use of a training/run-in phase: subjects using the full version of ISAID were much faster by the third indexing session (average TPM: 9.1). There was a statistically significant increase in the three-subject concordant indexing rate using the full version of ISAID during the second indexing session (p < 0.05).
SUMMARY: Users of the ISAID indexing system create complex, precise, and accurate indexes for full-text documents much faster than users of manual methods. Furthermore, the natural language processing methods that ISAID uses to suggest index terms contribute substantially to the increased indexing speed and accuracy.


Subject(s)
Abstracting and Indexing/methods; Information Storage and Retrieval; Textbooks as Topic; Attitude to Computers; Consumer Behavior; Electronic Data Processing; Surveys and Questionnaires; User-Computer Interface
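The TPM and inter-indexer consistency measures reported in entry 8 can be sketched as below. The article does not spell out its concordance formula here, so the Jaccard-style overlap measure is an assumption for illustration.

```python
# Illustrative sketch of the ISAID-style metrics: index term tuples
# per minute (TPM) and a simple inter-indexer consistency measure.
def tuples_per_minute(n_tuples: int, minutes: float) -> float:
    """Index term tuples generated per document per minute."""
    return n_tuples / minutes

def consistency(indexer_a: set, indexer_b: set) -> float:
    """Assumed Jaccard-style overlap: shared tuples over all tuples produced."""
    union = indexer_a | indexer_b
    return len(indexer_a & indexer_b) / len(union) if union else 1.0
```

For example, 28 tuples produced in a 5-minute session gives a TPM of 5.6, matching the scale of the averages quoted in the abstract.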