1.
Article in English | MEDLINE | ID: mdl-38946554

ABSTRACT

BACKGROUND: Acute hepatic porphyria (AHP) is a group of rare but treatable conditions associated with diagnostic delays of 15 years on average. The advent of electronic health records (EHR) data and machine learning (ML) may improve the timely recognition of rare diseases like AHP. However, prediction models can be difficult to train given the limited case numbers, unstructured EHR data, and selection biases intrinsic to healthcare delivery. We sought to train and characterize models for identifying patients with AHP. METHODS: This diagnostic study used structured and notes-based EHR data from 2 centers at the University of California, UCSF (2012-2022) and UCLA (2019-2022). The data were split into 2 cohorts (referral and diagnosis) and used to develop models that predict (1) who will be referred for testing of acute porphyria, among those who presented with abdominal pain (a cardinal symptom of AHP), and (2) who will test positive, among those referred. The referral cohort consisted of 747 patients referred for testing and 99,849 contemporaneous patients who were not. The diagnosis cohort consisted of 72 confirmed AHP cases and 347 patients who tested negative. The case cohort was 81% female and 6-75 years old at the time of diagnosis. Candidate models used a range of architectures. Feature selection was semi-automated and incorporated publicly available data from knowledge graphs. Our primary outcome was the F-score on an outcome-stratified test set. RESULTS: The best center-specific referral models achieved an F-score of 86%-91%. The best diagnosis model achieved an F-score of 92%. To further test our models, we contacted 372 current patients who lacked an AHP diagnosis but were flagged by our models as potentially having it (≥10% predicted probability of referral and ≥50% predicted probability of testing positive). However, we were only able to recruit 10 of these patients for biochemical testing, all of whom tested negative. Nonetheless, post hoc evaluations suggested that these models could identify 71% of cases earlier than their diagnosis date, saving 1.2 years. CONCLUSIONS: ML can reduce diagnostic delays in AHP and other rare diseases. Robust recruitment strategies and multicenter coordination will be needed to validate these models before they can be deployed.
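
The two-stage setup described above lends itself to a simple illustration. The following is a minimal Python sketch, not the study's actual code: the features are synthetic, logistic regression stands in for the range of architectures the authors evaluated, and only the screening thresholds (≥10% probability of referral, ≥50% of testing positive) come from the abstract.

```python
# A minimal sketch of the two-stage referral/diagnosis pipeline above.
# Features are synthetic; logistic regression is a stand-in for the
# architectures the study actually compared.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 25))                    # placeholder EHR features
y_ref = (X[:, 0] + rng.normal(size=2000) > 1.5).astype(int)  # referred?
y_pos = (X[:, 1] + rng.normal(size=2000) > 1.0).astype(int)  # tested positive?

# Stage 1: among patients presenting with abdominal pain, predict referral.
# The split is outcome-stratified, mirroring the paper's primary evaluation.
Xtr, Xte, ytr, yte = train_test_split(X, y_ref, stratify=y_ref, random_state=0)
m_ref = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
print(f"referral F-score:  {f1_score(yte, m_ref.predict(Xte)):.2f}")

# Stage 2: among referred patients, predict a positive biochemical test.
mask = y_ref == 1
Xtr2, Xte2, ytr2, yte2 = train_test_split(
    X[mask], y_pos[mask], stratify=y_pos[mask], random_state=0
)
m_pos = LogisticRegression(max_iter=1000).fit(Xtr2, ytr2)
print(f"diagnosis F-score: {f1_score(yte2, m_pos.predict(Xte2)):.2f}")

# Screening rule quoted in the abstract: flag undiagnosed patients with
# >=10% predicted probability of referral and >=50% of testing positive.
flagged = (m_ref.predict_proba(X)[:, 1] >= 0.10) & (
    m_pos.predict_proba(X)[:, 1] >= 0.50
)
print(f"patients flagged for outreach: {flagged.sum()}")
```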

2.
medRxiv; 2023 Aug 31.
Article in English | MEDLINE | ID: mdl-37693437

ABSTRACT

Importance: Acute hepatic porphyria (AHP) is a group of rare but treatable conditions associated with diagnostic delays of 15 years on average. The advent of electronic health records (EHR) data and machine learning (ML) may improve the timely recognition of rare diseases like AHP. However, prediction models can be difficult to train given the limited case numbers, unstructured EHR data, and selection biases intrinsic to healthcare delivery. Objective: To train and characterize models for identifying patients with AHP. Design, Setting, and Participants: This diagnostic study used structured and notes-based EHR data from two centers at the University of California, UCSF (2012-2022) and UCLA (2019-2022). The data were split into two cohorts (referral and diagnosis) and used to develop models that predict (1) who will be referred for testing of acute porphyria, among those who presented with abdominal pain (a cardinal symptom of AHP), and (2) who will test positive, among those referred. The referral cohort consisted of 747 patients referred for testing and 99,849 contemporaneous patients who were not. The diagnosis cohort consisted of 72 confirmed AHP cases and 347 patients who tested negative. Cases were predominantly female and 6-75 years old at the time of diagnosis. Candidate models used a range of architectures. Feature selection was semi-automated and incorporated publicly available data from knowledge graphs. Main Outcomes and Measures: F-score on an outcome-stratified test set. Results: The best center-specific referral models achieved an F-score of 86%-91%. The best diagnosis model achieved an F-score of 92%. To further test our models, we contacted 372 current patients who lacked an AHP diagnosis but were flagged by our models as potentially having it (≥10% predicted probability of referral and ≥50% predicted probability of testing positive). However, we were only able to recruit 10 of these patients for biochemical testing, all of whom tested negative. Nonetheless, post hoc evaluations suggested that these models could identify 71% of cases earlier than their diagnosis date, saving 1.2 years. Conclusions and Relevance: ML can reduce diagnostic delays in AHP and other rare diseases. Robust recruitment strategies and multicenter coordination will be needed to validate these models before they can be deployed.

3.
J Allergy Clin Immunol Pract; 7(1): 103-111, 2019 Jan.
Article in English | MEDLINE | ID: mdl-29969686

ABSTRACT

BACKGROUND: Although drugs represent a common cause of anaphylaxis, few large studies of drug-induced anaphylaxis have been performed. OBJECTIVE: To describe the epidemiology and validity of reported drug-induced anaphylaxis in the electronic health records (EHRs) of a large United States health care system. METHODS: Using EHR drug allergy data from 1995 to 2013, we determined the population prevalence of anaphylaxis, its trend over time, and the drugs and drug classes most commonly reported to cause it. Patient risk factors for drug-induced anaphylaxis were assessed using a logistic regression model. Serum tryptase and allergist visits were used to assess the validity and follow-up of EHR-reported anaphylaxis. RESULTS: Among 1,756,481 patients, 19,836 (1.1%) reported drug-induced anaphylaxis; penicillins (45.9 per 10,000), sulfonamide antibiotics (15.1 per 10,000), and nonsteroidal anti-inflammatory drugs (NSAIDs) (13.0 per 10,000) were most commonly implicated. Patients of White race (odds ratio [OR] 2.38, 95% CI 2.27-2.49), female sex (OR 2.20, 95% CI 2.13-2.28), systemic mastocytosis (OR 4.60, 95% CI 2.66-7.94), Sjögren's syndrome (OR 1.94, 95% CI 1.47-2.56), and asthma (OR 1.50, 95% CI 1.43-1.59) had increased odds of drug-induced anaphylaxis. Serum tryptase was performed in 135 (<1%) anaphylaxis cases, and 1,587 patients (8.0%) saw an allergist for follow-up. CONCLUSIONS: EHR-reported anaphylaxis occurred in approximately 1% of patients, most commonly from penicillins, sulfonamide antibiotics, and NSAIDs. Females, Whites, and patients with mastocytosis, Sjögren's syndrome, and asthma had increased odds of reporting drug-induced anaphylaxis. The low observed frequency of tryptase testing and specialist evaluation emphasizes the importance of educating providers on anaphylaxis management.
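
For readers unfamiliar with how such odds ratios are obtained, the following is a minimal Python sketch of fitting a logistic regression and exponentiating its coefficients and confidence bounds. The predictors and synthetic data are illustrative assumptions, not the study's cohort.

```python
# A minimal sketch of odds ratios (ORs) with 95% CIs from a logistic
# regression: exponentiate the fitted coefficients and their CI bounds.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 50_000
df = pd.DataFrame({
    "female": rng.integers(0, 2, n),
    "white":  rng.integers(0, 2, n),
    "asthma": rng.integers(0, 2, n),
})
# Synthetic outcome at roughly 1% prevalence, echoing the reported rate.
logit = -5 + 0.8 * df["female"] + 0.9 * df["white"] + 0.4 * df["asthma"]
df["anaphylaxis"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = sm.add_constant(df[["female", "white", "asthma"]])
fit = sm.Logit(df["anaphylaxis"], X).fit(disp=0)

ci = fit.conf_int()  # columns 0 and 1 hold the lower/upper bounds
ors = pd.DataFrame({
    "OR": np.exp(fit.params),
    "95% CI low": np.exp(ci[0]),
    "95% CI high": np.exp(ci[1]),
})
print(ors.drop(index="const").round(2))
```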


Subject(s)
Anaphylaxis/epidemiology; Anti-Inflammatory Agents, Non-Steroidal/adverse effects; Delivery of Health Care/statistics & numerical data; Drug Hypersensitivity/epidemiology; Electronic Health Records/statistics & numerical data; Penicillins/adverse effects; Sulfonamides/adverse effects; Allergens/immunology; Anaphylaxis/diagnosis; Anti-Inflammatory Agents, Non-Steroidal/immunology; Drug Hypersensitivity/diagnosis; Female; Follow-Up Studies; Humans; Logistic Models; Male; Penicillins/immunology; Prevalence; Risk Factors; Sex Factors; Sulfonamides/immunology; Tryptases/blood; White People
4.
JAMA Netw Open; 1(3): e180530, 2018 Jul.
Article in English | MEDLINE | ID: mdl-30370424

ABSTRACT

IMPORTANCE: Accurate clinical documentation is critical to health care quality and safety. Dictation services supported by speech recognition (SR) technology and professional medical transcriptionists are widely used by US clinicians. However, the quality of SR-assisted documentation has not been thoroughly studied. OBJECTIVE: To identify and analyze errors at each stage of the SR-assisted dictation process. DESIGN, SETTING, AND PARTICIPANTS: This cross-sectional study collected a stratified random sample of 217 notes (83 office notes, 75 discharge summaries, and 59 operative notes) dictated by 144 physicians between January 1 and December 31, 2016, at 2 health care organizations using Dragon Medical 360 | eScription (Nuance). Errors were annotated in the SR engine-generated document (SR), the medical transcriptionist-edited document (MT), and the physician's signed note (SN). Each document was compared with a criterion standard created from the original audio recordings and medical record review. MAIN OUTCOMES AND MEASURES: Error rate; mean errors per document; error frequency by general type (eg, deletion), semantic type (eg, medication), and clinical significance; and variations by physician characteristics, note type, and institution. RESULTS: The 217 notes were dictated by 144 unique physicians: 44 female (30.6%) and 10 of unknown sex (6.9%). Mean (SD) physician age was 52 (12.5) years (median [range] age, 54 [28-80] years). Among the 121 physicians for whom specialty information was available (84.0%), 35 specialties were represented, including 45 surgeons (37.2%), 30 internists (24.8%), and 46 others (38.0%). The error rate in SR notes was 7.4% (ie, 7.4 errors per 100 words). It decreased to 0.4% after transcriptionist review and to 0.3% in SNs. Overall, 96.3% of SR notes, 58.1% of MT notes, and 42.4% of SNs contained errors. Deletions were most common (34.7%), followed by insertions (27.0%). Among errors at the SR, MT, and SN stages, 15.8%, 26.9%, and 25.9%, respectively, involved clinical information, and 5.7%, 8.9%, and 6.4%, respectively, were clinically significant. Discharge summaries had higher mean SR error rates than other note types (8.9% vs 6.6%; difference, 2.3%; 95% CI, 1.0%-3.6%; P < .001). Surgeons' SR notes had lower mean error rates than other physicians' (6.0% vs 8.1%; difference, 2.2%; 95% CI, 0.8%-3.5%; P = .002). One institution had a higher mean SR error rate (7.6% vs 6.6%; difference, 1.0%; 95% CI, -0.2% to 2.8%; P = .10) but lower mean MT and SN error rates (0.3% vs 0.7%; difference, -0.3%; 95% CI, -0.63% to -0.04%; P = .03 and 0.2% vs 0.6%; difference, -0.4%; 95% CI, -0.7% to -0.2%; P = .003). CONCLUSIONS AND RELEVANCE: Seven in 100 words in SR-generated documents contain errors, and many errors involve clinical information. That most errors are corrected before notes are signed demonstrates the importance of manual review, quality assurance, and auditing.
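
The headline metric here is errors per 100 words against a criterion-standard transcript. The study's error annotation was manual, so the following Python sketch is only an assumed mechanization, using a plain word-level edit-distance alignment and invented example strings.

```python
# A minimal sketch of a word-level error rate like the "7.4 errors per
# 100 words" reported above: align the note against its criterion
# standard and count edits per 100 reference words.
def errors_per_100_words(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = minimum substitutions/deletions/insertions needed to
    # turn ref[:i] into hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i                       # i deletions
    for j in range(len(hyp) + 1):
        dp[0][j] = j                       # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return 100 * dp[-1][-1] / len(ref)

criterion = "patient denies chest pain and shortness of breath"
sr_output = "patient denies chest pain shortness of breast"
print(f"{errors_per_100_words(criterion, sr_output):.1f} errors per 100 words")
```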


Subject(s)
Medical Errors/statistics & numerical data; Medical Records/statistics & numerical data; Medical Records/standards; Speech Recognition Software/statistics & numerical data; Speech Recognition Software/standards; Adult; Aged; Aged, 80 and over; Boston; Clinical Audit; Colorado; Cross-Sectional Studies; Female; Humans; Male; Medical Records Systems, Computerized; Middle Aged; Physicians