2.
Front Sleep ; 3: 2024.
Article in English | MEDLINE | ID: mdl-38817450

ABSTRACT

Introduction: Pediatric sleep problems can be detected across racial/ethnic subpopulations in primary care settings. However, the electronic health record (EHR) documentation that describes patients' sleep problems may be inherently biased due to both historical biases and informed presence. This study assessed racial/ethnic differences in natural language processing (NLP) training data (e.g., pediatric sleep-related keywords in primary care clinical notes) prior to model training. Methods: We used a predefined feature set containing 178 Peds B-SATED keywords and queried all clinical notes from patients aged 5 to 18 seen in pediatric primary care between January 2018 and December 2021. A least absolute shrinkage and selection operator (LASSO) regression model was used to investigate whether there were racial/ethnic differences in the documentation of Peds B-SATED keywords, and mixed-effects logistic regression was then used to determine whether the odds of the presence of global Peds B-SATED dimensions also differed across racial/ethnic subpopulations. Results: Both the LASSO and multilevel modeling approaches revealed racial/ethnic differences in providers' documentation of Peds B-SATED keywords and global dimensions. In addition, the rankings of the most frequently documented Peds B-SATED keywords differed qualitatively across racial/ethnic subpopulations. Conclusion: This study revealed differential patterns in providers' documentation of Peds B-SATED keywords and global dimensions that may account for the under-detection of pediatric sleep problems among racial/ethnic subpopulations. These findings have important implications for equitable clinical documentation of sleep problems in pediatric primary care and extend prior retrospective work in pediatric sleep specialty settings.
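
As a minimal sketch of the LASSO step described above, one plausible setup (not necessarily the authors' exact specification) is an L1-penalized logistic regression that flags keywords whose documentation rates differ between subgroups; the note-by-keyword matrix and subgroup labels below are randomly generated placeholders.

# Hypothetical data: L1-penalized logistic regression to flag Peds B-SATED
# keywords documented at different rates between two subgroups.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_notes, n_keywords = 1000, 178                        # 178 Peds B-SATED keywords
X = rng.binomial(1, 0.1, size=(n_notes, n_keywords))   # keyword present in note?
group = rng.binomial(1, 0.5, size=n_notes)             # subgroup indicator

lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
lasso.fit(X, group)

# Keywords with nonzero coefficients are candidates for differential documentation.
selected = np.flatnonzero(lasso.coef_[0])
print(f"{selected.size} keywords flagged:", selected[:10])

The L1 penalty shrinks most coefficients exactly to zero, so the surviving keywords are the ones most informative about subgroup membership.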

3.
Appl Clin Inform ; 2024 May 15.
Article in English | MEDLINE | ID: mdl-38749477

ABSTRACT

OBJECTIVE: We present a proof-of-concept digital scribe system, an Emergency Department (ED) consultation call-based clinical conversation summarization pipeline intended to support clinical documentation, and report its performance. MATERIALS AND METHODS: We use four pre-trained large language models to establish the digital scribe system: T5-small, T5-base, PEGASUS-PubMed, and BART-Large-CNN, via zero-shot and fine-tuning approaches. Our dataset includes 100 referral conversations among ED clinicians and medical records. We report ROUGE-1, ROUGE-2, and ROUGE-L to compare model performance, and we annotated transcriptions to assess the quality of the generated summaries. RESULTS: The fine-tuned BART-Large-CNN model achieves the best summarization performance, with the highest ROUGE scores (F1 ROUGE-1 = 0.49, ROUGE-2 = 0.23, ROUGE-L = 0.35). In contrast, PEGASUS-PubMed lags notably (F1 ROUGE-1 = 0.28, ROUGE-2 = 0.11, ROUGE-L = 0.22). BART-Large-CNN's performance decreases by more than 50% with the zero-shot approach. Annotations show that BART-Large-CNN achieves 71.4% recall in identifying key information and a 67.7% accuracy rate. DISCUSSION: The BART-Large-CNN model demonstrates a strong grasp of clinical dialogue structure, indicated by its performance with and without fine-tuning. Despite some instances of high recall, the model's performance is variable, particularly in achieving consistent correctness, suggesting room for refinement, and its recall varies across information categories. CONCLUSION: The study provides evidence of the potential of AI-assisted tools to support clinical documentation. Future work should expand the scope to additional language models and hybrid approaches, and include comparative analyses of documentation burden and human factors.
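
A minimal sketch of how the reported F1 ROUGE scores can be computed, assuming the Hugging Face transformers and rouge-score packages. The model name (facebook/bart-large-cnn) is the public checkpoint; the transcript and reference summary are invented placeholders, and this zero-shot pipeline stands in for the authors' fine-tuned model.

from transformers import pipeline
from rouge_score import rouge_scorer

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

# Placeholder consultation transcript and reference summary.
transcript = ("ED physician: I'm calling about a 54-year-old man with chest pain "
              "radiating to the left arm, history of hypertension, troponin pending. "
              "We'd like cardiology to evaluate for possible admission.")
reference = "ED consult: 54-year-old man with chest pain; cardiology asked to evaluate for admission."

generated = summarizer(transcript, max_length=60, min_length=10)[0]["summary_text"]

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
scores = scorer.score(reference, generated)
for name, s in scores.items():
    print(f"{name}: F1 = {s.fmeasure:.2f}")

ROUGE-1 and ROUGE-2 measure unigram and bigram overlap with the reference, while ROUGE-L rewards the longest common subsequence, which is why all three are reported together.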

4.
J Autism Dev Disord ; 2024 Mar 23.
Article in English | MEDLINE | ID: mdl-38520586

ABSTRACT

The transition from pediatric to adult health care is a vulnerable period for autistic adolescents and young adults (AYA), and for some autistic AYA it includes a period of receiving care in both the pediatric and adult health systems. We sought to assess the proportion of autistic AYA who continued to use pediatric health services after their first adult primary care appointment and to identify factors associated with continued pediatric contact. We analyzed electronic medical record (EMR) data from a cohort of autistic AYA seen in a primary-care-based program for autistic people. Using logistic and linear regression, we assessed the relationship between eight patient characteristics and (1) the odds of a patient having any pediatric visits after their first adult appointment and (2) the number of pediatric visits among those with at least one such visit. The cohort included 230 autistic AYA who were mostly white (68%) and male (82%), with a mean age of 19.4 years at their last pediatric visit before entering adult care. The majority (n = 149; 65%) had pediatric contact after the first adult visit. Younger age at the first adult visit and more pediatric visits prior to it were associated with continued pediatric contact. In this cohort of autistic AYA, most patients had contact with the pediatric system after their first adult primary care appointment.
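
A minimal sketch of the logistic regression step, assuming statsmodels; the data frame, variable names, and distributions are hypothetical placeholders seeded with the cohort figures from the abstract (n = 230, 65% continued contact, mean age 19.4).

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 230  # cohort size from the abstract
df = pd.DataFrame({
    "any_peds_after": rng.binomial(1, 0.65, n),     # any pediatric visit after first adult visit
    "age_first_adult": rng.normal(19.4, 1.5, n),    # age at first adult appointment
    "peds_visits_before": rng.poisson(6, n),        # pediatric visits before first adult visit
})

# Odds of continued pediatric contact as a function of two of the characteristics.
model = smf.logit("any_peds_after ~ age_first_adult + peds_visits_before", data=df).fit()
print(np.exp(model.params))  # exponentiated coefficients = odds ratios

Exponentiating the fitted coefficients yields odds ratios, the quantity the abstract reports on (e.g., lower odds of continued contact with older age at the first adult visit).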

5.
medRxiv ; 2023 Dec 07.
Article in English | MEDLINE | ID: mdl-38106162

ABSTRACT

Objective: We present a proof-of-concept digital scribe system, an ED clinical conversation summarization pipeline, and report its performance. Materials and Methods: We use four pre-trained large language models to establish the digital scribe system: T5-small, T5-base, PEGASUS-PubMed, and BART-Large-CNN, via zero-shot and fine-tuning approaches. Our dataset includes 100 referral conversations among ED clinicians and medical records. We report ROUGE-1, ROUGE-2, and ROUGE-L to compare model performance, and we annotated transcriptions to assess the quality of the generated summaries. Results: The fine-tuned BART-Large-CNN model achieves the best summarization performance, with the highest ROUGE scores (F1 ROUGE-1 = 0.49, ROUGE-2 = 0.23, ROUGE-L = 0.35). In contrast, PEGASUS-PubMed lags notably (F1 ROUGE-1 = 0.28, ROUGE-2 = 0.11, ROUGE-L = 0.22). BART-Large-CNN's performance decreases by more than 50% with the zero-shot approach. Annotations show that BART-Large-CNN achieves 71.4% recall in identifying key information and a 67.7% accuracy rate. Discussion: The BART-Large-CNN model demonstrates a strong grasp of clinical dialogue structure, indicated by its performance with and without fine-tuning. Despite some instances of high recall, the model's performance is variable, particularly in achieving consistent correctness, suggesting room for refinement, and its recall varies across information categories. Conclusion: The study provides evidence of the potential of AI-assisted tools to reduce clinical documentation burden. Future work should expand the scope to larger language models and include comparative analyses of documentation effort and time.
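
The ROUGE computation for this preprint is the same as for the published version above; a complementary sketch of the annotation-based recall check (the 71.4% figure) follows, where annotators' key items per category are compared against a generated summary. All categories, items, and the summary text are invented placeholders.

# For each annotated category, what fraction of key items appears in the summary?
key_items = {
    "chief_complaint": ["chest pain"],
    "history": ["hypertension", "diabetes"],
    "disposition": ["admit to cardiology"],
}
summary = "54-year-old with chest pain and hypertension; admit to cardiology."

for category, items in key_items.items():
    hits = [item for item in items if item in summary.lower()]
    recall = len(hits) / len(items)
    print(f"{category}: recall = {recall:.0%}")

Breaking recall out by category, as the abstract describes, exposes where the model drops information (here, the missing "diabetes" pulls history recall down to 50%).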

6.
PLoS One ; 18(9): e0289982, 2023.
Article in English | MEDLINE | ID: mdl-37703269

ABSTRACT

BACKGROUND: The transition from pediatric to adult care is a challenge for autistic adolescents and young adults. Data on patient features associated with timely transfer between pediatric and adult health care are limited. Our objective was to describe the patient features associated with timely transfer to adult health care (defined as

Subject(s)
Autistic Disorder , Transition to Adult Care , Humans , Adolescent , Young Adult , Child , Autistic Disorder/therapy , Electronic Health Records , Primary Health Care , Research Design
7.
Methods Inf Med ; 61(5-06): 195-200, 2022 Dec.
Article in English | MEDLINE | ID: mdl-35835447

ABSTRACT

BACKGROUND: Generative pretrained transformer (GPT) models are among the latest large pretrained natural language processing models. They enable model training with limited data and reduce dependency on large datasets, which are scarce and costly to establish and maintain. There is rising interest in exploring the use of GPT models in health care. OBJECTIVE: We investigate the performance of GPT-2 and GPT-Neo models for medical text prediction using 374,787 free-text dental notes. METHODS: We fine-tune pretrained GPT-2 and GPT-Neo models for next word prediction on a dataset of over 374,000 manually written sections of dental clinical notes. Each model was trained on 80% of the dataset, validated on 10%, and tested on the remaining 10%. We report model performance in terms of next word prediction accuracy and loss. For comparison, we also fine-tuned a non-GPT pretrained neural network model, XLNet (large), for next word prediction. We annotated each token in 100 randomly sampled notes by category (e.g., names, abbreviations, clinical terms, punctuation) and compared the performance of each model by token category. RESULTS: Both models achieve acceptable accuracy (GPT-2: 76%; GPT-Neo: 53%), and the GPT-2 model also performs better in manual evaluations, especially for names, abbreviations, and punctuation. Both GPT models outperformed XLNet in terms of accuracy. We share the lessons learned, insights, and suggestions for future implementations. CONCLUSION: The results suggest that pretrained models have the potential to assist medical charting in the future. Our study presents one of the first implementations of a GPT model applied to medical notes.
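
A minimal sketch of measuring next word (next token) prediction accuracy with a stock GPT-2 checkpoint, assuming the transformers and torch packages. The note fragment is an invented placeholder standing in for the dental notes, and the base "gpt2" checkpoint stands in for the authors' fine-tuned models.

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

text = "Patient presents with pain in the lower left molar region."
ids = tokenizer(text, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(ids).logits

# For each position, does the model's top prediction match the actual next token?
preds = logits[0, :-1].argmax(dim=-1)
targets = ids[0, 1:]
accuracy = (preds == targets).float().mean().item()
print(f"Next-token accuracy on this fragment: {accuracy:.1%}")

The per-category analysis in the paper amounts to computing this same top-1 match separately over tokens labeled as names, abbreviations, clinical terms, punctuation, and so on.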


Subject(s)
Natural Language Processing , Neural Networks, Computer
8.
JMIR Med Inform ; 10(2): e32875, 2022 Feb 10.
Article in English | MEDLINE | ID: mdl-35142635

ABSTRACT

Generative pretrained transformer models have recently become popular due to their enhanced capabilities and performance. In contrast to many existing artificial intelligence models, generative pretrained transformer models can perform well with very limited training data. Generative Pretrained Transformer 3 (GPT-3) is one of the latest releases in this line, demonstrating human-like logical and intellectual responses to prompts. Examples include writing essays, answering complex questions, matching pronouns to their nouns, and conducting sentiment analyses. However, questions remain about its implementation in health care, specifically in terms of operationalization and its use in clinical practice and research. In this viewpoint paper, we briefly introduce GPT-3 and its capabilities and outline considerations for its implementation and operationalization in clinical practice through a use case. The implementation considerations include (1) processing needs and information systems infrastructure, (2) operating costs, (3) model biases, and (4) evaluation metrics. In addition, we outline three major operational factors that drive the adoption of GPT-3 in the US health care system: (1) ensuring Health Insurance Portability and Accountability Act compliance, (2) building trust with health care providers, and (3) establishing broader access to GPT-3 tools. This viewpoint can help health care practitioners, developers, clinicians, and decision makers understand the use of powerful artificial intelligence tools integrated into hospital systems and health care.
