1.
J Neurosurg Spine ; : 1-11, 2024 Jun 28.
Article in English | MEDLINE | ID: mdl-38941643

ABSTRACT

OBJECTIVE: The objective of this study was to assess the safety and accuracy of ChatGPT recommendations against the evidence-based guidelines of the North American Spine Society (NASS) for the diagnosis and treatment of cervical radiculopathy.

METHODS: ChatGPT was prompted with questions from the 2011 NASS clinical guidelines for cervical radiculopathy and evaluated for concordance. Key phrases within the NASS guidelines were identified. Completeness was measured as the number of key phrases overlapping between ChatGPT responses and the NASS guidelines divided by the total number of key phrases. A senior spine surgeon evaluated the ChatGPT responses for safety and accuracy. ChatGPT responses were further evaluated for readability, similarity, and consistency: Flesch Reading Ease scores and Flesch-Kincaid reading levels were measured to assess readability, and the Jaccard Similarity Index was used to assess agreement between ChatGPT responses and the NASS clinical guidelines.

RESULTS: A total of 100 key phrases were identified across 14 NASS clinical guidelines. Mean completeness was 46% for ChatGPT-4.0 and 34% for ChatGPT-3.5, a 12-percentage-point advantage for ChatGPT-4.0. ChatGPT-4.0 outputs had a mean Flesch Reading Ease score of 15.24, indicating text that is very difficult to read and requires a college graduate education to understand. ChatGPT-3.5 outputs had a lower mean score of 8.73, indicating text that is even more difficult to read and requires a professional education level. However, both versions of ChatGPT were more accessible than the NASS guidelines themselves, which had a mean Flesch Reading Ease score of 4.58. With the NASS guidelines as reference, ChatGPT-3.5 registered a mean ± SD Jaccard Similarity Index of 0.20 ± 0.078, while ChatGPT-4.0 had a mean of 0.18 ± 0.068. Based on physician evaluation, outputs from ChatGPT-3.5 and ChatGPT-4.0 were safe 100% of the time. Thirteen of 14 (92.8%) ChatGPT-3.5 responses and 14 of 14 (100%) ChatGPT-4.0 responses agreed with current best clinical practices for cervical radiculopathy according to a senior spine surgeon.

CONCLUSIONS: ChatGPT models provided safe and accurate but incomplete responses to NASS clinical guideline questions about cervical radiculopathy. Although these results suggest that improvements are required before ChatGPT can be reliably deployed in a clinical setting, future versions of the LLM hold promise as an updated reference for guidelines on cervical radiculopathy. Future versions must prioritize accessibility and comprehensibility for a diverse audience.
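
To make the text-overlap metrics concrete, here is a minimal Python sketch of the completeness and Jaccard Similarity Index calculations described above, assuming simple lowercase substring and whitespace-token matching (the study's exact key-phrase matching rules are not specified); the readability scores can be obtained from the third-party textstat package.

```python
# Minimal sketch of the overlap metrics described above. The matching rules
# (lowercasing, substring matching, whitespace tokens) are assumptions; the
# study does not specify its exact procedure.

def completeness(response: str, key_phrases: list[str]) -> float:
    """Fraction of guideline key phrases that appear in the model response."""
    text = response.lower()
    hits = sum(1 for phrase in key_phrases if phrase.lower() in text)
    return hits / len(key_phrases)

def jaccard_similarity(a: str, b: str) -> float:
    """Jaccard index over word sets: |A intersect B| / |A union B|."""
    words_a, words_b = set(a.lower().split()), set(b.lower().split())
    return len(words_a & words_b) / len(words_a | words_b)

# Hypothetical guideline/response pair for illustration only:
guideline = "MRI is recommended as the initial imaging study for cervical radiculopathy"
response = "An MRI is generally the recommended initial imaging study"
print(completeness(response, ["MRI", "initial imaging study"]))  # 1.0
print(round(jaccard_similarity(guideline, response), 2))

# Readability metrics via the third-party textstat package:
import textstat
print(textstat.flesch_reading_ease(response))
print(textstat.flesch_kincaid_grade(response))
```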

3.
Cureus ; 16(4): e58928, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38800166

ABSTRACT

Background: This study investigates the impact of New York's relaxed alcohol consumption policies during the coronavirus disease 2019 (COVID-19) pandemic on alcohol-related traumatic brain injuries (TBIs) among patients admitted to a Level 1 trauma center in Queens. Given the limited research available, it critically explores the link between public health policies and trauma care, aiming to address a significant gap in the literature and to highlight the implications of alcohol regulations during global health emergencies.

Methodology: A retrospective analysis was conducted among trauma patients from 2019 to 2021. The study period was divided into three windows: pre-lockdown (March 7, 2019, to July 31, 2019), lockdown (March 7, 2020, to July 31, 2020), and post-lockdown (March 7, 2021, to July 31, 2021). Data on demographics, injury severity, comorbidities, and outcomes were collected. The study focused on the correlation between New York's alcohol policies and alcohol-related TBI admissions during these periods.

Results: A total of 1,074 admissions were analyzed. The study found no significant change in the proportion of alcohol-positive patients over the full calendar years 2019, 2020, and 2021 (42.65%, 38.91%, and 31.16%, respectively; p = 0.08711). Specifically, during the lockdown period, rates of alcohol-positive TBI patients remained unchanged despite the relaxed alcohol policies, and alcohol-related TBI admissions decreased in the 2021 window compared with the 2020 window.

Conclusions: New York's specific alcohol policies during the COVID-19 pandemic were not correlated with an increase in alcohol-related TBI admissions. Despite the relaxation of alcohol consumption laws, there was no increase in alcohol positivity among TBI patients. The findings suggest a complex relationship between public policies, alcohol use, and trauma during pandemic conditions, indicating that factors other than policy relaxation may influence alcohol-related trauma incidence.
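
As an illustration of the kind of proportion comparison reported above, the sketch below runs a chi-squared test of independence with scipy. The per-year admission counts are not given in the abstract, so the counts are hypothetical, chosen only to approximate the quoted percentages; the resulting p-value lands near, but not exactly at, the published 0.08711.

```python
# Hypothetical illustration of the year-by-year proportion comparison.
# Only percentages and p = 0.08711 are reported in the abstract; the
# counts below are made up to roughly match the quoted percentages.
from scipy.stats import chi2_contingency

# rows: 2019, 2020, 2021; cols: [alcohol-positive, alcohol-negative]
table = [
    [66, 89],    # 66/155 ~ 42.6% positive (hypothetical)
    [64, 100],   # 64/164 ~ 39.0% positive (hypothetical)
    [55, 120],   # 55/175 ~ 31.4% positive (hypothetical)
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, dof = {dof}, p = {p:.4f}")  # p near 0.1 here
```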

5.
Neurospine ; 21(1): 128-146, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38569639

ABSTRACT

OBJECTIVE: Large language models, such as chat generative pre-trained transformer (ChatGPT), have great potential for streamlining medical processes and assisting physicians in clinical decision-making. This study aimed to assess the potential of ChatGPT's two models (GPT-3.5 and GPT-4.0) to support clinical decision-making by comparing their responses on antibiotic prophylaxis in spine surgery with accepted clinical guidelines.

METHODS: The ChatGPT models were prompted with questions from the North American Spine Society (NASS) Evidence-based Clinical Guidelines for Multidisciplinary Spine Care for Antibiotic Prophylaxis in Spine Surgery (2013). Their responses were then compared with the guidelines and assessed for accuracy.

RESULTS: Of the 16 NASS guideline questions concerning antibiotic prophylaxis, 10 responses (62.5%) were accurate with GPT-3.5 and 13 (81%) were accurate with GPT-4.0. Twenty-five percent of GPT-3.5 answers were deemed overly confident, while 62.5% of GPT-4.0 answers directly cited the NASS guideline as evidence for the response.

CONCLUSION: ChatGPT demonstrated an impressive ability to answer clinical questions accurately. The GPT-3.5 model's performance was limited by its tendency to give overly confident responses and its inability to identify the most significant elements in its responses. The GPT-4.0 model's responses had higher accuracy and frequently cited the NASS guideline as direct evidence. While GPT-4.0 is still far from perfect, it has shown an exceptional ability to extract the most relevant available research compared with GPT-3.5. Thus, while ChatGPT has shown far-reaching potential, scrutiny should still be exercised regarding its clinical use at this time.
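
A small sketch of how the accuracy tallies above translate into rates, with an illustrative Fisher exact test appended (the abstract reports raw percentages only; no significance test is claimed there):

```python
# Reproducing the quoted accuracy rates from the reported tallies.
# The Fisher exact test is purely illustrative, not part of the study.
from scipy.stats import fisher_exact

n_questions = 16
accurate = {"GPT-3.5": 10, "GPT-4.0": 13}
for model, k in accurate.items():
    print(f"{model}: {k}/{n_questions} = {k / n_questions:.1%} accurate")

# rows: model; cols: [accurate, inaccurate]
table = [[10, 6], [13, 3]]
odds_ratio, p = fisher_exact(table)
print(f"Fisher exact p = {p:.3f}")
```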

6.
Eur Spine J ; 2024 Mar 15.
Article in English | MEDLINE | ID: mdl-38489044

ABSTRACT

BACKGROUND CONTEXT: Clinical guidelines, developed in concordance with the literature, are often used to guide surgeons' clinical decision-making. Recent advancements in large language models and artificial intelligence (AI) in the medical field come with exciting potential. OpenAI's generative AI model, known as ChatGPT, can quickly synthesize information and generate responses grounded in the medical literature, which may prove to be a useful tool in clinical decision-making for spine care. The current literature has yet to investigate the ability of ChatGPT to assist clinical decision-making with regard to degenerative spondylolisthesis.

PURPOSE: The study aimed to compare ChatGPT's concordance with the recommendations set forth by the North American Spine Society (NASS) Clinical Guideline for the Diagnosis and Treatment of Degenerative Spondylolisthesis and to assess ChatGPT's accuracy within the context of the most recent literature.

METHODS: ChatGPT-3.5 and ChatGPT-4.0 were prompted with questions from the NASS Clinical Guideline for the Diagnosis and Treatment of Degenerative Spondylolisthesis, and their recommendations were graded as "concordant" or "nonconcordant" relative to those put forth by NASS. A response was considered "concordant" when ChatGPT generated a recommendation that accurately reproduced all major points made in the NASS recommendation. Any response graded "nonconcordant" was further stratified into two subcategories, "insufficient" or "over-conclusive," to provide further insight into the grading rationale. Responses from GPT-3.5 and GPT-4.0 were compared using chi-squared tests.

RESULTS: ChatGPT-3.5 answered 13 of NASS's 28 total clinical questions in concordance with NASS's guidelines (46.4%). The categorical breakdown was as follows: Definitions and Natural History (1/1, 100%), Diagnosis and Imaging (1/4, 25%), Outcome Measures for Medical Intervention and Surgical Treatment (0/1, 0%), Medical and Interventional Treatment (4/6, 66.7%), Surgical Treatment (7/14, 50%), and Value of Spine Care (0/2, 0%). When NASS indicated there was sufficient evidence to offer a clear recommendation, ChatGPT-3.5 generated a concordant response 66.7% of the time (6/9). However, ChatGPT-3.5's concordance dropped to 36.8% on clinical questions for which NASS did not provide a clear recommendation (7/19). A further breakdown of ChatGPT-3.5's nonconcordance revealed that the vast majority of its inaccurate recommendations were "over-conclusive" (12/15, 80%) rather than "insufficient" (3/15, 20%). ChatGPT-4.0 answered 19 (67.9%) of the 28 total questions in concordance with NASS guidelines (P = 0.177). When NASS indicated there was sufficient evidence to offer a clear recommendation, ChatGPT-4.0 generated a concordant response 66.7% of the time (6/9). ChatGPT-4.0's concordance held at 68.4% on clinical questions for which NASS did not provide a clear recommendation (13/19, P = 0.104).

CONCLUSIONS: This study sheds light on the duality of LLM applications within clinical settings: accuracy and utility in some contexts versus inaccuracy and risk in others. ChatGPT was concordant for most clinical questions for which NASS offered recommendations. However, for questions on which NASS did not offer best practices, ChatGPT generated answers that were either too general or inconsistent with the literature, and it even fabricated data and citations. Thus, clinicians should exercise extreme caution when consulting ChatGPT for clinical recommendations, taking care to verify its reliability against the recent literature.
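
The overall 2 x 2 chi-squared comparison described above can be reproduced directly from the reported tallies (13/28 vs. 19/28 concordant); scipy's default Yates continuity correction for 2 x 2 tables yields the quoted P = 0.177:

```python
# Chi-squared comparison of overall concordance, GPT-3.5 vs. GPT-4.0,
# built from the tallies reported in the abstract.
from scipy.stats import chi2_contingency

#         concordant  nonconcordant
table = [[13, 15],   # ChatGPT-3.5 (13/28)
         [19, 9]]    # ChatGPT-4.0 (19/28)
chi2, p, dof, _ = chi2_contingency(table)  # Yates correction is the default
print(f"chi2 = {chi2:.3f}, p = {p:.3f}")   # p = 0.177, matching the abstract
```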

7.
Neurospine ; 21(1): 149-158, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38291746

ABSTRACT

OBJECTIVE: Large language models like chat generative pre-trained transformer (ChatGPT) have found success in various sectors, but their application in the medical field remains limited. This study aimed to assess the feasibility of using ChatGPT to provide accurate medical information to patients, specifically evaluating how well ChatGPT versions 3.5 and 4 aligned with the 2012 North American Spine Society (NASS) guidelines for lumbar disk herniation with radiculopathy.

METHODS: ChatGPT's responses to questions based on the NASS guidelines were analyzed for accuracy. Three new categories (overconclusiveness, supplementary information, and incompleteness) were introduced to deepen the analysis: overconclusiveness referred to recommendations not mentioned in the NASS guidelines, supplementary information denoted additional relevant details, and incompleteness indicated omission of crucial information from the NASS guidelines.

RESULTS: Of the 29 clinical guidelines evaluated, ChatGPT-3.5 was accurate in 15 responses (52%), while ChatGPT-4 was accurate in 17 responses (59%). ChatGPT-3.5 was overconclusive in 14 responses (48%), while ChatGPT-4 was overconclusive in 13 responses (45%). Additionally, ChatGPT-3.5 provided supplementary information in 24 responses (83%), and ChatGPT-4 provided it in 27 responses (93%). ChatGPT-3.5 was incomplete in 11 responses (38%), while ChatGPT-4 was incomplete in 8 responses (23%).

CONCLUSION: ChatGPT shows promise for clinical decision-making, but both patients and healthcare providers should exercise caution to ensure safety and quality of care. While these results are encouraging, further research is necessary to validate the use of large language models in clinical settings.
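
Because the categories above are not mutually exclusive (their percentages sum past 100%), each response effectively carries a set of labels. Below is a small sketch of that multi-label tallying, with placeholder gradings rather than the study's actual data:

```python
# Multi-label tallying of response gradings. The gradings listed here are
# placeholders for illustration; the study's per-response data are not given.
from collections import Counter

# Each graded response maps to the set of labels assigned by reviewers.
gradings = [
    {"accurate", "supplementary"},
    {"overconclusive", "supplementary", "incomplete"},
    {"accurate", "supplementary", "incomplete"},
]
counts = Counter(label for labels in gradings for label in labels)
n = len(gradings)
for label, k in sorted(counts.items()):
    print(f"{label}: {k}/{n} = {k / n:.0%}")  # percentages can exceed 100% in total
```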

8.
Spine (Phila Pa 1976) ; 49(9): 640-651, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38213186

ABSTRACT

STUDY DESIGN: Comparative analysis.

OBJECTIVE: To evaluate the ability of Chat Generative Pre-trained Transformer (ChatGPT) to predict appropriate clinical recommendations based on the most recent clinical guidelines for the diagnosis and treatment of low back pain.

BACKGROUND: Low back pain is a very common and often debilitating condition that affects many people globally. ChatGPT is an artificial intelligence model that may be able to generate recommendations for low back pain.

MATERIALS AND METHODS: Using the North American Spine Society Evidence-Based Clinical Guidelines as the gold standard, 82 clinical questions relating to low back pain were entered into ChatGPT (GPT-3.5) independently. For each question, we recorded ChatGPT's answer, then used a point-answer system (the point being the guideline recommendation and the answer being ChatGPT's response) and asked ChatGPT whether the point was mentioned in the answer to assess accuracy. This accuracy assessment was repeated, with the caveat that a prior prompt instructed ChatGPT to answer as an experienced orthopedic surgeon, for each question by guideline category. A two-sample proportion z test was used to assess differences between the preprompt and postprompt scenarios with alpha = 0.05.

RESULTS: ChatGPT's response was accurate 65% of the time (72% postprompt, P = 0.41) for guidelines with clinical recommendations, 46% (58% postprompt, P = 0.11) for guidelines with insufficient or conflicting data, and 49% (16% postprompt, P = 0.003*) for guidelines with no adequate study to address the clinical question. For guidelines with insufficient or conflicting data, 44% (25% postprompt, P = 0.01*) of ChatGPT responses wrongly suggested that sufficient evidence existed.

CONCLUSION: ChatGPT was able to produce sufficient clinical guideline recommendations for low back pain, with overall improvement when initially prompted. However, it tended to wrongly suggest that evidence existed, and it often failed to mention, especially postprompt, when there was not enough evidence to give an accurate recommendation.
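
The two-sample proportion z test named above is available in statsmodels. The per-category question counts are not given in the abstract, so the tallies below are hypothetical and serve only to illustrate the procedure:

```python
# Illustrative two-sample proportion z test (alpha = 0.05), as named in the
# methods. Counts are hypothetical; the abstract reports only percentages.
from statsmodels.stats.proportion import proportions_ztest

pre_accurate, pre_n = 13, 20    # hypothetical preprompt tally for one category
post_accurate, post_n = 15, 20  # hypothetical postprompt tally
stat, p = proportions_ztest([pre_accurate, post_accurate], [pre_n, post_n])
print(f"z = {stat:.3f}, p = {p:.3f}, significant at 0.05: {p < 0.05}")
```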


Subject(s)
Low Back Pain, Orthopedic Surgeons, Humans, Low Back Pain/diagnosis, Low Back Pain/therapy, Artificial Intelligence, Spine
9.
Global Spine J ; 14(3): 998-1017, 2024 Apr.
Article in English | MEDLINE | ID: mdl-37560946

ABSTRACT

STUDY DESIGN: Comparative analysis and narrative review.

OBJECTIVE: To assess and compare ChatGPT's responses to the clinical questions and recommendations proposed by the 2011 North American Spine Society (NASS) Clinical Guideline for the Diagnosis and Treatment of Degenerative Lumbar Spinal Stenosis (LSS), and to explore the advantages and disadvantages of ChatGPT's responses through an updated literature review on spinal stenosis.

METHODS: We prompted ChatGPT with questions from the NASS Evidence-based Clinical Guidelines for LSS and compared its generated responses with the recommendations provided by the guidelines. A review of the literature on the diagnosis and treatment of lumbar spinal stenosis published between January 2012 and April 2023 was performed via PubMed, OVID, and Cochrane.

RESULTS: Fourteen questions proposed by the NASS guidelines for LSS were entered into ChatGPT and directly compared with the responses offered by NASS. Three questions concerned the definition and history of LSS, one concerned diagnostic tests, seven concerned nonsurgical interventions, and three concerned surgical interventions. The literature review identified 40 articles for inclusion, which served to corroborate or contradict the responses generated by ChatGPT.

CONCLUSIONS: ChatGPT's responses were similar to findings in the current literature on LSS. These results demonstrate the potential for implementing ChatGPT in the spine surgeon's workplace as a means of supporting the decision-making process for LSS diagnosis and treatment. However, our narrative summary provides only a limited literature review, and additional research is needed to standardize our findings as a means of validating ChatGPT's use in the clinical space.
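
A date-bounded PubMed query like the one described in the methods could be scripted with Biopython's Entrez module, as in the hedged sketch below; the authors' actual search strings are not given, so the search term is an assumption:

```python
# Sketch of a date-bounded PubMed search via Biopython's Entrez E-utilities
# wrapper. The search term is an assumption; the study's exact queries
# (and its OVID/Cochrane searches) are not reported in the abstract.
from Bio import Entrez

Entrez.email = "you@example.org"  # NCBI requires a contact address
handle = Entrez.esearch(
    db="pubmed",
    term="lumbar spinal stenosis AND (diagnosis OR treatment)",
    mindate="2012/01/01",
    maxdate="2023/04/30",
    datetype="pdat",  # filter by publication date
    retmax=100,
)
record = Entrez.read(handle)
handle.close()
print(record["Count"], record["IdList"][:5])
```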

10.
Spine J ; 23(11): 1684-1691, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37499880

ABSTRACT

BACKGROUND CONTEXT: Venous thromboembolism is a negative outcome of elective spine surgery. However, the use of thromboembolic chemoprophylaxis in this patient population is controversial due to a possible increased risk of epidural hematoma. ChatGPT is an artificial intelligence model that may be able to generate recommendations for thromboembolic prophylaxis in spine surgery.

PURPOSE: To evaluate the accuracy of ChatGPT recommendations for thromboembolic prophylaxis in spine surgery.

STUDY DESIGN/SETTING: Comparative analysis.

PATIENT SAMPLE: None.

OUTCOME MEASURES: Accuracy, over-conclusiveness, supplementary information, and incompleteness of ChatGPT responses compared with the North American Spine Society (NASS) clinical guidelines.

METHODS: ChatGPT was prompted with questions from the 2009 NASS clinical guidelines for antithrombotic therapies and evaluated for concordance with the clinical guidelines. ChatGPT-3.5 responses were obtained on March 5, 2023, and ChatGPT-4.0 responses were obtained on April 7, 2023. A ChatGPT response was classified as accurate if it did not contradict the clinical guideline. Three additional categories were created to further evaluate the ChatGPT responses in comparison with the NASS guidelines: a response was classified as over-conclusive if it made a recommendation where the NASS guideline did not provide one, as supplementary if it included additional relevant information not specified by the NASS guideline, and as incomplete if it failed to provide relevant information included in the NASS guideline.

RESULTS: Twelve clinical guidelines were evaluated in total. Compared with the NASS clinical guidelines, ChatGPT-3.5 was accurate in 4 (33%) of its responses, while ChatGPT-4.0 was accurate in 11 (92%). ChatGPT-3.5 was over-conclusive in 6 (50%) responses, while ChatGPT-4.0 was over-conclusive in 1 (8%). ChatGPT-3.5 provided supplementary information in 8 (67%) responses, and ChatGPT-4.0 did so in 11 (92%). Four (33%) responses from ChatGPT-3.5 were incomplete, as were 4 (33%) responses from ChatGPT-4.0.

CONCLUSIONS: ChatGPT was able to provide recommendations for thromboembolic prophylaxis with reasonable accuracy. ChatGPT-3.5 tended to cite nonexistent sources and was more likely to give specific recommendations, while ChatGPT-4.0 was more conservative in its answers. As ChatGPT is continuously updated, further validation is needed before it can be used as a guideline for clinical practice.
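
For context, posing a guideline question to the model programmatically looks like the sketch below, using the current OpenAI Python SDK. This is an assumption for illustration only: the study queried ChatGPT through its web interface in early 2023, and the question text is paraphrased rather than taken verbatim from the NASS guideline.

```python
# Hypothetical sketch of querying the model via the OpenAI SDK. The study
# itself used the ChatGPT web interface; this is not the authors' method.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
question = (
    "In adult patients undergoing elective spine surgery, is antithrombotic "
    "chemoprophylaxis recommended, and if so, with which agent and timing?"
)  # paraphrased, hypothetical question text
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": question}],
)
print(response.choices[0].message.content)
```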
