Results 1 - 3 of 3
1.
Orthopedics ; : 1-8, 2024 Jun 12.
Article in English | MEDLINE | ID: mdl-38864645

ABSTRACT

BACKGROUND: Disparities in orthopedic trauma care have been reported for racial-ethnic minority and socially disadvantaged patients. We examined differences in perioperative metrics by patient race and ethnicity and by insurance after pelvic fracture in a national sample in the United States.

MATERIALS AND METHODS: The 2016-2019 National Inpatient Sample was queried for White, Black, and Hispanic patients 18 to 64 years old with private, Medicaid, or self-pay insurance who underwent non-elective pelvic fracture surgery. Associations between combined race-ethnicity and insurance subgroups and perioperative metrics (time to surgery, length of stay, in-hospital complications, institutional discharge) were assessed using multivariable generalized linear and logistic regression models. Adjusted percent differences or odds ratios (ORs) were reported.

RESULTS: A weighted total of 14,375 surgeries was included (68.8% in White patients, 16.1% in Black patients, and 15.1% in Hispanic patients; 60.0% private insurance, 26.3% Medicaid, and 13.7% self-pay). Compared with White patients with private insurance, all Black insurance subgroups had longer lengths of stay (+15.38% to +38.78%, P≤.001), as did Hispanic patients with Medicaid (+28.03%, P<.001), White patients with Medicaid (+13.08%, P<.001), and White patients with self-pay (+9.47%, P=.04). Additionally, compared with White patients with private insurance, decreased odds of institutional discharge were observed for all patients with self-pay (OR, 0.24-0.37, P<.001), as well as for White patients with Medicaid (OR, 0.70, P=.003) and Hispanic patients with Medicaid (OR, 0.57, P=.002). There were no significant adjusted associations between race-ethnicity and insurance subgroups and in-hospital complications or time to surgery.

CONCLUSION: These differences in perioperative metrics, primarily for Black patients and self-pay patients, warrant further examination to identify whether they reflect disparities that should be addressed to promote equitable orthopedic trauma care. [Orthopedics. 202x;4x(x):xx-xx.].
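As a rough illustration of the modeling approach described above (not the authors' code), the sketch below fits a weight-adjusted logistic model for institutional discharge and a log-linked model for length of stay using statsmodels; the file name, column names, and covariates are hypothetical stand-ins for National Inpatient Sample variables.

```python
# Minimal sketch, NOT the authors' code: the column names (race_eth, payer,
# discwt, disch_institutional, los, age, female, injury_severity) are
# hypothetical stand-ins for National Inpatient Sample variables.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("nis_pelvic_fracture.csv")  # hypothetical analytic extract

# Combined race-ethnicity x insurance exposure, with White + private
# insurance as the reference subgroup.
df["subgroup"] = (df["race_eth"] + "_" + df["payer"]).astype("category")
others = [c for c in df["subgroup"].cat.categories if c != "White_private"]
df["subgroup"] = df["subgroup"].cat.reorder_categories(["White_private"] + others)

# Logistic regression for institutional discharge; NIS discharge weights
# enter as frequency weights. exp(coef) gives adjusted odds ratios.
discharge = smf.glm(
    "disch_institutional ~ subgroup + age + female + injury_severity",
    data=df, family=sm.families.Binomial(), freq_weights=df["discwt"],
).fit()
print(np.exp(discharge.params))

# Log-linked GLM for length of stay; 100 * (exp(coef) - 1) is the adjusted
# percent difference the abstract reports.
los = smf.glm(
    "los ~ subgroup + age + female + injury_severity",
    data=df, family=sm.families.Gamma(link=sm.families.links.Log()),
    freq_weights=df["discwt"],
).fit()
print(100 * (np.exp(los.params) - 1))
```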

2.
Eur Spine J ; 2024 Mar 15.
Article in English | MEDLINE | ID: mdl-38489044

ABSTRACT

BACKGROUND CONTEXT: Clinical guidelines, developed in concordance with the literature, are often used to guide surgeons' clinical decision-making. Recent advancements in large language models and artificial intelligence (AI) in the medical field come with exciting potential. OpenAI's generative AI model, known as ChatGPT, can quickly synthesize information and generate responses grounded in the medical literature, which may prove to be a useful tool in clinical decision-making for spine care. The current literature has yet to investigate the ability of ChatGPT to assist clinical decision-making with regard to degenerative spondylolisthesis.

PURPOSE: To compare ChatGPT's concordance with the recommendations set forth by the North American Spine Society (NASS) Clinical Guideline for the Diagnosis and Treatment of Degenerative Spondylolisthesis and to assess ChatGPT's accuracy within the context of the most recent literature.

METHODS: ChatGPT-3.5 and ChatGPT-4.0 were prompted with questions from the NASS Clinical Guideline for the Diagnosis and Treatment of Degenerative Spondylolisthesis, and their recommendations were graded as "concordant" or "nonconcordant" relative to those put forth by NASS. A response was considered "concordant" when ChatGPT generated a recommendation that accurately reproduced all major points made in the NASS recommendation. Responses graded "nonconcordant" were further stratified into two subcategories, "insufficient" or "over-conclusive," to provide further insight into the grading rationale. Responses from GPT-3.5 and GPT-4.0 were compared using chi-squared tests.

RESULTS: ChatGPT-3.5 answered 13 of NASS's 28 total clinical questions in concordance with NASS's guidelines (46.4%). The categorical breakdown was as follows: Definitions and Natural History (1/1, 100%), Diagnosis and Imaging (1/4, 25%), Outcome Measures for Medical Intervention and Surgical Treatment (0/1, 0%), Medical and Interventional Treatment (4/6, 66.7%), Surgical Treatment (7/14, 50%), and Value of Spine Care (0/2, 0%). When NASS indicated there was sufficient evidence to offer a clear recommendation, ChatGPT-3.5 generated a concordant response 66.7% of the time (6/9). However, ChatGPT-3.5's concordance dropped to 36.8% (7/19) for clinical questions on which NASS did not provide a clear recommendation. A further breakdown of ChatGPT-3.5's nonconcordance with the guidelines revealed that the vast majority of its inaccurate recommendations were "over-conclusive" (12/15, 80%) rather than "insufficient" (3/15, 20%). ChatGPT-4.0 answered 19 of the 28 total questions (67.9%) in concordance with NASS guidelines (P=0.177). When NASS indicated there was sufficient evidence to offer a clear recommendation, ChatGPT-4.0 generated a concordant response 66.7% of the time (6/9). ChatGPT-4.0's concordance held at 68.4% (13/19, P=0.104) for clinical questions on which NASS did not provide a clear recommendation.

CONCLUSIONS: This study sheds light on the duality of LLM applications within clinical settings: accuracy and utility in some contexts versus inaccuracy and risk in others. ChatGPT was concordant for most clinical questions for which NASS offered recommendations. However, for questions on which NASS did not offer best practices, ChatGPT generated answers that were either too general or inconsistent with the literature, and it even fabricated data and citations. Thus, clinicians should exercise extreme caution when consulting ChatGPT for clinical recommendations, taking care to verify its reliability against the recent literature.
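The headline GPT-3.5 versus GPT-4.0 comparison can be checked directly from the counts the abstract reports. Assuming a standard 2x2 chi-squared test with Yates' continuity correction (scipy's default for 2x2 tables), the stated P = 0.177 is reproduced:

```python
# Recompute the GPT-3.5 vs GPT-4.0 concordance comparison from the
# abstract's counts: 13/28 concordant (GPT-3.5) vs 19/28 (GPT-4.0).
from scipy.stats import chi2_contingency

table = [[13, 28 - 13],   # GPT-3.5: concordant, nonconcordant
         [19, 28 - 19]]   # GPT-4.0: concordant, nonconcordant
chi2, p, dof, expected = chi2_contingency(table)  # Yates correction on 2x2
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")  # ~1.82, p = 0.177 as reported
```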

3.
Spine (Phila Pa 1976) ; 49(9): 640-651, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38213186

ABSTRACT

STUDY DESIGN: Comparative analysis.

OBJECTIVE: To evaluate the ability of Chat Generative Pre-trained Transformer (ChatGPT) to predict appropriate clinical recommendations based on the most recent clinical guidelines for the diagnosis and treatment of low back pain.

BACKGROUND: Low back pain is a very common and often debilitating condition that affects many people globally. ChatGPT is an artificial intelligence model that may be able to generate recommendations for low back pain.

MATERIALS AND METHODS: Using the North American Spine Society Evidence-Based Clinical Guidelines as the gold standard, 82 clinical questions relating to low back pain were entered into ChatGPT (GPT-3.5) independently. For each question, we recorded ChatGPT's answer and then applied a point-answer system (the point being the guideline recommendation and the answer being ChatGPT's response), asking ChatGPT whether the point was mentioned in the answer to assess accuracy. This accuracy assessment was repeated for each question by guideline category with one caveat: ChatGPT was first prompted to answer as an experienced orthopedic surgeon. A two-sample proportion z test was used to assess differences between the preprompt and postprompt scenarios, with alpha=0.05.

RESULTS: ChatGPT's responses were accurate 65% of the time (72% postprompt, P=0.41) for guidelines with clinical recommendations, 46% (58% postprompt, P=0.11) for guidelines with insufficient or conflicting data, and 49% (16% postprompt, P=0.003*) for guidelines with no adequate study to address the clinical question. For guidelines with insufficient or conflicting data, 44% (25% postprompt, P=0.01*) of ChatGPT responses wrongly suggested that sufficient evidence existed.

CONCLUSION: ChatGPT was able to produce sufficient clinical guideline recommendations for low back pain, with overall improvements when initially prompted. However, it tended to wrongly suggest that evidence existed and often failed to mention, especially postprompt, when there was not enough evidence to give an accurate recommendation.
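The abstract reports two-sample proportion z tests but not the number of questions per guideline category, so the sketch below uses invented counts purely to show the mechanics of the test; statsmodels' proportions_ztest is an assumed tool, not the authors' actual method of computation.

```python
# Two-sample proportion z test (alpha = 0.05) as described in the methods.
# The counts are HYPOTHETICAL: the abstract reports only percentages, not
# the number of questions in each guideline category.
from statsmodels.stats.proportion import proportions_ztest

pre_accurate, post_accurate = 12, 4   # hypothetical accurate-response counts
n_questions = 25                      # hypothetical category size

stat, p = proportions_ztest(count=[pre_accurate, post_accurate],
                            nobs=[n_questions, n_questions])
print(f"z = {stat:.2f}, p = {p:.4f}")
if p < 0.05:
    print("pre- vs postprompt accuracy differs at alpha = 0.05")
```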


Subject(s)
Low Back Pain, Orthopedic Surgeons, Humans, Low Back Pain/diagnosis, Low Back Pain/therapy, Artificial Intelligence, Spine