1.
Arthroscopy ; 40(6): 1727-1736.e1, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38949274

ABSTRACT

PURPOSE: To categorize and trend annual out-of-pocket expenditures for arthroscopic rotator cuff repair (RCR) patients relative to total healthcare utilization (THU) reimbursement and compare drivers of patient out-of-pocket expenditures (POPE) in a granular fashion via analyses by insurance type and surgical setting. METHODS: Patients who underwent outpatient arthroscopic RCR in the United States from 2013 to 2018 were identified from the IBM MarketScan Database. Primary outcome variables were total POPE and THU reimbursement, which were calculated for all claims in the 9-month perioperative period. Trends in outcome variables over time and differences across insurance types were analyzed. Multivariable analysis was performed to investigate drivers of POPE. RESULTS: A total of 52,330 arthroscopic RCR patients were identified. Between 2013 and 2018, median POPE increased by 47.5% ($917 to $1,353), and median THU increased by 9.3% ($11,964 to $13,076). Patients with high deductible insurance plans paid $1,910 toward their THU, 52.5% more than patients with preferred provider plans ($1,253, P = .001) and 280.5% more than patients with managed care plans ($502, P = .001). All components of POPE increased over the study period, with the largest observed increase being POPE for the immediate procedure (P = .001). On multivariable analysis, out-of-network facility, out-of-network surgeon, and high-deductible insurance most significantly increased POPE. CONCLUSIONS: POPE for arthroscopic RCR increased at a higher rate than THU over the study period, demonstrating that patients are paying an increasing proportion of RCR costs. A large percentage of this increase comes from increasing POPE for the immediate procedure. Out-of-network facility status increased POPE 3 times more than out-of-network surgeon status, and future cost-optimization strategies should focus on facility-specific reimbursements in particular. Last, ambulatory surgery centers (ASCs) significantly reduced POPE, so performing arthroscopic RCRs at ASCs is beneficial to cost-minimization efforts. CLINICAL RELEVANCE: This study highlights that although payers have increased reimbursement for RCR, patient out-of-pocket expenditures have increased at a much higher rate. Furthermore, this study elucidates trends in and drivers of patient out-of-pocket payments for RCR, providing evidence for development of cost-optimization strategies and counseling of patients undergoing RCR.
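The headline growth figures reduce to simple percent-change arithmetic; a minimal Python sketch verifying them from the dollar medians quoted above:

```python
# Verify the reported median growth rates from the abstract above.
def pct_change(start: float, end: float) -> float:
    """Percent change from start to end."""
    return (end - start) / start * 100

print(f"POPE growth: {pct_change(917, 1_353):.1f}%")      # 47.5%
print(f"THU growth:  {pct_change(11_964, 13_076):.1f}%")  # 9.3%
```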


Subjects
Arthroscopy; Health Expenditures; Rotator Cuff Injuries; Humans; Arthroscopy/economics; Male; Female; Health Expenditures/statistics & numerical data; Middle Aged; United States; Rotator Cuff Injuries/surgery; Rotator Cuff Injuries/economics; Ambulatory Surgical Procedures/economics; Insurance, Health, Reimbursement; Patient Acceptance of Health Care/statistics & numerical data; Aged; Rotator Cuff/surgery
2.
Stroke ; 55(7): 1776-1786, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38847098

ABSTRACT

BACKGROUND: It is uncertain whether antiplatelets or anticoagulants are more effective in preventing early recurrent stroke in patients with cervical artery dissection. Following the publication of the observational STOP-CAD (Antithrombotic for Stroke Prevention in Cervical Artery Dissection) study, which has more than doubled the available data, we performed an updated systematic review and meta-analysis comparing antiplatelets versus anticoagulation in cervical artery dissection. METHODS: The systematic review was registered in PROSPERO (CRD42023468063). We searched 5 databases using a combination of keywords that encompass different antiplatelets and anticoagulants, as well as cervical artery dissection. We included relevant randomized trials and observational studies of dissection unrelated to major trauma. Where studies were sufficiently similar, we performed meta-analyses for efficacy (ischemic stroke) and safety (major hemorrhage, symptomatic intracranial hemorrhage, and death) outcomes using relative risks. RESULTS: We identified 11 studies (2 randomized trials and 9 observational studies) that met the inclusion criteria. These included 5039 patients (30% [1512] treated with anticoagulation and 70% [3527] treated with antiplatelets). In meta-analysis, anticoagulation was associated with a lower ischemic stroke risk (relative risk, 0.63 [95% CI, 0.43 to 0.94]; P=0.02; I2=0%) but a higher major bleeding risk (relative risk, 2.25 [95% CI, 1.07 to 4.72]; P=0.03; I2=0%). The risks of death and symptomatic intracranial hemorrhage were similar between the 2 treatments. Effect sizes were larger in randomized trials. There are insufficient data on the efficacy and safety of dual antiplatelet therapy or direct oral anticoagulants. CONCLUSIONS: In this study of patients with cervical artery dissection, anticoagulation was superior to antiplatelet therapy in reducing ischemic stroke but carried a higher major bleeding risk. This argues for an individualized therapeutic approach incorporating the net clinical benefit of ischemic stroke reduction and bleeding risks. Large randomized clinical trials are required to clarify optimal antithrombotic strategies for management of cervical artery dissection.
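For context, pooled estimates such as the relative risk of 0.63 above are typically obtained by inverse-variance pooling of log relative risks. A minimal fixed-effect sketch with Cochran's Q and I² follows; the per-study counts are hypothetical placeholders, not data from the included studies:

```python
import numpy as np

# Each row: (events_tx, n_tx, events_ctl, n_ctl). Hypothetical counts,
# NOT the actual data from the studies in the meta-analysis above.
studies = np.array([
    [5, 100, 10, 120],
    [3,  80,  7,  90],
    [8, 150, 14, 160],
])

e1, n1, e0, n0 = studies.T.astype(float)
log_rr = np.log((e1 / n1) / (e0 / n0))
var = 1 / e1 - 1 / n1 + 1 / e0 - 1 / n0   # variance of each log RR
w = 1 / var                               # inverse-variance weights

pooled = np.sum(w * log_rr) / np.sum(w)   # pooled log RR
se = np.sqrt(1 / np.sum(w))
lo, hi = np.exp(pooled - 1.96 * se), np.exp(pooled + 1.96 * se)

q = np.sum(w * (log_rr - pooled) ** 2)    # Cochran's Q
i2 = max(0.0, (q - (len(studies) - 1)) / q) * 100

print(f"Pooled RR {np.exp(pooled):.2f} (95% CI {lo:.2f}-{hi:.2f}), I2 = {i2:.0f}%")
```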


Subjects
Anticoagulants; Platelet Aggregation Inhibitors; Humans; Platelet Aggregation Inhibitors/therapeutic use; Anticoagulants/therapeutic use; Anticoagulants/adverse effects; Vertebral Artery Dissection/drug therapy; Ischemic Stroke/drug therapy; Ischemic Stroke/prevention & control; Stroke/prevention & control; Stroke/drug therapy; Carotid Artery, Internal, Dissection/drug therapy
3.
J Neurosurg Spine ; : 1-11, 2024 Jun 28.
Article in English | MEDLINE | ID: mdl-38941643

ABSTRACT

OBJECTIVE: The objective of this study was to assess the safety and accuracy of ChatGPT recommendations in comparison to the evidence-based guidelines from the North American Spine Society (NASS) for the diagnosis and treatment of cervical radiculopathy. METHODS: ChatGPT was prompted with questions from the 2011 NASS clinical guidelines for cervical radiculopathy and evaluated for concordance. Selected key phrases within the NASS guidelines were identified. Completeness was measured as the number of overlapping key phrases between ChatGPT responses and NASS guidelines divided by the total number of key phrases. A senior spine surgeon evaluated the ChatGPT responses for safety and accuracy. ChatGPT responses were further evaluated on their readability, similarity, and consistency. Flesch Reading Ease scores and Flesch-Kincaid reading levels were measured to assess readability. The Jaccard Similarity Index was used to assess agreement between ChatGPT responses and NASS clinical guidelines. RESULTS: A total of 100 key phrases were identified across 14 NASS clinical guidelines. The mean completeness of ChatGPT-4.0 was 46%, while ChatGPT-3.5 yielded a completeness of 34%; ChatGPT-4.0 thus outperformed ChatGPT-3.5 by 12 percentage points. ChatGPT-4.0 outputs had a mean Flesch Reading Ease score of 15.24, which is very difficult to read, requiring a college graduate education to understand. ChatGPT-3.5 outputs had a lower mean Flesch Reading Ease score of 8.73, indicating that they are even more difficult to read and require a professional level of education to understand. However, both versions of ChatGPT were more accessible than the NASS guidelines, which had a mean Flesch Reading Ease score of 4.58. Furthermore, with the NASS guidelines as a reference, ChatGPT-3.5 registered a mean ± SD Jaccard Similarity Index score of 0.20 ± 0.078 while ChatGPT-4.0 had a mean of 0.18 ± 0.068. Based on physician evaluation, outputs from ChatGPT-3.5 and ChatGPT-4.0 were safe 100% of the time. Thirteen of 14 (92.8%) ChatGPT-3.5 responses and 14 of 14 (100%) ChatGPT-4.0 responses were in agreement with current best clinical practices for cervical radiculopathy according to a senior spine surgeon. CONCLUSIONS: ChatGPT models were able to provide safe and accurate but incomplete responses to NASS clinical guideline questions about cervical radiculopathy. Although the authors' results suggest that improvements are required before ChatGPT can be reliably deployed in a clinical setting, future versions of the LLM hold promise as an updated reference for guidelines on cervical radiculopathy. Future versions must prioritize accessibility and comprehensibility for a diverse audience.
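The completeness and Jaccard metrics described in the methods reduce to simple set operations. A minimal sketch under the assumption of naive token/substring matching (the study's exact phrase-matching procedure is not specified in the abstract):

```python
def completeness(response: str, key_phrases: list[str]) -> float:
    """Fraction of guideline key phrases that appear in the model response."""
    text = response.lower()
    return sum(p.lower() in text for p in key_phrases) / len(key_phrases)

def jaccard(a: str, b: str) -> float:
    """Jaccard Similarity Index over word sets: |A & B| / |A | B|."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb)

guideline = "MRI is recommended as the initial imaging study for cervical radiculopathy"
response = "For cervical radiculopathy MRI is the recommended initial imaging study"
print(completeness(response, ["MRI", "cervical radiculopathy"]))  # 1.0
print(f"{jaccard(guideline, response):.2f}")
```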

4.
Clin Spine Surg ; 2024 Jun 03.
Article in English | MEDLINE | ID: mdl-38828954

ABSTRACT

STUDY DESIGN: Retrospective cohort. OBJECTIVE: The purpose of this study was to evaluate the effect of overdistraction on interbody cage subsidence. BACKGROUND: Vertebral overdistraction due to the use of large intervertebral cage sizes may increase the risk of postoperative subsidence. METHODS: Patients who underwent anterior cervical discectomy and fusion between 2016 and 2021 were included. All measurements were performed using lateral cervical radiographs at 3 time points: preoperative, immediate postoperative, and final follow-up >6 months postoperatively. Anterior and posterior distraction were calculated by subtracting the preoperative disc height from the immediate postoperative disc height. Cage subsidence was calculated by subtracting the final follow-up postoperative disc height from the immediate postoperative disc height. Associations between anterior and posterior subsidence and distraction were determined using multivariable linear regression models. The analyses controlled for cage type, cervical level, sex, age, smoking status, and osteopenia. RESULTS: Sixty-eight patients and 125 fused levels were included in the study. Of the 68 fusions, 22 were single-level, 35 were 2-level, and 11 were 3-level. The median final follow-up interval was 368 days (range: 181-1257 d). Anterior disc space subsidence was positively associated with anterior distraction (beta = 0.23; 95% CI: 0.08, 0.38; P = 0.004), and posterior disc space subsidence was positively associated with posterior distraction (beta = 0.29; 95% CI: 0.13, 0.45; P < 0.001). No significant associations between anterior distraction and posterior subsidence (beta = 0.07; 95% CI: -0.06, 0.20; P = 0.270) or posterior distraction and anterior subsidence (beta = 0.06; 95% CI: -0.14, 0.27; P = 0.541) were observed. CONCLUSIONS: We found that overdistraction of the disc space was associated with increased postoperative subsidence after anterior cervical discectomy and fusion. Surgeons should consider choosing a smaller cage size to avoid overdistraction and minimize postoperative subsidence.
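The multivariable model described above can be sketched with statsmodels; the file, column names, and covariate coding here are assumptions for illustration, not the authors' actual dataset:

```python
import pandas as pd
import statsmodels.formula.api as smf

# One row per fused level; columns are assumed for illustration.
df = pd.read_csv("fused_levels.csv")  # hypothetical file

model = smf.ols(
    "anterior_subsidence ~ anterior_distraction + C(cage_type) + C(level)"
    " + C(sex) + age + C(smoker) + C(osteopenia)",
    data=df,
).fit()

print(model.params["anterior_distraction"])          # beta (cf. 0.23 above)
print(model.conf_int().loc["anterior_distraction"])  # 95% CI
```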

7.
Neurospine ; 21(1): 128-146, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38569639

ABSTRACT

OBJECTIVE: Large language models, such as chat generative pre-trained transformer (ChatGPT), have great potential for streamlining medical processes and assisting physicians in clinical decision-making. This study aimed to assess the potential of ChatGPT's 2 models (GPT-3.5 and GPT-4.0) to support clinical decision-making by comparing their responses for antibiotic prophylaxis in spine surgery to accepted clinical guidelines. METHODS: ChatGPT models were prompted with questions from the North American Spine Society (NASS) Evidence-based Clinical Guidelines for Multidisciplinary Spine Care for Antibiotic Prophylaxis in Spine Surgery (2013). Their responses were then compared and assessed for accuracy. RESULTS: Of the 16 NASS guideline questions concerning antibiotic prophylaxis, 10 responses (62.5%) were accurate with ChatGPT's GPT-3.5 model and 13 (81%) were accurate with GPT-4.0. Twenty-five percent of GPT-3.5 answers were deemed overly confident, while 62.5% of GPT-4.0 answers directly used the NASS guideline as evidence for their responses. CONCLUSION: ChatGPT demonstrated an impressive ability to accurately answer clinical questions. The GPT-3.5 model's performance was limited by its tendency to give overly confident responses and its inability to identify the most significant elements in its responses. The GPT-4.0 model's responses had higher accuracy and often cited the NASS guideline as direct evidence. While GPT-4.0 is still far from perfect, it has shown an exceptional ability to extract the most relevant research available compared to GPT-3.5. Thus, while ChatGPT has shown far-reaching potential, scrutiny should still be exercised regarding its clinical use at this time.

8.
J Orthop ; 53: 27-33, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38450060

ABSTRACT

Background: Resident training programs in the US use the Orthopaedic In-Training Examination (OITE), developed by the American Academy of Orthopaedic Surgeons (AAOS), to assess the current knowledge of their residents and to identify residents at risk of failing the American Board of Orthopaedic Surgery (ABOS) examination. Optimal strategies for OITE preparation are constantly being explored. There may be a role for Large Language Models (LLMs) in orthopaedic resident education. ChatGPT, an LLM launched in late 2022, has demonstrated the ability to produce accurate, detailed answers, potentially enabling it to aid in medical education and clinical decision-making. The purpose of this study is to evaluate the performance of ChatGPT on Orthopaedic In-Training Examinations using Self-Assessment Exam (SAE) questions from the AAOS database and approved literature as a proxy for the Orthopaedic Board Examination. Methods: 301 SAE questions from the AAOS database and associated AAOS literature were input into ChatGPT's interface in a question and multiple-choice format, and the answers were then analyzed to determine which answer choice was selected. A new chat was used for every question. All answers were recorded, categorized, and compared to the answers given by the OITE and SAE exams, noting whether the answer was right or wrong. Results: Of the 301 questions asked, ChatGPT was able to correctly answer 183 (60.8%). The subjects with the highest percentage of correct answers were basic science (81%), oncology (72.7%), shoulder and elbow (71.9%), and sports (71.4%). The questions were further subdivided into 3 groups: those about management, diagnosis, or knowledge recall. There were 86 management questions, of which 47 were correct (54.7%); 45 diagnosis questions, with 32 correct (71.1%); and 168 knowledge recall questions, with 102 correct (60.7%). Conclusions: ChatGPT has the potential to provide orthopedic educators and trainees with accurate clinical conclusions for the majority of board-style questions, although its reasoning should be carefully analyzed for accuracy and clinical validity. As such, its usefulness in a clinical educational context is currently limited but rapidly evolving. Clinical relevance: ChatGPT can access a multitude of medical data and may help provide accurate answers to clinical questions.

9.
Eur Spine J ; 2024 Mar 15.
Article in English | MEDLINE | ID: mdl-38489044

ABSTRACT

BACKGROUND CONTEXT: Clinical guidelines, developed in concordance with the literature, are often used to guide surgeons' clinical decision making. Recent advancements of large language models and artificial intelligence (AI) in the medical field come with exciting potential. OpenAI's generative AI model, known as ChatGPT, can quickly synthesize information and generate responses grounded in medical literature, which may prove to be a useful tool in clinical decision-making for spine care. The current literature has yet to investigate the ability of ChatGPT to assist clinical decision making with regard to degenerative spondylolisthesis. PURPOSE: The study aimed to compare ChatGPT's concordance with the recommendations set forth by the North American Spine Society (NASS) Clinical Guideline for the Diagnosis and Treatment of Degenerative Spondylolisthesis and assess ChatGPT's accuracy within the context of the most recent literature. METHODS: ChatGPT-3.5 and 4.0 were prompted with questions from the NASS Clinical Guideline for the Diagnosis and Treatment of Degenerative Spondylolisthesis, and their recommendations were graded as "concordant" or "nonconcordant" relative to those put forth by NASS. A response was considered "concordant" when ChatGPT generated a recommendation that accurately reproduced all major points made in the NASS recommendation. Any responses graded "nonconcordant" were further stratified into two subcategories, "insufficient" or "over-conclusive," to provide further insight into the grading rationale. Responses between GPT-3.5 and 4.0 were compared using chi-squared tests. RESULTS: ChatGPT-3.5 answered 13 of NASS's 28 total clinical questions in concordance with NASS's guidelines (46.4%). Categorical breakdown is as follows: Definitions and Natural History (1/1, 100%), Diagnosis and Imaging (1/4, 25%), Outcome Measures for Medical Intervention and Surgical Treatment (0/1, 0%), Medical and Interventional Treatment (4/6, 66.7%), Surgical Treatment (7/14, 50%), and Value of Spine Care (0/2, 0%). When NASS indicated there was sufficient evidence to offer a clear recommendation, ChatGPT-3.5 generated a concordant response 66.7% of the time (6/9). However, ChatGPT-3.5's concordance dropped to 36.8% when asked clinical questions for which NASS did not provide a clear recommendation (7/19). A further breakdown of ChatGPT-3.5's nonconcordance with the guidelines revealed that a vast majority of its inaccurate recommendations were "over-conclusive" (12/15, 80%) rather than "insufficient" (3/15, 20%). ChatGPT-4.0 answered 19 (67.9%) of the 28 total questions in concordance with NASS guidelines (P = 0.177). When NASS indicated there was sufficient evidence to offer a clear recommendation, ChatGPT-4.0 generated a concordant response 66.7% of the time (6/9). ChatGPT-4.0's concordance held at 68.4% for clinical questions on which NASS did not provide a clear recommendation (13/19, P = 0.104). CONCLUSIONS: This study sheds light on the duality of LLM applications within clinical settings: accuracy and utility in some contexts versus inaccuracy and risk in others. ChatGPT was concordant for most clinical questions NASS offered recommendations for. However, for questions NASS did not offer best practices, ChatGPT generated answers that were either too general or inconsistent with the literature, and it even fabricated data and citations. Thus, clinicians should exercise extreme caution when consulting ChatGPT for clinical recommendations, taking care to ensure its reliability within the context of recent literature.
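The 46.4% vs. 67.9% concordance comparison can be reproduced directly from the counts reported above with a 2x2 chi-squared test:

```python
from scipy.stats import chi2_contingency

# Concordant vs. nonconcordant responses, out of 28 questions each.
table = [[13, 15],   # ChatGPT-3.5
         [19,  9]]   # ChatGPT-4.0
chi2, p, dof, _ = chi2_contingency(table)  # Yates continuity correction by default
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")   # p ~ 0.177, matching the reported value
```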

10.
Stroke ; 55(4): 921-930, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38299350

ABSTRACT

BACKGROUND: Transcarotid artery revascularization (TCAR) is an interventional therapy for symptomatic internal carotid artery disease. Currently, the utilization of TCAR is contentious due to limited evidence. In this study, we evaluate the safety and efficacy of TCAR in patients with symptomatic internal carotid artery disease compared with carotid endarterectomy (CEA) and carotid artery stenting (CAS). METHODS: A systematic review was conducted, spanning from January 2000 to February 2023, encompassing studies that used TCAR for the treatment of symptomatic internal carotid artery disease. The primary outcomes included 30-day stroke or transient ischemic attack, myocardial infarction, and mortality. Secondary outcomes comprised cranial nerve injury and major bleeding. Pooled odds ratios (ORs) for each outcome were calculated to compare TCAR with CEA and CAS. Furthermore, subgroup analyses were performed based on age and degree of stenosis. In addition, a sensitivity analysis was conducted by excluding the Vascular Quality Initiative registry population. RESULTS: A total of 7 studies involving 24,246 patients were analyzed. Within this cohort, 4,771 patients underwent TCAR, 12,350 underwent CEA, and 7,125 underwent CAS. Compared with CAS, TCAR was associated with a similar rate of stroke or transient ischemic attack (OR, 0.77 [95% CI, 0.33-1.82]) and myocardial infarction (OR, 1.29 [95% CI, 0.83-2.01]) but lower mortality (OR, 0.42 [95% CI, 0.22-0.81]). Compared with CEA, TCAR was associated with a higher rate of stroke or transient ischemic attack (OR, 1.26 [95% CI, 1.03-1.54]) but similar rates of myocardial infarction (OR, 0.90 [95% CI, 0.64-1.38]) and mortality (OR, 1.35 [95% CI, 0.87-2.10]). CONCLUSIONS: Although CEA has traditionally been considered superior to stenting for symptomatic carotid stenosis, TCAR may have some advantages over CAS. Prospective randomized trials comparing the 3 modalities are needed.


Subjects
Carotid Artery Diseases; Carotid Stenosis; Endarterectomy, Carotid; Endovascular Procedures; Ischemic Attack, Transient; Myocardial Infarction; Stroke; Humans; Carotid Stenosis/complications; Ischemic Attack, Transient/complications; Prospective Studies; Risk Factors; Risk Assessment; Treatment Outcome; Stents; Carotid Artery Diseases/surgery; Carotid Artery Diseases/complications; Stroke/complications; Arteries; Myocardial Infarction/complications; Retrospective Studies
11.
Neurospine ; 21(1): 149-158, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38291746

ABSTRACT

OBJECTIVE: Large language models like chat generative pre-trained transformer (ChatGPT) have found success in various sectors, but their application in the medical field remains limited. This study aimed to assess the feasibility of using ChatGPT to provide accurate medical information to patients, specifically evaluating how well ChatGPT versions 3.5 and 4 aligned with the 2012 North American Spine Society (NASS) guidelines for lumbar disk herniation with radiculopathy. METHODS: ChatGPT's responses to questions based on the NASS guidelines were analyzed for accuracy. Three new categories (overconclusiveness, supplementary information, and incompleteness) were introduced to deepen the analysis. Overconclusiveness referred to recommendations not mentioned in the NASS guidelines, supplementary information denoted additional relevant details, and incompleteness indicated omitted crucial information from the NASS guidelines. RESULTS: Out of 29 clinical guidelines evaluated, ChatGPT-3.5 demonstrated accuracy in 15 responses (52%), while ChatGPT-4 achieved accuracy in 17 responses (59%). ChatGPT-3.5 was overconclusive in 14 responses (48%), while ChatGPT-4 exhibited overconclusiveness in 13 responses (45%). Additionally, ChatGPT-3.5 provided supplementary information in 24 responses (83%), and ChatGPT-4 provided supplementary information in 27 responses (93%). In terms of incompleteness, ChatGPT-3.5 displayed this in 11 responses (38%), while ChatGPT-4 showed incompleteness in 8 responses (23%). CONCLUSION: ChatGPT shows promise for clinical decision-making, but both patients and healthcare providers should exercise caution to ensure safety and quality of care. While these results are encouraging, further research is necessary to validate the use of large language models in clinical settings.

12.
Spine (Phila Pa 1976) ; 49(9): 640-651, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38213186

ABSTRACT

STUDY DESIGN: Comparative analysis. OBJECTIVE: To evaluate Chat Generative Pre-trained Transformer's (ChatGPT's) ability to predict appropriate clinical recommendations based on the most recent clinical guidelines for the diagnosis and treatment of low back pain. BACKGROUND: Low back pain is a very common and often debilitating condition that affects many people globally. ChatGPT is an artificial intelligence model that may be able to generate recommendations for low back pain. MATERIALS AND METHODS: Using the North American Spine Society Evidence-Based Clinical Guidelines as the gold standard, 82 clinical questions relating to low back pain were entered into ChatGPT (GPT-3.5) independently. For each question, we recorded ChatGPT's answer, then used a point-answer system (the point being the guideline recommendation and the answer being ChatGPT's response) and asked ChatGPT whether the point was mentioned in the answer to assess accuracy. This accuracy assessment was repeated with one caveat (a prior prompt instructing ChatGPT to answer as an experienced orthopedic surgeon) for each question by guideline category. A two-sample proportion z test was used to assess any differences between the pre-prompt and post-prompt scenarios with alpha = 0.05. RESULTS: ChatGPT's response was accurate 65% of the time (72% post-prompt, P = 0.41) for guidelines with clinical recommendations, 46% (58% post-prompt, P = 0.11) for guidelines with insufficient or conflicting data, and 49% (16% post-prompt, P = 0.003*) for guidelines with no adequate study to address the clinical question. For guidelines with insufficient or conflicting data, 44% (25% post-prompt, P = 0.01*) of ChatGPT responses wrongly suggested that sufficient evidence existed. CONCLUSION: ChatGPT was able to produce a sufficient clinical guideline recommendation for low back pain, with overall improvements if initially prompted. However, it tended to wrongly suggest evidence and often failed to mention, especially post-prompt, when there was not enough evidence to give an accurate recommendation.
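The pre- vs. post-prompt comparisons above use a two-sample proportion z test; a minimal statsmodels sketch follows. The counts are hypothetical placeholders, since the abstract reports only percentages, not per-category question counts:

```python
from statsmodels.stats.proportion import proportions_ztest

# Accurate responses pre- and post-prompt for one guideline category
# (hypothetical counts, for illustration only).
successes = [20, 6]
totals = [41, 38]
stat, p = proportions_ztest(successes, totals)
print(f"z = {stat:.2f}, p = {p:.4f}")
```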


Subjects
Low Back Pain; Orthopedic Surgeons; Humans; Low Back Pain/diagnosis; Low Back Pain/therapy; Artificial Intelligence; Spine
13.
Global Spine J ; 14(3): 998-1017, 2024 Apr.
Article in English | MEDLINE | ID: mdl-37560946

ABSTRACT

STUDY DESIGN: Comparative Analysis and Narrative Review. OBJECTIVE: To assess and compare ChatGPT's responses to the clinical questions and recommendations proposed by the 2011 North American Spine Society (NASS) Clinical Guideline for the Diagnosis and Treatment of Degenerative Lumbar Spinal Stenosis (LSS). We explore the advantages and disadvantages of ChatGPT's responses through an updated literature review on spinal stenosis. METHODS: We prompted ChatGPT with questions from the NASS Evidence-based Clinical Guidelines for LSS and compared its generated responses with the recommendations provided by the guidelines. A review of the literature on the diagnosis and treatment of lumbar spinal stenosis between January 2012 and April 2023 was performed via PubMed, OVID, and Cochrane. RESULTS: 14 questions proposed by the NASS guidelines for LSS were entered into ChatGPT and directly compared to the responses offered by NASS. Three questions were on the definition and history of LSS, one on diagnostic tests, seven on non-surgical interventions, and three on surgical interventions. The review process identified 40 articles for inclusion, which helped corroborate or contradict the responses generated by ChatGPT. CONCLUSIONS: ChatGPT's responses were similar to findings in the current literature on LSS. These results demonstrate the potential for implementing ChatGPT into the spine surgeon's workplace as a means of supporting the decision-making process for LSS diagnosis and treatment. However, our narrative summary provides only a limited literature review, and additional research is needed to validate ChatGPT's use in the clinical space.

15.
Global Spine J ; : 21925682231224753, 2023 Dec 26.
Article in English | MEDLINE | ID: mdl-38147047

ABSTRACT

STUDY DESIGN: Retrospective cohort study. OBJECTIVES: This study assessed the effectiveness of a popular large language model, ChatGPT-4, in predicting Current Procedural Terminology (CPT) codes from surgical operative notes. By employing a combination of prompt engineering, natural language processing (NLP), and machine learning techniques on standard operative notes, the study sought to enhance billing efficiency, optimize revenue collection, and reduce coding errors. METHODS: The model was given 3 different types of prompts for 50 surgical operative notes from 2 spine surgeons. The first trial simply asked the model to generate CPT codes for a given operative note. The second trial additionally included 3 example operative notes and their associated CPT codes, and the third trial included a list of every possible CPT code in the dataset to prime the model. CPT codes generated by the model were compared to those generated by the billing department. Model evaluation was performed by calculating the area under the receiver operating characteristic curve (AUROC) and the area under the precision-recall curve (AUPRC). RESULTS: The trial that involved priming ChatGPT with a list of every possible CPT code performed the best, with an AUROC of .87 and an AUPRC of .67, and an AUROC of .81 and AUPRC of .76 when examining only the most common CPT codes. CONCLUSIONS: ChatGPT-4 can aid in automating CPT billing from orthopedic surgery operative notes, driving down healthcare expenditures and enhancing billing code precision as the model evolves and fine-tuning becomes available.
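AUROC and AUPRC for multi-label CPT prediction can be computed with scikit-learn; a minimal sketch with random placeholder arrays (rows = operative notes, columns = candidate CPT codes), not the study's data:

```python
import numpy as np
from sklearn.metrics import average_precision_score, roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=(50, 10))   # billing-department labels
y_score = rng.random(size=(50, 10))          # model-predicted probabilities

print("AUROC:", roc_auc_score(y_true, y_score, average="micro"))
print("AUPRC:", average_precision_score(y_true, y_score, average="micro"))
```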

16.
Global Spine J ; : 21925682231202579, 2023 Sep 13.
Article in English | MEDLINE | ID: mdl-37703497

ABSTRACT

STUDY DESIGN: A retrospective database study of patients at an urban academic medical center undergoing Anterior Cervical Discectomy and Fusion (ACDF) surgery between 2008 and 2019. OBJECTIVE: ACDF is one of the most common spinal procedures. Old age has been found to be a common risk factor for postoperative complications across a plethora of spine procedures. Little is known about how this risk changes among elderly cohorts, such as the difference between elderly (60+) and octogenarian (80+) patients. This study seeks to analyze the disparate rates of complications following elective ACDF between patients aged 60-69 or 70-79 and those aged 80+ at an urban academic medical center. METHODS: We identified patients who had undergone ACDF procedures using CPT codes 22551, 22552, and 22554. Emergent procedures were excluded, and patients were subdivided on the basis of age. Each cohort was then propensity matched for univariate and multivariate logistic regression analysis. RESULTS: The propensity matching resulted in 25 pairs in both the 70-79 vs. 80+ y.o. cohort comparison and the 60-69 vs. 80+ y.o. cohort comparison. None of the cohorts differed significantly in demographic variables. Differences between elderly cohorts were less pronounced: the 80+ y.o. cohort experienced only a significantly higher total direct cost (P = .03) compared to the 70-79 y.o. cohort and a significantly longer operative time (P = .04) compared to the 60-69 y.o. cohort. CONCLUSIONS: Octogenarian patients do not face much riskier outcomes following elective ACDF procedures than do younger elderly patients. Age alone should not be used to screen patients for ACDF.
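One common way to implement the propensity matching described above is logistic-regression propensity scores with 1:1 nearest-neighbor matching; a minimal sketch in which the file, column names, and covariates are assumptions (the abstract does not list the matching variables):

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

df = pd.read_csv("acdf_patients.csv")        # hypothetical file
covariates = ["sex", "bmi", "asa_class"]     # assumed numeric covariates

ps = LogisticRegression(max_iter=1000).fit(df[covariates], df["is_octogenarian"])
df["ps"] = ps.predict_proba(df[covariates])[:, 1]

treated = df[df["is_octogenarian"] == 1]
control = df[df["is_octogenarian"] == 0]
nn = NearestNeighbors(n_neighbors=1).fit(control[["ps"]])
_, idx = nn.kneighbors(treated[["ps"]])      # closest control per 80+ patient
matched = pd.concat([treated, control.iloc[idx.ravel()]])
```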

17.
Clin Shoulder Elb ; 26(3): 231-237, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37607857

ABSTRACT

BACKGROUND: In the past decade, the number of anatomic total shoulder arthroplasty (aTSA) procedures has steadily increased. Patients over 65 years of age comprise the vast majority of recipients, and their outcomes have been well documented; however, patients are opting for definitive surgical treatment at younger ages. We aim to report the effects of age on long-term clinical outcomes following aTSA. METHODS: Among the patients who underwent aTSA, 119 shoulders were retrospectively analyzed. Preoperative and postoperative clinical outcome data were collected. Linear regression analysis (univariate and multivariate) was conducted to evaluate the associations of clinical outcomes with age. Kaplan-Meier curves and Cox regression analyses were performed to evaluate implant survival. RESULTS: At final follow-up, patients of all ages undergoing aTSA experienced significant and sustained improvements in all primary outcome measures compared with preoperative values. Based on multivariate analysis, age at the time of surgery was a significant predictor of postoperative outcomes. Excellent implant survival was observed over the course of this study, and Cox regression survival analysis indicated that age and sex were not associated with an increased risk of implant failure. CONCLUSIONS: When controlling for sex and follow-up duration, older age was associated with significantly better patient-reported outcome measures. Despite this difference, we noted no significant effects on range of motion or implant survival. Level of evidence: IV.
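The survival analyses described can be sketched with the lifelines library; the file and column names are assumptions for illustration, with sex coded numerically:

```python
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter

df = pd.read_csv("atsa_outcomes.csv")  # hypothetical file

km = KaplanMeierFitter()
km.fit(df["years_followup"], event_observed=df["implant_failure"])
km.plot_survival_function()            # implant survival curve

cph = CoxPHFitter()
cph.fit(df[["years_followup", "implant_failure", "age", "sex"]],
        duration_col="years_followup", event_col="implant_failure")
cph.print_summary()                    # hazard ratios for age and sex
```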

18.
Spine J ; 23(11): 1684-1691, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37499880

ABSTRACT

BACKGROUND CONTEXT: Venous thromboembolism is a negative outcome of elective spine surgery. However, the use of thromboembolic chemoprophylaxis in this patient population is controversial due to the possible increased risk of epidural hematoma. ChatGPT is an artificial intelligence model which may be able to generate recommendations for thromboembolic prophylaxis in spine surgery. PURPOSE: To evaluate the accuracy of ChatGPT recommendations for thromboembolic prophylaxis in spine surgery. STUDY DESIGN/SETTING: Comparative analysis. PATIENT SAMPLE: None. OUTCOME MEASURES: Accuracy, over-conclusiveness, supplementary information, and incompleteness of ChatGPT responses compared to the North American Spine Society (NASS) clinical guidelines. METHODS: ChatGPT was prompted with questions from the 2009 NASS clinical guidelines for antithrombotic therapies and evaluated for concordance with the clinical guidelines. ChatGPT-3.5 responses were obtained on March 5, 2023, and ChatGPT-4.0 responses were obtained on April 7, 2023. A ChatGPT response was classified as accurate if it did not contradict the clinical guideline. Three additional categories were created to further evaluate ChatGPT responses in comparison to the NASS guidelines: over-conclusiveness, supplementary information, and incompleteness. ChatGPT was classified as over-conclusive if it made a recommendation where the NASS guideline did not provide one, as supplementary if it included additional relevant information not specified by the NASS guideline, and as incomplete if it failed to provide relevant information included in the NASS guideline. RESULTS: Twelve clinical guidelines were evaluated in total. Compared to the NASS clinical guidelines, ChatGPT-3.5 was accurate in 4 (33%) of its responses while ChatGPT-4.0 was accurate in 11 (92%) responses. ChatGPT-3.5 was over-conclusive in 6 (50%) of its responses while ChatGPT-4.0 was over-conclusive in 1 (8%) response. ChatGPT-3.5 provided supplementary information in 8 (67%) of its responses, and ChatGPT-4.0 provided supplementary information in 11 (92%) responses. Four (33%) responses from ChatGPT-3.5 were incomplete, and 4 (33%) responses from ChatGPT-4.0 were incomplete. CONCLUSIONS: ChatGPT was able to provide recommendations for thromboembolic prophylaxis with reasonable accuracy. ChatGPT-3.5 tended to cite nonexistent sources and was more likely to give specific recommendations, while ChatGPT-4.0 was more conservative in its answers. As ChatGPT is continuously updated, further validation is needed before it can be used as a guideline for clinical practice.

19.
Sci Rep ; 13(1): 10163, 2023 06 22.
Article in English | MEDLINE | ID: mdl-37349359

ABSTRACT

Miniaturized electrical stimulation (ES) implants show great promise in practice, but their real-time control by means of biophysical mechanistic algorithms is not feasible due to computational complexity. Here, we study the feasibility of more computationally efficient machine learning methods to control ES implants. For this, we estimate the normalized twitch force of the stimulated extensor digitorum longus muscle in n = 11 Wistar rats with intra- and cross-subject calibration. After 2000 training stimulations, we reach a mean absolute error of 0.03 in an intra-subject setting and 0.2 in a cross-subject setting with a random forest regressor. To the best of our knowledge, this work is the first experiment showing the feasibility of using AI to emulate complex mechanistic ES models. However, the results of cross-subject training motivate more research on error-reduction methods for this setting.
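A random forest regressor evaluated by mean absolute error, as used above, takes only a few lines in scikit-learn; the arrays here are random placeholders standing in for stimulation parameters and measured twitch forces, not the rat data:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
X_train, y_train = rng.random((2000, 4)), rng.random(2000)  # 2000 training stimulations
X_test, y_test = rng.random((500, 4)), rng.random(500)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
mae = mean_absolute_error(y_test, model.predict(X_test))
print(f"MAE: {mae:.3f}")  # cf. 0.03 intra-subject reported above
```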


Subjects
Artificial Intelligence; Muscle, Skeletal; Rats; Animals; Rats, Wistar; Feasibility Studies; Muscle, Skeletal/physiology; Electric Stimulation/methods; Muscle Contraction
20.
Global Spine J ; : 21925682231164935, 2023 Mar 18.
Article in English | MEDLINE | ID: mdl-36932733

ABSTRACT

STUDY DESIGN: Retrospective cohort. OBJECTIVE: Billing- and coding-related administrative tasks are a major source of healthcare expenditure in the United States. We aim to show that a second-iteration Natural Language Processing (NLP) machine learning algorithm, XLNet, can automate the generation of CPT codes from operative notes in ACDF, PCDF, and CDA procedures. METHODS: We collected 922 operative notes from patients who underwent ACDF, PCDF, or CDA from 2015 to 2020, and included the CPT codes generated by the billing department. We trained XLNet, a generalized autoregressive pretraining method, on this dataset and tested its performance by calculating AUROC and AUPRC. RESULTS: The performance of the model approached human accuracy. Trial 1 (ACDF) achieved an AUROC of .82 (range: .48-.93), an AUPRC of .81 (range: .45-.97), and class-by-class accuracy of 77% (range: 34%-91%); trial 2 (PCDF) achieved an AUROC of .83 (.44-.94), an AUPRC of .70 (.45-.96), and class-by-class accuracy of 71% (42%-93%); trial 3 (ACDF and CDA) achieved an AUROC of .95 (.68-.99), an AUPRC of .91 (.56-.98), and class-by-class accuracy of 87% (63%-99%); trial 4 (ACDF, PCDF, and CDA) achieved an AUROC of .95 (.76-.99), an AUPRC of .84 (.49-.99), and class-by-class accuracy of 88% (70%-99%). CONCLUSIONS: We show that the XLNet model can be successfully applied to orthopedic surgeons' operative notes to generate CPT billing codes. As NLP models continue to improve, billing can be greatly augmented with artificial intelligence-assisted generation of CPT billing codes, which will help minimize error and promote standardization in the process.
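For reference, a fine-tuned XLNet can be queried for multi-label CPT prediction via the Hugging Face transformers library; a minimal sketch in which the checkpoint name, num_labels, and decision threshold are assumptions, not the study's configuration:

```python
import torch
from transformers import XLNetForSequenceClassification, XLNetTokenizer

tok = XLNetTokenizer.from_pretrained("xlnet-base-cased")
model = XLNetForSequenceClassification.from_pretrained(
    "xlnet-base-cased",                 # placeholder checkpoint
    num_labels=12,                      # assumed size of the CPT label set
    problem_type="multi_label_classification",
)

note = "Anterior cervical discectomy and fusion at C5-C6 with allograft..."
inputs = tok(note, return_tensors="pt", truncation=True, max_length=512)
with torch.no_grad():
    probs = torch.sigmoid(model(**inputs).logits)
predicted = (probs > 0.5).nonzero()     # indices of predicted CPT codes
print(predicted)
```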
