Results 1 - 20 of 21
1.
Pain ; 2024 May 03.
Article in English | MEDLINE | ID: mdl-38723171

ABSTRACT

Pragmatic, randomized, controlled trials hold the potential to directly inform clinical decision making and health policy regarding the treatment of people experiencing pain. Pragmatic trials are designed to replicate or are embedded within routine clinical care and are increasingly valued to bridge the gap between trial research and clinical practice, especially in multidimensional conditions, such as pain, and in nonpharmacological intervention research. To maximize the potential of pragmatic trials in pain research, the careful consideration of each methodological decision is required. Trials aligned with routine practice pose several challenges, such as determining and enrolling appropriate study participants, deciding on the appropriate level of flexibility in treatment delivery, integrating information on concomitant treatments and adherence, and choosing comparator conditions and outcome measures. Ensuring data quality in real-world clinical settings is another challenging goal. Furthermore, current trials in the field would benefit from analysis methods that allow for a differentiated understanding of effects across patient subgroups and improved reporting of methods and context, which is required to assess the generalizability of findings. At the same time, a range of novel methodological approaches provide opportunities for enhanced efficiency and relevance of pragmatic trials to stakeholders and clinical decision making. In this study, best-practice considerations for these and other concerns in pragmatic trials of pain treatments are offered and a number of promising solutions are discussed. The basis of these recommendations was an Initiative on Methods, Measurement, and Pain Assessment in Clinical Trials (IMMPACT) meeting organized by the Analgesic, Anesthetic, and Addiction Clinical Trial Translations, Innovations, Opportunities, and Networks (ACTTION) public-private partnership.

2.
Pain ; 165(5): 1013-1028, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38198239

ABSTRACT

In the traditional clinical research model, patients are typically involved only as participants. However, there has been a shift in recent years highlighting the value and contributions that patients bring as members of the research team, across the clinical research lifecycle. It is becoming increasingly evident that to develop research that is both meaningful to people who have the targeted condition and feasible, there are important benefits of involving patients in the planning, conduct, and dissemination of research from its earliest stages. In fact, research funders and regulatory agencies are now explicitly encouraging, and sometimes requiring, that patients are engaged as partners in research. Although this approach has become commonplace in some fields of clinical research, it remains the exception in clinical pain research. As such, the Initiative on Methods, Measurement, and Pain Assessment in Clinical Trials convened a meeting with patient partners and international representatives from academia, patient advocacy groups, government regulatory agencies, research funding organizations, academic journals, and the biopharmaceutical industry to develop consensus recommendations for advancing patient engagement in all stages of clinical pain research in an effective and purposeful manner. This article summarizes the results of this meeting and offers considerations for meaningful and authentic engagement of patient partners in clinical pain research, including recommendations for representation, timing, continuous engagement, measurement, reporting, and research dissemination.


Subject(s)
Pain , Patient Participation , Humans , Research Design
3.
Pain ; 164(7): 1457-1472, 2023 Jul 01.
Article in English | MEDLINE | ID: mdl-36943273

ABSTRACT

Many questions regarding the clinical management of people experiencing pain and related health policy decision-making may best be answered by pragmatic controlled trials. To generate clinically relevant and widely applicable findings, such trials aim to reproduce elements of routine clinical care or are embedded within clinical workflows. In contrast with traditional efficacy trials, pragmatic trials are intended to address a broader set of external validity questions critical for stakeholders (clinicians, healthcare leaders, policymakers, insurers, and patients) in considering the adoption and use of evidence-based treatments in daily clinical care. This article summarizes methodological considerations for pragmatic trials, mainly concerning methods of fundamental importance to the internal validity of trials. The relationship between these methods and common pragmatic trial methods and goals is considered, recognizing that the resulting trial designs are highly dependent on the specific research question under investigation. The basis of this statement was an Initiative on Methods, Measurement, and Pain Assessment in Clinical Trials (IMMPACT) systematic review of methods and a consensus meeting. The meeting was organized by the Analgesic, Anesthetic, and Addiction Clinical Trial Translations, Innovations, Opportunities, and Networks (ACTTION) public-private partnership. The consensus process was informed by expert presentations, panel and consensus discussions, and a preparatory systematic review. In the context of pragmatic trials of pain treatments, we present fundamental considerations for the planning phase of pragmatic trials, including the specification of trial objectives, the selection of adequate designs, and methods to enhance internal validity while maintaining the ability to answer pragmatic research questions.


Subject(s)
Analgesics , Pain Management , Humans , Analgesics/therapeutic use , Consensus , Pain/drug therapy , Research Design , Pragmatic Clinical Trials as Topic
4.
Pain ; 162(11): 2669-2681, 2021 11 01.
Article in English | MEDLINE | ID: mdl-33863862

ABSTRACT

Randomized clinical trials have demonstrated the efficacy of opioid analgesics for the treatment of acute and chronic pain conditions, and for some patients, these medications may be the only effective treatment available. Unfortunately, opioid analgesics are also associated with major risks (eg, opioid use disorder) and adverse outcomes (eg, respiratory depression and falls). The risks and adverse outcomes associated with opioid analgesics have prompted efforts to reduce their use in the treatment of both acute and chronic pain. This article presents Initiative on Methods, Measurement, and Pain Assessment in Clinical Trials (IMMPACT) consensus recommendations for the design of opioid-sparing clinical trials. The recommendations presented in this article are based on the following definition of an opioid-sparing intervention: any intervention that (1) prevents the initiation of treatment with opioid analgesics, (2) decreases the duration of such treatment, (3) reduces the total dosages of opioids that are prescribed for or used by patients, or (4) reduces opioid-related adverse outcomes (without increasing opioid dosages), all without causing an unacceptable increase in pain. These recommendations are based on the results of a background review, presentations and discussions at an IMMPACT consensus meeting, and iterative drafts of this article modified to accommodate input from the co-authors. We discuss opioid-sparing definitions, study objectives, outcome measures, the assessment of opioid-related adverse events, incorporation of adequate pain control in trial design, interpretation of research findings, and future research priorities to inform opioid-sparing trial methods. The considerations and recommendations presented in this article are meant to help guide the design, conduct, analysis, and interpretation of future trials.


Subject(s)
Analgesics, Opioid , Chronic Pain , Analgesics, Opioid/therapeutic use , Chronic Pain/drug therapy , Humans , Pain Management , Pain Measurement
5.
Neurology ; 95(22): 1005-1014, 2020 12 01.
Article in English | MEDLINE | ID: mdl-33055271

ABSTRACT

OBJECTIVE: To present standardized diagnostic criteria for idiopathic distal sensory polyneuropathy (iDSP) and its subtypes: idiopathic mixed fiber sensory neuropathy (iMFN), idiopathic small fiber sensory neuropathy (iSFN), and idiopathic large fiber sensory neuropathy (iLFN) for use in research. METHODS: The Analgesic, Anesthetic, and Addiction Clinical Trial Translations, Innovations, Opportunities and Networks (ACTTION) public-private partnership with the Food and Drug Administration convened a meeting to develop consensus diagnostic criteria for iMFN, iSFN, and iLFN. After background presentations, a collaborative, iterative approach was used to develop expert consensus for new criteria. RESULTS: An iDSP diagnosis requires at least 1 small fiber (SF) or large fiber (LF) symptom, at least 1 SF or LF sign, abnormalities in sensory nerve conduction studies (NCS) or distal intraepidermal nerve fiber density (IENFD), and exclusion of known etiologies. An iMFN diagnosis requires that at least 1 of the above clinical features is SF and 1 clinical feature is LF with abnormalities in sensory NCS or IENFD. Diagnostic criteria for iSFN require at least 1 SF symptom and at least 1 SF sign with abnormal IENFD, normal sensory NCS, and the absence of LF symptoms and signs. Diagnostic criteria for iLFN require at least 1 LF symptom and at least 1 LF sign with normal IENFD, abnormal sensory NCS, and absence of SF symptoms and signs. CONCLUSION: Adoption of these standardized diagnostic criteria will advance research and clinical trials and spur development of novel therapies for iDSPs.
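The subtype definitions above form a small decision rule. Purely as an illustration, the Python sketch below encodes the criteria exactly as stated in this abstract; the dataclass fields, boolean encoding, and the fallback label for cases meeting only the umbrella definition are assumptions, not part of the published criteria.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SensoryFindings:
    """Illustrative encoding of the clinical features named in the criteria."""
    sf_symptom: bool       # at least 1 small fiber (SF) symptom
    sf_sign: bool          # at least 1 SF sign
    lf_symptom: bool       # at least 1 large fiber (LF) symptom
    lf_sign: bool          # at least 1 LF sign
    abnormal_ncs: bool     # abnormal sensory nerve conduction studies (NCS)
    abnormal_ienfd: bool   # abnormal distal intraepidermal nerve fiber density (IENFD)
    known_etiology: bool   # an identified cause excludes an idiopathic diagnosis

def classify_idsp(f: SensoryFindings) -> Optional[str]:
    """Return 'iMFN', 'iSFN', 'iLFN', a generic iDSP label, or None."""
    if f.known_etiology:
        return None
    has_symptom = f.sf_symptom or f.lf_symptom
    has_sign = f.sf_sign or f.lf_sign
    has_test = f.abnormal_ncs or f.abnormal_ienfd
    if not (has_symptom and has_sign and has_test):
        return None                      # umbrella iDSP definition not met
    sf_feature = f.sf_symptom or f.sf_sign
    lf_feature = f.lf_symptom or f.lf_sign
    if sf_feature and lf_feature:
        return "iMFN"
    if (f.sf_symptom and f.sf_sign and f.abnormal_ienfd
            and not f.abnormal_ncs and not lf_feature):
        return "iSFN"
    if (f.lf_symptom and f.lf_sign and f.abnormal_ncs
            and not f.abnormal_ienfd and not sf_feature):
        return "iLFN"
    return "iDSP (unspecified subtype)"  # meets iDSP but no subtype definition

# Example: SF symptom and sign, abnormal IENFD, normal NCS -> iSFN
print(classify_idsp(SensoryFindings(
    sf_symptom=True, sf_sign=True, lf_symptom=False, lf_sign=False,
    abnormal_ncs=False, abnormal_ienfd=True, known_etiology=False)))
```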


Subject(s)
Nerve Fibers, Myelinated/pathology , Nerve Fibers, Unmyelinated/pathology , Polyneuropathies/diagnosis , Practice Guidelines as Topic , Small Fiber Neuropathy/diagnosis , Humans , Polyneuropathies/pathology , Polyneuropathies/physiopathology , Small Fiber Neuropathy/pathology , Small Fiber Neuropathy/physiopathology
6.
Lancet Neurol ; 17(5): 405-415, 2018 05.
Article in English | MEDLINE | ID: mdl-29545067

ABSTRACT

BACKGROUND: Although several disease-modifying treatments are available for relapsing multiple sclerosis, treatment effects have been more modest in progressive multiple sclerosis and have been observed particularly in actively relapsing subgroups or those with lesion activity on imaging. We sought to assess whether natalizumab slows disease progression in secondary progressive multiple sclerosis, independent of relapses. METHODS: ASCEND was a phase 3, randomised, double-blind, placebo-controlled trial (part 1) with an optional 2 year open-label extension (part 2). Enrolled patients aged 18-58 years were natalizumab-naive and had secondary progressive multiple sclerosis for 2 years or more, disability progression unrelated to relapses in the previous year, and Expanded Disability Status Scale (EDSS) scores of 3·0-6·5. In part 1, patients from 163 sites in 17 countries were randomly assigned (1:1) to receive 300 mg intravenous natalizumab or placebo every 4 weeks for 2 years. Patients were stratified by site and by EDSS score (3·0-5·5 vs 6·0-6·5). Patients completing part 1 could enrol in part 2, in which all patients received natalizumab every 4 weeks until the end of the study. Throughout both parts, patients and staff were masked to the treatment received in part 1. The primary outcome in part 1 was the proportion of patients with sustained disability progression, assessed by one or more of three measures: the EDSS, Timed 25-Foot Walk (T25FW), and 9-Hole Peg Test (9HPT). The primary outcome in part 2 was the incidence of adverse events and serious adverse events. Efficacy and safety analyses were done in the intention-to-treat population. This trial is registered with ClinicalTrials.gov, number NCT01416181. FINDINGS: Between Sept 13, 2011, and July 16, 2015, 889 patients were randomly assigned (n=440 to the natalizumab group, n=449 to the placebo group). In part 1, 195 (44%) of 439 natalizumab-treated patients and 214 (48%) of 448 placebo-treated patients had confirmed disability progression (odds ratio [OR] 0·86; 95% CI 0·66-1·13; p=0·287). No treatment effect was observed on the EDSS (OR 1·06, 95% CI 0·74-1·53; nominal p=0·753) or the T25FW (0·98, 0·74-1·30; nominal p=0·914) components of the primary outcome. However, natalizumab treatment reduced 9HPT progression (OR 0·56, 95% CI 0·40-0·80; nominal p=0·001). In part 1, 100 (22%) placebo-treated and 90 (20%) natalizumab-treated patients had serious adverse events. In part 2, 291 natalizumab-continuing patients and 274 natalizumab-naive patients received natalizumab (median follow-up 160 weeks [range 108-221]). Serious adverse events occurred in 39 (13%) patients continuing natalizumab and in 24 (9%) patients initiating natalizumab. Two deaths occurred in part 1, neither of which was considered related to study treatment. No progressive multifocal leukoencephalopathy occurred. INTERPRETATION: Natalizumab treatment for secondary progressive multiple sclerosis did not reduce progression on the primary multicomponent disability endpoint in part 1, but it did reduce progression on its upper-limb component. Longer-term trials are needed to assess whether treatment of secondary progressive multiple sclerosis might produce benefits on additional disability components. FUNDING: Biogen.
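For orientation only, the crude odds ratio implied by the reported part 1 counts (195 of 439 vs 214 of 448 with confirmed progression) can be recomputed directly, as in the Python sketch below. The published estimate of 0.86 (95% CI 0.66-1.13) comes from the trial's own statistical model, so this back-of-the-envelope figure approximates rather than reproduces it.

```python
# Crude (unadjusted) odds ratio for the part 1 composite disability endpoint,
# computed from the counts reported in the abstract, with a 95% Wald CI.
from math import exp, log, sqrt

def crude_odds_ratio(events_a, n_a, events_b, n_b):
    """Odds ratio of group A vs group B with a 95% Wald confidence interval."""
    a, b = events_a, n_a - events_a        # natalizumab: progressed / not
    c, d = events_b, n_b - events_b        # placebo: progressed / not
    or_ = (a / b) / (c / d)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # standard error of log(OR)
    lo, hi = exp(log(or_) - 1.96 * se), exp(log(or_) + 1.96 * se)
    return or_, (lo, hi)

print(crude_odds_ratio(195, 439, 214, 448))    # roughly (0.87, (0.67, 1.14))
```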


Subject(s)
Disease Progression , Hand/physiopathology , Immunologic Factors/pharmacology , Multiple Sclerosis, Chronic Progressive/drug therapy , Multiple Sclerosis, Chronic Progressive/physiopathology , Natalizumab/pharmacology , Outcome Assessment, Health Care , Severity of Illness Index , Adolescent , Adult , Double-Blind Method , Female , Humans , Immunologic Factors/administration & dosage , Immunologic Factors/adverse effects , Male , Middle Aged , Natalizumab/administration & dosage , Natalizumab/adverse effects , Research Design , Young Adult
7.
Ann Nutr Metab ; 68(3): 164-72, 2016.
Article in English | MEDLINE | ID: mdl-26855046

ABSTRACT

BACKGROUND AND AIMS: Malnutrition is associated with poor clinical outcomes. Whether there is a causal relationship or it merely mirrors a severe patient condition remains unclear. We examined the association of malnutrition with biomarkers characteristic of different pathophysiological states to better understand the underlying etiological mechanisms. METHODS: We prospectively followed consecutive adult medical inpatients. Multivariable regression models were used to investigate the associations between malnutrition - as assessed using the Nutritional Risk Screening (NRS 2002) - and biomarkers linked to inflammation, stress, renal dysfunction, nutritional status and hematologic function. RESULTS: A total of 529 patients were included. In a fully adjusted model, malnutrition was significantly associated with the inflammatory markers procalcitonin (0.20, 95% CI 0.03-0.37), proadrenomedullin (0.28, 95% CI 0.12-0.43) and albumin (-0.39, 95% CI -0.57 to -0.21), the stress marker copeptin (0.34, 95% CI 0.17-0.51), the renal function marker urea (0.23, 95% CI 0.07-0.38), the nutritional markers 25-hydroxyvitamin D (-0.22, 95% CI -0.41 to -0.02) and corrected calcium (0.29, 95% CI 0.10-0.49) and the hematological markers hemoglobin (-0.27, 95% CI -0.43 to -0.10) and red blood cell distribution width (0.26, 95% CI 0.07-0.44). Subgroup analysis suggested that acute malnutrition rather than chronic malnutrition was associated with elevated biomarker levels. CONCLUSION: Acute malnutrition was associated with a pronounced inflammatory response and an alteration in biomarkers associated with different pathophysiological states. Interventional trials are needed to prove causality.
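As a minimal sketch of the kind of "fully adjusted" model described above, the Python code below regresses one standardized, log-transformed biomarker on malnutrition status with a few covariates, using synthetic data. The column names, covariate set, transformation, and the NRS 2002 cut-off of 3 or more points are illustrative assumptions; the paper's exact model specification may differ.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the inpatient cohort (toy values only).
rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "nrs_2002": rng.integers(0, 7, n),                 # screening score 0-6
    "age": rng.normal(70, 12, n),
    "sex": rng.choice(["f", "m"], n),
    "procalcitonin": rng.lognormal(-2.5, 1.0, n),      # ug/L, toy values
})
df["malnourished"] = (df["nrs_2002"] >= 3).astype(int) # assumed NRS risk cut-off
log_pct = np.log(df["procalcitonin"])
df["z_log_pct"] = (log_pct - log_pct.mean()) / log_pct.std()

# Adjusted association of malnutrition with the standardized biomarker;
# on random toy data the coefficient is near zero, whereas the study reports
# an adjusted estimate of 0.20 (95% CI 0.03-0.37) for procalcitonin.
model = smf.ols("z_log_pct ~ malnourished + age + sex", data=df).fit()
print(model.params["malnourished"], model.conf_int().loc["malnourished"].tolist())
```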


Subject(s)
Biomarkers/blood , Severe Acute Malnutrition/blood , Up-Regulation , Aged , Aged, 80 and over , Cohort Studies , Combined Modality Therapy , Comorbidity , Emergency Service, Hospital , Female , Follow-Up Studies , Humans , Male , Malnutrition/blood , Malnutrition/diagnosis , Malnutrition/epidemiology , Malnutrition/therapy , Middle Aged , Nutrition Assessment , Patient Outcome Assessment , Prospective Studies , Risk , Severe Acute Malnutrition/diagnosis , Severe Acute Malnutrition/epidemiology , Severe Acute Malnutrition/therapy , Switzerland/epidemiology , Tertiary Care Centers , Triage
8.
Ther Adv Neurol Disord ; 9(1): 31-43, 2016 Jan.
Article in English | MEDLINE | ID: mdl-26788129

ABSTRACT

Multiple sclerosis (MS) is a common and chronic central nervous system (CNS) demyelinating disease and a leading cause of permanent disability. Patients most often present with a relapsing-remitting disease course, typically progressing over time to a phase of relentless advancement in secondary progressive MS (SPMS), for which approved disease-modifying therapies are limited. In this review, we summarize the pathophysiological mechanisms involved in the development of SPMS and the rationale and clinical potential for natalizumab, which is currently approved for the treatment of relapsing forms of MS, to exert beneficial effects in reducing disease progression unrelated to relapses in SPMS. In both forms of MS, active brain-tissue injury is associated with inflammation, but in SPMS, the inflammatory response occurs at least partly behind the blood-brain barrier and is followed by a cascade of events, including persistent microglial activation that may lead to chronic demyelination and neurodegeneration associated with irreversible disability. In patients with relapsing forms of MS, natalizumab therapy is known to significantly reduce intrathecal inflammatory responses, which results in reductions in brain lesions and brain atrophy as well as beneficial effects on clinical measures, such as reduced frequency and severity of relapse and reduced accumulation of disability. Natalizumab treatment also reduces levels of cerebrospinal fluid chemokines and other biomarkers of intrathecal inflammation, axonal damage and demyelination, and has demonstrated the ability to reduce innate immune activation and intrathecal immunoglobulin synthesis in patients with MS. The efficacy of natalizumab therapy in SPMS is currently being investigated in a randomized, double-blind, placebo-controlled trial.

9.
J Emerg Med ; 50(4): 678-89, 2016 Apr.
Article in English | MEDLINE | ID: mdl-26458788

ABSTRACT

BACKGROUND: Accurate initial patient triage in the emergency department (ED) is pivotal in reducing time to effective treatment by the medical team and in expediting patient flow. The Manchester Triage System (MTS) is widely implemented for this purpose. Yet its overall performance remains unclear. OBJECTIVES: We investigated the ability of MTS to accurately assess high treatment priority and to predict adverse clinical outcomes in a large unselected population of medical ED patients. METHODS: We prospectively followed consecutive medical patients seeking ED care for 30 days. Triage nurses implemented MTS upon arrival of patients admitted to the ED. The primary endpoint was high initial treatment priority adjudicated by two independent physicians. Secondary endpoints were 30-day all-cause mortality, admission to the intensive care unit (ICU), and length of stay. We used regression models with area under the receiver operating characteristic curve (AUC) as a measure of discrimination. RESULTS: Of the 2407 included patients (60.5 years, 55.7% males), 524 (21.8%) were classified as high treatment priority; 3.9% (n = 93) were transferred to the ICU; and 5.7% (n = 136) died. The initial MTS showed fair prognostic accuracy in predicting treatment priority (AUC 0.71) and ICU admission (AUC 0.68), but not in predicting mortality (AUC 0.55). Results were robust across most predefined subgroups, including patients diagnosed with infections, or cardiovascular or gastrointestinal diseases. In the subgroup of neurological symptoms and disorders, the MTS showed the best performance. CONCLUSION: The MTS showed fair performance in predicting high treatment priority and adverse clinical outcomes across different medical ED patient populations. Future research should focus on further refinement of the MTS so that its performance can be improved. TRIAL REGISTRATION: Clinicaltrials.gov: NCT01768494.
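A minimal sketch of the discrimination analysis described above: the ordinal MTS category is used as a predictor of the adjudicated binary endpoint and summarized with the area under the ROC curve. The toy arrays, variable names, and the inversion of the MTS coding (so that higher values mean higher urgency) are assumptions for illustration.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# MTS categories run from 1 (immediate) to 5 (non-urgent); invert the scale so
# that a higher score means higher urgency before computing the AUC.
mts_category = np.array([1, 2, 2, 3, 4, 5, 3, 2])   # toy triage assignments
high_priority = np.array([1, 1, 0, 0, 0, 0, 1, 0])  # toy adjudicated endpoint

auc = roc_auc_score(high_priority, 6 - mts_category)
print(f"AUC = {auc:.2f}")  # study-level AUC for treatment priority was 0.71
```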


Subject(s)
Emergency Service, Hospital/organization & administration , Outcome and Process Assessment, Health Care , Triage/methods , Wounds and Injuries/therapy , Female , Hospital Mortality , Humans , Injury Severity Score , Male , Middle Aged , Prospective Studies , Switzerland , Wounds and Injuries/mortality
10.
Medicine (Baltimore) ; 94(49): e2264, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26656373

ABSTRACT

Only a small proportion of blood cultures routinely performed in emergency department (ED) patients is positive. Multiple clinical scores and biomarkers have previously been examined for their ability to predict bacteremia. Conclusive clinical validation of these scores and biomarkers is essential. This observational cohort study included patients with suspected infection who had blood culture sampling at ED admission. We assessed 5 clinical scores and admission concentrations of procalcitonin (PCT), C-reactive protein (CRP), lymphocyte and white blood cell counts, the neutrophil-lymphocyte count ratio (NLCR), and the red blood cell distribution width (RDW). Two independent physicians assessed true blood culture positivity. We used logistic regression models with area under the curve (AUC) analysis. Of 1083 patients, 104 (9.6%) had positive blood cultures. Of the clinical scores, the Shapiro score performed best (AUC 0.729). The best biomarkers were PCT (AUC 0.803) and NLCR (AUC 0.700). Combining the Shapiro score with PCT levels significantly increased the AUC to 0.827. Limiting blood cultures only to patients with either a Shapiro score of ≥4 or PCT > 0.1 µg/L would reduce negative sampling by 20.2% while still identifying 100% of positive cultures. Similarly, a Shapiro score ≥3 or PCT >0.25 µg/L would reduce cultures by 41.7% and still identify 96.1% of positive blood cultures. Combination of the Shapiro score with admission levels of PCT can help reduce unnecessary blood cultures with minimal false negative rates. The study was registered on January 9, 2013 at the 'ClinicalTrials.gov' registration web site (NCT01768494).
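The proposed culturing rule is a simple disjunction of two admission thresholds taken directly from the text (Shapiro score of 4 or more, or PCT above 0.1 µg/L, with 3 and 0.25 µg/L as the alternative pair). The Python sketch below encodes it; the function and argument names are illustrative.

```python
def should_culture(shapiro_score: int, pct_ug_per_l: float,
                   score_cutoff: int = 4, pct_cutoff: float = 0.1) -> bool:
    """Draw blood cultures only if either admission criterion is met."""
    return shapiro_score >= score_cutoff or pct_ug_per_l > pct_cutoff

# A low-risk patient is spared a culture; a high-PCT patient is not.
print(should_culture(shapiro_score=2, pct_ug_per_l=0.05))  # False
print(should_culture(shapiro_score=2, pct_ug_per_l=0.60))  # True
# Alternative, slightly less conservative rule from the abstract:
print(should_culture(shapiro_score=2, pct_ug_per_l=0.20,
                     score_cutoff=3, pct_cutoff=0.25))     # False
```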


Subject(s)
Bacteremia/blood , Bacteremia/diagnosis , Bacteriological Techniques/methods , Aged , Biomarkers , C-Reactive Protein/analysis , Calcitonin/blood , Calcitonin Gene-Related Peptide , Erythrocytes , False Negative Reactions , Female , Humans , Leukocyte Count , Lymphocytes , Male , Middle Aged , Neutrophils , Prospective Studies , Protein Precursors/blood
11.
Nutrition ; 31(11-12): 1385-93, 2015.
Article in English | MEDLINE | ID: mdl-26429660

ABSTRACT

OBJECTIVE: The aim of this study was to examine the prevalence of nutritional risk and its association with multiple adverse clinical outcomes in a large cohort of acutely ill medical inpatients from a Swiss tertiary care hospital. METHODS: We prospectively followed consecutive adult medical inpatients for 30 d. Multivariate regression models were used to investigate the association of the initial Nutritional Risk Score (NRS 2002) with mortality, impairment in activities of daily living (Barthel Index <95 points), hospital length of stay, hospital readmission rates, and quality of life (QoL; adapted from EQ5 D); all parameters were measured at 30 d. RESULTS: Of 3186 patients (mean age 71 y, 44.7% women), 887 (27.8%) were at risk for malnutrition with an NRS ≥3 points. We found strong associations (odds ratio/hazard ratio [OR/HR], 95% confidence interval [CI]) between nutritional risk and mortality (OR/HR, 7.82; 95% CI, 6.04-10.12), impaired Barthel Index (OR/HR, 2.56; 95% CI, 2.12-3.09), time to hospital discharge (OR/HR, 0.48; 95% CI, 0.43-0.52), hospital readmission (OR/HR, 1.46; 95% CI, 1.08-1.97), and all five dimensions of QoL measures. Associations remained significant after adjustment for sociodemographic characteristics, comorbidities, and medical diagnoses. Results were robust in subgroup analysis with evidence of effect modification (P for interaction < 0.05) based on age and main diagnosis groups. CONCLUSION: Nutritional risk is significant in acutely ill medical inpatients and is associated with increased medical resource use, adverse clinical outcomes, and impairments in functional ability and QoL. Randomized trials are needed to evaluate evidence-based preventive and treatment strategies focusing on nutritional factors to improve outcomes in these high-risk patients.


Subject(s)
Activities of Daily Living , Acute Disease/mortality , Hospitalization , Malnutrition/complications , Nutritional Status , Quality of Life , Aged , Female , Humans , Inpatients , Length of Stay , Male , Odds Ratio , Patient Readmission , Prospective Studies , Socioeconomic Factors , Switzerland/epidemiology , Tertiary Care Centers
12.
BMC Med ; 13: 104, 2015 May 01.
Article in English | MEDLINE | ID: mdl-25934044

ABSTRACT

BACKGROUND: Urinary tract infections (UTIs) are common drivers of antibiotic use. The minimal effective duration of antibiotic therapy for UTIs is unknown, but any reduction is important to diminish selection pressure for antibiotic resistance, costs, and drug-related side-effects. The aim of this study was to investigate whether an algorithm based on procalcitonin (PCT) and quantitative pyuria reduces antibiotic exposure. METHODS: From April 2012 to March 2014, we conducted a factorial design randomized controlled open-label trial. Immunocompetent adults with community-acquired non-catheter-related UTI were enrolled in the emergency department of a tertiary-care 600-bed hospital in northwestern Switzerland. Clinical presentation was used to guide initiation and duration of antibiotic therapy according to current guidelines (control group) or with a PCT-pyuria-based algorithm (PCT-pyuria group). The primary endpoint was overall antibiotic exposure within 90 days. Secondary endpoints included duration of the initial antibiotic therapy, persistent infection 7 days after end of therapy and 30 days after enrollment, recurrence and rehospitalizations within 90 days. RESULTS: Overall, 394 patients were screened, 228 met predefined exclusion criteria, 30 declined to participate, and 11 were not eligible. Of these, 125 (76% women) were enrolled in the intention-to-treat (ITT) analysis and 96 patients with microbiologically confirmed UTI constituted the per protocol group; 84 of 125 (67%) patients had a febrile UTI, 28 (22%) had bacteremia, 5 (4%) died, and 3 (2%) were lost to follow-up. Overall antibiotic exposure within 90 days was shorter in the PCT-pyuria group than in the control group (median 7.0 [IQR, 5.0-14.0] vs. 10.0 [IQR, 7.0-16.0] days, P = 0.011) in the ITT analysis. Mortality, rates of persistent infections, recurrences, and rehospitalizations were not different. CONCLUSIONS: A PCT-pyuria-based algorithm reduced antibiotic exposure by 30% when compared to current guidelines without apparent negative effects on clinical outcomes.


Subject(s)
Algorithms , Anti-Bacterial Agents/therapeutic use , Calcitonin/analysis , Protein Precursors/analysis , Pyuria , Urinary Tract Infections/drug therapy , Adult , Aged , Aged, 80 and over , Biomarkers/analysis , Calcitonin Gene-Related Peptide , Female , Humans , Male , Middle Aged , Switzerland
13.
J Dermatol ; 42(8): 778-85, 2015 Aug.
Article in English | MEDLINE | ID: mdl-25982244

ABSTRACT

Early differentiation of erysipelas from deep vein thrombosis (DVT) based solely on clinical signs and symptoms is challenging. There is a lack of data regarding the usefulness of the inflammatory biomarkers procalcitonin (PCT), C-reactive protein (CRP) and white blood cell (WBC) count in the diagnosis of localized cutaneous infections. Herein, we investigated the diagnostic value of inflammatory markers in a prospective at-risk patient population. This is an observational quality control study including consecutive patients with a final diagnosis of either erysipelas or DVT. The association of PCT (µg/L) and CRP (mg/L) levels and WBC counts (g/L) with the primary outcome was assessed using logistic regression models with area under the receiver operating characteristic curve. Forty-eight patients (erysipelas, n = 31; DVT, n = 17) were included. Compared with patients with DVT, those with erysipelas had significantly higher PCT concentrations. No significant differences in CRP concentrations and WBC counts were found between the two groups. At a PCT threshold of 0.1 µg/L or more, specificity and positive predictive value (PPV) for erysipelas were 82.4% and 85.7%, respectively, and increased to 100% and 100% at a threshold of more than 0.25 µg/L. Levels of PCT also correlated with the severity of erysipelas, with a stepwise increase according to systemic inflammatory response syndrome criteria. We found a high discriminatory value of PCT for differentiation between erysipelas and DVT, in contrast to other commonly used inflammatory biomarkers. Whether the use of PCT levels for early differentiation of erysipelas from DVT reduces unnecessary antibiotic exposure needs to be assessed in an interventional trial.
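For readers reconstructing the reported operating characteristics, the sketch below shows how specificity and positive predictive value follow from a 2x2 table of the PCT-based classification against the final diagnosis. The cell counts are not taken from the paper; they were chosen only to be consistent with the reported group sizes and the specificity (82.4%) and PPV (85.7%) at the 0.1 µg/L threshold.

```python
def specificity_ppv(tp: int, fp: int, tn: int, fn: int) -> tuple[float, float]:
    """Specificity = TN/(TN+FP); PPV = TP/(TP+FP).
    (fn is carried for completeness; sensitivity would be TP/(TP+FN).)"""
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)
    return specificity, ppv

# Classify "erysipelas" when admission PCT >= 0.1 ug/L, cross-tabulate against
# the final diagnosis, then feed the four cell counts into the function.
# Hypothetical counts consistent with 31 erysipelas and 17 DVT patients:
print(specificity_ppv(tp=18, fp=3, tn=14, fn=13))  # (~0.824, ~0.857)
```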


Subject(s)
Calcitonin/blood , Erysipelas/diagnosis , Venous Thrombosis/diagnosis , Aged , Aged, 80 and over , Biomarkers/blood , C-Reactive Protein/metabolism , Diagnosis, Differential , Erysipelas/blood , Female , Humans , Leukocyte Count , Male , Middle Aged , Prospective Studies , Venous Thrombosis/blood
14.
Dis Markers ; 2015: 795801, 2015.
Article in English | MEDLINE | ID: mdl-25861154

ABSTRACT

The Glasgow Prognostic Score (GPS) is useful for predicting long-term mortality in cancer patients. Our aim was to validate the GPS in ED patients with different cancer-related urgency and investigate whether biomarkers would improve its accuracy. We followed consecutive medical patients presenting with a cancer-related medical urgency to a tertiary care hospital in Switzerland. Upon admission, we measured procalcitonin (PCT), white blood cell count, urea, 25-hydroxyvitamin D, corrected calcium, C-reactive protein, and albumin and calculated the GPS. Of 341 included patients (median age 68 years, 61% males), 81 (23.8%) died within 30 days after admission. The GPS showed moderate prognostic accuracy (AUC 0.67) for mortality. Among the different biomarkers, PCT provided the highest prognostic accuracy (odds ratio 1.6 (95% confidence interval 1.3 to 1.9), P < 0.001, AUC 0.69) and significantly improved the GPS to a combined AUC of 0.74 (P = 0.007). Considering all investigated biomarkers, the AUC increased to 0.76 (P < 0.001). The GPS performance was significantly improved by the addition of PCT and other biomarkers for risk stratification in ED cancer patients. The benefit of early risk stratification by the GPS in combination with biomarkers from different pathways should be investigated in further interventional trials.
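The abstract does not restate the GPS definition, so the sketch below uses the conventional formulation of the score (1 point for C-reactive protein above 10 mg/L and 1 point for albumin below 35 g/L, giving a range of 0-2); treat these cut-offs as an assumption rather than the paper's wording.

```python
def glasgow_prognostic_score(crp_mg_per_l: float, albumin_g_per_l: float) -> int:
    """Conventional GPS: 0 (neither), 1 (one abnormal), or 2 (both abnormal)."""
    score = 0
    if crp_mg_per_l > 10:       # systemic inflammation
        score += 1
    if albumin_g_per_l < 35:    # hypoalbuminemia
        score += 1
    return score

# The study then asks whether adding a biomarker such as PCT to the GPS
# improves discrimination, e.g. by comparing AUCs of models with and without it.
print(glasgow_prognostic_score(crp_mg_per_l=42.0, albumin_g_per_l=31.0))  # 2
```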


Subject(s)
Biomarkers, Tumor/blood , Calcitonin/blood , Glasgow Outcome Scale , Neoplasms/blood , Protein Precursors/blood , Aged , C-Reactive Protein/metabolism , Calcitonin Gene-Related Peptide , Calcium/blood , Cohort Studies , Female , Humans , Male , Middle Aged , Neoplasms/pathology , Predictive Value of Tests , Serum Albumin/metabolism , Urea/blood , Vitamin D/analogs & derivatives , Vitamin D/blood
15.
Klin Monbl Augenheilkd ; 230(4): 329-32, 2013 Apr.
Article in German | MEDLINE | ID: mdl-23629771

ABSTRACT

A central problem in cataract surgery is the preoperative determination of the appropriate intraocular lens (IOL) power. Different calculation approaches have been developed for this purpose; raytracing methods are among the most exact but also mathematically more demanding. This article gives a systematic overview of the different raytracing calculations described in the literature and compares their results. Raytracing incorporates physical measurements and IOL manufacturing data without relying on approximations. The prediction error is close to zero, and an essential advantage is the applicability to different conditions without the need for modifications. Compared with the classical formulae, the raytracing methods are more precise overall, but owing to heterogeneous data and study characteristics they are not yet readily comparable with one another. Raytracing calculations represent a good alternative to the third-generation formulae: they minimize refractive errors, are more widely applicable, and provide better results overall, particularly in eyes with preexisting conditions.


Subject(s)
Algorithms , Computer-Aided Design , Lenses, Intraocular , Models, Theoretical , Prosthesis Design/methods , Computer Simulation , Humans , Reproducibility of Results , Sensitivity and Specificity
16.
J Pain Symptom Manage ; 42(6): 903-17, 2011 Dec.
Article in English | MEDLINE | ID: mdl-21945130

ABSTRACT

CONTEXT: This article presents the results of a pivotal Phase 3 study that assesses a new treatment for the management of chronic low back pain: a transdermal patch containing the opioid buprenorphine. In this randomized, placebo-controlled study with an enriched enrollment design, the buprenorphine transdermal system (BTDS) was found to be efficacious and generally well tolerated. OBJECTIVES: This enriched, multicenter, randomized, double-blind study evaluated the efficacy, tolerability, and safety of BTDS in opioid-naïve patients who had moderate to severe chronic low back pain. METHODS: Patients who tolerated and responded to BTDS (10 or 20 mcg/hour) during an open-label run-in period were randomized to continue BTDS 10 or 20 mcg/hour or receive matching placebo. The primary outcome was "average pain over the last 24 hours" at the end of the 12-week double-blind phase, collected on an 11-point scale (0=no pain, 10=pain as bad as you can imagine). Sleep disturbance (Medical Outcomes Study subscale) and total number of supplemental analgesic tablets used were secondary efficacy variables. RESULTS: Fifty-three percent of patients receiving open-label BTDS (541 of 1024) were randomized to receive BTDS (n=257) or placebo (n=284). Patients receiving BTDS reported statistically significantly lower pain scores at Week 12 compared with placebo (least square mean treatment difference: -0.58, P=0.010). Sensitivity analyses of the primary efficacy variable and results of the analysis of secondary efficacy variables supported the efficacy of BTDS relative to placebo. During the double-blind phase, the incidence of treatment-emergent adverse events was 55% for the BTDS treatment group and 52% for the placebo treatment group. Laboratory, vital sign, and electrocardiogram evaluations did not reveal unanticipated safety findings. CONCLUSION: BTDS was efficacious in the treatment of opioid-naïve patients with moderate to severe chronic low back pain. Most treatment-emergent adverse events observed were consistent with those associated with the use of opioid agonists and transdermal patches.


Subject(s)
Analgesics, Opioid/therapeutic use , Buprenorphine/therapeutic use , Low Back Pain/drug therapy , Administration, Cutaneous , Adult , Aged , Analgesics, Opioid/administration & dosage , Analgesics, Opioid/adverse effects , Buprenorphine/administration & dosage , Buprenorphine/adverse effects , Double-Blind Method , Electrocardiography , Female , Humans , Male , Middle Aged , Pain Measurement , Treatment Outcome
17.
J Pain ; 12(11): 1163-73, 2011 Nov.
Article in English | MEDLINE | ID: mdl-21807566

ABSTRACT

UNLABELLED: In this enriched design study, 1,160 opioid-experienced patients with chronic, moderate to severe low back pain entered an open-label run-in period; 660 demonstrated analgesic benefit from and tolerability to buprenorphine transdermal system 20 mcg/hour (BTDS 20) treatment and were randomized to receive either BTDS 20, BTDS 5 mcg/hour (BTDS 5), or the active control (immediate release oxycodone 40-mg/day) during an 84-day double-blind phase. The primary endpoint, "average pain in the last 24 hours" during double-blind weeks 4, 8, and 12, was significantly lower for patients receiving BTDS 20 compared with patients receiving BTDS 5 (P < .001, treatment difference of -.67). A treatment difference of -.75 in favor of oxycodone 40 mg/day versus BTDS 5 (P < .001) indicated the assay sensitivity of the study. Four sensitivity analyses, secondary, and exploratory analyses supported the results of the primary analysis. Incidences of treatment-emergent adverse events were 56% during the open-label period, and 59, 77, and 73% for the BTDS 5, BTDS 20, and oxycodone 40 mg/day treatment groups, respectively, during the double-blind phase. One death considered unrelated to study treatment occurred in a patient receiving BTDS 10 during the run-in period. BTDS 20 treatment was demonstrated to be efficacious and generally well tolerated. PERSPECTIVE: This article presents results of a pivotal Phase 3 study that assesses a new treatment for the management of chronic low back pain: a transdermal patch containing the opioid buprenorphine (BTDS). In this active controlled, superiority study with an enriched design, BTDS 20 was found to be efficacious and generally well tolerated.


Subject(s)
Analgesics, Opioid/administration & dosage , Buprenorphine/administration & dosage , Low Back Pain/drug therapy , Administration, Cutaneous , Analgesics, Opioid/adverse effects , Buprenorphine/adverse effects , Chronic Pain/drug therapy , Double-Blind Method , Female , Humans , Male , Middle Aged , Oxycodone/therapeutic use , Transdermal Patch , Treatment Outcome
19.
Arch Phys Med Rehabil ; 84(3): 329-34, 2003 Mar.
Article in English | MEDLINE | ID: mdl-12638099

ABSTRACT

OBJECTIVE: To evaluate the efficacy of 8 hours of continuous low-level heatwrap therapy for the treatment of acute nonspecific low back pain (LBP). DESIGN: Prospective, randomized, parallel, single-blind (investigator), placebo-controlled, multicenter clinical trial. SETTING: Five community-based research facilities. PARTICIPANTS: Two-hundred nineteen subjects, aged 18 to 55 years, with acute nonspecific LBP. INTERVENTION: Subjects were stratified by baseline pain intensity and gender and randomized to one of the following groups: evaluation of efficacy (heatwrap, n=95; oral placebo, n=96) and blinding (oral ibuprofen, n=12; unheated back wrap, n=16). All treatments were administered for 3 consecutive days with 2 days of follow-up. MAIN OUTCOME MEASURES: Primary: day 1 mean pain relief (0- to 5-point verbal response scale). Secondary: muscle stiffness (101-point numeric rating scale), lateral trunk flexibility (fingertip-floor distance), and Roland-Morris Disability Questionnaire over 3 days of treatment and 2 days of follow-up. RESULTS: Heatwrap therapy was shown to provide significant therapeutic benefits when compared with placebo during both the treatment and follow-up period. On day 1, the heatwrap group had greater pain relief (1.76+/-.10 vs 1.05+/-.11, P <.001), less muscle stiffness (43.1+/-1.21 vs 47.6+/-1.21, P=.008), and increased flexibility (18.6+/-.44 cm vs 16.5+/-.45 cm, P=.001) compared with placebo. Disability was also reduced in the heatwrap group (5.3 vs 7.4, P=.0002). Adverse events were mild and infrequent. CONCLUSION: Continuous low-level heatwrap therapy was shown to be effective for the treatment of acute, nonspecific LBP.


Subject(s)
Hot Temperature/therapeutic use , Low Back Pain/therapy , Physical Therapy Modalities/methods , Acute Disease , Adolescent , Adult , Disability Evaluation , Female , Hot Temperature/adverse effects , Humans , Low Back Pain/diagnosis , Low Back Pain/physiopathology , Male , Middle Aged , Muscle, Skeletal/physiopathology , Pain Measurement , Physical Therapy Modalities/adverse effects , Prospective Studies , Range of Motion, Articular , Single-Blind Method , Treatment Outcome
20.
Arch Phys Med Rehabil ; 84(3): 335-42, 2003 Mar.
Article in English | MEDLINE | ID: mdl-12638100

ABSTRACT

OBJECTIVE: To evaluate the efficacy and safety of 8 hours of continuous, low-level heatwrap therapy administered during sleep. DESIGN: Prospective, randomized, parallel, single-blind (investigator), placebo-controlled, multicenter clinical trial. SETTING: Two community-based research facilities. PARTICIPANTS: Seventy-six patients, aged 18 to 55 years, with acute, nonspecific low back pain. INTERVENTIONS: Subjects were stratified by baseline pain intensity and gender and randomized to one of the following treatments: evaluation of efficacy (heatwrap, n=33; oral placebo, n=34) or blinding (unheated wrap, n=5; oral ibuprofen, n=4). All treatments were administered for 3 consecutive nights with 2 days of follow-up. MAIN OUTCOME MEASURES: Primary: morning pain relief (hour 0) on days 2 through 4 (0-5-point verbal response scale). Secondary: mean daytime pain relief score (days 2-4, hours 0-8), mean extended pain relief score (day 4, hour 0; day 5, hour 0), muscle stiffness, lateral trunk flexibility, and disability (Roland-Morris Disability Questionnaire). RESULTS: Heatwrap therapy was significantly better than placebo at hour 0 on days 2 through 4 for mean pain relief (P=.00005); at hours 0 through 8 on days 2 through 4 for pain relief (P<.001); at hour 0 on day 4 and at hour 0 on day 5 for mean pain relief (P<.001); on day 4 in reduction of morning muscle stiffness (P<.001); for increased lateral trunk flexibility on day 4 (P<.002); and for decreased low back disability on day 4 (P=.005). Adverse events were mild and infrequent. CONCLUSIONS: Overnight use of heatwrap therapy provided effective pain relief throughout the next day, reduced muscle stiffness and disability, and improved trunk flexibility. Positive effects were sustained more than 48 hours after treatments were completed.


Subject(s)
Hot Temperature/therapeutic use , Low Back Pain/therapy , Physical Therapy Modalities/methods , Acute Disease , Adolescent , Adult , Analgesics, Non-Narcotic/therapeutic use , Demography , Disability Evaluation , Female , Hot Temperature/adverse effects , Humans , Ibuprofen/therapeutic use , Male , Middle Aged , Muscle, Skeletal/physiopathology , Pain Measurement , Physical Therapy Modalities/adverse effects , Prospective Studies , Range of Motion, Articular , Single-Blind Method , Sleep , Treatment Outcome