Results 1 - 20 of 179
1.
J Dairy Sci ; 2024 Mar 13.
Article in English | MEDLINE | ID: mdl-38490560

ABSTRACT

Implementing biosecurity protocols is necessary to reduce the spread of disease on dairy farms. In Ontario, biosecurity implementation is variable among farms and barriers to biosecurity are unknown. Thirty-five semi-structured interviews were conducted between July 2022 and January 2023 with dairy producers (n = 17) and veterinarians (n = 18). Participants also completed a demographic survey. Thematic analysis was performed with constructivist and grounded theory paradigms. Thematic coding was done inductively using NVivo software. Dairy producers' understanding of the definition of biosecurity varied, with all understanding that it was to prevent the spread of disease. Furthermore, the most common perception was that biosecurity prevented the spread of disease onto the farm. Both veterinarians and producers stated that closed herds were one of the most important biosecurity protocols. Barriers to biosecurity implementation included a lack of resources, internal and external business influencers, individual perceptions of biosecurity, and a lack of industry initiative. Understanding the barriers producers face provides veterinarians with the chance to tailor their communication to ensure barriers are reduced, or for other industry members to reduce the barriers.

2.
J Dairy Sci ; 107(7): 4605-4615, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38310960

ABSTRACT

The objective of this review was to outline current implementation of biosecurity, the impact of biosecurity on the industry, and producers' and veterinarians' perceptions of biosecurity, with a focus on the Canadian dairy industry. Biosecurity has an important role in farm safety by reducing the spread of pathogens and contaminants, improving animal health and production, and maintaining human safety. Implementation of biosecurity practices varies among farms and countries. Because Canada's supply management system is different than other countries, different barriers and perceptions of biosecurity may exist. Producers may have negative perspectives on biosecurity, such as it being expensive or time consuming. Producers are motivated or deterred from biosecurity implementation for many reasons, including perceived value, disease risk, and financial incentives or deterrents. In addition, because veterinarians are a trusted source of information, their approaches to discussions on biosecurity implementation are important to understand. Veterinarians and producers appear to have differing opinions on the importance of biosecurity and approaches to discussing biosecurity. Improving biosecurity implementation requires a multifactorial approach, such as individualized education and awareness for producers, further research into efficacy of and barriers to biosecurity, and development of strategies for effective communication between veterinarians and producers.


Subject(s)
Dairying , Canada , Animals , Humans , Cattle , Veterinarians/psychology , Perception
3.
Dig Dis Sci ; 68(12): 4350-4359, 2023 12.
Article in English | MEDLINE | ID: mdl-37796405

ABSTRACT

BACKGROUND: The economic impact of perianal fistulas in Crohn's disease (CD) has not been formally assessed in population-based studies in the biologic era. AIM: To compare direct health care costs in persons with and without perianal fistulas. METHODS: We performed a longitudinal population-based study using administrative data from Ontario, Canada. Adults (> 17 years) with CD were identified between 2007 and 2013 using validated algorithms. Perianal fistula-positive "cases" were matched to up to 4 "controls" with CD without perianal fistulas based on age, sex, geographic region, year of CD diagnosis and duration of follow-up. Direct health care costs, excluding drug costs from private payers, were estimated annually beginning 5 years before (lookback) and up to 9 years after perianal fistula diagnosis (study completion) for cases and a standardized date for matched controls. RESULTS: A total of 581 cases were matched to 1902 controls. The annual per capita direct cost for cases was similar to that of controls at lookback ($2458 ± 6770 vs $2502 ± 10,752; p = 0.952), was maximally greater in the first year after perianal fistula diagnosis ($16,032 ± 21,101 vs $6646 ± 13,021; p < 0.001) and remained greater at study completion ($11,358 ± 17,151 vs $5178 ± 9792; p < 0.001). At perianal fistula diagnosis, the cost difference was driven primarily by home care costs (tenfold greater), publicly covered prescription drugs (threefold greater) and hospitalizations (twofold greater), whereas at study completion, prescription drugs were the dominant driver (threefold greater). CONCLUSION: In our population-based cohort, perianal fistulas were associated with significantly higher direct healthcare costs at the time of perianal fistula diagnosis, and these higher costs were sustained long-term.
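As a rough illustration of the matched cost comparison described above, the sketch below computes mean annual per-capita costs for cases and controls and compares them with a t-test. The synthetic data, column names and the Welch variant of the test are assumptions for illustration, not the study's ICES extraction or analysis plan (in particular, the matching structure is not modelled).

```python
# Minimal sketch of a case-control cost comparison (synthetic data; column
# names are illustrative, not the ICES variables used in the study).
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)
n_cases, n_controls = 581, 1902
df = pd.DataFrame({
    "group": ["case"] * n_cases + ["control"] * n_controls,
    # Annual direct health care cost (CAD) in the first year after diagnosis
    "cost_year1": np.concatenate([
        rng.gamma(shape=1.2, scale=13000, size=n_cases),    # cases: higher costs
        rng.gamma(shape=1.2, scale=5500, size=n_controls),  # matched CD controls
    ]),
})

cases = df.loc[df.group == "case", "cost_year1"]
controls = df.loc[df.group == "control", "cost_year1"]

# Welch's t-test on mean annual per-capita cost (unequal variances assumed)
t, p = stats.ttest_ind(cases, controls, equal_var=False)
print(f"cases:    ${cases.mean():,.0f} ± {cases.std():,.0f}")
print(f"controls: ${controls.mean():,.0f} ± {controls.std():,.0f}")
print(f"Welch t = {t:.2f}, p = {p:.3g}")
```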


Subject(s)
Crohn Disease , Rectal Fistula , Adult , Humans , Crohn Disease/diagnosis , Crohn Disease/epidemiology , Crohn Disease/therapy , Follow-Up Studies , Treatment Outcome , Retrospective Studies , Rectal Fistula/diagnosis , Rectal Fistula/epidemiology , Health Care Costs
4.
Headache ; 62(4): 522-529, 2022 04.
Article in English | MEDLINE | ID: mdl-35403223

ABSTRACT

OBJECTIVES: To assess real-world effectiveness, safety, and usage of erenumab in Canadian patients with episodic and chronic migraine with prior ineffective prophylactic treatments. BACKGROUND: In randomized controlled trials, erenumab demonstrated efficacy for migraine prevention in patients with ≤4 prior ineffective prophylactic migraine therapies. The "Migraine prevention with AimoviG: Informative Canadian real-world study" (MAGIC) assessed real-world effectiveness of erenumab in Canadian patients with migraine. METHODS: MAGIC was a prospective open-label, observational study conducted in Canadian patients with chronic migraine (CM) and episodic migraine (EM) with two to six categories of prior ineffective prophylactic therapies. Participants were administered 70 mg or 140 mg erenumab monthly based on physician's assessment. Migraine attacks were self-assessed using an electronic diary and patient-reported outcome questionnaires. The primary outcome was the proportion of subjects achieving ≥50% reduction in monthly migraine days (MMD) after the 3-month treatment period. RESULTS: Among the 95 participants who mostly experienced two (54.7%) or three (32.6%) prior categories of ineffective prophylactic therapies and who initiated erenumab, treatment was generally safe and well tolerated; 89/95 (93.7%) participants initiated treatment with 140 mg erenumab. At week 12, 32/95 (33.7%) participants including 17/64 (26.6%) CM and 15/32 (48.4%) EM achieved ≥50% reduction in MMD while 30/86 (34.9%) participants including 19/55 (34.5%) CM and 11/31 (35.5%) EM achieved ≥50% reduction in MMD at week 24. Through patient-reported outcome questionnaires, 62/95 (65.3%) and 45/86 (52.3%) participants reported improvement of their condition at weeks 12 and 24, respectively. Physicians observed improvement in the condition of 78/95 (82.1%) and 67/86 (77.9%) participants at weeks 12 and 24, respectively. CONCLUSION: One-third of patients with EM and CM achieved ≥50% MMD reduction after 3 months of erenumab treatment. This study provides real-world evidence of erenumab effectiveness, safety, and usage for migraine prevention in adult Canadian patients with multiple prior ineffective prophylactic treatments.


Subject(s)
Calcitonin Gene-Related Peptide Receptor Antagonists , Migraine Disorders , Adult , Analgesics/therapeutic use , Antibodies, Monoclonal, Humanized , Canada , Double-Blind Method , Humans , Migraine Disorders/chemically induced , Migraine Disorders/drug therapy , Migraine Disorders/prevention & control , Prospective Studies , Treatment Outcome
5.
Healthc Manage Forum ; 35(1): 25-28, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34555963

ABSTRACT

A combination of factors during the SARS-CoV-2 pandemic led to a disproportionately high mortality rate among residents of long-term care homes in Canada and around the globe. In retrospect, some of these factors could have been avoided or minimized. Many infection control approaches recommended by public health experts and regulators, while well intended to keep people safe from disease exposure, threatened other vital aspects of health and well-being. Furthermore, focusing narrowly on infection control practices does not address long-standing operational and infrastructural factors that contributed significantly to the pandemic toll. In this article, we review traditional (i.e., institutional) long-term care practices that were associated with increased risk during the pandemic and highlight one transformational model (the Green House Project) that worked well to protect the lives and livelihoods of people within congregate care settings. Drawing on this evidence, we identify specific strategies for necessary and overdue improvements in long-term care homes.


Subject(s)
COVID-19 , Pandemics , Humans , Long-Term Care , Pandemics/prevention & control , Retrospective Studies , SARS-CoV-2
6.
Headache ; 62(1): 78-88, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34807454

ABSTRACT

OBJECTIVE: To describe the real-world treatment persistence (defined as the continuation of medication for the prescribed treatment duration), demographics and clinical characteristics, and treatment patterns for patients prescribed erenumab for migraine prevention in Canada. BACKGROUND: The effectiveness of prophylactic migraine treatments is often undermined by poor treatment persistence. In clinical trials, erenumab has demonstrated efficacy and tolerability as a preventive treatment, but less is known about the longer term treatment persistence with erenumab. METHODS: This is a real-world retrospective cohort study where a descriptive analysis of secondary patient data was conducted. Enrollment and prescription data were extracted from a patient support program for a cohort of patients prescribed erenumab in Canada between September 2018 and December 2019 and analyzed for persistence, baseline demographics, clinical characteristics, and treatment patterns. Descriptive analyses and unadjusted Kaplan-Meier (KM) curves were used to summarize the persistence and dose escalation/de-escalation at different timepoints. RESULTS: Data were analyzed for 14,282 patients. Median patient age was 47 years, 11,852 (83.0%) of patients were female, and 9443 (66.1%) had chronic migraine at treatment initiation. Based on KM methods, 71.0% of patients overall were persistent to erenumab 360 days after treatment initiation. Within 360 days of treatment initiation, it is estimated that 59.3% (KM-derived) of patients who initiated erenumab at 70 mg escalated to 140 mg, and 4.4% (KM-derived) of patients who initiated at 140 mg de-escalated to 70 mg. CONCLUSIONS: The majority of patients prescribed erenumab remained persistent for at least a year after treatment initiation, and most patients initiated or escalated to a 140 mg dose. These results suggest that erenumab is well tolerated, and its uptake as a new class of prophylactic treatment for migraine in real-world clinical practice is not likely to be undermined by poor persistence when coverage for erenumab is easily available.
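To make the unadjusted Kaplan-Meier persistence estimate concrete, here is a minimal sketch using the Python lifelines package on synthetic durations; the patient support program data, the dose-escalation analysis and the exact censoring rules are not reproduced and are assumptions here.

```python
# Sketch of a Kaplan-Meier persistence estimate at 360 days (lifelines).
# Synthetic durations; a real analysis would use prescription records, with
# discontinuation as the event and still-on-therapy as censoring.
import numpy as np
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(1)
n = 14282
duration = rng.exponential(scale=900, size=n).clip(max=540)  # days on therapy
discontinued = (duration < 540).astype(int)  # 1 = discontinued, 0 = censored

kmf = KaplanMeierFitter()
kmf.fit(durations=duration, event_observed=discontinued, label="erenumab")

persist_360 = float(kmf.predict(360))  # S(360) = probability still on therapy
print(f"KM-estimated persistence at 360 days: {persist_360:.1%}")
```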


Subject(s)
Antibodies, Monoclonal, Humanized/pharmacology , Calcitonin Gene-Related Peptide Receptor Antagonists/pharmacology , Medication Adherence , Migraine Disorders/prevention & control , Adult , Aged , Antibodies, Monoclonal, Humanized/administration & dosage , Antibodies, Monoclonal, Humanized/adverse effects , Calcitonin Gene-Related Peptide Receptor Antagonists/administration & dosage , Calcitonin Gene-Related Peptide Receptor Antagonists/adverse effects , Canada , Chronic Disease , Female , Humans , Male , Middle Aged , Retrospective Studies , Young Adult
7.
Appl Physiol Nutr Metab ; 46(10): 1257-1264, 2021 Oct.
Article in English | MEDLINE | ID: mdl-33930277

ABSTRACT

Despite compelling muscular structure and function changes resulting from blood flow restricted (BFR) resistance training, mechanisms of action remain poorly characterized. Alterations in tissue O2 saturation (TSI%) and metabolites are potential drivers of observed changes, but their relationships with degree of occlusion pressure are unclear. We examined local TSI% and blood lactate (BL) concentration during BFR training to failure using different occlusion pressures on strength, hypertrophy, and muscular endurance over an 8-week training period. Twenty participants (11 males/9 females) trained 3/wk for 8 wk using high pressure (100% resting limb occlusion pressure, LOP; 20% one-repetition maximum (1RM)), moderate pressure (50% LOP, 20%1RM), or traditional resistance training (TRT; 70%1RM). Strength, size, and muscular endurance were measured pre/post training. TSI% and BL were quantified during a training session. Despite overall increases, no group preferentially increased strength, hypertrophy, or muscular endurance (p > 0.05). Neither TSI% nor BL concentration differed between groups (p > 0.05). Moderate pressure resulted in greater accumulated deoxygenation stress (TSI% × time) (-6352 ± 3081, -3939 ± 1835, -2532 ± 1349 au for moderate pressure, high pressure, and TRT, p = 0.018). We demonstrate that BFR training to task-failure elicits similar strength, hypertrophy, and muscular endurance changes to TRT. Further, varied occlusion pressure does not impact these outcomes or elicit changes in TSI% or BL concentrations. Novelty: Training to task failure with low-load blood flow restriction elicits similar improvements to traditional resistance training, regardless of occlusion pressure. During blood flow restriction, altering occlusion pressure does not proportionally impact tissue O2 saturation nor blood lactate concentrations.
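Accumulated deoxygenation stress was expressed above as TSI% integrated over time. A minimal sketch of that area-under-the-curve calculation on a made-up NIRS trace follows; the baseline definition, sampling rate and units are illustrative assumptions, not the study's exact processing.

```python
# Sketch: accumulated deoxygenation stress as the time integral of the change
# in tissue O2 saturation (TSI%) from a resting baseline. Synthetic samples.
import numpy as np

t = np.arange(0, 600, 1.0)                 # seconds during the training session
tsi = 68 - 12 * np.sin(np.pi * t / 600)    # fake TSI% trace dipping mid-session
baseline = tsi[:30].mean()                 # resting TSI% over the first 30 s

delta = tsi - baseline                     # negative values = deoxygenation
# Trapezoidal integration of delta over time -> "TSI% x time" in %*s (au)
stress_au = np.sum((delta[:-1] + delta[1:]) / 2 * np.diff(t))
print(f"accumulated deoxygenation stress: {stress_au:.0f} %*s")
```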


Subject(s)
Hypoxia , Lactic Acid/blood , Muscle, Skeletal/growth & development , Regional Blood Flow , Resistance Training , Adaptation, Physiological , Adult , Constriction , Female , Humans , Male , Muscle Strength , Muscle, Skeletal/blood supply , Young Adult
8.
Obes Sci Pract ; 6(4): 382-389, 2020 Aug.
Article in English | MEDLINE | ID: mdl-32874673

ABSTRACT

OBJECTIVE: Weight management medications can significantly increase patients' chances of achieving a clinically meaningful weight loss if patients persist with treatment. This retrospective observational study of de-identified medical records of 311 patients is the first real-world study examining persistence with liraglutide 3.0 mg in Canada, and also investigates associations between the SaxendaCare® patient support program and persistence and weight loss. METHODS: Overall persistence was assessed, as well as associations of enrollment in SaxendaCare®, persistence and weight loss. RESULTS: Overall mean (standard deviation) persistence with liraglutide 3.0 mg was 6.3 (4.1) months, and 67.5% (n = 210) and 53.7% (n = 167) of patients persisted for ≥4 and ≥ 6 months, respectively. Enrollment in SaxendaCare® was associated with significantly longer persistence with liraglutide 3.0 mg and greater weight loss. Patients enrolled in SaxendaCare® (n = 119) persisted for 7.9 (4.0) versus 5.2 (3.8) months for those not enrolled (n = 184) (p < 0.001), and had significantly greater percent weight loss after 6 months regardless of the duration of their persistence (-7.9% vs -5.5% from baseline, p < 0.01). CONCLUSIONS: These findings suggest that, in clinical settings, persistence with liraglutide 3.0 mg can exceed 6 months, and that enrolling in SaxendaCare® may be associated with comparatively longer persistence and, regardless of persistence, greater weight loss.

9.
Obes Sci Pract ; 6(4): 439-444, 2020 Aug.
Article in English | MEDLINE | ID: mdl-32874678

ABSTRACT

OBJECTIVE: Liraglutide 3.0 mg is associated with clinically significant weight loss in clinical trials, but real-world data are lacking. In this analysis, weight loss and persistence outcomes with liraglutide 3.0 mg were assessed across obesity classes, in a real-world clinical setting. METHODS: Secondary analysis of an observational, retrospective study of liraglutide 3.0 mg for weight management (as adjunct to diet and exercise) at six Wharton Medical Clinics in Canada. Patients were categorized by body mass index (BMI, kg/m2) into obesity class I (BMI 30-34.9); class II (BMI 35-39.9); and class III (BMI ≥40). Change in weight, categorical weight loss, time to maintenance dose (defined as the time to reach the full liraglutide 3.0 mg maintenance dose) and persistence were assessed for each class and for differences between classes. RESULTS: Of 308 patients, 70 (22.7%) had obesity class I, 83 (26.9%) obesity class II and 155 (50.3%) obesity class III. Similar percentage change in weight was observed between obesity classes (mean [standard deviation, SD]: -7.0% [6.0], -6.6% [6.0] and -6.1% [5.0], respectively; p = .640), and similar proportions achieved ≥5% weight loss (60.4%, 62.0% and 55.3%, respectively; p = .717) at 6 months. Mean time to maintenance dose (SD) was 64.2 (56.4) d, 76.4 (56.3) d and 71.4 (54.5) d for obesity classes I, II and III, respectively (p = .509). Persistence with medication was also similar between obesity classes (p = .358). CONCLUSIONS: These findings suggest that real-world treatment with liraglutide 3.0 mg, regardless of obesity class, is associated with similar clinically significant weight loss, time to maintenance dose and medication persistence.

10.
J Affect Disord ; 277: 30-38, 2020 12 01.
Article in English | MEDLINE | ID: mdl-32791390

ABSTRACT

INTRODUCTION: The burden of treatment-resistant depression (TRD) in Canada requires empirical characterization to better inform clinicians and policy decision-making in mental health. Towards this aim, this study utilized the Institute for Clinical Evaluative Sciences (ICES) databases to quantify the economic burden and resource utilization of Patients with TRD in Ontario. METHODS: TRD, Non-TRD Major Depressive Disorder (Non-TRD MDD) and Non-MDD cohorts were selected from the ICES databases between April 2006-March 2015 and followed-up for at least two years. TRD was defined as a minimum of two treatment failures within one-year of the index MDD diagnosis. Non-TRD and Non-MDD patients were matched with patients with TRD to analyze costs, resource utilization, and demographic information. RESULTS: Out of 277 patients with TRD identified, the average age was 52 years (SD 16) and 53% were female. Compared to Non-TRD, the patients with TRD had more all-cause visits to outpatient (38.2 vs. 24.2) and emergency units (2.7 vs. 2.0) and more depression-related visits to GPs (3.06 vs. 1.63) and psychiatrists (5.88 vs. 1.95) (all p < 0.05). The average two-year cost for TRD patients was $20,998 (CAD). LIMITATIONS: This study included patients with only public plan coverage; therefore, overall TRD population and cash and private claims were not captured. CONCLUSIONS: Patients with TRD exhibit a significantly higher demand on healthcare resources and higher overall payments compared to Non-TRD patients. The findings suggest that there are current challenges in adequately managing this difficult-to-treat patient group and there remains a high unmet need for new therapies.


Subject(s)
Depressive Disorder, Major , Depressive Disorder, Treatment-Resistant , Cost of Illness , Depressive Disorder, Major/drug therapy , Depressive Disorder, Treatment-Resistant/therapy , Female , Health Care Costs , Humans , Male , Middle Aged , Ontario , Retrospective Studies
11.
Phys Med ; 69: 262-268, 2020 Jan.
Article in English | MEDLINE | ID: mdl-31927263

ABSTRACT

PURPOSE: Detector uniformity is an important parameter in digital mammography to guarantee a level of image quality adequate for early detection of breast cancer. Many problems with digital systems have been identified through uniformity measurements, primarily as a result of incorrect flat-field calibration and artifacts caused by image receptor defects. The European guidelines suggest a method for image uniformity assessment based on measurement of signal-to-noise ratio (SNR) and pixel value (PV) across a uniform image. Nineteen mammography systems from the same manufacturer installed in our organization incorporate an a-Se direct conversion detector. Since their installation, instability and inconsistency of image uniformity have attracted the attention of medical physicists. A number of different tests have been carried out in order to understand and establish the reasons for this instability. METHODS: Three different tests were performed to evaluate the impact of the heel effect, detector temperature and ghosting on the uniformity images. All the tests are based on the acquisition of uniform images as suggested by the European Guidelines for Quality Assurance in Breast Cancer Screening and Diagnosis. RESULTS: Results show that an increase in detector temperature produces an increase in SNR and a decrease in uniformity. A further decrease in uniformity ranging between 20% and 30% is due to ghosting, while a decrease of about 10% is due to the heel effect. CONCLUSIONS: The X-ray tube, system geometry and detector all have an impact on system uniformity, and an understanding of the contribution of each is necessary in order to obtain comparable image quality among all the systems.
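The European-guideline-style uniformity check is essentially a grid of ROI measurements on a flat-field image. The sketch below computes SNR (mean/standard deviation of pixel values) per ROI and the maximum relative deviation across ROIs; the ROI size, grid layout and deviation formula are illustrative assumptions rather than the guideline's exact prescription.

```python
# Sketch: SNR and uniformity from a flat-field mammography image using a grid
# of ROIs across the detector. The image here is simulated noise.
import numpy as np

rng = np.random.default_rng(2)
image = rng.normal(loc=500, scale=12, size=(2048, 1664))  # fake flat-field image

roi = 256  # ROI edge length in pixels (assumption)
snr_vals, pv_vals = [], []
for r in range(0, image.shape[0] - roi + 1, roi):
    for c in range(0, image.shape[1] - roi + 1, roi):
        patch = image[r:r + roi, c:c + roi]
        pv_vals.append(patch.mean())                  # mean pixel value (PV)
        snr_vals.append(patch.mean() / patch.std())   # SNR = mean / std dev

snr_vals, pv_vals = np.array(snr_vals), np.array(pv_vals)
# Uniformity: maximum relative deviation of each ROI from the image-wide mean
snr_dev = np.max(np.abs(snr_vals - snr_vals.mean())) / snr_vals.mean()
pv_dev = np.max(np.abs(pv_vals - pv_vals.mean())) / pv_vals.mean()
print(f"max SNR deviation: {snr_dev:.1%}, max PV deviation: {pv_dev:.1%}")
```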


Subject(s)
Breast Neoplasms/diagnostic imaging , Mammography/instrumentation , Radiographic Image Enhancement/instrumentation , Algorithms , Artifacts , Calibration , Dose-Response Relationship, Drug , Female , Humans , Image Processing, Computer-Assisted , Mammography/methods , Molybdenum , Phantoms, Imaging , Quality Control , Radiographic Image Enhancement/methods , Reproducibility of Results , Rhodium , Selenium , Signal-To-Noise Ratio , Temperature , Tungsten , X-Rays
12.
Eur J Appl Physiol ; 119(11-12): 2745-2755, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31696316

ABSTRACT

PURPOSE: Transcutaneous electrical nerve stimulation (TENS) can reduce acute and chronic pain. Unilateral fatigue can produce discomfort in the affected limb and force and activation deficits in contralateral non-exercised muscles. The effects of TENS-induced local pain analgesia on non-local fatigue performance are unknown. Hence, the aim of the study was to determine if TENS-induced pain suppression would augment force output during a fatiguing protocol in the treated and contralateral muscles. METHODS: Three experiments were integrated for this article. Following pre-tests, each experiment involved 20 min of TENS, sham, or a control condition on the dominant quadriceps. Then either the TENS-treated quadriceps (TENS_Treated) or the contralateral quadriceps (TENS_Contra) was tested. In a third experiment, the TENS and sham conditions involved two 100-s isometric maximal voluntary contractions (MVCs) with 30-s recovery, followed by testing of the contralateral quadriceps (TENS_Contra-Fatigue). Testing involved single knee extensor (KE) MVCs (pre- and post-test) and a post-test 30% MVC to task failure. RESULTS: In the TENS_Treated experiment, time to failure of the treated KE was 11.0% greater versus control (p = 0.03). In the TENS_Contra-Fatigue experiment, time to contralateral KE failure was significantly greater versus sham (11.7%; p = 0.04) and near-significantly greater versus control (7.1%; p = 0.1). The fatigue index in the second fatigue intervention set (treated leg) was 14.5% higher with TENS (36.2 ± 10.1%) than with sham (31.6 ± 10.6%) (p = 0.02). There was no significant post-fatigue KE fatigue interaction in the TENS_Contra experiment. CONCLUSIONS: Unilateral TENS application to the dominant KE prolonged time to failure in the treated and contralateral KE, suggesting a global pain modulatory response.


Subject(s)
Isometric Contraction/physiology , Knee Joint/physiology , Knee/physiopathology , Muscle Fatigue/physiology , Adult , Electromyography/methods , Female , Humans , Male , Muscle Strength/physiology , Quadriceps Muscle/physiology , Transcutaneous Electric Nerve Stimulation/methods , Young Adult
14.
Arch Dis Child ; 104(11): 1070-1076, 2019 11.
Article in English | MEDLINE | ID: mdl-31272968

ABSTRACT

OBJECTIVE: To understand the association of seizure frequency with healthcare resource utilisation (HCRU) and mortality in UK children with epilepsy (CWE). DESIGN: Retrospective cohort study. SETTING: Routinely collected primary care data from The Health Improvement Network UK database. PATIENTS: CWE ≥1 and <18 years of age with a record of seizure frequency were included in mortality analyses from 2005 to 2015 and HCRU analyses from 2010 to 2015. MAIN OUTCOME MEASURES: Frequency of HCRU contacts during the year following the latest seizure frequency record, and mortality (descriptive and Cox proportional hazards regression) from the first record of seizure frequency. RESULTS: Higher seizure frequency was related to increased HCRU and mortality. In negative binomial regression, each category increase in seizure frequency was associated with 11% more visits to general practitioners, 35% more inpatient admissions, 15% more outpatient visits and 24% higher direct HCRU costs. Eleven patients died during 12 490 patient-years of follow-up. The unadjusted HR of mortality per higher category of seizure frequency was 2.56 (95% CI: 1.52 to 4.31). Adjustment for age and number of prescribed anti-epileptic drugs at index attenuated this estimate to 2.11 (95% CI: 1.24 to 3.60). CONCLUSION: Higher seizure frequency is associated with greater HCRU and mortality in CWE in the UK. Improvement in seizure control may potentially lead to better patient outcomes and reduced healthcare use.
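The HCRU rate ratios above come from negative binomial regression on visit counts. Here is a minimal sketch of that kind of model in Python statsmodels on synthetic data; the variable names, fixed-dispersion NB family and single-covariate specification are assumptions, and the Cox model for mortality is not shown.

```python
# Sketch: negative binomial regression of visit counts on seizure-frequency
# category (synthetic data). exp(coefficient) is the rate ratio per one-category
# increase in seizure frequency.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 2000
df = pd.DataFrame({"seizure_cat": rng.integers(0, 5, size=n)})  # 0 = lowest ... 4 = highest
mu = np.exp(1.0 + 0.10 * df["seizure_cat"])                     # true rate ratio ~1.11/category
# Overdispersed counts: Poisson with gamma-distributed means (NB mixture)
df["gp_visits"] = rng.poisson(rng.gamma(shape=2.0, scale=mu / 2.0))

X = sm.add_constant(df[["seizure_cat"]])
model = sm.GLM(df["gp_visits"], X, family=sm.families.NegativeBinomial()).fit()
rate_ratio = np.exp(model.params["seizure_cat"])
print(f"GP visits per category increase in seizure frequency: x{rate_ratio:.2f}")
```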


Subject(s)
Epilepsy/epidemiology , Patient Acceptance of Health Care/statistics & numerical data , Seizures/epidemiology , Adolescent , Child , Child, Preschool , Databases, Factual , Epilepsy/mortality , Epilepsy/physiopathology , Female , Humans , Infant , Male , Retrospective Studies , Seizures/mortality , Seizures/physiopathology , Severity of Illness Index , United Kingdom/epidemiology
15.
Obesity (Silver Spring) ; 27(6): 917-924, 2019 06.
Article in English | MEDLINE | ID: mdl-31062937

ABSTRACT

OBJECTIVE: Real-world clinical effectiveness of liraglutide 3.0 mg, in combination with diet and exercise, was investigated 4 and 6 months post-initiation. Changes in absolute and percent body weight were examined from baseline. METHODS: A cohort of liraglutide 3.0 mg initiators in 2015 and 2016 was identified from six Canadian weight-management clinics. Post-initiation values at 4 and 6 months were compared with baseline values using a paired t test. RESULTS: The full cohort consisted of 311 participants, with 210 in the ≥ 4-month persistence group and 167 in the ≥ 6-month persistence group. Average baseline BMI was 40.7 kg/m2, and weight was 114.8 kg. There was a significant change in body weight 6 and 4 months after initiation of treatment in persistent subjects (≥ 6-month: -8.0 kg, P < 0.001; ≥ 4-month: -7.0 kg, P < 0.001) and in All Subjects, regardless of persistence (-7.3 kg; P < 0.001). Percentage change in body weight from baseline was -7.1% in the ≥ 6-month group and -6.3% in the ≥ 4-month group, and All Subjects lost 6.5% body weight. Of participants in the ≥ 6-month group, 64.1% and 34.5% lost ≥ 5% and > 10% body weight, respectively. CONCLUSIONS: In a real-world setting, liraglutide 3.0 mg, when combined with diet and exercise, was associated with clinically meaningful weight loss.
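The before/after comparison above is a paired t test on body weight at baseline versus follow-up. A minimal sketch on synthetic data is below; the group size is taken from the abstract, but the weight distributions and the absence of covariate adjustment are assumptions.

```python
# Sketch: paired t-test of body weight at baseline vs. 6 months in persistent
# patients (synthetic data; not the clinic chart data used in the study).
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n = 167                                            # >=6-month persistence group
baseline = rng.normal(114.8, 20, size=n)           # weight (kg) at initiation
month6 = baseline - rng.normal(8.0, 5.0, size=n)   # weight (kg) at 6 months

t, p = stats.ttest_rel(month6, baseline)           # paired t-test
change = month6 - baseline
pct = 100 * change.mean() / baseline.mean()
print(f"mean change: {change.mean():.1f} kg ({pct:.1f}%), p = {p:.3g}")
```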


Subject(s)
Hypoglycemic Agents/therapeutic use , Liraglutide/therapeutic use , Weight Loss/drug effects , Adult , Canada , Female , Humans , Hypoglycemic Agents/pharmacology , Liraglutide/pharmacology , Male , Middle Aged , Retrospective Studies
16.
Invest Ophthalmol Vis Sci ; 58(11): 4818-4825, 2017 09 01.
Article in English | MEDLINE | ID: mdl-28973328

ABSTRACT

Purpose: The purpose of this study was to explore the relationship between visual acuity and utility (health-related quality of life) in diabetic macular edema (DME) using intravitreal aflibercept data. Methods: The relationship between visual acuity in the best-seeing eye (BSE) and worse-seeing eye (WSE) and utility was explored using ordinary least squares (OLS) and random-effects models adjusted for different covariates (age, age2, sex, body mass index, smoking status, glycated hemoglobin, diabetes severity, comorbidities, and geographic region). Utility was measured using the EuroQoL-five dimensions questionnaire (EQ-5D) and Visual Functioning Questionnaire-Utility Index (VFQ-UI). For each model, coefficients (R2) were reported, and WSE/BSE was expressed as the ratio of coefficients (OLS models). Models were independent of treatment effects, and outcomes from all time points (up to week 100) were included where available. Results: Data from 1320 patients with DME were analyzed. In all models, the association between visual acuity (BSE > WSE) was stronger with VFQ-UI- than EQ-5D-derived utilities. The estimated relationship between VFQ-UI and visual acuity in the BSE and WSE was robust, even with an increasing number of covariates. WSE/BSE coefficient ratios were similar across VFQ-UI OLS models (32%) compared with EQ-5D models (41%-48%). Actual (unadjusted) versus predicted data plots also showed a better fit with VFQ-UI- than EQ-5D-derived utilities. Conclusions: These analyses show that VFQ-UI was more sensitive than EQ-5D-derived utilities for measuring the impact of visual acuity in the BSE and WSE. Visual acuity in the BSE was a major contributor to utility, but WSE is also important though to a lesser degree as shown by the coefficient ratios. These new data will be useful for health technology assessments in DME, where utilities data are lacking.
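The OLS part of the analysis above regresses utility on visual acuity in the best- and worse-seeing eye (plus covariates) and compares the two coefficients as a WSE/BSE ratio. A sketch with statsmodels on synthetic data follows; the covariate list, units and effect sizes are assumptions, and the random-effects models are not shown.

```python
# Sketch: OLS regression of utility on visual acuity in the better- (BSE) and
# worse-seeing eye (WSE), reporting R2 and the WSE/BSE coefficient ratio.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 1320
df = pd.DataFrame({
    "va_bse": rng.normal(70, 10, size=n),   # letters read, best-seeing eye
    "va_wse": rng.normal(55, 15, size=n),   # letters read, worse-seeing eye
    "age": rng.normal(62, 9, size=n),
})
# Simulated utility with a stronger BSE than WSE contribution
df["utility"] = (0.40 + 0.004 * df["va_bse"] + 0.0013 * df["va_wse"]
                 - 0.001 * df["age"] + rng.normal(0, 0.08, size=n))

X = sm.add_constant(df[["va_bse", "va_wse", "age"]])
fit = sm.OLS(df["utility"], X).fit()
ratio = fit.params["va_wse"] / fit.params["va_bse"]
print(f"R2 = {fit.rsquared:.3f}, WSE/BSE coefficient ratio = {ratio:.0%}")
```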


Subject(s)
Angiogenesis Inhibitors/administration & dosage , Diabetic Retinopathy/drug therapy , Health Status , Macular Edema/drug therapy , Quality of Life , Receptors, Vascular Endothelial Growth Factor/administration & dosage , Recombinant Fusion Proteins/administration & dosage , Visual Acuity/physiology , Adult , Aged , Comorbidity , Diabetic Retinopathy/physiopathology , Female , Humans , Intravitreal Injections , Macular Edema/physiopathology , Male , Middle Aged , Regression Analysis
17.
J Wound Care ; 26(7): 381-397, 2017 Jul 02.
Article in English | MEDLINE | ID: mdl-28704150

ABSTRACT

OBJECTIVE: To assess the potential of measurements of pH, exudate composition and temperature in wounds to predict healing outcomes and to identify the methods that are employed to measure them. METHOD: A systematic review based on the outcomes of a search strategy of quantitative primary research published in the English language was conducted. Inclusion criteria limited studies to those involving in vivo and human participants with an existing or intentionally provoked wound, defined as 'a break in the epithelial integrity of the skin', and excluded in vitro and animal studies. Data synthesis and analysis was performed using structured narrative summaries of each included study arranged by concept, pH, exudate composition and temperature. The Evidence Based Literature (EBL) Critical Appraisal Checklist was implemented to appraise the quality of the included studies. RESULTS: A total of 23 studies, three for pH (mean quality score 54.48%), 12 for exudate composition (mean quality score 46.54%) and eight for temperature (mean quality score 36.66%), were assessed as eligible for inclusion in this review. Findings suggest that reduced pH levels in wounds, from alkaline towards acidic, are associated with improvements in wound condition. Metalloproteinase-9 (MMP-9), matrix metalloproteinase-2 (MMP-2), tissue inhibitor of metalloproteinase (TIMP), neutrophil elastase (NE) and albumin, in descending order, were the most frequently measured analytes in wounds. MMP-9 emerged as the analyte which offers the most potential as a biomarker of wound healing, with elevated levels observed in acute or non-healing wounds and decreasing levels in wounds progressing in healing. Combined measures of different exudate components, such as MMP/TIMP ratios, also appeared to offer substantial potential to indicate wound healing. Finally, temperature measurements are highest in non-healing, worsening or acute wounds and decrease as wounds progress towards healing. Methods used to measure pH, exudate composition and temperature varied greatly and, despite noting some similarities, the studies often yielded significantly contrasting results. Furthermore, a limitation to the generalisability of the findings was the overall quality scores of the research studies, which appeared suboptimal. CONCLUSION: Despite some promising findings, there was insufficient evidence to confidently recommend the use of any of these measures as predictors of wound healing. pH measurement appeared as the most practical method for use in clinical practice to indicate wound healing outcomes. Further research is required to increase the strength of evidence and develop a greater understanding of wound healing dynamics.


Subject(s)
Biomarkers/metabolism , Exudates and Transudates/metabolism , Skin Temperature , Wound Healing , Wounds and Injuries , Albumins/metabolism , Exudates and Transudates/chemistry , Humans , Hydrogen-Ion Concentration , Leukocyte Elastase/metabolism , Matrix Metalloproteinase 2/metabolism , Matrix Metalloproteinase 9/metabolism , Temperature , Tissue Inhibitor of Metalloproteinases/metabolism
18.
Crit Care ; 20(1): 355, 2016 10 27.
Article in English | MEDLINE | ID: mdl-27788680

ABSTRACT

BACKGROUND: The present study was designed to (1) establish current sedation practice in UK critical care to inform evidence synthesis and potential future primary research and (2) to compare practice reported via a survey with actual practice assessed in a point prevalence study (PPS). METHODS: UK adult general critical care units were invited to participate in a survey of current sedation practice, and a representative sample of units was invited to participate in a PPS of sedation practice at the patient level. Survey responses were compared with PPS data where both were available. RESULTS: Survey responses were received from 214 (91 %) of 235 eligible critical care units. Of these respondents, 57 % reported having a written sedation protocol, 94 % having a policy of daily sedation holds and 94 % using a sedation scale to assess depth of sedation. In the PPS, across units reporting a policy of daily sedation holds, a median of 50 % (IQR 33-75 %) of sedated patients were considered for a sedation hold. A median of 88 % (IQR 63-100 %) of patients were assessed using the same sedation scale as reported in the survey. Both the survey and the PPS indicated propofol as the preferred sedative and alfentanil, fentanyl and morphine as the preferred analgesics. In most of the PPS units, all patients had received the unit's reported first-choice sedative (median across units 100 %, IQR 64-100 %), and a median of 80 % (IQR 67-100 %) of patients had received the unit's reported first-choice analgesic. Most units (83 %) reported in the survey that sedatives are usually administered in combination with analgesics. Across units that participated in the PPS, 69 % of patients had received a combination of agents - most frequently propofol combined with either alfentanil or fentanyl. CONCLUSIONS: Clinical practice reported in the national survey did not accurately reflect actual clinical practice at the patient level observed in the PPS. Employing a mixed methods approach provided a more complete picture of sedation practice in terms of breadth and depth of information.


Subject(s)
Analgesics/therapeutic use , Critical Care/statistics & numerical data , Hypnotics and Sedatives/therapeutic use , Intensive Care Units/statistics & numerical data , Physician Executives/statistics & numerical data , Surveys and Questionnaires , Case-Control Studies , Critical Care/methods , Drug Utilization/statistics & numerical data , Humans , Pilot Projects , Prevalence , United Kingdom/epidemiology
19.
Anaesthesia ; 71(12): 1410-1416, 2016 12.
Article in English | MEDLINE | ID: mdl-27667471

ABSTRACT

The models used to predict outcome after adult general critical care may not be applicable to cardiothoracic critical care. Therefore, we analysed data from the Case Mix Programme to identify variables associated with hospital mortality after admission to cardiothoracic critical care units and to develop a risk-prediction model. We derived predictive models for hospital mortality from variables measured in 17,002 patients within 24 h of admission to five cardiothoracic critical care units. The final model included 10 variables: creatinine; white blood count; mean arterial blood pressure; functional dependency; platelet count; arterial pH; age; Glasgow Coma Score; arterial lactate; and route of admission. We included additional interaction terms between creatinine, lactate, platelet count and cardiac surgery as the admitting diagnosis. We validated this model against 10,238 other admissions, for which the c index (95% CI) was 0.904 (0.89-0.92) and the Brier score was 0.055, while the slope and intercept of the calibration plot were 0.961 and -0.183, respectively. The discrimination and calibration of our model suggest that it might be used to predict hospital mortality after admission to cardiothoracic critical care units.
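Validation of such a risk model is typically summarized by discrimination (c index) and calibration (Brier score, calibration slope and intercept). The sketch below computes these on simulated predictions; the logistic-recalibration approach to the slope and intercept is one common convention and an assumption here, and the published model's coefficients are not reproduced.

```python
# Sketch: validating a binary hospital-mortality model on a held-out cohort.
# For a binary outcome, the c index equals the ROC AUC.
import numpy as np
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score, brier_score_loss

rng = np.random.default_rng(6)
n = 10238                                   # validation admissions
lp = rng.normal(-3.0, 1.6, size=n)          # linear predictor from the model
p_hat = 1 / (1 + np.exp(-lp))               # predicted mortality risk
died = rng.binomial(1, p_hat)               # observed outcomes (simulated)

c_index = roc_auc_score(died, p_hat)
brier = brier_score_loss(died, p_hat)

# Calibration: logistic regression of the outcome on the linear predictor;
# a slope near 1 and an intercept near 0 indicate good calibration.
calib = sm.Logit(died, sm.add_constant(lp)).fit(disp=False)
intercept, slope = calib.params
print(f"c index {c_index:.3f}, Brier {brier:.3f}, "
      f"slope {slope:.3f}, intercept {intercept:.3f}")
```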


Subject(s)
Cardiac Surgical Procedures/mortality , Critical Care , Hospital Mortality , Risk Assessment , Adult , Aged , Aged, 80 and over , Diagnosis-Related Groups , Female , Humans , Intensive Care Units , Male , Middle Aged , Patient Admission
20.
Health Technol Assess ; 19(97): i-xxv, 1-150, 2015 Nov.
Article in English | MEDLINE | ID: mdl-26597979

ABSTRACT

BACKGROUND: Early goal-directed therapy (EGDT) is recommended in international guidance for the resuscitation of patients presenting with early septic shock. However, adoption has been limited and uncertainty remains over its clinical effectiveness and cost-effectiveness. OBJECTIVES: The primary objective was to estimate the effect of EGDT compared with usual resuscitation on mortality at 90 days following randomisation and on incremental cost-effectiveness at 1 year. The secondary objectives were to compare EGDT with usual resuscitation for requirement for, and duration of, critical care unit organ support; length of stay in the emergency department (ED), critical care unit and acute hospital; health-related quality of life, resource use and costs at 90 days and at 1 year; all-cause mortality at 28 days, at acute hospital discharge and at 1 year; and estimated lifetime incremental cost-effectiveness. DESIGN: A pragmatic, open, multicentre, parallel-group randomised controlled trial with an integrated economic evaluation. SETTING: Fifty-six NHS hospitals in England. PARTICIPANTS: A total of 1260 patients who presented at EDs with septic shock. INTERVENTIONS: EGDT (n = 630) or usual resuscitation (n = 630). Patients were randomly allocated 1 : 1. MAIN OUTCOME MEASURES: All-cause mortality at 90 days after randomisation and incremental net benefit (at £20,000 per quality-adjusted life-year) at 1 year. RESULTS: Following withdrawals, data on 1243 (EGDT, n = 623; usual resuscitation, n = 620) patients were included in the analysis. By 90 days, 184 (29.5%) in the EGDT and 181 (29.2%) patients in the usual-resuscitation group had died [p = 0.90; absolute risk reduction -0.3%, 95% confidence interval (CI) -5.4 to 4.7; relative risk 1.01, 95% CI 0.85 to 1.20]. Treatment intensity was greater for the EGDT group, indicated by the increased use of intravenous fluids, vasoactive drugs and red blood cell transfusions. Increased treatment intensity was reflected by significantly higher Sequential Organ Failure Assessment scores and more advanced cardiovascular support days in critical care for the EGDT group. At 1 year, the incremental net benefit for EGDT versus usual resuscitation was negative at -£725 (95% CI -£3000 to £1550). The probability that EGDT was more cost-effective than usual resuscitation was below 30%. There were no significant differences in any other secondary outcomes, including health-related quality of life, or adverse events. LIMITATIONS: Recruitment was lower at weekends and out of hours. The intervention could not be blinded. CONCLUSIONS: There was no significant difference in all-cause mortality at 90 days for EGDT compared with usual resuscitation among adults identified with early septic shock presenting to EDs in England. On average, costs were higher in the EGDT group than in the usual-resuscitation group while quality-adjusted life-years were similar in both groups; the probability that it is cost-effective is < 30%. FUTURE WORK: The ProMISe (Protocolised Management In Sepsis) trial completes the planned trio of evaluations of EGDT across the USA, Australasia and England; all have indicated that EGDT is not superior to usual resuscitation. Recognising that each of the three individual, large trials has limited power for evaluating potentially important subgroups, the harmonised approach adopted provides the opportunity to conduct an individual patient data meta-analysis, enhancing both knowledge and generalisability. 
TRIAL REGISTRATION: Current Controlled Trials ISRCTN36307479. FUNDING: This project was funded by the National Institute for Health Research (NIHR) Health Technology Assessment programme and will be published in full in Health Technology Assessment; Vol. 19, No. 97. See the NIHR Journals Library website for further project information.
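The primary economic outcome, incremental net benefit at £20,000 per quality-adjusted life-year, is the willingness-to-pay threshold times the QALY difference minus the cost difference. The sketch below shows the arithmetic only; the group means used are illustrative assumptions, not the trial's estimates.

```python
# Sketch: incremental net (monetary) benefit at a willingness-to-pay threshold.
# INB = wtp * (QALY_egdt - QALY_usual) - (cost_egdt - cost_usual)
wtp = 20_000                            # £ per quality-adjusted life-year

qaly_egdt, qaly_usual = 0.54, 0.55      # mean QALYs at 1 year (assumed values)
cost_egdt, cost_usual = 12_400, 11_800  # mean costs (£) at 1 year (assumed values)

inb = wtp * (qaly_egdt - qaly_usual) - (cost_egdt - cost_usual)
print(f"incremental net benefit at £{wtp:,}/QALY: £{inb:,.0f}")
```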


Subject(s)
Resuscitation/methods , Shock, Septic/therapy , Adult , Cost-Benefit Analysis , Disease Management , Emergency Service, Hospital , England , Guideline Adherence , Humans , Resuscitation/economics , Shock, Septic/economics , Shock, Septic/mortality , Technology Assessment, Biomedical , Treatment Outcome