1.
J Obstet Gynaecol Can ; 40(6): 669-676, 2018 06.
Article in English | MEDLINE | ID: mdl-29248358

ABSTRACT

OBJECTIVE: Methicillin-resistant Staphylococcus aureus (MRSA) among obstetrical patients can increase birth complications for both mothers and infants, but little is known about the risk factors for MRSA in this population. The objective of this study was to determine the prevalence of MRSA among obstetrical patients and identify risk factors associated with MRSA colonization. METHODS: This nested case-control study included obstetrical patients with MRSA colonization identified through a universal screening program at The Ottawa Hospital (February 2008-January 2010). Each case was matched to three controls; the groups were compared using chi-square tests for categorical variables and Wilcoxon rank-sum tests, with medians and interquartile ranges (IQRs), for continuous variables. Conditional logistic regression with ORs and 95% CIs was used to identify risk factors. Standard microbiologic techniques and pulsed-field gel electrophoresis of the MRSA isolates from case patients were performed. RESULTS: Of 11 478 obstetrical patients, 39 (0.34%) were MRSA colonized; 117 patients were selected as matched controls. The median age was 30 years (IQR 27.5-35.0), and the median length of stay was 2.55 days (IQR 1.95-3.24). Only MRSA cases had a previous MRSA infection (4 vs. 0). MRSA cases had significantly higher parity (median 3; IQR 2-5) compared with controls (median 2; IQR 1-3) (OR 1.52; 95% CI 1.22-1.90). CONCLUSION: This study identified a low prevalence of MRSA among obstetrical patients. Risk factors associated with MRSA colonization were previous MRSA infection and multiparity. Obstetrical patients who previously tested positive for MRSA should be placed on contact precautions at the time of hospital admission because previous infection is a risk factor for future colonization.


Subject(s)
Hospital Units/statistics & numerical data , Methicillin-Resistant Staphylococcus aureus/growth & development , Methicillin-Resistant Staphylococcus aureus/isolation & purification , Obstetrics , Adult , Canada , Case-Control Studies , Delivery, Obstetric/methods , Female , Humans , Nasal Cavity/microbiology , Pregnancy , Rectum/microbiology , Risk Factors , Staphylococcal Infections/diagnosis , Staphylococcal Infections/microbiology
2.
BMJ Qual Saf ; 22(4): 306-16, 2013 Apr.
Article in English | MEDLINE | ID: mdl-23396853

ABSTRACT

OBJECTIVE: To assess the usability of a computerised drug monitoring programme for ambulatory patients receiving outpatient prescriptions. MATERIALS AND METHODS: A prospective cohort of 200 patients received two automated calls after a new drug prescription (day 3 and day 17) to screen for unfilled prescriptions and medication problems. Usability was assessed objectively and subjectively with the coding of technical (eg, voice recognition problems) and respondent burden (eg, failing to follow instructions) problems observed during the calls, and with an interview 21 days after the prescription. Associations between personal factors, usability and call outcome were examined with logistic regression models. RESULTS: The automated calls successfully reached 70.0% of enrolled patients. Older age increased the likelihood of experiencing technical (OR 2.18, 95% CI 1.22 to 3.88) and respondent burden problems (OR 3.32, 95% CI 1.88 to 5.87), as well as unsuccessful calls (OR 2.16, 95% CI 1.19 to 3.91). Patients with higher education experienced fewer respondent burden problems (OR 0.44, 95% CI 0.21 to 0.91), but they were more prone to have unsuccessful calls (OR 2.65, 95% CI 1.07 to 6.56) and less likely to find them useful (OR 0.23, 95% CI 0.08 to 0.68). Older adults perceived the calls as easy to use and useful, although they reported lower intention to use the automated calls in the future (OR 0.32, 95% CI 0.15 to 0.70). DISCUSSION: As reported in previous studies, we found that older adults tend to have more difficulty when interacting with automated calls. Evidence about the association between education and usability was mixed. CONCLUSIONS: Our results highlight practical suggestions to improve the feasibility and usability of automated calls in primary care screening programmes.


Subject(s)
Drug Monitoring , Electronic Prescribing , Patient Acceptance of Health Care , Patient Compliance , Adult , Age Factors , Aged , Aged, 80 and over , Cohort Studies , Feasibility Studies , Female , Humans , Male , Medication Adherence , Medication Therapy Management , Middle Aged , Prescription Drugs , Prospective Studies , Reminder Systems , Speech Recognition Software
3.
Am J Infect Control ; 41(3): 214-20, 2013 Mar.
Article in English | MEDLINE | ID: mdl-22999773

ABSTRACT

BACKGROUND: Selective methicillin-resistant Staphylococcus aureus (MRSA) screening programs target high-risk populations. To characterize high-risk populations, we conducted this systematic review to identify patient-level factors associated with MRSA carriage at hospital admission. METHODS: Studies were identified in the MEDLINE (1950-2011) and EMBASE (1980-2011) databases. English studies were included if they examined adult populations and used multivariable analyses to examine patient-level factors associated with MRSA carriage at hospital admission. From each study, we abstracted details of the population, the risk factors examined, and the association between the risk factors and MRSA carriage at hospital admission. RESULTS: Our electronic search identified 972 citations, from which we selected 27 studies meeting our inclusion criteria. The patient populations varied across the studies. Ten studies included all patients admitted to hospital, and the others were limited to specific hospital areas. MRSA detection methods also varied across studies. Ten studies obtained specimens from the nares only, whereas other studies also swabbed wounds, catheter sites, and the perianal region. Methods of MRSA diagnoses included polymerase chain reaction tests, cultures in various agar mediums, and latex agglutination tests. Patient age, gender, previous admission to hospital, and previous antibiotic use were the risk factors most commonly examined. The risk factor definition and study methods varied among studies to an extent that precluded meta-analysis. CONCLUSION: The existing literature cannot be used to identify risk factors for MRSA colonization at the time of hospitalization. Future studies should be aware of the differences in the existing literature and aim to develop standardized risk factor definitions.


Subject(s)
Carrier State/epidemiology , Methicillin-Resistant Staphylococcus aureus/isolation & purification , Staphylococcal Infections/epidemiology , Adult , Carrier State/microbiology , Diagnostic Tests, Routine , Female , Humans , Male , Risk Factors , Staphylococcal Infections/microbiology
4.
CMAJ ; 184(1): 37-42, 2012 Jan 10.
Article in English | MEDLINE | ID: mdl-22143235

ABSTRACT

BACKGROUND: The effect of hospital-acquired infection with Clostridium difficile on length of stay in hospital is not yet fully understood. We determined the independent impact of hospital-acquired infection with C. difficile on length of stay in hospital. METHODS: We conducted a retrospective observational cohort study of admissions to hospital between July 1, 2002, and Mar. 31, 2009, at a single academic hospital. We measured the association between infection with hospital-acquired C. difficile and time to discharge from hospital using Kaplan-Meier methods and a Cox multivariable proportional hazards regression model. We controlled for baseline risk of death and accounted for C. difficile as a time-varying effect. RESULTS: Hospital-acquired infection with C. difficile was identified in 1393 of 136,877 admissions to hospital (overall risk 1.02%, 95% confidence interval [CI] 0.97%-1.06%). The crude median length of stay in hospital was greater for patients with hospital-acquired C. difficile (34 d) than for those without C. difficile (8 d). Survival analysis showed that hospital-acquired infection with C. difficile increased the median length of stay in hospital by six days. In adjusted analyses, hospital-acquired C. difficile was significantly associated with time to discharge, modified by baseline risk of death and time to acquisition of C. difficile. The hazard ratio for discharge by day 7 among patients with hospital-acquired C. difficile was 0.55 (95% CI 0.39-0.70) for patients in the lowest decile of baseline risk of death and 0.45 (95% CI 0.32-0.58) for those in the highest decile; for discharge by day 28, the corresponding hazard ratios were 0.74 (95% CI 0.60-0.87) and 0.61 (95% CI 0.53-0.68). INTERPRETATION: Hospital-acquired infection with C. difficile significantly prolonged length of stay in hospital independent of baseline risk of death.
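The survival analysis described above compares time to discharge between groups; a minimal sketch of the underlying Kaplan-Meier estimate of median length of stay is below. The data are hypothetical, and a censored record (event = 0) stands in for a stay whose endpoint was not observed.

```python
# Minimal Kaplan-Meier sketch: estimate the median length of stay from
# (time, event) pairs, where event=1 means discharge was observed and
# event=0 means the stay was censored. All data here are hypothetical.

def kaplan_meier_median(times, events):
    """Return the earliest time at which the survival estimate falls to <= 0.5."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    survival = 1.0
    i = 0
    while i < len(data):
        t = data[i][0]
        # events and total subjects observed at this time point
        deaths = sum(1 for tt, e in data if tt == t and e == 1)
        n_t = sum(1 for tt, e in data if tt == t)
        survival *= (n_at_risk - deaths) / n_at_risk
        n_at_risk -= n_t
        if survival <= 0.5:
            return t
        i += n_t  # skip past all records tied at time t
    return None  # median not reached within follow-up

los_days = [1, 2, 3, 4, 5, 8, 13, 21, 34, 55]
discharged = [1, 1, 1, 0, 1, 1, 1, 1, 1, 1]
print(kaplan_meier_median(los_days, discharged))  # 8
```

The study's Cox model adds covariate adjustment and the time-varying infection indicator on top of this basic estimator.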


Subject(s)
Clostridioides difficile/isolation & purification , Cross Infection/epidemiology , Enterocolitis, Pseudomembranous/epidemiology , Length of Stay/trends , Aged , Cross Infection/microbiology , Cross Infection/transmission , Enterocolitis, Pseudomembranous/microbiology , Enterocolitis, Pseudomembranous/transmission , Female , Hospital Mortality/trends , Humans , Incidence , Male , Middle Aged , Ontario/epidemiology , Retrospective Studies , Risk Factors
5.
J Am Coll Radiol ; 8(6): 428-435.e3, 2011 Jun.
Article in English | MEDLINE | ID: mdl-21636058

ABSTRACT

PURPOSE: Medical imaging is a large and growing component of health care expenditures. To better understand some of the determinants of imaging ordering behavior, the authors analyzed the effect of differential capacity on the imaging workup of patients with acute nonhemorrhagic stroke. METHODS: All patients discharged from a US teaching hospital and a two-campus Canadian teaching hospital between 2001 and 2005 with a diagnosis of acute nonhemorrhagic stroke were identified. Billing data were linked with clinical information systems to identify all imaging studies performed, comorbidities, and patient disposition. RESULTS: Nine hundred eighteen patients at the US hospital and 1,759 patients at the Canadian hospital were included. Patients were similar in age and distribution of comorbid illnesses. The rate of MRI scans at the US hospital was more than twice that at either Canadian campus (95.75 scans per 100 patients vs 41.39 scans per 100 patients). The length of stay was significantly shorter and the inpatient mortality rate significantly lower at the US hospital compared with the Canadian hospital. A multivariate regression analysis demonstrated that only patient age and site (US vs Canada) were significant predictors of MRI use, controlling for patient gender, comorbidities, and use of anticoagulants. CONCLUSIONS: Scanning utilization varied at hospitals with differential access to scanning technologies. There was less frequent use of MRI scanning at hospitals with limited access to this modality. Patient and health system factors are important considerations when interpreting the mechanisms for this variation, its importance, and the potential relationship of imaging use with patient outcomes.


Subject(s)
Angiography/statistics & numerical data , Health Services Accessibility/statistics & numerical data , Stroke/diagnosis , Stroke/epidemiology , Canada/epidemiology , Female , Humans , Male , Pilot Projects , Prevalence , United States/epidemiology
6.
Arch Intern Med ; 170(20): 1804-10, 2010 Nov 08.
Article in English | MEDLINE | ID: mdl-21059973

ABSTRACT

BACKGROUND: The effects of hospital-acquired Clostridium difficile infection (CDI) on patient outcomes are incompletely understood. We conducted this study to determine the independent impact of hospital-acquired CDI on in-hospital mortality after adjusting for the time-varying nature of CDI and baseline mortality risk at hospital admission. METHODS: This retrospective observational study used data from the Ottawa Hospital (Ottawa, Ontario, Canada) data warehouse. Inpatient admissions with a start date after July 1, 2002, and a discharge date before March 31, 2009, were included. Stratified analyses and a Cox multivariate proportional hazards regression model were used to determine if hospital-acquired CDI was associated with time to in-hospital death. RESULTS: A total of 136 877 admissions were included. Hospital-acquired CDI was identified in 1393 admissions (overall risk per admission, 1.02%; 95% confidence interval [CI], 0.97%-1.06%). The risk of hospital-acquired CDI significantly increased as the baseline mortality risk increased: from 0.2% to 2.6% in the lowest to highest deciles of baseline risk. Hospital-acquired CDI significantly increased the absolute risk of in-hospital death across all deciles of baseline risk (pooled absolute increase, 11%; 95% CI, 9%-13%). Cox regression analysis revealed an average 3-fold increase in the hazard of death associated with hospital-acquired CDI (95% CI, 2.4-3.7); this hazard ratio decreased with increasing baseline mortality risk. CONCLUSIONS: Hospital-acquired CDI was independently associated with an increased risk of in-hospital death. Across all baseline risk strata, for every 10 patients acquiring the infection, 1 person died.
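The closing figure above ("for every 10 patients acquiring the infection, 1 person died") follows directly from the pooled absolute risk increase of 11%; the number needed to harm (NNH) is its reciprocal. A small sketch of that arithmetic:

```python
# Number needed to harm (NNH): the reciprocal of the absolute risk
# increase. With the pooled 11% absolute increase in in-hospital death
# reported above, roughly one extra death occurs per 9-10 infections.

def number_needed_to_harm(absolute_risk_increase):
    if not 0 < absolute_risk_increase <= 1:
        raise ValueError("absolute risk increase must be in (0, 1]")
    return 1.0 / absolute_risk_increase

print(f"NNH: {number_needed_to_harm(0.11):.1f}")  # NNH: 9.1
```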


Subject(s)
Clostridioides difficile/isolation & purification , Clostridium Infections/mortality , Cross Infection/epidemiology , Clostridium Infections/microbiology , Cross Infection/microbiology , Hospital Mortality/trends , Hospitalization/statistics & numerical data , Humans , Incidence , Ontario/epidemiology , Retrospective Studies , Risk Factors
7.
J Eval Clin Pract ; 16(5): 947-56, 2010 Oct.
Article in English | MEDLINE | ID: mdl-20553366

ABSTRACT

BACKGROUND: Numerous studies have tried to determine the association between continuity and outcomes. Studies doing so must actually measure continuity. If continuity and outcomes are measured concurrently, their association can only be determined with time-dependent methods. OBJECTIVE: To identify and summarize all methodologically sound studies that measure the association between continuity of care and patient outcomes. METHODS: We searched the MEDLINE database (1950-2008) and performed hand searches to identify studies that tried to associate continuity and outcomes. English studies were included if they: actually measured continuity; determined the association of continuity with patient outcomes; and properly accounted for the relative timing of continuity and outcome measures. RESULTS: A total of 139 English language studies tried to measure the association between continuity and outcomes, but only 18 studies (12.9%) met the methodological criteria. All but two studies measured provider continuity and used health utilization or patient satisfaction as the outcome. Eight of nine high-quality studies found a significant association between increased continuity and decreased health utilization, including hospitalization and emergency visits. Five of seven studies found improved patient satisfaction with increased continuity. CONCLUSIONS: These studies validate the belief that increased provider continuity is associated with improved patient outcomes and satisfaction. Further research is required to determine whether information or management continuity improves outcomes.


Subject(s)
Continuity of Patient Care , Outcome Assessment, Health Care , Adult , Aged , Female , Humans , Male , Middle Aged , Young Adult
8.
Am J Manag Care ; 15(6): 383-91, 2009 Jun.
Article in English | MEDLINE | ID: mdl-19514804

ABSTRACT

OBJECTIVE: To comprehensively describe the populations, interventions, and outcomes of interactive voice response system (IVRS) clinical trials. METHODS: We identified studies using MEDLINE (1950-2008) and EMBASE (1980-2008). We also identified studies using hand searches of the Science Citation Index and the reference lists of included articles. Included were randomized and controlled clinical trials that examined the effect of an IVRS intervention on clinical end points, measures of disease control, process adherence, or quality-of-life measures. Continuous and dichotomous outcomes were meta-analyzed using mean difference and median effects methodology, respectively. RESULTS: Forty studies (n = 106,959 patients) met inclusion criteria. Of these studies, 25 used an IVRS intervention aimed at encouraging adherence with recommended tests, treatments, or behaviors; the remaining 15 used an IVRS for chronic disease management. Three studies reported clinical end points, which could not be statistically pooled. In 6 studies that reported objective clinical measures of disease control (glycosylated hemoglobin, total cholesterol, and serum glucose), the IVRS was associated with nonsignificant improvements. In 14 studies that measured objective process adherence outcomes, the median effect was 7.9% (25th-75th percentile: 2.8%, 19.5%). For the 16 studies that assessed patient-reported measures of disease control and the 11 studies that assessed patient-reported process adherence outcomes, approximately one-third of the outcomes significantly favored the IVRS group. CONCLUSION: IVRS interventions, which enable patients to interact with computer databases via telephone, have shown a significant benefit in adherence to various processes of care. Future IVRS studies should include clinically relevant outcomes.


Subject(s)
Ambulatory Care , Speech Recognition Software , User-Computer Interface , Humans , Quality Assurance, Health Care , Quality of Health Care
9.
J Clin Epidemiol ; 62(12): 1306-15, 2009 Dec.
Article in English | MEDLINE | ID: mdl-19545974

ABSTRACT

OBJECTIVE: To meaningfully interpret trials using surrogate outcomes, one must translate changes in the surrogate outcome to changes in the clinical outcome. Formulae to do this are uncommon because they require primary data from multiple randomized trials that measure both the surrogate and clinical outcome. STUDY DESIGN AND SETTING: We developed a model to translate changes in anticoagulation control (the surrogate outcome) into hemorrhagic and thromboembolic event rates (the clinical outcome). The model used Monte Carlo simulation and association measures between the surrogate and the clinical outcome from a meta-analysis. In randomized trials having interventions that improved anticoagulation control, we used the model to predict and statistically compare event rates between the study groups. RESULTS: Seven randomized trials found significantly improved anticoagulation control (mean increase in proportion of time in therapeutic range: 8.4%; range: 1.8-18%). These improvements in anticoagulation control translated to small decreases in hemorrhagic and thromboembolic events (mean: 0.66%/yr; range: 0.13-1.42%). These changes were never statistically significant. CONCLUSION: Monte Carlo modeling can be used to translate surrogate outcomes into clinical outcomes. Statistically significant changes in anticoagulation control did not translate to significant differences in clinical outcomes. This methodology could be applied to other areas in medicine to assess surrogate outcomes.
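The core idea above, translating a change in a surrogate outcome (anticoagulation control) into an expected change in clinical event rates via Monte Carlo simulation, can be sketched as follows. The in-range and out-of-range event rates below are illustrative assumptions, not values from the study; only the +8.4% mean improvement in time in therapeutic range (TTR) comes from the abstract.

```python
import random

# Hypothetical association between time in therapeutic range (TTR) and
# annual adverse-event rate. The two rates are assumed for illustration.
RATE_IN_RANGE = 0.02   # events/yr while INR is therapeutic (assumption)
RATE_OUT_RANGE = 0.08  # events/yr while INR is out of range (assumption)

def expected_event_rate(ttr):
    # Expected annual rate is a time-weighted mix of the two rates.
    return ttr * RATE_IN_RANGE + (1 - ttr) * RATE_OUT_RANGE

def simulate(mean_ttr, n=100_000, sd=0.10, rng=random.Random(42)):
    # Draw per-patient TTRs around the group mean, clip to [0, 1],
    # and average the implied event rates.
    total = 0.0
    for _ in range(n):
        ttr = min(1.0, max(0.0, rng.gauss(mean_ttr, sd)))
        total += expected_event_rate(ttr)
    return total / n

baseline = simulate(0.60)
improved = simulate(0.684)  # +8.4% TTR, the mean improvement reported above
print(f"absolute rate reduction: {baseline - improved:.4%}/yr")
```

Under these assumed rates the 8.4% TTR gain translates to an absolute reduction of roughly half a percentage point per year, which is the same order as the small, non-significant changes the study reports.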


Subject(s)
Monte Carlo Method , Randomized Controlled Trials as Topic/methods , Adult , Aged , Anticoagulants/adverse effects , Anticoagulants/therapeutic use , Hemorrhage/chemically induced , Humans , International Normalized Ratio , Middle Aged , Thromboembolism/prevention & control , Treatment Outcome , Young Adult
10.
CMAJ ; 180(9): 927-33, 2009 Apr 28.
Article in English | MEDLINE | ID: mdl-19398739

ABSTRACT

BACKGROUND: Monitoring oral anticoagulants is logistically challenging for both patients and medical staff. We evaluated the effect of adding an interactive voice response system to computerized decision support for oral anticoagulant management. METHODS: We developed an interactive voice response system to communicate to patients the results of international normalized ratio testing and their dosage schedules for anticoagulation therapy. The system also reminded patients of upcoming and missed appointments for blood tests. We recruited patients whose anticoagulation control was stable after at least 3 months of warfarin therapy. We prospectively examined clinical data and outcomes for these patients for an intervention period of at least 3 months. We also collected retrospective data for each patient for the 3 months before study enrolment. RESULTS: We recruited 226 patients between Nov. 23, 2006, and Aug. 1, 2007. The mean duration of the intervention period (prospective data collection) was 4.2 months. Anticoagulation control was similar for the periods during and preceding the intervention (mean time within the therapeutic range 80.3%, 95% confidence interval [CI] 77.5% to 83.1% v. 79.9%, 95% CI 77.3% to 82.6%). The interactive voice response system delivered 1211 (77.8%) of 1557 scheduled dosage messages, with no further input required from clinic staff. The most common reason for clinic staff having to deliver the remaining messages (accounting for 143 [9.2%] of all messages) was an international normalized ratio that was excessively high or low (i.e., 0.5 or more outside the therapeutic range). When given the option, 76.6% of patients (164/214) chose to continue with the interactive voice response system for management of their anticoagulation after the study was completed. The system reduced staff workload for monitoring anticoagulation therapy by 48 min/wk, a 33% reduction from the baseline of 2.4 hours per week. INTERPRETATION: Interactive voice response systems have a potential role in improving the monitoring of patients taking oral anticoagulants. Further work is required to determine the generalizability and cost-effectiveness of these results.


Subject(s)
Anticoagulants/administration & dosage , Decision Support Techniques , Drug Monitoring/methods , Self Care/methods , Warfarin/administration & dosage , Administration, Oral , Adult , Aged , Aged, 80 and over , Case Management , Drug Administration Schedule , Drug Therapy, Computer-Assisted , Feasibility Studies , Female , Humans , Male , Middle Aged , Point-of-Care Systems , Prospective Studies , Retrospective Studies , Treatment Outcome , Young Adult
11.
CMAJ ; 179(3): 235-44, 2008 Jul 29.
Article in English | MEDLINE | ID: mdl-18663203

ABSTRACT

BACKGROUND: Patients taking oral anticoagulant therapy balance the risks of hemorrhage and thromboembolism. We sought to determine the association between anticoagulation intensity and the risk of hemorrhagic and thromboembolic events. We also sought to determine how under- or overanticoagulation would influence patient outcomes. METHODS: We reviewed the MEDLINE, EMBASE, Cochrane Central Register of Controlled Trials and CINAHL databases to identify studies involving patients taking anticoagulants that reported person-years of observation and the number of hemorrhages or thromboemboli in 3 or more discrete ranges of international normalized ratios. We estimated the overall relative and absolute risks of events specific to anticoagulation intensity. RESULTS: We included 19 studies. The risk of hemorrhage increased significantly at high international normalized ratios. Compared with the therapeutic ratio of 2-3, the relative risk (RR) of hemorrhage (with 95% confidence interval [CI]) was 2.7 (1.8-3.9; p < 0.01) at a ratio of 3-5 and 21.8 (12.1-39.4; p < 0.01) at a ratio greater than 5. The risk of thromboemboli increased significantly at ratios less than 2, with a relative risk of 3.5 (95% CI 2.8-4.4; p < 0.01). The risk of hemorrhagic or thromboembolic events was lower at ratios of 3-5 (RR 1.8, 95% CI 1.2-2.6) than at ratios of less than 2 (RR 2.4, 95% CI 1.9-3.1; p = 0.10). We found that a ratio of 2-3 had the lowest absolute risk (AR) of events (AR 4.3%/yr, 95% CI 3.0%-6.3%). CONCLUSIONS: The risks of hemorrhage and thromboemboli are minimized at international normalized ratios of 2-3. Ratios that are moderately higher than this therapeutic range appear safe and more effective than subtherapeutic ratios.


Subject(s)
Anticoagulants/administration & dosage , Drug Prescriptions , Hemorrhage/prevention & control , Thromboembolism/drug therapy , Administration, Oral , Dose-Response Relationship, Drug , Humans , International Normalized Ratio , Prognosis , Thromboembolism/blood
12.
CMAJ ; 176(11): 1589-94, 2007 May 22.
Article in English | MEDLINE | ID: mdl-17515585

ABSTRACT

BACKGROUND: Patients taking anticoagulants orally over the long term have international normalized ratios (INRs) outside the individual therapeutic range more than one-third of the time. Improved anticoagulation control will reduce hemorrhagic and thromboembolic event rates. To gauge the potential effect of improved anticoagulation control, we undertook to determine the proportion of anticoagulant-associated events that occur when INRs are outside the therapeutic range. METHODS: We conducted a meta-analysis of all studies that assigned hemorrhagic and thromboembolic events in patients taking anticoagulants to discrete INR ranges. We identified studies using the MEDLINE (1966-2006) and EMBASE (1980-2006) databases. We included studies reported in English if the majority of patients taking oral anticoagulants had an INR range with a lower limit between 1.8 and 2 and an upper limit between 3 and 3.5, and their INR at the time of the hemorrhagic or thromboembolic event was recorded. RESULTS: The final analysis included results from 45 studies (23 that reported both hemorrhages and thromboemboli; 14 that reported hemorrhages only; and 8, thromboemboli only) involving a median of 208 patients (limits of interquartile range [25th-75th percentile] 131-523 subjects; total n = 71 065). Of these studies, 64% were conducted at community practices; the remainder, at anticoagulation clinics. About 69% of the studies were classed as having moderate or high quality. Overall, 44% (95% confidence interval [CI] 39%-49%) of hemorrhages occurred when INRs were above the therapeutic range, and 48% (95% CI 41%-55%) of thromboemboli occurred when INRs were below it. The mean proportion of events that occurred while the patient's INR was outside the therapeutic range was greater for studies with a short mean follow-up (< 1 yr). Between-study heterogeneity was significant (p < 0.001). INTERPRETATION: Improved anticoagulation control could decrease the likelihood of almost half of all anticoagulant-associated adverse events.


Subject(s)
Anticoagulants/adverse effects , Hemorrhage/chemically induced , Thromboembolism/chemically induced , Anticoagulants/blood , Hemorrhage/blood , Humans , International Normalized Ratio , Reference Values , Thromboembolism/blood , Treatment Outcome
13.
Chest ; 131(5): 1508-15, 2007 May.
Article in English | MEDLINE | ID: mdl-17317732

ABSTRACT

BACKGROUND: On average, patients receiving therapy with oral anticoagulants (OACs) in the community are in the therapeutic range only 55% of the time. Anticoagulation control strongly influences the risk of hemorrhagic and thromboembolic events in such patients. However, not all anticoagulation-associated events are attributable to poor anticoagulation control, nor do all hemorrhagic or thromboembolic events occur in anticoagulated patients. OBJECTIVE: To measure the proportion of serious hemorrhagic and thromboembolic events that would be avoided if anticoagulation control were perfect. METHODS: A retrospective cohort study of eastern Ontario using population-based administrative databases. Anticoagulation control was determined for each day of OAC exposure using linear interpolation. Incident hemorrhagic or thromboembolic hospitalizations for control and OAC patients were identified. Hemorrhages and thromboemboli in OAC patients were deemed to be avoidable if they occurred at international normalized ratios of > 3 and < 2, respectively. RESULTS: The study included > 183,000 patient-years of observation with 6,400 patient-years of OAC exposure. Anticoagulation control could be determined for 51.5% of OAC exposure time. Control patients had hemorrhagic and thromboembolic event rates of 1.8% and 1.5% per year, respectively. A total of 10,020 people were exposed to OACs, and they spent 14.2% and 26.7% of the time, respectively, with excessively high and low anticoagulation intensity. Excessively high anticoagulation intensity explained 25.6% (95% confidence interval [CI], 19.4 to 31.7) and 2.0% (95% CI, 1.5 to 2.5), respectively, of all serious hemorrhages in the anticoagulated and entire populations. Excessively low anticoagulation intensity explained 11.1% (95% CI, 4.4 to 17.7) and 1.1% (95% CI, 0.7 to 1.6) of all thromboemboli, respectively. CONCLUSIONS: Our study showed that extreme anticoagulation intensity significantly impacted the health of the population. Improving anticoagulation control will have significant effects on the incidence of serious hemorrhagic and thromboembolic events in both the anticoagulated and entire populations.


Subject(s)
Anticoagulants/adverse effects , Anticoagulants/therapeutic use , Hemorrhage/chemically induced , Hemorrhage/prevention & control , Thromboembolism/prevention & control , Aged , Aged, 80 and over , Cohort Studies , Confidence Intervals , Cost of Illness , Dose-Response Relationship, Drug , Female , Hemorrhage/epidemiology , Humans , Incidence , International Normalized Ratio , Male , Ontario/epidemiology , Retrospective Studies , Risk Assessment , Thromboembolism/epidemiology
14.
Thromb Res ; 119(6): 705-14, 2007.
Article in English | MEDLINE | ID: mdl-16844204

ABSTRACT

BACKGROUND: For patients taking oral anticoagulants (OAC), the proportion of time spent in the therapeutic range is strongly associated with bleeding and thromboembolic risk. Previous studies examining OAC control may not generalize because the patient population was select or INR capture was incomplete. OBJECTIVES: Measure OAC control for an entire population of elderly people and determine patient factors associated with OAC control. PATIENTS: People in eastern Ontario without valve replacement aged 65 years or older who were treated with warfarin between 1 September 1999 and 1 September 2000. DESIGN: Retrospective cohort study using population-based administrative databases. OAC control was measured as the proportion of days in therapeutic range (PDTR), defined as the number of days with the INR between 2 and 3 divided by the total number of days of observation. Linear interpolation was used to determine INR levels between measures. Negative binomial regression was used to identify patient factors independently associated with PDTR. We also determined which factors were associated with the proportion of days with a critically low (<1.5) or critically high (>/=5) INR. RESULTS: 7179 people were followed for a total of 3238 person-years. 15% of people were hospitalized during the study. Overall, PDTR was 59.2% (95% CI 59.1%-59.2%). Independent of all other significant factors, hospitalization was associated with a 15% decrease in the PDTR (rate ratio 0.85, 95% CI 0.83-0.87). Hospitalization was also independently associated with a greater proportion of time with a critically low INR (rate ratio 1.68, 95% CI 1.51-1.88) and a critically high INR (rate ratio 1.70, 95% CI 1.38-2.08). CONCLUSIONS: Elderly people in eastern Ontario taking warfarin were therapeutic 59.2% of the time. Independent of other patient factors, patients who are hospitalized have the greatest risk of poor anticoagulation control. Anticoagulation control for patients who are hospitalized should be reviewed to determine whether and how it could be improved.
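The linear-interpolation approach used above to estimate the proportion of days in therapeutic range (often called the Rosendaal method) can be sketched in a few lines: INR values between successive measurements are interpolated linearly, and each day is classified as therapeutic if its interpolated INR falls in [2, 3]. The measurement dates and INR values below are hypothetical.

```python
from datetime import date

def pdtr(measurements, low=2.0, high=3.0):
    """Proportion of days in therapeutic range.

    measurements: list of (date, inr) pairs, sorted by date.
    Each day between consecutive measurements gets a linearly
    interpolated INR; days with low <= INR <= high count as therapeutic.
    """
    in_range = total = 0
    for (d0, inr0), (d1, inr1) in zip(measurements, measurements[1:]):
        span = (d1 - d0).days
        for day in range(span):
            inr = inr0 + (inr1 - inr0) * day / span
            total += 1
            if low <= inr <= high:
                in_range += 1
    return in_range / total

# Hypothetical INR history: subtherapeutic, then therapeutic, then high.
obs = [
    (date(2000, 1, 1), 1.5),
    (date(2000, 1, 11), 2.5),
    (date(2000, 1, 21), 3.5),
]
print(f"PDTR: {pdtr(obs):.0%}")  # PDTR: 55%
```

The same daily interpolated values also support the critically low (<1.5) and critically high (>=5) INR proportions reported in the study, by swapping the thresholds.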


Subject(s)
Anticoagulants/administration & dosage , Hospitalization , Warfarin/administration & dosage , Aged , Anticoagulants/therapeutic use , Cohort Studies , Databases, Factual , Drug Monitoring/statistics & numerical data , Female , Humans , International Normalized Ratio , Male , Retrospective Studies , Time Factors , Warfarin/therapeutic use
15.
Chest ; 129(5): 1155-66, 2006 May.
Article in English | MEDLINE | ID: mdl-16685005

ABSTRACT

BACKGROUND: For patients receiving therapy with oral anticoagulants (OACs), the proportion of time spent in the therapeutic range (ie, anticoagulation control) is strongly associated with bleeding and thromboembolic risk. The effect of study-level factors, especially study setting, on anticoagulation control is unknown. OBJECTIVES: Describe anticoagulation control achieved in the published literature. We also used metaregressive techniques to determine which study-level factors significantly influenced anticoagulation control. STUDIES: All published randomized or cohort studies that measured international normalized ratios (INRs) serially in anticoagulated patients and reported the proportion of time spent within a therapeutic range whose lower limit was between 1.8 and 2.0 and whose upper limit was between 3.0 and 3.5. RESULTS: We identified 67 studies with 123 patient groups having 50,208 patients followed for a total of 57,154.7 patient-years. A total of 68.3% of groups were from anticoagulation clinics, 7.3% were from clinical trials, and 24.4% were from community practices. Overall, patients were therapeutic 63.6% of the time (95% confidence interval [CI], 61.6 to 65.6). In the metaregression model, study setting had the greatest effect on anticoagulation control, with studies in community practices having significantly lower control than either anticoagulation clinics or clinical trials (-12.2%; 95% CI, -19.5 to -4.8; p < 0.0001). Self-management was associated with a significant improvement in time spent in the therapeutic range (+7.0%; 95% CI, 0.7 to 13.3; p = 0.03). CONCLUSIONS: Patients who have received anticoagulation therapy spend a significant proportion of their time with an INR out of the therapeutic range. Patients from community practices showed significantly worse anticoagulation control than those from anticoagulation clinics or clinical trials. This should be considered when interpreting the results of, and generalizing from, studies involving OACs.


Subject(s)
Anticoagulants/therapeutic use , Blood Coagulation/drug effects , Pulmonary Embolism/prevention & control , Randomized Controlled Trials as Topic/methods , Venous Thrombosis/prevention & control , Confidence Intervals , Humans , Pulmonary Embolism/blood , Treatment Outcome , Venous Thrombosis/blood