Results 1 - 17 of 17
1.
Res Synth Methods ; 12(1): 106-117, 2021 Jan.
Article in English | MEDLINE | ID: mdl-32657532

ABSTRACT

INTRODUCTION: Interrupted Time Series (ITS) studies may be used to assess the impact of an interruption, such as an intervention or exposure. The data from such studies are particularly amenable to visual display and, when clearly depicted, can readily show the short- and long-term impact of an interruption. Further, well-constructed graphs allow data to be extracted using digitizing software, which can facilitate their inclusion in systematic reviews and meta-analyses. AIM: We provide recommendations for graphing ITS data, examine the properties of plots presented in ITS studies, and provide examples employing our recommendations. METHODS AND RESULTS: Graphing recommendations from seminal data visualization resources were adapted for use with ITS studies. The adapted recommendations cover plotting of data points, trend lines, interruptions, additional lines and general graph components. We assessed whether 217 graphs from recently published (2013-2017) ITS studies met our recommendations and found that 130 graphs (60%) had clearly distinct data points, 100 (46%) had trend lines, and 161 (74%) had a clearly defined interruption. Accurate data extraction (requiring distinct points that align with axis tick marks and labels that allow the points to be interpreted) was possible in only 72 (33%) graphs. CONCLUSION: We found that many ITS graphs did not meet our recommendations and could be improved with simple changes. Our proposed recommendations aim to achieve greater standardization and improvement in the display of ITS data, and facilitate re-use of the data in systematic reviews and meta-analyses.
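The graphing recommendations above can be sketched in code. The following is a minimal illustration (with simulated data, not data from the paper) of an ITS plot that meets the key criteria the authors assess: clearly distinct data points, separate pre- and post-interruption trend lines, and a clearly marked interruption.

```python
# Minimal ITS plot sketch: distinct points, pre/post trend lines, and a
# marked interruption. All data here are simulated for illustration only.
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
t = np.arange(24)                  # e.g. 24 monthly observations
interruption = 12                  # intervention introduced at month 12
pre = t < interruption
y = 10 + 0.2 * t - 3.0 * (~pre) + rng.normal(0, 0.5, t.size)

fig, ax = plt.subplots()
ax.plot(t, y, "o", color="black", label="Observed")       # distinct data points
for mask, label in [(pre, "Pre-interruption trend"),
                    (~pre, "Post-interruption trend")]:
    slope, intercept = np.polyfit(t[mask], y[mask], 1)
    ax.plot(t[mask], intercept + slope * t[mask], "-", label=label)
ax.axvline(interruption - 0.5, linestyle="--", label="Interruption")
ax.set_xlabel("Month")             # labelled axes allow data re-extraction
ax.set_ylabel("Outcome rate")
ax.legend()
fig.savefig("its_plot.png", dpi=150)
```

Distinct markers aligned with labelled axis ticks are what make later extraction with digitizing software (and hence re-use in meta-analysis) feasible.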


Subject(s)
Data Visualization , Interrupted Time Series Analysis , Computer Graphics , Humans , Interrupted Time Series Analysis/standards , Interrupted Time Series Analysis/statistics & numerical data , Meta-Analysis as Topic , Software , Systematic Reviews as Topic
2.
Pediatrics ; 146(3)2020 09.
Article in English | MEDLINE | ID: mdl-32817268

ABSTRACT

BACKGROUND: Although required for healing, sleep is often disrupted during hospitalization. Blood pressure (BP) monitoring can be especially disruptive for pediatric inpatients and has few clinical indications. Our aim in this pilot study was to reduce unnecessary overnight BP monitoring and improve sleep for pediatric inpatients. METHODS: The intervention in June 2018 involved clinician education sessions and updated electronic health record (EHR) orders that enabled the forgoing of overnight BP checks. The postintervention period from July 2018 to May 2019 examined patient-caregiver surveys as outcome measures. These surveys measured inpatient sleep and overnight disruptions and were adopted from validated surveys: the Patient Sleep Questionnaire, expanded Brief Infant Sleep Questionnaire, and Potential Hospital Sleep Disruptions and Noises Questionnaire. Uptake of new sleep-friendly EHR orders was a process measure. Reported patient care escalations served as a balancing measure. RESULTS: Interrupted time series analysis of EHR orders (npre = 493; npost = 1472) showed an increase in intercept for the proportion of patients forgoing overnight BP postintervention (+50.7%; 95% confidence interval 41.2% to 60.3%; P < .001) and a subsequent decrease in slope each week (-0.16%; 95% confidence interval -0.32% to -0.01%; P = .037). Statistical process control of surveys (npre = 263; npost = 131) showed a significant increase in sleep duration for patients older than 2, and nighttime disruptions by clinicians decreased by 19% (P < .001). Annual estimated cost savings were $15 842.01. No major adverse events in patients forgoing BP were reported. CONCLUSIONS: A pilot study combining EHR changes and clinician education safely decreased overnight BP checks, increased pediatric inpatient sleep duration, and reduced nighttime disruptions by clinicians.


Subject(s)
Blood Pressure Determination/standards , Child, Hospitalized , Health Personnel/standards , Interrupted Time Series Analysis/standards , Quality Improvement/standards , Sleep/physiology , Adolescent , Blood Pressure Determination/psychology , Blood Pressure Determination/trends , Caregivers/education , Caregivers/standards , Caregivers/trends , Child , Child, Hospitalized/psychology , Child, Preschool , Electronic Health Records/standards , Electronic Health Records/trends , Female , Health Personnel/education , Health Personnel/trends , Humans , Infant , Infant, Newborn , Interrupted Time Series Analysis/trends , Male , Pilot Projects , Prospective Studies , Quality Improvement/trends
3.
J Clin Epidemiol ; 122: 1-11, 2020 06.
Article in English | MEDLINE | ID: mdl-32109503

ABSTRACT

OBJECTIVES: Interrupted time series (ITS) designs are frequently used in public health to examine whether an intervention or exposure has influenced health outcomes. Few reviews have been undertaken to examine the design characteristics, statistical methods, and completeness of reporting of published ITS studies. STUDY DESIGN AND SETTING: We used stratified random sampling to identify 200 ITS studies that evaluated public health interventions or exposures from PubMed (2013-2017). Study characteristics, details of statistical models and estimation methods used, effect metrics, and parameter estimates were extracted. From the 200 studies, 230 time series were examined. RESULTS: Common statistical methods used were linear regression (31%, 72/230) and autoregressive integrated moving average (19%, 43/230). In 17% (40/230) of the series, we could not determine the statistical method used. Autocorrelation was acknowledged in 63% (145/230) of the series. An estimate of the autocorrelation coefficient was given for only 1% of the series (3/230). Measures of precision were reported for 63% of effect measures (541/852). CONCLUSION: Many aspects of the design, methods, analysis, and reporting of ITS studies can be improved, particularly description of the statistical methods and approaches to adjust for and estimate autocorrelation. More guidance on the conduct and reporting of ITS studies is needed to improve this study design.


Subject(s)
Guidelines as Topic , Interrupted Time Series Analysis/standards , Public Health/statistics & numerical data , Publishing/standards , Research Design/standards , Humans , Linear Models , Models, Statistical
4.
Arthritis Care Res (Hoboken) ; 72(2): 283-291, 2020 02.
Article in English | MEDLINE | ID: mdl-30740931

ABSTRACT

OBJECTIVE: Applying treat-to-target strategies in the care of patients with rheumatoid arthritis (RA) is critical for improving outcomes, yet electronic health records (EHRs) have few features to facilitate this goal. We undertook this study to evaluate the effect of 3 health information technology (health-IT) initiatives on the performance of RA disease activity measures and outcomes in an academic rheumatology clinic. METHODS: We implemented the 3 following initiatives designed to facilitate performance of the Clinical Disease Activity Index (CDAI): an EHR flowsheet to input scores, peer performance reports, and an EHR SmartForm including a CDAI calculator. We performed an interrupted time-series trial to assess effects on the proportion of RA visits with a documented CDAI. Mean CDAI scores before and after the last initiative were compared using t-tests. Additionally, we measured physician satisfaction with the initiatives. RESULTS: We included data from 995 patients with 8,040 encounters between 2012 and 2017. Over this period, electronic capture of CDAI scores increased from 0% to 64%. Performance remained stable after peer reporting and the SmartForm were introduced. We observed no meaningful changes in disease activity levels. However, physician satisfaction increased after SmartForm implementation. CONCLUSION: Modifications to the EHR, provider culture, and clinical workflows effectively improved capture of RA disease activity scores and physician satisfaction, but parallel gains in disease activity levels were missing. This study illustrates how a series of health-IT initiatives can evolve to enable sustained changes in practice. However, capture of RA outcomes alone may not be sufficient to improve levels of disease activity without a comprehensive treat-to-target program.


Subject(s)
Arthritis, Rheumatoid/diagnosis , Disease Progression , Electronic Health Records/trends , Health Personnel/trends , Interrupted Time Series Analysis/trends , Quality Improvement/trends , Adult , Aged , Arthritis, Rheumatoid/epidemiology , Electronic Health Records/standards , Female , Health Personnel/standards , Humans , Interrupted Time Series Analysis/standards , Male , Middle Aged , Quality Improvement/standards
5.
BMC Med Res Methodol ; 19(1): 137, 2019 07 04.
Article in English | MEDLINE | ID: mdl-31272382

ABSTRACT

BACKGROUND: Randomised controlled trials (RCTs) are considered the gold standard when evaluating the causal effects of healthcare interventions. When RCTs cannot be used (e.g. for ethical reasons), the interrupted time series (ITS) design is a possible alternative, and one of the strongest quasi-experimental designs. The aim of this methodological study was to describe how ITS designs were being used in the healthcare setting, their design characteristics, and their reporting. METHODS: We searched MEDLINE for reports of ITS designs published in 2015 which had a minimum of two data points collected pre-intervention and one post-intervention. There was no restriction on participants, language of study, or type of outcome. Data were summarised using appropriate summary statistics. RESULTS: One hundred and sixteen studies were included. The interventions evaluated were mainly programs (41; 35%) and policies (32; 28%). Data were most often collected at monthly intervals (74 studies; 64%). Of the 115 studies that reported an analysis, the most common method was segmented regression (78%), 55% considered autocorrelation, and only seven reported a sample size calculation. Intervention effects were reported as change in slope (84%) and change in level (70%), and 21% reported long-term change in levels. CONCLUSIONS: This methodological study identified problems in the reporting of design features and results of ITS studies, highlighting the need for formal reporting guidelines and further methodological work.


Subject(s)
Interrupted Time Series Analysis/standards , Outcome Assessment, Health Care/standards , Research Design/standards , Research Report/standards , Humans , Interrupted Time Series Analysis/methods , Interrupted Time Series Analysis/statistics & numerical data , MEDLINE/standards , MEDLINE/statistics & numerical data , Outcome Assessment, Health Care/methods , Outcome Assessment, Health Care/statistics & numerical data , Regression Analysis , Research Design/statistics & numerical data
6.
PLoS Med ; 16(6): e1002825, 2019 06.
Article in English | MEDLINE | ID: mdl-31173597

ABSTRACT

BACKGROUND: Primary care antimicrobial stewardship interventions can improve antimicrobial prescribing, but there is less evidence that they reduce rates of resistant infection. This study examined changes in broad-spectrum antimicrobial prescribing in the community and resistance in people admitted to hospital with community-associated coliform bacteraemia associated with a primary care stewardship intervention. METHODS AND FINDINGS: Segmented regression analysis of data on all patients registered with a general practitioner in the National Health Service (NHS) Tayside region in the east of Scotland, UK, from 1 January 2005 to 31 December 2015 was performed, examining associations between a primary care antimicrobial stewardship intervention in 2009 and primary care prescribing of fluoroquinolones, cephalosporins, and co-amoxiclav and resistance to the same three antimicrobials/classes among community-associated coliform bacteraemia. Prescribing outcomes were the rate per 1,000 population prescribed each antimicrobial/class per quarter. Resistance outcomes were proportion of community-associated (first 2 days of hospital admission) coliform (Escherichia coli, Proteus spp., or Klebsiella spp.) bacteraemia among adult (18+ years) patients resistant to each antimicrobial/class. 11.4% of 3,442,205 oral antimicrobial prescriptions dispensed in primary care over the study period were for targeted antimicrobials. There were large, statistically significant reductions in prescribing at 1 year postintervention that were larger by 3 years postintervention when the relative reduction was -68.8% (95% CI -76.3 to -62.1) and the absolute reduction -6.3 (-7.6 to -5.2) people exposed per 1,000 population per quarter for fluoroquinolones; relative -74.0% (-80.3 to -67.9) and absolute reduction -6.1 (-7.2 to -5.2) for cephalosporins; and relative -62.3% (-66.9 to -58.1) and absolute reduction -6.8 (-7.7 to -6.0) for co-amoxiclav, all compared to their prior trends. 
There were 2,143 eligible bacteraemia episodes involving 2,004 patients over the study period (mean age 73.7 [SD 14.8] years; 51.4% women). There was no increase in community-associated coliform bacteraemia admissions associated with reduced community broad-spectrum antimicrobial use. Resistance to targeted antimicrobials reduced by 3.5 years postintervention compared to prior trends, but this was not statistically significant for co-amoxiclav. Relative and absolute changes were -34.7% (95% CI -52.3 to -10.6) and -63.5 (-131.8 to -12.8) resistant bacteraemia per 1,000 bacteraemia per quarter for fluoroquinolones; -48.3% (-62.7 to -32.3) and -153.1 (-255.7 to -77.0) for cephalosporins; and -17.8% (-47.1 to 20.8) and -63.6 (-206.4 to 42.4) for co-amoxiclav, respectively. Overall, there was reversal of a previously rising rate of fluoroquinolone resistance and flattening of previously rising rates of cephalosporin and co-amoxiclav resistance. The limitations of this study include that associations are not definitive evidence of causation and that potential effects of underlying secular trends in the postintervention period and/or of other interventions occurring simultaneously cannot be definitively excluded. CONCLUSIONS: In this population-based study in Scotland, compared to prior trends, there were very large reductions in community broad-spectrum antimicrobial use associated with the stewardship intervention. In contrast, changes in resistance among coliform bacteraemia were more modest. Prevention of resistance through judicious use of new antimicrobials may be more effective than trying to reverse resistance that has become established.


Subject(s)
Anti-Bacterial Agents/therapeutic use , Antimicrobial Stewardship/standards , Drug Resistance, Bacterial/drug effects , Enterobacteriaceae/drug effects , Interrupted Time Series Analysis/standards , Physicians, Primary Care/standards , Anti-Bacterial Agents/pharmacology , Antimicrobial Stewardship/methods , Drug Prescriptions/standards , Drug Resistance, Bacterial/physiology , Enterobacteriaceae/physiology , Enterobacteriaceae Infections/drug therapy , Enterobacteriaceae Infections/epidemiology , Humans , Interrupted Time Series Analysis/methods , Physicians, Primary Care/education , Population Surveillance , Primary Health Care/methods , Primary Health Care/standards , Scotland/epidemiology
7.
PLoS One ; 13(3): e0193902, 2018.
Article in English | MEDLINE | ID: mdl-29538401

ABSTRACT

BACKGROUND: In 2009, the Western Australian (WA) Government introduced the Four-Hour Rule (FHR) program. The policy stated that most patients presenting to Emergency Departments (EDs) were to be seen and either admitted, transferred, or discharged within 4 hours. This study utilised de-identified data from five participating hospitals, before and after FHR implementation, to assess the impact of the FHR on several areas of ED functioning. METHODS: A state (WA) population-based intervention study design, using longitudinal data obtained from administrative health databases via record linkage methodology, and interrupted time series analysis technique. FINDINGS: There were 3,214,802 ED presentations, corresponding to 1,203,513 ED patients. After the FHR implementation, access block for patients admitted through ED for all five sites showed a significant reduction of up to 13.2% (Rate Ratio 0.868, 95%CI 0.814, 0.925) per quarter. Rate of ED attendances for most hospitals continued to rise throughout the entire study period and were unaffected by the FHR, except for one hospital. Pattern of change in ED re-attendance rate post-FHR was similar to pre-FHR, but the trend reduced for two hospitals. ED occupancy was reduced by 6.2% per quarter post-FHR for the most 'crowded' ED. ED length of stay and ED efficiency improved in four hospitals and deteriorated in one hospital. Time to being seen by ED clinician and Did-Not-Wait rate improved for some hospitals. Admission rates in post-FHR increased, by up to 1% per quarter, for two hospitals where the pre-FHR trend was decreasing. CONCLUSIONS: The FHR had a consistent effect on 'flow' measures: significantly reducing ED overcrowding and access block and enhancing ED efficiency. Time-based outcome measures mostly improved with the FHR. There is some evidence of increased ED attendance, but no evidence of increased ED re-attendance. Effects on patient disposition status were mixed. 
Overall, this reflects the value of investing resources into the ED/hospital system to improve efficiency and patient experience. Further research is required to illuminate the exact mechanisms of the effects of FHR on the ED and hospital functioning across Australia.


Subject(s)
Hospitals/standards , Australia , Crowding , Databases, Factual , Emergency Service, Hospital/trends , Female , Hospitalization , Humans , Information Storage and Retrieval/standards , Interrupted Time Series Analysis/standards , Length of Stay , Male , Patient Admission/standards , Patient Discharge/standards , Time Factors
8.
Int J Clin Pharm ; 40(1): 15-19, 2018 Feb.
Article in English | MEDLINE | ID: mdl-29170978

ABSTRACT

Background The use of STOPP-START criteria during hospitalization reduced inappropriate medications in randomized controlled trials. Objective To evaluate whether the implementation of a screening tool (short version of STOPP-START criteria) in routine geriatric practice reduces potentially inappropriate medications (PIM) and potential prescribing omissions (PPO) at discharge. Methods We conducted a retrospective interrupted time series analysis. Four periods were selected between February and September 2013: (1) baseline situation; (2) screening tool made available to physicians; (3) 3 months later; (4) weekly meetings with junior doctors and a clinical pharmacist to review treatments according to the tool. The primary outcome was the proportion of patients with prescribing improvement from admission to discharge. Results We included 120 patients (median age 85 years). The prevalence of PIMs and PPOs on admission was 56% (67/120) and 51% (61/120) respectively. Hospitalization improved prescribing appropriateness in 49% of patients with PIMs (33/67) and 39% of patients with PPOs (24/61). The use of the screening tool by way of multidisciplinary meetings was a predictor of PIMs reduction at discharge. Conclusions The sole distribution of a screening tool in a geriatric unit did not reduce PIMs and PPOs. Multidisciplinary meetings to review treatments should be encouraged.


Subject(s)
Drug Prescriptions/standards , Hospitalization/trends , Inappropriate Prescribing/prevention & control , Inappropriate Prescribing/trends , Interrupted Time Series Analysis/standards , Interrupted Time Series Analysis/trends , Aged , Aged, 80 and over , Female , Humans , Male , Pilot Projects , Retrospective Studies
9.
Anesth Analg ; 126(1): 150-160, 2018 01.
Article in English | MEDLINE | ID: mdl-28742774

ABSTRACT

BACKGROUND: Intraoperative lung-protective ventilation (ILPV) is defined as tidal volumes <8 mL/kg ideal bodyweight and is increasingly a standard of care for major abdominal surgical procedures performed under general anesthesia. In this study, we report the result of a quality improvement initiative targeted at improving adherence to ILPV guidelines in a large academic teaching hospital. METHODS: We performed a time-series study to determine whether anesthesia provider adherence to ILPV was affected by certain improvement interventions and patient ideal body weight (IBW). Tidal volume data were collected at 3 different time points for 191 abdominal surgical cases from June 2014 through April 2015. Improvement interventions during that period included education at departmental grand rounds, creation of a departmental ILPV policy, feedback of tidal volume and failure rate data at grand rounds sessions, and reducing default ventilator settings for tidal volume. Mean tidal volume per kilogram of ideal body weight (VT/kg IBW) and rates of noncompliance with ILPV were analyzed before and after the interventions. A survey was administered to assess provider attitudes after implementation of improvement interventions. Responses before and after interventions and between physician and nonphysician providers were analyzed. RESULTS: Reductions in mean VT/kg IBW and rates of failure for providers to use ILPV occurred after improvement interventions. Patients with IBW <65 kg received higher VT/kg IBW and had higher rates of failure to use ILPV than patients with IBW >65 kg. Surveyed providers demonstrated stronger agreement to having knowledge and practice consistent with ILPV after interventions. CONCLUSIONS: Our interventions improved anesthesia provider adherence to low tidal volume ILPV. IBW was found to be an important factor related to provider adherence to ILPV. 
Provider attitudes about their knowledge and practice consistent with ILPV also changed with our interventions.


Subject(s)
Academic Medical Centers/standards , Guideline Adherence/standards , Lung/physiology , Monitoring, Intraoperative/standards , Pulmonary Ventilation/physiology , Respiration, Artificial/standards , Adult , Aged , Female , Humans , Interrupted Time Series Analysis/methods , Interrupted Time Series Analysis/standards , Male , Middle Aged , Monitoring, Intraoperative/methods , Respiration, Artificial/methods , Retrospective Studies , Tidal Volume/physiology
10.
J Eval Clin Pract ; 23(2): 413-418, 2017 Apr.
Article in English | MEDLINE | ID: mdl-27630090

ABSTRACT

RATIONALE, AIMS AND OBJECTIVES: Single-group interrupted time series analysis (ITSA) is a popular evaluation methodology in which a single unit of observation is studied; the outcome variable is serially ordered as a time series, and the intervention is expected to "interrupt" the level and/or trend of the time series, subsequent to its introduction. The most common threat to validity is history-the possibility that some other event caused the observed effect in the time series. Although history limits the ability to draw causal inferences from single ITSA models, it can be controlled for by using a comparable control group to serve as the counterfactual. METHOD: Time series data from 2 natural experiments (effect of Florida's 2000 repeal of its motorcycle helmet law on motorcycle fatalities and California's 1988 Proposition 99 to reduce cigarette sales) are used to illustrate how history biases results of single-group ITSA results-as opposed to when that group's results are contrasted to those of a comparable control group. RESULTS: In the first example, an external event occurring at the same time as the helmet repeal appeared to be the cause of a rise in motorcycle deaths, but was only revealed when Florida was contrasted with comparable control states. Conversely, in the second example, a decreasing trend in cigarette sales prior to the intervention raised question about a treatment effect attributed to Proposition 99, but was reinforced when California was contrasted with comparable control states. CONCLUSIONS: Results of single-group ITSA should be considered preliminary, and interpreted with caution, until a more robust study design can be implemented.


Subject(s)
Interrupted Time Series Analysis/standards , Research Design/standards , Accidents, Traffic/mortality , California , Florida , Head Protective Devices , Humans , Motorcycles/legislation & jurisprudence , Reproducibility of Results , Tobacco Products/economics
11.
J Eval Clin Pract ; 23(2): 419-425, 2017 Apr.
Article in English | MEDLINE | ID: mdl-27804216

ABSTRACT

RATIONALE, AIMS AND OBJECTIVES: The basic single-group interrupted time series analysis (ITSA) design has been shown to be susceptible to the most common threat to validity-history-the possibility that some other event caused the observed effect in the time series. A single-group ITSA with a crossover design (in which the intervention is introduced and withdrawn 1 or more times) should be more robust. In this paper, we describe and empirically assess the susceptibility of this design to bias from history. METHOD: Time series data from 2 natural experiments (the effect of multiple repeals and reinstatements of Louisiana's motorcycle helmet law on motorcycle fatalities and the association between the implementation and withdrawal of Gorbachev's antialcohol campaign with Russia's mortality crisis) are used to illustrate that history remains a threat to ITSA validity, even in a crossover design. RESULTS: Both empirical examples reveal that the single-group ITSA with a crossover design may be biased because of history. In the case of motorcycle fatalities, helmet laws appeared effective in reducing mortality (while repealing the law increased mortality), but when a control group was added, it was shown that this trend was similar in both groups. In the case of Gorbachev's antialcohol campaign, only when contrasting the results against those of a control group was the withdrawal of the campaign found to be the more likely culprit in explaining the Russian mortality crisis than the collapse of the Soviet Union. CONCLUSIONS: Even with a robust crossover design, single-group ITSA models remain susceptible to bias from history. Therefore, a comparable control group design should be included, whenever possible.


Subject(s)
Cross-Over Studies , Interrupted Time Series Analysis/standards , Research Design/standards , Accidents, Traffic/mortality , Head Protective Devices , Health Promotion/organization & administration , Humans , Louisiana , Mortality/trends , Motorcycles/legislation & jurisprudence , Reproducibility of Results , Russia
12.
Am J Health Syst Pharm ; 73(24): 2043-2054, 2016 Dec 15.
Article in English | MEDLINE | ID: mdl-27806937

ABSTRACT

PURPOSE: A protocol to optimize the duration of antimicrobial therapy (DAT) for uncomplicated pneumonia at hospital discharge was evaluated. METHODS: This retrospective quasiexperimental study was conducted at Boise Veterans Affairs Medical Center from March 2013 through June 2015. Patients were included in the study if they were diagnosed with pneumonia, were hospitalized for more than 24 hours, received antimicrobial treatment within 48 hours of admission, and survived until hospital discharge. The intervention included development of a pneumonia DAT triage algorithm, a process for assessment of the appropriate DAT by pharmacists, and recommendations to providers to limit excessive discharge DATs prescribed. Interrupted time-series analysis was performed to determine the mean monthly DAT per patient and the 30-day readmission rate. RESULTS: Of the 707 patients discharged with a diagnosis of pneumonia, 560 met the criteria for study inclusion (366 in the preimplementation group and 194 in the postimplementation group). Change in slope of monthly mean DAT per patient postimplementation was significantly reduced (p = 0.03) from the preimplementation slope (p = 0.95), indicating an association between the intervention and mean DAT per patient. The intervention was not associated with the 30-day readmission rate. The mean ± S.D. DAT decreased from 9.5 ± 2.4 days preimplementation to 8.2 ± 2.9 days postimplementation, primarily due to the reduction of outpatient DAT from 5.2 ± 3.0 days preimplementation to 4.2 ± 3.0 days postimplementation. CONCLUSION: A pharmacy-based triage algorithm helped to reduce excessive DATs for patients with pneumonia at hospital discharge without negatively affecting 30-day readmission rates.


Subject(s)
Anti-Infective Agents/administration & dosage , Community-Acquired Infections/drug therapy , Hospitals, Veterans/standards , Interrupted Time Series Analysis/standards , Patient Discharge/standards , Pneumonia/drug therapy , Community-Acquired Infections/epidemiology , Drug Administration Schedule , Female , Humans , Interrupted Time Series Analysis/methods , Male , Middle Aged , Patient Discharge/trends , Pneumonia/epidemiology , Retrospective Studies
13.
Pediatrics ; 137(4)2016 04.
Article in English | MEDLINE | ID: mdl-27002007

ABSTRACT

BACKGROUND AND OBJECTIVE: Clinical pathways standardize care for common health conditions. We sought to assess whether institution-wide implementation of multiple standardized pathways was associated with changes in utilization and physical functioning after discharge among pediatric inpatients. METHODS: Interrupted time series analysis of admissions to a tertiary care children's hospital from December 1, 2009 through March 30, 2014. On the basis of diagnosis codes, included admissions were eligible for 1 of 15 clinical pathways implemented during the study period; admissions from both before and after implementation were included. Postdischarge physical functioning improvement was assessed with the Pediatric Quality of Life Inventory 4.0 Generic Core or Infant Scales. Average hospitalization costs, length of stay, readmissions, and physical functioning improvement scores were calculated by month relative to pathway implementation. Segmented linear regression was used to evaluate differences in intercept and trend over time before and after pathway implementation. RESULTS: There were 3808 and 2902 admissions in the pre- and postpathway groups, respectively. Compared with prepathway care, postpathway care was associated with a significant halt in rising costs (prepathway vs postpathway slope difference -$155 per month [95% confidence interval -$246 to -$64]; P = .001) and significantly decreased length of stay (prepathway vs post-pathway slope difference -0.03 days per month [95% confidence interval -0.05 to -0.02]; P = .02), without negatively affecting patient physical functioning improvement or readmissions. CONCLUSIONS: Implementation of multiple evidence-based, standardized clinical pathways was associated with decreased resource utilization without negatively affecting patient physical functioning improvement. This approach could be widely implemented to improve the value of care provided.


Subject(s)
Child, Hospitalized , Critical Pathways/standards , Interrupted Time Series Analysis/methods , Interrupted Time Series Analysis/standards , Quality of Life , Child , Cohort Studies , Critical Pathways/trends , Humans , Interrupted Time Series Analysis/trends , Length of Stay/trends , Retrospective Studies , Treatment Outcome
16.
J Hosp Med ; 10(1): 41-5, 2015 Jan.
Article in English | MEDLINE | ID: mdl-25603790

ABSTRACT

As part of the Choosing Wisely Campaign, the Society of Hospital Medicine identified reducing inappropriate use of acid-suppressive medication for stress ulcer prophylaxis as 1 of 5 key opportunities to improve the value of care for hospitalized patients. We designed a computerized clinical decision support intervention to reduce use of acid-suppressive medication for stress ulcer prophylaxis in hospitalized patients outside of the intensive care unit at an academic medical center. Using quasiexperimental interrupted time series analysis, we found that the decision support intervention resulted in a significant reduction in use of acid-suppressive medication with stress ulcer prophylaxis selected as the only indication, a nonsignificant reduction in overall use, and no change in use on discharge. We found low rates of use of acid-suppressive medication for the purpose of stress ulcer prophylaxis even before the intervention, and continuing preadmission medication was the most commonly selected indication throughout the study. Our results suggest that attention should be focused on both the inpatient and outpatient settings when designing future initiatives to improve the appropriateness of acid-suppressive medication use.


Subject(s)
Anti-Ulcer Agents/therapeutic use , Decision Support Systems, Clinical/standards , Interrupted Time Series Analysis/standards , Adult , Aged , Decision Support Systems, Clinical/trends , Female , Humans , Interrupted Time Series Analysis/trends , Male , Middle Aged , Peptic Ulcer/diagnosis , Peptic Ulcer/drug therapy , Proton Pump Inhibitors/therapeutic use , Stomach Ulcer/diagnosis , Stomach Ulcer/drug therapy
17.
Anesthesiology ; 122(3): 560-70, 2015 Mar.
Article in English | MEDLINE | ID: mdl-25485470

ABSTRACT

BACKGROUND: Cardiac surgery requiring the use of cardiopulmonary bypass is frequently complicated by coagulopathic bleeding that, largely due to the shortcomings of conventional coagulation tests, is difficult to manage. This study evaluated a novel transfusion algorithm that uses point-of-care coagulation testing. METHODS: Consecutive patients who underwent cardiac surgery with bypass at one hospital before (January 1, 2012 to January 6, 2013) and after (January 7, 2013 to December 13, 2013) institution of an algorithm that used the results of point-of-care testing (ROTEM; Tem International GmBH, Munich, Germany; Plateletworks; Helena Laboratories, Beaumont, TX) during bypass to guide management of coagulopathy were included. Pre- and postalgorithm outcomes were compared using interrupted time-series analysis to control for secular time trends and other confounders. RESULTS: Pre- and postalgorithm groups included 1,311 and 1,170 patients, respectively. Transfusion rates for all blood products (except for cryoprecipitate, which did not change) were decreased after algorithm institution. After controlling for secular pre- and postalgorithm time trends and potential confounders, the posttransfusion odds ratios (95% CIs) for erythrocytes, platelets, and plasma were 0.50 (0.32 to 0.77), 0.22 (0.13 to 0.37), and 0.20 (0.12 to 0.34), respectively. There were no indications that the algorithm worsened any of the measured processes of care or outcomes. CONCLUSIONS: Institution of a transfusion algorithm based on point-of-care testing was associated with reduced transfusions. This suggests that the algorithm could improve the management of the many patients who develop coagulopathic bleeding after cardiac surgery. The generalizability of the findings needs to be confirmed.


Subject(s)
Algorithms , Blood Coagulation , Blood Transfusion/standards , Cardiac Surgical Procedures/standards , Interrupted Time Series Analysis/standards , Point-of-Care Systems/standards , Adult , Aged , Aged, 80 and over , Blood Coagulation/physiology , Blood Transfusion/methods , Cardiac Surgical Procedures/methods , Cohort Studies , Humans , Interrupted Time Series Analysis/methods , Middle Aged , Retrospective Studies